Senior Software Engineer - Data Tooling

This job is no longer open

About the role:

The Samsara R&D Data team drives analytic capabilities and scalable data technologies, enabling decision-making, science, engineering, and product development across Samsara. As a Senior Software Engineer on the Data Tooling team, you will build software tools and architecture focused on enabling self-service adoption at scale.

Your work and scope will include components such as data orchestration engines, data catalogs, and other applications or libraries that help internal Samsara employees work as efficiently as possible. You will work closely with Scientists, Data Engineers, and Software Engineers, as well as with full-stack, firmware, and platform teams.

You’re a great fit if you have data engineering/pipelining experience but now focus on ensuring Data Engineering can be done efficiently at scale through easy-to-use tools, libraries, and web applications.

This role can be fully remote within Canada.

In this role, you will: 

  • Manage our data orchestration environment (Dagster) and how data engineers leverage it in a flexible and standardized way
  • Contribute to our Data Catalog (DataHub) efforts, enriching the catalog with additional metadata to make data discovery easier
  • Build and maintain our Metrics Repository and associated APIs for our internal users to easily access key business metrics
  • Champion, role model, and embed Samsara’s cultural principles (Focus on Customer Success, Build for the Long Term, Adopt a Growth Mindset, Be Inclusive, Win as a Team) as we scale globally and across new offices

Minimum requirements for the role:

  • 5+ years of experience as a Software Engineer, or as a Data Engineer with a software focus
  • A strong understanding of SWE fundamentals
  • Expertise in Python and building libraries for others to use 
  • Expert experience in Spark (Spark SQL, and PySpark)
  • Demonstrated experience managing deployments of data orchestration tools, such as Airflow, Dagster, Prefect, or similar
  • Previous experience working in a public cloud (e.g., AWS, GCP, Azure)
  • Proven ability to communicate verbally and in writing to technical peers and leadership teams with various levels of technical knowledge

An ideal candidate also has:

  • A Master's degree or higher in Computer Engineering or a related field
  • Experience working closely with Data Engineers focused on building large-scale ETL pipelines and data models for analytics and science use cases
  • Familiarity with Golang
  • Strong functional knowledge of tools such as Databricks, Delta Lake, Dagster, or DataHub
  • Experience working with large datasets (including product and time series data) using distributed computing (e.g., Spark, Hive)

Outer Join is the premier job board for remote jobs in data science, analytics, and engineering.