Senior Data Engineer

Docker, a remote-first company with employees across Europe, APAC, and the Americas, simplifies the lives of developers who are making world-changing apps. We raised a $105M Series C in March 2022 at a $2.1B valuation and continued to see exponential revenue growth last year. Join us for a whale of a ride!

Docker is looking for a Senior Data Engineer to join our Data Engineering team, which is led by our Director of Data Engineering. The team transforms billions of data points generated by Docker products and services into actionable insights that directly influence product strategy and development. You'll leverage both software engineering and analytics skills as part of the team responsible for managing data pipelines across the company: Sales, Marketing, Finance, HR, Customer Support, Engineering, and Product Development.

In this role, you'll help design and implement event ingestion, data models, and ETL processes that support mission-critical reporting and analysis, while building in mechanisms to support our privacy and compliance posture. You will also lay the foundation for our ML infrastructure to support data scientists and enhance our analytics capabilities. Our data stack consists of Snowflake as the central data warehouse, dbt and Airflow as the transformation and orchestration layer, and Looker for visualization and reporting. Data flows in from Segment, Fivetran, S3, Kafka, and a variety of other cloud sources and systems. You'll work with other data engineers, analysts, and subject matter experts to deliver impactful outcomes for the organization. As the company grows, ensuring reliable and secure data flows to all business units and surfacing insights and analytics is a huge and exciting challenge!

Responsibilities:

  • Manage and develop ETL jobs, the warehouse, and event-collection tools that process, validate, transport, collate, aggregate, and distribute data

  • Build and manage the Central Data Model that powers most of our reporting

  • Integrate emerging methodologies, technologies, and version-control practices that best fit the team

  • Build data pipelines and tooling to support our ML and AI projects

  • Contribute to enforcing SOC 2 compliance across the data platform

  • Support and enable our stakeholders and other data practitioners across the company

  • Write and maintain documentation of technical architecture

Qualifications:

  • 4+ years of relevant industry experience

  • Experienced in data modeling and building scalable data pipelines involving complex transformations

  • Proficiency working with a Data Warehouse platform (Snowflake or BigQuery preferred)

  • Experience with data governance, data access, and security controls; experience with Snowflake and dbt is strongly preferred

  • Experience creating production-ready ETL scripts and pipelines using Python and SQL and using orchestration frameworks such as Airflow/Dagster/Prefect

  • Experience designing and deploying high-performance systems with reliable monitoring and logging practices

  • Familiarity with at least one cloud ecosystem: AWS, Azure, or Google Cloud

  • Experience with a comprehensive BI and visualization framework such as Tableau or Looker

  • Experience working in an agile environment on multiple projects and prioritizing work based on organizational priorities

  • Strong verbal and written English communication skills

What to expect in the first 30 days:

  • Onboard, meet data engineers, analysts, and key stakeholders, and attend team meetings

  • Develop an understanding of the current data architecture and pipelines 

  • Review current projects, roadmap, and priorities

  • Identify quick wins for improving the data engineer and analyst experience

  • Understand our privacy and compliance requirements and current design/workflows

What to expect in the first 90 days:

  • Contribute meaningfully to data engineering projects

  • Recommend opportunities for continuous improvement for data pipelines and infrastructure


We use Covey as part of our hiring and / or promotional process for jobs in NYC and certain features may qualify it as an AEDT. As part of the evaluation process we provide Covey with job requirements and candidate submitted applications. We began using Covey Scout for Inbound on April 13, 2024.

Please see the independent bias audit report covering our use of Covey here.

Perks (for Full-Time Employees Only)

  • Freedom & flexibility; fit your work around your life

  • Home office setup; we want you comfortable while you work

  • 16 weeks of paid parental leave

  • Technology stipend equivalent to $100 net/month

  • PTO plan that encourages you to take time to do the things you enjoy

  • Quarterly, company-wide hackathons

  • Training stipend for conferences, courses, and classes

  • Equity; we are a growing start-up and want all employees to have a share in the success of the company

  • Docker Swag

  • Medical benefits, retirement and holidays vary by country

Docker embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better our company will be.

Due to the remote nature of this role, we are unable to provide visa sponsorship.

#LI-REMOTE
