Senior Data Engineer

Docker is a remote-first company with employees across Europe and the Americas that simplifies the lives of developers who are making world-changing apps. We raised $105M in Series C funding in March 2022 at a $2.1B valuation, and we continued to see exponential revenue growth last year. Join us for a whale of a ride!

Docker is looking for a Senior Data Engineer to join our Data Engineering team which is led by our Senior Manager of Data Engineering. The team transforms billions of data points generated from the Docker products and services into actionable insights to directly influence product strategy and development. You'll leverage both software engineering and analytics skills as part of the team responsible for managing data pipelines across the company: Sales, Marketing, Finance, HR, Customer Support, Engineering, and Product Development.

In this role, you'll help design and implement event ingestion, data models, and ETL processes that support mission-critical reporting and analysis while building in mechanisms to support our privacy and compliance posture. You will also lay the foundation for our ML infrastructure to support data scientists and enhance our analytics capabilities. Our data stack consists of Snowflake as the central data warehouse and Looker as a visualization layer. Data flows in from Segment, Fivetran, AWS, and a variety of other cloud sources and systems. You'll work together with other data engineers, analysts, and subject matter experts to deliver impactful outcomes to the organization.  As the company grows, ensuring reliable and secure data flows to all business units and surfacing insights and analytics is a huge and exciting challenge!

Responsibilities:

  • Manage and develop ETL jobs, the warehouse, and event collection tools that test, process, validate, transport, collate, aggregate, and distribute data

  • Build environments for running machine learning workloads and deploying models 

  • Integrate emerging methodologies, technologies, and version control practices that best fit the team

  • Contribute to enforcing SOC 2 compliance across the data platform

  • Write and maintain documentation of technical architecture

  • Build, own, and maintain the infrastructure to support ML models 

Qualifications:

  • 4+ years of relevant industry experience

  • Experienced in data modeling and building scalable data pipelines involving complex transformations

  • Proficiency working with a Data Warehouse platform (Snowflake preferred)

  • Experience with data governance, data access, and security controls. Experience with Snowflake and dbt is strongly preferred

  • Experience creating production-ready ETL scripts and pipelines in Python and SQL, using orchestration frameworks such as Airflow, Dagster, or Prefect

  • Experience designing and deploying high-performance systems with reliable monitoring and logging practices

  • Familiarity with at least one cloud ecosystem: AWS, Azure, or Google Cloud

  • Experience with cloud-based Big Data and Analytics technologies

  • Experience working in an agile environment on multiple projects and prioritizing work based on organizational priorities

  • Strong verbal and written English communication skills

What to expect in the first 30 days:

  • Onboard and meet data engineers, analysts, and key stakeholders and attend team meetings

  • Develop an understanding of the current data architecture and pipelines 

  • Review current projects, roadmap, and priorities

  • Identify areas for quick wins for improving data engineer and analyst experience 

  • Understand our privacy and compliance requirements and current design/workflows

What to expect in the first 90 days:

  • Contribute meaningfully to the data engineering team 

  • Recommend opportunities for continuous improvement for data engineering 

  • Work with program and engineering managers to determine roadmap for privacy and security compliance and begin execution

What to expect in the first year:

  • Lead efforts around privacy and SOC 2 compliance for our data platform

  • Select and build tooling for our ML infrastructure capabilities 

  • Contribute to our data engineering workflow enhancements, orchestration, and CI/CD 

Perks:

  • Freedom & flexibility; fit your work around your life

  • Variety of virtual and in-person social events to build connections and have fun

  • Home office setup; we want you comfortable while you work

  • Generous maternity and parental leave

  • Technology stipend equivalent to $100 net/month

  • PTO plan that encourages you to take time to do the things you enjoy

  • Whaleness Days: companywide day off each month

  • Quarterly, company-wide hackathons

  • Training stipend for conferences, courses and classes

  • Stock Options; we are a growing start-up and want all employees to have a share in the success of the company

  • Docker Swag

  • Medical benefits, retirement and holidays vary by country

Docker embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe the more inclusive we are, the better our company will be.


CA / NY / Boulder, CO / Denver, CO: $150,000 - $206,000

Colorado: $140,000 - $194,000

*Salary range may vary depending on level

Due to the remote nature of this role, we are unable to provide visa sponsorship.

#LI-REMOTE

