Sr. Data Engineer

This job is no longer open

Docker is a remote-first company with employees across Europe and the Americas that simplifies the lives of developers who are making world-changing apps. We raised our Series C funding of $105M in March 2022 at a $2.1B valuation, and we continued to see exponential revenue growth last year. Join us for a whale of a ride!

Millions of developers use Docker daily to help them be more productive and to build better software. To build products that developers love, we collect billions of anonymized data points generated by our products and services to gain actionable insights that directly shape our product strategy and development. Data is also a product for us: we supply insights to customers who publish their software on our platform.

Our Data Engineers are embedded into product development teams to help those teams deliver reliable data. Data is collected from various sources (like Segment, databases, logs), processed using dbt pipelines and stored in Snowflake before being exposed in Looker and Salesforce.
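
To give a concrete sense of the stack described above, the sketch below shows a minimal dbt incremental model over a Segment-style events source, materialized in Snowflake. The model, source, and column names (stg_product_events, source('segment', 'tracks'), and so on) are hypothetical illustrations rather than Docker's actual schema:

    -- models/staging/stg_product_events.sql (hypothetical model)
    -- Stages raw Segment track events into a typed table in Snowflake,
    -- loading only new rows on incremental runs.
    {{ config(materialized='incremental', unique_key='event_id') }}

    select
        id           as event_id,
        anonymous_id,
        event        as event_name,
        received_at  as event_received_at
    from {{ source('segment', 'tracks') }}  -- hypothetical source declared in a sources .yml file
    {% if is_incremental() %}
    where received_at > (select max(event_received_at) from {{ this }})
    {% endif %}

A model like this could then be covered by dbt's built-in unique and not_null tests and exposed to Looker directly from the warehouse.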

We are looking for data engineers who are excited to join Docker and to advocate for good data practices in the teams they work with. You should be passionate about using data to understand how users behave, to build better products that people love to use, and to supply data that helps customers make better decisions. You will develop and maintain our data pipelines to ensure the quality and accuracy of our product analytics, and build reports and visualizations for internal use and for customers. Although this is an engineering role, you will work closely with Product Managers and Data Analysts to help them understand user behavior and influence the product roadmap. You will also work with our customers to understand their data needs when they use our platform.

Responsibilities:

  • Ensure data reliability and accuracy from product development teams

  • Build and maintain ELT pipelines using dbt

  • Contribute to the evolution and maintenance of Docker’s data platform

  • Build data analytics capabilities to measure product success

  • Contribute to the visualization of the collected data to gain product insights

  • Follow engineering best practices for design, architecture and testing

  • Participate in compensated daytime on-call for production data systems

Key qualifications:

  • 5+ years of experience building data pipelines

  • Knowledge of data warehousing platforms (Snowflake, Redshift, BigQuery, etc.), including data transformation (dbt), data model design, and query optimization strategies

  • Proficiency with SQL

  • Experience with data visualization tools such as Looker, Tableau, or Power BI

  • Experience developing data-intensive services: Golang and/or Python a plus

  • Strong written and verbal English communication skills

Docker embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe the more inclusive we are, the better our company will be.

EU Salary Range

80,000 EUR - 120,000 EUR

*salary range varies based on location and level

Due to the remote nature of this role, we are unable to provide visa sponsorship.

#LI-REMOTE
