Senior Analytics Engineer

This job is no longer open

Docker is a remote-first company with employees across Europe, APAC, and the Americas that simplifies the lives of developers who are making world-changing apps. We raised $105M in Series C funding in March 2022 at a $2.1B valuation, and we continued to see exponential revenue growth last year. Join us for a whale of a ride!

This role sits on the Product - Data & Growth team and is pivotal in shaping and advancing our data infrastructure and analytics capabilities, driving the organization's data-driven decision-making. You will collaborate with cross-functional teams to design, build, and optimize complex data solutions, with a focus on scalability, performance, and accuracy.

As a Senior Analytics Engineer, you will be immersed in our data model, taking ownership of the construction of data pipelines, foundational reporting structures, and data models that support key business objectives. You will be responsible for solving complex data challenges, transforming data into valuable insights, and mentoring junior team members while contributing to the strategic direction of our data initiatives.

Key Responsibilities:

  • Data Pipeline Leadership: Design, develop, and maintain highly scalable and efficient data pipelines, ensuring timely and accurate collection, transformation, and integration of data from various sources.

  • Advanced Data Modeling: Architect and implement robust data models and data warehousing solutions that enable efficient storage, retrieval, and analysis of large, complex datasets.

  • Cross-Functional Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data requirements, translating them into actionable data models and insights.

  • Data Quality Assurance: Implement and oversee rigorous data validation, cleansing, and error-handling mechanisms to maintain high data quality and reliability.

  • Performance Optimization: Continuously monitor and optimize data pipeline performance, identifying and resolving bottlenecks and inefficiencies to maintain optimal system responsiveness.

  • Mentorship and Leadership: Provide guidance and mentorship to junior analytics engineers, fostering a collaborative and learning-oriented environment.

  • Strategic Contribution: Contribute to the strategic direction of data initiatives, staying abreast of industry best practices, emerging technologies, and trends in data engineering and analytics.

  • Documentation & Knowledge Sharing: Build and maintain user-facing documentation for key processes, metrics, and data models to enhance the data-driven culture within the organization.

  • Tool and Technology Expertise: Serve as a key expert in tools such as Snowflake, dbt, and Looker, ensuring they are leveraged effectively to meet business needs.


Key Skills and Qualifications:

  • Experience: 5+ years of experience in data engineering or analytics engineering roles, with a proven track record of leading complex data projects and initiatives.

  • Technical Expertise: Deep expertise in SQL, dbt, and data modeling, with a strong understanding of data pipeline design, ETL processes, and data warehousing.

  • Software Engineering Skills: Proficiency in software engineering principles, including CI/CD pipelines, version control (e.g., Git), and scripting languages (e.g., Python).

  • Data Tools Proficiency: Hands-on experience with tools like Snowflake, dbt, and Looker. Familiarity with additional tools and platforms (e.g., AWS, Kubernetes) is a plus.

  • Problem-Solving: Strong analytical and problem-solving skills, with the ability to diagnose and resolve complex technical issues related to data infrastructure.

  • Leadership: Demonstrated ability to mentor and lead junior engineers, with a focus on fostering a collaborative and high-performance team environment.

  • Communication: Excellent communication skills, with the ability to clearly and concisely convey complex technical concepts to both technical and non-technical stakeholders.

  • Education: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.

What to expect in the first 30 days:

  • Get to know Docker! Familiarize yourself with our vision, mission, values, and product offerings.

  • Complete all required onboarding and training sessions.

  • Gain access to necessary systems, databases, and platforms.

  • Familiarize yourself with our tech stack and internal processes.

  • Shadow team members to learn about our development workflows.

  • Meet with key stakeholders and team members to understand projects and OKRs.

  • Perform initial data pipeline and modeling tasks, and review our documentation to become familiar with key data assets.


What to expect in the first 90 days:

  • Achieve a deep understanding of key data models, pipelines, and reporting structures.

  • Take ownership of key data engineering projects, driving them from inception to delivery.

  • Provide regular mentorship and support to junior engineers, fostering a collaborative team environment.

  • Participate in peer reviews and pair programming sessions.

  • Build and update documentation for key processes, metrics, and data models.

  • Triage incoming data requests, scoping and delegating tasks as needed.

  • Lead complex modeling initiatives and data pipeline optimizations.

  • Foster strong relationships with key stakeholders across the organization.


What to expect in the first year:

  • Lead initiatives to optimize data pipelines, reducing processing times and enhancing data accuracy.

  • Establish and maintain high standards for data quality, reducing inconsistencies and ensuring reliable data delivery.

  • Mentor and lead a small team of analytics engineers, managing their priorities and deliverables.

  • Contribute significantly to Docker’s data strategy, helping shape the future of our data infrastructure.

  • Develop and refine robust data governance practices, ensuring compliance and alignment with business objectives.

  • Drive the creation of an analytics knowledge base, serving as the single source of truth for key metrics and data processes.


We use Covey as part of our hiring and/or promotional process for jobs in NYC, and certain features may qualify it as an AEDT. As part of the evaluation process, we provide Covey with job requirements and candidate-submitted applications. We began using Covey Scout for Inbound on April 13, 2024.

Please see the independent bias audit report covering our use of Covey here.

Perks (for Full Time Employees)

  • Freedom & flexibility; fit your work around your life

  • Home office setup; we want you comfortable while you work

  • 16 weeks of paid Parental leave

  • Technology stipend equivalent to $100 net/month

  • PTO plan that encourages you to take time to do the things you enjoy

  • Quarterly, company-wide hackathons

  • Training stipend for conferences, courses and classes

  • Equity; we are a growing start-up and want all employees to have a share in the success of the company

  • Docker Swag

  • Medical benefits, retirement and holidays vary by country

Docker embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better our company will be.

Due to the remote nature of this role, we are unable to provide visa sponsorship.

#LI-REMOTE
