Senior Data Infrastructure Engineer

This job is no longer open

Role: Senior Data Infrastructure Engineer

Reports to: Director, Infrastructure

Department: Engineering

Location: Remote, US or Germany

Job Type: Full Time, Exempt

 

Help us Shape the Future of Data

Anaconda is the world’s most popular data science platform. With more than 21 million users, the open source Anaconda Distribution is the easiest way to do data science and machine learning. We pioneered the use of Python for data science, champion its vibrant community, and continue to steward open-source projects that make tomorrow’s innovations possible. Our enterprise-grade solutions enable corporate, research, and academic institutions around the world to harness the power of open source for competitive advantage and groundbreaking research.

Anaconda is seeking people who want to play a role in shaping the future of enterprise machine learning and data science. Candidates should be knowledgeable and capable, but always eager to learn more and to teach others. Overall, we strive to create a culture of ability and humility and an environment that is both relaxed and focused. We stress empathy and collaboration with our customers, open-source users, and each other.

Here is what people love most about working here: We’re not just a company, we’re part of a movement. Our dedicated employees and user community are democratizing data science and creating and promoting open-source technologies for a better world, and our commercial offerings make it possible for enterprise users to leverage the most innovative output from open source in a secure, governed way.

 

Summary

Anaconda is seeking a talented Senior Data Infrastructure Engineer to join our rapidly growing company. This is an excellent opportunity for you to leverage your experience and skills and apply them to the world of data science and machine learning.

 

What You’ll Do:

  • Help build our data foundation, data pipelines, and ETL to deliver reliable, insightful data to our Data Science, Engineering, and Product teams.
  • Drive table design and architecture, transformation logic and efficient query development to support our growing data needs.
  • Build secure, scalable data infrastructure that enables self-service dashboards and automation of product experimentation results.
  • Implement testing and monitoring across the data infrastructure to ensure data quality from raw sources to downstream models.
  • Write documentation that supports code maintainability.
  • Enable data engineers to quickly promote prototypes to production.
  • Take complete ownership of data quality, data mapping, business logic, and transformation rules for the data feeds, bringing a passion for high-quality data.
  • Work closely with stakeholders to build next generation data integration capabilities.
  • Deliver projects with a high sense of urgency, and troubleshoot and fix data queries and issues.
  • Work collaboratively and act as a liaison with the Infrastructure and Product teams to meet milestones in a fast-paced environment.

 

What You Need:

  • 8+ years of relevant engineering experience.
  • Database experience with NoSQL, SQL, and BigQuery.
  • Experience with infrastructure as code (Terraform, CloudFormation, or Ansible).
  • Deep experience in ETL design and implementation using tools like Airflow, Kafka, and Spark.
  • Experience working with very large data sets, and an understanding of how to write code that leverages the parallel capabilities of Python and database platforms.
  • Strong knowledge of database performance concepts like indices, segmentation, projections, and partitions.
  • Proficiency in Python or Scala.
  • Deep experience designing, documenting, and developing scalable data architecture and process flows.
  • Experience executing projects with Data Science and Engineering teams from start to finish.
  • To be self-directed and self-motivated, with excellent organizational skills.
  • To be comfortable with changing requirements and priorities.
  • Experience with cloud infrastructure.
  • A team attitude: “I am not done until WE are done.”
  • To embody our core values:  
    • Ability & Humility
    • Innovation & Action
    • Empathy & Connection
  • To care deeply about fostering an environment where people of all backgrounds and experiences can flourish.

 

What Will Make You Stand Out:

  • Experience working in a fast-paced startup environment.
  • Experience working in an open source or data science-oriented company.
  • Familiarity with the Hadoop ecosystem of tools.
  • Experience with Kafka or other event-streaming technologies.
  • Experience with Spark.
  • Experience with container orchestration.

 

Why You’ll Like Working Here:

  • This is a unique opportunity to translate strong open source adoption and user enthusiasm into commercial product growth.
  • We are a dynamic company that rewards high performers.
  • We’re on the cutting edge of the enterprise application of data science, machine learning, and AI.
  • A collaborative team environment that values multiple perspectives and clear thinking.
  • An employees-first culture.
  • Flexible working hours.
  • Medical, dental, vision, HSA, life insurance, and 401(k).
  • Health and remote-working reimbursement.
  • Paid parental leave for both mothers and fathers.
  • Pre-IPO stock options.
  • An open vacation policy.

 

Anaconda is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status, and will not be discriminated against on the basis of disability.

 
