Data Engineer III

About the Role:

Data is the driver for our future at Cars. We’re searching for a collaborative, analytical, and innovative engineer to build scalable, highly performant platforms, systems, and tools that enable innovation with data. If you are passionate about building large-scale systems and data-driven products, we want to hear from you.

Responsibilities Include:

  • Build data pipelines and derive insights from the data using advanced analytic techniques, streaming, and machine learning at scale
  • Work within a dynamic, forward-thinking team environment where you will design, develop, and maintain mission-critical, highly visible Big Data applications
  • Build, deploy, and support data pipelines in production
  • Work in close partnership with other Engineering teams, including Data Science, and with cross-functional teams such as Product Management and Product Design
  • Mentor others on the team and share your knowledge across the Cars.com organization

Required Skills

  • Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, HiveQL, SparkSQL)
  • Experience with data modeling, warehousing, and building ETL pipelines
  • Proficiency in at least one programming language commonly used within Data Engineering, such as Python, Scala, or Java
  • Solid understanding of Spark and the ability to write, debug, and optimize Spark code in Python
  • Solid understanding of various file formats and compression techniques
  • Experience with ETL schedulers such as Airflow or similar frameworks
  • Experience with source code management systems such as GitHub, and with developing CI/CD pipelines for data using tools such as Jenkins
  • Excellent communication and collaboration skills
  • Ability to design, develop, and debug at a cross-project level

Required Experience

  • Data Engineering | 3+ years of designing and developing complex, real-time applications at enterprise scale, specifically in Python and/or Scala
  • Big Data Ecosystem | 2+ years of hands-on, professional experience with tools and platforms such as Spark Streaming, EMR, and Kafka
  • AWS Cloud | 2+ years of professional experience developing Big Data applications in the cloud, specifically on AWS

Preferred:

  • Experience with developing REST APIs
  • Experience developing Spark Streaming jobs that read data from Kafka
  • Experience working with Google Analytics
  • Experience working with digital marketing platforms such as Facebook and Google Ads
#LI-REMOTE #LI-KO1

Outer Join is the premier job board for remote jobs in data science, analytics, and engineering.