(Mid-Level/Senior) Data Engineer

This job is no longer open

About Us

At Cars.com, we help shoppers meet their perfect car match and help people find their perfect career match. As one of the top places to work in Chicago, according to The Chicago Tribune, Built-In Chicago, and others, we pride ourselves on a culture of growth and innovation.

Cars.com has revolutionized the automotive industry through technology and solutions built for buyers and sellers alike. We never shy away from a challenge; we move fast and collaborate across functions to approach problems from every angle. We’ve built a culture that’s second to none and shares core values that keep everyone working at full speed toward the same goals with the same open, outcome-driven, and bold attitude.

Cars.com is a CARS brand. CARS includes the following brands: Cars.com, Dealer Inspire, DealerRater.

 

About the Role

Data is the driver of our future at Cars. We’re searching for a collaborative, analytical, and innovative engineer to build scalable, highly performant platforms, systems, and tools that enable innovation with data.

Working within a dynamic, forward-thinking team environment, you will design, develop, and maintain mission-critical, highly visible Big Data pipelines in direct support of our business objectives. As a member of our Enterprise Data Team, you will also work in close partnership with teams from every part of the company, including Product Engineering, Data Science, Business Intelligence, and Digital Marketing.

If you are passionate about building large-scale systems and data-driven products, we want to hear from you.

 

Responsibilities Include:

  • Build data pipelines to ingest data from a variety of source systems, such as Kafka, databases, APIs, and flat files.
  • Lead or participate in development projects from end to end, including requirement gathering, design, development, deployment, and debugging.
  • Work closely with domain experts and stakeholders from across the company.
  • Develop Spark jobs to cleanse, enrich, process, and aggregate large amounts of data (a minimal sketch of this pattern follows this list).
  • Monitor and optimize pipelines, tuning Spark jobs for efficient performance with respect to cluster resources, execution time, and execution memory.
  • Identify and drive innovative improvements to our data ecosystem in areas such as data validation and meeting SLAs.
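To give candidates a concrete picture of this kind of work, here is a minimal, hypothetical PySpark sketch of an ingest-cleanse-aggregate batch job. The paths, column names, and metrics are illustrative placeholders, not an actual Cars.com pipeline.

```python
# Hypothetical PySpark sketch of an ingest -> cleanse -> aggregate -> load job.
# All paths, column names, and metrics are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-listing-metrics")  # hypothetical job name
    .getOrCreate()
)

# Ingest: read raw flat files (Kafka or JDBC sources would be configured similarly).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-bucket/raw/listings/")  # hypothetical S3 path
)

# Cleanse/enrich: drop rows missing key fields and normalize a few columns.
cleaned = (
    raw.dropna(subset=["listing_id", "price"])            # hypothetical columns
       .withColumn("price", F.col("price").cast("double"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Aggregate: daily average price and listing count per make, a stand-in
# for real business metrics.
daily_metrics = (
    cleaned.groupBy("event_date", "make")
           .agg(
               F.avg("price").alias("avg_price"),
               F.count("*").alias("listing_count"),
           )
)

# Load: write partitioned Parquet for downstream BI/warehouse consumption.
(
    daily_metrics.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/daily_listing_metrics/")  # hypothetical path
)

spark.stop()
```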

 

Required Skills & Experience:

  • Bachelor’s Degree in Computer Science, Mathematics, Engineering or related field/equivalent experience.
  • Software or Data Engineering | 4-6 years of designing and developing complex applications at enterprise scale, specifically in Python, Java, and/or Scala.
  • Big Data Ecosystem | 4+ years of hands-on, professional experience with tools such as Spark and Kafka.
  • AWS Cloud | 2+ years of professional experience developing Big Data applications in the cloud, specifically on AWS.
  • Experience with SQL, including writing analytical queries and designing/optimizing databases.
  • Experience mentoring junior developers.
  • Excellent communication and collaboration skills.
  • Experience with Agile development methodology.

 

Preferred:

  • Experience with Apache Airflow (a brief orchestration sketch follows this list).
  • Comfortable working in a Linux environment and using IntelliJ IDEs.
  • Experience using Infrastructure as Code tools like Terraform and Packer.
  • Experience working with digital advertising data providers (Facebook, Bing, Google).
  • Experience with data warehouses and BI reporting tools (Snowflake, Redshift, Tableau).
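Since Airflow is called out above, here is a brief, hypothetical Airflow 2.x DAG sketch showing one way such a pipeline might be scheduled. The DAG id, schedule, and spark-submit command are placeholders, not an actual Cars.com workflow.

```python
# Hypothetical Airflow 2.x DAG: schedule a daily Spark job via spark-submit.
# DAG id, task id, and command are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_listing_metrics",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit the (equally hypothetical) PySpark job sketched earlier in this posting.
    run_spark_aggregation = BashOperator(
        task_id="run_spark_aggregation",
        bash_command="spark-submit /opt/jobs/daily_listing_metrics.py",
    )
```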