Director, Data Engineering

This job is no longer open

Your opportunity

We are seeking a results-oriented, hands-on data leader who is passionate about data at scale and actionable insights, to build a data-driven operating model at New Relic.

As Director of Data Engineering, you will be responsible for leading the development of our data engineering strategy and for managing cross-functional data engineering teams and their initiatives, including end-to-end data pipelines. Your role will be pivotal in driving data-driven decision making and advancing our data infrastructure and analytics capabilities!

What you'll do

  • Build, lead, and manage a world-class team of hard-working data engineers, data architects, and AI/ML engineers
  • Collaborate with peer leaders, build and maintain relationships with partners and vendors, and manage budgets
  • Design and develop scalable data pipelines, data warehouses, and ETL processes ensuring availability, reliability, and accessibility of high-quality data and analytics to drive continuous improvement and innovation
  • Handle end-to-end delivery of data products/assets including intake requests from partners, development, and support
  • Provide thought leadership to grow and evolve the Data Engineering function, implementing SDLC processes for internal-facing data products and staying up to date with industry trends and emerging technologies
  • Work closely with upstream data teams to define and implement standard processes in data instrumentation
  • Communicate technical insights and recommendations effectively to advise executive leadership

This role requires

  • 12+ years of experience in data engineering/software engineering, with 5+ years leading, mentoring, and developing data engineering, AI/ML, and product management teams
  • 5+ years of experience with the following:
    • EDW: BigQuery, Snowflake, or Redshift
    • Job scheduling and orchestration: Airflow or Astronomer.io
    • Data transformation framework: dbt
    • Streaming platforms: Kafka, Spark Streaming, etc.
    • Ingestion platforms: Fivetran, Stitch, Talend, etc.

  • In-depth knowledge of distributed computing frameworks (e.g., Hadoop, Hive, Pig), data processing tools (e.g., SQL, Apache Beam), and cloud-based data technologies (e.g., AWS, Azure, GCP)
  • Strong understanding of data warehousing concepts, data modeling, and schema design principles
  • Knowledge of data integration techniques, ETL processes, and data pipeline development
  • Hands-on experience and proficiency in complex SQL queries, Python programming, data structures, and data modeling (star, snowflake, OWT, and galaxy schemas)

Bonus points if you have

  • Bachelor's degree in Computer Science, Engineering, or a related field. Advanced degree preferred