Data Engineer - Metering

This job is no longer open

About Datadog:

We're on a mission to build the best platform in the world for engineers to understand and scale their systems, applications, and teams. We operate at high scale—trillions of data points per day—providing always-on alerting, metrics visualization, logs, and application tracing for tens of thousands of companies. Our engineering culture values pragmatism, honesty, and simplicity to solve hard problems the right way.

The team:

The Revenue and Growth Team builds and runs the data pipelines, container-native services, and systems that quantify our customers’ usage across all Datadog products. This team is at the leading edge of every new product we release.

The opportunity:

As a Data Engineer on the Revenue & Growth Metering team, you will use Spark and other big data tooling to build highly reliable, verifiably accurate data processing pipelines for a high-scale, mission-critical process. This team ingests the full firehose of data we receive each day: literally trillions of data points and hundreds of terabytes.

You will:

  • Build distributed, high-volume data pipelines that power this core product
  • Do it with Spark, Luigi, and other open-source technologies (see the illustrative sketch after this list)
  • Work all over the stack, moving fluidly between programming languages: Scala, Java, Python, Go, and more
  • Join a tightly knit team solving hard problems the right way
  • Own meaningful parts of our service, have an impact, grow with the company
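
For illustration only, here is a minimal sketch of the kind of pipeline work described above: a Luigi task that runs a small Spark job rolling raw usage events up into per-customer daily totals. The paths, column names, and schema are hypothetical placeholders, not Datadog's actual pipeline.

    import luigi
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F


    class DailyUsageRollup(luigi.Task):
        """Roll one day of raw usage events up into per-customer totals."""

        date = luigi.DateParameter()

        def output(self):
            # Marker file indicating the rollup for this date has been written.
            return luigi.LocalTarget(f"/tmp/metering/rollup/{self.date}/_SUCCESS")

        def run(self):
            spark = SparkSession.builder.appName("daily-usage-rollup").getOrCreate()
            try:
                # Hypothetical input layout: one Parquet directory of raw events per day.
                events = spark.read.parquet(f"/tmp/metering/events/{self.date}")
                rollup = (
                    events
                    .groupBy("customer_id", "product")
                    .agg(
                        F.count(F.lit(1)).alias("event_count"),
                        F.sum("bytes_ingested").alias("bytes_ingested"),
                    )
                )
                rollup.write.mode("overwrite").parquet(f"/tmp/metering/rollup/{self.date}")
            finally:
                spark.stop()
            # Luigi treats the task as complete once output() exists.
            with self.output().open("w") as marker:
                marker.write("done\n")


    if __name__ == "__main__":
        luigi.run()

If the task were saved as daily_usage_rollup.py, it could be run locally with something like: python daily_usage_rollup.py DailyUsageRollup --date 2024-01-01 --local-scheduler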

Requirements:

  • You have a BS/MS/PhD in a scientific field or equivalent experience
  • You have built and operated data pipelines for real customers in production systems
  • You are fluent in several programming languages (JVM & otherwise)
  • You enjoy wrangling huge amounts of data and exploring new data sets
  • You value code simplicity and performance
  • You want to work in a fast-paced, high-growth startup environment that respects its engineers and customers

Bonus points:

  • You are deeply familiar with Spark and/or Hadoop
  • In addition to data pipelines, you’re also comfortable with Kubernetes and cloud technologies
  • You’ve built applications that run on AWS
  • You’ve built your own data pipelines from scratch, know what tends to go wrong, and have ideas for how to fix it