Data Integration Engineer

This job is no longer open

Location: Austin or Remote (US)


About the team

The Business Intelligence team at Cloudflare is responsible for building a centralized cloud data lake and an analytics platform that provides our internal business partners and product teams with actionable insights and a 360-degree view of our business. Our goal is to democratize data, support Cloudflare’s critical business needs, and provide reporting and analytics via self-service tools to fuel existing and new business-critical initiatives.

About the role

We are looking for an experienced Data Integration Engineer to join our team and scale our operational data platform initiatives. You will work with a wide array of data sources to build data pipelines that process billions of records each day and influence our day-to-day business operations.

Success in this role comes from a strong data engineering background and the ability to deliver scalable data pipelines that enable advanced analytics via a self-service user interface.

What you'll do

  • Design, implement, and support ingestion data pipelines from a variety of database, API, and Kafka sources
  • Work closely with a cross-functional team of data engineers and enterprise application teams on strategic initiatives
  • Contribute to improving an evolving data platform for scalability, observability, and reliability
  • Gain insight into data operations by contributing to mapping documents and technical requirements
  • Support day-to-day production data pipelines
  • Understand the data landscape (tooling, tech stack, source systems, etc.) and work closely with the data engineering teams in Austin and San Francisco to improve data collection and quality

Examples of desirable skills, knowledge, and experience

  • B.S. or M.S. in Computer Science, Statistics, Mathematics, or another quantitative field
  • 2+ years of industry experience in software engineering, data engineering, or related fields
  • Strong programming skills in Go, Python, or any JVM-based programming language
  • Experience in writing advanced SQL queries
  • Knowledge of data management fundamentals and data storage/computing principles
  • Solid understanding of big data technologies such as Spark, BigQuery, Kafka etc. 
  • Strong communication skills

Bonus Points

  • Experience with container-based deployments such as Docker and Kubernetes
  • Experience with Google Cloud Platform or other cloud storage platforms
  • Experience building RESTful and microservices applications


Outer Join is the premier job board for remote jobs in data science, analytics, and engineering.