Senior Data Engineer

Why you should join our team: 

  • Our work is transforming the way people in pain access support at their fingertips
  • Our work is innovative in the crisis response space
  • Our dynamic, fun, and diverse culture
  • Our meaningful cause, led by empathy and innovation
  • Our strong values at the center of all we do
  • Our commitment to diversity, equity and inclusion
  • Our commitment to engagement and belonging
  • Our commitment to our employees and their holistic wellbeing
  • Our value of work/life balance
  • Our growth mindset and commitment to professional development
  • Our leaders who truly care

What you'll be doing:

We are seeking a highly skilled and delivery-focused Senior Data Engineer to join our team. The ideal candidate will have extensive experience building, managing, and optimizing data pipelines using Infrastructure as Code (IaC) methodologies. You should possess strong technical skills in SQL, Python, Spark, and PySpark and have hands-on experience with the AWS and Databricks platforms.

In this role, you will collaborate with cross-functional teams to understand data requirements, design scalable and efficient solutions, and contribute to the continuous improvement of our data infrastructure. You will be responsible for implementing and maintaining data solutions on the AWS and Databricks platforms. You should have a proven track record of delivering high-quality results in a fast-paced environment, with a focus on innovation and continuous improvement.

Responsibilities:

  • Collaborate with other engineering teams, data scientists, and cross-functional business units to understand data requirements, improve data ingestion and integration processes, and deliver solutions that support business objectives.
  • Design, build, and maintain scalable and reliable data pipelines using Infrastructure as Code (IaC) principles on Databricks, with monitoring in Datadog.
  • Design and implement scalable and secure integration solutions between data sources and our data pipeline infrastructure.
  • Develop and optimize big data processing using Spark and PySpark within the Databricks environment.
  • Write complex SQL queries for data extraction, transformation, and loading (ETL) processes.
  • Implement automation and monitoring tools to ensure data accuracy and pipeline efficiency.
  • Maintain comprehensive documentation for all data engineering processes, including pipeline architectures, codebase, and infrastructure configurations.
  • Ensure that best practices and coding standards are followed and documented for future reference.
  • Stay updated with the latest technologies and methodologies in data engineering and propose improvements to current processes.

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science or a related field.
  • 4+ years of experience in data engineering or a similar role.
  • Expertise in complex data pipeline management with IaC (e.g., Terraform).
  • Strong understanding of distributed systems and machine learning basics.
  • Analytical, problem-solving, communication, and collaboration skills.
  • Experience in fast-paced, agile environments; delivery-focused.
  • Self-motivated with a passion for continuous improvement in data engineering.
  • Relevant certifications (e.g., AWS Certified Data Engineer - Associate).
  • Databricks Certified Associate Developer for Apache Spark (recommended but not required).
  • Willingness to support production data pipelines through an on-call rotation.
  • Knowledge of data governance and compliance (GDPR, HIPAA) is beneficial.

Reliable High-Speed Internet Required: Must have a stable high-speed internet connection to support remote collaboration, virtual meetings, and other online work.

The full salary range for this position, across all United States geographies, is $107,000–$162,000 per year. The upper portion of the salary range is typically reserved for existing employees who demonstrate strong performance over time. Starting salary will vary by location, qualifications, and prior experience; during the interview process, candidates will learn the starting salary range applicable to their location. We pay competitively in the tech-forward nonprofit space and offer a robust benefits package.

Only candidates in the following states will be eligible for employment: CA, CO, CT, DE, FL, GA, HI, IL, IN, IA, MD, MA, MI, MN, MO, NJ, NM, NY, NC, OH, PA, SC, TN, TX, UT, VA, WA.
