Data Engineer

This job is no longer open

Company overview:

Promise modernizes and humanizes government payments. We are a comprehensive payments platform that increases revenue and efficiency for government agencies. We help residents navigate payments with dignity and ease to avoid the negative consequences of non-payment. Promise’s win-win services strengthen the bond between government agencies and the communities that they serve.

Role overview:

This role will work as part of our Analytics and Engineering teams to build scalable, reliable, resilient data pipelines to move data throughout our systems. The ingestion and delivery of data is critical to ensuring a top-notch experience for our customers and the overall success of our business. You’ll be the primary architect and builder of pipelines for both batch and streaming data, making sure the data is where it needs to be for analysts, engineers, and everyone they serve.

What you'll do:

  • Design and build data pipelines that are scalable, fault-tolerant, resilient, self-healing, and highly observable, covering both data ingestion (batch, streaming, and logging) and transformation (a minimal pipeline sketch follows this list)
  • Automate validation of our data throughout its lifecycle to add metadata, quality checks, and auditability
  • Make sure that PII and other sensitive data is secure and only available in contexts where it’s needed
  • Partner with Engineering, Product, and Ops to understand their data needs and identify places where the data pipeline can be improved
  • Select best-of-breed tools to meet our data processing needs today and into the future
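
For a flavor of the batch side of this work, here is a minimal sketch of an orchestrated extract → quality check → load pipeline written against Airflow's TaskFlow API. The bucket, path, and task names are hypothetical and purely illustrative; this is not Promise's actual stack or data.

```python
# A minimal, illustrative Airflow 2.x DAG: extract a daily batch file, run a
# quality gate, then load it. Bucket, path, and task names are hypothetical.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def payments_batch_ingest():
    @task
    def extract() -> str:
        # In a real pipeline this would pull the latest partition from object storage.
        return "s3://example-bucket/payments/2024-01-01.csv"

    @task
    def quality_check(path: str) -> str:
        # Fail loudly so bad data never reaches downstream consumers.
        if not path.endswith(".csv"):
            raise ValueError(f"Unexpected file format: {path}")
        return path

    @task
    def load(path: str) -> None:
        print(f"Loading {path} into the warehouse")

    load(quality_check(extract()))


payments_batch_ingest()
```

The same extract, check, and load shape applies whichever orchestrator is in use; only the operator syntax changes.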

You’re a great fit for the role if you are:

  • Experienced (3+ years) in using scripting and programming languages (Python / Scala / Java) to build production data pipelines
  • Deeply familiar with batch processing, orchestration, and transformation tools like Airflow, Luigi, NiFi, and dbt
  • An expert in building out data pipelines, designing ELT/ETL processes, and maintaining these systems
  • Capable of writing efficient transformation code in SQL, Spark, and/or Python (a short transformation sketch follows this list)
  • Comfortable with ambiguity: you’ve worked in a startup environment or something similar where there often wasn’t perfect clarity about what to do next.
  • An excellent communicator, in both written and verbal contexts. You’re articulate and can get to the point quickly but kindly.
  • An expert multitasker: you can triage in the moment, juggling a wide variety of things that need to be fixed. You can quickly sort out priorities from noise.
  • Self-motivated and resourceful: you’ll do whatever needs doing to get the job done. You love collaborating, but are also happy to jump right in and do things yourself.
  • Excited about our mission: You think that improving the way governments interact with their constituents is critical. You want to prove that moving away from punitive approaches to non-payment and treating people with dignity can be a win for everyone.
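
As an illustration of the kind of transformation code mentioned above, here is a short PySpark sketch that aggregates a hypothetical payments dataset. The column names, paths, and the rollup itself are invented for the example.

```python
# Illustrative PySpark transformation: column names and source path are
# hypothetical. The point is pushing the work (filter, aggregate) into
# Spark's engine rather than pulling rows into Python.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("payments_rollup").getOrCreate()

payments = spark.read.parquet("s3://example-bucket/payments/")

daily_totals = (
    payments
    .filter(F.col("status") == "completed")
    .groupBy("agency_id", F.to_date("paid_at").alias("paid_date"))
    .agg(
        F.count("*").alias("payment_count"),
        F.sum("amount_cents").alias("total_cents"),
    )
)

daily_totals.write.mode("overwrite").parquet("s3://example-bucket/rollups/daily_totals/")
```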

Additional skills/experiences that are great:

  • Experience building streaming pipelines using open-source tools like Kafka/Flink or managed services like AWS Kinesis and GCP Pub/Sub (a streaming consumer sketch follows this list)
  • Familiarity with TypeScript and/or Java (or willingness to learn them) so that you can write the code in our apps that receives data from our pipelines
  • Experience with legacy systems and/or civic technologies
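
For the streaming side, here is a minimal consumer sketch using the kafka-python client. The topic, broker address, and event fields are hypothetical; an equivalent consumer for Kinesis or Pub/Sub would use those services' own SDKs.

```python
# Illustrative streaming consumer using kafka-python. Topic, group, and
# broker address are hypothetical placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payment-events",                      # hypothetical topic name
    bootstrap_servers=["localhost:9092"],
    group_id="pipeline-demo",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # In a real pipeline this would validate, enrich, and forward the event
    # to the warehouse or a downstream service.
    print(event.get("payment_id"), event.get("amount_cents"))
```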
