Senior Data Engineer

Your New Team

The ApartmentIQ team is revolutionizing how property managers optimize their operations. We provide a powerful platform that centralizes critical property data, offering actionable insights and intuitive tools to enhance efficiency and decision-making. Our product empowers users to streamline everything from leasing and maintenance to resident communication and financial reporting.

Behind the scenes, our data platform runs on a modern AWS-based tech stack designed to support big data architectures and machine learning models at scale. We believe in fostering an inclusive environment built on mutual trust, continuous learning, and a commitment to simplicity, where every engineer can contribute and grow.

The Role

As a Senior Data Engineer, you’ll be the technical backbone of the data layer that powers Daylight — ApartmentIQ’s revenue-management product that delivers real-time rent recommendations to property managers. You’ll design, build, and own the ingestion framework that pulls operational data from a variety of property-management systems, transforms it into analytics-ready models, and serves it to the machine-learning workflows that forecast demand and optimize pricing.

Working hand-in-hand with data scientists, you’ll ensure every byte flowing through Daylight is trustworthy, traceable, and available at the cadence our algorithms require. You’ll architect cloud-native, Terraform-managed infrastructure; implement scalable batch and streaming ETL/ELT jobs in Python; and layer in observability, testing, and data-quality guards so teams can iterate on models with confidence. You’ll also build and own core MLOps components that power model training, inference, and deployment — ensuring our ML systems are reliable, repeatable, and production-ready.
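
To give a concrete flavor of the work (all names, schemas, and values here are an illustrative sketch, not ApartmentIQ's actual code), a typical batch transform with a data-quality guard might look something like this in Python:

    import pandas as pd

    REQUIRED_COLUMNS = {"unit_id", "rent", "occupancy_date"}  # hypothetical lease schema

    def load_raw_leases(path: str) -> pd.DataFrame:
        # Extract: read a raw lease export from a property-management system
        return pd.read_csv(path, parse_dates=["occupancy_date"])

    def validate(df: pd.DataFrame) -> pd.DataFrame:
        # Data-quality guard: fail fast before bad rows reach the pricing models
        missing = REQUIRED_COLUMNS - set(df.columns)
        if missing:
            raise ValueError(f"missing columns: {missing}")
        if (df["rent"] <= 0).any():
            raise ValueError("non-positive rent values detected")
        return df

    def transform(df: pd.DataFrame) -> pd.DataFrame:
        # Transform: roll leases up to the unit-by-month grain the models expect
        month = df["occupancy_date"].dt.to_period("M").rename("month")
        return df.groupby(["unit_id", month]).agg(avg_rent=("rent", "mean")).reset_index()

    analytics_ready = transform(validate(load_raw_leases("leases.csv")))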

Beyond coding, you’ll collaborate with product managers, backend engineers, and customer-facing teams to translate business requirements—like a new rent rule or occupancy forecast—into performant data solutions. If you thrive on end-to-end ownership, relish tough debugging sessions, and want to see your work directly influence rent recommendations across thousands of units, we’d love to meet you.

Responsibilities

  • Design, build, and maintain scalable MLOps infrastructure to support model training, deployment, monitoring, and continuous integration/continuous delivery (CI/CD) of ML models.
  • Develop and manage robust data pipelines to extract, transform, and load (ETL/ELT) data from a variety of structured and unstructured sources.
  • Collaborate with data scientists and ML engineers to understand model requirements and ensure production readiness of data and model workflows.
  • Debug complex data and ML pipeline failures, including production issues in user-facing applications, working closely with data scientists and ML engineers to diagnose root causes in data or algorithm behavior.
  • Design and optimize data storage solutions using modern data warehousing and relational database systems.
  • Codify and manage cloud infrastructure using Infrastructure as Code tools, primarily Terraform, to ensure reproducibility, scalability, and auditability across environments.
  • Implement observability, alerting, and data quality frameworks to ensure pipeline health and uphold data integrity (a minimal sketch follows this list).
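
As one small illustration of the observability point above (the SLA value and names are hypothetical), a freshness check that can page on-call before stale data reaches the pricing models might look like:

    import logging
    from datetime import datetime, timedelta, timezone

    logger = logging.getLogger("pipeline.health")
    MAX_STALENESS = timedelta(hours=6)  # hypothetical freshness SLA for model inputs

    def check_freshness(last_loaded_at: datetime) -> bool:
        # Alert if the newest ingested record is older than the SLA allows
        age = datetime.now(timezone.utc) - last_loaded_at
        if age > MAX_STALENESS:
            logger.error("stale data: last successful load was %s ago", age)
            return False  # caller can page on-call or halt downstream model runs
        logger.info("freshness OK: last load %s ago", age)
        return True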

Qualifications

  • 5+ years of software engineering experience, including 3+ years working directly with data-intensive systems, pipelines, and infrastructure.
  • A strong sense of ownership over end-to-end systems, from architecture and implementation to CI/CD, observability, and infrastructure management.
  • Runs toward problems: has zero tolerance for bugs, leans into complex issues, and proactively investigates and resolves failures.
  • Strong debugging capabilities; seeks root causes, not band-aids — whether it’s a data anomaly, algorithmic quirk, or system failure.
  • Strong Python experience — can write clear, idiomatic code and understands best practices.
  • Comfortable writing SQL queries to analyze relational data.
  • Experience with Terraform or other Infrastructure-as-Code tools for provisioning cloud-based infrastructure (e.g., AWS, GCP).
  • Hands-on experience designing and implementing big data architectures and streaming or batch ETL pipelines, with a clear understanding of the trade-offs between complexity, performance, and cost.
  • Experience with data lakes, data warehouses, relational databases, and document stores, and when to use each.
  • Math or CS background and/or experience working with algorithms preferred.
  • Uses LLMs and AI agents to enhance engineering productivity and explore solution spaces creatively.
  • Operates effectively in fast-paced, startup environments; adapts quickly and communicates clearly.
  • Strong collaborator and communicator, deeply integrated with the team, and proactively shares context and decisions.

Bonus Skills and Experience

  • Ruby and/or the Ruby on Rails framework.
  • Writing performant code using techniques such as parallelism and concurrency (a small example follows this list).
  • AWS services: SageMaker, Lambda, Redshift, OpenSearch, Kinesis.
  • Experience with distributed systems.
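
As an illustration of the parallelism bullet above (the endpoints are hypothetical), I/O-bound extraction from multiple property-management systems parallelizes naturally with a thread pool:

    from concurrent.futures import ThreadPoolExecutor

    import requests

    FEEDS = [  # hypothetical property-management system endpoints
        "https://pms-a.example.com/units",
        "https://pms-b.example.com/units",
    ]

    def fetch(url: str) -> list:
        # One HTTP pull per source system; raises on non-2xx responses
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        return resp.json()

    # Threads suit I/O-bound work; each feed downloads concurrently
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(fetch, FEEDS))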

Why Our Team

  • 100% remote across the U.S., with quarterly in-person gatherings for team offsites
  • Competitive compensation
  • Flexible vacation and parental leave policies
  • Medical, dental, and vision insurance
  • 100% paid short-term disability, long-term disability, and life insurance
  • 401(k) program
  • A supportive, learning-first culture where you’ll help shape the next generation of AI-driven tools for the apartment rental industry