Luxury Presence is the leading digital platform revolutionizing the real estate industry for agents, teams, and brokerages. Our award-winning websites, cutting-edge marketing solutions, and AI-powered mobile platform empower real estate professionals to grow their business, operate more efficiently, and deliver exceptional service to their clients. Trusted by over 80,000 real estate professionals, including 31 of the nation’s 100 top-performing agents as published in the Wall Street Journal, Luxury Presence continues to set the standard for innovation and excellence in real estate technology.
About the Role
We’re seeking a Staff Software Engineer to strengthen our real estate MLS data platform squad. You will build robust data pipelines and backend services that power:
• High-quality MLS and property data across 400+ feeds
• Property discovery and search on agent websites
• Personalized listing recommendations and other data-driven features
• Conversational and operational AI agents that streamline internal workflows
• The evaluation and monitoring infrastructure that keeps these systems improving over time
This role sits at the intersection of backend engineering, data infrastructure, and AI-powered products.
Who is the Data Platform Squad?
We make sure clean, reliable MLS listing records and user click-stream data are always available to our products and customers. Our current team—a mix of data engineers and software engineers—owns the entire listing pipeline: ingestion, transformation, and normalization across 400+ MLS feeds and other sources.
We also extend the platform to capture user-activity data for user-facing features such as personalized listing recommendations, and we build AI agents that automate feed onboarding and listing-issue triage, reducing manual effort for internal teams and clients and shortening the path from data to business impact.
What You’ll Do
Technical leadership & architecture
• Own the end-to-end architecture for MLS and property data: streaming and batch pipelines, microservices, storage layers, and APIs
• Design and evolve event-driven, Kafka-based data flows that power listing ingestion, enrichment, recommendations, and AI use cases (see the consumer sketch after this list)
• Drive technical design reviews, set engineering best practices, and make high-quality tradeoffs around reliability, performance, and cost
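For a flavor of what these event-driven flows look like in practice, here is a minimal consumer sketch in Python, assuming the confluent-kafka client; the broker address, group id, and topic name are placeholders for illustration, not our actual configuration.

```python
import json

from confluent_kafka import Consumer

# Placeholder broker, group id, and topic names; illustration only.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "listing-enrichment",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["mls.listings.raw"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            # In production, surface broker/partition errors to metrics and alerting.
            continue
        listing = json.loads(msg.value())
        # Normalize and enrich the listing record here, then publish the result
        # downstream (e.g., to an enriched topic or a search index).
        print(listing.get("listing_id"))
finally:
    consumer.close()
```

A production consumer would add schema validation, retries, and dead-letter handling; the sketch only shows the core read-decode-process loop.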
Backend, data & platform engineering
• Design, build, and operate backend services (Python or Java) that expose listing, property, and recommendation data via robust APIs and microservices
• Implement scalable data processing with Spark or Flink on EMR (or similar), orchestrated via Airflow and running on Kubernetes where applicable
• Champion observability (metrics, tracing, logging) and operational excellence (alerting, runbooks, SLOs, on-call participation) for data and backend services
Streaming & batch data pipelines
• Build and maintain high-volume, schema-evolving streaming and batch pipelines that ingest and normalize MLS and third-party data (an illustrative orchestration sketch follows this list)
• Ensure data quality, lineage, and governance are built into the platform from the start—supporting analytics, AI/ML, and customer-facing features
• Partner with analytics engineering and data science to make data discoverable and usable (e.g., semantic layers, documentation, self-service tooling)
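To make the orchestration piece concrete, here is a minimal DAG sketch for a single feed's ingest-and-normalize job, assuming a recent Airflow version; the DAG id, task names, and hourly schedule are assumptions for illustration, not the team's actual setup.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_feed(**context):
    # Pull the latest records from one MLS feed (details omitted).
    ...


def normalize_listings(**context):
    # Map raw feed fields onto a common listing schema (details omitted).
    ...


with DAG(
    dag_id="mls_feed_normalization",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",               # assumed cadence
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    ingest = PythonOperator(task_id="ingest_feed", python_callable=ingest_feed)
    normalize = PythonOperator(task_id="normalize_listings", python_callable=normalize_listings)

    ingest >> normalize  # normalization runs only after ingestion succeeds
```

Across 400+ feeds, a pattern like this would typically be parameterized or generated per feed rather than hand-written.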
AI agents & data products
• Collaborate with ML/AI engineers to design and scale AI agents that automate MLS feed onboarding, listing discrepancy triage, and other operational workflows (see the triage sketch after this list)
• Work with frameworks such as PydanticAI, LangChain, or similar to integrate LLM-based agents into our data and service architecture
• Help define and implement evaluation, logging, and feedback loops so these agents and data-driven products continuously improve
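As a deliberately framework-agnostic sketch of the triage pattern, the example below uses pydantic v2 to force the model's verdict into a validated schema; call_llm is a hypothetical wrapper for whichever LLM client or agent framework is in use, and the category labels are illustrative.

```python
from typing import Literal

from pydantic import BaseModel


class TriageResult(BaseModel):
    # Structured verdict the agent must return; pydantic validates the shape.
    category: Literal["mapping_error", "stale_feed", "data_entry", "unknown"]
    summary: str
    suggested_action: str


def triage_listing_discrepancy(listing: dict, feed_record: dict) -> TriageResult:
    """Ask an LLM to classify a listing discrepancy and validate its answer."""
    prompt = (
        "Compare the normalized listing with the raw MLS feed record and "
        "classify the discrepancy. Respond as JSON with keys "
        "category, summary, and suggested_action.\n"
        f"Listing: {listing}\nFeed record: {feed_record}"
    )
    raw = call_llm(prompt)  # hypothetical LLM wrapper, not a real library call
    result = TriageResult.model_validate_json(raw)
    # Log the prompt, raw response, and validated result so the evaluation and
    # feedback loops described above have data to learn from.
    return result
```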
Cross-functional impact & mentorship
• Collaborate closely with Product, Engineering, and Operations to shape the roadmap for our data platform, MLS capabilities, and AI-powered experiences
• Translate ambiguous business and customer problems into clear technical strategies and phased delivery plans
• Mentor and unblock other engineers; elevate the overall level of technical decision-making on the team via pairing, reviews, and design guidance
What You’ll Bring
Experience & scope
• 10+ years of professional software engineering experience, including owning production systems end-to-end
• Significant experience working with data-intensive or distributed systems at scale (high volume, high availability)
• Prior experience in a senior or staff/lead role where you influenced architecture, standards, and technical direction
Core technical skills
• Strong programming skills in Python or Java, with experience building microservices and APIs (REST/GraphQL)
• Hands-on experience with Apache Kafka or similar event/messaging platforms (Kinesis, Pub/Sub, etc.)
• Deep experience with:
◦ Spark or Flink for large-scale data processing across streaming and batch pipelines (on EMR or similar big-data compute)
◦ Airflow (or equivalent orchestration tools)
◦ Kubernetes for running data/compute workloads
• Strong SQL and data modeling skills; solid understanding of ETL/ELT patterns, data warehousing concepts, and performance tuning
• Experience building on AWS (preferred) or another major cloud provider, with a good grasp of cost, reliability, and security tradeoffs
AI agent experience
• Experience building or integrating AI agents into production workflows (e.g., internal tools, support automation, operational triage, or data workflows)
• Familiarity with frameworks such as PydanticAI, LangGraph, Claude Code, or similar, and how they interact with backend services, vector stores, and LLM APIs
• Comfort working with logs, telemetry, and evaluation metrics to monitor, debug, and iteratively improve AI-driven systems
Leadership & collaboration
• Demonstrated ability to lead technical initiatives across teams, from idea to production (alignment, design, implementation, rollout)
• Track record of mentoring other engineers and raising the bar on code quality, testing, and design
• Strong communication skills; able to clearly explain complex technical decisions to both engineers and non-technical stakeholders
• Customer and product mindset: you care about how the data and services you build improve the end-user and client experience, not just the internals
Nice to Have
• Experience with any of:
◦ Iceberg, Hive, or other table formats/data lake technologies
◦ Snowflake, Athena, Redshift, or other cloud data warehouses
◦ dbt or similar transformation frameworks
◦ Data quality / observability tools (e.g., Great Expectations, Monte Carlo, Datafold)
◦ Vector databases / retrieval (e.g., LanceDB, Pinecone, Elasticsearch/OpenSearch)
• Background in real estate, marketplaces, or other domains where data quality and freshness are highly visible to customers
• Prior experience in a startup or high-growth environment where you’ve built or significantly evolved a data platform