Data Engineer

This job is no longer open
Alamo Drafthouse is a rapidly growing dine-in movie theater business with over 35 locations across the US in four time zones. We also operate Season Pass, a movie subscription service; Mondo, a movie collectibles e-commerce business; and Alamo on Demand, a streaming VoD service. We’ve been called the “coolest movie theater in the world” by Wired and the “best theater ever” by Time.

Each month, over 700,000 guests visit our theaters and over 2M users engage with our digital platforms. Every day, each venue hosts 30+ shows and private events. Every week, we add new movies to our schedules and reallocate our screens. Every tap, pageview, ticket, beer, burger, and ramekin of ketchup leaves a trail of data that we ingest into our warehouse, normalize, model, and connect so that we can deliver an analytics platform with unparalleled detail, speed, and value for our business partners.

2022 kicks off a period of rapid growth for Alamo, and of opportunity in this role. In addition to the post-COVID rebound in exhibition, we’re adding new locations and continuing to outpace the competition on an indexed basis. We’re growing our analytics team to sustain and build on this momentum. As a Data Engineer, you will directly impact Alamo’s mission to deliver the best damn theater experience that ever has or ever will exist by expanding our data platform to support the insights and production data needed to drive our product and service innovations.

Our analytics infrastructure is built on a thoroughly modern data stack: Stitch, Snowflake, dbt, and Looker. As a Data Engineer, you will be responsible for integrating new and existing data sources, managing production data pipelines, designing data quality processes, transforming data into analytics (star) schemas, designing usable data marts for end user consumption, supporting reverse ETL integrations into production systems, and supporting our compliance with US data privacy requirements. 
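To give a flavor of the star-schema modeling this role involves, here is a minimal, illustrative sketch in Python of splitting denormalized order rows into a fact table plus dimension tables keyed by surrogate IDs. The table and field names (`venue`, `film`, `tickets`) are hypothetical, not Alamo’s actual schema; in practice this transformation would live in dbt/SQL on Snowflake rather than application code.

```python
def build_star(raw_rows):
    """Split raw denormalized rows into two dimension tables and a fact table.

    Each dimension is a mapping from natural key (e.g. venue name) to a
    surrogate key; the fact table stores only surrogate keys plus measures.
    """
    dim_venue, dim_film, facts = {}, {}, []
    for row in raw_rows:
        # setdefault assigns the next surrogate key only on first sight
        venue_key = dim_venue.setdefault(row["venue"], len(dim_venue) + 1)
        film_key = dim_film.setdefault(row["film"], len(dim_film) + 1)
        facts.append({"venue_key": venue_key,
                      "film_key": film_key,
                      "tickets": row["tickets"]})
    return dim_venue, dim_film, facts


raw = [
    {"venue": "Austin", "film": "Dune", "tickets": 2},
    {"venue": "Denver", "film": "Dune", "tickets": 4},
    {"venue": "Austin", "film": "Nope", "tickets": 1},
]
venues, films, facts = build_star(raw)
```

The payoff of this shape is that BI tools like Looker can aggregate the narrow fact table quickly and join out to dimensions only for the attributes a given report needs.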

The ideal candidate is an experienced data systems builder who is excited to play a big role in designing, optimizing, and evolving our data architecture, and to grow in this role and within our team. They will be comfortable working with both technical and non-technical users in a heterogeneous systems environment (with a bias toward open source, AWS, and G Suite). The Data Engineer will work very closely with our software developers, product managers, and data analysts on a wide variety of initiatives impacting every aspect of our business including theater operations, finance, and marketing.

Colorado Salary Range: $130k-140k

This position can be worked remotely in the following states: Arizona, California, Colorado, Connecticut, Georgia, Michigan, Minnesota, New Jersey, New York, North Carolina, and Texas.

OUR MISSION
To Ensure EVERY Guest Has An AWESOME Experience And Is EXCITED To Come Back

OUR CORE VALUES

DO THE RIGHT THING
We strive to be a force of good in our company, in our industry and in the world. We stand up for our beliefs even when it is hard. We start from a place of kindness.

FOSTER COMMUNITY
We value what is unique about each other and celebrate our differences. We treat each other with respect, support each other’s passions, and help each other grow. We welcome healthy debate but don’t tolerate intolerance. We take this commitment outside our 4 walls, creating neighborhood theaters that are deeply tied to the local community.

BOLDLY GO
Like the crew of the Starship Enterprise™, we seek out new experiences and pursue innovation in all of our work. We take risks and chart new territory. We learn from our mistakes and continuously improve.

GIVE A SH!T
We are passionate about creating awesome experiences. We obsess over every detail and take pride in our work because we know it makes all the difference to our guests and our teammates. Our pursuit of excellence drives us to do our best.

CORE ROLE RESPONSIBILITIES

    • Build, maintain, optimize, and extend processes and systems to support data ingestion, transformation, and modeling
    • Manage dependencies and workloads across a variety of systems in a geographically distributed operation that runs 20+ hours a day, 365 days a year
    • Identify, design, and implement internal process improvements: automating data quality monitoring, advancing data freshness, optimizing data delivery, improving BI reporting performance, and evolving the data infrastructure for greater scalability
    • Work with data analysts to gather and refine requirements for data marts and ad hoc data models that drive BI insights into customer acquisition, operational efficiency, and other key business performance metrics
    • Work with product managers to engineer our data systems to feed production, customer-facing systems that enable innovative new guest services
    • Develop access controls and policies to ensure we protect customer data both internally and externally and comply with privacy regulations
    • Support predictive analytics and machine learning initiatives working closely with both internal and external subject matter experts
    • Uphold standards of behavior as defined by the company Core Values, Code of Conduct and Operational Guidelines
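As a sketch of the automated data quality monitoring mentioned above, the following Python function runs two generic checks (null rate on required fields, and staleness of a load timestamp) over rows from any pipeline. The thresholds and field names are illustrative assumptions, not Alamo’s actual rules.

```python
from datetime import datetime, timedelta


def check_quality(rows, required_fields, ts_field, max_age_hours=24, now=None):
    """Return a list of human-readable data-quality issues (empty = healthy)."""
    now = now or datetime.utcnow()
    issues = []
    # Null-rate check: required fields must be populated on every row.
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        if nulls:
            issues.append(f"{nulls} null value(s) in required field '{field}'")
    # Freshness check: the newest timestamp must be within the allowed window.
    if rows:
        newest = max(r[ts_field] for r in rows if r.get(ts_field))
        if now - newest > timedelta(hours=max_age_hours):
            issues.append(f"data is stale: newest '{ts_field}' is {newest}")
    return issues
```

In production, checks like these would typically be scheduled by an orchestrator (Prefect or Airflow) or expressed as dbt tests, with failures routed to an alerting channel.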

QUALIFICATIONS

    • 3+ years’ experience designing, delivering, managing, and optimizing data pipelines, data warehouses, data marts, and BI infrastructure
    • Experience with and detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytics tools
    • Experience designing schemas, analytics-focused data models, and data marts (bonus points for consumer retail experience)
    • Familiarity with the extract-load-transform (ELT) paradigm
    • Strong analytical skills related to working with time series and unstructured datasets
    • A degree in Information Systems, Data Science, Informatics, or related field
    • Experience with the following software/tools:
        • Advanced working SQL knowledge
        • Data warehouse/lakehouse technologies, especially Snowflake
        • Data transformation and modeling tools like dbt and LookML
        • ELT/ETL tools such as Stitch, Singer, and Meltano
        • Enterprise BI platforms such as Looker
        • Familiarity with software development best practices and comfort with source control (Git)
        • Orchestration and workflow management tools like Prefect and Airflow
        • Proficiency in JavaScript, Python, Java, and shell scripting
        • Familiarity with AWS and GCP cloud services and basic DevOps
        • Familiarity with reverse ETL and data integration tools
        • Knowledge of Customer Data Platforms designed around a modern data stack, such as Hightouch, RudderStack, and Blueshift