Data Engineer

Why this role

As a tech-enabled disruptor in the vacation rental industry, Evolve treats data as the backbone of how it is fundamentally changing the way guests interact with vacation rentals and homeowners maximize rental income. Evolve’s Business Intelligence team is focused on enabling the company to innovate and make data-informed decisions at scale by ensuring data is a trusted and valued asset, available at all levels of the organization for innovation, visualization, analytics, reporting, data hygiene, and data science.

The Data Engineer is a SQL, data warehouse, and data lake expert who is passionate about building, supporting, and optimizing data pipelines that enable mission-critical workflows for our reporting, analytics, business operations, and data science teams. This team and role are critical to Evolve’s success and help position us as an innovator and thought leader in the vacation rental space.

What you’ll do

  • Build, support, and optimize data pipelines using tools like Fivetran, Matillion, Prefect, and Python to move data to/from Snowflake, SaaS APIs, and other data stores (see the pipeline sketch after this list).
  • Design, modify, and implement data structures in Snowflake to support data ingestion, integration, and analytics.
  • Curate and transform data into appropriate structures for analytics and data science purposes using SQL, Python, Snowflake scripting, and data transformation tools like Matillion and dbt.
  • Design and implement processes to automate monitoring and alerting on source data quality, data ingestion and transformation processes, and the overall health of our data infrastructure (see the monitoring sketch after this list).
  • Research, recommend, and implement new and enhanced tools and methods that support Evolve’s data pipelines and processes for data ingestion, storage, transformation, automation, and hygiene.
  • Manage the deployment and monitoring of scheduled data ingestion and transformation processes.
  • Assist in defining quality standards for ETL/ELT, Python, Prefect, Snowflake, Fivetran, Matillion, and AWS as well as documenting and training other teammates on these standards.
  • Partner with Business Intelligence, Data Architecture, Data Science, Product teams, and others to develop scalable solutions for new and modified data sources.
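
To make the pipeline work above concrete, here is a minimal sketch of an extract-and-load flow of the kind this role builds, assuming Prefect 2.x and the snowflake-connector-python package. The API endpoint, table, warehouse, and credential names are hypothetical placeholders for illustration, not a description of Evolve’s actual infrastructure.

```python
import json
import os

import requests
import snowflake.connector
from prefect import flow, task


@task(retries=2, retry_delay_seconds=60)
def extract_listings(api_url: str) -> list[dict]:
    """Pull raw records from a SaaS API endpoint."""
    resp = requests.get(api_url, timeout=30)
    resp.raise_for_status()
    return resp.json()


@task
def load_to_snowflake(records: list[dict]) -> int:
    """Land the raw records in a Snowflake staging table."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="LOAD_WH",   # hypothetical warehouse
        database="RAW",        # hypothetical database
        schema="LISTINGS",     # hypothetical schema
    )
    try:
        cur = conn.cursor()
        # Store each record's JSON payload as a string; downstream
        # models can PARSE_JSON it into a VARIANT column.
        cur.executemany(
            "INSERT INTO stg_listings (id, payload) VALUES (%(id)s, %(payload)s)",
            [{"id": r["id"], "payload": json.dumps(r)} for r in records],
        )
        conn.commit()
        return len(records)
    finally:
        conn.close()


@flow
def listings_pipeline() -> None:
    records = extract_listings("https://api.example.com/listings")  # placeholder URL
    load_to_snowflake(records)


if __name__ == "__main__":
    listings_pipeline()
```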
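
The monitoring and alerting responsibility can be sketched the same way: a small scheduled flow that checks source freshness in Snowflake and pushes an alert when a threshold is breached. Again, the table name, the loaded_at column, and the webhook URL are assumptions for illustration.

```python
import os

import requests
import snowflake.connector
from prefect import flow, task


@task
def hours_since_last_load(table: str) -> float:
    """Return how many hours ago the newest record landed in `table`."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
    )
    try:
        cur = conn.cursor()
        # Assumes the table carries a loaded_at timestamp column.
        cur.execute(
            f"SELECT DATEDIFF('hour', MAX(loaded_at), CURRENT_TIMESTAMP()) FROM {table}"
        )
        return float(cur.fetchone()[0])
    finally:
        conn.close()


@task
def send_alert(message: str) -> None:
    """Post an alert to a (hypothetical) incoming webhook, e.g. Slack."""
    requests.post(os.environ["ALERT_WEBHOOK_URL"], json={"text": message}, timeout=10)


@flow
def freshness_check(table: str = "RAW.LISTINGS.STG_LISTINGS", max_hours: float = 6.0) -> None:
    age = hours_since_last_load(table)
    if age > max_hours:
        send_alert(f"{table} is stale: last load was {age:.1f}h ago (threshold {max_hours}h)")


if __name__ == "__main__":
    freshness_check()
```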

What makes you a great fit

  • 3+ years of relevant data warehouse and data lake engineering experience
  • Experience with data pipeline design and automation
  • Advanced SQL knowledge and experience coding in Python
  • Experience with databases such as Oracle, Snowflake, Redshift, Azure SQL, MongoDB, or similar is required; Snowflake experience is highly preferred
  • Experience administering ETL/ELT tools (code deployment, security, setup, and configuration)
  • Experience with enterprise ETL/ELT tools like Fivetran, dbt, or Matillion
  • Ambition to design and implement code from scratch and build new infrastructure
  • Enjoy a connected, collegial environment, whether teammates are remote, hybrid, or on-site
  • Experience deploying applications in Docker a plus
  • Experience with a cloud-based BI platform and data warehouse a plus
  • Experience with Airflow or Prefect a plus
  • Experience with AWS Fargate and Lambda a plus

Location 

We are currently able to hire throughout the U.S. except in the following states: California, District of Columbia, Hawaii, New Jersey, New Mexico, and Pennsylvania. If you live in Colorado, you can work remotely anywhere in the state, at our downtown Denver office, or a hybrid of both!

Compensation

For this role, our salary range is $109,000 - $133,000, depending on relevant experience.

Additional compensation details:

Equity: $25,000 in equivalent RSUs

