Why this role
We’re looking for a creative and highly motivated Senior Data Engineer to join our Custom Development Team to help us build, innovate on, and maintain the Revenue team’s data pipelines and algorithms. In this role you will help plan and execute special projects while supporting ongoing cross-team operations. Whether it’s a bug fix or an awesome feature, you will own your work and deliver elegant, scalable solutions. You will work closely with our Product Team and other cross-functional development teams to design and implement world-class solutions.
The ideal candidate for this role is technically savvy, analytical, and process-oriented. You must anticipate and embrace the challenge of solving critical business problems in a fast-paced environment. We’re looking for someone who is a self-starter, is willing to take on a variety of responsibilities, and is comfortable working with cross-functional teams. This position is a fantastic opportunity to get involved in the travel industry and have an immediate and meaningful impact at one of its fastest-growing companies.
What you’ll do
- Build and maintain ETL processes to load data into the data warehouse.
- Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions for all AWS data services.
- Curate and transform data into appropriate structures for algorithm development using SQL, stored procedures, Python, Snowflake scripting, and data transformation tools like AWS Glue, Apache Spark, and Apache Airflow.
- Perform collaboration duties such as code reviews and technical documentation for peers.
- Work directly with a Product Manager / Product Owner to clearly understand the problem being solved.
- Design, deliver, and support high quality code.
- Execute engineering and release priorities through strong leadership and communication.
- Build and maintain a reusable component library to be leveraged in other projects as we scale.
- Perform code reviews and collaborate with other team members on data engineering standards and best practices.
- Guide and teach less experienced developers.
What makes you a great fit
- 5+ years of relevant data warehouse and/or data lake engineering experience.
- Experience with AWS services for data collection, storage, management, processing, analysis, visualization, and security.
- Problem solving and algorithm development using programming languages like Python.
- Experience in data pipeline design, development and automation.
- Comfortable working with DevOps teams to optimize CI/CD pipelines.
- Experience participating in peer code reviews.
- Comfortable working in an Agile Scrum environment.
- You have a knack for finding solutions to problems, drawing on a full arsenal of debugging tools.
- You have built your own custom tools to help you automate tasks.
- Experience with Git version control and repository management in Gitlab.
- Experience with Agile development processes.
Location
Evolve has a flexible working environment, so teammates can work remotely anywhere in the state of Colorado, in our beautiful downtown Denver office, or a hybrid of both! As we grow, we are working toward opening remote opportunities across the entire U.S. We are currently able to hire across the U.S. except in the following locations: California, District of Columbia, Hawaii, New Jersey, New Mexico, and Pennsylvania.
Compensation
For this role, our salary range is $125,000 to $145,000, depending on relevant experience. This role will also be eligible to receive a variable annual bonus based on both company and individual performance.