Principal Data Architect

This job is no longer open

Why this role

We’re looking for a creative and highly motivated Principal Data Architect to join our Custom Development Team and help us architect, design, build, innovate, and maintain our Revenue team’s data pipelines and algorithms. In this role you will help plan and execute special projects while supporting ongoing cross-team operations. Whether building a new capability or enhancing an existing feature, you will own your work and deliver elegant, scalable solutions. You will work closely with our Product Team and other cross-functional development teams to design and implement world-class solutions.

The ideal candidate for this role is technically savvy, analytical, and process-oriented. You must anticipate and embrace the challenge of solving critical business problems in a fast-paced environment. We’re looking for a self-starter who is willing to take on a variety of responsibilities and is comfortable working with cross-functional teams. This position is a fantastic opportunity to get involved in the travel industry and have an immediate and meaningful impact at one of its fastest-growing companies.

What you’ll do

  • Lead the design, development, orchestration and optimization of data pipelines in the cloud.
  • Collaborate with and across Agile teams to architect, design, develop, test, implement, and support technical solutions and data architecture for all AWS data services.
  • Create functioning proofs of concept for forward-looking implementations.
  • Identify patterns in the problem domain and build custom frameworks (Python/Scala/Java) that serve as a platform, enable other developers to build applications, and embody a development philosophy.
  • Architect, design, deliver and support high quality code.
  • Curate and transform data into appropriate structures for algorithm development purposes using SQL, Stored Procedures, Python, Snowflake scripting, and data transformation tools like AWS Glue, Apache Spark and Apache Airflow.
  • Perform collaboration duties such as design reviews, code reviews, and technical documentation for peers.
  • Collaborate with other team members on data engineering standards and best practices.
  • Work directly with a Product Manager / Product Owner to clearly understand the problem being solved.
  • Collaborate with and mentor Technical Leads across multiple scrum teams.
  • Deliver on engineering and release priorities through strong leadership and communication.
  • Build and maintain a reusable component library to be leveraged in other projects as we scale.
  • Guide and teach less experienced developers.

What makes you a great fit

  • 8+ years of relevant data warehouse and/or data lake engineering experience.
  • Advanced expertise in designing, optimizing, and productionizing data pipelines that meet SLAs and SLOs.
  • Advanced expertise with AWS data collection, storage and data management, processing, analysis and visualization, and security.
  • Advanced expertise in one or more cloud data platforms such as Snowflake, Amazon Redshift, or Azure SQL is required, with Snowflake highly preferred.
  • Experience with enterprise ETL/ELT tools such as Fivetran, dbt, or Matillion.
  • Experience with AWS serverless development is a plus.
  • Comfortable partnering with DevOps teams to optimize CI/CD pipelines.
  • Experience leading design and code reviews.
  • Expertise working in an Agile Scrum environment while driving continuous improvement of team maturity.
  • You have a knack for finding solutions to problems, and for using a full arsenal of debugging tools.
  • You have built your own custom tools to help you automate building environments and keeping them in sync.
  • Experience with Git version control, branching strategy, and repository management in GitLab.

Location

Evolve has a flexible working environment, so teammates can work remotely anywhere in the state of Colorado, in our beautiful downtown Denver office, or in a hybrid of both! As we grow, we are working toward opening remote opportunities across the entire U.S. We are currently able to hire across the U.S. except in the following locations: California, District of Columbia, Hawaii, New Jersey, New Mexico, and Pennsylvania.

Compensation

For this role our salary range is $156,000 to $180,000, depending on relevant experience. This role will also be eligible to receive a variable annual bonus based on both company and individual performance.
