Sr. Data Engineer - AWS & Databricks

This job is no longer open

Who is Blueprint?

We are a technology solutions firm headquartered in Bellevue, Washington, with a strong presence across the United States. Unified by a shared passion for solving complicated problems, our people are our greatest asset. We use technology as a tool to bridge the gap between strategy and execution, powered by the knowledge, skills, and expertise of our teams, who bring unique perspectives and years of experience across multiple industries. We’re bold, smart, agile, and fun.

What does Blueprint do?

Blueprint helps organizations unlock value from existing assets by leveraging cutting-edge technology to create additional revenue streams and new lines of business. We connect strategy, business solutions, products, and services to transform and grow companies.

Why Blueprint?

At Blueprint, we believe in the power of possibility and are passionate about bringing it to life. Whether you join our bustling product division, our multifaceted services team, or our human resources group, your ability to make an impact is amplified when you join one of our teams. You’ll focus on solving unique business problems while gaining hands-on experience with the world’s best technology. We believe in unique perspectives and build teams of people with diverse skill sets and backgrounds. At Blueprint, you’ll have the opportunity to work with multiple clients and teams, such as data science and product development, all while learning, growing, and developing new solutions. We guarantee you won’t find a better place to work and thrive than at Blueprint.

We are looking for a Sr. Data Engineer – AWS & Databricks to join us as we build cutting-edge technology solutions! This is your opportunity to be part of a team committed to delivering best-in-class service to our customers.

In this role, you will play a crucial part in designing, developing, and maintaining robust data infrastructure solutions, ensuring the efficient and reliable flow of data across our organization. If you are passionate about data engineering, have a strong background in AWS and Databricks, and thrive in a collaborative and innovative environment, we want to hear from you.

Responsibilities:

  • Design, implement, and maintain scalable data architectures that support our clients’ data processing and analysis needs.
  • Collaborate with cross-functional teams to understand data requirements and translate them into efficient and effective data pipeline solutions.
  • Develop, optimize, and maintain ETL (Extract, Transform, Load) processes to ensure the timely and accurate movement of data across systems.
  • Implement best practices for data pipeline orchestration and automation using tools like Apache Airflow (a minimal sketch follows this list).
  • Leverage AWS services, such as S3, Redshift, Glue, EMR, and Lambda, to build and optimize data solutions.
  • Utilize Databricks for big data processing, analytics, and machine learning workflows.
  • Implement data quality checks and ensure the integrity and accuracy of data throughout the entire data lifecycle.
  • Establish and enforce data governance policies and procedures.
  • Optimize data processing and query performance for large-scale datasets within AWS and Databricks environments.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and provide the necessary infrastructure.
  • Document data engineering processes, architecture, and configurations.
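
For illustration only, the sketch below shows the general shape of an Airflow DAG for the kind of orchestration described above: a daily extract-transform-load chain moving data from S3 into Redshift. Every name in it (DAG ID, task IDs, function bodies) is a hypothetical placeholder, not an actual Blueprint pipeline.

    # Hypothetical sketch of a daily ETL DAG (Airflow 2.x style):
    # extract from S3, transform, load into Redshift.
    # All names are placeholders; task bodies are stubs, not real pipeline code.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_from_s3(**context):
        # Placeholder: pull raw files from an S3 landing bucket (e.g. with boto3).
        ...

    def transform(**context):
        # Placeholder: clean, validate, and reshape the extracted data.
        ...

    def load_to_redshift(**context):
        # Placeholder: COPY the transformed data into Redshift staging tables.
        ...

    with DAG(
        dag_id="daily_sales_etl",  # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",         # Airflow 2.4+ parameter name
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_from_s3)
        clean = PythonOperator(task_id="transform", python_callable=transform)
        load = PythonOperator(task_id="load", python_callable=load_to_redshift)

        # Linear extract -> transform -> load dependency chain.
        extract >> clean >> load

In practice, a DAG like this would also carry the data-quality checks and retry/alerting configuration called out in the responsibilities above.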

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • Minimum of 5 years of experience in data engineering roles, with a focus on AWS and Databricks.
  • Proven expertise in AWS services (S3, Redshift, Glue, EMR, Lambda) and Databricks.
  • Strong programming skills in languages such as Python, Scala, or Java.
  • Experience with data modeling, schema design, and database optimization.
  • Proficiency in using data pipeline orchestration tools (e.g., Apache Airflow).
  • Familiarity with version control systems and collaboration tools.
  • Ability to troubleshoot complex data issues and implement effective solutions.
  • Strong communication and interpersonal skills.
  • Ability to work collaboratively in a team-oriented environment.
  • Proactive in staying updated with industry trends and emerging technologies in data engineering.

Salary Range

Pay ranges vary based on multiple factors including, without limitation, skill sets, education, responsibilities, experience, and geographical market. The pay range for this position reflects geography-based ranges for Washington state: $146,400 to $175,100 USD annually. The salary/wage and job title for this opening will be based on the selected candidate’s qualifications and experience and may be outside this range.

Equal Opportunity Employer

Blueprint Technologies, LLC is an equal employment opportunity employer. Qualified applicants are considered without regard to race, color, age, disability, sex, gender identity or expression, sexual orientation, veteran/military status, religion, national origin, ancestry, marital or familial status, genetic information, citizenship, or any other status protected by law.

If you need assistance or a reasonable accommodation to complete the application process, please reach out to: recruiting@bpcs.com

Blueprint believes in the importance of a healthy and happy team, which is why our comprehensive benefits package includes:

  • Medical, dental, and vision coverage
  • Flexible Spending Account
  • 401k program
  • Competitive PTO offerings
  • Parental Leave
  • Opportunities for professional growth and development

Location: Remote
