Staff Data Engineer, Data Platform

YOUR MISSION

As a Staff Data Engineer on the Data Platform team, you will grow our business by applying advanced data engineering techniques to strengthen our data architecture and streamline data ingestion. Your expertise will improve our data-driven product offerings and provide insights that inform strategic decision-making, supporting our global growth.

WHAT YOU'LL DO

  • Design, implement, and maintain scalable and optimized data architectures that meet our evolving business needs
  • Lead and contribute hands-on to major initiatives from inception to delivery
  • Evaluate and recommend appropriate data storage solutions, ensuring data accessibility and integrity
  • Establish monitoring and alerting systems to proactively identify and address potential data pipeline issues
  • Continuously optimize data ingestion processes for improved reliability and performance
  • Support data infrastructure needs such as cluster management and permissions
  • Work closely with product teams to build data-driven features and enhancements into our products and services
  • Collaborate with data modelers and data analysts to provide them with high-quality, clean, and well-structured data
  • Provide technical guidance and mentorship to senior data engineers, fostering a collaborative and high-performance environment

WHAT YOU'LL BRING

  • 6+ years of experience in data engineering, with a strong focus on data architecture and data ingestion
  • Strong background as a tech lead or architect
  • Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field
  • Proficiency in Databricks, Redshift, S3, Astronomer/Airflow, Fivetran, Segment, and dbt, as well as ETL processes and data integration techniques
  • Proficiency in data engineering programming languages such as Python, Scala, and SQL
  • Experience with Data Ops (VPCs, cluster management, permissions, Databricks configurations, Terraform)
  • A passion for working with data and turning insights into action
  • Proven track record of designing and optimizing data pipelines for efficiency and accuracy
  • A deep concern for data quality and a commitment to maintaining high standards
  • A willingness to take ownership of projects and see them through to successful completion
  • Experience with cloud-based data platforms like AWS, Azure, or Google Cloud Platform
  • Expert knowledge of big data technologies such as Hadoop, Spark, or NoSQL databases
  • Knowledge of data governance and security practices
  • Excellent command of English, both written and verbal
  • Ability to work independently from a remote location with a global team

NICE TO HAVE

  • Experience with migrating from Redshift to Databricks
Outer Join is the premier job board for remote jobs in data science, analytics, and engineering.