Learning Architect - Machine Learning/Data Engineering

As a Staff Architect, you'll apply your expertise in data engineering and/or machine learning, along with your passion for enabling others, to lead the strategy and design of Databricks product enablement offerings; if you think of training as a product, you are its product manager. The strategy you define will enable all of Databricks and will incorporate the cutting-edge, industry-disrupting features of the Databricks Data Intelligence Platform.

This is an opportunity to develop and maintain your Databricks product skills, work with the latest and greatest features, and be the voice that defines our overarching enablement strategy for our global customers, partners, and internal field. In this role, you will work with Databricks product and engineering teams, subject-matter experts, and global stakeholders to translate product updates and business needs into enablement offerings, programs, and learning paths that set the direction for the Databricks training and certification business and lead to new content development, certification exams, training offerings, and broader go-to-market initiatives.

In your day-to-day work, you'll make a name for yourself at Databricks by being the point of contact for all things related to enablement strategy. You'll (1) regularly engage in SME and product meetings, (2) work with our learning audience teams to fully understand business requirements, (3) define learning offering strategy across the Databricks personas and product portfolio, and (4) work with a team of global stakeholders, product managers, engineers, subject-matter experts, and other curriculum developers to meet overarching business needs.

The impact you will have:

  • Function as a company-wide thought leader and subject-matter expert on Databricks
  • Provide technical leadership to guide enablement initiatives across the Databricks landscape
  • Work with subject-matter experts and learning architects to scope needs for enablement material
  • Grow technically in areas such as lakehouse technology, big data streaming, and big data ingestion and workflows by working regularly in the Databricks DI Platform

What we look for:

  • Passion and experience for sharing knowledge and expertise to enable others
  • 5+ years of experience in a technical role with expertise in at least one of the following:
      • Maintaining and extending production data systems to evolve with complex needs
      • Scaling big data workloads (such as ETL) to be performant and cost-effective, and building large-scale data ingestion pipelines and data migrations, including CDC and streaming ingestion pipelines
      • Cloud data lake technologies such as Delta
  • Bachelor's degree in Computer Science, Information Systems, Engineering, or another quantitative discipline, or equivalent work experience
  • Production programming experience in SQL and Python
  • Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike
  • Passion for collaboration, life-long learning, and driving business value through ML
  • [Preferred] Experience working with Apache Spark to process large-scale distributed datasets


Benefits:

  • Comprehensive health coverage including medical, dental, and vision
  • 401(k) plan
  • Equity awards
  • Flexible time off
  • Paid parental leave
  • Family planning
  • Gym reimbursement
  • Annual personal development fund
  • Employee Assistance Program (EAP)


Outer Join is the premier job board for remote jobs in data science, analytics, and engineering.