Senior Data Engineer

This job is no longer open

Our Data Engineering Team is composed of data experts. We build world-class data solutions and applications that power crucial business decisions throughout the organisation. We manage multiple analytical data models and pipelines across Atlassian, covering finance, growth, product analysis, customer analysis, sales and marketing, and more. We maintain Atlassian's data lake, which provides a unified way of analysing our customers, our products, our operations, and the interactions among them.

We're hiring a Senior Data Engineer, reporting to the Data Engineering Manager based in Sydney. In this role, you'll enable a world-class engineering practice, shape how we use data, develop backend systems and data models that serve insight needs, and help build Atlassian's data-driven culture. You love thinking about how the business can consume data and then figuring out how to build it.


  • You'll partner with the product analytics and data science teams to build data solutions that allow them to draw more insights from our data and use them to support important business decisions.

  • You'll work with different stakeholders to understand their needs and architect/build the data models, data acquisition/ingestion processes and data applications to address those requirements.

  • You'll add new sources, code business rules, and produce new metrics that support the product analysts and data scientists.

  • You'll be the data domain expert who understands all the nitty-gritty of our products.

  • You'll own a problem end-to-end. Requirements may be vague, and iterations will be rapid.

  • You'll improve data quality by using and improving internal tools and frameworks that automatically detect data quality (DQ) issues.


  • A BS in Computer Science or equivalent, with 5+ years of professional experience as a Senior Data Engineer or in a similar role.

  • Strong programming skills in Python.

  • Working knowledge of relational databases and query authoring (SQL).

  • Experience designing data models for optimal storage and retrieval to meet product and business requirements.

  • Experience building scalable data pipelines using Spark (SparkSQL) with Airflow scheduler/executor framework or similar scheduling tools.

  • Experience working with AWS data services or similar Apache projects (Spark, Flink, Hive, and Kafka).

  • Understanding of Data Engineering tools/frameworks and standards to improve the productivity and quality of output for Data Engineers across the team.

  • Well-versed in modern software development practices (Agile, TDD, CI/CD).
