Data Engineer

Atlassian is looking for a Data Engineer to join our Data Engineering team and build world-class data solutions and applications that power crucial decisions throughout the organisation. We are looking for an open-minded, structured thinker who is passionate about building systems at scale. You will enable a world-class data engineering practice, drive our approach to using data, develop backend systems and data models that serve the company's insights needs, and play an active role in building Atlassian's data-driven culture.


Data is a BIG deal at Atlassian. We ingest over 180 billion events each month into our analytics platform, and dozens of teams across the company drive their decisions and guide their operations based on the data and services we provide.

The data engineering team manages several data models and data pipelines across Atlassian, including finance, growth, product analysis, customer support, sales, and marketing. You'll join a team that is smart and very direct. We ask hard questions and challenge each other to constantly improve our work.

As a Data Engineer, you will apply your technical expertise to build analytical data models that support a broad range of requirements across the company. You will work with extended teams to evolve solutions as business processes and requirements change. You'll own problems end-to-end, and on an ongoing basis you'll improve the data by adding new sources, coding business rules, and producing new metrics that support the business.

What we're looking for:
  • BS/BA in Computer Science, Engineering, Information Management, or another technical field, and 3+ years of data engineering experience.

  • Strong programming skills in Python or Java.

  • Working knowledge of relational databases and query authoring via SQL.

  • Experience designing data models for optimal storage and retrieval to meet product and business requirements.

  • Experience building scalable data pipelines using Spark (Spark SQL) with the Airflow scheduler/executor framework or similar scheduling tools.

  • Experience building real-time data pipelines using a microservices architecture.

  • Experience working with AWS data services or similar Apache projects (Spark, Flink, Hive, and Kafka).

  • Understanding of data engineering tools, frameworks, and standards that improve the productivity and quality of output for data engineers across the team.

  • Well-versed in modern software development practices (Agile, TDD, CI/CD).

  • A willingness to accept failure, learn, and try again.
