Senior Data Engineer

This is a remote position. With sufficient timezone overlap with the team, we're able to hire eligible candidates for this role from any location in Australia and New Zealand.

Your future team

Atlassian is looking for a Senior Data Engineer to join our Corporate Data Engineering Team. You will build top-notch data solutions and applications that inform important decisions across the organization, and you will report to a Senior Data Engineering Manager. Your role involves shaping our engineering practice, developing backend systems and data models, and promoting Atlassian's data-driven culture. A typical day may involve advising on the information architecture for our Data Lake and enhancing our event collection infrastructure.

Collaborating with partners, you will design data models, acquisition processes, and applications that address their needs. Drawing on your experience with large-scale data processing systems (batch and streaming), you will help drive business growth and enhance product experiences. Our aim is to improve data pipelines and infrastructure so they deliver reliable insights. As a domain expert, you will collaborate with Technology Teams, Global Analytical Teams, and Data Scientists across programs. You'll take ownership of problems end to end: extracting and cleaning data, understanding the systems that generate it, and automating analyses and reporting. Improving data quality by adding sources, coding business rules, and producing metrics is crucial as requirements evolve. Agility and smart risk-taking are important qualities in an industry where digital innovation meets partner and customer needs over time.

You'll have flexibility in where you work: in an office, from home (remote), or a combination of the two.
#LI-Remote
On your first day, we'll expect you to have:

  • A BS in Computer Science or equivalent experience, with 5+ years as a Sr. Data Engineer or in a similar role.
  • Programming skills in Python; Java is good to have.
  • The ability to design data models for storage and retrieval that meet product and business requirements.
  • Experience building scalable data pipelines using Airflow, AWS data services (Redshift, Athena, EMR), and Apache projects (Spark, Flink, Hive, and Kafka).
  • A track record of improving the productivity and quality of output for engineers.
  • Familiarity with modern software development practices (Agile, TDD, CI/CD) applied to data engineering.
  • Experience enhancing data quality through internal tools/frameworks that detect DQ issues.
  • Working knowledge of relational databases and SQL query authoring.

We'd be super excited if you have:

  • Followed a Kappa architecture in any of your previous deployments.
  • Domain knowledge of Order Management, Entitlement, Billing, Financial, and People systems.