Working at Atlassian
Atlassian can hire people in any country where we have a legal entity. Assuming you have eligible working rights and sufficient time zone overlap with your team, you can choose to work remotely or from an office as offices reopen (unless your role needs to be performed in an office). Interviews and onboarding are conducted virtually, as part of being a distributed-first company.
Atlassian is looking for a Senior Data Architect to join our Data Engineering team and build world-class data solutions and applications that power crucial business decisions throughout the organization. We are looking for an open-minded, structured thinker who is passionate about building systems at scale. You will enable a world-class engineering practice, drive how we use data, develop backend systems and data models that serve our insight needs, and play an active role in making Atlassian data-driven. You love thinking about how the business can consume this data and then figuring out how to build it.
You'll own an extensive program end-to-end: collecting, extracting, and cleaning the data, understanding the systems that generated it, and automating your analyses and reporting. On an ongoing basis, you'll be responsible for improving the data by adding new sources and business rules, and for producing new metrics that support the business. Requirements will be vague. Iterations will be rapid. You will need to be nimble and take intelligent risks.
On a typical day, you may be asked to:
- Shape the information architecture of our data lake.
- Review the existing analytical data models, reusing them where possible and revising them as needed.
- Apply integration design patterns (and avoid anti-patterns) when creating new designs.
- Design dimensional data models.
- Design tables that take advantage of the different Hadoop file formats.
- Work with stakeholders to understand business reporting needs, then architect and build the data models, ETL processes, and data applications that answer them.
- Work with large datasets, reporting platforms, and data visualization tools.
- Optimize data pipelines and infrastructure to deliver trusted, high-quality data; as the data domain expert, you will partner with our technology teams, analytical teams, and data scientists across various initiatives.
- Understand the different source systems and their data attributes to identify the sources of truth that make reporting trustworthy.
- Implement Master Data Management (MDM) solutions.
More about you
As a data architect, you will apply your strong technical experience to building analytics data models that support a broad range of analytical requirements across the company. You will work with other teams to evolve solutions as business processes and requirements change. You enjoy working in a fast-paced environment, and you can turn vague requirements into solid solutions. You are motivated by solving challenging problems, where creativity is as crucial as your ability to write code.
On your first day, we'll expect you to have:
- 12-15 years of professional experience, including at least 5 years as a data architect or in a similar role
- Strong programming skills in Python; Java is good to have
- Experience in requirements collection, analysis, and data profiling for BI systems
- Experience working with project managers to draft project plans and effort estimates
- Experience in data warehouse modeling and master data management solutions
- Experience building data pipelines using Spark and/or Hive
- Experience working in a technical environment with the following technologies: AWS data services (Redshift, Athena, EMR) or similar, Apache projects (Spark, Flink, Hive, Kafka)
- Fluency in modern software development practices (Agile, TDD, CI/CD) and how they apply to data engineering
- Experience building frameworks that support high-quality data pipelines
- Experience with observability
- Experience writing and tuning SQL
- A willingness to accept failure, learn, and try again
- An open mind to try solutions that may seem crazy at first
- A BE in Computer Science or equivalent experience
- Experience mentoring junior team members through review processes
We’d be super excited if you have:
- Experience working on Apache Airflow (or similar tools) for orchestrating data pipelines
- Experience building MDM systems and other enterprise data integration solutions
- Deployed ML models and learned when best to use them
- Followed a Kappa architecture with any of your previous deployments
- Implemented solutions on top of Kafka
- Experience building data solutions for subscription businesses