This position is 100% remote, based in the US.
The data analyst role at GitLab is a hybrid role: part data analyst, part data scientist, part data warehouse engineer, and part backend engineer.
This role requires an inquisitive, business-oriented mindset and the ability to implement rigorous database solutions and best practices in order to produce robust, high-quality data insights and drive their adoption for business decisions across all areas of GitLab.
Responsibilities
- Collaborate with other functions across the company by building reports and dashboards with useful analysis and data insights
- Explain trends across data sources, potential opportunities for growth or improvement, and data caveats for descriptive, diagnostic, predictive (including forecasting), and prescriptive data analysis
- Develop a deep understanding of how data is created and transformed by GitLab products and by third-party services, in order to help drive product design and service usage and to flag impacts on data reporting capabilities
- Understand and document the full lifecycle of data and our common data framework so that all data can be integrated, modeled for easy analysis, and analyzed for data insights
- Document every action in issue/MR templates, the handbook, or READMEs so that your learnings turn into repeatable actions and then into automation, following the GitLab handbook-first tradition!
- Expand our database with clean data (ready for analysis) by implementing data quality tests while continuously reviewing, optimizing, and refactoring existing data models
- Craft code that meets our internal standards for style, maintainability, and best practices for a high-scale database environment. Maintain and advocate for these standards through code review
- Provide data modeling expertise to all GitLab teams through code reviews, pairing, and training to help deliver optimal, DRY, and scalable database designs and queries in Snowflake and in Periscope
- Approve data model changes as a Data Team Reviewer and code owner for specific database and data model schemas
- Own the end-to-end on-call data triage process: reading Airflow logs, diagnosing data issues, and verifying and implementing solutions with an automated alerting system (ChatOps, etc.), as well as providing data support for all GitLab team members
- Contribute to and implement data warehouse and data modeling best practices, keeping reliability, performance, scalability, security, automation, and version control in mind
- Follow and improve our processes and workflows for maintaining high quality data and reporting while implementing the DataOps philosophy in everything you do
- This position reports to the Manager, Data
- Advocate for improvements to data quality, security, and query performance that have particular impact across your team as a Subject Matter Expert (SME)
- Solve technical problems of high scope and complexity
- Exert influence on the long-range goals of your team
- Understand the code base extremely well in order to drive new data innovation and to spot inconsistencies and edge cases
- Apply experience with performance and optimization problems, particularly at large scale, and a demonstrated ability to both diagnose and prevent these problems
- Help to define and improve our internal standards for style, maintainability, and best practices for a high-scale data environment, and maintain and advocate for these standards through code review
- Represent GitLab and its values in public communication around broader initiatives, specific projects, and community contributions
- Provide mentorship for Junior and Intermediate Engineers on your team to help them grow in their technical responsibilities
- Deliver and explain data analytics methodologies and improvements with minimal guidance and support from other team members. Collaborate with the team on larger projects
- Build close relationships with other functional teams to truly democratize data understanding and access
- Influence and implement the SLOs and SLAs of our service-level framework for our data sources and data services
- Identify changes to the product architecture and to third-party services from a reliability, performance, and availability perspective, using a data-driven approach focused on relational databases (knowledge of other data stores is a plus)
- Proactively work on efficiency and capacity planning to set clear requirements and reduce system resource usage, making compute queries cheaper
- Participate in Data Quality Process or other data auditing activities
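The data quality testing mentioned above (uniqueness, completeness, validity checks on modeled data) can be illustrated with a minimal sketch. GitLab does this with dbt tests; the plain-Python check below is only an illustrative stand-in, and the table and column names (`account_id`, `created_at`, `amount`) are hypothetical:

```python
from collections import Counter

def run_quality_checks(rows: list[dict]) -> list[str]:
    """Return a list of failed checks for a hypothetical accounts table,
    represented as a list of row dicts."""
    failures = []
    # Uniqueness: the primary key must not repeat
    id_counts = Counter(r["account_id"] for r in rows)
    if any(count > 1 for count in id_counts.values()):
        failures.append("account_id: duplicate values")
    # Completeness: required columns must not be null
    if any(r["created_at"] is None for r in rows):
        failures.append("created_at: null values")
    # Validity: amounts must be non-negative
    if any(r["amount"] < 0 for r in rows):
        failures.append("amount: negative values")
    return failures

# Toy data exercising all three checks
rows = [
    {"account_id": 1, "created_at": "2023-01-01", "amount": 100.0},
    {"account_id": 2, "created_at": None, "amount": 50.0},
    {"account_id": 2, "created_at": "2023-01-03", "amount": -5.0},
]
print(run_quality_checks(rows))
```

In dbt these same checks would be declared as `unique`, `not_null`, and custom tests in the model's schema file rather than hand-written code.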
Requirements
- 5+ years of experience in a similar role
- Experience building reports and dashboards in a data visualization tool
- Passionate about data, analytics and automation. Experience cleaning and modeling large quantities of raw, disorganized data (we use dbt)
- Experience with a variety of data sources. Our data includes Salesforce, Zuora, Zendesk, Marketo, NetSuite, Snowplow and many others (see the data team page)
- Demonstrated capacity to clearly and concisely communicate complex business logic, technical requirements, and design recommendations through iterative solutions
- Deep understanding of SQL in analytical data warehouses (we use Snowflake SQL) and in business intelligence tools (we use Periscope)
- Hands-on experience working with SQL, Python, API calls, and JSON to generate business insights and drive better organizational decision making
- Familiarity with Git and the command line
- Deep understanding of relational and non-relational databases, SQL and query optimization techniques, and demonstrated ability to both diagnose and prevent performance problems
- Effective communication and collaboration skills, including clear status updates
- Positive and solution-oriented mindset
- Comfort working in a highly agile, intensely iterative environment
- Self-motivated and self-managing, with strong organizational skills
- Ability to thrive in a fully remote organization
- Share and work in accordance with our values
- Successful completion of a background check
- Ability to use GitLab
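The "SQL, Python, API calls, and JSON" requirement above can be sketched with a small example: parsing a JSON payload (as an API might return it) and aggregating it into a business summary. The payload shape and field names (`stage`, `users`) are hypothetical, chosen only to illustrate the pattern:

```python
import json
from statistics import mean

# A hypothetical JSON payload, as might be returned by an analytics API call
payload = json.loads("""
{
  "results": [
    {"stage": "create", "users": 120},
    {"stage": "verify", "users": 80},
    {"stage": "deploy", "users": 40}
  ]
}
""")

def summarize(results: list[dict]) -> dict:
    """Aggregate per-stage user counts into a small business summary."""
    users = [r["users"] for r in results]
    return {
        "total_users": sum(users),
        "avg_users_per_stage": mean(users),
        "top_stage": max(results, key=lambda r: r["users"])["stage"],
    }

print(summarize(payload["results"]))
```

In practice the payload would come from a live API request and the summary would feed a dashboard or report rather than `print`.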
Don’t have a ton of knowledge about GitLab yet? Don’t worry. We have an extensive onboarding and training program at GitLab, and you will be provided with the necessary DevOps and GitLab knowledge to fulfill your role.
Also, we know it’s tough, but please try to avoid the confidence gap. You don’t have to match all the listed requirements exactly to be considered for this role.
Hiring Process
To view the full job description and hiring process, please view our handbook. Additional details about our process can also be found on our hiring page.
Remote-US