thatgamecompany is best recognized for creating award-winning, enriching, and meaningful game titles such as Journey, Flower, and flOw. Our most recent game, Sky, is our most complex undertaking to date. It is a social network built around the values inherited from a powerful humanistic story. It is a live experience continuously evolving inside a global online theme park.
We are seeking passionate engineers to help us build data-centric products and solutions that let us process and gain insights from the terabytes of data generated each day by millions of active players.
As a Data Analytics Engineer, you will serve as a crucial nexus between all teams in the studio, gathering requirements, inspiring data projects, and facilitating agreement on data availability, standards, and quality.
On any given day at thatgamecompany, you might:
Design and implement ETL pipelines that pull data from Postgres, MongoDB, and a variety of external data sources and load it into our GCP data warehouse
Write and optimize SQL queries and tables to provide quick, efficient, and intuitive access to our data
Help brainstorm and investigate potential solutions for challenges like real-time data aggregation, hack prevention, player behavior analysis, and data visibility
Monitor job health and respond to any failures or bugs so our downstream users can trust data accuracy
Help codify our knowledge, processes, pipelines, and schemas into readable documentation
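To give a flavor of the ETL work described above, here is a minimal extract-transform-load sketch. It uses SQLite (from the Python standard library) as a stand-in for both the Postgres source and the GCP warehouse, and the table and column names are purely hypothetical.

```python
import sqlite3

# Hypothetical stand-ins: SQLite replaces Postgres (source) and the
# GCP data warehouse (destination); table/column names are made up.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE sessions (player_id TEXT, seconds INTEGER)")
source.executemany(
    "INSERT INTO sessions VALUES (?, ?)",
    [("p1", 300), ("p1", 120), ("p2", 900)],
)

warehouse = sqlite3.connect(":memory:")
warehouse.execute(
    "CREATE TABLE daily_playtime (player_id TEXT, total_seconds INTEGER)"
)

# Extract + transform: aggregate playtime per player in the source.
rows = source.execute(
    "SELECT player_id, SUM(seconds) FROM sessions GROUP BY player_id"
).fetchall()

# Load: write the aggregated rows into the warehouse table.
warehouse.executemany("INSERT INTO daily_playtime VALUES (?, ?)", rows)
warehouse.commit()

print(dict(warehouse.execute("SELECT * FROM daily_playtime")))
# → {'p1': 420, 'p2': 900}
```

In production this pattern maps onto tools like Airflow for orchestration and BigQuery as the destination, but the extract-aggregate-load shape stays the same.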
We expect you to:
Have a deep passion for video games; be a gamer and think on behalf of players.
Be comfortable taking risks and aim for never-been-done engineering achievements
Enjoy working with fast-moving and rapidly growing small teams
2+ years of experience in software development, ideally in the data space
Experience writing clean, efficient Python, Java, or Scala code
Experience writing optimized SQL queries for analysis and building custom data sets
Some exposure to a modern cloud platform, preferably AWS or GCP
Some knowledge of or experience with big data processing tools such as BigQuery, Redshift, Snowflake, Spark/Beam, or Hadoop
Comfortable using Git for source control
An eye for data quality and the ability to build tests, reporting, and alerting mechanisms for visibility
Strong communication skills, a desire to understand the data you’re using, and the ability to explain what you built and why you built it
Eager to learn any new technology and always willing to step out of your comfort zone.
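The data-quality expectation above might look, in its simplest form, like a validation pass a pipeline runs before publishing a table. This is a minimal sketch; the schema (`player_id`, `total_seconds`) and the rules are hypothetical, not an actual thatgamecompany check.

```python
# Minimal data-quality check sketch: return human-readable issues
# found in a batch of rows. Schema and rules are hypothetical.
def check_rows(rows):
    """Return a list of issue descriptions found in `rows`."""
    issues = []
    seen_ids = set()
    for i, row in enumerate(rows):
        player_id = row.get("player_id")
        if player_id in (None, ""):
            issues.append(f"row {i}: missing player_id")
        elif player_id in seen_ids:
            issues.append(f"row {i}: duplicate player_id {player_id}")
        else:
            seen_ids.add(player_id)
        seconds = row.get("total_seconds")
        if not isinstance(seconds, int) or seconds < 0:
            issues.append(f"row {i}: invalid total_seconds {seconds!r}")
    return issues

good = [{"player_id": "p1", "total_seconds": 420}]
bad = [{"player_id": "", "total_seconds": -5}]
print(check_rows(good))  # → []
print(check_rows(bad))   # two issues: missing id, invalid seconds
```

A real pipeline would feed these issue lists into reporting and alerting so downstream users can trust the published data.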
Any of the following would be highly preferred, but most of all, we value engineers who are eager to learn and contribute to the team:
Familiarity with tools in the Python data stack (Pandas, Airflow, Flask, Plotly)
Familiarity with tools in the GCP data stack (BigQuery, PubSub, Dataflow, Cloud Functions)
Experience with CI/CD or infrastructure as code (e.g., Terraform)
Experience with a data visualization tool like Tableau or Looker
Familiarity with Agile methodology
Interest in Data Science or Machine Learning
We look forward to meeting you!