About the role:
Afresh’s analytics platform allows teams across the company to create reliable metrics from our disparate data sources, and to use those metrics to track internal performance, power new reporting products for our customers, and drive experiment-based decision-making.
Our growing suite of reporting products helps our customers understand what’s happening in their stores and how to use Afresh effectively to drive down food waste. As we continue to build these products, we’re looking to strengthen the analytics platform that powers them.
The Data Science and Analytics team collaborates closely with every team in the company, empowering them to build products and make decisions with data. You will regularly interact with data engineers, applied scientists, data scientists, full stack engineers, and product managers.
As a staff data engineer on the Data Science team, you will own the development of our analytics platform. In this role, you will evolve our data warehouse schema, solidify our transform architecture, and establish data governance patterns to serve our internal and external analytics needs. Some of your responsibilities will include:
- Improve and extend our data analytics architecture to provide reliable and accessible data for a wide range of use cases
- Collaborate with engineers, product managers, and data scientists to understand their data needs, and then build extensible dimensional models and semantic layer metrics that allow for consistent and reliable insights
- Evolve our existing data quality and data governance processes
- Mentor and up-skill other engineers
This is a high-impact role with ownership of highly visible projects and significant room to grow your scope.
Skills and experience:
- 6+ years of experience as a data engineer, analytics engineer, data warehouse engineer, or in a similar role.
- Strong understanding of advanced SQL concepts.
- Exceptional communication and leadership skills, with a proven ability to facilitate cross-team and cross-functional collaboration and information sharing.
- 1+ years of experience working at scale with SQL-driven transform libraries that support an ELT paradigm, like dbt or sqlmesh, including setting up CI/CD pipelines that ensure high-quality transformations.
- Expert knowledge about the differences between OLTP and OLAP database design.
- Familiarity with the differences between data engineering concepts like Data Mesh, Data Lake, Data Warehouse, Data Fabric, and Data Lakehouse.
- Experience with setting up a semantic layer defined with code (LookML, Cube.dev, AtScale, dbt semantic layer).
- Technologies: SQL, Python, Airflow, dbt, Snowflake/Databricks/BigQuery, Spark.