At Handshake, we are assembling a diverse team of dynamic engineers who are passionate about creating high-quality, impactful products. As a Senior Data Engineer, you will play a key role in driving the architecture, implementation, and evolution of our cutting-edge data platform. Your technical expertise will be instrumental in helping millions of students discover meaningful careers, irrespective of their educational background, network, or financial resources.
Our primary focus is on building a robust data platform that empowers all teams to develop data-driven features while ensuring that every facet of the business has access to the right data for drawing informed conclusions.
In this role, you will be responsible for:
Technical leadership: Taking ownership of the data engineering function and providing technical guidance to the data engineering team. Mentoring junior data engineers, fostering a culture of learning, and promoting best practices in data engineering.
Cross-functional collaboration: Working closely with product managers, product engineers, and other stakeholders to define data requirements, design data solutions, and deliver high-quality, data-driven features.
Data architecture and design: Designing and implementing scalable and robust data pipelines, data services, and data products that meet business needs and adhere to best practices. Staying abreast of emerging technologies and tools in the data engineering space, evaluating their potential impact on the data platform, and making strategic recommendations.
Performance optimization: Identifying performance bottlenecks in data processes and implementing solutions to enhance data processing efficiency.
Data quality and governance: Ensuring data integrity, reliability, and security through the implementation of data governance policies and data quality monitoring.
Generative AI strategy: Leveraging your machine learning expertise to enrich data through Generative AI techniques, reducing errors and detecting hallucinations.
To excel in this role, you should possess:
Extensive data engineering experience: A proven track record in designing and implementing large-scale, complex data pipelines, data warehousing solutions, and data services. Deep knowledge of data engineering technologies, tools, and frameworks.
Cloud platform proficiency: Hands-on experience with cloud-based data technologies, preferably Google Cloud Platform (GCP), including BigQuery, Dataflow, and Cloud Storage.
Advanced SQL skills: Strong expertise in SQL and experience with data modeling and database design conventions.
Problem-solving abilities: Exceptional problem-solving skills, with the ability to tackle complex data engineering challenges and propose innovative solutions.
Collaborative mindset: A collaborative and team-oriented approach to work, with the ability to communicate effectively with both technical and non-technical stakeholders.
While not required, expertise in any of the following areas would be highly advantageous:
Large Language Models (LLMs): Experience with models such as ChatGPT, LLaMA, or Bard for text generation and Natural Language Processing (NLP) tasks.
Machine learning model deployment: Experience in enabling the deployment of machine learning models to production environments.
Machine learning for data enrichment: Experience in applying machine learning techniques to data engineering tasks for data enrichment and augmentation.
Containerization and orchestration: Familiarity with containerization technologies like Docker and container orchestration platforms like Kubernetes.
dbt: Experience with dbt as a data transformation tool for orchestrating and organizing data pipelines.