Want to build cutting-edge tooling for machine learning? Ever wondered how much compute, and which resources, it takes to train machine learning models that can classify millions of photos and reviews? Or how to automate ML infrastructure for rapid model training and connect those models to a variety of model serving platforms, including ranking and streaming systems? We’re looking for remote ML infrastructure engineers who thrive at the intersection of machine learning, scalable infrastructure, and massive flows of data.
On the Core ML team, our mission is to build the machine learning platform that powers Yelp’s top business initiatives. We build tools that help engineers develop and apply their ML models at light speed using the latest technology frameworks, such as Jupyter, Spark, Kubernetes, Kafka, and Cassandra. In many cases, we contribute to or drive open-source projects to help us achieve our mission, including ML model serialization and inference projects. We are also building tooling and developing processes to centralize data products and feature stores for analyst and ML needs.
Come work with and learn from our team: a passionate and diverse group of engineers whose experience spans machine learning modeling to systems engineering. We collaborate across the company, taking in ML needs and delivering efficient tooling and systems. As machine learning evolves, we ride the wave of innovation, combining industry best practices and cutting-edge tooling to bolster Yelp’s machine learning platform. See our recent blog post for an overview of our ML Platform.
This opportunity requires you to be located in the United Kingdom.
We’d love to have you apply, even if you don’t feel you meet every single requirement in this posting. At Yelp, we’re looking for great people, not just those who check off all the boxes.