Machine Learning Engineer [IC3]

This job is no longer open

Working hours

🌎 Given that we are an all-remote company and hire almost anywhere in the world, we don’t have a particular time-zone preference for this role. However, you may need to be available for non-recurring urgent meetings outside of working hours.

Why this job is exciting

We are building a machine learning team at Sourcegraph, aimed at creating the most powerful coding assistant in the world. Many companies are trying, but Sourcegraph is uniquely differentiated by our rich code intelligence data and powerful code search platform. In the world of prompting LLMs, context is everything, and Sourcegraph’s context is simply the best you can get: IDE-quality, global-scale, and served lightning fast. Our code intelligence, married with modern AI, is already providing a remarkable alpha experience, and you can help us unlock its full potential.

We are looking for an experienced full-stack ML engineer with demonstrated industry experience productionizing large-scale ML models. And if you happen to have an entrepreneurial streak, you’re in luck: we have an enterprise distribution pipeline, so whatever you build can be deployed straight to enterprise customers with some of the largest code bases in the world, without the go-to-market hassle you’d encounter in a startup.

You will be a scientist at Sourcegraph Labs, doing R&D and pushing the boundaries of what AI can do as an IC on our new ML team. You will have the full power of Sourcegraph’s Code Intelligence Platform at your disposal, and you’ll be working on a coding assistant that is already impressive after just a few weeks of work, so this is a greenfield opportunity to multiply developer productivity to unprecedented levels.

📅 Within one month, you will…

  • Start building trusting relationships with your peers and learning the company structure.
  • Be set up for local development and actively prototyping.
  • Dive deep into how AI and ML are already used at Sourcegraph and identify ways to improve going forward.
  • Develop simulated datasets using Gym-style frameworks across a number of Cody use cases.
  • Experiment with changes to Cody prompts and context sources, and evaluate those changes against offline experimentation datasets (see the sketch after this list).
  • Ship a substantial new feature to end users.
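
To make the offline-experimentation bullet concrete, here is a minimal sketch of what an offline prompt/context evaluation loop can look like. Everything in it (the eval cases, the `generate` stub, the substring metric) is a hypothetical placeholder, not Cody’s actual evaluation harness.

```python
# Toy offline evaluation loop for comparing prompt/context variants.
# `generate` stands in for a real LLM call; the dataset and metric are
# illustrative placeholders, not Cody's actual evaluation setup.

EVAL_CASES = [
    # (user question, retrieved context snippet, substring a good answer should contain)
    ("How do I parse JSON in Go?",
     "package json ... func Unmarshal(data []byte, v any) error",
     "Unmarshal"),
    ("How do I read a file in Python?",
     "open(path) returns a file object supporting read()",
     "open("),
]

PROMPT_VARIANTS = {
    "baseline": "Answer the question.\n\nQuestion: {question}\n",
    "with_context": "Use the context to answer.\n\nContext: {context}\n\nQuestion: {question}\n",
}


def generate(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an API request to an LLM)."""
    return prompt  # echo, so the toy metric rewards prompts that surface useful context


def evaluate(template: str) -> float:
    """Fraction of eval cases whose 'answer' contains the expected substring."""
    hits = 0
    for question, context, expected in EVAL_CASES:
        prompt = template.format(question=question, context=context)
        if expected.lower() in generate(prompt).lower():
            hits += 1
    return hits / len(EVAL_CASES)


for name, template in PROMPT_VARIANTS.items():
    print(f"{name}: {evaluate(template):.2f}")
```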

📅 Within three months, you will…

  • Be building out feature computation, storage, monitoring, analysis, and serving systems for the features required across our Cody LLM stack.
  • Be contributing actively to the world’s best coding assistant.
  • Be developing distributed training and experimentation infrastructure over Code AI datasets, and scaling distributed backend services to reliably support high-QPS, low-latency use cases.
  • Be following all the relevant research, and conducting research of your own.

📅 Within six months, you will…

  • Be fully ramped up and owning key pieces of the assistant.
  • Be ramped up on other relevant parts of the Sourcegraph product.
  • Be helping design and build what might become the biggest dev accelerator in 20 years.
  • Be owning a number of ML systems, and building the core data and model metadata systems that power the end-to-end ML lifecycle.
  • Be developing a highly scalable, high-QPS inference service that delivers low-latency performance using a mix of CPU and GPU hardware to make the most efficient use of resources (a micro-batching sketch follows this list).
  • Be driving the technical vision and owning a couple of major ML components, including their modeling and ML infrastructure roadmaps.
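
As promised above, here is a toy sketch of the micro-batching pattern that typically sits behind a high-QPS, low-latency inference service: individual requests wait in a queue for at most a few milliseconds and are flushed to the model as a batch. This is an illustrative asyncio sketch, not Sourcegraph’s serving stack; `run_model` is a stand-in for a real CPU/GPU model call.

```python
# Toy request micro-batching loop: not Sourcegraph's serving stack, just an
# illustration of the queue-then-flush pattern behind batched inference.

import asyncio
import time

MAX_BATCH_SIZE = 8   # flush when this many requests are waiting...
MAX_WAIT_S = 0.010   # ...or when the oldest request has waited 10 ms


async def run_model(batch: list[str]) -> list[str]:
    """Stand-in for a real batched model call (e.g. a GPU forward pass)."""
    await asyncio.sleep(0.005)
    return [f"completion for: {p}" for p in batch]


class MicroBatcher:
    def __init__(self) -> None:
        self._queue: asyncio.Queue = asyncio.Queue()

    async def submit(self, payload: str) -> str:
        """Called per request; resolves once the request's batch has run."""
        future = asyncio.get_running_loop().create_future()
        await self._queue.put((payload, future))
        return await future

    async def worker(self) -> None:
        while True:
            batch = [await self._queue.get()]          # block until the first request
            deadline = time.monotonic() + MAX_WAIT_S
            while len(batch) < MAX_BATCH_SIZE:
                timeout = deadline - time.monotonic()
                if timeout <= 0:
                    break
                try:
                    batch.append(await asyncio.wait_for(self._queue.get(), timeout))
                except asyncio.TimeoutError:
                    break
            outputs = await run_model([payload for payload, _ in batch])
            for (_, future), output in zip(batch, outputs):
                future.set_result(output)


async def main() -> None:
    batcher = MicroBatcher()
    worker = asyncio.create_task(batcher.worker())
    results = await asyncio.gather(*(batcher.submit(f"query {i}") for i in range(20)))
    print(f"served {len(results)} requests")
    worker.cancel()


asyncio.run(main())
```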

About you 

You are an experienced full-stack ML engineer with demonstrated industry experience formulating ML solutions, developing end-to-end data orchestration pipelines, deploying large-scale ML models, and experimenting offline and online to drive business impact for Cody users. You want to be part of a world-class team to push the boundaries of AI, with a particular focus on leveraging Sourcegraph’s code intelligence to leapfrog competitors.

First, your AI background could look like a few different things:

  • You’ve worked on AI systems and built ML at large tech companies, with specific experience developing and productionizing machine learning models.
  • You have hands-on experience using data processing tools like Beam, Spark, or Flink in a cloud environment such as GCP or AWS, and first-hand knowledge of data management concepts.
  • You have a deep ML background and have demonstrated an ability to be customer- and company-focused. You are hands-on and can build machine learning systems yourself.
  • You have hands-on experience training and serving large-scale (10GB+) models using frameworks such as TensorFlow or PyTorch.
  • You have experience with Docker, Kubernetes, Kubeflow, or Flink, and knowledge of CI/CD in the context of ML pipelines.
  • You have some hands-on experience working with large foundation models and their toolkits: familiarity with LLMs such as Llama and StarCoder, model fine-tuning techniques (LoRA, QLoRA), prompting techniques (Chain-of-Thought, ReAct, etc.), and model evaluation (see the fine-tuning sketch after this list).
  • You’ve worked on NLP or language models at a top-tier research lab.
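
For reference on the fine-tuning bullet above, here is a hedged sketch of parameter-efficient fine-tuning with LoRA using the Hugging Face peft library. The base model and hyperparameters are illustrative choices, not the configuration used for Cody.

```python
# Minimal LoRA setup sketch using Hugging Face transformers + peft.
# Assumes the `transformers` and `peft` packages; the model id and
# hyperparameters are illustrative, not Cody's fine-tuning configuration.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Any causal LM works; a small StarCoder variant is used here purely as an example.
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoderbase-1b")

# LoRA freezes the base weights and injects small trainable low-rank adapters,
# so only a tiny fraction of parameters is updated during fine-tuning.
lora_config = LoraConfig(
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling applied to the adapter output
    lora_dropout=0.05,
    target_modules=["c_attn"],   # fused attention projection in StarCoder-style models
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model
# From here, the wrapped model plugs into a normal training loop or HF Trainer.
```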

If you’ve been anywhere near the field lately, you can probably pick up enough about LLM capabilities to be able to drive this space, as it’s all greenfield.

Second, you have some understanding of programming languages and the tools that manipulate code. This could have taken any number of forms, e.g.:

  • You’ve worked with grammars and parser generators, or Tree-sitter
  • You’ve worked with compilers and semantic analysis, e.g. type systems
  • You’ve written an interpreter, or worked on a virtual machine
  • You’ve done static analysis involving scanning source code for semantic information

It doesn’t really matter how you know it, but it’s important that you’re familiar with the basic concepts of semantic representations of source code, and how they’re produced and consumed by tooling.
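
For example (a small sketch using the py-tree-sitter bindings, assuming recent versions of the `tree_sitter` and `tree_sitter_python` packages; Sourcegraph’s own indexing pipeline is far more involved), extracting function names from a parsed syntax tree looks roughly like this:

```python
# Parse Python source with Tree-sitter and pull out function names -- the kind
# of structural fact that code intelligence tooling extracts and serves at scale.
# Assumes the `tree_sitter` (>= 0.22) and `tree_sitter_python` packages.

import tree_sitter_python as tspython
from tree_sitter import Language, Parser

PY_LANGUAGE = Language(tspython.language())
parser = Parser(PY_LANGUAGE)

source = b"def greet(name):\n    return f'hello {name}'\n"
tree = parser.parse(source)


def function_names(node) -> list[str]:
    """Recursively collect the names of all function definitions in the tree."""
    names = []
    if node.type == "function_definition":
        ident = node.child_by_field_name("name")
        names.append(source[ident.start_byte:ident.end_byte].decode())
    for child in node.children:
        names.extend(function_names(child))
    return names


print(function_names(tree.root_node))  # ['greet']
```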

Level

📊 This job is an IC3. You can read more about our job leveling philosophy in our Handbook.

Compensation

💸 We pay you an above-average salary because we want to hire the best people who are fully focused on helping Sourcegraph succeed, not worried about paying bills. You will have the flexibility to work and live anywhere in the world (unless specified otherwise in the job description), and we’ll never take your location or current/past salary information into account when determining your compensation. As an open and transparent company that values equitable and competitive compensation for everyone, our compensation ranges are visible to every single Sourcegraph Teammate. To determine your salary, we use a number of market- and data-driven salary sources and target the high end of the range, ensuring that we’re always paying above market regardless of where you live in the world.

💰 The target compensation for this role is $185,000 USD base.

📈 In addition to our cash compensation, we offer equity (because when we succeed as a company, we want you to succeed, too) and generous perks & benefits.

Interview process [~5.5 hours total]

Below is the interview process you can expect for this role (you can read more about the types of interviews in our Handbook). It may look like a lot of steps, but rest assured that we move quickly and the steps are designed to help you get the information needed to determine if we’re the right fit for you… Interviewing is a two-way street, after all!

👋 Introduction Stage - we have initial conversations to get to know you better…

  • [30m] Recruiter Screen with Devon Coords
  • [60m] Hiring Manager Screen / ML Depth with Rishabh Mehrotra

🧑‍💻 Team Interview Stage - we then delve into your experience in more depth and introduce you to members of the team…

  • [45m] Technical Deep Dive with Dominic Cooney + Julie Tibshirani
  • [60m] Architecture Interview with Rok Novosel
  • [Async] Pairing Exercise with Beyang Liu

🎉 Final Interview Stage - we move you to our final round, where you will gain a better understanding of our business holistically…

  • [30m] Values Interview
  • [30m] Leadership Interview with Steve Yegge
  • [30m] Leadership Interview with Quinn Slack
  • We check references and conduct your background check

Please note: you are welcome to request additional conversations with anyone you would like to meet but didn’t get to meet during the interview process.
