At a glance
Us: A profitable, funded startup of ~50 people. Remote team, mainly based in the UK. YC alumni (Summer 2019). Creating a web-based platform for behavioural research. 4x yearly growth, all driven by word of mouth.
We’re on a mission to connect people around the world to make trustworthy data more accessible and facilitate world-changing research.
You: A mid-level data analyst or analytics engineer who's an SQL expert with a passion for empowering self-serve analytics.
The tools: SQL, Python, dbt, Snowplow, Redshift
£40-60k depending on skills and experience.
The Company
At Prolific, we're changing how research on the internet is done. Our co-founders Katia and Phelim started by building a marketplace that connects researchers from both academia and industry with an instant, high-quality, global pool of research participants. Now, as a growing team, our bigger vision is to build the most powerful and trusted platform for behavioural research.
We were in Y Combinator's Summer 2019 batch, we've recently closed a $1.4M seed round, we've been growing 4x a year purely through word of mouth, we're already profitable, and we have very ambitious plans.
The Role
As the team behind a complex marketplace platform with a strong focus on data, we know we're sitting on a trove of information that can take our product, and world-class research, to the next level. We're looking for an Analytics Engineer to design and implement resilient, well-structured data transformations that provide access to high-quality, intuitive data across the company.
The ideal candidate will be comfortable working with large datasets and able to combine software engineering approaches with analytics know-how to create robust, scalable data pipelines. You should have strong experience with SQL and be comfortable with Python. You will work across our entire data stack, from product-driven event analytics to data storage, modelling, and BI tools.
You will be part of a growing, talented team of data analysts, data engineers, and data scientists, helping us transform our raw business data into documented datasets ready to provide crucial insights! We hope you'll also be a mentor to your teammates, raising the standard of software engineering for data across the team.
What you will be doing
* Migrating, consolidating, and extending our SQL transformations in dbt, while ensuring that our business insights are drawn from reliable, correct, and easily interpretable datasets.
* Implementing and optimising data models to unlock the potential of our web tracking, giving the growth squad the data they need to make good decisions.
* Working with our product teams to implement new event tracking, user stitching, and marketing attribution, and supporting the development of data-intensive projects.
* Playing a key role in building a clean, extensible codebase for data.
* Implementing end-to-end reporting solutions for stakeholders across the business: from gathering requirements to modelling data and developing dashboards.
* Sharing knowledge and best practices with a growing data community across Prolific.
How we operate
At Prolific, projects are delivered by squads: small multi-disciplinary teams of 3-7 people (engineers, product designers, data analysts, product managers, etc.). Squads are problem-focused and work on high-level objectives; for example, our growth squad is working hard to ramp up data-driven content marketing and to deeply understand potential users' purchase-decision journeys. Squads use 6-12 week cycles to meet their objectives, with continuous delivery throughout each cycle.
Since data is a cross-squad discipline, we've founded the data chapter to give all our data experts a place to work closely together, share processes, and collaborate on core infrastructure. As an analytics engineer, you'll be a core member of the data chapter, splitting your time roughly 60/40 between core data-infrastructure work and more bespoke projects for squads.
Deep work is valued throughout the company. We favour async communication (like Notion) over Slack. When we need to communicate in real-time we try to cluster meetings together to give everyone bigger blocks of interruption-free time.
Continuous learning and development are strongly encouraged. Everyone gets a personal development budget to put towards things like books, courses, and conferences, and can reserve time every fortnight for learning new things or working on creative side projects.
We’re aware of the challenges of being a remote worker and work hard to foster team spirit. We encourage remote chats over coffee with colleagues and have regular team meetings to keep everyone up to date with goings-on across the company and introduce new joiners.
We believe that we are in the process of successfully building a company that people enjoy working for. Our employees should feel valued, supported, and fulfilled. We know that there’s always more that we could be doing and have regular conversations about what we can improve. Everyone’s opinion is important and all input is taken on board.
Our Data Tech Stack
We use Snowplow to collect event data, Stitch to ETL data from our production MongoDB and secondary data sources, AWS Redshift as the central store and source of truth for all our business data, and dbt for data modelling, scheduling SQL, and validating the data in our warehouse. We use Metabase as our primary BI tool and JupyterHub as a central workspace for analysis and deeper exploration of our data.
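To give a flavour of the kind of transformation work this stack involves, here's a minimal, hypothetical dbt staging model over raw Snowplow events. The model and column names are illustrative only (based on Snowplow's canonical event fields), not our actual schema:

```sql
-- models/staging/stg_page_views.sql
-- Hypothetical sketch: stage raw Snowplow page-view events into a
-- tidy, well-named dataset for downstream models and Metabase.
with source as (

    select * from {{ source('snowplow', 'events') }}

),

renamed as (

    select
        event_id,
        domain_userid  as anonymous_user_id,
        derived_tstamp as viewed_at,
        page_urlpath   as page_path
    from source
    where event = 'page_view'

)

select distinct * from renamed
```

dbt can then validate the warehouse data with declarative tests alongside the model, e.g.:

```yaml
# models/staging/schema.yml (illustrative)
version: 2
models:
  - name: stg_page_views
    columns:
      - name: event_id
        tests:
          - unique
          - not_null
```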
The Interview Process
Our aim is to get to know how you'd fit into our team as an Analytics Engineer. First, we'll have a short call to discuss your career to date and your motivations for the role. Then we'll set an async technical exercise that closely mirrors our day-to-day work. The final stage will be interviews with the co-founders and another member of the data chapter, where we'll dig deeper into your values and technical understanding.