System1 is one of the largest customer acquisition companies in the world, and our growth depends heavily on a very talented data engineering team. Our roadmap includes deploying an event-driven design for our data collection, migrating from Kinesis to Confluent Kafka, building stream processing on top of our Kafka platform, and leveraging these improvements to redesign our data warehouse, and this is where you come in!
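To give a flavor of that event-driven direction, here is a minimal sketch of publishing a collection event to Kafka with the confluent-kafka Python client; the broker address, topic name, and payload fields are illustrative assumptions, not our production configuration.

```python
# Minimal sketch: publishing a data-collection event to Confluent Kafka.
# Broker address, topic name, and payload fields are hypothetical examples.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed local broker

def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [partition {msg.partition()}]")

event = {"event_type": "page_view", "session_id": "abc123", "ts": "2024-01-01T00:00:00Z"}
producer.produce(
    topic="collection-events",   # hypothetical topic name
    key=event["session_id"],
    value=json.dumps(event),
    callback=delivery_report,
)
producer.flush()  # block until outstanding messages are delivered
```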
We process billions of records a day to support business intelligence, data science and machine learning, traffic quality, and analytics, relying primarily on Python, SQL, and Snowflake. We are now looking to expand into stream processing using the Kafka Streams API, the Streams DSL, and ksqlDB. Our primary goals are scalability, reliability, usability, and performance.
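As a rough illustration of the ksqlDB side of that work, the sketch below submits a stream definition over ksqlDB's REST API; the server URL, stream name, and column list are assumptions made for the example, not an existing schema.

```python
# Minimal sketch: registering a ksqlDB stream over its REST API.
# Server URL, stream name, and columns are illustrative assumptions.
import requests

KSQLDB_URL = "http://localhost:8088/ksql"  # assumed local ksqlDB server

# A hypothetical stream over a raw collection-events topic.
statement = """
    CREATE STREAM collection_events (
        session_id VARCHAR KEY,
        event_type VARCHAR,
        ts VARCHAR
    ) WITH (
        KAFKA_TOPIC = 'collection-events',
        VALUE_FORMAT = 'JSON'
    );
"""

response = requests.post(
    KSQLDB_URL,
    headers={"Content-Type": "application/vnd.ksql.v1+json; charset=utf-8"},
    json={"ksql": statement, "streamsProperties": {}},
)
response.raise_for_status()
print(response.json())
```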
You will work in a fast-paced environment as part of a team of data engineers, designing and implementing solutions that provide business-critical insights. You will be responsible for the design of data warehouse schemas as well as the end-to-end design and implementation of fault-tolerant, scalable data processing pipelines, using a variety of technologies for orchestrating data movement, primarily Snowflake, DBT, and Airflow.
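For a sense of the orchestration work, here is a minimal Airflow DAG sketch that runs a dbt build step and then refreshes a table in Snowflake; the DAG id, schedule, dbt project path, SQL, and connection id are all hypothetical placeholders rather than our actual pipelines.

```python
# Minimal sketch of an orchestration DAG: run dbt, then refresh a Snowflake table.
# DAG id, schedule, dbt project path, SQL, and connection id are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="daily_warehouse_refresh",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Build warehouse models with dbt (assumed project location).
    run_dbt = BashOperator(
        task_id="run_dbt_models",
        bash_command="cd /opt/dbt/warehouse && dbt run",
    )

    # Refresh a reporting table in Snowflake (assumed connection id and SQL).
    refresh_reporting = SnowflakeOperator(
        task_id="refresh_reporting_table",
        snowflake_conn_id="snowflake_default",
        sql="CREATE OR REPLACE TABLE analytics.daily_events AS "
            "SELECT * FROM raw.collection_events;",
    )

    run_dbt >> refresh_reporting
```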