Data Engineer, Data Platform

This job is no longer open

ABOUT US

We build tools for 100+ brands and retailers in the ecommerce space that help them offer free two-day shipping, same-day delivery, and product expansion into new marketplaces -- keeping them several steps ahead of the curve in a rapidly changing industry.

We are a purpose- and culture-driven organization that prides itself on connectivity, equality, diversity, and inclusion. As a service-oriented company, we are committed to our people, who are at the heart of everything we do.

We are headquartered in Chicago, with offices in New York; Conshohocken, PA (Philly area); and Krakow, Poland, and we operate as a subsidiary of FedEx Services.

ABOUT THE ROLE

As a Data Engineer at ShopRunner, you’ll help power many of our data-backed solutions and manage the large-scale data we ingest from our merchant partners. Data Engineering works in collaboration with our Enterprise, Consumer, and Data Science teams to bring data models to production, power solutions for our merchant partners, and create more personalized experiences for our customers in our never-ending quest to help shoppers and retailers connect in new ways and through new applications.

This role is US-based only, but we are open to any of our US offices in Chicago, IL; Conshohocken, PA (Philly); and New York City, or fully remote in any of these states: CA, DE, IL, IN, FL, MA, NY, NJ, NC, SC, PA, TX, UT, VA, WA, CT, MD, MI. From time to time, there will be limited travel to our offices as the need arises.

WHAT YOU’LL DO

  • Own the key data pipelines which enable real-time event handling, smarter personalization, and more nimble applications.
  • Develop frameworks to productize our machine-learning models that give our members more product choices. 
  • Help us define and manage our big data infrastructure, including Kinesis/Kafka streams, Apache Spark, and the Snowflake data warehouse.
  • Help us evolve our service architecture, embracing approaches such as twelve-factor apps, microservices, and well-formed APIs so our architecture can scale both internally and externally.

WHAT WE’RE LOOKING FOR

  • Bachelor's degree and experience working in an Agile environment and using a VCS like Git.
  • Experience writing production code in Python or JVM-based systems.
  • Experience with data stores and technologies such as Spark, Airflow, Elasticsearch, Kinesis, Kafka, Postgres, MySQL, and Snowflake, and a strong understanding of SQL.
  • Experience building REST APIs to serve and consume data.
  • Experience building batch/streaming ELT pipelines to move and transform data.
  • Experience with data transformation tools and techniques and workflow management.
  • Experience optimizing larger applications to increase speed, scalability, and extensibility.
  • Proven self-starter with a strong desire to learn new technologies, able to work independently but knowing when to seek help if blocked.

We want you to bring your whole human self to work every day. We accept you for who you are and consider everybody on an equal opportunity basis without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. 


Outer Join is the premier job board for remote jobs in data science, analytics, and engineering.