Spark Data Engineer

This job is no longer open

What we’re building and why we’re building it. 

Fetch is a build-first technology company creating a rewards program to power the world. Over the last 5 years we’ve grown from 0 to 7M active users and taken over the rewards game in the US with our free app. The foundation has been laid. In the next 5 years we will become a global platform that completely transforms how people connect with brands. 

It all comes down to two core beliefs. First, that people deserve to be rewarded when they create value. If a third party directly benefits from an action you take or data you provide, you should be rewarded for it. And not just the “you get to use our product!” cop-out. We’re talkin’ real, explicit value. Fetch points, perhaps. 

Second, we believe brands need a better and more direct connection with what matters most to them: their customers. Brands need to understand what people are doing, and have a direct line to do something about it. Not just advertise, but ACT. Sounds nice, right?

That’s why we’re building the world’s rewards platform. A closed-loop, standardized rewards layer across all consumer behavior that will lead to happier shoppers and stronger brands.


Fetch Rewards is an equal employment opportunity employer.


In this role, you can expect to:

  • Build Spark applications that process billions of records
  • Work on applications that generate millions of dollars in revenue
  • Develop innovative approaches to datasets spanning millions of monthly active users and terabytes of data
  • Pick up Java if you don't already know it (we're happy to teach you)
  • Use the latest and greatest technologies to solve technical problems
  • Have the freedom to choose the technologies best suited to the task
  • Have the opportunity to build streaming pipelines, use Snowflake for data warehousing, and work with the latest AWS technologies
  • Communicate findings clearly, both verbally and in writing, to a broad range of stakeholders

You are a good fit if you:

  • Have a deep understanding of Spark
  • Have built Spark applications that process high volumes of data
  • Have experience building enterprise-level applications with Spark or similar technologies
  • Have a solid understanding of SQL
  • Have a strong desire for perfection
  • Have good written and verbal communication skills
  • Are not afraid to ask questions
  • Are highly motivated to work autonomously, with the ability to manage multiple work streams

You have an edge if you:

  • Have AWS or other cloud provider experience
  • Have experience programmatically deploying cloud resources on AWS, Azure, or GCP
  • Have successfully implemented data quality, data governance, or disaster recovery initiatives

