Captiv8

San Francisco
51-200 employees
End-to-end influencer marketing platform empowering brands, agencies, and creators to harness data and authentic storytelling for powerful results.

Data Engineer

This job is no longer open
About Us:

Captiv8 brings unrivaled audience insights and accountability to the influencer space, along with thoughtful, creative storytelling to power the most effective and memorable social content.

Captiv8 is an AI-powered global influencer platform connecting influencers, audiences, and brands at scale. We work with top Fortune brands like Verizon, Nestle, Ford, Amazon, Kraft Heinz, and many others. Captiv8’s platform features passionate influencers across Facebook, Instagram, Twitter, TikTok, Snapchat, YouTube, and other social channels with extensive global audience reach. We offer a full stack of data-driven products and services, bringing to life powerful content that is targeted, compelling, and memorable. We have spent the last five years streamlining branded content creation and measurement with ad agencies, PR agencies, brands, and talent agencies.

Our founding team comprises proven industry leaders who have driven over $1B in acquisitions, managed over $600M in revenue, and taken two companies public. Their most recent venture was one of the largest monetization platforms on the planet for the mobile-first economy.

Captiv8 partners with credible institutions, including Social+Capital, Subtraction Capital, Launch Fund, and many others.

You will join a small team of experienced backend/data engineers to help us improve and rebuild our data infrastructure. From ingesting data from its sources, to transforming it, to finally delivering it to consumers, you will be challenged to find the best solutions for processing a diverse set of data in an ever-growing environment.

Tech Stack (What we are doing):
● Ingesting data with crawlers built on Akka
● Storing data in HBase, Elasticsearch, MySQL, Postgres, and S3
● Communicating via REST APIs, Kafka, Kinesis, and SQS
● Running batch jobs with Hive, Spark, and Athena
● Monitoring applications with Zabbix, the ELK stack, and CloudWatch
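
To give a flavor of the batch-processing side of this stack, below is a minimal sketch of a Spark job in Scala that reads crawler output from S3 and writes an aggregate for downstream consumers. The bucket paths, schema fields, and job name are hypothetical placeholders, not our actual pipeline.

// A rough sketch only: a small Spark batch job, assuming crawler output lands
// in S3 as JSON and downstream consumers query partitioned Parquet via Athena/Hive.
// Paths, field names, and the job itself are illustrative.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object CreatorEngagementDaily {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("creator-engagement-daily")
      .getOrCreate()

    // Raw post metrics written by the crawlers (hypothetical layout).
    val posts = spark.read.json("s3://example-bucket/raw/posts/dt=2024-01-01/")

    // Aggregate per-creator engagement for the day.
    val daily = posts
      .groupBy(col("creator_id"))
      .agg(
        sum("likes").as("total_likes"),
        sum("comments").as("total_comments"),
        count("*").as("post_count")
      )

    // Publish partitioned Parquet for Athena/Hive to query.
    daily.write
      .mode("overwrite")
      .parquet("s3://example-bucket/curated/creator_engagement_daily/dt=2024-01-01/")

    spark.stop()
  }
}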

Responsibilities:

    • Design and develop a distributed data processing system built upon AWS
    • Analyze performance and scaling of existing solutions in response to increased load
    • Develop and improve the architecture, and select optimal technologies and methodologies
    • Provide end-to-end data delivery, from inception to consumption
    • Work closely with data science engineers to provide data for the ML components
    • Develop best practices for data integration and streaming
    • Take pride in your work by shipping clean, maintainable, easy-to-support code and following engineering best practices: testing, documentation, rapid prototyping, and modular, well-structured solutions
    • Think in an agile way and adapt to changes quickly
    • Pay attention to detail and have a passion for deeply learning and understanding the strengths and weaknesses of each technology you use
    • Be a good team player: make your point of view clear and listen to others carefully
    • Love data and data-driven decision making
    • Take a keep-it-simple approach, with the right amount of foresight for future needs
    • Take ownership of incidents in your domain, and get involved in resolving them or assist others in doing so

Requirements:

    • 4+ years of relevant experience as a Data Engineer or Backend Engineer
    • Deep knowledge of data structures and algorithms
    • Good understanding of SQL and NoSQL databases, design, and methods for efficiently retrieving data
    • Strong knowledge of Java and Scala
    • Understanding of Big Data technologies and solutions (Hadoop, Spark, etc.)
    • Experience with streaming services (Apache Kafka, Kinesis)
    • Experience with cloud platforms (AWS preferred)

    • Would be a plus: experience with
        ○ Infrastructure as code (Terraform)
        ○ Container technologies (Docker and Kubernetes)
        ○ Workflow engines (Apache Airflow or similar)
        ○ Python

Benefits & Perks:

      Remote 1st Company!
      Competitive compensation & 401k program to plan for your future
      Robust medical, dental, vision, and disability coverage
      The coolest tech equipment and gadgets you need to be successful
      Flexible Vacation & PTO
      All-encompassing parental leave program - family-first company!
      Monthly Wellness and WFH stipends
      Generous Employee Referral Program to hire more rock stars like YOU!
      Birthday and Work Anniversary Surprise Boxes
      Yearly Company off-sites - expect sun-filled beaches or snow-capped mountains
      Nationwide Offices - SF, LA, NY, Chicago (team meet-ups are always encouraged)