Data Engineer - Americas

About Kraken

As one of the largest and most trusted digital asset platforms globally, we are empowering people to experience the life-changing potential of crypto. Trusted by over 8 million consumer and pro traders, institutions, and authorities worldwide, our unique combination of products, services, and global expertise is helping tip the scales towards mass crypto adoption. But we’re only just getting started. We want to be pioneers in crypto and add value to the everyday lives of billions. Now is not the time to sit on the sidelines. Join us to bring crypto to the world.

About the Role

The data engineering team is responsible for designing and implementing scalable solutions that let the company make fast, accurate, data-driven decisions over several terabytes of data. The team maintains the company’s data warehouse and data lake, and you will be responsible for building the pipelines that move and process vast amounts of data into its various data products. The team handles both batch and streaming data, and is split into areas of responsibility that match both the engineers’ interests and Kraken’s needs.

This role is 100% remote and open across the Americas (US, Canada & LATAM).

Responsibilities

    • Build scalable and reliable data pipelines that collect, transform, load, and curate data from internal systems (see the sketch after this list)
    • Augment the data platform with pipelines from select external systems
    • Ensure high data quality for the pipelines you build, and make them auditable
    • Drive data systems to be as near real-time as possible
    • Support the design and deployment of a distributed data store that will be the central source of truth across the organization
    • Build data connections to the company's internal IT systems
    • Develop, customize, and configure self-service tools that help our data consumers extract and analyze data from our massive internal data store
    • Evaluate new technologies and build prototypes for continuous improvements in data engineering
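
To give a flavor of the pipeline work above, here is a minimal sketch of a daily extract-transform-load job written as an Apache Airflow DAG (one of the tools named later in this posting). Every identifier in it (the DAG id, paths, and target table) is a hypothetical placeholder, not a description of Kraken's actual systems.

    # Illustrative sketch only: a minimal daily ETL DAG. All names
    # (paths, tables, DAG id) are hypothetical placeholders.
    from datetime import datetime, timedelta

    from airflow.decorators import dag, task

    @dag(
        schedule="@daily",
        start_date=datetime(2024, 1, 1),
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    )
    def trades_daily_etl():
        @task
        def extract(ds=None):
            # Stage one day of raw records from an internal source
            # system; Airflow injects the logical date as `ds`.
            staging_path = f"/staging/trades/{ds}.parquet"
            # ... extraction logic would go here ...
            return staging_path

        @task
        def transform(staging_path):
            # Clean, deduplicate, and conform the staged data.
            curated_path = staging_path.replace("staging", "curated")
            # ... transformation logic would go here ...
            return curated_path

        @task
        def load(curated_path):
            # Load the curated partition into the warehouse and log it
            # so the run is auditable.
            print(f"loading {curated_path} into warehouse.trades_daily")

        load(transform(extract()))

    trades_daily_etl()

The nested call load(transform(extract())) is what wires the task dependencies; in a real deployment each task would talk to internal systems through Airflow connections rather than local paths.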

Requirements

    • 5+ years of work experience in a relevant field (Data Engineer, DWH Engineer, Software Engineer, etc.), ideally with large datasets
    • Excellent SQL and data manipulation skills using common frameworks like Spark/PySpark, Pandas, or similar (see the sketch after this list)
    • Experience with data warehouse technologies (Presto, Druid, etc.) and relevant data modeling best practices
    • Experience building data pipelines/ETL and familiarity with their design principles (e.g. Apache Airflow)
    • Proficiency in a major programming language (e.g. Scala, Python, Go)
    • Experience gathering business requirements for data sourcing
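
As a rough illustration of the SQL-style data manipulation listed above, the sketch below computes a daily per-pair trading volume rollup in PySpark, with a simple null filter as a data quality gate. The schema, column names, and paths are assumptions invented for the example.

    # Illustrative sketch only: columns, paths, and table names are
    # hypothetical assumptions, not a real Kraken schema.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_pair_volume").getOrCreate()

    trades = spark.read.parquet("/data/curated/trades")  # hypothetical input

    daily = (
        trades
        # Simple quality gate: drop rows missing the fields we aggregate.
        .where(F.col("price").isNotNull() & F.col("amount").isNotNull())
        .withColumn("trade_date", F.to_date("executed_at"))
        .groupBy("trade_date", "pair")
        .agg(
            F.count("*").alias("num_trades"),
            F.sum(F.col("price") * F.col("amount")).alias("notional"),
        )
    )

    # Partitioning by date keeps downstream reads cheap and incremental.
    daily.write.mode("overwrite").partitionBy("trade_date").parquet(
        "/data/marts/daily_pair_volume"  # hypothetical output
    )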

Nice to have

    • Apache Airflow
    • Experience working with Cloud services (e.g. AWS, GCP) and/or Kubernetes
    • Experience in building and contributing to data lakes on the cloud
    • Designing and writing CI/CD pipelines
    • Working with petabytes of data
    • Enjoys Dockerizing services

We’re powered by people from around the world with their own unique and diverse experiences. We value all Krakenites and their talents, contributions, and perspectives, regardless of their background. 

As an equal opportunity employer, we don’t tolerate discrimination or harassment of any kind, whether based on race, ethnicity, age, gender identity, citizenship, religion, sexual orientation, disability, pregnancy, veteran status, or any other characteristic protected by federal, state, or local laws.
