Data Platform Engineer


Engineering at Circle:

In 2020, Circle unveiled Circle APIs: a set of solutions and smarter technology that helps businesses accept payments through a more global, scalable, and efficient alternative to traditional banking rails (spoiler: we’re using USD Coin under the hood).

Over the next 12 months, we’re going to rapidly grow our API customer base and enable even more businesses to easily integrate and benefit from the breakthrough of programmable money on the internet.

If you’re data-driven, interested in building something meaningful and would love to work in an entrepreneurial environment, we can’t wait to hear from you.

You will aspire to our four core values:

  • Multistakeholder - you have dedication and commitment to our customers, shareholders, employees and their families, and local communities.
  • Mindful - you seek to be respectful, to listen actively, and to pay attention to detail.
  • Driven by Excellence - you are driven by our mission and our passion for customer success, which means you relentlessly pursue excellence, do not tolerate mediocrity, and work intensely to achieve your goals.
  • High Integrity - you seek open and honest communication, and you hold yourself to very high moral and ethical standards. You reject manipulation, dishonesty and intolerance.

Here is our team hierarchy for individual contributors:

  • Principal Software Engineer (V)
  • Staff Software Engineer (IV)
  • Senior Software Engineer (III)
  • Software Engineer (II)
  • Software Engineer (I)

Your team is responsible for:

As a member of the Data Engineering team, you own the core Big Data/ML platform: data ingestion, processing and serving, ETL/ELT pipelines, data governance and security compliance, data analytics and visualization tooling, data modeling, and the data warehouse that power experimentation, operational excellence, and actionable insights for our Product, Engineering, Analytics, and Data Science teams, fueling and accelerating business growth.

You'll work on:

  • Work across functional teams on the design, deployment, and continuous improvement of a scalable data platform (data pipelines, platforms, and warehouses) that ingests, stores, and aggregates diverse datasets and surfaces data to both internal and customer-facing applications.
  • Be a subject matter expert on data modeling, data pipelines, data quality and data warehousing.
  • Design, build, and maintain ETL/ELT data pipelines to source and aggregate the data required for various analysis and reporting needs, and continually improve the operations, monitoring, and performance of the data warehouse.
  • Develop integrations with third party systems to source, qualify and ingest various datasets.
  • Provide data analytics and visualization tools that extract valuable insights from the data and enable data-driven decisions.
  • Provide ML data platform capabilities for data science teams to perform data preparation, model training and management, and run experiments.
  • Work closely with cross-functional groups and stakeholders, such as the product, engineering, data science, security, and compliance teams, on data modeling, general data lifecycle management, data governance, and processes for meeting regulatory and legal requirements.

You'll bring to Circle (Not all required):

  • Experience in multiple data technologies, such as Spark, Presto, Impala, YARN, Parquet, MLflow, Kafka, AWS Kinesis, Flink, Spark Streaming, etc.
  • Experience with workflow orchestration management engines such as Airflow, Azkaban, Oozie, etc.
  • Experience with Cloud Services (AWS, Google Cloud, Microsoft Azure, etc).
  • Experience with SQL and NoSQL databases, such as MySQL, PostgreSQL, Cassandra, HBase, Redis, DynamoDB, Neo4j, etc.
  • Experience in building scalable infrastructure to support batch, micro-batch or stream data processing for large volumes of data.
  • Experience in data governance and provenance.
  • Knowledge of the internals of open source or related big data technologies.
  • Proficiency in one or more programming languages (Java, Scala, Python).
  • Experience in similar business domains, such as payment systems, credit cards, bank transfers, blockchains, etc. 
  • Excellent communication skills, able to collaborate with cross-functional teams and remote teams, share ideas and present concepts effectively.
  • Ability to tackle complex and ambiguous problems.
  • Self-starter who takes ownership, drives results, and enjoys moving at a fast pace.

 
