Data Engineer II

Job Details 

We’re hiring a Data Engineer II to join our data engineering team and help build and enhance data solutions for Openly's insurance platform. You will play a key role in how we build, manage, structure, store, and access data and data pipelines so we can provide high-quality, usable data to our insurance product applications and customers. 

Key Responsibilities 

  • Design, create, and maintain data solutions, including data pipelines and data structures. 
  • Work with data users, data scientists, and business intelligence personnel to create data solutions for use in various projects. 
  • Translate concepts into code to enhance our data management frameworks and services, striving to provide a high-quality data product to our data users.
  • Collaborate with our product, operations, and technology teams to develop and deploy new solutions related to data architecture and data pipelines to enable a best-in-class product for our data users.
  • Collaborate with teammates on design and solution decisions related to architecture, operations, deployment techniques, technologies, policies, processes, etc. 
  • Participate in domain meetings, stand-ups, weekly 1:1s, team collaborations, and biweekly retros. 
  • Assist in educating others on different aspects of data (e.g. data management best practices, data pipelining best practices).
  • Share your knowledge within the data engineer team and with others in the company (e.g. engineering all-hands, engineering learning hour, domain meetings, etc.). 

Our stack 

  • Backend/Core: Go & PostgreSQL 
  • Frontend: Browser-based, VueJS, Webpack, Nuxt, & Tailwind 
  • Research/Data Science: R, ArcGIS, Jupyter Notebooks, & Python 
  • Data: GCP GCS, BigQuery, Composer/Airflow, Cloud Functions, Postgres, SQL, Python, Aiven Debezium and Kafka, Fivetran 
  • Infrastructure: Google Cloud, specifically Cloud Run, Kubernetes, Pub/Sub, BigQuery, and CloudSQL, managed with Terraform. We use GitHub for code hosting, DataDog and HoneyComb for monitoring, and CircleCI for running our CI/CD pipelines. 
  • Remote work tools: Slack, Zoom, Donut 


Requirements

  • 2 years of data engineering and data management experience 
  • Scripting skills in Python
  • Basic understanding and usage of a development and deployment lifecycle, automated code deployments (CI/CD), code repositories, and code management
  • Experience with Google Cloud data store and data orchestration technologies and concepts
  • Hands-on experience with and understanding of the entire data pipeline architecture: data replication tools, staging data, data transformation, data movement, and cloud-based data platforms
  • Understanding of modern, next-generation data warehouse platforms, such as the lakehouse and multi-layered data warehouse
  • Proficiency with SQL optimization and development
  • Ability to understand data architecture and modeling as it relates to business goals and objectives 
  • Ability to gain an understanding of data requirements, translate them into source-to-target data mappings, and build a working solution
  • Experience with Terraform preferred but not required