Platform Data Ops Engineer

This job is no longer open

 

Position overview

A cloud native application platform automates infrastructure and service provisioning and configuration, dynamically allocating and reallocating resources at deployment time based on the needs of the application. Building on a cloud native runtime optimizes application lifecycle management, including scaling to meet demand, resource utilization, orchestration across available resources, and recovery from failures with minimum downtime.

You will be in the driving seat of the major tools and processes that enable our stakeholders, creating a self-serve, seamless, and automated journey for the Data and Engineering teams and enabling cost-effective, high-quality customer acquisition.

This approach reduces cognitive load on teams (they have less to think about) and creates a standard way of building infrastructure and software and producing data tools to support better business decisions. The platform team also has a role to play in mentoring and supporting our internal clients, as well as helping teams understand how good cloud native software operates.

In successful software organizations, Engineering and Data teams can consume core services in a self-service manner from a central platform team. 

 

Responsibilities

All departments are your core customers. The driving principles that underpin this function are:

  • Contribute to building data and other environments by focusing on the reliability, operability, and automation of the Infrastructure as Code.
  • Work with data scientists and backend engineers to make sure that data is properly managed throughout the analytics process and to ensure an iterative approach to data development supported by consistent testing and monitoring.
  • Establish scalable, efficient, and automated processes for large scale deployments.
  • Manage, monitor, and troubleshoot data pipelines and the infrastructure as a whole.
  • Ensure environments maintain the highest level of quality, security, scalability, availability and compliance amidst an environment of rapid change and growth.
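The pipeline monitoring responsibility above can be sketched in a few lines. The following is a minimal, hypothetical example (the pipeline names, SLA values, and `stale_pipelines` helper are illustrative, not Xapo's actual tooling) of flagging data pipelines whose last successful run has breached its freshness SLA:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class PipelineRun:
    name: str
    last_success: datetime
    sla: timedelta  # maximum allowed age of the last successful run

def stale_pipelines(runs, now=None):
    """Return the names of pipelines whose last success breached their SLA."""
    now = now or datetime.now(timezone.utc)
    return [r.name for r in runs if now - r.last_success > r.sla]

# Illustrative data: one healthy pipeline, one that missed its daily SLA.
now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
runs = [
    PipelineRun("ingest_events", now - timedelta(hours=2), timedelta(hours=6)),
    PipelineRun("daily_rollup", now - timedelta(hours=30), timedelta(hours=24)),
]
print(stale_pipelines(runs, now=now))  # -> ['daily_rollup']
```

In practice a check like this would feed an alerting system rather than print, but the shape (declared SLA per pipeline, automated evaluation) is the point.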

You will:

  • Proactively work with stakeholders such as the Data, Engineering, and Security teams to identify and build a robust, self-serve, and scalable framework of automated actions.
  • Be part of a technical decision group, as your inputs will influence the Platform landscape and roadmap.
  • Deliver self-serve capabilities to our data scientists and backend engineers that allow them to provision, manage and operate their own infrastructure and resources deployment.
  • Build high-quality code into everything we do by following industry best practices, while understanding what Xapo needs and making decisions accordingly.
  • Provide coaching and mentoring to colleagues around how to build sustainable services and automated pipelines.
  • Keep security and compliance at the forefront of all you do.
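The self-serve framework described above can be illustrated with a small sketch. This is an assumed design, not Xapo's implementation: teams provision resources through named, pre-approved actions in a registry (`self_serve`, `create_dataset`, and the path format are all hypothetical) instead of raw cloud access:

```python
# Hypothetical self-serve action registry: teams invoke pre-approved,
# named actions rather than touching cloud APIs directly.
ACTIONS = {}

def action(name):
    """Register a provisioning action under a stable, discoverable name."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@action("create_dataset")
def create_dataset(team, name):
    # A real platform would call the cloud provider's API here;
    # this sketch just returns the resource path it would create.
    return f"{team}/datasets/{name}"

def self_serve(action_name, **kwargs):
    """Dispatch a self-serve request to its registered action."""
    if action_name not in ACTIONS:
        raise ValueError(f"unknown action: {action_name}")
    return ACTIONS[action_name](**kwargs)

print(self_serve("create_dataset", team="data", name="events"))  # -> data/datasets/events
```

Centralizing actions this way is what lets the platform team enforce security and compliance policy in one place while teams stay autonomous.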

Skills needed

  • Demonstrated hands-on experience automating cloud data infrastructure following best practices such as IaC, version control, CI/CD, containerization, compliance, and security.
  • A deep technical knowledge of software and systems – able to dive into details with engineers and speak in plain language with stakeholders.
  • Stay abreast of current technology developments, with a demonstrated ability to retain competitive advantage by implementing relevant technologies in software products.
  • Understanding of and expertise with relational databases.
  • Work experience with Google Cloud Platform (Cloud Operations, IAM, BigQuery, Dataflow, etc.).
  • Passionate about automation.
  • Strong analytical and communication skills.
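As a concrete illustration of the IaC, compliance, and security practices listed above, here is a minimal, hypothetical pre-deployment policy check (the resource dicts, label names, and `policy_violations` helper are assumptions for the sketch, not a real tool) that rejects resource definitions missing ownership labels or exposed publicly:

```python
# Hypothetical policy gate run in CI against IaC resource definitions:
# every resource must carry owner/environment labels and must not be public.
REQUIRED_LABELS = {"owner", "environment"}

def policy_violations(resources):
    """Return human-readable violations for a list of resource dicts."""
    violations = []
    for res in resources:
        missing = REQUIRED_LABELS - set(res.get("labels", {}))
        if missing:
            violations.append(f"{res['name']}: missing labels {sorted(missing)}")
        if res.get("public", False):
            violations.append(f"{res['name']}: must not be public")
    return violations

# Illustrative input: one non-compliant bucket, one compliant dataset.
resources = [
    {"name": "raw-events-bucket", "labels": {"owner": "data"}, "public": True},
    {"name": "bq-analytics", "labels": {"owner": "data", "environment": "prod"}},
]
print(policy_violations(resources))
```

Running a check like this in the CI/CD pipeline is one common way to keep security and compliance "at the forefront" without slowing teams down.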

Nice to have

  • Work experience with the other major public clouds (Azure, AWS).
  • Experience building and shaping developer and data environments as code, using pipelines; expertise in GitHub.
  • Comfort in a variety of scripting and coding languages.
  • Knowledge and experience with Machine Learning tools

Other requirements

  • A dedicated workspace 
  • A reliable internet connection with the fastest speed possible in your area
  • Devices and other essential equipment that meet minimal technical specifications
  • Alignment with Our Values.