Senior Data Infrastructure Engineer (DevOps)

This job is no longer open

YOUR MISSION

The Data team at Mural has experienced rapid growth over the last year. You will join a dynamic, fast-paced environment and work with cross-functional teams to design, build, and roll out products that deliver the company’s vision and strategy. The DataOps Engineer will be responsible for site reliability engineering, CI/CD tooling, pushing code into production, and test automation.

Your main responsibility is to lead the design, build, and operational management of highly secure and scalable applications and software platforms for the business. You will promote, document, and implement systems infrastructure best practices, building tools that allow the department to develop and deploy impeccable sites and software. You'll also create tools that leverage productivity amplifiers, enabling scalable operations.

RESPONSIBILITIES

  • Hands-on deployment of data warehouse systems and tools such as Sisense Fusion, Databricks, and Tableau
  • Build an infrastructure-as-code framework using Chef, Terraform, EKS, and containerized deployments generally, collaborating with our SRE and Cloud Infrastructure teams
  • Leverage the AWS tech stack for data warehouse infrastructure management
  • Leverage MURAL’s Datadog infrastructure to enhance the monitoring and observability capabilities of the data warehouse
  • Hook up alerts with Slack and PagerDuty (PD) and ensure a low noise-to-signal ratio
  • Build, manage, and maintain the CI/CD pipelines for the data warehouse
  • Maintain an understanding of data pipelines, Spark SQL, Python, and Scala so the data warehouse infrastructure can be fine-tuned to the needs of the data engineers

YOUR PROFILE

  • 5+ years of DevOps or data infrastructure experience
  • 3+ years of experience with AWS, Azure, or GCP
  • 2+ years of experience programming in Python, Scala, or SQL
  • Practice DataOps and SysOps (e.g., automation, performance tuning, pipelines, ingestion, prep, orchestration, management, and analytics)
  • Understand data explainability and monitoring techniques
  • Working knowledge of cloud configuration and container lifecycle management products: Terraform, Ansible, Kubernetes, Mesos, Istio, Docker, etc.
  • [Optional] Familiarity with machine learning concepts and tools (e.g., TensorFlow, PyTorch)


Please submit your resume in English. #LI-Remote #LI-ABW1


Outer Join is the premier job board for remote jobs in data science, analytics, and engineering.