About the role:
The Data team at Zepz is responsible for delivering the infrastructure and services to support the fast-paced demand for reliable, high-quality data. Our customers rely on us to ensure the timely delivery of data, enforce the highest data quality standards, and provide fast, easy access to data. This requires a next-level data platform, and we are looking for a Data Ops Engineer to operate and support that platform. You will work closely with data engineering teams to bring agility, automation, and governance to operational processes. Our mission is to build a world-class platform from first principles that will be leveraged both internally and externally. This new platform will enable Zepz and its partners to enter new markets while supporting the organisation’s goal of making remittance easy and secure for a global customer base.
Technologies and the environment:
We are using industry-leading technologies in an AWS environment, such as database services (Redshift, Aurora, RDS, DynamoDB), data analytics services (Athena, Glue DataBrew), real-time data movement (Glue), compute services (EKS, ECS, EC2), and storage (S3). We use standard data transformation and orchestration technologies, such as dbt and Airflow. Python is the primary language used for data processing. Standalone applications and/or services may be required to provide supporting functionality, which may involve additional frameworks and languages such as Java.
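To give a flavour of how these pieces fit together, here is a minimal sketch of an Airflow DAG orchestrating a dbt run; the DAG id, schedule, and project path are illustrative assumptions, not a description of our actual pipelines:

```python
# Minimal sketch (Airflow 2.x): orchestrate a dbt run, then dbt tests.
# The dag_id, schedule, and --project-dir path are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_dbt_daily",          # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Build dbt models in the warehouse (e.g. Redshift), then test them.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/example_project",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/example_project",
    )
    dbt_run >> dbt_test
```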
We put a strong emphasis on automation, unit/integration testing, and performance testing. Within Technology, we work closely with our product and architecture partners to design scalable solutions that serve our customers.
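As a minimal sketch of the kind of unit testing we mean (the helper function and its cases are hypothetical, not taken from our codebase):

```python
# Minimal pytest sketch for a data-processing helper.
# normalise_amount is a hypothetical function defined here for illustration.
from decimal import Decimal

import pytest

def normalise_amount(raw: str) -> Decimal:
    """Parse a user-supplied amount string into a two-place Decimal."""
    cleaned = raw.strip().replace(",", "")
    return Decimal(cleaned).quantize(Decimal("0.01"))

@pytest.mark.parametrize(
    "raw, expected",
    [
        ("10", Decimal("10.00")),
        (" 1,234.5 ", Decimal("1234.50")),
        ("2.499", Decimal("2.50")),
    ],
)
def test_normalise_amount(raw, expected):
    assert normalise_amount(raw) == expected
```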
What you will own:
- Solving problems with code. Configuration, infrastructure, tooling, and automation are all handled by writing code (see the sketch after this list).
- Helping design processes. You’ll work closely with feature teams to assist with the release of new products; as such, there is plenty of scope to design processes that fit the overall direction while allowing teams to deliver code to production reliably and at pace.
- Helping design architecture. As well as owning the delivery of solutions, you will play a part in designing the architecture and how it interacts with its dependencies, from both a logical and an infrastructure viewpoint.
- Growing together. You’ll review others' work and happily seek feedback on yours to ensure we build a better codebase and sharpen each other's skills.
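As a minimal sketch of "solving problems with code" (referenced in the first item above), here is a hedged boto3 example that audits S3 buckets for a required ownership tag instead of checking by hand; the tag key is an assumed convention, not our actual one:

```python
# Minimal sketch: automate an operational audit with boto3 rather than
# clicking through the console. The required tag key is a hypothetical
# convention used for illustration only.
import boto3
from botocore.exceptions import ClientError

REQUIRED_TAG = "data-owner"  # assumed tag key, not an actual Zepz convention

def untagged_buckets() -> list[str]:
    """Return the names of buckets missing the required ownership tag."""
    s3 = boto3.client("s3")
    missing = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            tags = s3.get_bucket_tagging(Bucket=name)["TagSet"]
        except ClientError:
            # A bucket with no tag set at all is also missing the tag.
            missing.append(name)
            continue
        if not any(t["Key"] == REQUIRED_TAG for t in tags):
            missing.append(name)
    return missing

if __name__ == "__main__":
    for name in untagged_buckets():
        print(f"bucket missing '{REQUIRED_TAG}' tag: {name}")
```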
What you bring to the table:
- Solid experience in a Data Engineer role with a keen interest in solving problems using code.
- 5+ years of demonstrated experience in platform engineering.
- Expert understanding of CI/CD and DevOps methodologies. You understand the build and deployment cycle of an application and how it fits into a global platform.
- Great Terraform skills. All of our infrastructure is defined using Terraform and deployed using Atlantis. You should have a good understanding of Terraform and be able to diagnose and resolve common issues.
- Containerisation. You have a good understanding of Docker and Kubernetes and how to leverage these tools to achieve zero-downtime deployments.
- A holistic view of application delivery. You understand how to combine APM, monitoring, logging, alerting, and scaling to build a robust platform that allows applications to respond to varying demands, from both external sources (traffic) and internal sources (feature team delivery), in a safe and controlled manner.
- Cloud experience. Our cloud-native platform is hosted in AWS; you’ll be comfortable working with a system that supports users from around the world, at scale.
- Agile outlook. You need to be excited about working in a fast-changing environment. Products, tools, frameworks, and processes change; we evolve and take the best bits with us. The teams drive that evolution.