Senior Data Engineer - (Platform)

This job is no longer open

About the Job:

As a Senior Data Engineer, you will take full ownership of key data platform initiatives and the data management life cycle, including data ingestion, data processing, data storage, the querying system, and cost reduction efforts, in service of delivering product features to internal and external KT customers.

We are looking for strong engineers to grow our Data Platform team, which is responsible for driving KT’s Data Engineering vision. The team works in two main areas: (1) building a data platform for data ingestion, access, processing, and querying to enable KT product features, and (2) building microservices that expose data to our backend product teams.

Responsibilities:

  • Leading and driving the requirements, scoping, design, development, and deployment of data management systems and infrastructure for building and supporting new features.
  • Scaling the ingestion pipeline, data quality checks, and processing systems while maintaining SLAs on performance, reliability, and availability.
  • Measuring and continuously improving ETL execution times, the cost of running data processing jobs in production, and the rate of experimentation, reducing iteration time so you and the team can make better decisions about what to build and try next.
  • Leading new design and data architecture for near-real-time features.
  • Communicating effectively across multiple teams and projects.
  • Learning and adapting quickly to new technologies.
  • Acting as a self-starter who takes and drives initiatives.

Qualifications:

  • Experience designing, implementing, and supporting highly scalable data systems and services.
  • Experience building and running large-scale data pipelines, including distributed messaging systems such as Kafka, and data ingestion to/from multiple sources feeding batch and near-real-time/streaming compute components.
  • Experience with data modeling and data architecture optimized for big data patterns.
  • Ability to build and debug data systems: defining metrics and datasets, monitoring and debugging logs, and analyzing the symptoms of issues.
  • Familiarity with current distributed systems literature and the data infrastructure life cycle.
  • Strong background in Python development on Linux; you write clean, correct code while iterating on experiments in Python. The ability to understand and contribute to Go software is a plus.
  • Experience with and understanding of infrastructure tooling, e.g. Kubernetes (K8s), CI/CD, and Docker.

Creating a diverse and inclusive workplace is one of KeepTruckin's core values. We are an equal opportunity employer and welcome people of different backgrounds, experiences, abilities, and perspectives.




Outer Join is the premier job board for remote jobs in data science, analytics, and engineering.