Senior Data Engineer

This job is no longer open

About Cardlytics 

If you have a bank account or a credit card, chances are good that you've seen our platform in action. By running the cash-back rewards programs for Chase, Bank of America, Wells Fargo, and other financial institutions, we place targeted, relevant, measurable ads in front of 130 million US consumers, more MAUs than Twitter, Pinterest, and Snapchat.

Cardlytics is the largest walled garden you’ve never heard of. We see one in every two card swipes across the US, covering $3.3T in purchases. This puts us in a unique position. We can help marketers predict consumer shopping preferences based on actual purchase data, and then use that data to reach bank consumers with offers they will love. 

Role Summary 

The Senior Data Engineer is responsible for designing and coding new features and applications, enhancing existing products, and implementing new technologies, paradigms, and practices to provide the best solutions to our customers. This includes the technical design and development of Cardlytics' current and future systems, working within a team of data engineers and across other Engineering and business teams. The Senior Data Engineer will also implement organizational and industry standards and best practices to ensure compatibility, reliability, resiliency, scalability, performance, and maintainability.

You will: 

• Develop new applications and features within a scrum team providing data and data services to the enterprise, other engineering teams, data science, analysts, product, management/executives, and other business teams

• Build high-performing, scalable data platforms that support multiple data pipelines to ingest and deliver data as quickly and reliably as possible

• Implement new technologies and practices to provide the best solutions to our customers

• Own architecture/design and risk analysis/mitigation at a macro level

• Develop and maintain solutions on our tech stack environments (Spark Streaming, Spark, Kafka, Hadoop, Vertica, SQL Server, etc.)

• Work with business teams to create technical requirements and deliver within time and scope

• Work with IT Operations and Prod Support to ensure solutions are releasable, maintainable, and scalable

• Work with Risk & Compliance to ensure necessary logging/security is in place to comply with audits

• Help develop team members through code reviews and by enforcing standards, best practices, policies, and processes

• Perform functional testing, end-to-end testing, performance testing, and UAT of applications and code written by self and other members of the team  

You are: 

• A creative problem solver

• Well versed in Spark, Kafka, RDBMS, and other Big Data technologies

• Proficient in distributed systems and architecture

• Able to design simple, clean, and elegant solutions

• Able to understand business requirements and translate them to technical specifications and designs

• Able to build solutions that treat monitoring, logging, validation, and observability as first-class citizens

• Able to understand and clearly communicate your ideas to others  

You have experience in: 

• Building high-performing, scalable, observable, reliable, and extensible data pipelines that process large volumes of data in both batches and streams

• Using logs, tools, and other data to methodically identify issues (performance, environmental or otherwise)

• Working with distributed and MPP systems

• Consuming and supplying data via APIs

• Working with data lakes and data lake query engines (Dremio, Presto, Drill, Spark, etc.)

• Mentoring junior developers

  


Outer Join is the premier job board for remote jobs in data science, analytics, and engineering.