Data Engineer

Job Description

Our Customer Retention Team is looking for a Data Engineer to partner with Customer Success, Analytics, and Data Science to deliver highly visible data products that power predictive models, analyses, and critical enablement for customer engagement teams. As a member of this team, you will build and maintain insightful, scalable, and robust systems and solutions using AWS and related technologies such as Spark, Hive, Presto, MLflow, and Databricks. This position supports our operational and business objectives, working with Analysts and Data Scientists on new and existing data initiatives.

The ideal candidate will have experience in data manipulation and constructing ETL pipelines, demonstrable data intuition, and the ability to iterate quickly to develop robust data products and solutions. You should be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

As a Data Engineer, you will:

· Design, build, and manage production-level ETL pipelines, ensuring SLAs are met for all managed sources

· Serve as the team’s subject matter expert on ETL and programming best practices, query optimization, and LogMeIn’s data platform infrastructure

· Build and maintain a data quality framework that monitors data sources used by Analysts and Data Scientists to ensure data correctness and availability

· Partner with machine learning engineers and data scientists on model deployment, monitoring, and retraining

· Work with Customer Engagement teams to provide critical customer data to various endpoints (Gainsight, Marketo, Salesforce, etc.)

· Transform raw event-level data into formatted tables for modeling and BI consumption

· Generate accurate and effective documentation


What We Are Looking For:

· Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field

· 2-3 years of relevant work experience

· Familiarity with writing production ETL using data technologies that enable analysis of large amounts of data (Spark, Hadoop, Hive, Presto, etc.)

· Expert programming skills in one or more general-purpose languages (Python, R, Scala, etc.) and high proficiency in SQL

· Strong work ethic complemented by a proactive, problem-solving attitude

· Strong communication and interpersonal skills

· Desire to learn new technologies and programming paradigms, and to stay up to date on current industry standards to share with the team

· Experience with AWS and Spark preferred
