Senior Data Engineer

IMO

This job is no longer open

Research shows that women and members of underrepresented groups tend to apply to jobs only if they think they meet 100% of the qualifications in a job description. IMO is committed to considering all candidates, even those who don't think they meet 100% of the qualifications listed. We look forward to receiving your application!

Work that is meaningful. A job that has impact. Colleagues that inspire. That’s what you’ll find at Intelligent Medical Objects (IMO), a growing health IT company creating clinical terminology and insights solutions that are used by more than 740,000 US physicians and 4,500 US hospitals to power better patient care and support meaningful analytics.

Intelligent Medical Objects (IMO) (Rosemont, IL) seeks a Sr. Data Engineer to create and maintain optimal data pipeline architecture for the IMO Data Platform. Specific duties include:

- Assemble large, complex data sets that meet functional and non-functional business requirements
- Identify, design, and implement internal process improvements, automating manual processes and optimizing data delivery and performance
- Build the infrastructure required for optimal extraction, transformation, and loading of data
- Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Work on a Scrum team with stakeholders across the Executive, Product, Architecture, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Create data tools for analytics and data scientist team members
- Work with data and analytics experts to drive greater functionality in our data systems
- Develop and implement orchestration frameworks (see the sketch after this list)
- Design data platform components for bulk, transactional, and streaming access
- Ensure application-specific availability, scalability, and monitoring of resources and costs
- Develop quality source code, including documentation of detail-level designs
- Leverage automation across testing, integration, and deployment activities
- Work cooperatively with team members to manage conflict constructively
- Mentor colleagues' technical development
- Deliver a quality product, leveraging the values of transparency, inspection, and adaptation in an agile way
- Take ownership: proactively anticipate the implications and consequences of situations and act appropriately to make decisions
- Implement creative solutions to technical challenges

Option to work remotely is available.
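To give candidates a feel for the orchestration work described above, here is a minimal sketch of a scheduled extract-transform-load pipeline using Apache Airflow (one of the workflow tools named in the requirements). The DAG name and the three task callables (extract_raw, transform_terminology, load_warehouse) are hypothetical placeholders, not IMO's actual pipeline.

```python
# Minimal Airflow DAG sketch: a daily three-stage ETL pipeline.
# All names here are illustrative assumptions, not IMO internals.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_raw(**context):
    """Pull the day's source files into a staging area (placeholder)."""


def transform_terminology(**context):
    """Normalize and validate staged records (placeholder)."""


def load_warehouse(**context):
    """Load transformed data into the warehouse, e.g. Redshift (placeholder)."""


with DAG(
    dag_id="clinical_terminology_etl",  # hypothetical pipeline name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_raw)
    transform = PythonOperator(task_id="transform", python_callable=transform_terminology)
    load = PythonOperator(task_id="load", python_callable=load_warehouse)

    # Linear dependency chain: extract, then transform, then load.
    extract >> transform >> load
```

The retry settings and daily schedule illustrate the "highly available, fault-tolerant" pipeline qualities the posting calls for; a production DAG would add alerting, data-quality checks, and backfill policies.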

Position requires a Bachelor's degree, or foreign equivalent, in Computer Science or a closely related engineering field of study, plus 5 years of experience in the job offered, or as a Sr. Software Engineer, Software Engineer Trainee, System Engineer, Sr. System Engineer, or similar position, implementing well-architected data pipelines that are dynamically scalable, highly available, fault-tolerant, and reliable for analytics and platform solutions. Specific experience must include:

- Working with SQL and relational databases
- Building and optimizing "big data" pipelines, architectures, and data sets
- Performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Applying analytical skills to unstructured datasets
- Building processes that support data transformation, data structures, metadata, dependency, and workload management
- Manipulating, processing, and extracting value from large disconnected datasets
- Working with relational SQL and NoSQL databases, including PostgreSQL, DynamoDB, MongoDB, and Elasticsearch
- Working with ETL and BI dashboard tools, such as Talend, Informatica, Tableau, or Looker
- Working with AWS cloud services, including EC2, EMR, RDS, and Redshift
- Working with object-oriented/functional scripting languages, such as Python, Java, C++, or Scala
- Working with agile development incorporating Continuous Integration and Continuous Delivery (CI/CD), using technologies such as Git, Jenkins, and Terraform
- Working with big data tools, such as Spark, Kafka, or similar
- Working with stream-processing systems, such as Storm, Spark Streaming, or similar (see the sketch after this list)
- Working with data pipeline and workflow management tools, such as Azkaban, Luigi, Airflow, or similar
- Supporting and working with cross-functional teams in a dynamic and agile environment

Option to work remotely is available.
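As a concrete example of the stream-processing experience listed above, here is a minimal sketch of a Spark Structured Streaming job that reads from Kafka and writes to Parquet. It assumes PySpark with the Kafka connector available; the broker address, topic name, and S3 paths are hypothetical.

```python
# Minimal Spark Structured Streaming sketch: Kafka in, Parquet out.
# Broker, topic, and paths below are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("streaming-etl-demo").getOrCreate()

# Read a Kafka topic as an unbounded streaming DataFrame.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "clinical-events")            # hypothetical topic
    .load()
)

# Kafka delivers key/value as binary; cast the payload to a string column.
parsed = events.select(col("value").cast("string").alias("payload"))

# Append each micro-batch to Parquet, with checkpointing for fault tolerance.
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "s3://example-bucket/events/")              # hypothetical sink
    .option("checkpointLocation", "s3://example-bucket/_chk/")  # hypothetical path
    .outputMode("append")
    .start()
)
query.awaitTermination()
```

The checkpoint location is what gives the job exactly-once sink semantics across restarts, which is the fault-tolerance property the requirements emphasize.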
