Job Type
Full-time
Description
Who we are
Build a brighter future while learning and growing with a Siemens company at the intersection of technology, community and sustainability. Our global team of innovators is always looking to create meaningful solutions to some of the toughest challenges facing our world. Find out how far your passion can take you.
About the job
We are seeking experienced data engineers to help us rapidly expand our Data Cloud, constructing new data pipelines and building out Snowflake cloud data warehouse structures for various stakeholders and analytics applications, using the most modern techniques and technologies. Our solution features standardized data ingestion pipelines that support low-latency data updates from source systems and enable a variety of analytics applications. These may range across new predictive models, advanced analytics, operations monitoring and automation, as well as more traditional operational reporting.
What you’ll be doing
- Create and maintain optimal data pipelines for cloud data analytics utilizing cloud-native tools
- Assemble and simplify large, complex datasets that meet functional and non-functional requirements
- Work with stakeholders to assist with data-related technical issues and support data needs
- Keep our data secure across national boundaries and through multiple data centers and AWS regions
- Create data tools that help analytics and data science team members build and optimize our analytical products into an innovative industry leader
- Work with data, architecture and analytics experts to enable greater functionality in our data systems
Requirements
What you need
We are looking for a candidate with 7+ years of experience in a Data Engineer role who has attained a degree in a related technical field, or who has equivalent experience. They should also have experience using the following software/tools:
- Experience with cloud data tools: Snowflake, dbt, Amazon S3, etc. (directly required)
- Experience with data modeling for analytical data consumption (directly required)
- Experience with dbt data pipelines, Jinja, and dbt modeling (directly required)
- Experience with source control and familiarity with SDLC best practices (directly required)
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, dbt, etc. (directly or indirectly required; knowledge of non-dbt pipeline tools translates well to dbt)
- Working knowledge of data presentation systems
- Advanced working SQL knowledge and experience with cloud databases
- Experience building and optimizing data pipelines, architecture, and data sets with a focus on analytical data
- Experience performing root cause analysis on internal and external data to identify opportunities for improvement
- Strong analytical skills related to working with datasets
- Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management
- Successful history of manipulating, processing and extracting value from large, disconnected datasets
- Working knowledge of cloud data management
- Strong project management and organizational skills
- Experience supporting and working with cross-functional teams in a dynamic environment
The Brightly culture
We’re guided by a vision of community that serves the ambitions and wellbeing of all people, and our professional communities are no exception. We model that ideal every day by being supportive, collaborative partners to one another, conscientiously making space for our colleagues to grow and thrive. Our passionate team is driven to create a future where smarter infrastructure protects the environments that shape and connect us all. That brighter future starts with us.
Equal Opportunity Employer/Protected Veterans/Individuals with Disabilities