The NVIDIA Datacenter organization is seeking an experienced technology professional for the position of Data Engineering Developer, supporting initiatives of the Operations Data Platform, Reporting, and Analytics organization for our DGX™ and other Datacenter products.
As a Data Engineering Developer, you will be an integral part of the Operations team building the Operations Data Platform, which turns data into information, insights, and business results.
What you’ll be doing:
Implement new pipelines in the Operations Data Platform for our Datacenter products.
Build data pipelines that transport data from source systems to the data lake.
Craft data systems and pipelines, ensuring that data sources, ingestion components, transformation functions, and destinations are well understood before implementation.
Interpret trends and patterns by performing complex data analysis
Prepare data for prescriptive and predictive modeling by making sure that the data is complete, has been cleansed, and has the necessary rules in place
Build algorithms and prototypes
Develop analytical tools and programs
Collaborate with data scientists and architects on projects
Evaluate business needs and objectives to ensure the organization can access the raw data
What we need to see:
Bachelor’s degree in Computer Science or Information Systems, or equivalent experience, with programming knowledge (e.g., Python, Java).
8+ years of experience developing and maintaining data warehouses in big data solutions
Expertise in data and database management, including data pipeline responsibilities spanning replication, mass ingestion, streaming, APIs, and application and data integration.
Experience building the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from various sources using AWS, Azure, SQL, or other technologies.
Knowledge of data analytics/mining and segmentation techniques
Experience with ecosystems such as Azure, AWS, Spark, and Hadoop.
Ability to communicate effectively with business users and translate business needs into technology solutions
Experience working with high-volume structured, semi-structured, and unstructured data, including parsing of log files.
Strong understanding of the infrastructure components used to run data pipelines, including AWS Glue and Azure Data Factory.
Strong analytical skills with the ability to collect, organize, and disseminate significant amounts of information with attention to detail and accuracy
Knowledge of operational processes for semiconductor chips, boards, systems, and servers, with a view of the data landscape.
Ways to stand out from the crowd:
Strong ability to drive continuous improvement of systems and processes.
Experience with big data projects that delivered significant, demonstrable value back to the business.
A self-starter and self-confident individual with integrity and accountability; highly motivated, driven, high-reaching, and attracted to a meaningful opportunity.
Practical knowledge of setting up data pipelines from factories, repair centers, the field, R&D, sales, and supply chain.
A proven ability to work in a fast-paced environment where strong organizational skills are essential.
You will also be eligible for equity and benefits.