This is a hands-on data platform engineering role that places significant emphasis on consultative data engineering engagements with a wide range of customer stakeholders: Business Owners, Business Analysts, Data Engineering teams, Application Development, End Users, and Management teams.
· 3+ years of data engineering experience leading implementations of large-scale lakehouses on Databricks, Snowflake, or Synapse. Prior experience with dbt and Power BI is a plus.
· Extensive experience with Azure data services (Databricks, Synapse, ADF) and related Azure infrastructure services such as Firewall, Storage, and Key Vault is required.
· Strong programming/scripting experience with SQL, Python, and Spark.
· Knowledge of software configuration management environments and tools such as JIRA, Git, Jenkins, TFS, Shell, PowerShell, Bitbucket.
· Experience with Agile development methods in data-oriented projects.
· Highly motivated self-starter and team player with demonstrated success in prior roles.
· Track record of success working through technical challenges within enterprise organizations.
· Ability to prioritize deals, training, and initiatives through highly effective time management.
· Excellent problem-solving, analytical, presentation, and whiteboarding skills.
· Track record of success dealing with ambiguity (internal and external) and working collaboratively with other departments and organizations to solve challenging problems.
· Strong knowledge of technology and industry trends that affect data analytics decisions for enterprise organizations.
· Certifications in Azure Data Engineering and related technologies.