Responsibilities
You will work alongside a strong, global team with diverse backgrounds and skills to:
Analyse data sources and acquire data
Create data pipelines and integrate them with final data destinations
Create appropriate data models, architectures and pipelines
Move the models and pipelines into Production
You will assist the practice in:
Developing templates and accelerators across a variety of libraries and platforms
Participating in data workshops and client work as necessary
You will collaborate with business and technology partners to grow and develop the data engineering practice.
Skills
Must have
Strong Python development skills (minimum 4 years of hands-on experience)
Apache Spark
API development
Strong data-related development skills, preferably with mainstream SQL and NoSQL databases
Experience with databases, data modelling, and data flows
Exposure to the full Software Development Life Cycle and experience working in a modern development team
Good analytical skills
Strong verbal and written communication skills, including the ability to articulate technical concepts to non-technical stakeholders; the successful candidate will be expected to communicate effectively with both business and technical teams
Experience supporting other engineers with infrastructure, pipelines, and configuration
At least some experience with big data
Nice to have
Experience with AWS, Azure, or GCP
Experience working with, or supplying data to, visualization tools such as Qlik, Tableau, Power BI, or similar
Good understanding of data integration patterns
Experience with, or exposure to, software development for analytic applications
Experience in projects involving cross-functional teams