Our Data Engineering team, part of our Data Services organization, builds and maintains the infrastructure essential to delivering high-volume, business-critical data that enables data-driven decisions across the company.
We are focused on expanding our curated and modeled data sets, which unify sources of truth across our multiple products and domains. You’ll be empowered to guide the data engineering team on best practices using modern distributed data tools such as Snowflake, Spark, Kafka, and dbt.
This is an ideal opportunity for someone who has strong opinions on how things should be done and loves figuring out the right solution for the scenario at hand. Your voice will be heard, and you will have the opportunity to shape the direction and delivery of our data platform for internal stakeholders.
5+ years of experience designing and delivering data warehouses and marts to support business analytics
Expertise in streaming and real-time data processing using technologies such as Spark, Kafka, ksqlDB, or Databricks, along with best practices for deploying these platforms to production
Strong foundation in SQL development on RDBMS (Snowflake and Postgres preferred)
Experience with dimensional data modeling/data workflow diagrams (conceptual, logical, and physical)
Experience with source control and deployment workflows for ETL (dbt, Fivetran, Airflow, etc.)
Experience working with AWS services such as DynamoDB, Glue, Lambda, Step Functions, S3, CloudFormation
Hands-on experience with scripting languages (Python, Bash, etc.)
Experience with metadata management and data quality
Knowledge of software engineering best practices, including experience implementing CI/CD (GitLab, GitHub Actions, TeamCity, etc.) and monitoring and alerting for production systems
Delivery of data warehousing and data modeling
Support and evolution of the data environment to deliver high-quality data with speed and availability
Curation of source-system data to deliver trusted data sets
Involvement in data cataloging and data management efforts
Production ETL performance tuning and environment-level resource management
Migration of POC pipelines to production data processes
Strong capability to manipulate and analyze complex, high-volume data from a variety of sources
Strong experience designing and building end-to-end data models, pipelines, and alerting
Knowledge of data management fundamentals and data storage principles
Experience in data modeling for batch and streaming data feeds, covering both structured and unstructured data
Founded in 2004 and trusted by Fortune 500 companies, Pluralsight is the technology skills platform organizations and individuals in 150+ countries count on to create progress for the world.
Our platform helps technologists master their craft and take control of their careers. We empower businesses everywhere to build adaptable teams, speed up release cycles and become scalable, reliable and secure. We come to work every day knowing we’re helping our customers build the skills that power innovation.
And we don’t let fear, egos or drama distract us from our mission. Our mission to democratize technology skills is what drives us and our values are at the helm of how we work together. It’s our commitment to practicing them day in, day out that enables our performance. We’re adults, and we treat each other that way. We have the autonomy to do our jobs, transparency to eliminate office politics and trust each other to do the right thing. We thrive in an environment with creativity around every corner, challenges that keep us on our toes, and peers who inspire us to be the best we can be. We bring different viewpoints, backgrounds and experiences, and united by our mission, we are one.
Bring yourself. Pluralsight is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age or veteran status.