Edina, MN
201-500 employees
Nerdery is a premier digital product consultancy. A collection of strategists, designers, technologists and proud “Nerds,” we’re allies on your digital journey.

Principal Data Engineer (GCP)

About Nerdery and Being a “Nerd.”

Nerdery is a digital product consultancy. Much more than consultants, we’re allies and guides on our clients’ digital journey – helping them to grow their business and delight their customers through intuitive, thoughtfully designed technology. As true partners, we prepare our clients for the opportunities in front of them, help them achieve their goals, and quickly deliver value for their customers. We do this by solving problems in creative ways across strategy, design, and technology.

At Nerdery, we’re not defined by our job titles but by the impact we make. You’ll work directly and closely with some of the world’s best brands to help create innovative digital products that serve everyone. As Nerds, our insight, innovation, and expertise are celebrated, and our growth is not only encouraged but expected. Being a Nerd means stepping up and pushing the boundaries of what’s possible.  We’re curious, fearless (well, not totally fearless – there are heights and spiders, after all), and always our authentic selves.

The Principal Data Engineer is a technologist passionate about data in all forms — whether stored within a relational database, a data warehouse, a data lake, a lakehouse or in-transit in ETL pipelines. They independently produce capable data structures and performant queries. As a Principal Data Engineer at Nerdery, you will architect and implement data solutions from scratch to extract and land data from various sources that will deliver insights, visualizations, or better predictions for our clients. You will support our software development teams, data analysts, and data scientists using market-relevant products and services. 


Responsibilities

  • Oversee the entire technical lifecycle of a cloud data platform, including but not limited to framework decisions, breaking down features into technical stories, writing technical requirements, and production readiness.
  • Design and implement a robust, secure data platform in GCP using industry best practices, native security tools, and integrated data governance controls.
  • Translate a defined data governance strategy into technical requirements, implementing controls, documenting processes, and fostering a data-driven culture within the organization.
  • Apply advanced SQL knowledge to author performant queries against relational databases and BigQuery, and maintain working familiarity with a variety of database systems.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other critical business performance metrics.
  • Design and implement scalable and reliable data pipelines on GCP.
  • Implement Change Data Capture (CDC) techniques and manage Delta Live Tables for real-time data integration and analytics, ensuring data consistency and enabling incremental data updates in cloud-based data platforms.
  • Design, configure, and manage Data Lakes in GCP, utilizing services like Google Cloud Storage, BigQuery, and Dataproc, to support diverse data types and formats for scalable storage, processing, and analytics.
  • Design API architectures, including RESTful services and microservices, and integrate machine learning models into production systems to enhance data-driven applications and services.
  • Build the infrastructure required for extraction, transformation, and loading (ETL) of data from a wide variety of data sources using SQL, GCP services, and infrastructure as code (IaC).
  • Migrate and create data pipelines and infrastructure from AWS or Azure to GCP.
  • Write and maintain robust, efficient, scalable Python scripts for data processing and automation.
  • Apply a strong understanding of data pipeline design patterns and determine the best fit for each use case.
  • Work with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • Manipulate, process, and extract value from large, disconnected datasets.
  • Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Assume responsibility for the solution and the stability of data in transit and at rest.
  • Collaborate directly with the client to identify and implement data security and compliance requirements, keeping client data secure using best practices.
  • Build internal processes, frameworks, and best practices for our data engineering domain.
  • Foster cross-functional collaboration as a technical liaison between engineering and other project disciplines (Design, Quality, Project Management, Strategy, Product, etc.).
  • Support the growth of other data engineers through mentorship.
  • Own the technical review process for team members and provide ongoing technical feedback and recommendations.
  • Participate in the internal leadership of their respective domain; provide input to the strategic direction of the domain, assist with domain initiatives, and maintain best practices within the domain.
  • Assess the technical skills of prospective candidates and provide recommendations to hiring managers.
  • Assist with sales requests as needed by providing technical recommendations and estimates to prospective clients.
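The Change Data Capture (CDC) work described above follows a common pattern: consume a stream of change events and apply them incrementally to a target table so it stays consistent with the source. The sketch below is a minimal, in-memory illustration of that pattern only; all names and data are hypothetical, and a production GCP implementation would typically use BigQuery MERGE statements or a Dataflow pipeline instead.

```python
# Minimal sketch of CDC-style incremental merging: apply a batch of change
# events (insert / update / delete) to a target table keyed by primary key.

def apply_cdc_batch(target, events):
    """Apply CDC events to `target`, a dict keyed by primary key."""
    for event in events:
        op, key, row = event["op"], event["key"], event.get("row")
        if op in ("insert", "update"):
            target[key] = row          # upsert semantics
        elif op == "delete":
            target.pop(key, None)      # idempotent delete
    return target

customers = {1: {"name": "Ada"}, 2: {"name": "Grace"}}
batch = [
    {"op": "update", "key": 1, "row": {"name": "Ada L."}},
    {"op": "insert", "key": 3, "row": {"name": "Edsger"}},
    {"op": "delete", "key": 2},
]
apply_cdc_batch(customers, batch)
# customers now reflects the source: key 1 updated, key 3 added, key 2 removed
```

Because deletes are idempotent and inserts/updates share upsert semantics, replaying the same batch leaves the target unchanged, which is the property incremental pipelines rely on.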

Skills & Qualifications

  • Bachelor's degree in Computer Science or a related field, or equivalent experience, required
  • 8+ years of relevant experience 
  • In-depth knowledge of Google Cloud Platform (GCP) data services such as BigQuery, Dataflow, Dataproc, and Pub/Sub, with proven experience in designing and implementing data pipelines, data storage, and analytics solutions in GCP.
  • Experience designing and implementing data governance and compliance policies at scale
  • Ability to take technical requirements and produce functional code
  • Experience with Git and the technologies specified above.
  • Proficiency in Python and SQL.
  • Experience with migrating data pipelines and infrastructure to GCP from multiple infrastructure stacks.
  • Deep understanding of data modeling, ETL processes, and data warehousing principles.
  • Familiarity with data pipeline orchestration tools and practices, such as Pub/Sub, streaming pipelines, and Cloud Functions.
  • Excellent problem-solving and analytical skills.
  • Ability to communicate with technical and non-technical client stakeholders
  • A proactive collaborator who works with colleagues to improve their technical aptitude
  • A successful history of manipulating, processing, and extracting value from large, disconnected datasets
  • Experience building and optimizing ‘big data’ pipelines, architectures and data sets
  • Experience using and/or creating APIs
  • Experience with any of the following additional database management systems: MS SQL Server, MongoDB, PostgreSQL, NoSQL (e.g. Cassandra), Sybase, IBM Db2, or Oracle Database
  • Experience with big data tools such as Hadoop, Spark, and Kafka is a plus
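The Python and SQL skills listed above come together in day-to-day ETL work: extract raw records, transform them, and load them into a warehouse-style table for querying. This is an illustrative sketch only; `sqlite3` stands in for a cloud warehouse such as BigQuery, and the table and field names are hypothetical.

```python
import sqlite3

# Extract: raw records as they might arrive from an upstream source.
raw_orders = [
    {"order_id": 1, "amount_cents": 1250, "region": " us-east "},
    {"order_id": 2, "amount_cents": 990,  "region": "US-WEST"},
]

def transform(record):
    # Transform: normalize region codes and convert cents to dollars.
    return (record["order_id"],
            record["amount_cents"] / 100.0,
            record["region"].strip().lower())

# Load: parameterized inserts into a warehouse-style table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, region TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 (transform(r) for r in raw_orders))

# Query the loaded data with SQL.
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 22.4
```

Keeping the transform a pure function of one record makes it easy to unit-test and to port the same logic into a Dataflow or BigQuery-based pipeline later.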

Are We the Right Fit For You?

The best way to get the scoop on whether Nerdery is the right place for you is to chat with current Nerds.  We would be delighted to have a conversation with you and share insight into what it’s really like to work at our organization and if it’s a place where you can thrive.  Our interview process will provide you ample opportunity to talk with other team members and assess whether the role is a good fit for your next chapter.  Take the first step and apply today – our Talent Advocates will then reach out to you to get the ball rolling!

Must be legally authorized to work within the country of employment without sponsorship for employment visa status.

Nerdery is an equal opportunity employer and complies with all applicable federal, state and local fair employment practice laws. Nerdery strictly prohibits and does not tolerate discrimination against employees, applicants or any other covered persons because of race, color, religion, creed, national origin or ancestry, ethnicity, sex, sexual orientation, gender (including gender nonconformity and status as a transgender or transsexual individual), pregnancy, marital status, familial status, age, physical or mental disability, citizenship, past, current or prospective service in the uniformed services, genetic information, membership or activity in a local human rights commission, status with regard to public assistance or any other characteristic protected under applicable federal, state or local law. All employees, other workers and representatives of Nerdery are prohibited from engaging in unlawful discrimination. Nerdery will ensure that all employment practices are free of such discrimination. Such employment practices include, but are not limited to: hiring, promotion, demotion, transfer, recruitment or recruitment advertising, selection, layoff, disciplinary action, termination, compensation, benefits, selection for training, including apprenticeship and other terms and conditions of employment. Nerdery will also provide reasonable accommodation to applicants and employees with disabilities pursuant to all applicable laws.


Outer Join is the premier job board for remote jobs in data science, analytics, and engineering.