Data Engineer

This job is no longer open
Our Data Operations team owns the data infrastructure, data pipelines, data transformations, data quality, and data platforms, focusing on scalability and performance to empower business decision-making and the hyper-growth of our all-in-one productivity platform! 
 
The ideal candidate loves data, embraces the DataOps philosophy, and is excited about solving complex technical and data challenges at scale while delivering high-quality, reliable data products to our internal stakeholders. Our core data tech stack includes Fivetran, Airflow, DBT, Snowflake, Tableau, Amplitude, and Kubeflow.
 
Beyond technical skills, we strongly believe in ClickUp Core Values. If they resonate with you too, you will be a great addition to our team! 
 
You will be responsible for: 
  • Work with BI & Analytics leadership, enterprise application team (IT), data engineers, and data analysts to understand data generation and data needs 
  • Design, build, and maintain highly efficient and reliable data pipelines to move data across a number of platforms, including applications, databases, data warehouses, and BI tools.
  • Build data models via DBT and deliver data products to stakeholders
  • Provide technical leadership around data architecture and drive the requirements, design, and delivery of a scalable data ecosystem
  • Provide data engineers with guidance and design support around workflows and pipelines, conduct code reviews, and select the tools the team will use.
  • Drive day-to-day execution using an agile scrum development process to maintain a consistent sprint velocity.
  • Enable and automate Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC) for Snowflake (see the sketch after this list)
  • Provide leadership on data engineering and DataOps best practices
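
As a concrete illustration of the RBAC item above: automating Snowflake access control usually means generating and executing GRANT statements programmatically. Below is a minimal sketch using snowflake-connector-python; the role names, privileges, and connection parameters are hypothetical placeholders, not a description of any actual setup.

    # A minimal sketch of automated Snowflake RBAC grants using
    # snowflake-connector-python. Role names, privileges, and connection
    # parameters are illustrative assumptions only.
    import snowflake.connector

    # Hypothetical mapping of roles to the privileges they should hold.
    ROLE_GRANTS = {
        "ANALYST_ROLE": ["SELECT"],
        "ENGINEER_ROLE": ["SELECT", "INSERT", "UPDATE"],
    }

    def apply_grants(conn, database, schema):
        """Grant each role its privileges on all tables in the schema."""
        cur = conn.cursor()
        try:
            for role, privileges in ROLE_GRANTS.items():
                for priv in privileges:
                    cur.execute(
                        f"GRANT {priv} ON ALL TABLES IN SCHEMA "
                        f"{database}.{schema} TO ROLE {role}"
                    )
        finally:
            cur.close()

    conn = snowflake.connector.connect(
        account="my_account",  # placeholder credentials
        user="my_user",
        password="my_password",
    )
    try:
        apply_grants(conn, "ANALYTICS", "PUBLIC")
    finally:
        conn.close()

In practice, a mapping like this would live in version-controlled configuration so that access changes go through code review like any other change.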
Qualifications: 
  • 5+ years of hands-on development experience in custom ELT design, implementation, and maintenance
  • 5+ years of Python development experience.
  • 5+ years of SQL experience.
  • Proficient in DBT and GitHub
  • Proven track record of architecting large-scale data pipelines and CI/CD tooling, with experience in technologies like Airflow, Docker, Lambda, Kubernetes, and similar (see the DAG sketch after this list).
  • Experience implementing DataOps tools and software engineering best practices, e.g., GitHub Actions, Jenkins, or AWS Step Functions
  • Extensive experience working with cloud data platforms (Snowflake, Redshift, Databricks, or similar).
  • Experience building data pipeline monitoring and alerting tools to provide real-time visibility into scalability and performance metrics.
  • Experience and certification in deploying AWS cloud infrastructure to host data platforms.
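
To make the Airflow experience above concrete: a daily ELT pipeline is typically expressed as a DAG along the lines of the sketch below. The dag_id, schedule, and extract/load callables are hypothetical placeholders, and the Airflow 2.4+ API is assumed.

    # A minimal sketch of a daily ELT pipeline as an Airflow DAG.
    # dag_id, schedule, and the task callables are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        """Pull raw data from a source system (stubbed out here)."""

    def load():
        """Load the extracted data into the warehouse (stubbed out here)."""

    with DAG(
        dag_id="daily_elt",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older 2.x uses schedule_interval
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # Run the load only after extraction succeeds.
        extract_task >> load_task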

