Data Engineer

As the Data Engineer on the Data Engineering team, you will lead our critical efforts to make our business data continually available and useful to teams across the organization. You will ensure that these efforts scale effectively in line with rapid growth in the business and team. You will design and implement the systems that connect our vendored SaaS tools with each other, our production application, and our data warehouse. You will own existing data pipelines and launch new ones, as well as improve and scale our data warehouse. We currently use Snowflake, dbt, and Prefect, alongside various iPaaS/ETL tools, to power our Looker instance. You’ll partner with key stakeholders across the business, including the Data Science, Rev Ops, and Growth teams. Though we work closely with Engineering, in this role you will own the architecture, implementation, and deployment of systems that support our broader data needs. All positions at Postscript are fully remote.

Primary duties

  • Design, implement and deploy new data processing applications and pipelines
  • Own and improve existing data pipelines. Introduce robust pipeline testing frameworks into our existing warehouse infrastructure using tools such as Great Expectations. Continually improve data integrity and pipeline stability
  • Own, maintain, and effectively scale our Snowflake instance. Continually improve how we structure our data to provide an easy, intuitive end-user experience
  • Introduce new data sources and data models to the warehouse as needed. Update existing models as necessary
  • Proactively enable Data Science, Engineering, and stakeholders across the company to take full advantage of the tools that the BI team incubates internally
  • Solve data problems using SQL, Python or whichever tool is most appropriate
  • Effectively manage and execute projects in line with timelines and business goals
  • Evangelize Agile development principles across the team. Ensure excellent stakeholder communication
  • Develop and maintain deep working knowledge of industry best practices and trends

What We’ll Love About You

  • Proven experience owning the full development lifecycle of stand-alone applications, including applications focused on data ingestion and processing in a production environment
  • Expert-level skills in SQL and data modeling 
  • Strong Python knowledge (or similar programming language). Hands-on experience with Pandas, Numpy and Dask preferred
  • Experience working with and administering a Cloud Warehouse such as Snowflake, BigQuery, or Redshift
  • Demonstrated understanding of how testing fits into the data warehouse ecosystem 
  • Demonstrated strong collaboration and cross-functional project management skills
  • Demonstrated ability to effectively prioritize decisions to deliver maximum value to end users 


Outer Join is the premier job board for remote jobs in data science, analytics, and engineering.