Staff Software Engineer – Cloud Data Pipeline

This job is no longer open
This is a remote position that can be based anywhere in the United States or Canada.

Calix is leading a service provider transformation to deliver a differentiated subscriber experience around the smart home and business, while helping providers monetize their networks using role-based cloud services, telemetry, analytics, automation, and the deployment of software-driven adaptive networks.

As part of a high-performing global team, the right candidate will play a significant role as a Calix Cloud Data Engineer, providing architecture design, implementation, and technical leadership in the data ingestion, extraction, and transformation domain.

Responsibilities and Duties:

  • Work closely with Cloud product owners to understand and analyze product requirements and provide feedback.
  • Design and review the architecture of the Cloud data pipeline, including data ingestion, extraction, and transformation services.
  • Implement and enhance support tools for monitoring and acting on data pipeline issues, and interpret trends and patterns.
  • Provide technical leadership in software design to meet requirements for service stability, reliability, scalability, and security.
  • Guide technical discussions within the engineering group and make technical recommendations.
  • Conduct design and code reviews with peer engineers.
  • Guide the testing architecture for large-scale data ingestion and transformation.
  • Serve in a customer-facing engineering role, debugging and resolving field issues.

Qualifications:

  • 10 years of software engineering experience delivering high-quality solutions at scale.
  • 4+ years of development experience performing ETL and/or data pipeline implementations.
  • Organized and goal-focused, with the ability to deliver in a fast-paced environment.
  • Strong understanding of distributed systems and RESTful APIs.
  • Experience with cloud-based big data projects (preferably in AWS or Azure).
  • Working experience with cloud-based data warehouses (Greenplum, Redshift, Azure SQL Data Warehouse, etc.).
  • Hands-on experience implementing data pipeline infrastructure for data ingestion and transformation, delivering near-real-time availability of data for applications, BI analytics, and ML pipelines.
  • Expert-level working knowledge of data lake technologies, data storage formats (Parquet, ORC, Avro), query engines (Athena, Presto, Dremio), and the associated concepts for building optimized solutions at scale.
  • Experience designing data streaming and event-based data solutions (Kafka, Kinesis, or similar) and building data pipelines (Flink, Spark, or similar); StreamSets experience is a plus.
  • Experience designing optimized solutions for large datasets.
  • Knowledge of and experience designing solutions with cloud-native AWS services (EC2, EMR, RDS, EKS, etc.), as well as deploying alternative solutions for appropriate use cases.
  • Expert level in one of the following programming languages or similar: Python, Java, Go.
  • BS degree in computer science, engineering, or mathematics, or equivalent experience.

Location:

  • Remote-based position located in the United States or Canada.

#LI-Remote
