YOUR MISSION
We are looking for a Data Engineer to help create and maintain Mural's data pipeline architecture, and to build APIs and tools that help other teams work with data. You will work on the BizOps Engineering team and collaborate closely with the Product, Analytics, and Data Science teams to help them achieve their goals.
The Data Engineer role is a software development role with knowledge of data architectures, APIs, and the delivery and transformation of data in a reliable way.
The ideal candidate is passionate about both developing software and working with data, and is capable of challenging and redesigning existing solutions. You must be a team player, always willing to collaborate with others.
Responsibilities:
As a member of the Data Platform team, you will:
- Develop private APIs and messaging endpoints that provide rich functionality and administrative control within the platform.
- Take a central role in the running and development of Apache Kafka and/or other messaging infrastructure.
- Improve the existing data platform and propose alternative solutions.
- Build data pipelines and ETL workflows on cloud platforms such as Microsoft Azure or Amazon Web Services.
- Design and maintain ETL/ELT pipelines and SQL queries to meet business needs.
- Design, build, and evolve durable, highly scalable Kafka infrastructure.
- Work with stakeholders, including the Executive, Engineering, and Operations teams, to assist with data-related technical issues and support their data infrastructure needs.
- Work day-to-day with a popular SQL data warehouse such as Redshift, Vertica, BigQuery, or Snowflake.
- Own the development, implementation, assessment, and support of the Kafka streaming platform.
- Work closely with Product teams to explore the feasibility of experimental data-driven features, help them narrow down preliminary or unclear requirements, and build the tools and APIs necessary to support those features. A strong analytical mindset is a must.
- Design, develop, and deploy backend services with a focus on high availability, low latency, and scalability.
- Follow modern development best practices such as code reviews, unit testing, and continuous integration.
- Work as part of a team. We value team players who share their knowledge and like collaborating with others.
- Show initiative, completing your tasks and providing timely status updates to both your team and all stakeholders.
- Take full ownership of the solutions you build. This means analyzing requirements, building the solutions, monitoring them in production, and troubleshooting them if problems arise.
YOUR PROFILE
We are looking for a Software Engineer with 5+ years of experience in a development role and Data Engineering experience, who has a graduate degree in Computer Science, Software Engineering, or a related field, or equivalent relevant experience.
- An MS/BS degree in Computer Science or Software Engineering, or 5+ years of proven experience in a similar position.
- Strong understanding of relational database management systems with experience in Snowflake, Redshift, SQL Server, Oracle, or similar systems.
- 5 years of experience in Data or BI Engineering, Data Warehousing/ETL, or Software Engineering.
- 5 years of experience in Big Data Solutions using technologies including one or more of the following: Hadoop, Hive, HBase, MapReduce, Spark, Sqoop, Oozie, Java.
- Experience with Apache Kafka in a high-throughput production environment.
- Strong technical skills and proficiency with any general-purpose language (Java, JavaScript/TypeScript, Python, C#, C++, Go, etc.).
- Experience in designing and developing web services and REST APIs.
- Advanced knowledge of relational databases such as PostgreSQL, and being capable of writing non-trivial SQL.
- Experience designing data models and data warehouses, and working with non-relational data storage systems (NoSQL and distributed database management systems).
#LI-Remote
#LI-DJ1