Data Engineer

This job is no longer open
Before you read on, take a look around you. Chances are, pretty much everything you see has been shipped, often multiple times, to get there. E-commerce and parcel shipping volumes are exploding, but so are customer expectations about shipping speed and cost. Managing shipping and logistics operations to meet increasingly exacting demands is extremely hard, especially for SMBs, who can be left in the dust by larger and far more sophisticated competitors. But it does not have to be this way.

At Shippo, our goal is to level the playing field by providing businesses with access to shipping tools and terms that would not be available to them otherwise. We lower the barriers to shipping for businesses around the world, and move shipping from a pain point to a competitive advantage.

Through Shippo, e-commerce businesses, from fast-growing brands to mom-and-pop shops, are able to connect to multiple shipping carriers around the world from one API and dashboard, and seamlessly run every aspect of their shipping operations, from checkout shipping options to returns.

Join us to build the foundations of something hard yet meaningful, roll up your sleeves, and get important work done every day. Founded in 2013 and funded by top-tier investors like D1 Capital Partners, Bessemer Venture Partners, Union Square Ventures, Uncork Capital, VersionOne Ventures, and FundersClub, we are a fast-growing and proudly distributed unicorn with hubs in San Francisco and Austin. We are also featured in Wealthfront’s Career Launching List and Forbes’ Cloud 100 list of fast-growing startups.

We are seeking a Data Engineer! You will be responsible for building systems that collect and process events at massive scale to provide operational and business insight into the performance and optimization of shipping services.

The data engineer will work closely with product, engineering, and business leads in generating customer-facing and internal dashboards, ad hoc reports, and models to provide insights and affect platform behavior. This will also include building and maintaining the infrastructure to collect and transform raw data.

Responsibilities and Impact

    • Design, build, scale, and evolve our large-scale data infrastructure and processing workflows to support our business intelligence, data analytics, and data science processes
    • Build robust, efficient, and reliable data pipelines and integrations across diverse data sources and transformation techniques, and ensure the consistency and availability of data insights (see the sketch after this list)
    • Collaborate with product, engineering, and business teams to improve the data models that feed business intelligence tools, increasing data accessibility and fostering data-informed decision making across the organization
    • Articulate and present findings and recommendations at different levels, with a clear bias towards impactful learning and results
    • Develop clean, well-designed, reusable, scalable code following TDD practices
    • Champion engineering organization’s adoption and ongoing use of the data infrastructure
    • Embody Shippo’s cultural values in your everyday work and interactions
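
To make the pipeline bullet above concrete, here is a minimal, purely illustrative sketch of the kind of orchestrated ETL job this role would own. This is not Shippo’s actual stack or schema; the DAG, task, and field names are all hypothetical (Airflow 2.x style).

    # Illustrative sketch only: a minimal daily ETL DAG.
    # The dag_id, task names, and event fields are hypothetical.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_shipment_events(**context):
        """Pull raw shipment events from an upstream source (stubbed)."""
        # A real task would read from an API, a queue, or object storage.
        return [{"shipment_id": 1, "status": "delivered"}]

    def load_to_warehouse(**context):
        """Write the extracted rows to the warehouse (stubbed)."""
        rows = context["ti"].xcom_pull(task_ids="extract_shipment_events")
        print(f"loading {len(rows)} rows")

    with DAG(
        dag_id="shipment_events_daily",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        extract = PythonOperator(
            task_id="extract_shipment_events",
            python_callable=extract_shipment_events,
        )
        load = PythonOperator(
            task_id="load_to_warehouse",
            python_callable=load_to_warehouse,
        )
        extract >> load

Jobs like this feed the dashboards and models described above, with scheduling and retries handled by the orchestrator rather than hand-rolled cron.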

Requirements

    • 3+ years of experience in software development
    • Experience designing, building, and maintaining data pipeline systems
    • Coding experience in server-side programming languages (e.g. Python, Scala, Go, Java) as well as database languages (SQL)
    • Experience with data technologies and concepts such as Airflow, Kafka, Hadoop, Hive, Spark, MapReduce, RDBMS, NoSQL, and columnar databases
    • Exceptional verbal, written, and interpersonal communication skills
    • Deep understanding of customer needs and passion for customer success
    • Exhibit core behaviors focused on craftsmanship, continuous improvement, and team success
    • BS or MS degree in Computer Science or equivalent experience

Bonus Points

    • Experience implementing data pipelines and ETL processes
    • Experience with big data frameworks such as Hadoop and MapReduce and their associated tools
    • Experience building stream-processing systems using solutions such as Kinesis Streams, Kafka, or Spark Streaming (see the sketch after this list)
    • Experience integrating with APIs that use REST, gRPC, SOAP, and other technologies
    • Experience with cloud environments and DevOps tools; working experience with AWS and its associated products a plus
    • Experience with machine learning a plus
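
As a hedged illustration of the stream-processing bullet (again, not Shippo’s actual infrastructure; the topic, group, and field names are hypothetical), a bare-bones consumer using the kafka-python client could look like:

    # Illustrative sketch only: consume shipment events from a Kafka topic.
    # Requires the kafka-python package and a reachable broker.
    import json

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "shipment-events",                    # hypothetical topic
        bootstrap_servers=["localhost:9092"],
        group_id="tracking-aggregator",       # hypothetical consumer group
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    for message in consumer:
        event = message.value
        # A real consumer would aggregate, enrich, or forward these events.
        print(event.get("shipment_id"), event.get("status"))
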
The salary range reflected is an estimate of base pay for the primary location of San Francisco. Pay Range: $150,000 - $220,000. Base pay may vary if an offer is made for work in a different location or for a candidate with a different level of skill and experience.

Benefits and Perks

    • Medical, dental, and vision healthcare coverage for you and your dependents. Pet coverage is also available!
    • Flexible policy for PTO and work arrangements
    • 3 VTO days for ShippoCares volunteering events
    • $2,500 annual learning stipend for your personal and professional growth
    • Charity donation match up to $100
    • Free daily catered lunch, drinks, and snacks
    • Fun team events outside of work hours - happy hours, “escape room” adventures, hikes, and more!
