About Us
At People Data Labs, we’re committed to democratizing access to high-quality B2B data and leading the emerging DaaS economy. We empower developers, engineers, and data scientists to create innovative, compliant data products at scale with our clean, easy-to-use resume, company, location, and education datasets, consumed through our suite of APIs.
PDL is an innovative, fast-growing, global team backed by world-class investors, including Craft Ventures, Flex Capital, and Founders Fund. We scour the world for people hungry to improve, curious about how things work, and willing to challenge the status quo to build something new and better.
Roles & Responsibilities:
Own and drive insights into the efficacy of our products and product strategy
- Analyze data using statistical techniques and tools to identify anomalous data, clean data, and derive meaningful insights and trends.
- Ensure data integrity, accuracy, and completeness throughout the analysis process.
- Generate, maintain, and update dashboard reports using our business intelligence tools, highlighting key findings and trends.
- Develop and maintain databases, data systems, and data analytics pipelines within database management systems.
- Work with stakeholders in Engineering, Product, and Revenue to assist with data-related technical issues and support their data infrastructure and analytics needs.
- Ensure the integrity and consistency of database schemas, including managing updates, version control, and documenting schema changes to support data analysis and reporting requirements.
Impact the quality of PDL’s core data products
Our person data product contains over 3.9B records (wow!), so these are big data problems.
- Develop an analytics-centric product QA and reporting process to identify key changes in the quality of our data products.
- Work cross-functionally to define data quality requirements and expectations.
- Work with engineering to build quality checks not only at the end of the process, but throughout our data build processes.
- Assist with and further develop our quality assurance process.
- Work with product managers and engineers to dig into issues in our data and identify gaps in our processes.
- Communicate key product quality insights and opportunities to the product and engineering teams to drive improvements and resolutions.
Technical Requirements
- 3-5+ years of industry experience, with clear examples of strategic and analytical technical problem solving and implementation
- Strong analytics fundamentals (a deep understanding of what it takes to go from raw data to data suitable for drawing analytic conclusions)
- Ability to deeply analyze data and uncover insights, understand how various data points converge to form a cohesive narrative, and make business decisions based on that narrative.
- Expertise with SQL & Python
- Experience working with business intelligence tools and dashboard creation
- Experience with data processing (e.g., cleaning, transformation)
- Experience with data warehouse operational design patterns (e.g., incremental updating, partitioning and segmentation, rebuilds and backfills)
- Experience with data warehousing platforms (e.g., Databricks, Snowflake, Redshift, BigQuery, or similar)
- Experience with data warehouse access patterns (e.g., OLAP, star schema, snowflake schema, or similar)
- Understanding of modern data storage formats and tools (e.g., columnar vs. row-based vs. key-value storage)
- Experience assessing query cost (in environments that are billed by computation) and query performance (for general run-time)
- Experience with data flow tools (e.g., Airflow, dbt, or similar)
- Experience with the software development lifecycle (using version control systems, submitting and reviewing pull requests, being able to interact with a CI/CD system to deploy analysis, or similar)
- Experience evaluating data quality and maintaining consistently high data standards across new feature releases (e.g., consistency, accuracy, validity, completeness)
Professional Requirements
- Must thrive in a fast-paced environment and be able to work independently
- Can work effectively in a remote setting (proactive about managing blockers, reaching out with questions, and participating in team activities)
- Strong written communication skills on Slack/Chat and in documents
- Experience writing data design docs (pipeline design, dataflow, schema design)
- Can scope and break down projects, and effectively communicate progress and blockers to your manager, team, and stakeholders
- Experience collaborating with Product, Engineering, and Revenue teams
Nice To Haves:
- Degree in a quantitative discipline such as computer science, mathematics, statistics, or engineering
- Experience with Apache Spark or PySpark
- Experience working in Databricks (including delta live tables, data lakehouse patterns, etc.)
- Experience working in Snowflake
- Experience with cloud computing services (AWS (preferred), GCP, Azure, or similar)
- Experience with log analysis platforms (e.g., ELK, Elasticsearch, Datadog, New Relic, Sumo Logic, raw text processing, or similar)
- Experience working with data acquisition / third-party data integrations
- Experience working in data procurement or assessment of third-party data
- Expertise with the Python data stack (e.g., numpy, pandas, PySpark, or similar)
Our Benefits
- Stock
- Competitive Salaries
- Unlimited paid time off
- Medical, dental, & vision insurance
- Health, fitness, and office stipends
- The permanent ability to work wherever and however you want
No C2C, 1099, or Contract-to-Hire. Recruiters need not apply.
People Data Labs does not discriminate on the basis of race, sex, color, religion, age, national origin, marital status, disability, veteran status, genetic information, sexual orientation, gender identity, or any other reason prohibited by law in the provision of employment opportunities and benefits.