
Data Scientist, Evals

Perplexity

Data Science
San Francisco, CA, USA · Berlin, Germany
USD 210k–385k / year + Equity
Posted on Feb 13, 2026

Location: London, New York City, Belgrade

Employment Type: Full time

Location Type: Hybrid

Department: Data Science

Compensation: $210K – $385K · Offers Equity

U.S. Benefits

Full-time U.S. employees enjoy a comprehensive benefits program including equity, health, dental, vision, retirement, fitness, commuter and dependent care accounts, and more.

International Benefits

Full-time employees outside the U.S. enjoy a comprehensive benefits program tailored to their region of residence.

USD salary ranges apply only to U.S.-based positions. International salaries are set based on the local market. Final offer amounts are determined by multiple factors, including experience and expertise, and may vary from the amounts listed above.

Perplexity serves tens of millions of users daily with reliable, high-quality answers grounded in an LLM-first search engine and our specialized data sources. We aim to use the latest models as they are released, but the intelligence frontier is a jagged one, and popular benchmarks do not effectively cover our use cases. In this role, you will build specialized evals to improve answer quality across Perplexity, covering search-based LLM answers and other scenarios popular with our users.

Responsibilities

  • Architect and maintain automated evaluation pipelines to assess answer quality across Perplexity's products, ensuring high standards for accuracy and helpfulness

  • Design evaluation sets and methods specifically to measure the impact of tool calls (particularly web search retrieval) on the final answer's quality

  • Develop VLM-based solutions to programmatically evaluate how final answers render visually across different platforms and devices

  • Continuously review public benchmarks and academic evaluations for their applicability to the Perplexity product, adapting and incorporating them into our regular performance measurements

  • Operate within a small, high-impact team where your evaluation metrics directly shape product changes, collaborating closely with technical leadership to measure and improve Answer Quality

Qualifications

  • PhD or MS in a technical field or equivalent experience

  • 4+ years of experience in data science or machine learning

  • Strong proficiency in Python and SQL (expected to write production-grade code)

  • Experience building within a modern cloud data stack, specifically AWS and Databricks

  • Comfortable with agentic coding workflows and using AI-assisted development tools to iterate faster

Preferred Qualifications

  • 1+ years of experience working with LLMs at scale, specifically with LLM-as-a-judge setups

  • Prior experience working on customer-facing web products or consumer apps, with real user traffic at scale

  • A strong research background, with experience applying research methods to real-world ML problems

  • Experience defining evaluation metrics (e.g., factual consistency, hallucination rate, retrieval precision) and building ground truth datasets

Compensation Range: $210K – $385K