Biological Safety Research Scientist

Anthropic

Posted: Jan 15, 2026
Location: USA
Type: Full-time
Compensation: $300,000 - $320,000

Mission

What you will drive

  • Design and execute capability evaluations ("evals") of new AI models
  • Collaborate with threat modeling experts and ML engineers to develop and train safety systems that detect harmful behaviors and prevent misuse
  • Analyze safety system performance, identify gaps, and propose improvements through rigorous stress-testing against evolving threats
  • Partner with Research, Product, and Policy teams to embed biological safety throughout the model development lifecycle

Impact

The difference you'll make

This role creates positive change by developing safety and oversight mechanisms that prevent misuse of AI systems in the biological domain, preserving AI's potential to accelerate legitimate life sciences research while stopping sophisticated threat actors from causing harm.

Profile

What makes you a great fit

  • PhD in molecular biology, virology, microbiology, biochemistry, systems/computational biology, or related life sciences field, OR equivalent professional experience
  • Extensive experience in scientific computing and data analysis with proficiency in programming (Python preferred)
  • Deep expertise in modern biology techniques including both "reading" (high-throughput measurement, functional assays) and "writing" (gene synthesis, genome editing, strain construction, protein engineering)
  • Familiarity with dual-use research concerns, select agent regulations, and biosecurity frameworks (e.g., Biological Weapons Convention, Australia Group guidelines)

Benefits

What's in it for you

Competitive compensation ($300,000-$320,000 USD annual salary), optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in San Francisco for collaboration.

About

Inside Anthropic


Anthropic is a frontier AI research and product company, with teams working on alignment, policy, and security. We post specific opportunities at Anthropic that we think may be high impact; we do not necessarily recommend other positions at the company. You can read about concerns regarding doing harm by working at a frontier AI company in our career review on the topic.