AI Safety & Governance

Technical CBRN-E Threat Investigator

Anthropic

Location

USA

Type

Full-time

Posted

Jan 15, 2026

Compensation

USD 230,000 – 290,000

Mission

What you will drive

Core responsibilities:

  • Detect and investigate attempts to misuse Anthropic's AI systems for developing, enhancing, or disseminating CBRN-E weapons, pathogens, toxins, or other threats
  • Conduct technical investigations using SQL, Python, and other tools to analyze large datasets and trace user behavior patterns (see the sketch after this list)
  • Develop CBRN-E-specific detection capabilities, including abuse signals, tracking strategies, and detection methodologies
  • Create actionable intelligence reports on CBRN-E attack vectors, vulnerabilities, and the tactics, techniques, and procedures (TTPs) of threat actors leveraging AI systems
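
For illustration only, here is a minimal sketch of the kind of pattern-tracing query this work might involve, combining SQL and Python as described above. It is not Anthropic's actual tooling, and every table and column name (conversations, user_id, risk_score, flagged_topic, created_at) is a hypothetical placeholder:

    # Minimal sketch: surface accounts with repeated high-risk activity.
    # All table/column names are hypothetical placeholders, not a real schema.
    import sqlite3

    QUERY = """
    SELECT user_id,
           COUNT(*)        AS flagged_events,
           AVG(risk_score) AS mean_risk
    FROM conversations
    WHERE flagged_topic = 'cbrn'
      AND created_at >= DATE('now', '-30 days')
    GROUP BY user_id
    HAVING COUNT(*) >= 3   -- repeated probing, not a one-off query
    ORDER BY mean_risk DESC;
    """

    def top_suspect_accounts(db_path: str) -> list[tuple]:
        """Return (user_id, flagged_events, mean_risk) rows for the last 30 days."""
        with sqlite3.connect(db_path) as conn:
            return conn.execute(QUERY).fetchall()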

Impact

The difference you'll make

This role protects against the most serious potential misuses of AI systems for CBRN-E threats. Working at the intersection of AI safety and CBRN security, you will build robust defenses against threat actors who may attempt to leverage AI technology to develop weapons or cause biological harm.

Profile

What makes you a great fit

Required qualifications:

  • Deep domain expertise in biosecurity, chemical defense, biological weapons non-proliferation, dual-use research of concern (DURC), synthetic biology, or related CBRN-E threat domains
  • Demonstrated proficiency in SQL and Python for data analysis and threat detection (see the sketch after this list)
  • Experience with threat actor profiling and utilizing threat intelligence frameworks
  • Hands-on experience with large language models and understanding of how AI technology could be misused for CBRN-E threats
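
For a hedged illustration of the Python-side detection work referenced above, here is a toy abuse-signal heuristic. It is not a real detection methodology; the indicator patterns and threshold are invented placeholders:

    # Toy abuse-signal score: flag a message when several independent
    # risk indicators co-occur. Patterns and threshold are illustrative
    # placeholders, not an actual methodology.
    import re

    RISK_PATTERNS = [
        r"\bsynthesis route\b",     # placeholder dual-use phrasing
        r"\bweaponi[sz]ation\b",    # placeholder threat phrasing
        r"\bdelivery mechanism\b",  # placeholder threat phrasing
    ]

    def abuse_signal(text: str, threshold: int = 2) -> bool:
        """True when at least `threshold` distinct indicators match."""
        hits = sum(bool(re.search(p, text, re.IGNORECASE)) for p in RISK_PATTERNS)
        return hits >= threshold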

Benefits

What's in it for you

Competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Annual salary range: $230,000 – $290,000 USD.

About

Inside Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems that are safe and beneficial for users and society as a whole. The company works as a single cohesive team on large-scale research efforts that advance its long-term goals of steerable, trustworthy AI.