AI Safety & Governance Full-time

Policy Manager, Chemical Weapons and High Yield Explosives

Anthropic

Location

USA

Type

Full-time

Posted

Jan 15, 2026

Compensation

USD 245,000 – 285,000

Mission

What you will drive

Core responsibilities:

  • Design and implement evaluation methodologies for assessing AI model capabilities relevant to chemical weapons, explosives synthesis, and energetic materials
  • Develop and execute strategies to identify and mitigate potential chemical/explosives misuse in model outputs
  • Create chemical/explosives threat models, including precursor identification, synthesis routes, and weaponization techniques
  • Review and analyze traffic to identify potential policy violations related to chemical/explosives content
  • Collaborate with software engineers to develop and refine detection systems and automated enforcement tools for chemical/explosives threats
  • Conduct rapid response to escalations involving dangerous chemical/explosives queries
  • Collaborate across teams to establish safety benchmarks and develop appropriate model guardrails
  • Translate chemical/explosives domain knowledge into actionable safety requirements
  • Develop approaches to assess model knowledge boundaries for dual-use chemical and explosives information
  • Monitor emerging threats in the chemical/explosives landscape to inform policy development

Impact

The difference you'll make

This role offers a unique opportunity to shape how AI systems handle sensitive chemical and explosives information, working with leading AI safety researchers to prevent catastrophic misuse and keep AI systems safe and beneficial.

Profile

What makes you a great fit

Required qualifications:

  • Ph.D. in Chemistry, Chemical Engineering, or a related field with a focus on energetic materials, explosives, and/or chemical weapons
  • 5-8+ years of experience in chemical weapons and/or explosives defense, with deep expertise in energetic materials, chemical weapon agents, or related areas
  • Knowledge of high yield explosives application to radiological dispersal devices (dirty bombs) and related radiological weapons
  • Track record of translating specialized technical knowledge into actionable safety policies or guidelines
  • Comfortable navigating ambiguity and developing solutions for novel safety challenges
  • Ability to work independently while maintaining strong collaboration with cross-functional teams
  • Ability to thrive in fast-paced environments, balancing rigorous scientific standards with rapid threat response
  • Passionate about preventing misuse of dangerous technical knowledge while enabling beneficial applications

Benefits

What's in it for you

Competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Annual salary range: $245,000 – $285,000 USD.

About

Inside Anthropic


Anthropic's mission is to create reliable, interpretable, and steerable AI systems, aiming to make AI safe and beneficial for users and society as a whole. The organization is a quickly growing group of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.