AI Safety & Governance

Product Policy Manager, Frontier Risk

Anthropic

Location

USA

Type

Full-time

Posted

Jan 15, 2026

Compensation

USD 200,000

Mission

What you will drive

Core responsibilities:

  • Develop and maintain risk assessment frameworks to identify and evaluate potential safety risks associated with new product features and functionality
  • Conduct comprehensive product safety reviews, covering technical and non-technical harms, to inform product launch and safety mitigation strategies
  • Analyze the potential for misuse, unintended consequences, and harmful outputs of new model and product capabilities
  • Craft policy recommendations that strike a balance between enabling innovation and ensuring responsible AI deployment

Impact

The difference you'll make

This role upholds Anthropic's commitment to safe and beneficial AI by developing policies that balance innovation with responsibility, ensuring AI systems continue to be deployed safely as product capabilities expand.

Profile

What makes you a great fit

Required skills and qualifications:

  • Strong technical background and the ability to explain technical concepts to non-technical stakeholders
  • Experience conducting risk evaluations of novel products in fast-moving organizations
  • Familiarity with AI ethics, responsible AI principles, and current debates surrounding AI safety and governance
  • Strong project management skills, with the ability to drive policy development processes from ideation to implementation

Benefits

What's in it for you

Competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space for collaboration.

About

Inside Anthropic


Anthropic's mission is to create reliable, interpretable, and steerable AI systems, aiming to make AI safe and beneficial for users and society as a whole.