Application Guide
How to Apply for Policy Manager, Harmful Persuasion
at Anthropic
🏢 About Anthropic
Anthropic is a frontier AI research and product company focused on AI safety and alignment, distinguishing itself through its constitutional AI approach and public commitment to responsible development. The company explicitly acknowledges the potential harms of working at frontier AI labs, as noted in the 80,000 Hours career review, reflecting a transparent and ethically conscious culture. This makes it particularly appealing for policy professionals who want to work at the intersection of cutting-edge AI and meaningful harm prevention.
About This Role
This Policy Manager role specifically focuses on harmful persuasion risks, requiring development of comprehensive policy frameworks for election integrity, influence operations, and fraud within AI systems. You'll be responsible for creating enforceable policy language that translates into technical detection requirements and designing evaluations to assess model capabilities for deceptive persuasion. This position is impactful because it directly addresses some of the most pressing AI safety concerns in the current regulatory landscape.
💡 A Day in the Life
A typical day might involve collaborating with technical teams to translate newly identified harmful persuasion techniques into clear policy language and detection requirements, while also designing evaluation protocols to test model vulnerabilities. You'd likely spend time analyzing emerging threats in election integrity or influence operations and updating policy frameworks accordingly, with regular cross-functional meetings to ensure alignment between policy, research, and product teams.
🚀 Application Tools
🎯 Who Anthropic Is Looking For
- Has 5+ years of specific policy development experience across election integrity, fraud/scams, coordinated inauthentic behavior, or influence operations (not just general policy work)
- Possesses working knowledge of global regulatory frameworks around election integrity and digital services accountability, particularly relevant to AI systems
- Demonstrates proven ability to translate complex risk frameworks into clear, enforceable policy language that technical teams can implement
- Has experience designing and overseeing evaluations of harmful content or behavior, particularly in the context of emerging technologies
📝 Tips for Applying to Anthropic
- Explicitly quantify your 5+ years of experience in the specific domains mentioned (election integrity, fraud/scams, CIB, influence operations) rather than just stating general policy experience
- Include concrete examples of policy frameworks you've developed that were translated into technical detection requirements or enforcement protocols
- Reference Anthropic's constitutional AI approach and explain how your policy experience aligns with their safety-focused methodology
- Demonstrate awareness of the ethical considerations raised in the 80,000 Hours career review about working at frontier AI companies
- Show how your experience bridges the gap between policy development and technical implementation, as this role requires both policy writing and evaluation design
✉️ What to Emphasize in Your Cover Letter
["Your specific experience with election integrity and influence operations policy development, with concrete examples of frameworks you've created", "How you've previously translated policy language into technical requirements or detection systems", 'Your understanding of harmful persuasion techniques and how they manifest in AI systems', "Why you're specifically interested in Anthropic's approach to AI safety and their transparent acknowledgment of potential harms in frontier AI work"]
🔍 Research Before Applying
To stand out, make sure you've researched:
- Anthropic's constitutional AI approach and their published research on AI safety and alignment
- Their existing usage policies and public statements about harmful content prevention
- The specific concerns raised in the 80,000 Hours career review about working at frontier AI labs
- Recent developments in AI regulation, particularly around election integrity and digital services accountability
💬 Prepare for These Interview Topics
Based on this role, you may be asked about:
- Your experience developing policy for election integrity, fraud/scams, coordinated inauthentic behavior, or influence operations
- How you would translate a complex risk framework into clear, enforceable policy language that technical teams can implement
- Your approach to designing evaluations that assess model capabilities for deceptive persuasion or other harmful behavior
- Your working knowledge of global regulatory frameworks around election integrity and digital services accountability
⚠️ Common Mistakes to Avoid
- Applying with only general policy experience without specific examples in election integrity, fraud, or influence operations
- Failing to demonstrate how policy translates into technical implementation or evaluation design
- Not addressing the ethical considerations of working at a frontier AI company that Anthropic explicitly references
- Using generic AI policy language without showing understanding of harmful persuasion-specific risks
📅 Application Timeline
This position is open until filled, but we recommend applying as soon as possible, as roles at mission-driven organizations tend to fill quickly.
Typical hiring timeline:
- Application Review: 1-2 weeks
- Initial Screening: phone call or written assessment
- Interviews: 1-2 rounds, usually virtual
- Offer: congratulations!