Application Guide

How to Apply for Policy Manager, Harmful Persuasion at Anthropic

🏢 About Anthropic

Anthropic is a frontier AI research and product company focused on AI safety and alignment, distinguishing itself through its constitutional AI approach and public commitment to responsible development. The company explicitly acknowledges the potential harms of working at frontier AI labs, as noted in the 80,000 Hours career review of the company, indicating a transparent and ethically conscious culture. This makes it particularly appealing for policy professionals who want to work at the intersection of cutting-edge AI and meaningful harm prevention.

About This Role

This Policy Manager role focuses specifically on harmful persuasion risks, requiring the development of comprehensive policy frameworks covering election integrity, influence operations, and fraud in AI systems. You'll be responsible for creating enforceable policy language that translates into technical detection requirements, and for designing evaluations that assess model capabilities for deceptive persuasion. The position is impactful because it directly addresses some of the most pressing AI safety concerns in the current regulatory landscape.

💡 A Day in the Life

A typical day might involve collaborating with technical teams to translate newly identified harmful persuasion techniques into clear policy language and detection requirements, while also designing evaluation protocols to test model vulnerabilities. You'd likely spend time analyzing emerging threats in election integrity or influence operations and updating policy frameworks accordingly, with regular cross-functional meetings to ensure alignment between policy, research, and product teams.

🎯 Who Anthropic Is Looking For

  • Has 5+ years of specific policy development experience across election integrity, fraud/scams, coordinated inauthentic behavior, or influence operations (not just general policy work)
  • Possesses working knowledge of global regulatory frameworks around election integrity and digital services accountability, particularly relevant to AI systems
  • Demonstrates proven ability to translate complex risk frameworks into clear, enforceable policy language that technical teams can implement
  • Has experience designing and overseeing evaluations of harmful content or behavior, particularly in the context of emerging technologies

📝 Tips for Applying to Anthropic

1. Explicitly quantify your 5+ years of experience in the specific domains mentioned (election integrity, fraud/scams, CIB, influence operations) rather than just stating general policy experience

2. Include concrete examples of policy frameworks you've developed that were translated into technical detection requirements or enforcement protocols

3. Reference Anthropic's constitutional AI approach and how your policy experience aligns with their safety-focused methodology

4. Demonstrate awareness of the ethical considerations raised in the 80,000 Hours career review about working at frontier AI companies

5. Show how your experience bridges the gap between policy development and technical implementation, as this role requires both policy writing and evaluation design

✉️ What to Emphasize in Your Cover Letter

  • Your specific experience with election integrity and influence operations policy development, with concrete examples of frameworks you've created
  • How you've previously translated policy language into technical requirements or detection systems
  • Your understanding of harmful persuasion techniques and how they manifest in AI systems
  • Why you're specifically interested in Anthropic's approach to AI safety and their transparent acknowledgment of potential harms in frontier AI work


🔍 Research Before Applying

To stand out, make sure you've researched:

  • Anthropic's constitutional AI approach and their published research on AI safety and alignment
  • Their existing usage policies and public statements about harmful content prevention
  • The specific concerns raised in the 80,000 Hours career review about working at frontier AI labs
  • Recent developments in AI regulation, particularly around election integrity and digital services accountability

💬 Prepare for These Interview Topics

Based on this role, you may be asked about:

1. Walk through a specific policy framework you developed for election integrity or influence operations and how it was implemented
2. How would you design an evaluation to assess an AI model's capability to execute deceptive persuasive techniques?
3. What are the unique challenges in creating usage policy language for AI systems versus traditional platforms?
4. How do you stay current with the global regulatory landscape around digital services and election integrity?
5. How would you approach the tension between preventing harmful persuasion and maintaining legitimate uses of persuasive AI?

⚠️ Common Mistakes to Avoid

  • Applying with only general policy experience without specific examples in election integrity, fraud, or influence operations
  • Failing to demonstrate how policy translates into technical implementation or evaluation design
  • Not addressing the ethical considerations of working at a frontier AI company that Anthropic explicitly references
  • Using generic AI policy language without showing understanding of harmful persuasion-specific risks

📅 Application Timeline

This position is open until filled. However, we recommend applying soon, since roles at mission-driven organizations tend to fill quickly.

Typical hiring timeline:

1. Application Review: 1-2 weeks
2. Initial Screening: Phone call or written assessment
3. Interviews: 1-2 rounds, usually virtual
4. Offer: Congratulations!

Ready to Apply?

Good luck with your application to Anthropic!