Impact Careers Full-time

Red Team Engineer, Safeguards

Anthropic

Posted

Jan 25, 2026

Location

Remote (US)

Type

Full-time

Compensation

$300000 - $320000

Mission

What you will drive

  • Conduct comprehensive adversarial testing across Anthropic's product surfaces, developing creative attack scenarios that combine multiple exploitation techniques
  • Research and implement novel testing approaches for emerging capabilities, including agent systems, tool use, and new interaction paradigms
  • Design and execute 'full kill chain' attacks that emulate real-world threat actors attempting to achieve specific malicious objectives
  • Build and maintain systematic testing methodologies and automated testing frameworks to enable continuous assessment at scale

Impact

The difference you'll make

This role helps ensure the safety of Anthropic's deployed AI systems and products by taking an adversarial approach to uncover vulnerabilities before they can be exploited by malicious actors, focusing on broader safety implications and novel abuse unique to advanced AI systems.

Profile

What makes you a great fit

  • Demonstrated experience in penetration testing, red teaming, or application security
  • Strong technical skills in web application security, including hands-on expertise with security testing tools (Burp Suite, Metasploit, custom scripting frameworks, etc.)
  • A track record of discovering novel attack vectors and chaining vulnerabilities in creative ways
  • A public body of work such as CVEs, blog posts, or disclosed bug bounty reports
  • Experience with security testing tools and the ability to build custom automation

Benefits

What's in it for you

Competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

About

Inside Anthropic

Visit site →

Anthropic is a frontier AI research and product company, with teams working on alignment, policy, and security. We post specific opportunities at Anthropic that we think may be high impact. We do not necessarily recommend working at other positions at Anthropic. You can read concerns about doing harm by working at a frontier AI company in our career review on the topic.