
Technical Scaled Abuse Threat Investigator

Anthropic

Posted

Jan 15, 2026

Location

Remote (US)

Type

Full-time

Compensation

$230,000 - $290,000

Mission

What you will drive

  • Detect and investigate large-scale abuse patterns including model distillation, unauthorized API access, account farming, fraud schemes, and scam operations
  • Develop abuse signals and tracking strategies to proactively identify scaled adversarial activity and coordinated abuse networks
  • Conduct technical investigations using SQL, Python, and data science methodologies to analyze large datasets and uncover sophisticated abuse patterns
  • Create actionable intelligence reports on new attack vectors, vulnerabilities, and threat actor TTPs targeting AI systems at scale
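The signal-development work described above can be illustrated with a minimal sketch: clustering accounts by a shared network fingerprint and flagging clusters where many accounts funnel high request volume through one source, a common signature of account farming. The schema (`account_id`, `ip`, `requests`) and the thresholds are illustrative assumptions, not Anthropic's actual tooling or data model.

```python
from collections import defaultdict

def find_abuse_clusters(events, min_accounts=3, min_requests=1000):
    """Flag network fingerprints where several accounts together
    generate high API request volume (hypothetical abuse signal)."""
    by_fingerprint = defaultdict(lambda: {"accounts": set(), "requests": 0})
    for e in events:
        bucket = by_fingerprint[e["ip"]]
        bucket["accounts"].add(e["account_id"])
        bucket["requests"] += e["requests"]
    # Keep only clusters that cross both thresholds.
    return {
        ip: sorted(b["accounts"])
        for ip, b in by_fingerprint.items()
        if len(b["accounts"]) >= min_accounts and b["requests"] >= min_requests
    }

# Toy log: three accounts sharing one IP with heavy usage, one benign account.
events = [
    {"account_id": "a1", "ip": "203.0.113.7", "requests": 400},
    {"account_id": "a2", "ip": "203.0.113.7", "requests": 500},
    {"account_id": "a3", "ip": "203.0.113.7", "requests": 300},
    {"account_id": "b1", "ip": "198.51.100.2", "requests": 50},
]
print(find_abuse_clusters(events))  # → {'203.0.113.7': ['a1', 'a2', 'a3']}
```

In practice this kind of rule would be one signal among many, combined with behavioral and payment features rather than used on its own.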

Impact

The difference you'll make

Your work will directly inform our defenses against threat actors who seek to exploit our products for financial gain, for competitive advantage, or to cause widespread harm, helping to create reliable, interpretable, and steerable AI systems that are safe and beneficial for society.

Profile

What makes you a great fit

  • Strong proficiency in SQL and Python with a data science background
  • Experience with large language models and understanding of how AI technology could be exploited at scale
  • Subject matter expertise in abusive user behavior detection, fraud patterns, account abuse, or platform integrity
  • Experience tracking threat actors across surface, deep, and dark web environments

Benefits

What's in it for you

Competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

About

Inside Anthropic


Anthropic is a frontier AI research and product company, with teams working on alignment, policy, and security. We post specific opportunities at Anthropic that we think may be high impact. We do not necessarily recommend other roles at Anthropic. You can read about the potential harms of working at a frontier AI company in our career review on the topic.