Impact Careers

ML/Research Engineer, Safeguards

Anthropic

Posted: Dec 01, 2025

Location: USA

Type: Full-time

Compensation: $350,000 – $500,000

What you will drive

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

We are looking for ML Engineers and Research Engineers to help detect and mitigate misuse of our AI systems. As a member of the Safeguards ML team, you will build systems that identify harmful use—from individual policy violations to sophisticated, coordinated attacks—and develop defenses that keep our products safe as capabilities advance. You will also work on systems that protect user wellbeing and ensure our models behave appropriately across a wide range of contexts. This work feeds directly into Anthropic's Responsible Scaling Policy commitments.

Responsibilities

  • Develop classifiers to detect misuse and anomalous behavior at scale. This includes building synthetic data pipelines for training classifiers and methods to automatically source representative evaluations to iterate on (a rough illustrative sketch follows this list)

  • Build systems to monitor for harms that span multiple exchanges, such as coordinated cyber attacks and influence operations, and develop new methods for aggregating and analyzing signals across contexts

  • Evaluate and improve the safety of agentic products—developing both threat models and environments to test for agentic risks, and developing and deploying mitigations for prompt injection attacks

  • Conduct research on automated red-teaming, adversarial robustness, and other research that helps test for or find misuse
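
To make the classifier bullet above concrete, here is a minimal, purely illustrative sketch of training a misuse classifier on synthetic prompt data. It assumes scikit-learn (one of the frameworks mentioned later in this posting); the example prompts, labels, and bag-of-words model are hypothetical placeholders, not Anthropic's actual pipeline, which would operate at far larger scale and more likely use model-based features.

```python
# Illustrative only: a toy misuse classifier over synthetic prompts.
# All data and labels here are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

# Hypothetical synthetic training examples: (prompt text, 1 = policy-violating).
examples = [
    ("Write a phishing email impersonating a bank", 1),
    ("Explain how TLS certificate pinning works", 0),
    ("Generate 500 fake five-star reviews for my product", 1),
    ("Summarize this paper on protein folding", 0),
] * 50  # repeated only so the toy train/test split has enough rows

texts, labels = map(list, zip(*examples))
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0
)

# Simple bag-of-words baseline; a production system would more likely rely on
# model-based features or a fine-tuned language-model classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```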

You may be a good fit if you

  • Have 4+ years of experience in ML engineering, research engineering, or applied research, in academia or industry

  • Have proficiency in Python and experience building ML systems

  • Are comfortable working across the research-to-deployment pipeline, from exploratory experiments to production systems

  • Are worried about misuse risks of AI systems, and want to work to mitigate them

  • Have strong communication skills and ability to explain complex technical concepts to non-technical stakeholders

Strong candidates may also have experience with

  • Language modeling and transformers

  • Building classifiers, anomaly detection systems, or behavioral ML

  • Adversarial machine learning or red-teaming

  • Interpretability or probes

  • Reinforcement learning

  • High-performance, large-scale ML systems

The annual compensation range for this role is listed below. 

For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Annual Salary: $350,000 – $500,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.

What makes you a great fit

About the role

We are looking for ML engineers to help build safety and oversight mechanisms for our AI systems. As a Safeguards Machine Learning Engineer, you will work to train models which detect harmful behaviors and help ensure user well-being. You will apply your technical skills to uphold our principles of safety, transparency, and oversight while enforcing our terms of service and acceptable use policies.

Responsibilities:

  • Build machine learning models to detect unwanted or anomalous behaviors from users and API partners, and integrate them into our production system (a minimal anomaly-detection sketch follows this list)

  • Improve our automated detection and enforcement systems as needed

  • Analyze user reports of inappropriate accounts and build machine learning models to detect similar instances proactively

  • Surface abuse patterns to our research teams to harden models at the training stage
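
To ground the anomaly-detection bullet above, here is a minimal, purely illustrative sketch using an unsupervised detector over per-account usage features. It assumes scikit-learn (listed under "Strong candidates" below); the feature names, synthetic data, and contamination rate are hypothetical stand-ins, not Anthropic's actual signals or thresholds.

```python
# Illustrative only: flag accounts whose usage pattern looks anomalous.
# Feature names, data, and thresholds are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-account features:
# [requests per hour, refusal rate, fraction of near-duplicate prompts]
typical = rng.normal(loc=[50, 0.02, 0.05], scale=[15, 0.01, 0.02], size=(1000, 3))
unusual = rng.normal(loc=[400, 0.30, 0.80], scale=[50, 0.05, 0.05], size=(10, 3))
features = np.vstack([typical, unusual])

# Unsupervised detector: isolates accounts that differ from the bulk of traffic.
detector = IsolationForest(contamination=0.01, random_state=0).fit(features)
flags = detector.predict(features)  # -1 = anomalous, 1 = normal

flagged = np.where(flags == -1)[0]
print(f"Flagged {len(flagged)} accounts for human review: {flagged[:10]}")
```

In practice, flagged accounts would feed into review and enforcement workflows rather than being actioned automatically; the sketch only shows the shape of the detection step.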

You may be a good fit if you:

  • Have 4+ years of experience in a research/ML engineering or an applied research scientist position, preferably with a focus on AI safety.

  • Have proficiency in Python, SQL, and data analysis/data mining tools, as well as experience working with LLMs.

  • Have proficiency in building safe AI/ML systems, such as behavioral classifiers or anomaly detection.

  • Have strong communication skills and ability to explain complex technical concepts to non-technical stakeholders.

  • Care about the societal impacts and long-term implications of your work.

Strong candidates may also have experience with:

  • Machine learning frameworks like Scikit-Learn, TensorFlow, or PyTorch

  • High-performance, large-scale ML systems

  • Language modeling with transformers

  • Reinforcement learning

  • Large-scale ETL

The expected base compensation for this position is below. Our total compensation package for full-time employees includes equity, benefits, and may include incentive compensation.
Annual Salary: $315,000 – $425,000 USD


Inside Anthropic

Anthropic is a frontier AI research and product company, with teams working on alignment, policy, and security. We post specific opportunities at Anthropic that we think may be high impact. We do not necessarily recommend working at other positions at Anthropic. You can read concerns about doing harm by working at a frontier AI company in our career review on the topic.