Software Engineer, Safeguards
Anthropic
Location
San Francisco, CA, USA
Type
Full-time
Posted
Dec 13, 2025
Compensation
USD 320,000 – 425,000
## **About Anthropic**
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
## **About the role**
We are looking for software engineers to help build safety and oversight mechanisms for our AI systems. As a software engineer on the Safeguards team, you will work to monitor models, prevent misuse, and ensure user well-being. This role will focus on building systems to detect unwanted model behaviors and prevent disallowed use of models. You will apply your technical skills to uphold our principles of safety, transparency, and oversight while enforcing our terms of service and acceptable use policies.
## **Responsibilities:**
• Develop monitoring systems that detect unwanted behaviors from our API partners and can take automated enforcement actions; surface detections in internal dashboards for analysts to review manually
• Build abuse detection mechanisms and infrastructure
• Surface abuse patterns to our research teams to harden models at the training stage
• Build robust, reliable, multi-layered defenses that improve safety mechanisms in real time and work at scale
• Analyze user reports of inappropriate content or accounts
## **You may be a good fit if you:**
• Have a Bachelor’s degree in Computer Science or Software Engineering, or comparable experience
• Have 5-10+ years of experience in a software engineering role, preferably with a focus on integrity, spam, fraud, or abuse detection and mitigation
• Are proficient in Python and TypeScript
• Can work across the stack
• Have strong communication skills and can explain complex technical concepts to non-technical stakeholders
## **Strong candidates may also:**
• Have experience building trust and safety detection mechanisms and interventions for AI/ML systems
• Have experience with prompt engineering, jailbreak attacks, and other adversarial inputs
• Have worked closely with operational teams to build custom internal tooling
**Deadline to apply:** None. Applications will be reviewed on a rolling basis.
The expected base compensation for this position is below. Our total compensation package for full-time employees includes equity, benefits, and may include incentive compensation.
Annual Salary: $320,000 – $425,000 USD
## **Logistics**
**Education requirements:** We require at least a Bachelor's degree in a related field or equivalent experience.
**Location-based hybrid policy:** Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
**Visa sponsorship:** We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
**We encourage you to apply even if you do not believe you meet every single qualification.** Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
## **How we're different**
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
## **Come work with us!**
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
**Guidance on Candidates' AI Usage:** Learn about our policy for using AI in our application process.
## **Inside Anthropic**
Anthropic is a frontier AI research and product company, with teams working on alignment, policy, and security. We post specific opportunities at Anthropic that we think may be high impact. We do not necessarily recommend working at other positions at Anthropic. You can read concerns about doing harm by working at a frontier AI company in our career review on the topic.