Applied Safety Research Engineer, Safeguards
Anthropic
Location
USA
Type
Full-time
Posted
Jan 15, 2026
Compensation
USD 320,000 – 405,000
Mission
What you will drive
Core responsibilities:
- Design and run experiments to improve evaluation quality—developing methods to generate representative test data, simulate realistic user behavior, and validate grading accuracy
- Research how different factors (multi-turn conversations, tools, long context, user diversity) impact model safety behavior
- Analyze evaluation coverage to identify gaps and inform where we need better measurement
- Productionize successful research into evaluation pipelines that run during model training, at launch, and beyond
Impact
The difference you'll make
Your work will directly shape how Anthropic understands and improves the safety of our models across misuse, prompt injection, and user well-being. In doing so, you'll contribute to building reliable, interpretable, and steerable AI systems that are safe and beneficial for society.
Profile
What makes you a great fit
Required qualifications:
- Have 4+ years of software engineering or ML engineering experience
- Are proficient in Python and comfortable working across the stack
- Have experience building and maintaining data pipelines
- Are comfortable with data analysis and can draw insights from large datasets
- Have experience with LLMs and understand their capabilities and failure modes
- Can move fluidly between prototyping and production-quality code
- Are excited by ambiguous problems and can translate them into concrete experiments
- Care deeply about AI safety and want your work to have real impact
Benefits
What's in it for you
Competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Annual salary range: $320,000 – $405,000 USD.
About
Inside Anthropic
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.