Engineering Manager, Detection and Response
Anthropic
Location
USA
Type
Full-time
Posted
Jan 20, 2026
Compensation
USD 405,000 – 485,000
Mission
What you will drive
Core responsibilities:
- Manage and grow a high-performing Detection and Response team, planning strategy and hiring to support Anthropic's rapid growth and unique AI safety requirements
- Navigate prioritization in a fast-paced frontier environment, balancing operational demands with building innovative, scalable solutions for the future
- Collaborate across security engineering teams to build comprehensive prevention, observability, detection, and response capabilities throughout the security lifecycle
- Facilitate development of scalable, AI-leveraged Detection and Response solutions that enable self-service observability and detection capabilities across Anthropic
Impact
The difference you'll make
In this role, you will build comprehensive Security Observability, Detection Lifecycle, and Security Incident Response programs at Anthropic, helping to ensure AI systems are safe and beneficial for users and society as a whole.
Profile
What makes you a great fit
Required skills and experience:
- 10+ years building detection and response capabilities in a cloud-native organization
- 5+ years of engineering management experience with a proven track record of building and scaling security teams
- Deep understanding of security monitoring, threat detection, incident response, and forensics best practices
- Experience securing complex cloud environments (Kubernetes, AWS/GCP) with modern detection technologies
- Knowledge of AI/ML security risks, detection patterns, and response strategies
Benefits
What's in it for you
Competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Annual salary range: $405,000 - $485,000 USD.
About
Inside Anthropic
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. They want AI to be safe and beneficial for users and society as a whole, and they work as a single cohesive team on large-scale research efforts to advance the long-term goal of steerable, trustworthy AI.