AI Security Research Engineer
0Labs
Posted
Apr 23, 2026
Location
Remote
Type
Full-time
Compensation
Not specified
Mission
What you will drive
- Support AI security research and product development, with a focus on continuous purple-teaming evaluation systems that benchmark how well monitoring mechanisms detect unsafe agent behavior.
- Extend evaluation scenarios and improve scoring frameworks for the AI control evaluation harness.
- Stress-test agent deployments across a range of attack surfaces to identify vulnerabilities.
- Run adversarial evaluations and design agent architectures based on research findings.
- Convert research discoveries into actionable engineering decisions that advance product development.
Impact
The difference you'll make
This role directly contributes to the safety and security of AI systems by developing robust evaluation methods that detect and mitigate unsafe agent behavior, thereby reducing risks from advanced AI.
Profile
What makes you a great fit
- Strong background in AI security, adversarial machine learning, or related fields.
- Experience with purple teaming, red teaming, or security evaluation of AI systems.
- Proficiency in programming (e.g., Python) and familiarity with AI/ML frameworks.
- Ability to translate research into engineering solutions.
Benefits
What's in it for you
Compensation and benefits are not specified in the posting.
About
Inside 0Labs
0Labs is an organization focused on AI security research and product development, aiming to ensure the safe deployment of AI systems through rigorous evaluation and monitoring.