Research Scientist - Red Team
AI Security Institute (AISI)
Posted
Feb 10, 2026
Location
UK
Type
Full-time
Compensation
£65,000–£135,000
Mission
What you will drive
- Design, build, run, and assess automated methods for attacking and evaluating safeguards on frontier AI systems
- Design and run experiments testing measures to keep misaligned AI systems under human control
- Build benchmarks for monitoring misuse and jailbreak development across multiple model interactions
- Investigate novel attacks and defenses for data poisoning of LLMs, including backdoor insertion and other attacker goals
Impact
The difference you'll make
This role advances research on attacking and defending frontier AI systems while deepening government understanding of misuse and misalignment risks, both of which are critical to the safe and secure deployment of advanced AI worldwide.
Profile
What makes you a great fit
- Hands-on research experience with large language models (training, fine-tuning, evaluation, or safety research)
- Demonstrated track record of peer-reviewed publications in top-tier ML conferences or journals
- Ability to write clean, documented research code for ML experiments using frameworks like PyTorch
- Experience with adversarial robustness, AI security, red teaming, AI alignment, or AI control
Benefits
What's in it for you
Salary range: £65,000–£145,000 (base plus technical allowance) with a 28.97% employer pension contribution. Benefits include:
- Modern central London office, or the option to work from other UK government offices
- Hybrid working with flexibility for occasional remote work abroad
- At least 25 days annual leave plus public holidays and team breaks
- Generous paid parental leave (36 weeks UK statutory plus 3 extra paid weeks)
- Learning and development stipends
- Pre-release access to frontier models and ample compute
- Discounts for cycling, donations, retail, and gyms