Application Guide
How to Apply for Research Scientist – Science of Evaluation
at AI Security Institute (AISI)
🏢 About AI Security Institute (AISI)
The AI Security Institute (AISI) is a UK-based organization focused on frontier AI safety and governance, operating at the intersection of technical research and policy impact. What makes AISI unique is its mission to translate cutting-edge evaluation science into actionable insights for AI safety and governance, bridging the gap between academic research and real-world policy decisions. For candidates, the draw is the chance to work on high-stakes problems with direct societal impact in a growing, mission-driven team.
About This Role
This Research Scientist role focuses on the 'Science of Evaluation'—developing novel methodologies and tools to measure and understand AI capabilities, particularly in LLMs and agents. The role involves designing experiments to extract deeper signals from evaluation data, stress-testing claims about AI systems, and producing policy-relevant reports. It's impactful because it directly informs AI safety standards and governance frameworks by providing rigorous, evidence-based assessments of frontier AI capabilities.
💡 A Day in the Life
A typical day might involve designing and running experiments to evaluate LLM capabilities, analyzing results to extract deeper signals about model behavior, and collaborating with policy teams to translate findings into actionable insights. You could spend time developing new evaluation tools, reviewing state-of-the-art research, and preparing conference submissions or internal reports that characterize AI capabilities for safety and governance purposes.
🚀 Application Tools
🎯 Who AI Security Institute (AISI) Is Looking For
- Has a PhD in ML, statistics, or a related technical field with publications at top-tier conferences (NeurIPS, ICML, ICLR) or substantial real-world deployment experience in evaluation methodology
- Possesses hands-on experience with LLMs and agents, including fine-tuning, prompting, and designing evaluations for these systems
- Demonstrates a strong motivation for work at the intersection of science, safety, and governance, with examples of previous impactful projects
- Is self-directed and adaptable, comfortable navigating ambiguity in a growing organization and taking initiative on open-ended research problems
📝 Tips for Applying to AI Security Institute (AISI)
- Highlight specific examples of evaluation methodology development in your research or work, particularly for LLMs/agents—quantify improvements in signal extraction or reliability
- Demonstrate your understanding of policy-relevant AI evaluation by referencing AISI's publications or similar work from organizations like Anthropic, OpenAI, or government AI safety institutes
- Showcase your ability to translate technical findings into actionable insights by including examples where your evaluation work informed decisions or reports
- Emphasize your conference presence (submissions, publications, reviews) at ML venues to align with AISI's goal of contributing to the research community
- Tailor your application to AISI's mission by explicitly connecting your experience to AI safety and governance outcomes, not just technical ML achievements
✉️ What to Emphasize in Your Cover Letter
- Your hands-on experience with LLMs/agents and how it informs your approach to evaluation methodology
- Specific examples of developing new techniques or tools for measuring AI capabilities, with emphasis on methodological innovation
- Your motivation for working at the intersection of science, safety, and governance, referencing AISI's mission and publications
- How you've navigated ambiguity in previous roles and contributed to growing teams or organizations
🔍 Research Before Applying
To stand out, make sure you've researched:
- Review AISI's published reports and research outputs to understand their evaluation frameworks and policy recommendations
- Study the UK's approach to AI governance and safety to contextualize AISI's role within the national and international landscape
- Explore AISI's presence at ML conferences (look for their workshops, papers, or presentations) to understand their research priorities
- Research similar organizations (like the UK's Frontier AI Taskforce or international AI safety institutes) to understand the ecosystem AISI operates within
⚠️ Common Mistakes to Avoid
- Focusing solely on ML model development without emphasizing evaluation methodology or experimental design experience
- Generic statements about AI safety without concrete examples of how your work informs governance or policy decisions
- Presenting yourself as purely academic without demonstrating ability to work in ambiguous, fast-paced environments typical of growing institutes
📅 Application Timeline
This position is open until filled. However, we recommend applying as soon as possible, since roles at mission-driven organizations tend to fill quickly.
Typical hiring timeline:
1. Application Review: 1-2 weeks
2. Initial Screening: phone call or written assessment
3. Interviews: 1-2 rounds, usually virtual
4. Offer: congratulations!
Ready to Apply?
Good luck with your application to AI Security Institute (AISI)!