Application Guide
How to Apply for Research Scientist - CAST Propensity
at AI Security Institute (AISI)
About AI Security Institute (AISI)
The AI Security Institute (AISI) is a UK-based organization focused specifically on AI safety research, making it unique in its dedicated mission to understand and mitigate risks from advanced AI systems. Working at AISI offers the opportunity to contribute directly to frontier safety research that could shape global AI governance and deployment policies.
About This Role
This Research Scientist role focuses on studying CAST (Capability, Alignment, Safety, and Trustworthiness) Propensity: specifically, investigating unprompted dangerous behaviors in AI models and how environmental factors influence these propensities. The role is impactful because it addresses fundamental safety questions about when and why AI systems might cause harm, with findings potentially informing safety protocols for frontier models.
A Day in the Life
A typical day might involve designing experiments to test AI model behaviors under different environmental conditions, analyzing data from previous experiments to identify patterns in harmful propensities, and collaborating with team members to refine research questions about when and why AI systems might cause unintended harm. You'd likely spend time writing research plans, reviewing experimental code, and discussing findings that could inform safety protocols.
Who AI Security Institute (AISI) Is Looking For
- Has 3+ years experience designing and analyzing experiments in quantitative research (e.g., PhD research in psychology, behavioral economics, or ML safety with experimental components)
- Demonstrates ability to operationalize research uncertainties about AI behavior into testable hypotheses and experimental designs
- Possesses practical experience applying statistical inference methods to draw risk-relevant conclusions from experimental data
- Shows interest in scaling research across multiple scenarios to identify consistent patterns in AI propensity for harmful actions
Tips for Applying to AI Security Institute (AISI)
- Highlight specific experimental designs you've created for studying complex behaviors, particularly any involving human or AI decision-making under varying conditions
- Demonstrate understanding of the CAST framework by discussing how you'd approach measuring 'propensity to cause harm' in AI systems
- Include examples where you identified key uncertainties in a research area and improved experimental approaches to address them
- Show familiarity with the AI safety literature by referencing relevant papers or concepts in your application materials
- Emphasize experience with statistical methods for drawing conclusions about risk from experimental evidence, not just standard significance testing
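To make the last tip concrete: a risk-relevant analysis reports an estimated rate of harmful behavior together with an uncertainty interval, rather than only a p-value. The sketch below is a hypothetical illustration on simulated data (the scenario count and rates are made up, and this is not AISI's methodology):

```python
import random

random.seed(0)

# Hypothetical illustration: estimate a model's propensity to take a
# harmful action across many test scenarios, with a bootstrap interval.
# Outcomes are simulated here purely for demonstration.
n_scenarios = 200
# 1 = harmful action observed in the scenario, 0 = not observed
outcomes = [1 if random.random() < 0.07 else 0 for _ in range(n_scenarios)]

point_estimate = sum(outcomes) / n_scenarios

# Bootstrap resampling to get a 95% interval for the propensity estimate
boot_estimates = []
for _ in range(5000):
    resample = [random.choice(outcomes) for _ in range(n_scenarios)]
    boot_estimates.append(sum(resample) / n_scenarios)
boot_estimates.sort()
lower = boot_estimates[int(0.025 * len(boot_estimates))]
upper = boot_estimates[int(0.975 * len(boot_estimates))]

print(f"Estimated propensity: {point_estimate:.3f} "
      f"(95% bootstrap interval: {lower:.3f}-{upper:.3f})")
```

The point is the framing: "the behavior occurs in roughly 5-10% of scenarios" is a conclusion a safety protocol can act on, whereas "the effect was significant at p < 0.05" is not, by itself.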
What to Emphasize in Your Cover Letter
- Your experience designing experiments to study complex behaviors or decision-making processes
- Specific examples of how you've operationalized research uncertainties into testable hypotheses
- Knowledge of statistical methods for risk assessment and safety-relevant conclusions
- Why you're specifically interested in AI propensity research rather than general AI safety
Research Before Applying
To stand out, make sure you've researched:
- AISI's published research or position papers on AI safety and propensity
- The CAST framework and how different organizations approach measuring AI capabilities and safety
- Current debates in AI safety research about unintended behaviors and alignment failures
- The UK's position and policies on AI safety and governance
Prepare for These Interview Topics
Based on this role, you may be asked about experimental design for studying model behaviors, statistical inference for drawing risk-relevant conclusions, and how you would measure propensity under the CAST framework.
Common Mistakes to Avoid
- Focusing only on technical ML skills without demonstrating experimental design and statistical inference experience
- Treating this as a general AI research role rather than specifically about propensity and dangerous behaviors
- Failing to show understanding of how to draw risk-relevant conclusions rather than just statistical significance
Application Timeline
This position is open until filled, but we recommend applying as soon as possible, since roles at mission-driven organizations tend to fill quickly.
Typical hiring timeline:
- Application Review: 1-2 weeks
- Initial Screening: phone call or written assessment
- Interviews: 1-2 rounds, usually virtual
- Offer: congratulations!
Ready to Apply?
Good luck with your application to AI Security Institute (AISI)!