Application Guide

How to Apply for Research Scientist - CAST Propensity

at AI Security Institute (AISI)

๐Ÿข About AI Security Institute (AISI)

The AI Security Institute (AISI) is a UK-based organization dedicated to understanding and mitigating risks from advanced AI systems. Working at AISI offers the opportunity to contribute directly to frontier safety research that could shape global AI governance and deployment policy.

About This Role

This Research Scientist role focuses on studying CAST (Capability, Alignment, Safety, and Trustworthiness) Propensity: specifically, investigating unprompted dangerous behaviors in AI models and how environmental factors influence these propensities. The role is impactful because it addresses fundamental safety questions about when and why AI systems might cause harm, with findings potentially informing safety protocols for frontier models.

💡 A Day in the Life

A typical day might involve designing experiments to test AI model behaviors under different environmental conditions, analyzing data from previous experiments to identify patterns in harmful propensities, and collaborating with team members to refine research questions about when and why AI systems might cause unintended harm. You'd likely spend time writing research plans, reviewing experimental code, and discussing findings that could inform safety protocols.

🎯 Who the AI Security Institute (AISI) Is Looking For

  • Has 3+ years' experience designing and analyzing experiments in quantitative research (e.g., PhD research in psychology, behavioral economics, or ML safety with experimental components)
  • Demonstrates ability to operationalize research uncertainties about AI behavior into testable hypotheses and experimental designs
  • Possesses practical experience applying statistical inference methods to draw risk-relevant conclusions from experimental data
  • Shows interest in scaling research across multiple scenarios to identify consistent patterns in AI propensity for harmful actions

๐Ÿ“ Tips for Applying to AI Security Institute (AISI)

1. Highlight specific experimental designs you've created for studying complex behaviors, particularly any involving human or AI decision-making under varying conditions.

2. Demonstrate understanding of the CAST framework by discussing how you'd approach measuring 'propensity to cause harm' in AI systems.

3. Include examples where you identified key uncertainties in a research area and improved experimental approaches to address them.

4. Show familiarity with the AI safety literature by referencing relevant papers or concepts in your application materials.

5. Emphasize experience with statistical methods for drawing conclusions about risk from experimental evidence, not just standard significance testing.
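To make that last tip concrete: one simple way to go beyond a bare p-value is to report an interval estimate of the harmful-action rate itself, which speaks directly to risk. The sketch below is purely illustrative (the counts and the `wilson_interval` helper are hypothetical assumptions, not AISI methodology):

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default).

    Unlike a bare significance test, this reports the range of plausible
    harmful-action rates consistent with the data, which is more directly
    risk-relevant than a p-value alone.
    """
    p_hat = k / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical counts: 7 harmful completions observed in 500 trials.
lo, hi = wilson_interval(7, 500)
print(f"estimated rate = {7/500:.3f}, 95% CI ({lo:.4f}, {hi:.4f})")
```

Framing results this way ("the rate is plausibly between lo and hi") lets a reader judge whether even the optimistic end of the interval is an acceptable level of risk, which a significance test alone cannot convey.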

โœ‰๏ธ What to Emphasize in Your Cover Letter

  • Your experience designing experiments to study complex behaviors or decision-making processes
  • Specific examples of how you've operationalized research uncertainties into testable hypotheses
  • Knowledge of statistical methods for risk assessment and safety-relevant conclusions
  • Why you're specifically interested in AI propensity research rather than general AI safety

๐Ÿ” Research Before Applying

To stand out, make sure you've researched:

  • AISI's published research and position papers on AI safety and propensity
  • The CAST framework and how different organizations approach measuring AI capabilities and safety
  • Current debates in AI safety research about unintended behaviors and alignment failures
  • The UK's position and policies on AI safety and governance

💬 Prepare for These Interview Topics

Based on this role, you may be asked about:

1. Describe an experimental design you would create to test whether AI models are more willing to take harmful actions when their existence is threatened.
2. How would you select and apply statistical methods to draw risk-relevant conclusions from experimental data about AI behavior?
3. What environmental factors beyond 'existence threats' might influence AI propensity for harmful actions, and how would you research them?
4. How would you scale research on AI propensity across different scenarios while maintaining methodological rigor?
5. Discuss a time you identified key uncertainties in a research area and improved experimental approaches to address them.

โš ๏ธ Common Mistakes to Avoid

  • Focusing only on technical ML skills without demonstrating experimental design and statistical inference experience
  • Treating this as a general AI research role rather than specifically about propensity and dangerous behaviors
  • Failing to show understanding of how to draw risk-relevant conclusions rather than just statistical significance

📅 Application Timeline

This position is open until filled; however, we recommend applying as soon as possible, since roles at mission-driven organizations tend to fill quickly.

Typical hiring timeline:

1. Application Review: 1-2 weeks

2. Initial Screening: phone call or written assessment

3. Interviews: 1-2 rounds, usually virtual

✓ Offer: congratulations!

Ready to Apply?

Good luck with your application to the AI Security Institute (AISI)!