Application Guide

How to Apply for Research Scientist, Chemical, Biological, Radiological, and Nuclear (CBRN) Risk Modelling at SaferAI

๐Ÿข About SaferAI

SaferAI is a specialized organization focused exclusively on assessing and managing AI risks, positioning itself at the intersection of AI safety and policy. Unlike general AI companies, SaferAI works directly with government bodies like the European Commission's AI Office, offering unique influence on AI governance and safety standards. Working here means contributing to high-stakes, policy-relevant research that directly shapes how society manages emerging AI threats.

About This Role

This role involves leading the development of chemical, biological, radiological, and nuclear (CBRN) risk models specifically for AI systems, assessing how general-purpose AI could enable harmful scenarios. You'll design and facilitate risk modeling workshops with partners to identify priority scenarios and establish mappings between AI benchmark performance and real-world CBRN risk parameters. The position has direct impact through regular briefings to the European Commission's AI Office, influencing European AI safety policy.

💡 A Day in the Life

A typical day might involve analyzing new AI model capabilities to assess potential CBRN risks, designing risk modeling frameworks, and preparing materials for an upcoming workshop with partners. You could spend time mapping benchmark results to real-world threat parameters, monitoring emerging AI safety research, and drafting briefings for the European Commission's AI Office based on your risk findings.

🎯 Who SaferAI Is Looking For

SaferAI is seeking a candidate who:

  • Has a strong background in risk modeling, quantitative analysis, or threat assessment, ideally with exposure to CBRN domains, AI safety, or dual-use technology research
  • Possesses workshop facilitation skills and experience collaborating with diverse stakeholders (academic, government, industry) to build consensus on risk priorities
  • Demonstrates ability to monitor and synthesize emerging AI threats, model capabilities, and safety frameworks into actionable risk assessments
  • Can communicate complex technical risk findings clearly to both technical audiences and policy makers like the European Commission

๐Ÿ“ Tips for Applying to SaferAI

1. Highlight any experience with CBRN-related research, risk modeling, or AI safety, even if from adjacent fields like biosecurity, nuclear security, or catastrophic risk assessment.

2. Showcase workshop design or facilitation experience, especially in multi-stakeholder settings focused on risk identification or scenario planning.

3. Demonstrate your ability to track and analyze emerging AI capabilities (e.g., model releases, benchmark results) and connect them to real-world risk parameters.

4. Emphasize any policy engagement or experience briefing government or regulatory bodies, as this role interfaces directly with the European Commission.

5. Tailor your application to SaferAI's mission by explicitly linking your background to AI risk management and CBRN threat reduction.

โœ‰๏ธ What to Emphasize in Your Cover Letter

  • Explain your understanding of CBRN risks in the context of AI, and how your background prepares you to model these specific threats
  • Describe your experience with risk modeling methodologies and workshop facilitation, providing concrete examples of stakeholder engagement
  • Highlight your ability to monitor AI advancements and translate technical benchmarks into actionable risk assessments
  • Express your interest in contributing to AI safety policy, given SaferAI's direct engagement with the European Commission


๐Ÿ” Research Before Applying

To stand out, make sure you've researched:

  • Review SaferAI's published work, blog posts, or presentations to understand their specific approach to AI risk assessment
  • Research the European Commission's AI Office and its role in AI governance to understand the policy context of this role
  • Explore existing literature on CBRN risks and AI, including work from organizations like CSET, OpenAI, or academic institutions
  • Look into SaferAI's partners or collaborators to understand the ecosystem they operate within

💬 Prepare for These Interview Topics

Based on this role, you may be asked about:

1. How would you approach modeling a specific CBRN risk scenario enabled by general-purpose AI?
2. Describe a workshop you've designed or facilitated to identify and prioritize risks with diverse stakeholders.
3. How do you stay updated on emerging AI threats and model releases, and how would you integrate this into continuous risk assessment?
4. What experience do you have briefing technical risk findings to non-technical or policy audiences?
5. How would you establish mappings between AI benchmark performance and real-world CBRN risk parameters?

โš ๏ธ Common Mistakes to Avoid

  • Submitting a generic application that doesn't address CBRN risks, AI safety, or SaferAI's specific mission
  • Failing to demonstrate workshop facilitation or stakeholder engagement skills, which are core to the role
  • Overlooking the policy dimension: not showing awareness of or interest in engaging with the European Commission's AI Office

📅 Application Timeline

This position is open until filled. However, we recommend applying as soon as possible, as roles at mission-driven organizations tend to fill quickly.

Typical hiring timeline:

1. Application Review: 1-2 weeks
2. Initial Screening: phone call or written assessment
3. Interviews: 1-2 rounds, usually virtual
✓ Offer: congratulations!

Ready to Apply?

Good luck with your application to SaferAI!