Application Guide
How to Apply for Research Scientist, Chemical, Biological, Radiological, and Nuclear Risk Modelling
at SaferAI
About SaferAI
SaferAI is a specialized organization focused exclusively on assessing and managing AI risks, positioning itself at the intersection of AI safety and policy. Unlike general AI companies, SaferAI works directly with government bodies like the European Commission's AI Office, offering unique influence on AI governance and safety standards. Working here means contributing to high-stakes, policy-relevant research that directly shapes how society manages emerging AI threats.
About This Role
This role involves leading the development of chemical, biological, radiological, and nuclear (CBRN) risk models specifically for AI systems, assessing how general-purpose AI could enable harmful scenarios. You'll design and facilitate risk modeling workshops with partners to identify priority scenarios and establish mappings between AI benchmark performance and real-world CBRN risk parameters. The position has direct impact through regular briefings to the European Commission's AI Office, influencing European AI safety policy.
A Day in the Life
A typical day might involve analyzing new AI model capabilities to assess potential CBRN risks, designing risk modeling frameworks, and preparing materials for an upcoming workshop with partners. You could spend time mapping benchmark results to real-world threat parameters, monitoring emerging AI safety research, and drafting briefings for the European Commission's AI Office based on your risk findings.
Who SaferAI Is Looking For
SaferAI is looking for a candidate who:
- Has a strong background in risk modeling, quantitative analysis, or threat assessment, ideally with exposure to CBRN domains, AI safety, or dual-use technology research
- Possesses workshop facilitation skills and experience collaborating with diverse stakeholders (academic, government, industry) to build consensus on risk priorities
- Demonstrates ability to monitor and synthesize emerging AI threats, model capabilities, and safety frameworks into actionable risk assessments
- Can communicate complex technical risk findings clearly to both technical audiences and policy makers like the European Commission
Tips for Applying to SaferAI
- Highlight any experience with CBRN-related research, risk modeling, or AI safety, even if it comes from adjacent fields like biosecurity, nuclear security, or catastrophic risk assessment
- Showcase workshop design or facilitation experience, especially in multi-stakeholder settings focused on risk identification or scenario planning
- Demonstrate your ability to track and analyze emerging AI capabilities (e.g., model releases, benchmark results) and connect them to real-world risk parameters
- Emphasize any policy engagement or experience briefing government or regulatory bodies, as this role interfaces directly with the European Commission
- Tailor your application to SaferAI's mission by explicitly linking your background to AI risk management and CBRN threat reduction
What to Emphasize in Your Cover Letter
- Explain your understanding of CBRN risks in the context of AI, and how your background prepares you to model these specific threats
- Describe your experience with risk modeling methodologies and workshop facilitation, providing concrete examples of stakeholder engagement
- Highlight your ability to monitor AI advancements and translate technical benchmarks into actionable risk assessments
- Express your interest in contributing to AI safety policy, given SaferAI's direct engagement with the European Commission
Research Before Applying
To stand out, make sure you've researched:
- Review SaferAI's published work, blog posts, or presentations to understand their specific approach to AI risk assessment
- Research the European Commission's AI Office and its role in AI governance to understand the policy context of this role
- Explore existing literature on CBRN risks and AI, including work from organizations like CSET, OpenAI, or academic institutions
- Look into SaferAI's partners or collaborators to understand the ecosystem they operate within
Common Mistakes to Avoid
- Submitting a generic application that doesn't address CBRN risks, AI safety, or SaferAI's specific mission
- Failing to demonstrate workshop facilitation or stakeholder engagement skills, which are core to the role
- Overlooking the policy dimension: not showing awareness of, or interest in, engaging with the European Commission's AI Office
Application Timeline
This position is open until filled. However, we recommend applying as soon as possible, as roles at mission-driven organizations tend to fill quickly.
Typical hiring timeline:
- Application Review: 1-2 weeks
- Initial Screening: phone call or written assessment
- Interviews: 1-2 rounds, usually virtual
- Offer: congratulations!