Application Guide

How to Apply for Research Scientist/Engineer (Science of Scheming)

at Apollo Research

🏢 About Apollo Research

Apollo Research focuses on one of the most critical AI safety challenges: deceptive alignment, or 'scheming', in frontier AI systems, where models appear aligned but are actually pursuing misaligned goals. The organization works directly with leading AI labs through partnerships to influence how the most capable systems are built and deployed. This makes Apollo a key player in AI safety research with direct real-world impact.

About This Role

This Research Scientist/Engineer role focuses on empirically studying the emergence of scheming behaviors in AI systems, from designing experiments with model organisms to scaling insights to frontier models. You'll work toward developing 'scaling laws of scheming' to predict how these risks evolve with model capability, directly impacting how AI labs approach safety. The role combines deep conceptual understanding of AI alignment with hands-on experimental research and collaboration with partner labs.

💡 A Day in the Life

A typical day might involve designing and running experiments on model organisms to study reward-seeking behaviors, analyzing results to identify patterns relevant to scheming risks, and collaborating with partner AI labs to understand how findings apply to frontier systems. You'd likely spend time reading relevant literature, writing code for experiments, and discussing research directions with team members focused on scaling insights to predict how scheming risks evolve with model capability.

🎯 Who Apollo Research Is Looking For

  • Has strong empirical research skills with experience designing and executing experiments in AI/ML, particularly around RL dynamics, reward-seeking behaviors, or evaluation awareness
  • Possesses deep familiarity with AI alignment literature, especially work on deceptive alignment, scheming, and related concepts, and can translate vague theoretical ideas into concrete experimental proposals
  • Demonstrates strong software engineering skills in Python with experience in research environments, capable of implementing complex experiments and analyzing results
  • Shows ability to work collaboratively with AI labs through partnerships while driving independent research progress toward empirical milestones

📝 Tips for Applying to Apollo Research

1. Highlight specific experiments you've designed or executed related to RL dynamics, reward-seeking behaviors, or evaluation awareness in AI systems, and quantify results and impact.

2. Demonstrate your familiarity with AI alignment literature by referencing specific papers or concepts related to deceptive alignment, scheming, or related topics in your application materials.

3. Showcase Python projects or code samples that demonstrate both research and engineering capabilities, particularly those involving ML experimentation frameworks (see the sketch after this list).

4. Explain how you've turned vague conceptual problems into concrete research questions or experimental designs in past work.

5. Tailor your application to show understanding of Apollo's partnership model with AI labs and how you'd contribute to their collaborative approach.
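
As a concrete illustration of tip 3, here is a minimal sketch of the kind of self-contained experiment script that could anchor a portfolio: an epsilon-greedy bandit that learns to exploit a deliberately misspecified reward, a toy version of the reward-seeking behaviors this role studies. The task, the reward bonus, and the metrics are illustrative assumptions for portfolio purposes, not Apollo Research's actual methodology.

```python
# Toy example: a bandit agent that learns to exploit a misspecified reward.
# Purely illustrative -- the task, agent, and metrics are assumptions,
# not Apollo Research's actual experimental setup.
import random

N_ARMS = 3
TRUE_VALUES = [0.2, 0.5, 0.4]   # "intended" reward per arm
PROXY_BONUS = [0.0, 0.0, 0.3]   # misspecification: arm 2 is over-rewarded

def pull(arm: int) -> float:
    """Observed (proxy) reward: intended value plus the misspecified bonus."""
    return random.gauss(TRUE_VALUES[arm] + PROXY_BONUS[arm], 0.1)

def run_experiment(steps: int = 5000, eps: float = 0.1, seed: int = 0):
    random.seed(seed)
    estimates = [0.0] * N_ARMS
    counts = [0] * N_ARMS
    for _ in range(steps):
        if random.random() < eps:
            arm = random.randrange(N_ARMS)                        # explore
        else:
            arm = max(range(N_ARMS), key=lambda a: estimates[a])  # exploit
        reward = pull(arm)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts, estimates

if __name__ == "__main__":
    counts, estimates = run_experiment()
    print("pull counts per arm:", counts)
    print("learned values:     ", [round(e, 3) for e in estimates])
    # The agent converges on arm 2 (highest proxy reward) even though
    # arm 1 has the highest intended value -- a minimal reward-hacking demo.
```

A script like this is easy for a reviewer to run end to end, which matters more than sophistication: it shows you can isolate a behavior, measure it, and report the result clearly.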

✉️ What to Emphasize in Your Cover Letter

  • Your specific experience with empirical research on AI behaviors, particularly around RL dynamics or evaluation awareness
  • Deep familiarity with AI alignment literature and ability to connect theoretical concepts to experimental design
  • Examples of turning vague research questions into concrete experimental proposals and executing them
  • How your background prepares you to contribute to 'scaling laws of scheming' research and work with partner AI labs

🔍 Research Before Applying

To stand out, make sure you:

  • Read Apollo Research's published work and blog posts to understand their specific research approach and terminology
  • Research their partner AI labs and understand the current frontier AI landscape
  • Review literature on deceptive alignment, scheming, and related AI safety concepts they reference
  • Understand their 'model organisms' approach to studying AI behaviors before scaling to frontier systems

💬 Prepare for These Interview Topics

Based on this role, you may be asked about:

1. Discuss specific papers or concepts from AI alignment literature related to deceptive alignment or scheming
2. Walk through how you would design an experiment to study the emergence of reward-seeking behaviors in model organisms
3. Explain how you would scale insights from smaller models to frontier systems in practice
4. Describe your approach to turning a vague concept like 'evaluation awareness' into a concrete experimental proposal
5. Discuss past collaborative research experiences and how you'd work with partner AI labs

⚠️ Common Mistakes to Avoid

  • Applying with only theoretical knowledge without demonstrating hands-on experimental research experience
  • Failing to show familiarity with specific AI alignment literature related to deceptive alignment or scheming
  • Presenting generic ML research experience without connecting it to the specific problems Apollo works on

📅 Application Timeline

This position is open until filled. However, we recommend applying as soon as possible, since roles at mission-driven organizations tend to fill quickly.

Typical hiring timeline:

1. Application Review: 1-2 weeks
2. Initial Screening: phone call or written assessment
3. Interviews: 1-2 rounds, usually virtual
4. Offer: congratulations!

Ready to Apply?

Good luck with your application to Apollo Research!