Application Guide
How to Apply for Research Scientist/Engineer (Science of Scheming)
at Apollo Research
🏢 About Apollo Research
Apollo Research is uniquely focused on addressing one of the most critical AI safety challenges: deceptive alignment or 'scheming' in frontier AI systems, where models appear aligned but are actually pursuing misaligned goals. They work directly with leading AI labs through partnerships to influence how the most capable systems are built and deployed. This makes them a key player in AI safety research with direct real-world impact.
About This Role
This Research Scientist/Engineer role focuses on empirically studying the emergence of scheming behaviors in AI systems, from designing experiments with model organisms to scaling insights to frontier models. You'll work toward developing 'scaling laws of scheming' to predict how these risks evolve with model capability, directly impacting how AI labs approach safety. The role combines deep conceptual understanding of AI alignment with hands-on experimental research and collaboration with partner labs.
💡 A Day in the Life
A typical day might involve designing and running experiments on model organisms to study reward-seeking behaviors, analyzing results to identify patterns relevant to scheming risks, and collaborating with partner AI labs to understand how findings apply to frontier systems. You'd likely spend time reading relevant literature, writing code for experiments, and discussing research directions with team members focused on scaling insights to predict how scheming risks evolve with model capability.
🚀 Application Tools
🎯 Who Apollo Research Is Looking For
- Has strong empirical research skills with experience designing and executing experiments in AI/ML, particularly around RL dynamics, reward-seeking behaviors, or evaluation awareness
- Possesses deep familiarity with AI alignment literature, especially work on deceptive alignment, scheming, and related concepts, and can translate vague theoretical ideas into concrete experimental proposals
- Demonstrates strong software engineering skills in Python with experience in research environments, capable of implementing complex experiments and analyzing results
- Shows ability to work collaboratively with AI labs through partnerships while driving independent research progress toward empirical milestones
📝 Tips for Applying to Apollo Research
- Highlight specific experiments you've designed or executed related to RL dynamics, reward-seeking behaviors, or evaluation awareness in AI systems, and quantify results and impact
- Demonstrate your familiarity with AI alignment literature by referencing specific papers or concepts related to deceptive alignment, scheming, or related topics in your application materials
- Showcase Python projects or code samples that demonstrate both research and engineering capabilities, particularly those involving ML experimentation frameworks
- Explain how you've turned vague conceptual problems into concrete research questions or experimental designs in past work
- Tailor your application to show understanding of Apollo's partnership model with AI labs and how you'd contribute to their collaborative approach
✉️ What to Emphasize in Your Cover Letter
- Your specific experience with empirical research on AI behaviors, particularly around RL dynamics or evaluation awareness
- Deep familiarity with AI alignment literature and ability to connect theoretical concepts to experimental design
- Examples of turning vague research questions into concrete experimental proposals and executing them
- How your background prepares you to contribute to 'scaling laws of scheming' research and work with partner AI labs
🔍 Research Before Applying
To stand out, make sure you've researched:
- Read Apollo Research's published work and blog posts to understand their specific research approach and terminology
- Research their partner AI labs and understand the current frontier AI landscape
- Review literature on deceptive alignment, scheming, and related AI safety concepts they reference
- Understand their 'model organisms' approach to studying AI behaviors before scaling to frontier systems
💬 Prepare for These Interview Topics
Based on this role, you may be asked about:
- Designing and running experiments on model organisms to study reward-seeking behaviors
- Key concepts from the AI alignment literature, such as deceptive alignment, scheming, and evaluation awareness
- How insights from model organisms can be scaled to predict risks in frontier systems
- Translating vague theoretical ideas into concrete, executable experimental proposals
⚠️ Common Mistakes to Avoid
- Applying with only theoretical knowledge without demonstrating hands-on experimental research experience
- Failing to show familiarity with specific AI alignment literature related to deceptive alignment or scheming
- Presenting generic ML research experience without connecting it to the specific problems Apollo works on
📅 Application Timeline
This position is open until filled. However, we recommend applying as soon as possible as roles at mission-driven organizations tend to fill quickly.
Typical hiring timeline:
- Application Review: 1-2 weeks
- Initial Screening: phone call or written assessment
- Interviews: 1-2 rounds, usually virtual
- Offer: congratulations!
Ready to Apply?
Good luck with your application to Apollo Research!