Application Guide

How to Apply for the Applied Researcher (Monitoring) Role at Apollo Research

🏢 About Apollo Research

Apollo Research is a UK-based organization focused specifically on AGI safety monitoring, making it unique in its direct application of theoretical AI safety research to practical tools. The company operates with a small, high-impact team structure where researchers work closely with leadership including the CEO, offering significant autonomy and influence over technical direction. Their mission to transform complex AI research into scalable safety tools for AI coding agents positions them at the forefront of applied AI safety.

About This Role

This Applied Researcher (Monitoring) role involves systematically identifying coding agent failure modes, designing experiments to test monitoring effectiveness, and building evaluation frameworks to measure progress. You'll translate theoretical AI risks into concrete detection mechanisms that directly impact real-world AI safety. The position offers significant influence over team and technology direction at an early stage, with rapid iteration cycles based on empirical data.

💡 A Day in the Life

A typical day involves analyzing real-world coding agent failures to catalog new failure modes, designing experiments to test monitoring approaches, and iterating on detection mechanisms based on empirical results. You'll collaborate with monitoring engineers and the Evals team to integrate research findings into practical tools, while regularly updating evaluation frameworks to measure progress against safety goals.

🎯 Who Apollo Research Is Looking For

  • Has experience systematically cataloging AI failure modes from real-world instances, research literature, and theoretical predictions
  • Demonstrates ability to design and conduct experiments testing detection mechanisms across different failure modes and agent behaviors
  • Thrives in rapid iteration environments and enjoys learning directly from empirical data rather than purely theoretical work
  • Possesses practical experience building and maintaining evaluation frameworks for measuring monitoring capability progress

📝 Tips for Applying to Apollo Research

1. Submit early despite the 2026 deadline: they review applications on a rolling basis and are actively interviewing now

2. Highlight specific examples of translating theoretical AI risks into practical detection mechanisms in your previous work

3. Demonstrate your systematic approach to cataloging failure modes with concrete examples from real-world AI systems

4. Show experience with rapid iteration cycles and learning from empirical data in AI safety or related fields

5. Emphasize any experience working in small, high-autonomy teams where you've shaped technical direction

✉️ What to Emphasize in Your Cover Letter

  • Your passion for using empirical research to make AI systems safer in practice, not just theoretical interest
  • Specific examples of translating theoretical risks into concrete detection mechanisms from your background
  • Experience with rapid iteration and learning from data in AI/ML contexts
  • Why you're excited about building tools that make AI agent safety accessible at scale


🔍 Research Before Applying

To stand out, make sure you've researched:

  • Apollo Research's specific publications or public work on AGI safety monitoring (check their website and research papers)
  • Their CEO's background and research interests in AI safety
  • Current public examples of AI coding agent failures that might inform your failure mode cataloging approach
  • The broader landscape of AI safety monitoring tools and where Apollo's approach might fit

💬 Prepare for These Interview Topics

Based on this role, you may be asked about:

1. Walk through your process for systematically collecting and cataloging coding agent failure modes
2. Describe an experiment you designed to test monitor effectiveness across different failure modes
3. How do you balance detection accuracy with practical implementation constraints in monitoring systems?
4. Discuss your experience building and maintaining evaluation frameworks for AI systems
5. How would you approach iterating on monitoring approaches based on empirical results?

⚠️ Common Mistakes to Avoid

  • Focusing only on theoretical AI safety without demonstrating practical application experience
  • Presenting yourself as purely a researcher without interest in rapid iteration and tool-building
  • Applying with generic AI/ML experience that doesn't specifically address monitoring, detection, or failure mode analysis

📅 Application Timeline

This position is open until filled. However, we recommend applying as soon as possible, since roles at mission-driven organizations tend to fill quickly.

Typical hiring timeline:

1. Application Review: 1-2 weeks

2. Initial Screening: phone call or written assessment

3. Interviews: 1-2 rounds, usually virtual

4. Offer: congratulations!

Ready to Apply?

Good luck with your application to Apollo Research!