Full-stack Software Engineer
Apollo Research
Location
London, UK
Type
Full-time
Posted
Dec 07, 2025
Compensation
USD 135,000
Mission
What you will drive
**Applications deadline:** We accept submissions until 15 January 2026. We review applications on a rolling basis and encourage early submissions.
ABOUT THE OPPORTUNITY
We’re looking for Full-stack Software Engineers who are excited to build tools for frontier AGI safety research, e.g. building and maintaining evals libraries and tools for monitoring and controlling our own LLM traffic.
REPRESENTATIVE PROJECTS
Your main objective is to develop tooling for analyzing model evaluation results. Here is a list of features that you might build and ship in your first 6 months:
- LLM-powered search that finds interesting fragments in evaluation transcripts
- Comparison views that show how conversations and scores differ between two evaluation runs
- Ability to view and analyse conversations with coding agents (Cursor, Claude Code, etc.) in addition to evaluation transcripts
- Results streaming for evaluations that are currently being run
- Collaborative editing of evaluation logs that automatically updates metrics and other derived data
Think of this as developing an “IDE for evaluations”.
Beyond this, here are examples of auxiliary projects you might take on:
- Automated evaluation pipelines to minimize the time from getting access to a new model for pre-deployment testing to analyzing the most important results and sharing them
- LLM agents and MCP tools to automate internal software engineering and research tasks, with sandboxes to prevent major failures
- Telemetry API and instrumentation of our existing tools, allowing us to monitor usage and improve reliability
- Upstream improvements to the Inspect framework and ecosystem, e.g. support for evaluating modern agentic scaffolds
## Requirements
ABOUT THE TEAM
The SWE team currently consists of Rusheb Shah, Andrei Matveiakin, Alex Kedrik, and Glen Rodgers. Beyond the SWE team, you will interact closely with the research scientists and engineers as the primary user group of your tools. You can find our full team here.
ABOUT APOLLO RESEARCH
The rapid rise in AI capabilities offers tremendous opportunities, but also presents significant risks. At Apollo Research, we’re primarily concerned with risks from loss of control, i.e. risks coming from the model itself rather than, e.g., humans misusing the AI. We’re particularly concerned with deceptive alignment / scheming, a phenomenon where a model appears to be aligned but is, in fact, misaligned and capable of evading human oversight. We work on the detection of scheming (e.g. building evaluations), the science of scheming (e.g. model organisms), and scheming mitigations (e.g. anti-scheming and control). We work closely with multiple frontier AI companies, e.g. to test their models before deployment or to collaborate on scheming mitigations. At Apollo, we aim for a culture that emphasizes truth-seeking, being goal-oriented, giving and receiving constructive feedback, and being friendly and helpful. If you’re interested in more details about what it’s like working at Apollo, you can find more information here.
**Equality Statement:** Apollo Research is an Equal Opportunity Employer. We value diversity and are committed to providing equal opportunities to all, regardless of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, or sexual orientation.
INTERVIEW PROCESS
Please complete the application form with your CV. A cover letter is optional. Please also feel free to share links to relevant work samples.
About the interview process: Our multi-stage process includes a screening interview, a take-home test (approx. 2 hours), three technical interviews, and a final interview with Marius (CEO). The technical interviews are closely related to tasks you would do on the job; there are no leetcode-style general coding interviews. If you want to prepare, we suggest working on hands-on LLM evals projects (e.g. as suggested in our starter guide), such as building LM agent evaluations in Inspect.