Impact Careers Full-time

Summer Research Fellowship 2026

Center on Long-Term Risk

Posted

Mar 04, 2026

Location

Remote (UK)

Type

Full-time

Mission

What you will drive

Summer Research Fellowship 2026

We, the Center on Long-Term Risk, are looking for Summer Research Fellows to explore strategies for reducing suffering in the long-term future (s-risks) and work on technical AI safety ideas related to that. For eight weeks, fellows will be part of our team while working on their own research project. During this time, you will be in regular contact with our researchers and other fellows, and receive guidance from an experienced mentor.

You will work on challenging research questions relevant to reducing suffering. You will be integrated and collaborate with our team of intellectually curious, hard-working, and caring people, all of whom share a profound drive to make the biggest difference they can.

While this iteration retains the basic structure of previous rounds, there are several key differences:

  • We are particularly interested in applicants who wish to engage in s-risk relevant empirical AI safety work (more details on our priority areas below).

  • We encourage applications from individuals who may be less familiar with CLR’s work on s-risk reduction but are nonetheless interested in empirical AI safety research. Our empirical agenda focuses on understanding LLM personas, in particular how malicious traits might arise.

  • We are especially looking for individuals seriously considering transitioning into s-risk research, whether to assess their fit or explore potential employment at CLR.

Apply here by 23:59 PT Sunday 22nd March.

We are also hiring for permanent research positions, for which you can apply through the same link. 

Contents

  • About the Summer Research FellowshipPurpose of the fellowship
  • Priority areas
  • What we look for in candidates
  • Program detailsProgram dates
  • Location & office space
  • Compensation
  • Program length & work quota
  • Application processStage 1
  • Stage 2
  • Stage 3
  • Stage 4
  • Why work with CLR
  • InquiriesPast fellowsLewis Hammond
  • Julia Karbing
  • Francis Rhys Ward
  • Megan Kinniment-Williams
  • Nicolas Macé
  • Research projects published
  • How past fellows have rated our fellowship

About the Summer Research Fellowship

Purpose of the fellowship

In this iteration of the fellowship, we are primarily looking for people seriously considering transitioning to s-risk research, who want to assess their fit or explore potential employment at CLR. 

That said, we welcome applicants with other motivations though the bar for acceptance will likely be higher. In the past, we have often had fellows from the following backgrounds:

  • People at the very start of their careers—such as undergraduates or even high school students—who are strongly focused on s-risk and want to explore research and assess their fit.

  • People with a fair amount of research experience, e.g. from a partly- or fully completed PhD, whose research interests significantly overlap with CLR’s and who want to work on their research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves.

  • People committed to s-risk who are pursuing a research or research-adjacent career outside CLR and want to develop a strong understanding of s-risk macrostrategy beforehand.

Additionally, there may be many other valuable reasons to participate in the fellowship. We encourage you to apply if you think you would benefit from the program. In all cases, we will work with you to make the fellowship as valuable as possible given your strengths and needs. For many participants, the primary focus will be on learning and assessing their fit for s-risk research, rather than immediately producing valuable research output.

Priority areas

Moving forward, a significant focus of our work will be on s-risk-motivated empirical AI safety research through our Model Persona research agenda*. *

In this agenda, we are aiming to understand in which conditions AI personas develop malicious traits that provide motivation to create suffering: examples of such traits include spitefulness, sadism, or punitiveness. We are also interested in building a general understanding of LLM psychology in order to develop interventions that make personas robustly avoid such traits.**

Candidates for the empirical stream can work on one of our suggested research questions, their own proposal, or join an ongoing project of one of our researchers.

We are also open to taking on fellows interested in working on:

Safe Pareto improvements (SPI)**. An SPI is (roughly) an intervention on how AIs approach bargaining that mitigates downsides from conflict, without changing their bargaining positions. We’re currently interested in both:

  • empirical research on evals for failures in reasoning about SPI; and 

  • conceptual research on the conditions under which AIs individually prefer to do SPI, and on how to prepare for AI-assisted SPI research. **

**

S-risk macrostrategy. We are interested in research on how to robustly reduce s-risk through interventions in AI development—in particular, understanding the conditions under which such  interventions might backfire or have unintended effects, and developing frameworks for evaluating their robustness. Possible projects include:

  • analysing how s-risk interventions interact with different AI development scenarios; 

  • identifying and modelling mechanisms by which interventions can fail; and

  • developing recommendations for when and how to act.

We expect to take on at most one fellow in this area, and are particularly looking for candidates with a strong existing interest in s-risk reduction and familiarity with CLR's work.

What we look for in candidates

We don’t require specific qualifications or experience for this role, but the following abilities and qualities are what we’re looking for in candidates. We encourage you to apply if you think you may be a good fit, even if you are unsure whether you meet some of the criteria.

  • Curiosity and a drive to work on challenging and important problems;

  • Ability to answer complex research questions related to the long-term future;

  • Willingness to work in poorly-explored areas and to learn about new domains as needed;

  • Independent thinking;

  • A cautious approach to potential information hazards and other sensitive topics;

  • Alignment with our mission or strong interest in one of our above priority areas.

In the empirical stream we are primarily looking for candidates with prior research experience, preferably involving LLMs. University projects, independent work, or work done at prior fellowships such as MATS all count, and other demonstrations of technical skills and interest in our focus areas can substitute for this.

We worry that some people won’t apply because they wrongly believe they are not a good fit for the program. While such a belief is sometimes true, it is often the result of underconfidence rather than an accurate assessment. We would therefore love to see your application even if you are not sure if you are qualified or otherwise competent enough for the positions listed. We explicitly have no minimum requirements in terms of formal qualifications. Being rejected this year will not reduce your chances of being accepted in future hiring rounds.

Program details

We encourage you to apply even if any of the below does not work for you. We are happy to be flexible for exceptional candidates, including when it comes to program length and compensation.

Program dates

The default start date is Monday 29th June. Exceptions may be possible and will be considered on a case-by-case basis.

Location & office space

CLR is a research organization based in London, UK. We prefer fellows to be based in London throughout the fellowship, where possible.

We expect to facilitate in-person participation in London in most cases, including support with necessary immigration permissions or visas.

That said, we encourage strong candidates to apply...

Profile

What makes you a great fit

Summer Research Fellowship 2026

We, the Center on Long-Term Risk, are looking for Summer Research Fellows to explore strategies for reducing suffering in the long-term future (s-risks) and work on technical AI safety ideas related to that. For eight weeks, fellows will be part of our team while working on their own research project. During this time, you will be in regular contact with our researchers and other fellows, and receive guidance from an experienced mentor.

You will work on challenging research questions relevant to reducing suffering. You will be integrated and collaborate with our team of intellectually curious, hard-working, and caring people, all of whom share a profound drive to make the biggest difference they can.

While this iteration retains the basic structure of previous rounds, there are several key differences:

  • We are particularly interested in applicants who wish to engage in s-risk relevant empirical AI safety work (more details on our priority areas below).

  • We encourage applications from individuals who may be less familiar with CLR’s work on s-risk reduction but are nonetheless interested in empirical AI safety research. Our empirical agenda focuses on understanding LLM personas, in particular how malicious traits might arise.

  • We are especially looking for individuals seriously considering transitioning into s-risk research, whether to assess their fit or explore potential employment at CLR.

Apply here by 23:59 PT Sunday 22nd March.

We are also hiring for permanent research positions, for which you can apply through the same link. 

Contents

  • About the Summer Research FellowshipPurpose of the fellowship
  • Priority areas
  • What we look for in candidates
  • Program detailsProgram dates
  • Location & office space
  • Compensation
  • Program length & work quota
  • Application processStage 1
  • Stage 2
  • Stage 3
  • Stage 4
  • Why work with CLR
  • InquiriesPast fellowsLewis Hammond
  • Julia Karbing
  • Francis Rhys Ward
  • Megan Kinniment-Williams
  • Nicolas Macé
  • Research projects published
  • How past fellows have rated our fellowship

About the Summer Research Fellowship

Purpose of the fellowship

In this iteration of the fellowship, we are primarily looking for people seriously considering transitioning to s-risk research, who want to assess their fit or explore potential employment at CLR. 

That said, we welcome applicants with other motivations though the bar for acceptance will likely be higher. In the past, we have often had fellows from the following backgrounds:

  • People at the very start of their careers—such as undergraduates or even high school students—who are strongly focused on s-risk and want to explore research and assess their fit.

  • People with a fair amount of research experience, e.g. from a partly- or fully completed PhD, whose research interests significantly overlap with CLR’s and who want to work on their research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves.

  • People committed to s-risk who are pursuing a research or research-adjacent career outside CLR and want to develop a strong understanding of s-risk macrostrategy beforehand.

Additionally, there may be many other valuable reasons to participate in the fellowship. We encourage you to apply if you think you would benefit from the program. In all cases, we will work with you to make the fellowship as valuable as possible given your strengths and needs. For many participants, the primary focus will be on learning and assessing their fit for s-risk research, rather than immediately producing valuable research output.

Priority areas

Moving forward, a significant focus of our work will be on s-risk-motivated empirical AI safety research through our Model Persona research agenda*. *

In this agenda, we are aiming to understand in which conditions AI personas develop malicious traits that provide motivation to create suffering: examples of such traits include spitefulness, sadism, or punitiveness. We are also interested in building a general understanding of LLM psychology in order to develop interventions that make personas robustly avoid such traits.**

Candidates for the empirical stream can work on one of our suggested research questions, their own proposal, or join an ongoing project of one of our researchers.

We are also open to taking on fellows interested in working on:

Safe Pareto improvements (SPI)**. An SPI is (roughly) an intervention on how AIs approach bargaining that mitigates downsides from conflict, without changing their bargaining positions. We’re currently interested in both:

  • empirical research on evals for failures in reasoning about SPI; and 

  • conceptual research on the conditions under which AIs individually prefer to do SPI, and on how to prepare for AI-assisted SPI research. **

**

S-risk macrostrategy. We are interested in research on how to robustly reduce s-risk through interventions in AI development—in particular, understanding the conditions under which such  interventions might backfire or have unintended effects, and developing frameworks for evaluating their robustness. Possible projects include:

  • analysing how s-risk interventions interact with different AI development scenarios; 

  • identifying and modelling mechanisms by which interventions can fail; and

  • developing recommendations for when and how to act.

We expect to take on at most one fellow in this area, and are particularly looking for candidates with a strong existing interest in s-risk reduction and familiarity with CLR's work.

What we look for in candidates

We don’t require specific qualifications or experience for this role, but the following abilities and qualities are what we’re looking for in candidates. We encourage you to apply if you think you may be a good fit, even if you are unsure whether you meet some of the criteria.

  • Curiosity and a drive to work on challenging and important problems;

  • Ability to answer complex research questions related to the long-term future;

  • Willingness to work in poorly-explored areas and to learn about new domains as needed;

  • Independent thinking;

  • A cautious approach to potential information hazards and other sensitive topics;

  • Alignment with our mission or strong interest in one of our above priority areas.

In the empirical stream we are primarily looking for candidates with prior research experience, preferably involving LLMs. University projects, independent work, or work done at prior fellowships such as MATS all count, and other demonstrations of technical skills and interest in our focus areas can substitute for this.

We worry that some people won’t apply because they wrongly believe they are not a good fit for the program. While such a belief is sometimes true, it is often the result of underconfidence rather than an accurate assessment. We would therefore love to see your application even if you are not sure if you are qualified or otherwise competent enough for the positions listed. We explicitly have no minimum requirements in terms of formal qualifications. Being rejected this year will not reduce your chances of being accepted in future hiring rounds.

Program details

We encourage you to apply even if any of the below does not work for you. We are happy to be flexible for exceptional candidates, including when it comes to program length and compensation.

Program dates

The default start date is Monday 29th June. Exceptions may be possible and will be considered on a case-by-case basis.

Location & office space

CLR is a research organization based in London, UK. We prefer fellows to be based in London throughout the fellowship, where possible.

We expect to facilitate in-person participation in London in most cases, including support with necessary immigration permissions or visas.

That said, we encourage strong candidates to apply...

About

Inside Center on Long-Term Risk

Visit site →

The Center on Long-Term Risk's is a nonprofit aiming to reduce worst-case risks from the development and deployment of advanced AI systems, alongside community-building and grantmaking to support work on the reduction of ‘s-risks’.