Pillar Guide

Remote AI Safety & Governance Jobs: The Complete Career Guide

AI safety has gone from a niche academic concern to one of the most in-demand career paths in technology. Frontier AI labs, governments, and non-profits are all hiring — and many of these roles are fully remote.

What is AI safety?

AI safety encompasses the research, engineering, and policy work aimed at ensuring advanced AI systems are reliable, aligned with human values, and governed responsibly. The field broadly splits into two tracks:

  • Technical alignment — Research on making AI systems do what we intend (interpretability, RLHF, red-teaming, evaluation science).
  • Governance & policy — Designing regulation, standards, and institutional frameworks to manage AI risks (export controls, safety benchmarks, international coordination).

Both tracks are essential, and the division between them is increasingly blurred as policy decisions require deep technical literacy and technical work is shaped by regulatory reality.

Key skills employers look for

  • Machine learning fundamentals — Transformers, reinforcement learning, evaluation methodology.
  • Interpretability & red-teaming — Probing model behaviour, adversarial testing, jailbreak analysis.
  • Policy writing & analysis — Translating technical concepts into regulatory language (EU AI Act, NIST AI RMF).
  • Research communication — Distilling complex findings for diverse audiences: policymakers, media, the public.
  • Programme & grant management — Running research agendas, allocating funding, coordinating distributed teams.
  • Security engineering — Model security, supply-chain integrity, access controls for frontier systems.

How to get started

The fastest on-ramps depend on your background. Software engineers should explore alignment research programmes like MATS (ML Alignment Theory Scholars) or Redwood Research's residency. Policy professionals should look at fellowships at GovAI or the AI Policy Institute. Generalists with strong operations or communications skills are urgently needed at safety-focused non-profits — 80,000 Hours lists these as some of the most impactful roles available today.

Regardless of track, signal your commitment: publish blog posts on the AI Alignment Forum, attend EAGx conferences, and engage with open problems on the alignment research agenda.

Frequently asked questions

What is AI safety and why does it matter?

AI safety is the field dedicated to ensuring that artificial intelligence systems behave as intended, remain under human control, and do not cause unintended harm. As AI capabilities grow rapidly, the field has become one of the highest-priority areas for researchers, policymakers, and engineers who want to shape how these systems are built and deployed.

Do I need a PhD to work in AI safety?

Not necessarily. While core alignment research roles often prefer PhD-level candidates, the broader AI safety ecosystem has many entry points: policy analysis, communications, operations, field-building, and software engineering. Organisations like 80,000 Hours report that operations and strategy roles at AI safety labs are highly impactful and do not require research credentials.

Which organisations hire for remote AI safety roles?

Major employers include Anthropic, OpenAI, DeepMind, Redwood Research, MIRI, the Center for AI Safety, the Future of Humanity Institute, and GovAI. Many policy-focused roles sit within think tanks such as the Center for a New American Security (CNAS) and the Institute for AI Policy and Strategy. Most of these organisations offer remote or hybrid arrangements.

What salary can I expect in AI safety?

AI safety salaries are competitive. Research scientists and ML engineers at frontier labs earn $150,000 to $400,000+. Policy analysts and programme managers typically earn $80,000 to $140,000. Operations and communications roles range from $60,000 to $110,000. Non-profit salaries are generally lower than industry but still above average for the impact sector.