
Remote AI Safety & Governance Jobs: The Complete Career Guide

AI safety has gone from a niche academic concern to one of the most in-demand career paths in technology. Frontier AI labs, governments, and non-profits are all hiring — and many of these roles are fully remote.

  • $2B+ — Safety research funding
  • 400% — Job growth (2023-25)
  • $180K — Median ML engineer salary

⚡ Quick Summary

  • Two tracks: Technical alignment (ML research) & Governance (policy work)
  • PhD not required — operations, policy, and comms roles are high-impact
  • Top employers: Anthropic, OpenAI, DeepMind, CAIS, GovAI, MIRI
  • Salaries: Research $150-400K | Policy $80-140K | Ops $60-110K

🤖 What is AI safety?

AI safety encompasses the research, engineering, and policy work aimed at ensuring advanced AI systems are reliable, aligned with human values, and governed responsibly. The field broadly splits into two tracks:

🔬 Technical Alignment

Research on making AI systems do what we intend: interpretability, RLHF, red-teaming, evaluation science.

🏛️ Governance & Policy

Designing regulation, standards, and institutional frameworks: export controls, safety benchmarks, international coordination.

Both tracks are essential, and the division between them is increasingly blurred: policy decisions require deep technical literacy, and technical work is shaped by regulatory reality.

💼 Key skills employers look for

🧠 ML Fundamentals

Transformers, reinforcement learning, evaluation methodology.

🔍 Interpretability & Red-teaming

Probing model behaviour, adversarial testing, jailbreak analysis.

📜 Policy Analysis

Translating technical concepts into regulatory language (EU AI Act, NIST AI RMF).

📣 Research Communication

Distilling complex findings for policymakers, media, and the public.

📋 Programme Management

Running research agendas, allocating funding, coordinating distributed teams.

🔐 Security Engineering

Model security, supply-chain integrity, access controls for frontier systems.

🚀 How to get started

The fastest on-ramps depend on your background:

  • Software engineers: Explore alignment research programmes like MATS (ML Alignment Theory Scholars) or Redwood Research's residency.
  • Policy professionals: Look at fellowships at GovAI or the AI Policy Institute.
  • Generalists: Operations and communications skills are urgently needed at safety-focused non-profits — 80,000 Hours lists these as some of the most impactful roles available today.

Signal your commitment: publish blog posts on the AI Alignment Forum, attend EAGx conferences, and engage with open problems on the alignment research agenda. The community is small and values demonstrated interest.

🏢 Top organisations hiring

🅰️ Anthropic

Leading safety-focused AI lab building Claude. Heavy focus on interpretability and Constitutional AI. Competitive salaries; mostly SF-based, with some remote roles.

🔬 Research 🛠️ Engineering 💰 $150-400K+

🌐 GovAI (Oxford)

Leading think tank on AI governance. Research fellowships, policy analyst roles, and summer programmes. Strong pipeline to government positions.

📜 Policy 🎓 Fellowships 🇬🇧 UK-based

🛡️ Center for AI Safety (CAIS)

Research and advocacy organisation focused on reducing catastrophic AI risks. Hires operations, communications, and research staff. SF-based with remote options.

📢 Advocacy ⚙️ Operations 🏠 Remote-friendly

❓ Frequently asked questions

What is AI safety and why does it matter?

AI safety is the field dedicated to ensuring that artificial intelligence systems behave as intended, remain under human control, and do not cause unintended harm. As AI capabilities grow rapidly, the field has become one of the highest-priority areas for researchers, policymakers, and engineers who want to shape how these systems are built and deployed.

Do I need a PhD to work in AI safety?

Not necessarily. While core alignment research roles often prefer PhD-level candidates, the broader AI safety ecosystem has many entry points: policy analysis, communications, operations, field-building, and software engineering. Organisations like 80,000 Hours report that operations and strategy roles at AI safety labs are highly impactful and do not require research credentials.

Which organisations hire for remote AI safety roles?

Major employers include Anthropic, OpenAI, DeepMind, Redwood Research, MIRI, the Center for AI Safety, and GovAI. Many policy-focused roles sit within think tanks such as the Center for a New American Security (CNAS) and the Institute for AI Policy and Strategy (IAPS). Most of these organisations offer remote or hybrid arrangements.

What salary can I expect in AI safety?

AI safety salaries are competitive. Research scientists and ML engineers at frontier labs earn $150,000 to $400,000+. Policy analysts and programme managers typically earn $80,000 to $140,000. Operations and communications roles range from $60,000 to $110,000. Non-profit salaries are generally lower than industry but still above average for the impact sector.

Ready to start your AI safety career?

Browse open remote AI safety and governance positions.

Browse AI Safety Jobs