Cluster Guide
5 Non-Technical AI Safety Roles You Can Apply For
When people think of AI safety careers, they picture machine-learning researchers writing papers on alignment. But the organisations doing this work also need policy experts, communicators, operations leaders, and programme managers. If you care about reducing AI risk and do not have a technical background, there is almost certainly a role that matches your skills.
Below are five non-technical roles that are in high demand across the AI safety ecosystem right now. For a broader overview of the field, see our complete AI safety career guide.
1. Policy Analyst
Policy analysts translate technical AI safety concepts into actionable regulatory recommendations. Day-to-day work involves writing policy memos, analysing measures such as the EU AI Act and U.S. executive orders on AI, and briefing policymakers on frontier-model risks. You need strong analytical writing, familiarity with the policy cycle, and enough technical literacy to read safety evaluations and benchmark reports critically.
Who hires: the Centre for the Governance of AI (GovAI), the Institute for AI Policy and Strategy (IAPS), and the Center for a New American Security (CNAS).
2. Operations Manager
Operations managers keep safety-focused organisations running. Responsibilities span hiring, budgeting, vendor management, and internal processes. In a fast-growing field where research teams can double in size within a year, the person who builds scalable systems is just as important as the person writing the next interpretability paper. You need experience managing cross-functional teams, comfort with ambiguity, and the ability to prioritise ruthlessly.
Who hires: Anthropic, the Center for AI Safety (CAIS), and Redwood Research.
3. Communications Lead
Communications leads shape how safety research reaches the public, journalists, and policymakers. This includes writing press releases, managing social channels, producing explainer content, and handling media enquiries during high-profile model releases. You need excellent writing skills, experience with media relations, and the ability to simplify technical concepts without distorting them.
Who hires: Anthropic, the Future of Life Institute, and the Partnership on AI.
4. Research Programme Manager
Research programme managers coordinate multi-team research agendas, manage grant portfolios, and ensure milestones are hit. In AI safety, this often means overseeing alignment research grants, organising workshops, and acting as the bridge between funders like Open Philanthropy and the research teams doing the work. You need project-management expertise, strong stakeholder communication, and enough familiarity with the research landscape to evaluate progress meaningfully.
Who hires: Open Philanthropy, the Centre for Effective Altruism, and the Machine Intelligence Research Institute (MIRI).
5. Field-Building Coordinator
Field-building coordinators grow the pipeline of talent entering AI safety. They design fellowship programmes, run career workshops, manage mentorship networks, and produce resources that help newcomers navigate the space. The role combines community management, event production, and strategic outreach. You need strong interpersonal skills, event-planning experience, and a genuine understanding of the AI safety talent bottleneck.
Who hires: 80,000 Hours, AI Safety Camp, and the Centre for Effective Altruism.
Where to start
The best way to break in is to demonstrate domain knowledge before you apply. Write about AI governance on the EA Forum, volunteer at an EAGx conference, or complete a short course like BlueDot Impact's AI safety fundamentals programme. Hiring managers consistently say that candidates who show genuine engagement with the field stand out far more than those with a generic policy or operations CV.