AI Ethics and Safety Policy Researcher
Google DeepMind
Posted
Jan 31, 2026
Location
USA
Type
Full-time
Compensation
$147,000 - $216,000
Mission
What you will drive
Core responsibilities:
- Systematically identify risks associated with emerging and proliferating AI capabilities
- Conduct original research on identified challenges, gathering information from a variety of sources
- Design and build operational frameworks for mitigating model risks, converting them into standardized artefacts
- Collaborate with model development teams to help them adopt and apply these frameworks
Impact
The difference you'll make
This role ensures that Google DeepMind develops and deploys its AI technology in line with the company's AI Principles, proactively identifying and mitigating emerging AI ethics and safety challenges to create widespread public benefit and advance scientific discovery.
Profile
What makes you a great fit
Required skills and experience:
- A PhD, or equivalent experience, in a relevant field such as AI ethics or safety, computer science, social sciences, or public policy
- Proven expertise in AI ethics, AI policy or a related field
- Demonstrable track record of implementing policies
- Strong research and writing skills, evidenced by publications in top journals and conference proceedings
Benefits
What's in it for you
The US base salary range for this full-time position is between $147,000 and $216,000 + bonus + equity + benefits. The organization values diversity of experience, knowledge, backgrounds and perspectives and is committed to equal employment opportunities.