AI Ethics and Safety Policy Researcher
Google DeepMind
Location
Remote (US)
Type
Full-time
Posted
Jan 31, 2026
Compensation
USD 147,000 – 216,000
Mission
What you will drive
Core responsibilities:
- Systematically identify risks associated with emerging and proliferating AI capabilities
- Conduct original research on identified challenges, gathering information from a variety of sources
- Design and build operational frameworks for mitigating model risks, converting them into standardized artifacts
- Collaborate with model development teams to help them adopt and apply these frameworks
Impact
The difference you'll make
This role ensures that Google DeepMind develops and deploys its AI technology in line with the company's AI Principles, proactively identifying and mitigating emerging AI ethics and safety challenges to create widespread public benefit.
Profile
What makes you a great fit
Required skills and experience:
- A PhD, or equivalent experience, in a relevant field such as AI ethics or safety, computer science, social sciences, or public policy
- Proven expertise in AI ethics, AI policy or a related field
- Demonstrable track record of implementing policies
- Strong research and writing skills, evidenced by publications in top journals and conference proceedings
Benefits
What's in it for you
The US base salary range for this full-time position is between $147,000 and $216,000, plus bonus, equity, and benefits. Google DeepMind values diversity of experience, knowledge, backgrounds, and perspectives and is committed to equal employment opportunities.
About
Inside Google DeepMind
Google DeepMind is a team of scientists, engineers, and machine learning experts working to advance the state of the art in artificial intelligence. We apply these technologies for widespread public benefit and scientific discovery, while treating safety and ethics as the highest priority.