Research Engineer, Frontier Safety Risk Assessment
Google DeepMind
Location
Remote
Type
Full-time
Posted
Jan 03, 2026
Compensation
USD 136,000
Mission
What you will drive
Core responsibilities:
- Design, implement, and empirically validate approaches to assessing and managing catastrophic risk from current and future frontier AI systems
- Identify new risk pathways within current areas (loss of control, ML R&D, cyber, CBRN, harmful manipulation) or in new ones
- Conceive of, design, and develop new ways to measure pre-mitigation and post-mitigation risk
- Forecast and plan scenarios for future risks that are not yet material
Impact
The difference you'll make
This role contributes to measuring and assessing potential catastrophic risks from current and future AI systems, helping to ensure that safety and ethics remain the highest priority in AI development and that harmful outcomes from frontier AI technologies are prevented.
Profile
What makes you a great fit
Required skills and experience:
- Extensive research experience with deep learning and/or foundation models (e.g., PhD in machine learning)
- Adept at generating ideas, designing experiments, and implementing them in Python with real AI systems
- Keen to address risks from foundation models, with considered ideas about how to do so
- Strong, clear communication skills with ability to engage technical stakeholders
Benefits
What's in it for you
Benefits include:
- Enhanced maternity, paternity, adoption, and shared parental leave
- Private medical and dental insurance for yourself and any dependents
- Flexible working options
- Healthy food, an on-site gym, faith rooms, and terraces
- Relocation support and immigration services
- Bonus, equity, and benefits in addition to salary
About
Inside Google DeepMind
Google DeepMind is a team of scientists, engineers, and machine learning experts working to advance artificial intelligence for widespread public benefit and scientific discovery, with safety and ethics as the highest priority.