AI Safety & Governance
Public Security Policy Researcher
Center for AI Risk Management and Alignment
Location: Remote, Global
Type: Full-time
Posted: Jan 06, 2022
Mission
What you will drive
- Conduct policy research on AI governance, risk mitigation, and emergency response frameworks to address advanced AI system risks.
- Assess societal vulnerabilities to AI systems across critical infrastructure and institutions.
- Develop comprehensive AI disaster scenario plans for different jurisdictional contexts.
- Draft policy briefs and reports translating complex technical concepts into actionable recommendations.
- Create simulation materials and exercises to test emergency response protocols for AI system failures.
Impact
The difference you'll make
In this role, you will develop frameworks and response plans that mitigate risks from advanced AI systems, helping protect critical infrastructure and institutions from potential AI-related disasters.
Profile
What makes you a great fit
- Experience in policy research, particularly in technology or AI governance.
- Ability to assess societal vulnerabilities and develop disaster scenario plans.
- Strong skills in drafting policy briefs and reports that translate technical concepts into actionable recommendations.
- Experience in creating simulation materials and exercises for emergency response protocols.
Benefits
What's in it for you
Compensation, benefits, and perks are not specified in this posting.
About
Inside Center for AI Risk Management and Alignment
The Center for AI Risk Management and Alignment works to manage risks from advanced AI systems and support their safe, aligned development through research and policy work in AI governance.