AI Safety & Governance
Research Engineer, Novel AI Platforms for Multiscale Alignment
Center for AI Risk Management and Alignment
Location: Remote, Global
Type: Full-time
Posted: Jan 06, 2022
Mission
What you will drive
- Develop novel AI platforms that address critical alignment challenges and support practical LLM agents
- Design and implement architectures for agent execution environments and interaction platforms
- Develop optimization algorithms for multi-objective and cooperative AI systems
- Create mechanisms for conflict resolution and preference aggregation in multi-agent settings (a minimal sketch follows this list)
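To give candidates a concrete flavor of the last item, here is a minimal, illustrative sketch of one simple preference-aggregation rule, a Borda count over agents' rankings. The agents, options, and the choice of Borda count are assumptions made for illustration, not a description of the Center's actual methods.

```python
from collections import defaultdict

def borda_aggregate(rankings: list[list[str]]) -> list[str]:
    """Aggregate agents' preference rankings with a Borda count.

    Each ranking lists options from most to least preferred; an option
    in position i of an n-option ranking scores n - 1 - i points.
    """
    scores: dict[str, int] = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for i, option in enumerate(ranking):
            scores[option] += n - 1 - i
    # Highest total score first; ties broken alphabetically for determinism.
    return sorted(scores, key=lambda o: (-scores[o], o))

# Hypothetical example: three agents rank three candidate actions.
agents = [
    ["defer", "act", "ask"],
    ["ask", "defer", "act"],
    ["defer", "ask", "act"],
]
print(borda_aggregate(agents))  # ['defer', 'ask', 'act']
```

In practice, aggregation mechanisms for multi-agent settings must also contend with strategic misreporting and incomparable preferences; the Borda rule above is only the simplest starting point.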
Impact
The difference you'll make
In this role, you will develop novel AI platforms that address critical alignment challenges, helping to ensure AI systems are safe, cooperative, and aligned with human values.
Profile
What makes you a great fit
- Experience developing AI platforms and architectures
- Knowledge of optimization algorithms for multi-objective systems (see the Pareto-front sketch after this list)
- Ability to create conflict resolution mechanisms in multi-agent settings
- Technical infrastructure development skills for experimental AI alignment approaches
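As an illustration of what the multi-objective requirement can mean in practice, the sketch below filters candidate policies down to their Pareto front under two hypothetical objectives (safety and capability scores). The objective names and the scoring data are invented for the example.

```python
import numpy as np

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Return indices of non-dominated points, maximizing every objective.

    A point is dominated if some other point is >= on all objectives
    and strictly > on at least one.
    """
    n = points.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        dominated = (np.all(points >= points[i], axis=1)
                     & np.any(points > points[i], axis=1))
        if dominated.any():
            keep[i] = False
    return np.flatnonzero(keep)

# Hypothetical candidate policies scored on (safety, capability).
scores = np.array([[0.9, 0.2], [0.7, 0.7], [0.6, 0.6], [0.3, 0.9]])
print(pareto_front(scores))  # [0 1 3]; [0.6, 0.6] is dominated by [0.7, 0.7]
```

Pareto filtering is one of the simplest multi-objective tools; scalarization, constrained optimization, and cooperative bargaining solutions are common next steps.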
Benefits
What's in it for you
No benefits information was provided in the job description.
About
Inside Center for AI Risk Management and Alignment
The Center for AI Risk Management and Alignment pursues research and technical development aimed at addressing AI alignment challenges and building safe AI systems.