Impact Careers

Researcher, Alignment

OpenAI

Posted

Dec 26, 2025

Location

San Francisco, CA

Type

Full-time

Compensation

$200,000

Mission

What you will drive

The Alignment team at OpenAI is dedicated to ensuring that AI systems are safe, trustworthy, and consistently aligned with human values. Our work focuses on developing methodologies that enable AI to robustly follow human intent across a wide range of scenarios, including adversarial or high-stakes situations.

As a Research Engineer/Research Scientist on the Alignment team, you will be at the forefront of ensuring AI systems consistently follow human intent. Your role involves designing and implementing scalable alignment solutions as model capabilities grow, and integrating human oversight into AI decision-making.

The two pillars of our approach are: (1) harnessing improved capabilities into alignment, and (2) centering humans by developing mechanisms that enable humans to express intent and effectively supervise AIs.

Profile

What makes you a great fit

  • PhD or equivalent experience in computer science, computational science, data science, or cognitive science
  • Strong engineering skills in designing and optimizing large-scale ML systems (e.g., PyTorch)
  • Deep understanding of alignment algorithms and techniques
  • Ability to develop data visualization or collection interfaces (TypeScript, Python)
  • Enthusiasm for fast-paced, collaborative research environments