AI Safety & Governance · Part-time

ML Engineer, AI Risk Initiative

Massachusetts Institute of Technology, FutureTech

Posted

Jan 06, 2026

Location

Remote (US)

Type

Part-time

Compensation

Not specified

Mission

What you will drive

Core responsibilities:

  • Build LLM-augmented pipelines to accelerate evidence synthesis and systematic reviews for AI risks and mitigations
  • Develop modules for document identification, screening, extraction, and classification with human validation processes
  • Integrate components into end-to-end evidence synthesis pipelines for the organization's review project
  • Refactor systems for reuse across different AI Risk Initiative projects
  • Design interfaces to optimize human-AI collaboration and document findings for knowledge transfer

Impact

The difference you'll make

This role accelerates evidence synthesis for AI risks and mitigations, enabling more effective identification and implementation of safety measures to prevent potential harms from advanced AI systems.

Profile

What makes you a great fit

Required skills and qualifications:

  • Experience with machine learning engineering and LLM-augmented pipeline development
  • Ability to develop modules for document processing and classification systems
  • Experience with system integration and refactoring for reuse across projects
  • Skills in designing interfaces for human-AI collaboration
  • Background in evidence synthesis or systematic review processes

Benefits

What's in it for you

The job description does not list specific benefits or compensation details.

About

Inside Massachusetts Institute of Technology, FutureTech


Massachusetts Institute of Technology's FutureTech initiative focuses on AI risk research and mitigation strategies, working to understand and address potential harms from advanced artificial intelligence systems.