AI Safety & Governance Part-time

AI Red Teamer

Trajectory Labs

Posted

Jan 30, 2026

Location

Remote

Type

Part-time

Compensation

Up to $9999999

Mission

What you will drive

Core responsibilities:

  • Red-team products from frontier AI labs to identify security vulnerabilities before deployment
  • Perform jailbreaks of advanced models like GPT-5, Claude, and Gemini to uncover potential risks
  • Develop Python scripts to systematically test AI system security boundaries
  • Work independently with minimal guidance, taking ownership of testing approaches

Impact

The difference you'll make

This role contributes directly to making AI systems safer by identifying and documenting failure modes before deployment, helping prevent potential security risks in advanced AI models.

Profile

What makes you a great fit

Required skills and qualifications:

  • Experience with AI model testing and security assessment
  • Proficiency in Python programming for developing testing scripts
  • Ability to work independently with minimal guidance
  • Strong analytical skills for identifying and documenting AI system failure modes

Benefits

What's in it for you

No specific benefits, compensation, or perks mentioned in the job description.

About

Inside Trajectory Labs

Visit site →

Trajectory Labs appears to be an organization focused on AI safety and security, specifically working on testing frontier AI products for vulnerabilities before deployment.