AI Safety & Governance

Machine Learning Security Researcher

Trail of Bits

Location

Remote, USA

Type

Full-time

Posted

Jan 06, 2022

Mission

What you will drive

  • Conduct security research on machine learning systems to identify novel attack vectors and vulnerabilities for leading AI organizations.
  • Research adversarial machine learning techniques, including model poisoning, data extraction attacks, and jailbreaks for foundation models (one such technique is sketched after this list).
  • Develop testing frameworks, evaluation methodologies, and open-source tools for AI/ML security research.
  • Create comprehensive threat models for emerging AI/ML deployment patterns.
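
For context, a minimal sketch of one classic evasion technique in this family, the Fast Gradient Sign Method (FGSM; Goodfellow et al., 2015). The names `model`, `x`, and `label` are hypothetical placeholders for a PyTorch classifier and a labeled input batch, not anything specified in the posting.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, label, epsilon=0.03):
        """One-step FGSM: nudge each input feature in the direction
        that increases the model's loss. `model`, `x`, and `label`
        are hypothetical placeholders."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # Perturb by +/- epsilon along the sign of the input gradient,
        # then clamp back into the valid pixel range [0, 1].
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

A one-step attack like this is a common first baseline in the kind of testing frameworks and evaluation methodologies described above; stronger iterative attacks (e.g. PGD) build on the same gradient-sign idea.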

Impact

The difference you'll make

This role strengthens the security of AI systems, helping to prevent malicious attacks and enabling safer deployment of machine learning technologies in support of responsible AI development and governance.

Profile

What makes you a great fit

  • Experience in security research on machine learning systems.
  • Knowledge of adversarial machine learning techniques such as model poisoning, data extraction attacks, and jailbreaks (a related attack is sketched after this list).
  • Ability to develop testing frameworks, evaluation methodologies, and open-source tools for AI/ML security.
  • Skills in creating threat models for AI/ML deployment patterns and contributing to the community through papers, presentations, and open-source work.
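
As a concrete example from the privacy-attack family that sits alongside data extraction, a minimal sketch of loss-based membership inference (Yeom et al., 2018), which tests whether a given example was in a model's training set. `model`, `x`, `y`, and `threshold` are hypothetical placeholders; in practice the threshold is calibrated on data known to be outside the training set.

    import torch
    import torch.nn.functional as F

    def loss_threshold_membership(model, x, y, threshold=0.5):
        """Loss-based membership inference: training examples tend to
        incur lower loss than unseen examples, so a sufficiently low
        loss is evidence of training-set membership. All arguments
        are hypothetical placeholders."""
        model.eval()
        with torch.no_grad():
            per_example_loss = F.cross_entropy(model(x), y, reduction="none")
        # True where loss falls below the calibrated threshold, i.e.
        # where the example is predicted to be a training-set member.
        return per_example_loss < threshold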

Benefits

What's in it for you

The posting does not specify benefits, compensation, or perks.

About

Inside Trail of Bits


Trail of Bits is a cybersecurity research and consulting firm focused on improving software security through advanced research and tools, with a growing emphasis on AI/ML security.