Application Guide

How to Apply for Research Scientist, Open Source Technical Safeguards

at AI Security Institute (AISI)

🏢 About AI Security Institute (AISI)

The AI Security Institute (AISI) is a specialized organization focused on mitigating AI-driven harms through technical safeguards, working at the intersection of cutting-edge AI research and real-world security applications. Unlike broader tech companies, AISI targets specific, high-impact threats such as AI-generated CSAM and NCII, offering a mission-driven environment where technical work directly addresses urgent societal challenges.

About This Role

This role involves developing technical safeguards that make open-weight models resistant to tampering aimed at producing harmful content, synthesizing threat intelligence, and creating scalable screening methodologies for platforms. It is impactful because it directly combats the proliferation of AI-generated CSAM and NCII by targeting the real-world supply chain, and it requires both deep technical expertise and collaboration with NGOs and platforms.

💡 A Day in the Life

A typical day might involve analyzing threat intelligence reports on emerging AI-generated harmful content, prototyping Python-based screening algorithms using PyTorch/JAX, and collaborating with NGO partners to refine enforcement protocols. You could also be testing safeguards on open-weight models in development environments and documenting scalable methodologies for platform integration.
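To make the "screening algorithms" idea concrete, here is a deliberately simplified, hypothetical sketch (not AISI's actual tooling, and the function names are invented for illustration) of a hash-based screening check, the kind of component a content-screening pipeline might start from:

```python
import hashlib


def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw bytes."""
    return hashlib.sha256(data).hexdigest()


def screen_artifact(data: bytes, blocklist: set[str]) -> bool:
    """Return True if the artifact's hash matches a known-bad entry.

    Production systems typically use robust perceptual hashes rather
    than exact cryptographic hashes, since changing a single bit of
    the content defeats an exact-match check like this one.
    """
    return sha256_digest(data) in blocklist


# Hypothetical usage: flag an exact known-bad file, pass everything else.
known_bad = {sha256_digest(b"known-bad-sample")}
```

In an interview or portfolio, the interesting discussion is usually not the matching itself but the surrounding engineering: how the blocklist is sourced and updated, how false positives are handled, and how the check scales across a platform's upload volume.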

🎯 Who AI Security Institute (AISI) Is Looking For

  • Has 3+ years in applied ML, trust & safety, or security engineering, with hands-on experience in building pipelines and tooling for content moderation or similar systems.
  • Possesses deep familiarity with open-weight image/video models (e.g., diffusion models, LoRA), model hosting ecosystems, and the limitations of pre-deployment safeguards.
  • Is proficient in Python, ML stacks like PyTorch/JAX, data engineering, and systems skills, with a track record of developing scalable, automated solutions.
  • Demonstrates an understanding of threat intelligence related to AI-generated harmful content and can collaborate with NGOs and platforms to co-develop enforcement protocols.

📝 Tips for Applying to AI Security Institute (AISI)

1. Highlight specific projects where you built or optimized pipelines for content moderation, trust & safety, or security tooling, emphasizing scalability and real-world impact.

2. Detail your experience with open-weight models (e.g., diffusion, LoRA) and model hosting ecosystems, including any work on safeguards or tampering prevention.

3. Showcase your Python and ML stack skills with concrete examples, such as GitHub repositories or case studies involving PyTorch/JAX in production environments.

4. Discuss your knowledge of AI-generated CSAM/NCII threats and how you've synthesized threat intelligence or developed screening methodologies in past roles.

5. Tailor your resume to emphasize collaboration with NGOs or platforms, as this role involves co-developing best-practice protocols for enforcement.

✉️ What to Emphasize in Your Cover Letter

  • Explain your motivation for working at AISI, focusing on their mission to combat AI-generated harmful content and why this specific role aligns with your technical and ethical interests.
  • Provide a brief case study from your past experience where you developed technical safeguards, threat intelligence, or scalable screening methods relevant to CSAM/NCII or similar harms.
  • Highlight your familiarity with open-weight models and hosting ecosystems, and how you've addressed limitations in pre-deployment safeguards in previous projects.
  • Mention your ability to collaborate with NGOs and platforms, citing examples of co-developing protocols or working in cross-functional teams on security or moderation initiatives.


🔍 Research Before Applying

To stand out, take time to:

  • Investigate AISI's public reports or publications on AI security, focusing on their work related to CSAM, NCII, or open-weight model safeguards to understand their technical approach.
  • Look into the broader ecosystem of AI-generated harmful content and recent incidents or trends, as this role requires up-to-date threat intelligence knowledge.
  • Research NGOs and platforms that AISI might partner with (e.g., in hosting, app stores, or OS enforcement) to grasp the collaborative landscape of this role.
  • Explore the limitations of current pre-deployment safeguards for AI models, as this is a key area mentioned in the job description for developing new methodologies.

💬 Prepare for These Interview Topics

Based on this role, you may be asked about:

1. Describe a past project where you built a scalable, automated screening pipeline for content moderation or security, and discuss the challenges you faced in deployment.
2. How would you design a technical safeguard to prevent tampering with open-weight models specifically for mitigating AI-generated CSAM?
3. Explain your experience with open-weight image/video models (e.g., diffusion, LoRA) and how you've worked with model hosting ecosystems in a security context.
4. Discuss how you synthesize threat intelligence and translate it into actionable screening methodologies that platforms can realistically implement.
5. Describe a time you collaborated with NGOs or external partners on security or enforcement protocols, and what outcomes were achieved.

⚠️ Common Mistakes to Avoid

  • Applying with a generic resume that doesn't highlight specific experience in trust & safety, content moderation, or security engineering related to AI models.
  • Overemphasizing theoretical ML knowledge without demonstrating practical skills in building pipelines, tooling, or scalable systems for real-world applications.
  • Failing to show awareness of the specific threats (AI-generated CSAM/NCII) or the technical nuances of open-weight models and hosting ecosystems in your application materials.

📅 Application Timeline

This position is open until filled. However, we recommend applying as soon as possible, since roles at mission-driven organizations tend to fill quickly.

Typical hiring timeline:

1. Application Review: 1-2 weeks
2. Initial Screening: phone call or written assessment
3. Interviews: 1-2 rounds, usually virtual
4. Offer: congratulations!

Ready to Apply?

Good luck with your application to AI Security Institute (AISI)!