Application Guide

How to Apply for AI Red Team Delivery Manager at Lakera AI

🏢 About Lakera AI

Lakera AI is pioneering AI safety by focusing specifically on security testing and defensive measures for large language models, positioning itself at the forefront of a critical emerging field. Unlike general AI companies, Lakera specializes in red teaming and vulnerability identification for LLMs, offering the chance to work on cutting-edge security challenges that directly impact how AI systems are deployed safely. Working here means contributing to foundational safety standards that will shape the entire AI industry's approach to security.

About This Role

As AI Red Team Delivery Manager, you'll lead offensive security operations designed to systematically find and exploit vulnerabilities in AI systems, particularly large language models. This role involves developing novel testing methodologies, setting industry standards for AI security evaluation, and coordinating with engineering teams to translate findings into defensive improvements. Your work will directly influence how organizations secure their AI deployments against emerging threats.

💡 A Day in the Life

A typical day involves planning and reviewing red team operations targeting LLM vulnerabilities, developing new testing methodologies for emerging threat vectors, and coordinating with engineering teams to prioritize and implement defensive fixes based on findings. You'll likely spend time researching new attack techniques, documenting security standards, and aligning testing approaches across different AI system deployments.

🎯 Who Lakera AI Is Looking For

  • Has 3+ years of experience in AI/ML security, red teaming, or adversarial testing, preferably with exposure to large language models
  • Demonstrates experience developing security testing frameworks or methodologies, not just executing existing tests
  • Shows ability to bridge technical security work with managerial coordination across engineering and research teams
  • Possesses knowledge of current LLM vulnerabilities (prompt injection, data leakage, model extraction) and defensive techniques

📝 Tips for Applying to Lakera AI

1. Highlight specific AI red teaming or security testing projects in your resume, quantifying impact (e.g., 'identified 15+ vulnerability classes in LLM deployments')

2. Demonstrate knowledge of Lakera's focus areas by referencing their public research or blog posts on AI security in your application

3. Prepare examples of how you've developed testing methodologies rather than just following established procedures

4. Show experience coordinating between offensive security teams and defensive engineering teams to implement fixes

5. Emphasize any experience setting standards or frameworks, as this role specifically mentions 'setting new standards for AI security testing'

✉️ What to Emphasize in Your Cover Letter

  • Your experience with AI/LLM-specific security testing methodologies and frameworks
  • Examples of coordinating red team findings into defensive improvements with engineering teams
  • Your vision for setting new standards in AI security testing and evaluation
  • Why you're specifically interested in Lakera's focused approach to AI safety versus general AI companies


🔍 Research Before Applying

To stand out, make sure you've researched:

  • Lakera's public research, blog posts, or talks on AI security and red teaming
  • Their specific focus areas within LLM security (check their website for case studies or technical content)
  • The team's background and expertise through LinkedIn or company profiles
  • Recent industry developments in AI red teaming and security standards they might be responding to

💬 Prepare for These Interview Topics

Based on this role, you may be asked about:

1. Walk me through how you would design a red teaming operation for a new LLM deployment.
2. What methodologies would you use to test for emerging LLM vulnerabilities like prompt injection or training data extraction?
3. How would you coordinate findings from red team operations with engineering teams to implement defensive measures?
4. What standards or frameworks do you believe should exist for AI security testing that don't currently?
5. How do you stay current with evolving AI security threats and defensive techniques?

⚠️ Common Mistakes to Avoid

  • Applying with only general cybersecurity experience without AI/ML security specifics
  • Focusing only on executing tests rather than developing methodologies or setting standards
  • Failing to demonstrate understanding of LLM-specific vulnerabilities and defenses

📅 Application Timeline

This position is open until filled. However, we recommend applying as soon as possible, since roles at mission-driven organizations tend to fill quickly.

Typical hiring timeline:

1. Application Review (1-2 weeks)
2. Initial Screening (phone call or written assessment)
3. Interviews (1-2 rounds, usually virtual)
4. Offer (congratulations!)

Ready to Apply?

Good luck with your application to Lakera AI!