AI Security Bootcamp: Singapore 2026
Posted Feb 28, 2026
A 7-day intensive program for security professionals shaping how we secure emerging AI systems.
April 20–26, 2026 · Singapore · In-Person
Application deadline: 15 March 2026 · Decisions announced: 28 March 2026
Overview
This iteration of the AI Security Bootcamp explores the rapidly evolving threat landscape of frontier AI systems, equipping security professionals with the knowledge and hands-on skills to defend against current and emerging risks.
As AI systems become more capable and integrated into critical infrastructure, new attack surfaces and failure modes are emerging that traditional security training doesn't cover. This program is designed to fill that gap, providing an intensive, practitioner-focused curriculum that prepares participants to engage with the most pressing AI security challenges we see today, and grapple with how the risks will evolve in the future.
Participants will complete pre-work before the program to establish baseline ML fundamentals, followed by an immersive week delivered through demos, lectures, guest speakers, and hands-on red/blue exercises that build skills across the modern AI system stack.
The Program
Day 1: Introduction & Threat Modeling
Day 2: Adversarial Attacks, Watermarking & Data Security
Day 3: LLM Security
Day 4: Infrastructure Security
Day 5: Weight Security, Verification & Formal Methods
Day 6: Data Center Security & ML Stack Threat Modeling
Day 7: AI Control & Hardware Governance
What You'll Learn
- Develop a threat model for frontier AI systems: from current deployments to the security challenges posed by increasingly capable systems
- Build hands-on capability across the full attack surface: adversarial techniques, infrastructure exploitation, supply chain attacks, and model-level vulnerabilities
- Understand how attacks and defenses scale as AI capabilities increase
- Engage with security challenges that frontier AI organizations are actively working on: problems not yet covered in standard training curricula
- Position yourself for high-impact roles at the frontier: AI labs, government programs, and research institutions shaping how the field develops
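To give a flavour of the adversarial-attack material above, here is a deliberately minimal sketch of the fast gradient sign method (FGSM) against a toy, hand-written linear classifier. Everything here (the weights, the inputs, the epsilon budget) is illustrative and not taken from the curriculum; the point is that for a linear model the input gradient is simply the weight vector, so a single signed step within a small L-infinity budget can flip the model's decision.

```python
import numpy as np

# Toy linear classifier: score = w . x + b; a positive score means "benign".
# These weights are made up purely for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def score(x):
    return float(w @ x + b)

def fgsm_perturb(x, eps):
    # FGSM against a linear model: the gradient of the score with respect
    # to the input is just w, so stepping by -eps * sign(w) drives the
    # score down as fast as possible under an L-infinity budget of eps.
    return x - eps * np.sign(w)

x = np.array([0.5, -0.5, 1.0])       # clean input, classified "benign"
x_adv = fgsm_perturb(x, eps=0.8)     # adversarial copy within eps of x

print(score(x) > 0)      # True: the clean input scores positive
print(score(x_adv) > 0)  # False: a bounded perturbation flips the decision
```

Real attacks target deep networks rather than a fixed linear score, but the same gradient-sign idea carries over, which is why small, carefully chosen input changes can defeat otherwise accurate models.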
Who Should Attend
Security professionals ready to secure frontier AI systems at every layer: from user-facing applications and developer model APIs, to the infrastructure hosting the models and the governance frameworks for emerging threats.
We want our cohort to span a wide range of expertise. Whether your background is offensive security, incident response, threat intelligence, infrastructure, or application security, the AI-specific threat models and techniques we cover will push what you already know into new territory.
Prerequisites
5+ years of hands-on security experience. No prior AI or ML background is needed; the pre-work covers what's necessary.
Selection prioritizes candidates interested in frontier AI risk, high-consequence failure modes, or work involving sophisticated threat actors.
Experience with deep learning frameworks (e.g., PyTorch) is a plus but not required. We want to make this accessible to security professionals from a variety of backgrounds, so we provide comprehensive pre-work to get everyone up to speed on the AI fundamentals needed to engage with the curriculum.
Timing
AISB Singapore runs April 20–26, overlapping with Black Hat Asia (April 21–24) and just before DEF CON (April 28–30). If you're already planning to attend DEF CON, this program fits naturally into the same trip: the bootcamp ends just before DEF CON opens.
Cost & Selection
Accommodation included. Limited travel support available—note this in your application.
Selection is competitive: we accept 10 to 12 participants. The main cost is your time: full attendance for all seven days and pre-reading completed before arrival.
Team
Program Lead
Pranav Gade
Research engineer at Conjecture. Created AISB to bridge AI safety and security; leads curriculum design and program direction.
Security Lead
Nitzan Shulman
Head of Cyber Security at the Heron AI Security Initiative, with 6+ years of security research specializing in IoT, robotics, malware, and AI security.
Bootcamp Partner
Singapore AI Safety Hub (SASH)
Local execution and institutional linkage supported by the Singapore AI Safety Hub.
FAQs
- Who else will be in the room?
- What does “frontier AI security” mean in practice?
- What does the full application process look like?
- Does the program cover accommodation and travel?
- What is the time commitment?
- What happens after the program?
- I have more questions.
Ready to Apply?
Applications close 15 March 2026. We review on a rolling basis, so early applications are encouraged. Let us know in your application if you'd like a decision sooner.
Reach out to [email protected] to express interest or ask questions about the program.
Apply Now