# AI Security Book

*Artificial intelligence security from first principles: fundamentals, offensive techniques, and enterprise risk management.*
## About This Book
This is a practitioner's reference for understanding, attacking, and defending AI systems. It's built for security professionals who need to operate in a world where AI is the attack surface, the weapon, and the infrastructure they're protecting.
**Who it's for:**
- Red teamers and pentesters scoping AI engagements
- GRC and risk professionals building AI governance programs
- Security engineers hardening ML pipelines and LLM deployments
- Anyone bridging offensive security and AI
**What it covers:**
| Section | What's Inside |
|---|---|
| Fundamentals & Terminology | How neural networks, transformers, and LLMs actually work — from neurons to inference. No hand-waving. |
| Offensive AI | The full AI attack surface: prompt injection, jailbreaking, data poisoning, model extraction, adversarial examples, AI-enabled ops. Plus red team methodology and tooling. |
| Enterprise AI Risk & Controls | CIA triad applied to AI, governance frameworks (NIST AI RMF, EU AI Act, ISO 42001), security architecture, third-party risk, and risk register templates. |
## How to Navigate

- **Fundamentals** if you're new to AI/ML. Every offensive technique and risk control makes more sense once you understand how the underlying systems work.
- **Offensive AI** if you already have the ML background and want to start red teaming AI systems immediately.
- **Enterprise Risk** if you're building governance, writing policy, or assessing AI risk in your organization.
- **Search**: press `S` or click the magnifying glass to search across all pages.
## Quick Reference
| Need | Go To |
|---|---|
| Understand how LLMs work | How LLMs Work |
| The AI attack surface | AI Attack Surface |
| Prompt injection techniques | Prompt Injection |
| Jailbreaking methods | Jailbreaking |
| AI red team engagement guide | Red Team Methodology |
| Set up a local AI lab | Building a Local Lab |
| OWASP LLM Top 10 | OWASP LLM Top 10 |
| MITRE ATLAS framework | MITRE ATLAS |
| CIA triad for AI systems | CIA Triad Applied to AI |
| AI governance frameworks | Governance Frameworks |
| Risk register template | AI Risk Register |
| Practice and CTFs | Practice Labs & CTFs |
| Research papers | Reading List |
**Keyboard shortcuts:**

| Key | Action |
|---|---|
| `S` | Open search |
| `←` / `→` | Previous / next page |
| `T` | Toggle sidebar |
## Variables Used Throughout
| Variable | Meaning |
|---|---|
| `$TARGET` | Target AI system URL or API endpoint |
| `$MODEL` | Target model name (e.g., `gpt-4`, `claude-3`) |
| `$API_KEY` | API key for the target service |
| `$LHOST` | IP or hostname of your attacker machine |
| `$LOCAL_MODEL` | Your local model (e.g., `llama3`, `mistral`) |
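As a sketch of how these variables appear in later examples, here is what a request against a chat-completions-style endpoint might look like. The endpoint URL, key, and JSON schema below are placeholder assumptions, not real engagement values; the command is printed rather than executed so it can be reviewed first.

```shell
# Hypothetical engagement values -- substitute your target's real details.
TARGET="https://api.example.com/v1/chat/completions"   # $TARGET: API endpoint
MODEL="gpt-4"                                          # $MODEL: target model name
API_KEY="sk-REDACTED"                                  # $API_KEY: service credential

# Assemble the request body; the schema assumes an OpenAI-style chat API.
BODY=$(printf '{"model": "%s", "messages": [{"role": "user", "content": "Hello"}]}' "$MODEL")

# Print the curl command instead of running it, so it can be reviewed first.
echo curl -s "$TARGET" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d "$BODY"
```

Swapping in your own values (or exporting them once per shell session) makes every later command in the book copy-pasteable.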
Built by Jashid Sany for AI security research, red teaming, and risk management.