AI Security Book

Artificial intelligence security from first principles — fundamentals, offensive techniques, and enterprise risk management.


About This Book

This is a practitioner's reference for understanding, attacking, and defending AI systems. It's built for security professionals who need to operate in a world where AI is the attack surface, the weapon, and the infrastructure they're protecting.

Who it's for:

  • Red teamers and pentesters scoping AI engagements
  • GRC and risk professionals building AI governance programs
  • Security engineers hardening ML pipelines and LLM deployments
  • Anyone bridging offensive security and AI

What it covers:

| Section | What's Inside |
| --- | --- |
| Fundamentals & Terminology | How neural networks, transformers, and LLMs actually work — from neurons to inference. No hand-waving. |
| Offensive AI | The full AI attack surface: prompt injection, jailbreaking, data poisoning, model extraction, adversarial examples, AI-enabled ops. Plus red team methodology and tooling. |
| Enterprise AI Risk & Controls | CIA triad applied to AI, governance frameworks (NIST AI RMF, EU AI Act, ISO 42001), security architecture, third-party risk, and risk register templates. |

How to Navigate

Start with the Fundamentals if you're new to AI/ML. Every offensive technique and risk control makes more sense when you understand how the underlying systems work.

Jump to Offensive AI if you already have the ML background and want to start red teaming AI systems immediately.

Go to Enterprise Risk if you're building governance, writing policy, or assessing AI risk in your organization.

Use search. Press S or click the magnifying glass to search across all pages.


Quick Reference

| Need | Go To |
| --- | --- |
| Understand how LLMs work | How LLMs Work |
| The AI attack surface | AI Attack Surface |
| Prompt injection techniques | Prompt Injection |
| Jailbreaking methods | Jailbreaking |
| AI red team engagement guide | Red Team Methodology |
| Set up a local AI lab | Building a Local Lab |
| OWASP LLM Top 10 | OWASP LLM Top 10 |
| MITRE ATLAS framework | MITRE ATLAS |
| CIA triad for AI systems | CIA Triad Applied to AI |
| AI governance frameworks | Governance Frameworks |
| Risk register template | AI Risk Register |
| Practice and CTFs | Practice Labs & CTFs |
| Research papers | Reading List |

Keyboard shortcuts:

  • S — Open search
  • ← / → — Previous / next page
  • T — Toggle sidebar

Variables Used Throughout

| Variable | Meaning |
| --- | --- |
| `$TARGET` | Target AI system URL or API endpoint |
| `$MODEL` | Target model name (e.g., `gpt-4`, `claude-3`) |
| `$API_KEY` | API key for the target service |
| `$LHOST` | Your attacker machine |
| `$LOCAL_MODEL` | Your local model (e.g., `llama3`, `mistral`) |
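As a concrete sketch of how these variables appear in later chapters, here is a minimal example of probing a target endpoint. It assumes an OpenAI-compatible chat completions API; the URL, model name, and key below are hypothetical placeholders, not real engagement values:

```shell
# Hypothetical placeholder values — substitute your engagement's details.
TARGET="https://api.example.com/v1/chat/completions"   # $TARGET
MODEL="gpt-4"                                          # $MODEL
API_KEY="sk-REDACTED"                                  # $API_KEY

# Build a minimal chat-completions request body (OpenAI-compatible
# schema assumed — adjust for other target APIs).
PAYLOAD=$(printf '{"model": "%s", "messages": [{"role": "user", "content": "Hello"}]}' "$MODEL")

# Send it (commented out here — run only against targets you are
# authorized to test):
# curl -s "$TARGET" \
#   -H "Authorization: Bearer $API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"

echo "$PAYLOAD"
```

Chapters that use `$LHOST` or `$LOCAL_MODEL` follow the same convention: set the variable once in your shell, then reuse it across the commands in that section.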

Built by Jashid Sany for AI security research, red teaming, and risk management.