AI Product Security Profiles

Overview

This section provides security profiles for major AI products and developer tools. Each profile covers the product's architecture, known vulnerability classes, notable CVEs with recommended controls, and what to test during red team engagements.

How to Use These Profiles

For red teamers: Start with the vulnerability classes section to understand what attack surface exists, then reference specific CVEs for proven exploitation paths.

For defenders: Focus on the controls column in each CVE table and the hardening recommendations at the bottom of each page.

For risk managers: Use the product profiles to inform vendor risk assessments and AI tool approval decisions.

Product Index

ProductVendorPrimary RiskProfile
Claude (Chat, API)AnthropicPrompt injection, data extraction, memory manipulationClaude
Claude CodeAnthropicRCE via config injection, API key theft, command injectionClaude
CursorAnysphereRCE via MCP poisoning, config injection, outdated ChromiumCursor
ChatGPTOpenAISSRF, memory injection, prompt injection, browser agent exploitsChatGPT
WindsurfCodeiumShared VS Code fork vulns, Chromium CVEs, extension flawsWindsurf
GitHub CopilotGitHub/MicrosoftWorkspace manipulation, prompt injection, extension vulnsGitHub Copilot
GeminiGooglePrompt injection, data exfiltration via extensions, calendar leaksGemini

Common Vulnerability Patterns Across AI Products

Several vulnerability classes appear repeatedly across products:

MCP Configuration Injection — nearly every AI IDE that supports Model Context Protocol has had vulnerabilities where malicious MCP configurations in shared repositories execute code without user consent. This is the supply chain attack vector of the AI tooling era.

Prompt Injection → Tool Abuse chains — the pattern of using prompt injection to trigger tool calls (file writes, API calls, code execution) appears across ChatGPT, Claude, Cursor, and Copilot.

Outdated Chromium in Electron forks — Cursor and Windsurf both ship with outdated Chromium builds inherited from their VS Code fork, exposing developers to 80-100+ known CVEs at any given time.

Configuration-as-Execution — AI tools increasingly treat configuration files as execution logic. Files that were historically passive metadata (.json, .toml, .yaml) now trigger code execution, tool launches, and API calls.

Freshness Notice

AI product CVEs are published frequently. This section captures major vulnerability classes and notable CVEs as of early 2026. Always check NVD, vendor security advisories, and MITRE ATLAS for the latest disclosures.