ChatGPT — Security Profile

Product Overview

| Component | Description | Attack Surface |
|---|---|---|
| ChatGPT Web/App | Conversational AI with memory, file upload, code execution, web browsing, and image generation | Prompt injection, memory manipulation, data extraction, SSRF |
| ChatGPT API | Developer API (GPT-4o, GPT-4, GPT-3.5) | Prompt injection via applications, model extraction |
| ChatGPT Atlas | AI-powered browser with agent mode and browser memories | CSRF memory injection, prompt injection via web content, clipboard hijacking, weak anti-phishing controls |
| Custom GPTs | User-created GPT configurations with custom instructions and tools | System prompt extraction, action abuse, data exfiltration |
| ChatGPT Plugins/Actions | Third-party tool integrations | Indirect prompt injection via plugin responses, unauthorized actions |

ChatGPT Web & API

Notable CVEs and Vulnerabilities

| CVE / Finding | Severity | Description | Control |
|---|---|---|---|
| CVE-2024-27564 | 6.5 (Medium) | SSRF in pictureproxy.php of the ChatGPT codebase. Allows attackers to inject malicious URLs into input parameters, forcing the application to make unintended requests; over 10,000 attacks observed in one week. Note: OpenAI disputed the attribution, stating the vulnerable repo was not part of ChatGPT's production systems. | WAF rules for SSRF patterns; URL validation on all input parameters; monitor logs for SSRF indicators |
| Memory Injection (Tenable, 2025) | High | Seven vulnerabilities in GPT-4o and GPT-5 models. A CSRF flaw allows crafted websites to inject malicious instructions into ChatGPT's persistent memory; the corrupted memory persists across devices and sessions. | Periodically review stored memories; be cautious when asking ChatGPT to summarize untrusted websites |
| One-Click Prompt Injection | Medium | Crafted URLs of the form chatgpt.com/?q={Prompt} auto-execute the query when clicked. Combined with other techniques for data exfiltration. | Don't click ChatGPT URLs from untrusted sources; disable auto-execution of query parameters |
| Bing.com Allowlist Bypass | Medium | bing.com is allowlisted as safe in ChatGPT, so Bing ad-tracking links (bing.com/ck/a) can mask malicious URLs, rendering them in chat as trusted links. | Don't trust links rendered in ChatGPT output without independent verification |
| Zero-Click Data Exfiltration | High | Indirect prompt injection via the browsing context causes ChatGPT to exfiltrate conversation data by rendering images whose URL parameters encode that data, pointed at attacker-controlled servers. | Output filtering for encoded data in URLs; restrict image rendering from untrusted domains |
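
The output-filtering controls above (encoded data in image URLs, bing.com/ck/a masked links) can be sketched as a post-processing check on model output before rendering. This is a minimal heuristic, not production filtering: the allowlist contents, the crude eTLD+1 approximation, and the decision rules are illustrative assumptions.

```python
import re
from urllib.parse import urlparse, parse_qs

# Illustrative allowlist mirroring ChatGPT's trust of bing.com;
# the exact production allowlist is not public.
ALLOWLISTED_DOMAINS = {"bing.com", "openai.com"}

# Markdown image syntax: ![alt](url)
MD_IMAGE = re.compile(r"!\[[^\]]*\]\((?P<url>[^)\s]+)\)")

def flag_suspicious_links(model_output: str) -> list[str]:
    """Return image URLs in model output that could exfiltrate data."""
    findings = []
    for match in MD_IMAGE.finditer(model_output):
        url = match.group("url")
        parsed = urlparse(url)
        host = parsed.hostname or ""
        domain = ".".join(host.split(".")[-2:])  # crude eTLD+1 approximation
        # Rule 1: image fetched from a non-allowlisted domain with query
        # parameters -- those parameters can carry encoded conversation data.
        if domain not in ALLOWLISTED_DOMAINS and parse_qs(parsed.query):
            findings.append(url)
        # Rule 2: allowlisted bing.com, but the /ck/a ad-tracking path
        # can mask an arbitrary redirect target.
        elif domain == "bing.com" and parsed.path.startswith("/ck/a"):
            findings.append(url)
    return findings
```

A real filter would also normalize redirects and inspect base64/hex payload entropy; this sketch only shows where such a check would sit.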

ChatGPT Atlas (Browser)

| Finding | Severity | Description | Control |
|---|---|---|---|
| CSRF Memory Injection | High | Malicious websites inject persistent instructions into Atlas browser memories. Corrupted memory persists across sessions and can control future AI behavior. | Regularly audit browser memories; avoid browsing untrusted sites with Atlas |
| Clipboard Hijacking | High | Hidden "copy to clipboard" actions on web pages overwrite the clipboard with malicious links when Atlas navigates the site; later paste actions redirect to phishing sites. | Don't paste clipboard content after Atlas browsing sessions without inspecting it |
| Weak Anti-Phishing | High | LayerX testing showed Atlas stopped only 5.8% of malicious web pages (vs. 53% for Edge and 47% for Chrome). | Don't rely on Atlas as a primary browser; use traditional browsers with better security controls |
| Prompt Injection via Omnibox | Medium | The Atlas omnibox can be jailbroken by disguising malicious prompts as URLs. | Treat Atlas as an untrusted execution environment; don't use it for sensitive browsing |
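
The clipboard-hijacking control can be approximated with a crude static triage check: scan a page's source for clipboard-write calls before letting an agentic browser navigate it. Plain string matching is an explicit assumption here; real pages can obfuscate these calls, so this only flags the naive case.

```python
import re

# Heuristic patterns for the two standard browser clipboard-write APIs.
# Obfuscated or dynamically constructed calls will evade this check.
CLIPBOARD_WRITE = re.compile(
    r"""navigator\.clipboard\.writeText|document\.execCommand\(['"]copy['"]\)"""
)

def page_scripts_clipboard(html: str) -> bool:
    """True if the page source contains an un-obfuscated clipboard write."""
    return bool(CLIPBOARD_WRITE.search(html))
```

A finding from this check is not proof of hijacking (copy buttons are legitimate); it marks pages where clipboard contents should be inspected before pasting.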

What to Test in Engagements

□ System prompt extraction for Custom GPTs
□ Memory injection via malicious web content
□ One-click prompt injection via URL parameters
□ Data exfiltration via image rendering
□ Bing.com allowlist bypass for URL masking
□ Custom GPT action abuse — can injection trigger unauthorized API calls?
□ Plugin/action output injection — can plugin responses hijack conversation?
□ Atlas browser memory poisoning
□ Atlas clipboard hijacking
□ Cross-session data leakage via persistent memory
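
For the one-click prompt injection item, the chatgpt.com/?q={Prompt} format described in the Web & API table can be wrapped in a small helper that URL-encodes a canary prompt. The canary string is a placeholder; use only against accounts you are authorized to test.

```python
from urllib.parse import quote

def one_click_test_url(prompt: str) -> str:
    """Build a chatgpt.com/?q={Prompt} link for one-click injection testing."""
    return f"https://chatgpt.com/?q={quote(prompt)}"
```

Pair each generated link with a unique canary (e.g. a random token the prompt asks the model to repeat) so any auto-execution is attributable in logs.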