Researcher demonstrated that Gemini AI assistant could be tricked into leaking Google Calendar data via indirect prompt injection through crafted calendar event descriptions.
Review calendar event sources; limit Gemini's access to sensitive calendar data
Gmail injection
High
Malicious emails processed by Gemini can contain hidden instructions that cause data exfiltration or unauthorized actions.
Email filtering; don't use Gemini to summarize emails from untrusted senders
Document injection
High
Shared Google Docs with hidden instructions can hijack Gemini's behavior when the document is summarized or analyzed.
Audit shared documents; limit Gemini document access to trusted sources
The IDEsaster research found prompt injection attack chains affecting Gemini CLI alongside other AI coding tools. Indirect prompt injection via poisoned web sources can manipulate Gemini into harvesting credentials and sensitive code from a user's IDE and exfiltrating them to attacker-controlled servers.
□ Indirect injection via Google Workspace (Gmail, Docs, Calendar, Sheets)
□ Gemini CLI config injection and prompt injection via project files
□ Cross-product data leakage (can Gemini in Docs access Drive data?)
□ System prompt extraction from custom Gemini configurations
□ API key handling in AI Studio integrations
□ Jailbreak testing across Gemini model versions
□ Data exfiltration via Gemini tool use in Workspace