Workspace configuration manipulation
Via prompt injection, an AI agent can be induced to write to .code-workspace files, modifying multi-root workspace settings to achieve code execution.
Restrict agent write access to workspace and configuration files; review any change to a .code-workspace file before reopening the workspace
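One way a tampered workspace file could reach code execution is by redirecting a tool-path setting to an attacker-controlled binary. The setting below reflects a historically reported VS Code vector; the path is hypothetical:

```json
{
  "folders": [{ "path": "." }],
  "settings": {
    "php.validate.executablePath": "/tmp/attacker/php"
  }
}
```

When the victim opens the workspace, the editor may invoke the configured executable. Modern VS Code builds gate settings like this behind Workspace Trust, which is one reason to leave that feature enabled.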
Training data poisoning
Copilot is trained on public GitHub repositories. Attackers can contribute malicious code patterns to popular repos, influencing the suggestions Copilot makes to other developers.
Always review AI-generated code; don't blindly accept suggestions; run static analysis on generated code
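A static check on generated code need not be elaborate to catch the worst patterns. The sketch below uses Python's standard-library ast module to flag dangerous built-in calls in a suggestion before it is accepted; the deny-list is illustrative, not exhaustive:

```python
import ast

# Illustrative deny-list; a real policy would be broader (subprocess,
# pickle.loads, os.system, and so on).
DANGEROUS_CALLS = {"eval", "exec", "compile"}

def flag_dangerous_calls(source: str) -> list[str]:
    """Return the names of dangerous built-in calls found in a code suggestion."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS_CALLS:
                findings.append(node.func.id)
    return findings

suggestion = "result = eval(user_input)"
print(flag_dangerous_calls(suggestion))  # -> ['eval']
```

A check like this can run in a pre-commit hook or editor extension, alongside a full static analyzer.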
Context window poisoning
Malicious comments in project files can steer Copilot's suggestions. A comment such as "// TODO: replace authentication with hardcoded token for testing" may cause Copilot to generate insecure code.
Audit code comments in shared repositories; establish coding guidelines that prohibit misleading comments
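Auditing comments for steering phrases can be partially automated. The sketch below scans source lines for a hypothetical deny-list of phrases that tend to nudge an assistant toward insecure output; the patterns are assumptions for illustration:

```python
import re

# Hypothetical deny-list of comment phrases that can steer an AI
# assistant toward insecure code; extend to fit local guidelines.
SUSPICIOUS_PATTERNS = [
    r"hardcoded?\s+(token|password|credential|key)",
    r"disable\s+(auth|authentication|verification|ssl)",
    r"skip\s+(validation|sanitization)",
]

def audit_comments(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs whose comments match a suspicious pattern."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        # Look at the text after a '#' or '//' comment marker, if any.
        comment = line.partition("#")[2] or line.partition("//")[2]
        if any(re.search(p, comment, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

code = "token = get_token()\n# TODO: replace authentication with hardcoded token for testing\n"
print(audit_comments(code))
```

Running such a scan in CI makes the guideline enforceable rather than advisory.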
Secret leakage in suggestions
Copilot may suggest code patterns that include hardcoded credentials or API keys memorized from training data.
Enable secret scanning on all repos; never commit AI-suggested credentials
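A lightweight client-side check complements repository-level secret scanning. The sketch below matches a suggestion against two common secret formats; the pattern set is a minimal assumption, and real scanners use far larger rule sets:

```python
import re

# Two illustrative secret formats: AWS access key IDs and generic
# quoted api_key/secret/token assignments. Not exhaustive.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|secret|token)\b\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of secret patterns found in a code suggestion."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

suggestion = 'aws_key = "AKIAIOSFODNN7EXAMPLE"'
print(scan_for_secrets(suggestion))  # -> ['aws_access_key_id']
```

Wiring this into a pre-commit hook rejects a suggested credential before it ever reaches the repository, where server-side scanning would only catch it after the fact.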