AI Acceptable Use Policy Template

Purpose

This template provides a starting point for an enterprise AI Acceptable Use Policy. Customize it for your organization's risk tolerance, regulatory environment, and AI maturity level.

Template


[Organization Name] — Artificial Intelligence Acceptable Use Policy

Version: 1.0 | Effective Date: [Date] | Owner: [CISO / CTO / AI Governance Committee] | Review Cycle: Quarterly


1. Purpose

This policy defines acceptable and prohibited uses of artificial intelligence tools, models, and services by [Organization Name] employees, contractors, and third parties. It establishes guardrails to protect organizational data, ensure regulatory compliance, and manage risk while enabling responsible AI adoption.

2. Scope

This policy applies to:

  • All employees, contractors, and third parties with access to organizational systems
  • All AI tools, models, and services — whether provided by the organization, third parties, or accessed independently
  • All data processed by AI systems, including data entered into prompts, uploaded as files, or retrieved by AI-connected tools

3. Definitions

| Term | Definition |
|------|------------|
| Approved AI tools | AI tools and services vetted and approved by [Security/IT] for organizational use |
| Shadow AI | Any AI tool or service used for work purposes without organizational approval |
| Sensitive data | Data classified as Confidential or Restricted per the Data Classification Policy |
| PII | Personally identifiable information as defined by applicable privacy regulations |
| AI output | Any content generated by an AI system, including text, code, images, and analysis |

4. Approved AI Tools

The following AI tools are approved for organizational use:

| Tool | Approved Use Cases | Data Classification Limit | Approval Required |
|------|--------------------|---------------------------|-------------------|
| [e.g., Microsoft Copilot] | [Document drafting, email, code] | [Internal] | [No — enabled by default] |
| [e.g., Internal chatbot] | [Knowledge base queries] | [Confidential] | [No — enabled by default] |
| [e.g., GitHub Copilot] | [Code generation] | [Internal] | [Manager approval] |

All other AI tools are prohibited for work purposes unless explicitly approved through the AI Tool Request Process (Section 9).

5. Acceptable Uses

Employees may use approved AI tools to:

  • Draft and edit documents, emails, and presentations
  • Generate and review code
  • Analyze and summarize non-sensitive data
  • Research publicly available information
  • Brainstorm and ideate
  • Automate repetitive tasks within approved tool boundaries

6. Prohibited Uses

Employees must NOT:

Data prohibitions:

  • Enter Confidential or Restricted data into any external AI tool (including ChatGPT, Claude, Gemini, or any other non-approved service)
  • Upload documents containing PII, trade secrets, financial data, legal privileged information, or source code to external AI tools
  • Enter customer data, employee data, or partner data into any AI system not approved for that data classification
  • Use AI tools to process data in violation of data residency requirements

Usage prohibitions:

  • Use AI to generate content that impersonates another person
  • Use AI to create deepfakes, synthetic media, or misleading content
  • Use AI to make automated decisions affecting employees, customers, or partners without human review
  • Use AI to circumvent security controls, access restrictions, or content policies
  • Deploy AI-generated code to production without human review through the standard code review process
  • Rely on AI outputs for legal, medical, financial, or compliance decisions without expert verification
  • Use AI tools to conduct security testing against systems without explicit authorization

Disclosure prohibitions:

  • Present AI-generated content as human-created without disclosure when required by policy, regulation, or client agreement
  • Use AI outputs in external communications, regulatory filings, or legal documents without review and approval

7. Data Handling Requirements

| Data Classification | External AI (ChatGPT, etc.) | Approved Internal AI | Approved Enterprise AI (e.g., Azure OpenAI) |
|---------------------|-----------------------------|----------------------|---------------------------------------------|
| Public | Permitted | Permitted | Permitted |
| Internal | Prohibited | Permitted | Permitted |
| Confidential | Prohibited | Restricted — requires approval | Permitted with DLP |
| Restricted | Prohibited | Prohibited | Case-by-case approval |
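Organizations that enforce this matrix programmatically (for example, in a DLP or AI gateway rule engine) can express it as policy-as-code. A minimal sketch, assuming the classification and tool-category names above; the function and its return values are illustrative, not a standard API:

```python
# Sketch: the Section 7 data-handling matrix as a lookup table.
# Keys mirror the table above; tool categories ("external", "internal",
# "enterprise") and the function name are illustrative assumptions.

DATA_HANDLING_MATRIX = {
    ("Public",       "external"):   "Permitted",
    ("Public",       "internal"):   "Permitted",
    ("Public",       "enterprise"): "Permitted",
    ("Internal",     "external"):   "Prohibited",
    ("Internal",     "internal"):   "Permitted",
    ("Internal",     "enterprise"): "Permitted",
    ("Confidential", "external"):   "Prohibited",
    ("Confidential", "internal"):   "Restricted — requires approval",
    ("Confidential", "enterprise"): "Permitted with DLP",
    ("Restricted",   "external"):   "Prohibited",
    ("Restricted",   "internal"):   "Prohibited",
    ("Restricted",   "enterprise"): "Case-by-case approval",
}

def ai_use_ruling(classification: str, tool_category: str) -> str:
    """Return the policy ruling; fail closed (Prohibited) for unknown pairs."""
    return DATA_HANDLING_MATRIX.get((classification, tool_category), "Prohibited")
```

Defaulting unknown combinations to "Prohibited" matches the policy's deny-by-default stance in Section 4.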

8. AI Output Requirements

All AI-generated content used in work products must:

  • Be reviewed by a human before use
  • Be verified for factual accuracy when used in external-facing content
  • Be disclosed as AI-generated where required by regulation, client agreement, or company policy
  • Comply with all existing content, brand, and communications policies
  • Not be assumed to be confidential — AI providers may log prompts and responses

9. AI Tool Request Process

To request approval for a new AI tool:

  1. Submit request to [Security/IT team] via [ticketing system]
  2. Provide: tool name, vendor, intended use case, data types involved, number of users
  3. Security team conducts vendor risk assessment (see Vendor Risk Assessment for AI)
  4. Privacy team reviews data processing terms
  5. Legal reviews terms of service and IP implications
  6. Approval/denial communicated within [X business days]
  7. Approved tools added to the approved list and communicated to employees
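Teams that automate intake (step 2 above) may find a machine-readable request shape useful so incomplete tickets can be bounced automatically. A sketch, assuming hypothetical field names derived from step 2; this is not a mandated schema:

```python
# Sketch: an AI tool request record matching Section 9, step 2.
# Field names and the completeness check are illustrative assumptions.
from dataclasses import dataclass, fields

@dataclass
class AIToolRequest:
    tool_name: str
    vendor: str
    intended_use_case: str
    data_types_involved: str   # e.g., highest data classification expected
    number_of_users: int

def missing_fields(req: AIToolRequest) -> list[str]:
    """Return names of empty fields so intake can reject incomplete requests."""
    return [f.name for f in fields(req) if getattr(req, f.name) in ("", None)]
```

For example, a request with an empty tool name or use case would be returned to the submitter before the security team's vendor risk assessment begins.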

10. Incident Reporting

Report the following immediately to [Security team / reporting channel]:

  • Accidental submission of sensitive data to an unauthorized AI tool
  • Discovery of AI-generated output containing PII or sensitive data
  • Suspected AI-powered phishing, deepfake, or social engineering targeting the organization
  • Discovery of unauthorized AI tool usage by colleagues
  • AI system producing unexpected, harmful, or concerning outputs

11. Training Requirements

  • All employees must complete AI Acceptable Use training within [30 days] of hire and annually thereafter
  • Employees with access to approved enterprise AI tools must complete additional tool-specific training
  • Managers must complete AI governance awareness training

12. Enforcement

Violations of this policy may result in:

  • Revocation of AI tool access
  • Disciplinary action up to and including termination
  • Referral to legal for data breach investigation if sensitive data was exposed

13. Exceptions

Exceptions to this policy require written approval from [CISO / AI Governance Committee] and must include:

  • Business justification
  • Risk assessment
  • Compensating controls
  • Time-limited duration with review date

Implementation Checklist

□ Policy reviewed by Legal, Privacy, Security, HR, and IT leadership
□ Approved AI tool list populated and published
□ AI Tool Request Process documented and accessible
□ DLP rules configured for AI service domains
□ CASB monitoring enabled for shadow AI detection
□ Employee training developed and scheduled
□ Incident reporting channel established
□ Policy published to employee handbook / intranet
□ Quarterly review cadence established
□ Metrics defined (shadow AI incidents, policy violations, tool requests)
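The DLP and CASB checklist items above usually start from a seed list of known AI service domains. A minimal sketch of a shadow-AI domain match, assuming example domains only; real DLP/CASB products maintain their own, much larger catalogs:

```python
# Sketch: matching outbound hostnames against a shadow-AI domain list.
# The domains listed and the matching rule are illustrative examples only.

AI_SERVICE_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
}

def is_ai_service(hostname: str) -> bool:
    """True if hostname is a listed AI domain or a subdomain of one."""
    host = hostname.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in AI_SERVICE_DOMAINS)
```

Matching on exact domain or subdomain (rather than substring) avoids false positives such as a lookalike domain that merely ends with the same characters.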

Customization Notes

Adjust for your risk profile:

  • Highly regulated industries (finance, healthcare) should lean toward stricter data classification limits
  • Technology companies may allow broader AI tool usage with guardrails
  • Government contractors may need to prohibit all external AI tools entirely

Adjust for AI maturity:

  • Early stage: focus on shadow AI prevention and data protection
  • Intermediate: add approved tool governance and output quality requirements
  • Advanced: add AI development standards, model risk management, and red team requirements