🤖 A Sample Company Policy For Large Language Model Artificial Intelligence (LLM AI)

Author(s): Christian Scott

✨ The Objective of This Sample Policy

A year ago, most companies didn't have a policy guiding staff on how to safely leverage large language model artificial intelligence (LLM AI). Now, with popular solutions like ChatGPT making the news on a daily basis, staff are utilizing LLM AI for daily tasks like writing code, creating reports, and creating new content.

Without any guidance on how to safely utilize LLM AI like ChatGPT, staff at many companies are endangering the confidentiality of sensitive information and the security of company systems, and are potentially violating many company policies and laws.

I felt there was a need to create a sample policy (Creative Commons Attribution 4.0 International) that businesses could leverage as a starting point for governing the ethical, lawful, and secure use of LLM AI technologies in a manner that abides by corporate policies.

🏛️
There is no warranty on the completeness and accuracy of this sample policy: I am not a lawyer and this policy is not legal advice. If you leverage this policy, it’s at your own discretion; you should curate this sample policy to your organization’s specific needs and have it reviewed by your legal counsel. License: Creative Commons Attribution 4.0 International

Corporate Policy On Utilizing Large Language Model Artificial Intelligence such as ChatGPT

Policy Version & Revision: 1.0

Policy Author(s) & Maintainer(s): <MAINTAINER>

Policy Authorization Date: <DATE>

Purpose & Scope

The purpose of this policy is to establish the requirements and guidelines for the use of LLM AI, such as ChatGPT, Bard, Bing, Notion, or similar solutions, by employees, contractors, and temporary workers (collectively referred to as “staff members”) of our company, whether through company-owned devices or BYOD. This policy aims to ensure that the use of LLM AI is ethical, lawful, secure, and abides by all company policies, applicable laws, and regulations.

Important Risks to Consider When Utilizing LLM AI

The use of LLM AI carries inherent risks that staff members must be aware of and understand before using these solutions.

  1. Data Confidentiality & Privacy Risks:
    • Information entered into LLM AI may become public or be used in a training dataset, which could result in the disclosure of sensitive company data. Such disclosures could violate data privacy laws, breach customer contracts, or compromise company trade secrets. The privacy policies of LLM AI solution providers vary and, in many instances, permit the LLM AI solution provider to train their language models on any questions, requests, or data submitted to the LLM AI solution provider.
  2. Accuracy & Quality Control Risks:
    • LLM AI relies upon algorithms that are trained on limited datasets to generate content. There is a significant risk that LLM AI may generate inaccurate, unreliable, or completely false information, known as hallucinations. Staff members should exercise extreme caution when relying on LLM AI generated content and always review and edit responses for accuracy before utilizing any content.
  3. Intellectual Property Risks:
    • To the extent that staff members utilize LLM AI to generate any content or code, that content may not be protected by copyright laws in many jurisdictions because there was no human authorship. As of March 2023, the United States Copyright Office does not recognize LLM AI generated content as copyrightable.
    • Since LLM AI generated content is based on previous training datasets, the content may be considered a derivative work of any copyrighted materials used to train the LLM AI.
    • To the extent that code, financial data, other trade secrets, or confidential information are submitted to a public LLM AI for analysis, there is a risk that other users and companies that utilize that same LLM AI may be able to access and disclose that sensitive information.
    • Any software code submitted to or received from LLM AI, such as ChatGPT, may include or be derived from open-source code, which may be subject to various open-source license obligations and requirements such as:
      • The redistribution of open-source code
      • Limitations on the commercial use of open-source code
      • Attribution requirements that credit the original author of the open-source code.
  4. Bias & Objectionable Content Risks:
    • LLM AI may produce biased, discriminatory, offensive, or unethical content.
    • Furthermore, LLM AI may produce content that does not align with the company’s mission, vision, values, and policies.
  5. Data Security Risks:
    • LLM AI may store and process sensitive data, which could be at risk of being accessed by unauthorized parties, unintentionally leaked, breached, or hacked through various means, such as prompt injection attacks.

LLM AI Policy Requirements and Guidelines

  1. Permitted Use of LLM AI Solutions:
    1. Staff members are authorized to make restricted use of LLM AI for work-related purposes that do not involve confidential company processes, information, or trade secrets, and only with the approved LLM AI providers listed below. General tasks such as generating content for emails, presentations, reports, and customer service communications are permitted as long as all LLM AI policy requirements are strictly followed.
    2. Approved LLM AI Solutions By The Company:
      1. <LIST OF APPROVED SOLUTIONS>
    3. The use of all other LLM AI solutions is strictly prohibited by the company. Any additional LLM AI solutions must be reviewed and approved by executive management, the legal department, the technology department, and the information security department before use.
  2. Confidentiality & Data Privacy:
    1. Staff members must follow all applicable data privacy laws and company policies when using LLM AI. Confidential information shall never be entered into any LLM AI solution, including confidential code, financial data, other trade secrets, confidential employee personally identifiable information, confidential customer personally identifiable information, or any customer data that may constitute trade secrets and be protected by intellectual property laws.
    2. Incidents of the unintentional disclosure of confidential data to an LLM AI solution shall immediately be reported to the information security department at <EMAIL ADDRESS> and the legal department at <EMAIL ADDRESS>.
  3. LLM AI Generated Content, Deliverables & Code:
    1. While our company believes that LLM AI can be a helpful tool for generating content, we also believe that it cannot replace the critical thinking, creativity, and spirit of human creators, writers, and editors. Any content or deliverables generated by LLM AI should always be interpreted as an “initial starting point” and not a “finished product.”
    2. LLM AI shall not be utilized to generate any content that could be deemed as a company trade secret or important intellectual property.
    3. LLM AI shall not be utilized to generate any software code that would be leveraged in any of the company’s closed-source software or products.
    4. LLM AI may be permitted in limited instances to generate scripts that would be used in turn to help automate tedious tasks such as sorting a dataset.
      1. Any code or scripts generated by LLM AI in such limited instances shall abide by the company’s standard acceptable use, software development, and information security policies and processes.
      2. Any code or scripts generated by LLM AI in such limited instances shall be labeled as LLM AI generated and undergo peer review to ensure compliance with all company policies (see the illustrative labeling example following this policy).
  4. Accuracy & Proofing of LLM AI Generated Content:
    1. LLM AI generated content must be fact-checked by qualified staff members to ensure that all information is accurate. This includes verifying all sources, checking any statistics, and ensuring the content is supported by material evidence.
    2. LLM AI generated content must be curated to ensure that it is coherent and well-written for its intended audience; this includes checking for spelling errors and grammatical mistakes.
  5. Transparency of Use:
    1. Aside from automatic language completion suggestions in solutions like Microsoft Teams or Outlook, all LLM AI generated content must be clearly apparent or labeled as such, and the use of AI in generating content should be transparent to staff members and customers.
    2. The company’s general guideline is to insert the message “AI Assistance Was Utilized In The Creation Of This Document” in the footer or references section of a document.
  6. Ethical Use & Fairness:
    1. LLM AI must be used in an ethical manner that complies with all company policies, applicable laws, and regulations.
    2. Staff members shall not utilize LLM AI to generate content that is inappropriate, objectionable, or offensive.
    3. Staff members shall not utilize LLM AI to discriminate against any individual based on their protected characteristics, such as race, gender, age, or disability.
    4. Staff members shall ensure that LLM AI generated content aligns with the company’s mission, vision, values, and policies.
  7. Insurance:
    1. Our company must maintain sufficient insurance coverage to protect against claims arising from the misuse of LLM AI.
  8. Compliance & Enforcement:
    1. Staff members who do not comply with the company LLM AI policies, requirements, and guidelines will be subject to the company’s standard disciplinary process. Staff members found to be knowingly misusing LLM AI solutions will be subject to legal action.
  9. Review & Updates:
    1. This policy shall be reviewed and updated periodically to ensure continued compliance with all applicable laws, regulations, and company policies.
  10. Acknowledgment & Acceptance:
    1. By using LLM AI at the company, staff members acknowledge that they have read and understood this policy and will comply with it. Staff members shall immediately report any violations of this policy to the legal department at <EMAIL ADDRESS>.
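
As a practical illustration (not part of the sample policy itself): below is a minimal sketch of what the labeling requirement in section 3.4 above could look like for a small automation script. The file names, column name, and reviewer placeholders are hypothetical assumptions and should be adapted to your organization's own conventions.

```python
#!/usr/bin/env python3
# LLM AI GENERATED: initial draft produced with an approved LLM AI solution.
# Reviewed by: <PEER REVIEWER>   Review date: <DATE>
# Purpose: sort a CSV dataset by a chosen column (an example of the kind of
# tedious task this policy permits automating). File and column names below
# are placeholders for illustration only.

import csv

INPUT_FILE = "dataset.csv"          # hypothetical input path
OUTPUT_FILE = "dataset_sorted.csv"  # hypothetical output path
SORT_COLUMN = "last_name"           # hypothetical column to sort by

# Read the dataset into a list of dictionaries keyed by the header row.
with open(INPUT_FILE, newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)
    rows = list(reader)
    fieldnames = reader.fieldnames

# Sort rows by the chosen column, case-insensitively.
rows.sort(key=lambda row: str(row.get(SORT_COLUMN, "")).lower())

# Write the sorted dataset back out with the same header.
with open(OUTPUT_FILE, "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)

print(f"Wrote {len(rows)} sorted rows to {OUTPUT_FILE}")
```

The header comments carry the two things section 3.4 asks for: a clear "LLM AI generated" label and a place to record the peer review before the script is put to use.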

Fun Footer Stuff:

Copyright 2023 Enclave Regenerous. Unless otherwise stated, all of our work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Simply put, please share it, provide attribution, and if you remix it, share it generously with others. The work of others that is featured on this site is always provided with attribution and is not directly monetized.


Disclaimers:

The opinions expressed here are our own and do not reflect the views of our organization or anyone else unless quoted verbatim.

We try our best to provide helpful insight to folks, but there is no warranty as to the completeness of anything we create or post here, so please be sure to always do your own research.