🤖 A Sample Company Policy For Large Language Model Artificial Intelligence (LLM AI)
Author(s): Christian Scott
✨ The Objective of This Sample Policy
A year ago, most companies didn't have a policy guiding staff on how to safely leverage large language model artificial intelligence (LLM AI). Now, with popular solutions like ChatGPT making the news daily, staff are utilizing LLM AI for everyday tasks like writing code, creating reports, and creating new content.
Without any guidance on how to safely utilize LLM AI like ChatGPT, staff at many companies are endangering the confidentiality of sensitive information and the security of company systems, and potentially violating many company policies and laws.
I felt there was a need to create a sample policy (licensed under Creative Commons Attribution 4.0 International) that businesses could leverage as a starting point for governing the ethical, lawful, and secure use of LLM AI technologies in accordance with corporate policies.
Corporate Policy On Utilizing Large Language Model Artificial Intelligence such as ChatGPT
Policy Version & Revision: 1.0
Policy Author(s) & Maintainer(s): <MAINTAINER>
Policy Authorization Date: <DATE>
Purpose & Scope
The purpose of this policy is to establish the requirements and guidelines for the use of LLM AI, such as ChatGPT, Bard, Bing, Notion, or similar solutions, by the employees, contractors, and temporary workers of our company, collectively referred to as “staff members,” whether through company-owned devices or personal devices (BYOD). This policy aims to ensure that the use of LLM AI is ethical, lawful, secure, and abides by all company policies, applicable laws, and regulations.
Important Risks to Consider When Utilizing LLM AI
The use of LLM AI has inherent risks that staff members must be aware of and understand before using LLM AI.
- Data Confidentiality & Privacy Risks:
- Information entered into LLM AI may become public or utilized in a training dataset, which could result in the disclosure of sensitive company data. Such disclosures could violate data privacy laws, breach customer contracts, or compromise company trade secrets. The privacy policies of LLM AI solution providers vary and, in many instances, permit the LLM AI solution provider to train their language models on any questions, requests, or data submitted to the LLM AI solution provider.
- LLM AI relies upon algorithms that are trained on limited datasets to generate content. There is a significant risk that LLM AI may generate inaccurate, unreliable, or completely false information, known as hallucinations. Staff members should exercise extreme caution when relying on LLM AI generated content and always review and edit responses for accuracy before utilizing any content.
- To the extent that staff members utilize LLM AI to generate any content or code, that content may not be protected by copyright laws in many jurisdictions because there was no human authorship. As of March 2023, the United States Copyright Office does not recognize LLM AI generated content as copyrightable.
- Source: https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence
- Since LLM AI generated content is based on previous training datasets, the content may be considered a derivative work of any copyrighted materials used to train the LLM AI.
- To the extent that code, financial data, other trade secrets, or confidential information are submitted to a public LLM AI for analysis, there is a risk that other users and companies that utilize that same LLM AI may be able to access and disclose that sensitive information.
- Any software code submitted to or received from LLM AI, such as ChatGPT, may include some open-source derivative references, which may be subject to various open-source license obligations and requirements such as:
- The redistribution of open-source code
- Limitations on the commercial use of open-source code
- Attribution requirements referencing the original author of the open-source code.
- LLM AI may produce biased, discriminatory, offensive, or unethical content.
- Furthermore, LLM AI may produce content that does not align with the company’s mission, vision, values, and policies.
- LLM AI may store and process sensitive data, which could be at risk of being accessed by unauthorized parties, unintentionally leaked, breached, or hacked through various means, such as prompt injection attacks.
LLM AI Policy Requirements and Guidelines
- Permitted Use of LLM AI Solutions:
- Staff members are authorized to make restricted use of the approved LLM AI solutions listed below for work-related purposes that do not involve confidential company processes, information, or trade secrets. General tasks such as generating content for emails, presentations, reports, and customer service communications are permitted as long as all LLM AI policy requirements are strictly followed.
- Approved LLM AI Solutions By The Company:
- <LIST OF APPROVED SOLUTIONS>
- The use of all other LLM AI solutions is strictly prohibited by the company. Any additional LLM AI solutions must be reviewed and approved by executive management, the legal department, the technology department, and the information security department before use.
- Confidentiality & Data Privacy:
- Staff members must follow all applicable data privacy laws and company policies when using LLM AI. Confidential information shall never be entered into any LLM AI solutions, including confidential code, financial data, other trade secrets, confidential employee personally identifiable information, confidential customer personally identifiable information, or any customer data that may constitute trade secrets protected by intellectual property laws.
- Incidents of the unintentional disclosure of confidential data to an LLM AI solution shall immediately be reported to the information security department at <EMAIL ADDRESS> and the legal department at <EMAIL ADDRESS>.
- LLM AI Generated Content, Deliverables & Code:
- While our company believes that LLM AI can be a helpful tool for generating content, it cannot replace the critical thinking, creativity, and spirit of human creators, writers, and editors. Any content or deliverables generated by LLM AI should always be interpreted as an “initial starting point” and not a “finished product.”
- LLM AI shall not be utilized to generate any content that could be deemed as a company trade secret or important intellectual property.
- LLM AI shall not be utilized to generate any software code that would be leveraged in any of the company’s closed-source software or products.
- LLM AI may be permitted in limited instances to generate scripts used to help automate tedious tasks, such as sorting a dataset.
- Any code or scripts generated by LLM AI in such limited instances shall abide by the company’s standard acceptable use, software development, and information security policies and processes.
- Any code or scripts generated by LLM AI in such limited instances shall be labeled as LLM AI generated and undergo peer review to ensure compliance with all company policies.
- Accuracy & Proofing of LLM AI Generated Content:
- LLM AI generated content must be fact-checked by qualified staff members to ensure that all information is accurate. This includes verifying all sources, checking any statistics, and ensuring the content is supported by material evidence.
- LLM AI generated content must be curated to ensure that it is coherent and well-written for its intended audience; this includes checking for spelling errors and grammatical mistakes.
- Transparency of Use:
- Aside from automatic language completion suggestions in solutions like Microsoft Teams or Outlook, all LLM AI generated content must be clearly apparent or labeled as such, and the use of AI in generating content should be transparent to staff members and customers.
- The company’s general guideline would be to insert a message of “AI Assistance Was Utilized In The Creation Of This Document” in the footer or references of a document.
- Ethical Use & Fairness:
- LLM AI must be used in an ethical manner that complies with all company policies, applicable laws, and regulations.
- Staff members shall not utilize LLM AI to generate content that is inappropriate, objectionable, or offensive.
- Staff members shall not utilize LLM AI to discriminate against any individual based on their protected characteristics, such as race, gender, age, or disability.
- Staff members shall ensure that LLM AI generated content aligns with the company’s mission, vision, values, and policies.
- Our company must maintain sufficient insurance coverage to protect against claims arising from the misuse of LLM AI.
- Compliance & Enforcement:
- Staff members who do not comply with the company LLM AI policies, requirements, and guidelines will be subject to the company’s standard disciplinary process. Staff members found to be knowingly misusing LLM AI solutions will be subject to legal action.
- Review & Updates:
- This policy shall be reviewed and updated periodically to ensure continued compliance with all applicable laws, regulations, and company policies.
- Acknowledgment & Acceptance:
- By using LLM AI at the company, staff members acknowledge that they have read and understood this policy and will comply with it. Staff members shall immediately report any violations of this policy to the legal department at <EMAIL ADDRESS>.
Fun Footer Stuff:
Copyright 2023 Enclave Regenerous. Unless otherwise stated, all of our work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Simply put, please share it, provide attribution and if you remix it then share generously with others. The work of others that is featured on this site is always provided with attribution and is not directly monetized.
The opinions expressed here are solely our own and do not reflect the views of our organization or anyone else unless quoted verbatim.
We try our best to provide helpful insight, but there is no warranty as to the completeness of anything we create or post here, so please always do your own research.