ChatGPT can be a powerful time-saver for content creators, but only if you have the right policies in place to guide its use. Here are our thoughts on what to keep top of mind.
Outside the education world’s concerns about plagiarism and cheating, most worries about ChatGPT have been around how it will affect businesses and employees. Will I lose my job? Can we afford to invest in the paid ChatGPT Plus or other AI tools? Are my employees ready for it?
But the education world’s concerns with academic integrity are also relevant to your business. Claiming other people’s work as your own, or sharing inaccurate information because you haven’t “done your homework,” has serious implications for your brand, your reputation and your bottom line.
Schools have rules about plagiarism and citing sources. In the mildest cases, students might be reprimanded and told to redo assignments. In more serious cases, they could receive a lower grade. At the college level, plagiarism could even lead to expulsion.
Of course, businesses don’t, and shouldn’t, have the same kind of control over their employees that schools have over students. However, just like schools, they should put ChatGPT content policies in place to decrease the likelihood that an employee will, even unintentionally, put their name to work that belongs to someone else, contributes to misinformation or spreads outright falsehoods.
No one can tell you exactly what policies to put in place for plagiarism, disinformation or any other workplace issue. Your exact ChatGPT content policy must reflect your business’s culture and needs. But generative AI tools like ChatGPT are a new universe.
In this blog, we’ll share some questions to consider while crafting your ChatGPT content guidelines and policies, which should be documented and communicated alongside your other content policies.
How Will Your ChatGPT Content Policy Discourage Plagiarism?
To reiterate, what your employees learned in school still applies. Putting your name to work you did not create is not OK.
But in the business world, the stakes are much higher. Plagiarism, copyright and trademark infringement, and other intellectual property violations can carry costly consequences. The challenge is that ChatGPT has made it even harder to distinguish truly original content from content that only appears original because ChatGPT rearranged it.
One policy matter to consider is whether you will require authors to disclose when they have used ChatGPT to help generate content, regardless of the use case. Far from undermining the writer’s authority, disclosure builds trust and gives readers context. It also provides a backstop if someone accuses you of publishing straight AI-generated content, even inadvertently.
Such statements may range from a simple “We used ChatGPT to help create this content” to identifying which sections of a piece of writing received AI assistance. You should also consider whether you will establish limits on how much content AI can inspire — 0 percent? 5 percent? 10 percent? — and how you will calculate those limits. (Though they aren’t perfect, tools like ZeroGPT and GPTZero can provide estimates. You may want to explain which tool you will rely on to determine percentages and how you will handle appeals.)
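If you do set a percentage cap, it helps to spell out how the math will work. Below is a minimal sketch, in Python, of how a team might express that check. The estimate_ai_share function is a placeholder for whichever detection tool you standardize on; GPTZero, ZeroGPT and similar services each expose their own interfaces, and the 10 percent default is purely illustrative.

```python
# Minimal sketch of enforcing an "AI-assisted share" cap in a content policy.
# estimate_ai_share() is a placeholder: wire it up to whichever detector your
# team standardizes on (GPTZero, ZeroGPT, etc.); their real APIs differ and
# none is assumed here. The 10 percent threshold is illustrative only.

def estimate_ai_share(draft_text: str) -> float:
    """Return the estimated fraction of AI-generated text, from 0.0 to 1.0."""
    raise NotImplementedError("Connect this to your chosen detection tool.")

def within_policy(draft_text: str, max_ai_share: float = 0.10) -> bool:
    """True if the draft's estimated AI share is at or below the policy limit."""
    return estimate_ai_share(draft_text) <= max_ai_share
```

Because detector scores are noisy, treating the result as a flag for human review rather than an automatic verdict fits naturally with the appeals process mentioned above.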
How Will You Encourage Authenticity in ChatGPT-Generated Content?
Unedited ChatGPT output, full of generalizations, vague language and “No duh!” statements, does not engage readers or contribute new knowledge. And as search engines and ChatGPT continue to evolve, writing built on such language is likely to be downgraded.
Google, for example, rewards content with higher rankings when it displays E-E-A-T: experience, expertise, authoritativeness and trustworthiness. E-E-A-T is the litmus test against which Google judges all content, whether AI-assisted or human-generated, because it measures the value a piece of content delivers. Content that could come from anywhere or be written by anyone has little value for Google or for readers.
ChatGPT content policies and guidelines should help content contributors demonstrate their experience, expertise, authority and trustworthiness in their work. At the same time, like all of your writing guidelines, your ChatGPT policies must establish guardrails that keep writers from sharing experiences that reflect badly on themselves or the company. Policies can also guide content producers on what kind of authority your writers can legitimately claim and what has the most value for your audiences.
Your ChatGPT content policy should indicate who has the authority to establish, maintain and enforce these guardrails and who has accountability if content goes off track.
How Will You Determine ‘Legitimate’ ChatGPT Use Cases?
ChatGPT is here, and your content producers are probably already experimenting with it on their own, if not on the job. You shouldn’t try to prevent them from using it. Instead, consider the best use cases for ChatGPT and similar AI tools.
The biggest and most obvious no-no is generating an AI result, putting your own name to it and publishing it as your creation. Beyond that, what you consider legitimate use cases for producing business content will vary. The two constants: you will always need a human in the loop, and using ChatGPT is a question of “how,” not “if.”
For example, AI could take a lot of the busywork out of creating a document, but only if you have the right guardrails in place. In one recent case, an attorney was sanctioned for submitting a ChatGPT-assisted brief that cited cases that did not exist; ChatGPT had invented them, complete with plaintiffs’ names. So no, you should not use results without further investigation. That’s where a ChatGPT content policy comes in; yours might flat-out prohibit using ChatGPT for legal documents.
That said, AI can be a great tool for tasks like these (see the sketch after this list):
- Generating ideas
- Creating keyword or topic clusters
- Building outlines
- Getting the creative juices flowing
- Acting as the writer’s devil’s advocate
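As a concrete illustration of the “building outlines” use case, here is a minimal sketch that assumes the official OpenAI Python SDK (the `openai` package, v1.x) and an API key in the environment; the model name, system prompt and topic are placeholders you would swap for your own.

```python
# Sketch: asking ChatGPT for a first-pass outline that a human writer refines.
# Assumes the official OpenAI Python SDK (v1.x) and OPENAI_API_KEY set in the
# environment; the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a brainstorming partner for a B2B content team."},
        {"role": "user", "content": "Draft a five-point outline for a blog post on ChatGPT content policies."},
    ],
)

print(response.choices[0].message.content)  # raw material for the writer, not publishable copy
```

Whatever comes back is a starting point for the human in the loop, never finished copy.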
How Will You Ensure Accuracy When Using ChatGPT?
ChatGPT does not know the truth; it only knows patterns. It will repeat patterns of falsehood as readily as patterns of truth. That makes it more important than ever to check facts during the writing process, before you publish.
ChatGPT guidelines should encourage people to rely on their personal expertise and experiences to check content for statements that feel off or that readers could interpret in multiple ways. Similarly, policies should encourage writers to rely on that expertise to enhance their content with use cases, anecdotes, stories and tips unique to them because of their expert point of view.
Guidelines should also include advice on what sort of language helps create the best ChatGPT prompts for your content needs.
Finally, you should educate content producers about recognizing vetted, reliable sources no matter what resource they are using. Even articles from Harvard Business Review or Forbes may be written by people with vested or biased interests in their subject matter.
How Will You Ensure Quality Across ChatGPT-Assisted Workflows?
ChatGPT’s ability to construct grammatically correct sentences in dozens of languages is uncanny, but content producers should never assume it’s flawless. That’s why having people edit and proofread is as essential as fact checking.
To keep ChatGPT-introduced grammatical and factual errors from leaking into writers’ content, consider formalizing a workflow around content production. Just like a workflow for code quality assurance, a content workflow dictates approvals at key checkpoints: draft completed, draft edited, draft proofread, final sign-off and so on.
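To make the analogy to code quality assurance concrete, here is a minimal sketch of what those checkpoints might look like once written down; the stage names and owners are illustrative, not a prescription.

```python
# Sketch of content-workflow checkpoints modeled on a code-review pipeline.
# Stage names and owners are illustrative; adapt them to your own team.
from dataclasses import dataclass

@dataclass
class Checkpoint:
    name: str              # e.g. "Draft edited"
    owner: str             # role accountable for sign-off at this stage
    approved: bool = False

CONTENT_WORKFLOW = [
    Checkpoint("Draft completed", owner="Writer"),
    Checkpoint("Facts and sources verified", owner="Subject matter expert"),
    Checkpoint("Draft edited", owner="Editor"),
    Checkpoint("Draft proofread", owner="Proofreader"),
    Checkpoint("Final sign-off", owner="Content lead"),
]

def ready_to_publish(workflow: list[Checkpoint]) -> bool:
    """A piece ships only when every checkpoint has been approved."""
    return all(stage.approved for stage in workflow)
```

Writing the stages down this explicitly, even in a shared document rather than code, makes it clear who owns each gate and where accountability sits.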
You should also consider how rigorous your workflow should be, who owns it and how you will maintain and enforce it. Set goals to generate the content you need at your desired pace while providing opportunities for your subject matter experts to share their expertise — with a little help from ChatGPT.
Conclusion
ChatGPT might make your content creators nervous, but with the right policies in place, this new tool can make it easier for them to do their work, not take work away. Provide them with a safe space to experiment with ChatGPT, but make sure they understand the policies you have implemented, too. The right ChatGPT content policies will help your team create great content that adds value, not risk, to your company.