Whether it’s debugging code, writing poetry, or answering customers’ questions, ChatGPT has startled many organizations with its power and flexibility. But is it really a replacement for human intelligence? We explore the capabilities of large language models like ChatGPT and share some key implications for the business world.
ChatGPT is revolutionizing the technological landscape, taking our understanding of artificial intelligence (AI) to a whole new level. Among its many impressive features, ChatGPT can interact with users just as a customer service agent might (or in any number of other personas). Trained on massive volumes of text from sources like literature and the internet, this predictive engine is remarkably skilled at mimicking human interaction.
But is it reasonable to think of ChatGPT as a substitute for human intelligence? Where do AI capabilities end and human capabilities begin?
In this blog, we explore how intelligent large language models like ChatGPT actually are. We share an even-handed take on the possibilities, limitations and implications of these technologies for the future of the business world.
The Power of ChatGPT for Business
ChatGPT is one of the best-known large language models (LLMs) on the market. An LLM is a model trained on large amounts of text that, when prompted, returns intelligent-seeming results like translations, summaries and predictions.
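To make that concrete, here is a minimal sketch of prompting an LLM for a summary through the OpenAI Python SDK. The model name, system role and document are illustrative assumptions, not a recommendation; ChatGPT itself is a chat interface over this same kind of model.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

document = "Paste a long report, press release or article here."

# Ask the model to summarize the supplied text.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a concise business analyst."},
        {"role": "user", "content": f"Summarize this in three bullet points:\n\n{document}"},
    ],
)
print(response.choices[0].message.content)
```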
LLM responses can seem indistinguishable from human responses – at least at first glance. The Turing Test, famously proposed by mathematician and computing pioneer Alan Turing, holds that we can consider a computer “intelligent” if a person conversing with it can’t reliably tell whether they’re interacting with a machine or a human. By that measure, ChatGPT passes in spades.
LLMs like ChatGPT can produce content once thought to be exclusively human: music, lyrics, essays, film scripts, and poetry. In the business world, their results can be similarly breathtaking yet take seconds to produce. Among other things, LLMs can create and debug code, develop marketing materials, and write executive summaries. Companies are therefore integrating LLMs into everyday business applications, including Microsoft Copilot and Salesforce Einstein.
The “function calling” ability of the GPT model holds great promise, but is its ability to “create” the same thing as human creativity? The answer is a qualified “no.” ChatGPT generates sophisticated answers (outputs) based on the information it receives (inputs). This function – a lot like expert-level Mad Libs – is impressive but imperfect: outputs are only ever as good as the inputs. The model is just trying to please us, and our job as users is to learn how best to prompt it. That means the model can’t produce output on par with the best humans – but it’s often better than what the average human can produce.
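To show what function calling looks like in practice, here is a minimal sketch using the OpenAI Python SDK. The order-lookup function and model name are hypothetical examples, not part of any real system:

```python
from openai import OpenAI

client = OpenAI()

# Describe a (hypothetical) business function the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the shipping status of a customer order.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {"type": "string", "description": "The order number"},
            },
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Where is order 1234?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model may also answer in plain text instead
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
```

Note that the model only proposes the call with structured arguments; your application still executes the function and decides what to do with the result. Powerful as that is, it remains sophisticated pattern completion rather than human creativity.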
To help illustrate the point, here are some things ChatGPT has been known to get wrong.
ChatGPT “Hallucinations”
One type of ChatGPT error is the “hallucination”: the model makes up content, seemingly out of nowhere, producing incorrect text that appears plausible. The term is telling – if a human embellished or invented an answer on an assignment, we’d simply call that answer wrong.
Since ChatGPT was released, an entirely new discipline of prompt engineering has developed. Prompt engineering involves crafting queries that guide the AI toward more accurate, relevant responses – akin to asking the right questions to get the most useful answers. Good prompts can greatly reduce hallucinations, but in the end, models like GPT are non-deterministic and carry inherent risk.
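As one illustration, a common prompt engineering pattern is to restrict the model to context you supply and give it explicit permission to say it doesn’t know. A minimal sketch, in which the model name, policy text and temperature setting are all illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()

# Constrain the model to supplied context and allow an "I don't know" escape
# hatch -- a simple, widely used hallucination-reduction pattern.
system_prompt = (
    "Answer using ONLY the context provided. If the context does not "
    "contain the answer, reply exactly: 'I don't know.'"
)
context = "Our return policy allows refunds within 30 days of purchase."

response = client.chat.completions.create(
    model="gpt-4o",   # illustrative model name
    temperature=0,    # lower temperature makes output more repeatable
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Context:\n{context}\n\n"
                                    "Question: Can I return an item after 45 days?"},
    ],
)
print(response.choices[0].message.content)
```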
What does this distinction mean for the business world?
For one thing, it acknowledges that the underlying intentions of human and artificial intelligence are different. The AI didn’t return false answers for selfish purposes or to get ahead. When faced with an ethical dilemma, it didn’t use moral reasoning to choose a deceptive path. It’s simply serving a function: “fill in the blank.” Even when it has low confidence in an answer, it will give you something – because that’s how it was programmed.
Just like with an employee you know is prone to making mistakes, it’s essential to put safeguards, guidelines and policies in place for your AI. These measures ensure that both your human and AI workforce are working effectively and ethically, contributing to the success of your business.
We share these distinctions to underscore that while AI and human intelligence are fundamentally different, neither is perfect, and both are valuable. Companies should deploy each in ways that help to maximize business success.
AI vs. Human Content Creation
ChatGPT is renowned for its ability to create and summarize content. It’s especially useful for crossing written deliverables off your to-do list. It can draft emails, meeting minutes, presentations, blog posts, contracts, and more. If preferred, it can “get you started” or offer helpful tips as you create these materials on your own.
When comparing human and AI content creation, speed is AI’s clearest advantage: ChatGPT can respond to many prompts in seconds. Humans, on the other hand, have an advantage when it comes to length. A human translator, for example, can work through volumes like War and Peace or The Brothers Karamazov. ChatGPT has a context window that limits how much text it can take in at one time, so something like War and Peace would require a divide-and-conquer approach that breaks the job into smaller tasks.
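One simple version of that divide-and-conquer approach is to split the text into overlapping chunks that each fit the context window, process them separately, then combine the partial results (a “map-reduce” summarization pattern). A rough sketch – a production pipeline would count tokens with a tokenizer rather than characters:

```python
def chunk_text(text: str, max_chars: int = 8000, overlap: int = 200) -> list[str]:
    """Split a long document into overlapping chunks that fit a context window."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # overlap softens cuts made mid-sentence
    return chunks

# Summarize each chunk on its own, then summarize the summaries in a final pass.
```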
There are important differences in quality between human and artificial intelligence as well. Humans can judge which information is most important, know which passages need emphasis, and offer analysis based on what they read. ChatGPT cannot critically analyze text or make judgment calls about it. You may have noticed, for example, how it sometimes dedicates large chunks of text to points a human would treat as minor or merely supporting.
Another important difference between human and artificial intelligence? Knowing how to read the room.
Human employees might draw upon company politics, the status of ongoing negotiations, the nuances of partner relations, and more to tactfully write an email. ChatGPT’s algorithm can reflect much more context — such as all of Google’s results on a particular topic — but it won’t necessarily be the right context for your business dealings.
Both forms of intelligence are useful and necessary. There will be times when you want an employee who can draw upon Google’s entire knowledge base – and other times when you want an employee with intuition who’s been beside you, navigating partner relations for years.
The Magic of Bringing the Two Together
Human and artificial intelligence each bring different strengths, best suited to different scenarios – but the truth is that there’s often a special alchemy in combining the two.
For instance, with its ability to scan sources in seconds, ChatGPT can save human employees days or weeks of labor spent finding, sorting and reading online materials during market research. Those employees, in turn, can assess ChatGPT’s responses, refine their inputs, and think critically about the full scope of the research findings, curating ChatGPT’s answers into a thorough, strategic market report. Other examples:
- Humans ask intelligent questions. ChatGPT is great at answering questions.
- Humans can create astounding musical and artistic patterns. ChatGPT can recognize and build upon complex patterns in ways that humans often cannot.
- ChatGPT can generate an accurate brief ahead of a high-stakes negotiation. A human can read the body language of the executive who receives it.
The bottom line: ChatGPT can be the world’s greatest copilot – if you know how to use it strategically and synergistically. The leaders who capture the full value of this technology will be those who understand the power and potential of bringing human and artificial intelligence together.