In this segment of Joseph Ours’ Forbes Technology Council column, Joseph discusses why business leaders need to be aware of deepfake technology and how to prepare for the day cybercriminals use it against their company.
When a Hong Kong-based finance employee of a multinational company received an email from the company’s CFO in the U.K. about a secret business deal, the employee was immediately on alert. It didn’t feel right.
Many of us have been in a similar position: receiving a fishy email that doesn’t sit well. However, the employee’s fears were allayed when he joined a video call with the CFO and other corporate executives. He was let in on the confidential business deal and began following orders to transfer funds to various bank accounts.
All was well — except it wasn’t.
The “executives” the employee had seen in the video conference call were deepfakes, or AI-generated synthetic media. When they’re high-quality, as in this instance, they can be so realistic that they’re hard to discern as fake. The cybercriminals ended up scamming $25 million from the company.
This example is one of many that have raised global concerns about deepfake technology and its malicious use in society. How prevalent are these attacks, and what can business leaders do to reduce the risk of a deepfake scam?
Types Of Deepfakes
To understand what deepfakes are capable of, it helps to know the various types of deepfakes and their purposes, including:
- Textual Deepfakes: Attackers use AI language models to generate fake text that mimics the writing style of a person or entity.
- Audio/Video Deepfakes: Scammers use AI to generate entirely new video or audio of a person.
- Puppet Deepfakes: Cybercriminals replace a live actor’s face and voice with those of another person in a real-time video.
- Fake Identity Deepfakes: Bad actors use AI to create fake online personas with generated images, video, audio and text that support their identity.
- Shallowfakes Or Cheapfakes: Scammers use simpler video manipulation techniques to misrepresent reality through selective editing, splicing or speeding up video clips.
The level of sophistication and potential for misuse varies, but each is potentially dangerous.
How Big Is The Threat To Businesses?
The threat deepfakes pose to businesses is more substantial than most people think. “In fact, two out of three cyber security professionals saw malicious deepfakes used as part of a strike against businesses in 2022, a 13 percent increase from the previous year, with email as the top delivery method,” according to a recent issue of Bank of America’s Cyber Security Journal. The attacks on businesses and their operations are as diverse as the types of deepfakes themselves. Some of the deepfake attacks business leaders should be aware of include:
- Brand Imitation: A cybercriminal uses AI to create media that impersonates or clones a company’s identity for malicious purposes. For example, they create a deepfake video showing the CEO of a bank announcing a new product using the bank’s logo, branding, graphics, voice, tone and production style. The video may direct customers to a website that lures them into providing personal information or transferring funds in the belief that they’re dealing with their trusted bank.
- Brand Association: Attackers create fake content that associates a brand or company with something that could damage its reputation. Examples include a fabricated video of a company CEO making racist remarks, fake images showing a brand’s logo and products alongside extremist ideological symbols, and false testimonials or social posts from “customers” complaining about unethical labor practices.
- Fraud: Although the avenues for committing fraud are many, one area of vulnerability is insurance, such as a policyholder using AI to exaggerate damage to their home after a weather event in order to inflate a claim. Without the ability to detect that the images are fake, insurers may be tricked into paying a fraudulent settlement.
Deepfake attacks can be difficult to detect and can lead to repercussions like financial fraud, loss of customers and revenue, reputational damage, legal liabilities, costly crisis control measures and more. Today, companies are exploring how to identify and combat deepfakes using a variety of technologies, from watermarks to tracking signals and other forms of authentication that validate original content. However, it’s important to note that some approaches, like watermarks, are susceptible to being faked themselves, and others have yet to achieve the level of accuracy needed to fully allay those fears.
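To make the authentication idea concrete, here is a minimal sketch in Python of one way original content can be validated: the publishing team records a keyed digest of an approved media file, and anyone who later receives a copy can check it against that digest. The file name and signing key are illustrative assumptions only; real deployments typically rely on asymmetric signatures or provenance standards such as C2PA content credentials rather than a shared key.

```python
import hashlib
import hmac
from pathlib import Path

# Hypothetical shared secret held by the publishing team. A production system
# would use asymmetric signatures or a provenance standard instead of a shared key.
SIGNING_KEY = b"example-signing-key"

def sign_media(path: str) -> str:
    """Return a keyed HMAC-SHA256 tag over the media file's bytes at publication time."""
    digest = hashlib.sha256(Path(path).read_bytes()).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(path: str, expected_tag: str) -> bool:
    """Recompute the tag for a received file and compare it in constant time."""
    return hmac.compare_digest(sign_media(path), expected_tag)

if __name__ == "__main__":
    # "ceo_announcement.mp4" is a placeholder file name for illustration only.
    tag = sign_media("ceo_announcement.mp4")
    print("Authentic copy?", verify_media("ceo_announcement.mp4", tag))
```

The check fails for any copy whose bytes have been altered, which is the basic property that watermarking and provenance schemes aim to provide at scale.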
Get Proactive About Deepfake Threats
Minimizing the risks associated with malicious deepfakes will require a multilayered technical, operational and regulatory approach. As with all evolving technology, there’s potential for misuse, and new approaches to both creating and combating that misuse will continue to evolve. Like an arms race, deepfakes and deepfake prevention build on one another. That’s why leaders must remain vigilant and invest in one or more of the measures outlined below:
- Consider adopting deepfake detection technology.
- Establish media authentication processes, such as digital watermarking.
- Enhance internal controls to ensure critical business processes require authentication outside of virtual media (a minimal sketch of such a control follows this list).
- Invest in zero-trust frameworks that continuously authenticate, authorize and validate users’ access to virtual content such as recorded meetings, emails, SaaS applications and structured data.
- Train employees on the dangers of deepfakes and what to do if they have suspicions or concerns.
- Develop an incident response strategy and practice using tabletop exercises.
- Stay informed about emerging deepfake threats and how they’re being carried out, and adjust defensive strategies accordingly.
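As a minimal sketch of the internal-controls bullet above, assuming a hypothetical treasury workflow and an illustrative approval threshold, the Python below refuses to release a large transfer until a confirmation has been recorded through a channel independent of the original request, such as a callback to a phone number already on file.

```python
from dataclasses import dataclass, field

# Illustrative threshold: transfers at or above this amount require
# out-of-band confirmation (e.g., a phone callback to a number on file).
CALLBACK_THRESHOLD = 10_000.00

@dataclass
class TransferRequest:
    request_id: str
    amount: float
    requested_by: str                                 # channel the request arrived on
    confirmations: set = field(default_factory=set)   # independent channels that confirmed

    def confirm(self, channel: str) -> None:
        """Record a confirmation received outside the original request channel."""
        self.confirmations.add(channel)

    def can_release(self) -> bool:
        """Release small transfers directly; large ones need an out-of-band check."""
        if self.amount < CALLBACK_THRESHOLD:
            return True
        return "phone_callback" in self.confirmations

if __name__ == "__main__":
    req = TransferRequest("TX-001", 250_000.00, "video call with 'CFO'")
    print(req.can_release())        # False: no independent confirmation yet
    req.confirm("phone_callback")   # treasurer calls the CFO's number on file
    print(req.can_release())        # True
```

The point is not the specific code but the control: a request that arrives only through email or a video call, however convincing, should never be sufficient on its own to move funds.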
It’s also important to note that legislation will be critical. Some states have introduced or passed legislation related to deepfakes, but while we wait for federal laws to be enacted, the onus of protecting businesses rests on company leaders and how well they prepare their organizations for a deepfake attack.
This article was originally published on Forbes.com.
Don’t get left behind in the AI revolution. We guide leaders through the disruption AI may cause to help you go from uncertain to excited about the potential of using AI. Ready to get started? Let’s Talk