In this segment of Shane O’Donnell’s Forbes Technology Council column, Shane discusses the newest subset of shadow IT: shadow AI.
In the physical world, dangerous things lurk in the shadows. The same is true for the digital world, where shadow AI can be a business leader’s worst nightmare.
Like its well-known cousin, shadow IT, shadow AI is the use of unapproved tools — in this case, AI tools — by employees within an organization. But shadow AI can be much riskier, and every C-suite leader should be concerned about how it operates in their organization.
The good news is that technology leaders are starting to understand the risks it poses. In a recent survey of 200 IT directors and executives at companies with 1,000 or more employees, 90 percent said they were worried about shadow AI from a privacy and security standpoint.
However, with one in five organizations already experiencing cyberattacks linked to shadow AI, according to IBM research, worrying is no longer enough. Companies with high levels of shadow AI face data breach costs that are $670,000 higher on average than those with minimal unauthorized AI use.
Shadow AI can also trigger fines and compliance problems, because unapproved tools can expose personal data in ways that violate privacy regulations.
The risks of invisible AI go well beyond financial impact. Because security teams don’t have visibility into unauthorized AI tools, tracing the source and scope of data exposure is nearly impossible.
A Real-World Wake-Up Call
Not long after the launch of ChatGPT, Samsung Electronics became a stark example of AI’s cybersecurity risks. Over three weeks, three engineers inadvertently leaked sensitive corporate data to the platform: one pasted source code while trying to fix a bug; another entered proprietary equipment-testing code while seeking optimizations; the third converted a confidential meeting recording to text and fed it to the chatbot to generate meeting minutes.
The cybersecurity implications were immediate and irreversible. Unlike traditional data breaches, where companies can potentially contain the damage, Samsung’s intellectual property became embedded in OpenAI’s systems, impossible to retrieve or delete. The company banned generative AI tools company-wide, joining Amazon and some major financial firms in restricting employee access to these platforms.
But banning AI may not be the best way to solve this problem. Blanket bans carry their own costs: lost productivity from employees who can no longer use AI tools for legitimate work, the expense of developing internal alternatives and the competitive disadvantage of operating without AI while rivals continue leveraging these technologies.
Shadow AI: The Cybersecurity Risk Multiplier Effect
While shadow IT might expose specific documents or data, shadow AI can extract patterns, relationships and insights from data that create far broader security vulnerabilities. This ability is what makes it exponentially more dangerous than traditional shadow IT.
The attack surface is massive. On average, companies have 67 generative AI tools running across their systems, and 90 percent of those tools lack proper licensing or oversight, according to research by cybersecurity firm Prompt Security cited by Axios. Meanwhile, 38 percent of employees admit to sharing confidential data with AI platforms, and 65 percent of ChatGPT users rely on its free tier, where data can be used to train models accessible to competitors, the Cloud Security Alliance notes.
Common cybersecurity risks include:
- Data Exfiltration At Scale: Unlike traditional breaches that steal static data, shadow AI can capture live workflows, decision-making processes and strategic thinking patterns. When employees feed proprietary information into AI systems, they’re essentially providing blueprints of how their organization operates.
- Supply Chain Vulnerabilities: Most shadow AI attacks originate through compromised apps, APIs or plug-ins connected to AI platforms. With 97 percent of organizations lacking proper AI access controls, according to the IBM report, these entry points remain largely unprotected.
- AI-Powered Attack Amplification: When shadow AI tools leak organizational data, bad actors can use that information to craft more sophisticated phishing campaigns, deepfake attacks and social engineering schemes tailored to specific companies.
A Practical Framework For Executives
Employees aren’t using unauthorized AI tools to cause harm. They’re using them because these tools solve problems that approved systems don’t address. It’s up to leaders to reframe shadow AI from a threat into a managed strategic advantage. Here are some strategies to consider:
- Implement a sandbox. Rather than driving AI use underground with restrictive policies, create secure environments where teams can safely experiment with AI tools. Establish dedicated testing environments with synthetic data, clear guardrails around what information can be processed and evaluation criteria for determining which tools warrant enterprise adoption.
- Deploy technical safeguards. Implement AI-specific data loss prevention (DLP) tools that can detect and block sensitive data patterns being sent to unauthorized AI platforms. Deploy cloud access security broker (CASB) solutions for real-time visibility into AI app usage across your network. Critically, establish clear AI governance policies that classify tools into “approved,” “limited-use” and “prohibited” categories, with use cases and data handling requirements for each tier; a minimal sketch of this kind of check appears after this list.
- Foster change through education. Transform your approach from policing to partnership. Provide comprehensive training on data privacy and ethical AI, helping employees understand the capabilities and risks of AI tools. Create safe reporting mechanisms where employees can disclose AI usage without fear of punishment. Invest in approved enterprise AI solutions that deliver the benefits employees seek.
- Turn shadow AI into competitive intelligence. When employees adopt unauthorized tools, they’re conducting informal R&D on what works for your business processes. Evaluate these grassroots adoptions to identify which capabilities deliver value, then implement vetted enterprise versions. This accelerates digital transformation based on user behavior rather than vendor promises or theoretical use cases.
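To make the tiered-policy and DLP ideas above concrete, here is a minimal Python sketch of the kind of pre-submission check such safeguards perform. The tool names, tiers and detection patterns are hypothetical assumptions for illustration, not a production policy, and commercial DLP platforms use far richer detection than simple regexes:

```python
import re

# Hypothetical policy registry for illustration only; in practice this
# would live in a governance system maintained by IT, not in code.
TOOL_POLICY = {
    "enterprise-assistant": "approved",
    "vendor-copilot-trial": "limited-use",
    "public-chatbot-free": "prohibited",
}

# Illustrative detection patterns; real DLP engines combine classifiers,
# document fingerprinting and exact-data matching.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_marker": re.compile(r"(?i)\bconfidential\b"),
}


def check_outbound_prompt(tool: str, text: str) -> tuple[bool, list[str]]:
    """Decide whether a prompt may be sent to an external AI tool."""
    tier = TOOL_POLICY.get(tool, "prohibited")  # unknown tools are blocked by default
    if tier == "prohibited":
        return False, [f"tool '{tool}' is not approved for any use"]

    hits = [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(text)]
    if tier == "limited-use" and hits:
        # Limited-use tools may handle routine text but never sensitive data.
        return False, [f"blocked: matched pattern '{h}'" for h in hits]

    # Approved tools pass, but matches are still surfaced for audit logging.
    return True, [f"audit: matched pattern '{h}'" for h in hits]


if __name__ == "__main__":
    ok, notes = check_outbound_prompt(
        "vendor-copilot-trial",
        "Confidential: here is our key-a1b2c3d4e5f6g7h8i9 and the test plan.",
    )
    print(ok, notes)  # False, with a reason for each matched pattern
```

In a real deployment, logic like this would sit in a browser extension, network proxy or CASB integration rather than in application code, but the decision flow is the same: identify the destination tool, apply its tier, and scan the payload before it leaves the network.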
Shadow AI isn’t going away — nor should it. The key to success is channeling innovation through governance and strategic frameworks. Companies that master this balance don’t just avoid catastrophic data leaks. They accelerate digital transformation, boost productivity and gain competitive intelligence from grassroots AI adoption.
Rather than viewing shadow AI as a threat to be eliminated, smart leaders recognize it as innovation waiting to be harnessed.
This article was originally published on Forbes.com.