In this segment of “Office Optional with Larry English,” Larry explains what your AI security plan should include and why you should be thinking about it now.
In late May, an image showing thick, black billows of smoke rising from a building near the Pentagon, headquarters of the U.S. armed forces, popped up on a prominent social media platform. The image was quickly determined to be a fake depicting an explosion near the federal building. Local and national officials refuted the claim, but the post still spread nationally and internationally in investment circles, causing the S&P 500 to drop briefly before rebounding. The image, along with similar images claiming a White House explosion, was likely created using generative AI. Only days later, in an open letter signed by more than 350 AI experts and public figures, industry leaders warned that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” None of this is meant to scare business leaders, but to illustrate a few key points:
- Generative AI is already here, and there’s no turning back. Even reining it in will be difficult. Creating a culture of AI awareness and preparing your team will be critical in navigating uncharted waters.
- Legislative guardrails will take time to develop in the United States. Companies can’t wait for legislation before creating plans around AI’s use, implementation, security, or disaster response; they need to realistically assess threats and build defenses now.
- For bad actors, AI has significantly lowered the barrier to entry. Those who have bad intentions but who previously lacked the technical know-how to carry out attacks can now engineer something that looks and sounds authentic.
AI Security Considerations for Enterprises
Writer, a generative AI platform, recently revealed that nearly half of senior executives believe corporate data has been unintentionally shared with ChatGPT, the most widely used generative AI platform among enterprises. These concerns aren’t baseless. In fact, cybersecurity veteran David Lefever, founder, principal and CEO of The Mako Group and one of Centric Consulting’s business partners, has found that many business leaders today are concerned about a growing number of threats. Among them is “leaky data,” the unintentional sharing of information with a third-party system without proper documentation and authorization. This can lead to privacy breaches, invalid and unreliable information, accidental security risks, and other threats. At a minimum, all AI security plans should include:

- Vulnerability management: Zero-day attacks could become more commonplace as AI enables vulnerabilities to be found more rapidly. As the name implies, this type of attack leaves zero days between the discovery of a vulnerability and the attack that exploits it.
- Fraud and threat detection: AI can enable advanced fraud. Fraud and threat detection are key to creating a cybersecurity program that reduces the risk of an attack and minimizes impacts should one happen.
- Continuous penetration testing: Conducting internal and external penetration testing isn’t one-and-done. Company leaders are starting to realize that even quarterly testing is not enough and that ongoing monitoring is required.
- IP risks: Generative AI poses unique IP risks in that your information could be exposed without your knowledge. Further, if you ask an AI tool to create something and use it, you may be inadvertently infringing on trademarks or copyrights of other companies.
- Monitoring and maintaining compliance: Protecting data isn’t simply an expectation; it’s now becoming law. Monitoring and maintaining compliance are other important considerations in your organization’s overall security strategy.
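To make the “leaky data” risk above concrete, here is a minimal, hypothetical sketch of a pre-submission screen that checks prompts for sensitive markers before they are sent to a third-party AI tool. The patterns and function names are illustrative assumptions, not a real DLP product; an actual control would use a far broader policy (document classifiers, entity detection, allow lists).

```python
import re

# Hypothetical patterns for sensitive corporate data. A real data loss
# prevention policy would be far broader than this illustration.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                  # U.S. Social Security numbers
    re.compile(r"\b\d{16}\b"),                             # bare 16-digit card numbers
    re.compile(r"(?i)\b(confidential|internal only)\b"),   # document markings
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns). Block the prompt if any pattern matches."""
    matches = [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt)]
    return (len(matches) == 0, matches)

# Example: a prompt carrying a document marking would be blocked.
allowed, hits = screen_prompt("Summarize this CONFIDENTIAL merger memo")
```

Even a simple filter like this, placed in front of employee access to generative AI tools, gives security teams a documented checkpoint rather than relying on every employee to remember the policy.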
How to Prepare for AI Security Impacts Now
The best ways to prepare for the security implications of artificial intelligence are to educate, create governance, remain vigilant and plan for recovery in case of a breach.

Create Security Awareness Across Your Organization
With any risk or vulnerability, there’s a software component and a people component. To successfully leverage AI in a secure manner, your organization will have to address both, starting with creating awareness and providing comprehensive and ongoing training for the workforce. “AI can create such convincing content to the average person that it’s going to be difficult for them to discern what’s real without intensive training,” Lefever said. “Social engineering approaches will become much more sophisticated and convincing, and it will require teaching the workforce to be critical thinkers around security.” Communicating policies, guidelines, best practices and updates to these living documents will be critical in creating and maintaining a security mindset.

Set Up AI Guardrails and Governance
A key part of creating a security mindset is establishing a governance plan that promotes the responsible and ethical use of AI tools while helping ensure compliance and managing risk in a continually evolving landscape:

- Designate a cross-functional AI governance committee
- Define AI guidelines and best practices
- Establish a decision-making process for using AI
- Audit AI tool usage and monitor performance
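The last governance step, auditing AI tool usage, can start very simply. Below is a hedged sketch of an append-only audit log for AI tool invocations; the file name, field names and helper function are assumptions for illustration, not a prescribed schema.

```python
import json
import time
from pathlib import Path

# Hypothetical audit log location; real deployments would write to a
# centralized, access-controlled log store instead of a local file.
AUDIT_LOG = Path("ai_usage_audit.jsonl")

def record_ai_call(user: str, tool: str, purpose: str) -> dict:
    """Append one structured audit record per AI tool invocation and return it."""
    entry = {
        "timestamp": time.time(),
        "user": user,
        "tool": tool,
        "purpose": purpose,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: log an employee's use of a generative AI tool.
last = record_ai_call("jdoe", "ChatGPT", "draft marketing copy")
```

Structured records like these give a governance committee something concrete to review when auditing which tools are in use, by whom, and for what purpose.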