As we wrap up our series about using AI in the workplace, we discuss the importance of AI governance, security and ethics. We provide recommendations for defining policies and guidelines, setting up decision-making processes, securing AI solutions and data, and addressing ethical considerations while promoting responsible and effective AI adoption within your company.
In our previous blog post, we discussed the importance of preparing your workforce for the AI revolution. Today, as we wrap up our series, we will delve into a crucial aspect of AI adoption: AI governance, security and ethics.
As businesses increasingly embrace AI technologies, they must navigate a complex landscape of regulatory, security and ethical concerns to ensure a successful AI and digital transformation.
Keeping abreast of cybersecurity threats is an ongoing effort, and the rise of AI tools has intensified the work of cybersecurity professionals. When exploring ChatGPT and other tools, we advocate for appropriate AI governance to ensure that people use them in an ethical and responsible way that aligns with the organization’s values and goals.
Mira Murati, chief technology officer of OpenAI and co-creator of ChatGPT and other AI models, has voiced her support for regulation to address AI security and ethical concerns. Sam Altman, OpenAI’s chief executive officer, echoed these concerns in recent testimony before Congress, telling the Senate Judiciary subcommittee that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”
ChatGPT’s viral moment shows how thoroughly the tool’s abilities caught everyone by surprise. As AI tools evolve and adoption grows, the need for governance and regulation becomes more pressing. Rather than waiting for regulators to act, we encourage our clients to adopt internal AI governance now.
Note that this post is written from the perspective of businesses using AI tools rather than businesses creating them.
The Importance of AI Governance
AI governance is crucial for organizations using AI in the workplace: it promotes responsible and ethical AI usage, ensures compliance with laws and regulations, manages risk, maintains consistency and standardization, supports skill development, fosters trust and transparency, and aligns AI adoption with long-term strategic goals.
We recommend that organizations move quickly to adopt a governance structure for managing AI adoption by following these steps:
- Establish a cross-functional AI governance committee:
  - Identify key stakeholders: Determine the key stakeholders from IT, legal, human resources and various business units who have the relevant expertise and interest in AI governance.
  - Define roles and responsibilities: Clearly outline the roles and responsibilities of each committee member to ensure efficient decision making and collaboration.
  - Set regular meetings: Schedule periodic meetings to review AI projects, discuss concerns and make decisions related to AI governance.
  - Encourage continuous learning: Promote ongoing learning among committee members to stay updated on the latest AI tools, usage and best practices.
- Define AI policies, guidelines and best practices:
  - Conduct a thorough review: Analyze existing AI tools and technologies within your organization to understand the current state of AI use.
  - Research industry standards: Investigate AI governance practices and recommendations from industry experts, regulatory bodies and peers to inform your organization’s policies.
  - Develop tailored policies and guidelines: Create a comprehensive set of AI policies, guidelines and best practices that align with your organization’s values, goals and industry requirements.
  - Communicate and train: Ensure all relevant employees are aware of the AI policies, guidelines and best practices through training sessions, workshops and internal communications.
- Set up a clear decision-making process for AI implementation and use:
  - Define AI tool approval criteria: Establish criteria for evaluating and approving AI tools, considering factors such as business impact, cost, risk and ethical implications.
  - Create a prioritization framework: Develop a framework for prioritizing AI investments based on factors like strategic alignment, expected ROI and data leakage risk (a simple scoring sketch follows this list).
  - Establish project oversight: Set up a team to monitor implementation and rollout, ensuring the company follows its governance plan.
- Ensure regular audits and assessments of AI tool usage:
  - Develop an AI audit and assessment plan: Create a plan that outlines the scope, frequency and methodology for auditing and assessing AI tool usage.
  - Assign audit responsibilities: Designate individuals or teams responsible for conducting AI audits and assessments, ensuring they have the necessary expertise and objectivity.
  - Monitor AI performance: Continuously monitor the performance of AI tools against predefined goals, KPIs and ethical standards.
  - Identify and address risks and issues: During audits and assessments, identify potential risks and issues related to AI tools and develop action plans to address them.
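To make the prioritization framework concrete, here is a minimal sketch of a weighted scoring model in Python. The criteria mirror the factors above, but the weights, the 1-to-5 scales and the example proposals are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class AIProposal:
    """A candidate AI investment, scored 1 (low) to 5 (high) on each criterion."""
    name: str
    strategic_alignment: int
    expected_roi: int
    data_leakage_risk: int  # 5 = highest risk of exposing proprietary data

# Illustrative weights; a governance committee would set its own.
WEIGHTS = {"strategic_alignment": 0.4, "expected_roi": 0.4, "data_leakage_risk": 0.2}

def priority_score(p: AIProposal) -> float:
    """Higher is better; the risk scale is inverted so riskier proposals score lower."""
    return (WEIGHTS["strategic_alignment"] * p.strategic_alignment
            + WEIGHTS["expected_roi"] * p.expected_roi
            + WEIGHTS["data_leakage_risk"] * (6 - p.data_leakage_risk))

proposals = [
    AIProposal("AI-assisted support-ticket drafts", 4, 5, 2),
    AIProposal("Code generation on the core product", 5, 4, 5),
]
for p in sorted(proposals, key=priority_score, reverse=True):
    print(f"{p.name}: {priority_score(p):.1f}")
```

Scoring proposals this way keeps approval debates grounded in the criteria the committee agreed on and makes it obvious when a high-ROI idea is being advanced despite a high leakage risk.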
Securing AI Solutions and Data in the Workplace
As AI tools become an integral part of the workplace, ensuring the security of AI solutions and data is crucial to mitigating risks, protecting sensitive information and maintaining compliance. This focus on AI security is necessary because improper usage of AI tools can lead to unintended data breaches, regulatory violations and reputational damage.
Leaky Data: The Risks of Accidental Proprietary Information Exposure
One of executives’ primary concerns is “leaky data,” or the accidental release of proprietary information to a third-party system without appropriate legal agreements, protections and controls in place.
Samsung Semiconductor allowed its engineers to access ChatGPT; over the course of 20 days, it recorded multiple instances in which engineers submitted proprietary and confidential data to the tool. This kind of accidental exposure is precisely the risk that worries security and compliance personnel.
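One practical mitigation is to screen text before it ever reaches an external AI service. Below is a minimal sketch in Python; the marker patterns and the `screen_prompt` helper are illustrative assumptions that a real deployment would replace with its own data-loss-prevention rules.

```python
import re

# Illustrative patterns a real deployment would tailor to its own data:
# internal codenames, source-code fragments, customer identifiers, etc.
SENSITIVE_PATTERNS = [
    re.compile(r"\bPROJECT-[A-Z0-9]+\b"),               # hypothetical internal codenames
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # US Social Security number shape
    re.compile(r"(?i)\b(confidential|proprietary)\b"),  # explicit document markings
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for text bound for an external AI tool."""
    hits = [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt)]
    return (len(hits) == 0, hits)

allowed, hits = screen_prompt("Summarize the CONFIDENTIAL design notes for PROJECT-ALPHA")
if not allowed:
    print("Blocked before sending; matched:", hits)
```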
Key steps to securing AI systems and data while using AI tools include:
- User Education and Awareness:
  - Train employees on secure AI tool usage, including guidelines for handling sensitive data and recognizing potential security threats.
  - Encourage a security-conscious culture where employees understand the importance of protecting AI systems and data.
- Implementing Robust Access Controls:
  - Limit access to AI tools and data to authorized personnel with appropriate privileges.
  - Use multi-factor authentication to further enhance the security of AI systems.
- Monitoring and Auditing AI Tool Usage:
  - Regularly monitor and log employee use of AI tools to detect potential misuse or unusual behavior (see the logging sketch below).
  - Conduct periodic audits of AI tool usage to ensure compliance with security policies and best practices.
- Data Protection:
  - Implement data classification policies to identify sensitive data and apply appropriate security measures.
  - Encrypt sensitive data, both in transit and at rest, to safeguard it from unauthorized access.
- Incident Response Planning:
  - Develop and maintain an incident response plan to address security incidents related to AI tool usage.
  - Regularly test and update the plan to ensure its effectiveness and adaptability to evolving security threats.
- Collaborating with External Security Experts:
  - Engage with external security experts to stay informed about the latest AI security trends and best practices.
  - Seek guidance on implementing advanced security measures for AI tools, as needed.
By focusing on these steps, organizations can better protect their AI systems and data when using AI tools, ensuring a secure and compliant work environment that leverages AI technologies effectively and responsibly.
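As a companion to the access-control and monitoring steps above, here is a minimal sketch of an audited wrapper around an AI tool call. The role table and the `ask_ai_tool` stub are hypothetical; a real system would check roles against your identity provider and forward approved calls to the sanctioned AI service.

```python
import json
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_tool_audit")

# Hypothetical role table; a real system would query your identity provider.
AUTHORIZED_ROLES = {"analyst", "engineer"}

def audited_ai_call(func):
    """Log who called an AI tool, when, and whether the call was allowed."""
    @wraps(func)
    def wrapper(user: str, role: str, prompt: str):
        allowed = role in AUTHORIZED_ROLES
        # Log the prompt length, not its content, so the audit trail
        # does not become a second copy of sensitive data.
        audit_log.info(json.dumps({
            "ts": time.time(), "user": user, "role": role,
            "prompt_chars": len(prompt), "allowed": allowed,
        }))
        if not allowed:
            raise PermissionError(f"{user} ({role}) is not authorized for this tool")
        return func(user, role, prompt)
    return wrapper

@audited_ai_call
def ask_ai_tool(user: str, role: str, prompt: str) -> str:
    return "stub response"  # a real implementation calls the approved AI service

print(ask_ai_tool("jdoe", "analyst", "Draft a status update for the team"))
```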
Ethical Considerations in AI Adoption
Ethical considerations play a vital role in AI adoption, as organizations must ensure AI technologies do not harm users or stakeholders or perpetuate harmful biases. There is an important distinction between the ethical concerns of using AI and those of creating an AI model.
This post is limited to the ethical concerns of using AI. Some key ethical considerations to address include:
- Know what to ask and validate what is given: Expertise is the number one ethical safeguard an organization can leverage to avoid the misuse of AI tools. Educating users on when to use AI, and when not to, is critical to its effective and ethical use on the job.
- Monitor for Bias: AI tools can inadvertently perpetuate or even amplify existing biases in data, leading to unfair treatment or discrimination against certain groups or individuals. It is essential to be aware of these biases and take steps to minimize their impact when using AI tools (see the disparity-check sketch below). Related to this is confirmation bias, which occurs when we see what we expect to see and ignore other relevant information. For example, in the mortgage industry, an AI tool could accidentally reimplement redlining because it was trained on biased data or trained incorrectly.
- Protect Proprietary Data: Do not allow users access to data that is proprietary or confidential without appropriate approval and clearance from the governance and security team. Develop a program of accountability and education to reinforce these protections.
- Privacy: Keep personally identifiable information safe and ensure workers do not use this data outside of applicable laws, regulations and company policy.
- Watch Attribution: AI tools have been known to copy other works, whether art or text. It is important to have a policy and controls in place to avoid direct copies of work that would violate copyright laws.
Keeping these considerations top of mind when establishing a plan for AI governance will help prevent unethical behavior, such as using ChatGPT to ingest a competitor’s copyrighted material (e.g., a white paper) and then tweak the wording to publish a competing version.
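To make bias monitoring actionable, here is a minimal sketch of a disparate-impact check based on the “four-fifths rule” often cited in US employment guidance. The groups and decisions are made-up illustrative data; in practice you would pull real outcomes from the AI-assisted workflow under review.

```python
from collections import Counter

# Illustrative, made-up decisions from an AI-assisted approval workflow:
# each record is (group, approved?).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Approval rate per group."""
    totals, approvals = Counter(), Counter()
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate-impact ratio: {ratio:.2f}")
# The "four-fifths rule": a ratio below 0.8 is a common flag for adverse impact.
if ratio < 0.8:
    print("Flag for human review: approval rates differ substantially across groups.")
```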
Banning AI Tools Won’t Work
Despite the challenges associated with AI adoption, outright banning of AI tools is not a viable solution.
Some companies have responded with a knee-jerk ban on ChatGPT usage across their organization, citing the lack of regulation and governance. While a ban may feel safe at first, it is only a temporary measure as ChatGPT gets integrated into more everyday tools.
Microsoft has recently announced it will integrate next-generation AI capabilities into its premium Office 365 products, and it has already integrated them into its Bing search engine. While the Office 365, Dynamics and Security offerings are sure to take an opt-in approach, the integration of ChatGPT into Bing means that AI is already accessible to everyone in the workforce.
Banning access to ChatGPT means more than banning OpenAI’s offerings; it would also mean banning access to the Microsoft Edge browser, Bard and Bing.com.
Key reasons why banning AI tools won’t work include:
- Competitive disadvantage: Organizations that do not adopt AI technologies risk falling behind their competitors in terms of efficiency, innovation and customer experience.
- Stifling innovation: Banning AI tools may hinder the development of new AI applications and technologies that could have significant societal benefits.
- Inevitability of AI: AI technologies are becoming increasingly pervasive, and their adoption is inevitable. Organizations must learn to navigate the challenges of AI adoption rather than avoid it altogether.
Organizations should instead focus on responsible AI implementation that mitigates risks and addresses concerns.
ChatGPT Usage Policy
While corporations may take some time to grapple with AI in the workplace, it is helpful to establish a ChatGPT usage policy or governance plan to get the most out of the tool. Here are our suggested minimal guardrails for using ChatGPT (a sketch showing how a few of them might be enforced in code follows the list):
- Avoid entering sensitive or confidential data, such as personal information or financial records, into ChatGPT. OpenAI’s policy, as of March 1, 2023, is that it will not use submitted data to train or improve its models unless a user opts in. However, it retains and monitors data for abuse for up to 30 days.
- Refrain from using ChatGPT to provide legal guidance or advice. These tasks require specialized knowledge and expertise.
- Avoid relying on ChatGPT for accurate, factual and reliable information in a field where you have cursory or no knowledge. Humans are prone to confirmation bias (believing something because we expected it), and ChatGPT can feed those biases. Remember that ChatGPT, or any AI solution, cannot replace the human element.
- Avoid relying on ChatGPT to independently perform highly specialized tasks, such as scientific research or engineering work. These fields require expertise to produce accurate and reliable results, and ChatGPT cannot be counted on to be reliable. Just as you can’t count on Google search results being completely reliable, you must investigate the results to determine relevancy and accuracy.
- Avoid assuming it is accurate about recent events. ChatGPT was trained on data gathered before late 2021, and some estimate it costs approximately $40M to train the model. This means ChatGPT is only as current as its most recently trained model, and given the cost, we do not expect frequent updates.
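To show how these guardrails can be more than words in a document, here is a minimal sketch of a policy gate applied before a prompt is sent to ChatGPT. The patterns, the restricted-topic list and the `check_against_policy` helper are all hypothetical; a real implementation would hand cleared prompts to your approved API client where the placeholder comment sits.

```python
import re

# Hypothetical, simplified policy rules mirroring the guardrails above.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-shaped strings
    re.compile(r"\b\d{13,16}\b"),          # possible payment-card numbers
]
RESTRICTED_TOPICS = {"legal advice": re.compile(r"(?i)\b(legal advice|is it legal)\b")}

def check_against_policy(prompt: str) -> list[str]:
    """Return a list of policy violations found in the prompt (empty = OK to send)."""
    violations = [f"possible sensitive data: {p.pattern}"
                  for p in PII_PATTERNS if p.search(prompt)]
    violations += [f"restricted topic: {name}"
                   for name, p in RESTRICTED_TOPICS.items() if p.search(prompt)]
    return violations

prompt = "Is it legal to reuse a competitor's white paper? My card is 4111111111111111."
problems = check_against_policy(prompt)
if problems:
    print("Prompt blocked:", "; ".join(problems))
else:
    print("Prompt cleared")  # hand off to the approved ChatGPT client here
```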
Planning for AI Governance and Security
Ultimately, the impact of AI and ChatGPT on cybersecurity comes down to how people use it. ChatGPT and similar emerging AI models offer opportunities to make the workforce more productive and to let employees focus more effectively on strategic initiatives within the organization.
However, organizations need a strategy to protect themselves and their employees. Such a plan is more important now than ever; it includes updating policies and protocols to protect against bad actors and establishing processes and security awareness through governance within the organization.
A Comprehensive Approach is Critical
The successful integration of AI technologies like ChatGPT into the business world requires a comprehensive approach that addresses governance, security and ethical considerations.
By implementing robust AI governance frameworks, securing AI systems and data, and addressing ethical considerations, businesses can responsibly adopt AI technologies and unlock their full potential.
Rather than resorting to banning AI tools, organizations should focus on responsible AI implementation that mitigates risks and addresses concerns, while allowing them to remain competitive and foster innovation.