In this segment of “Office Optional with Larry English,” Larry shares five action items to set your business up to use AI responsibly.
The list of concerns around AI is long: privacy, security, bias, job security, inequality, disinformation, to name just a few. To ensure the technology remains a force for positive change and to prevent unintended negative consequences, businesses have a duty to deploy AI ethically and responsibly.
Yet studies suggest many organizations are falling behind in this area. A 2024 Workday report, for instance, found that only 62 percent of leaders and 55 percent of employees believe their employer will ensure AI is implemented responsibly. Leaders need a plan for responsible AI governance that protects privacy and personal data, upholds the well-being of users, mitigates bias and promotes transparency. Read on to learn how to get started.
What Does the Law Say About Responsible AI?
While it’s true that the technology is advancing much faster than governments can regulate it, legislation is coming. Already, in early 2024, the EU passed the Artificial Intelligence Act, providing a regulatory and legal framework for AI. No doubt, more laws are coming both in the U.S. and globally.
Leaders can look to current laws around privacy, consumer protection and protection of classes of individuals for guidance now, suggests Michael Bennett, an AI business expert and researcher at Boston’s Northeastern University. “In many instances, we have 100-plus years of laws addressing these matters,” he notes. “We can look to pre-existing law that’s already been tested in these spaces and non-AI contexts to address emerging AI issues.”
One thing is almost certain: AI laws are going to be complex to navigate. “You’ll almost certainly see more AI regulation, whether it’s city ordinance, state law or new federal legislation,” Bennett says. “The US is not going to be like the EU; there’s probably not going to be one overarching framework in a near-term timeframe. Instead, we can anticipate a growing regulatory thicket that’s going to be very complex for folks to navigate.”
What Leaders Should Do Now for Responsible AI Governance
Businesses can’t afford to wait for new AI laws and policies to craft responsible AI governance plans — they need AI now to remain competitive, and they need to develop and deploy it responsibly to prevent a major headache later on once the regulatory environment catches up.
Already, some companies are working to provide responsible AI governance frameworks that other organizations can use as a starting point. “We’re not waiting for regulation to be finalized,” says Kelly Trindel, chief responsible AI officer at Workday. “We’re making major investments in responsible AI because we know it’s not only the right thing to do, it’s the smart thing to do.”
To get started with responsible AI, consider the following action items:
1. Research existing frameworks and guidelines.
If you’re just getting started with responsible AI, look to recently developed best practice frameworks. A good place to start is the NIST AI Risk Management Framework, and many industry groups are putting out their own industry-specific guidelines and best practices as well, says Trindel. You can also look at how other companies have adapted these frameworks and best practices. Do a deep dive into what’s already out there, making note of how existing guidelines could work for your organization.
Also think through your organization’s existing technology guidelines and practices around privacy and data governance. Most likely, you can adapt some of your current governance structures and processes to guide responsible AI use.
2. Build a cross-functional, responsible AI governance team.
As AI-related laws are enacted, organizations will need to ensure compliance. You’ll need to build a team tasked with making sure any use of AI remains ethical and adheres to your goals for responsible AI use, as well as any legal and regulatory requirements.
“Increasingly, we will see businesses needing to respond less to abstract internally generated notions of what privacy or transparency means and more to their explicitly articulated mandates required for compliance with regulation,” Bennett says. “This change will require the creation and maintenance of cross-functional teams that include specialists with expertise in AI law, ethics and data science, all working together harmoniously.”
Workday, for instance, has a dedicated responsible AI team, including data scientists, social scientists and DevOps experts, that focuses on guiding the business to a responsible AI-by-design approach. This team orchestrates cross-functional engagement with experts from product, engineering, user experience, legal, and data privacy and reports to the chief legal officer.
Workday keeps reporting lines separate to ensure independence of ethical review and efficiency of product development. “We also have a cross-functional executive advisory board that meets monthly to guide the embedding of responsible-AI-by-design into our technologies and advise on AI development edge cases not yet contemplated by our core governance,” Trindel says.
3. Determine the risks associated with your intended AI use.
The EU AI Act defines four levels of risk for AI, ranging from unacceptable uses (such as social scoring or manipulative AI) through high-risk and limited-risk uses down to minimal-risk uses. Each category comes with different regulatory rules.
Leaders can use the AI Act’s categorization of risk as a starting point to develop an internal risk evaluation process based on their organization’s unique business needs. Workday took this approach in crafting its responsible AI policies and practices, Trindel notes, leaning heavily on the frameworks provided by the EU AI Act and NIST’s AI Risk Management Framework. Such frameworks outline risks and mitigation strategies in the areas of safety, accountability, human oversight, transparency and fairness.
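To make the idea of an internal risk evaluation concrete, here is a minimal sketch in Python of how a governance team might tier proposed AI use cases along lines loosely inspired by the EU AI Act. The tier names, the `AIUseCase` fields and the classification logic are illustrative assumptions only, not the Act’s legal definitions or Workday’s process.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers loosely modeled on the EU AI Act's four risk levels."""
    UNACCEPTABLE = "unacceptable"   # e.g., social scoring, manipulative AI
    HIGH = "high"                   # e.g., hiring, credit, safety-critical uses
    LIMITED = "limited"             # e.g., chatbots needing transparency notices
    MINIMAL = "minimal"             # e.g., spam filters, internal productivity aids


@dataclass
class AIUseCase:
    name: str
    affects_consequential_decisions: bool    # hiring, lending, benefits, etc.
    involves_manipulation_or_scoring: bool
    interacts_directly_with_users: bool


def classify(use_case: AIUseCase) -> RiskTier:
    """Map a proposed use case to an internal risk tier (illustrative logic only)."""
    if use_case.involves_manipulation_or_scoring:
        return RiskTier.UNACCEPTABLE
    if use_case.affects_consequential_decisions:
        return RiskTier.HIGH
    if use_case.interacts_directly_with_users:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# Example: a resume-screening assistant would land in the HIGH tier.
print(classify(AIUseCase("resume screening", True, False, True)).value)  # -> "high"
```

In practice, the questions your team asks will come from your own legal, privacy and domain experts; the point is simply to make the evaluation repeatable rather than ad hoc.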
4. Control your AI risks.
Once you understand the risks associated with each use case for AI within your organization, create a strategy for appropriately mitigating and managing those risks.
“If something is high risk, it doesn’t mean we don’t do it,” says Trindel. “It just means we have different guardrails for ourselves internally.” Kathy Pham, vice president of AI at Workday, adds: “The risk framework helps us make decisions about what we should build and how to use the responsible AI-by-design approach to avoid unintended consequences and build trust into the technology. I see this as similar to how we’ve had security, privacy and integrity by design prioritized as well. This approach helps us to keep moving on AI innovations for our customers and do so in a safe and ethical way.”
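To illustrate the point that higher-risk use cases trigger stronger internal guardrails rather than an automatic “no,” here is a small, hypothetical sketch of a tier-to-controls mapping. The specific controls listed are assumptions for illustration, not Workday’s actual policy or any regulator’s requirements.

```python
# Hypothetical mapping from internal risk tiers to the guardrails they trigger.
GUARDRAILS = {
    "unacceptable": ["do not build or deploy"],
    "high": [
        "bias and impact assessment before launch",
        "human review of consequential decisions",
        "cross-functional governance sign-off",
        "ongoing monitoring and audit logging",
    ],
    "limited": [
        "disclose AI involvement to users",
        "periodic output sampling and review",
    ],
    "minimal": ["standard security and privacy review"],
}


def required_guardrails(tier: str) -> list[str]:
    """Look up the internal controls a given risk tier triggers."""
    return GUARDRAILS.get(tier, ["escalate to the responsible AI team"])


print(required_guardrails("high"))
```

A mapping like this keeps the decision from being binary: high-risk work can proceed, but only with the extra oversight the tier demands.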
5. Select an AI vendor with a strong focus on responsible, ethical AI.
Both organizations developing AI tools and those deploying the tools have important roles to play in responsible AI, says Trindel, describing this as a “shared responsibility ecosystem.” Leaders deploying AI within their organization must ensure they’re investing in an AI vendor with a focus on responsible AI, including strong controls around privacy and security.
AI is a powerful tool that can be used for positive change — if business leaders develop and deploy the technology carefully with high ethical standards and a smart governance plan. Leaders need to act now to make sure AI benefits their organizations and society as a whole and to prevent major disruption once more comprehensive AI laws are in effect worldwide.
This blog was originally published on Forbes.com.
Are you ready to explore how artificial intelligence can fit into your business but aren’t sure where to start? Our AI experts can guide you through the entire process, from planning to implementation. Talk to an expert.