To get the best possible employees, businesses have to become more inclusive. In this blog, we explain how an inclusive AI environment can help with DEI in your business.
Artificial intelligence (AI) was a new buzzword, a new concept, only a few short years ago. And while many questions remain about exactly how AI will continue to transform our personal and business lives, there is likely not a soul on the planet with a Wi-Fi connection, or even just a TV or radio, who hasn't heard of it or even used it. Simply put, AI is now on everyone's radar.
Nearly every major organization – business or otherwise – is trying to introduce and implement AI in a way that allows them to become more efficient and productive while also helping them improve strategic business decisions. Some organizations are mature in the space, while others are in the process of figuring it out.
Levels of adoption vary widely worldwide; most companies either haven't adopted AI and machine learning (ML) or are still researching it. One recent survey found that only 42 percent of North American companies have adopted AI or ML, while 22 percent are introducing it and 21 percent are scaling up their AI involvement.
Too often, however, C-suite executives underestimate the effect that implementing AI has on people. In AI's early days, they largely overlooked how people had to learn to absorb AI applications into their day-to-day responsibilities and figure out how the technology would ultimately affect their careers.
Organizations still need change management to help employees become fully comfortable with AI technology. No less important, though, is understanding that AI adopters must overcome the racial, gender and other biases inherent in much of their data if they are to embrace the imperative of diversity, equity and inclusion (DEI) in the corporate world.
Sometimes, overcoming bias involves being vigilant about unintended exclusionary results – say, from natural language processing algorithms. Amazon halted its use of a hiring algorithm when it discovered that the tool preferred applicants whose resumes included the words “executed” or “captured” – words found more often in men's resumes.
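An audit along these lines can be sketched in a few lines of code. The function names and the 10-point threshold below are hypothetical, not any vendor's actual tooling; the idea is simply to compare how often gender-associated terms such as "executed" or "captured" appear in the resumes a model selects versus those it rejects.

```python
# Hypothetical audit inspired by the Amazon example: compare how often
# gender-associated terms appear in resumes a screening model selects
# versus those it rejects. Term list and threshold are illustrative.
GENDER_ASSOCIATED_TERMS = {"executed", "captured"}

def term_rate(resumes):
    """Fraction of resumes containing at least one flagged term."""
    if not resumes:
        return 0.0
    hits = sum(1 for text in resumes
               if GENDER_ASSOCIATED_TERMS & set(text.lower().split()))
    return hits / len(resumes)

def audit_selection_bias(selected, rejected, threshold=0.10):
    """Flag the model if the flagged-term rate differs sharply
    between the selected and rejected pools."""
    selected_rate = term_rate(selected)
    rejected_rate = term_rate(rejected)
    gap = selected_rate - rejected_rate
    return {"selected_rate": selected_rate,
            "rejected_rate": rejected_rate,
            "gap": gap,
            "flagged": abs(gap) > threshold}
```

A large gap between the two rates is not proof of bias on its own, but it is exactly the kind of signal that should trigger a closer look at the model.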
Companies can band together to tackle the problem, too. Microsoft and Robust Intelligence contributed prize money to a bias-bounty competition to reward the winning participants who devised the best tools to identify and mitigate algorithmic biases in AI models. The contest challenged entrants to build machine learning models that label a data set of about 15,000 synthetically generated images of human faces with their skin tones, age groups, and perceived gender. They were judged on multiple grounds, including how accurately the models tagged images.
Give Employees a Big Say in Planning and Implementing AI Technology
As with any significant initiative, managing the AI technology revolution carefully and intentionally requires a well-thought-out plan, where employees participate in developing as well as implementing the plan.
One such framework, called STEP, has been successfully adopted by several knowledge-intensive companies at the leading edge of AI use. True to its acronym, it involves:
- Segmenting tasks for AI either to automate or enhance – with workers themselves deciding which tasks belong in each category.
- Transitioning tasks across work roles – letting workers spend more time on certain higher-level tasks and shed lower-level ones to focus on more critical work.
- Educating workers to make the best use of evolving AI proficiencies and learn the new AI-related skills essential to their changing jobs – with multiple opportunities to relearn and refine those skills.
- Evaluating performance to see how well employees learn and apply AI technologies for their own and others' benefit – a process that goes beyond top-down assessments of productivity to having employees themselves decide whether AI can help them do faster, more accurate work.
Initial returns suggest that STEP does three essential things: empowers workers to be closely involved in determining their new responsibilities, lets leaders rethink work roles and reallocate work functions as value-added changes, and shows businesses how to manage the sometimes-bewildering pace and details of AI tech change in a sustained rather than a sporadic way.
Rather than hierarchical approaches where leaders impose change upon their employees without prior notification or consultation, more collegial alternatives such as STEP achieve buy-in by making workers active agents of the change. Co-creating and including employees in the AI development and execution processes drive a valuable and sustainable product – and can establish and maintain a DEI-friendly workplace.
Accordingly, employee surveys inviting feedback can be more effective than HR teams in identifying policies and practices that potentially exclude groups within the organization. Workers also can exert an influence as DEI change agents through DEI committees and employee resource groups – which are employee-led – that can lobby for DEI initiatives.
Without Inclusivity, There Is No Diversity or Equity
For AI to evolve to its highest levels of operational effectiveness, it must recognize and engage the latent talent pool of the entire workforce, in both the leadership and staff ranks. And “entire” means the full demographic breadth of the population, cutting across all racial and gender lines. The movement to systematize DEI policies in business, civic and non-profit organizations is integral to making that happen – whether or not the jobs in question are AI-related.
The ways to cast this widest possible net are as varied as the companies leading the way.
According to Fortune, Colgate-Palmolive does it with a data dashboard that monitors groups by gender, ethnicity and race to determine which ones are advancing or not and how diverse the candidate slates are for a given job (the target is 50 percent).
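The core of a dashboard check like this is simple arithmetic. The sketch below is hypothetical, assuming each candidate record carries a boolean flag for whichever demographic dimension is being tracked; a real dashboard would pull from HR systems and monitor several dimensions at once.

```python
def slate_diversity(candidates, target=0.50):
    """Share of a candidate slate drawn from under-represented groups,
    compared against a target (the 50 percent figure mirrors the one
    reported for Colgate-Palmolive). `candidates` is a list of dicts
    with a boolean 'diverse' field -- a stand-in for whatever
    demographic fields a real dashboard would track."""
    if not candidates:
        return {"share": 0.0, "meets_target": False}
    share = sum(1 for c in candidates if c["diverse"]) / len(candidates)
    return {"share": share, "meets_target": share >= target}
```

Running this per open role, rather than company-wide, is what lets a dashboard show which groups are advancing and which slates fall short.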
In its first year of comprehensively expanding its supply chain to meet the demands of its customer base, Lowe’s Home Improvement invited small, diverse businesses to pitch their products and chose eight candidates out of 1,300 applicants for mentorship that helped them grow their businesses and reach new consumer audiences.
Finally, Herman Miller became the catalyst for a cross-industry cooperative approach by starting the Diversity in Design (DID) Collaborative (whose members include Adobe and Levi Strauss) to cultivate systemic change for greater diversity. Consistent with this, Herman Miller is addressing the under-representation of Black creatives in U.S. design by furnishing career opportunities, networking and mentorship for Black youths and college students.
We can argue that without inclusivity, there is no diversity or equity in the emerging AI world – not only because inclusivity is an easier metric to gauge than diversity but because making diversity a practical reality demands that all employees get the chance to make their voices heard and apply those latent talents to AI implementation in their workplaces.
Korn Ferry cites a new Canadian research study showing that diverse teams, with their sometimes-contradictory ways of thinking and behaving, can devolve into chaos without inclusive leadership. Leadership that is collaborative, transparent and culturally nimble can unleash all of that latent multicultural, multi-ethnic, differently-abled talent, freeing people to take risks, act authentically and take responsibility for their own development.
For employees to lend their voices to AI solutions-making, businesses must create a culture of acceptance that encourages, rather than inhibits, that kind of participation. For instance, this is critical to developing assistive technologies that effectively support accessibility on the job, given that 76 percent of employees with disabilities are reluctant to fully disclose their impairments at work.
How is this culture being created? The 2023 Disability Equality Index report found that 62 percent of responding companies have disability experts to deal with issues that disabled workers confront when they use internal-facing digital products. Resources such as the Equitable AI Playbook of the Partnership on Employment & Accessible Technology suggest how to foster cultures of authentic inclusivity when establishing AI workplace programs.
An overemphasis on equity can distort what inclusion ultimately seeks to achieve. When equality of opportunity – where everyone gets the same chance to succeed – morphs into equality of outcomes, the result is an impossible goal that attempts to guarantee universal success and not infrequently kickstarts a backlash against DEI.
So, how do you make your AI ecosystem more inclusive?
1. Set the Tone
First, set a more inclusive tone. It’s not hard to understand basic AI concepts, even at the high-school level, but too much of the conversation is filled with jargon, and that tends to leave a lot of people on the outside looking in. Simplifying the language can make AI much more inclusive and accessible.
2. Get More People Involved
Next, companies can bring in the people least likely to be involved in AI. You can do this by auditing hiring practices so that candidates you're (intentionally or accidentally) screening out because of gaps in their existing skills might instead be selected to train on AI tools. You can also identify benefits that specifically help a wide variety of users learn how to do their jobs more easily, quickly and safely.
3. Update Hiring Algorithms
And, since AI ultimately revolves around algorithms, a more inclusive AI environment means having more inclusive algorithms – something IBM, for one, is doing by sharing research and certain tools to help support bias mitigation. Significantly broadening your database to make it more inclusive can happen through several steps:
- Gather more representative data about individuals in marginalized communities.
- Implement guidelines for ethical data sourcing.
- Correct the historical exclusion of such communities from research and statistics.
- Design algorithms to avoid oversimplifying and generalizing parameters (i.e., proxies) for these target groups.
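The representativeness step above can be made concrete with a quick dataset check before training. This is a minimal sketch under stated assumptions: records carry a demographic group field, reference-population shares are known, and the field names, shares and 5-point tolerance are all illustrative.

```python
from collections import Counter

def representation_gaps(records, group_field, population_shares,
                        tolerance=0.05):
    """Compare each group's share of a dataset against its share of a
    reference population, and flag groups under-represented by more
    than `tolerance`. Field names and shares here are illustrative."""
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if expected - observed > tolerance:
            gaps[group] = {"expected": expected,
                           "observed": round(observed, 3)}
    return gaps
```

A non-empty result is a prompt to gather more representative data or rebalance the set, per the steps above, before the gap gets baked into a model.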
Conclusion
AI, by its very nature, can do so much to make the workplace more inclusive, and it’s already doing that – with technology that improves accessibility for people with disabilities, gets rid of language bias in job listings, and detects patterns in corporate data to confront disparities in succession planning.
Tying in virtual reality (VR) and augmented reality (AR) features helps instructional designers create real-world, interactive training experiences. Training teams can use video generation platforms to quickly and inexpensively conjure up people who speak a diverse range of languages in many different accents. And AI tools can analyze images, videos, and text to red-flag biased training materials and content.
We still have a way to go before corporations can fully realize the potential for inclusive AI, but the technology and many of the players designing and implementing it are making considerable progress in that direction.