Enter Centric: A Cross-Functional Approach to AI Intake and Governance
Our client initiated a strategic engagement with us to identify actionable AI use cases and develop a resilient governance framework. Drawing on the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), which recommends aligning new governance structures with an organization’s existing processes, we established a governance model that integrated with the client’s continuous improvement methodology and existing collaborative relationships, such as departmental partnerships between business and IT teams.
This initiative involved cross-functional collaboration with executives, continuous improvement directors, and subject matter experts across claims processing, underwriting, customer service, and IT. This team’s combined efforts resulted in a structured AI Governance Framework that defined clear roles and responsibilities to ensure accountability, communication, and effective ownership.
Central to the framework was an AI intake and assessment form: a tool designed to streamline the evaluation of AI opportunities by capturing essential criteria on potential risks, suitability, alignment with business values, and feasibility, as the AI RMF recommends.
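To make the idea concrete, here is a minimal sketch of what such an intake record might capture. The field names, the 1-to-5 scales, and the example request are illustrative assumptions for this sketch, not the client’s actual form.

```python
from dataclasses import dataclass, field

# Illustrative only: field names and 1-5 scales are assumptions,
# not the client's actual intake form.
@dataclass
class AIIntakeRequest:
    use_case: str                 # short description of the AI opportunity
    business_owner: str           # accountable sponsor or department
    risk_level: int               # 1 (low) to 5 (high) potential risk
    suitability: int              # 1-5: is AI the right tool for this problem?
    value_alignment: int          # 1-5: fit with business values and strategy
    feasibility: int              # 1-5: data, skills, and technical readiness
    notes: list[str] = field(default_factory=list)

    def ready_for_review(self) -> bool:
        """A request advances to governance review only when every
        criterion has been scored."""
        scores = (self.risk_level, self.suitability,
                  self.value_alignment, self.feasibility)
        return all(1 <= s <= 5 for s in scores)

# Example: a hypothetical claims-processing idea captured for review
request = AIIntakeRequest(
    use_case="Summarize claims correspondence for adjusters",
    business_owner="Claims Processing",
    risk_level=2, suitability=4, value_alignment=5, feasibility=3,
)
print(request.ready_for_review())  # True
```

Structuring the intake this way keeps every opportunity answering the same questions, which is what makes later comparison and prioritization possible.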
We then packaged the AI Intake and Governance Framework into a governance manual, a resource that helps new teams operationalize the governance bodies and processes while maintaining the documentation that effective AI oversight requires.
One key aspect of governance woven into the AI intake process was identifying key performance metrics and outcomes to determine the business value of AI initiatives. Building on the organization’s existing innovation and continuous improvement practices, this approach brought a new level of alignment and measurement that supports better planning, focus, and effectiveness. The team quickly adapted to the more structured approach, recognizing its potential to enhance their decision-making.
One team member highlighted the benefits of this collaborative effort while completing the AI Intake and Assessment form. She noted, “I have been thinking about this problem in my department for a long time, and just going through this exercise, the group has identified new considerations I had not thought of.” This team member’s reflection highlights the value of involving cross-functional teams in the process. By bringing knowledge of their diverse needs and objectives to the table, cross-functional teams work collaboratively to make responsible decisions regarding the risks and benefits of AI throughout the organization.
The introduction of a structured governance approach enabled our client to begin strategically piloting AI opportunities and putting the right policies and procedures in place. We then helped them implement Microsoft 365 and Microsoft Copilot tools, allowing them to start with low-risk initiatives before moving to higher-impact use cases. This phased approach enables the gradual integration and adoption of AI within the company’s operational processes.
The Results: Alignment, Measurement and Adoption
Several months into operationalizing their AI Governance Framework, our client has found their rhythm. They have successfully cultivated a process and an environment for governance and innovation, and their confidence in handling AI use cases continues to grow. Regular AI leadership team meetings are essential to their process, facilitating active engagement with the AI intake and assessment form.
A key development in the company’s governance structure is the appointment of a lead for the 10-plus-member AI Governance Council, enhancing its approach to risk tolerance and management. This role is important for activating and refining the company’s policies and procedures, which they test through the AI initiative intake and assessment process. For example, the council is improving how it evaluates and scores software vendors and third-party AI tools, a capability vital for maintaining the quality and security of the AI applications our client integrates into their operations. The new policies and procedures they are creating emphasize data protection, compliance, and ethical AI practices.
The company has also created a prioritized list of AI initiatives identified as valuable for the organization, focusing on impact, feasibility, and alignment with strategic objectives, with each initiative moving through the established review and approval process.
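As a rough illustration of how a prioritized list like this can be built, the sketch below applies a simple weighted score across the stated criteria. The weights, initiative names, and scores are assumptions for the example, not the client’s actual figures.

```python
# Illustrative weighted prioritization across the criteria named above
# (impact, feasibility, strategic alignment). Weights, initiative names,
# and scores are assumptions for the sketch, not the client's figures.
WEIGHTS = {"impact": 0.4, "feasibility": 0.3, "alignment": 0.3}

initiatives = {
    "Claims correspondence summarization": {"impact": 4, "feasibility": 4, "alignment": 5},
    "Underwriting document triage":        {"impact": 5, "feasibility": 3, "alignment": 4},
    "Customer service chat assist":        {"impact": 3, "feasibility": 5, "alignment": 3},
}

def priority_score(scores: dict[str, int]) -> float:
    """Weighted sum of 1-5 criterion scores; higher means higher priority."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

# Rank initiatives from highest to lowest priority for council review
ranked = sorted(initiatives.items(), key=lambda item: priority_score(item[1]), reverse=True)
for name, scores in ranked:
    print(f"{priority_score(scores):.2f}  {name}")
```

The specific weights matter less than the discipline: every initiative is judged against the same criteria, so the council can defend why one pilot goes first.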
The collaborative effort between Centric and this leading insurer has gone beyond simply replacing an existing process with an AI solution. The team is now focused on the art of the possible, strengthened by solid governance practices.
This engagement has reaffirmed our client’s commitment to continuous improvement and has positioned them as forward-thinking leaders in the insurance sector. By effectively marrying traditional practices with new, responsible AI, they have set a new standard for innovation, operational excellence, and risk management in the industry.
Conclusion
Many organizations are so eager to reap the benefits of AI that they fail to look strategically at their governance policies and practices. The result is not only wasted time and resources but also serious risk. As our engagement with this client shows, however, you don’t have to overhaul your governance structures completely. Instead, work cross-functionally to build on the structures already in place to ensure you are pursuing the right AI opportunities in the safest, most responsible way.