Building trust in AI is a business imperative
ON AI TRUST
Why investing in AI trust pays off
It’s becoming increasingly clear that AI can bring superpowers to people in many roles and will change how they work. Soon, all software that companies build and buy will become AI software, which will be more flexible and adaptive than today’s rule-based systems. This will shrink time from opportunity discovery to action. Integration will get easier as agents dynamically bridge systems so that fragmented business processes come together in interconnected flows. The shift will unlock immense value for enterprises of all kinds, enhancing velocity, productivity, and innovation.
But to get there, trust is critical. Stakeholder expectations are rising alongside perceived risks, and that is driving rapid experimentation with new regulation. Yet regulatory patterns and postures around the world are far from settled. And while compliance is essential, building AI trust goes well beyond compliance.
The companies that derive the most value from AI will be those that create trust with their customers, employees, and stakeholders. Fundamentally, people must trust AI enough to hand over tasks. Enhanced evaluations, transparency, and explainability can all contribute—as well as flexible governance that puts principles into practice while encouraging innovation. Organizations can start with a principled approach to deciding not just what they can build but what they should build. These ethical decisions must be rooted in the values unique to each organization and the values of a society that places humans at the center of the AI ecosystem. This approach to building trust is responsible AI, or RAI. And when implemented well, RAI leads to real ROI.
Our research has found that the majority of large companies (72 percent) are implementing AI today in at least one business process, but just 18 percent have an RAI council with decision authority. Making AI governance work requires bringing people together who can offer complementary cross-functional perspectives.
Getting RAI right means implementing guidelines for all and operationalizing formal AI trust policies. This can create a psychologically safe environment where employees feel empowered to innovate boldly. But organizations also need technical guardrails to ensure their AI systems can be driven fast but safely. Across industries, we have repeatedly seen how the right AI guardrails can accelerate innovation, not impede it.
Trusted data is also key to AI innovation. AI builders should constantly ask themselves: “How do we create the right metadata to track the provenance of data sets, how they were collected, and how they therefore can be used?” Deploying a data operating guide ensures that AI builders have well-curated and documented data for responsible innovation.
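To make that question concrete, builders can attach a lightweight provenance record to every data set before it enters an AI pipeline. The sketch below, in Python, shows one way such a record might look; the schema, field names, and example values are illustrative assumptions for this article, not a standard or any particular vendor's format.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class DatasetProvenanceRecord:
    """Minimal provenance metadata recorded before a data set is used for AI (illustrative schema)."""
    name: str
    source: str                      # where the data came from
    collection_method: str           # how it was collected
    collected_on: date
    consent_basis: str               # legal or consent basis for collection
    permitted_uses: list[str] = field(default_factory=list)
    prohibited_uses: list[str] = field(default_factory=list)
    steward: str = ""                # accountable owner

    def to_json(self) -> str:
        # Serialize for storage in a data catalog or registry.
        return json.dumps(asdict(self), default=str, indent=2)

# Hypothetical example: documenting a customer-support transcript data set.
record = DatasetProvenanceRecord(
    name="support-transcripts-2024",
    source="internal CRM export",
    collection_method="logged customer chats, anonymized before export",
    collected_on=date(2024, 3, 1),
    consent_basis="customer terms of service, service-improvement clause",
    permitted_uses=["fine-tuning support assistants", "quality evaluation"],
    prohibited_uses=["marketing targeting", "re-identification"],
    steward="data-governance@company.example",
)
print(record.to_json())
```

However it is implemented, the point is that provenance, consent basis, and permitted uses travel with the data, so AI builders can check them before training or deployment rather than reconstructing them afterward.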
In parallel with strengthening data governance and guardrails, leaders must tackle the hard work of building and implementing trusted AI deployment processes. I often advise leaders to take three steps to move fast.
First, educate the enterprise. Create a clear communication plan about what AI trust means for the entire organization and why everyone should be committed to it. Define how executives should lead in an AI era—and then roll out structured reskilling and upskilling programs. Even top technologists can be new to many aspects of RAI and will need to learn new human-centered AI engineering practices.
Second, invest in AI trust. Allocating the right resources requires treating AI trust as an asset to be built up, not a compliance cost to be “managed down” under regulatory scrutiny. This means creating a multiquarter road map for enhancing RAI maturity that embraces people, processes, and technology in a well-orchestrated action plan.
Third, engage cross-functional teams to deploy a strong governance platform, including registries for the software and the data resources that must be built or bought, as well as end-to-end workflows to ensure the right controls are in place. In addition, at the model and product level, AI tools should be continuously monitored by machine learning operations, or MLOps, for performance, quality, and risk. These “engine room” technologies are critical to ensuring that leaders “on the bridge” can make confident decisions.
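At the “engine room” level, a registry entry and a monitoring check can start very simply. The Python sketch below assumes a basic in-house registry schema with illustrative names and thresholds; real MLOps platforms add much more, such as lineage, approval workflows, and alerting integrations.

```python
from dataclasses import dataclass

@dataclass
class ModelRegistryEntry:
    """One row in an AI governance registry: what is deployed, who owns it, and its guardrail thresholds (illustrative)."""
    model_name: str
    version: str
    owner: str
    approved_use_cases: list[str]
    risk_tier: str                 # e.g., "low", "medium", "high"
    min_accuracy: float            # quality floor agreed at approval time
    max_bias_gap: float            # largest tolerated metric gap across monitored groups

def monitoring_check(entry: ModelRegistryEntry,
                     observed_accuracy: float,
                     observed_bias_gap: float) -> list[str]:
    """Return alerts when live performance drifts outside the approved bounds."""
    alerts = []
    if observed_accuracy < entry.min_accuracy:
        alerts.append(f"{entry.model_name} v{entry.version}: accuracy {observed_accuracy:.2f} "
                      f"below floor {entry.min_accuracy:.2f}")
    if observed_bias_gap > entry.max_bias_gap:
        alerts.append(f"{entry.model_name} v{entry.version}: bias gap {observed_bias_gap:.2f} "
                      f"above limit {entry.max_bias_gap:.2f}")
    return alerts

# Hypothetical example run with purely illustrative numbers.
entry = ModelRegistryEntry(
    model_name="claims-triage",
    version="1.4.0",
    owner="ml-platform@company.example",
    approved_use_cases=["routing insurance claims to reviewers"],
    risk_tier="medium",
    min_accuracy=0.90,
    max_bias_gap=0.05,
)
for alert in monitoring_check(entry, observed_accuracy=0.87, observed_bias_gap=0.03):
    print("ALERT:", alert)  # surfaced to the governance workflow for review
```

Checks like these give the people “on the bridge” a continuous, auditable signal that deployed models are still operating within the bounds they were approved for.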
Getting AI trust right is a shared responsibility between the organizations deploying AI and the platform providers, governments, international organizations, and standards bodies aiming to ensure that AI is safe and reliable. In this dynamic environment, academic researchers, open-source communities, and developers also play a big role in building AI that is more trustworthy, transparent, and explainable. CEOs and chief technology officers can do their part by getting their data houses in order, empowering their teams to innovate safely, and monitoring all their AI deployments for signs of bias or misinformation.