The rapid growth of generative AI and large language models (LLMs) is driving adoption across business functions, with the aim of boosting productivity, efficiency, and innovation. However, these benefits can only be realized if AI is deployed safely and responsibly. Responsible AI (RAI) practices, the core of a broader AI trust strategy that builds confidence among customers, employees, and stakeholders in the organization's use of AI, are essential to achieving this. By addressing critical aspects such as data governance, explainability, fairness, privacy, security, and transparency, RAI helps organizations mitigate risks, build trust, ensure accountability, and maximize the impact of their AI solutions.
A recent McKinsey survey of more than 750 leaders across 38 countries provides insights into the current state of RAI in enterprises. Survey respondents come from industries ranging from technology to healthcare and represent professional roles in legal, data and AI, engineering, risk, finance, and more. Their responses were assessed using the McKinsey AI Trust Maturity Model, an RAI framework that encompasses four dimensions of RAI—strategy, risk management, data and technology, and operating model—with 21 subdimensions (Exhibit 1). RAI maturity was ranked across four levels, ranging from the development of foundational RAI practices to the implementation of a comprehensive and proactive program (see sidebar, “What maturity looks like: RAI development across different dimensions”).
The average RAI maturity score for organizations surveyed was 2.0 on a scale of 0 to 4, with about 36 percent of respondents falling at level 2. This implies that, on average, organizations are still in the process of integrating responsible AI practices, such as defined key risk indicators, data quality guidelines, and incident response plans.
Technology, media, and telecommunications (TMT) and financial and professional services are leading the way with an average RAI maturity score of 2.1 (Exhibit 2). Geographically, India stands out in RAI maturity, scoring 23 percent above the global average with a score of 2.5, followed by the United States, at 19 percent above average and a score of 2.4. This could reflect a greater awareness of the risks enterprises face and, in the more litigious context of the United States, the uncertainty around potential legal liabilities.
Most organizations surveyed, regardless of size, said they plan to invest more than $1 million in RAI in the coming year, with many larger organizations planning to invest much more (Exhibit 3). These investments include hiring RAI professionals, building or purchasing technical systems to comply with RAI practices, and engaging legal or professional services support related to RAI. There is a strong positive correlation between RAI maturity scores and levels of investment, suggesting that increased investment may help advance RAI maturity.
When looking at larger businesses (those with revenue of more than $10 billion), the separation between leaders who are setting higher AI trust aspirations and those taking a “wait and see” approach becomes clearer—with roughly equal shares investing at the highest and lowest levels. Companies that have already made such investments report significant benefits, including improved business efficiency and cost reductions (42 percent), increased consumer trust (34 percent), enhanced brand reputation (29 percent), and fewer AI incidents (22 percent). Around 55 percent of organizations are investing in reducing inaccuracy as part of their RAI road map, along with more than 50 percent investing in cybersecurity and regulatory compliance. We expect the rapid development of platforms, tools, and services—including trusted third-party evaluations—to fuel further investment acceleration.
Despite the progress, obstacles to implementing best-in-class RAI practices remain. When asked to cite leading barriers, respondents identified knowledge and training gaps (51 percent) and regulatory uncertainty (40 percent) as significant challenges. These findings indicate that organizations still lack clarity about how to implement the practices needed to realize the benefits described above.
However, a lack of clarity should not justify a passive approach to RAI. As enterprises continue to adopt AI across business functions, building new risk management and mitigation capabilities in parallel with bold AI road maps will be critical to ensure safe and trustworthy use. Organizations that invest in AI trust now will benefit later from faster adoption and greater resilience against risk as they push to capture the full potential of AI in their businesses.
For a deep dive on RAI in enterprises, see section 3.3 in the Stanford HAI AI Index Report 2025.
Angela Luget is a partner in McKinsey’s London office, Gabriel Morgan Asaftei is a partner in the New York office, and Roger Roberts is a partner in the Bay Area office, where Brittany Presten is an associate partner and Katherine Ottenbreit is a consultant.
The authors wish to thank Cayla Volandes, Cecile Prinsen, Maya Voelkel, and Natasha Maniar for their contributions to the research for this survey.