Responsible AI: Confronting Risks and Guiding Application
Two new pieces by McKinsey Digital explore the risks to consider with AI, and how leaders can ensure their organizations are using AI responsibly.
McKinsey Digital recently published two pieces that expand on the Firm's research and thinking around AI.
Confronting the Risks of AI, published in the Quarterly, examines a range of risks to consider when applying artificial intelligence, from data difficulties, technology troubles, and security snags to unstable models and human-machine interaction issues. The piece also shares how organizations can mitigate the risks of applying AI and advanced analytics by embracing three principles: clarity, breadth, and nuance.
Leading your organization to responsible AI explores how leaders can ensure their organizations build and deploy artificial intelligence responsibly by translating company values into practice. Beyond values, the piece also looks at five areas that demand CEO leadership: appropriate data acquisition, dataset suitability, fairness of AI outputs, regulatory compliance and engagement, and explainability.
If you’re interested in learning more about the ethics of artificial intelligence, check out our recent alumni webcast with Michael Chui from MGI.
* * *
Read more from McKinsey Digital.