Food for thought
How can a human-in-the-loop approach build the trust that leaders need to realize their AI visions?
Our research shows that employees trust their employers more than other institutions to roll out gen AI responsibly. And when it comes to general trust, 71 percent of employees trust their employers, compared with just 40 percent who trust their governments.
Clara Shih: There are three recurring themes that I’m hearing when we talk to our customers about AI. The first one always is around trust. Our customers, regardless of whether they’re CIOs, chief revenue officers, or CMOs [chief marketing officers], recognize that there are new risks inherent with large language models. So we talk a lot about trust.
Once customers get comfortable from a trust perspective, the next questions are always, “What are the business outcomes that we’re going to drive? These generative AI costs add up, and so there had better be a there there. How are we driving efficiency? How are we cutting down the average resolution time of a customer service case, increasing deflection, increasing customer satisfaction, driving higher conversion rates, and reducing the sales cycle?”
Talking about business outcomes is very important. There are short-, medium-, and long-term types of impact here. So we have a lot of conversations with customers about each of those time horizons.
And then, the third one, invariably, involves the topic of people. “How do I bring my entire organization along? How do I get my leaders on board thinking this way?” And that is a tremendous change management challenge that we spend a lot of time on.
Lareina Yee: These are three fantastic questions, Clara. If I could unpack the first one, around trust: What advice do you have for executives evaluating different types of software and partnerships?
Clara Shih: So much of AI is around data. And so the top trust questions are, “Where is the data housed? How is it being used? Is it being learned by the model? Is there a risk that the data can leak out of the organization? Are the organization’s internal sharing rules being honored by the AI?” Those are all really important questions.
And on the consumer data side, we get asked, “How are we protecting our customers’ data? How do we make sure there are ethical guardrails, knowing that these models are trained on the corpus of data on the internet, with all the toxic, inaccurate information out there?” So we need guardrails to improve accuracy and relevance.
At the Edge podcast excerpt
Clara Shih
Head, Business AI, Meta
Former CEO, Salesforce AI
How will you, as a leader, think about deploying and iterating AI quickly while also establishing a comprehensive foundation of trust?