A new and faster machine learning flywheel for enterprises

by Medha Bankhwal and Roger Roberts

This post is a commentary on the MLCommons article “Perspective: Unlocking ML requires an ecosystem approach” by Peter Mattson, Aarush Selvan, David Kanter, Vijay Janapa Reddi, Roger Roberts, and Jacomo Corbo.

The world of artificial intelligence (AI) and machine learning (ML) is undergoing a sea change from science to engineering at scale. Over the past decade, the volume of AI research has skyrocketed as the cost to train and deploy commercial models has decreased. Between 2015 and 2021, the cost to train an image classification system fell by 64 percent, while training times improved by 94 percent.1

The emergence of foundation models—large-scale, deep learning models trained on massive, broad, unstructured data sets—has enabled entrepreneurs and business executives to see the possibility of true scale. Although previous releases (including OpenAI’s GPT series, CLIP, DALL-E 2, Google’s Imagen and PaLM, Midjourney, and Stable Diffusion) have helped pave the way, the release of ChatGPT represents a tectonic shift. By putting the power of machine learning in the public’s hands through an open-access interface, ChatGPT short-circuits the distance from production to distribution and consumption.

The traditional ML development flywheel is an iterative process for creating an ML solution, including problem identification, data transformation, model development, validation and deployment, and serving live operations (exhibit). With the adoption of foundation models, this flywheel is now spinning much faster because enterprises may be able to bypass model development (Step 3 in the exhibit). However, product differentiation will require fine-tuning the model to target a unique business problem using high-quality, proprietary data sets.

Exhibit: The ML development flywheel depicts the iterative process of creating an ML solution.
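To make the flywheel concrete, here is a minimal sketch of one turn of the loop in Python. Every stage function (frame_problem, transform_data, and so on) is a hypothetical placeholder rather than any real framework; the point is the shape of the cycle and where a foundation model can substitute for Step 3.

```python
# Illustrative sketch of the ML development flywheel as a single loop.
# Every stage function is a hypothetical placeholder, not a real framework.

def frame_problem():
    """Step 1: identify the business problem and define success criteria."""
    return {"objective": "reduce churn", "metric": "recall in top decile"}

def transform_data(problem):
    """Step 2: gather and transform data relevant to the problem."""
    return {"train": ["..."], "validation": ["..."]}

def develop_model(data, foundation_model=None):
    """Step 3: train from scratch, or fine-tune a foundation model to largely bypass this step."""
    if foundation_model is not None:
        return {"base": foundation_model, "weights": "fine-tuned on proprietary data"}
    return {"base": "custom", "weights": "trained from scratch"}

def validate_and_deploy(model, data):
    """Step 4: validate against the success criteria, then deploy."""
    return {"model": model, "passed": True}

def serve_live_operations(deployment):
    """Step 5: serve live traffic; monitoring feedback drives the next turn of the flywheel."""
    return {"drift_detected": False, "new_requirements": []}

# One turn of the flywheel, with a pretrained foundation model standing in for Step 3.
problem = frame_problem()
data = transform_data(problem)
model = develop_model(data, foundation_model="pretrained-foundation-model")
deployment = validate_and_deploy(model, data)
feedback = serve_live_operations(deployment)
```

Read this way, the faster flywheel does not remove any stage; it shortens Step 3 and shifts the differentiating effort toward problem framing and the proprietary data used for fine-tuning.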

The current state of AI/ML adoption

The potential value of AI across industries is massive. Moreover, global private-equity and venture-capital funding in AI companies has increased nearly fivefold, from $16 billion in 2015 to $79 billion in 2022.2 Even prior to the introduction and adoption of foundation models, businesses were rapidly expanding their AI capabilities: the average number of AI capabilities that businesses used doubled from 1.9 in 2018 to 3.8 in 2022.

Many enterprise AI adopters have already experimented with and scaled several use cases across the value chain. For instance, retail companies are employing AI to help achieve the following:

  • Hyperpersonalization (for example, summarizing customer feedback to improve product design)
  • Connection with omnichannel experiences (for example, building real-time dynamic consumer profiles to support seamless customer journeys across digital and physical channels)
  • End-to-end process optimization (for example, using past operational data to optimize sourcing, manufacturing, and fulfillment)
  • Convenience and flexibility for consumers (for example, automating self-service shopping experiences with cashierless checkouts)
  • Faster adoption of digital assets (for example, customizing digital-native products using large volumes of consumer data from foundation models)

Three imperatives for realizing the full potential of AI/ML

To capture the full potential of AI/ML, mainstream adopters will need to reframe their business problems, rebuild their enterprise architecture around ML, and revisit their talent strategy.

Reimagine challenges as ML problems

Not all business problems are ML problems. But some of the most persistent challenges that hold businesses back can be reframed as ML problems, which can in turn enable novel approaches to creating solutions. Recognizing how ML can uniquely address a business challenge requires framing the problem well and identifying timely, granular, representative, live data sources that can address it. Once a business has confirmed that a problem should be solved using ML, framing the problem3 involves defining the ideal outcome and objective, identifying the model’s output, and defining success criteria for all relevant stakeholders (including direct impact and potential externalities).
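One lightweight way to make this framing concrete is to write it down as a structured artifact before any modeling begins. The sketch below is a hypothetical Python checklist (not a standard template) capturing the elements named above: the ideal outcome, the model’s output, success criteria per stakeholder, and the data sources that will feed the model.

```python
from dataclasses import dataclass, field

@dataclass
class MLProblemFraming:
    """Hypothetical checklist capturing the framing steps described above."""
    business_problem: str   # the persistent challenge, stated in business terms
    ideal_outcome: str      # what success looks like for the business
    model_objective: str    # what the model actually optimizes
    model_output: str       # what the model produces for each input
    success_criteria: dict = field(default_factory=dict)  # per-stakeholder metrics, incl. externalities
    data_sources: list = field(default_factory=list)      # timely, granular, representative, live

framing = MLProblemFraming(
    business_problem="Customers abandon long support queues",
    ideal_outcome="Shorter resolution times without lower satisfaction",
    model_objective="Rank incoming tickets by predicted urgency",
    model_output="Urgency score per ticket",
    success_criteria={"operations": "median wait under 5 minutes",
                      "legal": "no use of protected attributes"},
    data_sources=["ticket logs (live)", "satisfaction surveys (weekly)"],
)
```

Writing these fields down early gives the multiple stakeholders described below a shared artifact to debate and revise as business needs change.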

Effective and inclusive problem framing requires the involvement of multiple stakeholders—including tech translators (such as product managers), data scientists, data engineers, ML engineers, human-centered designers, legal, privacy, and ethics leads, and domain experts—from the beginning of the ML development life cycle. Additionally, enterprises will have to build in flexibility to respond to potential changes in business needs and market forces.

Put ML at the core of enterprise architecture

Organizations need to reshape enterprise tech platforms to put ML at the core rather than treating ML as auxiliary to systems architectures built around rules-based logic. Given the current pace of innovation, systems just five years old could soon become “new legacy.” The emergence of foundation models has led to an evolution of the enterprise tech stack, requiring further investment in model selection; FMOps4 tools for deployment, fine-tuning, and inference; developer frameworks (such as LangChain and GPT Index); and the application layer. The choice of enterprise architecture depends on several factors and use cases. These range from organizational problems with fairly straightforward applications (for example, using large language models [LLMs] to improve workforce productivity) to big bets in AI, especially generative AI, that require more effort (for example, generating more-robust synthetic controls for clinical trials using data from different modalities in cases in which patient recruitment or retention is difficult).
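As a rough illustration of the application layer in this stack, the sketch below wires proprietary context into a prompt for a hosted model. It assumes nothing about a specific provider or framework: call_foundation_model is a hypothetical stand-in for whichever LLM endpoint an enterprise selects, and the keyword matcher is a deliberately naive placeholder for embedding-based retrieval.

```python
# Minimal sketch of an application layer over a hosted foundation model.
# call_foundation_model is a hypothetical stand-in for a provider's LLM endpoint;
# the keyword matcher is a naive placeholder for embedding-based retrieval.

def retrieve_context(query: str, documents: list[str]) -> str:
    """Pick documents sharing any keyword with the query (illustration only)."""
    words = query.lower().split()
    hits = [d for d in documents if any(w in d.lower() for w in words)]
    return "\n".join(hits[:3])

def call_foundation_model(prompt: str) -> str:
    """Hypothetical placeholder for an LLM completion call."""
    return f"[model response to a {len(prompt)}-character prompt]"

def answer(query: str, documents: list[str]) -> str:
    """Assemble proprietary context into a prompt and query the model."""
    context = retrieve_context(query, documents)
    prompt = f"Use only this context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return call_foundation_model(prompt)

docs = [
    "Q3 returns rose 12 percent in apparel.",
    "Fulfillment SLAs were missed in two regions.",
]
print(answer("apparel returns", docs))
```

A production stack would wrap this core loop with the FMOps concerns named above: fine-tuning on proprietary data, evaluation, guardrails, caching, and monitoring.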

Develop a human-centered talent strategy

Enterprises will need to build new workforce capabilities to capture these possibilities, guide technological adoption, and proactively shape how new AI tools augment employee creativity and productivity. As a result, leaders will have to make smart decisions on where and how to build, buy, or partner—and not just plug in new “black boxes.” For example, enterprises can use AI to boost productivity on certain tasks (such as using generative AI to take notes, synthesize arguments, generate marketing copy, and scan 10-Ks), but doing so requires a clear understanding of which current pain points can be automated or addressed using AI. With the use of generative AI across functions, we are likely to see demand for new skills and roles (such as technical expertise to build the application layer on top of foundation models and an understanding of intellectual-property laws pertaining to data use in generative models); new models of human–machine collaboration; and a new kind of team composition (for example, including ethics, legal, and data privacy leads in every room).


Capturing the opportunity from AI/ML is a marathon, not a sprint. The winners will be those who can effectively frame business problems as AI/ML problems, build a forward-looking enterprise architecture, and develop a human-centered talent strategy. By embracing these imperatives, enterprise adopters will be able to push the technology faster and further—closing in on unlocking its full potential.

Medha Bankhwal is a consultant in McKinsey’s Bay Area office, where Roger Roberts is a partner.

1 Artificial Intelligence Index report 2022, Stanford University Human-Centered Artificial Intelligence, March 2022.
2 PitchBook.
3 “Framing an ML problem,” Google Developers, updated March 3, 2023.
4 Foundation model operations.