By Jannik Podlesny, Stephen Simpson, and Henning Soller
Central processing units (CPUs) have long been the primary computation engine in advanced analytics. Recently, however, organizations have begun using graphics processing units (GPUs)—computation chips designed to perform rapid calculations, primarily for video rendering—in a broader range of applications. By adopting open-source advanced-analytics tools that have been extended to support GPUs, companies can gain substantially more computation power without significant investments in software upgrades or new talent.
GPUs are especially beneficial for the vector calculations common in data science, particularly machine learning. The combination of larger data sets, more unstructured data, and more sophisticated statistical analysis has made computation power even more critical. Genome sequencing, for example, used to require days on CPUs but takes only minutes on GPUs. The massive parallelism and processing speed of GPUs, especially for repetitive tasks such as combinatorial problems, together with their decreasing hardware costs, make their business value hard to ignore. That value, moreover, extends across industries, which can benefit from GPUs’ capacity to accelerate existing use cases and to enable novel applications (Exhibit 1).
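To make the shift concrete, the sketch below runs the same vector calculation on a CPU with NumPy and on a GPU with CuPy, an open-source, NumPy-compatible GPU array library. It assumes a CUDA-capable GPU is available; the array size and arithmetic are purely illustrative.

```python
import numpy as np
import cupy as cp  # open-source GPU drop-in for much of the NumPy API

n = 50_000_000

# CPU baseline: element-wise vector math on CPU cores.
x_cpu = np.random.random(n)
y_cpu = np.sqrt(x_cpu) * 2.0 + 1.0

# GPU version: the identical expression, executed by thousands of threads.
x_gpu = cp.asarray(x_cpu)          # one host-to-device copy
y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0
cp.cuda.Stream.null.synchronize()  # ensure the GPU has finished

# Data returns to host memory only when explicitly requested.
assert np.allclose(y_cpu, cp.asnumpy(y_gpu))
```

Because the GPU code mirrors the CPU code almost line for line, teams can trial the approach without rewriting their analytics stack.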
Unlocking value from GPUs, however, isn’t always as simple as swapping out CPUs for them. Companies need to understand the technology and be deliberate about the use cases and applications. They can most effectively integrate GPUs into their operations in three phases: first, no-regrets moves that enhance productivity and performance; second, using GPUs to improve infrastructure performance and standardize the data architecture; and third, more sophisticated applications of GPUs, such as innovations in machine learning and Software 2.0.
Productivity and performance improvement
GPUs’ most significant benefits come in complex, distributed environments, where their parallel application enables orders-of-magnitude improvements in speed. But organizations can also use this GPU capacity to experiment with ways to increase the productivity of individual developers and individual projects (Exhibit 2).
Companies have begun using GPUs to significantly accelerate the iterative development involved in machine learning; cycle times can be five to 50 times faster as a result. Highly iterative tasks, such as training machine learning models, can likewise be completed more quickly and accurately, thanks to the additional training cycles that GPUs make possible in the same or less time.
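The pattern behind those faster cycle times is often a one-line device switch. The sketch below, written with the open-source PyTorch library, moves a toy model and its training batch onto the GPU; the model architecture and data are placeholders, not a reference implementation.

```python
import torch
import torch.nn as nn

# Fall back to the CPU when no CUDA-capable GPU is present.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic batch; a real project would stream training data each epoch.
X = torch.randn(4096, 128, device=device)
y = torch.randn(4096, 1, device=device)

for epoch in range(100):  # extra GPU throughput buys more such cycles
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```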
One example of a project that exploits the GPU architecture is predicting the spread of a pandemic such as the flu. GPUs allow for scalable semantic parsing of search queries for high-quality natural-language interpretation, which traditional models cannot easily achieve. Because searches for items related to flu symptoms correlate with actual infections, the model can comb search data and compare numerous models of how the pandemic spreads to identify the one that best fits. Massive parallel processing is required to tune the parameters of the underlying transport model.
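A hedged sketch of what that parameter tuning can look like: thousands of candidate settings of a simple SIR (susceptible-infected-recovered) transport model are stepped forward in lockstep as one batched GPU computation, again using CuPy. The parameter ranges, step counts, and fitting target here are invented for illustration.

```python
import cupy as cp

n_params = 100_000  # candidate parameter settings, simulated in parallel
beta = cp.random.uniform(0.1, 0.5, n_params)    # candidate infection rates
gamma = cp.random.uniform(0.05, 0.2, n_params)  # candidate recovery rates

S = cp.full(n_params, 0.99)  # susceptible population share per candidate
I = cp.full(n_params, 0.01)  # infected population share per candidate
dt = 0.1

for _ in range(1_000):  # every candidate model advances simultaneously
    new_infections = beta * S * I * dt
    new_recoveries = gamma * I * dt
    S -= new_infections
    I += new_infections - new_recoveries

# Select the candidate whose simulated curve best matches the observed
# search-trend signal (a made-up target level stands in for real data).
best = int(cp.argmin(cp.abs(I - 0.02)))
print(float(beta[best]), float(gamma[best]))
```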
Infrastructure performance improvement
Present-day data lakes, machine learning, and model training and production environments tend to be collections of technologies and frameworks from different eras. This mix often constrains system-level performance, especially when speed is important. With GPUs, however, organizations can lock data sets into core GPU memory as a readily accessible, centralized resource along the entire data pipeline, even across multiple data frameworks. This accessibility is key to improving infrastructure performance.
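As a sketch of what locking data into GPU memory can look like in practice, the snippet below loads a table once with cuDF, the GPU dataframe library from the open-source RAPIDS project, and keeps it device-resident for later pipeline stages. The file and column names are hypothetical.

```python
import cudf  # GPU dataframe library from the open-source RAPIDS project

# The parsed table lives in GPU memory from this point on.
transactions = cudf.read_csv("transactions.csv")

# Downstream stages reuse the same device-resident table; nothing is
# copied back to host RAM between steps.
cleaned = transactions.dropna(subset=["amount"])
daily_totals = cleaned.groupby("date")["amount"].sum()
```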
Decision makers can use wider applications of GPUs to reimagine their advanced analytics infrastructure. Such an infrastructure would offer a more uniform data-processing pipeline that is unencumbered by traditional business intelligence and data architecture. Indeed, the main productivity gains will come from avoiding memory-copying operations between disparate data frameworks. Organizations may even run the entire data-processing pipeline in the shared GPU memory—from ingestion to production. Open-source initiatives may serve as a model.
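One concrete mechanism for avoiding those memory-copying operations is the DLPack protocol, which lets GPU frameworks share a single device buffer. The sketch below hands a CuPy array to PyTorch without moving any bytes; it assumes recent versions of both libraries.

```python
import cupy as cp
import torch
from torch.utils.dlpack import from_dlpack

features = cp.random.random((10_000, 64), dtype=cp.float32)

# Zero-copy hand-off: the PyTorch tensor views the CuPy allocation.
tensor = from_dlpack(features.toDlpack())

tensor += 1.0  # mutates the shared GPU buffer in place
assert float(features[0, 0]) == float(tensor[0, 0])  # both see the update
```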
Innovations in machine learning and Software 2.0
The clearest application of GPUs is, of course, in deep learning, a subset of machine learning that structures algorithms into artificial neural networks that can learn to make decisions without human involvement. These networks are scalable, versatile, and reusable, and they are now being extended to automatically generate code based on core business data. This shift is called Software 2.0.
Deep learning models are most commonly applied to unstructured data, which account for about 70 percent of all data and are harder to access. However, the models can also be effective with tabular data, which make up the bulk of conventional, “usable” enterprise data. The most significant challenge of using deep learning to process tabular financial data lies in pre-processing the data sets: addressing missing values, identifying potentially skewed data, and standard scaling (rescaling each value by subtracting the mean and dividing by the standard deviation of the set). GPUs can make it easier to automate such data augmentation, especially in industries such as retail and e-commerce.
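A minimal sketch of those pre-processing steps on a GPU, again using cuDF; the file and column names, the imputation choice, and the skew threshold are all assumptions for illustration.

```python
import cudf

df = cudf.read_csv("financial_records.csv")  # hypothetical tabular data

# 1. Missing values: impute with the column median.
df["balance"] = df["balance"].fillna(df["balance"].median())

# 2. Skewed data: flag and transform heavily skewed columns.
if abs(float(df["balance"].skew())) > 1.0:
    df["balance"] = df["balance"].clip(lower=0) ** 0.5  # e.g., square root

# 3. Standard scaling: subtract the mean, divide by the standard deviation.
df["balance"] = (df["balance"] - df["balance"].mean()) / df["balance"].std()
```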
Other applications of GPUs in machine learning include data profiling, dependency and inference analysis,1 and data anonymization. GPUs’ processing power brings significant value here, too: experience shows that simple grouping and aggregation activities can run 426 times faster than the same activities without GPU support.
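The speedup claim rests on a very common pattern, sketched below: the same groupby-and-aggregate runs on the CPU with pandas and on the GPU with cuDF. The data are synthetic, and the actual ratio will depend on data size and hardware rather than matching any single published figure.

```python
import numpy as np
import pandas as pd
import cudf

n = 20_000_000
pdf = pd.DataFrame({
    "key": np.random.randint(0, 1_000, n),
    "value": np.random.random(n),
})

cpu_result = pdf.groupby("key")["value"].agg(["sum", "mean"])  # CPU path

gdf = cudf.from_pandas(pdf)                                    # to GPU once
gpu_result = gdf.groupby("key")["value"].agg(["sum", "mean"])  # GPU path
```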
Getting started
To get started, decision makers in business and technology functions can assess the throughput of their current IT stack and prioritize products, services, and offerings that can benefit from lightning-fast response times for complex queries or expanded data-processing capacity. Those applications can range from executing simple large-scale aggregation queries for routine financial reports to anonymizing large data sets. Experiments with a few of those use cases can help organizations and teams learn, gain experience, and evaluate other ways that GPUs can support their work.
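For such experiments, even a small timing harness goes a long way. The sketch below is generic scaffolding, not a prescribed tool: it wraps one candidate query and reports the best wall-clock time, so a CPU baseline and a GPU port can be compared on equal terms.

```python
import time

def benchmark(label, fn, repeats=3):
    """Run fn several times and report the best wall-clock time."""
    best = min(_time_once(fn) for _ in range(repeats))
    print(f"{label}: {best:.3f}s")
    return best

def _time_once(fn):
    start = time.perf_counter()
    fn()  # for GPU work, fn should synchronize the device before returning
    return time.perf_counter() - start

# Hypothetical usage with the dataframes from the earlier sketches:
# benchmark("CPU aggregation", lambda: pdf.groupby("key")["value"].sum())
# benchmark("GPU aggregation", lambda: gdf.groupby("key")["value"].sum())
```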
The technology and understanding exist to start evaluating these approaches today; the greater challenge is positioning them inside the organization and validating their potential benefits. We will discuss these three phases in more detail in future posts.
Jannik Podlesny is a specialist in McKinsey’s Berlin office; Stephen Simpson, based in London, is a senior principal at QuantumBlack, a McKinsey company; and Henning Soller is a partner in the Frankfurt office.
1 Dependency analysis is used to understand and describe the structure of attribute values and to identify connections between data records.