Speeding up scientific research with AI: Interview with Anna Koivuniemi of Google DeepMind

Proteins are vital molecules that perform a wide range of functions essential for life. But since the late 1950s, even with the help of supercomputers, humans had only been able to determine the structures of around 200,000 proteins. Fast forward to today, and in just a few short years, Google DeepMind’s AI model has dramatically increased that number to 200 million predicted structures.

Anna Koivuniemi, who leads Google DeepMind’s Impact Accelerator, recently met with Matt Fitzpatrick, senior partner and global leader of QuantumBlack Labs, the R&D and software development group within QuantumBlack. They discussed how AI could benefit people, business, and humanity, how it can be scaled in the enterprise, and why every application needs to put safety and responsibility first.

Matt Fitzpatrick: Why don’t we start with just your personal story, Anna. What led you to the field of AI?

Anna Koivuniemi: Looking back, I should probably thank my math teacher, because my parents were very focused on the humanities. She was the one who made me realize that I had mathematical talent, and that math is interesting and can be done by girls. When I went to university, I earned a direct-admission place to study mathematics, but I was not yet convinced. So I attended a second university as well and studied finance and French while continuing in engineering, which over time became my career path.

But I’ve always loved math, which was the origin of my interest in AI. I worked for a tech start-up in Finland, where I’m from, and really enjoyed it. I then ended up at McKinsey, where I spent 17 years and had the opportunity to build the firm’s then-largest AI alliance. That really opened my eyes to the potential of AI, and how we can improve processes and create new things. I was then contacted by Google DeepMind and could not say no.

The growing and varied benefits of AI

Matt Fitzpatrick: AI is a topic that’s been in the news quite a bit, especially regarding productivity improvements. But more broadly, what are some of the concrete examples that you’ve seen that demonstrate the greatest AI benefits?

Anna Koivuniemi: I believe there are a lot of benefits. It was McKinsey that wrote the 2018 report identifying roughly 160 AI use cases supporting all 17 of the UN’s Sustainable Development Goals (SDGs), a number that has since tripled to more than 600. So you have yourselves demonstrated some very real benefits AI can offer.

I can give you two concrete examples, one personal one and one that’s part of my responsibilities. One of my teenage sons is studying Latin and Greek, and he needs to study it in Dutch, since we live in Amsterdam. I tried to create exercises to help him study, but it’s quite difficult to do this in two languages you haven’t yet mastered. So I started prompting AI to create exercises for him, and realized how well AI can support his learning.

In my professional life, AlphaFold, an AI model DeepMind built in 2020, regularly succeeds in predicting the 3D structures of proteins: how they fold and how they interact with RNA, DNA, and small molecules.

Now, you might wonder why I think it’s so inspiring, but every living thing in the world is made up of proteins. One of my colleagues said that without proteins, humans would just be a bag of water. They make our hearts beat, they make our blood circulate, and so on. Biologically, they are long chains of amino acids that fold into very complicated structures.

In previous decades, humans often needed three to five years to determine the structure of a single protein, and sometimes an entire lifetime. Even huge, expensive machines that consumed vast amounts of energy were only able to work out the structures of maybe 200,000 proteins. But with AI, our AlphaFold model has predicted the structures of 200 million proteins. We’re talking about condensing the equivalent of hundreds of millions, maybe billions, of years of scientific research into a few years.

My team has made these predictions available to the roughly two million researchers who use the structures in their work to advance our understanding of new drugs to cure disease, of why tomatoes are resistant to certain fungi, or of how to create plastic-eating enzymes.
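For readers curious how researchers access these predictions in practice: the public AlphaFold Protein Structure Database serves each predicted structure as a downloadable file keyed by its UniProt accession. A minimal sketch of building such a download link follows; the accession P69905 (human hemoglobin subunit alpha) is just an illustrative example, and the version number reflects the database release current at the time of writing.

```python
def alphafold_pdb_url(uniprot_accession: str, version: int = 4) -> str:
    """Return the AlphaFold database URL of the predicted PDB file
    for a given UniProt accession (e.g. "P69905")."""
    return (
        "https://alphafold.ebi.ac.uk/files/"
        f"AF-{uniprot_accession}-F1-model_v{version}.pdb"
    )

# Example: the predicted structure of human hemoglobin subunit alpha.
# Fetching this URL (e.g. with urllib) yields a standard PDB file
# readable by tools such as PyMOL or Biopython.
print(alphafold_pdb_url("P69905"))
```

Researchers typically feed the downloaded file straight into their existing structural-biology toolchains, which is part of why the database saw such rapid uptake.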

A boon to scientific research

Matt Fitzpatrick: A lot of AI talk centers on the near term, but if you take a ten-year view of life science innovation, what could healthcare look like in ten years with these sorts of breakthroughs?

Anna Koivuniemi: I believe we will realize many more advances, and I think it starts with small steps. Like many of us, Matt, I’m sure you’re using AI to boost productivity by making notes, summarising texts, etc.

Scientists use these tools that way too. If you are a researcher, you typically need to read a lot of material and figure out which of the thousands of papers published annually are relevant to your work. AI’s ability to enhance that productivity will enormously accelerate research and lead to new breakthroughs.

I’ve seen proof of AI’s ability to handle huge amounts of data, understand the patterns, fill in the gaps, and make predictions far faster and more precisely than humans can. Weather forecasting is an example: it is a brilliant human science, but weather models based on AI have quickly made huge advances, making complicated predictions in seconds that, just a year ago, took hours on powerful computers.

Thirdly, think about the problems humans haven’t yet been able to solve because they are too complex, too interdependent, or involve too much data. I have a strong hope that in five to ten years, AI models will help researchers be more productive, help them create better and faster simulations, and help solve entirely new problems.

How to scale AI in the organization

Matt Fitzpatrick: You’ve clearly articulated the potentials of the technology. But I think a lot of folks out there have been struggling to realize the near-term value in their organizations, and I think some of that is simply because the technology is new for many people. If you were giving advice to a chief analytics officer, a chief technologist, or even a CEO, what should they be doing to scale AI in their organization?

Anna Koivuniemi: It’s a very good question, Matt. I still talk to my old clients and see the challenges of making this change, and there is indeed a risk of users not realizing AI’s full potential. I would say to everybody: watch what your children do. When you see how they use AI and observe their curiosity and interest, you get a glimpse of what the future will bring.

When I talk to the CEOs I used to work with, I ask them questions like, “Do you know what your ambition is? Where do you want to be with AI? Is it only a productivity gain, or is there a value-add to your business that could create a unique competitive advantage? Do you know where that value is? Which part of the business? Which type of transition?”

I think it starts with a bold ambition and a clear vision of where that value resides, and then you need to address the practicalities. What data is required? Is it data you uniquely hold, or data you need to acquire for machine learning to help unlock that value?

What capabilities do you need in-house or with your partners, both in AI itself and in applying AI to that specific case? Thirdly, and very importantly, what responsibility and safety practices do you need in place to make sure you capture that value in a responsible way?


Lessons learned in driving a successful AI agenda

Matt Fitzpatrick: I completely agree on the bold ambition point. What lessons have you learned around mobilizing an organization and getting a group of people to drive this forward? What are the big things you’ve learned over the past couple of years?

Anna Koivuniemi: There are a few things which I think have enabled this success. One of them is the mission. DeepMind has always had a very clear mission, and our mission is to develop AI responsibly for the benefit of humanity. That trickles down to everything we do, including our values.

Secondly, every business depends on talent, and our talent is extraordinary. We have world-class researchers, not only in AI but in mathematics, neuroscience, and some of the areas where we’re deploying the technology. I think the quality of the people, especially in research, is crucial, as well as how we combine everything we do with everything we deliver.

Our teams are typically cross-functional. The AlphaFold team, for example, includes structural biologists, machine learning specialists, and product managers. So the project brings together all sorts of different talent. And when you have a good team of people, you can strike a good balance between top-down guidance and the team’s autonomy to do its work well.

Whenever I interact with teams or individuals, I continue to be very impressed by their level of ambition. They’re extremely ambitious, but also very thoughtful about how to do things in a responsible way. Responsibility is in the culture, but it’s almost in people’s genes as well. That makes collaboration very easy.

Achieving a balance between expertise and technology

Matt Fitzpatrick: That’s really interesting. I get a lot of questions about the kind of team composition for these sorts of things. When you’re pursuing any initiative, what is the balance of the team between engineers and industry experts? How do they work together? What is the interface between expertise and technology?

Anna Koivuniemi: I don’t think there’s a one-size-fits-all solution, and it actually changes over the product lifecycle. Let me give you an example. When we are exploring new products, there are basically three considerations. The first involves interviewing and listening to a wide range of experts on whether an AI application makes sense to build.

Secondly, when we start, we assess the responsibility and safety of the project. That’s another moment when we pull together people from different backgrounds and perspectives to understand how to do this in a responsible and safe way, and what we need to take into account to develop and deploy the technology properly.

Thirdly, when we’re thinking about deployment, there’s often another set of experts who come together to understand how people will access the product. For example, in the case of AlphaFold, my team did quite a lot of work to understand how structural biologists would be using the tool. They are not machine learning experts, so how do we explain the uncertainty in the model to them? What do we need to show them so that they understand the uncertainty of the predictions? How do we help them integrate the tool into their workflow? It also involves talking to other experts to guard against potential bad actors. So it depends on the product stage, and it really relies on the culture, which is why you need the best people when you do important things.

Embedding safety and responsibility throughout the process

Matt Fitzpatrick: Anna, you mentioned safety and responsibility more than once, and one question I get from many CEOs is, “If I want to adopt this technology and think about risks like hallucinations, how do I deploy AI responsibly?” Many of them are looking for guidance, and I’m curious to hear your take.

Anna Koivuniemi: I’m glad you asked that question, Matt, because I think we cannot talk enough about responsibility and safety, which go beyond hallucinations to considering the broader impact of this technology.

For us, it all starts with our operating principles, which are published online, and are very clear on where we cannot deploy this technology. For example, we never get involved in surveillance or weapons systems of any kind. Those operating principles guide our application developers and basically everything we do.

We also do cross-functional red-teaming, where we literally think about what could go wrong, and we proactively feed that into the product development process. Thirdly, we understand we’re not experts in everything. For AlphaFold, we engaged external experts, including Nobel laureates and leading biology researchers, to understand the potential risks and implications of putting this technology out there, and how we can minimize them.

Matt Fitzpatrick: To follow up on that, if you’re a CEO looking to get started with this technology and you’ve got a bunch of pilots going, what advice would you give to make sure they’re deployed responsibly?

Anna Koivuniemi: I think it’s a combination of things. Make sure you have internal capabilities for responsibility, something certain industries, like automotive or aviation, are really good at. But you also need to figure out who in the organization has the authority and capability to think about these responsibilities, and how to integrate that discussion of safety and responsibility at the executive level and throughout different parts of the process.

It’s not a natural discussion; those conversations usually revolve around targets, costs, and increasing value. And if you are doing something that adds real competitive value, you need to make sure you have thought through what may go wrong, and how you can prevent it from going wrong.

So that would be my guidance. Get the capabilities in-house, make sure that they are senior enough to have an impact, and embed this thinking on responsibility and safety during the development of AI solutions. And make sure that you monitor them throughout the lifecycle of the product.


Matt Fitzpatrick: That makes complete sense. Looking beyond the life sciences, is there another sector or area where you think AI will drive positive innovation?

Predicting natural disasters and adapting to climate change

Anna Koivuniemi: I think you could also restate the question to ask, “Where do we, as humanity, need innovation in order to solve our problems?” We absolutely need AI innovation to enable new energy sources, energy production, and even energy distribution. Optimizing electrical power flows is an old problem from the 1960s that requires a lot of simulation, where AI can help with both speed and accuracy.

I would also expect progress in materials science. There is a lot of activity in that space to help us identify new materials, which are very important for sustainability and biodiversity, so we can find safer, more sustainable sources of materials for things like batteries.

And if you look at weather forecasting, you could argue that it’s nice to know what to wear when you go out, but it’s more important to predict hurricanes better and save lives. The same holds true for predicting wildfires and floods, and for quickly warning people in their path to get to safety. AI can also do a lot to help us adapt to climate change by informing people and governments about potential risks and preventive actions.

The personal impact of artificial intelligence

Matt Fitzpatrick: What are the biggest changes an everyday person will experience from some of this technology?

Anna Koivuniemi: I hope it will be proactive, like reminding you to do something that needs your attention, such as preparing for an upcoming employee review cycle; things that free up your mental capacity to manage what needs doing. Or maybe warning you, “Hey, there’s a business issue here. You need to go and make sure that part of the business is running well, because it’s trending down.” I hope this proactiveness will be a big part of the game.

These are tools that will help us become more productive. This will hopefully increase contentment with our work, because we’ll be able to automate mundane tasks and receive proactive help from agents.

But we are still humans who need that human interaction, and the sun will still shine, and the bees will still buzz. I don’t think life will change. But I think the way we work, and the way we organize our private lives, will be helped by these tools.

Matt Fitzpatrick: Completely agree, and I think the education example you gave earlier is a really good one. Think about students today and the ability to summarise 50 books to write a paper. It’s pretty powerful, and if used correctly, your ability to learn multiplies exponentially.

Anna Koivuniemi: Absolutely. I don’t know how it is in the US, but in Europe, we are already experiencing a huge shortage of teachers, so how do we make sure that every single kid still gets the proper attention? I hope this technology will help us.

I know there have been a lot of efforts with different technologies, and I hope we get it right this time by providing personalised support, encouraging everyone to ask any questions, and giving them the type of answers and support they need.

Matt Fitzpatrick: Great. Finally, it’s obvious you are very passionate about this topic and excited for what’s to come. What are the things that you’re most excited about in the years ahead?

Anna Koivuniemi: What keeps me going are basically the human stories, how people see the benefits, which are sometimes difficult even for me to predict. The enthusiasm, belief, and the trust people express in this technology that helps them innovate continues to make me very, very excited.
