In this edition of Author Talks, McKinsey Global Publishing’s Raju Narisetti chats with Marietje Schaake, a nonresident fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centered AI, about her new book, The Tech Coup: How to Save Democracy from Silicon Valley (Princeton University Press, September 2024). Schaake, who was a member of the European Parliament from the Netherlands for a decade, explores how, in an ever-expanding and largely unregulated digital landscape, potential solutions can rebalance power and restore a democratic approach to governance. An edited version of the conversation follows, and you can also watch the full video at the end of this page.
Your book is a call to stop ‘digital arms.’
I wrote this book now because I felt like I was having the same conversation repeatedly. I get invited to speak with local governments, enforcement bodies, and regulators, and a lot of them say they want to do something about the outsize power of tech companies. Yet they often don’t know where to start. On the part of governments, I also sense fatigue or a lack of ideas on how to actually have a big impact.
The more I began to dive into the role of technology, going from a sector to a layer touching all aspects of our lives, the more I realized that the problem is not only disinformation or the mental health of teenagers, or antitrust, or cybersecurity. There’s also a real challenge with who gets to decide and, in essence, who has power over our digital lives.
What is the ‘accountability gap’ you see?
Over the past few decades, there has been a tendency by governments, especially in the US, to choose a hands-off approach to tech regulation, to trust market forces, and to trust the technology itself. The promise that putting people online, giving them mobile phones and access to social media, would have a “magical impact” was a long-held belief.
As a result, there’s a lack of guardrails, regulation, laws, insight, and agency, and that leads to a lack of accountability. Very often, companies can get away with a significant amount of risk taking or harm because there are no rules by which they must abide, so there is impunity for the way in which they handle things. There is also a lack of redress for people and a lack of liability on the part of any company for what is happening to our societies.
This isn’t a book against technology though, is it?
The Tech Coup is a provocative title. It really talks about this power shift away from the public and more into private hands. It talks about an accountability gap. But it is not a book that is against technology. I, myself, am curious, excited, and interested in technology.
The problem is that too often companies, big and small, well-known ones as well as ones without name recognition among consumers, have amassed a great deal of power. That can be through the products they develop. Think about spyware, for example, which by default violates people’s privacy by hacking into their devices. Even the most powerful people in this world cannot evade this kind of technology. Spyware has been used against political leaders, opposition figures, journalists, and judges. It is a very, very aggressive technology made by small companies, and it really needs addressing through regulation. For too long, it has simply been able to grow and proliferate all over the world.
The idea that this is a statement about any technology is, of course, misguided. It’s about specific technologies that hurt our democracy, and the broader trend that companies’ commercial interests and power have simply gotten too big and have gotten out of hand without countervailing powers. These are the types of topics that I touch on in the book.
How should regulators approach the task of governing tech?
There is a sense of urgency behind the book. Yet readers will not come away feeling depressed and disempowered. There are so many things that can be done, and should be done, to redress this imbalance in favor of greater democratic governance and oversight. I focus on this in the solutions segment of the book as well.
Policy often responds to a changed reality. It is very hard for policies to anticipate what is coming. Yet that is what we need when it comes to emerging technologies. AI is the latest wave to raise new questions of governance and regulation, and it won’t be the last.
It is key to start thinking of new ways to anticipate the next wave of an emerging technology. Returning to core principles—nondiscrimination, antitrust, access to information, and transparency—is needed, as well as putting those principles at the core of a regulatory effort and truly empowering the enforcement bodies, the regulatory bodies, to uphold them.
Basically, whether it’s AI or the next yet-unknown wave of emerging technology, regulators should be able to act without having to wait for very technology-specific laws to be adopted. Waiting can no longer be the default response to changing realities. It will become harder and harder to respond to emerging technologies because they move very fast, while the regulatory process, by comparison, is too slow. That discrepancy will lead to a growing gap between the realities unfolding before our eyes and the pace at which policies can catch up. Therefore, it’s also time to think about innovating governance and democratic policies simultaneously.
What is the ‘precautionary principle’ of regulation that you recommend?
When we look at bringing new AI applications to market, the question is whether we can lengthen the time between an innovation being known, discovered, or found and its being brought to market. Right now, we see an enormous push, driven of course by competition among AI companies, in favor of putting new applications out into the market without much friction, without much pausing to assess the unintended consequences.
When I was thinking about solutions for this race to market, I thought back to a concept that we know in the EU, which is called the “precautionary principle.” The idea behind it is actually part of the EU Treaty, so it’s quite well anchored in law. The concept is that there may be some innovations, like GMOs [genetically modified organisms] for example, where the invention itself offers a solution yet has unknown consequences for the larger ecosystem.

For example, manipulating a crop may be great for pesticide use or resistance to certain diseases. Yet we don’t know what its spread in nature will do to biodiversity, or whether it will take over entire crops, as we’ve seen in the past.
The idea is to build a pause between an innovation being discovered and its release into the wild. While I’m not suggesting applying the “precautionary principle 101” to AI and other emerging technologies, we can still learn from it. Before an emerging technology enters the market, we may need to look more closely at whether we know enough about it, about the public interest, and about the risks and unintended consequences for society. That contrasts with the current climate, in which society as a whole learns about the risks in an unconstrained way. That’s how we could seek solutions within concepts we already know and apply them to the way we treat tech.
You also think the ‘too big to fail’ principle can work with tech.
When we look at big tech companies and how much we depend on their services working well, some of them have really become too big to fail, and in that sense they are analogous to financial-services companies. Consider cloud computing and cybersecurity companies, for example.
If we are able to put more checks, oversight, and responsibility on these companies, hopefully there will be fewer incidents and breaches of the kind we saw recently with the CrowdStrike outage, which happened after I wrote the book: airports down across the world and databases no longer working. We had an incident in the Netherlands where the communication systems used by the defense sector went down. It caused a major systemic crisis.
Therefore, the overreliance on these tech services working well without having the proper obligations—the proper assessments of whether they are actually up to the task—leads to vulnerabilities in the system. We can borrow from the notion of too big to fail and apply it to tech companies that are critical nodes in our society and really ensure that there’s clarity on their resilience, responsibility, and functionality.
You also want to see a government ‘technology expert service’ created.
When I served as a member of the European Parliament for ten years, there was not a single legislative proposal, or regulatory idea, or even an amendment that would be proposed without going through the legal services. At some point, independent legal experts would look at these proposals through a legal lens and give feedback.
How could the draft legislation be improved so that it would be more legally sound? Were there unintended gaps that could be abused by those who wanted to object to the law? The legal service was there: impartial, independent, and available to any member of the European Parliament, no matter which political party [proposed the legislation].
There is a lack of technological expertise throughout legislative and regulatory bodies, and it makes legislators vulnerable to lobbying. The tech lobby is incredibly powerful. Tech is complicated. Sometimes, easy frames are pushed by these tech lobbyists. Similar to the legal services mentioned, the US Congress or the Parliament in Brussels, for example, would benefit a great deal from having an expert team informing lawmakers about how tech works. This would make the work of lawmakers easier and more informed. If they want to solve a problem, how might they go about it? Such a team could offer an independent but expert view as an alternative to the lobbying narratives that are now so dominant in informing lawmakers.
Do you see any role for a data or privacy opt-in economy?
Well, there’s a lot of talk about the importance of data, not only when it comes to protection of privacy but also as a key ingredient for training large language models for AI applications. So data remains an incredibly central part of both the problem and the solution. It is part of the problem in relation to surveillance capitalism and the outsize power of tech companies because they “hoover up” so much data. It is also considered part of the solution—better data protection, and also systems like data commons platforms and providing more agency for individuals over data.
It is important to explore every possibility to make people more independent and empowered in making decisions about their digital lives. Yet I worry about the economic lens through which data is often viewed when people speak about solutions such as data stewardship. That’s because, again, the power asymmetry between the individual and anyone who would have to manage this data for an entire group of people is simply too great. As we’ve seen with a lot of opt-in or opt-out choices, it’s hard for people to be fully informed about the choices before them. Companies are really good at designing interfaces so that people click “yes” to giving away a significant amount of their data.
The question is, how much effort can we expect individual internet users to put into navigating the use of their data, especially when there are economic incentives [for tech companies]? Data privacy has become a sort of luxury good instead of a fundamental right. An opt-in economy is a good direction to explore, but, again, we must be mindful of this power relationship in terms of information and economic incentives. From that point of view, a lot of questions remain unanswered.
What is your view of regulation when it comes to large tech platforms such as Wikipedia?
When we look at regulating tech, we also see, as the EU has recently done, that scale is a big criterion for imposing more obligations on the biggest platforms, the biggest tech players. That often makes a lot of sense.
Another articulation of scale is, of course, capital. In that sense, it would be easy to distinguish between for-profit companies that are using data and scaling in the interest of shareholder value versus not-for-profit and much more community-led ones like Wikipedia that are updating information that everyone is using globally.
It is important to appreciate that there might be unintended consequences for a major platform like Wikipedia. Also, if you compare the incentives of WhatsApp, for example, which is owned by Facebook, with those of Signal, which is also a not-for-profit, you can see big differences in the incentives on the part of the platforms or services. That should count for something.