As gen AI advances, regulators—and risk functions—rush to keep pace


The rapid advancement of generative AI (gen AI) has regulators around the world racing to understand, control, and guarantee the safety of the technology, all while preserving its potential benefits. Across industries, gen AI adoption has presented a new challenge for risk and compliance functions: how to capture the benefits of the new technology while complying with an evolving and uneven regulatory framework.

As governments and regulators try to define what such a control environment should look like, the developing approaches are fragmented and often misaligned, making it difficult for organizations to navigate and causing substantial uncertainty.

In this article, we explain the risks of AI and gen AI and why the technology has drawn regulatory scrutiny. We also offer a strategic road map to help risk functions navigate the uneven and changing rule-making landscape, which focuses not only on gen AI but on artificial intelligence more broadly.

Why does gen AI need regulation?

AI’s breakthrough advancement, gen AI, has quickly captured the interest of the public, with ChatGPT becoming one of the fastest-growing platforms ever, reaching one million users in just five days. The acceleration comes as no surprise given the wide range of gen AI use cases, which promise increased productivity, expedited access to knowledge, and an expected total economic impact of $2.6 trillion to $4.4 trillion annually (“The economic potential of generative AI: The next productivity frontier,” McKinsey, June 14, 2023).

There is, however, an economic incentive to get AI and gen AI adoption right. Companies developing these systems may face consequences if the platforms they release are not sufficiently robust, and a misstep can be costly: major gen AI companies, for example, have lost significant market value when their platforms were found to hallucinate (generate false or illogical information).

The proliferation of gen AI has increased the visibility of risks. Key gen AI concerns include how the technology’s models and systems are developed and how the technology is used.

Generally, there are concerns about a potential lack of transparency in the functioning of gen AI systems and in the data used to train them, as well as about bias and fairness, potential intellectual property infringements, possible privacy violations, third-party risk, and security.

Add disinformation to these concerns, including erroneous or manipulated output and harmful or malicious content, and it is no wonder regulators are seeking to mitigate potential harms. At the same time, regulators want to establish legal certainty for companies engaged in the development or use of gen AI, so that rule making encourages innovation rather than stifling it with fear of unknown repercussions.

A further goal is to establish harmonized international regulatory standards that would facilitate international trade and data transfers. On one point, consensus has already been reached: the gen AI development community itself has been at the forefront of advocating for some regulatory control over the technology’s development as soon as possible. The question at hand is not whether to regulate but how.

The current international regulatory landscape for AI

While no country has passed comprehensive AI or gen AI regulation to date, leading legislative efforts include those in Brazil, China, the European Union, Singapore, South Korea, and the United States. The approaches taken by the different countries vary from broad AI regulation supported by existing data protection and cybersecurity regulations (the European Union and South Korea) to sector-specific laws (the United States) and more general principles or guidelines-based approaches (Brazil, Singapore, and the United States). Each approach has its own benefits and drawbacks, and some markets will move from principles-based guidelines to strict legislation over time (Exhibit 1).

Exhibit 1: Regulations related to AI governance vary around the world.

While the approaches vary, common themes in the regulatory landscape have emerged globally:

  • Transparency. Regulators are seeking traceability and clarity of AI output. Their goal is to ensure that users are informed when they engage with any AI system and to provide them with information about their rights and about the capabilities and limitations of the system.
  • Human agency and oversight. Ideally, AI systems should be developed and used as tools that serve people, uphold human dignity and personal autonomy, and function in a way that can be appropriately controlled and overseen by humans.
  • Accountability. Regulators want to see mechanisms that ensure awareness of responsibilities, accountability, and potential redress regarding AI systems. In practice, they are seeking top management buy-in, organization-wide education, and awareness of individual responsibility.
  • Technical robustness and safety. Rule makers are seeking to minimize unintended and unexpected harm by ensuring that AI systems are robust, meaning they operate as expected, remain stable, and can rectify user errors. They should have fallback solutions and remediation to address any failures to meet these criteria, and they should be resilient against attempts to manipulate the system by malicious third parties.
  • Diversity, nondiscrimination, and fairness. Another goal for regulators is to ensure that AI systems are free of bias and that the output does not result in discrimination or unfair treatment of people.
  • Privacy and data governance. Regulators want to see AI systems developed and used in line with existing privacy and data protection rules, processing data that meets high standards of quality and integrity.
  • Social and environmental well-being. There is a strong desire to ensure that all AI is sustainable, environmentally friendly (for instance, in its energy use), and beneficial to all people, with ongoing monitoring and assessing of the long-term effects on individuals, society, and democracy.

Despite some commonality in the guiding principles of AI, the implementation and exact wording vary by regulator and region. Many rules are still new and, thus, prone to frequent updates (Exhibit 2). This makes it challenging for organizations to navigate regulations while planning long-term AI strategies.

Exhibit 2: AI governance–related policy and regulatory efforts are under way globally.

What does this mean for organizations?

Organizations may be tempted to wait and see what AI regulations emerge, but the time to act is now: those that do not act swiftly may face large legal, reputational, organizational, and financial risks. Several markets have already moved. Italy, for example, temporarily banned ChatGPT over privacy concerns, and gen AI providers face copyright infringement lawsuits brought by multiple organizations and individuals, as well as defamation claims.

More speed bumps are likely. As the negative effects of AI become more widely known and publicized, public concern grows, and with it distrust of the companies creating or using the technology.

A misstep at this stage could also be costly. Under the AI regulation proposed by the European Union, for example, organizations could face fines of up to 7 percent of annual global revenues. Another threat is financial loss from an erosion of customer or investor trust, which could translate into a lower stock price, loss of customers, or slower customer acquisition. The incentive to move fast is heightened by the fact that if the right governance and organizational models for AI are not built early, regulatory changes, data breaches, or cybersecurity incidents may force remediation later. Fixing a system after the fact can be both expensive and difficult to implement consistently across the organization.

The exact future of legal obligations is still unclear and may differ across geographies and with the specific role AI plays within the value chain. Still, there are some no-regret moves organizations can implement today to get ahead of looming legal changes.

These preemptive actions can be grouped into four key areas that build on existing data protection, privacy, and cybersecurity efforts, with which they share a great deal of common ground:

Transparency. Create a taxonomy and inventory of models, classify them in accordance with applicable regulation, and record all usage across the organization in a central repository that is clear to those inside and outside the organization. For each model, create detailed documentation of how it was developed, how it functions, what risks it carries, what controls apply, and how it is intended to be used, covering both internal and external usage.
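
To make this concrete, here is a minimal sketch of what one entry in such a central model inventory might look like in Python. The field names, the risk tiers (loosely modeled on the EU AI Act’s categories), and the example values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers, loosely modeled on the EU AI Act's categories."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class ModelInventoryEntry:
    """One record in a central, organization-wide model repository (hypothetical)."""
    model_id: str
    name: str
    business_owner: str                          # accountable person or team
    risk_tier: RiskTier                          # classification per applicable regulation
    intended_use: str                            # documented purpose and limitations
    training_data_sources: list[str] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)
    documentation_url: str = ""                  # link to model card / development docs


# Example: registering a customer-facing gen AI assistant in the repository.
inventory: dict[str, ModelInventoryEntry] = {}
entry = ModelInventoryEntry(
    model_id="genai-chat-001",
    name="Customer support assistant",
    business_owner="customer-operations",
    risk_tier=RiskTier.LIMITED,
    intended_use="Draft responses to customer queries; human review required.",
    training_data_sources=["vendor foundation model", "internal FAQ corpus"],
    known_risks=["hallucination", "PII leakage"],
    controls=["output filter", "human-in-the-loop review"],
    documentation_url="https://example.internal/model-cards/genai-chat-001",
)
inventory[entry.model_id] = entry
```

Keeping every model in one queryable structure of this kind is what makes regulation-aligned classification and organization-wide reporting practical.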

Governance. Implement a governance structure for AI and gen AI that ensures sufficient oversight, authority, and accountability both within the organization and with third parties and regulators. This approach should include a definition of all roles and responsibilities in AI and gen AI management and the development of an incident management plan to address any issues that may arise from AI and gen AI use. The governance structure should be robust enough to withstand changes in personnel and time but also agile enough to adapt to evolving technology, business priorities, and regulatory requirements.

Data, model, and technology management. AI and gen AI both require robust data, model, and technology management:

  • Data management. Data is the foundation of all AI and gen AI models, and the quality of a model’s output mirrors the quality of the data used to train and feed it. Proper and reliable data management includes awareness of data sources, data classification, data quality and lineage, intellectual property, and privacy management.
  • Model management. Establish robust principles and guardrails for AI and gen AI development and use, designed to minimize the organization’s risks and to ensure that all AI and gen AI models uphold fairness and bias controls, function properly, remain transparent, and enable human oversight. Train the entire organization on the proper use and development of AI and gen AI so that risks are minimized. Extend the organization’s risk taxonomy and risk framework to include the risks associated with gen AI, establish roles and responsibilities for risk management, and put in place risk assessments and controls, with proper testing and monitoring mechanisms to detect and resolve AI and gen AI risks (a simple pre-deployment gate of this kind is sketched after this list). Both data and model management require agile and iterative processes and should not be treated as simple tick-the-box exercises at the beginning of development projects.
  • Cybersecurity and technology management. Establish strong cybersecurity and technology controls, including access controls, firewalls, logging, and monitoring, to ensure a secure technology environment in which unauthorized access or misuse is prevented and potential incidents are identified early.
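
As a minimal illustration of the testing-and-monitoring controls described above, the sketch below shows a hypothetical pre-deployment gate that blocks a model release until required evidence is in place. The checklist fields and the gate logic are assumptions for illustration, not a standard.

```python
from dataclasses import dataclass


@dataclass
class ReleaseChecklist:
    """Hypothetical evidence gathered before a model is allowed to ship."""
    bias_evaluation_passed: bool      # fairness / nondiscrimination testing done
    data_sources_classified: bool     # lineage and classification recorded
    documentation_complete: bool      # model card and intended-use docs exist
    monitoring_configured: bool       # logging and alerting hooked up
    fallback_defined: bool            # remediation path if the model misbehaves


def deployment_gate(checklist: ReleaseChecklist) -> list[str]:
    """Return the list of unmet controls; an empty list means clear to deploy."""
    failures = []
    for control, passed in vars(checklist).items():
        if not passed:
            failures.append(control)
    return failures


checklist = ReleaseChecklist(
    bias_evaluation_passed=True,
    data_sources_classified=True,
    documentation_complete=False,   # blocks deployment until docs are written
    monitoring_configured=True,
    fallback_defined=True,
)
blockers = deployment_gate(checklist)
if blockers:
    print("Deployment blocked; unmet controls:", blockers)
```

Automating the gate keeps the control from degenerating into the tick-the-box exercise the list warns against: a release simply cannot proceed while evidence is missing.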

Individual rights. Educate users: make them aware that they are interacting with an AI system and provide clear instructions for use. Establish a point of contact that provides transparency and enables users to exercise their rights, such as accessing their data, understanding how models work, and opting out. Finally, take a customer-centric approach to designing and using AI, one that considers the ethical implications of the data used and its potential impact on customers. Since not everything legal is necessarily ethical, it is important to prioritize the ethical considerations of AI usage.
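
A minimal sketch of how the user-facing side of these obligations might look in code follows; the disclosure text, the opt-out set, and the rights-request log are all hypothetical names introduced for illustration, not requirements from any regulation.

```python
from datetime import datetime, timezone

AI_DISCLOSURE = (
    "You are interacting with an AI system. Its answers may be inaccurate. "
    "Contact ai-rights@example.com to access your data or opt out."
)

# Hypothetical log of user rights requests (access, explanation, opt-out).
rights_requests: list[dict] = []


def answer_with_disclosure(user_id: str, model_answer: str,
                           opted_out: set[str]) -> str:
    """Prepend the AI disclosure and respect users who have opted out."""
    if user_id in opted_out:
        return "You have opted out of AI assistance; routing to a human agent."
    return f"{AI_DISCLOSURE}\n\n{model_answer}"


def record_rights_request(user_id: str, request_type: str) -> None:
    """Log a data-access, explanation, or opt-out request for follow-up."""
    rights_requests.append({
        "user_id": user_id,
        "type": request_type,          # e.g., "access", "explain", "opt_out"
        "received": datetime.now(timezone.utc).isoformat(),
    })


record_rights_request("user-42", "opt_out")
print(answer_with_disclosure("user-7", "Here is a draft reply...", {"user-42"}))
```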


AI and gen AI will continue to have a significant impact on many organizations, whether they are providers of AI models or users of AI systems. Despite the rapidly changing regulatory landscape, which is not yet aligned across geographies and sectors and may feel unpredictable, there are tangible benefits for organizations that improve how they provide and use AI now.

Failure to handle AI and gen AI prudently can lead to legal, reputational, organizational, and financial damages; however, organizations can prepare themselves by focusing on transparency, governance, technology and data management, and individual rights. Addressing these areas will create a solid basis for future data governance and risk reduction and help streamline operations across cybersecurity, data management and protection, and responsible AI. Perhaps more important, adopting safeguards will help position the organization as a trusted provider.
