14.06.2023

The EU AI Act proposal: The dilemma of regulating a new technology

In the wake of the recent EU Parliament vote on the AI Act, visiting researcher Thorsten Jelinek reflects on the complex dilemma inherent in regulating AI. 

Significant political progress has been made in finalising the AI Act (AIA) proposal, which has now entered the trilogue phase following the vote on the Commission's proposal by the European Parliament. The AIA proposal aims to ban AI applications that pose "unacceptable risks" and to regulate "high-risk" applications that may violate fundamental rights, undermine safety, democracy, or the rule of law, or endanger critical infrastructure. The trilogue negotiations – between the European Parliament, the Council of the European Union, and the European Commission – are expected to be challenging, not least due to the late inclusion of generative AI models, such as OpenAI's ChatGPT and Google's Bard, in the legislative process. In its effort to keep pace with AI advancements, the European Parliament's current proposal shows elements of both potential overregulation and underregulation of generative models.

On one hand, the European Parliament's version of the AI Act could overregulate AI, which may hinder EU competitiveness – especially for smaller firms that cannot easily afford the costs of a comprehensive risk management system covering all high-risk scenarios. By contrast, it would primarily benefit the few large tech companies that already dominate the AI market, as they can absorb the compliance costs. This could lead to further market concentration and less competition. In this context, big tech's call for regulation appears to serve its own interests rather than the broader societal interest. Additionally, there are concerns that the Act does not address the lack of native European generative models or AI infrastructure, which are important for boosting the EU's competitiveness, realising digital sovereignty aspirations, and aligning with EU values.

On the other hand, the AI Act, as it currently stands, would underregulate AI, particularly large language and generative models, as it does not address the immediate risks associated with their use, including the automation of fake news and hate speech. Instead, these issues are covered by the Digital Services Act (DSA), which came into force in November 2022. However, the DSA only applies to “intermediary services,” such as social networks, search engines, and other hosting services. It does not extend to the multimodal digital services offered by providers of generative models, which encompass text generation, language translation, image generation, music composition, and video synthesis.

The twin risks of overregulation and underregulation arise from the challenge of keeping up with the rapid pace of technological advancement and from a fundamental dilemma that lawmakers face: a new technology is easier to regulate at its early stage of development, but its potential implications and risks are still unclear; once a technology is mature and widely used, its consequences are well understood, but regulation meets greater resistance and becomes harder to establish effectively. Striking the right balance between proactive regulation and adaptability is crucial to navigating this conundrum and ensuring that regulatory frameworks keep pace with emerging technologies while adequately addressing their societal impacts. The EU’s early move to regulate raises the question of how much of the burden of this dilemma is shifted onto businesses. As the proposal stands, businesses would be saddled with anticipating all potential misalignments of their AI services in high-risk areas, making them liable for risks that are difficult to predict and creating legal uncertainty.

To navigate this challenge more effectively, it is essential to continuously improve the policy process itself. This involves ongoing monitoring of technological and market developments, incorporating regulatory flexibility, and adopting an adaptive management approach to address the intricate and evolving relationship between technology and society. Ensuring policy coherence is equally crucial, as the AI value chain involves multiple stakeholders and intersects with various existing regulations concerning content, privacy, non-discrimination, intellectual property, trade secrets, and cybersecurity. To strike a balance between innovation and safety and to increase regulatory certainty, it will be crucial for the trilogue negotiations not to treat generative models separately, but to define a minimum set of common rules that apply to both high-risk AI applications and generative AI models. In addition, incorporating specific provisions for high-risk use cases derived from generative AI models would further alleviate the burden, especially for smaller businesses.

At this juncture, the need for AI regulation should be beyond question if the technology is to be steered towards serving society's best interests. Regulations not only impose constraints but also shape markets, and it is crucial that this process remains inclusive and does not confer additional advantages on big tech. However, if the EU aims in the long run to gain control over its technology and policy choices and to influence the development of AI in line with its own values, standards, and regulations, it must further prioritise strengthening its own AI ecosystem. The development of a native AI infrastructure is essential to achieving this objective.

Teaser photo by Emiliano Vittoriosi on Unsplash.