
To Support Start-Ups in the EU and Avoid AI Risks, We Must Regulate Big Tech

By Andreas Schwab, MEP (EPP Group – Germany)

When it was released two years ago, ChatGPT surprised the world with skills that were thought to be decades away – text that would have taken hours to write can now be generated in seconds. Artificial Intelligence (AI) holds enormous potential, but irresponsible development can pose significant risks. By shifting regulation away from small and medium-sized enterprises (SMEs) and towards Big Tech’s general-purpose AI models with systemic risk, we can simultaneously enhance European competitiveness in AI and reduce the greatest risks arising from dangerous capabilities.

AI’s Potential and Risks

Properly implemented, AI can drive innovation, improve efficiency and create opportunities for businesses and individuals. For example, in healthcare, AI-powered tools can analyse medical images to detect diseases like cancer. In manufacturing, AI-driven predictive maintenance can foresee equipment failures, reducing downtime and improving operational efficiency. Lastly, in agriculture, AI-powered drones and sensors can monitor crop health and optimise irrigation, creating opportunities for precision farming.

To realise these benefits, it is crucial to mitigate several risks – especially those posed by large and rapidly evolving general-purpose AI models such as GPT.

Industry experts such as Geoffrey Hinton, the “Godfather of AI” who recently won the Nobel Prize, have warned about the dangers of creating increasingly capable general-purpose AI systems[1]. Even their developers don’t fully understand their inner workings, so these systems can develop unforeseen dangerous capabilities and behaviours[2]. The issues range from facilitating access to biological weapons, to the opacity of AI systems as black boxes, to questions about whether they can be reliably controlled at all.

Combining Risk Reduction and European Competitiveness

Notably, these risks disproportionately stem from a few big tech companies – often based outside the EU – that hold a dominant position in the AI market. We can therefore mitigate these risks by imposing stricter controls on those companies while supporting smaller European AI companies that are developing specialised, trustworthy AI tools. This can be achieved through a combination of three levers: applying advanced safety standards only to the largest, most expensive models with potentially dangerous capabilities; leveraging EU law to create a level playing field in AI, in Europe as well as abroad; and providing strategic investment for responsible European AI development.

Focusing Regulation on the Most Dangerous Models

In addition to its use-case-based rules, the AI Act includes provisions that apply safety requirements only to the biggest models with potentially dangerous capabilities, without burdening smaller developers. Specifically, the AI Act mandates model evaluations, advanced cybersecurity measures and serious-incident reporting for providers of general-purpose AI models trained with more than 10^25 FLOPs[3]. According to Epoch AI, such models are estimated to cost tens of millions of euros to train, so this measure addresses the most severe risks without burdening SMEs with additional reporting requirements.[4]
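To put this threshold in perspective, here is a minimal sketch of the arithmetic behind it. It relies on the common rule of thumb from the scaling-laws literature that training compute is roughly six FLOPs per parameter per training token; the model sizes and token counts below are hypothetical, chosen only to illustrate where frontier models and typical SME-scale models fall relative to the 10^25 FLOP line, not the specifications of any real system.

```python
# Sketch: estimating training compute against the AI Act's 10^25 FLOP
# threshold (Art. 51(2)). Assumes the common approximation
# training FLOPs ~= 6 * parameters * training tokens; all model
# figures below are hypothetical.

AI_ACT_THRESHOLD_FLOPS = 1e25  # Art. 51(2) AI Act


def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token."""
    return 6 * parameters * tokens


def presumed_systemic_risk(parameters: float, tokens: float) -> bool:
    """True if the compute estimate exceeds the AI Act threshold."""
    return estimated_training_flops(parameters, tokens) > AI_ACT_THRESHOLD_FLOPS


# Hypothetical frontier model: 1 trillion parameters, 15 trillion tokens
print(estimated_training_flops(1e12, 15e12))   # 9e+25 -> above the threshold
print(presumed_systemic_risk(1e12, 15e12))     # True

# Hypothetical SME-scale model: 7 billion parameters, 2 trillion tokens
print(estimated_training_flops(7e9, 2e12))     # 8.4e+22 -> far below
print(presumed_systemic_risk(7e9, 2e12))       # False
```

On this rough estimate, only models trained at a cost of tens of millions of euros come anywhere near the threshold, which is why the obligations bypass SMEs entirely.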

A Level Playing Field

Democratising the digital technology market and preventing a few unaccountable big tech companies from controlling the sector are challenges European policymakers are already familiar with. The Digital Markets Act (DMA) sets fair competition rules in tech. Although it doesn’t explicitly mention AI, it covers leading AI developers designated as gatekeepers, who deploy AI in precisely the areas the DMA regulates: ranking algorithms, data access and advertising. The DMA mandates transparency, fairness and non-discrimination in AI-driven content rankings, governs access to data and imposes transparency requirements on AI-based advertising, so it provides a robust groundwork for AI governance.[5]

To avoid disadvantaging European AI companies, the EU must enforce AI regulations internationally, leveraging mechanisms like the Brussels Effect or its position in the chip supply chain to ensure global compliance.

Strategic Investment

According to the Draghi Report, private investment in AI is seven times higher in the US than in Europe[6]. This highlights the need for investment in the European AI industry, for international cooperation and for nurturing AI talent.

Since many of the biggest AI risks stem from general-purpose AI models, we can reap many of AI’s benefits while avoiding those risks by supporting specialised, narrow AI tools – such as those that assist in the early detection of diseases.

This innovation in trustworthy AI requires investment in European AI talent.

Schools and universities must teach students technical AI skills, and working professionals need opportunities to upskill in AI. Finally, AI talent must be incentivised to stay in Europe so that the domestic industry has access to skilled workers.

This investment can be coordinated internationally through a CERN for AI, as advocated in our EPP Manifesto[7]. It could give SMEs, academia and civil society access to the resources and networks they need to secure Europe a seat at the AI table. Where strict compliance with EU regulations and a focus on safety are ensured, research cooperation with other countries can enable European developers to learn from their partners’ expertise and help ensure that European safety standards are applied internationally.

Making Europe a Leader in Trustworthy AI

AI presents unprecedented opportunities for innovation and growth within Europe. By investing strategically in a robust AI ecosystem centred around SMEs, specialised tools and interpretability research, and by enforcing rules such as compute thresholds internationally to promote a level playing field, Europe can lead the world in developing safe, transparent and trustworthy AI.


[1] Miles Kruppa, Deepa Seetharaman, ‘A Godfather of AI Just Won a Nobel. He Has Been Warning the Machines Could Take Over the World’, in: The Wall Street Journal, 9 Oct. 2024, https://www.wsj.com/tech/ai/a-godfather-of-ai-just-won-a-nobel-he-has-been-warning-the-machines-could-take-over-the-world-b127da71 (last accessed 27 Nov. 2024).

[2] Noam Hassenfeld, ‘Even the scientists who build AI can’t tell you how it works’, in: Vox, 15 Jul. 2023, https://www.vox.com/unexplainable/2023/7/15/23793840/chat-gpt-ai-science-mystery-unexplainable-podcast (last accessed 27 Nov. 2024).

[3] Art. 51(2) and Art. 55(2) Artificial Intelligence Act.

[4] Ben Cottier, Robi Rahman, Loredana Fattorini, Nestor Maslej, David Owen, ‘The rising costs of training frontier AI models’, in: arXiv, 3 Jun. 2024, https://arxiv.org/abs/2405.21015 (last accessed 27 Nov. 2024).

[5] Andreas Schwab, ‘Digital Markets Act and artificial intelligence services’, in: Concurrences, Sep. 2024, https://www.concurrences.com/en/review/issues/no-3-2024/libres-propos/digital-markets-act-and-artificial-intelligence-services (last accessed 27 Nov. 2024).

[6] European Commission, Draghi Report: The Future of European Competitiveness, Part B, p. 79.

[7] EPP 2024 Manifesto, Section 2.6.