Europe as a world leader in AI accepted by citizens and consumers
Artificial intelligence (AI) is the ground-breaking technology of our time. Not only will it transform the way we work, travel and do business – AI will accompany us in our personal and private lives, in areas such as healthcare, e-governance and security. The presence of AI in our lives should not frighten us – rather, it should encourage us. We must leverage its potential and discuss how to mitigate the risks that accompany this technological transformation.
With the Artificial Intelligence Act (AI Act) currently under scrutiny in the European Parliament, the EU is now at a crossroads. It can shape legislation on how we want AI to transform our way of living and where we want innovation to take place.
We must pave the way now to enable Europe to lead in this emerging technology while safeguarding fundamental rights and the European values we stand for.
It is important to underline that even today’s powerful data-driven algorithms can only solve tasks in domain-specific niches. In most applications, AI does not replace human work but rather assists it, by making recommendations or taking over repetitive tasks. In that regard, the majority of AI applications are almost or even completely risk-free. Furthermore, AI does not ‘understand’ the task it performs. It simply detects patterns in the data, extrapolates findings and applies them to new, similar scenarios. Hence, the dystopian fantasy of computers developing their own will and taking over the world is, to put it mildly, highly unlikely.
Nevertheless, even today’s AI-based systems can cause harm if misused. Primary examples of such misuse are the deployment of AI for surveillance tools, social scoring or the spread of misinformation and propaganda. Furthermore, AI can make discriminatory decisions if trained on data with underlying biases. In such cases, regulatory action is of course required, while keeping in mind that no dataset can be completely bias-free.
On the other hand, AI has enabled key transformations in our society and will continue to revolutionise it for decades to come. Hence, the consequences for our competitiveness, our prosperity and our security should not be underestimated. For example, in healthcare, the rapid processing of data through AI can help make diagnoses faster and more targeted, adapt therapies and medicines to the patient, and relieve the burden on healthcare staff. In the area of sustainability, AI can help us optimise energy use, make transport systems more efficient and drive sustainable solutions in agriculture.
Currently, the EU has fallen severely behind in the global tech race. The EU lacks market power, investment, research and skills compared with the US, China and other global players. Only 8 of today’s top 200 digital companies are domiciled in the EU.
Moreover, the gaps are growing. The US and China invest up to 5 or 6 times more in AI than the EU does. Top European researchers are emigrating to the US and China. With risks, benefits and competition in mind, the EU’s goal must be to become the shaping power in the use of AI, based on our values, fundamental rights and ethical principles, rather than always running one step behind and merely reacting to new developments. Otherwise, we will end up as a data colony of the US or China.
While the Commission’s AI Act is a good starting point for regulating AI, the law nevertheless requires significant changes if it is to cope with the complex and highly dynamic field of AI while ensuring leeway for innovation.
To strike the right balance between encouraging innovation and protecting our fundamental rights, we must first create more legal certainty, harmonise rules across Europe and reduce bureaucracy.
A risk-based regulatory approach can help reduce bureaucracy for businesses and SMEs while ensuring that AI algorithms deployed across the EU are used prudently.
In essence, the risk-based approach would subject only high-risk AI applications to regulation while exempting the majority of risk-free applications. Furthermore, the AI Act must avoid overlaps with regulation already in place.
Additionally, the AI Act should establish the concept of “Trustworthy AI”, a European response to AI development that is lawful, ethical and robust against cybersecurity and other risks. The European standardisation organisations and the European Commission should translate the above-mentioned principles and values into technical standards. This concept can increase public trust in the new technology and function as a trademark in the global AI market, potentially giving the EU a competitive edge.
Furthermore, we should provide developers with regulatory sandboxes in which they can experiment with AI algorithms under less stringent requirements. Such regulatory sandboxes will help spur innovation while protecting the public from the unintentional harm that can result from experimenting with AI.
We must use our regulatory and market powers now to accelerate innovation in AI and make the EU a global leader that shapes the international debate by setting standards. Only then will our continent be able to fully exploit the enormous potential of new AI technologies and use them to address topical challenges such as our energy dependence and climate change.