
Supporting the development and use of trustworthy AI

By Brando Benifei, MEP (S&D Group – Italy)

“The narrative that the AI Act is an obstacle to innovation is something we heard for many years and continue to hear after its entry into force. The reason it is still so successful today is simple: it is easier. It is easier to achieve highly profitable results in an unregulated environment, because one only has to look at the two or three companies that have been very successful to conclude that the absence of regulation is the way to go.

What has less appeal is the account of accidents, fatal and otherwise, and the violations of users’ rights that that same context produces. We have been reading about this for months in the many lawsuits pending in the United States against companies offering generative AI, lawsuits that, depending on their outcome, could bring those companies’ business models to their knees. We have likewise seen several cases in Europe in recent years, before the AI Act came into force, of the uncontrolled use of algorithms affecting the lives of vulnerable people: individuals who had applied for financial support from the state, or students misjudged by a badly designed algorithm, with direct consequences for their academic future.

Then there is the European Union, which has produced a text that, while not perfect, has the ambition to strike the right balance between innovation and respect for human rights, an aim that the European Parliament’s amendments, for which I was co-rapporteur, made explicit in Article 1.

There can be no widespread adoption of AI tools without the kind of trust we place today in our household appliances, which we know are safe because they carry a CE mark. Given, however, that AI is not a mere household appliance, and that it already helps us make important decisions in our private lives and at work, it is surely good that the choices made upstream, during its development, are transparent and considered, far removed from approaches such as “move fast and break things”, which may have suited other contexts and other times, but not this one.

And we should be proud of this, as already happens in other sectors such as food, where citizens of other countries discover that the same companies sell in Europe a healthier version of a given product, with fewer additives and colourings, than the one on their own shelves. The same has happened with the GDPR, which certain commentators have tried to portray as the “cookie law” (which is, incidentally, a different piece of legislation), forgetting that many of the pro-privacy developments enjoyed by users outside the European Union came about precisely because of work done on the basis of the norms that best protect the rights of the data subject.

These are norms that protect us not only from the aggressiveness of certain companies in their use of our personal data, but also from the intrusiveness of governments, limiting the risk of a dictatorship of surveillance.

Getting there will require more effort, no doubt, but the result will be better because it will be guaranteed by an advanced system of controls. The possibility for users to know the rationale behind the choices of a system using AI will help them to trust that system; knowing how an AI has been trained will help to prevent forms of discrimination and to choose the AI best suited to the purpose we have in mind; and the obligation in some cases to centrally report high-risk AI systems in use, as well as violations of the AI Act, will provide an up-to-date overview of the risks across the Union.

The requirement for a Fundamental Rights Impact Assessment (FRIA), which we secured in negotiations for high-risk AI used in the public sector and in sectors with high social impact such as banking, goes in that direction. A public body, including a law enforcement authority, must act all the more responsibly because, unlike in the private sector, the user cannot turn to a cheaper or more efficient alternative on the market. Furthermore, the state and the police exercise power, with legal effects, over people and their freedom.

The FRIA therefore serves to force the deployer to pause, reflect and further evaluate the possible negative effects AI might have on people.

It would be a mistake, then, to think that these requirements can only be met by the big companies we all know. The AI Act also contains many provisions for SMEs and start-ups, which enjoy preferential access to regulatory sandboxes where they can experiment with uses of AI under the guidance of the authorities, without the risk of sanctions.

The very fact that it is an EU regulation, and thus uniformly applicable throughout the Union, is an advantage not to be taken for granted when one considers that in the United States, as is already happening with privacy legislation, state-level initiatives to regulate AI are emerging that inevitably create a legislative patchwork difficult for a start-up to navigate.

While investment, support for compliance and public-private partnerships will undoubtedly be needed to finance the creation and growth of start-ups, we can be confident that designing AI systems in accordance with the requirements of the AI Act from the outset will foster the spread of safe systems, to the benefit of users and companies alike.”