Towards Ethical and Transparent Artificial Intelligence: The Role of the European Parliament in Protecting Fundamental Rights
Artificial intelligence is no longer a futuristic concept; it is a transformative force shaping societies, education, industries, and governance. Yet its power raises challenges concerning fundamental rights, transparency, and accountability. As the European Union strives to lead in creating rights-based AI frameworks, the European Parliament has assumed a crucial role in shaping legislation to protect Europeans while enabling innovation. The Artificial Intelligence Act is central to this effort, addressing judicial safeguards, human oversight, and scrutiny of Member States’ implementation.
Effective implementation of the AI Act requires more than initial legislative success – it demands ongoing scrutiny and adaptability.
As a Member of the AI Implementation Working Group, I will work to ensure that Member States’ implementation of AI regulations complies with EU law.
The Role of the European Parliament in AI Governance
The European Parliament has emerged as a proactive actor in shaping AI governance, focused on upholding fundamental rights and democratic principles and on establishing societal trust in new technologies. Through tough negotiations with the Council, the Parliament has worked to balance safeguards with innovation: protecting individuals from the risks of bias, discrimination, and overreach, while allowing AI systems to be developed and tested by SMEs in regulatory sandboxes.
These sandboxes offer (prospective) providers a controlled environment to develop, train, test, and validate innovative AI systems for a limited period before market entry.
The AI Act recognises the significant role SMEs and start-ups play in AI innovation and includes specific provisions to support their participation.
While the AI Act encourages innovation, it also emphasises the importance of responsible AI development and deployment. The framework provides for judicial review and human oversight of AI systems, particularly in sensitive areas such as law enforcement, healthcare, and public administration. Moreover, the Parliament emphasises the importance of ongoing scrutiny by the European Commission and of a robust review mechanism for Member States’ implementation, ensuring that national measures align with EU standards.
Human oversight: a cornerstone of trustworthy AI
The AI Act establishes various safeguards to keep AI systems under human control and subject to judicial oversight. These safeguards apply throughout the lifecycle of an AI system, from design and development to deployment and post-market monitoring. They are essential to addressing the risks posed by algorithmic biases and errors, which can perpetuate discrimination or exacerbate existing inequalities. High-risk AI systems in particular must not operate as unaccountable “black boxes”: human oversight ensures that AI tools complement, rather than replace, human judgment, aligning technological capabilities with ethical standards and democratic values.
Fundamental Rights Impact Assessment: anticipating and mitigating risks
To address potential harms to fundamental rights, the AI Act requires certain deployers of high-risk AI systems to conduct fundamental rights impact assessments. This involves identifying the specific risks that the use of such a system may pose to the fundamental rights of individuals or groups, particularly vulnerable groups.
Biometric identification and national security
The use of AI for biometric identification, both in real-time and retrospective (‘post’) contexts, is particularly contentious. Biometric technologies, such as facial recognition, significantly affect privacy, freedom of assembly, and non-discrimination. Deployed without adequate safeguards, these systems risk turning public spaces into surveillance zones. The chilling effect on freedom of expression also has a significant impact on our society and risks undermining democratic participation.
The Parliament has recognised these risks, pushing for strict limitations on the use of remote biometric identification, especially in public spaces. However, the possibility of introducing ‘post’ remote biometric identification, such as the retrospective analysis of video footage, raises complex ethical and legal questions. While such tools could enhance security and aid criminal investigations, they must be carefully regulated to prevent abuse and ensure compliance with fundamental rights. We will carefully scrutinise Member States’ implementation of the provisions that introduce such law enforcement tools.
National security presents another challenge. Member States often claim exemptions for AI systems in areas deemed essential for public safety and security.
These exemptions can create loopholes in the AI Act’s safeguards, leading to disparities in the protection of rights across the EU. The Parliament has highlighted the need for vigilance in monitoring how Member States invoke national security exceptions, urging the Commission to put forward a legislative proposal that would allow such claims to be scrutinised rigorously.
Conclusion
Europeans must feel confident that AI technologies serve their interests without compromising their rights. This requires robust legal frameworks, public engagement, and education to foster understanding of AI’s benefits and risks.
Transparency is key. Europeans must have access to clear information about how AI systems operate and how their data is used. This empowers individuals to make informed choices and hold entities accountable for misuse.
The European Parliament’s role in advancing rights-based, transparent AI governance reflects its commitment to protecting fundamental rights. The AI Act helps to balance innovation with accountability, emphasising safeguards such as human oversight, judicial review, and rigorous monitoring of Member States’ implementation.
Through its ongoing efforts, the Parliament reaffirms that technology must serve humanity, not the other way around. By prioritising transparency, public engagement, and continuous scrutiny of AI systems, the EU sets a global standard for responsible AI that upholds the dignity, freedoms, and trust of all Europeans.