Bridging legislation and innovation: how European harmonized standards help implement the AI Act
The advent of Artificial Intelligence (AI) has ushered in transformative opportunities, along with challenges and risks. To address these, in 2024 the EU adopted the AI Act (AIA). This groundbreaking piece of legislation establishes a uniform regulatory framework across the EU to address the risks that AI systems can pose to the safety, health, and fundamental rights of individuals.
European standards are key to ensuring the Act’s implementation. In May 2023, ahead of the Act’s adoption, the European Commission issued a Standardization Request to CEN and CENELEC (two of the official European Standardization Organizations) to develop dedicated European Standards in its support.
This article explores the role standards play in the EU’s ambition to regulate AI, and where their power comes from.
- The power of Harmonized Standards
At the heart of the system is the concept of Harmonized Standards (hENs). hENs are a special type of European standard that ensures that products, processes, and services comply with essential legal requirements. These standards provide technical benchmarks that help convert abstract legislation, such as the AI Act, into measurable criteria. hENs, while remaining voluntary, are developed on the basis of a so-called ‘Standardization Request’ from the European Commission.
Adherence to hENs grants a “presumption of conformity”: once a suitable harmonized standard is published, its users (e.g. manufacturers) are presumed to comply with the relevant requirements in the legislation. This not only simplifies compliance, but also provides legal certainty for businesses and institutions.
- Standardization, a collaborative endeavour
Our standards are developed through a rigorous, consensus-based, and inclusive process that gives all stakeholders a voice, from academia and industry to societal representatives and small and medium-sized enterprises (SMEs). The technical committees in which they are drafted provide an excellent venue to discuss the documents, ensuring that they are practical and balanced and that they support a level playing field for all players, including SMEs.
This inclusive approach is at the very heart of the European Standardization System: by bringing all actors together, it ensures a unified interpretation of legal requirements. This collaborative process not only fosters clarity, but also establishes a robust foundation for building transparency and trust, both key to ensuring trustworthy AI.
Collaboration extends beyond European borders. CEN and CENELEC have long-standing agreements with their international counterparts, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), to ensure global consistency and avoid duplication of work. This international alignment is crucial for businesses operating globally, as it reduces barriers to trade and helps them maintain compliance across different regulatory frameworks.
- Standards are key to meet the AI Act’s requirements
The strength of the process described above is evident in the AI Act. The Act adopts a risk-based approach, categorizing AI systems into four risk levels: unacceptable, high, limited, and low. While unacceptable-risk AI is outright banned, high-risk AI systems face stringent requirements, including risk assessments, logging, and transparency measures.
However, the AIA’s provisions often lack specificity. For instance, the Act mandates a risk management system, but does not define its scope or methodologies. This is where hENs play a crucial role: they fill these gaps, offering clear instructions for compliance.
There are other reasons standards are a strategic tool to support the EU’s AI ambitions. They:
- Promote Trust: by ensuring that AI systems are safe and uphold fundamental rights, standards build consumer and societal trust in AI technologies;
- Enhance Competitiveness: standards reduce market fragmentation and create a level playing field, encouraging innovation and investment in AI;
- Foster Interoperability: they facilitate the seamless integration of AI systems across sectors, boosting the EU’s position in global value chains.
- CEN and CENELEC’s work on AI
In May 2023, the European Commission requested CEN and CENELEC to develop a series of standards to support the AIA. The request includes standards covering ten aspects of AI systems: Risk Management, Data Governance and Quality, Record Keeping, Transparency, Human Oversight, Accuracy, Robustness, Cybersecurity, Quality Management, and Conformity Assessment.
A specific focus is also put on the risks that AI could pose to the health, safety, and fundamental rights of individuals.
Responsibility for developing these harmonized standards for Europe lies with CEN and CENELEC’s Joint Technical Committee 21 ‘Artificial Intelligence’ (JTC 21). JTC 21 is developing a series of European homegrown standards. Whenever deemed relevant, it can also adopt international standards (developed by ISO/IEC JTC 1/SC 42) to keep divergence between the European and international levels as small as possible.
Current work in JTC 21 primarily addresses requirements with regard to AI trustworthiness, conformity assessment, risk assessment, and quality management systems. These deliverables will be complemented by further standards to cover all ten aspects of AI systems mentioned above.
These hENs set out common (horizontal) requirements to address risks in AI systems. They will provide the necessary guidance to manufacturers, enabling them to adapt these requirements to their specific systems.
The work, of course, comes with its own set of challenges, stemming from the inherent complexities of AI itself: an exceptionally broad spectrum of relevant stakeholders, bringing a variety of perspectives; the novelty of the topic; and its interplay with other fields, requiring strong coordination. In addition, defining requirements for some topics, such as trustworthiness, is challenging, as they still suffer from a lack of maturity or scientific consensus.
Despite these challenges, standardization work in JTC 21 has been proceeding in parallel with the political process, to ensure the availability of the hENs before August 2026, the deadline for high-risk AI systems to comply with the AIA’s requirements.
To further raise awareness about its activities and attract new stakeholders, JTC 21 established a Task Group ‘Inclusiveness’. The Group publishes a dedicated, freely available newsletter with the support of the European Trade Union Confederation (ETUC).
- Conclusion
As the EU seeks to lead the world in AI regulation, Harmonized Standards can help bridge the gap between ambition and reality. By translating legislative mandates into practical guidance, standards ensure that AI systems are safe, trustworthy, and compliant.
Moreover, the inclusive and collaborative nature of the European standardization process ensures that the resulting frameworks are balanced and reflective of diverse societal needs.
While challenges remain, the combined efforts of CEN and CENELEC, National Standardization Bodies, and all stakeholders underscore the EU’s commitment to leading the world in ethical and innovative AI. For companies and organizations of all sizes, understanding and adopting these standards is not just a compliance exercise—it is a strategic move toward building a sustainable and competitive future in the AI-driven era.