
AI Summit: AFNOR will be there
In February 2025, AFNOR took part in the Summit for Action on AI convened by the President of the Republic. Its credo: the French AI industry needs voluntary standards, to equip professionals with the tools to build trustworthy AI and to examine its environmental impact rigorously.
Artificial intelligence
ChatGPT yesterday, DeepSeek today... Faced with the AI tornado, AFNOR offers voluntary standards as a shelter. As early as 2022, we argued that artificial intelligence, especially given the latest developments in generative AI, needed a code of conduct to promote understanding between stakeholders and interoperability between systems, as well as a framework for building trust, particularly for the high-risk AI systems listed in what is now the AI Act, the European regulation on artificial intelligence. It was with this stance that, in February 2025, the group took part in the Summit for Action on AI in Paris, convened by the President of the Republic, in two stages:
- A scientific session on February 7 at the École Polytechnique
- A political event on February 10 at the Grand Palais
On February 7, AFNOR hosted a round table on trustworthy AI (session no. 3, Faurre auditorium), with the participation of Touradj Ebrahimi, professor at the Swiss Federal Institute of Technology in Lausanne; representatives of European and international standardization bodies (BSI, CEN-Cenelec, Danish Standards, DIN, IEC, ISO); and the company Naaia, the first French company to be ISO/IEC 42001 certified by AFNOR Certification. The voluntary standard ISO/IEC 42001, an adaptation to the AI universe of the parent standard ISO 9001, is establishing itself as a valuable foundation for AI professionals who want to move forward methodically and pursue continuous improvement. "It specifies the framework for a quality management system for companies designing or developing AI systems, without forgetting users," summarizes Virginie Desbordes, head of Digital Trust at AFNOR Certification.
AI standardization: a question of sovereignty
The ISO/IEC 42001 standard is the result of international consensus. It forms the basis for a series of normative documents expected at European level. Around ten of these are even set to become harmonized standards, i.e., standards that set out a detailed approach to each requirement of the AI regulation and that, once published in the Official Journal, will confer a presumption of regulatory compliance on any actor who applies them. Their scope includes trustworthiness frameworks, risk management, conformity assessment, and more. The subject of trustworthiness is being debated at CEN-Cenelec within a committee co-chaired by AFNOR for France. "Europe defends the idea that the development and use of high-risk AI guided by principles—principles that will ultimately become constraints—stimulate innovation while ensuring better control of the results of these innovations," emphasizes Anna Médan, AI program manager at AFNOR Standardization.
AFNOR took the lead by forming a community of stakeholders committed to establishing and defending a French position, rather than having rules imposed by third parties. This was the aim of the Grand Défi IA (Great AI Challenge), a mission led by the General Secretariat for Investment. When it was launched in 2021, AFNOR was tasked with "creating a normative environment of trust to accompany the tools and processes for certifying critical systems based on artificial intelligence."
The defense of a French AI community that agrees on best practices remains relevant, as indicated by a November 2024 report from the Parliamentary Office for the Evaluation of Scientific and Technological Choices (OPECST): "France must be allowed to defend its national interests and those of its national companies as effectively as possible in the field of AI standardization, which means greater involvement of AFNOR and COFRAC."
France has also made its mark by approaching AI from an environmental perspective. This is evidenced by the interest generated by the reference framework AFNOR Spec 2314 on frugal AI, co-developed with dozens of stakeholders including Ecolab, a service of the General Commission for Sustainable Development (Ministry of Ecological Transition), and available for free download. If AI systems pose one risk, it is that of increased pressure on the environment, given the energy required to train them. The AFNOR Spec reviews the criteria to be taken into account when assessing these impacts, and it will greatly facilitate the design of the voluntary standards now being developed at European and international level—all the more so as AFNOR is associated with the Global Coalition for Sustainable AI, launched in the wake of the summit by the French Ministry for Ecological Transition. On this subject, as on all AI-related topics, AFNOR Compétences offers an expanded range of training courses.
Finally, wanting responsible AI means ensuring that practices are consistent—and pushing for events on the subject to be responsible themselves. In that spirit, after an audit, AFNOR Certification awarded the organizers of the Paris summit ISO 20121 certification and the Equality Major Event label.
