
Grand AI Challenge: Season 2 kicked off this Thursday
In 2021, the French government launched the Grand Défi IA (Great AI Challenge), tasking AFNOR with creating the regulatory environment for deploying "trustworthy AI." On July 25, 2024, the program was renewed, with an important task to be completed by April 2025: harmonized standards.
When the French government launched the Great AI Challenge in 2021, it set out an objective that clearly signaled a strategy of influence: "At the end of the project, France will have a standardization strategy in the field of AI and will implement it through coordinated partnerships (...), it will increase its influence in European and international standardization bodies (...), and it will improve the competitiveness of French companies."
Orchestrated by the General Secretariat for Investment Programming (SGPI), a department of the Prime Minister's Office, Season 1 of the Grand Défi IA took place well ahead of the production of standards: the aim was to seek out companies that were unfamiliar with standardization, explain and popularize existing voluntary standards, map standards, and facilitate their adoption. And, of course, to bring together as many experts as possible around the project. Read the press release issued at the start of the partnership here.
Harmonized standards for the AI Act coming soon
Mission accomplished. Over the past three years, France, through AFNOR, has become a major player in artificial intelligence standardization, for example by leading the strategic mapping group at ISO/IEC JTC 1/SC 42 and taking on the vice-presidency of CEN-CLC JTC 21. "We have structured the French ecosystem around a roadmap, through partnerships with players such as Confiance.ai, France Digitale and Hub France IA," explains Morgan Carabeuf, head of the digital division at AFNOR Standardization. "We have mobilized players such as INRIA, Microsoft, Numalis, IBM, IRT System X, Airbus, Schneider, the Montaigne Institute, and others." The publication, at the end of 2023, of ISO/IEC 42001 on AI management systems (a certifiable standard) is another victory.
And yet, "given the challenges surrounding harmonized standards, we would need more French experts who are strongly involved," continues Morgan Carabeuf. What are harmonized standards? They have a special status, halfway between voluntary standards and regulation. In the European Union, a product's compliance with harmonized standards constitutes a presumption of conformity with the law, and thus becomes a real competitive advantage in the market. That is how powerful they are. In this case, the AI Act's harmonized standards must be ready by April 2025. We are therefore in the middle of the race, and this is no time for France to take a break. That is the purpose of the amendment to the contract signed with the SGPI on July 25, 2024. Especially since the context is changing rapidly: at the end of 2022 came ChatGPT, quickly followed by its "cousins." In terms of best practices, generative AI undoubtedly deserves a dedicated strategy.
The AI Trustworthiness Framework standard, a gateway
But the roadmap for the coming years revolves mainly around the future voluntary European standard on characterizing trust in AI (the AI Trustworthiness Framework), a project proposed by France and accepted in January 2024. Now being drafted, over a maximum period of three years, this future standard will also be harmonized across the EU-27. "It is seen as strategic by the European Commission, which has made it one of its priorities. It may refer to other, more detailed standards, whether from ISO, ETSI [the telecommunications standards body, ed.] or IEEE [the electronics standards body]," explains Morgan Carabeuf. The French strategy has been to reduce the risk of regulatory overload by proposing this project, which is intended to be the gateway to para-regulatory standardization. It will provide high-level requirements and guide companies in their efforts to comply with the AI Act.
At the helm is Enrico Panai, an AI ethicist and staunch advocate of standardization for many years. "The idea has been around for a long time and has given rise to several publications that inspired the AI Act," he says. The common thread is that trust is a fundamental element of all standardization processes. Why? "Because Europe is convinced that trust is necessary for a market to develop," replies Enrico Panai. "This is in contrast to those who condemn regulation on the grounds that it stifles innovation."
Trust, a necessary condition for innovation
The ethicist points out that in Europe, innovation is driven by SMEs and microbusinesses, not by GAFAM. "We need to be aware of this. If consumers don't trust AI technology, companies won't be able to develop anything at all," Enrico Panai insists. Another aspect that he believes is underestimated is that "technology is never built by a single company. You need a whole chain of players and developers. But it's impossible to sign contracts without standards to align with!"
Working on an intangible subject such as trust is not easy, but it can be done using "proxies" (markers, indicators). "For example, we indicate that a certain measurable characteristic is evidence that a relationship of trust can be built. It's like a restaurant, where an open kitchen, clear signage for the stairs, or clean restrooms are good signs," says Enrico Panai by way of comparison. The message is clear: if you want to get involved in standardization... don't hesitate! "It's a collaborative effort where many voices are needed to reach a shared and acceptable result."




