Published on 15/02/2022
Be involved in writing voluntary standards for artificial intelligence
Following the survey conducted in summer 2021 among artificial intelligence players, AFNOR, mandated by the French State, has finalized the roadmap that will equip this strategic sector with new voluntary standards to disseminate best practices. And it is you who will write these standards!
Artificial intelligence (AI) systems are already part of our daily lives. An industry is developing in this sector with multiple applications, making it a matter of economic sovereignty for France and Europe, especially in this first half of 2022 under the French Presidency of the European Union. But to gain market traction, AI-based systems must inspire confidence. Players need to share good practices, a common vocabulary and protocols, and this sharing is only possible with voluntary standards. In Europe, these standards will support the regulation that the European Commission is preparing on the subject. Unlike this future regulation, which will be prescriptive in the same way as the GDPR is today, standardization is voluntary.
AI Standards: 6 focus areas
To this end, in May 2021 the State mandated AFNOR, as part of France’s ‘Great AI Challenge’, to lead a project aligned with the Future Investment Program (PIA) and the France Recovery plan. The mission statement is clear: “Create the trusted normative environment supporting the tools and processes for certifying critical systems based on artificial intelligence.” The work program is now finalized. You can view it here (PDF), particularly if, like 260 French players in the field, you took part in the consultation conducted in summer 2021 to build it. You can then participate in the development of standards within our standardization commissions. The roadmap has 6 focus areas:
- Focus area 1: Develop standards relating to trust
The priority characteristics selected for standardization are safety, security, explainability, robustness, transparency and fairness (including non-discrimination). Each characteristic will require a definition, a description of the concept, technical requirements, and the associated metrics and controls.
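To make the idea of a fairness metric concrete, here is a minimal sketch computing the demographic parity difference of a binary classifier's decisions across two groups. The function name, the example data and the two-group assumption are hypothetical illustrations, not drawn from any AFNOR deliverable.

```python
# Illustrative sketch of one possible fairness metric: the demographic parity
# difference, i.e. the gap in favourable-decision rates between two groups.
# All names and values here are hypothetical, for illustration only.

def demographic_parity_difference(decisions, groups):
    """Absolute difference in positive-decision rates (assumes two groups)."""
    rates = {}
    for g in set(groups):
        selected = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    a, b = rates.values()
    return abs(a - b)

# 1 = favourable decision; protected attribute takes values "A" and "B"
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A standard would go further, defining acceptable thresholds and the controls used to verify them; this sketch only shows the kind of quantity such metrics measure.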
- Focus area 2: Develop standards on AI governance and management
AI is generating new applications, all of which carry risks. These risks have various origins: poor data quality, poor design, inadequate qualification, and so on. A risk analysis for AI-based systems is therefore essential, as a basis for proposing a risk-management system.
- Focus area 3: Develop standards on oversight and reporting for AI systems
This is about ensuring that AI systems remain controllable: humans must be able to take back control at critical moments, when a system strays beyond its nominal operating range.
- Focus area 4: Develop standards on competencies of certification bodies
It will be up to these bodies to ensure not only that organizations have put in place processes for the development and qualification of AI systems, but also that the products comply with the requirements, including regulatory requirements.
- Focus area 5: Develop standardization of certain digital tools
One of the issues around AI is the need to run simulations on synthetic data rather than real data. Standards must ensure that this synthetic data is reliable.
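One crude way to picture such a reliability check is to compare summary statistics of a synthetic dataset against the real data it imitates. This is a minimal sketch under invented assumptions: the tolerance, the datasets and the choice of statistics are hypothetical, and real standards would define far richer tests.

```python
import statistics

# Illustrative sketch: does a synthetic dataset roughly reproduce the mean
# and standard deviation of the real data? Tolerance and data are invented.

def stats_match(real, synthetic, tolerance=0.1):
    """True if mean and standard deviation differ by less than `tolerance`."""
    mean_close = abs(statistics.mean(real) - statistics.mean(synthetic)) < tolerance
    stdev_close = abs(statistics.stdev(real) - statistics.stdev(synthetic)) < tolerance
    return mean_close and stdev_close

real = [1.0, 2.0, 3.0, 4.0, 5.0]
synthetic = [1.1, 2.0, 2.9, 4.1, 5.0]
print(stats_match(real, synthetic))  # True: the two samples are close
```

Matching a couple of moments says nothing about higher-order structure or edge cases, which is precisely why shared, standardized test protocols for synthetic data matter.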
- Focus area 6: Simplify access to and use of standards
In order to implement this strategy and adjust it along the way, a consultation platform will be made available to you. In the meantime, feel free to view the document and discuss it with those around you!