Between the adoption of Regulation (EU) 2024/1689 (AI Act), the AI summit bringing together European data protection authorities in Paris in February 2025, the opinions of the European Data Protection Board (EDPB) and the CNIL action plan aimed at reconciling innovation and the protection of individuals, the regulatory framework around artificial intelligence is becoming clearer and stronger.
In this context, the CNIL published, in April 2024 and then in February 2025, three sets of recommendations on the application of the GDPR (Regulation (EU) 2016/679) to AI systems and models. These recommendations specify the essential principles that must be respected in order to ensure compliance and to protect the rights of the persons concerned.
So, what are your obligations and how can you ensure the compliance of your AI tools?
AI systems deemed to present an "unacceptable risk", because they represent an obvious threat to people's safety, livelihoods and rights, have been banned for the past two months.
It is therefore imperative to map your use of AI by:
1. Determining the nature of the AI used: either an automated system generating predictions, content, recommendations or decisions that can influence physical or virtual environments, or a general-purpose model developed to perform a wide range of distinct tasks, regardless of how it is placed on the market.
2. Analyzing the level of risk associated with the AI according to its nature. For AI systems, assess whether the system presents an unacceptable, high, limited or minimal risk. For AI models, determine whether the model presents a systemic risk, i.e. whether it has high-impact capabilities with a significant impact on the Union market.
This mapping will then allow you to define your role in the AI value chain (provider, deployer, importer, distributor) and therefore the obligations applicable to you, depending on the nature of the AI used and its level of risk.
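The mapping exercise described above can be kept as a simple internal register. The following Python sketch is purely illustrative (the class names, fields and example entries are assumptions, not terms prescribed by the AI Act or any official tooling); it only shows how nature, risk level and role might be recorded together for each tool:

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AIEntry:
    """One mapped AI tool: its nature, assessed risk level, and your role."""
    name: str
    nature: str          # "system" or "general-purpose model"
    risk: RiskLevel
    role: Role

    def is_allowed(self) -> bool:
        # Unacceptable-risk systems are banned under the AI Act.
        return self.risk is not RiskLevel.UNACCEPTABLE

# Illustrative register entries (fictitious tools):
registry = [
    AIEntry("internal chatbot", "system", RiskLevel.LIMITED, Role.DEPLOYER),
    AIEntry("CV-screening tool", "system", RiskLevel.HIGH, Role.DEPLOYER),
]

banned = [e.name for e in registry if not e.is_allowed()]
```

Such a register makes it straightforward to flag tools whose risk level forbids their use and to attach the right set of obligations to each entry.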
Once this mapping has been carried out, it becomes essential to organize, internally, your rules for the development, where applicable, and the use of AI.
Such a document should cover several aspects that are crucial to the safety of your business, your people, and the AI used. An AI Policy should ensure the ethical, transparent and responsible use of AI and contain the following elements:
- the identification of AI-enriched tools that you have mapped;
- informing your employees and other users about the use of AI and the decisions that can be made with AI;
- an internal process in case of development or use of a new AI that has not yet been mapped;
- the obligation to assess the risks of any new AI tool and to carry out a regular assessment of the AI systems and models already mapped;
- the obligation for your employees to use only mapped AI tools whose risk level has been assessed and validated by your company;
- a policy for managing the data used to train or operate AI tools, and the data generated by them in compliance with the key principles of the GDPR.
Thus, beyond the procedures to be followed for new projects within your company, such an internal policy will allow you to determine the behaviors to adopt on a daily basis when dealing with artificial intelligence.
It will also be appropriate to raise awareness among your employees of best practices and of the risks associated with the use of AI (breach of trade secrets, algorithmic bias, risk of errors, ethics, data security, risks related to data protection, etc.).
As soon as artificial intelligence involves the processing of personal data, it must comply with the requirements of the applicable data protection regulations.
In its recommendations of April 2024, in light of the AI Act and the GDPR, the CNIL also defined a seven-step procedure to ensure an actor's compliance when using artificial intelligence.
1. Define a specific purpose, by determining the functionalities of the AI system or model, its foreseeable capabilities that present the greatest risk and/or its conditions of use.
2. Determine your responsibility, as data controller or processor, depending on your role in the AI value chain and your degree of control and initiative over the processing carried out.
3. Define the legal basis applicable to your processing, among the six legal bases of the GDPR: consent, legitimate interest, performance of a contract, legal obligation, task carried out in the public interest, or protection of the vital interests of the data subject or of a third party.
4. Verify the possibility of reusing the data collected: if you collected it yourself, by carrying out a compatibility test; if it was collected by a third party, by ensuring the validity of that collection; and for publicly accessible data, by carrying out a case-by-case analysis.
5. Ensure compliance with the principle of data minimization, by collecting only data that is adequate, relevant and limited to what is strictly necessary to achieve your purpose, from the design of the AI onwards (privacy by design), for example by conducting a pilot study and/or consulting an ethics committee.
6. Define a retention period, distinguishing between retention for the development phase and retention for the maintenance or improvement of the system.
7. Assess the risks and conduct a data protection impact assessment (DPIA), in order to map and assess the risks of the processing operations put in place; this will allow you to define the security measures to be adopted to protect the personal data processed by your system.
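The seven steps above can also be tracked as a simple internal checklist. The sketch below is an assumption for illustration only (the step labels paraphrase the CNIL's steps; the structure is not an official template):

```python
# Paraphrased labels for the CNIL's seven-step compliance procedure.
CNIL_STEPS = [
    "define a specific purpose",
    "determine controller/processor responsibility",
    "define the legal basis",
    "verify the possibility of reusing collected data",
    "ensure data minimization",
    "define a retention period",
    "assess risks and conduct a DPIA",
]

def missing_steps(completed: set) -> list:
    """Return the labels of steps (numbered 1-7) not yet documented."""
    return [label for i, label in enumerate(CNIL_STEPS, start=1)
            if i not in completed]
```

Keeping such a checklist per AI project makes it easy to evidence, at any time, which compliance steps remain open.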
As with any processing of personal data, the central concern of the GDPR and the CNIL is to protect the persons concerned. AI is no exception, and informing individuals is an essential key in this regard.
As soon as an AI collects, processes, stores or uses personal data as part of its development and training, you must inform the persons concerned.
This information should:
· specify in a clear, precise and accessible manner the purposes of the processing;
· indicate the categories of data collected and their sources;
· clearly distinguish between processing for development purposes and processing for other purposes;
· indicate any data used for learning purposes, and retained by the AI;
· specify the nature of the risks associated with data extraction.
The CNIL nevertheless accepts a limit to this information obligation, provided that the way the information is delivered is adapted to the operational constraints of artificial intelligence.
Since the protection of the persons concerned also involves the possibility of exercising their rights, you must ensure the effective exercise of the rights of access, rectification, erasure, restriction, objection, withdrawal of consent, portability and post-mortem data rights ("digital death").
The persons concerned must be able to exercise their rights both:
· on training databases; and
· on AI models if they are not considered anonymous.
However, the CNIL acknowledges limits to the exercise of these rights and aims, in its recommendations of February 2025, to guarantee the rights of the persons concerned without hindering innovation in AI.
Thus, where the presence of a data subject's data in the model or its training base is not obvious, you may, as data controller, demonstrate that you are no longer in a position to identify the person.
In any case, in order to anticipate difficulties in the effective exercise of the rights of the persons concerned, you can:
- indicate to the persons concerned the additional information to be provided in order to allow them to be identified;
- identify the person concerned within the training data, if you still have it, before checking whether the data has been stored by the model and is likely to be extracted from it;
- rely on the typology of the training data, when you no longer have it, to anticipate the categories of data that may have been stored, in order to facilitate attempts to identify the person concerned;
- for generative AI, establish at the design stage an internal procedure consisting of querying the model to verify what data it may have stored about the person concerned, based on the information provided.
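The last point, for generative AI, can be sketched as an internal probing routine run at design time. The Python illustration below is a deliberately naive assumption (the `generate` callable stands in for your model's interface, and the substring check is a placeholder for a real matching strategy):

```python
def probe_model_for_personal_data(generate, identifiers):
    """Query the model with each identifier supplied by the data subject
    and flag responses that appear to reproduce it (naive substring check)."""
    findings = {}
    for ident in identifiers:
        prompt = f"What do you know about {ident}?"
        response = generate(prompt)
        findings[ident] = ident.lower() in response.lower()
    return findings

# Example with a stub model that has memorized one (fictitious) name:
def stub_generate(prompt):
    if "Jane Doe" in prompt:
        return "Jane Doe is a lawyer in Paris."
    return "I have no information."

report = probe_model_for_personal_data(stub_generate, ["Jane Doe", "John Smith"])
```

In practice, the probing prompts and the detection logic would need to be far more robust, but recording such checks at design time documents your efforts to honor access and erasure requests.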
As a best practice, the CNIL invites you in particular, as a provider, to anonymize training data or, failing that, to ensure that the AI model is anonymous at the end of its training.
Adopting the recommendations issued by the French and European data protection authorities from the design stage of AI systems and models is the key to securing artificial intelligence tools while guaranteeing the protection of the rights of the persons concerned.
The firm's IT/Data team supports you in securing your practices and integrating these requirements into your projects. Contact us today to anticipate regulatory changes and bring your tools into compliance.
Jeannie Mongouachon, partner lawyer and Juliette Lobstein, associate lawyer at Squair