Between the adoption of Regulation (EU) 2024/1689 (AI Act), the February 2025 AI summit that brought together European data protection authorities in Paris, the opinions of the European Data Protection Board (EDPB) and the CNIL's action plan for balancing innovation and the protection of individuals, the regulatory framework around artificial intelligence is becoming clearer and stronger.
In line with this dynamic, the CNIL published three recommendations in April 2024 and February 2025 on the application of the GDPR (EU Regulation 2016/679) to AI systems and models. These recommendations set out the key principles that must be respected in order to ensure compliance and to protect the rights of data subjects.
So, what are your obligations and how can you ensure that your AI tools are compliant?
Since February 2025, “unacceptable risk” AI systems, i.e. those that represent a clear threat to individuals' security, livelihoods and rights, are prohibited.
It is therefore imperative to map your use of AI, by:
1. Determining the nature of the AI you use: an AI system, which generates predictions, content, recommendations, or decisions that can influence physical or virtual environments; or a general-purpose AI model, developed to perform a wide range of distinct tasks, regardless of how it is placed on the market.
2. Analyzing the level of risk associated with the AI according to its nature. For AI systems, you must assess whether they present an unacceptable, high, limited, or minimal risk. For AI models, you must determine whether or not they present systemic risk, i.e. high-impact capabilities with a significant impact on the Union market.
This will then enable you to define your role in the AI value chain (provider, deployer, importer, distributor) and therefore the obligations applicable to you, depending on the nature of the AI used and its level of risk.
Once this mapping has been carried out, it becomes essential to organize internally your rules for developing (where applicable) and using AI.
Such a document should cover several crucial aspects to guarantee the safety of your company, people, and the AI used. To this end, an AI Policy should ensure an ethical, transparent and responsible use of AI and contain the following elements:
- the identification of AI-enhanced tools that you have mapped;
- information for your employees and other users on the use of AI and the decisions that can be made using AI;
- an internal process for developing or using a new AI that has not yet been mapped;
- the obligation to assess the risks of any new AI tool and to carry out regular assessments of AI systems and models that have already been mapped;
- the obligation for your employees to use only AI tools that have been mapped and for which the risk level has been assessed and validated by your company;
- a policy for managing the data used to train or operate AI tools, and the data generated by them in compliance with the key principles of the GDPR.
Thus, in addition to the procedures to follow for new projects within your company, such an internal policy will define the behaviors to adopt in your day-to-day use of artificial intelligence.
You will also need to raise your employees' awareness of best practices and of the risks associated with the use of AI (risk of trade secret violation, algorithmic biases, risks of errors, ethics, data security, data protection risks, etc.).
Whenever artificial intelligence involves the processing of personal data, it must comply with the requirements of the applicable data protection regulations.
Taking into account the AI Act and the GDPR, the CNIL's April 2024 recommendations set out a seven-step procedure to ensure a player's compliance when using artificial intelligence.
1. Define a specific purpose, by determining the functionalities of the AI system or model, its foreseeable capabilities most at risk and/or its conditions of use.
2. Determine your responsibility between data controller and data processor, depending on your role in the AI value chain and your degree of control and initiative over the processing carried out.
3. Define the legal basis applicable to your processing, among the six legal bases of the GDPR: consent, legitimate interest, performance of a contract, legal obligation, mission of public interest or safeguarding of the vital interests of the data subject or of a third party.
4. Check the reusability of collected data, whether you collected it yourself by carrying out a compatibility test, from a third party by ensuring the validity of its collection, or by carrying out a case-by-case analysis for publicly accessible data.
5. Ensure compliance with the principle of data minimization, by only collecting data that is strictly adequate, relevant and limited to what is necessary to achieve your purpose, right from the AI design stage (Privacy by design), by conducting a pilot study and/or by consulting an ethics committee.
6. Define a retention period, clearly distinguishing between retention for the development phase and retention for system maintenance and/or improvement purposes.
7. Assess risks and carry out a data protection impact assessment (DPIA), to map and assess the risks of the processing operations in place, enabling you to define the security measures to be adopted to protect the personal data processed by your system.
As with any processing of personal data, the main concern of the GDPR and the CNIL is to ensure the defense of data subjects. AI is no exception to this, and informing people is an essential factor in this respect.
Whenever an AI collects, processes, stores, or uses personal data as part of its development and training, you must inform the people concerned.
This information should:
· specify the purposes of the processing in a clear, precise and accessible manner;
· indicate the categories of data collected and their sources;
· clearly distinguish between processing operations for development purposes and those for other purposes;
· indicate any data used for learning purposes, and retained by the AI;
· specify the nature of the risks associated with data extraction.
However, the CNIL acknowledges a limit to this information obligation: its delivery may be adapted to the operational constraints of artificial intelligence.
Since the protection of data subjects also involves the possibility of exercising their rights, you must ensure the effective exercise of the rights of access, rectification, erasure, restriction, objection, withdrawal of consent, portability and post-mortem directives ("digital death").
Data subjects must be able to exercise their rights both:
· on training databases;
· on the AI model itself, where it may have memorized personal data.
However, the CNIL acknowledges limits to the exercise of these rights: its February 2025 recommendations aim to guarantee the rights of data subjects without hindering AI innovation.
Thus, if the presence of a data subject's data in the model or its training base is not obvious, you may, in your capacity as data controller, demonstrate that you are no longer able to identify that person.
In any case, to anticipate difficulties in the effective exercise of data subjects' rights, you may:
· inform data subjects of the additional information you need to identify them;
· identify the data subject within the training data, if you still have it, before checking whether it has been stored by the model and is likely to be extracted from it;
· rely on the typology of training data when it is no longer available to anticipate the categories of data that are likely to have been memorized, in order to facilitate attempts to identify the data subject;
· for generative AI, establish an internal procedure right from the design stage, consisting of querying the model to check the data it may have memorized about the data subject thanks to the information provided.
As good practice, the CNIL urges you, as a provider, to anonymize training data or, failing that, to ensure that the AI model is anonymous once it has been trained.
Adopting the French and European data protection authorities' recommendations right from the design stage of AI systems and models is the key to anticipating the security of artificial intelligence tools, while guaranteeing the protection of data subjects’ rights.
Squair’s IT/Data team can help you secure your practices and integrate these requirements into your projects. Contact us now to anticipate regulatory changes and make your tools compliant.
Jeannie Mongouachon, Partner and Juliette Lobstein, Associate at Squair