Analysis
29 May 2024

The challenge of regulating generative artificial intelligence

On February 14, the French National Assembly's Law Commission published a report on the challenges posed by generative artificial intelligence with regard to the protection of personal data and the use of generated content. The report takes stock of the European regulatory framework, which still needs to be perfected, and proposes changes to domestic law, providing an interesting overview of how the French legislature is approaching these issues.

 

On 14 February 2024, the French National Assembly’s Law Commission presented a report (“the Report”) on the challenges of generative artificial intelligence (“GAI”) in terms of personal data protection and the use of generated content, authored by Members of Parliament (“MEPs”) Philippe Pradal and Stéphane Rambaud.[1]

This report is not the first parliamentary work on artificial intelligence (“AI”), but it is the first parliamentary report specifically dedicated to generative artificial intelligence.[2]

In their work, the MEPs define GAI as a sub-domain of AI, a field that has existed for several decades[3] and is now developing exponentially.[4] They define AI as “a process that enables a machine to produce an intellectual result that reproduces or simulates human intelligence”, with the particularity of “responding to a new situation on the basis of previous situations, thus simulating human learning”.[5]

According to the Report, GAI can be defined as a technology whose “characteristics include the rapid production of original visual, audio or written content, sometimes through a simple interface that does not require any particular computer skills”, including “large language models (LLMs) [that] enable conversing with the GAI in a human language”.[6]

The Report points out that the availability to the general public of GAI, such as ChatGPT, has brought to light certain new or pre-existing issues such as the compliance of these tools with personal data protection legislation, the use of generated content in terms of civil or even criminal liability, and the protection of fundamental freedoms with regard to the potential of these technologies to manipulate information.[7]

This is illustrated, for example, by the increasingly frequent use of these technologies by police forces and the resulting questions about their ethical use, which have led to Soft Law initiatives such as the Toolkit for responsible AI Innovation in Law Enforcement, published jointly in 2023 by Interpol and the United Nations Interregional Crime and Justice Research Institute.[8]

The MEPs thus adopt a pragmatic approach in the Report, noting that GAI is as much a creator of opportunities, in that it can boost productivity, enrich human creativity and provide new resources for communication, training and education, as it is a source of risks: risks to privacy, risks of bias and outside influence, risks of error, risks of misuse (identity theft and deepfakes in particular), and social risks such as job losses resulting from the productivity gains made possible by GAI.[9]

As a result, public authorities are faced with the dilemma of not restricting the range of opportunities offered by GAI, while not overlooking the extent of the potential risks of this technology for citizens. The aim of the MEPs’ work is to strike the right balance between regulating GAI and stimulating innovation, while ensuring effective protection for individuals and society.[10]

This report is therefore interesting in that it provides an overview of the legislature’s understanding of how GAI technologies work and the challenges they pose, as well as the way in which they are regulated today, and could be in the future, at both European (I) and national (II) levels.

 

I. At European level, a body of legislation in the process of being strengthened

 

Firstly, the Report notes that the European approach is not focused solely on GAI but addresses AI as a whole.[11]

In this context, the MEPs note that the existing European legislative framework, consisting of the General Data Protection Regulation (referred to by its French acronym, “RGPD”),[12] the Digital Markets Act (“DMA”)[13] and the Digital Services Act (“DSA”),[14] goes some way towards regulating AI.[15]

The RGPD indeed lays down several principles, including:[16]

  • Lawful, fair, and transparent data processing
  • Limited purpose of data collection
  • Integrity and confidentiality of collected data
  • Respect for the rights of data subjects with regard to their data

Violations of these principles also carry heavy penalties, and the Commission Nationale de l’Informatique et des Libertés (“CNIL”) is responsible for implementing the RGPD in France.[17]

With regard to the DMA, the Report states that it aims to end the dominance of large Internet companies, in particular the GAFAMs, by introducing strict rules to promote competition, protect small businesses and stimulate innovation in the European digital market, while providing for severe penalties in the event of non-compliance.[18] The DSA, for its part, aims to monitor and hold accountable online platforms, protect the rights of Internet users and support European small businesses in this sector, by combating the dissemination of illegal or harmful content, such as online hate, misinformation or the sale of illegal products.[19]

Finally, with regard to the Data Act,[20] due to come into force in September 2025,[21] the Report points out that it aims to ensure fairness between economic players in the use of data generated by connected objects, and to enable users to take full advantage of the digital data they generate.[22]

Nevertheless, the Report highlights the inadequacy of the European legislation: the texts in question apply only indirectly to AI and therefore do not cover all the issues raised by this technology. While the RGPD provides a framework for the use of personal data by AI systems, it does not make it possible to impose content labeling, nor to control the biases introduced by an algorithm or an AI training method. Similarly, while the Data Act may make it easier for AI developers to access data in order to innovate in this sector, it is not enough to guarantee the quality of the AI systems thus designed.[23]

Faced with these limitations, the MEPs note that the European Union wishes to equip itself with appropriate regulations, which the AI Act should help achieve.[24] Negotiations on this regulation, proposed in April 2021 by the European Commission and providing for a regulatory framework that classifies AI systems according to the risks they pose to users and society, thus determining the appropriate level of regulation, resulted in a political agreement in December 2023.[25] The final text was formally approved by the European Parliament on 13 March 2024[26] and is not expected to come into force before 2026.[27]

The new rules laid down in the regulation will be directly and identically applicable in all member states, and follow a risk-based approach:[28]

  • Minimal risk: Minimal-risk applications, such as recommendation systems or spam filters based on AI, will be exempt from any obligation, as long as they present little or no risk to citizens’ rights or safety. Companies will nevertheless be able to voluntarily commit to additional codes of conduct for these AI systems.
  • High risk: AI systems considered high risk will have to meet stringent requirements, particularly with regard to risk mitigation systems; the quality of the data sets used; activity logging; detailed documentation; the provision of clear information to the user; human control; and a high level of performance in terms of robustness, accuracy and cybersecurity. These systems can be found in sectors such as water, gas, electricity, medicine, law enforcement, border control, administration of justice and biometric identification.
  • Unacceptable risk: AI systems considered a clear threat to people’s fundamental rights will be banned. These include AI systems or applications that manipulate human behavior to deprive users of their free will, such as toys that use voice assistance to incite minors to engage in dangerous behavior, or systems that enable social rating by states or companies, and certain predictive policing applications. In addition, certain uses of biometric systems will be prohibited, such as emotion recognition systems used in the workplace, and certain systems for categorizing people or remote biometric identification in real time for law enforcement purposes in publicly accessible spaces (with rare exceptions).

The regulation also lays down specific obligations in terms of transparency. Users will have to be aware that they are interacting with conversational robots. Ultra-realistic AI-generated video and other AI-generated content will have to be flagged as such, and users will have to be informed of the use of biometric categorization or emotion recognition systems. These information obligations will be the responsibility of providers.[29]

Lastly, the regulation provides that companies that fail to comply with the rules may be fined, the amount of which will vary according to the infringements committed: 35 million euros or 7% of annual worldwide turnover for violations relating to prohibited AI applications; 15 million euros or 3% of annual worldwide turnover for failure to comply with other obligations; and 7.5 million euros or 1.5% of annual worldwide turnover for providing inaccurate information.[30]

When it comes to GAI systems specifically, these will have to obey the overall framework, but should also be subject to specific transparency obligations because of “the specific risks of manipulation they present”. The proposed text provides in particular that when people interact with an AI system, or when their emotions or characteristics are recognized by automated means, they must be informed of it; and that when an AI system is used to generate or manipulate images, audio or video content to produce a result that substantially resembles genuine content, it should be mandatory to declare that the content was generated by automated means.[31]

The authors of the Report therefore welcome the progress represented by this draft European regulation, while stressing the need to take into account the adverse economic consequences that overly restrictive regulation could have for emerging European players,[32] as such regulation would also reinforce the advantage of the non-European players already dominating the market.

 

II. At national level, the role of the regulator and the protection of citizens’ fundamental freedoms are being strengthened on two fronts

 

The report also proposes that, in parallel with the drafting of European regulations, certain changes be made to domestic law to enable better regulation of GAIs, so that they respect the rule of law.[33]

Firstly, and insofar as the European regulations in preparation will implement regulatory mechanisms, the Report indicates that the CNIL appears to be the French authority best placed to regulate GAIs.[34] Indeed, the Report stresses that personal data occupies a predominant place in GAI issues, since it comes into play at the model training and learning stages, in the use of data provided by users and in the use of the data produced. The MEPs point out that the CNIL already has cutting-edge expertise in this field, acquired in the context of its assessment of the compliance of AI systems with the RGPD.[35]

In this respect, the Report recommends that the CNIL’s resources evolve to enable it to fulfill this role in the future, and that it become a “High Authority in charge of data protection and the control of artificial intelligence”, equipped with “a large number of experts and technicians capable of controlling complex algorithms”.[36]

Secondly, the Report recommends adapting the criminal justice response to the new risks brought about by the use of GAI. In the wrong hands, these systems can facilitate the commission of existing offences by assisting their perpetrators or by scaling up and/or automating criminal operations, but they can also give rise to new behaviors that are poorly covered by existing incriminations.[37]

In this respect, drawing a parallel with the existing aggravating circumstance when an offence is committed using cryptology[38], the Report recommends extending this aggravating circumstance to offences committed in connection with the use of algorithmically generated content.[39]

In addition, the Report recommends adapting certain criminal offences to take account of the new behaviors arising from the use of GAIs. For example, the Report recommends amending article 226-8 of the French penal code, which punishes the use of photomontage without a person’s consent, and introducing a new article 226-8-1 in the same code to penalize deepfakes produced without the consent of the person represented.[40] The MEPs here endorse two amendments proposed by the government on 3 July 2023 as part of the examination of the bill to secure and regulate the digital space, which is currently being debated in Parliament.[41]

Finally, the Report recommends anticipating the consequences that GAI systems could have on civil and criminal liability mechanisms. While noting that this issue goes beyond the scope of GAIs and concerns AI in general, the MEPs point out that it is particularly relevant to GAIs, since the opacity of the learning phase of these systems, and of the algorithms and data used, means that the liability of the author of the GAI system is not always certain in the event of damage.[42]

With this in mind, the Report recommends adapting the liability regime for GAIs to their specific characteristics, notably by lightening the burden of proof to limit the asymmetry between users and providers, initiating a study of the liability of service providers relying on a GAI system that they did not design themselves, and reforming the legal framework for group action in the field of personal data protection.[43]

This report once again demonstrates the interest that AI and GAI are generating among French public authorities, and the French National Assembly is no exception in this respect: the French Market Authority has also recently taken an interest in the contributions of this technology. It remains to be seen whether this interest will translate into concrete legislative and regulatory advances in the months and years to come.
