Legal Design Roundtable. Safeguarding elections: How to Combat AI-generated disinformation?
- rossanaducato
- Jun 7, 2024
The European elections have officially started this week. This democratic exercise is probably among the most important in recent years in light of the scale of the internal and international challenges we are facing. Another factor characterising these elections is their unprecedented vulnerability to fake news and manipulations enabled by generative artificial intelligence (GenAI), such as deepfakes.
During the Legal Design Roundtable held in Brussels on the 26th of April, we discussed the steps taken by the EU to ensure the integrity of the 2024 elections. Various stakeholders from academia, policy-making, civil society, and industry debated the effectiveness of the measures in place for the online environment and shared their thoughts on the way ahead. Here, we summarise the takeaways of that discussion, moderated by Professor Alain Strowel (UCLouvain).

Krisztina Stump, Head of Media Convergence and Social Media Unit, DG CNECT, European Commission, kicked off the roundtable with figures illustrating the scale of the phenomenon at stake. There are growing concerns among the population about the risks that disinformation poses to democracy (as emerges from recent reports by the World Economic Forum and the Eurobarometer survey), particularly with the added complexity of GenAI (see here).
The EU has taken several actions to address the systemic risks stemming from disinformation. The Digital Services Act (DSA) offers the key framework, with obligations on very large online platforms (VLOPs) and very large online search engines (VLOSEs) to mitigate those risks. More specific measures are contained in the Code of Practice on Disinformation, such as the demonetisation of those spreading disinformation, transparency safeguards for political advertising, guarantees against manipulative behaviour (including malicious deepfakes) to protect the integrity of services, and transparency measures to empower users and researchers.

According to Stump, none of these measures alone is a silver bullet, "But all together they can be useful and efficient in reducing the spread of disinformation".
The Code has been signed by 44 actors, including some VLOPs, civil society organisations, advertising industry players, and technology providers, underlining the importance of a collaborative approach among stakeholders in this area.
With specific reference to the topic of our roundtable, the Commission has issued Guidelines under the DSA for providers of VLOPs and VLOSEs on the mitigation of systemic risks for electoral processes.
The guidelines contain a set of recommendations to be implemented for the upcoming elections (and beyond), including specific mitigation measures concerning GenAI. For instance, platforms that could be used to create AI-generated content should ensure that such content is detectable (through watermarks, for example) and based on reliable sources. They should also make efforts to warn users about potential errors and encourage them to check the veracity of the information, conduct red-teaming before releasing the system, and set appropriate metrics and safeguards.
Another set of recommendations is established for the VLOPs and VLOSEs that can facilitate the spread of deepfakes. In this case, some of the proposed mitigation measures consist of labelling the content (as AI-generated), including metadata and content moderation processes to facilitate the detection of deepfakes, and media literacy initiatives directed at users.
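To make the labelling-and-metadata idea concrete, here is a minimal, purely illustrative sketch (assuming the Pillow library; the field names "ai_generated" and "generator" are hypothetical): it embeds a machine-readable disclosure into a PNG file's metadata and reads it back. Plain metadata is easy to strip, which is why the guidelines point to more robust watermarking and provenance techniques; the sketch only illustrates the principle.

```python
# Minimal sketch: embedding an "AI-generated" disclosure in PNG metadata.
# Assumes the Pillow library and PNG files; the field names are hypothetical
# and far weaker than the watermarking/provenance schemes the guidelines
# envisage, since plain metadata can easily be stripped.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image, adding text chunks that disclose it as AI-generated."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # hypothetical disclosure flag
    metadata.add_text("generator", generator)  # e.g. the tool or model used
    image.save(dst_path, pnginfo=metadata)

def read_disclosure(path: str) -> dict:
    """Read back any disclosure fields so a downstream service could surface a label."""
    image = Image.open(path)
    return {k: v for k, v in image.text.items() if k in {"ai_generated", "generator"}}

if __name__ == "__main__":
    label_as_ai_generated("output.png", "output_labelled.png", "example-image-model")
    print(read_disclosure("output_labelled.png"))
```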
According to Stump, the EU toolkit for the European elections contains multiple instruments that can be effective in addressing concerns raised by the spread of disinformation online and deepfakes. However, it is key that the platforms follow the rules in place and take recommendations seriously.
Cecilia Zappalà, Head of EU Government Affairs and Public Policy at YouTube, provides an overview of the measures that a VLOP, like YouTube, is taking to protect election integrity.
The YouTube “4R” approach (remove, raise, reduce, reward) and its community guidelines are the key reference points, applying to the phenomenon of misinformation in elections, whether it is human-generated or synthetic content.
Within this context, the guidelines justify the removal of two types of content: manipulated content and misattributed content. The former is material that has been altered to mislead users or that can pose a serious risk of harm, e.g. footage altered to falsely depict a candidate announcing their withdrawal from the election race. The latter is content presented out of its original context: for example, a video of a head of State condemning a specific violent episode that is presented as referring to a different event.
The content moderation policy is enforced by a mix of human moderators (approx. 20,000 people across Google and YouTube) and machine learning systems that can flag content.
However, beyond removing content against the community guidelines, YouTube is taking further actions to monitor potential threats. For example, the Intelligence Desk, a team within their Trust & Safety division, is tasked with tracking specific trends on and off the platform to help the company react promptly when any potential issue arises.

Zappalà reports that this work was particularly valuable during the Slovak campaign last year: "This team helped us terminating fourteen YouTube channels that were spreading misinformation ahead of Slovak elections".
Another strategy YouTube takes in this context is to raise authoritative sources and reduce the spread of content deemed not to be authoritative through its recommender system. This evaluation is done by external reviewers, who are asked to watch content and rate its authoritativeness based on publicly available guidelines. This information is then fed to the machine learning systems to build models at scale, so that content considered authoritative features more prominently in YouTube's recommendations and search results.
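As a purely hypothetical illustration of the "raise authoritative content" logic (this is not YouTube's actual system, and the data structure, scores and weights are invented), one could imagine a re-ranking step that blends a relevance score with an authoritativeness score derived from reviewer ratings:

```python
# Toy illustration only: re-ranking candidate videos so that content rated as
# more authoritative surfaces higher. All values and weights are hypothetical;
# real recommender systems are far more complex.
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    relevance: float          # e.g. predicted interest for this user, in [0, 1]
    authoritativeness: float  # e.g. aggregated reviewer rating, in [0, 1]

def rerank(candidates: list[Candidate], authority_weight: float = 0.6) -> list[Candidate]:
    """Order candidates by a weighted blend of relevance and authoritativeness."""
    def score(c: Candidate) -> float:
        return (1 - authority_weight) * c.relevance + authority_weight * c.authoritativeness
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    pool = [
        Candidate("clip-rumour", relevance=0.9, authoritativeness=0.2),
        Candidate("news-report", relevance=0.7, authoritativeness=0.9),
        Candidate("explainer", relevance=0.6, authoritativeness=0.8),
    ]
    for c in rerank(pool):
        print(c.video_id)
```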
Concerning GenAI content specifically, YouTube is implementing some transparency measures. First, it requires creators to disclose when the content they upload is synthetic. Second, the platform labels such content, with the label appearing either on the video itself or in the description. The former, more prominent, solution is usually adopted when the content relates to elections.
An additional feature that has yet to be rolled out in Europe is the possibility for people to request the removal of content that reflects their likeness.
Eliška Pírková, Senior Policy Analyst and Global Freedom of Expression Lead at Access Now, intervenes in the debate with a critical analysis of the status quo regarding the creation and dissemination of AI-generated content.
The phenomenon has become a growing concern, prompting various actors worldwide to take initiatives.
In Europe, thanks to the DSA, the European Commission issued the first information requests to VLOPs to obtain data on how they identify the risks stemming from generative AI. As a follow-up, the DSA enforcement team and DG CONNECT issued guidelines on electoral integrity, which contain specific sections on watermarking and labelling AI-generated content. In 2024, major VLOPs announced their commitment to fight manipulative AI-generated content before the European elections through the adoption of automated tools to label or detect deepfakes.
Providing comments in response to the public consultation on the DSA guidelines, Access Now expressed serious concerns about the use of watermarking as a mitigation measure and the risks it poses to fundamental rights, such as privacy.
Pírková noted that GenAI indeed has the potential to intensify the spread of misinformation, whether we look at the creation or the dissemination stage of AI-generated content.
As to the creation stage, Large Language Models allow anyone to generate human-like content relatively effortlessly. Pírková points out that more research is needed for conclusive evidence on whether AI-generated content also increases the quality and persuasiveness of the message. To this end, distinguishing between text-based and image-based outputs might be helpful. Indeed, there are some signals that AI-generated text can be perceived as more credible (even when it fabricates its sources!).
Regarding image-based outputs, the technology does not seem to have reached that level of sophistication yet. In several cases, it is still possible to identify that a picture or video was altered (as in the well-known manipulated video portraying Joe Biden engaging in inappropriate behaviour with his granddaughter; in a controversial decision, Meta decided not to remove the content precisely because it was considered evidently fake).
The assessment is more complex when it comes to AI-generated voices [NR: in the aftermath of the roundtable, the Scarlett Johansson complaint against OpenAI drew public attention to audio deepfakes].
There is a risk that this technology can be abused in the electoral system, but Pírková also sheds light on the potentially positive uses of GenAI for nurturing democratic discourse, which could be worth exploring. She referred, in particular, to the case of the former Pakistani Prime Minister Imran Khan, who used GenAI to create political speeches and communicate with his supporters while in jail and prevented from running in the elections.
As to the dissemination stage of deepfakes, some studies show that GenAI systems can boost the spread of disinformation online. However, many of the issues in this context can be traced back to already identified problems, which lie within the realm of control of platforms, their recommender systems, and monetisation mechanisms. In the EU, various progressive measures are available to protect users (for instance, in the DSA), and they can achieve significant results without creating the need for sectoral legislation on GenAI. However, the enforceability of such rules remains to be seen.
She concludes with a bittersweet note: while we have a number of promising tools in place, most of them are very recent and will require time to be implemented.

"We are cautiously optimistic but also quite worried about what is coming our way. Hopefully, this will be a good learning curve for the upcoming European elections, where many of these measures should be fully up and running".
Jean De Meyere, doctoral researcher at the Centre for IT and IP Law at KU Leuven and at UCLouvain, brought his experience of investigating misinformation online.
In his opening remarks, he shared a frustration common among researchers: the lack of access to platforms' information about how they moderate content, how their ranking algorithms work, and what measures they take to reduce misinformation. Access to this data is crucial for assessing the effectiveness of those measures and verifying whether they impact fundamental rights.
With the DSA, the level of transparency is likely to improve, as this instrument opens up several new possibilities for researchers, civil society, and users to receive information on platforms' actions. He noted at least three relevant new measures.
The first one: Art. 14 DSA governs how VLOPs must write and distribute their T&C (including privacy policies, community norms, etc.) to users. In particular, they have to make available a summary of them in a "concise, easily accessible and machine-readable" way, using "clear and unambiguous language". Leveraging this obligation, the Commission has set up the Digital Services Terms and Conditions Database, an initiative that collects and indexes the "legal documents" produced by platforms, making them available for anyone to examine. This can be very useful to researchers who want to perform data analysis on platforms' T&C.
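As an illustration of the kind of analysis this resource could support, here is a minimal sketch, assuming the T&C documents have been saved locally as plain-text files (one per platform); the folder layout and the list of monitored terms are hypothetical:

```python
# Hypothetical sketch: simple keyword analysis over platform T&C documents
# saved locally as plain text (e.g. exported from the Digital Services Terms
# and Conditions Database). The file layout and term list are assumptions.
import re
from collections import Counter
from pathlib import Path

TERMS = ["misinformation", "disinformation", "deepfake", "synthetic", "manipulated"]

def term_counts(text: str) -> Counter:
    """Count occurrences of each monitored term (case-insensitive, whole words)."""
    counts = Counter()
    for term in TERMS:
        counts[term] = len(re.findall(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE))
    return counts

def analyse_folder(folder: str) -> dict[str, Counter]:
    """Map each platform's T&C file (one .txt per platform) to its term counts."""
    return {
        path.stem: term_counts(path.read_text(encoding="utf-8"))
        for path in Path(folder).glob("*.txt")
    }

if __name__ == "__main__":
    for platform, counts in analyse_folder("tc_documents").items():
        print(platform, dict(counts))
```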
A second important provision in the DSA concerns the statement of reasons, i.e., the explanation that a platform shall provide whenever it moderates content, informing users about the content restricted, the type of restriction (e.g., reduction of visibility, suppression, or deletion of the account), the duration of the restriction, the grounds for the decision, and the role of automated tools in moderating the content.
All these statements are collected in the publicly accessible DSA transparency database, which currently contains over 16 million entries. According to De Meyere, this is another useful resource that can facilitate research. For example, he showed how he used the database to check how many VLOPs use AI to detect misinformation and moderate content with automated tools. However, he also noted that not all platforms provide the same level of detail in their statements of reasons, especially when it comes to the grounds justifying the decision. For instance, in a statement issued by Meta, the platform cited a violation of Section 3.2 of its ToS, a catch-all provision about the content allowed on the platform.

"It is like you are fined by the police, and the judge would explain that you've got to respect the Civil Code, but without specifying which specific provisions".
Hence, these differences in the information provided do not facilitate the job of those wanting to look into misinformation and content moderation. Some technical aspects of the database itself could also be improved, such as implementing an API (the Commission is already working on it).
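By way of illustration (a hedged sketch, not De Meyere's actual methodology), one could run a simple tally over a local export of the statements of reasons, assuming a CSV with columns such as platform_name and automated_detection; both column names are assumptions about the export format, not a documented schema:

```python
# Hypothetical sketch of analysing a local export of the DSA transparency
# database. The column names (platform_name, automated_detection) are
# assumptions about the export format.
import csv
from collections import Counter

def automated_detection_by_platform(csv_path: str) -> Counter:
    """Count statements of reasons per platform that report automated detection."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            if row.get("automated_detection", "").strip().lower() in {"yes", "true", "1"}:
                counts[row.get("platform_name", "unknown")] += 1
    return counts

if __name__ == "__main__":
    for platform, n in automated_detection_by_platform("statements_of_reasons.csv").most_common():
        print(f"{platform}: {n}")
```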
Finally, another important set of measures comes from Art. 40 DSA. Through a specific (and complex) process, vetted researchers can request access to non-publicly available platform data to investigate systemic risks (operative guidelines still need to be issued, for instance, in Belgium). However, a second route is available to all researchers for investigating systemic risks using publicly available data. De Meyere went down this road himself and reported that it was not an easy one: researchers have to provide extensive details on their research, and the form to complete differs from one platform to another (hence increasing transaction costs for those wanting to survey multiple platforms). Most importantly, a platform can reject the request without providing meaningful information on what the researcher did wrong, and often without offering recourse against the decision.
All in all, the DSA has, in principle, improved the conditions for access to information retained by some private actors, allowing impactful research on issues such as online misinformation and how to counteract it. However, several challenges remain as to how effectively these new measures can be implemented.
Michael Doherty, Professor of Law at Lancaster University and editor-in-chief of the Legal Design Journal, concludes the round of interventions with a reflection from his perspective as a constitutional scholar and legal designer.
He praised the EU's efforts with the DSA and the election guidelines as a significant attempt to intervene and regulate political speech in defence of democracy.
The balance between democracy and freedom of expression is certainly not a new topic in the constitutional debate. One of the pillars of liberal democracies is to recognise the widest possible freedom of expression, restricting free speech only where strictly necessary to protect other pressing social interests, such as privacy or national security. Such freedom includes the right to receive information from and through VLOPs and VLOSEs.
He also recalls that, in the context of communication around elections, one of the starting points of the debate is the "marketplace of ideas", i.e. electors are primarily entrusted with deciding what information to access and what weight to give it.
Hence, both considerations seem to militate against strong intervention in the regulation of political speech. However, the principle of proportionality can justify restrictions when there is a threat to other values.
According to Doherty, the measures issued so far in the EU aim to reduce information asymmetry and go in the right direction: the spread of misleading information in the electoral context, especially when generated by AI, is indeed a threat to democracy, and the restrictive measures are rationally connected to that threat, with the aim of protecting voters and faith in the "marketplace of ideas".
At the same time, he stressed the need to remain vigilant about the risks of such restrictions on political speech, especially during election times.

"We should be justifiably nervous about endowing States with powers to limit speech, including about governments".
Where can we draw the line? To this end, he recalls Popper's theory of the open society and the dilemma of freedom of expression: should we recognise such freedom for people who would ultimately use it to undermine democracy, the rule of law, and the preservation of freedom of expression itself? For the most dangerous types of content, Doherty acknowledges that the paradox does not hold: restricting them is, on the contrary, a necessary responsibility of a democratic society.
Can design play a role within this context? Doherty tellingly recalls the 2000 US presidential election, which was decided by a handful of votes in Florida. That election was marked by a major controversy over a series of poor design decisions in the ballot papers, which led people to vote for the wrong candidate or to invalidate their vote.
What if the right design decisions had been made at the time? He recognised that alternative history is a dangerous exercise, but what can be learned from this story is that "design matters, and it has an impact on democracy".
As a legal designer, Doherty was overall pleased to see many references to human-centric design in recent EU interventions. However, he wonders to what extent the notion of human-centric design has been truly internalised, or the toolkit of design thinking embraced, in the EU policy-making effort.
There are some encouraging signs in this direction.
For example, the Commission's recommendation that VLOPs provide users with interfaces to label AI-generated content can be considered an example of design thinking used as a communication tool.
However, human-centric design can also be used as a methodology for shaping many of the measures addressing misinformation. The AI Act, for instance, proposes regulatory sandboxes to facilitate (by design) the responsible development of compliant AI systems.
According to Doherty, there is scope for a more meaningful adoption of design practices, which could prove particularly useful for more extensive user research on the impact of disinformation on voters in the online and offline ecosystem. Indeed, he reminds us not to lose sight of the offline world, advocating for bringing the design of (digital) interventions into the "analogue" dimension.