Submission to the European Commission’s Consultation: Guidelines for Providers of VLOPs & VLOSEs

Submission by Digital Action to the European Commission’s Consultation on Guidelines for Providers of Very Large Online Platforms and Very Large Online Search Engines on the Mitigation of Systemic Risks for Electoral Processes 

Thank you for the opportunity to feed into the European Commission’s development of Guidelines for Providers of Very Large Online Platforms and Very Large Online Search Engines on the Mitigation of Systemic Risks for Electoral Processes. We would like to start by commending the Commission for its efforts to protect elections and people in this historic election year.

Digital Action is a non-governmental organisation whose expertise centres on bringing civil society organisations together to influence policy and address digital threats that have real-world consequences. Since 2019, Digital Action has been mobilising a global network of partners to demand better standards from the governments and corporations responsible for our digital environments.

Our organisation is also the convener of the Global Coalition for Tech Justice, a global movement to ensure Big Tech plays its role in protecting elections and citizens’ rights and freedoms across the world, particularly in the Global Majority, where companies – Meta/Facebook, Google/YouTube, X, TikTok and others – have been negligent in dealing with the impacts of their social media and messaging products. The Coalition has over 200 member organisations and individuals from around the world, mostly from Global Majority countries, working to advance a broader agenda on platform regulation and governance, as well as equitable and effective safeguards to protect democratic and human rights.

Our submission focuses on key priorities we deem particularly important in the European Commission’s proposed guidelines, not least in view of the EU’s role as a global norm-setter in relation to platform regulation, governance and liability regimes, and for the accountability and transparency of Very Large Online Platforms. Our submission therefore addresses the following points:

  1. The EU as a global norm-setter
  2. Increased Transparency
  3. Access to Data
  4. Use of AI-Generated Content
  5. Recommender Systems

  1. The EU as a global norm-setter

Digital Action commends the initiative, as well as the move to more detailed election guidelines for VLOPs, given the increased risks to information and electoral integrity, and to fundamental rights, during elections. The proposed guidelines can play a role in driving higher standards globally in how VLOPs operate in a historic election year across the world.

On that note, it is important to flag that corporate leaks and on-the-ground civil society experience suggest that social media companies are barely investing in harm assessment and mitigation outside of the US and English language content. Yet as the companies’ influence on information ecosystems in Africa, Asia and Latin America grows, their impacts are becoming more serious, more extensive, and more frequent. When it comes to the widespread use and impact of social media, 2024 – with its global megacycle of 65+ elections in over 55 countries – will be the make-or-break year for democracy and freedoms and the ultimate test for social media companies.

At a moment when lawmakers in other parts of the world are seeking to create new, or renew existing, liability models and platform-related regulations, they could benefit from taking forward lessons from the DSA such as fundamental rights impact and risk assessments, increased transparency, and the tiered approach to obligations (VLOPs and the rest). But the development and approval of regulation takes time, and we have entered the 2024 elections mega-cycle in a rather dire situation: elections in countries such as Bangladesh, Indonesia1 and Taiwan2 have already concluded, and their processes revealed that platforms were even less prepared to “handle problems and have been considerably less responsive”3. In this context, users end up being directed towards lower-quality information: disinformation, misinformation, fake news, sensationalist content, rumours and hoaxes.

It is urgent to stop the false narratives spread by internal or external influence operations aimed at undermining voters’ trust and fostering manipulation. The EU is setting an example for the rest of the world by raising the level of requirements for VLOPs through regulations such as the DSA and the AI Act, as well as the present guidelines.

  2. Increased Transparency

Transparency is the stepping stone towards a greater understanding of tech harms, including their links to and relationship with the investments, policies, design and decisions (or omissions) of VLOPs, such as the size of global and regional teams and the resources deployed for content moderation. Digital Action fully supports the proposals to secure transparency in the draft guidelines, including the publication of risk assessments, mitigation measures and fundamental rights impact assessments, which VLOPs have failed to do in other regions to date. We believe achieving this level of transparency in the European Union will set an important precedent globally.

Furthermore, the guidelines could push for more detailed disclosures than those provided so far under the DSA’s transparency reports, in order to distinguish the resources and measures that are exclusive to the European Union from those that also serve other global markets. Such disclosures would allow a more accurate assessment of VLOPs’ compliance with the DSA while also providing valuable information for the EU’s dialogue with other countries and regions on fostering improved international platform governance.

  3. Access to Data

Access to data is fundamental for researchers, in civil society and academia, within and outside the EU, who further the EU’s understanding of tech harms and how these relate to platform failures and governance. Unfortunately, researchers in Global Majority countries face significant barriers to accessing data due to high API fees.4 The guidelines are therefore a good opportunity to recommend improved voluntary provision of data access to elections-focused researchers, in order to address the equity gap among platform researchers globally.

During such a pivotal year for elections, access to data should be free for elections-focused researchers everywhere as we rely on third party scrutiny and research in order to achieve a more accurate, global assessment of systemic platform failures.

  4. Use of AI-Generated Content

Digital threats to election integrity from AI-generated content, observed most recently in elections in Taiwan and Pakistan, point to the urgency of addressing the manipulative potential of generative AI. We therefore welcome the inclusion of specific recommendations in the guidelines to ensure enhanced mitigation measures for generative AI content, which will feed into the process of driving much-needed global standards.

  5. Recommender Systems

Digital Action fully supports the Joint Submission on the Commission’s Guidelines for Providers of Very Large Online Platforms and Very Large Online Search Engines on the Mitigation of Systemic Risks for Electoral Processes5, which recommends the inclusion of specific guidelines on recommender system safety to moderate the virality of content that threatens electoral integrity. We would therefore like to reinforce that “achieving safety by design is at the heart of tackling content that threatens the integrity of the electoral process. By disabling profiling-based recommender systems by default and optimising for values other than engagement, VLOPs can take significant steps toward mitigating the systemic risks that their recommender systems pose to election integrity. However, these must be done in tandem with other measures to ensure safety by default whilst maintaining a positive user experience”.


We thank the European Commission for the opportunity to contribute to the development of these guidelines and for its openness towards civil society and academia’s input in broader processes such as the DSA’s implementation. Digital Action looks forward to continuing to contribute to the development of the EU’s regulatory and policy frameworks, as well as fostering stronger global dialogue on platform governance and accountability.


1 Global Coalition for Tech Justice. Indonesia Country Briefing.

2 Global Coalition for Tech Justice. Taiwan Country Briefing.

3 Global Coalition for Tech Justice. Indonesia Country Briefing.

4 Rachelle Faust and Dan Arnaudo. The Urgency of Social Media Data Access for Electoral Integrity. Tech Policy Press.

5 Joint-Submission on the Commission’s Guidelines for Providers of Very Large Online Platforms and Very Large Online Search Engines on the Mitigation of Systemic Risks for Electoral Processes.


If you’d like to find out more, get in touch at: [email protected]


Global Coalition for Tech Justice – Campaign asks for the 2024 Year of Democracy campaign 

The campaign’s set of specific, measurable demands 

Our coalition is making a public call for action for Big Tech companies to fully and equitably resource efforts to protect 2024 elections. Companies should address every country in which they operate and make investments in trust and safety proportionate to risk, not market size.

Where companies fail to protect election integrity while respecting human rights obligations (including the pre- and post-election phases), regulators in democratic jurisdictions must step in to protect elections and people. Democratic institutions must ultimately be the guarantors of free and fair elections and ensure safe election periods with full respect for human rights. In elections where governments are oppressive or fostering democratic decline, regulators in countries where companies are headquartered must take the lead in holding companies to account to ensure their compliance with international human rights and electoral standards.

A call for “Big Tech Action Plans” 

Fully and equitably resourced Action Plans – at global and country levels – should be designed to protect the freedoms, rights and safety of users during the 2024 global election cycle, including the right to free expression and non-discrimination, and to provide an online information environment conducive to free and fair elections (free of misinformation, hate speech and manipulation), whether or not the local political context is free and fair. Moreover, attention should be given to preventing online harms to groups who have been shown time and again to be at particular risk during election periods, notably women politicians.

We demand that each Big Tech company establish and publish fully and equitably resourced 2024 Action Plans (globally and for each country holding elections), including the following fundamental features: 

  1. Mainstream international human rights and electoral standards. This means assessing the compliance of existing and new policies and enforcement protocols with human rights and electoral standards. It means engaging with election bodies, free of political interference, and enabling the work of independent election monitors.
  2. Expeditiously publish and respond to the findings of robust human rights impact assessments, or commission these assessments where they haven’t yet been conducted, adopting best international practices. Independent and external parties should conduct the assessments, whose results should be transparently incorporated into decision-making and planning processes, as well as communicated to stakeholders to enable better preparation for election periods.
  3. Be fully resourced and proportionate to risk of harm, not market size. This means publishing investment figures and the numbers of employees and contractors per language/dialect for trust and safety efforts in each country, justifying resourcing decisions, and making provisions to ensure expertise in national and regional context, languages and dialects for content moderation. More importantly, companies and regulators must focus on the impacts of these policies and investments: people should be protected even in markets with particular human rights challenges, with a view to maintaining equitable access to safe online platforms worldwide. It means companies must address the past gross inequity of billions invested to protect US elections compared to the neglect of Global Majority countries where they pose a risk to rights and freedoms in 2024. Companies should adopt a standardised reporting format to document their actions during this period, allowing researchers, regulators, civil society and other actors to monitor their actions and impacts in full transparency.
  4. Provide the full spectrum of tools and measures available, both new tools developed in response to threat/risk assessment and mitigation exercises and the best proven tools and measures already road-tested in markets where companies have invested most in elections-related trust and safety to date (namely the United States). This means total transparency on what has been implemented where, and reasoned justifications for variations across the world.
  5. Be operational at each stage in the electoral process, from the months leading up to an election, through polling day, to the conclusion of the electoral process (whether weeks or months) following the vote. This means averting past mistakes where companies have rolled back measures only to facilitate post-election violence and the undemocratic actions of bad actors.
  6. Be based on and implemented with local context and expertise. This means properly resourcing local staff and multi-stakeholder engagement, as well as larger and more linguistically and culturally competent content moderation teams to compensate for poor algorithmic performance outside of English. Ideally, platforms should coordinate to facilitate civil society dialogue with all relevant platforms, rather than having separate conversations leading to differing standards.
  7. Strengthen, increase and resource partnerships with fact-checkers, independent media, civil society and other bodies that protect electoral integrity. Companies should engage with full respect for partners’ independence, report on the amount of meaningful engagement in a standardised format, and also collaborate with other companies to optimise investments in trust and safety and reporting.
  8. Be independent of government and partisan political pressure. This means companies publishing all their contacts with governments, and all government requests to suppress speech and surveillance demands, where permitted by law. It means eliminating policy exemptions for politicians where these would allow them to undermine the electoral process or violate the rights of citizens. Politicians should not be exempted from policies that protect users, and political ads should be fact-checked.
  9. Establish proper oversight and transparency. This means data access and training for researchers, civil society, independent media and election monitors to monitor activity on the platforms. It means re-investing in and opening up CrowdTangle at Meta, creating equivalent tools at Alphabet, and maintaining an open and free/affordable API at Twitter. It means full transparency and accuracy of ad libraries and their functionality (such as targeting parameters), as well as publication of financial information to enable scrutiny of campaign finance and spending. And it means transparency and independent audits on enforcement, including ad library error rates and algorithmic impact on harms. Companies should be transparent about their content moderation policies and enforcement, including notice, review and appeal mechanisms, reporting these in a standardised format.
  10. Enable accountability. This means allowing documentation and archiving of all actual or potential harms taking place on the platforms, as well as documentation to test the accuracy and effectiveness of harm mitigation measures, to enable real-time and post-facto accountability efforts.

The updated list of signatories to these asks can be found here.
