Artificial Intelligence

Protecting Electoral Integrity Against AI-Driven Disinformation

Isaac Noumba
Co-founder/President
Updated Dec 4, 2025 12:33 AM


Deepfakes – synthetic media created by generative AI that realistically mimic a person's likeness or voice – and other AI-generated media present a critical governance challenge. They can be precision-targeted to inflame existing social, racial, or religious tensions. A realistic (though fake) video of a member of one group committing a heinous act against another can incite outrage, mistrust, and real-world violence. The core threat is the erosion of a shared, verifiable reality. When "seeing is no longer believing," it becomes impossible to establish a common set of facts for public discourse, policy debate, or national unity.

The strategic deployment of deepfakes and generative AI to manipulate public opinion, as exemplified in the Cameroonian electoral crisis, directly threatens national sovereignty, social cohesion, and the integrity of democratic processes.

For instance, during the 2025 presidential election in Cameroon, AI-generated images were reportedly used by political parties on social media. These images fabricated large crowds at campaign events, creating a false narrative of overwhelming popular support.[1]

As another example, a viral, sexually explicit deepfake targeting a Cameroonian bishop who had endorsed a candidate circulated before the election. Church leaders and local media identified it as a politically motivated proxy attack aimed at creating scandal.[2]

This represents a critical opportunity to establish a new governance stack for digital media, one that prioritizes content provenance and user-led authentication as cornerstones of national security in the information domain.
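To make "content provenance" concrete, the sketch below shows the core idea in a few lines of Python: a publisher signs a cryptographic hash of a media file at publication, and anyone holding the matching public key can later confirm the file has not been altered. This is a minimal illustration of the principle behind standards such as C2PA, not an actual C2PA or Cameroonian implementation; the file name and keys are hypothetical.

```python
# Minimal content-provenance sketch: sign a media file's hash at
# publication, verify it later. Illustrative only; requires the
# 'cryptography' package. File name is hypothetical.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def media_digest(path: str) -> bytes:
    """Stream the raw bytes of a media file into a SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()

# Publisher side: sign the digest at the point of capture or publication.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(media_digest("campaign_photo.jpg"))

# Verifier side: recompute the digest and check it against the signature.
try:
    public_key.verify(signature, media_digest("campaign_photo.jpg"))
    print("Provenance intact: file matches the publisher's signature.")
except InvalidSignature:
    print("Warning: file was altered after signing, or the signer is unknown.")
```

In a real deployment, the public key would have to be distributed through a trusted registry (for example, one maintained by an electoral or media authority), which is precisely the kind of institutional mechanism the sections below show does not yet exist.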

Existing Mechanisms Against Deepfakes and AI-Driven Disinformation

1. State-Level Mechanisms (Legal & Institutional)

The Cameroonian government's current mechanisms are designed not for AI-specific authentication but for general cybercrime and "false news" prosecution.

- Law No. 2010/012: The primary state mechanism is the 2010 Law on Cybersecurity and Cybercriminality. This law criminalizes the publication and spread of "false news" (Article 78).

This is a reactive and punitive tool. It does not authenticate content; it prosecutes individuals after the content has spread. It is a legal deterrent, not a technical verification system, and is criticized for being used to stifle dissent.

- Institutional Framework (ANTIC): The National Agency for Information & Communication Technologies (ANTIC) is the key state regulator.

ANTIC's traditional focus is on network infrastructure, cybercrime, and electronic certification (e.g., for e-commerce), not on public-facing media content authentication. There have been reports that the agency is building technical capacity and tools for assessing deepfakes, but no official public reports or communications on the matter have been released.

- Aspirational Policy (SNIA): In July 2025, Cameroon adopted a National Strategic Framework for Responsible AI (SNIA).

This framework is a statement of intent. It calls for the future creation of a "Cameroonian AI authority" and a new legal framework, but these bodies and their technical "mechanisms" do not yet exist.

2. Civil Society Mechanisms

This is where the actual authentication work happens. The mechanism is not software; it is human capital and digital literacy.

- Fact-Checking Platforms (#defyhatenow & 237Check):

The Mechanism: The #defyhatenow initiative is a key actor in Cameroon. It runs the #AFFCameroon (Africa Fact-Checking Fellowship) to train journalists and bloggers, who then publish their findings on the 237Check.org platform.

Authentication here is based on traditional, manual fact-checking:

  • Source Verification: Investigating the origin of the content.
  • Reverse Image Search: Using tools such as Google Images' reverse search to find the original context of a photo or video frame (see the sketch after this list).
  • Forensic Analysis: Manually looking for inconsistencies in audio, shadows, or facial movements (typical of "cheap fakes").
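For a sense of how the reverse-image-search step works under the hood, the sketch below uses a perceptual hash: a compact fingerprint that changes little when an image is resized or recompressed, so a small Hamming distance between two hashes suggests a shared origin. This is an illustrative sketch, not the pipeline 237Check actually runs; the file names and the distance threshold of 10 are assumptions for the example.

```python
# Perceptual-hash sketch of reverse image search. Requires the
# 'imagehash' and 'Pillow' packages; file names are hypothetical.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("rally_photo_original.jpg"))
suspect = imagehash.phash(Image.open("rally_photo_viral.jpg"))

# Hamming distance between the two 64-bit hashes (0 = near-identical).
distance = original - suspect
if distance <= 10:  # common rule-of-thumb threshold, tune per use case
    print(f"Likely the same source image (distance {distance}).")
else:
    print(f"Probably different images (distance {distance}).")
```

Fact-checkers typically combine a signal like this with manual source verification: a perceptual-hash match can flag an old photo being reused in a new context, but it cannot by itself prove an image is synthetic.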

- Community-Led Education:

These organizations build societal resilience from the ground up.

  • Civic Watch: Partners with UNESCO to run workshops for journalists on fact-checking and source credibility in the run-up to elections.
  • Mentes Libres-Free Minds: A 2025 UNESCO Hackathon winner, this project focuses on empowering youth in conflict zones. It does not build new tools; rather, it trains young people to use "simple, free AI-powered verification tools" (i.e., existing web tools) to spot misinformation themselves.

References:

[1] https://oecd.ai/en/incidents/2025-10-11-8327

[2] https://catholicreview.org/deepfake-claims-emerge-as-cameroonian-bishop-faces-viral-misconduct-video/#:~:text=%E2%80%9CThe%20fact%20that%20these%20allegations,political%20motivations%20behind%20the%20scandal.%E2%80%9D