
Deepfakes (synthetic media created by generative AI that realistically mimic a person's likeness or voice) and other AI-generated media present a critical governance challenge. They can be precision-targeted to inflame existing social, racial, or religious tensions. A realistic (though fake) video of a member of one group committing a heinous act against another can incite outrage, mistrust, and real-world violence. The core threat is the erosion of a shared, verifiable reality. When "seeing is no longer believing," it becomes impossible to establish a common set of facts for public discourse, policy debate, or national unity.
The strategic deployment of deepfakes and generative AI to manipulate public opinion, as exemplified in the Cameroonian electoral crisis, directly threatens national sovereignty, social cohesion, and the integrity of democratic processes.
For instance, during the 2025 presidential election in Cameroon, AI-generated images were reportedly used by political parties on social media. These images fabricated large crowds at campaign events, creating a false narrative of overwhelming popular support.1
As another example, a viral, sexually explicit deepfake targeting a Cameroonian bishop who had endorsed a candidate circulated before the election. Church leaders and local media identified it as a politically motivated proxy attack aimed at creating scandal.2
This represents a critical opportunity to establish a new governance stack for digital media, prioritizing content provenance and user-led authentication as a cornerstone of national security in the information domain.
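To make "content provenance" concrete: provenance standards such as C2PA bind cryptographically signed metadata to a media file at creation time, so anyone can later verify that the file is unmodified and traces back to the signing key's holder. The sketch below shows only that underlying sign-and-verify pattern in Python, assuming the cryptography package; it is not an implementation of C2PA, and it omits the hard parts (key distribution, certificate chains, and metadata manifests).

```python
# Minimal sketch of the signing/verification core of content provenance,
# assuming the 'cryptography' package (pip install cryptography).
# Illustrative only; real provenance standards (e.g., C2PA) embed signed
# manifests with richer metadata and certificate chains.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def sign_media(private_key: Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Publisher signs the raw media bytes at publication time."""
    return private_key.sign(media_bytes)

def verify_media(public_key, media_bytes: bytes, signature: bytes) -> bool:
    """Anyone holding the publisher's public key can check both
    integrity (no tampering) and origin (who signed it)."""
    try:
        public_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False

# Demo with hypothetical media bytes.
key = Ed25519PrivateKey.generate()
photo = b"...raw bytes of a campaign photo..."
sig = sign_media(key, photo)

print(verify_media(key.public_key(), photo, sig))          # True: authentic
print(verify_media(key.public_key(), photo + b"x", sig))   # False: tampered
```

Any single altered byte invalidates the signature, which is what lets a verifier distinguish an original publication from a manipulated copy.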
Existing Mechanisms against Deepfakes and AI-Driven Disinformation
The Cameroonian government’s current mechanisms are designed not for AI-specific authentication but for general cybercrime and "false news" prosecution.
- Law No. 2010/012: The primary state mechanism is the 2010 Law on Cybersecurity and Cybercriminality. This law criminalizes the publication and spread of "false news" (Article 78).
This is a reactive and punitive tool. It does not authenticate content; it prosecutes individuals after the content has spread. It is a legal deterrent, not a technical verification system, and is criticized for being used to stifle dissent.
- Institutional Framework (ANTIC): The National Agency for Information & Communication Technologies (ANTIC) is the key state regulator.
ANTIC's traditional focus is on network infrastructure, cybercrime, and electronic certification (e.g., for e-commerce), not on public-facing media content authentication. There have been reports that the agency is building technical capacity and tools for assessing deepfakes, but no official public reports or communications on the matter have been released (for a sense of what such tooling can involve, see the image-forensics sketch after this list).
- Aspirational Policy (SNIA): In July 2025, Cameroon adopted a National Strategic Framework for Responsible AI (SNIA).
This framework is a statement of intent. It calls for the future creation of a "Cameroonian AI authority" and a new legal framework, but these bodies and their technical "mechanisms" do not yet exist.
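By way of illustration only (ANTIC has published nothing about its tooling), one classic image-forensics heuristic that a deepfake-assessment capacity might include is error-level analysis (ELA): a JPEG is recompressed at a known quality and diffed against the original, because spliced or AI-edited regions often recompress differently from the rest of the image. Below is a minimal sketch using the Pillow package; the file name and triage threshold are hypothetical, and ELA is a screening aid for human reviewers, not a reliable detector on its own.

```python
# Error-level analysis (ELA) sketch, assuming Pillow (pip install Pillow).
# A triage heuristic that flags images for human review; it is NOT a
# reliable deepfake detector by itself.
import io
from PIL import Image, ImageChops

def ela_score(path: str, quality: int = 90) -> float:
    """Recompress a JPEG at a fixed quality and return the mean
    per-channel pixel difference. Higher scores can indicate regions
    that recompress inconsistently (e.g., splices or edits)."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf).convert("RGB")
    diff = ImageChops.difference(original, recompressed)
    pixels = list(diff.getdata())
    return sum(sum(px) for px in pixels) / (len(pixels) * 3)

if __name__ == "__main__":
    score = ela_score("campaign_photo.jpg")  # hypothetical file name
    # Threshold is hypothetical; a real system would calibrate it on a
    # labeled corpus of authentic and manipulated images.
    print(f"ELA score: {score:.2f}",
          "-> flag for review" if score > 15 else "-> pass")
```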
Civil society is where the actual authentication work happens today. The mechanism is not software; it is human capital and digital literacy.
- Fact-Checking Platforms (#defyhatenow & 237Check):
The Mechanism: The #defyhatenow initiative is a key actor in Cameroon. It runs the #AFFCameroon (Africa Fact-Checking Fellowship) to train journalists and bloggers, who then publish their findings on the 237Check.org platform.
Authentication here is based on traditional, manual fact-checking rather than automated detection: trained fellows verify sources and trace claims before publishing corrections. (A sketch of one lightweight tool that can assist this work follows this list.)
- Community-Led Education:
Through fellowships, trainings, and digital-literacy campaigns, these organizations build societal resilience from the ground up, equipping citizens to question and verify the media they consume.
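Manual fact-checking can still be supported by lightweight tooling. Reverse image search, a staple of the trade, rests on perceptual hashing: visually similar images produce nearby hash values even after resizing or recompression, so a recycled or miscaptioned photo can be matched against an archive of previously debunked images. The sketch below uses the Pillow and imagehash packages; the file names and distance threshold are hypothetical, and this is not part of 237Check's published workflow.

```python
# Perceptual-hash matching sketch, assuming Pillow and 'imagehash'
# (pip install Pillow imagehash). Illustrative fact-checking aid;
# not 237Check's actual toolchain.
from PIL import Image
import imagehash

def find_recycled(candidate_path: str, archive_paths: list[str],
                  max_distance: int = 8) -> list[str]:
    """Return archive images whose perceptual hash is within
    max_distance bits of the candidate's hash (likely the same
    underlying image despite resizing or recompression)."""
    candidate = imagehash.phash(Image.open(candidate_path))
    matches = []
    for path in archive_paths:
        # Subtracting two hashes yields their Hamming distance in bits.
        if candidate - imagehash.phash(Image.open(path)) <= max_distance:
            matches.append(path)
    return matches

# A viral "rally crowd" photo checked against debunked images
# (hypothetical file names).
print(find_recycled("viral_rally.jpg",
                    ["debunked_2020.jpg", "debunked_2023.jpg"]))
```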
References:
1. https://oecd.ai/en/incidents/2025-10-11-8327