Model and actress Savannah Adwoa Mensah revealed her online image was misused in AI-driven scams, highlighting the rising danger of synthetic identity abuse across Ghanaian social media and the need for enhanced legal and media literacy responses.
Model and actress Savannah Adwoa Mensah says she first realised how vulnerable her online identity had become when she spotted a flawless, unfamiliar image of herself used to sell a herbal skincare product on Facebook. She publicly warned followers: "If you see an ad of me promoting this product, it’s not me. It’s an AI-generated image used without my consent." According to local reporting, her experience is far from isolated as synthetic likenesses proliferate across Ghanaian social media.
The misuse stretches beyond images. Journalists and broadcasters report cloned voices and fabricated endorsements being deployed to market dubious medical remedies and commercial products, sometimes without any identifiable company behind them. Industry analysts and international reporting have linked such scams to organised fraud rings that exploit generative technology to scale deception rapidly.
High-profile incidents elsewhere in the region underline how quickly false material can spread. Broadcasters in South Africa were impersonated in realistic videos that promoted investment scams and drew hundreds of thousands of views before platforms intervened, demonstrating the speed and reach of synthetic media when coupled with social networks.
Data firms and verification specialists have sounded the alarm about a sharp uptick in deepfake-enabled fraud. A recent industry analysis found a multi-fold rise in cases linked to synthetic identities in late 2024, warning that these techniques have moved from niche experiments to tools that cause measurable financial and reputational harm.
Ghana’s existing laws offer routes for redress but have yet to be tested thoroughly against the novel mechanics of AI impersonation. Legal practitioners note that unauthorised use of a person’s image or voice may engage data-protection provisions and constitutional privacy guarantees, but they caution that courts have limited precedent for assigning liability in cases where synthetic media are generated and distributed by opaque actors.
Enforcement faces practical hurdles. Investigators and prosecutors contend with scant forensic capacity to trace the provenance of synthetic content, challenges in preserving admissible digital evidence and jurisdictional obstacles when campaigns originate overseas. Observers say those gaps make quick takedowns and prosecutions difficult, even when the harms are clear.
Alongside legal responses, media literacy advocates emphasise prevention. Trainers and communications scholars have urged the public to develop verification habits ahead of elections and other high-stakes moments, offering practical checks to distinguish manipulated media and reduce the likelihood of viral amplification.
Security analysts warn the political implications are acute: AI-crafted audio or video can be tailored to sway voters, smear opponents or trigger financial consequences, particularly around election cycles. Commentators advise a mix of platform responsibility, stronger verification systems and public awareness campaigns to shore up trust in digital information flows.
For those targeted, the consequences are immediate and personal. Senior journalist Maame Esi Nyamekye Thompson responded online to a counterfeit diabetes advert bearing her likeness: "This is still ongoing. I never did this advert lol." Her reaction captures the indignity and confusion victims face as they try to disentangle their reputations from synthetic falsehoods while regulators, platforms and civil society scramble to catch up.
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [7], [2]
- Paragraph 2: [7], [5]
- Paragraph 3: [5], [7]
- Paragraph 4: [5], [7]
- Paragraph 5: [4], [6]
- Paragraph 6: [6], [7]
- Paragraph 7: [3], [6]
- Paragraph 8: [2], [3]
- Paragraph 9: [7], [3]
Source: Noah Wire Services
Verification / Sources
- https://cedirates.com/news/i-never-did-this-advert-ai-clones-hijack-ghanaian-identities-for-profit/ - Please view link - unable to access data
- https://www.theghanareport.com/deepfake-and-our-elections/ - This article discusses the threat posed by deepfakes to Ghana's electoral process, highlighting how AI-generated media can mislead voters and distort public perception. It provides examples of manipulated videos circulating on social media, including one involving Vice President Dr. Mahamudu Bawumia, and emphasizes the need for vigilance and verification to maintain electoral integrity.
- https://www.theghanareport.com/how-to-spot-deepfakes-ahead-of-ghanas-2024-election-2/ - This piece offers guidance on identifying deepfakes in the context of Ghana's 2024 general election. It explains how AI-generated content can mimic real individuals to spread misinformation and provides tips for recognizing such content, stressing the importance of verifying information to uphold the integrity of the electoral process.
- https://www.graphic.com.gh/news/general-news/sharing-deepfake-content-online-is-a-criminal-offence-police-warn.html - The Ghana Police Service has issued a warning against the creation and circulation of deepfake content, stating that offenders risk prosecution under Ghanaian law. The article highlights the dangers of AI-generated media and the legal consequences of sharing such content on social media platforms.
- https://www.cnbc.com/2024/05/28/deepfake-scams-have-looted-millions-experts-warn-it-could-get-worse.html - This report details how deepfake scams have defrauded companies of millions of dollars, with experts warning that the problem could worsen as generative AI technology evolves. It includes a case where a Hong Kong finance worker was tricked into transferring $25 million to a fraudster using a deepfake of his CFO.
- https://www.theghanareport.com/deepfake-risk/ - This opinion piece explores the risks associated with deepfakes, particularly in the context of Ghana. It discusses how AI-generated content can be used maliciously and the challenges in detecting such media, urging readers to be cautious and critical of information encountered online.
- https://www.theghanareport.com/the-rise-of-deepfakes-a-growing-threat-in-the-digital-age/ - This article examines the rise of deepfakes and their implications in the digital age. It explains how AI and machine learning are used to create realistic audio, video, and images that can make people appear to say or do things they never actually did, highlighting the potential dangers and the need for awareness.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The article was published on April 5, 2026, and reports on recent incidents involving AI-generated images of Ghanaian public figures. Similar cases have been reported in the past, such as the use of AI-generated images of Taylor Swift in 2024 (citizen.digital). However, the specific incidents involving Savannah Adwoa Mensah and Maame Esi Nyamekye Thompson appear to be recent and not previously reported, indicating a high level of freshness.
Quotes check
Score: 7
Notes: Direct quotes from Savannah Adwoa Mensah and Maame Esi Nyamekye Thompson are included. While these quotes are compelling, they cannot be independently verified through the available sources. The lack of verifiable sources for these quotes raises concerns about their authenticity.
Source reliability
Score: 5
Notes: The article originates from CediRates, a website that appears to be a niche publication. The lack of information about the publication's editorial standards and independence raises concerns about the reliability of the source. Additionally, the article relies heavily on quotes from individuals without independent verification, which further diminishes its reliability.
Plausibility check
Score: 6
Notes: The incidents described are plausible, given the increasing use of AI-generated images without consent (oecd.ai). However, the lack of independent verification for the quotes and the reliance on a single source for the information about these incidents raises questions about the overall credibility of the claims.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: The article reports on recent incidents involving AI-generated images of Ghanaian public figures. While the incidents are plausible and the content is recent, the lack of independent verification for the quotes and the reliance on a single, niche source raise significant concerns about the credibility and reliability of the information presented. Given these issues, the content does not meet the necessary standards for publication under our editorial guidelines.