A "Talking Gorilla" Walks into Mercato

A "Talking Gorilla" Walks into Mercato

Jun 28, 2025. By Ahmed T. Abdulkadir (ahmedteyib.abdulkadir@addisfortune.net), Editor-in-Chief at Addis Fortune, who casts a critical eye on class dynamics, public policy, and the cultural undercurrents shaping Ethiopian society.


A couple of evenings ago, TikTok served up a scene that felt equal parts circus and science fiction: an Amharic-speaking gorilla strolling through Mercato, waving at fruit vendors, cracking jokes and locking eyes with the camera in a way that seemed unnervingly human. A few swipes later came an “anchorwoman” with immaculate diction, flawless posture and the faintest hint of the uncanny, delivering breaking news from the same open-air market. Neither creature ever existed.

Both videos were churned out by the latest text-to-video engines, Google’s Veo 3 and OpenAI’s Sora, systems able to spin photorealistic images, motion and native-sounding audio from a handful of words. Today, the clips these platforms generate draw laughs; tomorrow, they could fool an electorate.

For generations, video was treated as proof. Deepfakes exploit that reflex. Researchers now say that even tech-savvy viewers struggle to distinguish real footage from fake, and the tools are becoming ever simpler to use. What once required a studio now sits in a teenager’s phone. The next question is not whether these systems will be misused, but how much damage they will cause.

Social media already has a well-documented record of amplifying rumours in Ethiopia. Unverified posts on Facebook and Telegram have stoked ethnic tensions and political unrest. A recent Internews survey found that more than 80pc of Ethiopian users rely on such platforms for news. Synthetic clips that are ultra-realistic, quick to share and hard to trace are sliding into that stream. The result is a fracture in basic consensus. When everything looks plausible, nothing feels certain.

Historian Timothy Snyder warns that “when nothing is believable, anything is.” Deepfakes widen that void and hand bad actors a precision tool for chaos. The threat is not confined to politics. One of the ugliest applications of AI video is non-consensual pornography. A wave of fabricated nude images featuring international celebrities has shown how quickly these forgeries spread.

Closer to home, a well-known Ethiopian activist recently found a bogus, explicit video of herself circulating online. The fallout arrived instantly: shame, harassment, reputational damage, and even threats of violence. Psychological trauma follows such attacks, and recovery is slow. For women in politics, activism, or media, the mere possibility of becoming a target can encourage self-censorship. When visibility turns into vulnerability, democracy itself suffers.

Global watchdogs are sounding the alarm, including the World Economic Forum (WEF), which ranked “AI-generated misinformation and disinformation” as the second-most likely global risk. Deepfake clips of candidates have surfaced in campaigns from Taiwan to India. The Munich Security Conference has called synthetic media a frictionless path to forgery.

Countries where the legal code offers only sparse language on digital disinformation and trust in state institutions is fragile after years of conflict, as in Ethiopia, are especially vulnerable. The country’s rapid online expansion compounds the risk. As of 2023, only 17pc of Ethiopians enjoyed regular internet access, yet millions more are coming online each year. Many novices are unfamiliar with privacy settings, data sharing, and verification techniques. The gap between access and understanding is an epistemic minefield.

Unlike digitally mature societies that crawled through the dial-up era, learned to doubt suspicious download links and watched Photoshop hoaxes evolve, Ethiopia is leaping straight into an internet shaped by algorithms and generative AI. New users, parachuted into that virtual world, are prone to accept what they see, all the more so when the message arrives in fluent Amharic, wrapped in familiar gestures and trusted formats.

Cheap processing power means a basic smartphone can now crank out believable fakes in minutes. As hardware barriers fall, the volume of synthetic material will skyrocket. The supply of doubt may not keep pace with demand.

The consequences appear in comment threads. Some viewers questioned the gorilla’s authenticity, while many accepted it outright. Ask anyone teaching an older relative to navigate social media. Confusion and conspiracy theories spread faster than software updates.

Economists and creators often frame the AI debate around jobs. A 2023 global poll found that 54.6pc of artists fear generative tools will cut their income. Such concerns echo earlier panics: film versus radio, television versus cinema. Markets adjust. The deeper crisis is epistemic, not economic. Machines are eroding the shared sense of what is true.

The talking gorilla may seem amusing. The AI-generated news anchor may seem like a harmless novelty. But each instance marks a data point in an emerging media ecosystem where truth is no longer discovered, but constructed, and increasingly, constructed by machines.

So far, remedies lag behind the threat. Fact-checking desks struggle to keep up with the volume of new content and the diversity of local dialects. Platform labels flagging “synthetic or manipulated media” tend to appear hours after a clip has gone viral. Detection algorithms face an arms race they may never win, as each breakthrough in forgery spawns a counter-breakthrough in disguise.

Education could offer a partial shield. Studies show that viewers who are warned in advance that a video might be fake slow down and doubt what they see. Digital-literacy programs in schools, community centres and even church gatherings could help new users build a healthy scepticism. Local-language guides to privacy, verification, and reverse-image searches would add friction to the forger’s business model.



PUBLISHED ON Jun 28, 2025 [VOL 26, NO 1313]


