Ethiopia's Battle Beyond Borders: How Facebook's Content Crisis Fuels Real-World Conflict


Jun 3, 2023
By Kirubel Tadesse


The 17th Internet Governance Forum, a United Nations initiative that champions international cooperation to enhance Internet accessibility, was inaugurated by Ethiopia's Prime Minister Abiy Ahmed (PhD) in November last year. A hint of irony hung over the event, hosted by a country notorious for its extensive internet shutdowns and for social media restrictions that favour its ruling party.

In his opening speech, the Prime Minister bewailed the Internet's role in disseminating misinformation during the recent civil war in Ethiopia, deftly sidestepping his government's own digital transgressions. Led by Abiy and his Prosecutor General, Gideon Timothewos (PhD), the government has routinely shown disdain for digital rights; its record makes previous leaders, Meles Zenawi and Hailemariam Desalegn, seem positively libertarian.

Several domestic and international bodies have protested the frequent Internet blackouts, a severe drain on the country's coffers. Nevertheless, the appeals seem to fall on deaf ears.

Social media's role in the conflict should not be overlooked either. The bloody civil war in Tigray and the neighbouring states, which only ended last year through a US-backed truce, was exacerbated by the unchecked use of social media platforms, especially Facebook. Reports suggest these platforms were used to coordinate attacks, spread hate speech, and ignite ethnic violence.

The spotlight was thrown on Facebook's role in Ethiopia's civil war by Frances Haugen, a former Facebook employee turned whistleblower, who claimed in her 2021 testimony to Congress that the social media giant was exacerbating ethnic conflict in Ethiopia. While Facebook has faced flak for its questionable content moderation policies and practices, especially from its African user base, the issue extends beyond regional markets into matters of international concern.

Protection granted by Section 230 of the Communications Decency Act, a 1996 US law, has shielded Facebook from legal repercussions for its content moderation failings. However, with mounting evidence of Facebook-linked political violence in Ethiopia, it is time for the US to reassess its regulatory approach.

Rethinking Section 230 could instigate essential reform. Amending it to introduce penalties for negligent content moderation could serve as a deterrent. An additional proposal involves establishing a multi-stakeholder content governance mechanism, including the participation of Facebook, national governments, and international institutions, to ensure social media platforms adhere to human rights agreements.

While voluntary participation presents its own challenges, this mechanism could provide a platform for legitimate grievances to be aired and addressed.

The need to strike a balance between online freedom of speech and the prevention of harm has never been more pressing. The onus lies not just on Facebook and other social media platforms but also on international regulatory bodies and national governments to find this equilibrium and uphold the integrity of our digital world.

However, it must be emphasised that these proposals are not a magic bullet. They are potential solutions to a deep-rooted and complex problem that requires ongoing and multi-faceted effort. Facebook's global influence extends far beyond its American base, as does the fallout from its failure to manage harmful content effectively. An international approach to this issue is essential.

Facebook's seeming disregard for content moderation has ignited a firestorm, casting a damning spotlight on the intersection of technological ingenuity and ethical responsibility. With a staggering 2.9 billion users globally and 8.4 million in Ethiopia alone, its content moderation and governance strategies have come under critical scrutiny, for they shape global digital discourse and can incite offline violence.

Examining the company's systems paints a disquieting picture of a tech behemoth unable to regulate its digital realm effectively. Facebook insists that its content moderation combines human reviewers with algorithmic intelligence, yet it struggles to moderate inflammatory content in non-English languages. While Facebook claims an AI success rate of 97pc for hate speech removal, the reality in Ethiopia contradicts such assurances: posts violating Facebook's "Community Standards" appear daily, with the potential to incite offline harm.

Whistleblower Haugen's testimony was an unambiguous indictment of Facebook's role in fanning ethnic violence in Ethiopia, a charge Facebook has repudiated. However, independent researchers have raised alarms about the company's inability to filter hate speech effectively. Global Witness reported a shocking loophole in Facebook's content moderation: posts containing hate speech that had previously been removed were approved for publication when resubmitted as ads.

The stakes and the potential costs of inaction are high for human lives and social cohesion. As Facebook expands its global reach, it is incumbent upon both the platform and policymakers to ensure that technological innovation is balanced with ethical responsibility and that the "town square" of the 21st Century does not become a stage for conflict and division. It may sound daunting, but it is within our grasp with coordinated international effort.

Facebook's moderation policies have gained prominence not just for their potential to prevent the incitement of violence but also as a test case for balancing the necessity of open discourse against the imperative of security. However, critics charge that Facebook's approach to content moderation is flawed, especially in less influential markets, where users seem to be simultaneously over-censored, with legitimate speech removed, and under-protected, with harmful content left standing.

Facebook must recognise that its current approach is insufficient and inconsistent, especially in countries where its content moderation falls woefully short. As the recent history in Ethiopia painfully illustrates, the company's failures are more than just public relations disasters; they are literally a matter of life and death. Its response should be proportionate to the magnitude of the challenge at hand.



PUBLISHED ON Jun 03, 2023 [ VOL 24, NO 1205 ]



Kirubel Tadesse (ka2464a@american.edu) is a researcher at the Internet Governance Lab at American University (Washington, D.C.), where he is pursuing a PhD in communication.




