Meta’s Content Moderation Rollback: A Setback for LGBTQI+ Safety and Digital Rights

Meta’s recent overhaul of its content moderation policies represents a seismic shift with troubling implications for LGBTQI+ safety and human rights. Under the guise of championing free expression, Meta’s removal of critical safeguards threatens to amplify harmful narratives, entrench systemic bias, and enable digital violence. This decision not only undermines the platform’s stated commitment to inclusivity but also reflects a troubling pattern among tech giants of prioritising engagement metrics and profit over user safety. 

For marginalised communities, particularly LGBTQI+ individuals, this decision intensifies existing vulnerabilities by creating a digital environment that is less safe and more hostile. Queer people already feel unsafe on digital platforms, with studies showing that LGBTQI+ persons face a disproportionate amount of online violence, especially in countries with homophobic laws. Harmful narratives targeting queer communities often spiral into real-world consequences, from psychological harm to physical violence, experiences many of us within the queer movement in West Africa have personally endured or witnessed. These realities emphasise the critical importance of responsible moderation.

Digital platforms have the responsibility of aligning their content moderation and safety policies with international human rights standards, ensuring that a rights-based approach is applied when developing products and the policies governing their usage. In prioritising free expression at the expense of adequate safeguards for all users, Meta risks creating an uneven playing field where the voices of those perpetuating hate are amplified, while marginalised voices are drowned out or silenced altogether. 

Changes In Meta’s New Content Moderation Policy And Their Impact

A closer look at the key policy changes reveals why this shift is dangerous: 

Removal of Anti-LGBTQI+ hate speech protections

[Image: Meta’s old content policy against targeted discrimination]

Under Meta’s previous policies, content that framed LGBTQI+ identity as a mental disorder, deviant behaviour, or immoral choice was classified as hate speech and removed. The updated guidelines now allow such harmful rhetoric as long as it is framed as an “opinion”.

[Image: Meta’s new policy allowing “allegations of mental illness or abnormality when based on gender or sexual orientation”]

Impact: Allowing hate speech disguised as personal belief undermines the safety of LGBTQI+ individuals, particularly in regions where they are criminalised or subjected to violence. It legitimises language that dehumanises people’s identity, creating a hostile digital environment that mirrors and amplifies real-world discrimination.

Threat to public health and advocacy by ending partnerships with independent fact-checkers

Meta’s decision to cease third-party fact-checking removes a vital layer of accountability in combating misinformation. Fact-checking partnerships were key to challenging dangerous falsehoods about LGBTQI+ health, such as discredited claims about “conversion therapy” or myths about HIV.

Impact: In societies already plagued by misinformation and disinformation, like some of the focus countries we work across, the absence of fact-checking allows harmful narratives to spread unchecked. This shift directly threatens public health campaigns and advocacy efforts aimed at protecting LGBTQI+ rights and access to accurate information.

Risks of community-based content moderation 

Meta’s reliance on a crowdsourced model for moderation, where users flag and contextualise content, shifts responsibility from trained oversight to individual sentiment. We have observed this play out on X (formerly Twitter) and can confirm that this model is fraught with risks, rewarding the loudest voices rather than the most truthful.

Impact: In countries where societal prejudice against LGBTQI+ people is deeply ingrained, like some of our focus countries, majority-driven moderation institutionalises bias. Community moderation, a model that has already proven ineffective on X, is a testament to how crowdsourced fact-checking can render digital spaces more hostile and less inclusive for LGBTQI+ voices. Entrusting the power to define harm to individuals who may themselves be perpetrators of that harm creates room for online violence, misinformation, and disinformation to thrive. This is particularly concerning in African regions, where online violence against LGBTQI+ individuals is underreported.

Risk of reinforcing colonial practices 

In previous years, Meta demonstrated a commendable commitment to inclusivity by engaging marginalised communities, including women and LGBTQI+ individuals, in policy development and content moderation processes. This collaborative approach allowed for the identification and flagging of harmful terms in Indigenous languages, words weaponised to incite hate and violence across Meta’s platforms. These efforts, which required careful vetting and contextual understanding, significantly reduced digital violence in non-colonial languages, providing a vital layer of cultural nuance and safety. 

Impact: Meta’s abandonment of this nuanced, inclusive model signals a retreat from its stated commitments to human rights. The digital divide is not merely about access to technology but also the fairness of the rules governing online engagement. Without a deliberate focus on equity in all languages, content moderation becomes a tool for exclusion rather than empowerment. 

Why Meta’s framing of free expression falls short 

Meta positions these changes as a “necessary expansion of free expression”, but this argument fundamentally misunderstands the nature of freedom. True freedom of expression is not the right to harm others without consequence. It is a balance between the right to speak and the imperative to protect those who are vulnerable to hate, violence, and disinformation. When platforms like Meta fail to uphold protections, they privilege the loudest voices at the expense of marginalised communities. Crowdsourced moderation does not equate to fairness in environments where prejudice is the status quo. By removing institutional safeguards, Meta abdicates its responsibility to create a safer, more equitable internet.

The timing of these policy shifts, coinciding with broader political realignments, raises concerns about Meta’s motivations. This change reflects a pattern of corporate governance prioritising appeasement over principled human rights commitments. Platforms must be accountable to the global public good, not political winds or profit-driven deregulation. The recent changes also speak to the larger systemic problem of exclusion: marginalised communities are often excluded from digital product creation, tech governance, and decision-making processes, and where there is an attempt at inclusion, it is usually not meaningful.

This problem is reflected in the ways LGBTQI+ lives are continuously used as bargaining chips in political and commercial negotiations, with no accountability for the harm to which they are exposed. If LGBTQI+ communities and professionals were meaningfully included in policy creation and review processes such as this one, discriminatory policies that enable digital violence would be far less likely to be rolled out.

Standing alongside young LGBTQI+ persons, we maintain that: 

  • Digital platforms must balance free expression with accountability

  • Content governance must prioritise safety and dignity, particularly for marginalised groups

  • Human rights impact assessments, not political convenience or market pressure, must guide policy shifts

Meta has failed on all counts, and this rollback represents a failure of stewardship, despite the company having the resources and global reach to lead with responsibility and integrity. The internet we build today determines tomorrow’s freedom, hence the urgent concern that Meta’s policy changes will embolden harmful ideologies and compromise trust. Digital spaces must be arenas of truth, safety, and empowerment, not breeding grounds for hate and disinformation.

We demand that Meta reconsider this dangerous course and restore robust, rights-based protections for all users.

Authors: Kenny Owen, Marline Oluchi and Lydia Ume

Editor: Lydia Ume
