Editorials, The Grid

What the ‘X-ification’ of Content Moderation on Meta Means for Social Media Experience

Meta is ditching its moderation system for X’s model, rolling out first in the U.S. before expanding globally.

  • Johnson Opeisa
  • 10th January 2025

After nearly nine years of relying on independent fact-checkers to minimise misinformation, Meta is pivoting to a user-powered content moderation system, Community Notes, on its platforms Facebook, Instagram, and Threads.

 

This daring move was announced in a blog post by Joel Kaplan, Meta’s Chief Global Affairs Officer, accompanied by a video from CEO Mark Zuckerberg on Tuesday, January 7. According to Kaplan, the move stems from growing dissatisfaction with the traditional fact-checking system, which Meta believes has become overly restrictive and biased. 

 

“In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content,” Kaplan’s blog post read in part. “This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable.”

 

Meta believes Community Notes can do better, so preparations for its roll-out — which will start in the U.S. before expanding to other regions — are underway. Abruptly cutting loose its fact-checking partners for a model that hasn’t been an unqualified success invites scrutiny, and a deeper look at the shift’s broader implications for social media users.

 

To begin with, Community Notes is entirely user-driven, designed solely to combat misinformation by adding context or explanations to posts, or correcting outright false ones. Contributors are everyday users who sign up and must first pass an initial phase of rating other contributors’ notes for relevance and importance before gaining the ability to write their own. A note’s publication depends on these contributors’ evaluations.

 

The process doesn’t end with publication. Based on user reactions (upvotes or downvotes) to a note, X — despite generally avoiding modifying or editing notes to maintain transparency — reserves the right to label or remove notes that violate its rules, privacy policies, or terms of service.

 

However, the impact of Community Notes in combating misinformation has done little to shake X’s reputation as a pseudo-news platform. This shortfall was especially apparent during the November 2024 U.S. general election, when reports claimed X and Musk were complicit in propagating misinformation during the electioneering process.

 

Meta’s adoption of the notes also suggests similar political underpinnings. Ahead of Donald Trump’s return to the Oval Office in January 2025, Meta donated $1 million to the Republican inauguration fund last December, likely aiming to improve its relationship with the incoming president. This comes after Facebook and Instagram suspended Trump’s accounts in 2021 over his posts around the January 6 Capitol riot.

 

Responding to a reporter on Wednesday, Trump remarked that Meta’s recent changes were “probably” a response to his earlier threats against social media companies.

 

Beyond the apparent political undertones, and the troubling notion of not one but three major social networks seemingly bending the knee to a government, the question of Community Notes contributors’ competence also comes to the fore.

 

By replacing professionally trained fact-checkers with this user-enabled model, moderation on Meta’s platforms — like on X — will largely depend on individuals who have little to no expertise in the subjects they are tasked with evaluating for relevance and factual accuracy. Even for contributors with relevant knowledge, the fundamental mechanics of Community Notes pose challenges to their functionality.

 

Since a note requires substantial approval from other contributors before becoming publicly visible, there’s a significant risk of delayed responses. Late visibility of notes after users have already engaged with misleading content is one issue; their complete absence is another.

 

According to the Poynter Institute, a fact-checking and media literacy organisation, 60% of the most-rated notes languish unseen because they fail to achieve ideological consensus among contributors. As a result, the posts most in need of a Community Note often go without one.
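That consensus requirement is the crux of the delay problem described above. X’s actual system is open source and uses matrix factorization to separate a note’s genuine helpfulness from raters’ viewpoints; the toy Python sketch below is a deliberate simplification (the two-sided rule, the viewpoint labels, and the 0.5 threshold are all illustrative assumptions, not the real algorithm), but it captures why a note backed by only one ideological camp never surfaces:

```python
# Toy illustration of "bridging-based" note scoring, loosely inspired by
# X's open-sourced Community Notes ranking. Simplified assumption: a note
# is published only if raters from BOTH viewpoints find it helpful.

def bridging_score(ratings):
    """ratings: list of (viewpoint, helpful) pairs.
    Returns the *minimum* helpful fraction across viewpoints, so
    one-sided support cannot carry a note over the bar."""
    sides = {}
    for viewpoint, helpful in ratings:
        sides.setdefault(viewpoint, []).append(helpful)
    if len(sides) < 2:
        return 0.0  # no cross-viewpoint signal at all
    fractions = [sum(votes) / len(votes) for votes in sides.values()]
    return min(fractions)

THRESHOLD = 0.5  # illustrative publication bar

# Rated helpful by one camp only: fails, no matter how many upvotes.
partisan = [("left", True)] * 8 + [("right", False)] * 8
# Rated helpful across camps: clears the bar.
bridging = [("left", True)] * 8 + [("right", True)] * 7 + [("right", False)]

print(bridging_score(partisan) >= THRESHOLD)  # False
print(bridging_score(bridging) >= THRESHOLD)  # True
```

The same property that filters out partisan pile-ons is what leaves contested posts unannotated: until raters from opposing camps agree, the note simply never appears.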

 

Nevertheless, launching Meta’s Community Notes first in the U.S. offers some hope for improvement — both for drawing lessons from its execution and for refining X’s model — before widespread adoption.

 

The bigger issue, however, is Meta loosening its grip on hate speech as part of its new policy. In his video announcement, Zuckerberg stated that Meta’s current approach prioritises free speech over maximum policing of extreme views. Under this new framework, the focus will be “on tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud, and scams. For less severe policy violations, we’re going to rely on someone reporting an issue before we take any action.”

 

This has been the experience on X since Musk’s takeover. It would be far-fetched to label it a net negative move. And as Mark Zuckerberg noted, “The reality is that this is a tradeoff. It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”

 

Ultimately, Meta’s decision to discard its moderation policies for X’s model signifies that social media platforms are gradually shifting responsibilities. This leaves users with the major obligation of navigating disinformation and harmful speech — much like we do in real life.

 
