Joel Kaplan, chief global affairs officer at Meta, Facebook's parent company, announced on Friday that the US tech giant would discontinue its fact-checking programme in the country starting Monday.
No more fact-checks
Kaplan took to X to announce the update, which has been widely debated in recent months. His post stated that there would be no new fact checks and that Meta would sever ties with its existing fact-checkers in the US.
"By Monday afternoon, our fact-checking programme in the US will be officially over. That means no new fact checks and no fact checkers. We announced in January we’d be winding down the programme & removing penalties. In place of fact checks, the first Community Notes will start appearing gradually across Facebook, Threads & Instagram, with no penalties attached." Kaplan posted.
"More speech, fewer mistakes"
Earlier this year, in January, Meta founder Mark Zuckerberg outlined significant changes to Meta's content moderation policies in a video titled 'More speech and fewer mistakes'.
Among these changes was the decision to end Meta's use of fact-checking organisations in favour of a new system called Community Notes, similar to the one used by the Elon Musk-owned social media platform X. With Friday's announcement, Meta has officially entered the final phase of winding down its fact-checking programme.
The move has also been seen as potentially linked to the change of administration in the United States, with President Donald Trump and his supporters having long accused online content moderation of being a tool of censorship.
How has it played out so far?
According to a Reuters report, Canada-based tech company Telus sent home 2,000 workers from its content moderation centre in Barcelona after Meta Platforms severed its contract. Local unions CCOO and UGT confirmed this development.
The company, which operates locally as CCC Barcelona Digital Services, emailed its workers on Thursday placing them on gardening leave, stating that a client had warned on April 1 that it would suspend their services.
Currently, social media platforms such as Facebook and Threads rely on third-party fact-checking organisations to verify the authenticity and accuracy of content shared on their platforms.
These organisations assess content and flag misinformation for further review. When a fact-checker rates a piece of content as false, Meta significantly reduces its reach so that far fewer users see it. However, third-party fact-checkers do not have the authority to delete content, suspend accounts, or remove pages; only Meta can remove content that breaches its Community Standards and Ads policies, including hate speech, fraudulent accounts, and terrorist-related material.
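For readers who think in code, that division of authority can be sketched roughly as follows. This is a purely illustrative Python model, not Meta's implementation; every name in it is hypothetical, and only the split of powers comes from the description above:

```python
from enum import Enum, auto

class Verdict(Enum):
    FALSE = auto()    # rated false by a third-party fact-checker
    UNRATED = auto()

def apply_fact_check(post: dict, verdict: Verdict) -> None:
    """Fact-checkers can only trigger reduced distribution, nothing more."""
    if verdict is Verdict.FALSE:
        post["distribution"] = "reduced"  # shown to far fewer users
    # No deletion, account suspension, or page removal happens here.

def enforce_community_standards(post: dict) -> None:
    """Only Meta itself removes content that breaches its policies."""
    if post.get("violates_standards"):  # e.g. hate speech, fraud, terror content
        post["removed"] = True

post = {"text": "viral claim", "violates_standards": False}
apply_fact_check(post, Verdict.FALSE)
print(post)  # distribution is 'reduced', but the post is not removed
```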
Since 2016, Meta has collaborated with over 90 fact-checking organisations across more than 60 languages globally. Some prominent partners include PolitiFact, Check Your Fact, FactCheck.org, and AFP Fact Check. Several of these partnerships have lasted nearly a decade, with PolitiFact being one of the first to collaborate with Meta in 2016.
How Community Notes will work
Similar to X, which was known as Twitter until its $44-billion acquisition by billionaire Elon Musk in 2022, Meta will now implement a Community Notes model to moderate content instead of relying on fact-checkers.
X's Community Notes, initially called BirdWatch, was launched as a pilot in 2021 and gained substantial traction in 2023. The feature is designed to identify and highlight potentially misleading information on the platform.
Community Notes appear in boxes marked 'Readers added context' beneath posts on X that have been flagged as potentially misleading or inaccurate. These notes typically provide a correction or clarification, often accompanied by a hyperlink to a reliable online source that backs up the correction.
The annotations are created by eligible users who have chosen to participate in the programme. To be eligible, users must have no X violations on their account since January 2023, possess a verified phone number from a legitimate mobile carrier, and have an account that is at least six months old.
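As a rough illustration, those three eligibility checks could be modelled as follows. This is a minimal Python sketch: the six-month age rule, the verified-phone requirement, and the January 2023 cut-off come from the description above, while the type and field names are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Account:
    created_at: datetime
    has_verified_phone: bool          # verified number from a legitimate carrier
    violations_since_jan_2023: int    # X rule violations on record

def eligible_contributor(account: Account, now: datetime) -> bool:
    """Apply the three eligibility criteria described above."""
    old_enough = now - account.created_at >= timedelta(days=182)  # ~six months
    clean_record = account.violations_since_jan_2023 == 0
    return old_enough and clean_record and account.has_verified_phone

# Example: an eight-month-old account with a verified phone and a clean record
acct = Account(datetime(2024, 8, 1, tzinfo=timezone.utc), True, 0)
print(eligible_contributor(acct, datetime(2025, 4, 4, tzinfo=timezone.utc)))  # True
```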
Once approved as contributors, participants can rate other Community Notes as either 'Helpful' or 'Not Helpful'. Contributors are given a Rating Impact score, which reflects how often their ratings influence notes that achieve 'Helpful' or 'Not Helpful' status. A Rating Impact score of 5 enables contributors to advance to the next level, allowing them to write Contributor Notes for X posts as well as rate them.
Community Notes that receive five or more ratings undergo algorithmic evaluation. The algorithm classifies each note as 'Helpful', 'Not Helpful', or 'Needs more ratings'. At this stage, notes are visible only to contributors, not to ordinary X users.
Only notes that receive a final 'Helpful' status from the algorithm are displayed to all X users beneath the related post.
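Pieced together, the path from contributor ratings to public visibility can be sketched as below. This is a deliberately simplified, hypothetical model of the process just described; X's actual ranking algorithm is open source and weighs agreement among raters with differing perspectives rather than counting raw votes:

```python
from dataclasses import dataclass

MIN_RATINGS = 5  # notes need at least five ratings before algorithmic evaluation

@dataclass
class CommunityNote:
    text: str
    helpful: int = 0       # 'Helpful' ratings from contributors
    not_helpful: int = 0   # 'Not Helpful' ratings from contributors

    def status(self) -> str:
        """Classify the note into the three states described above."""
        if self.helpful + self.not_helpful < MIN_RATINGS:
            return "Needs more ratings"
        return "Helpful" if self.helpful > self.not_helpful else "Not Helpful"

    def publicly_visible(self) -> bool:
        # Only notes that reach 'Helpful' status are shown to all users;
        # everything else remains visible to contributors only.
        return self.status() == "Helpful"

def can_write_notes(rating_impact: int) -> bool:
    # Contributors unlock note-writing once their Rating Impact reaches 5
    return rating_impact >= 5

note = CommunityNote("Readers added context: ...", helpful=4, not_helpful=1)
print(note.status())            # 'Helpful' (five ratings, majority helpful)
print(note.publicly_visible())  # True
```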
Although Meta has yet to clarify precisely how its Community Notes will operate, Zuckerberg mentioned in a video that the system will be similar to X’s Community Notes.
