Meta Announces Big Changes in Content Moderation Policies, Ends Fact-Checking Program
In a recent announcement, Meta CEO Mark Zuckerberg unveiled major changes to the company's content moderation policies, including the elimination of its fact-checking program. The move marks a significant return to what he described as the company's foundational commitment to free expression.
He said that Meta, the parent company of Facebook and Instagram, will roll out a new feature called Community Notes on its platforms.
The feature is intended to promote free expression and encourage more nuanced conversations among users. The company will no longer rely on fact-checking to combat misinformation.
Notably, Meta has faced repeated backlash over the years for its handling of misinformation, including allegations of hypocrisy and inconsistency in its fact-checking practices.
Despite the company's efforts to meet users' needs, it has been criticized for its role in spreading misinformation and for how it has balanced free speech against the responsibility to regulate online content.
According to Zuckerberg, the decision to eliminate the fact-checking program is aimed at "restoring free expression" on the platform. He stressed that fact-checking of content sometimes proved overly broad and suppressed legitimate discussion.
The move comes ahead of the inauguration of United States President-elect Donald Trump's new administration. Trump will be sworn in as president on January 20, 2025, marking his return to the White House for a second, non-consecutive term.
This time, Elon Musk will also play a significant role in the new administration. He, along with Vivek Ramaswamy, will lead the newly established Department of Government Efficiency (DOGE).
Zuckerberg acknowledged that the changes were prompted in part by Trump's presidential victory.
"The recent elections also feel like a cultural tipping point towards once again prioritizing speech," he added in a video shared online.
Additionally, the new Community Notes will enable users to add context to misleading posts, offering a more collaborative approach to addressing misinformation. The feature is designed to encourage constructive conversations among users and to provide additional information that helps people better understand complex issues.
The policy will roll out first in the US, where Meta will end its fact-checking program with independent third parties. The company said it is discontinuing the program because expert fact-checkers have their own biases, which resulted in too much legitimate content being fact-checked.
"We've seen this approach work on X, where they empower their community to decide when posts are potentially misleading and need more context," Meta Chief Global Affairs Officer Joel Kaplan shared in a blog post.
It should also be noted that Meta plans to lift some restrictions on topics that are part of mainstream discussion, such as immigration and gender. This change is part of a broader shift toward focusing enforcement on illegal and high-severity violations, such as terrorism and child exploitation.
The company said its approach of building complex systems to manage content moderation had gone too far, made too many mistakes, and censored too much content.
Meanwhile, the latest changes to Meta's content moderation policies have sparked concern among experts, who warn that the move could lead to the spread of misinformation and harmful content.
Ross Burley, co-founder of the non-profit Centre for Information Resilience, voiced his concerns over the new Meta policy, suggesting that the move was motivated by "political appeasement" rather than a genuine desire to improve content moderation.