Meta drops fact-checking and loosens its content moderation rules



Meta, the parent of Facebook, Instagram and WhatsApp, today announced a major overhaul of how it handles content moderation across its platforms, removing some of the guardrails it had put in place over the last several years in response to criticism that it had helped spread political and health misinformation. In a blog post titled “More speech, fewer mistakes,” its new chief global affairs officer, Joel Kaplan, outlined changes in three key areas:

Meta is ending its third-party fact-checking program and moving to a Community Notes model, similar to the one used by sites like X.com.

It will also lift restrictions around “topics that are part of mainstream discourse,” focusing enforcement instead on “illegal and high-severity violations.”

Lastly, users will be encouraged to take a “personalized” approach to political content, making way for considerably more opinion and slant in people’s feeds, tailored to whatever they want to see.

Meta had put many of those provisions in place in the wake of political and public criticism over its role in spreading election misinformation, bad advice on Covid-19, and other controversies. That criticism led the company to form an Oversight Board, increase moderation, and introduce other levers to help people control what content they saw and to flag content they believed was toxic or misleading. None of that has sat well with everyone: some critics argue the policies are not strong enough, others that they lead to too many mistakes, and still others that the controls are too politically biased.

Then, over the last year or so, some of Meta’s commitment to those rules started to fall apart. Last month, Nick Clegg, the company’s outgoing policy chief, gave a mea culpa interview in which he acknowledged that the company had overdone its moderation. And the Oversight Board has never really proven to be as effective as it had hoped.

Now, with accountability changing with political tides, Meta appears to want to take a more hands-off approach. “Meta’s platforms are built to be places where people can express themselves freely. That can be messy. On platforms where billions of people can have a voice, all the good, bad and ugly is on display. But that’s free expression,” Kaplan writes in the post.

The moves are significant not least because they come just ahead of a new presidential administration in the U.S. Trump and his supporters have signalled an interpretation of free speech that favors encouraging a much wider set of opinions.

Facebook has been in the crosshairs of that criticism throughout, not least because at one point one of the people it banned from its platforms was… Trump himself.

For its part, the Oversight Board said that it “welcomes the news that Meta will revise its approach to fact-checking, with the goal of finding a scalable solution to enhance trust, free speech and user voice on its platforms.” It said that it would be working with Meta to shape its approach to “free speech in 2025.”

It added: “We would also like to take this opportunity to thank Nick Clegg who, as president of global affairs at Meta, was instrumental in overseeing the creation of the Oversight Board and has been a strong advocate for freedom of speech on Meta’s platforms. We look forward to Joel Kaplan’s leadership in continuing this important work.”

The developments also come at a time of change at Meta itself. CEO Mark Zuckerberg has signalled a stronger interest in working with the Trump administration. Yesterday, the three new board members appointed at the company included a major supporter of the incoming president: UFC head Dana White. Last week, Meta also replaced Clegg, its longtime public affairs head, with Kaplan, who had already been one of Meta’s most prominent Republicans.

More to come, refresh for updates.
