As unrest fueled by disinformation spreads, the U.K. may seek stronger powers to regulate tech platforms



The U.K. government has indicated it may seek stronger powers to regulate tech platforms, following days of violent disorder across England and Northern Ireland fueled by the spread of online disinformation.

On Friday, Prime Minister Keir Starmer confirmed there will be a review of the Online Safety Act (OSA).

The legislation, which was passed by parliament in September 2023 after years of political wrangling, puts duties on platforms that carry user-to-user communications (such as social media platforms and messaging apps) to remove illegal content and protect their users from other harms, like hate speech, with penalties of up to 10% of global annual turnover for non-compliance.

“In relation to online and social media, the first thing I’d say is this is not a law-free zone, and I think that’s clear from the prosecutions and sentencing,” said Starmer, emphasizing that those who whip up hate online are already facing consequences: the Crown Prosecution Service has reported the first sentences for hate speech posts related to the violent disorder being handed down.

But Starmer added: “I do agree that we’re going to have to look more broadly at social media after this disorder, but the focus at the moment has to be on dealing with the disorder and making sure that our communities are safe and secure.” 

The Guardian reported that confirmation of the review followed criticism of the OSA by the London mayor, Sadiq Khan, who called the legislation “not fit for purpose”.

Violent disturbances have wracked cities and towns across England and Northern Ireland after a knife attack killed three young girls in Southport on July 30.

False information circulating online erroneously identified the perpetrator of the attack as a Muslim asylum seeker who had arrived in the country on a small boat. That falsehood spread quickly, including through social media posts amplified by far-right activists, and disinformation about the killer’s identity has been widely linked to the civil unrest rocking the country in recent days.

Also on Friday, a British woman was reportedly arrested under the Public Order Act 1986 on suspicion of stirring up racial hatred by making false social media posts about the identity of the attacker.

Such arrests remain the government’s stated priority for its response to the civil unrest for now. But the wider question of what to do about tech platforms and other digital tools that are used to spread disinformation far and wide is unlikely to go away.

As we reported earlier, the OSA is not yet fully up and running because the regulator, Ofcom, is still consulting on guidance. Some might therefore say a review of the legislation is premature before at least the middle of next year, since the law has not yet had a chance to work.

At the same time, the legislation has faced criticism for being poorly drafted and for failing to tackle the underlying business models of platforms that profit from driving engagement via outrage.

The previous Conservative government also made major revisions in fall 2022, removing clauses focused on tackling “legal but harmful” speech, the category into which disinformation typically falls.

At the time, digital minister Michelle Donelan said the government was responding to concerns about the bill’s impact on free speech. However, another former minister, Damian Collins, disputed the government’s framing, suggesting the removed provisions had been intended only to apply transparency measures ensuring platforms enforce their own terms and conditions, such as in situations where content risks inciting violence or hatred.

Mainstream social media platforms, including Facebook and X (formerly Twitter), have terms and conditions that typically prohibit such content, but it’s not always obvious how rigorously they enforce these standards. (One recent example: on August 6, a U.K. man was arrested on suspicion of stirring up racial hatred after posting messages on Facebook about attacking a hotel where asylum seekers were housed.)

Platforms have long applied a playbook of plausible deniability, saying they took down content once it was reported to them. But a law that regulates the resources and processes they are expected to have in place could force them to be more proactive about stopping the free spread of toxic disinformation.

One test case is already up and running against X in the European Union, where enforcers of the bloc’s Digital Services Act have been investigating the platform’s approach to moderating disinformation since December.

On Thursday, the European Commission told Reuters that X’s handling of harmful content related to the civil disturbances in the U.K. may be taken into account in its own investigation of the platform, as “what happens in the U.K. is visible here”. “If there are examples of hate speech or incitements to violence, they could be taken into account as part of our proceedings against X,” a Commission spokesperson added.

Once the OSA is fully up and running in the U.K., expected by next spring, the law may exert similar pressure on larger platforms’ approach to dealing with disinformation, according to the Department for Science, Innovation and Technology. A Department spokesperson told us that, under the current law, the biggest platforms, which face the most extensive requirements under the Act, will be expected to consistently enforce their own terms of service, including where these prohibit the spread of misinformation.



