Depending on the Facebook groups or Twitter threads you frequent, respect can be hard to come by online. The UK government, however, deployed the word firmly last month when it declared it would fend off attempts to scale back its proposed internet curbs.
Critics of the online safety bill claim that, despite the polite language, the legislation is a wolf in sheep’s clothing. The much-debated bill returns to parliament on July 12, and MPs made clear this week that they object to the current version, arguing it hands the culture secretary excessive control over the internet.
Julian Knight, the Conservative chairman of the digital, culture, media and sport committee, expressed concern that the secretary of state would exert undue influence over Ofcom, the independent regulator tasked with implementing the act. He urged the removal of provisions that would allow Nadine Dorries, still the culture secretary at the time of publication, to direct Ofcom to alter its codes of practice, including those covering terrorist and child sexual abuse content, before parliament considers them.
A free press depends on shielding the regulator from the possibility of daily interference by the executive, Knight argued. “The government will still play a key role in determining the course of action, but Ofcom must not constantly be looking over its shoulder and responding to the whims of a backseat-driving secretary of state,” he said.
The government was courteous in its firm rejection. Last month, the digital minister, Chris Philp, said in a statement to a committee of MPs reviewing the bill that the government would “respectfully resist” efforts to limit the secretary of state’s authority.
On that point the government is not going to budge, but it is making changes nonetheless.
The law requires tech companies, or rather platforms that host user-generated content, such as the social media giants, along with major search engines such as Google, to observe a duty of care to protect users from harmful content. Broadly, this duty of care falls into three parts: preventing the spread of illegal content; ensuring that children are not exposed to harmful or inappropriate content; and, for major platforms such as Facebook, Twitter, and TikTok, protecting adults from legal but harmful content (such as cyberbullying and material relating to eating disorders).
The act will be enforced by Ofcom, which will have the power to fine violators up to £18m or 10% of their global turnover and, in extreme cases, to block websites or apps. Ofcom published its roadmap for implementing the act on Wednesday, with a focus on tackling illegal content within the first 100 days of the law coming into force.
Here is a brief summary of the changes expected as the bill moves to its next phase. Depending on its passage through the House of Lords, which is sure to raise a few objections of its own, it should become law by the end of the year or in early 2023.
Ch-ch-changes: confirmed amendments
The government is introducing a first batch of amendments in time for the report stage on July 12; a second batch will be announced shortly afterwards. One confirmed amendment will require tech companies to protect internet users from state-sponsored disinformation that poses a threat to UK society and democracy. This tightens the bill’s existing disinformation provisions, which already require tech companies to act against state-sponsored disinformation that endangers individuals, such as death threats.
A second confirmed change is also incremental. Under a provision of the bill targeting end-to-end encrypted services, Ofcom already has the authority to require platforms to adopt “accredited technology” to detect child sexual abuse and exploitation (CSEA) content. If that fails, they must now make “best efforts” to develop or deploy new technology capable of detecting and removing CSEA. The move appears to be a response to Mark Zuckerberg’s plans to roll out end-to-end encryption on Instagram and Facebook Messenger.
First do no harm: expected changes
Philp said during the committee stage that the government would introduce an offence of deliberately sending flashing images with the intention of triggering epileptic seizures, though it may not appear in the safety bill itself.
He added that the government would publish a list of “priority harms” to adults “in due course”, rather than after the bill is passed as originally planned. These are behaviours that are not criminal offences but that platforms will nonetheless be required to address; they are expected to include self-harm, harassment, and eating disorders. The fear is that this turns the bill into a censors’ charter, with tech companies cracking down on content that sits in the grey area of acceptability, such as satire.
William Perrin, a trustee of the Carnegie UK Trust charity, argues that those priority harms should be published in the amended bill so that MPs can debate them before the legislation passes. Media regulation, he contends, must be free from executive interference: the government should relinquish the power to define content that is harmful but not unlawful, leaving parliament to work out the details.
Expanding the list of crimes: other changes
The “priority harms” clause covers so-called category 1 tech companies, the heavy hitters such as Facebook, Instagram, Twitter, YouTube, and TikTok. There are calls to extend it to smaller but undeniably harm-hosting platforms such as 4chan and BitChute.
Last month, Philp also told MPs that he would consider requests to expand the list of illegal content, tied to actual criminal offences, that all companies within the bill’s scope must tackle. Among the crimes MPs want added are human trafficking and modern slavery. The bill’s “priority offences” list currently includes threats to kill and the illegal sale of firearms.