
Britain Passes Sweeping New Online Safety Law

Britain has passed a comprehensive law to regulate online content, which includes age-verification requirements for pornography sites and measures to reduce hate speech, harassment, and other illegal material. The Online Safety Bill covers areas such as terrorist propaganda, online fraud, and child safety. The law is considered one of the most extensive attempts by a Western democracy to regulate online speech. It took more than five years to develop, sparking intense debate over how to balance free expression and privacy against the need to block harmful material, especially content aimed at children.

At one stage, messaging services including WhatsApp and Signal threatened to pull out of the British market entirely unless provisions they saw as weakening encryption were revised.

According to Graham Smith, an internet law expert based in London, the British law goes beyond efforts in other regions by requiring companies to proactively screen objectionable material and assess its legality, rather than merely responding to reports of illicit content.

The law is part of a broader push in Europe to end the era of self-regulation, in which tech companies set their own policies about what content to remove or leave up. The Digital Services Act, a European Union law requiring companies to police their platforms more actively for illegal material, has also recently come into effect.

Michelle Donelan, the British technology secretary, hailed the Online Safety Bill as groundbreaking legislation and a major step toward making the UK the safest place to be online.

British politicians have faced pressure to pass the law amid concerns about the harmful effects of internet and social media use on the mental health of young people. Families who attribute their children’s suicides to social media have been among the most vocal advocates for the bill.

Under the new law, platforms must restrict children’s access to content that promotes suicide, self-harm, and eating disorders. Pornography companies, social media platforms, and other services will be required to implement age-verification measures to prevent children from accessing pornography. This requirement has raised concerns among some groups, which argue that it may hinder access to online information and compromise privacy. The Wikimedia Foundation, which operates Wikipedia, has said it may be unable to comply with the law and may be blocked as a result.

TikTok, YouTube, Facebook, and Instagram will also need to introduce features that let users limit their exposure to harmful content, such as material promoting eating disorders or self-harm, as well as racist, misogynistic, or antisemitic content.

“At its core, the bill proposes that providers should assess the foreseeable risks posed by their services and take steps to mitigate them, as many other industries already do,” said Lorna Woods, a professor of internet law at the University of Essex who contributed to drafting the law.

However, the bill has drawn criticism from tech companies, free speech advocates, and privacy groups, who argue that it threatens freedom of expression by encouraging companies to remove content too aggressively rather than risk penalties.

There are still uncertainties about how the law will be enforced. That responsibility falls to Ofcom, the British regulator that oversees broadcast television and telecommunications, which must now draw up rules setting out how companies are expected to keep users safe online.

Companies that fail to comply with the law may face fines of up to £18 million (approximately $22.3 million) or 10% of their global revenue, whichever is higher. Company executives could also face criminal action if they fail to provide information during Ofcom investigations or violate rules related to child safety and child sexual exploitation.