Changes to Online Safety Bill will not weaken protection for children, claims government


Online Safety Bill: Ban on ‘legal but harmful’ online content axed

A proposed requirement that would have forced big tech companies to remove ‘legal but harmful’ content has been dropped from the Online Safety Bill.

The section – which applies to material that promotes or glorifies eating disorders, self-harm and suicide – had proved controversial with critics who argued it threatened free speech.

But according to Reuters, Lucy Powell, Labour’s culture spokesperson, said: “Removing ‘legal but harmful’ gives a free pass to abusers and takes the public for a ride. The government has bowed to vested interests, over keeping users and consumers safe.”

However, Culture Secretary Michelle Donelan denied watering down the legislation and argued that the changes do not undermine the protections for children.

The bill previously included a section that required large platforms such as Facebook, Instagram and YouTube to tackle some legal but harmful material accessed by adults.

However, despite the changes, social media giants will still have to stop youngsters – defined as those under 18 – from seeing content that poses a risk of causing significant harm.

The bill – which aims to rewrite the UK’s rules for policing harmful content online – is intended to become law in the UK before next summer.

Tech executives could face large fines and even jail, and platforms could be blocked, if they are found to breach the new rules.

Free speech campaigners had claimed the legislation opened the door for tech firms to censor legal speech. Trade Secretary Kemi Badenoch said it was “legislating for hurt feelings”.

With the requirement now removed, companies will instead have to introduce a system allowing adult users more control to filter out harmful content they do not want to see.

Sharing pornographic deepfakes will be a crime in England and Wales

The sharing of non-consensual pornographic deepfakes – explicit images or videos manipulated to depict someone without their consent – will be made illegal, the government has announced.

The new offence, which could see offenders jailed, will form part of the controversial Online Safety Bill, which will be reintroduced to Parliament on December 5.

‘Downblousing’ – where photographs are taken down a woman’s top without consent – will also be criminalised. The move would bring it in line with an earlier law against ‘upskirting’.

The legislation would also make it easier to charge people with sharing intimate photos without consent, because prosecutors would no longer need to prove an intent to cause distress. Campaigners had argued that the existing law allowed men to evade justice by admitting they shared images without consent while claiming they did not intend to cause any harm.

Around one in 14 adults in England and Wales has experienced a threat to share intimate images, according to the government. More than 28,000 reports of private sexual images being disclosed without consent were recorded by police between April 2015 and December 2021.

Deepfake porn is a rising problem, with one website that creates nude images from clothed ones receiving 38 million visits last year. A BBC Panorama investigation in August found that women’s private, explicit photos and videos are being traded by men on social media platform Reddit.

Professor Penney Lewis of the Law Commission welcomed the move. “Taking or sharing intimate images of a person without their consent can inflict lasting damage,” she said. “A new set of offences will capture a wider range of abusive behaviours, ensuring that more perpetrators of these deeply harmful acts face prosecution.”

TikTok and Bumble crack down on revenge porn

TikTok and Bumble are to join an initiative that aims to combat revenge porn – intimate images and videos shared without the person’s consent.

The tech companies have partnered with StopNCII.org (Stop Non-Consensual Intimate Image Abuse), which detects and blocks images that have been reported to it, according to Bloomberg.

StopNCII.org builds on technology from a pilot that Facebook and Instagram launched in Australia a year ago, which has helped more than 12,000 people prevent more than 40,000 photos and videos from being shared without their consent.

People being threatened with intimate image abuse can use the tool to create a digital fingerprint, known as a hash, of the photo. If an image uploaded to a participating platform matches that hash, a moderator will investigate it. If it meets the criteria, the picture will be blocked from being posted on the platform and taken down if it has already been shared. Intimate images are neither transferred to nor stored on the StopNCII.org website; in fact, the image never leaves the user’s device.
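To make that flow concrete, here is a minimal sketch in Python of how hash-and-match moderation can work. It is illustrative only: the function names are invented, and while StopNCII.org is understood to use a perceptual hash (one that survives resizing and re-encoding), the sketch substitutes SHA-256 so the example stays self-contained.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Runs on the user's own device: only this hex digest is submitted,
    # never the image itself. (Real systems use perceptual hashes;
    # SHA-256 is used here purely for illustration.)
    return hashlib.sha256(image_bytes).hexdigest()

# --- on the user's device ---
private_photo = b"<raw image bytes>"         # stand-in for real image data
submitted_hash = fingerprint(private_photo)  # only this string is uploaded

# --- on a participating platform ---
reported_hashes = {submitted_hash}           # hash list shared with platforms

def needs_review(upload: bytes) -> bool:
    """True if an upload matches a reported hash and should be held
    for a human moderator instead of being posted."""
    return fingerprint(upload) in reported_hashes

print(needs_review(private_photo))   # True  -> held for moderation
print(needs_review(b"holiday pic"))  # False -> posted normally
```

One design point worth noting: an exact cryptographic hash like SHA-256 changes completely if even a single pixel changes, which is why production systems favour perceptual hashes that still match visually similar copies of an image.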

StopNCII.org is operated by the UK-based charity behind the Revenge Porn Helpline. A report by the organisation found that revenge porn cases increased by over 40 percent between 2020 and 2021, rising from 3,146 cases to 4,406.

In 2015, Google announced it would remove links to revenge porn on request and Microsoft soon followed suit. Both have published online forms through which victims can ask for help.

It comes amid plans by the government to force platforms that host user-generated content to take down non-consensual intimate images more swiftly, as laid out in its Online Safety Bill.

One in four children’s social media accounts has a fake age

A disturbing report of a 13-year-old boy exposing himself to an older woman after copying what he had seen in pornography highlights the dangers of exposure to explicit content at an early age.

More than 3.6 million social media accounts have been set up by children who have registered with a false date of birth, the advertising regulator has revealed.

Research found that 93 percent of young people aged 11 to 17 say they have an account with Facebook, Instagram, Snapchat, TikTok, Twitch, Twitter or YouTube, with a quarter (24 percent) lying about their age when they set up their profiles.

The report by the Advertising Standards Authority (ASA) raised concerns that, as a result, youngsters are accessing alcohol, gambling and other age-restricted adverts.

It found that children who had registered with a false age were exposed to almost two-thirds more age-restricted ads than under-17s who set up their profiles with their actual age.

The study discovered that youngsters are signing up for social media at increasingly young ages. Among 11- to 12-year-olds – younger than the minimum age of registration – 67 percent of profiles were set up before secondary school.

For its 100 Children Report, the ASA surveyed 1,000 children and directly monitored the smartphones and tablets of 97 children across the UK to see exactly what ads appeared while logged in to social media.

ASA director Guy Parker said: “This study is the latest example of how we’re developing new tools and methodologies to gain a real, up-to-date understanding of the ads young people are seeing on websites, social platforms and apps.

“With many children registering on social media with a false age, it’s vital that marketers of age-restricted ads consider their choice of media, use multiple, layered data to target their ads away from young people and monitor the performance of their campaigns. Targeting solely on the basis of age data is unlikely to be enough.”

News from elsewhere this week:

Kate Winslet: Parents feel powerless over children’s social media use – BBC

Twitter leans on automation to moderate content as harmful speech surges – Reuters

Can age verification stop children seeing pornography? – BBC

What is Discord, the voice and text chat app popular with gamers? – Washington Post

How accurate is mental health advice on TikTok? [infographic] – SocialMediaToday


Gooseberry Planet offers a package of over 50 lesson plans, slides, digital workbooks and online games for children aged 5-13 years.  Visit our website for more details.
