Rise In Child Abuse, Youtube Disinformation, Metaverse Risk & More


MORE CHILD ABUSE ONLINE IN 2021 THAN EVER BEFORE: IWF

More online child sexual abuse material was recorded in 2021 than in any other year, according to research by the Internet Watch Foundation (IWF).

The IWF investigated 361,000 reports of online child sexual abuse in 2021, more than in the first 15 years of its existence combined, while IWF analysts took action against over a quarter of a million URLs containing imagery of children being sexually abused.

Self-generated sexual imagery of 7-10 year olds increased three-fold in 2021 compared with 2020. Children aged 11-13, however, continue to form the biggest age group for self-generated sexual content. This is content created by children themselves using webcams, often in their own bedrooms, frequently as a result of online grooming or extortion.

The announcement comes as the UK government launches its new “Stop Abuse Together” campaign, which aims to help parents and guardians spot the signs of sexual abuse and keep their children safe online.

“Children are being targeted, approached, groomed and abused by criminals on an industrial scale,” said Susie Hargreaves OBE, Chief Executive of the IWF.

“So often, this sexual abuse is happening in children’s bedrooms in family homes, with parents being wholly unaware of what is being done to their children by strangers with an internet connection.”

“Parents need to be supported in knowing how to broach the topic with their children, and to give them the confidence to call out inappropriate behaviour when they see it.”

YOUTUBE ACCUSED OF ALLOWING DISINFORMATION TO SPREAD

YouTube is allowing its platform to be weaponized by “unscrupulous actors” to manipulate and exploit users, a consortium of fact-checking organisations has said.

In an open letter addressed to YouTube CEO Susan Wojcicki, the group argues that YouTube’s existing policies are inadequate for removing abusive and misleading information on its platform.

“We are glad that the company has made some moves to try to address this problem lately but based on what we see daily on the platform, we think these efforts are not working — nor has YouTube produced any quality data to prove their effectiveness,” the group says in the letter.

“Given that a large proportion of views on YouTube come from its own recommendation algorithm, YouTube should also make sure it does not actively promote disinformation to its users or recommend content coming from unreliable channels.”

The group makes a number of recommendations for YouTube to strengthen its content moderation policies, including combatting disinformation in languages other than English, providing context and debunks alongside conspiracy-theory content, and acting against repeat offenders who promote disinformation.

The group has requested a meeting with YouTube to discuss ways to improve its content moderation strategies.

Speaking to the Guardian, YouTube spokesperson Elena Hernandez said the company continued to look for ways to improve its efforts to reduce misinformation on its platform.

METAVERSE “NOT SAFE FOR KIDS”: WATCHDOG

Facebook’s new VR Metaverse is not safe for younger users, research by an online hate watchdog has found.

Users are exposed to abusive behaviour such as racism and pornographic content every seven minutes, according to the Center for Countering Digital Hate.

Facebook did not respond to at least 51 reports of abusive behaviour, including sexual harassment and grooming of minors, the research shows.

Metaverse “connects users not just to each other but to an array of predators, exposing them to potentially harmful content every seven minutes on average,” said Imran Ahmed, Chief Executive of the Center for Countering Digital Hate.

Meta’s terms of service do not allow under-13s to create an account, the company told CNBC.

TWITCH BANNED OVER 15 MILLION BOT ACCOUNTS IN 2021

Over 15 million bot accounts on Twitch were banned in 2021, the livestreaming platform has announced.

The announcement comes after reports of widespread ‘hate raids’ on the platform in 2021, in which users took advantage of the site’s ‘raid’ feature to abuse and attack streamers from marginalised communities.

“Our community experienced some of the most vicious attacks ever seen against streamers – particularly streamers of colour, members of the LGBTQIA+ community, and military veterans,” said Twitch VP of Global Trust and Safety, Angela Hession.

“This kind of behaviour has no place on Twitch and we know there’s more we can do to protect our community.”
