TikTok accused of failing to disclose scale of harmful videos


Social media firms could be forced by law to disclose this data

TikTok has been accused of failing to reveal data on the harmful content shown on its platform, including content promoting self-harm, suicide and eating disorders.

The video-sharing app, which is popular with the younger generation, is being selective over what viewing data it reveals, say researchers.

It comes as peers seek to change the law to force social media companies to disclose their data secrets.

Child safety campaigners, including the NSPCC, have previously said the app has chosen to deny the problem and must take meaningful action.

The Telegraph reports that readily available data on TikTok's Creative Center includes the most popular videos and their hashtags, viewing figures broken down by age and geography, and trends over time.

However, there is said to be scant information collated in a similar way for harmful videos that garner millions of views on the site, according to the Center for Countering Digital Hate (CCDH) and Reset.tech.

The Online Safety Bill is due to become law this year and imposes a duty of care on tech giants to protect youngsters from harmful content. Under the plans, social media bosses who fail to protect under-18s from harm online could face jail. The bill will also give Ofcom the power to fine companies up to £18m or 10 per cent of worldwide turnover.

Former health minister Lord Bethell has tabled a cross-party amendment to the legislation that would give independent researchers a legal right to access data currently only available after it has been screened by the social media firms.

The bill is set to be considered this week in the House of Lords.

The UK’s communications regulator would also be allowed to give researchers access to the anonymised data, provided they can prove their independence, the legitimacy of their intentions and the absence of any commercial interests.

Lord Bethell said: “It is obvious to every mum and dad in the country that there’s stuff going on in the companies’ algorithms that creates extremely addictive and disturbing effects on some children as well as adults in terms of gambling and extremism.

“It is critically important that we understand how these algorithms work so we can prioritise interventions and support.”

However, TikTok said the data published through its Creative Center is intended for advertisers, not for researchers seeking to understand the scale of harmful videos, and that it provides other means by which people can track content.

TikTok says its community guidelines ban eating disorder-related material on its platform, including hashtags explicitly associated with such content.

However, users often make subtle changes to terminology so they can carry on posting potentially disturbing content that evades the firm’s moderators.

In December, the CCDH released a report that identified 56 TikTok hashtags using “coded” language, under which it found potentially harmful eating disorder content.

Imran Ahmed, CCDH chief executive, said: “One of the fundamental ways we can hold tech companies to account is to open the bonnet of these machines and let independent researchers scrutinise what the data tells us about users’ exposure to harmful content.

“But social media platforms have a track record of concealing embarrassing data – data which might point to their catastrophic failures to keep children safe.”

TikTok says it has removed harmful content. A spokesman said: “TikTok is committed to enhancing transparency into content on our platform as well as providing ongoing visibility into content removed for violating our policies through our Community Guidelines Enforcement Reports.

“The Creative Center serves as a business tool created to support advertisers with inspiration, insights, and creative tools to develop effective TikTok ads, not as a transparency tool.”

