Tech to Identify Child Abuse, App Store Age Ratings Investigated & More



The UK Government has announced five recipients for its “Safety Tech Challenge Fund,” a series of grants to businesses with proposals to tackle online child abuse using technology.

Five winning projects have each been given an initial grant of £85,000 to develop their technology proposals, with the prospect of a further grant of £130,000 if the development of their proposal appears promising.

The recipients of the funds are based in locations including Edinburgh, Cambridge and Austria, and have submitted a variety of proposals, including the development of live-video AI moderation and age-assurance verification tools.

One of the recipients, a tech group called T3K-Forensics, says it will use the funds to develop AI-based software that analyses smartphone data to detect new recordings of child sexual abuse material.

“Smart and easy ways to share harmful data at any time require equally smart and easy ways to find and stop this data spread,” said Martina Tschapka, Director Operations & Content Manager Online Child Safety at T3K-Forensics.

“Let’s not forget that numerous stories of suffering and vulnerability lie behind this material. It is T3K’s mission to play our part in stopping the vicious circle of suffering and revictimisation, and we are thrilled that our solution was picked to be a part of this crucial project.”


The UK's data watchdog, the Information Commissioner’s Office (ICO), has written to Apple and Google about the process by which they arrive at age ratings for apps on their respective app stores.

The letter follows concerns raised by the campaign group 5Rights Foundation over the extent to which tech firms were complying with the UK’s Age Appropriate Design Code. An investigation by 5Rights found at least 12 systemic breaches of the Code relating to the creation of an appropriate environment for children without a high risk of harm.

In her letter to 5Rights, Information Commissioner Elizabeth Denham said “we have contacted Apple and Google to enquire about the extent to which the risks associated with the processing of personal data are a factor when determining the age rating for an app.”

Denham also announced a roundtable discussion and a call for evidence on age-assurance standards, to deepen the ICO’s understanding of how those standards have been applied across the industry so far.

“We will use the findings of these activities to inform the scope of any further regulatory action in relation to age assurance,” she said.


Video-sharing app TikTok has announced a suite of new updates aimed at improving safety for younger users of the platform.

As part of the new measures, TikTok will expand its technology to identify and remove harmful content, improve the language of its content warning labels, and tackle content featuring hoaxes relating to suicide and self-harm. New resources will be added to TikTok’s “Safety Centre”, and users will be prompted to read them.

The new measures follow research commissioned by TikTok into how younger users assess risk and the effect that exposure to harmful content can have on their mental health. A survey of over 10,000 teens, parents and teachers found that 31% of teens reported a negative personal impact after watching hoax videos about suicide and self-harm.

“We hope the work we’ve undertaken with these world-leading experts can help make a thoughtful contribution to this topic that others can draw insights and opportunities from as well,” said Alexandra Evans, TikTok’s head of safety public policy for Europe.

“For our part, we know the actions we’re taking now are just some of the important work that needs to be done across our industry, and we will continue to explore and implement additional measures on behalf of our community.”

Gooseberry Planet offers a range of resources for children, teachers and parents. Visit our website for more details.
