Researchers have examined a tool designed to tackle the sharing of child abuse images
Technology aimed at tackling the sharing of child abuse images could turn millions of phones into facial recognition tools, researchers have warned.
A new study examined a tool – which tech firms may be forced to introduce under a new law – that scans the public’s private messages for images of child abuse before they are sent.
The research shows the technology could also enable governments to perform facial recognition on the phones of hundreds of millions of people, catching wanted criminals without the user’s knowledge.
Researchers at Imperial College London examined the potential privacy implications of a tool called client-side scanning (CSS).
It adds to the ongoing row over how the Online Safety Bill – which is currently going through Parliament – will tackle encryption.
The legislation will essentially outlaw end-to-end encryption, under which private messages can be read only by the sender and the recipient, and not even by the service provider, such as WhatsApp.
Child safety campaigners argue that the Bill must ban it to tackle the horrors of child abuse. However, critics say the proposed law would create a ‘back door’ for rogue nation states, terrorists and criminals to take advantage.
WhatsApp, Signal and other messaging services have said they will leave the UK rather than abide by a requirement to weaken encryption.
The Bill could mean apps are forced to install CSS, which would scan images on a phone before they are encrypted and sent.
CSS uses a technique known as ‘neural hashing’: images and videos are checked, via a kind of digital fingerprint, against a database of prohibited content, and flagged for moderation if there is a match.
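The matching step can be sketched in miniature. The toy ‘average hash’ below is a stand-in for the neural-network-derived fingerprints real CSS systems use (Apple’s NeuralHash, for instance); the function names, image sizes and threshold here are illustrative assumptions, not details from the study. The key property is that small edits to an image leave its fingerprint close to the original, so near-matches against the database are still caught.

```python
# Toy perceptual fingerprint: a stand-in for neural hashing.
# Real systems derive the fingerprint with a neural network;
# the database-matching step shown here is analogous.

def average_hash(pixels):
    """Hash a grayscale image (rows of 0-255 ints) to a bit tuple:
    each bit records whether that pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Number of bits on which two fingerprints differ."""
    return sum(x != y for x, y in zip(a, b))

def flag_for_moderation(image, database, threshold=1):
    """Flag the image if its fingerprint is within `threshold` bits
    of any fingerprint in the prohibited-content database."""
    h = average_hash(image)
    return any(hamming(h, known) <= threshold for known in database)

# A known image and a slightly altered copy produce near-identical
# fingerprints, so the altered copy is still flagged.
known_image = [[10, 200], [220, 30]]
database = {average_hash(known_image)}
altered = [[12, 198], [219, 33]]   # minor pixel-level edits
print(flag_for_moderation(altered, database))   # True
```

Because the image is hashed and matched before it is encrypted and sent, the check can run entirely on the device.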
CSS has been hailed as a compromise between privacy and the need to police illegal content. Because it happens on the device, rather than through a messaging provider, it does not interfere with the end-to-end encryption between sender and receiver.
However, the authors of the Imperial College London paper argue that their findings show the risks are not yet well enough understood to deploy CSS technology on hundreds of millions of devices.
CSS has already been developed in the US by companies like Apple. But under huge pressure from privacy and security advocates, the tech giant abandoned its plans to introduce it.
The corresponding author of the paper, Dr Yves-Alexandre de Montjoye, of the university’s Department of Computing, said: “This Bill could result in the installation of software to check you don’t share images known to contain child sexual abuse material.
“But what our paper shows is that the software could be built or tweaked to include other hidden features such as scanning private content from the phones of hundreds of millions of people using facial recognition, the same technology used at airport gates.”
To carry out the study, the team recreated the algorithms behind CSS, which match the signatures of images against a database of known illegal material. They then adapted the software to also scan content for wanted faces, and found it highly accurate at identifying wanted people in images.
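The paper’s concern can be illustrated with a sketch: the on-device matching machinery is largely content-agnostic, so the same nearest-neighbour check works whether the database holds fingerprints of known illegal images or embeddings of wanted faces. Only the database and the feature extractor change. The vectors, names and threshold below are illustrative assumptions, not figures from the study.

```python
# Hypothetical sketch: the same matcher, repurposed by swapping the
# database. Embeddings are illustrative 3-dimensional vectors; real
# face embeddings are high-dimensional outputs of a neural network.
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def scan(embedding, watchlist, threshold=0.95):
    """Report a hit if the embedding is close to any watchlist entry.
    The logic is identical whether the watchlist holds image
    fingerprints or face embeddings."""
    return any(cosine_similarity(embedding, w) >= threshold for w in watchlist)

# An embedding close to a watchlisted face is flagged...
wanted = [(0.1, 0.9, 0.4)]
print(scan((0.11, 0.88, 0.41), wanted))   # True
# ...while an unrelated face is not.
print(scan((0.9, 0.1, 0.2), wanted))      # False
```

This is the sense in which, the authors argue, a scanner shipped for one purpose could be quietly ‘tweaked’ to serve another.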
Co-author Shubham Jain, also from Imperial’s Department of Computing, said: “It’s vitally important to tackle illegal content online and we must do so in effective ways. However, CSS threatens to add a backdoor into personal devices, sacrificing the privacy of millions.”
Dr de Montjoye said: “It is our opinion that client-side scanning is not the innocuous ‘single purpose’ technology it has been described to Parliament as. We call on policymakers to thoroughly evaluate the pros and cons of client-side scanning, including the risk of it being abused, before passing laws that could result in its installation on millions of phones.”
In April, Ciaran Martin, former CEO of the National Cyber Security Centre, wrote in an opinion piece in the Financial Times: “This controversial power will be driven through, but likely never used. Cue another bitter and damaging row about Britain’s perceived hostility to encryption, but with no actual benefit to those fighting online harms.”