I was discussing plans with a young person to explore parts of the UK they had never been to before. When I pulled out a road atlas for her, she commented on how quaint that was. It reinforced to me how much life has changed. Paper maps, newspapers, letters, cheque books and even money are almost things of the past. Young people's lives are, to a large extent, lived, researched and organised online.
In this new reality, the Government's new guidance on online safety* rightly identifies the need to teach pupils how to evaluate what they see online. There are many scenarios in which this is important for their safety: an obvious priority is recognising grooming or radicalisation; distinguishing between well-informed health websites and those driven by commercial motivation or offering dubious advice is another. And then there is the whole issue of fake news.
The growth in social media usage (and the way that it can be monetised) has both enabled and encouraged the proliferation of fake news. Online anonymity, fake profiles and automated bots all contribute to the spread of disinformation but ordinary social media users, including children, who onward share such content, also unwittingly play a part.
Social media platforms and teaching fake news
As social media platforms have increased the number of fact checkers trying to counter this growth, so the producers of fake news have become better at evading them. Facebook's Community Standards Enforcement Report** shows that it removed over 2 billion fake accounts in the first quarter of this year alone. Fake news stories are also likely to become harder to detect as a result of deepfake technology, which uses artificial intelligence to produce increasingly realistic voices and images. It is now possible to analyse and mimic an individual's speech, seamlessly blend a face onto a different body (used in fake pornography, among other things), or convincingly synchronise lip movements to match words allegedly being spoken.
It is human nature to be intrigued by shocking stories, especially about public figures, but most of us retain a healthy scepticism of unlikely stories from doubtful sources. Children, on the other hand, with their limited life experience, are more vulnerable to believing what they read, hear or see, and need help to recognise the possibility that online content has been faked or manipulated. Fake news is not harmless: it contributes to social division, which can fuel extremism and violence, and it risks creating a society where truth can be dismissed as "fake news" and no-one believes anything, because they cannot be sure what is true and what is not.
There are a number of clues that children can be taught to look out for, such as checking the source, the style of language and the date, fact checking, and looking at other news sites for corroboration, but this on its own is not enough. In addition, they need to be aware of the growing capability to alter voices and images realistically. We need to teach a healthy scepticism (especially of items supporting unlikely, extreme or political views) and an awareness of the motivations (including those of foreign powers who seek to sow discord in our communities) that may lie behind online posts. We need to help them develop skills of thoughtfulness, reflection and reasoned debate, so that they question the views they are presented with and consider alternatives. We also need to make them aware that when ordinary users share fake news with their friends, they help to spread the problem and potentially damage their own society in doing so.
This will be a challenging topic for schools to address, but an important one if we are to prepare our children for their changing world.
** (These figures include fake accounts created for all reasons, not just to spread fake news.)