Ofqual to look into possible ban on AI bot ChatGPT in English schools
The rapid development of artificial intelligence (AI) is increasingly having an impact on education, bringing opportunities as well as challenges.
One of the most sophisticated chatbots yet, ChatGPT, is sparking alarm for its ability to generate convincing essays whose output can evade much of the existing anti-plagiarism software.
Now the AI bot, which launched in November 2022, has been banned in New York schools amid fears it is being used to cheat on tests. The New York City education department has restricted access to ChatGPT on all of its devices and internet networks in schools, citing concerns about “safety and accuracy”.
“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” said Jenna Lyle, a spokesperson for the New York City Department of Education, in a statement to The Washington Post.
England’s exam watchdog Ofqual is reported to be looking into developing new guidance for schools to prevent AI tools like ChatGPT being used by pupils.
ChatGPT, released by the Silicon Valley company OpenAI, is designed to understand human language and hold conversations with humans; having been trained on a huge sample of text from the internet, it can generate unique content in a style of writing dictated by the user.
Twitter and Tesla chief Elon Musk, who co-founded OpenAI but left in 2018, tweeted “It’s a new world. Goodbye homework!” in response to an article about the New York ban on ChatGPT on school devices.
An investigation by The Telegraph revealed ChatGPT’s potential for cheating on exams, reporting that when teachers reviewed the bot’s answers to GCSE questions, many of the responses would be marked between a pass and a grade six (GCSE grades run up to nine).
When it comes to the AI tool being used for harm, there are a number of safeguards in place, but users have reportedly been able to trick it into generating illegal or unethical responses, such as shoplifting tips.
An investigation by Vice found that, with the right prompt, ChatGPT can be manipulated into giving instructions for committing crimes, making bombs, and even taking over the world.
Fresh fears over more delays to Online Safety Bill as poll shows majority of people want tougher measures
New laws to protect children from online harm face further delays – prompting critics to warn this could risk the entire bill if it runs out of parliamentary time.
The long-awaited Online Safety Bill could be pushed back because the Prime Minister will reportedly look to prioritise fresh legislation to tackle continuing strikes across the public services.
The parliamentary timetable this year is increasingly tight, and delays could spell the end for the proposed law: any bill that does not receive Royal Assent by the end of a parliamentary session falls unless it is “carried over” – and this Bill has already been carried over once, which can only happen once.
It comes as a cross-party group of MPs has backed an amendment to the Bill that would mean tech bosses are held accountable should their platforms have contributed to serious harm, abuse, or the death of a child.
New research suggests an overwhelming majority of UK adults – 80 per cent – want social media companies to appoint senior managers who are accountable for harm to children on their platforms.
The proposed legislation intends to end self-regulation for platforms such as Facebook and Instagram and force them to remove harmful content.
Responding to critics who argued it curtailed freedom of speech, the Government ditched plans to in effect outlaw online material that is judged as “legal but harmful”, and dropped proposals to make social media giants liable for significant financial penalties for breaching regulations.
Online safety hit the headlines after a coroner last year ruled that schoolgirl Molly Russell, 14, died from “an act of self-harm while suffering from depression and the negative effects of online content”.
The Bill was due to return to the Commons for its report stage on Monday 16 January, but according to inews.co.uk, the Government is considering holding it back as Rishi Sunak seeks to force through new laws to impose minimum service levels on rail, schools and the NHS.
The new online laws were dropped from Commons business twice last year due to political turmoil. In early December, Culture Secretary Michelle Donelan personally “guaranteed” that the Online Safety Bill would become law.
In its current form, the Bill would only hold bosses responsible for failing to give information to regulator Ofcom, rather than for corporate decisions that result in preventable harm or sexual abuse.
MPs including the Labour shadow cabinet and Conservatives Bill Cash and Miriam Cates are calling on the Government to amend the legislation to ensure companies are liable for such incidents.
Sir Peter Wanless, the chief executive of the NSPCC, which commissioned the YouGov survey, said the Bill should provide “bold, world-leading regulation that ensures the buck stops with senior management”.
TikTok change restricts content shown to children
TikTok has rolled out more audience controls for creators, allowing them to block their content for users under 18 – amid growing pressure on social media companies to create better safeguards for minors.
The changes put the onus on creators to keep inappropriate content away from children because, the video-sharing platform says, inappropriate or “suggestive” content is much harder for it to detect automatically.
Since November 2022, creators have been able to restrict content on TikTok Live, meaning some livestreams do not show up for under-18s.
The same technology will now enable those posting videos to block under-18s from seeing their standard videos on the app. The changes will be rolled out globally in the coming weeks.
The development comes after a recent study found TikTok‘s recommendation algorithm “bombards” teenagers with self-harm content and eating disorder content within minutes of them using the app.
According to research by the Center for Countering Digital Hate (CCDH), it took as little as 2.6 minutes for the app to show users who indicated a preference for such material – including girls as young as 13 who were registered as under-18s – videos featuring dangerously restrictive diets, pro-self-harm content and content romanticising suicide.
Indeed, user accounts classed as “vulnerable” were shown these kinds of clips 12 times more often than “standard” accounts.
TikTok will still remove any content that violates its community guidelines. The firm insists that its “strict policies prohibiting nudity, sexual activity, and sexually explicit content” will still apply to creators who use this new feature.
In a post it wrote: “Our goal has always been to make sure our community, especially teens on our platform, have a safe, positive and joyful experience when they come to TikTok.”
The metaverse helps children who refuse to go to school
The metaverse is being used to teach children in Japan who refuse to attend school.
Some 110 elementary and junior high school students have taken part in the programme of immersive classes – with 10 per cent of children afterwards having gone back to school as normal.
Not-for-profit organisation Katariba launched Room-K to help youngsters create a trusting relationship with counsellors and gain a sense of belonging. There’s also a focus on acquiring social skills along with concentrating on studying.
The Japanese government is looking at ways to deal with the growing number of students refusing to attend school, which is said to be down to the impact of the Covid-19 pandemic and complex home environments.
Room-K offers pupils freedom of choice, allowing them to pick the subjects and the times they want to study, with 45-minute sessions created for multiple subjects including Japanese, programming and reading with other students, reports The Japan Times.
Participants can choose avatars such as heroes and princesses and move around freely in the metaverse space. By approaching other avatars, they can speak to other students through video calls, which is said to replicate break times in traditional schools.
Katariba’s Tomotaka Segawa, who is in charge of Room-K, said: “More children will be saved if online connections are turned into an opportunity to support them.”
Teens who frequently check social media may experience brain changes
New research has identified a possible link between youngsters habitually checking social media and brain changes that are associated with sensitivity to feedback from their peers.
Researchers at the University of North Carolina carried out MRI scans of nearly 200 children aged 12 to 15 – a period of especially rapid brain development – over two years.
They were split into groups according to how often they checked Facebook, Instagram and Snapchat.
The study found that youngsters with high engagement with social media at around age 12 showed a distinct trajectory, with their sensitivity to social rewards from peers heightening over time. Teens who checked their feeds less often followed the opposite path, with a declining interest in social rewards.
The study, published in JAMA Pediatrics, is one of the first ever long-term studies on child brain development and technology use.
Habitual users in the study reported checking their feeds 15 or more times a day; moderate users checked between one and 14 times; non-habitual users checked less than once a day.
The study authors acknowledged a key limitation: it cannot be determined whether the brain changes are driven by social media use, or whether they reflect a natural tendency in adolescence – a period of expanding social relationships – to pivot toward peers, which could itself be driving more frequent social media use.
The study authors wrote: “Our findings suggest that checking behaviours on social media in early adolescence may tune the brain’s sensitivity to potential social rewards and punishments.
“It is difficult to determine whether social media use prior to data collection caused these distinct neural trajectories or pre-existing differences in neural activation placed some youth at risk for more habitual checking behaviours.”
The neuroscientists said more studies are needed to examine long-term associations between social media use and brain development in youngsters.
News from elsewhere this week:
Adeniyi Alade: Parents should familiarise themselves with VR to keep kids safe – Press and Journal
How I (mostly) stopped my teen from gaming all night – New York Times
My five-year-old son has started playing Roblox. Should I be worried? – inews.co.uk