AI’s future role in safeguarding children online
There are a lot of people online who lie about themselves. Facebook has always been upfront about this: it recently admitted that up to 270 million of its accounts are fake or duplicates in some way. These aren’t exclusively children misrepresenting their age to get onto the platform, but many of them undoubtedly are – and the same will be true across all other social platforms.
Recently, Ofcom found that half of children aged 11 and 12 have a social media profile, despite the minimum age for most platforms being 13. Eight in ten parents whose children are on social media are unaware of the restrictions. It’s clear that many underage children use social media, and we all know that when a child’s peer group applies pressure to join a platform, the child will. It’s no different to underage drinking – if it’s seen as cool and your friends are doing it, you’re highly likely to join in. How many of us ‘old fogies’ blagged our way into a licensed bar when we were too young? Nothing has changed.
The dark side of social
Of course, the flip side to this is that older people can also pretend to be younger.
There has been a worrying rise in ‘sextortion’ where – commonly – online criminals pose as attractive women, luring teenage boys (and young men) into their confidence until they send explicit pictures over to the criminals. The criminals then threaten to post the images publicly unless the young person either hands over money or performs another task.
How robots can help
There are a lot of organisations trying to help with this problem, but it’s not easy. The Internet Watch Foundation takes down tens of thousands of child sexual abuse images every year. Microsoft acts in a similar fashion – but we have also seen troubling cases where the staff involved in this work developed Post-Traumatic Stress Disorder after viewing child sexual abuse content. And however quickly images are taken down, removal after the event is too late – they should never have been seen in the first place.
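At a technical level, automated takedown typically works by matching uploads against a database of hashes of known illegal images. Production systems use perceptual hashes (such as Microsoft’s PhotoDNA) that survive resizing and re-encoding; the sketch below uses exact SHA-256 digests and a hypothetical blocklist purely to illustrate the matching step:

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known-bad images.
# (This digest is the hash of the bytes b"test", used as a stand-in;
# real blocklists hold perceptual hashes, not exact digests.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_image(data: bytes) -> bool:
    """Return True if the uploaded bytes match a hash on the blocklist."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES
```

The advantage of the hash approach is that no human moderator ever needs to look at a flagged image again once it is in the database – which speaks directly to the trauma problem above.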
Furthermore, the problem doesn’t end with ‘simple’ image recognition – grooming and sextortion don’t just involve image exchange; they involve conversations as well.
Human communications follow well-established patterns. If you sit near the kitchen in your office, you’ll soon be sick of the ‘how was your weekend?’, ‘oh, just a quiet one’ conversation. Similarly, it’s possible to distinguish between banter and bullying on the same basis. Banter has quite a rapid conversational rhythm, but cyberbullying leads to longer pauses as the recipient of the messages reflects, or simply freezes up in fear or surprise.
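The pause-pattern idea above can be sketched in a few lines. This is a toy illustration, not a real classifier: the fixed 60-second threshold and the use of the median gap are assumptions of mine; an actual system would learn these patterns from labelled conversations.

```python
from statistics import median

def looks_like_bullying(timestamps, pause_threshold=60.0):
    """Flag a conversation whose reply gaps are unusually long.

    timestamps: message send times in seconds, in chronological order.
    pause_threshold: hypothetical cut-off in seconds; chosen arbitrarily
    here, whereas a real model would be trained on labelled data.
    """
    gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
    # Rapid back-and-forth (banter) keeps the median gap small;
    # long, fearful silences push it up.
    return bool(gaps) and median(gaps) > pause_threshold
```

For example, a quick exchange at seconds `[0, 2, 5, 7]` is not flagged, while one at `[0, 90, 300, 500]` is – the median gap captures the ‘freezing up’ signature described above.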
Privacy and protection
Much of what we hear about AI and machine learning is negative or ‘creepy’ – taking all our jobs, analysing our political stances and predicting pregnancies. However, robots are neutral third parties that will not be traumatised by inappropriate content and they can detect signs and signals that are invisible to the naked eye.
Of course, there is always legislation that must be respected – and incoming regulations like GDPR will force companies to be more stringent about consumer consent and data privacy.
Looking to the future
Grooming and sextortion incidents rarely happen suddenly or in isolation – there is always build-up and preamble. In future, this ‘digital fingerprint’ could be used to stop the crimes before they progress too far. Using behavioural analytics, cyber-psychology can treat every message sent and received online as a ‘crime scene’, inferring signals such as age, gender, and cultural and ethnic background.
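One way to picture the ‘build-up and preamble’ fingerprint is as a progression through stages of a conversation. The sketch below is deliberately naive – the stage names and keyword lists are invented for illustration, and real systems use models trained on labelled conversations rather than keyword matching – but it shows the shape of the idea: the further the conversation has progressed, the stronger the signal for early intervention.

```python
# Hypothetical grooming stages and trigger phrases, for illustration only.
STAGES = [
    ("trust-building", {"special", "mature", "understand you"}),
    ("isolation", {"our secret", "don't tell", "just us"}),
    ("escalation", {"photo", "pic", "camera"}),
]

def grooming_stage(messages):
    """Return the furthest stage whose phrases appear in the conversation,
    or None if no stage is detected."""
    text = " ".join(m.lower() for m in messages)
    reached = None
    for name, phrases in STAGES:
        if any(p in text for p in phrases):
            reached = name
    return reached
```

The point is that a conversation flagged at ‘trust-building’ can prompt a gentle intervention long before any image is ever exchanged – prevention rather than takedown.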
At present, the UK government is pressuring social media companies to take down offensive content within two hours. But two hours later is two hours too late. Imagine the damage done to a six-year-old who innocently clicks on a link and sees a public beheading. The government should open its mind and recognise that technology exists right now that can combat this issue. In my experience and my view, it is possible that in time we will be able to predict where the online predator will appear next and, more importantly, proactively avoid the problem in the first place.
Parents need help here. Quite frankly, they will never understand the online world as well as their children, but I hope it won’t be long before the phrase ‘parental controls’ leaves our vocabulary. No one wants to control their children, and children certainly do not want to be controlled!
There is still much work to be done but I’m personally delighted that we’re within sight of verifying age and closer than we’ve ever been to protecting our families online. After all, ID cards now stop youngsters drinking in bars, so it is surely just a matter of time before our new-world problems are also fixed!