Portfolio #mentalhealth
8 December 2020
Nate Smith

Keeping kids safe online: Interview with Richard Pursey, CEO of SafeToNet

When it comes to taking safeguarding responsibilities seriously, SafeToNet is streets ahead of every tech giant out there. After seeing a terminally ill child in tears over online bullying, Richard Pursey channelled his experience in behavioural analytics into creating a tool to keep children safe from harm – without eroding their privacy.

SafeToNet is a tool that educates children in real-time as they use their phone or laptop – via a smart keyboard, which is trained to spot signs of low mood and prevent lashing out. A built-in buddy offers advice and guidance on digital wellbeing, and can help young people alert their parents in times of real crisis. 

Maddyness spoke to CEO Richard Pursey about everything he’s learnt about young people’s psychology over the past decade or so. He recounts some of the tangible impact SafeToNet – “a social impact business” – has had on vulnerable kids, and explains why freedom of speech should never take precedence over safety and security. 

[Maddyness] Tell me about your background and how that led you to create SafeToNet

[Richard] I’ve started and sold four technology companies and my background is in behavioural analytics. I was working in the City of London, deploying some of this tech that I was getting developed and became a private investor, and I’m a dad of four. 

SafeToNet was founded in 2013. At the best of times, communicating with a teenager is a challenge. Teenagers learn the art of ‘gruntspeak’ very early on – and when they’ve got their head in a phone, you’ve got no way of communicating with them.

I found that I didn’t know what my kids were doing, seeing, saying, hearing… and I couldn’t communicate with them. 

There were three points that led to the foundation of SafeToNet. Point one: a father that couldn’t really communicate with his kids and didn’t know what they were doing; they were living in a world I had no access to. Point two: as a private investor I was asked to invest in a social network. I was doing my due diligence and of course I saw all this horrific content that we all know about because we all now read about it. Back in 2013 it wasn’t that well known. 

The third part: I had what I still regard to be one of the best jobs in the world, certainly the best job I could ever have. I volunteered as a driver, taking terminally ill children to hospital for treatment. This is modern-day Britain, where the parents didn’t have the money to take their kids to hospital for treatment to extend their lives. That’s not right in my book; it just doesn’t make sense. But what was good about that job is that I got to know these kids pretty well, although sadly they all died. But I had a nice car and I was a driver; they didn’t really know who I was and so they would talk to me. 


There was one young lad in particular who I got on really well with. I was driving him to the Royal Berkshire Hospital in Reading and I looked to my left and he was sobbing. I naturally assumed he was in a state because he was dying – he died three weeks after this – but he wasn’t. He was having chemotherapy, he’d lost his hair, he was on his phone and his mates were ridiculing him. You realise how horrible children can be to each other. 

So I was determined to do something about this. I got myself into the NHS and I became a non-exec director at one of the primary care trusts, and I started to see all the mental health issues associated with the online world. So that’s basically how SafeToNet came about. But the biggest motivator of all came ultimately when I realised that Facebook, Snapchat, Twitter, Google… they could do something about this but they’re commercially not aligned to it. We all know that now; we didn’t know that then. Still to this day they talk about it being a social problem that you can’t fix, but of course that’s utter nonsense. 

Using my background in behavioural analytics, I began to sketch out what was possible. I was talking to these kids regularly, asking them: ‘Do you want to be kept safe online?’ ‘Yeah, of course, but we don’t want mum or dad snooping or spying on us.’ So privacy became such a big issue – and I get it totally.

Our challenge as founders of this business was to see if we could find a way to still allow children to have their freedom and freedom of speech, to respect their privacy – and also deal with the primal need of a parent to keep their child safe online. And that’s a pretty tricky thing – when you also throw into the pot the complexities of working with iOS and Android. 

We tried to find the common denominator. Apple in particular don’t let you anywhere near the operating system of the actual device. So the next common denominator is the keyboard; we all use it to interact, search and socialise. Using behavioural analytics, we started doing research on cyber-psychology and cyber-criminology on how predators behave online and how the predated change their behavioural patterns when they’re being attacked or hurt. You do start to see some distinctive patterns – and we found more and more of them. 

Using technology on a keyboard we can tell, for example, how fast the child is typing right now compared to how they normally type. We can tell when autocorrect goes into overdrive. We know how much pressure they use when they press the keys. All of these indicate heightened emotions. 

I’ll give you an example: if I started to be abusive to you now – if I started calling you names – my whole approach to you would change. It would be short and rapid-fire. Even before you’ve looked at what the child is typing, you can see the change in their behaviour pattern. Rapid-fire jabbing typically means an argument or some form of sexual dialogue.
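
To make the idea concrete, here is a minimal sketch of how typing-behaviour signals like these might be combined into a single “heightened emotion” score. The field names, weights and thresholds below are illustrative assumptions, not SafeToNet’s actual model.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class KeystrokeSample:
    """One short window of typing activity (all fields are illustrative)."""
    chars_per_second: float   # typing speed in this window
    autocorrect_events: int   # autocorrect firings in this window
    mean_key_pressure: float  # normalised 0..1 touch pressure

def heightened_emotion_score(sample: KeystrokeSample,
                             baseline: list[KeystrokeSample]) -> float:
    """Crude 0..1 score of how far this window deviates from the child's
    own typing baseline. Weights and scaling are made up for illustration."""
    if len(baseline) < 5:
        return 0.0  # not enough history to compare against

    speeds = [b.chars_per_second for b in baseline]
    speed_z = (sample.chars_per_second - mean(speeds)) / (stdev(speeds) or 1.0)

    autocorrect_rate = sample.autocorrect_events / max(sample.chars_per_second, 1.0)
    pressure_delta = sample.mean_key_pressure - mean(b.mean_key_pressure for b in baseline)

    # Weight the three signals; rapid, hard, error-prone typing scores highest.
    score = (0.5 * max(speed_z, 0.0) / 3
             + 0.3 * min(autocorrect_rate, 1.0)
             + 0.2 * max(pressure_delta, 0.0))
    return min(score, 1.0)
```

The key design point Pursey describes is that the comparison is always against the child’s own typing history, not a population average – the deviation from their normal pattern is the signal.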

We can detect signs of nine emotional states – fear, anxiety, loneliness, sadness… all things I saw when I was driving these children for their treatment. 

Then you pour some statistical probabilities into the mix. I can tell you for instance that if your teenage son has shown signs of fear at 20 past 8 on a Sunday evening, there’s a 73% chance he’s being bullied. If you contribute to a YouTube thread with between three and eight words, there’s a 70% chance you’ve been hurtful. There are thousands of patterns like this that we’ve taught our machine. 
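
The patterns Pursey describes can be thought of as learned rules that map a context onto a risk probability. The sketch below hand-codes the two examples from the interview purely for illustration; the structure and field names are assumptions, and in practice such rules would be learned by the machine rather than written by hand.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Features extracted from a message and its surroundings (illustrative)."""
    emotion: str      # e.g. "fear", from the emotion-detection step
    day_of_week: str  # e.g. "sunday"
    hour: int         # 24-hour clock
    platform: str     # e.g. "youtube"
    word_count: int   # length of the child's contribution

def risk_estimate(ctx: Context) -> tuple[str, float]:
    """Restates the interview's two example patterns as lookup rules.
    A real system would hold thousands of learned patterns like these."""
    # Signs of fear around 8pm on a Sunday evening -> likely being bullied.
    if ctx.emotion == "fear" and ctx.day_of_week == "sunday" and ctx.hour == 20:
        return ("being bullied", 0.73)
    # Short YouTube comments of three to eight words -> likely hurtful.
    if ctx.platform == "youtube" and 3 <= ctx.word_count <= 8:
        return ("hurtful comment", 0.70)
    return ("no known pattern", 0.0)
```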

Then basically we taught it to filter harmful content in real time as the child is using their keyboard. It’s the real-time element that’s crucial to the whole safeguarding landscape. I can tell you with a high degree of pride how impactful our software is as a result of real-time intervention. 

About seven or eight weeks ago, a 14-year-old girl in California with our software on her phone alerted her mum and dad to a problem; she was cutting herself in the moment. They intervened and she was taken to hospital. If our software hadn’t been there, you can imagine what could have happened. 

SafeToNet is a social impact business. It is a commercial entity but it only exists to keep children safer online. We’re living in an utterly distorted world where we’re allowing our children to grow up with access to anything they want at any time, without any form of governance or control. 

Governments are waking up to this now and they’re beginning to find ways to hopefully bring the giants under control, but the problem is it’s not just the giants. This is like a game of Whack-A-Mole. If you safeguard people on Facebook they’ll go onto some chatroom that none of us have ever heard of. If you block that, they’ll go somewhere else. Which is why technology that sits at the lowest level you can possibly get on a device is crucial. 

Why is what TikTok, Facebook and so on are doing not enough? 

They understand the problem – but only at the criminal level: the child sexual abuse and exploitation level. They do have tools to identify harmful content. 

But where they don’t understand it is where they hide behind the rules of freedom of speech. As far as I’m concerned, because I’ve seen what I’ve seen – and by the way, suicide rates for children in the UK are at an all-time high, with five children a week taking their own lives in this country alone – freedom of speech should sit below the right to safety. 

Facebook does pump lots of money into it; Snap does too. But the whole point is that these platforms should be built with safety by design – TikTok, relatively new, knew when it was formed that these harms existed. Did they design safety features to the right level? No of course they didn’t. Because they make tons and tons of money out of children doing stuff online. 

In Q3 2017, Mark Zuckerberg recognised in Facebook’s own reporting that 1 in 10 people on Facebook are undesirable users. That’s 200 million people. If you know that – switch them off. But they won’t switch them off. If you look at things like end-to-end encryption and so on, which is just going to make children even more vulnerable to predation, quite frankly it’s a nonsense. 

To be fair to them, to understand the behavioural patterns of children, you can’t just look at the Facebook family of products, because children behave differently from one platform to another. If they’re on Facebook at all, which commonly they’re not, they’re far more grown-up and more risk-averse. On Snapchat, they run riot. Everyone behaves differently depending on the environment. 

I’d like to hear about any redeeming features you see for kids on the internet. Do you think it’s a really dangerous place, or one that could potentially be inspirational for them? 

I think the internet is one of the most incredible inventions ever. I think that children know far more now than ever before. We’ve engaged with over 2,000 kids putting our product together – and countless times I’ve sat with them, asking what they do online, what they think, and so on. 

I remember, as long ago as the previous election in the United States – the Clinton/Trump election – talking to a bunch of 12- and 13-year-olds who had really informed opinions as to why Clinton lost. When I was a boy I knew nothing about world affairs! 

Technology like SafeToNet should be seen, really, as a social enabler. It should be seen as a way of allowing children to explore even more. When I was a kid, I’d climb trees and fall out of them, so I’d learn not to walk on the weak branches. But if I knew that there was a big cushion for me to land on I’d have climbed all the trees.

The internet is just the most wonderful, incredible thing. The fact that we can talk to anyone we like anywhere around the world means we can find solace and support in our opinions. As long as you can do it in safety, that’s fantastic. 

Discover SafeToNet