With rising concern over social media’s ‘toxic’ content problem, and mainstream consumer trust apparently on the slide, there’s growing pressure on parents to keep children from being overexposed to the Internet’s dark sides. Yet pulling the plug on social media isn’t exactly an option.
UK startup SafeToNet reckons it can help, with a forthcoming system of AI-powered cyber safety controls for mobile devices.
Here at Mobile World Congress it’s previewing an anti-sexting feature that will be part of the full subscription service — launching this April, starting in the UK.
It’s been developing its cyber safety system since 2016, and ran beta testing with around 5,000 users last year. The goal is to be “protecting” six million children by the end of this year, says CEO Richard Pursey — including by pursuing partnerships with carriers (which in turn explains its presence at MWC).
SafeToNet has raised just under £9 million from undisclosed private investors at this point, to fund the development of its behavioral monitoring platform.
From May, the plan is to expand availability to English-speaking nations around the world. The company is also working on German, Spanish, Catalan and Danish versions for launch in Q2.
So what’s at stake for parents? Pursey points to a recent case in Denmark as illustrative of the risks when teens are left freely using social sharing apps.
In that instance more than 1,000 young adults, many of them teenagers themselves, were charged with distributing child pornography after digitally sharing a video of two 15-year-olds having sex.
The video was shared on Facebook Messenger and the social media giant alerted US authorities — which in turn contacted police in Denmark. And while the age of consent is 15 in Denmark, distributing images of anyone under 18 is a criminal offense. Ergo sexting can get even consenting teens into legal hot water.
And sexting is just one of the online risks and issues parents now need to consider, argues Pursey, pointing to other concerns such as cyber bullying or violent content. Parents may also worry about their children being targeted by online predators.
“We’re a cyber safety company and the reason why we exist is to safeguard children on, in particular, social networking and messaging apps from all those things that you read about every day: Cyber bullying, abuse, aggression, sextortion, grooming,” he says.
“We come from the basis that existing parental control systems… simply aren’t good enough. They’ve not kept up to date with the digital world and in particular the world that kids socialize on. So Snapchat, Instagram, less so Facebook, but you get the idea.
“We’ve tackled this using a whole mixture of deep tech from behavioral analytics, sentiment analysis and so on, all using machine learning, to be able to contextualize messages that children send, share and receive. And then block anything harmful. That’s the mission.”
Once the SafeToNet app is installed on a child’s device, and linked with their parents’ SafeToNet account, the software scans for any inappropriate imagery on their device. If it finds anything it will quarantine it and blur the content so it no longer presents a sharing risk, says Pursey.
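SafeToNet hasn’t published implementation details, but the scan-and-quarantine flow Pursey describes can be sketched in a few lines of Python. In this purely illustrative sketch, nsfw_score is a hypothetical stand-in for whatever image classifier the company actually uses, and the threshold and blur radius are invented:

```python
# Illustrative sketch only: SafeToNet hasn't published its implementation.
# `nsfw_score` stands in for an on-device image classifier; the threshold
# and blur radius are invented for the example.
from pathlib import Path
from PIL import Image, ImageFilter  # pip install Pillow

QUARANTINE_DIR = Path("quarantine")
NSFW_THRESHOLD = 0.8   # assumed cut-off for "inappropriate"
BLUR_RADIUS = 24       # strong enough that the image is unrecognizable

def nsfw_score(img: Image.Image) -> float:
    """Placeholder: a real system would run a trained classifier here."""
    return 0.0  # stub so the sketch runs end to end

def scan_and_quarantine(photo_dir: Path) -> None:
    QUARANTINE_DIR.mkdir(exist_ok=True)
    for path in photo_dir.glob("*.jpg"):
        img = Image.open(path)
        img.load()  # read the pixels now, before the file is moved
        if nsfw_score(img) >= NSFW_THRESHOLD:
            # Move the original out of reach, then leave a blurred copy
            # in its place so it no longer presents a sharing risk.
            path.rename(QUARANTINE_DIR / path.name)
            img.filter(ImageFilter.GaussianBlur(BLUR_RADIUS)).save(path)
```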
The software runs continuously in the background on the device, so it can also intervene in real time to, for instance, block access to the phone’s camera if it believes the child might be about to use it for sexting.
It can be so reactive because it’s performing ongoing sentiment analysis of everything being typed on the device via its own keyboard app, and using its visibility into what’s being sent and received, how and by whom, to infer when a child might be about to send or see something inappropriate.
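The company doesn’t detail its models, but conceptually the keyboard-side check reduces to scoring text as it’s typed and gating risky actions when the score crosses a threshold. A toy sketch of that control flow, where the keyword weights stand in for real sentiment/intent models and block_camera/notify_child are hypothetical OS hooks:

```python
# Toy illustration of keyboard-side risk scoring. A real system would use
# trained sentiment/intent models; these keyword weights are invented,
# and block_camera/notify_child are hypothetical platform hooks.
RISK_TERMS = {"send pics": 0.6, "don't tell anyone": 0.5, "delete this": 0.4}
BLOCK_THRESHOLD = 0.7  # assumed score above which actions are gated

def risk_score(text: str) -> float:
    text = text.lower()
    return min(1.0, sum(w for term, w in RISK_TERMS.items() if term in text))

def block_camera() -> None:
    print("camera access paused")   # stand-in for a platform API call

def notify_child(msg: str) -> None:
    print(msg)                      # stand-in for an on-screen prompt

def on_text_changed(buffer: str) -> None:
    if risk_score(buffer) >= BLOCK_THRESHOLD:
        block_camera()
        notify_child("This message looks risky - sending is paused.")
```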
Pursey says the AI system is designed to learn the child’s normal device usage patterns so it can also alert parents to potential behavioral shifts signaled by their online activity — which in turn might represent a problem or a risk like depression or aggression.
He says SafeToNet’s system is drawing on research into social behavioral patterns, including around digital cues like the speed and length of reply, to try to infer psychological impacts.
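Pursey doesn’t say what the model looks like, but flagging a “behavioral shift” from cues like reply speed is, at its simplest, anomaly detection against a per-child baseline. A minimal z-score sketch, where the window size and alert threshold are assumptions:

```python
import statistics

WINDOW = 50      # assumed number of recent messages forming the baseline
Z_ALERT = 3.0    # assumed deviation threshold before flagging a shift

def shift_detected(reply_times: list[float]) -> bool:
    """Flag when the latest reply time deviates sharply from the child's norm."""
    if len(reply_times) <= WINDOW:
        return False  # not enough history to establish a baseline yet
    baseline = reply_times[-WINDOW - 1:-1]      # the replies before the latest
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1e-9  # avoid division by zero
    return abs(reply_times[-1] - mean) / stdev >= Z_ALERT
```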
If that sounds a little Black Mirror/Big Brother, that’s kind of intentional. Pursey says it’s deliberately utilizing the fact that the children who are its users will know its system is monitoring their device to act as a moderating impulse and rein in risky behaviors.
Its website specifies that children have to agree to the software being installed, and kids will obviously be aware it’s there when it pops up the first notification related to something problematic that they’re trying to do.
“If children know they’re being watched they automatically adjust their behavior,” he says. “We’re using a high degree [of] different methods to deploy our software but it is based upon research working with universities, child welfare support groups, even a priest we’ve been talking to.”
On the parent side, the system hands them various controls, such as enabling them to block access to certain apps or groups of apps for a certain time period, or lock out their kids’ devices so they can’t be used at bedtime or during homework hours. Or ground access to a device entirely for a while.
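Mechanically, lockouts like these reduce to checking the clock against parent-defined windows per app group. A simple sketch (the app groups and hours are examples, not SafeToNet’s defaults), handling windows that wrap past midnight:

```python
from datetime import datetime, time
from typing import Optional

# Example parent-defined rules: app group -> blocked (start, end) windows.
BLOCK_RULES = {
    "social": [(time(21, 0), time(7, 0))],   # bedtime lockout, wraps midnight
    "games":  [(time(16, 0), time(18, 0))],  # homework hours
}

def is_blocked(app_group: str, now: Optional[datetime] = None) -> bool:
    now_t = (now or datetime.now()).time()
    for start, end in BLOCK_RULES.get(app_group, []):
        if start <= end:                      # same-day window
            if start <= now_t < end:
                return True
        elif now_t >= start or now_t < end:   # window wraps past midnight
            return True
    return False

# e.g. is_blocked("social", datetime(2018, 2, 26, 22, 30)) -> True
```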
Though, again, SafeToNet’s website suggests parents use such measures sparingly to avoid the tool being used to punish or exclude kids from socializing digitally with their friends.
The system can also report on particular apps a child is using that parents might not even know could present a concern, says Pursey, because it’s tracking teen app usage and keeping an eye on fast-changing trends — be it a risky meme or something worse.
But Pursey also claims the system is designed to respect a child’s privacy, saying the software will not share any of the child’s content with their parents without the child’s say-so. (Or, in extremis, after a number of warnings have been ignored by the child.)
That’s also how he says it’s getting around the inevitable problem of no automated software system being able to be an entirely perfect content monitoring guardian.
If/when the system generates a false positive — i.e. the software blocks content or apps it really shouldn’t be blocking — he says kids can send a request to their parents to unlock, for example, an image that wasn’t actually inappropriate, and their parents can then approve access to it.
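A false-positive appeal like that is essentially a small state machine around each quarantined item. A hypothetical sketch of the flow (the state names are invented for illustration):

```python
from enum import Enum, auto

class ItemState(Enum):
    BLOCKED = auto()            # flagged by the classifier
    REVIEW_REQUESTED = auto()   # child has appealed to a parent
    RELEASED = auto()           # parent approved access

def child_appeals(state: ItemState) -> ItemState:
    return ItemState.REVIEW_REQUESTED if state is ItemState.BLOCKED else state

def parent_approves(state: ItemState) -> ItemState:
    return ItemState.RELEASED if state is ItemState.REVIEW_REQUESTED else state
```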
Another privacy consideration: He says SafeToNet’s monitoring systems are designed to run without any of its employees accessing or viewing children’s content. (The company has published its Privacy Policy on its website, along with a plain-English version and a privacy impact assessment.)
The vast majority (circa 80%) of the data processing needed to run this pervasive monitoring system is currently done in the cloud, though, so the company obviously cannot guarantee that its systems, and the data being processed there, are safe from hacking risks.
Asked about the company’s intentions towards the user data it’s collecting, Pursey says SafeToNet will not be selling usage data in any form whatsoever. Activity data collected from users will only be used for making improvements to the SafeToNet service itself, he emphasizes.
But isn’t deploying background surveillance of children’s digital devices something of a sledgehammer to crack a nut approach to online safety risks?
Shouldn’t parents really be engaging in ongoing and open conversations with their children, in order to equip them with the information and critical thinking skills to assess Internet risks and make these kinds of judgement calls themselves?
Pursey argues that risks around online content can now be so acute, and kids’ digital worlds so alien to parents, that they really do need support tools to help them navigate this challenge.
SafeToNet’s website is also replete with warnings that parents should not simply tune out once they have the system installed.
“When you realize that the teenage suicide rate is through the roof, depression, all of these issues you read about every day… I don’t think I would use that phrase,” he says. “This isn’t about restricting children; it’s actually about enabling their access to social media.
“The way we look at [it] is the Internet is an incredibly powerful and wonderful thing. The problem is that it’s unregulated, it’s out of control. It’s a social experiment that nobody on the planet knows how it’s going to come out the other end.”
“I’ve seen a 10-year-old girl hang herself in a cupboard,” he adds. “I’ve seen it. I saw it online. I’ve seen two 12-year-old boys hang themselves. This morning I saw a film of two Russian girls [who] jumped off a balcony to their death.
“I’ve seen a man shot in the head. I’ve seen a man — two men, actually — have their heads chopped off. These are all things that six-year-old kids can stumble across online. When you’ve seen those sorts of things you can’t help [but] be affected by them.”
What about the fact that, as he says, surveillance impacts how people behave? Isn’t there a risk of this kind of pervasive monitoring ending up constraining children’s sense of freedom to experiment and explore boundaries, at a crucial moment when they are in the process of forming their identities?
A child may also be thinking about their own sexuality and wanting private access to information to help them try to understand their feelings — without necessarily wanting to signpost all that to their parents. A system that’s monitoring what they’re looking at and intervening in a way that shuts down exploration could risk blocking natural curiosity and even generate feelings of isolation and worse.
“Children are trying to determine their identity, they’re trying to work out who they are but… we’re not there to be the parent,” Pursey responds on that. “We’re there to advise, to do the safeguarding… But [parents’ job] is to try and make sure that their children are well balanced and well informed, and can handle the challenges that life brings.
“Our job is certainly not to police them — quite the opposite. It’s to enable them, to give them the freedom to do these things. Rather than [a] sledgehammer to crack a nut, which is the existing parental control systems. In my opinion they cause more harm than they actually save or protect. Because parents don’t know how to use them.”
SafeToNet’s software will work across both Android and iOS devices (although Pursey says it was a lot easier to get it all working on Android, given the open nature of the platform vs Apple’s more locked down approach). Pricing for the subscription will be £4.99 monthly per family (with no limit on the number of devices), or £50 if paid up front for a year.