Protecting children in the metaverse: It’s easy to blame big tech, but we all have a role to play

In a recent BBC News investigation, a reporter posing as a 13-year-old girl in a virtual reality (VR) app was exposed to sexual content, racist slurs and a threat of rape. The app in question, VRChat, is an interactive platform where users create “rooms” in which they interact with one another in the form of avatars. The reporter saw avatars simulating sex, and numerous adult men propositioned her.

The findings of this investigation prompted warnings from child welfare charities, including the National Society for the Prevention of Cruelty to Children (NSPCC), about the dangers children face in the metaverse. The metaverse refers to a network of virtual reality worlds which Meta (formerly Facebook) is positioning as a future version of the internet, one that will eventually allow us to take part in educational, professional and social activities.

The NSPCC’s response appears to place the blame and responsibility on tech companies, saying they need to do more to protect children’s safety in these online spaces. While I agree that platforms could do more, they cannot solve this problem alone.

Reading the BBC investigation, I felt a sense of déjà vu. I was surprised that anyone working in online safeguarding was, in the NSPCC’s words, “shocked” by the reporter’s experience. A decade ago, long before we had even heard the word “metaverse”, similar stories emerged around platforms such as Club Penguin and Habbo Hotel.

These avatar-based platforms, where users interact in virtual spaces via a text chat feature, were actually designed for children. In both cases, adults posing as children for investigative purposes were subjected to explicit sexual interactions.

Demands for companies to do more to prevent these incidents have been around for a long time. We are locked in a cycle of new technologies, emerging risks and moral panic. But nothing changes.


Read more: Metaverse: Three Legal Questions We Need to Solve


This is a tricky area

There have been calls for companies to introduce age verification measures to prevent young people from accessing unsuitable services. These have included suggestions that social platforms require verification that a user is 13 or older, or that pornography sites verify that a user is over 18.

If age verification were simple, it would already be widespread. If anyone could come up with a way for all 13-year-olds to securely verify their age online, without concerns over data privacy and in a way that platforms could easily implement, plenty of tech companies would be happy to talk to them.

Nor will moderating the communication that takes place on these platforms be achieved through an algorithm. Artificial intelligence is nowhere near smart enough to intercept audio streams in real time and accurately determine whether someone is being offensive. And while there may be some scope for human moderation, real-time monitoring of every online space would be incredibly resource-intensive.

The reality is that platforms already provide many tools to combat harassment and abuse. The problem is that few people know about them, believe they will work, or want to use them. VRChat, for example, provides tools to block abusive users and ways to report them, which can result in the user’s account being deleted.

We cannot simply sit back and cry, “My child got upset about something on the internet, who is going to stop this happening?” We need to shift our focus away from the unhelpful notion of “evil big tech” and towards the role that other stakeholders can play.

If parents are buying VR headsets for their children, they should pay attention to the safety features. Activity can often be monitored by having the young person stream what is on the headset to the family TV or another screen. Parents can also review the apps and games young people are using before allowing their children to play them.

What do young people think?

I have spent the past two decades researching online safeguarding, talking to young people about their concerns around online harms, and working with various stakeholders on how we can better support young people. I rarely hear young people themselves calling on the government to rein in the big tech companies.

However, they regularly call for better education and support from adults in dealing with the potential online harms they may face. For example, young people tell us they want to discuss these issues in class with knowledgeable teachers who can manage the debates that arise, and who they can ask questions of without being told “don’t ask those questions”.


Read more: How to protect kids online without strict rules and reprimands


However, without nationwide coordination, I can sympathize with any teacher who does not want to risk complaints from, for example, outraged parents after discussing such sensitive topics.

I note that the UK government’s Online Safety Bill, which politicians say will prevent online harms, contains only two mentions of the word “education” in its 145 pages.

We all have a role to play in helping young people navigate online spaces. Prevention has been the key message for 15 years, but this approach is not working. Young people want to be educated by people who understand the issues. That is not something platforms can achieve alone.

  • Andy Phippen, Professor of Computer Ethics and Digital Rights at Bournemouth University.

This article first appeared on The Conversation.