With the metaverse, a new stage in the difficult fight against sexual assault on the Internet

After a few minutes of immersion in Echo VR, a virtual reality sports game, we hear our first insult. In the lobby, intended for players who have not yet joined a match, one player shouts at another over the microphone. She suggests he first go "suck an anus," before continuing with racist insults. Our guide, a regular on Echo VR, assures us that it is not usually this bad. For our part, it is hard to tell whether this is a joke between friends or whether we have just witnessed a real attack.

In October 2021, Mark Zuckerberg, the founder and CEO of Facebook, announced that the goal of his company, renamed Meta for the occasion (in ancient Greek, "beyond, after"), would henceforth be to build a "metaverse": an immersive and persistent world in which Internet users, equipped with virtual reality headsets, can move about as 3D avatars. The announcement also drew attention to the projects of other platforms, which are already developing their own "metaverses."

It also revived a fear: the fear of assault, including sexual assault, in virtual reality. Andrew Bosworth, CTO of Meta, himself admits that harassment in the metaverse is an "existential threat" to his company's ambitions, according to a confidential memo revealed in November by the Financial Times. He also admitted that moderation at scale is "practically impossible."

The highly sophisticated VRChat settings allow you to tailor your experience to the people you interact with.

"The shock is the same as in real life"

These issues, however, are not new. Examples of sexual assault in immersive worlds, whether in virtual reality or not, are numerous. In 2016, a player of the archery game QuiVr recounted in a blog post that another player had touched her chest. As early as 1993, an article by the American journalist Julian Dibbell, "A Rape in Cyberspace," described how a member of a MOO (Multi-user dimension, Object Oriented), an online community based on text exchanges, forced other players to simulate sexual acts.

More broadly, according to a recent study by the Ipsos Institute, 41% of French people (and 81% of those aged 18 to 24) have already been victims of online violence. "In one sense, harassment in the metaverse is driven by the same factors we see on other platforms: we feel detached from the consequences of our actions," explains Katherine Cross, an American information science researcher at the University of Washington who specializes in online harassment. "But there is a fundamental difference, which can make the experience worse: virtual reality is designed to make us believe that what we are experiencing is real."

Virtual reality, by its immersive nature, makes the presence of other players surprisingly tangible and, depending on the situation, intimidating. The feeling can be heightened by the attachment we have to our avatar, which is supposed to represent us. We may know that the universe is artificial, but the emotions we feel are very real. Victims of sexual assault in metaverses thus testify to their discomfort, even trauma. "The shock I felt was similar to other attacks I have experienced in real life," one of them told The Guardian in 2016.


Moderating gestures

Metaverse moderation sits at the crossroads of several issues. The challenges are not the same in public spaces – such as lobbies – as in closed communities, which often have volunteer moderators. And to the moderation of content produced by users (text or images, as on a traditional social network) is added the moderation of spoken conversation and physical behavior. "In the metaverse, toxicity takes many different forms. There is the question of voice. How do you spot an insult? Or tell whether a very young person is chatting with older people?" says Charles Cohen, CEO of Bodyguard, a French company specializing in automatic moderation tools for individuals and businesses. "And there are gestures. If one avatar follows another, is it a game or harassment?"
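To make the voice problem concrete, here is a minimal sketch of the kind of pipeline such tools might chain together: speech is first transcribed by a separate speech-to-text step, then the resulting text is scored. Everything here is an assumption for illustration – the placeholder lexicon, the function name – and not Bodyguard's actual system, which relies on far more sophisticated analysis.

    # Hypothetical voice-chat flagging step. Assumes transcription has
    # already happened; real systems use ML classifiers, not word lists.
    TOXIC_TERMS = {"slur_a", "slur_b"}  # placeholder lexicon

    def flag_transcript(transcript: str) -> bool:
        """Return True if any flagged term appears in the transcribed speech."""
        words = {w.strip(".,!?").lower() for w in transcript.split()}
        return not TOXIC_TERMS.isdisjoint(words)

Even this toy version shows why the problem is hard: a word list cannot tell a joke between friends from an attack, which is precisely the ambiguity described at the start of this article.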

Unable to moderate this behavior live, most platforms offer a range of tools that let users improve their own experience: activating an invisible boundary around one's virtual body, modulating the pitch of one's voice (if a woman wants to pass as a man), reducing other avatars to small floating balls... Problematic behavior can also be reported, as on a traditional social network, but the user often still needs to record a video as proof.
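The invisible boundary, for instance, comes down to simple geometry. The sketch below is a minimal illustration under assumed names and values (a 1.2-meter radius, positions on a horizontal plane), not the implementation of any particular platform:

    import math
    from dataclasses import dataclass

    @dataclass
    class Avatar:
        name: str
        x: float
        z: float  # position on the horizontal plane, in meters

    BOUNDARY_M = 1.2  # hypothetical personal-boundary radius

    def enforce_boundary(me: Avatar, other: Avatar) -> None:
        """Push `other` back to the edge of `me`'s personal boundary."""
        dx, dz = other.x - me.x, other.z - me.z
        dist = math.hypot(dx, dz)
        if dist == 0.0:  # exact overlap: pick an arbitrary direction
            dx, dz, dist = 1.0, 0.0, 1.0
        if dist < BOUNDARY_M:
            scale = BOUNDARY_M / dist
            other.x = me.x + dx * scale
            other.z = me.z + dz * scale

Run on two avatars 0.3 meters apart, enforce_boundary moves the intruder back out to 1.2 meters; the same check, repeated every frame, is what makes the "bubble" feel like a solid border.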

Another key issue is design. How do you build a virtual universe that prevents abuse upstream? Some metaverses offer avatars without legs (thereby ruling out sexual positions), prohibit certain gestures with the virtual hands (in the basic version of VRChat you can give a thumbs-up, but not raise a middle finger), or provide areas where an avatar immediately becomes invisible to others, whether to protect oneself or to take a break.
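Both mechanisms can be read as simple allow-list rules. The following sketch, with hypothetical gesture names and zone labels, illustrates the logic rather than any real engine's API:

    ALLOWED_GESTURES = {"wave", "thumbs_up", "point", "clap"}  # assumed allow-list

    def filter_gesture(gesture_id: str) -> str:
        """Replace any gesture outside the allow-list with a neutral pose."""
        return gesture_id if gesture_id in ALLOWED_GESTURES else "idle"

    def visible_to_others(zone: str) -> bool:
        """Avatars standing in a designated safe zone are hidden from everyone."""
        return zone != "safe_zone"

The design choice is the same in both cases: rather than detecting bad behavior after the fact, the world simply makes it impossible to express.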

Many virtual worlds also choose to restate their rules in a not especially subtle way, through posters or warnings displayed on loading screens. It is as if, as we walked down the street, signs constantly reminded us that slapping other people is forbidden.


Who is responsible?

"There are explicit and implicit rules in every environment or game," explains Martin Buthaud, a PhD student at the University of Rouen (interdisciplinary research laboratory on cultural areas), who works on metaverses. "This also applies to metaverses. Do we want to spell out all the basic rules and build worlds that come across as authoritarian? Or do we assume that Internet users already know how to behave, at the risk of incidents? It's a delicate balance."

A delicate balance, as shown by the case of a Horizon Worlds tester, reported in the press at the end of 2021, who was the victim of sexual touching by another avatar. At the moment of the assault, she had not managed to activate the "safe zone," a feature that in theory allows a user to disappear from the sight of others.

This case shows that platforms cannot simply shift their responsibilities onto users. It also raises complex questions: how should blame be apportioned between the aggressor, who behaved badly, and the platform, whose tools failed to protect? Should the laws governing online hate and the responsibilities of social networks be changed to take into account the physical dimension of aggression between virtual avatars? And above all, how do we educate players upstream, so that this violence stops?

Moderation has long been a thorn in the side of large web companies, which stand accused of not investing enough in user safety and of designing platforms that are impossible to police. Could the metaverse, sometimes described as our online future, be an opportunity to fix this from the outset? Or, on the contrary, are the mistakes of the past already being repeated? "Moderation is often used as a band-aid for structural problems on platforms," laments Katherine Cross. "We need to build metaverses that take potential abuse into account from the first line of code. But for big companies like Meta, moderation will unfortunately most likely remain an afterthought. As always."
