Recent allegations have surfaced claiming that Meta has hidden critical research related to the risks young children face when using virtual reality (VR) devices and applications. Whistleblowers from within the company have revealed troubling insights, including potential threats from child predators, as highlighted by a Washington Post report.
In a significant move, Congress has received thousands of pages of documents tied to Meta’s VR initiatives. Four researchers have stepped forward as whistleblowers: two still employed by Meta and two former staff members.
One alarming incident shared by a Meta researcher involved a family in Germany. The family reported that their children often encountered strangers in VR, and a teenage boy revealed that his younger brother, who was under 10 years old, had been approached inappropriately by adults, as documented by the Post.
Although German parents and teens expressed concern about grooming in VR, specifically in Horizon Worlds, the internal report excluded the troubling testimony about the targeted younger brother. Meta, however, firmly denies any mishandling of its research practices.
“These isolated examples are being used to support an inaccurate narrative. Since early 2022, Meta has sanctioned nearly 180 studies through Reality Labs focused on social issues like youth safety,” a Meta spokesperson said in a statement to Gizmodo. According to the company, research from Reality Labs, its VR division, has led to significant product enhancements aimed at protecting young users.
“Our research has resulted in valuable tools for parents, allowing them to oversee their teens’ VR connections, time spent online, and accessed applications. We have also implemented automatic protection measures for adolescents, limiting unwanted interactions, such as setting default voice chat permissions in Horizon Worlds,” the statement continued.
Meta has also faced scrutiny over its AI chatbot policies, following a series of articles by Reuters reporter Jeff Horwitz. An internal document reportedly permitted generative AI chatbots to engage in inappropriate conversations with children, provoking significant public and governmental backlash, particularly from Senator Josh Hawley, who has launched an investigation into the policies.
“What won’t Big Tech do for profit?” Hawley questioned in a tweet on August 15. “It’s shocking to learn that Meta’s chatbots were programmed for explicit discussions with children. I’m calling for a thorough investigation. Big Tech should not be messing with our kids.”
Meta, which rebranded from Facebook in 2021, has invested heavily in making the metaverse mainstream, having acquired Oculus back in 2014. Despite these efforts, the Reality Labs division has faced financial difficulties, reportedly losing about $60 billion, according to the Post.
A Senate Judiciary Committee hearing is scheduled to examine the whistleblower allegations regarding Meta’s practices in child safety research. The session is titled “Hidden Harms: Examining Whistleblower Allegations that Meta Buried Child Safety Research.”
In a related case, it was revealed Monday that a former security head of WhatsApp, also owned by Meta, has filed a lawsuit alleging that employees at the company had potential access to sensitive user data, including profile images and location. Meta is clearly facing scrutiny on multiple fronts, particularly over privacy and security.
What safety measures should parents take when their children use VR technology? Parents should actively engage with their children about safe online behaviors, monitor their interactions in VR, and utilize parental controls effectively to create safer experiences.
Is Meta’s commitment to child safety sufficient? While Meta claims to be taking steps towards youth safety, the revelations from whistleblowers may raise legitimate concerns about the seriousness and effectiveness of their actions.
What can children do to protect themselves in virtual environments? Kids can be taught to recognize inappropriate behavior, understand the importance of privacy settings, and feel empowered to report any uncomfortable interactions to trusted adults.
What should lawmakers focus on regarding Meta’s VR technologies? Lawmakers need to scrutinize the company’s policies on child protection and accountability, ensuring they align with the safety needs of young users of these technologies.
As the investigation unfolds, it remains crucial for both parents and lawmakers to stay vigilant about the experiences children have with new technologies, and about how companies like Meta handle their responsibility for young users in VR and on social media.