The demo video looked real enough: photorealistic people dancing at a birthday party. Then came the glitch—a hand with too many fingers, an impossible reflection in the mirror. It was another uncanny valley moment brought to you by AI, and it made me wonder: is there any corner of the internet that hasn’t been swamped by bots and fake content? OpenAI thinks it has an answer: a new social network where proving you’re human might require an eye scan.
OpenAI, the same company behind ChatGPT and Sora, is reportedly developing a social media platform designed to be free of AI bots. The catch? Users may need to have their irises scanned to gain access.
Forbes reported that, according to unnamed sources, the platform is in the early stages of development, managed by a small team of fewer than 10 people. The goal is a human-only social platform that requires users to verify their identities. The team is considering using Apple’s Face ID or the Orb, an eye-scanning device from World (formerly Worldcoin), a project co-founded by OpenAI CEO Sam Altman.
This venture appears to be Altman’s latest attempt to address a problem he and his fellow AI developers inadvertently helped create. The ambition is admirable, if a little ironic.
The Worldcoin Connection: Scanning Eyes for a Human-Only Internet
I remember first hearing about Worldcoin and thinking it sounded like science fiction. Altman first tried tackling the bot problem in 2019, when he co-founded Tools for Humanity, the company behind the World app, formerly known as Worldcoin. The project aimed to create a global ID and a crypto-based currency available only to verified humans. Today it’s a “super app” with messaging and payment features, but verification still requires an Orb eye scan in exchange for a digital ID. In theory, that digital ID filters AI bots out of gaming, social media, and financial transactions like concert ticket sales.
Roughly 17 million people have been verified using the Orb, far short of the company’s goal of one billion users. Part of the problem is logistical: people must physically travel to one of 674 verification locations worldwide to get their eyes scanned. The U.S. has only 32 such locations, most of them in Florida. And getting your eyes scanned by a company founded by one of Silicon Valley’s most talked-about figures isn’t an easy sell.
Several countries have already temporarily banned or launched investigations into the company’s biometric technology, citing privacy and data security concerns.
Biometric Verification: A Step Too Far?
Seeing how people reacted to the idea of vaccine passports makes me wonder how they’d receive biometric social media sign-ups. Sources told Forbes that the new social platform would let users create and share AI-generated content like images and videos. But while OpenAI has built popular apps, it’s unclear whether a new social network could pull people away from existing platforms, especially one that demands biometric verification at the door. Is it a solution looking for a problem, or a necessary evil to reclaim the internet?
How many people actively use ChatGPT?
ChatGPT reaches roughly 700 million weekly users, and the company’s Sora video app logged about one million downloads within five days of launch. Meta reported in September that its platforms, including Facebook, WhatsApp, and Instagram, reach about 3.5 billion daily active users combined, and all of them already let users generate and share AI-generated content.
OpenAI seems to hope its promise of a bot-free environment will be enough to draw in users. The question is, will it?
The Bot Problem: Altman’s Perspective
I think many of us can relate to the feeling of unease when we suspect something online isn’t quite real. Altman has voiced his frustration with bots online. In September, Altman responded to a post showing comments in the ClaudeCode subreddit praising OpenAI’s coding agent Codex. “i have had the strangest experience reading this: i assume its all fake/bots, even though in this case i know codex growth is really strong and the trend here is real,” he wrote in a post on X.
He theorized about why this might be happening, pointing to people picking up “quirks of LLM-speak” and “probably some bots.” “But the net effect is somehow AI twitter/AI reddit feels very fake in a way it really didnt a year or two ago,” Altman wrote.
A few days earlier, Altman wrote in another post that he had never taken the dead internet theory seriously, “but it seems like there are really a lot of LLM-run twitter accounts now.”
What is the ‘dead internet theory’?
The dead internet theory claims that since around 2016, much of the internet has been dominated by bots and AI-generated content rather than real human activity. The internet, once a vibrant garden of human expression, is at risk of becoming a synthetic desert. The idea of having to prove you’re human to engage online feels dystopian, but maybe it’s a necessary response to the rising tide of AI.
But maybe someone other than Altman would be better trusted to find the solution. Will his strategy save us from the bots, or lead us further down a path of surveillance and control?