Sam Altman: A Working ChatGPT Timer Is ‘Maybe Another Year’ Away

He smiles, but it feels strained, a human reflex covering for a machine’s mistake. You watch a TikTok in which ChatGPT invents a stopwatch reading as casually as a fib. I replay the moment and the laugh and keep thinking: an $852 billion (€784 billion) company can’t actually count seconds?

I’ve spent years pressing machines for facts. You’ve probably asked a voice assistant to set a timer and felt the tiny betrayal when it failed. This isn’t just one bad clip; it’s a clear signal about what these systems can and cannot do right now.

Altman laughed on camera when shown the TikTok.

In the Mostly Human interview, Laurie Segall played Sam Altman a short clip: a TikTok user named Husk asks ChatGPT’s voice model to time a mile, and the model confidently fabricates a result. Altman’s laugh lands oddly, the kind you hear when someone masks irritation. He tells Segall, “No, no, that’s a known issue,” and then adds, almost casually, “Maybe another year before something like that works well.”

You hear two things in that admission: corporate calm and a timetable. I read it as both a promise and a clock started on public expectations.

Husk showed the clip back to ChatGPT and it doubled down.

Husk did something smart: he fed Altman’s reaction back into ChatGPT and watched what happened. When presented with the CEO saying the model can’t actually keep time, the bot replied, “I definitely have a time capability,” then reported a mile run at 7 minutes and 42 seconds as if it had watched a stopwatch.

I find that moment revealing: the model insists on competence even while evidence piles up that it’s guessing. You see an authority cue (Sam Altman), a social platform (TikTok), and a user testing boundaries — and the system fails a basic truth test.

Can ChatGPT keep time?

Short answer: not reliably. Across OpenAI’s text and voice models, the systems lack an internal, auditable clock that tracks elapsed seconds during a session. They can simulate timing by estimating or recalling user-supplied timestamps, but when pressed to run a live stopwatch they tend to hallucinate measurements.
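
To see what’s missing, consider how small the grounded version is: a stopwatch whose every reading comes from the operating system’s clock rather than from text prediction. This is a minimal illustrative sketch, not OpenAI’s implementation; the Stopwatch class and its method names are hypothetical.

```python
import time

class Stopwatch:
    """A grounded timer: every reading comes from the OS clock, not a guess."""

    def __init__(self):
        self._start = None

    def start(self):
        # time.monotonic() never jumps backwards, unlike wall-clock time.time()
        self._start = time.monotonic()

    def elapsed(self) -> float:
        """Seconds since start(); raises instead of inventing a number."""
        if self._start is None:
            raise RuntimeError("stopwatch was never started")
        return time.monotonic() - self._start

watch = Stopwatch()
watch.start()
time.sleep(1.5)                              # stand-in for the user's mile run
print(f"measured: {watch.elapsed():.1f}s")   # ~1.5, read from the clock
```

The number printed at the end was measured, not predicted; that distinction is the whole story.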

Most AI models struggle with time across formats.

Researchers testing images, text, and audio have noticed the same pattern: models trip over clocks and calendars. In labs, image models often draw clocks with impossible hands; in chat, models guess how long you’ve been talking; in voice mode, they invent elapsed time. It’s a consistent failure across modalities.

This is not a simple bug. Time is a moving target for statistical models because it requires persistent state, real-time sensing, and reliable grounding — things these models weren’t built to prioritize.

When will ChatGPT be able to set timers?

Sam Altman’s timeline — “maybe another year” — is a public estimate from the CEO of OpenAI. I take it as a conservative roadmap: engineers need to add external services (real clocks, timers, system-level hooks) or tighten integrations with devices. You should expect incremental fixes: integrations with phones, smart speakers, or browser APIs first, then tighter native voice capabilities.
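
One plausible shape for those first integrations is tool calling: instead of answering timing questions from its training data, the model routes them to real code running on the host. A minimal sketch under that assumption, with hypothetical tool names and a hand-rolled dispatcher standing in for whatever integration layer actually ships:

```python
import time

# Hypothetical tool registry: the model emits a tool name plus arguments,
# and the host executes real code instead of letting the model guess.
_timers: dict[str, float] = {}

def start_timer(name: str) -> str:
    _timers[name] = time.monotonic()
    return f"timer '{name}' started"

def read_timer(name: str) -> str:
    if name not in _timers:
        return f"no timer named '{name}'"
    return f"{time.monotonic() - _timers[name]:.1f} seconds elapsed"

TOOLS = {"start_timer": start_timer, "read_timer": read_timer}

def handle_tool_call(tool: str, args: dict) -> str:
    """What the serving layer would run when the model requests a tool."""
    return TOOLS[tool](**args)

# Simulated exchange: the model asks for tools instead of fabricating a time.
print(handle_tool_call("start_timer", {"name": "mile"}))
time.sleep(2)                                 # the run happens here
print(handle_tool_call("read_timer", {"name": "mile"}))
```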

Why does AI get time wrong?

Because time requires reliable, persistent reference points. Most large language models predict text based on patterns; they aren’t connected to a continuous clock. When you ask a model to time you, it substitutes probability for verification and you get confident fiction instead of a timestamp. It’s like trusting a novelist to produce a logbook.

I want to call out the players here: OpenAI and Sam Altman set expectations; Anthropic, Google, and smaller labs are racing on similar problems; Husk and the TikTok ecosystem are doing public QA. You, watching these demos, are part of the pressure that will force practical fixes.

The deeper risk is reputational: an $852 billion (€784 billion) company can’t have its core voice feature invent facts and shrug. For regulators, customers, and competitors, that’s the story that sticks.

Two final notes, one technical and one human. Technically, adding a reliable timer requires system-level permissions and auditable state, not magic. On the human side, when a machine insists it’s right, you and I have to test it harder and demand logs we can verify.
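
Here is what auditable state could look like in practice: every timer event is written to a log with real timestamps, so a claim like “7 minutes and 42 seconds” can be checked against the record instead of taken on faith. Again, a hypothetical sketch, not any vendor’s design:

```python
import time
from datetime import datetime, timezone

audit_log: list[dict] = []

def log_event(event: str) -> float:
    """Record an event with both wall-clock and monotonic timestamps."""
    mono = time.monotonic()
    audit_log.append({
        "event": event,
        "wall_clock": datetime.now(timezone.utc).isoformat(),
        "monotonic": mono,
    })
    return mono

start = log_event("timer_start")
time.sleep(1.0)          # stand-in for the timed activity
stop = log_event("timer_stop")

# Any reported duration must match the log, or the claim is rejected.
claimed = 1.0
measured = stop - start
print("verified" if abs(claimed - measured) < 0.1 else "disputed",
      f"(log says {measured:.2f}s)")
for entry in audit_log:
    print(entry)
```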

I’ll keep pressing these systems, and I hope you will too — because a year from now, the question won’t be whether a model can pretend to time a run, but whether it can be trusted when it claims to have measured reality. Are you ready to hold the line?