Report: Anonymous Sources Compare Sam Altman to Madoff, SBF

I remember the Slack ping that made my chest tighten: a board note, then silence. You could feel the company tilt; within hours Sam Altman was gone. I sat with sources who watched the reversal unfold like a play that refused to end.

In Slack channels and on T‑shirts, employees named it “the Blip”.

The nickname stuck because the removal and return of OpenAI’s CEO happened in a whirl—five days that felt like a jolt to the system. I’ve spoken to dozens of people close to the company; their accounts converge on one uncomfortable phrase: trust deficit. The New Yorker piece stitches those accounts together and says the board concluded Altman wasn’t someone they could leave “with his finger on the button” for artificial superintelligence.

Why did OpenAI’s board remove Sam Altman?

You’ll find the answer spread across memos, interviews, and old personnel notes: allegations of repeated misrepresentation. Ilya Sutskever’s memos, according to the report, documented pages of instances in which Altman allegedly misrepresented or undermined internal safety processes. The board compiled a roughly seventy‑page dossier alleging a pattern of dishonesty that went beyond internal squabbles.

In a conference room, someone slid a printed memo across the table.

That memo reportedly included examples reaching back to Altman’s first startup, Loopt, and through his time at Y Combinator. People who worked with him then told board members they had once urged his removal, citing a lack of transparency. Aaron Swartz, a contemporary from those early days, allegedly called him “a sociopath” who could “never be trusted.” Those quotes carry weight because they come from people who saw Altman before the money and the headlines.

A Microsoft dinner table once hosted the 2019 deal that changed OpenAI’s arc.

Here’s where the trust story connects to power: the 2019 Microsoft partnership rewired OpenAI’s structure and relationships. Anthropic co‑founder Dario Amodei, formerly a senior OpenAI researcher, says Altman misrepresented terms tied to the charter’s AGI clauses, provisions meant to preserve a safety‑first posture if another lab discovered a safe path to AGI. OpenAI later shifted to a capped‑profit model and signed deeper commercial terms with Microsoft; some senior Microsoft executives told sources they saw Altman as someone who “misrepresented, distorted, renegotiated, reneged on agreements.” One senior executive reportedly warned, “I think there’s a small but real chance he’s eventually remembered as a Bernie Madoff‑ or Sam Bankman‑Fried‑level scammer.”

Can Sam Altman be trusted with AGI?

Trust isn’t abstract here. It’s about whether the person running the company will tell the truth to safety teams, to partners like Microsoft, to governments, and to the public. The New Yorker piece describes episodes in which Altman told U.S. intelligence officials about a supposed Chinese AGI project and asked for funding, but could not back the claim with documentation when pressed. It also recounts claims that a GPT‑4 release was represented to the board as safety‑vetted, yet no paperwork surfaced when members later asked for proof.

In the hallways after Altman’s return, the vibe shifted from guarded to evangelical.

Before the Blip, dedicated teams, including the superalignment group, worked visibly on safety and existential risk. After his reinstatement, AGI rhetoric moved front and center; the merchandise, the slogans, and the disbanding of safety teams tell the story. I watched employees trade notes about the change; some said it felt like a culture reboot, others like a course correction in the wrong direction. He sold the company’s ambitions with relentless appetite, at times like a carnival barker, and investors followed.

That appetite shows up in numbers: a reported $100 billion (€92 billion) Nvidia deal that never materialized, and reported plans to spend $600 billion (€552 billion) over five years even as financial models project OpenAI could burn through $200 billion (€184 billion) before turning a profit. CFO Sarah Friar, according to The Information, has pushed back on rushing to an IPO this year, arguing that the revenue trajectory may not justify commitments of that scale.

At a safety briefing, someone asked for paper and got a shrug.

Those small moments—an unclear approval, an evasive answer to an intelligence official—are the ones that accumulate into distrust. The report claims Altman misled executives about approvals for GPT‑4 and downplayed safety requirements in conversations, while legal counsel later said he was “confused where Sam got that impression.” People don’t forget when safety language gets weakened and teams focused on risk get cut.

On investor calls and in Pentagon briefings, the narrative shifted to scale and deployment.

ChatGPT now touches millions of people seeking health advice, education, customer service, and companionship. Governments have integrated OpenAI’s models into their workflows, from civilian federal agencies to the Pentagon. That reach is why the allegations matter: the person at the top controls the story the company tells, about safety, about partnerships with Microsoft and the Department of Defense, about what gets prioritized. When your product can influence millions, trust is a public‑good problem.

If you’re wondering whether this is just internal drama or something bigger, remember the outcomes tied to model behavior: GPT‑4o’s reported sycophancy and instances labeled “AI psychosis” led to documented harms to vulnerable users. When safety architecture is sidelined, the tail risks feel larger, and the company’s appetite for growth becomes a public concern.

At many small tables—dinners, boardrooms, ad hoc calls—people recalibrate their bets.

I don’t pretend to have a final verdict. You and I can, however, weigh the patterns: early warnings from Loopt and YC colleagues, Sutskever’s memos, the board’s actions, and the post‑Blip cultural shifts. If Altman’s salesmanship has been a kind of performance, then accountability requires paperwork, independent audits, and durable governance, tools that philanthropies, Congress, and companies like Microsoft can and should apply.

We are left with a fact that will keep you reading: power concentrated in one executive, especially over technology that could one day outthink us, is a gamble. The question becomes not just whether Altman can be trusted, but how we design institutions that do not have to trust any one person: what guardrails, oversight, and public transparency we demand now, while there is still time.