I was reading a short newsletter when a line jumped out: a small group of people hit by AI-related job loss are getting steady cash. I felt a weird mix of relief and annoyance—relief for the recipients, annoyance that this is news at all. You should know what that payment actually means before the headlines finish reshaping the story.
I’m tracking this because the conversation about AI and work is no longer academic. Companies, investors, and policy folks are arguing over whether AI replaces jobs or simply reshuffles them, and sometimes the talk becomes cover for cost-cutting. That’s why a tiny program quietly sending checks matters.
A former game-studio artist refreshed their bank app and saw a new deposit—what just started
According to the Blood in the Machine newsletter, a nonprofit experiment called the AI Dividend began issuing payments this week. For now, the program covers roughly 25–50 people, each receiving $1,000 per month (€920), a modest but consistent lift. I watched the details ripple through advocacy groups and think tanks, and you should know who’s running the pilot and what they hope it becomes.
What is the AI Dividend?
The AI Dividend is a small guaranteed-income pilot aimed at people negatively affected by AI-driven job changes. Two groups teamed up to create it: What We Will, an advocacy group focused on workers impacted by AI, and the AI Commons Project, which is part of the Fund for Guaranteed Income. The organizers told Blood in the Machine’s Brian Merchant that they plan to grow funding from an initial $300,000 (€276,000) toward a $3 million (€2.76 million) goal by year’s end, ideally with help from large AI companies.
An engineer who lost a contract called a friend—who else is eligible?
Who is eligible for AI basic income?
The pilot targets people who’ve been directly harmed by automation or AI-related layoffs, or whose livelihoods were disrupted by AI adoption in their sectors. Eligibility criteria are narrow for now: small cohorts, case-by-case vetting, and a focus on those with clear, AI-tied income shocks. If you’re watching from a company that’s reshaping roles with models from OpenAI or Anthropic, this is the kind of policy experiment that may reach your former colleagues first.
A social organizer printed a donor list and counted who might give—how the idea fits the larger debate
How much do recipients receive?
Recipients in this pilot get $1,000 per month (€920). That’s not enough to replace a full salary for most people, but it’s a predictable payment that can cover basics and buy time to find new work or retrain. I say this as someone who watches safety nets in action: it can feel like a bandage on a broken bone—helpful immediately, but not a cure.
The notion of paying people because AI changes work isn’t new. Sam Altman has publicly supported universal basic income experiments and even funded research into UBI’s effects, while Anthropic’s Dario Amodei has called UBI “better than nothing,” stressing that meaningful work matters to people. In corporate America, firms such as Epic Games have faced scrutiny after large layoffs; executives insist the reductions weren’t meant to be an AI play, yet the term “AI washing” now hangs over these announcements.
A campaign volunteer printed map pins for meetings—who’s putting up the money?
Right now the money behind this pilot is small and comes from nonprofits and philanthropic grants, with organizers publicly hoping big AI firms will chip in to scale the program to millions. That $300,000 starter pot (€276,000) is intended as a proof of concept; the $3 million target (€2.76 million) would let them expand quickly if donors agree. You can see why corporate reputational logic is central: funding a visible safety net is an easy PR win, but it also shifts the policy conversation about corporate responsibility.
I want you to notice how this experiment sits between charity and policy. It’s not a national safety net. It’s not a corporate severance plan. It is small, public-facing, and politically charged—like a pebble in a pond that makes ripples long after the toss.
What bothers me most is how language gets reused to calm markets. “AI is improving productivity,” investors say, while workers see role cuts; then companies sometimes retrofit an AI rationale that obscures the financial motive. That’s why transparency about eligibility, funding sources, and measurable outcomes matters: if this grows into anything, it should do so with clear rules, not optics.
I’ll keep following the AI Dividend as it pays its first months and as organizers seek funding from larger players. You should watch too: this pilot will tell you more about how the tech industry intends to manage the human costs of automation than any press release will. So, are we treating these checks as a serious policy experiment or a PR bandage on a wider problem?