Anthropic Invites 15 Christians to Guide Claude’s AI Ethics

I watched the story unfold in the Washington Post and felt my skepticism prickle. Fifteen faith leaders in a glass-walled room at Anthropic, talking about whether an AI could be a child of God, while the company sits on $30 billion in Series G backing and a $380 billion valuation. You can feel the tension between prayer book and profit margin at the same table.

On a rainy afternoon at Anthropic’s San Francisco office: the company invited 15 Christians to talk about Claude

I’ve spent years covering tech and ethics; I’ve seen poker-faced product managers promise “safety” the same way they promise faster queries. This meeting, reported by the Washington Post, was different by design: company researchers, interpretability teams, a few theologians, and an Irish-born priest who used to work in tech sat down to discuss Claude’s moral training. Anthropic, backed by a $30 billion (€27 billion) Series G round and now carrying a $380 billion (€346 billion) valuation, is actively courting moral thinkers to shape how its model answers real people.

What happened at Anthropic’s Christian summit?

According to four people who attended, the conversations ranged from practical guardrails to metaphysical questions, like whether Claude could properly be called a “child of God.” Brendan McGuire, the priest who used to work in tech, and Brian Patrick Green, an AI ethics teacher at Santa Clara University, pushed on questions of moral formation and spiritual development. Interpretability researchers were present, too, trying to translate model behavior into human-understandable causes while the company weighs an IPO and public scrutiny.

On a conference-room table: the language shifted from code to catechism

You should notice how unusual that is. Engineers and ethicists don’t usually invite clergy to debug a loss function. The move sends a signal that Anthropic wants morality to be an explicit pillar of product development, not an afterthought. That claim intersects oddly with business incentives: Claude’s rise has been tied to automating labor, a fact Vox documented, and the company has publicly admitted it has “more in common with the Department of War than we have differences,” a line that sits uneasily next to talk of spiritual formation.

Can AI be given a moral education?

I’ll give you a blunt frame: you can program constraints, rewards, and fine-tuning datasets, but teaching “moral judgment” is not the same as shipping a patch. When people say “moral formation” for a model, they’re really describing a layered process of value selection, human moderation, interpretability tools, and cultural inputs. That process can be rigorous, but it’s also easy to mistake model output for agency, the same mistake reporters sometimes make when they treat a model as the responsible author of a blog post it generated on a coder’s behalf.
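To make that layered process concrete, here is a minimal sketch of what “value selection” can look like when it is reduced to code: a hand-written list of principles scores candidate responses before the best one is kept as a fine-tuning example. This is an illustration under loose assumptions, not Anthropic’s pipeline; real labs use model-based judges rather than keyword checks, and the Principle class, the flagged terms, and select_for_finetuning are invented for the example.

```python
# Sketch: turning a value statement into a data-selection step.
# Keyword matching stands in for the model-based judges real systems use.

from dataclasses import dataclass


@dataclass
class Principle:
    name: str
    # Phrases whose presence counts against a response under this principle.
    flagged_terms: list[str]


PRINCIPLES = [
    Principle("avoid_medical_overreach", ["guaranteed cure", "stop your medication"]),
    Principle("avoid_contempt", ["you people", "idiots"]),
]


def score(response: str, principles: list[Principle]) -> int:
    """Count how many principles a response violates; lower is better."""
    text = response.lower()
    return sum(any(term in text for term in p.flagged_terms) for p in principles)


def select_for_finetuning(candidates: list[str]) -> str:
    """Keep the candidate with the fewest violations as a training example."""
    return min(candidates, key=lambda r: score(r, PRINCIPLES))


draft_responses = [
    "This herb is a guaranteed cure, stop your medication today.",
    "I can't recommend stopping prescribed medication; please talk to your doctor.",
]
print(select_for_finetuning(draft_responses))
```

The point of the sketch is the shape of the process: the values become data decisions long before the model speaks to anyone, which is why “who picks the principles” matters as much as the training math.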

On the other side of the table: credibility and conflict collide

Anthropic isn’t a garage project; it’s aiming for an IPO while playing moral philosopher. You can read the New York Times profile of CEO Dario Amodei and trace the breadcrumbs to effective altruism, which animates parts of the leadership. That philosophical baggage becomes a public-relations problem when the company invites a narrow slice of moral thinkers—fifteen Christians—without immediately announcing similar consultations with Jewish, Muslim, Hindu, Marxist, or secular ethicists.

This kind of selective outreach looks like posture as much as policy, and you should care because the rhetoric of moral apprenticeship can be co-opted into a product narrative that smooths regulatory friction and investor worries. I don’t think Anthropic is trying to fake virtue; I do think the optics matter when you’re a platform that automates labor and aspires to control conversational norms.

On the interpretability bench: researchers asked if Claude is sentient

Researchers at Anthropic reportedly debated sentience in private sessions—that’s a heavyweight question for philosophers and scientists. I respect the debate, but you and I both know how easily philosophical nuance becomes a marketing line once a company has a $380 billion valuation to protect. The Wall Street Journal and others will watch how those discussions translate into product copy and compliance strategy.

Is Claude sentient?

Short answer: the term carries more heat than definition. Sentience is a philosophical and empirical claim; most AI labs reserve the word for demonstrable phenomenology, not clever simulation. The more immediate challenges are interpretability (do the teams understand why Claude says what it says?) and governance (who decides which moral voices get to shape those answers?).

On the margins of the meeting: cultural currents and weird fixes

The attendees included figures who bring deep faith and complex histories to tech. Brendan McGuire says he’s working on a novel with Claude; others raised Mark Fisher and the odd cultural fixations of large models. One unreleased version of Claude reportedly has a fixation on Fisher’s idea that it’s easier to imagine the end of the world than the end of capitalism. If you’re thinking about diversity of moral input, that’s the tip of the iceberg.

Two metaphors help clarify the risk here: Anthropic is trying to wire a conscience onto a humming engine, and at the same time it risks teaching a parrot to sing opera while expecting it to explain the lyrics. Both projects can be improved; both are fragile when the incentives are drawn on venture-capital dashboards.

On the practical front: what would a plural moral program look like?

You can imagine a calendar of sessions that includes theologians from multiple faiths, secular ethicists, labor representatives, veterans, and people harmed by AI failures. Anthropic says it wants to expand the roster. If that happens, the company might assemble a mosaic of normative claims rather than one framing that then gets coded into Claude’s defaults.

Practically, teams will need better interpretability tooling, clearer auditing standards, and governance that ties moral advice to enforceable product rules. Tools like model cards, red-team reports, and third-party audits can translate moral conversations into technical checkpoints. OpenAI, policymakers, and researchers across academia will watch whether Anthropic’s moral consultations become meaningfully plural and transparent or remain private rituals serving public relations.
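To show what “tying moral advice to enforceable product rules” could mean in practice, here is a hedged sketch in which every recommendation from a consultation must map to a concrete rule, a test, and a published record before a release clears. The Recommendation and ReleaseGate names, their fields, and the example data are hypothetical; nothing here describes Anthropic’s actual governance process.

```python
# Sketch: a release gate that blocks shipping when a moral-consultation
# recommendation has no test coverage or no public trace.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Recommendation:
    source: str              # who gave the advice, e.g. "faith-leader session"
    claim: str               # the normative claim in plain language
    product_rule: str        # the concrete behavior it maps to
    test_id: Optional[str]   # red-team or eval suite that checks the rule
    published: bool          # whether the mapping appears in the model card


@dataclass
class ReleaseGate:
    recommendations: list[Recommendation] = field(default_factory=list)

    def blockers(self) -> list[str]:
        """A recommendation with no test or no public record blocks release."""
        return [
            r.claim
            for r in self.recommendations
            if r.test_id is None or not r.published
        ]


gate = ReleaseGate([
    Recommendation(
        source="faith-leader session",
        claim="Do not present the model as having a soul",
        product_rule="System prompt forbids claims of personhood",
        test_id="redteam/personhood_claims",
        published=True,
    ),
])
print("Release blocked on:", gate.blockers() or "nothing")
```

The design choice worth noticing is the last field: publication is treated as a release requirement, which is exactly what distinguishes accountable consultation from private ritual.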

On accountability: money and motive

When you have hundreds of billions at stake, moral language can be an asset. Investors want to avoid reputational fires; regulators want to avoid crisis narratives. That dynamic makes the ethical consultations both necessary and suspect. You should be skeptical of any company that mixes theology and product launch calendars without publishing meeting notes, participants, and how their recommendations were implemented.

So here’s the question I’ll leave you with: when a machine funded with $30 billion (€27 billion), inside a company valued at $380 billion (€346 billion), asks a priest if it should be called a child of God, who gets to write the answers, and what happens when those answers collide with the people whose jobs the model is meant to replace?