I slipped into a room where clergy and coders were sharing coffee and a bruise of skepticism. A rabbi asked a software engineer a single blunt question that made everyone lean forward: Can you teach a machine to care? For a moment, the chatter froze—because the answer would shape whose values get translated into code.
I’ll tell you what Anthropic is doing, why it feels both clever and alarmingly incomplete, and what you should be watching for next. You won’t get sermonizing; you’ll get the plain stakes, from my view at the table.
At a New York roundtable, Sikh, Hindu, Jewish, and LDS representatives met with Anthropic and OpenAI.
The meeting, billed as the “Faith-AI Covenant,” pulled together a notable roster: the New York Board of Rabbis, the Hindu Temple Society of North America, the U.S.-based Sikh Coalition, the Greek Orthodox Archdiocese, and the Church of Jesus Christ of Latter-day Saints. Anthropic and OpenAI sent delegates; the Interfaith Alliance for Safer Communities helped organize the event, and Baroness Joanna Shields was named as a partner.
What’s striking is not just the lineup; it’s the optics. Anthropic is making a public case that it consulted religious authorities while shaping Claude—its large language model—so the company can claim moral diligence. Whether that builds trust or theater depends on how much influence those conversations actually have over Claude’s internal guardrails.
Did Anthropic meet with religious leaders?
Yes. Anthropic had earlier held dinners with 15 Christian leaders, and it participated in the New York roundtable alongside additional faiths. The Associated Press and outlets like Gizmodo have traced the outreach; Anthropic did not answer every request for clarity about which of its staff attended which sessions.
At a previous series of dinners, Anthropic sat down with a cohort of Christian leaders to discuss Claude’s spiritual and moral development.
Those dinners were more intimate and aimed explicitly at sourcing moral frameworks; they weren’t one-off photo ops. Anthropic has described an effort to feed Claude a set of ethical priors—what the company calls a constitution—so the model acts like a person with “good enough” values when rules fail.
I heard that phrased bluntly from a skeptical source: Silicon Valley hoped for a universal ethical formula and discovered ambiguity instead. The company’s response seems to have been: gather more moral voices. That’s sensible as PR and moderately sensible as product strategy, but it is not the same as equipping a model to make life-or-death judgments with human subtlety.
Will religion help AI make moral decisions?
Religion can supply narratives, analogies, and conflict-tested moral heuristics. Those inputs are useful raw material for designers who must decide whether Claude refuses, advises, or escalates a request. But feeding sacred texts and clerical counsel into a model doesn’t automatically resolve trade-offs between competing harms.
The Associated Press reported that an NGO organized the event and plans similar sessions in China, Kenya, and the UAE.
The Interfaith Alliance for Safer Communities is steering a global series, and that global angle matters: moral prescriptions vary by culture. Baroness Joanna Shields and other named partners lend political heft, which matters commercially and reputationally for startups like Anthropic and giants like OpenAI.
Rumman Chowdhury, CEO of Humane Intelligence, offered a stinging observation to the AP: Silicon Valley’s hope for one-size-fits-all ethical principles has been exposed as naive. She said companies are “looking at maybe religion as a way of dealing with the ambiguity of ethically gray situations.” That’s a concise diagnosis of why these meetings are happening.
Anthropic’s own document—Claude’s constitution—acknowledges the problem: you can’t write rules for every edge case, and a misstep could be catastrophic. So the company is hunting for higher-order moral truths and treating public accountability as a defense in depth. I read that as process improvement plus public signaling: show you consulted elders, leaders, and NGOs so users and regulators hear that you tried.
There’s an old comparison worth remembering: before Islam, the Kaaba functioned as a repository for 360 sacred tokens—people trusted a single black cube to contain the right answer for a busy merchant on the road. Anthropic seems to be building a modern version of that chest for Claude by aggregating faith-based counsel into one architecture.
That approach has an attractive simplicity, but it also exposes a fragile center. If Claude’s moral core is an assembly of high-level doctrines, then its behavior will be a negotiation between traditions rather than an uncontested truth. In other words, the moral compass they’re polishing already shows hairline cracks.
You should watch three things: who actually influenced the engineering choices (not just attended the dinners), whether these faith-sourced guidelines are translated into measurable safety tests, and how Anthropic documents disagreements among advisors. The company can claim breadth of counsel; it can’t claim unanimity.
OpenAI’s presence at the table matters because the industry looks to leaders who set norms—technical, corporate, and regulatory. If Anthropic and OpenAI converge on a set of practices, those practices become de facto standards for startups and platforms that integrate Claude or similar models.
There’s a hopeful case: diverse worldviews might nudge Claude toward less culturally biased outcomes. There’s also a practical risk: religious advice could be cherry-picked or simplified into rules that fail in context. You should treat their consultations as evidence of effort, not proof of safety.
If a company hands its model a patchwork of sacred counsel and calls it a moral operating system, do you trust that machine to advise a doctor, a judge, or your teenager about a life-altering choice?