I watched a colleague paste a chatbot’s rewrite into a client email and call it “cleaner.” The next day three drafts from different people read like versions of the same file. If you have noticed that too, you are not alone.
I’ve been following a new thread of research from the University of Southern California that feels less like a technical footnote and more like a warning. The paper, published in Trends in Cognitive Sciences, reviews more than 130 studies and argues that large language models are compressing the variety of human expression into predictable patterns. I want to walk you through what that means for your writing, your teams, and the culture you influence.
My editor noticed a newsroom memo losing its quirks after someone ran it through ChatGPT.
That observation lines up with the USC team's headline finding: despite training on massive corpora, LLMs tend to reproduce the most frequent patterns in that data. At scale, these models act like a sieve, letting through the most common grains and trapping the odd kernels that make language colorful. The result is more uniform phrasing, predictable argument arcs, and fewer unexpected turns in prose.
Are AI models making people think alike?
Yes—and in subtle ways. When you use ChatGPT or Claude to tidy a paragraph, you are not only editing words; you are borrowing a preferred way of organizing ideas. Researchers note that repeated exposure nudges people toward the same logical habits the model produces. That nudging isn’t just cosmetic: experiments show a shift in how people reason after interacting with model outputs.
At a weekly brainstorm, three colleagues who leaned on Grok returned with the same five ideas.
Group creativity suffers differently from individual creativity. The USC review found that while a person using an LLM can generate more raw text, groups that use LLMs together produce fewer distinct ideas than groups that brainstorm without them. Leaned on collectively, the models become a choir rehearsing a single chorus until new harmonies die out.
How do LLMs affect creativity?
They increase volume but reduce variance. Individual users can get more output quickly, but the outputs are statistically concentrated around dominant patterns in the training data. For teams, that concentration crowds out diverse perspectives and lived experience, the very ingredients that produce the best creative outcomes in organizations and research.
I saw a friend revise an essay until their metaphors disappeared into plain language.
That’s the personal cost. People often use chatbots to polish tone and remove “rough edges.” But polish can erode signature voice. The USC authors point out that LLMs capture statistical regularities and disproportionately reflect dominant languages and ideologies. Zhivar Sourati and colleagues warn that when models overrepresent certain views, their outputs skew toward those views—and humans begin to internalize them.
Companies admit this limitation. OpenAI’s help pages note that ChatGPT is “skewed towards Western views,” and xAI’s Grok has been tuned to reflect the perspectives of its leadership. Those are not technical quirks you can safely ignore; they are design choices with cultural effects.
A policy briefing I attended treated consensus as a feature and diversity as friction.
Design and regulation matter. The USC team flags a policy environment in which incentives can push models toward consensus and away from pluralism. Recent U.S. directives and industry pressures have signaled that models will be judged on perceived safety and conformity—often at the expense of expressive variety.
How can teams preserve idea diversity when using chatbots?
If you care about preserving distinct voices, you need deliberate countermeasures. Use prompts that ask for contrarian takes (see the sketch below), require a human-first draft before editing with a model, and rotate who shapes prompts and who edits outputs. Habits like deliberate prompt engineering, version history in Google Docs, and collaborative whiteboards can help maintain divergent threads instead of collapsing them into a single, model-approved line.
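To make the contrarian-takes tactic concrete, here is a minimal sketch assuming the OpenAI Python SDK and an API key in the environment. The model name, temperature, and prompt wording are my illustrative choices, not a recipe from the USC paper.

```python
# A minimal sketch of a "contrarian takes" prompt, assuming the OpenAI
# Python SDK (openai>=1.0) and OPENAI_API_KEY set in the environment.
# Model name, temperature, and wording are illustrative, not prescribed.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

draft = "Remote work has permanently improved team productivity."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    temperature=1.1,      # higher temperature -> more varied phrasing
    n=3,                  # request three independent completions
    messages=[
        {
            "role": "system",
            "content": (
                "You are a contrarian editor. Do not polish or agree. "
                "Attack the draft's strongest assumption and propose an "
                "angle the author has not considered."
            ),
        },
        {"role": "user", "content": f"Draft claim: {draft}"},
    ],
)

# Print each take separately so the team compares three divergent
# readings instead of adopting one model-approved rewrite.
for i, choice in enumerate(response.choices, start=1):
    print(f"--- Take {i} ---\n{choice.message.content}\n")
```

The design choice that matters here is procedural: asking for three completions at a higher temperature means the team starts from divergent readings it must argue over, rather than from a single polished consensus.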
I don’t want to be the person telling you to delete your tools—AI can speed work and surface facts—but you should treat these models as editors, not authors. Use them to expand reach, not to replace the messy, valuable friction of human thought.
If you care about cultural variety, about the small metaphors and the weird sentences that make ideas sticky, then you need to ask: who gets to train the models we use every day, and whose voices are being left out of that training?