I watched a graduate recruiter quietly scan a transcript while the room waited for a verdict. The GPA was flawless; the interview exposed a gap the whole room could hear. I realized then that a single number had become a mirror that lied.
I’m writing to you because this is not an isolated surprise. You’ve probably seen the same pattern: immaculate essays, suspiciously polished code, and students who can’t explain their own work. A new working paper from UC Berkeley’s Igor Chirikov helps explain why that keeps happening — and why it matters for your hiring decisions, your campus, or your own studies.
A professor finds whole essays that read like they were produced by the same pen
In a large Texas university dataset spanning 2018–2025, Chirikov examined more than 500,000 student-course enrollments across 84 departments and found grade shifts concentrated where take-home writing and coding mattered most. Courses with heavy unsupervised assignments saw the biggest rises: A grades in AI-exposed classes rose roughly 30 percent after ChatGPT's release, according to the paper hosted on eScholarship.
You should know there are three distinct ways students deploy generative AI. Augmentation helps with research and phrasing while the student remains hands-on. Reinstatement creates new AI-friendly tasks that still build skills. Displacement automates the work entirely — the essay, the code, the homework — and that’s where grades inflate without matching learning.
How is AI affecting student grades?
Short answer: it's lifting GPAs faster than skills; you've seen the headlines. The longer answer is in the details: classes weighted toward take-home deliverables are the easiest targets for displacement. Proctored exams, in-person presentations, and oral defenses still act as natural brakes. But when evaluation relies on unsupervised outputs, the score no longer measures mastery.
An admissions officer notes applicants with glossy GPAs but thin portfolios
Employers and grad programs have long used GPAs to triage talent. That system is fraying.
When I talk to recruiters, they say resumes look cleaner but interviews reveal the gaps. This isn't just academic alarmism; it's a practical hiring problem. If AI displaces skill-building tasks during learning, graduates may leave campus with weaker capabilities in the very areas AI assists, creating a loop: more AI in education begets more AI reliance in the workplace. AI has turned grading into a pressure cooker: GPAs keep building until the lid blows off, usually in an interview.
Can universities stop AI-enabled cheating?
They can try, and some are. Princeton faculty recently voted to roll back a 133-year-old honor code that let students sit in-person exams without proctors, after surveys showed around 30% of seniors admitted to cheating with generative AI. Harvard faculty are debating capping A grades at 20% of a class to blunt inflation. But policy fights alone won't repair lost practice time: you can tighten rules, proctor more exams, or adopt detection tools from vendors like Turnitin and new collaboration features from OpenAI, yet those fixes chase symptoms more than the underlying incentives.
A dean notices students treating AI as a shortcut rather than a tutor
Students respond to incentives. Their GPA gates internships, scholarships, and graduate offers.
I don't blame them. When your future often hinges on a single figure, you use the tools at hand. Many will choose the path that minimizes risk. But the consequence is that a generation may graduate confident in the final product but unsure of the craft. Relying on AI can feel comfortable at first, like being handed the keys to a car without ever taking a lesson.
Will AI make graduates less employable?
Not immediately — resumes still open doors. Over time, though, mismatches will grow. Employers will spend more on vetting, internships will lengthen, and onboarding costs will rise. The sectors that value independent problem-solving, like software engineering or research, will suffer if graduates can’t do core tasks without AI assistance. That could accelerate automation, since companies will prefer systems that deliver consistent results over human hires who need more training.
An instructor spots creative assignments being reshaped into AI-friendly formats
Teachers are changing their syllabi to fight back.
Some professors are converting take-home essays into staged portfolios, timed reflections, or in-class presentations. Others are redesigning tasks to emphasize process over product: drafts, annotated sources, live coding sessions. You'll also see more oral exams and collaborative projects where provenance matters as much as the result. Platforms like GitHub Classroom, Canvas, and integrated IDEs will play a role, as will AI-detection services, but the bigger shift is cultural. If assessment rewards demonstrated practice rather than polished deliverables, learning follows.
I’ve read the data, spoken to faculty at Princeton, Harvard, and across state systems, and tracked students who use ChatGPT and other OpenAI tools in their coursework. This feels like a hinge moment: either institutions rewrite assessment to reward demonstrable capability, or grades keep inflating while competence lags.
You can demand transcripts that match reality, and you can rewrite courses to protect practice. Or you can accept a future where a clean GPA is an unreliable signal and employers treat degrees like decorative badges. Which outcome will you choose?