AI and the Higher Education Arms Race
How the classroom is becoming a battleground of trust, tools, and runaway selection
If you think ChatGPT in schools is just for cheating on essays, think again. OpenAI is pushing to make AI assistants as fundamental to college life as email—embedding them in every facet of the experience, from personalized tutors to AI-generated quizzes and job interview simulators. As Leah Belsky, OpenAI’s vice president of education, recently told The New York Times, the vision is for every student to have a personal AI account, just like a school email address. The California State University system and Duke are already on board. Duke has even launched its own branded platform: DukeGPT.
But what happens when the same technology that writes the paper also grades it? Or when professors rely on AI to generate assignments, feedback, and even lesson plans? The classroom becomes a self-reinforcing ecosystem—one where AI is both the tool and the judge.
What makes this dynamic uniquely volatile is the adversarial relationship at its core. In most industries, technology adoption is collaborative or competitive among peers. However, in education, the roles are asymmetrical: teachers set the rules and assign the work; students are expected to produce original output. When both sides adopt AI—students to generate content, teachers to assess or detect it—education becomes an arms race. Each side is incentivized to outmaneuver the other, inside a system that depends on trust and integrity to function.
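To see why the incentives tilt the way they do, consider a toy model: the classroom as a two-player game in which each side chooses whether to restrain itself or adopt AI. The sketch below (in Python, with entirely hypothetical payoff numbers chosen for illustration) shows that, under these assumptions, adopting AI is each side's best response no matter what the other does, so the only stable outcome is mutual adoption, even though both sides would prefer mutual restraint.

```python
# Toy sketch of the classroom "arms race" as a two-player game.
# The payoff numbers are illustrative assumptions, not measurements:
# each side does better by adopting AI regardless of what the other does,
# yet mutual adoption leaves both worse off than mutual restraint.

from itertools import product

STRATEGIES = ["restrain", "adopt_ai"]

# (student_payoff, teacher_payoff) for each strategy pair -- hypothetical values
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),   # trust intact, grades still signal learning
    ("restrain", "adopt_ai"): (1, 4),   # students judged by machines they can't contest
    ("adopt_ai", "restrain"): (4, 1),   # undetected AI work hollows out grading
    ("adopt_ai", "adopt_ai"): (2, 2),   # effort shifts to evasion and detection
}

def best_response(player: int, other_strategy: str) -> str:
    """Return the strategy that maximizes this player's payoff, given the other's choice."""
    def payoff(mine: str) -> int:
        pair = (mine, other_strategy) if player == 0 else (other_strategy, mine)
        return PAYOFFS[pair][player]
    return max(STRATEGIES, key=payoff)

# Pure-strategy Nash equilibria: profiles where each side is already best-responding.
equilibria = [
    (s, t) for s, t in product(STRATEGIES, STRATEGIES)
    if best_response(0, t) == s and best_response(1, s) == t
]

print("Equilibria:", equilibria)                            # [('adopt_ai', 'adopt_ai')]
print("Payoffs there:", [PAYOFFS[eq] for eq in equilibria]) # [(2, 2)], worse than (3, 3)
```

The point is the structure, not the numbers: as long as unilateral adoption pays and mutual adoption corrodes the payoff for both, the game pushes everyone toward the corner nobody wanted.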
This isn’t just a story about productivity—it’s a technological arms race in a system built on trust. As AI blurs the line between help and deception, academic integrity, once a cornerstone of that system, comes under siege. Teachers fear losing authority or relevance; students fear falling behind without machine assistance. Both sides feel pressured to adopt tools they may not fully trust. The more AI is used, the more it is expected, creating a feedback loop in which its presence becomes mandatory rather than optional. In evolutionary biology, that dynamic is called “runaway selection,” or “Fisherian runaway,” after the statistician and evolutionary biologist Sir Ronald Fisher.
Runaway selection explains how traits—like the peacock’s tail—become extreme simply because they’re preferred, not because they’re useful. Flashy, burdensome, but impossible to ignore. The same logic applies here. AI tools are both the trait and the preference: their popularity makes them necessary, and their necessity makes them popular. As AI becomes more common in classrooms—used to write, grade, assess, and simulate—it becomes the standard not because it improves learning, but because the system adapts around its presence: a self-reinforcing cycle in which AI is indispensable simply because it’s what everyone expects.
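The feedback loop itself is easy to sketch. The toy simulation below (Python, with made-up coupling constants; not a model of real classrooms or of Fisher's mathematics) lets adoption and expectation feed each other: even when the fixed "genuine usefulness" term is tiny, the mutual reinforcement alone drives adoption toward saturation.

```python
# Toy sketch of the feedback loop described above, with made-up parameters.
# "adoption" is the share of coursework touched by AI; "expectation" is how
# strongly the surrounding system assumes AI will be used. Each quantity
# feeds the other, so adoption climbs toward saturation even when the
# fixed usefulness term is small.

def simulate(steps: int = 30,
             adoption: float = 0.05,     # initial share of AI-assisted work (assumed)
             expectation: float = 0.05,  # initial institutional expectation (assumed)
             benefit: float = 0.02,      # fixed pull from genuine usefulness (assumed small)
             coupling: float = 0.35):    # strength of mutual reinforcement (assumed)
    history = []
    for _ in range(steps):
        # Expectation rises with observed adoption; adoption rises with expectation.
        expectation += coupling * adoption * (1 - expectation)
        adoption += (coupling * expectation + benefit) * (1 - adoption)
        history.append((adoption, expectation))
    return history

for step, (a, e) in enumerate(simulate(), start=1):
    if step % 5 == 0:
        print(f"step {step:2d}: adoption={a:.2f}, expectation={e:.2f}")
```

With these made-up constants, adoption passes 90 percent in about a dozen steps, driven almost entirely by the coupling term rather than by the small benefit term; swap the numbers and the curve shifts, but the shape of the loop stays the same.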
That shift has real-world consequences. As AI reshapes the classroom, it also reshapes the students within it, altering how they learn and what they become. Over-reliance on machine assistance risks dulling critical thinking, interpersonal skills, and the kind of cognitive struggle that builds real understanding. At the same time, unequal access to AI tools threatens to widen educational divides, privileging the well-resourced and leaving others behind. This isn’t just a pedagogical issue—it’s a blueprint for the workforce, the economy, and the social contract that follows.
That’s not reform. That’s mutation—and it’s editing education at the chromosomal level. Unlike peacocks trapped by their evolutionary programming, however, educational institutions retain the power to step back and redesign the rules of engagement—but only if they can coordinate action before the arms race becomes too entrenched to reverse.