Colleges are using AI to read your application essays


According to Phys.org, colleges are increasingly using artificial intelligence to screen and analyze student applications, a trend that’s accelerating this fall. Virginia Tech is debuting an AI-powered essay reader this season, which the university expects will let it inform students of decisions a month earlier, in late January, by helping sort tens of thousands of applications. The California Institute of Technology is also launching an AI tool to assess the “authenticity” of student-submitted research projects via a video chatbot interview. The University of North Carolina at Chapel Hill faced backlash earlier this year after reports that it used AI to evaluate essay grammar and style. Virginia Tech’s tool can scan about 250,000 essays in under an hour, saving an estimated 8,000 hours of human review time based on last year’s application volume.
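Those figures roughly hang together. A back-of-envelope check, assuming four short-answer essays per application (the number and the implied per-essay reading time are inferences for illustration, not reported specifics):

```python
# Sanity check of the reported numbers: ~58,000 applications, an estimated
# 8,000 hours of human review saved, and "about 250,000 essays" scanned.
applications = 58_000        # last year's volume at Virginia Tech
essays_per_app = 4           # assumption: four short-answer essays each
essays = applications * essays_per_app

hours_saved = 8_000
minutes_per_essay = hours_saved * 60 / essays

print(essays)                       # 232000 — close to the ~250,000 cited
print(round(minutes_per_essay, 1))  # 2.1 — about two minutes of human reading per essay
```

Two minutes of human attention per essay is plausible for a first-pass rubric score, which is the kind of work the AI reader is replacing.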


AI: The silent first reader

Here’s the thing: the messaging from universities is incredibly careful. They all stress, repeatedly, that AI is not making final decisions. It’s an assistant, a tireless scanner, a consistency check. But when you look at the actual implementation, it’s clearly becoming a core part of the evaluation pipeline. At Virginia Tech, for example, each of four short-answer essays used to be read by two humans. Now, one of those readers is an AI model trained on past essays and the scoring rubric. That’s a fundamental shift. A human only steps in if the AI and the first human scorer disagree by more than two points on a 12-point scale. So in many cases, the AI’s score is the *only* second opinion. That’s a lot of trust to place in a black box trained on historical data—data that might itself contain human biases.
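The escalation rule described above is simple enough to sketch. This is a hypothetical illustration of the logic as reported, not Virginia Tech's actual code; the function name and anything beyond the "more than two points on a 12-point scale" rule are assumptions:

```python
# Sketch of the two-reader reconciliation: one human score, one AI score,
# each on a 12-point scale. A second human reads the essay only when the
# two diverge by more than two points.
ESCALATION_THRESHOLD = 2

def needs_second_human(human_score: int, ai_score: int) -> bool:
    """Return True if the human/AI disagreement is large enough to escalate."""
    return abs(human_score - ai_score) > ESCALATION_THRESHOLD

print(needs_second_human(9, 6))  # True  — gap of 3, a second human steps in
print(needs_second_human(9, 8))  # False — gap of 1, the AI score stands
```

Notice what the threshold implies: any time the AI lands within two points of the human, its score is accepted without further review.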

The delicate balance of benefits and blowback

The benefits for overwhelmed admissions offices are obvious and massive. Applications have skyrocketed since many schools went test-optional. Virginia Tech got nearly 58,000 applications for 7,000 seats last year. Humans get tired and grumpy, as Virginia Tech’s VP Juan Espinoza notes. An AI doesn’t. It can plow through a quarter-million essays in an hour. Other uses, like Georgia Tech’s AI for parsing transfer student transcripts, genuinely seem to remove tedious, error-prone work and speed things up for applicants. But the UNC Chapel Hill backlash shows how sensitive this is. Students are being told not to use AI unethically, while the school itself uses it to judge them. It feels like a double standard. No wonder, as Espinoza says, other colleges are watching Virginia Tech’s rollout like hawks, wary of their own potential public relations nightmare.

Beyond the essay: The AI holistic read

What’s more fascinating, and perhaps more unsettling, is how AI is being used to try to *understand* students, not just score them. Caltech’s “authenticity” gauge for research projects is a prime example. It looks for “joy” and “passion” via a chatbot interview. Can an algorithm truly detect intellectual curiosity or genuine passion? Or will it just learn to recognize the performance of those things? Similarly, schools like Stony Brook are testing AI to summarize essays and recommendation letters, highlighting key hardships or context for a human officer. Emily Pacheco, an expert on AI in admissions, predicts this collaboration is just the start. “Ten years from now, all bets are off. I’m guessing AI will be admitting students,” she says. That’s a staggering thought. It moves AI from an administrative tool to the ultimate gatekeeper.

The rubric’s new overlord

So we’re left with huge, unanswered questions. The National Association for College Admission Counseling (NACAC) updated its ethics guide to urge transparency, fairness, and integrity. But will that happen? The very nature of proprietary AI models fights against transparency. If an AI is trained on a decade of admissions essays from a school, it’s going to bake in whatever preferences—conscious or not—existed in that decade. Does that reinforce outdated norms? And what about the student’s experience? The process already feels opaque and high-stakes. Adding an inscrutable AI judge, even as an “assistant,” makes it feel more like a game to be optimized than a personal story to be told. It’s part of a much bigger trend of algorithmic decision-making creeping into life’s critical junctures, similar to how big tech is funding AI in classrooms or how companies are rethinking skills for an AI era. The backlash at UNC proves applicants are paying attention. The real test for colleges won’t just be whether the AI works, but whether they can convince students it’s fair.
