I don’t know if it’s just me, but every single professor I’ve had assigns at least one discussion post on generative artificial intelligence and its use in the classroom. While important, these conversations have quickly become repetitive, formulaic and unproductive.
One of the laziest talking points I keep hearing from commentators, classmates and even some instructors in these discussions goes something like this: “AI is a tool, just like a calculator. Would you insist students shouldn’t use calculators? You’re going to have access to AI in the workplace, so why shouldn’t you use it in the classroom, too?”
I’ve had to respond to these questions so many times it seemed appropriate to deliver my own exasperated polemic. So, here we go: AI use in the classroom is rapidly degrading the quality of both the college experience and the education itself.
I’ve witnessed AI use in both writing and programming, in the classroom and in student organizations. While many will extol AI’s productivity benefits and its ability to readily provide constructive criticism and collaboration, I can attest that the boots-on-the-ground reality in academia is much grimmer.
For starters, the calculator analogy breaks down pretty quickly. If you use a calculator to solve most intermediate math problems, you’ve only displaced the repetitive work of manual calculation.
You still have to analyze the problem, formulate a mathematical model or approach and then verify your answer. You still fundamentally understand the problem and the way you solved it.
Large language models, or LLMs, like ChatGPT, however, are unlike any tool that has come before: they displace thinking itself. The calculator isn’t making any creative or logical decisions on your behalf. You may think this is an arbitrary distinction, but it has profound consequences.
Sure, you can generate an essay with ChatGPT easily enough, but is it any good? Look at its word choice: is it appropriate for the subject? What tone does the essay take, why, and is that the right tonal decision for the assignment?
A competent writer could use their judgement to leverage AI for alternative approaches and quick edits to drafts. But when AI presents such an alluring alternative to learning these skills and developing editorial judgement, inexperienced students quickly and uncritically accept the LLM’s decisions without understanding them or the trade-offs involved. They are cooking without taste buds, with no ability to evaluate the AI’s responses.
In the context of programming, this often means computer science students simply input “how do I do X” and receive a generic response. The response may use an interpreted programming language, which is slower but more portable, when a compiled language, which is faster, would better fit their use case. The answer may be optimized primarily for time complexity when space complexity matters more for the specific problem.
Did that sound like nonsense to you? Unfortunately, many computer science students don’t understand it either. The LLM makes the decision, and they accept it — in fact, they might not even realize these trade-offs were made.
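To make that kind of hidden trade-off concrete, here is a minimal sketch in Python. The duplicate-checking task and the function names are hypothetical examples, not from any real assignment: two functions do the same job, one spending memory to save time, the other spending time to save memory.

# A sketch of the time-versus-space trade-off described above.
# The task (finding duplicate IDs in a list) is a made-up example.

def has_duplicates_fast(ids):
    # Optimized for time: a set gives constant-time lookups,
    # but the set itself can grow as large as the input list.
    seen = set()
    for item in ids:
        if item in seen:
            return True
        seen.add(item)
    return False

def has_duplicates_small(ids):
    # Optimized for space: no extra data structure,
    # but the nested scan is quadratic in time.
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            if ids[i] == ids[j]:
                return True
    return False

print(has_duplicates_fast([3, 1, 4, 1, 5]))   # True
print(has_duplicates_small([2, 7, 1, 8]))     # False

Either answer “works,” but which one is right depends on the problem, and that is exactly the judgement an LLM quietly makes for the student.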
It works until it doesn’t, at which point the student lacks the knowledge and discipline to diagnose and solve the problem. I know because I’ve helped my classmates debug code they didn’t write — and certainly didn’t understand — and edit essays they didn’t write and barely read.
This may be enough to scrape by in your classes. But eventually, when you need to tackle real-world problems, you will fail. You will struggle to understand what matters about the problem and its requirements, to evaluate the tools and approaches available to you, to weigh strategic trade-offs and to identify flaws in your solution.
Another trend that I’ve noticed since the proliferation of ChatGPT and similar LLM tools is that students increasingly sound the same. News flash: your writing is bland! It’s boring! Its structure is generic! It’s overly verbose! It all has the same professional, Wikipedia-style tone!
Now, being boring is hardly a sin, but when I know your writing is AI-generated — and trust me, I can tell — why would I bother reading it? You certainly didn’t care enough to write it, so why should I care enough to read it? Did you even bother reading it? Does this reflect any of your real thoughts or opinions? I’m inclined to think not.
Frankly, no one cares when it’s a classroom assignment, but eventually you will be asked to write something that matters. If I’m your boss, I want to know what you are capable of, not what ChatGPT can do. If I’m your colleague, I want to consult your experience and judgement, not a statistical model.
I’m not saying ChatGPT and other models aren’t useful or don’t have their place. In the hands of a critical, experienced user, they can automate repetitive tasks and free up time for more important work. The problem is that if you rely on AI tools, you won’t develop the experience and critical mind necessary to use them effectively.
So please, if you’re a student, moderate your AI usage — especially in domains you don’t know much about. I realized I was too reliant on AI programming tools during my junior year, so I removed GitHub Copilot from my code editor, and I’ve felt much more self-sufficient since doing so. AI is too convenient, so put up some barriers and find a system that forces moderation.
Your brain is a muscle. Use it. Or, at the very least, please find a different mantra to repeat during those classroom discussion sessions.
Caleb Elizondo is a computer science senior and Web Editor for The Battalion.