In my last post, I elaborated on my thoughts on relying on AI in parts of the peer review process. In this post, I will focus on the other half of AI misuse in academia – specifically, its use among students in secondary and tertiary education.

In my final teaching assistant stint, for an undergraduate class on introductory ecology, the teaching team had a tough call to make. The traditional mode of examination for this course was an open-ended exam conducted online, with open access to any resources available online or offline, and this time was no different. However, this particular cohort coincided with a surge in AI chatbots (ChatGPT was seeing a massive spike in popularity at the time), and we wanted students to produce original arguments in response to our questions. Thus, we decided to enforce a strict “No AI-generated content” rule for the examination.

While we did consider the possibility of some students flouting the rule, we designed our questions in a way that made it difficult for ChatGPT to produce a thoroughly coherent argument. We thought we had our bases covered. Unfortunately, when we ran a random subset of students’ answers through an AI detector tool, several were flagged as potentially AI-generated. We then had to decide how to handle these marginal cases. In particular, one student had answers so good (for a freshman) that we were deeply skeptical they were original, especially when compared against the quality of his other assignments. Perhaps we had overestimated our ability to spot AI-generated answers, or underestimated ChatGPT’s ability to produce legitimate-sounding content.
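For the curious, the screening itself was nothing sophisticated – conceptually, it boils down to something like the sketch below. This is purely illustrative: the detector endpoint, the response field, and the flagging threshold are hypothetical stand-ins, not the actual tool we used.

```python
# Minimal sketch of batch-screening answers with an AI detector.
# The URL, JSON fields, and threshold are hypothetical placeholders.
import requests

DETECTOR_URL = "https://detector.example.com/v1/score"  # hypothetical endpoint
FLAG_THRESHOLD = 0.8  # illustrative cutoff, not a validated value


def screen_answers(answers: dict[str, str]) -> list[str]:
    """Return the IDs of students whose answers score above the threshold."""
    flagged = []
    for student_id, text in answers.items():
        resp = requests.post(DETECTOR_URL, json={"text": text}, timeout=30)
        resp.raise_for_status()
        score = resp.json()["ai_probability"]  # hypothetical response field
        if score >= FLAG_THRESHOLD:
            flagged.append(student_id)
    return flagged
```

The hard part, of course, was never the code – it was deciding what to do once a borderline score came back.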

I never found out how my professor ultimately resolved that case. Nonetheless, this incident is indicative of a shift in higher education towards a new era – a time when students increasingly substitute original contributions with AI-generated content.

Is this a problem?

Every instinct in me wanted to say yes – after all, what is the point of pursuing higher education if all you do is throw your questions at an AI chatbot, as opposed to doing the hard work of consulting primary sources and conducting meticulous, detailed research?

But then again, don’t many people in the workforce use such tools on a regular basis? As a student, if you know that your future job will probably involve using ChatGPT regularly, why not start early? In fact, surprisingly, some countries are going all in on the use of AI in education – a stark contrast to the prevailing concern about AI’s impact on education.

This gave me pause for thought. What exactly is the problem? I needed more insight from both sides.

AI makes our kids stupid – literally

A growing number of studies have pointed out a potential danger in the use of AI among students – cognitive offloading. Cognitive offloading is a phenomenon where users rely on external tools (such as AI) to automate cognitively intensive tasks, resulting in reduced use of their own cognitive faculties.

At a surface glance, this conclusion seems to make sense. Cognitive ability, in a way, can be viewed as a muscle that needs constant exercise to function well. The exercise, in this case, happens to be tasks that are mentally challenging, be it writing an essay, solving a math problem, or working through a research paper. Using AI to shortcut these tasks is akin to riding an electric scooter to run laps instead of doing the actual running. Sure, you might clock those laps, but that really isn’t the point of the assignment. Take that shortcut often enough, and your muscles atrophy.

Not everyone takes the view that cognitive offloading is a bad thing. On its own, cognitive offloading is already part and parcel of the education curriculum and of daily life – that’s why we use calculators, notetakers and other modern tools. When utilized correctly, cognitive offloading can free us from trivial mental operations in favor of higher-level ones such as logic, creativity and synthesis. Maybe AI can do the same for students.

Does this cognitive atrophy actually play out in students? A recent unpublished MIT study that tracked the brain activity of 54 subjects reported that ChatGPT users “had the lowest brain engagement” and “consistently underperformed at neural, linguistic, and behavioral levels”. The authors reported that this was accompanied by other behavioral and psychological changes among AI users, such as a decreased sense of ownership over their work and reduced curiosity. Should the authors’ findings hold true across a wide range of demographics and subjects, they could lend credence to the concern that AI is truly making our kids dumb.

AI creates a mirage as an agent of truth

AI tends to hallucinate. I knew that from firsthand experience trying out ChatGPT for my research (the results it returned were blatant falsehoods). That is how I knew to be wary.

But students probably won’t know (or register) that.

Most readers have probably queried a chatbot before and received a credible-sounding answer. As any researcher knows, credible-sounding ain’t going to cut it – the gold standard, and the only standard, is a critical examination of the data, which involves, at a minimum, checking the primary source to make an evaluation. Unfortunately, students may do the exact opposite, submitting work without vetting the legitimacy of the sources it references.

For AI to be used effectively in students’ assignments, students first need to develop an awareness of the limitations and issues inherent to generative algorithms – and, beyond that, the skills to critique and vet the output of generative AI instead of trusting it blindly.

Maybe the problem doesn’t lie with the students or AI…

Maybe the big outcry over students using AI is analogous to the old middle school math teacher chiding students for using calculators on their math homework…

At a certain level, I can sympathize with that perspective. If students are handed homework that can be resolved with a simple query to ChatGPT, perhaps the homework itself was a poor medium for assessing students’ learning. After all, employers primarily want results – if a chatbot is the fastest, most time-efficient way to get there, so be it.

Do traditional assignments actually stimulate genuine learning in students (as opposed to using an AI shortcut)? Anecdotally, I have vague memories of recalling facts out of a textbook word for word as a teenager sitting for my O levels. Did I learn anything then? To be honest – probably not. I could recall a lot, but I didn’t feel like I learnt a lot. If schools set assignments that an AI can answer without any critical thinking, what makes us think students would have engaged in critical thinking on their own? In such a situation, AI overdependence isn’t the problem – it’s a symptom that education never honed critical thinking to begin with.

Conclusion

Like it or not, AI has entered the world of academia and is here to stay. While I personally remain antagonistic towards AI use in academia for the time being (I think it causes more problems than it solves – call me a boomer), many students aren’t going to share my sentiments when they open their next assignment due at 2359 that same day. I wouldn’t be surprised by a surge in cases of fraudulent AI use by undergraduates in the near future, which could ultimately diminish the value of a university education altogether.

Is there a way to make AI an effective companion on students’ educational journey? Many seem to think so, usually by way of enhancing AI literacy. I take a more negative stance – most students will remain students, taking the path of least resistance through their schoolwork whenever possible. That is human psychology – learning is inherently hard, and no one likes to do hard things if they can help it, least of all students who are simply in class to earn their degree and move on. I was there myself in my middle school days.

How can students learn then? In my personal experience as a teaching assistant, students engage with the content only if they are invested and clearly see the value in undergoing the cognitive labor required for true mastery of the material. On my end, I try to present the content in a way that is relevant to their lives, and to assess them in ways that go beyond recalling facts and reciting familiar examples. It would be great to hear how others in academia have wrestled with and overcome the AI obstacle in their own departments – let me hear your opinions in the comments!

For me, there is one undermentioned issue in students’ overreliance on generative AI – it breeds an unwarranted expectation of easy, instant answers. Because AI platforms such as ChatGPT are so accessible and demand so little cognitive effort from the user, turning to a chatbot for every problem in life may foster a mindset that every solution to life’s problems can be found online, just one AI query away. Unfortunately, most of life’s obstacles don’t operate that way – be they environmental issues, social challenges or deep technical problems. Tackling them requires nuance and patience, often demanding countless hours of mental effort and, above all, incredible resilience. To me, the erosion of that cognitive resilience is the single biggest factor by which AI could drive the collapse of our next generation of academics altogether.
