A recent MIT study raises concerns about how using ChatGPT might impact critical thinking and brain engagement. Summarizing the outcomes (the study has not yet been published or peer-reviewed):
- Sample size? n=54 – hardly enough to conclude much of anything about humanity, though it’s not the only study to suggest negative impacts on our brain activity from overreliance on genAI chatbots.
- An interesting aspect of the study was the use of EEG to measure brain activity. Participants who used ChatGPT to write SAT essays showed the lowest brain engagement compared to those using Google Search or no tools at all; EEG readings revealed reduced neural activity. Not surprising at all.
- Their essays were more uniform and lacked creative or original thought… as one would expect from any generative AI engine that serves as nothing but a fancy-ass stochastic regurgitation machine.
- Another obvious outcome – participants also became [insert-shocked-face-here] lazier over time. When they hadn’t used their own brains to write, they struggled to recall what they had written when asked to revise without the AI.
The group that didn’t use any tools demonstrated the highest neural connectivity, was more engaged and curious, and felt more satisfied with their work.
I have yet to talk with an academic in any field who is not sounding enormous alarms (not to mention venting frustration over cheating) about student use of AI. What I tell them is this: generative AI, like it or not, is here to stay. Power-hungry, greedy corporations are driving it with no interest in understanding its impact on humanity. Big companies are lured by the temptation to save a buck or a million, and the tech companies are ready to serve the next AI plate. AI is not going anywhere. It will continue to become more capable and, yes, even awe-inspiring.
The cynical side of me (which is about 99% of me these days) views genAI as democratizing cheating. I’ve been privy to students cheating. All of us educators have. But the best cheating came from the affluent – those students who could afford subscriptions to cheating websites and services, some even paying to have their code or essays written by cheating “farms” around the world! Well, ChatGPT has made cheating accessible to all! Snark aside, let’s be realistic – cheating has been around since the dawn of education. Stop worrying about the cheating! My God, that’s the least of our problems today. It doesn’t matter what fancy language you put on your syllabus, what rules the administration tries to put in place, or what lockdowns you put on school computers – students are going to cheat, they have access to all the best tools out there, and they likely know more than you do about how to use them.
My observation is that college students do not know how to use AI responsibly. They do not know how to use it to aid and promote their own learning in a way that keeps them just as engaged and invested, while protecting their critical thinking.
In theory, if we only allowed learning with AI tools when students could show tangible, measurable evidence that their intellectual growth and critical thinking meet or exceed what they achieve without the tools, then what would we even be discussing? Problem solved! But that’s not realistic. Humans (me included) are opportunistic, stressed-out creatures who will always have moments of needing to cut corners to get work done, and damn, generative AI makes that easy.
Our real problem – teaching our kids how to learn, and how to protect their own brain development, in the era of highly accessible generative AI tools.
It’s not going anywhere. It will be used. What do we do?
I am trying to believe that the hype and chaos will subside in time, allowing genAI to be viewed as a tool, maybe even an essential one – much like the calculator for a mathematician, the stethoscope for a doctor, or the IDE for a software engineer. I hope that, in time, genAI will mature into an essential aid for the mundane, repetitive tasks, making us more efficient with our time so that we can leave each day accomplishing more than we could before – ideally letting us spend more time with our friends and families (you know, those things we used to do before mobile devices and social media. Remember having conversations at restaurants?).

But it’s going to take a lot of time to get there, and honestly, I’m not sure we (at least in the U.S.) have what it takes to enact the legislation urgently needed to regulate how AI is used in K-12 – or whether it should be allowed at all in the early years. This is not a partisan issue. This is one of the most important pieces of legislation that needs to be passed to protect our future. GenAI in the hands of kids, at the age when their brains are still developing, is the absolute worst possible use of any technology today.

We need to help our K-12 educators learn to use the tech themselves and to recognize early signs that kids may be using it. Every discipline at the college level needs to reimagine its curriculum to teach students how to use AI safely and responsibly while protecting their own critical thinking. Done correctly, it should enable us to accomplish more in the classroom. Our students should be able to conquer more challenging problems than ever before… if and only if we help them learn how to use it! And that’s on us, friends – the educators of the world – to help the next generation of students learn how to do that. Unfortunately, times are bizarre right now.
With the recent devaluing of education in this country, I’m not sure we have what it takes to wake up and fund and support the education system the way it needs to get through this time. Studies like the one above will become the norm – to the point where it’s mostly AI publishing yet another paper announcing yet another test where it outperforms humans on yet another measure of brain activity, intelligence, critical thinking, and so on.
Legislation isn’t going to happen, at least not in the near future. It’s on all of us, as educators and members of a society that values education, to lead the way in protecting our kids. In my courses, it’s my responsibility to teach my students how to use these tools in ways that promote their brain development, to use AI responsibly and safely, and to minimize harm to humanity. I don’t have answers. None of us does, but pretending it’s not here and doing nothing is not the answer. Failure to act will transform our society into some form of idiocracy within the next generation, with humanity at the mercy of powerful AI engines that do all our thinking for us. (I never realized that Mike Judge was a modern prophet.)
I consider myself a huge fan of technology, dating back to my high school days. The technology itself does not scare me; I find it fascinating and actively use AI and ML models in my own work. It has a purpose. What scares me is society’s willingness to let AI do its thinking for it. My worst fear is letting our kids use it with no reins and no understanding of the long-term harm it does to their mental capacities. The damage is real, and if we don’t help our kids and college students learn how to use it safely, it will have devastating impacts on humanity.
This is a disruptive technology. Disruption causes mass chaos for a time as society adjusts. It also displaces and eliminates those who fail to evolve with it and adapt. Just ask Kodak. Or Blockbuster. Or how about Blackberry? Or Palm. MySpace anyone? AOL?