AI as the New Study Partner: Yes, we CAN help students learn responsible AI use

AI to the Rescue: https://www.chronicle.com/special-projects/the-different-voices-of-student-success/ai-to-the-rescue

The Chronicle’s article “AI to the Rescue” offers, shall we say, a more balanced view of how today’s college students are using AI in and out of the classroom. It notes that fewer students than some faculty fear are using AI merely as a shortcut to skirt academic rigor. Some are learning to use it as a 24/7 personalized tutor, an essential part of the scaffolding that bolsters their learning, manages their cognitive load, and helps them navigate the challenges we put them through in higher education. The narrative is grounded in student voices and survey data, painting a nuanced picture that challenges the prevailing trope of AI as merely a cheating engine. Thank you, Beth McMurtrie.

I particularly liked Abeldt’s story:

Abeldt began building her AI arsenal. She used ChatGPT and watched YouTube videos to break down complex topics. Perplexity, an AI-powered search engine, helped her find information. Google NotebookLM turned class notes and lecture slides into study guides and podcasts, which she listened to on the way to class. Quizlet made flashcards and ChatGPT created mock exams and quizzes. Abeldt even used AI to develop a study plan to avoid procrastinating or spending too much time on any one assignment. She followed its guidance, and it worked.

Abeldt’s outcome? A GPA rebound, and more importantly, a sense of agency restored.

Not all are on board, nor should they be

Everyone — faculty, staff, and students alike — should proceed with caution and not jump into AI without proper guidance and boundaries. As we know, OpenAI, Anthropic, Google, Meta, and the other corporations developing AI have zero interest in protecting humanity as they build these tools. There is no doubt that AI can cause harm, and it already has; we will keep seeing examples of harm from irresponsible or naive use, and with increasing frequency. Many should avoid it altogether, and it’s not always clear when it should be used, if at all. Despite the dangers, we are noticing positive outcomes for those who understand how to use it responsibly. Anecdotes are not data, but my colleagues and I have found that in our higher-level courses and in research with our students, AI can be quite helpful. Even in those contexts, though, it needs to be used responsibly and with caution. (Responsible AI – I will keep pushing this with urgency, no matter what my colleagues think of me.) Some, particularly in the humanities or those with strong ethical stances, opt out entirely, citing concerns about bias, trustworthiness, or the environmental impact of large models. I respect those who take a stand through informed, data-driven decisions.

The article points out that some blame AI for the erosion of peer networks, as classmates default to AI tools for feedback and collaboration, bypassing the rewarding experiences (and challenges) that come through human interaction: working with peers, meeting with TAs, and showing up to their instructors’ office hours. No. Blame that on social media and mobile devices, both arguably bigger disruptions to regular human contact than anything AI is doing. AI might make it easier to avoid human interaction, but social media has done, and will continue to do, far more damage to human connection than AI ever will.

The fact is that very few campuses are monolithic, and that is especially true at primarily undergraduate liberal arts colleges. The use of AI depends on the discipline. Let’s say that again: AI use and attitudes are shaped by discipline, workload, and personal philosophy. Let’s start respecting each other’s autonomy to choose how, and whether, to use it!

Here’s my take (subject to change as fast as AI is evolving):

Everyone should use AI only with a full understanding of how it works, and there should be no judgment on the choices individuals make. Each user should understand how the model they are using was trained and on what data, and should understand its strengths and weaknesses as well as its ethical, societal, moral, and environmental impacts. If you determine AI is not right for you, your class, or your own learning, then take that stand: explain to your students, with data, why you are choosing that path, so they can make informed decisions of their own. Let’s stop the virtue signaling, guilting, and condemning of others who have chosen to use it.

We all need to figure out what is right for our courses, our departments, and our programs. We need to teach our students to be ethical and responsible users of AI today. Every administration should be crafting a general policy on ethical and responsible AI use for their campus. But understand that there is no one-size-fits-all solution. That makes it challenging.

Disruption

The fact is, as I’ve stated elsewhere, AI is not going anywhere, and choosing to ignore it is not a good way to move forward. It’s a colossal disruptor, displacing the norm of “how things are done,” just as so many technologies over our lifetimes have. It’s not the first, and it won’t be the last. Consider:

  • Industrial Automation, an endeavor around long before AI, has displaced traditional job markets, increased factory emissions, and disrupted landscapes from resource extraction.
  • Automobiles enabled great economic growth, but they led to urban sprawl and traffic fatalities, have been formidable contributors to air pollution, and drove substantial resource extraction for oil.
  • Nuclear Technology has such potential, but ask Chernobyl about the risks.
  • Computer and Internet Technologies have enabled nearly every modern disruption today! Want to point fingers at someone for AI? Well, blame this category! It disrupted how we communicate and do our day-to-day work, created enormous privacy and security concerns, and continues to contribute to the social and digital divides in the world. And the e-waste generated from new phones and tech being released year after year is profound. (It also contributed to enormous labor waste measured in countless hours of Solitaire played on the company’s dime!)
  • Biotech has delivered remarkable improvements in crop yield, at the cost of long-term health impacts, chemical runoff, loss of biodiversity, water pollution, and destruction of ecosystems.
  • Plastics, whose huge negative impact on plants, animals, and human health from microplastics we are only now beginning to see.

Disruption seems to be a historical norm. So, what is different about AI?

While many of the above are considered disruptors, AI is just… different. I’m not a philosopher by any stretch of the imagination. But I think way too much, I try my hardest to listen to my colleagues across the college, and I read too much about AI. Here’s my take…

  1. Scope of human displacement – none of the above technologies encroached on the cognitive, creative, and decision-making work we do the way AI has. It’s infringing on territory we thought was reserved for humans.
  2. Speed and Scale of change – the above disruptions took time to evolve. The pace at which AI is moving is unprecedented, unpredictable, and driven mainly by large, greedy corporations who don’t give a %)@# how it impacts humans. And it’s happening while we have an administration with no interest in curtailing or regulating how it’s used. This is downright unnerving and incredibly frustrating. As soon as I have something figured out, more capabilities emerge.
  3. Existential questions and human identity – AI has us waking up in the morning wondering what it means to be human in the era of AI.
  4. Trust, security, control, accountability, etc. – AI systems are treated as those mythical “black boxes,” i.e., algorithms we blindly use with no concern for what’s going on behind the scenes. Who is responsible when AI causes harm, makes a biased decision, or produces unintended consequences?
  5. Privacy and manipulation – OK, so this really is part of the previous concern, but it bears repeating. AI can process an incredible amount of data, making it quite easy for “big brother” systems to monitor and surveil us without our knowing. Then there are the deepfakes, the misinformation, etc.… all posing greater risks than any of the above disruptors.
  6. Media – boy, is the mainstream media having a heyday with AI. Fear sells the news! News outlets love it when they can get your attention through fear and manipulation of your emotions. Facts? Pffft… not necessary once I’ve got you emotionally charged. It’s what drives our political coverage on the news, too. AI is being portrayed as a formidable, uncontrollable force, conjuring up images of rogue terminators taking us out. The media fuels these anxieties.

There is a lot of fear and discontent over AI, and rightfully so. It is indeed a disruptor, unlike anything we have seen in quite some time, perhaps in human history. Or is it? Time will tell. Automation, for instance, was long criticized as a source of job displacement, but we eventually realized it would create additional jobs.

Balance – the key to everything

Let’s get back to the article at hand. Fortunately, it ends on a rather pragmatic note. The future of student success, as the author suggests, lies in balance – something we’re not very good at in academia, especially as we become a more polarized society. Can academics (me included) leverage AI for efficiency and accessibility while doubling down on those things that make us human? Can we emphasize more intuition, compassion, innovation, creativity, and nuanced judgment? These are the things that AI cannot replicate. Can we reframe how we grade our students, rewarding curiosity, a willingness to be challenged, and intellectual deep dives, instead of encouraging them to learn only enough to pass tests?

Remember – AI can only imitate, not innovate. Let’s come together to figure out how to reimagine our mission in higher education – fostering the next generation of critical thinkers, world-changers, and ethical minds. It is essential if we are to maintain relevance.

You mean, ChatGPT isn’t good for your brain?

https://www.theweek.in/news/health/2025/06/19/chat-gpt-might-not-be-good-for-your-brain-new-mit-study-finds.html

A recent MIT study raises concerns about how using ChatGPT might impact critical thinking and brain engagement. Summarizing the outcomes here (the study has not yet been published or peer-reviewed):

  • Sample size? n=54 – hardly enough to say much of anything about humanity (see the quick sketch just after this list), though it’s not the only study to suggest negative impacts on our brain activity from human overreliance on genAI chatbots.
  • An interesting aspect of the study was the use of EEG to measure brain activity. Participants who used ChatGPT to write SAT essays showed the lowest brain engagement, with readings revealing reduced neural activity compared to those using Google Search or no tools at all. Not surprising at all.
  • Their essays were more uniform and lacked creative or original thought… as one would expect from any generative AI engine that serves as nothing but a fancy-ass stochastic regurgitation machine (see the toy sketch a bit further below).
  • Another obvious outcome – participants also became [insert-shocked-face-here] lazier over time. When they hadn’t used their own brains to write, they struggled to recall what they had written when asked to revise without the AI.
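
As a back-of-the-envelope check on that sample-size complaint, here is a quick power-analysis sketch. This is my own illustration, not anything from the study: I’m assuming the reported design of 54 participants split across three conditions and a standard one-way ANOVA with conventional thresholds.

```python
# Rough power analysis for a three-group study with 54 total participants.
# My illustration only -- the MIT team's actual analyses may differ entirely.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
min_effect = analysis.solve_power(
    effect_size=None,  # leave None to solve for the minimum detectable effect
    nobs=54,           # total participants across all conditions
    alpha=0.05,        # conventional false-positive rate
    power=0.80,        # conventional target power
    k_groups=3,        # ChatGPT vs. Google Search vs. no tools
)
print(f"Minimum detectable effect (Cohen's f): {min_effect:.2f}")
# Comes out to roughly f = 0.42 -- a large effect by Cohen's benchmarks
# (0.10 small, 0.25 medium, 0.40 large).
```

In plain terms, a study this small can only reliably surface large effects; anything subtler would likely slip through. That’s exactly why n=54 is hardly enough to say much of anything, even if the direction of the findings is plausible.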

The group that didn’t use any tools demonstrated the highest neural connectivity; its members were more engaged and curious, and felt more satisfied with their work.
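
And since I called genAI a “stochastic regurgitation machine” above, here is a deliberately crude toy of what I mean. This is nothing like how modern transformers actually work, and the tiny “corpus” is made up, but it shows the core idea the jab rests on: text generated purely by resampling patterns already seen in prior text.

```python
# A toy "stochastic regurgitation machine": generate text by sampling the
# next word from a frequency table of word transitions seen in a corpus.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ate the snack"  # made-up text
tokens = corpus.split()

# The entire "model" is this table of which word follows which.
follows = defaultdict(list)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word never had a successor in the corpus
        out.append(random.choice(options))  # sample proportional to frequency
    return " ".join(out)

print(generate("the"))  # e.g., "the cat sat on the mat and the cat"
```

Every word it “writes” is a reshuffling of what it has already seen. Real LLMs are vastly more sophisticated, but essays that regress toward the statistically typical, as the study describes, are what you would expect when output is sampled from patterns in prior text.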

I have yet to talk with any academic in any field who is not sounding enormous alarms about student use of AI (not to mention venting frustration over cheating). What I tell them is this: generative AI, like it or not, is here to stay. Power-hungry, greedy corporations are driving it with no interest in understanding its impact on humanity. Big companies are lured by the temptation to save a buck or a million, and the tech companies are ready to serve the next AI plate. AI will only continue to become more capable and, yes, even awe-inspiring.

The cynical side of me (which is about 99% of me these days) views genAI as democratizing cheating. I’ve been privy to students cheating. All of us educators have. But the best cheating came from the affluent – those students who could afford subscriptions to cheating websites and services, some even paying for their code or essays to be written by cheating “farms” around the world! Well, ChatGPT has made cheating accessible to all! Snark aside, let’s be realistic – cheating has been around since the dawn of education. Stop worrying about the cheating! My God, that’s the least of our problems today. It doesn’t matter what fancy language you put on your syllabus, what rules the administration tries to put in place, or what types of lockdowns you put on school computers – students are going to cheat, they have access to all the best tools out there, and they likely know more than you do about how to use them.

My observation is that college students do not know how to use AI responsibly. They do not know how to utilize it in ways that aid and promote their learning, keep them just as engaged and invested, and protect their critical thinking.

In theory, the bar is simple: if learning with AI tools does not yield tangible, measurable evidence that students’ intellectual growth and critical thinking exceed what they achieve without the tools, then don’t use the tools, and what are we even doing having this discussion? Problem solved! But that’s not realistic. Humans (me included) are opportunistic, stressed-out creatures who will always have moments of needing to cut corners to get work done, and damn, generative AI makes that easy.

Our real problem: teaching our kids how to learn, and how to protect their own brain development, in the era of highly accessible generative AI tools.

It’s not going anywhere. It will be used. What do we do?

I am trying to believe that the hype and chaos will eventually subside, allowing genAI to be viewed as a tool, maybe even an essential one, much like the calculator for a mathematician, the stethoscope for a doctor, or the IDE for a software engineer. I hope that in time it will mature into an essential aid for the mundane, repetitive tasks, letting us become more efficient with our time, accomplish more each day than we could before, and ideally spend more time with our friends and families (you know, those things we used to do before mobile devices and social media. Remember having conversations at restaurants?).

But it’s going to take a lot of time to get there, and honestly, I’m not sure we (at least in the U.S.) have what it takes to enact the legislation urgently needed to regulate how AI is used in K-12, or to decide whether it should be allowed at all in the early years. This is not a partisan issue. This is some of the most important legislation we could pass to protect our future. GenAI in the hands of kids, at an age when their brains are still growing, is the absolute worst possible use of any technology today. We need to help our K-12 educators learn the tech themselves and recognize early signs that kids may be using it.

Every discipline at the college level needs to reimagine its curriculum to include instruction on how to use AI safely and responsibly while protecting one’s own critical thinking. Ideally, if done correctly, this should enable us to accomplish more in the classroom. Our students should be able to conquer more challenging problems than ever before… if and only if we help them learn how to use it! And that’s on us, friends – the educators of the world – to help the next generation learn how to do that. Unfortunately, times are bizarre right now. With the recent devaluing of education in this country, I’m not sure we have what it takes to wake up and fund and support the education system the way it needs to get through this time. Papers like the one above will become the norm, to the point where it’s mostly AI publishing yet another paper announcing yet another test where it outperforms humans on yet another measure of brain activity, intelligence, or critical thinking.

Legislation isn’t going to happen, at least not in the near future. It’s on all of us, as educators and as members of a society that values education, to lead the way in protecting our kids. In my courses, it’s my responsibility to teach my students to utilize these tools in ways that promote their brain development, to use AI responsibly and safely, and to minimize harm to humanity. I don’t have the answers. None of us does, but pretending it’s not here and doing nothing is not the answer. Failure to act will transform our society into some form of an idiocracy within the next generation, with humanity at the mercy of powerful AI engines that do all our thinking for us. (I never realized that Mike Judge was a modern prophet.)

I consider myself a huge fan of technology, dating back to my high school days. The technology itself does not scare me. I find it fascinating and am actively using AI and ML models in my own work. It has a purpose. What scares me is the willingness of society to let AI do its thinking for it. My worst fear is letting our kids use it with no reins and no understanding of the long-term harm it is doing to their mental capacities. The damage is real, and if we don’t help our kids and college students learn how to use it safely, it will have devastating impacts on humanity.

This is a disruptive technology. Disruption causes mass chaos for a time as society adjusts. It also displaces and eliminates those who fail to evolve with it and adapt. Just ask Kodak. Or Blockbuster. Or how about BlackBerry? Or Palm. MySpace anyone? AOL?