AI as the New Study Partner: Yes, we CAN help students learn responsible AI use

AI to the Rescue: https://www.chronicle.com/special-projects/the-different-voices-of-student-success/ai-to-the-rescue

The Chronicle’s article “AI to the Rescue” offers, shall we say, a more balanced view of how today’s college students are using AI in and out of the classroom. It notes that fewer students than some faculty fear are using AI merely as a shortcut around academic rigor. Many are learning to use it as a 24/7 personalized tutor, treating it as essential scaffolding to bolster their learning, manage cognitive load, and navigate the challenging demands we place on them in higher education. The narrative is grounded in student voices and survey data, painting a nuanced picture that challenges the prevailing trope of AI as merely a cheating engine. Thank you, Beth McMurtrie.

I particularly liked Abeldt’s story:

Abeldt began building her AI arsenal. She used ChatGPT and watched YouTube videos to break down complex topics. Perplexity, an AI-powered search engine, helped her find information. Google NotebookLM turned class notes and lecture slides into study guides and podcasts, which she listened to on the way to class. Quizlet made flashcards and ChatGPT created mock exams and quizzes. Abeldt even used AI to develop a study plan to avoid procrastinating or spending too much time on any one assignment. She followed its guidance, and it worked.

Abeldt’s outcome? A GPA rebound, and more importantly, a sense of agency restored.

Not all are on board, nor should they be

Everyone — faculty, staff, and students alike — should proceed with caution and not jump into AI without proper guidance and boundaries. As we know, OpenAI, Anthropic, Google, Meta, and the other corporations developing AI have zero interest in protecting humanity as they build these tools. There is no doubt that AI can cause harm; it already has, and we will see examples of harm from irresponsible or naive use with increasing frequency. Many should avoid it altogether, and it is not always clear when it should be used, if at all. Despite these dangers, we are noticing some positive outcomes for those who understand how to use it responsibly. Anecdotes are not data, but my colleagues and I have found that in our higher-level courses and in research with our students, AI can be quite helpful. Even in these contexts, however, it needs to be used responsibly and with caution. (Responsible AI – I will keep pushing this with urgency, no matter what my colleagues think of me.) Some, particularly in the humanities or those with strong ethical stances, opt out entirely, citing concerns about bias, trustworthiness, or the environmental impact of large models. I respect those who take a stand through informed, data-driven decisions.

The article points out that some blame AI for the erosion of peer networks, as classmates default to AI tools for feedback and collaboration, bypassing the rewarding experiences (and challenges) that come from interacting with peers, meeting with TAs, and attending instructors’ office hours. No. Blame that on social media and mobile devices, both of which have been bigger disruptions to regular human contact than anything AI is doing. AI might make it easier to avoid human interaction, but social media has done, and will continue to do, far more damage to human connection than AI ever will.

The fact is that very few campuses are monolithic; this is especially true at primarily undergraduate liberal arts colleges. The use of AI depends on the discipline. Let’s say that again – AI use and attitudes are shaped by discipline, workload, and personal philosophy. Let’s start respecting each other’s autonomy to choose how, or whether, to use it!

Here’s my take (subject to change as fast as AI is evolving):

Everyone should use AI only if they have a full understanding of how it works, and there should be no judgment on individual choices. Each user should understand how the model they are using is trained and on what data, and should understand its strengths and weaknesses, as well as its ethical, societal, moral, and environmental impacts. If you determine that AI is not right for you, your class, or your own learning, then take that stand, and explain to your students, with data, why you are choosing that path so they can make informed decisions of their own. Let’s stop the virtue signaling, guilting, and condemning of others who have chosen to use it.

We all need to figure out what is right for our courses, our departments, and our programs. We need to teach our students to be ethical and responsible users of AI today. Every administration should be crafting a general policy on ethical and responsible AI use for its campus. But understand that there is no one-size-fits-all solution. That makes it challenging.

Disruption

The fact is, as I’ve stated elsewhere, AI is not going anywhere, and choosing to ignore it is not a good way to move forward. It is a colossal disruptor. Like so many technologies over our lifetimes, AI is displacing the norm of “how things are done.” It’s not the first, and it won’t be the last. Consider:

  • Industrial Automation, an endeavor around long before AI, has displaced traditional job markets, increased factory emissions, and disrupted landscapes through resource extraction.
  • Automobiles enabled great economic growth, but led to urban sprawl and traffic fatalities, became formidable contributors to air pollution, and drove substantial resource extraction for oil.
  • Nuclear Technology has such potential, but ask Chernobyl about the risks.
  • Computer and Internet Technologies have enabled nearly every modern disruption today! Want to point fingers at someone for AI? Well, blame this category! It disrupted how we communicate and do our day-to-day work, raised enormous privacy and security concerns, and continues to contribute to the world’s social and digital divides. And the e-waste generated from new phones and tech being released year after year is profound. (It also contributed to enormous labor waste, measured in countless hours of Solitaire played on the company’s dime!)
  • Biotech has delivered remarkable improvements in crop yield, at the cost of long-term health impacts, chemical runoff, loss of biodiversity, water pollution, and destruction of ecosystems.
  • Plastics, whose enormous negative impact on plants, animals, and human health from microplastics we are only now beginning to see.

Disruption seems to be a historical norm. So, what is different about AI?

While many of the above are considered disruptors, AI is just… different. I’m not a philosopher by any stretch of the imagination. But I think way too much, I try my hardest to listen to my colleagues across the college, and I read too much about AI. Here’s my take…

  1. Scope of human displacement – none of the above technologies encroached on the cognitive, creative, and decision-making domains the way AI has. It is infringing on the very things we thought were reserved for humans.
  2. Speed and Scale of change – the above disruptions took time to evolve. The pace at which AI is moving is unprecedented, unpredictable, and driven mainly by large, greedy corporations who don’t give a %)@# how it impacts humans. And it’s happening while we have an administration with no interest in curtailing or regulating how it’s used. This is downright unnerving and incredibly frustrating. As soon as I have something figured out, more capabilities emerge.
  3. Existential questions and human identity – AI has us waking up in the morning wondering what it means to be human in the era of AI.
  4. Trust, security, control, accountability, etc. – AI systems are treated as the proverbial “black boxes”, i.e., algorithms we use blindly with no concern for what’s going on behind the scenes. Who is responsible when AI causes harm, makes a biased decision, or produces unintended consequences?
  5. Privacy and manipulation – OK, this really is part of the previous concern, but it bears repeating. AI can process an incredible amount of data, making it quite easy for “big brother” systems to monitor and surveil us without our knowing. Then there are the “deepfakes”, misinformation, etc… all posing greater risks than any of the above disruptors.
  6. Media – boy, is the mainstream media having a heyday with AI. Fear sells the news! News outlets love it when they can get your attention through fear and manipulation of your emotions. Facts? Pffft… not necessary when I’ve got you emotionally charged. It’s what drives our political coverage on the news too. AI is being portrayed as a formidable, uncontrollable force, conjuring up images of rogue Terminators taking us out. The media fuels these anxieties.

There is a lot of fear and discontent over AI, and rightfully so. It is indeed a disruptor, unlike anything we have seen in quite some time, perhaps in human history. Or is it? Time will tell. Automation, for instance, was long criticized as a source of job displacement, but we eventually realized it would also create jobs.

Balance – the key to everything

Let’s get back to the article at hand. Fortunately, it ends on a rather pragmatic note. The future of student success, the author suggests, lies in balance – something we’re not very good at in academia, especially as we become a more polarized society. Can academics (me included) leverage AI for efficiency and accessibility while doubling down on the things that make us human? Can we emphasize intuition, compassion, innovation, creativity, and nuanced judgment – the things AI cannot replicate? Can we reframe how we judge our students in our final grade assessments, rewarding curiosity and a willingness to challenge themselves and take intellectual deep dives, instead of encouraging them to learn only enough to pass tests?

Remember – AI can only imitate, not innovate. Let’s come together to figure out how to reimagine our mission in higher education – fostering the next generation of critical thinkers, world-changers, and ethical minds. It is essential if we are to maintain relevance.
