Is Computer Science Dead?

There is very little doubt in my mind that AI is here. Whatever the range of thoughts and feelings among faculty across higher ed, AI is here, with all signs pointing to its continued growth and adoption. Its capabilities are growing at an absolutely profound rate, making any effort to identify the best approach seem futile at best. It has introduced a level of disruption unlike anything we’ve seen, and nowhere is this more true than in computer science.

I have had quite a few discussions over the past couple of years about AI. The topic is often around how we are preparing our students. The discussions have involved every direct and indirect member of the Bucknell community, including prospective students and their parents, current students, our alumni, companies that consider hiring our students, staff on campus, and members of our administration at all levels, including the provost’s office, admissions, and university advancement. I also spoke before our Board of Trustees, as recently as last week, about how we are working to prepare our students. The only common takeaway from all of these conversations is that we’re all concerned, and we’re all working in our own way to provide the most meaningful path forward for our students.

I have been restructuring my courses to incorporate hands-on experience with AI tools, and I am far from alone in this effort. Many of my colleagues are doing the same. Across all three of our colleges, faculty are working diligently to find meaningful ways to integrate the thoughtful, careful, and responsible use of AI into their courses without encouraging cognitive offloading, while still giving students productive pathways for using these tools effectively. I believe we have a responsibility to equip all students, regardless of major, with strong AI literacy skills so they can thrive in an evolving workforce. However, there is no one uniform approach for all majors. The impact of AI across disciplines is wide and varied. The rapid evolution of AI technology sometimes leaves those of us in education feeling hopeless. It seems like the moment we convince ourselves we may have solid ideas moving forward, they quickly become irrelevant as new capabilities arise. Indeed, these are exciting times!

Computer science is being hit hard, but we have survived downturns repeatedly in the past, and we will survive this. However, this downturn is fundamentally different. The core tools of our trade are no longer relevant! It’s time for computer scientists and the discipline itself to evolve. We must recognize that we are in the midst of a new industrial revolution in our field. Everything we’ve been doing up to this point needs substantial renovation and reimagination. And, let’s agree that the field of Artificial Intelligence itself ought not to be considered only the domain of computer science. Sure, it rose in popularity largely because of substantial advances in computer science, computer engineering, and electrical engineering (all brought to you by our two lovely parents, math and physics!). Today, AI is not just a computer science or STEM initiative; it is a campuswide responsibility, one where the humanities and social sciences must play an essential role. How can it not be? How can we teach artificial intelligence without having students also learn about the very systems these computational algorithms are designed to replicate, i.e., human intelligence? It’d seem prudent that, at a minimum, any core AI curricular endeavor ought to include content that delves into human cognition.

In this new landscape, the urgent task of computer science education is not to outcompete AI at writing code. I cannot think of a more futile task. Writing code is no longer the exclusive marker of expertise and computational prowess it once was. (And for those like me who grew up thriving as teenage nerds writing our own games and programs to solve our math problems in high school – just because we could – can I just say… damn, that sucks! I loved coding!) AI tools today can perform much of that work far more quickly, effectively, and efficiently than any programmer could.

So, is computer science dead? Absolutely not! We need to evolve, and those who refuse to evolve risk becoming irrelevant. We must cultivate and integrate the human capacities that AI cannot replace. Our students must learn to think critically and independently, to define and understand meaningful problems, and to evaluate AI-generated solutions for correctness, complexity, efficiency, and fitness to real-world constraints. They need experience working with people, assessing real-world problems through client communication, and defining their problems with solid goals and constraints that translate to plans that can easily be carried out in tandem with AI. They need to reason about consequences when systems fail or behave unpredictably, because AI, like humans, will never produce perfect solutions, especially as long as humans are orchestrating them. They must be able to communicate clearly with both technical and nontechnical audiences, design for people by optimizing for user experience and considering accessibility needs, and collaborate across disciplines to understand the social and organizational contexts in which their systems will operate. And ultimately, they must develop the ethical judgment to decide when and how a system should be built, deployed, or rejected altogether. These are not peripheral skills; they are the core of what it means to be a computing professional in an age where AI can generate code but cannot assume responsibility for its impact.

Computer Science Skills in the AI Era

These points are worth repeating, so let me summarize some of the skills we want to inject into our courses throughout the CS curriculum. To be honest, I do not see anything new in my list. In fact, one could argue that AI is finally pushing us to emphasize the things that computer science should have been emphasizing all along:

Critical thinking and independent reasoning – evaluating AI outputs rather than accepting them uncritically

Problem definition and framing – identifying what question is actually worth solving

Solution evaluation – assessing correctness, algorithmic complexity, efficiency, and real-world fitness

Consequence reasoning – anticipating failure modes, unintended behavior, and downstream effects

Communication – speaking and writing clearly for both technical and nontechnical audiences

Collaboration and teamwork – leading and contributing in interdisciplinary groups

UX and human-centered design – designing for real users, not just functional outputs

Accessibility and inclusion – building systems that serve diverse populations from the start

Debugging and system sense-making – diagnosing complex and AI-assisted systems rather than treating outputs as oracles

Data literacy – understanding data quality, bias, and the limits of models trained on it

Metacognition – knowing when AI is likely wrong and when to slow down, step back, seek other perspectives, and reconsider your plan

Adaptability and lifelong learning – keeping pace with rapidly evolving tools and ecosystems

Domain reasoning – applying computing thoughtfully within specific real-world fields and constraints

Ethical judgment – reasoning about fairness, responsibility, privacy, power, and accountability

Inspiring AI Literacy Initiatives

I’ll close with a handful of examples I’ve been collecting of how colleges and universities around the country are embracing AI with a liberal arts mindset:

  • Why AI is ‘resurrecting’ the liberal arts for the Class of 2026: At Wake Forest, educators are observing that the rapid evolution of workplace technologies is actually increasing the value of traditional liberal arts strengths. Rather than making humanistic study obsolete, AI platforms require graduates who can critically evaluate outputs, communicate effectively, and apply broad context to complex problems.
  • Building the Future: Why Teaching AI to Liberal Arts Students Is Critical Work: Dartmouth’s Tuck School of Business emphasizes that liberal arts students possess the exact intellectual framework necessary for an AI-augmented workforce. Their initiative demonstrates that non-STEM majors can excel in tech-driven environments when taught to leverage their natural abilities to ask probing questions and maintain ethical oversight.
  • Social scientists embrace the AI moment: Stanford researchers are highlighting how AI is fundamentally transforming empirical workflows and data analysis within the social sciences. By integrating AI into these fields, students learn to navigate new research methodologies while applying human judgment to automated text analysis and summarization tools.
  • Generative AI is raising new questions for liberal arts education: The University of Richmond is actively confronting the ethical and pedagogical challenges brought by generative models by fostering cross-disciplinary discussions. They are creating practical campus resources that help faculty and students collaborate to critically engage with AI tools across all departments.
  • Breaking Faculty Barriers to AI Literacy: The Digital Education Council provides a measured look at the widespread institutional struggle to move from mere interest in AI to actual classroom implementation. They argue that addressing faculty hesitation requires dedicated support structures, clear incentives, and practical guidance rather than just top-down mandates.

AI Agents Should Not Be Used as Customer Service

We’re hearing that multiple companies are striving to optimize and streamline services across their businesses with AI, buying into the promises AI companies themselves are touting. However, this is all too new to really understand best practices for AI deployment. Beyond software development, we have very little evidence to justify the rate of adoption we’re seeing. Sadly, so many are jumping on the AI bandwagon out of FOMO – fear of missing out. They are not taking the time to understand how the blind deployment of these agents in the wrong way may cause long-lasting damage, much of which will be hard to recover from. It’s a misguided optimization that ignores the customer.

Here’s the latest example – deployment of AI agents as the front line for customer service.

Quite a sobering read, really. It’s a pathetic, absurd move on the part of these businesses to think that deploying AI agents as customer service is going to win over customers. But, don’t blame AI! Blame the AI vendors:

The widespread adoption of obstructive AI is not just a corporate decision; it is fueled by a booming industry of AI vendors that market their platforms with promises of massive cost savings. An analysis of marketing materials from leading vendors like Talkdesk, Intercom, Zendesk, and Five9 reveals that the primary value proposition is not better service, but radical efficiency achieved by reducing human involvement.

As an AI/ML researcher, let me explain what’s happening. AI models are algorithms that learn from data, and they can be trained to optimize for whatever problem the AI engineer is tasked with solving. In machine learning, we frame learning as an objective function – a simple, measurable quantity that we want the model to learn to minimize or maximize. It’s not clear what these companies are optimizing for; any number of metrics are possible here:

  • Minimize escalation rate – reward the model for closed tickets that do not involve humans
  • Minimize average handling time, regardless of whether the customer is satisfied
  • Minimize refund approval rate – this is pretty diabolical – let the model learn patterns in dialogue that fatigue the customer the most and lead to them not pursuing any refund!
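To make the idea concrete, here is a toy sketch (my own illustration, not any vendor’s actual code; all field names and weights are hypothetical) of how a purely company-side objective like the metrics above might be scored. Notice that nothing in the function measures whether the customer’s problem was actually solved:

```python
from dataclasses import dataclass

# Hypothetical record of a finished support conversation.
# Field names are illustrative, not from any real vendor API.
@dataclass
class Ticket:
    escalated_to_human: bool   # did a human agent ever get involved?
    handling_time_min: float   # total time the customer spent
    refund_approved: bool      # did the customer get a refund?

def company_side_reward(t: Ticket) -> float:
    """A deliberately one-sided objective: reward avoiding humans,
    short handling times, and denied refunds. Customer satisfaction
    appears nowhere in this function."""
    reward = 0.0
    reward += 0.0 if t.escalated_to_human else 1.0   # minimize escalation rate
    reward -= 0.01 * t.handling_time_min             # minimize handling time
    reward += 0.0 if t.refund_approved else 1.0      # minimize refund approvals
    return reward

# An agent that stonewalls for 45 minutes (no human, no refund) scores
# higher than one that escalates quickly and resolves the issue.
stonewall = Ticket(escalated_to_human=False, handling_time_min=45, refund_approved=False)
resolved  = Ticket(escalated_to_human=True,  handling_time_min=5,  refund_approved=True)
```

Under this reward, the stonewalling conversation earns 1.55 while the quickly resolved one earns -0.05, so a model trained against it learns exactly the wrong behavior.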

This is not the fault of AI, but represents a deliberate design failure. AI companies will train models with whatever objectives they choose. They certainly have no reason to do otherwise. And when you fine-tune an AI agent to minimize escalations to humans, it will learn all sorts of delay tactics: repeating responses, asking for clarification, requesting documents already submitted, offering irrelevant options, and so on. There are many ways it can learn to fatigue a human!

A well-designed AI agent should be optimizing for customer-side outcomes. Why aren’t these companies optimizing for this value proposition? (I smell greed!) Maximizing customer resolution rates and minimizing the number of downstream complaints filed are just two examples. The moment the company’s cost function and the customer’s satisfaction function are treated as the same optimization target, the entire problem changes, and these AI agents would learn to improve their dialogues in ways that leave both the customer and the company happy. But that’s not where companies are right now. Businesses are blindly jumping on the AI bandwagon.
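A minimal sketch of what such a blended objective might look like (again my own illustration, with arbitrary placeholder weights and hypothetical inputs): weight a company cost term against customer-side outcomes such as a post-chat satisfaction score and whether a downstream complaint was filed.

```python
def blended_reward(handling_time_min: float, csat: float,
                   complaint_filed: bool,
                   w_cost: float = 0.3, w_customer: float = 0.7) -> float:
    """Toy objective that treats company cost and customer satisfaction
    as one optimization target. `csat` is a post-chat satisfaction
    score in [0, 1]; the weights are arbitrary placeholders that a real
    deployment would have to tune and validate."""
    cost_term = -0.01 * handling_time_min            # being cheap to serve still matters
    customer_term = csat - (1.0 if complaint_filed else 0.0)  # penalize complaints
    return w_cost * cost_term + w_customer * customer_term

# A fast, satisfying resolution now beats a long, fatiguing stonewall
# that ends in a formal complaint.
good = blended_reward(handling_time_min=5,  csat=0.9, complaint_filed=False)
bad  = blended_reward(handling_time_min=45, csat=0.1, complaint_filed=True)
```

The design point is simply that once customer outcomes carry real weight in the objective, the delay tactics described above become losing strategies for the model.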

Finally, it’s worth reminding all of us that when we’re engaging with AI models, we’re engaging with models that learned how to respond with streams of tokens, i.e., sequences of words chosen because they are the most likely responses under the model’s objective function. There is absolutely no affective intelligence – the capacity to perceive, interpret, and respond appropriately to human emotional states. You and I do this automatically. We’re pretty awesome that way. We can detect frustration in word choices. Hesitations and pauses are meaningful. The phrase, “your silence is deafening,” would be meaningless to an AI agent. We can hear distress in tone. We can adapt and adjust in real time (or not!). Even the most sophisticated models today can, at best, predict a particular emotional state from selected words, but they can’t infer the meaning of that emotion in context. While this is an ongoing area of research, tonality in the human voice is so wide and varied that it’s indeed a challenging problem for AI. None of these deployed agents are designed to properly channel empathy, moral intuition, compassion, dignity, or respect. Of course, a bigger question is, would we even want that? Personally, I would not. A compassionate AI agent is still just an algorithm.

I can’t help but wonder how long it will be before this practice reflects negatively on companies that deploy AI agents as their first-pass interface for their customer base. As an AI researcher who teaches students how to build these models, I find this (among a variety of other misguided uses of AI today) quite unfortunate; these choices only deepen the public’s distrust and growing weariness of AI. 

So, here’s your PSA: Recent surveys and research show that humans have a strong preference not to interact with AI agents. (And, if you’ve been subjected to interacting with an AI agent when you called for customer service, you get it!) For example, one recent online survey of 1,011 U.S. consumers found that 93.4% prefer interacting with a human over an AI agent. Another study suggested 70% of customers will abandon your brand after just one poor AI experience. It makes sense – if a company thinks so little of its customers that it is willing to make them talk to computers instead of humans, people will not remain loyal.

My most recent personal interaction with an AI agent is partly what drove this post. I had to place a call to my local healthcare provider recently. Even in healthcare, front-desk receptionists are being replaced with AI agents. The whole experience left me wanting to search for an alternative provider. I ultimately ended up needing a human agent anyway, who could have simply resolved my question in a small fraction of the time it took with the AI agent.

Let’s hope some of these companies learn to put humans back as the primary interface for the company and keep AI agents focused on what they are good at. If they don’t, I see a big opportunity for smaller, local businesses to enjoy a resurgence in many domains. Humans do not want to interact with AI! Why is this hard for businesses to understand? (Greed blinds common sense.)

The big Gartner report last month gives me some hope – it predicts that half of the companies that laid off customer service staff will be forced to rehire by 2027.

AI is a tool to assist humans in their work. It should not replace humans, especially in human-facing roles. Companies that choose to use it this way will face quite a backlash.