Is Computer Science Dead?

There is very little doubt in my mind that AI is here. Despite the plethora of thoughts and feelings of faculty across higher ed, AI is here, with all signs pointing to its continued growth and adoption. Its capabilities are growing at a profound rate, making any effort to pin down the best approach feel futile. It has introduced a level of disruption unlike anything we’ve seen, and nowhere is this more true than in computer science.

I have had quite a few discussions about AI over the past couple of years, often centered on how we are preparing our students. These discussions have involved members of the Bucknell community at every level: prospective students and their parents, current students, our alumni, companies that consider hiring our students, staff on campus, and our administration, including the provost’s office, admissions, and university advancement. As recently as last week, I spoke before our Board of Trustees about how we are working to prepare our students. The one common takeaway from all of these conversations is that we’re all concerned, and we’re all working in our own way to provide the most meaningful path forward for our students.

I have been restructuring my courses to incorporate hands-on experience with AI tools, and I am far from alone in this effort. Many of my colleagues are doing the same. Across all three of our colleges, faculty are working diligently to find meaningful ways to integrate the thoughtful, careful, and responsible use of AI into their courses without encouraging cognitive offloading, while still giving students productive pathways for using these tools effectively. I believe we have a responsibility to equip all students, regardless of major, with strong AI literacy skills so they can thrive in an evolving workforce. However, there is no one uniform approach for all majors. The impact of AI across disciplines is wide and varied, and the rapid evolution of the technology can leave an educator feeling hopeless. It seems like the moment we convince ourselves we have solid ideas for moving forward, they become irrelevant as new capabilities arise. Indeed, these are exciting times!

Computer science is being hit hard, but we have survived downturns repeatedly in the past, and we will survive this one. However, this downturn is fundamentally different. The core tools of our trade are no longer relevant! It’s time for computer scientists and the discipline itself to evolve. We must recognize that we are in the midst of a new industrial revolution in our field. Everything we’ve been doing up to this point needs substantial renovation and reimagination. And, let’s agree that the field of Artificial Intelligence itself ought not to be considered the domain of computer science alone. Sure, it rose in popularity largely because of substantial advances in computer science, computer engineering, and electrical engineering (all brought to you by our two lovely parents, math and physics!). Today, AI is not just a computer science or STEM initiative; it is a campuswide responsibility, one where the humanities and social sciences must play an essential role. How could it not be? How can we teach artificial intelligence without having students also learn about the very system these computational algorithms are designed to replicate, i.e., human intelligence? It’d seem prudent that, at a minimum, any core AI curricular endeavor include content that delves into human cognition.

In this new landscape, the urgent task of computer science education is not to outcompete AI at writing code. I cannot think of a more futile task. Writing code is no longer the exclusive marker of expertise and computational prowess it once was. (And for those like me who grew up thriving as teenage nerds, writing our own games and programs to solve our math problems in high school – just because we could – can I just say… damn, that sucks! I loved coding!) AI tools today can perform much of that work far more quickly, effectively, and efficiently than any programmer could.

So, is computer science dead? Absolutely not! We need to evolve, and those who refuse to evolve risk becoming irrelevant. We must cultivate and integrate the human capacities that AI cannot replace. Our students must learn to think critically and independently, to define and understand meaningful problems, and to evaluate AI-generated solutions for correctness, complexity, efficiency, and fitness to real-world constraints. They need experience working with people, assessing real-world problems through client communication, and defining their problems with solid goals and constraints that translate to plans that can easily be carried out in tandem with AI. They need to reason about consequences when systems fail or behave unpredictably, because AI, like humans, will never produce perfect solutions, especially as long as humans are orchestrating them. They must be able to communicate clearly with both technical and nontechnical audiences, design for people by optimizing for user experience and considering accessibility needs, and collaborate across disciplines to understand the social and organizational contexts in which their systems will operate. And ultimately, they must develop the ethical judgment to decide when and how a system should be built, deployed, or rejected altogether. These are not peripheral skills; they are the core of what it means to be a computing professional in an age where AI can generate code but cannot assume responsibility for its impact.

Computer Science Skills in the AI Era

It’s worth repeating. Let’s summarize some of the skills we should be weaving into our courses throughout the CS curriculum. To be honest, I do not see anything new in my list. In fact, one could argue that AI is finally giving us the push to emphasize what computer science should have been emphasizing all along:

Critical thinking and independent reasoning – evaluating AI outputs rather than accepting them uncritically

Problem definition and framing – identifying what question is actually worth solving

Solution evaluation – assessing correctness, algorithmic complexity, efficiency, and real-world fitness

Consequence reasoning – anticipating failure modes, unintended behavior, and downstream effects

Communication – speaking and writing clearly for both technical and nontechnical audiences

Collaboration and teamwork – leading and contributing in interdisciplinary groups

UX and human-centered design – designing for real users, not just functional outputs

Accessibility and inclusion – building systems that serve diverse populations from the start

Debugging and system sense-making – diagnosing complex and AI-assisted systems rather than treating outputs as oracles

Data literacy – understanding data quality, bias, and the limits of models trained on it

Metacognition – knowing when AI is likely wrong and when to slow down, step back, seek other perspectives, and reconsider your plan

Adaptability and lifelong learning – keeping pace with rapidly evolving tools and ecosystems

Domain reasoning – applying computing thoughtfully within specific real-world fields and constraints

Ethical judgment – reasoning about fairness, responsibility, privacy, power, and accountability

Inspiring AI Literacy Initiatives

I’ll close with a handful of examples I’ve been collecting of how colleges and universities around the country are embracing AI with a liberal arts mindset:

  • Why AI is ‘resurrecting’ the liberal arts for the Class of 2026: At Wake Forest, educators are observing that the rapid evolution of workplace technologies is actually increasing the value of traditional liberal arts strengths. Rather than making humanistic study obsolete, AI platforms require graduates who can critically evaluate outputs, communicate effectively, and apply broad context to complex problems.
  • Building the Future: Why Teaching AI to Liberal Arts Students Is Critical Work: Dartmouth’s Tuck School of Business emphasizes that liberal arts students possess the exact intellectual framework necessary for an AI-augmented workforce. Their initiative demonstrates that non-STEM majors can excel in tech-driven environments when taught to leverage their natural abilities to ask probing questions and maintain ethical oversight.
  • Social scientists embrace the AI moment: Stanford researchers are highlighting how AI is fundamentally transforming empirical workflows and data analysis within the social sciences. By integrating AI into these fields, students learn to navigate new research methodologies while applying human judgment to automated text analysis and summarization tools.
  • Generative AI is raising new questions for liberal arts education: The University of Richmond is actively confronting the ethical and pedagogical challenges brought by generative models by fostering cross-disciplinary discussions. They are creating practical campus resources that help faculty and students collaborate to critically engage with AI tools across all departments.
  • Breaking Faculty Barriers to AI Literacy: The Digital Education Council provides a measured look at the widespread institutional struggle to move from mere interest in AI to actual classroom implementation. They argue that addressing faculty hesitation requires dedicated support structures, clear incentives, and practical guidance rather than just top-down mandates.
