How Do You Teach Computer Science in the A.I. Era? – NYTimes

https://www.nytimes.com/2025/06/30/technology/computer-science-education-ai.html

Key takeaways for the CS professor:

Computer Science as a field of study has been shaken to the core. It’s not that it’s no longer relevant. In fact, it’s probably more relevant now than ever – as long as you’re willing to broaden and reconsider what it even means to be a computer scientist in the AI era. The best schools out there, such as Carnegie Mellon, are reevaluating their curricula in response to generative AI tools like ChatGPT. We’re actively doing the same here at Bucknell. I can’t imagine any computer science program today remaining relevant unless it undertakes a massive overhaul.

The fact is that AI is rapidly changing how computer science is taught, especially coding. Traditional programming should no longer be considered the primary objective of a CS curriculum. We are in the midst of reshaping our curricula around broader topics such as computational thinking and AI literacy.

Ideas to consider:

  • Computer science may need to evolve toward a liberal arts-style education. We’ll need to consider more interdisciplinary and hybrid courses that integrate computing and computational thinking into other disciplines.
  • More courses and experiential learning opportunities that focus on critical thinking, the ethical use of AI, and communication skills. I would also argue this is the prime time for computer science programs to finally put heavy emphasis on user experience – something AI is horrible at. Any aspect of our field that focuses on the human side of our products is essential. UX, human-computer interaction (HCI), user interface design, teamwork, project management, communication and presentation skills, data visualization, and so on need to be incorporated into more courses. We no longer have the excuse that there isn’t space in our programs to cover these essential skills.
  • Early computer science courses still need to stress computational thinking. AI will get 90% of the job done, but it will continue to struggle with the most complex pieces of large-scale projects; the more complex the project, the more it struggles. Unfortunately, students often have a false sense of security and confidence. They blindly use AI with no idea how to fix the problems they find, or worse, they never learn how to properly debug and test a system for correctness in the first place, operating under the dangerous assumption that AI-generated code is correct (see the sketch after this list).
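
To make that last bullet concrete, here is a minimal sketch of what “test before you trust” can look like. It is a hypothetical example of my own, not from the article: the unit tests are written against the specification first, and the AI-generated implementation (here, a merge-overlapping-intervals function I made up for illustration) is only accepted if it passes them.

    # Hypothetical example: unit tests written *before* accepting an
    # AI-generated implementation of a simple spec ("merge overlapping
    # intervals"). The tests, not the chatbot's confidence, decide
    # whether the code is accepted.
    import unittest

    def merge_intervals(intervals):
        """Candidate implementation (imagine it came from an AI assistant)."""
        if not intervals:
            return []
        intervals = sorted(intervals)            # sort by start point
        merged = [list(intervals[0])]
        for start, end in intervals[1:]:
            if start <= merged[-1][1]:           # overlaps or touches the previous interval
                merged[-1][1] = max(merged[-1][1], end)
            else:
                merged.append([start, end])
        return [tuple(iv) for iv in merged]

    class TestMergeIntervals(unittest.TestCase):
        def test_empty(self):
            self.assertEqual(merge_intervals([]), [])

        def test_disjoint(self):
            self.assertEqual(merge_intervals([(1, 2), (4, 5)]), [(1, 2), (4, 5)])

        def test_containment(self):
            # A contained interval must not extend the merged range.
            self.assertEqual(merge_intervals([(1, 10), (2, 3), (9, 12)]), [(1, 12)])

    if __name__ == "__main__":
        unittest.main()

If the model’s version fails a case like the containment test, the student gets an immediate signal that something is wrong and a concrete starting point for debugging, which is exactly the skill set the bullet above is arguing for.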

Career Impacts

I think we can all agree that AI seems to be eliminating some entry-level coding work, though I am struggling to find any real numbers on how much of that is AI versus broader economic factors. The job market has tightened: entry-level roles are fewer, and students are applying more broadly. But that is not the whole story, and the NYTimes article reaches the same conclusion. I’ve been telling prospective and current computer science students that the news narrative leaves a lot out. Based on where our students are still getting jobs today, AI is taking the jobs of those who do not know how to leverage AI. That’s a pretty clear, widely accepted reality in our field. Here’s the kicker that is rarely reported (because the news cycle thrives on negativity): despite the layoffs, demand for AI-assisted software is growing! If you have AI literacy combined with strong, demonstrable critical-thinking skills, if you can share experiences in and (preferably) outside the classroom that show you know how to orchestrate solutions to large-scale projects, if you can work well on a team and communicate and present your results, and, for goodness sake, if you understand human-centered design and how to measure and maximize UX, then you will remain in a highly sought-after field!

My final thought from this article: it’s pretty clear that AI has not only democratized cheating but, with respect to computer science, has also democratized programming:

“The growth in software engineering jobs may decline, but the total number of people involved in programming will increase,” said Alex Aiken, a professor of computer science at Stanford.

More non-tech workers will build software using AI tools. It’s a slippery, dangerous slope. Why?

“But they didn’t understand half of what the code was,” he said, leading many to realize the value of knowing how to write and debug code themselves. “The students are resetting.”

That’s true for many computer science students embracing the new A.I. tools, with some reservations. They say they use A.I. for building initial prototype programs, for checking for errors in code and as a digital tutor to answer questions. But they are reluctant to rely on it too much, fearing it dulls their computing acumen.

Indeed, this is the reality check we’re seeing with our own students here. They notice that AI is not always correct, even on simpler undergraduate computer science exercises. Conclusion: students still need to understand the fundamentals of computer science to fix the convoluted sequences of tokens coming out of the LLMs. The generated output will appear to be a solid solution at first, looking like well-written Python, Java, C, or whatever language you’re working in, even properly commented if you prompted it correctly. And heck, the chatbot will sound extraordinarily confident and cheeky about its solution, pleased to have served you! But… it’s still just a stochastic sequence of tokens, subject to error.
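
As a hypothetical illustration of my own (not from the article) of just how plausible that output can look, here is the kind of tidy, well-commented Python a chatbot happily returns. It reads like a textbook binary search, but the loop condition is subtly wrong:

    def binary_search(items, target):
        """Return the index of target in the sorted list items, or -1 if absent."""
        lo, hi = 0, len(items) - 1
        while lo < hi:                      # BUG: should be lo <= hi
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    # A thirty-second sanity check exposes it:
    print(binary_search([1, 3, 5], 5))      # prints -1, even though 5 is at index 2

Spotting and fixing that kind of off-by-one error still takes exactly the fundamentals we teach in the first year.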

Not all that different than human solutions.