Talk – Prompting AI – July 15, 2025

I gave a talk for faculty and staff on how to prompt AI to get better results. We covered the background of AI, explained LLMs and the data they are trained on, and discussed the increasingly common caveats and societal concerns around blindly using chatbots such as ChatGPT and Claude. We then covered the essentials of prompt engineering, with beginner and advanced tips.

The talk focused mostly on summarizing, investigating, and brainstorming tasks, not on automating mundane day-to-day work such as e-mail handling. (That will be a future talk!)

Key slides related to the prompting part of the talk are linked below:

Slides – Prompting AI for Faculty/Staff – July 15, 2025

These materials are part of workshops and talks I give as part of my role as Faculty Fellow of the Dominguez Center for Data Science.

Prompt Engineering, Presented to Faculty/Staff at Bucknell University, July 15, 2025. ©2025 by Brian R. King, licensed under CC BY-NC-SA 4.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/4.0/

AI coding tools make developers slower but they think they’re faster, study finds

https://www.theregister.com/2025/07/11/ai_code_tools_slow_down/

Download the full paper from metr.org

I want this paper to join the regular stream of AI articles appearing in everybody's news feed, especially those of the CEOs of Anthropic, OpenAI, Meta, Google, Microsoft, and every other company out there banking on being able to downsize their workforces someday on the promise of AI hype. As is so often the case in the tech industry, the CEOs are out of touch with reality.

The setup

The study was a randomized controlled trial (RCT) involving 16 experienced open-source developers, each with an average of 5 years of prior experience on mature projects. The developers completed 246 tasks – GitHub issues, to be more specific – with each task randomly assigned to either allow or disallow the use of AI tools. When AI tools were allowed, developers primarily used Cursor Pro configured with the Claude 3.5/3.7 Sonnet LLMs. The study included onboarding and training, required developers to record their screens while working, and collected both quantitative and qualitative data through surveys, interviews, and screen recordings. The design aimed to mirror real-world development closely.

The results

Developers were quite confident in AI before the study. On average, they thought they would shave 24% off their development time. In contrast, they ended up increasing their development time by 19%!
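
To make those percentages concrete, here is a minimal sketch in Python. Only the 24% forecast and the 19% slowdown come from the study; the 10-hour baseline task is purely hypothetical.

    # Hypothetical illustration of the study's headline numbers.
    # Only the 24% forecast and the 19% slowdown come from the paper;
    # the 10-hour baseline is made up for the example.
    baseline_hours = 10.0                            # time without AI (hypothetical)
    predicted_with_ai = baseline_hours * (1 - 0.24)  # forecast: 24% faster -> 7.6 hours
    actual_with_ai = baseline_hours * (1 + 0.19)     # measured: 19% slower -> 11.9 hours

    print(f"Predicted with AI: {predicted_with_ai:.1f} h")
    print(f"Actual with AI:    {actual_with_ai:.1f} h")
    print(f"Forecast missed by {actual_with_ai - predicted_with_ai:.1f} h")

In other words, on that hypothetical task the work took roughly 57% longer than the developers' own forecast (1.19 / 0.76 ≈ 1.57).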

The authors considered a number of factors that could have contributed to the slowdown. With such a small sample, it is hard to find many strong predictors. However, a few reached significance and were quite telling:

  • Over-optimism about AI usefulness – On average, the developers predicted their implementation time would decrease by 24%. Instead, it increased by 19%.
  • High developer familiarity with repositories – This confused me at first, but as a long-time developer, it made sense. Developers who are intimately familiar with their repositories should trust themselves to work on their own issues rather than trust AI. AI will be hard-pressed to match human expertise on a project where the developer is already the expert.
  • Large and complex repositories – This should come as no surprise. Some of the repositories with the worst outcomes had over 1,000,000 lines of code. No human or AI is going to easily make sense of that, especially when the repo has been around a long time and is riddled with legacy, outdated, poorly maintained code. What makes people think AI is somehow going to do a better job of fixing crap code? It can help a developer refactor that code up to a higher standard, and only then will it have a better chance of helping to improve and fix it.
  • Low AI reliability – Most developers (myself included) regularly deal with AI generating useless code. But it's so confident about its useless output! So convincing! As a result, devs often erroneously just accept it. Unfortunately, we eventually get into a loop: we accept buggy code, ask the AI to fix its own bug, and it introduces another one. Sometimes it will even re-introduce the bug it originally fixed. It's a vicious cycle.

I would strongly advise any company considering downsizing its developer workforce to carefully reconsider its plans. AI has its strengths. The developer who blindly uses it with no real knowledge of software engineering is an incredibly dangerous developer who should be avoided. However, a developer who knows AI's strengths and weaknesses, who knows when to trust it, when to avoid it, and how to control and leverage it carefully in their workflow, has tremendous promise. Those developers will be in sought-after positions for quite some time.

The Entire Internet is Reverting to Beta – The Atlantic

https://www.theatlantic.com/technology/archive/2025/06/ai-janky-web/683228/

Matteo Wong has a fantastic write-up that takes a somewhat different angle on the societal impact of generative AI. Instead of yet another article on the dumbing down of humanity, Wong alerts us to our increased tolerance of the errors, mistakes, hallucinations, and generally shoddy solutions that AI is generating. For some reason, though we have little tolerance for other humans making mistakes, we're seemingly OK with AI making them, deeming it all acceptable.

The opening paragraph says it all: 

A car that accelerates instead of braking every once in a while is not ready for the road. A faucet that occasionally spits out boiling water instead of cold does not belong in your home. Working properly most of the time simply isn’t good enough for technologies that people are heavily reliant upon. And two and a half years after the launch of ChatGPT, generative AI is becoming such a technology.

His final take – AI is in a dangerous zone: “They may not be wrong frequently enough to be jettisoned, but they also may not be wrong rarely enough to ever be fully trusted.”

The concern is something we're already seeing among people using generative AI chatbots – we're becoming “beta” ourselves, growing accustomed to mediocrity and accepting it. When it makes mistakes, we just say, “Oh yeah, it hallucinates. Oh well.”

How Do You Teach Computer Science in the A.I. Era? – NYTimes

https://www.nytimes.com/2025/06/30/technology/computer-science-education-ai.html

Key takeaways for the CS professor:

Computer Science as a field of study has been shaken to the core. It's not that it's no longer relevant. In fact, it's probably more relevant now than ever – as long as you're willing to broaden and reconsider what it even means to be a computer scientist in the AI era. The best schools out there, such as Carnegie Mellon, are reevaluating their curricula in response to generative AI tools like ChatGPT. This is something we're actively in the midst of doing here at Bucknell. I can't imagine any computer science program today remaining relevant unless it considers a massive overhaul.

The fact is that AI is rapidly transforming how computer science is taught, especially coding. Traditional programming should no longer be considered a primary objective in any CS curriculum. We are in the midst of transforming our curricula to emphasize broader topics such as computational thinking and AI literacy.

Ideas to consider:

  • Computer science may need to evolve toward a liberal arts-style education. We’ll need to consider more interdisciplinary and hybrid courses that integrate computing and computational thinking into other disciplines.
  • More courses and experiential learning opportunities that focus on critical thinking, ethical use of AI, and communication skills. I would also argue this is the prime time for computer science programs to finally put heavy emphasis on user experience (UX) – something that AI is horrible at. Any aspect of our field that focuses on the human side of our product is essential. UX, human-computer interaction (HCI), user interface design, teamwork, project management, communication and presentation skills, data visualization, and so on need to be incorporated into more courses. We no longer have the excuse that there is no space in our programs to cover these essential skills.
  • Early computer science courses still need to stress computational thinking. AI will get 90% of the job completed, but it will continue to struggle with the most complex pieces of large-scale projects; the more complex the project, the more it will struggle. Unfortunately, students often have a false sense of security and confidence. They blindly use AI with no knowledge of how to fix the problems they find, or, even worse, they lack the knowledge to properly debug and test systems for correctness in the first place, operating under the dangerous assumption that the AI-generated code is correct.

Career Impacts

I think we can all agree that AI seems to be eliminating some entry-level coding work, though I am struggling to get any real numbers on how much of that is AI versus broader economic factors. The job market has tightened – entry-level roles are fewer, and students are applying more broadly. But here's the more complete story, one this NYTimes article arrives at as well. I've been telling prospective and current computer science students that the news narrative is not telling the whole story. Based on where our students are still getting jobs today, AI is taking the jobs of those who do not know how to leverage AI. That's a pretty clear, widely accepted reality in our field. Here's the kicker that is rarely reported (because, again, the news cycle thrives on negativity) – despite the layoffs, demand for AI-assisted software is growing! If you have AI literacy combined with strong, demonstrable critical-thinking skills; if you can share experiences in and (preferably) outside the classroom that show you know how to orchestrate solutions to large-scale projects; if you can work well on teams, communicate, and present results; and, for goodness sake, if you understand human-centered design and how to measure and maximize UX, then you will remain in a highly sought-after field!

My final thought from this article – It’s pretty clear that AI has not only democratized cheating, but with respect to computer science, it has democratized programming:

“The growth in software engineering jobs may decline, but the total number of people involved in programming will increase,” said Alex Aiken, a professor of computer science at Stanford.

More non-tech workers will build software using AI tools. It’s a slippery, dangerous slope. Why?

“But they didn’t understand half of what the code was,” he said, leading many to realize the value of knowing how to write and debug code themselves. “The students are resetting.”

That’s true for many computer science students embracing the new A.I. tools, with some reservations. They say they use A.I. for building initial prototype programs, for checking for errors in code and as a digital tutor to answer questions. But they are reluctant to rely on it too much, fearing it dulls their computing acumen.

Indeed, this is the reality check that we’re seeing in our own students here. They notice that AI is not always correct, even when solving simpler undergraduate computer science exercises. Conclusion: students still need to understand the fundamentals of computer science to fix the convoluted sequence of tokens from the LLMs. The output generated will appear to be a solid solution at first, looking like well-written Python, Java, C, or whatever language you’re working in, even properly commented if you prompted it correctly. And heck, the chatbot will sound extraordinarily confident and cheeky about its solution, pleased to have served you! But… it’s still just a stochastic sequence of tokens, subject to error.
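
Purely as an illustration (this snippet is mine, not taken from the article or from any particular model), here is the flavor of what I mean: clean, commented, confident-looking Python that survives a casual glance but fails a basic check.

    # Hypothetical example of plausible-looking but subtly wrong generated code.
    def median(values):
        """Return the median of a non-empty list of numbers."""
        ordered = sorted(values)
        mid = len(ordered) // 2
        # Subtle bug: for even-length input this returns the upper-middle
        # element instead of averaging the two middle elements.
        return ordered[mid]

    print(median([1, 2, 3, 4]))  # prints 3, but the median is 2.5

A student who knows the fundamentals and checks the result against a known answer catches this in seconds; a student who assumes the generated code is correct ships it.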

Not all that different than human solutions.