Matteo Wong has a fantastic write-up offering a bit of a different take on the societal impact of generative AI. Instead of yet another article on the dumbing down of humanity, Wong alerts us to our increased tolerance of the errors, mistakes, hallucinations, and generally shoddy solutions that AI is generating. For some reason, though we have little tolerance for other humans making mistakes, we’re seemingly OK with AI making them, deeming it all acceptable.
The opening paragraph says it all:
A car that accelerates instead of braking every once in a while is not ready for the road. A faucet that occasionally spits out boiling water instead of cold does not belong in your home. Working properly most of the time simply isn’t good enough for technologies that people are heavily reliant upon. And two and a half years after the launch of ChatGPT, generative AI is becoming such a technology.
His final take – AI is in a dangerous zone: “They may not be wrong frequently enough to be jettisoned, but they also may not be wrong rarely enough to ever be fully trusted.”
The concern here is something we’re already seeing among the population using generative AI chatbots—we’re becoming “beta” as we become accustomed to and accept mediocrity. When it makes mistakes, we just say, “Oh yeah, it hallucinates. Oh well.”
Computer Science as a field of study has been shaken to the core. It’s not that it’s no longer relevant. In fact, it’s probably more relevant now than ever – as long as you’re willing to broaden and reconsider what it even means to be a computer scientist in the AI era. The best schools out there, such as Carnegie Mellon, are reevaluating their curricula in response to generative AI tools like ChatGPT. This is something we’re actively in the midst of doing here at Bucknell. I can’t imagine any computer science program today remaining relevant unless it undertakes a massive overhaul.
The fact is that AI is rapidly transforming how computer science is taught, especially coding. Traditional programming should no longer be considered a primary objective in any CS curriculum. We are in the midst of transforming our curricula to consider broad topics such as computational thinking and AI literacy.
Ideas to consider:
Computer science may need to evolve toward a liberal arts-style education. We’ll need to consider more interdisciplinary and hybrid courses that integrate computing and computational thinking into other disciplines.
More courses and experiential learning opportunities that focus on critical thinking, ethical use of AI, and communication skills. I would also argue this is the prime time for computer science programs to finally put heavy emphasis on User Experience – something AI is horrible at. Any aspect of our field that focuses on the human side of our product is essential. UX, human-computer interaction (HCI), user interface design, teamwork, project management, communication and presentation skills, data visualization, and so on need to be incorporated into more courses. We no longer have the excuse of not having space in our programs to cover these essential skills.
Early computer science courses still need to stress computational thinking. AI will get 90% of the job completed, but it will continue to struggle with the most complex pieces of large-scale projects; the more complex the project, the more it will struggle. Unfortunately, students often have a false sense of security and confidence, blindly using AI with no knowledge of how to fix the problems they find or, even worse, how to properly debug and test systems for correctness in the first place, operating under the dangerous assumption that the AI-generated code is correct.
Career Impacts
I think we can all agree that AI seems to be eliminating some entry-level coding work, though I am struggling to get any real numbers on how AI is impacting these jobs vs. economic factors. The job market has tightened—entry-level roles are fewer, and students are applying more broadly. But here’s the more complete story, one that this NYTimes article reaches as well. I’ve been telling prospective and current computer science students that the news narrative is not telling the whole story. Based on where our students are still getting jobs today, AI is taking the jobs of those who do not know how to leverage AI. That’s a pretty clear, widely accepted reality in our field. And here’s the kicker that is rarely being reported (because, again, the news cycle thrives on negativity): despite layoffs, demand for AI-assisted software is growing! If you have AI literacy combined with strong, demonstrable critical-thinking skills; if you can share experiences in and (preferably) outside of the classroom that show you know how to orchestrate solutions to large-scale projects; if you can work well with teams and communicate and present results; and, for goodness’ sake, if you understand human-centered design and how to measure and maximize UX, then you will remain in a highly sought-after field!
My final thought from this article – It’s pretty clear that AI has not only democratized cheating, but with respect to computer science, it has democratized programming:
“The growth in software engineering jobs may decline, but the total number of people involved in programming will increase,” said Alex Aiken, a professor of computer science at Stanford.
More non-tech workers will build software using AI tools. It’s a slippery, dangerous slope. Why?
“But they didn’t understand half of what the code was,” he said, leading many to realize the value of knowing how to write and debug code themselves. “The students are resetting.”
That’s true for many computer science students embracing the new A.I. tools, with some reservations. They say they use A.I. for building initial prototype programs, for checking for errors in code and as a digital tutor to answer questions. But they are reluctant to rely on it too much, fearing it dulls their computing acumen.
Indeed, this is the reality check that we’re seeing in our own students here. They notice that AI is not always correct, even when solving simpler undergraduate computer science exercises. Conclusion: students still need to understand the fundamentals of computer science to fix the convoluted sequence of tokens from the LLMs. The output generated will appear to be a solid solution at first, looking like well-written Python, Java, C, or whatever language you’re working in, even properly commented if you prompted it correctly. And heck, the chatbot will sound extraordinarily confident and cheeky about its solution, pleased to have served you! But… it’s still just a stochastic sequence of tokens, subject to error.
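To make that concrete, here is a minimal, hypothetical Java sketch of my own (invented for illustration, not from any article or actual student submission): code in the clean, confident, well-commented style an LLM tends to produce, with a deliberately planted off-by-one bug that only a boundary-case test exposes.

```java
// Hypothetical example: LLM-style code that looks polished but hides a bug.
public class TrustButVerify {

    /** Returns the number of times target occurs in values. */
    static int countOccurrences(int[] values, int target) {
        int count = 0;
        // Bug: the loop starts at 1, silently skipping values[0].
        for (int i = 1; i < values.length; i++) {
            if (values[i] == target) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // This casual check passes: the skipped first element is not the target.
        assert countOccurrences(new int[]{1, 2, 3, 2}, 2) == 2;

        // This boundary case fails, exposing the bug.
        assert countOccurrences(new int[]{2, 1, 2}, 2) == 2
                : "off-by-one: the first element was skipped";
    }
}
```

Run it with java -ea TrustButVerify to enable assertions. The code compiles, reads well, and passes a casual test; it takes knowing where boundary cases live – a fundamentals skill – to catch the error.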
Amid this backdrop, Anthropic on Friday launched its Economic Futures Program, a new initiative to support research on AI’s impacts on the labor market and global economy and to develop policy proposals to prepare for the shift.
Anthropic, one of the big AI companies whose existence is entirely focused on the development of new AI tools – a company that keeps pushing out those tools with no concern for their impact on humanity, and whose livelihood depends entirely on selling its product to greedy corporations worldwide, corporations buying into the hype of saving billions of dollars by displacing human skill – is investing in a new initiative to help understand how AI is impacting the labor market and global economy.
Let that sink in. Can anyone say “conflict of interest”?
Anthropic’s CEO, Dario Amodei, has been oddly vocal about their own destructive impact:
At least one prominent name has shared his views on the potential economic impact of AI: Anthropic’s CEO Dario Amodei. In May, Amodei predicted that AI could wipe out half of all entry-level white-collar jobs and spike unemployment to as high as 20% in the next one to five years.
And yet they keep functioning, business as usual. I do not think anyone is so naive as to believe that any tech CEO has a genuine concern for how their product may (or may not) devastate humanity. Amodei has been a bit more extreme than the other CEOs in sounding alarms, and yet Anthropic has continued to release more capable AI models and tools on an unprecedented cycle.
Remember that time long ago (only a few years) when big corporations were expected to adhere to the triple bottom line?
The triple bottom line is a business concept that states firms should commit to measuring their social and environmental impact—in addition to their financial performance—rather than solely focusing on generating profit, or the standard “bottom line.”
Sadly, as is the usual outcome with humans, greed won, flipping the proverbial bird to human and societal concerns and the environment. The amount of harm these companies are actively doing to two of the three components suggests they have no actual concern for anything other than one bottom line – profit.
If Anthropic, OpenAI, Perplexity, Tesla, Google, Amazon, etc., actually cared about the future of humanity and the planet, one might think they’d slow down, carefully evaluate the impact of their products, and be more responsible. Perhaps they would provide massive funding for academics and researchers who are invested in this space, working with them to study how to move forward in a way that is responsible to people and the planet.
They do not care. Virtue signaling through the announcement of some initiative to study the damage Anthropic and other AI companies are actively doing is a bit like a wolf draped in a gardener’s cloak, planting a couple of seeds with one hand while uprooting the forest with the other.
Amodei (Anthropic), Cook (Apple), Altman (OpenAI), Zuckerberg (Meta), Huang (Nvidia), Nadella (Microsoft), Pichai (Google), Jassy (Amazon), and Musk (Tesla) are just a handful of CEOs who run companies highly invested in seeing AI “succeed” (which, from their viewpoint, means massive profits). These companies are closing in on an aggregate value of $20 trillion. The collective worth of the CEOs, based on publicly available data, could be around $350 billion.
These figures are estimates as of June 2025 and fluctuate with the market. They also represent an unfathomable amount of money that could be used to fix the problems these companies are creating. Ironically, the AI tools themselves might be an important part of developing solutions.
It’s time these companies do more to invest in education, science, and research so that we, as students and academics, can help them change the course of this behemoth of a ship and steer them toward responsible AI – an AI that does not neglect humanity and the planet. These companies all reside in a country that is actively working to dismantle education, research and science – all things that brought America greatness long ago. Do these companies care about the longevity of their product? Do they want to ensure they have a workforce in the future that can continue to contribute to the science and technology necessary for their own growth? These companies must pull their heads out of their huge profit mounds and use their massive pools of money to invest in our future. They should be doing far more than just releasing tools that let us investigate the obvious negative impacts they are having on the workforce and the economy.
I developed an intense interest in understanding how Extended Reality devices could be used for, well, “good.” Over the pandemic, as many of us were stuck home, I purchased a couple of the Oculus Quest headsets for my family. Frankly, we were blown away. Perhaps it was just the euphoria of finally being able to do something other than sit while watching the news about a virus taking over the world. We needed something to help us get up, move around, and have fun. The technology was still considered early, but I was amazed at what it was capable of. (We would play the infamous Beat Saber, competing against each other for high scores! Even better – we were getting cardio exercise in while having fun!)

Surely there must be a slew of use cases for this tech? So, I decided to dive in head first. I purchased the Quest 2 as Meta was taking over Oculus, and then the Quest Pro, which we used while working with colleagues at Geisinger Commonwealth School of Medicine to develop a couple of prototype VR applications in which students could enter an OR simulation and learn basic cardiology from remote areas. My students learned the ORamaVR platform, which made developing their medical simulation prototype much easier than it would have been otherwise.

I also started teaching in the EXCELerator program for the College of Engineering, where some incoming students come to campus over the summer, before the start of their first semester, for an immersive experience prepping them for college. I work with a small group of students each summer, helping them learn about design and development frameworks in engineering using VR. Lately, the Meta Horizons creation platform has been quite good for that.
I’ve developed a brand new course (offered for the first time in Spring 2024) in which I teach students about Design and Development in XR using Meta Quest 3 headsets. Students learn Unity for the first half of the semester by working through a wide range of custom labs and a suite of resources on the Unity Learn platform. The rest of the semester, they go deeper, implementing various interactions and figuring out how to create accessible, immersive experiences through their own projects. It’s been awesome watching students develop immersive worlds from scratch! I’ve been amazed at what some students have been able to create. It’s a lot to cover in one semester, but the fun of working with these devices and developing in Unity has far outweighed the work, and I think most of the students would agree.
If you’re interested, reach out to former students of the course, or just send me an e-mail and we can talk!
CSCI 205 has been a highly successful course for our majors. It is a lot of work for students, and likewise for the instructors who teach it (myself and Prof. Chris Dancy). But the rewards have been plenty, as the course teaches a lot. Still, much of the material had grown out of date: the course was still relying on Java 7 and used NetBeans.
I’ve taken quite a bit of extra time this semester to update some of the course. As of Fall 2019, the course has been updated in the following ways:
We are now using IntelliJ IDEA
The course has been updated to Java 12
Many videos have been re-recorded to address the updated content, including:
Heavier emphasis on lambda expressions than ever before
Teaching more java.nio and NIO.2 along with java.io
Added new material on socket programming with java.net
Added new material on multithreading and concurrency in Java
Introduced the Java Stream API (not to be confused with the I/O streams – see the sketch after this list)
The final project has been overhauled. Every student must now use GitLab for their Scrum task board and sprint management. (This has worked surprisingly well!)
Expanded the JavaFX material
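To give a flavor of the updated material, here is a small sketch of my own – a hypothetical example rather than an actual course lab, and words.txt is an invented input file – combining NIO.2 file access, a lambda expression, and a Stream API pipeline (the data-processing streams, not java.io’s I/O streams):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

public class StreamsDemo {
    public static void main(String[] args) throws IOException {
        // NIO.2: Path and Files replace much of the old java.io plumbing.
        Path src = Path.of("words.txt");  // hypothetical input file

        // Stream API + lambdas: a declarative filter/map/collect pipeline.
        List<String> longWords;
        try (var lines = Files.lines(src)) {  // try-with-resources closes the file
            longWords = lines
                    .map(String::trim)
                    .filter(w -> w.length() > 7)  // lambda expression
                    .sorted()
                    .collect(Collectors.toList());
        }
        longWords.forEach(System.out::println);  // method reference
    }
}
```

Students see how a few lines of declarative pipeline replace the nested loops and readers of the old Java 7 material.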
It’s a start. There is a lot to be done still.
In a recent discussion with Chris Dancy, he expressed significant interest in incorporating elements of engineering social justice into the course. This represents a broader move by the engineering community at large to start helping our students recognize the impact that their choices have on humanity. I am at fault here. Like many of us, I focus on the goals without teaching our students the impacts of their choices. Well, that’s not entirely true. I do discuss impact – computational resource impact. That’s not enough. I do not give enough attention to social, moral, or ethical impact. So, I believe this will be the next set of revisions we make to the course. Chris will likely start on some of those changes in Spring 2020, but we’ll likely make a more substantial effort to incorporate this into the project over the summer. It’s time for engineers to treat people as a more important consideration than profits.
I had the joy of serving as a core faculty member of the Institute of Leadership in Technology and Management (ILTM) for the past two summers (Summer 2017 and 2018). It has been one of the most transformative experiences available to Bucknell students since I’ve been here, and I was honored to be part of the program. I worked with some absolutely wonderful students in ILTM! However, as a result of this opportunity, my scholarship was substantially halted for the last two summers, and I have not taken on any new students for quite some time.
I was also on sabbatical during the entire 2017-18 academic year. During this time, I continued to work on interesting collaborative projects with Dr. Vanessa Troiani at the Geisinger Autism and Developmental Medicine Institute. As much pleasure as I’ve found working in various areas of bioinformatics, I decided it was time to explore other areas of sequential data analysis. Dr. Troiani and her lab members have invigorated me with new opportunities in pattern mining mass quantities of eye-tracking data. This ultimately led to another collaborative project involving Dr. Troiani and our own Prof. Evan Peck. Slowly, the research agenda is ramping back up again. I applied to five different grant opportunities; to date, one has been awarded, and a much larger one is currently under review.
I’ve also become more involved in interdisciplinary teaching and research opportunities across the university. Bucknell is at a point now where we can truly provide some very interesting transformative experiences to our students – rare opportunities that very few colleges can offer. To do so, however, we must leverage the opportunities that exist across disciplines. Thus, I’ve been intentional in my pursuits to identify new opportunities outside of my own department and my own home – the College of Engineering. For instance, I’ve had great joy working with my colleague, Prof. Abby Flynt, on both teaching and research projects. (We both recently received the Presidential Award for Teaching Excellence for 2018, and we co-mentored a wonderful student, Alexander Murph, who completed an honors thesis and is now at UNC Chapel Hill pursuing his PhD in Statistics!)
Speaking of new, unique opportunities for interdisciplinary work: I’m looking forward to seeing what happens with our new College of Management, where I expect some interesting collaborations with faculty who will be part of its Analytics and Operations Management program. I’ve been spending time with them recently, serving on a committee to help hire new faculty for this exciting program.
Of course, I can’t forget our wonderful friends in Biology, who were so instrumental in collaborating on my bioinformatics projects very early on during my pre-tenure days here. Needless to say, there are great colleagues across this university, with lots of data! It’s a rich place for a data scientist!
Sequential data mining and analysis will always remain my primary area of focus, and with the security of tenure, it’s exciting to be able to afford the risk of stretching my core interests toward new areas. Fortunately, sequential data are ubiquitous. Thus, I’ve branched away from biological sequence analysis and delved into numerous other areas of sequential data. I will update soon.
My post-tenure feelings
So, is tenure all it’s cracked up to be? Well, I’m now in the midst of my third year post tenure. Or is it my second? I don’t even know. I’m burnt out, thanks to the vicious downside of tenure – SERVICE! Once a faculty member receives tenure, it seems as though you are put on a list by the administration throughout the college and university. This list is special. I believe the title of the list is, “PEOPLE WE CAN GUILT INTO SERVING ON COMMITTEES NOW THAT THEY HAVE TENURE.” This semester, I have honestly lost count of the committees and other opportunities where I have said “yes” to helping my colleagues. The result: I regularly have a minimum of 10-15 additional hours per week dedicated to service obligations, and that has recently reached 20+. Those hours are on top of my normal teaching hours in and out of the classroom, which are easily 40+, and that doesn’t include my normal teaching/service duties, such as academic advising, department meetings, mandatory caffeine pursuits, and so on. (An academic has no concept of a 40-hour work week. It doesn’t exist.) This is a huge challenge that I’m struggling with. It is due, in part, to a very young, vibrant department of faculty who are going through the tenure process. Thus, the relatively few of us who have tenure take on a lot of the service obligations to protect them as they work toward tenure. And, of course, I know, I know… the real reason? I often find it difficult to say, “NO!” Like I said, I work at a great place with wonderful colleagues, and I believe it’s important to pay it forward. People senior to me were once in my shoes and protected me from excessive service obligations, and I will do the same. The challenge is the imbalance in the department; we’ve had a lot of people retire in recent years. In time, the balance should return to normal as others get through the tenure review process and can share the service burdens.
Anyway, the most important thing that has me excited? First, I’m teaching BOTH a data mining AND a data science course this Spring! Second – this summer 2019 is mine! All mine! [Insert-evil-laugh-here]. I have not had a summer for research since 2016. So, I have several projects that are ramping up, and I am looking for new students to work with me this summer. Funding is available. Send me an e-mail if you are interested.
It has been quite some time since I’ve updated current events. Thanks to our students, we have had a pretty active summer…
Robert Cowen is continuing his work with me on word prediction models. We have good results and are writing our first paper. The first draft should be complete by the beginning of September.
Morgan Eckenroth has started work on the development of a virtual reality app (using Google Cardboard) that will be used by autistic children to help assess (and hopefully retrain) biases in their visual processing.
Khai Nguyen is working on a collaborative project, funded jointly by the College of Engineering, Chemical Engineering, and Computer Science. The aim of the project is to develop a new application for aerosol researchers in Chem Eng.
Ryan Stecher is working on a collaborative project with Dr. Aaron Mitchel in Psychology to develop and finalize a web-based series of perception tests.
Tongyu Yang has been investigating the use of deep learning to help autism researchers better understand why autistic children have substantial interest in certain types of images.
We have an active summer in store. Three students are working on entirely different research projects, while Rachel Ren is wrapping up her work.
Son Pham is investigating the use of Deep Learning for protein sequence classification. Deep Learning has recently gained substantial recognition due to its success with automated image recognition and speech classification, but very few have examined its use in bioinformatics. Son will help me explore this largely untapped area.
Jason Hammett will be applying data mining techniques to years of regional climate data, including local stats for the Susquehanna River, to develop explanatory and predictive models for anomalous weather events around the Susquehanna River Valley.
Robert Cowen will be continuing the wonderful work that I started with Bucknell student Stephanie Gonthier last year on word prediction. Robert will be collaborating with me and speech pathologists at the Geisinger-Bucknell Autism and Developmental Medicine Institute (ADMI) to develop a preliminary version of a new augmentative and alternative communication (AAC) app that will utilize my word prediction model. This first version will be developed to run on Android tablets.
Rachel Ren is graciously staying for a month after graduating to help submit a paper based on the extensive work completed for her honors thesis. Stay tuned!
Rachel Ren successfully defended her honors thesis, titled “Predicting Protein Contact Maps by Bagging Decision Trees.” Congratulations, Rachel! Additionally, Rachel will be attending graduate school in the fall at Columbia University, where she will pursue a Master’s in Computer Science. Rachel intends to focus on research in machine learning.
Congratulations, Rachel! Bucknell is proud of you! We wish you the very best as you pursue your graduate work.