Anthropic launches program to track fallout from AI

As job losses loom, Anthropic launches program to track AI’s economic fallout


Amid this backdrop, Anthropic on Friday launched its Economic Futures Program, a new initiative to support research on AI’s impacts on the labor market and global economy and to develop policy proposals to prepare for the shift. 

Anthropic – one of the big AI companies whose existence is entirely focused on developing new AI tools, a company that pushes out AI tools with no concern for their impact on humanity, and whose livelihood depends entirely on selling its product to greedy corporations worldwide, corporations buying into the hype of saving billions of dollars by displacing human skill – is investing in a new initiative to help understand how AI is impacting the labor market and the global economy.

Let that sink in. Can anyone say “conflict of interest?”

Anthropic’s CEO, Dario Amodei, has been oddly vocal about his own company’s destructive impact:

At least one prominent name has shared his views on the potential economic impact of AI: Anthropic’s CEO Dario Amodei. In May, Amodei predicted that AI could wipe out half of all entry-level white-collar jobs and spike unemployment to as high as 20% in the next one to five years. 

And yet they keep functioning, business as usual. I do not think anyone is so naive as to believe that any tech CEO has a genuine concern for how their product may (or may not) devastate humanity. Amodei has been a bit more extreme than the other CEOs in sounding alarms, and yet Anthropic has continued to release more capable AI models and tools on an unprecedented cycle.

Remember that time long ago (only a few years) when big corporations were expected to adhere to the triple bottom line?

The triple bottom line is a business concept that states firms should commit to measuring their social and environmental impact—in addition to their financial performance—rather than solely focusing on generating profit, or the standard “bottom line.”

Sadly, as is the usual outcome with humans, greed won, flipping the proverbial bird to human and societal concerns and the environment. The amount of harm these companies are actively doing to two of the three components suggests they have no actual concern for anything other than one bottom line – profit.

Suppose Anthropic, OpenAI, Perplexity, Tesla, Google, Amazon, etc., actually cared about the future of humanity and the planet. In that case, one might think they’d slow down, carefully evaluate the impact of their products and be more responsible. Perhaps they would provide massive funding for academics and researchers who are invested in this space to start working with them to study how to move forward in a way that is responsible to people and the planet.

They do not care. Virtue signaling through the announcement of some initiative to study the damage Anthropic and other AI companies are actively doing is a bit like a wolf draped in a gardener’s cloak, planting a couple of seeds with one hand while uprooting the forest with the other.

Amodei (Anthropic), Cook (Apple), Altman (OpenAI), Zuckerberg (Meta), Huang (Nvidia), Nadella (Microsoft), Pichai (Google), Jassy (Amazon), and Musk (Tesla) are just a handful of CEOs who run companies highly invested in seeing AI “succeed” (which, from their viewpoint, means massive profits). These companies are closing in on an aggregate value of $20 trillion. The collective worth of the CEOs, based on publicly available data, could be around $350 billion:

CEO | Company | Estimated Net Worth (USD)
Dario Amodei | Anthropic | $1.2 billion [1]
Sam Altman | OpenAI | $2.8 billion [2]
Mark Zuckerberg | Meta | $12.3 billion [3]
Jensen Huang | Nvidia | $135 billion [4]
Satya Nadella | Microsoft | $500 million [5]
Sundar Pichai | Google | $1.3 billion [6]
Andy Jassy | Amazon | $470–540 million [7–9]
Elon Musk | Tesla | $194 billion (2025 estimate) [10]
Tim Cook | Apple | ~$2 billion (2024 estimate, based on public filings and recent news)

The above are estimates as of June 2025 and fluctuate with the market. They also represent an unfathomable amount of money that could be used to fix the problems they are creating. Ironically, the AI tools themselves might be an important part of developing solutions.
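As a quick sanity check on that $350 billion figure, here is the table summed in Python – a back-of-the-envelope sketch using only the rough estimates listed above (Jassy’s entry uses the midpoint of his range; none of these figures are authoritative):

# Net-worth estimates from the table above, in billions of USD.
# These are the rough June 2025 figures cited in this post, nothing more.
net_worth_billions = {
    "Amodei": 1.2, "Altman": 2.8, "Zuckerberg": 12.3,
    "Huang": 135.0, "Nadella": 0.5, "Pichai": 1.3,
    "Jassy": 0.505,  # midpoint of the $470-540 million range
    "Musk": 194.0, "Cook": 2.0,
}
total = sum(net_worth_billions.values())
print(f"Aggregate: ~${round(total)} billion")  # Aggregate: ~$350 billion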

It’s time these companies do more to invest in education, science, and research so that we, as students and academics, can help them change the course of this behemoth of a ship and steer it toward responsible AI – an AI that does not neglect humanity and the planet. These companies all reside in a country that is actively working to dismantle education, research, and science – the very things that brought America greatness long ago. Do these companies care about the longevity of their product? Do they want to ensure they have a future workforce that can continue to contribute to the science and technology necessary for their own growth? These companies must pull their heads out of their huge profit mounds and use their massive pools of money to invest in our future. They should be doing far more than just releasing initiatives that let us investigate the obvious negative impacts they are having on the workforce and the economy.

AI as the New Study Partner: Yes, we CAN help students learn responsible AI use

AI to the Rescue: https://www.chronicle.com/special-projects/the-different-voices-of-student-success/ai-to-the-rescue

The Chronicle’s article “AI to the Rescue” offers, shall we say, a more balanced view of how today’s college students are using AI in and out of the classroom. It notes that fewer students than some faculty fear are using AI merely as a shortcut to skirt academic rigor. Some students are learning to use it as a 24/7 personalized tutor, seeing it as an essential part of the scaffolding that bolsters their learning, manages cognitive load, and helps them navigate the challenging times we put them through in higher education. The narrative is grounded in student voices and survey data, painting a nuanced picture that challenges the prevailing trope of AI as merely a cheating engine. Thank you, Beth McMurtrie.

I particularly liked Abeldt’s story:

Abeldt began building her AI arsenal. She used ChatGPT and watched YouTube videos to break down complex topics. Perplexity, an AI-powered search engine, helped her find information. Google NotebookLM turned class notes and lecture slides into study guides and podcasts, which she listened to on the way to class. Quizlet made flashcards and ChatGPT created mock exams and quizzes. Abeldt even used AI to develop a study plan to avoid procrastinating or spending too much time on any one assignment. She followed its guidance, and it worked.

Abeldt’s outcome? A GPA rebound, and more importantly, a sense of agency restored.

Not all are on board, nor should they be

Everyone – faculty, staff, and students alike – should proceed with caution and not jump into AI without proper guidance and boundaries. As we know, OpenAI, Anthropic, Google, Meta, and the other corporations developing AI have zero interest in protecting humanity as they build these tools. There is no doubt AI can cause harm, and it already has; examples of harm from irresponsible or naive use will only increase in frequency. Many should avoid it altogether, and it’s not always clear when it should be used, if at all. Despite the danger, we are noticing some positive outcomes for those who understand how to use it responsibly. Anecdotes are not data, but my colleagues and I have noted that in our higher-level courses and in research with our students, AI can be quite helpful. Even in these contexts, though, it needs to be used responsibly and with caution. (Responsible AI – I will keep pushing this with urgency, no matter what my colleagues think of me.) Some, particularly in the humanities or those with strong ethical stances, opt out entirely, citing concerns about bias, trustworthiness, or the environmental impact of large models. I respect those who take a stand through informed, data-driven decisions.

The article points out that some blame AI for the erosion of peer networks, as classmates default to AI tools for feedback and collaboration, bypassing the rewarding experiences (and challenges) that come from interacting with peers, meeting with TAs, and attending their instructors’ office hours. No. Blame that on social media and mobile devices, which have arguably been bigger disruptions to regular human contact than anything AI is doing. AI might make it easier to avoid human interaction, but social media has done, and will continue to do, far more damage to human interaction than AI ever will.

The fact is that very few campuses are monolithic; this is even more true at primarily undergraduate liberal arts colleges. AI use depends on the discipline. Let’s say that again – AI use and attitudes are shaped by discipline, workload, and personal philosophy. Let’s start respecting each other’s autonomy to choose how to use it!

Here’s my take (subject to change as fast as AI is evolving):

Everyone should use AI only if they have a full understanding of how it works, and there should be no judgment on individual choices. Each user should understand how the model they are using was trained and on what data, and understand its strengths and weaknesses, as well as its ethical, societal, moral, and environmental impacts. If you determine AI is not right for you, your class, or your own learning, then take that stand, explain to your students – with data – why you are choosing that path so they can make informed decisions in the future, and make the choices that are right for you. Let’s stop the virtue signaling, guilting, and condemnation of others who have chosen to use it.

We all need to figure out what is right for our courses, our departments, and our programs. We need to teach our students to be ethical and responsible users of AI today. Every administration should be crafting a general policy on ethical and responsible AI use for their campus. But understand that there is no one-size-fits-all solution. That makes it challenging.

Disruption

The fact is, as I’ve stated elsewhere, AI is not going anywhere. To choose to ignore it is not a good way to move forward. It’s a colossal disruptor. There is no doubt that AI, like so many technologies over our lifetimes, is a disruptor that is displacing the norm of “how things are done.” It’s not the first, and it won’t be the last. Consider:

  • Industrial Automation, an endeavor around long before AI, has displaced traditional job markets, increased factory emissions, and disrupted landscapes through resource extraction.
  • Automobiles enabled great economic growth, but led to urban sprawl and traffic fatalities, and have been a formidable contributor to air pollution while driving substantial resource extraction for oil.
  • Nuclear Technology has such potential, but ask Chernobyl about the risks.
  • Computer and Internet Technologies have enabled nearly every modern disruption today! Want to point fingers at someone for AI? Well, blame this category! It disrupted how we communicate and do our day-to-day work, caused incredible privacy and security concerns, and continues to contribute to the social and digital divides in the world. And the e-waste generated from new phones and tech being released year after year is profound. (It also contributed to enormous labor waste measured in countless hours of Solitaire played on the company’s dime!)
  • Biotech has delivered remarkable improvements in crop yield at the cost of long-term health impacts, chemical runoff, loss of biodiversity, water pollution, and destruction of ecosystems.
  • Plastics – we are only now seeing the huge negative impact of microplastics on plants, animals, and human health.

Disruption seems to be a historical norm. So, what is different about AI?

While many of the above are considered disruptors, AI is just… different. I’m not a philosopher by any stretch of the imagination. But I think way too much, I try my hardest to listen to my colleagues across the college, and I read far too much about AI. Here’s my take…

  1. Scope of human displacement – none of the above technologies encroached on the cognitive, creative, and decision-making domains the way AI has. It is infringing on the very things we thought were reserved for humans.
  2. Speed and scale of change – the above disruptions took time to evolve. The pace at which AI is moving is unprecedented, unpredictable, and driven mainly by large, greedy corporations who don’t give a %)@# how it impacts humans. And it’s happening while we have an administration with no interest in curtailing and regulating how it’s used. This is downright unnerving and incredibly frustrating. As soon as I have something figured out, more capabilities emerge.
  3. Existential questions and human identity – AI is causing us to wake up in the morning wondering what it means to be human in the era of AI.
  4. Trust, security, control, accountability, etc – AI systems are treated as those mythical “black boxes”, i.e., algorithms we blindly use with no concern about what’s going on behind the scenes. Who is responsible when AI causes harm, makes a biased decision, or generally has unintended consequences? 
  5. Privacy and manipulation – OK, so this really is part of the previous concern. But it bears repeating. AI can process an incredible amount of data, making it quite easy for “big brother” systems to monitor and perform regular surveillance without us knowing. Then there are the “deepfakes”, misinformation, etc… all posing greater risks than any of the above disruptors.
  6. Media – boy, is the mainstream media having a heyday with AI. Fear sells the news! News outlets love it when they can grab your attention through fear and manipulation of your emotions. Facts? Pffft… not necessary when they’ve got you emotionally charged. It’s what drives our political coverage, too. AI is being portrayed as a formidable, uncontrollable force, conjuring up images of rogue terminators taking us out. The media fuels these anxieties.

There is a lot of fear and discontent over AI, and rightfully so. It is indeed a disruptor, unlike anything we have seen in quite some time, perhaps in human history. Or is it? Time will tell. For instance, automation was long criticized as a source of job displacement, but it was eventually recognized that it would create additional jobs.

Balance – the key to everything

Let’s get back to the article at hand. Fortunately, it ends on a rather pragmatic note. The future of student success, as the author suggests, lies in balance – something we’re not very good at in academia, especially as we become a more polarized society. Can academics (me included) leverage AI for efficiency and accessibility, while doubling down on those things that make us human? Can we emphasize more intuition, compassion, innovation, creativity, and nuanced judgment? These are the things that AI cannot replicate. Can we reframe how we judge our students with our final grade assessments by rewarding their curiosity and willingness to challenge themselves and take intellectual deep dives instead of encouraging them to only learn enough to pass tests?  

Remember – AI can only imitate, not innovate. Let’s come together to figure out how to reimagine our mission in higher education – fostering the next generation of critical thinkers, world-changers, and ethical minds. It is essential if we are to maintain relevance.

You mean, ChatGPT isn’t good for your brain?

https://www.theweek.in/news/health/2025/06/19/chat-gpt-might-not-be-good-for-your-brain-new-mit-study-finds.html

A recent MIT study raises concerns about how using ChatGPT might impact critical thinking and brain engagement. Summarizing the outcomes (which have not yet been published or peer reviewed):

  • Sample size? n=54 – hardly enough to suggest much of anything about humanity, though it’s not the only study to suggest negative impacts on our brain activity from human overreliance on genAI chatbots.
  • An interesting aspect of the study was the use of an EEG to measure brain activity. Participants who used ChatGPT to write SAT essays showed the lowest brain engagement compared to those using Google Search or no tools at all. EEG readings revealed reduced neural activity. Not surprising at all.
  • Their essays were more uniform and lacked creative or original thought… as one would expect from any generative AI engine that serves as nothing but a fancy-ass stochastic regurgitation machine.
  • Another obvious outcome – participants also became [insert-shocked-face-here] lazier over time. When they hadn’t used their own brains in their writing, they struggled to recall what they had written when asked to revise without the AI.

The group that didn’t use any tools demonstrated the highest neural connectivity, were more engaged and curious, and felt more satisfied with their work.

I have yet to talk with any academic in any field who is not sounding enormous alarms (not to mention expressing frustration over cheating) about student use of AI. What I tell them is this: generative AI, like it or not, is here to stay. Power-hungry, greedy corporations are driving it with no interest in understanding its impact on humanity. Big companies are lured by the temptation to save a buck or a million, and the tech companies are ready to serve the next AI plate. AI will continue to become more capable and, yes, even awe-inspiring.

The cynical side of me (which is about 99% of me these days) views genAI as democratizing cheating. I’ve been privy to students cheating. All of us educators have. But the best cheating came from the affluent – those students who could afford subscriptions to cheating websites and services, some even paying for their code or essays to be written by cheating “farms” around the world! Well, ChatGPT has made cheating accessible to all! Snark aside, let’s be realistic – cheating has been around since the dawn of education. Stop worrying about the cheating! My God, that’s the least of our problems today. It doesn’t matter what fancy language you put on your syllabus, what rules the administration tries to put in place, or what types of lockdowns you put on school computers – students are going to cheat, they have access to all the best tools out there, and they likely know more than you do about how to use them.

My observation is that college students do not know how to use AI responsibly. They do not know how to utilize it to aid and promote their own learning in a way that keeps them just as engaged and invested in that learning while protecting their critical thinking.

In theory, if learning with AI tools does not yield tangible, measurable evidence that students’ intellectual growth and critical thinking exceed the same measures without the tools, then what are we even doing having this discussion? Problem solved! But that’s not realistic. Humans (me included) are opportunistic, stressed-out creatures who will always have moments of needing to cut corners to get work done, and damn, generative AI makes that easy.

Our real problem – teaching our kids how to learn, and how to protect their own brain development, in the era of highly accessible generative AI tools.

It’s not going anywhere. It will be used. What do we do?

I am trying to believe that the hype and chaos will subside in time, allowing AI to be viewed as a tool, maybe even an essential tool – much like the calculator for a mathematician, the stethoscope for a doctor, or the IDE for a software engineer. I hope that in time, genAI will mature into an essential aid for the mundane, repetitive tasks so that we can focus on getting more done, become more efficient with our time, and leave each day having accomplished more than we could before – ideally spending more time with our friends and families. (You know, those things we used to do before mobile devices and social media. Remember having conversations at restaurants?)

But it’s going to take a lot of time to get there, and honestly, I’m not sure we (at least in the U.S.) have what it takes to enact the legislation urgently needed to regulate how AI is used in K-12, or even whether it should be allowed at all in the early years. This is not a partisan issue. This is one of the most important pieces of legislation that needs to be passed to protect our future. GenAI in the hands of kids, at the age when their brains are still growing, is the absolute worst possible use of any technology today. We need to help our K-12 educators learn how to use the tech themselves and recognize early signs that kids may be using it.

Every discipline at the college level needs to reimagine its curriculum to include instruction on how to use AI safely and responsibly while protecting one’s own critical thinking. Ideally, if done correctly, it should enable us to accomplish more in the classroom. Our students should be able to conquer more challenging problems than ever before… if and only if we help them learn how to use it! And that’s on us, friends – the educators of the world – to help our next generation of students learn how to do that. Unfortunately, times are bizarre right now. With the recent devaluing of education in this country, I’m not sure we have what it takes to wake up and fund and support the education system the way it needs to get through this time. Papers like the one above will become the norm, to the point where it’s mostly AI publishing yet another paper announcing yet another test where it outperforms humans on yet another measure of brain activity, intelligence, or critical thinking.

Legislation isn’t going to happen, at least not in the near future. It’s on all of us, as educators and members of a society that values education, to lead the way in protecting our kids. In my courses, it’s my responsibility to teach my students how to utilize these tools in ways that promote their brain development, use AI responsibly and safely, and minimize harm to humanity. I don’t have answers. None of us does, but pretending it’s not here and doing nothing is not the answer. Failing to act will transform our society into some form of idiocracy within the next generation, with humanity at the mercy of powerful AI engines that do all our thinking for us. (I never realized that Mike Judge was a modern prophet.)

I consider myself a huge fan of technology, dating back to my high school days. The technology itself does not scare me. I find it fascinating and am actively using AI and ML models in my own work. It has a purpose. What scares me is the willingness of society to let AI do its thinking for it. My worst fear is letting our kids use it with no reins and no understanding of the harm its use is doing to their mental capacities in the long term. The damage is real, and if we don’t help our kids and college students learn how to use it safely, it will have devastating impacts on humanity.

This is a disruptive technology. Disruption causes mass chaos for a time as society adjusts. It also displaces and eliminates those who fail to evolve with it and adapt. Just ask Kodak. Or Blockbuster. Or how about Blackberry? Or Palm. MySpace anyone? AOL?

Demystifying AI Workshop – May 23, 2025

2025-May-23 – Demystifying Neural Nets
The links below are to Python notebooks for each section of the workshop. Each link takes you to a read-only version of the notebook on Google Drive. Open the link, save a copy of the notebook file in your own Drive space, and open it in Google Colab. You can also run the file locally on your own machine if you have a complete Python environment installed with Jupyter/JupyterLab, or an editor that lets you edit Jupyter notebooks. All notebooks were tested both on Google Colab and natively.
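The notebooks carry the actual workshop content, but as a flavor of the topic, here is a minimal sketch – not taken from the workshop materials – of the kind of model a “demystifying neural nets” session typically builds up to: a single forward pass through a tiny network, using only NumPy.

import numpy as np

# A tiny illustrative network: 4 inputs -> 3 hidden units -> 1 output.
# Weights are random; this only demonstrates the forward-pass mechanics.
rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=4)        # one input example
W1 = rng.normal(size=(3, 4))  # hidden-layer weights
b1 = np.zeros(3)              # hidden-layer biases
W2 = rng.normal(size=(1, 3))  # output-layer weights
b2 = np.zeros(1)              # output-layer bias

hidden = sigmoid(W1 @ x + b1)       # hidden activations
output = sigmoid(W2 @ hidden + b2)  # final prediction in (0, 1)
print(output)

Training (adjusting those weights from data) is where the real demystifying happens; the sketch above only shows the plumbing.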

These materials are part of workshops taught in my role as Faculty Fellow of the Dominguez Center for Data Science.

Extended Reality (XR)

I developed an intense interest in understanding how Extended Reality devices could be used for, well, “good.” Over the pandemic, as many of us were stuck home, I purchased a couple of the Oculus Quest headsets for my family. Frankly, we were blown away. Perhaps it was just the euphoria of finally being able to do something other than sit while watching the news about a virus taking over the world. We needed something to help us get up, move around, and have fun. The technology was still considered early, but I was amazed at what it was capable of. (We would play the infamous Beat Saber, competing against each other for high scores! Even better – we were getting cardio exercise in while having fun!) Surely there must be a slew of use cases for this tech? So, I decided to dive in head first. I purchased the Quest 2 as Meta was taking over Oculus. Then came the Quest Pro, which we used when working with colleagues at Geisinger Commonwealth School of Medicine to develop a couple of prototype VR applications where students in remote areas could enter an OR simulation and learn basic cardiology. My students learned the ORamaVR platform, which made developing their medical simulation prototype much easier than it would have been otherwise. I also started teaching in the EXCELerator program for the College of Engineering, where some incoming students get an immersive experience preparing for college life on campus during the summer before their first semester. I work with a small group of students each summer, helping them learn about design and development frameworks in engineering using VR. Lately, the Meta Horizons creation platform has been quite good for that.

I’ve developed a brand new course (offered for the first time in Spring 2024) in which I teach students about design and development in XR using Meta Quest 3 headsets. Students learn Unity for the first half of the semester by working through a wide range of custom labs and a suite of resources on the Unity Learn platform. They spend the rest of the semester going deeper, implementing various interactions and figuring out how to create accessible, immersive experiences while working on their own projects. It’s been awesome watching students develop immersive worlds from scratch! I’ve been amazed at what some students have been able to create. It’s a lot to cover in one semester, but the fun of working with these devices and developing in Unity has far outweighed the work, and I think most of the students would agree.

If you’re interested, reach out to former students of the course, or just send me an e-mail and we can talk!

Updating CSCI 205

CSCI 205 has been a highly successful course for our majors. It is a lot of work for students, and likewise for the instructors who teach it (myself and Prof. Chris Dancy). But the rewards have been plenty, as the course teaches a lot. However, much of the material is out of date: the course was still relying on Java 7 and used NetBeans.

I’ve taken quite a bit of extra time this semester to update some of the course. As of Fall 2019, the course has been updated in the following ways:

  • We are now using IntelliJ IDEA
  • The course has been updated to Java 12
  • Many videos have been re-recorded to address the updated content, including:
    • Heavier emphasis on lambda expressions than ever before
    • Teaching more java.nio and java.nio2 along with java.io
    • Added new material on socket programming with java.net
    • Added new material on multithreading and concurrency in Java
    • Introduced the Java Stream API (not to be confused with the I/O streams)
  • The final project has been overhauled. Every student now must use GitLab for their Scrum task boards and sprint management. (This has worked surprisingly well!)
  • Expanded the JavaFX material 

It’s a start. There is a lot to be done still.

In a recent discussion with Chris Dancy, he expressed significant interest in incorporating elements of engineering social justice into the course. This reflects a broader move by the engineering community at large to help our students recognize the impact that their choices have on humanity. I am at fault here. Like many of us, I focus on the goals without teaching our students the impacts of their choices. Well, that’s not entirely true. I do discuss impact – computational resource impact. That’s not enough. I do not give enough attention to social, moral, or ethical impact. So, I believe this will be the next set of revisions we make to the course. Chris will likely start on some of those changes in Spring 2020, but we’ll make a more substantial effort to incorporate this into the project over the summer. It’s time for engineers to treat people as more important than profits.

Posting from MarsEdit

I’m still slacking off on my hope to do a better job of keeping a presence online. I need massive simplicity, and as much as I dig WordPress, I don’t find the workflow intuitive. So, on my quest to find the simplest, easiest editor that will let me publish posts quickly, with relatively rich content, I continue to stumble around. Nothing out there does exactly what I want, and frankly, this is nowhere near a high priority. Blogo was easy to use, but it died; I’m not sure where it went. IFTTT supports an automatic hook that lets a new Evernote note post to WP, but not to a self-hosted WP blog like this one. I am close to just giving up and editing directly in the WP interface. Given the great functionality the Gutenberg project has brought to WP users with Blocks, that’s not really a bad option. My bit of frustration comes from dealing with media files, mostly images.

So, I’m checking out MarsEdit, available on the App Store. Do not let the free price to download and install fool you: it’s free with complete features for 14 days, and then your ability to publish content is disabled. To continue, you must pay $49 for a full license.

Here’s the interface running on my Mac in Dark Mode. You can see it fully supports Dark Mode in Catalina:

There are also options to edit your slug and tags, select the WP categories the post is assigned to, choose your featured image, and set other server options such as post status, author, and whether comments are closed. Overall, it seems quite simple. But I mostly care about editing. You can see above that it’s a basic, functional rich-text editor that supports the most common formatting commands.

Yet we know that WP has made a substantial commitment to its Gutenberg project – the new Blocks editor. So, what happens when you publish a brand new post? It comes up in “classic” mode when you open it in WP:

I can attempt to convert my post to Blocks…


but sometimes it results in no change and leaves your post in classic mode. Other times, it does indeed work. I’m not certain what triggers prevent a post from converting to separate blocks, though this is not a big deal to me.

At first glance, this tool seems a bit pricey. However, the workflow I need to quickly publish updates from my Mac with minimal effort is definitely there. I might adopt this. Why? There are two things I rely on extensively when writing documentation – quick screen captures and quick little GIF animations. Having to save a file, upload it to my Media library, and then reference it is an absolute pain.

I’ll try this for a bit…

Past Research Projects

The following are research projects that, for one reason or another, fell down the priority list and are no longer being actively worked on. I list them here as a possible conversation starter with students looking for interesting work.

  • [IN PREP] Cowen R, Mitchel MW, Hare-Harris A, King BR. Incorporation of Brown’s stages of syntactic and morphological development in a word prediction model of conversational speech from young children.
  • [IN PREP] Cowen R, Mitchel MW, Hare-Harris A, King BR. An adaptive n-gram based stochastic word prediction model for conversational speech.
  • [IN PREP] Hare A, Essae E, King BR, Ledbetter DH, Martin CL. Determining the dosage effect of copy number variants in the human genome.
  • [IN PREP] Ren C, King BR. Protein residue contact map prediction using bagged decision trees.

Current Student Research

These are ongoing projects as of Summer 2019.


Bhagawat Acharya ’20 – Using deep learning for handwriting text recognition.

  • This is a collaborative, interdisciplinary project with Katherine Faull (Comparative Humanities and German Studies) and Carrie Pirmann (Research Services Librarian). We are working together to develop an improved handwriting translation pipeline to increase the HTR throughput of 17th–18th century Moravian handwritten literature that is part of the Moravian archives.
  • Funding – Bucknell Emerging Scholars Summer Research Program

Taehwan Kim ’20 – Using Deep Learning to Forecast Monthly Extreme Temperatures over the United States

  • Undoubtedly, climate change is one of the most pressing, disconcerting issues of our time. Collaborating with atmospheric science and aerosol science expert Dabrina Dutcher, Assistant Professor in Chemistry and Chemical Engineering, we are exploring the use of deep learning to develop advanced models that can improve future temperature predictions.
  • Funding – Katherine Mabis McKenna Environmental Internship

Lily Romano ’20 – Software for Aerosol Analysis

  • We are developing a new software toolkit to aid in the aerosol research of my colleagues in Chemical Engineering, Dabrina Dutcher, PhD and Timothy Raymond, PhD. Lily is resuming work that was initiated by former student Khai Nguyen ’18 on the software, including advancing the data analysis tools available for aerosol researchers.
  • Funding – Clare Boothe Luce Research Scholars Program

Kartikeya Sharma ’20 – Trajectory Gaze Path Analysis on Eye Tracking Data for Autism Spectrum Disorder Studies

  • This is a collaborative project with my colleagues, Vanessa Troiani, PhD and Antoinette Sabatino DiCriscio, PhD at the Geisinger Autism Developmental Medicine Institute. The primary aim is to develop a toolkit for the eye tracking research community that incorporates my novel method for extracting scanpath trends from group-level eye tracking data.
  • Funding – Ciffolillo Healthcare Technology Inventors Program

Yili Wang ’21 – Using deep learning to identify discriminative features of images of high interest to autistic children

  • This is a collaborative project with my colleague Vanessa Troiani, PhD at the Geisinger Autism and Developmental Medicine Institute. It is also a continuation of a project with former student Tongyu Yang ’17, who is continuing to assist with the effort.
  • Funding – Bucknell Program for Undergraduate Research (PUR)


Including a Jupyter Notebook file on WordPress

I’ve been exploring different mechanisms for posting Python Jupyter notebook files on WordPress. Of course, I can use nbconvert to convert my notebook files to other formats – including HTML – right from the command line, then post that file as an embedded HTML block in a WordPress post. However, this sounded like an unnecessary step, since I also wanted the notebook to be available on GitHub, and I did not want to generate this HTML file AND manage a separately published notebook as well. Smells a lot like duplicated effort and wasted time. Thanks to a great WordPress plugin from Andy Challis, also called nbconvert, I was able to achieve what I wanted! See his page at https://www.andrewchallis.co.uk/portfolio/php-nbconvert-a-wordpress-plugin-for-jupyter-notebooks/ for complete instructions.
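For reference, the manual route mentioned above can also be scripted with nbconvert’s Python API – a minimal sketch, assuming nbconvert is installed and using a hypothetical notebook filename:

# Convert a notebook to HTML programmatically (pip install nbconvert).
# Equivalent CLI: jupyter nbconvert --to html hierarchical.ipynb
# "hierarchical.ipynb" is just an example filename.
from nbconvert import HTMLExporter

exporter = HTMLExporter()
body, resources = exporter.from_filename("hierarchical.ipynb")

with open("hierarchical.html", "w", encoding="utf-8") as f:
    f.write(body)  # this HTML could be pasted into a WordPress HTML block

Of course, the whole point of the plugin below is to skip that step entirely.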

  1. If you haven’t yet, you must install WP Pusher as a plugin in your WordPress site. (See this for more info.)
  2. Go to his web page for nbconvert, copy the CSS custom code displayed on the page.
  3. Go to your WordPress site, and add the custom CSS displayed on the page above into Appearance -> Customize -> Additional CSS
  4. Go to https://github.com/ghandic/nbconvert and verify the latest instructions. Install the nbconvert shortcode plugin through WP Pusher. Activate it.
  5. That’s it!

Follow the instructions to include your own Jupyter notebook file available on GitHub.

Example

Here is an example. In a standalone text (or paragraph) block, I included the following shortcode:

[nbconvert url="https://github.com/bkingcs/python_snippets/blob/master/clustering/hierarchical.ipynb" /]

This generates the following: