As job losses loom, Anthropic launches program to track AI’s economic fallout
Here’s what you need to know:
Amid this backdrop, Anthropic on Friday launched its Economic Futures Program, a new initiative to support research on AI’s impacts on the labor market and global economy and to develop policy proposals to prepare for the shift.
Anthropic is one of the big AI companies whose entire existence is focused on developing new AI tools, a company that pushes those tools out with no apparent concern for their impact on humanity, and whose livelihood depends entirely on selling its product to greedy corporations worldwide – corporations buying into the hype of saving billions of dollars by displacing human skill. And now that same company is investing in a new initiative to help understand how AI is impacting the labor market and the global economy.
Let that sink in. Can anyone say “conflict of interest”?
Anthropic’s CEO, Dario Amodei, has been oddly vocal about his own company’s destructive impact:
At least one prominent name has shared his views on the potential economic impact of AI: Anthropic’s CEO Dario Amodei. In May, Amodei predicted that AI could wipe out half of all entry-level white-collar jobs and spike unemployment to as high as 20% in the next one to five years.
And yet they keep operating, business as usual. I do not think anyone is so naive as to believe that any tech CEO has a genuine concern for how their product may (or may not) devastate humanity. Amodei has been a bit more extreme than the other CEOs in sounding alarms, and yet Anthropic has continued to release more capable AI models and tools at an unprecedented pace.
Remember that time long ago (only a few years) when big corporations were expected to adhere to the triple bottom line?
The triple bottom line is a business concept that states firms should commit to measuring their social and environmental impact—in addition to their financial performance—rather than solely focusing on generating profit, or the standard “bottom line.”
Sadly, as is the usual outcome with humans, greed won, flipping the proverbial bird to human and societal concerns and the environment. The amount of harm these companies are actively doing to two of the three components suggests they have no actual concern for anything other than one bottom line – profit.
If Anthropic, OpenAI, Perplexity, Tesla, Google, Amazon, and the rest actually cared about the future of humanity and the planet, one might think they’d slow down, carefully evaluate the impact of their products, and be more responsible. Perhaps they would provide massive funding for the academics and researchers invested in this space to start working with them on how to move forward in a way that is responsible to people and the planet.
They do not care. Virtue signaling through the announcement of some initiative to study the damage Anthropic and other AI companies are actively doing is a bit like a wolf draped in a gardener’s cloak, planting a couple of seeds with one hand while uprooting the forest with the other.
Amodei (Anthropic), Cook (Apple), Altman (OpenAI), Zuckerberg (Meta), Huang (Nvidia), Nadella (Microsoft), Pichai (Google), Jassy (Amazon), and Musk (Tesla) are just a handful of the CEOs who run companies highly invested in seeing AI “succeed” (which, from their viewpoint, means massive profits). These companies are closing in on an aggregate value of $20 trillion. The collective worth of the CEOs, based on publicly available data, could be around $350 billion:
CEO | Company | Estimated Net Worth (USD)
---|---|---
Dario Amodei | Anthropic | $1.2 billion
Sam Altman | OpenAI | $2.8 billion
Mark Zuckerberg | Meta | $12.3 billion
Jensen Huang | Nvidia | $135 billion
Satya Nadella | Microsoft | $500 million
Sundar Pichai | Google | $1.3 billion
Andy Jassy | Amazon | $470–540 million
Elon Musk | Tesla | $194 billion (2025 estimate)
Tim Cook | Apple | ~$2 billion (2024 estimate, based on public filings and recent news)
The above are estimates as of June 2025 and fluctuate with the market. They also represent an unfathomable amount of money that could be used to fix the problems they are creating. Ironically, the AI tools themselves might be an important part of developing solutions.
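For what it’s worth, the $350 billion figure squares with the table: a back-of-the-envelope sum of the estimates above (using the midpoint of Jassy’s range and roughly $2 billion for Cook) lands just under $350 billion. A minimal sketch of that arithmetic:

```python
# Back-of-the-envelope sum of the CEO net-worth estimates from the table, in billions of USD.
# Jassy's entry uses the midpoint of the $470–540 million range; Cook's uses ~$2 billion.
net_worth_billions = {
    "Dario Amodei": 1.2,
    "Sam Altman": 2.8,
    "Mark Zuckerberg": 12.3,
    "Jensen Huang": 135.0,
    "Satya Nadella": 0.5,
    "Sundar Pichai": 1.3,
    "Andy Jassy": 0.505,
    "Elon Musk": 194.0,
    "Tim Cook": 2.0,
}

total = sum(net_worth_billions.values())
print(f"Combined net worth: about ${total:.1f} billion")  # ~$349.6 billion
```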
It is time these companies did more to invest in education, science, and research so that we, as students and academics, can help them change the course of this behemoth of a ship and steer it toward responsible AI – an AI that does not neglect humanity and the planet. These companies all reside in a country that is actively working to dismantle education, research, and science – all things that brought America greatness long ago. Do these companies care about the longevity of their product? Do they want to ensure they have a workforce in the future that can continue to contribute to the science and technology necessary for their own growth? These companies must pull their heads out of their huge profit mounds and use their massive pools of money to invest in our future. They should be doing far more than just launching initiatives that let us investigate the obvious negative impacts they are having on the workforce and the economy.