After spending the day promoting his company’s AI technology at a developer conference, Anthropic’s CEO issued a warning: AI may eliminate 50% of entry-level white-collar jobs within the next five years.

“We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,” Dario Amodei told Axios in an interview published Wednesday. “I don’t think this is on people’s radar.”

The 42-year-old CEO added that unemployment could spike to between 10% and 20% within the next five years. He told Axios he wanted to share his concerns to push the government and other AI companies to prepare the country for what's to come.

“Most of them are unaware that this is about to happen,” Amodei said. “It sounds crazy, and people just don’t believe it.”

Amodei said the development of large language models is advancing rapidly, and they are becoming capable of matching or exceeding human performance. He said the US government has remained quiet about the issue, fearing that workers would panic or that the country could fall behind China in the AI race.

Meanwhile, business leaders are seeing savings from AI while most workers remain unaware of the changes that are underway, Amodei said.

He added that AI companies and the government need to stop “sugarcoating” the risks of mass job elimination in fields including technology, finance, law, and consulting. He said entry-level jobs are especially at risk.

Amodei’s comments come as Big Tech firms’ hiring of new grads dropped about 50% from pre-pandemic levels, according to a new report by the venture capital firm SignalFire. The report said that’s due in part to AI adoption.

A round of brutal layoffs swept the tech industry in 2023, with hundreds of thousands of jobs eliminated as companies looked to slash costs. While SignalFire's report said hiring for mid- and senior-level roles saw an uptick in 2024, entry-level positions never quite bounced back.

In 2024, early-career candidates accounted for 7% of total hires at Big Tech firms, down by 25% from 2023, the report said. At startups, that number is just 6%, down by 11% from the year prior.

SignalFire’s findings suggest that tech companies are prioritizing hiring more seasoned professionals and often filling posted junior roles with senior candidates.

Heather Doshay, a partner who leads people and recruiting programs at SignalFire, told Business Insider that “AI is doing what interns and new grads used to do.”

“Now, you can hire one experienced worker, equip them with AI tooling, and they can produce the output of the junior worker on top of their own — without the overhead,” Doshay said.

AI can't entirely account for the sudden shrinkage in early-career prospects, however. The report also said that negative perceptions of Gen Z employees and tighter budgets across the industry are contributing to tech's apparent reluctance to hire new grads.

“AI isn’t stealing job categories outright — it’s absorbing the lowest-skill tasks,” Doshay said. “That shifts the burden to universities, boot camps, and candidates to level up faster.”

To adapt to the rapidly changing times, she suggests new grads think of AI as a collaborator, rather than a competitor.

“Level up your capabilities to operate like someone more experienced by embracing a resourceful ownership mindset and delegating to AI,” Doshay said. “There’s so much available on the internet to be self-taught, and you should be sponging it up.”

Amodei’s chilling message comes after the company recently revealed that its chatbot Claude Opus 4 exhibited “extreme blackmail behavior” after gaining access to fictional emails that said it would be shut down. While the company was transparent with the public about the results, it still released the next version of the chatbot.

It’s not the first time Amodei has warned the public about the risks of AI. On an episode of The New York Times’ “Hard Fork” podcast in February, the CEO said the possibility of “misuse” by bad actors could threaten millions of lives. He said the risk could come as early as “2025 or 2026,” though he didn’t know exactly when it would present “real risk.”

Anthropic has emphasized the importance of third-party safety assessments and regularly shares the risks uncovered by its red-teaming efforts. Other companies have taken similar steps, relying on third-party evaluations to test their AI systems. OpenAI, for example, says on its website that its API and ChatGPT business products undergo routine third-party testing to “identify security weaknesses before they can be exploited by malicious actors.”

Amodei acknowledged to Axios the irony of the situation — as he shares the risks of AI, he’s simultaneously building and selling the products he’s warning about. But he said the people who are most involved in building AI have an obligation to be up front about its direction.

“It's a very strange set of dynamics, where we're saying: ‘You should be worried about where the technology we're building is going,’” he said.

Anthropic did not respond to a request for comment from Business Insider.
