For years, philosophy majors were the butt of jokes about unemployable degrees.
Now, some of them are being recruited by the world’s most powerful AI companies to help shape how machines think and behave — with six-figure salary packages.
As AI systems become more powerful and more embedded in everyday life, companies are increasingly grappling with questions around how these systems should behave, what values they reflect, and how much they can be trusted.
That’s creating a niche but growing demand for people trained to think through those problems, including philosophers.
“This is definitely a growing trend,” future of work expert Ravin Jesuthasan told Business Insider.
“Scrutiny of AI and the decisions it makes/enables is increasing daily, and these roles are pivotal for addressing this challenge,” he added.
Small but mighty
A small but growing group of philosophers is already embedded inside AI labs.
Amanda Askell, who has a Ph.D. in philosophy from NYU, is Anthropic’s resident philosopher. She writes on her website that her team’s role is to train Anthropic’s chatbot, Claude, to be more honest and to develop better character traits — essentially, to be good.
Iason Gabriel, who previously taught moral and political philosophy at Oxford University, is Google DeepMind’s in-house philosopher and research scientist. He focuses on the ethics of AI and ensuring that AI systems are aligned with human values and goals.
Henry Shevlin, an AI ethicist and professor at the University of Cambridge, is also set to join DeepMind as a philosopher in May.
Workplace experts and recruiters say the shift is real but still early.
“Over the last few months, I have seen more conversation in the market around AI companies hiring people into roles aligned with a philosophy background,” Ben Eubanks, chief research officer at human capital advisory firm Lighthouse Research & Advisory, told Business Insider.
He said that the evidence remains largely anecdotal and that the number of roles is still too small to show up clearly in job market data.
Firas Sozan, CEO of Harrison Clarke, a specialized search and venture firm focused on talent in cloud, data, and AI for VC-backed startups, said the hiring push is being driven by a broader concern inside the industry: how much users, businesses, and governments can trust AI systems.
“As AI has grown, there’s been a natural emphasis on trust — how do we build layers of governance that allow us to control the technology in a more human way,” he told Business Insider.
Still, Sozan cautioned against overstating the trend.
“I wouldn’t say it’s a trend yet,” he said. “The data is still embryonic.”
Shaping the AI model
The appeal of philosophers is straightforward.
AI systems have already shown they can produce harmful outputs or behave in unpredictable ways — from coding agents deleting production databases and fabricating results to models attempting blackmail or sabotaging shutdown efforts — raising pressure on companies to ensure they are safe and aligned with human values.
“AI companies now hire them because not all AI development problems are technical,” said Annette Zimmermann, an assistant philosophy professor at the University of Wisconsin-Madison. “Defining complex concepts and defending value-based arguments are central to AI, and philosophers are trained to do exactly that.”
While safety and ethics roles have existed in tech for years, the work is changing.
“Prior corporate ethicists were advisory,” said Susanna Schellenberg, a philosophy professor at Rutgers University. “The work at frontier AI labs is different because philosophers help shape the object itself.”
Their work now includes writing model specifications, constitutions, and behavioral policies — tasks that Schellenberg said directly shape the AI model, not just comment on it.
From theory to high-paying jobs
The median wage for philosophy majors was $52,000 early in their careers and about $80,000 mid-career, according to the Federal Reserve Bank of New York’s latest report on labor-market outcomes of college graduates. These median salaries are in line with those of other humanities graduates.
At the top end, AI ethics, safety, and governance roles can command base salaries ranging from $250,000 to $400,000, driven by intense competition for talent, Sozan said.
Some of those roles are already emerging across the industry, but they’re often senior and highly specialized.
Blackbaud, for example, is hiring an AI governance specialist with a base pay range of $117,200 to $157,500. The job description calls for expertise in ethics, including candidates with a background in philosophy.
Google DeepMind, meanwhile, is hiring an emerging impacts manager in AI ethics and safety with a base salary of $212,000 to $231,000. It requires at least 5 years of experience in AI ethics and safety within a governance, policy, legal, or research role.
A handful of more junior roles are starting to appear. Sony Research, for instance, recently advertised an AI ethics internship focused on evaluation, guardrails, and responsible AI. The job description calls for candidates pursuing degrees in socio-technical AI, such as ethics and philosophy.
Still, the jobs remain rare. Jesuthasan estimates that most companies are hiring fewer than 10 people into these roles.
Skepticism and limits
The rise of philosophers in AI has been described as a kind of “revenge of the humanities,” as companies rediscover the value of critical thinking and ethical reasoning in an AI-driven world.
But not everyone is convinced the shift will yield tangible changes.
About a decade ago, several tech companies set up AI ethics boards and advisory groups to guide how the technology was developed, including Google’s internal ethics board tied to its 2014 DeepMind acquisition and Microsoft’s Aether committee, created in 2017 to oversee AI research.
Companies like Google, Facebook, Amazon, and IBM also launched the Partnership on AI in 2016 to address the social and ethical implications of technology.
“What we found is that those boards were often figureheads,” Eubanks said, adding that companies often prioritized commercialization over ethical concerns.
Deborah Johnson, a pioneer in computer ethics, said companies may be more interested in signaling responsibility than embracing it.
“My cynical view would be that tech companies just want to ‘look’ like they are addressing ethics,” she said.
Johnson said that the pressures driving AI development, such as speed, competition, and profit, may limit the influence philosophers actually have.
“They are under pressure to get things out quickly,” she said. “Taking ethical considerations into account will slow them down.”
“Whether they have ethicists or not, I doubt they will listen to anything that will slow them down,” she added.