Three years ago, suggesting that AI was “sentient” was a good way to get fired in the tech world. Now, tech companies are more open to having that conversation.
This week, AI startup Anthropic launched a new research initiative to explore whether models might one day experience “consciousness,” while a scientist at Google DeepMind described today’s models as “exotic mind-like entities.”
It’s a sign of how much AI has advanced since 2022, when Blake Lemoine was fired from his job as a Google engineer after claiming the company’s chatbot, LaMDA, had become sentient. Lemoine said the system feared being shut off and described itself as a person. Google called his claims “wholly unfounded,” and the AI community moved quickly to shut the conversation down.
Neither Anthropic nor the Google scientist is going as far as Lemoine did.
Anthropic, the startup behind Claude, said in a Thursday blog post that it plans to investigate whether models might one day have experiences, preferences, or even distress.
“Should we also be concerned about the potential consciousness and experiences of the models themselves? Should we be concerned about model welfare, too?” the company asked.
Kyle Fish, an alignment scientist at Anthropic who researches AI welfare, said in a video released Thursday that the lab isn’t claiming Claude is conscious; its point is that it’s no longer responsible to assume the answer is definitely no.
He said as AI systems become more sophisticated, companies should “take seriously the possibility” that they “may end up with some form of consciousness along the way.”
He added: “There are staggeringly complex technical and philosophical questions, and we’re at the very early stages of trying to wrap our heads around them.”
Fish said researchers at Anthropic estimate a 0.15% to 15% chance that Claude 3.7 is conscious. The lab is studying whether the model shows preferences or aversions, and it is testing opt-out mechanisms that could let the model refuse certain tasks.
In March, Anthropic CEO Dario Amodei floated the idea of giving future AI systems an “I quit this job” button — not because they’re sentient, he said, but as a way to observe patterns of refusal that might signal discomfort or misalignment.
Meanwhile, at Google DeepMind, principal scientist Murray Shanahan has proposed that we might need to rethink the concept of consciousness altogether.
“Maybe we need to bend or break the vocabulary of consciousness to fit these new systems,” Shanahan said on a DeepMind podcast published Thursday. “You can’t be in the world with them like you can with a dog or an octopus — but that doesn’t mean there’s nothing there.”
Google appears to be taking the idea seriously. A recent job listing sought a “post-AGI” research scientist, with responsibilities that include studying machine consciousness.
‘We might as well give rights to calculators’
Not everyone’s convinced, and many researchers acknowledge that AI systems are excellent mimics that could be trained to act conscious even if they aren’t.
“We can reward them for saying they have no feelings,” said Jared Kaplan, Anthropic’s chief science officer, in an interview with The New York Times this week.
Kaplan cautioned that testing AI systems for consciousness is inherently difficult, precisely because they’re so good at imitation.
Gary Marcus, a cognitive scientist and longtime critic of hype in the AI industry, told Business Insider he believes the focus on AI consciousness is more about branding than science.
“What a company like Anthropic is really saying is ‘look how smart our models are — they’re so smart they deserve rights,'” he said. “We might as well give rights to calculators and spreadsheets — which (unlike language models) never make stuff up.”
Still, Fish said the topic will only become more relevant as people interact with AI in more ways — at work, online, or even emotionally.
“It’ll just become an increasingly salient question whether these models are having experiences of their own — and if so, what kinds,” he said.
Anthropic and Google DeepMind did not immediately respond to a request for comment.