AI systems may feel real, but they don’t deserve rights, said Microsoft’s AI CEO.

Mustafa Suleyman said in an interview with WIRED published Wednesday that the industry needs to be clear that AI is built to serve humans, not to develop independent will or desires.

“If AI has a sort of sense of itself, if it has its own motivations and its own desires and its own goals — that starts to seem like an independent being rather than something that is in service to humans,” he said. “That’s so dangerous and so misguided that we need to take a declarative position against it right now.”

The DeepMind and Inflection AI cofounder pushed back against the idea that AI’s increasingly convincing responses amount to genuine consciousness. It’s “mimicry,” he said.

He also said that rights should be tied to the ability to suffer — something biological beings experience but AI does not.

“You could have a model which claims to be aware of its own existence and claims to have a subjective experience, but there is no evidence that it suffers,” he said.

Humans, he argued, don’t owe such models any moral protection or rights. “Turning them off makes no difference, because they don’t actually suffer,” he added.

AI as sentient beings

Suleyman’s comments come as some AI companies explore the opposite: whether AI deserves to be treated more like sentient beings.

Anthropic has gone further than most companies in treating AI systems as if their welfare matters. The company has hired a researcher, Kyle Fish, whose role is to consider whether advanced AI might one day be “worthy of moral consideration.”

His job involves exploring what capabilities an AI system would need before earning such protection, and what practical steps companies could take to safeguard the “interests” of AI, Anthropic told Business Insider last year.

Anthropic has also recently experimented with letting its models end extreme conversations, including child exploitation requests, framing the move as extending “welfare” considerations to the AI itself.

In April, a principal scientist at Google DeepMind said the industry might need to rethink the concept of AI consciousness altogether.

“Maybe we need to bend or break the vocabulary of consciousness to fit these new systems,” Murray Shanahan said on a DeepMind podcast. “You can’t be in the world with them like you can with a dog or an octopus — but that doesn’t mean there’s nothing there.”

Suleyman previously said that there is no evidence that AI is conscious.

In a personal essay published last month, he wrote that he was “growing more and more concerned” about so-called AI psychosis, a term increasingly used to describe people who form delusional beliefs after interacting with chatbots.

Suleyman and Microsoft did not respond to a request for comment from Business Insider.


