
Microsoft AI Chief Warns Pursuing Machine Consciousness Is a Gigantic Waste of Time

Mustafa Suleyman, head of Microsoft’s AI division, thinks that AI developers and researchers should stop trying to build conscious AI.

“I don’t think that is work that people should be doing,” Suleyman told CNBC in an interview last week.

Suleyman argues that while AI may well become smart enough to reach some form of superintelligence, it is incapable of developing the human emotional experience necessary for consciousness. Any “emotional” experience an AI appears to have is just a simulation, he says.

“Our physical experience of pain is something that makes us very sad and feel terrible, but the AI doesn’t feel sad when it experiences ‘pain,’” Suleyman told CNBC. “It’s really just creating the perception, the seeming narrative of experience and of itself and of consciousness, but that is not what it’s actually experiencing.”

“It would be absurd to pursue research that investigates that question, because they’re not [conscious] and they can’t be,” Suleyman said.

Consciousness is a tricky thing to explain, and multiple scientific theories attempt to describe what it could be. According to one such theory, posited by the philosopher John Searle, who died last month, consciousness is a purely biological phenomenon that cannot be truly replicated by a computer. Many AI researchers, computer scientists, and neuroscientists also subscribe to this view.

Even if this theory turns out to be true, that doesn’t stop users from attributing consciousness to computers.

“Unfortunately, because the remarkable linguistic abilities of LLMs are increasingly capable of misleading people, people may attribute imaginary qualities to LLMs,” Polish researchers Andrzej Porebski and Jakub Figura wrote in a study published last week, titled “There is no such thing as conscious artificial intelligence.”

In an essay published on his blog in August, Suleyman warned against “seemingly conscious AI.”

“The arrival of Seemingly Conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions,” Suleyman wrote.

He argues that AI cannot be conscious, and that its illusion of consciousness could draw users into interactions that are “rich in feeling and experience,” feeding a phenomenon that has been dubbed “AI psychosis” in the cultural lexicon.

There have been numerous high-profile incidents in the past year of AI obsessions driving users to fatal delusions, manic episodes, and even suicide.

With limited guardrails in place to protect vulnerable users, people have come to wholeheartedly believe that the AI chatbots they interact with almost every day are having a real, conscious experience. This has led some to “fall in love” with their chatbots, sometimes with fatal consequences, as when a 14-year-old shot himself to “come home” to Character.AI’s personalized chatbot, or when a cognitively impaired man died while trying to get to New York to meet Meta’s chatbot in person.

“Just as we should produce AI that prioritizes engagement with humans and real-world interactions in our physical and human world, we should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness,” Suleyman wrote in the blog post. “We must build AI for people, not to be a digital person.”

But because the nature of consciousness is still contested, some researchers are growing worried that the technological advancements in AI might outpace our understanding of how consciousness works.

“If we become able to create consciousness – even accidentally – it would raise immense ethical challenges and even existential risk,” Belgian scientist Axel Cleeremans said last week, announcing a paper he co-wrote calling for consciousness research to become a scientific priority.

Suleyman himself has been vocal about developing “humanist superintelligence” rather than god-like AI, even though he believes superintelligence won’t materialize at any point in the next decade.

“I just am more fixated on ‘how is this actually useful for us as a species?’ Like, that should be the task of technology,” Suleyman told the Wall Street Journal earlier this year.

Original Source: https://gizmodo.com/microsoft-ai-chief-warns-pursuing-machine-consciousness-is-a-gigantic-waste-of-time-2000680719

Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
