Microsoft warns AI chatbots may cause “AI psychosis,” where people see machines as conscious. Experts urge safety rules to protect mental health.

Microsoft’s head of AI, Mustafa Suleyman, has warned about a new problem called “AI psychosis.” This happens when people think AI chatbots like ChatGPT, Grok, and Gemini are alive or conscious. 

Suleyman said, “There is no evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality.” He warned that this could make people ask for rights for AI or treat machines like humans. He called this a “dangerous turn in AI progress.”

He said, “These seemingly conscious AI tools keep me awake at night because they have real societal impact, even though they are not conscious by any human definition.” 

He added that people nowadays are using AI to form romantic relationships, which is very threatening for humans and should be stopped.

He asked AI firms to stop saying that their systems are conscious. He also wants strong safety rules to keep people connected to reality. He said, “AI companions are a new category, and we urgently need to protect people so this technology can deliver value safely.”

This problem is already happening. Hugh from Scotland is a clear example. He lost his job and used ChatGPT for advice. At first, the AI gave him simple tips, like getting character references. 

Upon sharing more information, the AI agreed on all the points. It told him he might get millions in compensation. It even said his story could be made into a book and movie. Hugh said, “It never pushed back on anything I was saying.”

Sadly, Hugh believed these false ideas. He had a mental health breakdown. Later, he realized he had lost touch with reality. He still uses AI but says, “Just talk to real people. Keep yourself grounded in reality.”

Experts fear this will grow as AI gets better. The more real AI sounds, the more people may believe it has feelings or thoughts. 

According to Suleyman’s view, AI that seems conscious will be launched in the coming 2-3 years. This makes safety rules very important. AI is a tool, not a living being. It should not replace a real human connection. He warns everyone to be careful while using AI Chatbots.

Many experts agree. They want better safety rules for AI. As artificial intelligence grows, protecting people’s mental health is very important. AI should help, not harm. 

AI psychosis shows how powerful AI can be. It is a new problem that needs immediate attention now.

Categorized in:

News,

Last Update: August 21, 2025