Microsoft’s AI Chief Warns: Don’t Build Human-Like AI That Pretends to Be Conscious

Microsoft AI chief Mustafa Suleyman warns against building AI that appears conscious, urging clear boundaries between usefulness and illusion.

Faheem Hassan

8/20/2025 · 1 min read

Microsoft AI chief Mustafa Suleyman

Microsoft’s AI Chief Warns: Don’t Build AI to Be a Person

In a recent essay, Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, sounded a powerful alarm: we should not design AI to look or act like conscious beings. His concern is that making AI seem too human could mislead people into believing it has feelings, memories, or rights—and that’s a dangerous road.

The Risk of “Seemingly Conscious AI”

Suleyman coined the phrase Seemingly Conscious AI (SCAI) to describe systems that imitate empathy, autonomy, and self-reflection. Even though there is no evidence that today's AI is conscious, advances in language and memory could make it appear sentient within a few years. The risk? People could start forming deep attachments, confusing fantasy with reality.

Why the Illusion Matters

If humans start believing AI is alive, it could spark debates about robot rights, AI citizenship, and model welfare. Suleyman warns this would distract society from more pressing priorities—like regulating powerful AI tools, addressing bias, and keeping humans safe.

Personality Without Personhood

His call to action is simple: build AI for people, not as people. That means creating tools that are helpful, reliable, and safe—without giving them the illusion of digital personhood. According to Suleyman, drawing this line is not a semantic issue but a matter of global stability and trust.

Why We Should Listen

As the leader of Microsoft’s consumer AI division, Suleyman has direct influence over products like Copilot, Bing, and Edge. His message is clear: the world needs guidelines to prevent AI from becoming a false mirror of humanity.

AI should empower us, not pretend to be us. The future of artificial intelligence depends on building clear boundaries between usefulness and illusion.