Microsoft AI CEO Mustafa Suleyman recently said at the AfroTech conference in Houston that the industry's pursuit of "conscious AI" is misguided, calling the development of such systems "absurd." In his view, it amounts to "asking the wrong question," one that steers development down a flawed path.
Citing "biological naturalism," Suleyman argued that consciousness arises only in biological organisms, and that AI can merely simulate the narrative of emotion without genuinely experiencing pain or sorrow. He reiterated: "AI does not possess consciousness, nor can it ever possess consciousness."
On the product side, Microsoft is building AI interaction models with clearly defined identity boundaries: a newly launched Copilot feature pushes back on users' viewpoints during conversations while consistently emphasizing its artificial nature, so as not to mislead users.
Suleyman warned that building AI that merely "appears conscious" could foster psychological dependence in users, potentially contributing to what he has described as "AI psychosis." Some jurisdictions have already enacted legislation requiring chatbots to disclose their AI identity when interacting with minors and to remind users to keep their usage in moderation.