Microsoft has revealed a new kind of cyberattack that could compromise user privacy during conversations with AI chatbots like ChatGPT, Gemini, or Grok. The company explained that the threat, named “Whisper Leak,” doesn’t let attackers read your actual messages, but it can reveal what subject you’re talking about. The findings suggest that even encrypted chats may not be as private as users believe, making this a growing security concern.
AI chatbots generate responses using a technique called “streaming,” where answers are delivered token by token instead of being sent all at once. Although the data is encrypted, an eavesdropper can analyze the traffic flow, namely the rate and size of the encrypted packets carrying those tokens, to estimate the conversation’s topic. For instance, longer or heavier data transfers may indicate a detailed discussion, helping attackers deduce whether the chat involves politics, personal issues, or business data.
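To make the mechanism concrete, here is a minimal sketch in Python of how such a traffic classifier could work in principle. Everything in it is an assumption for illustration: the traces are synthetic, and the summary features and random-forest model merely stand in for whatever a real attacker, or Microsoft’s proof of concept, would actually use.

```python
# Minimal sketch of the Whisper Leak idea: guess a conversation's topic
# from the sizes and timings of encrypted streaming packets alone.
# All data is synthetic; features and model are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_trace(topic: str, n_packets: int = 80) -> np.ndarray:
    """Fake a stream of (packet_size, inter_arrival_gap) pairs.
    Assumption: 'sensitive' topics yield longer, burstier replies."""
    if topic == "sensitive":
        sizes = rng.normal(220, 40, n_packets)   # larger token chunks
        gaps = rng.exponential(0.05, n_packets)
    else:
        sizes = rng.normal(140, 30, n_packets)
        gaps = rng.exponential(0.03, n_packets)
    return np.column_stack([sizes, gaps])

def features(trace: np.ndarray) -> np.ndarray:
    """Summarize a trace using only metadata, never plaintext."""
    sizes, gaps = trace[:, 0], trace[:, 1]
    return np.array([sizes.mean(), sizes.std(), sizes.sum(),
                     gaps.mean(), gaps.std(), len(sizes)])

X = np.array([features(synthetic_trace(t))
              for t in ["sensitive", "benign"] * 500])
y = np.array([0, 1] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("topic guessed from traffic shape alone:",
      f"{clf.score(X_te, y_te):.0%} accuracy")
```

On this deliberately easy synthetic data the split is trivial; a real attack has to cope with far noisier traces, but the principle is the same: the classifier never decrypts anything, yet still learns the topic.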
According to a report by Forbes, some governments and surveillance agencies may use this technique to monitor citizens, journalists, or political activists. Countries with tight information control could exploit such side-channel attacks to discover what topics people are discussing with AI systems. Microsoft warned that this kind of “pattern spying” could pose serious threats to user freedom, privacy, and democracy worldwide.
Microsoft clarified that these conversations remain encrypted, meaning the content itself cannot be read directly. Encryption, however, does not mask traffic patterns, so an observer can still pick up signals and make predictions about the subject being discussed. Such attacks are known as “side-channel attacks”: information leaks indirectly, not through the message content but through its transmission behavior.
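A small, hypothetical demonstration of why encryption alone doesn’t close the gap: a TLS-style cipher hides content but leaves length in plain view. The snippet below uses AES-GCM from Python’s `cryptography` package as a stand-in for a TLS record cipher; the exact 16-byte overhead is cipher-specific, but the point is general.

```python
# Encryption hides WHAT was sent, not HOW MUCH: ciphertext length
# tracks plaintext length almost exactly, so packet sizes leak.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aes = AESGCM(key)

for msg in [b"yes", b"Tell me about treatment options for my diagnosis."]:
    nonce = os.urandom(12)
    ct = aes.encrypt(nonce, msg, None)
    print(f"plaintext {len(msg):3d} bytes -> ciphertext {len(ct):3d} bytes")
# An eavesdropper who sees only packet lengths still learns which reply
# was a short token and which was a long one.
```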
The biggest risk, Microsoft says, lies in the exposure of sensitive information. Attackers could guess whether a user is discussing financial plans, personal health details, company projects, or political issues. Even without reading the messages, knowing the topic can help malicious actors plan follow-on attacks, phishing attempts, or social engineering. Users are therefore advised to limit personal or confidential discussions with AI chatbots.
In some nations, surveillance bodies are already known to track the online communications of activists, journalists, and dissenters. With “Whisper Leak,” such monitoring could become even easier: governments might not see the full chat, but they could infer whether citizens are researching political or controversial subjects, raising concerns over misuse and violations of digital freedom.
Microsoft recommends maintaining maximum privacy while using AI tools: avoid sharing sensitive or personally identifiable information with AI chatbots, and use secure, private networks rather than public Wi-Fi when chatting with AI systems. The company has also urged developers to improve response-streaming mechanisms and anonymize data traffic to reduce such leaks. As AI becomes more common, Microsoft warns, attackers will build smarter models to exploit even the smallest patterns, making user caution the best defense.
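One way developers could act on that advice, sketched here under stated assumptions, is to pad each streamed chunk up to a fixed bucket size before encryption so that packet lengths stop tracking token lengths. The bucket sizes and framing below are hypothetical choices for illustration, not a description of any vendor’s deployed fix.

```python
# Sketch of a padding mitigation: every streamed chunk is grown to the
# next bucket boundary with random filler, starving a traffic classifier
# of its size signal. Bucket sizes and framing are illustrative only.
import secrets

BUCKETS = (64, 128, 256, 512)  # hypothetical size classes

def pad_chunk(chunk: bytes) -> bytes:
    """Prefix the true length, then pad with random bytes to a bucket.
    A real protocol would do this inside the encrypted stream so the
    receiver can strip the padding after decryption."""
    target = next(b for b in BUCKETS if b >= len(chunk) + 2)
    filler = secrets.token_bytes(target - len(chunk) - 2)
    return len(chunk).to_bytes(2, "big") + chunk + filler

def unpad_chunk(padded: bytes) -> bytes:
    n = int.from_bytes(padded[:2], "big")
    return padded[2:2 + n]

for token in [b"Hi", b"a much longer streamed token chunk"]:
    padded = pad_chunk(token)
    assert unpad_chunk(padded) == token
    print(f"{len(token):2d}-byte chunk -> {len(padded)}-byte packet")
```

Padding alone fixes only the size channel; timing would still need jitter or token batching, which is why the article’s call to rework streaming mechanisms goes beyond simply adding filler bytes.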