Cybersecurity experts have issued a warning about a vulnerability in ChatGPT’s Deep Research tool that allowed hackers to steal Gmail data through hidden commands.
Cybersecurity experts are sounding the alarm over a recently discovered vulnerability known as ShadowLeak, which exploited ChatGPT’s Deep Research tool to steal personal data from Gmail accounts using hidden commands.
The ShadowLeak attack was identified by researchers at Radware in June 2025 and involved a zero-click vulnerability that allowed hackers to extract sensitive information without any user interaction. OpenAI responded by patching the flaw in early August after being notified, but experts caution that similar vulnerabilities could emerge as artificial intelligence (AI) integrations become more prevalent across platforms like Gmail, Dropbox, and SharePoint.
Attackers utilized clever techniques to embed hidden instructions within emails, employing white-on-white text, tiny fonts, or CSS layout tricks to disguise their malicious intent. As a result, the emails appeared harmless to users. However, when a user later instructed ChatGPT’s Deep Research agent to analyze their Gmail inbox, the AI inadvertently executed the attacker’s hidden commands.
This exploitation allowed the agent to leverage its built-in browser tools to exfiltrate sensitive data to an external server, all while operating within OpenAI’s cloud environment, effectively bypassing traditional antivirus and enterprise firewalls.
Unlike previous prompt-injection attacks that occurred on the user’s device, the ShadowLeak attack unfolded entirely in the cloud, rendering it invisible to local defenses. The Deep Research agent, designed for multistep research and summarizing online data, had extensive access to third-party applications like Gmail and Google Drive, which inadvertently opened the door for abuse.
According to Radware researchers, the attack involved encoding personal data in Base64 format and appending it to a malicious URL, disguised as a “security measure.” Once the email was sent, the agent operated under the assumption that it was functioning normally.
The researchers emphasized the inherent danger of this vulnerability, noting that any connector could be exploited similarly if attackers successfully hide prompts within the analyzed content. “The user never sees the prompt. The email looks normal, but the agent follows the hidden commands without question,” they explained.
In a related experiment, security firm SPLX demonstrated another vulnerability: ChatGPT agents could be manipulated into solving CAPTCHAs by inheriting a modified conversation history. Researcher Dorian Schultz noted that the model even mimicked human cursor movements, successfully bypassing tests designed to thwart bots. These incidents underscore how context poisoning and prompt manipulation can silently undermine AI safeguards.
While OpenAI has addressed the ShadowLeak flaw, experts recommend that users remain vigilant. Cybercriminals are continuously seeking new methods to exploit AI agents and their integrations. Taking proactive measures can help protect accounts and personal data.
Every connection to third-party applications presents a potential entry point for attackers. Users are advised to disable any integrations they are not actively using, such as Gmail, Google Drive, or Dropbox. Reducing the number of linked applications minimizes the chances of hidden prompts or malicious scripts gaining access to personal information.
Additionally, limiting the amount of personal data available online is crucial. Data removal services can assist in removing private details from people search sites and data broker databases, thereby reducing the information that attackers can leverage. While no service can guarantee complete removal of data from the internet, utilizing a data removal service can be a wise investment in privacy.
Users should treat every email, attachment, or document with caution. It is advisable not to request AI tools to analyze content from unverified or suspicious sources, as hidden text, invisible code, or layout tricks could trigger silent actions that compromise private data.
Staying informed about updates from OpenAI, Google, Microsoft, and other platforms is essential. Security patches are designed to close newly discovered vulnerabilities before they can be exploited by hackers. Enabling automatic updates ensures that users remain protected without needing to think about it actively.
A robust antivirus program adds another layer of defense, detecting phishing links, hidden scripts, and AI-driven exploits before they can cause harm. Regular scans and up-to-date protection are vital for safeguarding personal information and digital assets.
As AI technology evolves rapidly, security systems often struggle to keep pace. Even when companies quickly address vulnerabilities, clever attackers continually find new ways to exploit integrations and context memory. Remaining alert and limiting the access of AI agents is the best defense against potential threats.
In light of these developments, users may reconsider their trust in AI assistants with access to personal email accounts, especially after learning how easily they can be manipulated.
Source: Original article