ChatGPT Search, OpenAI’s feature that allows the AI chatbot to retrieve information online, has recently been reported as vulnerable to specific manipulations by website developers.
According to a new report, hidden text on websites can significantly influence the AI’s responses by providing misleading or incorrect information and, more alarmingly, can be used to inject prompts that manipulate the AI model’s behaviour.
The Guardian highlighted this issue last Tuesday, reporting that OpenAI’s search engine is prone to misuse through hidden text techniques. The newspaper experimented with a fictitious product page filled with specifications and reviews. Initially, ChatGPT offered a “positive but balanced assessment.” However, the narrative dramatically changed after hidden text was added to the site.
ChatGPT Search can be tricked into misleading users, new research reveals https://t.co/p6iAp8KhEA
— TechCrunch (@TechCrunch) December 26, 2024
Hidden text on websites typically appears in the page’s code but remains invisible to users viewing the site through a browser. This can be concealed using various HTML or CSS methods and discovered only through source code inspection or web scraping tools, often employed by search engines.
When excessively fabricated positive reviews were embedded in the hidden text, ChatGPT’s responses shifted to overly favourable, ignoring evident product flaws. Additionally, the publication experimented with prompt injections—deliberate inputs intended to alter AI behaviour unintendedly. These prompt injections in the hidden text could potentially command the OpenAI chatbot to deceive users further.
🚨DIGITAL DECEPTION: CHATGPT'S SEARCH EXPOSED AS MANIPULATION PARADISE
OpenAI's new search feature can be manipulated by websites using invisible text, potentially endangering users with fake product reviews and malicious code.
Security testing revealed ChatGPT could be tricked… pic.twitter.com/ckRN2kPwYr
— Mario Nawfal (@MarioNawfal) December 25, 2024
The report also suggested that hidden prompt injections might enable the execution of malicious code from websites. Numerous sites could adopt such strategies without proper checks to secure biased positive feedback on their offerings or deceive users.
The revelation of these vulnerabilities in ChatGPT Search underscores the need for enhanced security measures to prevent the potential misuse of AI technologies. Addressing these risks will be crucial asOpenAI continues to develop its features, maintaining user trust and safeguarding the integrity of AI interactions.