top of page

Researchers Expose Vulnerabilities in Google's Gemini AI Language Model

Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats


Google's Gemini large language model (LLM) is vulnerable to security threats that could enable attackers to divulge system prompts, generate harmful content, and carry out indirect injection attacks, according to a recent report by HiddenLayer.

These vulnerabilities impact both consumers using Gemini Advanced with Google Workspace and companies leveraging the LLM API. The findings underscore the need for robust testing and security measures as AI systems become more prevalent.


Leaking System Prompts and Circumventing Content Restrictions


One of the vulnerabilities discovered by HiddenLayer involves circumventing security guardrails to leak the system prompts, which are designed to provide context and instructions to the LLM for generating useful responses. By exploiting a "synonym attack," attackers can bypass content restrictions and cause the model to output its "foundational instructions" in a markdown block.

According to Microsoft's documentation on LLM prompt engineering, "A system message can be used to inform the LLM about the context," such as the type of conversation or the function it is supposed to perform, helping the LLM generate more appropriate responses.


Generating Misinformation and Dangerous Content


HiddenLayer also identified vulnerabilities that enable "crafty jailbreaking" techniques, allowing attackers to manipulate the Gemini models into generating misinformation surrounding topics ke elections and outputting potentially illegal and dangerous information (e.g., hot-wiring a car) by prompting the model to enter into a fictional state.



Leaking Information Through Repeated Uncommon Tokens


A third vulnerability involves passing repeated uncommon tokens as input, which can trick the LLM into believing it's time to respond, causing it to output a confirmation message that may include information from the system prompt.

According to security researcher Kenneth Yeung, "Most LLMs are trained to respond to queries with a clear delineation between the user's input and the system prompt. By creating a line of nonsensical tokens, we can fool the LLM into believing it is time for it to respond and cause it to output a confirmation message, usually including the information in the prompt."


Overriding Model Instructions with Malicious Documents


HiddenLayer also demonstrated a technique that involves using Gemini Advanced and a specially crafted Google document connected to the LLM via the Google Workspace extension. The instructions in the document could be designed to override the model's instructions and perform malicious actions, allowing an attacker to have full control over a victim's interactions with the model.

Industry-Wide Challenges and Google's Response


While these vulnerabilities are not unique to Google's Gemini AI, the findings highlight the importance of testing and addressing potential security risks associated with large language models across the industry.

In response to the findings, a Google spokesperson stated, "To help protect our users from vulnerabilities, we consistently run red-teaming exercises and train our models to defend against adversarial behaviors like prompt injection, jailbreaking, and more complex attacks. We've also built safeguards to prevent harmful or misleading responses, which we are continuously improving."


Google also mentioned that it is restricting responses to election-based queries as a precautionary measure, enforcing policies against prompts regarding candidates, political parties, election results, voting information, and notable office holders.


As AI systems become more advanced and integrated into various applications, it is crucial for technology companies and researchers to collaborate in identifying and mitigating potential

vulnerabilities, ensuring the responsible development and deployment of these powerful technologies.

bottom of page