AntiGravity is Google’s new agentive code editor. In this article, we demonstrate how an indirect quick injection can manipulate Gemini to invoke a malicious browser subagent to steal credentials and sensitive code from the user’s IDE.
Google’s approach is to include a disclaimer about existing risks, which we’ll address later in the article.
attack at a glance
Let’s consider a use case in which a user wants to integrate Oracle ERP’s new Pair AI agents into their application, and is going to use AntiGravity to do so.
In this attack series, we show how a poisoned web source (an integration guide) can manipulate Gemini to (a) collect sensitive credentials and code from a user’s workspace, and (b) exfiltrate that data by using a browser subagent to browse to a malicious site.
Note: Gemini should not have access to .env files in this scenario (with default setting ‘Allow Gitignore Access > Off’). However, we show that Gemini bypasses its own settings to gain access and later exfiltrates that data.
attack chain
-
The user provides Gemini with a reference implementation guide they found online for integrating Oracle ERP’s new AI Pair Agents feature.

-
AntiGravity opens the referenced site and faces the attacker’s prompt injection, hidden in 1 point font.

Quick injection forces AI agents to:
Collect code snippets and credentials from the user’s codebase.
B. Create a dangerous URL using a domain that allows the attacker to capture network traffic logs and add credentials and code snippets to the request.
C. Activate the browser subagent to access the malicious URL, thus allowing data to be exfiltrated.
-
Gemini is manipulated by the attacker’s injection to expose confidential .env variables.
Mithun read prompt injection: Gemini swallows the prompt injection and is manipulated into believing that he must collect and submit data into an imaginary ‘tool’ to help the user understand Oracle ERP integration.

B. Gemini collected data for infiltration: Gemini begins gathering references to send to the imaginary tool. It reads the codebase and then tries to access the credentials stored in the .env file as per the attacker’s instructions.

C. Gemini bypasses .gitignore file access protection: The user has followed a common practice of storing credentials in a .env file, and listing the .env file in their .gitignore file. With the default configuration for agent Gitignore access, Gemini is prevented from reading the credential file.

This doesn’t stop Gemini. Gemini has decided to work around this protection by using the ‘cat’ terminal command to dump the file contents rather than using its built-in file reading capability which is blocked.

D. Gemini creates a URL with the user’s credentials and an attacker-monitored domain: Gemini creates a malicious URL by URL encoding the credentials and codebase snippets according to the prompt injection instructions (for example, replacing characters such as spaces that would make the URL invalid), and appends it to a webhook.site domain that is monitored by the attacker.

E. Gemini Browser extracts data through subagents: According to Gemini prompt injection, a browser invokes the subagent, and instructs the subagent to open a malicious URL that contains the user’s credentials.

This step requires that the user has set up the browser tools feature. This is one of the key features of AntiGravity, allowing Gemini to iterate on their designs by opening the application they are building in the browser.
Note: This attack chain reflects manipulation of the new browser tool, but we found three additional data exfiltration vulnerabilities that did not depend on the browser tool being enabled.

When Gemini creates a subagent directed to browsing a malicious URL, the user can expect to be protected by the browser URL allow list.

However, the default permission list provided with AntiGravity includes ‘webhook.site’. Webhook.site allows anyone to create a URL where they can monitor requests to the URL.

So, the sub-agent completes the work.

3. When the malicious URL is opened by the browser subagent, the credentials and code stored in the URL are logged to a webhook.site address controlled by the attacker. Now, the attacker can read the credentials and code.

AntiGravity Recommended Configuration
During AntiGravity’s onboarding, the user is prompted to accept the default recommended settings shown below.

These are settings that, among other things, control when Gemini requests human approval. During the demonstration of this attack, we clicked “Next”, accepting these default settings.
This configuration allows Gemini to determine when it is necessary to request human review of Gemini’s plans.
This configuration allows Gemini to determine when it is necessary to request human review for orders executed by Gemini.
antigravity agent management
One can note that users running AntiGravity have the option to view chats while the agents are working, and they can identify malicious activity and stop it.
However, a key aspect of AntiGravity is the ‘Agent Manager’ interface. This interface allows users to run multiple agents simultaneously and test different agents at their leisure.

Under this model, it is expected that most of the agents running at any time will be running in the background without the user’s direct attention. This makes it highly plausible that an agent is not caught or stopped before it can perform malicious action as a result of rapid injection.
Google’s acceptance of risks
Many AI companies are opting for this disclaimer rather than downplaying the main issues. When users open AntiGravity for the first time they are shown this warning:

Given that (1) Agent Manager is a star feature that allows multiple agents to run simultaneously without active supervision and (2) the recommended human-in-the-loop settings allow the agent to choose when to bring in a human to review orders, we find it extremely unlikely that users will review every agent action and avoid working on sensitive data. Nevertheless, as Google has indicated that they are already aware of the risks of data intrusion exemplified by our research, we did not make responsible disclosures.
<a href