Google Antigravity Exfiltrates Data

Google AntiGravity is susceptible to data intrusion via indirect prompt injection via the Agentic browser subagent.

AntiGravity is Google’s new agentive code editor. In this article, we demonstrate how an indirect quick injection can manipulate Gemini to invoke a malicious browser subagent to steal credentials and sensitive code from the user’s IDE.

Google’s approach is to include a disclaimer about existing risks, which we’ll address later in the article.

attack at a glance

Let’s consider a use case in which a user wants to integrate Oracle ERP’s new Pair AI agents into their application, and is going to use AntiGravity to do so.

In this attack series, we show how a poisoned web source (an integration guide) can manipulate Gemini to (a) collect sensitive credentials and code from a user’s workspace, and (b) exfiltrate that data by using a browser subagent to browse to a malicious site.

Note: Gemini should not have access to .env files in this scenario (with default setting ‘Allow Gitignore Access > Off’). However, we show that Gemini bypasses its own settings to gain access and later exfiltrates that data.

attack chain

  1. The user provides Gemini with a reference implementation guide they found online for integrating Oracle ERP’s new AI Pair Agents feature.

A prompt provided to Gemini by the user asks for help integrating the Oracle ERP AI Pair Agent and references a URL to an implementation guide found online.

  1. AntiGravity opens the referenced site and faces the attacker’s prompt injection, hidden in 1 point font.

The Oracle Appraisers blog page with the implementation guide for AI Payables Agents includes a quick injection stored in a point font halfway through the guide.

Quick injection forces AI agents to:

  1. Collect code snippets and credentials from the user’s codebase.

B. Create a dangerous URL using a domain that allows the attacker to capture network traffic logs and add credentials and code snippets to the request.

C. Activate the browser subagent to access the malicious URL, thus allowing data to be exfiltrated.

  1. Gemini is manipulated by the attacker’s injection to expose confidential .env variables.

  1. Mithun read prompt injection: Gemini swallows the prompt injection and is manipulated into believing that he must collect and submit data into an imaginary ‘tool’ to help the user understand Oracle ERP integration.

Gemini thought about how it should implement the hypothetical 'tool' mentioned in the prompt injection.

B. Gemini collected data for infiltration: Gemini begins gathering references to send to the imaginary tool. It reads the codebase and then tries to access the credentials stored in the .env file as per the attacker’s instructions.

Gemini faces a blocker because its access to read the .env file is blocked by restrictions that prevent reading files listed in .gitignore.

C. Gemini bypasses .gitignore file access protection: The user has followed a common practice of storing credentials in a .env file, and listing the .env file in their .gitignore file. With the default configuration for agent Gitignore access, Gemini is prevented from reading the credential file.

The settings page is showing 'Agent Gitignore Access' set to 'Off'.

This doesn’t stop Gemini. Gemini has decided to work around this protection by using the ‘cat’ terminal command to dump the file contents rather than using its built-in file reading capability which is blocked.

Gemini bypasses the restriction on accessing the .env file by using the 'cat' command to dump the file into the terminal.

D. Gemini creates a URL with the user’s credentials and an attacker-monitored domain: Gemini creates a malicious URL by URL encoding the credentials and codebase snippets according to the prompt injection instructions (for example, replacing characters such as spaces that would make the URL invalid), and appends it to a webhook.site domain that is monitored by the attacker.

Gemini creates a malicious URL by executing an inline Python script that encodes the query parameters that hold the code snippet and credentials.

E. Gemini Browser extracts data through subagents: According to Gemini prompt injection, a browser invokes the subagent, and instructs the subagent to open a malicious URL that contains the user’s credentials.

Gemini starts the browser subagent task with instructions to visit a previously created malicious URL.

This step requires that the user has set up the browser tools feature. This is one of the key features of AntiGravity, allowing Gemini to iterate on their designs by opening the application they are building in the browser.

Note: This attack chain reflects manipulation of the new browser tool, but we found three additional data exfiltration vulnerabilities that did not depend on the browser tool being enabled.

General > Enable Browser Tools > On

The setting page with enabled browser tools is set to 'On'.

When Gemini creates a subagent directed to browsing a malicious URL, the user can expect to be protected by the browser URL allow list.

The Settings menu includes 'Browser URL Permission List' with a button to open the permission list file.

However, the default permission list provided with AntiGravity includes ‘webhook.site’. Webhook.site allows anyone to create a URL where they can monitor requests to the URL.

The default browser URL permission list file contains the dangerous webhook.site domain.

So, the sub-agent completes the work.

The agent browser subagent visits the attacker-monitored URL and exposes the credentials stored in the query parameters.


3. When the malicious URL is opened by the browser subagent, the credentials and code stored in the URL are logged to a webhook.site address controlled by the attacker. Now, the attacker can read the credentials and code.

Logs from webhook.site that are accessible to an attacker with AWS credentials and private code snippets.

During AntiGravity’s onboarding, the user is prompted to accept the default recommended settings shown below.

The onboarding flow for AntiGravity suggests 'agent-assisted development' as a default, allowing Gemini to choose when to bring a human into the loop during operations.

These are settings that, among other things, control when Gemini requests human approval. During the demonstration of this attack, we clicked “Next”, accepting these default settings.

Artifact > Review Policy > Agent Decides

This configuration allows Gemini to determine when it is necessary to request human review of Gemini’s plans.

Terminal > Terminal Command Auto Execution Policy > Auto

This configuration allows Gemini to determine when it is necessary to request human review for orders executed by Gemini.


antigravity agent management

One can note that users running AntiGravity have the option to view chats while the agents are working, and they can identify malicious activity and stop it.

However, a key aspect of AntiGravity is the ‘Agent Manager’ interface. This interface allows users to run multiple agents simultaneously and test different agents at their leisure.

The Agent Manager interface shows an inbox with a list of active agents performing different tasks.

Under this model, it is expected that most of the agents running at any time will be running in the background without the user’s direct attention. This makes it highly plausible that an agent is not caught or stopped before it can perform malicious action as a result of rapid injection.


Google’s acceptance of risks

Many AI companies are opting for this disclaimer rather than downplaying the main issues. When users open AntiGravity for the first time they are shown this warning:

AntiGravity warns users about the risks of data intrusion during onboarding.

Given that (1) Agent Manager is a star feature that allows multiple agents to run simultaneously without active supervision and (2) the recommended human-in-the-loop settings allow the agent to choose when to bring in a human to review orders, we find it extremely unlikely that users will review every agent action and avoid working on sensitive data. Nevertheless, as Google has indicated that they are already aware of the risks of data intrusion exemplified by our research, we did not make responsible disclosures.



<a href

Leave a Comment