
A third AI-related proof-of-concept attack that gained attention used GitLab’s Duo chatbot to inject malicious lines into otherwise legitimate code packages. One version of the attack successfully destroyed sensitive user data.
Another notable attack targeted the Gemini CLI coding tool. The exploit allowed attackers to execute malicious commands, such as wiping a hard drive, on the computers of developers using the tool.
Using AI as bait and a hacking assistant
Other hacks involving LLMs used chatbots to make attacks more effective or covert. Earlier this month, two people were charged with allegedly stealing and deleting sensitive government data. Prosecutors said one of the men tried to cover his tracks by asking the AI tool “how do I clear the system logs from SQL Server after deleting the database.” Shortly afterward, he reportedly asked the tool, “How do you clear all event and application logs from Microsoft Windows Server 2012.” Investigators were able to track the defendants’ movements anyway.
In May, a man pleaded guilty to hacking a Walt Disney Co. employee by tricking him into running a malicious version of a widely used open source AI image-generation tool.
And in August, Google researchers warned users of the SalesLoft Drift AI chat agent to consider all security tokens associated with the platform compromised after discovering that unknown attackers had used stolen credentials to access emails from Google Workspace accounts. The attackers used the tokens to gain access to individual Salesforce accounts and steal data from them, including credentials that could be used in further breaches.
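Treating tokens as compromised means revoking them outright before issuing replacements. As a minimal sketch, assuming a Salesforce-connected integration and hypothetical token values, revocation can go through Salesforce’s standard OAuth 2.0 revoke endpoint:

```python
# Minimal sketch: revoking potentially compromised OAuth tokens via
# Salesforce's standard OAuth 2.0 revocation endpoint. The token values
# below are hypothetical placeholders, not real credentials.
import requests

REVOKE_URL = "https://login.salesforce.com/services/oauth2/revoke"

def revoke_token(token: str) -> bool:
    """POST a single access or refresh token to the revoke endpoint.

    Salesforce responds with HTTP 200 once the token is revoked.
    """
    resp = requests.post(REVOKE_URL, data={"token": token}, timeout=10)
    return resp.status_code == 200

if __name__ == "__main__":
    # In an incident response, this list would cover every token tied to
    # the compromised integration, not just one example value.
    suspect_tokens = ["00Dxx0000000000!AQ4AQPLACEHOLDER"]
    for token in suspect_tokens:
        status = "revoked" if revoke_token(token) else "revocation failed"
        print(f"{token[:12]}... {status}")
```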
There were also several examples of LLM vulnerabilities that came back to haunt the people who used them. In one case, Copilot was caught exposing the contents of more than 20,000 private GitHub repositories from companies including Google, Intel, Huawei, PayPal, IBM, Tencent, and, ironically, Microsoft. The repositories had originally been accessible through Bing as well. Microsoft eventually removed them from search results, but Copilot continued to expose them nonetheless.