How recruitment fraud turned cloud IAM into a $2 billion attack surface

A developer gets a LinkedIn message from a recruiter. The role seems legitimate. The coding assessment requires installing a package. That package exfiltrates every cloud credential on the developer’s machine – GitHub personal access tokens, AWS API keys, Azure service principals, and more – and the attacker is inside the cloud environment within minutes.

Your email security never saw it. Your dependency scanner may have flagged the package. No one was watching what happened next.

The attack chain is increasingly known as the identity and access management (IAM) pivot, and it exposes a fundamental gap in the way enterprises monitor identity-based attacks. CrowdStrike Intelligence research published on January 29 details how adversary groups run these operations on an industrial scale: recruitment fraud delivers trojanized Python and npm packages, and stolen developer credentials then carry the intrusion to full cloud IAM compromise.

In one case in late 2024, attackers used a recruitment-themed lure to deliver malicious Python packages to a European fintech company, pivoted into its cloud IAM configuration, and diverted cryptocurrency to an adversary-controlled wallet.

From entry to exit, the attack never touched the corporate email gateway, and it left defenders little of the digital evidence they usually rely on.

In a recent episode of CrowdStrike’s Adversary Universe podcast, Adam Meyers, the company’s SVP of Intelligence and Head of Counter Adversary Operations, described the scale: more than $2 billion linked to cryptocurrency operations run by a single adversary entity. Decentralized currency is ideal, Meyers explained, because it lets attackers evade sanctions and detection at the same time. Cristian Rodriguez, CrowdStrike’s Americas field CTO, added that revenue success has driven organizational specialization: what was once a single threat group has split into three separate entities focused on cryptocurrency theft, fintech targeting, and espionage.

That case is not an outlier. The Cybersecurity and Infrastructure Security Agency (CISA) and security company JFrog have tracked overlapping campaigns in the npm ecosystem, with JFrog identifying 796 compromised packages in a self-replicating worm that spread via infected dependencies. Research has documented WhatsApp messaging as a primary initial compromise vector, with adversaries distributing malicious ZIP files containing trojanized applications through the platform – a channel corporate email security never sees.

Most security stacks are optimized for a single entry point that these attackers have completely abandoned.

When dependency scanning is not enough

Adversaries are changing entry vectors in real time. Trojanized packages aren’t arriving via typosquatting as they once did – they’re hand-delivered through personal messaging channels and social platforms that corporate email gateways don’t touch. CrowdStrike documented adversaries tailoring employment-themed lures to specific industries and roles, and as recently as June 2025 observed specialized malware deployed at fintech firms.

CISA documented this extensively in September, issuing an advisory on a broader npm supply chain compromise targeting GitHub personal access tokens and AWS, GCP, and Azure API keys. The malicious code scanned for credentials during package installation and sent them to an external domain.

Dependency scanning catches the packages. That is the first control, and most organizations have it. Almost none have the second: runtime behavior monitoring that detects credential exfiltration during the install process itself.
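
To make that second control concrete, here is a minimal sketch of install-time monitoring, assuming a Linux developer workstation with strace available; the watched paths and the pip wrapper are illustrative choices, not a reference to any vendor’s product.

```python
# Minimal sketch: flag credential-store reads during a package install.
# Assumes Linux with strace installed; paths are illustrative.
import shlex
import subprocess
import sys

# Credential stores the documented campaigns went after.
WATCHED = (
    ".aws/credentials",
    ".azure/",
    ".config/gcloud/",
    ".netrc",   # frequently holds GitHub personal access tokens
    ".npmrc",
)

def monitored_install(package: str) -> None:
    # Trace file opens made by pip and every child process it spawns.
    trace_file = "/tmp/install.trace"
    cmd = (
        f"strace -f -e trace=openat -o {trace_file} "
        f"pip install {shlex.quote(package)}"
    )
    subprocess.run(cmd, shell=True, check=False)

    with open(trace_file) as trace:
        hits = [line for line in trace if any(p in line for p in WATCHED)]

    if hits:
        print(f"ALERT: installing {package} touched credential stores:")
        for line in hits:
            print("  " + line.strip())

if __name__ == "__main__":
    monitored_install(sys.argv[1])
```

A real agent would hook syscalls continuously rather than wrap pip, but the signal is the same: a setup script has no business reading ~/.aws/credentials.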

“When you boil this attack down to its essentials, what stands out is not a sophisticated technique,” said Shane Barney, CISO at Keeper Security, in an analysis of a recent cloud attack series. “It is how little resistance the environment offered once the attacker gained legitimate access.”

Adversaries are getting better at executing fast, unmonitored pivots

Google Cloud’s Threat Horizons report found that weak or missing credentials were responsible for 47.1% of cloud incidents in the first half of 2025, with misconfigurations accounting for another 29.4%. These numbers have remained stable over consecutive reporting periods. This is a chronic condition, not an emerging threat. Attackers with legitimate credentials don’t need to exploit anything. They log in.

Research published earlier this month showed just how fast this pivot runs. Sysdig documented an attack chain in which compromised credentials reached cloud administrator privileges in eight minutes, traversing 19 IAM roles before reaching Amazon Bedrock AI models and disabling model invocation logging.

Eight minutes. No malware. No exploitation. Just a valid credential and the absence of IAM behavior baselines.
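
That 19-role traversal leaves a trail in CloudTrail even when no alert fires on it. Below is a hedged sketch of what a velocity check could look like, assuming boto3 credentials with cloudtrail:LookupEvents; the five-minute window and threshold are illustrative, not Sysdig’s detection logic.

```python
# Sketch: flag identities making bursts of AssumeRole calls in CloudTrail.
# Assumes boto3 credentials permitting cloudtrail:LookupEvents.
from collections import defaultdict
from datetime import datetime, timedelta, timezone

import boto3

WINDOW = timedelta(minutes=5)
THRESHOLD = 5  # AssumeRole calls inside one window; tune to your baseline

def flag_role_chains() -> None:
    ct = boto3.client("cloudtrail")
    start = datetime.now(timezone.utc) - timedelta(hours=1)

    assumptions = defaultdict(list)  # identity -> list of event times
    for page in ct.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventName",
                           "AttributeValue": "AssumeRole"}],
        StartTime=start,
    ):
        for event in page["Events"]:
            assumptions[event.get("Username", "unknown")].append(event["EventTime"])

    for identity, times in assumptions.items():
        times.sort()
        for i, t0 in enumerate(times):
            # Count role assumptions falling inside the window opened at t0.
            burst = sum(1 for t in times[i:] if t <= t0 + WINDOW)
            if burst >= THRESHOLD:
                print(f"ALERT: {identity} made {burst} AssumeRole calls within {WINDOW}")
                break

if __name__ == "__main__":
    flag_role_chains()
```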

Ram Varadarajan, CEO of Acalvio, put it plainly: the speed of breaches has shifted from days to minutes, and defending against this class of attacks requires technology that can reason and respond at the same speed as the automated attackers.

Identity threat detection and response (ITDR) addresses this gap by monitoring how identities behave inside the cloud environment, not just whether they authenticate successfully. KuppingerCole’s 2025 Leadership Compass on ITDR found that most identity breaches now arise from compromised non-human identities, yet enterprise ITDR adoption remains uneven.

Morgan Adamski, PwC’s deputy leader for cyber, data and technology risk, laid out the stakes in operational terms: getting identity right, including for AI agents, means controlling who can do what at machine speed. With multicloud proliferation and identity-centric attacks, alerts firing from everywhere are no longer enough.

Why don’t AI gateways stop this?

AI gateways excel at authentication. They check whether the identity requesting access to a model endpoint or training pipeline holds a valid token and the privileges that administrators and governance policies have defined. They do not ask whether that identity is behaving consistently with its historical patterns once it is inside the infrastructure.

Consider a developer who typically queries code-completion models twice a day suddenly enumerating every Bedrock model in the account, having previously disabled logging. The AI gateway sees a valid token. ITDR sees an anomaly.
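
A toy version of that distinction, as a sketch only: the baseline below knows nothing about tokens, only about what each identity has historically done. The identity name and API actions are hypothetical, and delivery of events from your cloud audit log is assumed.

```python
# Toy per-identity baseline: novelty plus volume, nothing more.
from collections import Counter, defaultdict

class IdentityBaseline:
    def __init__(self, volume_multiplier: int = 10):
        self.seen = defaultdict(set)          # identity -> actions ever used
        self.typical = defaultdict(Counter)   # identity -> action -> calls/day
        self.volume_multiplier = volume_multiplier

    def observe(self, identity: str, action: str, todays_count: int) -> list[str]:
        alerts = []
        # Novelty: an identity invoking an API it has never touched before.
        if action not in self.seen[identity]:
            alerts.append(f"first-seen action {action} for {identity}")
        # Volume: far above this identity's historical daily rate.
        usual = self.typical[identity][action] or 1
        if todays_count > usual * self.volume_multiplier:
            alerts.append(f"{identity}: {action} x{todays_count} (usual ~{usual}/day)")
        # Naive learning; a real ITDR system would quarantine anomalous days.
        self.seen[identity].add(action)
        self.typical[identity][action] = max(usual, todays_count)
        return alerts

baseline = IdentityBaseline()
baseline.observe("dev-jane", "bedrock:InvokeModel", 2)  # warm-up: establishes history
print(baseline.observe("dev-jane", "bedrock:DeleteModelInvocationLoggingConfiguration", 1))
print(baseline.observe("dev-jane", "bedrock:ListFoundationModels", 40))
```

Both later calls fire alerts, while a gateway checking only the token would pass them straight through.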

A blog post from CrowdStrike highlights why this matters now. The adversary groups it tracks have evolved from opportunistic credential theft to cloud-aware intrusion operations. They move from compromised developer workstations directly to cloud IAM configurations – the same configurations that control AI infrastructure access. Shared tooling across the separate units and malware built specifically for cloud environments indicate this is not experimental. It is industrialized.

The Google Cloud Office of the CISO addressed this directly in its December 2025 cybersecurity forecast, noting that boards now ask about business resilience against machine-speed attacks. Managing both human and non-human identities is essential to mitigating risks from non-deterministic systems.

No air gap separates cloud IAM from AI infrastructure. When a developer’s cloud identity is hijacked, the attacker can reach model weights, training data, inference endpoints, and any tools attached to those models through protocols such as the Model Context Protocol (MCP).
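
To see why, consider a minimal MCP tool, sketched here with the official mcp Python SDK; the server name and tool are hypothetical. The tool authenticates nothing itself – it simply runs with whatever cloud credentials the host process holds, so a hijacked identity inherits it wholesale.

```python
# Illustrative MCP tool: its reach is exactly the host identity's reach.
# Assumes the official `mcp` Python SDK and ambient AWS credentials.
import boto3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("cloud-tools")  # hypothetical server name

@mcp.tool()
def list_buckets() -> list[str]:
    """List S3 buckets using the ambient credentials of the host process."""
    s3 = boto3.client("s3")
    return [b["Name"] for b in s3.list_buckets()["Buckets"]]

if __name__ == "__main__":
    mcp.run()
```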

That MCP connection is no longer theoretical. OpenClaw, an open-source autonomous AI agent that surpassed 180,000 GitHub stars in a single week, connects to email, messaging platforms, calendars, and code execution environments through MCP and direct integrations. Developers are installing it on corporate machines without security review.

Cisco’s AI security research team called the tool “phenomenal” from a capability standpoint and “an absolute nightmare” from a security standpoint, demonstrating what kind of agentic infrastructure hijacked cloud identities can access.

The IAM implications are obvious. In an analysis published on February 4, CrowdStrike CTO Elia Zaitsev warned: “A successful prompt injection against an AI agent is not just a data leak vector. It is a potential foundation for automated lateral movement, where the compromised agent continues to execute attack objectives across the infrastructure.”

The agent’s legitimate access to APIs, databases, and business systems becomes the adversary’s access. This attack chain does not end at the model endpoint. If an agentic system sits behind it, the blast radius expands to everything the agent can reach.

Where the control gaps are

This attack chain maps to three phases, each with a distinct control gap and a specific recommended action.

Entry: Trojanized packages distributed through WhatsApp, LinkedIn, and other non-email channels bypass email security entirely. CrowdStrike documented employment-themed lures tailored to specific industries, with WhatsApp as the primary delivery mechanism. Gap: Dependency scanning catches packages, but not runtime credential exfiltration. Recommended action: Deploy runtime behavior monitoring on developer workstations that flags credential access patterns during package installation.

Pivot: Stolen credentials enable IAM role assumption that perimeter-based security never sees. In CrowdStrike’s documented European fintech case, the attackers moved from a compromised developer environment directly into the cloud IAM configuration and related resources. Gap: No behavioral baseline exists for cloud identity use. Recommended action: Deploy ITDR that monitors identity behavior inside cloud environments, flagging lateral movement patterns such as the 19-role traversal documented in the Sysdig research.

Objective: AI infrastructure trusts authenticated identities without evaluating behavioral consistency. Gap: AI gateways validate the token but not the usage pattern. Recommended action: Implement AI-specific access controls that evaluate model access requests, correlate them with identity behavior profiles, and enforce logging that the accessing identity cannot disable.
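
On the last point, “logging the accessing identity cannot disable” can be enforced above the identity itself. One hedged sketch for AWS: a service control policy that denies the Bedrock logging-delete action org-wide, created here via boto3; the policy name, description, and attachment strategy are illustrative.

```python
# Sketch: deny every identity the ability to disable Bedrock invocation logging.
# Assumes AWS Organizations access; name and description are illustrative.
import json

import boto3

DENY_LOGGING_TAMPER = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyBedrockLoggingDisable",
        "Effect": "Deny",
        "Action": ["bedrock:DeleteModelInvocationLoggingConfiguration"],
        "Resource": "*",
    }],
}

def create_guardrail() -> None:
    org = boto3.client("organizations")
    org.create_policy(
        Name="deny-bedrock-logging-disable",
        Description="No identity may disable Bedrock model invocation logging",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(DENY_LOGGING_TAMPER),
    )

if __name__ == "__main__":
    create_guardrail()
```

With a guardrail like this in place, a Sysdig-style chain can still get in with a stolen credential, but it can no longer turn off the lights on its way to the models.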

Jason Soroko, senior fellow at Sectigo, identified the root cause: look past the novelty of AI assistance and the error it enabled is mundane. Valid credentials exposed in a public S3 bucket. A refusal to master the basic principles of security.

What to verify in the next 30 days

Audit your IAM monitoring stack against this three-phase chain. If you have dependency scanning but not runtime behavior monitoring, you may catch malicious packages but miss credential theft. If you authenticate cloud identities but don’t baseline their behavior, you won’t see lateral movement. If your AI gateway checks the token but not the usage pattern, a hijacked credential goes straight to your models.

The perimeter is not where this fight happens anymore. Identity is.


