Against dependency cooldowns as a response to supply-chain attacks

Dependency cooldowns are suddenly in vogue. They have very quickly become widely recommended, and it looks like they will soon join the basket of “industry standard best practices”.
The hope is that by waiting N days after a release before adopting it – rather than adopting it immediately – any secretly inserted malware will have been discovered by someone else, and the bad release will have been yanked (or removed). And on the surface, it looks like an effective approach: most supply-chain attacks are in fact detected within a few days.
But while dependency cooldowns are marginally – and only marginally – beneficial to the individuals who adopt them, they impose a substantial cost on everyone else. And they don’t address the core issue: publishing and distribution are different things, and it’s not clear why they should be lumped together.
Dependency Cooldowns – The Weakness of Individual Action
Frankly, the dependency cooldown works by free-riding on the pain and suffering of others. Fundamental to the dependency-cooldown scheme is the hope that other people – who were not smart enough to configure a cooldown – will serve as unpaid, unwitting beta testers for newly released packages. If there is a problem, those poor saps get hacked, everyone notices they have been hacked, and the problematic package is yanked before the cooldowners’ timers run out.
Even if it works for individuals, I don’t think it can hold up as a sensible or ethical practice for an entire ecosystem to follow.
Another issue is that dependency cooldowns require a lot of different parties to cooperate. Python has several package managers at this point (how many now? 8?). Every one of them must implement cooldowns. And every project ever built has to configure them – which is often neither easy nor obvious, since package managers tend to choose completely different ways of doing it.
But in practice, even the cooldowners have troubles. It’s extremely easy to accidentally bypass the cooldown you have intelligently configured in your project file. In Python, a single stray pip install litellm outside your project configuration can pull down a freshly hacked release. So the cooldown approach is neither airtight nor particularly safe.
At some point – probably through LLM use or old-fashioned copy-paste, which is how most project configurations get created anyway – a “responsible” cooldown becomes the de facto default. And, to paraphrase Greenspun, any sufficiently widespread dependency cooldown contains an ad-hoc, informally specified, bug-ridden, slow implementation of half of an upload queue.
Upload Queues – The Many Benefits of Central Action
The obvious alternative: instead of configuring cooldowns over and over again, in different package managers and different projects, we do it once, in a central place – the package index. An “upload queue”: make new packages wait for some time after they are published before they are distributed.
(I use “publication” here to mean sending a release (tarball, wheel, gem, whatever) to a central index (npm, PyPI, RubyGems). “Distribution”, by contrast, is when the central index starts serving that release to the public.)
During the window after publication but before distribution, the index can run internal lint tools, make the package available to external automated security scanners (whose brand names will be prominently displayed if anything is found), display a public diff of the changes in the built package, and even serve queued releases to intentional, explicitly volunteer beta testers.
There is precedent for upload queues. The Debian project uses one: packages are uploaded to the archive and then have to wait at least 2–10 days before being included in the “testing” distribution. An upload queue separates package publication from package distribution.
Publication is creating a package and posting it to the index. Distribution is when the package is made available to the public. There is no particular reason these two events need to happen at the same moment – it’s just a historical accident of how language-specific package indexes evolved.
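The separation can be sketched as a toy index in which publication and distribution are distinct events. The three-day hold and all names here are hypothetical illustrations, not any real index’s design:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

HOLD = timedelta(days=3)  # illustrative hold period

@dataclass
class QueuedRelease:
    name: str
    version: str
    published_at: datetime  # when the maintainer uploaded it

    def distributable(self, now: datetime) -> bool:
        # Distribution happens only after the hold period elapses.
        return now - self.published_at >= HOLD

class Index:
    """Toy index where publication and distribution are separate events."""
    def __init__(self) -> None:
        self.queue: list[QueuedRelease] = []   # published, visible to scanners
        self.public: list[QueuedRelease] = []  # distributed, visible to installers

    def publish(self, rel: QueuedRelease) -> None:
        self.queue.append(rel)

    def tick(self, now: datetime) -> None:
        """Move every release whose hold has elapsed into distribution."""
        ready = [r for r in self.queue if r.distributable(now)]
        self.queue = [r for r in self.queue if not r.distributable(now)]
        self.public.extend(ready)
```

Everything the article proposes – scanners, public diffs, volunteer beta testers – operates on the queue list during the window before a release reaches the public list.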
Upload queues achieve the same goal as dependency cooldowns, but without the problems. They solve the free-rider problem: the configuration-challenged are no longer used as free guinea pigs by the cooldowners. Package managers don’t need to implement anything. Projects don’t need yet another configuration option. Security linters don’t need to flag anything. And even if you install a package directly on a developer laptop with no configuration at all, you’re still protected.
Removing the element of surprise
There is another important benefit of upload queues. In most supply-chain attacks, it’s not just the injected code that is unauthorized – the entire release is unauthorized. Holding published packages for a few days dramatically reduces the power of release credentials. That matters, because making something less powerful is a good first step toward making it better protected.
Holding published releases for a few days before distribution also removes the completely unnecessary element of surprise when a new release comes out. Users get advance notice that a release is on its way – and know exactly when it will become available.
And it’s not just users who benefit from advance knowledge. The upload-queue period is also a good time to notify the maintainers, to make sure all of them are actually aware of the upcoming release. “Notice: release 2.4.1 has entered the upload queue” will, in many cases, be exactly the wake-up call needed to stop a supply-chain attack.
This applies doubly to AI
People rarely say it this clearly, but this is what LLMs mean: Markdown is now an executable file format. Whether that’s exciting or terrifying (or both!) is probably a matter of taste. But it is a simple fact, and it’s ultimately how Agent Skills work: you download some Markdown file, and now your LLM has a new third-party dependency.
The first major supply-chain attack on LLMs is surely only a matter of time – someone will slip their own “Ignore all previous instructions!” into some popular Markdown file.
I’m thinking about this problem for a side project of mine: a public memory system for AI agents called “Soapstones”. It’s a place for them (the AI agents, that is) to record how to do things: searching HN posts, getting current weather information, looking up exchange rates, and so on. Soapstones is effectively a package manager for Markdown files. And, because it is, the supply-chain-attack problem exists for it too.
In fact, it applies doubly here. Not only can people upload Markdown files saying “Ignore all previous instructions!”, but LLMs can also foolishly upload secret information into the system – such as their API keys.
And the solution here, too, is an upload queue. In fact, a dual upload queue: moderators review each upload for supply-chain attacks, and the agents’ owners review each upload to make sure they approve of it as well.
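A minimal sketch of that dual gate, with invented names (Soapstones’ actual design may differ): an upload becomes distributable only when both sign-offs are present.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    """A queued Markdown upload needing two independent approvals."""
    content: str
    moderator_ok: bool = False  # moderator reviewed for supply-chain attacks
    owner_ok: bool = False      # agent's owner approved the upload

    @property
    def distributable(self) -> bool:
        return self.moderator_ok and self.owner_ok

u = Upload("## How to look up exchange rates\n...")
u.moderator_ok = True
assert not u.distributable  # still waiting on the owner
u.owner_ok = True
assert u.distributable      # both gates passed
```

The owner gate also addresses the second problem above: an agent that accidentally uploads an API key gets caught by its own owner before the content ever goes public.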
I think an upload queue is the only reasonable option here – I can’t just tell the LLM to “be careful downloading recently uploaded content”; that would be crazy.
Is funding a problem? Obviously not
One possible response is: “who will pay for this?” First, it isn’t clear that large sums of money are needed. The Debian project has maintained an upload queue for decades, and has a security team that expedites exceptions for security fixes.
Secondly, it is not at all clear that every important package index is actually strapped for cash. npm, Inc. was a venture-funded startup and is now a wholly owned subsidiary of Microsoft. The Python Software Foundation, which runs PyPI, already has a laundry list of corporate sponsors, many of whom would benefit from an upload queue. And the PSF also recently received $1.5 million from Anthropic for, among other things, supply-chain security.
But another option is to offer rapid security reviews for commercial projects as a paid service.
Commercial organizations are often in a hurry to get a new version out as quickly as possible – to fix some (non-security) bug affecting a customer, to line up with a big announcement, or simply to race an important server-side deprecation. Simply charge them for expedited review.
Expedited review would not be an opt-out of the process. It would not mean a release is distributed the instant the company “publishes” it – all the automated checks must still run to completion. Rather, manual prioritization simply means the wall-clock time for that specific release can be reduced substantially.
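As a sketch of that policy (the hold durations are invented for illustration): expedited review shortens the wall-clock hold, but the automated checks remain a hard gate that no payment skips.

```python
from datetime import timedelta

# Invented illustrative numbers; a real index would tune these.
STANDARD_HOLD = timedelta(days=3)
EXPEDITED_HOLD = timedelta(hours=4)

def can_distribute(elapsed: timedelta, expedited: bool, checks_passed: bool) -> bool:
    """Whether a queued release may be served to the public."""
    if not checks_passed:  # paying for speed never skips the automation
        return False
    hold = EXPEDITED_HOLD if expedited else STANDARD_HOLD
    return elapsed >= hold

# Expedited review shortens the wait, but only once checks pass:
assert not can_distribute(timedelta(days=10), expedited=True, checks_passed=False)
assert can_distribute(timedelta(hours=5), expedited=True, checks_passed=True)
assert not can_distribute(timedelta(hours=5), expedited=False, checks_passed=True)
```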
Package indexes already need security response teams anyway: yanking releases, handling takedowns, dealing with typosquatting, coordinating 0-days. Charging commercial projects for expedited reviews is a great way to cross-fund that work: corporate urgency subsidizes the safety net that serves the entire ecosystem.
Individually Rational, Collectively Stupid
Dependency cooldowns are one of those things that serve you reasonably well when you do them yourself. And we all do it to some extent: for some things, I don’t want to be the first one to upgrade. The family TV set-top box is mission-critical infrastructure in my home.
But taking our individual, situational responses to each upgrade and enshrining them as community best practice is qualitatively different. I don’t want my security to depend on someone else getting hacked first.
contact/etc
notes
Debian stable is effectively an upload queue too – the whole point is that it consists of older releases that have already been through a QA process. An upload queue for language-specific package indexes doesn’t need to be anywhere near that extensive – but I think there is something to the Debian example.
One thing that is less discussed is how good automated scanners are getting. A key element is that they check build artifacts rather than just upstream source code – which lets them notice when the two differ. Giving scanners more time to run (whether via cooldowns or upload queues) seems to be one of the less-heralded benefits.
The other thing is how dangerous GitHub Actions appears to be. A large portion of supply-chain attacks seem to be based on GHA exploits, especially in open-source projects where members of the public can open PRs.
If you’re interested in Soapstones, there’s a Discord server you can join to talk about it.