Hacker News

Incredible uptick in supply chain attacks over the last few weeks.

I feel like npm specifically needs to up their game on static analysis of malicious code embedded in public projects.




That's the reality of modern war. Many countries are likely planting malware on a wide scale. You can't even really prove where an attack originated from, so uninvolved countries would also be smart to take advantage of the current conflict. For example, if you primarily wrote in German, you would translate your malware to Chinese, Farsi, English, or Hebrew, and take other steps to make it appear to come from one of those warring countries. Any country that was making a long-term plan involving malware would likely do it around this time.

You can write code in Chinese and Farsi?

Yes, and there have been documented cases of translated malware. Sometimes it's done a little sloppily and there is other evidence that points to the origin being a country that doesn't speak the language it's written in. But even then, you can't really prove they didn't just use a residential VPN or whatever.

You can deliberately write comments and descriptions in those languages.

npm's process to set up OIDC is way too frustrating. There is just so much friction. The package first has to exist in the registry, meaning you have to create an API token and push something, and only then can you enable OIDC for that specific package. After adding the repo + workflow names, you have to save, then finally toggle "only allow OIDC publishing".

Before each action you need to enter your 2FA code.
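For what it's worth, once that registry-side dance is done, the CI side is short. A sketch of a GitHub Actions workflow for trusted publishing, assuming a recent npm CLI that supports the OIDC token exchange (names, versions, and tag pattern are illustrative):

```yaml
name: publish
on:
  push:
    tags: ["v*"]

permissions:
  id-token: write   # lets the job request an OIDC token from GitHub
  contents: read

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "22"
          registry-url: "https://registry.npmjs.org"
      - run: npm install -g npm@latest  # OIDC exchange needs a recent npm CLI
      - run: npm ci
      - run: npm publish                # no token: npm exchanges the OIDC credential
```

The `id-token: write` permission is the piece people most often forget; without it the publish falls back to looking for a token and fails.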

I got so frustrated with npm end of last year that I wrote a whole guide covering that issue: https://npmdigest.com/guides/npm-trusted-publishing


You’re right, but a colleague recently showed me this CLI for it: https://docs.npmjs.com/cli/v11/commands/npm-trust

The package still needs to be published once first, but it looks like this automates all the annoying UI steps you mentioned.


Oh that’s neat! Thank you for sharing!

NPM is designed to let you run untrusted code on your machine. It will never work. There is no game to step up. It's like asking an ostrich to start flying.

It’s far from a complete solution, but to mitigate this specific avenue of supply chain compromise, couldn’t GitHub/npm issue single-purpose physical hardware tokens and allow (or, for the most popular projects, even mandate) that maintainers use them as a form of 2FA?

What would a physical token give you that TOTP doesn't?

Edit: wait, did the attacker intercept the TOTP code as it was entered? Trying to make sense of the thread


The attacker installed a RAT on the contributor’s machine, so if they had configured TOTP or saved the recovery codes anywhere on that machine, the attacker could defeat 2FA.

Oh, yes, I missed that the TOTP machine was compromised :\ Would that then imply that it would have been okay if codes came from a separate device, e.g. a TOTP app on a Palm OS device with zero network connectivity? (Or maybe these days the easiest airgapped option is an old Android phone that stays in airplane mode...)

The easiest approach is a provider-issued hardware dongle like a SecurID or Yubikey. Lack of end-user programmability is a feature, not a bug.

> Lack of end-user programmability is a feature, not a bug.

I would argue that the problem is network accessibility, not programmability.


When designing a system for secure attestation, end-user programmability is not a feature.

It would not be an advantage for your front door lock to be infinitely reprogrammable. It’s just a liability.


I mean, I guess attestation might have some value, but it feels like moving the goalposts. Under the threat model of a remote attacker who can compromise a normal networked computer, I can't think of an attack that would succeed with a programmable TOTP code generator that would fail if that code generator was not reprogrammable. Can you?

> It would not be an advantage for your front door lock to be infinitely reprogrammable. It’s just a liability.

Er, most door locks are infinitely reprogrammable, because being able to rekey them without having to replace the whole unit is a huge advantage and the liability/disadvantage is minimal (falling under "It rather involved being on the other side of this airtight hatchway" in an unusually almost-literal sense where you have to be inside the house in order to rekey the lock, at which point you could also do anything else).


Sorry, attestation is the goalpost. The community wants certainty that the package was published by a human with authority, and not just by someone who had access to an authority’s private keys. That is what distinguishes attestation from authentication or authorization.

Yes, unfortunately authenticator apps just generate TOTP codes based on a binary key sitting in plain sight without any encryption. Not that it would help if the encrypting/decrypting machine is pwned.
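To make that concrete: the whole scheme fits in a few lines, which is exactly why possession of the stored secret equals possession of the second factor. A minimal sketch of RFC 6238 TOTP with the common SHA-1/6-digit/30-second defaults:

```python
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: the code is just HMAC-SHA1 over the shared secret and
    the current 30-second counter, so anyone who copies the secret (e.g. a
    RAT reading an authenticator's storage) can mint valid codes forever."""
    t = time.time() if for_time is None else for_time
    msg = struct.pack(">Q", int(t) // step)          # big-endian time counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 -> "287082"
assert totp(b"12345678901234567890", for_time=59) == "287082"
```

A hardware dongle runs the same computation, but the secret never leaves the device, which is the property that matters here.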

All maintainers need to do is code signing. This is a solved problem, but the NPM team has been actively rejecting optional signing support for over a decade now. Even so, maintainers could sign their commits anyway, but most are too lazy to spend the few minutes it takes to prevent themselves from being impersonated.

If the solution is 'maintainers just need to do xyz', then it's not a solution, sorry. It's not scalable, and which projects become 'successful', and which maintainers accidentally become critical parts of worldwide codebases, is almost pure chance. You will never get all the maintainers you need to 'just' do xyz, just like you will never get humans to 'just' stop making mistakes. So you had better start looking for a solution that doesn't rely on humans not making mistakes.

"Discipline doesn't scale" has become one of my favourite quotes for a reason.

It scales just fine for thousands of maintainers of thousands of packages for every major Linux distribution that powers the internet. You just have to automate enforcement so people do not have a choice.

Are you really saying there is just something fundamental about JavaScript developers that makes them unable to run the same basic shell commands as Linux distribution maintainers?


No, it really doesn't scale that well. 'Thousands' of packages is laughable compared to the scale of npm. And even at the 'thousands' scale distros are often laughably out of date because they're so slow to update their packages.

You are of course right that a signed package ecosystem would be great, it's just that you're asking people to do this labour for you for free. If you pay some third party to verify and sign packages for you? That's totally fine. Asking maintainers already under tremendous pressure to do yet another labour-intensive security task so you can benefit for free? That's out of balance.

Are they incapable of doing it? Probably not. Does it take real labour and effort to do it? Absolutely.


My 7 teammates and I on stagex actually maintain the zero-trust signing and release process I am suggesting, for several hundred packages and counting. I am not asking anyone to do hundreds like my team does, but if authors could at least do the bare minimum for the code they directly author, that would eliminate the last gaping hole in the supply chain.

With what keys, and how do you propose establishing trust in those keys?

(As we’ve seen from every GPG topology outside of the kinds of small trusted rings used by Linux distros and similar, there’s no obvious, trustworthy, scalable way to do decentralized key distribution.)


If the keys that signed the early commits of a trusted FOSS project suddenly change without being signed by the previous keys, that should merit a higher level of consensus at release time, or waiting periods, etc.

Identity continuity, at a minimum, is of immense defensive value, even though we will not know whether the author is human or trusted by any humans.
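The continuity rule being described can be sketched in a few lines. This is a hypothetical model, not any real tool's API; key ids, the `cross_signatures` map, and the function name are all illustrative:

```python
def release_needs_review(release_key, known_keys, cross_signatures):
    """Return True when a release should be held for extra consensus or a
    waiting period. `cross_signatures` maps a key id to the set of key ids
    that have signed (endorsed) it."""
    if release_key in known_keys:
        return False                       # same key as before: continuity holds
    signers = cross_signatures.get(release_key, set())
    # A new key endorsed by a previously known key preserves continuity;
    # a brand-new, unendorsed key is the suspicious case.
    return not (signers & known_keys)

known = {"alice-2019"}
sigs = {"alice-2024": {"alice-2019"}, "mallory-2024": set()}
assert release_needs_review("alice-2019", known, sigs) is False
assert release_needs_review("alice-2024", known, sigs) is False   # cross-signed rotation
assert release_needs_review("mallory-2024", known, sigs) is True  # sudden unsigned key change
```

The point is that the check is cheap for a registry to run automatically; only the suspicious case needs human attention.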

That said, any keys that become attached to highly depended-on projects would earn a lot of trust that they belong to a human by getting a couple of the 5k+ people worldwide with active, well-trusted PGP keys to sign theirs, via conferences or otherwise, as it has always been done.


> If the keys that signed the early commits of a trusted FOSS project suddenly change without being signed by the previous keys, that should merit a higher level of consensus at release time, or waiting periods, etc.

Two immediate problems: (1) package distribution has nothing to do with git (you don’t need to use any source control to publish a package on most indices, and that probably isn’t going to change), and (2) this doesn’t easily account for expiry, revocation, or the more basal reality that most people just aren’t good at key management. I think a workable design can’t make these assumptions.

> That said any keys that become attached to projects that are highly depended on would earn a lot of trust that they are human by getting a couple of the 5k+ of people worldwide with active well trusted PGP keys to sign theirs via conferences or otherwise, as it has always been.

This doesn’t scale to graphs of hundreds of thousands of maintainers, like PyPI has. I’m also not convinced it’s ever really worked on smaller scales either, except in the less useful “nerd cred” sense.


> (1) package distribution has nothing to do with git

It does in stagex, and could in any project. The same maintainer keys that sign commits and reviews are the same keys that must sign releases.

> (2) this doesn’t easily account for expiry, revocation, or the more basal reality that most people just aren’t good at key management. I think a workable design can’t make these assumptions.

I do not accept this excuse. People keep up with passports and birth certificates, and you are generally not allowed to have backups of those. I for one am not going to assume that most programmers are incapable of writing down 24 English words on paper, on as many backups as they need, and recovering at least one of them in the future if needed to recover a key.

If a developer really cannot keep track of something so trivial, I absolutely do not trust them not to get their identity stolen by someone seeking to push a supply chain attack.

> This doesn’t scale to graphs of hundreds of thousands of maintainers, like PyPI has. I’m also not convinced it’s ever really worked on smaller scales either, except it in the less useful “nerd cred” sense.

Say that to the 5444 PGP keys in the current web of trust that sign and maintain most packages for every major Linux distribution running the bulk of the services on the internet. It works just fine.
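At its simplest, the web-of-trust check being invoked here is reachability over signatures. A toy sketch (real WoT policy in tools like GnuPG also weighs full vs. marginal trust and path lengths; all names here are illustrative):

```python
from collections import deque

def wot_trusted(target, anchors, signatures):
    """Return True if a chain of key signatures connects `target` back to a
    directly trusted anchor key. `signatures` maps signer -> keys it signed."""
    seen = set(anchors)
    queue = deque(anchors)
    while queue:
        key = queue.popleft()
        if key == target:
            return True
        for signed in signatures.get(key, ()):
            if signed not in seen:
                seen.add(signed)
                queue.append(signed)
    return False

# A maintainer key signed at a conference is reachable; a stranger's is not.
sigs = {"distro-master": {"maintainer-a"}, "maintainer-a": {"maintainer-b"}}
assert wot_trusted("maintainer-b", {"distro-master"}, sigs)
assert not wot_trusted("mallory", {"distro-master"}, sigs)
```

The scaling debate in this thread is essentially about whether this graph stays connected and trustworthy once it has hundreds of thousands of nodes instead of a few thousand.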

Simply make it a hard requirement for popular dependencies. Developers that cannot figure out how to type two commands to generate a key, put it on a smartcard, and write down a 24-word backup should not be maintainers.

That may sound harsh, but being a maintainer of popular FOSS means an obligation to do the bare minimum to not get your identity stolen, like signing code and releases.

Last century, doctors all balked at the idea of washing hands or tools between patients, even though it provably resulted in better health outcomes on average.

"But look, everyone is negligent, and they are not likely to change" is not an excuse to not adopt obvious massive harm reduction with little effort.

My team and I practice everything I am preaching here and any responsible project can do the same to protect their projects even if the majority ignorantly do not.


> If a developer really cannot keep track of something so trivial, I absolutely do not trust them not to get their identity stolen by someone seeking to push a supply chain attack

For better or worse, you do trust people like this (assuming you're running a nonzero amount of Python, Ruby, Rust, or whatever else software).

> Say that to the 5444 PGP keys in the current web of trust that signs and maintains most packages for every major linux distribution running the bulk of the services on the internet. It works just fine.

That's tiny, and is exactly my point: these kinds of small rings of trust don't remotely resemble the trust topology in a free-for-all packaging ecosystem.

> "But look, everyone is negligent, and they are not likely to change" is not an excuse to not adopt obvious massive harm reduction with little effort.

This is not the argument being advanced. The argument is that we need to do better (in terms of misuse-resistance, etc.) than long-lived keys and the kinds of nerd-cred "get good" assumptions made in PGP-style webs of trust.

Nobody thinks that signing is bad; the problem is when you push the median developer to adopt it without any clear contingency plans for when, not if, they fail to uphold the invariants you assume.


It is clear the options are either we get better at decentralized trust and decentralized identity recovery, or we all just sit around and wait for a centralized corporation to decide what identity is online, and what minimum security level is good enough for every threat model.

Waiting for corpos to fix it has not worked in one entire forever, so I would rather lower the barrier to entry of decentralized systems that are still an IETF standard securing the backbone of the internet.

At the end of the day there are only tens of thousands of authors of globally deployed FOSS libraries and we absolutely can and must scale cryptographic identity to them to avoid supply chain attacks that hit _everything_.

Secondly, we should double down and not put all the pressure on authors. We need to make it easy for anyone with a reputable key to review and sign any FOSS code that exists: a decentralized and standardized audit system. I am working on an implementation of that right now, in fact.


Code becomes trusted by review, but the crowd-sourcing efforts to do that fizzled out, so in practice we have weak proxies like download counts.

The implicit trust we place in maintainers is easily faked, as we are seeing.



