Interesting. I like it. Now let's say I currently use the OS process as my primitive for agents, just spawning `claude "foo bar baz"` and orchestrating that way, using Unix-style files for intermediate data and pipes for transformations. What would you say are some good use cases of Druid for someone like me?
It’s like saying you don’t want to exercise because it induces tachycardia and hypertension. The point is that you are training your body to adapt to overstimulation in a context and dose dependent manner.
The singularity doesn't mean that we get an AGI Day with a big announcement from The People In Charge that intelligence has been solved once and for all. It simply means that the frequency of "this changes everything" and "rumored model X at lab Y passes benchmark Z at 110% in less-than-zero-shot" style posts increases monotonically forever.
Or it will just become seen as a category error. We don't often go around talking about "artificial general strength" or "artificial super-strength", even though it's easy to build a claw arm which can exert more force in any direction than any human can.
Common failure mode in environments that promote being the best over getting things done: if you went from school to college to industry always identifying with being better than everyone else, because that's what it takes to get in the door, sometimes you miss the transition to when you're already in the door. A lot of people don't realize they were supposed to put their guard down until it's too late. It's too bad, but this is what the cutthroat labor pipeline rewards.
>In a way this is virtuous, because I think startups are a good thing. But really what motivates us is the completely amoral desire that would motivate any hacker who looked at some complex device and realized that with a tiny tweak he could make it run more efficiently. In this case, the device is the world's economy, which fortunately happens to be open source.
After a while you learn to ignore criticism. I'm not really interested in what people who would never become users have to say. They're simply not the demographic. It's all noise, and when I was younger and more impressionable it caused serious self-doubt. But when I demo something and I see the eyes light up, and then they say "well, what about this?", that's pure gold.
Critics providing other sorts of criticisms fall into a bimodal distribution. There are those who criticize because your proposal seems risky and they don't want to see you fail, and then there are those who criticize because they don't want to see you succeed.
NPM is designed to let you run untrusted code on your machine. It will never work. There is no game to step up. It's like asking an ostrich to start flying.
It’s far from a complete solution, but to mitigate this specific avenue of supply chain compromise, couldn’t GitHub/npm issue single-purpose physical hardware tokens and allow maintainers (or even mandate, for the most popular projects) to use these hardware tokens as a form of 2FA?
The attacker installed a RAT on the contributor’s machine, so if they had configured TOTP or saved the recovery codes anywhere on that machine, the attacker could defeat 2FA.
Oh, yes, I missed that the TOTP machine was compromised. Would that then imply that it would have been okay if codes came from a separate device, e.g. a TOTP app on a Palm OS device with zero network connectivity? (Or maybe these days the easiest airgapped option is an old Android phone that stays in airplane mode...)
I mean, I guess attestation might have some value, but it feels like moving the goalposts. Under the threat model of a remote attacker who can compromise a normal networked computer, I can't think of an attack that would succeed with a programmable TOTP code generator that would fail if that code generator was not reprogrammable. Can you?
> It would not be an advantage for your front door lock to be infinitely reprogrammable. It’s just a liability.
Er, most door locks are infinitely reprogrammable, because being able to rekey them without having to replace the whole unit is a huge advantage and the liability/disadvantage is minimal (falling under "It rather involved being on the other side of this airtight hatchway" in an unusually almost-literal sense where you have to be inside the house in order to rekey the lock, at which point you could also do anything else).
Sorry, attestation is the goalpost. The community wants certainty that the package was published by a human with authority, and not just by someone who had access to an authority’s private keys. That is what distinguishes attestation from authentication or authorization.
Yes, unfortunately authenticator apps just generate TOTP codes based on a binary key sitting in plain sight without any encryption. Not that it would help if the encrypting/decrypting machine is pwned.
All maintainers need to do is sign their code. This is a solved problem, but the NPM team has been actively rejecting optional signing support for over a decade now. Even without NPM's support, maintainers could sign their commits anyway, but most are too lazy to spend the few minutes it takes to prevent themselves from being impersonated.
If the solution is 'maintainers just need to do xyz', then it's not a solution, sorry. It's not scalable, and which projects become 'successful', and which maintainers accidentally become critical parts of worldwide codebases, is almost pure chance. You will never get all the maintainers you need to 'just' do xyz, just like you will never get humans to 'just' stop making mistakes. So you had better start looking for a solution that doesn't rely on humans not making mistakes.
It scales just fine for thousands of maintainers of thousands of packages for every major linux distribution that powers the internet. You just have to automate enforcement so people do not have a choice.
Are you really saying there is just something fundamental about javascript developers that makes them unable to run the same basic shell commands as Linux distribution maintainers?
No, it really doesn't scale that well. 'Thousands' of packages is laughable compared to the scale of npm. And even at the 'thousands' scale distros are often laughably out of date because they're so slow to update their packages.
You are of course right that a signed package ecosystem would be great, it's just that you're asking people to do this labour for you for free. If you pay some third party to verify and sign packages for you? That's totally fine. Asking maintainers already under tremendous pressure to do yet another labour-intensive security task so you can benefit for free? That's out of balance.
Are they incapable of doing it? Probably not. Does it take real labour and effort to do it? Absolutely.
My 7 teammates and I on stagex actually maintain the zero-trust signing and release process I am suggesting, for several hundred packages and counting. I am not asking anyone to do hundreds like my team does, but if authors could at least do the bare minimum for the code they directly author, that would eliminate the last gaping hole in the supply chain.
With what keys, and how do you propose establishing trust in those keys?
(As we’ve seen from every GPG topology outside of the kinds of small trusted rings used by Linux distros and similar, there’s no obvious, trustworthy, scalable way to do decentralized key distribution.)
If the keys that signed the early commits of a trusted FOSS project suddenly change without being signed by the previous keys, that should merit a higher level of consensus at release time, or waiting periods, etc.
Identity continuity, at a minimum, is of immense defensive value, even though we will not know if the author is human or trusted by any humans.
That said any keys that become attached to projects that are highly depended on would earn a lot of trust that they are human by getting a couple of the 5k+ of people worldwide with active well trusted PGP keys to sign theirs via conferences or otherwise, as it has always been.
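A hedged sketch of the continuity rule described above (a toy illustration with made-up key names, not any registry's actual implementation): walk the release history and flag any key change that the previous key did not endorse.

```python
def check_continuity(releases):
    """Each release carries (version, key_fpr, endorsed_by).
    The first key is trusted on first use; every later key must either
    match the previous one or be endorsed by it. Returns the versions
    that break the chain and should trigger waiting periods / review."""
    suspect = []
    prev_key = None
    for version, key, endorsed_by in releases:
        if prev_key is not None and key != prev_key and endorsed_by != prev_key:
            suspect.append(version)   # unexplained key rotation
        prev_key = key
    return suspect

history = [
    ("1.0", "KEY_A", None),       # trust on first use
    ("1.1", "KEY_A", None),       # same key, fine
    ("2.0", "KEY_B", "KEY_A"),    # rotation endorsed by the old key, fine
    ("2.1", "KEY_C", None),       # sudden unendorsed key change -> flag
]
print(check_continuity(history))  # ['2.1']
```

The point is that the check is mechanical: a registry could run it on every publish without trusting the publisher's machine at all.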
> If the keys that signed the early commits of a trusted FOSS project suddenly change without being signed by the previous keys, that should merit a higher level of consensus at release time, or waiting periods, etc.
Two immediate problems: (1) package distribution has nothing to do with git (you don’t need to use any source control to publish a package on most indices, and that probably isn’t going to change), and (2) this doesn’t easily account for expiry, revocation, or the more basal reality that most people just aren’t good at key management. I think a workable design can’t make these assumptions.
> That said any keys that become attached to projects that are highly depended on would earn a lot of trust that they are human by getting a couple of the 5k+ of people worldwide with active well trusted PGP keys to sign theirs via conferences or otherwise, as it has always been.
This doesn’t scale to graphs of hundreds of thousands of maintainers, like PyPI has. I’m also not convinced it’s ever really worked on smaller scales either, except in the less useful “nerd cred” sense.
> (1) package distribution has nothing to do with git
It does in stagex, and could in any project: the maintainer keys that sign commits and reviews are the same keys that must sign releases.
> (2) this doesn’t easily account for expiry, revocation, or the more basal reality that most people just aren’t good at key management. I think a workable design can’t make these assumptions.
I do not accept this excuse. People keep up with passports and birth certificates, and you are generally not allowed to have backups of those. I for one am not going to assume that most programmers are incapable of writing down 24 English words on paper, on as many backups as they need, and recovering at least one of them in the future if they need to restore a key.

If a developer really cannot keep track of something so trivial, I absolutely do not trust them not to get their identity stolen by someone seeking to push a supply chain attack.
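For what it's worth, the 24-words-on-paper backup amounts to chunking key entropy into word-list indices. The sketch below is a toy with a made-up 8-word list (real BIP39 mnemonics use a standard 2048-word list, 11 bits per word, plus a checksum), just to show the round trip:

```python
import hashlib

# Toy 8-word list (3 bits per word); BIP39 uses 2048 words at 11 bits each.
WORDS = ["apple", "brick", "cloud", "delta", "ember", "flint", "grove", "haven"]

def to_mnemonic(secret):
    """Map each 3-bit chunk of the secret to a word."""
    bits = "".join(f"{b:08b}" for b in secret)
    return [WORDS[int(bits[i:i + 3], 2)] for i in range(0, len(bits) - 2, 3)]

def from_mnemonic(words):
    """Recover the secret bytes from the word sequence."""
    bits = "".join(f"{WORDS.index(w):03b}" for w in words)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))

key = hashlib.sha256(b"demo seed").digest()[:9]   # 72 bits -> 24 toy words
phrase = to_mnemonic(key)
assert from_mnemonic(phrase) == key               # the paper backup round-trips
print(len(phrase), "words")
```

The scheme has no moving parts beyond a pen: anyone who can copy 24 words legibly can restore the key.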
> This doesn’t scale to graphs of hundreds of thousands of maintainers, like PyPI has. I’m also not convinced it’s ever really worked on smaller scales either, except in the less useful “nerd cred” sense.
Say that to the 5444 PGP keys in the current web of trust that signs and maintains most packages for every major linux distribution running the bulk of the services on the internet. It works just fine.
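As a rough illustration of how such a web of trust can be evaluated (GnuPG's actual trust model is more nuanced, with ownertrust levels and certification-depth limits; the key names here are hypothetical), a key can be accepted if a trusted root transitively certifies it within a bounded number of hops:

```python
from collections import deque

def trusted(roots, signatures, target, max_hops=3):
    """BFS over 'signer -> signee' certification edges: accept `target`
    if some root key transitively certifies it within `max_hops`."""
    frontier = deque((r, 0) for r in roots)
    seen = set(roots)
    while frontier:
        key, hops = frontier.popleft()
        if key == target:
            return True
        if hops == max_hops:
            continue                       # certification chain too long
        for signee in signatures.get(key, ()):
            if signee not in seen:
                seen.add(signee)
                frontier.append((signee, hops + 1))
    return False

# Hypothetical toy graph: ROOT signed ALICE's key, ALICE signed BOB's at a conference.
sigs = {"ROOT": ["ALICE"], "ALICE": ["BOB"]}
print(trusted({"ROOT"}, sigs, "BOB"))      # True
print(trusted({"ROOT"}, sigs, "MALLORY"))  # False
```

The hop limit is what keeps a compromised leaf key from transitively vouching for the whole world.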
Simply make it a hard requirement for popular dependencies. Developers who cannot figure out how to type two commands to generate a key and put it on a smartcard, and write down a 24-word backup, should not be maintainers.
That may sound harsh, but being a maintainer of popular FOSS means an obligation to do the bare minimum to not get your identity stolen, like signing code and releases.
In the nineteenth century, doctors balked at the idea of washing hands or tools between patients, even though it provably resulted in better health outcomes on average.
"But look, everyone is negligent, and they are not likely to change" is not an excuse to not adopt obvious massive harm reduction with little effort.
My team and I practice everything I am preaching here and any responsible project can do the same to protect their projects even if the majority ignorantly do not.
> If a developer really cannot keep track of something so trivial, I absolutely do not trust them not to get their identity stolen by someone seeking to push a supply chain attack
For better or worse, you do trust people like this (assuming you're running a nonzero amount of Python, Ruby, Rust, or whatever else software).
> Say that to the 5444 PGP keys in the current web of trust that signs and maintains most packages for every major linux distribution running the bulk of the services on the internet. It works just fine.
That's tiny, and is exactly my point: these kinds of small rings of trust don't remotely resemble the trust topology in a free-for-all packaging ecosystem.
> "But look, everyone is negligent, and they are not likely to change" is not an excuse to not adopt obvious massive harm reduction with little effort.
This is not the argument being advanced. The argument is that we need to do better (in terms of misuse-resistance, etc.) than long-lived keys and the kinds of nerd-cred "get good" assumptions made in PGP-style webs of trust.
Nobody thinks that signing is bad; the problem is pushing the median developer to adopt it without any clear contingency plan for when, not if, they fail to uphold the invariants you assume.
It is clear the options are either we get better at decentralized trust and decentralized identity recovery, or we all just sit around and wait for a centralized corporation to decide what identity is online, and what minimum security level is good enough for every threat model.
Waiting for corpos to fix it has not worked in one entire forever, so I would rather lower the barrier of entry to decentralized systems that are still an IETF standard securing the backbone of the internet.
At the end of the day there are only tens of thousands of authors of globally deployed FOSS libraries and we absolutely can and must scale cryptographic identity to them to avoid supply chain attacks that hit _everything_.
Secondly, we should double down and not put all the pressure on authors. We need to make it easy for anyone with a reputable key to review and sign any FOSS code that exists. A decentralized and standardized audit system. Working on an implementation of that right now in fact.
I think it is accurate. Where are the autonomous AIs who beat their creators to the punch? When we write "Hello, World!" in C and compile it with `gcc`, do we give credit to every contributor to GNU? AI is a tool that, thus far, only humans are capable of using with unique inspiration. Will this change in the future? Certainly. But is it the case now? I think my questions imply some reasonable objections.