
I personally don't like `curl | bash` because I don't know _how_ something will install: 1) Which directories will it insert itself into? 2) Which of my files (.bashrc, etc.) will it modify? 3) If it modifies those things, will it tell me?

The `curl | bash` install pattern means that it can do _anything_.

Using a package manager I know that the install will be "typical", and easy to uninstall (that's the case with most of the package managers that I use anyways). Each package manager has a different pattern, sure, but at least it will be predictable.
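A rough sketch of the workaround I fall back on when a project only offers a curl|bash installer (the URL here is just a placeholder): split the pipe apart so you can read the script before it runs.

    # fetch the installer to a file instead of piping it straight into bash
    curl -fsSL https://example.com/install.sh -o install.sh
    # read it: look for which directories it writes to and which dotfiles it touches
    less install.sh
    # only run it once you're satisfied
    bash install.sh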



This isn't how I'd describe the guarantees provided by a package manager. In fact, most package managers don't really provide any guarantees at all; almost all of them support something like preinst.sh and postinst.sh scripts which can basically do anything. It's the package maintainers that are supposed to provide the guarantees you describe. Of course, they're only human, and their incentives might not line up with yours.

And if you stray outside the official channels, as most users must at least some of the time, then you're back to all-bets-are-off. Fetching and installing packages from a channel hosted by some third party really is no better from a security standpoint than running a (signed) shell script from that same party.

EDIT: I should add that there may be some new, advanced package management systems that do actually provide strong guarantees, like only putting files in certain directories, never setting the setuid/setgid bits on executable files, or perhaps ensuring that all files from a package are owned by a user:group associated with that package (the Linux From Scratch docs describe a package management scheme like that, it's worth checking out). I'm referring here to the majority of popular package managers, e.g. dpkg, which will run arbitrary code during installation.
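If you want to see what those maintainer scripts would do before installing, dpkg-deb will show you; a quick sketch, assuming some.deb is the package in question:

    # list the files the package would install
    dpkg-deb -c some.deb
    # extract the control area (preinst, postinst, prerm, postrm, ...) so you can read the scripts
    dpkg-deb -e some.deb ./some-control
    less ./some-control/postinst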


You make some good points, but I want to follow up:

With dpkg packages for example, you do get a few guarantees.

1. The package will include a list of the files which it installs.
2. The package manager will not overwrite existing files which were installed by dpkg without an explicit diversion.
3. When uninstalling, the package manager will remove any of those files, and the directories created for them (unless they are not empty or are also created by another package).
4. It won't run as non-root (unless you've made some major changes to your system), and as such won't prompt you for, or try to take advantage of, sudo access.

Sure, that doesn't stop out-of-the-norm behaviour; the Oracle Java packages are a great example of this: they contain only a shell script which downloads, unpacks, copies, and symlinks the actual Oracle Java tarball from Oracle's website, and then (ideally) removes those files if you're uninstalling. Still, that's far more in the way of guarantees than curl|bash provides.
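To make point 1 above concrete (the package and path here are just illustrative), dpkg keeps that file list queryable after installation:

    # list every file a package installed
    dpkg -L openssh-server
    # go the other way: find the package that owns a given file
    dpkg -S /etc/ssh/sshd_config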


I don't think the guarantees you've numbered 1, 2, or 3 are true. Insofar as the package uses the standard mechanism for installing files, sure, it can guarantee that. But I don't believe it hooks a tracer up to the installer script to detect the betrayal of those guarantees. I think it just runs the install script, as root, trusting that the files list and uninstall scripts will do their job. The whole thing is based on implicit trust of the package maintainer, not guarantees in software.


You're right, and I've called that out in my post as well (re: Oracle Java, as an example).

That said, I've got far more trust in someone who's gone to the trouble of making a .deb file than someone who put a shell script on GitHub.


This is not exactly a fair comparison, because the behaviour is documented and configurable, but I recently found out that apt on its default settings does something unexpected (for me at least) when removing packages (purge + autoremove). Normally you (I) would expect all automatically installed dependencies (depends/recommends/suggests) to be gone afterwards, if no other package references them in its depends/recommends lists (which is what gets installed under the default settings).

However, it turns out that if a package suggests another package, and that other package somehow gets installed, the suggested package will no longer be autoremoved, because autoremove honors suggests relationships as a reason not to remove automatically installed packages. There are valid reasons for this (e.g. when installing something with --install-suggests), but it also amounts to a lot of unwanted packages after a while of installing and uninstalling software. I don't know if there's a widespread name for this, but I call it "suggestion congestion".

Of course, one can turn this off by setting APT::AutoRemove::SuggestsImportant to "false". And really, that is an awful problem to solve, since you have to deal with different users and package maintainers with different expectations. And apt still solves a lot more problems than it creates.
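For reference, the option can be applied per-invocation or persistently; a sketch (the apt.conf.d file name is just my own choice):

    # one-off: ignore Suggests when deciding what autoremove may keep
    sudo apt-get autoremove --purge -o APT::AutoRemove::SuggestsImportant=false
    # persistent: drop the setting into an apt.conf.d snippet
    echo 'APT::AutoRemove::SuggestsImportant "false";' | sudo tee /etc/apt/apt.conf.d/99-autoremove-suggests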

But I'm now convinced that there is no such thing as a clean uninstall. At least not until the year of the stateless ZFS snapshot rollback NixOS desktop.


You can also pass --no-install-recommends to apt for a one-off installation, to avoid pulling in a ton of garbage from a specific package.
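For example (the package name is a placeholder):

    # skip Recommends for this one installation only
    sudo apt-get install --no-install-recommends somepackage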


Meanwhile, in Windows-land, literally everything is installed by clicking Setup.exe.


Not quite. Corporate machines will most likely have some kind of management like SCCM, and there are options like PatchMyPC or Ninite for home users.

There's also Chocolatey and OneGet or whatever it's called today, plus vcpkg and nuget over in developer land.


Chocolatey and OneGet packages are usually wrappers around setup.exe/setup.msi with command-line arguments to keep the installer quiet, nothing more.
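As a rough illustration (not a real package script, just the kind of silent-install flags those wrappers typically pass through):

    REM NSIS-style installers usually take /S for a silent install
    setup.exe /S
    REM MSI installers go through msiexec with quiet/no-restart flags
    msiexec /i setup.msi /qn /norestart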



