I deeply disagree. Docker’s key innovation is not its isolation; it’s the packaging. There is no other language-agnostic way to say “here’s code, run it on the internet”. Solutions prior to Docker (eg buildpacks) were not so much language agnostic as they were language aware.
Even if you accept the disadvantage that any non-Docker solution won't be language-agnostic: how do you get the code bundle to your server? Zip & SFTP? How do you start it? ./start.sh? How do you restart under failure? Systemd? Congrats, you reinvented docker but worse. Want to upgrade a dependency because of a security vulnerability? Do you want to SSH into N replicated VMs and run your distribution-specific package-update command, or press the little refresh icon in your CI to rebuild a new image and be done?
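To make the comparison concrete, the whole build-ship-run loop is a handful of commands. A hedged sketch, where the registry hostname, image name, and port are all made up for illustration:

```shell
# Build once in CI; a rebuild also picks up patched base-image packages,
# which is the "little refresh icon" upgrade path described above.
docker build -t registry.example.com/myapp:1.0.1 .
docker push registry.example.com/myapp:1.0.1

# On each server: pull and run. No zip, no SFTP, no start.sh,
# no distro-specific package commands.
docker pull registry.example.com/myapp:1.0.1
docker run -d --restart=always -p 8080:8080 registry.example.com/myapp:1.0.1
```

The same four commands work whether the app inside is Go, Python, or a 15-year-old PHP monolith, which is the language-agnostic point.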
Docker is the one good thing the ops industry has invented in the last 15 years.
This is a really nice insight. I think years of linux have kind of numbed me to this. I've spent so much time on systems which use systemd now that going back to an Alpine Linux box always takes me a second to adjust, even though I know more or less how to do everything on there. I think docker's done a lot to help with that though since the interface is the same everywhere. A typical setup for me now is to have the web server running on the host and everything else behind docker, since that gives me the benefit of using the OS's configuration and security updates for everything exposed to the outside world (firewalls, etc).
Another thing about packaging. I've started noticing myself subconsciously adding even a trivial Dockerfile to most of my projects now, just in case I want to run them later without the hassle of installing anything. That way I have a "known working" copy which I can more or less rely on to run if I need to. It took a while for me to get to that point, though.
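For reference, "trivial Dockerfile" really does mean a few lines. A sketch assuming a Node project; the base image and start command are placeholders for whatever the project actually uses:

```dockerfile
FROM node:20-slim
WORKDIR /app
# Install dependencies first so this layer is cached across code changes
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```

Even this minimal version pins the runtime and dependency versions, which is most of what "known working" requires.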
It's all the same stuff. Docker just wraps what you'd do in a VM.
For the slight advantage of deploying every server with a single line, you've still got to write the multi-line build script, just for docker instead. Plus all the downsides of docker.
There's another idea too, that docker is essentially a userspace service manager. It makes things like sandboxing, logging, restarting, etc the same everywhere, which makes having that multi-line build script more valuable.
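To make the "userspace service manager" point concrete, here is a sketch using real `docker run` flags; the service and image names are made up:

```shell
# Restart-on-failure, resource limits, sandboxing, and log rotation,
# all from the same interface on any host:
docker run -d --name myservice \
  --restart=unless-stopped \
  --memory=512m --cpus=1 \
  --read-only --cap-drop=ALL \
  --log-driver=json-file --log-opt max-size=10m \
  myimage:latest

# Uniform log access and lifecycle management, regardless of distro:
docker logs -f myservice
docker restart myservice
```

On a plain host you'd assemble the same behavior from systemd units, cgroup slices, seccomp profiles, and logrotate configs, each with its own syntax.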
In a sense it's just the "worse is better" solution[0], where instead of applying the good practices (sandboxing, isolation, good packaging conventions, etc) which leads to those benefits, you just wrap everything in a VM/service manager/packaging format which gives it to you anyway. I don't think it's inherently good or bad, although I understand why it leaves a bad taste in people's mouths.
Docker images are self-running. Infrastructure systems do not have to be told how to run a Docker image; they can just run it. Scripts, on the other hand, are not: at the simplest level because you'd have to tell your infrastructure system the script's name, but more fundamentally because a run script usually implies dependencies on its environment that it does not (and, frankly, cannot) express. Docker solves this.
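This self-running property comes from baking the start command into the image itself. A minimal sketch, assuming a hypothetical Python app:

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
# The run command ships inside the image, so no orchestrator or operator
# ever needs to know the script's name or its environment dependencies:
ENTRYPOINT ["python", "app.py"]
```

After that, `docker run <image>` is all any infrastructure needs to know, which is exactly the contract a bare script can't offer.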
> Docker just wraps what you'd do in a VM.
Docker is not a VM.
> Plus all the downsides of docker.
Of which you've managed to elucidate zero, so thanks for that.