Being caught in the containerization craze myself, I'd love to hear whether the story is exaggerated or painfully accurate.
So far I've been bitten by the inability to clean up images over a certain age.
UPDATE:
Another really annoying thing is the inability to tag an image directly in a registry (AFAIK). You need to pull, tag, and push it back again. Given that images can be gigabytes in size, you end up with really heavy network traffic for a simple task.
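The round-trip looks roughly like this (image and registry names are hypothetical):

```shell
# Round-trip needed to "retag" an image that lives in a remote registry.
retag_remote() {
  src="$1"   # e.g. registry.example.com/myapp:dev-latest
  dst="$2"   # e.g. registry.example.com/myapp:qa-latest
  docker pull "$src"          # downloads every layer locally
  docker tag  "$src" "$dst"   # instant, local-only rename
  docker push "$dst"          # layers already in the registry are skipped
}
```

(The Registry v2 HTTP API does let you fetch a manifest and PUT it back under a new tag, which avoids moving layers entirely, but the docker CLI itself exposes no command for that.)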
We are currently moving to Docker SwarmKit (from a non-Docker environment). We thought about using tags to communicate the target environment, but it felt like an anti-pattern given how tags work. What we ended up with is the following:
Per publishable branch (e.g. master to prod, dev to QA, a shared integration branch that isn't ready for dev to a secondary QA):
- Publish the image to a private Docker registry with a tag in the format <branch>-latest (e.g. master-latest, dev-latest). Also add labels with the git revision hash and CI build number so that what is in the image can be reconciled with both CI and the git repo.
- Capture the digest of the published image and store it in a .txt file as a CI artifact.
- Add a tag to the git repo with the build number, treated as a version number, in the form v<build-number>-<branch> (e.g. v1-master).
- Maintain a list of the applications' Docker image digests that should be running in each environment. We currently determine this automatically by pulling the digest artifact from the latest passing build, but we do allow manual overrides.
- We have a CI project that updates the Docker swarm with these digests and other settings (replicas, mounts, environment variables). We wrote a small tool for this, similar to where stacks/DAB files are heading but with more functionality.
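The publish steps above might look roughly like this in a CI script (the registry, image name, and the $GIT_SHA/$CI_BUILD_NUM variables are hypothetical placeholders):

```shell
# Sketch of the per-branch publish step described above.
publish_branch_image() {
  branch="$1"                                   # e.g. master, dev
  image="registry.example.com/myapp:${branch}-latest"

  # Bake the git revision and CI build number into the image as labels.
  docker build \
    --label "git-sha=${GIT_SHA}" \
    --label "ci-build=${CI_BUILD_NUM}" \
    -t "$image" .
  docker push "$image"

  # Capture the content digest and save it as a CI artifact.
  docker inspect --format '{{index .RepoDigests 0}}' "$image" \
    > "digest-${branch}.txt"

  # Tag the git revision with the build number, e.g. v42-dev.
  git tag "v${CI_BUILD_NUM}-${branch}" "$GIT_SHA"
  git push origin "v${CI_BUILD_NUM}-${branch}"
}
```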
With the labels and published digests we can go from build to digest, digest to build, and so on.
This is working really well so far and we aren't fighting the tools. We may split out a separate non-master registry later on, which is easy given that Docker treats the private registry hostname as part of the image name.
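The deploy side of this scheme can be sketched as well: pinning services to a digest rather than a tag is what makes the environment list above reproducible. Service and image names here are hypothetical:

```shell
# Update a swarm service to the exact image digest recorded for an
# environment. A digest reference never moves, unlike a tag, so
# rollback and auditing stay deterministic.
deploy_digest() {
  service="$1"   # e.g. myapp
  digest="$2"    # e.g. registry.example.com/myapp@sha256:abc123...
  docker service update --image "$digest" "$service"
}
```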
But then you could just push from the staging server. And it's probably best to keep this independent of the image tag, i.e. your system knows image v1.1.2 is the latest stable, but the image name never changes. If you retag your stable image as "latest", you run into issues with rollbacks and with figuring out what you're actually running.
That's what I do, but since the process runs in parallel as part of a complex CI system, I had to write quite a bit of logic to do the pruning safely myself. The main problem was that I couldn't rely on the Docker daemon not deleting images still needed by other builds running in parallel.
No, there is no simple command to remove old images, because the creators of Docker want you to compose existing commands to delete them, so adding a dedicated command was deemed unnecessary. IMO this is a terrible decision, but that was apparently the rationalization.