My standard way of "smoke testing" Docker images is to run a verify script at the end that checks that all the expected binaries are on the PATH, and tries to access any expected environment variables with `-u` set.
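A minimal sketch of such a verify script (the binary list and variable names are placeholders for whatever the image is expected to provide):

```shell
#!/bin/sh
# Abort on any error and on any use of an unset variable.
set -eu

# Check that each expected binary is on the PATH.
# (sh and ls are stand-ins for the image's real tools.)
for bin in sh ls; do
    command -v "$bin" >/dev/null 2>&1 || {
        echo "missing binary: $bin" >&2
        exit 1
    }
done

# With -u set, merely expanding a variable aborts the script if it
# is unset. PATH stands in for the image's expected variables.
: "${PATH}"

echo "smoke test passed"
```

Running this as the image's final build step (or as its healthcheck) makes the build fail fast if a binary or variable went missing.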
This feels to me like the wrong approach. You shouldn't be trying to test if a file exists or a command was run as a 'unit test' (as a sanity check, maybe, but not a test) - that's testing the implementation.
You should be testing for the desired behavior that having that command run or having that file there was supposed to achieve.
IMHO even the use of containers should probably be considered an implementation detail.
Sometimes the desired behaviour is actually generating a Docker image with specific contents. Think of the pause container image that every Kubernetes cluster out there uses. To put it another way, this is for build targets producing Docker images that are needed by an arbitrary set of projects, possibly from third parties.
Suppose you have an organization-wide Docker image from which tens of other images, built by a number of teams, are derived. You could add a test to make sure that the ca.crt file is always packaged and never empty, or you could let tens of downstream tests and images fail in mysterious ways when it's missing (unless you replicate the check in every project that depends on the base image).
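With container-structure-test (the tool under discussion), that base-image guarantee could be written roughly like this — the cert path is illustrative:

```yaml
schemaVersion: '2.0.0'
fileExistenceTests:
  - name: 'org CA certificate is packaged'
    path: '/etc/ssl/certs/ca.crt'   # illustrative path
    shouldExist: true
fileContentTests:
  - name: 'org CA certificate is not empty'
    path: '/etc/ssl/certs/ca.crt'
    expectedContents: ['BEGIN CERTIFICATE']
```

Downstream teams then inherit the guarantee instead of each rediscovering the breakage.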
The use case is more for infrastructure that explicitly deals with or supports containers. Generic applications, as you say, shouldn't bother.
The pause container is very much an implementation detail. That's sort of my point. Why would you try to test implementation details?
>You could add a test to make sure that the ca.crt file is always packaged
I could, but why? If I wrote a command to put ca.crt there I would expect it to be there unless somebody deliberately took it out.
It would be even better if I wrote a command to put ca.crt in the wrong place and then tested to see that it was in the wrong place. The test would be worse than useless in that case - it would fail on a working container and pass on a broken container.
The ca.crt is there because it triggers desired behavior - e.g. that you can connect to services over SSL without errors. Test that instead.
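A behavioural check along those lines might look like this sketch — the host would be whatever service the image actually needs to reach; it attempts a real TLS handshake using the container's trust store rather than looking for the file:

```python
import socket
import ssl


def tls_handshake_ok(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a verified TLS handshake to host:port succeeds
    using the system CA bundle (the behaviour ca.crt exists to enable)."""
    context = ssl.create_default_context()  # loads the system trust store
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.SSLError):
        return False
```

Run inside (or against) the container, failing the build when this returns False tests what the cert is there to enable, rather than where the cert lives.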
>The use case is more for infrastructure that explicitly deals with or supports containers.
Personally I'd consider containers themselves an implementation detail most of the time - the exception being when you're building containers for other people to use.
I still think it's the wrong approach for that too.
You can mix and match between tests that check file contents and tests that actually run commands inside the container and make assertions based on their output.
Maintainer of the tool here - this is definitely the main use-case. We produce container images that are used directly or indirectly (as a base image by other users), and the structure tests here are a way for us to define the contract our container exposes and test/verify that we continue to maintain that contract quickly and easily.
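A sketch of what such a contract might look like in the tool's config format — it can mix file checks with commands run inside the image and assertions on image metadata (the command, entrypoint, and port values here are made up for illustration):

```yaml
schemaVersion: '2.0.0'
commandTests:
  - name: 'python is installed and can load ssl'
    command: 'python3'
    args: ['-c', 'import ssl']
    exitCode: 0
metadataTest:
  entrypoint: ['/usr/local/bin/entrypoint.sh']   # illustrative
  exposedPorts: ['8080']
```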
I get you, wrong wording, it's not exactly a standard unit test, etc. But it doesn't change the meaning and intention of it - treat infrastructure as you treat code (I guess that also includes debating whether something is a 'unit' test or 'other' kind of test).
Also, having a file on a container (which is probably the easiest test to perform) often is _the desired behavior_ of a command or something else.
>Also, having a file on a container (which is probably the easiest test to perform) often is _the desired behavior_
No it isn't. The end user doesn't give a damn whether a particular file is on the container. That's an implementation detail. The end user wants:
* Pages to load quickly
* To not have to face data inconsistency bugs
* Pages to operate while the site is under high load
* For various services your system connects to to work properly (e.g. clicking 'get two factor code' actually sends an SMS).
Checking to see if a certain file is present is pointless if that doesn't lead to the desired system behavior.
Moreover, if you have the means to verify the desired system behavior:
* The presence of the file, if it was required, can be assumed.
* If you swap out a component and stop needing that file to be present to achieve the same behavior your test will still fail even if your system works perfectly. That's an extremely undesirable property to have in a test.
Checking for the presence of a file and failing hard if it is not there as part of a build is sometimes a good way of sanity-checking a component, but as an outside 'test' of that component it's a bad idea.
In which case it would make most sense to build a mock container on top which clearly exhibits the desired behavior of the underlying container and then test the behavior of that.
Maintainer of the tool here - I'm not sure I understand this idea. What would go in the mock container? How would you test against it? Would you mind elaborating a little more?
I'm assuming that in this case the base container would be a 'ruby' container or something. You'd build an example container atop it running a bit of example ruby code - verifying that it behaves properly. Those behavioral tests could be used to verify that the underlying container is configured properly.
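As a sketch of that idea — the base image name and spec script are hypothetical:

```dockerfile
# Derive a throwaway test image from the base under test.
FROM my-org/ruby-base:latest

# A tiny example app that exercises the behaviours the base promises:
# TLS against the packaged CA bundle, native-extension builds, etc.
COPY smoke_spec.rb /spec/
CMD ["ruby", "/spec/smoke_spec.rb"]
```

Building and running this image and asserting on its exit code tests the base container's behaviour rather than its file layout.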
I suppose that's sort of what you're doing in the example you linked to above.
I guess an "example". I generally agree with the parent poster. Testing for files does not sound useful. Test the functionality, not the structure. But not everyone's Google.
Agreed. What might be useful is a file list comparison across builds to alert someone if new files are present in a container that weren't in previous versions.
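The comparison itself is straightforward once you have a file listing per build (e.g. from `docker export <container> | tar -t`); a sketch:

```python
def diff_file_lists(
    previous: list[str], current: list[str]
) -> tuple[set[str], set[str]]:
    """Return (added, removed) paths between two image file listings."""
    prev, curr = set(previous), set(current)
    return curr - prev, prev - curr


# Example listings from two hypothetical builds of the same image.
added, removed = diff_file_lists(
    ["/etc/ssl/certs/ca.crt", "/usr/bin/python3"],
    ["/etc/ssl/certs/ca.crt", "/usr/bin/python3", "/usr/bin/pip3"],
)
print(sorted(added))    # -> ['/usr/bin/pip3']
print(sorted(removed))  # -> []
```

Wiring this into CI and alerting on a non-empty diff gives the "new files appeared" warning without asserting on any particular file.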
Just package the content into an RPM and use rpm to perform these checks at installation time, as usual. Moreover, the Dockerfile becomes a simple "yum install package" command.
This could be quite useful to define the basic things a container should exhibit. A lot of the time you rely on certain files being present or a specific entrypoint and command combo when running containers under orchestration with sidecars or volume mapping etc.
So this could help you define what is needed and why for others who will change your container build steps in future.
It could also complement your Goss or InSpec integration tests quite nicely.
That's awesome. One step closer to treating infrastructure exactly as code.
I'd imagine you could stop a build if the docker image generated doesn't have, say, a valid python installation because someone mistyped a command, and that doing so would be quicker than standing up the image and running an external test against it.
https://github.com/EngineerBetter/concourse-up/blob/master/c...
https://github.com/EngineerBetter/concourse-up/blob/master/c...