I guess the value in reporting it is that most people, including us on HN, consider computing accurate. You can trust the output if you trust the input and the program that processes it. That is what we expect and value in computing: accuracy.
For LLMs that's no longer really the case, and it needs to be highlighted that "computers" no longer necessarily produce accurate output, so that people don't put too much faith in what they produce.
> "computers" no longer necessarily produce accurate output
This was always the case. Just because a computer executes your model doesn't mean your model has any bearing on reality. This is not a new phenomenon.