I wonder if this phenomenon comes from how reliable lower layers have become. For example, I never check the binary or ASM produced by my code, nor even intermediate byte code.
So vibers may be assuming the AI is just as reliable, or at least can be made so with enough specs and attempts.
I have seen enough compiler (and even hardware) bugs to know that you do need to dig deeper to find out why something isn't working the way you thought it should. Of course I suspect there are many others who run into those bugs, then massage the code somehow and "fix" it that way.
Yeah, I know bugs exist in lower layers. But since those layers are mostly deterministic (hardware glitches aside), I think they are relatively easy to rely on. Whereas LLMs seem to have an element of intentional randomness built into every prompt response.
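To make that "intentional randomness" concrete: most LLM serving stacks sample the next token from a temperature-scaled softmax rather than always taking the most likely one. Here's a minimal sketch (the function name and logits are illustrative, not any particular model's API) showing how the same logits give a near-deterministic pick at low temperature and varied picks at high temperature:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample a token index from raw logits after temperature scaling.

    Low temperature sharpens the distribution toward argmax (near
    deterministic); high temperature flattens it (more random).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the categorical distribution
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

logits = [1.0, 5.0, 2.0]
# Near temperature 0, the second token wins essentially every time:
cold = {sample_with_temperature(logits, temperature=0.01) for _ in range(50)}
# At high temperature, repeated calls spread across several tokens:
hot = {sample_with_temperature(logits, temperature=10.0) for _ in range(200)}
```

So even a "fixed" prompt is a draw from a distribution, unlike a compiler pass, which is why re-running the same prompt can "fix" (or re-break) the output.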