It's likely not all of this, but I expect one element is: a meaningful number of people are essentially refusing to work with AI.
Anecdotal, but I have spoken to friends at Google who tell me many co-workers say "I tried it, it didn't work, I'll do it myself" when really they just didn't try very hard at all.
Edit: that is to say, if a percentage of your workforce is avoiding helping you explore a current trend (valuable or not, TBD, sure), I can see rational arguments for removing them from the team.
If, as a member of the C-suite, I find that a noticeable percentage of my company's workforce isn't "helping to explore a current trend," then either they know something I don't or I haven't given them the time/methods by which to explore.
The latter is actually the more pertinent point. I've seen it several times: an initiative gets rolled out by leadership, some teams have free time to play around with it, and other teams have so much on their plate that they're barely keeping their heads above water, let alone able to take on another experiment. If someone is worried about getting a project knocked out by end of month/quarter/year in order to keep their job, they're not going to mess about.
Now, that's a leadership failure, but it happens more often than not.
To add to the speculation, it's possible that the people refusing to use it are simply working slower. Even if the code they write is objectively better by any metric you'd like, humans can't pump out code as fast as Claude or Codex can.
If you can get something into "good enough" territory in 1/10th the time of someone who can get it into "great" territory, that is often worth it.