> I am going to try to make these points to my team, because I am seeing a huge influx of AI-generated PRs where the submitter interacts with CodeRabbit etc. by having Claude/Codex respond to feedback on their behalf.
Are people generally unhappy with the outcomes of this? Anecdotally, it does seem to pass review later on. Code is getting through this way.
It's a slippery slope. You're swamped with low-effort PRs and can't possibly test and review all of them. You become a visible bottleneck, and guess which is easier: defending quality, or being accused of "blocking a lot of features" that "seem to work". If your salary depends on your role as a reviewer, you'll have to let things through, and you'll still suffer the consequences of the "lack of oversight" when things go south.
Just reject a bunch of PRs two days before code freeze. They can go next sprint. In fact, ask AI to provide a plausible reason for rejection. If anyone overrides you, you're covered.