That would be a nightmare. It's one thing to review a PR generated by a human who used AI and cared about the code; it's another to review wild agents, especially when they make changes everywhere.
I'm not excited about it, but the only ways I've been able to discover LLM-isms that sneak in are
1. by catching them as they flash by in the agent's window while it's making edits (i.e. manual oversight), or
2. by running into an unexpected issue down the line.
If LLMs cannot automatically generate high-quality code, it seems like it may be equally difficult for them to automatically notice when they've generated bad code.