

LLMs are not peers. They should have no part in the peer review process.
You could argue that an LLM is just a tool real peer reviewers use to help with the process, but if you do, you can't get mad that authors are shadow-prompting to improve their chances of being seen by a human.
Authors already consciously write their papers in ways that are likely to be approved by their peers (using professional language, good data, and a standard structure). If the conditions for what makes a good paper change, you can't blame authors for adjusting to the new norms.
Either ban AI reviews entirely, or let authors try to game the system. You can’t have both.
“Mr. Trump, we’re losing money because Canadian tourism is massively down. What should we do?”
“Let’s make coming to the US even less attractive. That’ll surely help.”