The tech lead on my team started using AI to do code reviews he can’t be bothered to do properly himself. His suggestions during reviews are now shit. I hate the future. I’m seriously thinking of taking a leadership position just because of how much I hate this dynamic, to shield others and discourage AI usage from a higher ground.
You need to tell him that. Seriously. Pick some especially bad example to point at, and come to him with, “Hey Bob, I used to get real value from your review comments. I could tell you were thinking about what to say and it helped me to produce better code. Now it seems I am mostly seeing LLM-generated junk like this one that doesn’t help anybody. This isn’t an improvement, can you go back to the more helpful way you used to do things?”
Using AI just to be lazy is by far the worst use case. It should be a tool for speeding up repetitive work, and its output (if it matters) should always be reviewed by humans.