Agreed. Use an LLM all you want to do the discovery and proof, but do not use it to replace your voice. I literally can't read it; my brain just shuts off when I see LLM text.
It is a strange phenomenon though, these walls of text that LLMs output, when you consider that one thing they're really good at is summarization. If they are trained on bug report data, you'd think they would reproduce its style and conciseness.
Is it mainly post-training that causes this behaviour? They seem to do it for everything, as if they are strongly biased towards verbose output these days. Maybe it has something to do with reasoning models being trained to produce longer output?
I'm sure it's also a waste of maintainers' time to drop a wall of AI bullet points instead of just sharing the critical information.