Ask the LLM for information about your topic of choice along with supporting citations. Skim the cited publications to make sure that they exist and actually support the information produced by the LLM. Assuming that the citations pass your checks, post links to them here along with excerpts.
Human-verified information from credible publications is a good thing to share here, whether you originally came across the information from books, search engines, or LLMs. Sharing LLM output by itself is discouraged here.
Asking for supporting citations is a good LLM technique I was not aware of. Appreciate the tip!
And of course the process you're describing would result in a much better post, after much more work.
That said, those are good standards for scientific journals. To me, discussion posts in a hacker forum have a much lower bar, and I think I fulfilled my civic duty by saying it came from an AI.
The fundamental problem with asking an LLM when participating in a forum is that we're here because we're curious about your side of things. If we were curious about the output of an LLM, we would ask the LLM directly. The value in a discussion like this is the human element itself.
So yes, I encourage you to just open up about the thing under discussion, even if you end up being wrong, or don't write the best-resourced comment. The authenticity, and the exchange itself, is the point!
To be clear, I don't really have a problem with using AIs as a possible starting point. If they have citations for example, you can check those and make sure the AI isn't making things up, or sometimes they can point you in the right general direction of things to research and verify. But using them directly as a source is just nonsensical.
It's wild that you posted that, as if it's hard to look things up, or as if you provided anything close to a fact-check yourself. People with your level of information literacy are going to be the end of us all.