
No, you absolutely are not. It's like an extra parity bit: you have more information than you had before.



Does that extra information come from a process separate from the LLM network? If not, and assuming that, as usual, the same input doesn't guarantee the same output, then all bets are off, correct?


Sorry for the late reply, but if you read this: there is research showing that prompting an LLM to take a variety of perspectives on a problem (IIRC it was demonstrated with code) and then taking the most common answer improved benchmark scores significantly. So, for example, if you ask it to provide a brief review and a likelihood estimate for the answer, and repeat that process from several different perspectives, you can get some very solid data.
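
For anyone curious, a minimal sketch of that voting idea in Python. query_llm is a hypothetical placeholder for whatever LLM API you use (not something from the research above); the perspective prompts and the majority vote are the point:

    from collections import Counter

    # Hypothetical perspectives; swap in whatever roles fit your problem.
    PERSPECTIVES = [
        "a security reviewer",
        "a performance engineer",
        "a junior developer seeing this for the first time",
    ]

    def query_llm(prompt: str) -> str:
        """Placeholder: send `prompt` to an LLM and return its reply."""
        raise NotImplementedError("wire this to your LLM provider of choice")

    def consensus_answer(question: str) -> str:
        # Ask the same question from several perspectives, then take
        # the most common final answer as the consensus.
        answers = []
        for role in PERSPECTIVES:
            prompt = (
                f"Acting as {role}, give a brief review of the question "
                f"below and state your final answer on the last line.\n\n"
                f"{question}"
            )
            reply = query_llm(prompt)
            # Vote only on the final-answer line, so differences in the
            # free-form review text don't split the vote.
            answers.append(reply.strip().splitlines()[-1])
        most_common, _count = Counter(answers).most_common(1)[0]
        return most_common

Voting on just the last line is a crude normalization; in practice you'd want to parse or canonicalize the answers before counting.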



