
This is by design, given a non-deterministic application?




Sure. It may be more than that... possibly due to variable operating params on the servers and current load.
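
(Rough illustration of the "by design" part: with temperature > 0, the next token is sampled from a probability distribution rather than picked greedily, so identical prompts can yield different completions even on identical hardware. The tiny vocabulary and logits below are made-up placeholders, not anything from a real model.)

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        """Pick a token index from logits; temperature 0 falls back to greedy argmax."""
        if temperature == 0:
            return max(range(len(logits)), key=lambda i: logits[i])  # deterministic
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(l - m) for l in scaled]   # numerically stable softmax
        total = sum(exps)
        probs = [e / total for e in exps]
        return random.choices(range(len(logits)), weights=probs)[0]  # stochastic draw

    # Made-up logits over a toy vocabulary, purely for illustration.
    vocab = ["yes", "no", "maybe"]
    logits = [2.0, 1.5, 0.5]

    print([vocab[sample_next_token(logits, temperature=0.8)] for _ in range(5)])  # varies per run
    print([vocab[sample_next_token(logits, temperature=0.0)] for _ in range(5)])  # always "yes"

Even at temperature 0, hosted models aren't guaranteed to be bit-identical across runs, since batching and floating-point non-determinism on the serving hardware can shift logits slightly, which is roughly the "operating params and load" point above.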

On the whole, if I compare my AI assistant to a human worker, I get more variance than I would from a human office worker.


That's because you don't 'own' the LLM compute. If you instead bought your office workers by the question, I'm sure the variability would increase.

They're not really capable of producing varying answers based on load.

But they are capable of producing different answers because they feel like behaving differently if the current date is a holiday, and things like that. They're basically just little guys.




