Why do you think this analogy is even remotely correct? It’s well-known that LLMs produce non-deterministic results. It’s also well-known that they hallucinate. To make it even clearer, all the top LLM players make sure to remind everyone of that behavior. If calculators had similar effects and warnings, it would have been a valid analogy. Instead, you're comparing apples and oranges.
I think it's a valid analogy in some contexts. Like when talking to a person who isn't aware of non-determinism and hallucinations. Which happens on this website very frequently.
Many people here tell you to use AI like you use a calculator. With minimal or no oversight, with full access to production systems, etc.
Letting a non-deterministic tool communicate on your behalf, or giving it access to critical systems, is evidence enough that a good number of people are not aware of these facts.
Even worse is the back button. Scroll any store and click on an item on page 10 of infinity and then use the back button. Back to page 1 you go. That’s why I open everything in a new tab pretty much by default.
Or you click things on some form, it changes query params in the url (that's fine), but then it turns out each change created a new history item, and to actually navigate back to the previous page you need to click Back 300 times >:(
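The fix for that pattern is to update the URL without growing the history stack. A minimal sketch (the `withParam` helper and URLs are hypothetical, and the `history.replaceState` call is what a real page would use instead of `pushState` for per-keystroke filter tweaks):

```javascript
// Sketch: change a query parameter without adding a history entry.
// Assumption: a filter/pagination UI calls this on every change.
// Using history.replaceState (instead of pushState) collapses all
// the filter tweaks into one entry, so a single Back leaves the page.

function withParam(urlString, key, value) {
  // Build the new URL with the changed query parameter.
  const url = new URL(urlString);
  url.searchParams.set(key, value);
  return url.toString();
}

// In the browser you would then call (replace, not push):
//   history.replaceState(null, "", withParam(location.href, "page", "10"));

console.log(withParam("https://shop.example/list?page=1", "page", "10"));
```

`pushState` is still the right call for real navigations (a new search, a new item); it's the incidental state changes that belong in `replaceState`.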
I just think there are far too many. Sometimes I just want to add a happy face and there are 500 variations. Then I find one which I think looks happy and hover over it for a second and the tooltip says “shivering anxious constipated”.
Tons of middle management that makes no decisions whatsoever.
Every time you ask a question, they delegate, until you end up at person 1 again and they just can't decide anything.
It's like they all have decision paralysis.