I suspect the issue here is the assumption that LLMs are "just flipping some bits," while simultaneously putting humanity on some unreachable pedestal.

We are all nothing but a horde of molecular machines. Your "you" is just individual neurons reacting to input in accordance with their current chemical state and active connections. All your experiences, unique personality traits, and the creativity you add to the process are solely the result of the current state of your network of neurons in response to a particular input.

But while an LLM is trained once and then has its state fixed in place regardless of input, we "train" continuously, and while an LLM might have ingested an inhumanly large corpus on a certain subject, we have many "irrelevant" experiences to mix things up.

Your "prompt" is also messy, including the current sound of your own heartbeat, the remaining taste in your mouth from your last meal, the feeling of a breeze through your hair as it tickles your neck, while the LLM has just one, maybe two half-assed sentences. This mix of messy experiences and noisy input fuels "creativity". You don't think "I need to copy XYZ", but neither does the AI. You both just react.

In some regards our chaos is better, in others it is worse. But while the machinery of an LLM still does not even remotely approach a brain, we should not forget that we are nothing more than a cluster of small machines, assembled from roughly 750 MB worth of blueprint.
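(For what it's worth, the "roughly 750 MB" figure is just a back-of-envelope estimate; here's a sketch of the arithmetic, assuming ~3 billion base pairs at 2 bits each, uncompressed and ignoring the diploid copy:)

```python
# Back-of-envelope: raw information content of the human genome.
# Assumes ~3 billion base pairs, each one of 4 nucleotides (A/C/G/T),
# so 2 bits per base pair, with no compression applied.
base_pairs = 3_000_000_000
bits_per_base = 2  # log2(4) possible nucleotides
total_bytes = base_pairs * bits_per_base // 8
print(f"{total_bytes / 1_000_000:.0f} MB")  # → 750 MB
```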
