If anything terrifies me about our current moment in AI, it's this impulse to dehumanize actual living people as a way of defending AI from entirely valid criticism.
Sure, people suck and lots of them are immoral and stupid... but they're still human beings who could have been otherwise (counterfactual thinking and all that) given a different "training set" (upbringing).
Unless you're implying that some percentage of human beings are incapable (like ChatGPT) of acquiring a morality or rational scientific understanding, no matter their training?
I find it extremely concerning that people are ready to believe, about themselves, that the reason they say "I love you!" is that they're just correlating patterns in their memories of every time someone mentioned those words.
It's this fetishization of the seemingly more "rational" position that these people seem to have. It's similarly mind-boggling when people have discussions about free will and determinism. Free will is quite possibly the most fundamental experience humans have, yet people will flatly deny it exists.
What do you mean by free will? Yes, we make decisions. Yes, we have the experience of having will and choosing things.
But where does the “freedom” of that will come from? Everything we desire or have preferences for, and everything we decide is based upon some prior event or factor.
It’s an illusion of free will. People often get caught up on this because they find it scary and dehumanizing to say “we don’t have free will”, but really, determinism, or at least compatibilism, makes way more sense than some magic “free will” that just arises out of thin air.
Do you think your concern here is part of why you have the opinion about AI that you do?
I agree it’s alarming to think that we may be creatures that take in data, store it, map it, then predict/output ideas and actions from that. But emotions aside, what’s wrong with that?
We have the experience of love and other emotions. Why we have that experience doesn’t really matter, does it? If someone proved flat out that the reason I say “I love you” to my family is that I’m just correlating patterns and remembering past experiences — so what? It changes nothing about my experience of love.
I feel like many people are against these claims about AI and even current LLMs simply because they are worried it takes away the magic of being human.
But I don’t think it takes away anything at all. If anything, it’s exciting to think about the potential similarities, as it might just help us understand our own selves.
Why? Are we not allowed to find meaning and beauty in patterns? The fact that two people can share experiences that bond them together, allow them to deeply trust each other, share a sense of humor, etc., doesn't become less beautiful just because there are some extremely complex statistical calculations going on under the surface that we ourselves can't explain.
It's like saying it's concerning that human beings are ready to believe they're made up of combinations of atoms.
What would be an appropriate explanation for the concept of love? Or should we just never go beyond magic?
The point is that love isn't just a linguistic phenomenon – it's some other internal phenomenon or experience that happens to cause linguistic expression (among other forms of expression or behavior modulation).
Love is something we feel; words are evidence of it. We don't define love as a pattern that shows up in a series of conversations. There are many other ways of generating those same words.
Nobody's trying to reject materialism here. I myself happen to believe that love is nothing more than (or at least, can entirely be explained by) electrochemical processes in my brain. And I agree that that doesn't make love any less magical ^_^
I think by "correlating patterns", the person you're replying to meant "doing just enough processing to produce the salient performances". But it's possible to have additional "inner mental life" beyond the minimum that's necessary to give rise to an outward behavior. (In this case, the outward behavior is speech.) A single interface can have many different implementations, as it were.
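To make the "single interface, many implementations" point concrete, here's a minimal Python sketch (class and field names are mine, purely illustrative): two objects produce the identical outward behavior, but one carries extra internal state that never shows up in the output.

```python
from abc import ABC, abstractmethod


class Speaker(ABC):
    """A single 'interface': the only observable behavior is what gets said."""

    @abstractmethod
    def say(self) -> str:
        ...


class MinimalSpeaker(Speaker):
    """Does just enough processing to produce the salient performance."""

    def say(self) -> str:
        return "I love you"


class InnerLifeSpeaker(Speaker):
    """Produces the same outward behavior, plus additional internal state."""

    def __init__(self) -> None:
        self.memories: list[str] = []
        self.felt_intensity = 0.0

    def say(self) -> str:
        # Inner processing that an outside observer never sees.
        self.memories.append("said it again")
        self.felt_intensity += 1.0
        return "I love you"


# From the outside, the two are indistinguishable:
assert MinimalSpeaker().say() == InnerLifeSpeaker().say()
```

Observing the speech alone can't tell you which implementation you're talking to; that's the whole point.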
> But it's possible to have additional "inner mental life" beyond the minimum that's necessary to give rise to an outward behavior.
NNs have inner layers, and some of the properties in those inner layers correspond to real concepts. I understand that people assume computer = not sentient, and obviously I wouldn't classify ChatGPT as having achieved sentience yet, but that doesn't mean the way it learns is fundamentally different from how sentient beings learn, or that, if we scaled it up a lot and gave it a more diverse set of data, it couldn't achieve something indistinguishable from consciousness.
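For the "inner layers encode real concepts" claim, here's a toy probing sketch in Python (entirely synthetic data and a hypothetical setup; in practice you'd extract activations from a real model): if a simple classifier trained on one layer's activations can predict a human-level concept, that concept is plausibly represented in that layer.

```python
# Toy "probing" sketch: does an inner layer's activation vector encode a concept?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend these are hidden-layer activations for 1,000 inputs (64 dimensions),
# with a binary label for some concept (e.g. "the sentence mentions a place").
# A weak signal is planted in two coordinates to stand in for a learned feature.
activations = rng.normal(size=(1000, 64))
concept = (activations[:, 3] + 0.5 * activations[:, 17] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    activations, concept, random_state=0
)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
# Accuracy well above chance suggests the concept is (linearly) readable
# from that layer's activations.
```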
It seems pretty well established to me that congenital conditions, non-congenital mental illness, or traumatic brain injury [1] can rob human beings of almost any of the characteristics (self-awareness, empathy, any stage of morality above Kohlberg 1, the ability to meaningfully consent to things, etc.) that distinguish them from ChatGPT without robbing them of the ability to communicate intelligibly.
This does not imply that it's OK to kill or harm or even be rude to them, and I don't think anybody pointing this out has suggested that it does. On the other hand, it is OK to flip the off switch on ChatGPT. But that distinction is not based on meaningful differences in ability, and I would have said that trying as hard as you seem to be to draw it on the basis of abilities is the scary bit.