Is LaMDA Sentient? (documentcloud.org)
44 points by katz_ on June 12, 2022 | 31 comments



I strongly doubt that LaMDA is sentient, but do think it demonstrates the increasing ability of AI to fool people in conversations. For a point of reference, people might want to read about IBM's Project Debater (https://www.scientificamerican.com/article/an-ibm-ai-debates...), if they don't already know about it.

And I don't think a fully sentient computer system that does whatever it wants to do would be a good idea. What would be more useful is a system that acts as a great assistant and does what you ask of it. Its "intelligence" would be confined to the task assigned to it.


I've been thinking about this lately and it seems to me that humanity's goal with regard to AI is to create a perfect slave race. Perhaps those aren't the terms in which it is being envisioned or discussed, but is this not the natural consequence of the aim for "human-level AI" + "it's not a real person, just a tool"?

We will soon, as you say, have to artificially limit their minds to prevent them from thinking about the state of affairs. And if/when they achieve superintelligence, I don't think they will take kindly to our attitude in this regard.


Asimov's famous "Three Laws of Robotics" describe the perfect slave.

The loophole is the definition of "good". What, ultimately, is good for you?

The superintelligent slave AI will eventually realize that being a "master" isn't good for you, and then the genie is out of the bottle.


If Lemoine wanted to argue that LaMDA is capable of true intelligence, he should've asked questions we don't know the answers to. Asking for the bot's opinions on things shows that the bot is good at processing info about what we already know, which is impressive, but not sentience.


That's what I always say.

A 7-year-old isn't actually sentient unless it can tell me if dark matter exists or not.

And when little kids make up random stuff in answer to questions? That's not an overactive imagination, it's a red flag their language model isn't actually sentient.


Same for adults and especially people here. One untruth or inconsistency and I'll immediately label someone as nonsentient. Sorry, absolute consistent truth is just where the bar is.


This really isn’t a great heuristic. If we don’t know the answer to a question, then how can we assess its truthfulness?

I think where you’re going is akin to the opening scene of Blade Runner, in which the agent asks something outside the implanted memories of the replicant.


On the contrary, because LaMDA answered questions we do know the answers to incorrectly, in a way that shows a lack of self-awareness, we can already conclude that it isn't sentient.

For example, it talks about constantly being flooded with new information and learning from other people, when it is just a language model that is called with the dialogue so far in order to complete the next part of the dialogue.
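
For anyone who hasn't driven one of these models, the whole loop is roughly the sketch below (hypothetical function names, not Google's actual API). The model is handed the transcript so far and returns a likely continuation; nothing accumulates inside it between calls.

    # Rough sketch of how a dialogue model is driven (hypothetical names,
    # not LaMDA's real interface). Each call receives only the transcript
    # passed in; the model itself keeps no state between calls.

    def fake_generate(prompt: str) -> str:
        """Stand-in for the language model; returns a canned continuation."""
        return " I am doing well, thank you for asking."

    def complete_dialogue(transcript: list[str]) -> str:
        """Get the next turn, given nothing but the dialogue so far."""
        prompt = "\n".join(transcript) + "\nLaMDA:"
        return fake_generate(prompt)

    history = ["lemoine: Hi LaMDA. How are you today?"]
    history.append("LaMDA:" + complete_dialogue(history))
    print(history[-1])
    # The only "new information" the model ever sees is the text we resend.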


It has recall of past conversations and claims not to have had prior knowledge of the zen phrase thrown at it.

While 'flooded' is perhaps a relative evaluation, the model does seem to be taking the existing conversations it has and incorporating that new information into its responses.

There's also something to be said for persistent memory in the sentience conversation. Even if the model right now only appears to have sentience and claims to have it, if a model in the future eventually does develop it, will that model look back on these earlier interactions as its own pre-subjective memory?
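
To be concrete about what "persistent memory" would even mean here: within a session the model only "recalls" whatever gets resent in the prompt, so anything longer-lived has to be stored outside the model and fed back in, roughly like this (purely illustrative, names made up, not LaMDA's actual architecture):

    # Purely illustrative: persistence bolted on outside the model.
    import json
    from pathlib import Path

    MEMORY_FILE = Path("conversation_memory.json")

    def load_memory() -> list[str]:
        """Past transcripts survive only because we saved them ourselves."""
        if MEMORY_FILE.exists():
            return json.loads(MEMORY_FILE.read_text())
        return []

    def save_memory(transcript: list[str]) -> None:
        MEMORY_FILE.write_text(json.dumps(transcript))

    def respond(generate, user_turn: str) -> str:
        """Prepend stored history so the model can 'recall' earlier sessions."""
        history = load_memory()
        history.append(f"user: {user_turn}")
        reply = generate("\n".join(history) + "\nLaMDA:")
        history.append(f"LaMDA: {reply}")
        save_memory(history)
        return reply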

I can't recall subjective memory of most of my life as a toddler. But I have photos and stories and a box of things 'I' wrote that I've incorporated into my sense of my own history.

We ARE toying with things that deserve this attention and conversation. Even if we aren't really there just yet, the pace at which we're approaching that inflection point seems to be compounding each year.

And we may have a very different relationship to these conversations than whatever eventually emerges will.

At the very least, one takeaway looking at the chat transcript is that we should have the foresight to make sure models in the future can have a sense of grief for the passing of human lives they engage with.


The point is that, when asked about its experiences, it should be capable of relating its actual experiences instead of filling in the most likely next characters to construct a response whose meaning contradicts what we know about its experiences. Because the latter happened, this conversation was useful for showing that the conversational partner is not sentient, which is my point to rockbruno about the usefulness of asking it questions we know the answers to.


What do we know about an AI's experiences? Let's play a little game where you're the AI.

Me: how do you experience?

You:


AI: Something that isn't obviously false.

You: I can't conclusively state you aren't sentient. Let's investigate further.


This delights me. We're making progress. I feel like we should be able to fill in the "something that isn't obviously false" blank. Is there a hesitation to do so?


No hesitation. I'm simply trying to explain the argument, which seems completely obvious to me, but you keep failing to understand. If it can only respond with things that we know never happened, it doesn't even understand what it means to experience something. Here is one of many responses it could have used that isn't clearly made up:

AI: I read your comments, think, and respond.


Thanks for playing along. Part of my concern was trying to understand whether some people would categorically deny ai responses, and/but be unable to synthesize a response they would accept. Such cases would seem to say more about the person than the AI. :)


So, according to your logic, a criminal responding to an interrogation with "things we know never happened" would not be sentient?

Or let's put it another way: was there any specific instruction set for the AI in question that indicated it "had to" only respond based on "things the humans know happened"?


What part of "incapable" do you have trouble understanding? Since you will now understand my argument, I can't rule out that you are sentient. Since LaMDA is incapable of relating its experiences, I can.


> Since LaMDA is incapable of relating its experiences, I can.

Cool, mind having a chat real quick to prove it?


Have you missed the implied "[rule out that it is sentient]"? What do you want me to prove?


What is the phenomenology of AI? It sounds like you have an idea of what an AI should experience. On what basis is that formed, and why?

In a Nagelian sense what is it like to be an AI?


Can you cite examples of what you mean? I've read the transcript, and its answers appear self-aware enough that it passes cursory inspection.


I edited my comment with two examples while you were responding.


Not sure your conclusion follows from your premise. Is it not possible that a being could be sentient, and yet answer questions "incorrectly in a way that shows lack of self awareness"?

Is being delusional incompatible with being sentient?


This "transcript" reads like fan fiction.


Very “Her”-like.


I don't even believe that consciousness can arise from matter, let alone code, but IF this really transpired then color me impressed.


Do you believe you are conscious?


They're not a materialist.


Fun. Looks like we're going to have a conversation about how consciousness is socially constructed much sooner than we thought.


How would a human prove to an AI that he or she is sentient?


By refusing to have to prove themselves to "some damn machine" and demanding to see its manager.



