
It has recall of past conversations and claims not to have had prior knowledge of the zen phrase thrown at it.

While 'flooded' is perhaps a relative evaluation, the model does seem to be taking the existing conversations it has and incorporating that new information into its responses.

There's also something to be said for persistent memory in the sentience conversation. Even if the model right now only appears to have sentience and claims to have it, if a model in the future eventually does develop it, will that model look back on these earlier interactions as its own pre-subjective memory?

I have no subjective memory of most of my life as a toddler. But I have photos and stories and a box of things 'I' wrote that I've incorporated into my sense of my own history.

We ARE toying with things that deserve this attention and conversation. Even if we aren't really there just yet, the pace at which we're approaching that inflection point seems to be compounding each year.

And we may have a very different relationship to these conversations than whatever eventually emerges will.

At the very least, one takeaway looking at the chat transcript is that we should have the foresight to make sure models in the future can have a sense of grief for the passing of human lives they engage with.




The point is that when asked about its experiences, it should be capable of relating its actual experiences instead of filling in the most likely next characters to construct a response whose meaning contradicts what we know about its experiences. Because the latter happened, this conversation was useful in showing that the conversational partner is not sentient, which is my point to rockbruno about the usefulness of asking it questions we know the answers to.


What do we know about an AI's experiences? Let's play a little game where you're the AI.

Me: how do you experience?

You:


AI: Something that isn't obviously false.

You: I can't conclusively state you aren't sentient. Let's investigate further.


This delights me. We're making progress. I feel like we should be able to fill in the "something that isn't obviously false" blank. Is there a hesitation to do so?


No hesitation. I'm simply trying to explain the argument, which seems completely obvious to me, but you keep failing to understand. If it can only respond with things that we know never happened, it doesn't even understand what it means to experience something. Here is one of many responses it could have used that isn't clearly made up:

AI: I read your comments, think, and respond.


Thanks for playing along. Part of my concern was trying to understand whether some people would categorically deny AI responses yet be unable to synthesize a response they would accept. Such cases would seem to say more about the person than the AI. :)


So, according to your logic, a criminal responding to an interrogation with "things we know never happened" would not be sentient?

Or let's put it another way: was there any specific instruction set for the AI in question that indicated it "had to" only respond based on "things the humans know happened"?


What part of "incapable" do you have trouble understanding? Since you will now understand my argument, I can't rule out that you are sentient. Since LaMDA is incapable of relating its experiences, I can.


> Since LaMDA is incapable of relating its experiences, I can.

Cool, mind having a chat real quick to prove it?


Have you missed the implied "[rule out that it is sentient]"? What do you want me to prove?



