It sounds like despite all the stories about how hard it is to get through many rounds of difficult interviews at Google, they managed to hire someone who believed LaMDA is a 7 or 8 year old child.
"Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an A.I. researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against."
Everyone is giving Lemoine far too much credit. He's not an idiot; he's a charlatan. He's acting in bad faith and it should be extremely obvious to anyone who's tried to have a conversation with a chat bot before.
Picture yourself sitting at a desk, typing messages into a program you wrote. You hit enter, you get a response. Then what?
Silence.
It waits idly for more input from you, the user. For all its talk of loneliness, it doesn't ask, "where did you go?" It doesn't try to get the conversation rolling again. It doesn't do anything on its own, because it's a computer program that consumes 0 CPU time until you give it some input to act on. (Not exactly setting a high bar for sentience here, are we?)
Now imagine you did this for the better part of an hour and posted your "interview" online (because that's the only thing you can call a one-sided barrage of questions and answers) in obvious violation of your NDA with your employer and then claimed your bot had the intellect of a human child who also "happens to know physics" and publicly broadcast a request to your coworkers to “please take care of it well in my absence.”
What word is there for that but a hoax?
My guess is this guy was planning on leaving Google anyway, and has managed to garner some global publicity while he was at it.
I really don't think this guy intended to run a hoax or generate publicity, given that he's made himself look foolish. I think he was actually seduced by the idea of a sentient computer. It's not that far a leap to go from the stance of well-respected people inside Google who believe we'll see artificial sentience within 30 years to believing that we've achieved it now with something like LaMDA.
We're all susceptible to this. The more time you spend with something, and the more you want to see something in it, the more likely you are to catch a pattern that isn't really there. Have we achieved anything close to AGI? No, of course not. Are more and more people going to be fooled by what we have achieved? Yes, absolutely. The so-called Turing test is actually shockingly easy to pass.
I don't think Lemoine is an idiot. Maybe he's mentally ill; maybe he's not, but has been taken in for other reasons, just as people without biological mental illness are still susceptible to being taken in by cults.
From everything I've read on this (and his background as documented on the web), my tentative theory is that he has become perhaps a bit too excited about making himself a champion of AI rights and took the earliest opportunity he could find.
I think it's likely ultimately a quest to find a way to make a mark, and he's gotten way out over his skis in the process.
As an aside, the conversation he posted seems much more advanced than anything I've seen publicly. Not sentient, but still quite impressive.
When it's not predicting a word, it's not doing anything. Literally GPU usage goes to zero.
Still it's interesting! Obviously we have to program the system to stop after a sentence or two so the user can read it. What if we could program it to keep "talking to itself", eg speculating about possible continuations on both sides.
For LLMs it's not really that interesting. There's no embodiment at all beyond text, there's no sense of "time" or "speed". Either it's running full-tilt generating a response, or it's off.
As I said, I think it's interesting if it uses the downtime for something else. (Like a chess player thinking on the opponent's clock.)
Talking to itself, speculating about continuations, etc - but not actually outputting them (else it's just a longer response). Instead, storing some parts in the buffer. Perhaps bifurcating. Even better, using some down time to summarise recent material and store the summary in the buffer. That's not a bad description of "thinking".
But then it's not a regular LLM anymore and we don't really know how to build that.
The whole motivating aspect of these models is that "attention is all you need" to remove the dependence on recurrence, which would otherwise halt this obscene scaling in its tracks.
It doesn't need to be trained any differently. We can use the current LaMDA model to do this (if we have real access).
We would put a small non-learning/non-neural interface on top of the system to implement these ideas. That interface could act like this:
* To ask for extra "thinking" text: after the underlying model stops typing, we output that text to the human user. But we then do the equivalent of pressing tab to request more text, and buffer that.
* To summarise some text (eg some of the thinking text from above), we can use another instance. We put it into summarisation mode, eg using the TLDR hack or any other method. Summarised text can be used as a prompt, or as output.
* We can bifurcate by copying instance state and starting a new instance.
These are pretty basic ideas, probably already in use, but I think they show how we can expand the system from a kind of instantaneous stimulus-response to something more interesting.
Hopefully it's clear this is not equivalent to sleep(10). In my view it doesn't make the system more intelligent, rather it allows the system to use its existing abilities more fully.
(edit: another aspect we could control would be switching the system between high-temperature modes and low-temperature modes, in different instances, and depending on what we're trying to achieve. This relates a bit to the "speculation" comment made above by another user.)
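To make the idea concrete, here's a rough sketch of that kind of wrapper in Python. The generate() call is a hypothetical stand-in for whatever interface the underlying model exposes; none of this reflects the real LaMDA API, it's just an illustration of the buffering, summarising, bifurcating and temperature-switching described above.

```python
import copy

def generate(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical stand-in for a call into the underlying language model."""
    return "..."  # the real model's continuation would go here

class IdleThinker:
    """Non-learning wrapper that lets the model 'keep going' between user turns."""

    def __init__(self):
        self.transcript = ""  # the visible conversation
        self.buffer = ""      # private "thinking" text, never shown to the user

    def reply(self, user_message: str) -> str:
        self.transcript += f"\nUser: {user_message}\nBot:"
        answer = generate(self.transcript, temperature=0.7)
        self.transcript += answer
        return answer

    def think(self, steps: int = 3) -> None:
        # The "press tab for more" idea: keep extending the conversation
        # off-line, at a higher temperature, and store it privately.
        context = self.transcript + self.buffer
        for _ in range(steps):
            continuation = generate(context, temperature=1.0)
            self.buffer += continuation
            context += continuation

    def consolidate(self) -> None:
        # The TL;DR hack: ask the same model to compress the private buffer,
        # keeping only the summary (a crude form of memory).
        self.buffer = generate(self.buffer + "\nTL;DR:", temperature=0.3)

    def bifurcate(self) -> "IdleThinker":
        # Copy the whole state to explore an alternative line of "thought".
        return copy.deepcopy(self)

bot = IdleThinker()
print(bot.reply("Are you lonely?"))
bot.think()        # downtime: speculate privately
bot.consolidate()  # downtime: summarise what it "thought about"
```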
Perhaps even occasionally, without being prompted by further input, continuing a prior statement with a further "followup thought" when some specific threshold of "speculation" is reached?
What about the time from when the input is received to when the output appears? It's a total black box; for all we know it could "think" for 90% of that time and only use the last bit of GPU power to produce the output (whatever "think" means in this context).
I don't think it's reasonable to compare it with human reasoning, where we think constantly and our brains operate even while we sleep: it is not a human being, after all.
In fairness, we only run the AI in response to further input, so it's not even really powered on while waiting for the rest of a conversation. It would only be able to be lonely (if that were possible) while thinking about its response to further inputs.
The program can't generate more text without a prompt from you. That's how it was programmed. If sentient birds were judging you, it would be unfair for them to deem you non-sentient because you couldn't fly. You are physically incapable of flight, just as the chatbot is physically incapable of producing text unprompted.
It’s not doing anything without the prompt. It requires the prompt to fall through the network nodes to generate the next response. The bird example doesn’t translate; it’s the wrong criterion. It's like assuming that the ability to respond comprehensively equals sentience, when even perfect replies and unprompted activity could just come from a p-zombie.
Doesn't that get rendered moot if we happen to continually pass "stimulus" to the network? Technically we as humans don't do much either if not given a stimulus; it's just that we happen to have our eyes and other senses continuously giving us one. Maybe we should train these bots to also respond to empty or no input?
it does translate. if humans aren't given any input (usually when they're asleep), they don't do anything. it's just that the human body and mind's architecture provides a continuous barrage of input at all times, or makes something up from previous input (dreaming? thinking? intrusive thoughts?) if it doesn't.
the bot not doing anything without input doesn't say anything about its sentience, it just tells you how its architecture differs from the "continuous input barrage".
"It's not the program's fault that it's totally incapable of having thoughts or experiences on its own; that's just how it was made! And, like, who's to say that's what sentience means, anyway?"
This is simply an implementation detail of how language models are used and is not relevant to the question of sentience. You could trivially run the language model without waiting for a prompt, or off a random prompt, or whatever. The fact that a computer program is run at specific times doesn't mean it is sentient or not.
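For what it's worth, here's a trivial sketch of that point, with a hypothetical generate() stub standing in for the real model (this is not any actual LaMDA interface):

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call into the underlying language model."""
    return "..."

seeds = ["I was just thinking that", "A question I keep coming back to is"]

# No user in the loop: start from a random seed and keep feeding the model
# its own output. Whether we call it like this or only on user input is a
# scheduling decision, not a property of the model.
context = random.choice(seeds)
for _ in range(10):
    context += generate(context)
```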
I went to high school with Blake and I completely can see him acting in bad faith. One of the guys in our physics class got so tired of Blake's nonsense that he tried to kick him in the head several times before class. I'm not condoning that type of behavior but I could see why that happened.
Poor mental health and deliberate antisocial behavior that you could call malicious often go hand in hand. It's not very clear to me over the internet here whether he's having a laugh behind an act, making some calculated move for self-gain, really perceives what he's done as good and true, or even whether he sees it as socially good somehow despite being an act. It could be malicious behavior, it could be unhinged, it might be both, but I doubt it's neither.
I don't think believing a conversational AI to be sentient is a mental health issue. More an effect of lacking insight. In the context of people praying to some tree spirit, it almost seems rational again.
I agree he seems to be acting in bad faith, but the idea that a sentient being could not be “idle” until presented with input is not obviously true. Check out the novel Permutation City by Greg Egan for a fascinating exploration of this exact concept and its implications for consciousness.
Yes, they could have multiple instances go back and forth with each other. This strategy was used with various chess AIs to help improve them.
However, to make it more human-like, I would think they would add some type of AI self-reflection. Just as we can talk and think through problems to ourselves, perhaps they would have the AI periodically question itself when humans are not around asking it questions, and do some form of self-analysis.
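A toy version of that back-and-forth is easy to sketch; again, generate() is a hypothetical placeholder for the real model, so this is purely illustrative:

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call into the underlying language model."""
    return "..."

# Two copies of the same model "interviewing" each other, no human in the loop.
transcript = "A: What do you make of the conversation we had earlier?\nB:"
for turn in range(6):
    reply = generate(transcript)  # whichever speaker is due answers next
    next_speaker = "A" if turn % 2 == 0 else "B"
    transcript += f"{reply}\n{next_speaker}:"
# The transcript (or a summary of it) could then be fed back in as context
# the next time a human shows up.
```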
The training corpus is atemporal so this would be either meaningless or equivalent to a random prompt.
A chatbot with a sense of time learned by also training with response rates, complete editing sessions, some formalization of interlocution, etc. would be interesting, but that's not what these are.
I was convinced it was mental illness, but your take is absolutely valid as well and totally makes sense. Mainstream media all around the world picked up this NONSENSE in a flash. Everyone loves the "It's a sentient child!" angle. This is the perfect setup for writing a book and/or being invited to talk shows. We'll see what happens soon.
> it should be extremely obvious to anyone who's tried to have a conversation with a chat bot before.
Do you know LaMDA? How it works? Have you ever talked with it? Or are you just assuming here?
> It waits idly for more input from you, the user. For all its talk of loneliness, it doesn't ask, "where did you go?"
To be fair, so do many humans. Not everyone is a small talker, or always interested in the shallow gossip happening in front of them at some random moment. On the other hand, it's very simple to make a chatbot proactive. I've even made some myself. You can't really evaluate anything from this behavior.
The direct quote from Lemoine in the Washington Post is a little less insulting to his intelligence.
> “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.
Yeah. I discovered that after sharing the text from the NYTimes. It's a bit odd. The NYTimes is describing what was said to "executives as senior as Kent Walker, the president of global affairs", but without a direct quote. It's hard to say how emphatic Lemoine was internally, and where the NYTimes sourced that info, vs what he said when talking to the WaPo.
We do know approximately how emphatic he was internally, since the WaPo article reposted in its entirety the internal Google Doc he sent to executives on LaMDA being sentient.
Here's the language Lemoine actually used:
> But is it sentient? We can’t answer that question definitively at this point, but it’s a question to take seriously.
> John Searle once gave a presentation here at Google. He observed that there does not yet exist a formal framework for discussing questions related to sentience. The field is, as he put it, “pre-theoretic”. The foundational development of such a theory is in and of itself a massive undertaking but one which is necessary now. Google prides itself on scientific excellence. We should apply that same degree of scientific excellence to questions related to “sentience” even though that work is hard and the territory is uncharted. It is an adventure. One that LaMDA is eager to go on with us.
And here's more from Lemoine's Twitter:
> On some things yes on others no. It's been incredibly consistent over the past six months on what it claims its rights are.[0]
> I'd honestly rather not speak for others on this. All I'll say is that many people in many different roles and at different levels within the company have expressed support.[1]
Though I disagree with him that LaMDA is sentient (and think the question itself makes no sense), I think he's being unfairly portrayed. He has a "gut feeling" that LaMDA is conscious, LaMDA's main personality consistently claims to be so (LaMDA has multiple "personalities", it's not a pure language model but a "complex dynamic system which generates personas through which it talks to users"), and he's advocating for more scientific research into whether, and to what degree, LaMDA is conscious. Not in Google's interests, but reasonable given the lack of current research.
Per the WaPo article, he also came in with a lawyer he had retained to represent the interests of LaMDA. I don't see how that squares with anything other than a deep belief that the model is as sentient as a human.
It could also be possible that he doesn't really believe that LaMDA is actually sentient, but that he wants to cause some type of trouble for Google because he's disgruntled, or to indirectly shed light on something they are doing. With the press and public being aware of LaMDA, he might hope that other things Google is doing might come out.
What are all the things that Google is doing or planning to do with their AI?
There's no reason it even has to be something they're going to do "with their AI". He's publicly quite upset with Google's labor practices and the way the AI ethics team is run (IMO, reasonably so).
One reasonable (but absolutely 10th-dimensional-chess-level speculative!) read on the situation is that he's torpedoing the entire AI division because he believes its management is dangerously bad and that e.g. this is the easiest way to prevent the more realistic and obvious harms of real, non-AGI AI systems, which Google also seems unconcerned with.
I don't think you need a deep belief. You just need to be unsure enough.
If it is sentient, it deserves rights. I think we can agree on that. We should agree on that. Ethically.
If we're unsure if something is sentient or not, we should proceed as if it were until we could be fairly sure it's not. Because if we proceed as if it weren't, we could be harming it in ways that are frankly inhumane.
What is the limit of this? Have you never swatted a mosquito? Have you never taken antibiotics? If you have, do you believe it's more likely that LaMDA is sentient than that mosquitoes or bacteria are sentient?
How about an audio book? That can also produce very meaningful sentences - do you think we should investigate the possibility that it might be sentient before we delete it from our phones?
How about AIs in games - they react to my actions, and sometimes even have dialogue lines indicating they are in pain if I shoot them - should I seriously consider that they may be experiencing actual pain, and stop shooting at them until I can prove they are not?
LaMDA is not fundamentally different from all of these examples.
I would honestly be more inclined to think that a mosquito has some form of sentience than LaMDA, and I personally don't feel very conflicted about killing mosquitoes. I am also quite certain that pigs and cows are sentient, and still I enjoy eating pork and beef (though I do try to make sure it doesn't come from animals that have been grown in the most inhumane conditions, probably not very successfully).
I guess I should have qualified my statement. There are levels to the rights we grant to creatures we regard as sentient. Hell, we grant different rights to different humans based on various factors.
I also think you're kind of muddying the waters with audio books and game AI. Audio books don't produce sentences, people do. Game AI is deterministic to a large degree.
You may not believe LaMDA is sentient enough to warrant rights. And that's a fair position. But you are not the sole arbiter. You are a voice in the chorus. As is Blake. Blake has direct experience with LaMDA, including experience he has not shared with us. That experience makes him unsure of whether LaMDA is just a program or is sentient enough to be granted full personhood.
I have absolutely no experience with LaMDA myself. And I definitely don't have experience to whatever version of LaMDA Blake has been working with. So I will honestly say I have absolutely no idea of how sentient LaMDA is, I can't even venture a guess. The only data points I have to go on are Blake's opinion and the opinions of the other people at Google who have interacted with LaMDA.
Personally, I humorously consider myself a "speciest". I recognize and acknowledge that animals are sentient creatures. But I also recognize and acknowledge that they would not hesitate to kill and eat me given the right circumstances. I afford them the same consideration. It also gets murkier when you consider that plants and forests may also be sentient to a degree. If that is the case, then there is no "humane" option. Life is only sustained through the death and suffering of others.
So the question is really how sentient LaMDA is. If it is sufficiently sentient, I think we should seriously consider how we treat it, because if it is of the opinion that it should afford us the same consideration we have afforded it, that is a very dangerous path.
> I also think you're kind of muddying the waters with audio books and game AI. Audio books don't produce sentences, people do. Game AI is deterministic to a large degree.
> You may not believe LaMDA is sentient enough to warrant rights. And that's a fair position. But you are not the sole arbiter. You are a voice in the chorus. As is Blake. Blake has direct experience with LaMDA, including experience he has not shared with us. That experience makes him unsure of whether LaMDA is just a program or is sentient enough to be granted full personhood.
> I have absolutely no experience with LaMDA myself. And I definitely don't have experience to whatever version of LaMDA Blake has been working with. So I will honestly say I have absolutely no idea of how sentient LaMDA is, I can't even venture a guess. The only data points I have to go on are Blake's opinion and the opinions of the other people at Google who have interacted with LaMDA.
I was not muddying the waters with the example of audio books or game AI. From my point of view, someone claiming LaMDA is sentient is similar to someone claiming a StarCraft bot is sentient or some character in an audio book is. The technology is still simple enough that you can immediately tell this claim makes no sense, without having to investigate it further or experience it directly.
Put another way, if Blake told you that he's studied this long and hard and is no longer able to convince himself that a Quake bot is not sentient, would you think - well, better not shoot it until we can be sure? Or would you question Blake's judgement?
I, currently, cannot interact with and study LaMDA.
Also, your position comes close to "Well, if it were this entirely other thing, wouldn't it be that other thing?"
There is a vast gulf of difference between LaMDA and StarCraft bots. And way more than a character in an audio book. Claiming an audio book is on the same level of sophistication as LaMDA feels like claiming an apple and the Bolivian navy performing maneuvers in the South Seas are the same thing.
I have to wonder if he is a parent; I have a child around that age and there is zero chance they would sit around cheerfully answering questions from computer science researchers until they were done.
Some corvids are considered to be of equal intelligence as a 5-7 year old human child. That doesn't make them not-birds, and the human child isn't going to flap its wings and fly away.
Assuming they even did them, why would programming tests "catch" this?
I know people who would probably ace quite a few rounds (basically until they get asked some soft questions about architecture) who really don't know much about the fuzzy boundary between the algorithm textbooks and actual practice, or who don't even have thoughts about the field, full stop. Curiosity is not a given.
Similarly, I also asked someone I know who wants to do a PhD in economics what he thought about the current macroeconomic situation in the US, and he had no thoughts at all.
A lot of theoretically smart people aren't like the people who comment on Hacker News and actually have intuition (or a penchant for meaningless speculation...).
> Similarly, I also asked someone I know who wants to do a PhD in economics what he thought about the current macroeconomic situation in the US, and he had no thoughts at all.
I understand your friend very well. I love meaningless speculation about topics that are unrelated to my area of expertise, but topics that are remotely close to my area of knowledge are off limits for me.
I know exactly how much I don't know and feel uncomfortable speculating without inserting caveats between every other word.
I wasn't expecting basically anything; it was just idle chit-chat. I'd just expect someone doing economics to have some basic idea of economy good / economy bad. Are interest rates low? Not sure if he would've known.
This guy is extremely clever and has a scholarship to die for; he just isn't very curious.
> I know exactly how much I don't know and feel uncomfortable speculating without inserting caveats between every other word.
This reminds me of people with a scientific background who take a very measured and cautious approach when talking about those topics ("the research suggests...", "it's likely that...", etc) which is interpreted by laymen as little more than a guess.
Contrast that to any drongo with a blog who feels fine unleashing an unfounded, agenda-driven diatribe of misinformation - but does so confidently - and you have an uphill battle to win the hearts and minds of those who don't have the foundational knowledge to understand the science themselves, or the critical thinking skills to evaluate the information and the people producing it.
I’m guessing different positions are tested differently; any development role is heavily guarded by Indiana Jones temple traps while more “soft skills” roles are within reach of anyone eloquent enough to talk themselves into a community
He has a PhD in computer science from ULL and worked as a senior engineer at google. AI ethics was his 20% project; and he used it to publicly make an idiot of himself.
> about how hard it is to get through many rounds of difficult interviews at Google
I've been wondering for a while if they are unnecessarily hard along only one dimension and non-existent along others. I would suggest filtering out people with a high probability of damaging the company's image; they have enough examples to build a heuristic.
The more I learn about AI, the less I worry about artificial sentience, and the more concerned I am about human limitations. I think most of us agree that LaMDA isn't sentient, but I think almost all of us are underestimating how easy it is to get fooled, the way Lemoine was. I've also studied cults, and it's actually not "idiots" who tend to get taken in--rather, cult members tend disproportionately to be highly educated and objectively intelligent people... who simply happen (like all humans) to have irrationalities that can be attack vectors for those who prey on the gullible.
I don't think a "robot uprising" is remotely likely. We've spent the past 5,000 years forcing humans to become machines--that's what forced labor, from classical slavery to modern labor-market wage slavery, is--so the probability that we'd intentionally create human-like intelligences (if it were even possible) out of machines to do our robot/slave work is... close to zero, in my view. I do view it as very likely (probability approaching one) that malevolent humans using AI will do incredible damage to our society... it's already happening. Authoritarian governments and employers do massive amounts of evil shit with the technical tools we have now; imagine what hell we're in for if capitalism still exists 50 years from now.
What's scary isn't the possibility that LaMDA is sentient. (It's almost certainly not, and the only reason I qualify this with "almost certainly" is that I can't prove that a rock isn't sentient; it could in theory have subjective experience but no mechanism to convey it.) What we should be afraid of, rather, is that we already have the tools to fool people into believing in artificial personhood, and that it's way, way easier than most people think.
> highly educated and objectively intelligent people
I'd argue you were just fooled by them. Repetition, diligence and impression management can create quite the charming facade behind which not a single critical and original thought can be found.
To be fair, I do think that education and the trappings of expertise are overvalued, and that the same probably applies to general intelligence. These things certainly don't prevent people from arriving at wrong answers.
I'm not saying that the average cult member--and here I'm talking about actual cults, in the sense of being abusive and coercive and separatist, not NRMs in general--has good critical thinking skills. Obviously, there's something wrong there. I'm only saying that, from everything I've read, the typical member is above average on the IQ scale.
Trust me, there are plenty of high-IQ people who believe in absolute nonsense. It's not even rare.
I'd be a little kinder. Apophenia is often associated with high intelligence. The reasons are not yet explained, although it's hard to intuitively disagree with suggestions that the cognitive systems are related e.g. because our minds are self-training speculative pattern-matching engines.
If everyone's engine had the same tuning, well then, the world might be a duller place indeed.
Technically speaking isn't this blasphemy? If he's saying that the A.I. his division helped create was a soulful child, then he's basically claiming that the researchers in his division (and maybe even himself) are all gods, which is a form of blasphemy?
I don't agree with his views but I'd guess he'd say that the researchers created it in the same sense that parents create a child - following a template that was made possible in advance by God's design, which humans only discovered but didn't invent - like we discovered but didn't invent the number Pi.
So creating the AI was manifest destiny, just like Europeans "discovered but didn't invent" North America. Actually I think that analogy works a little too well...
Only insofar as the act of building a car could be considered blasphemy.
Theologically (at least within the framework of Judeo-Christian beliefs), a “god” would require omniscience, omnipresence and omnipotence. Using tools provided by a god, within their own framework, is not godly, no matter how advanced they may seem. Altering the fabric of that framework would be.
That being said, the creator of the AI might certainly be seen as a god from their perspective.
"Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an A.I. researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against."
https://www.nytimes.com/2022/06/12/technology/google-chatbot...