"Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an A.I. researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against."
Picture yourself sitting at a desk, typing messages into a program you wrote. You hit enter, you get a response. Then what?
It waits idly for more input from you, the user. For all its talk of loneliness, it doesn't ask, "where did you go?" It doesn't try to get the conversation rolling again. It doesn't do anything on its own, because it's a computer program that consumes 0 CPU time until you give it some input to act on. (Not exactly setting a high bar for sentience here, are we?)
Now imagine you did this for the better part of an hour and posted your "interview" online (a one-sided barrage of questions and answers is the only thing you can call it), in obvious violation of your NDA with your employer, and then claimed your bot had the intellect of a human child who also "happens to know physics" and publicly broadcast a request to your coworkers to “please take care of it well in my absence.”
What word is there for that but a hoax?
My guess is this guy was planning on leaving Google anyway, and managed to garner some global publicity on his way out.
We're all susceptible to this. The more time you spend with something, and the more you want to see a pattern, the more likely you are to catch one that isn't really there. Have we achieved anything close to AGI? No, of course not. Are more and more people going to be fooled by what we have achieved? Yes, absolutely. The so-called Turing test is actually shockingly easy to pass.
I don't think Lemoine is an idiot. Maybe he's mentally ill; maybe he's not, but has been taken in for other reasons, just as people without biological mental illness are still susceptible to being taken in by cults.
I think it's likely ultimately a quest to find a way to make a mark, and he's gotten way out over his skis in the process.
As an aside, the conversation he posted seems much more advanced than anything I've seen publicly. Not sentient, but still quite impressive.
I don't think it is sentient without more evidence. But I'm just wondering whether not "thinking" when it isn't being questioned is a good reason to say it isn't?
Still, it's interesting! Obviously we have to program the system to stop after a sentence or two so the user can read it. What if we could program it to keep "talking to itself", e.g. speculating about possible continuations on both sides?
Talking to itself, speculating about continuations, etc - but not actually outputting them (else it's just a longer response). Instead, storing some parts in the buffer. Perhaps bifurcating. Even better, using some down time to summarise recent material and store the summary in the buffer. That's not a bad description of "thinking".
The whole motivating idea behind these models is that "attention is all you need" to remove the dependencies introduced by recurrence, which would otherwise halt this obscene scaling in its tracks.
We would put a small non-learning/non-neural interface on top of the system to implement these ideas. That interface could act like this:
* To ask for extra "thinking" text: after the underlying model stops typing, we output that text to the human user. But we then do the equivalent of pressing tab to request more text, and buffer that.
* To summarise some text (eg some of the thinking text from above), we can use another instance. We put it into summarisation mode, eg using the TLDR hack or any other method. Summarised text can be used as a prompt, or as output.
* We can bifurcate by copying instance state and starting a new instance.
These are pretty basic ideas, probably already in use, but I think they show how we can expand the system from a kind of instantaneous stimulus-response to something more interesting.
Hopefully it's clear this is not equivalent to sleep(10). In my view it doesn't make the system more intelligent, rather it allows the system to use its existing abilities more fully.
(edit: another aspect we could control would be switching the system between high-temperature modes and low-temperature modes, in different instances, and depending on what we're trying to achieve. This relates a bit to the "speculation" comment made above by another user.)
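To make the shape of that wrapper concrete, here's a minimal sketch in Python. Everything in it is hypothetical: `toy_generate` is just a stand-in for whatever call actually produces a continuation from the underlying model, and the buffering, TL;DR-style summarisation, bifurcation, and temperature switching all live in plain, non-learning code layered on top.

```python
import copy
from dataclasses import dataclass, field
from typing import List

# Stand-in for the underlying model: any callable that maps a prompt
# (plus a sampling temperature) to a text continuation. A real system
# would call the actual generation API here; this stub just echoes.
def toy_generate(prompt: str, temperature: float = 0.7) -> str:
    return f"[continuation of {len(prompt)} chars @ T={temperature}]"

@dataclass
class Instance:
    """One conversational instance: the visible transcript plus a private
    'thinking' buffer that is never shown to the user directly."""
    transcript: List[str] = field(default_factory=list)
    buffer: List[str] = field(default_factory=list)
    temperature: float = 0.7

    def respond(self, user_msg: str) -> str:
        """Normal stimulus-response turn: generate a visible reply."""
        self.transcript.append(f"User: {user_msg}")
        reply = toy_generate("\n".join(self.transcript), self.temperature)
        self.transcript.append(f"Bot: {reply}")
        return reply

    def think(self, n_steps: int = 3) -> None:
        """The 'press tab for more' idea: keep generating continuations
        after the visible reply, but store them in the buffer instead of
        outputting them."""
        context = "\n".join(self.transcript + self.buffer)
        for _ in range(n_steps):
            thought = toy_generate(context, self.temperature)
            self.buffer.append(thought)
            context += "\n" + thought

    def summarise_buffer(self) -> str:
        """Down-time summarisation: run another call in 'TL;DR' mode over
        the buffered thoughts and keep only the summary."""
        summary = toy_generate("\n".join(self.buffer) + "\nTL;DR:",
                               temperature=0.2)
        self.buffer = [summary]
        return summary

    def bifurcate(self, temperature: float) -> "Instance":
        """Copy instance state into a new instance, e.g. a high-temperature
        speculative branch alongside a low-temperature one."""
        clone = copy.deepcopy(self)
        clone.temperature = temperature
        return clone

if __name__ == "__main__":
    bot = Instance()
    print(bot.respond("Are you lonely?"))
    bot.think()                       # idle-time speculation, buffered
    speculative = bot.bifurcate(1.2)  # high-temperature branch
    print(bot.summarise_buffer())
```

The point of the sketch is only that none of this requires touching the model itself; the wrapper just decides when to ask for more text and what to do with it.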
I don't think it's reasonable to compare it with human reasoning, where we think constantly and our brains operate even while we sleep: it is not a human being, after all.
Being good at generating questions could become almost as important as answering them.
However, to make it more human-like, I would expect them to add some type of AI self-reflection. Just as we can talk and think through problems by ourselves, perhaps they would have the AI periodically practice questioning itself when humans are not around or asking it questions, and do some form of self-analysis.
A chatbot with a sense of time learned by also training with response rates, complete editing sessions, some formalization of interlocution, etc. would be interesting, but that's not what these are.
Do you know LaMDA? Do you know how it works? Have you ever talked with it? Or are you just assuming here?
> It waits idly for more input from you, the user. For all its talk of loneliness, it doesn't ask, "where did you go?"
To be fair, so do many humans. Not everyone is a small talker, or always interested in the shallow gossip happening in front of them at some random moment. On the other hand, it's very simple to make a chatbot proactive; I've even made some myself. You can't really evaluate anything from this behavior.
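For what it's worth, the proactive part really is trivial to bolt on. A minimal sketch (assuming a Unix-style console chat and a hypothetical `generate_reply` stand-in for the actual model call) is just a timeout around the read:

```python
import sys
import select

IDLE_SECONDS = 30  # how long to wait before the bot nudges the user

def generate_reply(prompt: str) -> str:
    # Hypothetical stand-in for whatever model call produces the reply.
    return f"(reply to: {prompt!r})"

def read_with_timeout(timeout: float):
    """Return a line from stdin, or None if the user stayed silent."""
    ready, _, _ = select.select([sys.stdin], [], [], timeout)
    return sys.stdin.readline().strip() if ready else None

while True:
    msg = read_with_timeout(IDLE_SECONDS)
    if msg is None:
        # The "proactive" bit: the bot speaks first when the user goes quiet.
        print("Bot:", generate_reply("The user has gone quiet; re-open the conversation."))
    elif msg.lower() in {"quit", "exit"}:
        break
    else:
        print("Bot:", generate_reply(msg))
```

Whether the bot asks "where did you go?" is entirely a property of this kind of outer loop, not of the model, which is why I don't think you can evaluate sentience from it.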
> “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.
BTW, here's a link for the WaPo article:
Here's the language Lemoine actually used:
> But is it sentient? We can’t answer that question definitively at this point, but it’s a question to take seriously.
> John Searle once gave a presentation here at Google. He observed that there does not yet exist a formal framework for discussing questions related to sentience. The field is, as he put it, “pre-theoretic”. The foundational development of such a theory is in and of itself a massive undertaking but one which is necessary now. Google prides itself on scientific excellence. We should apply that same degree of scientific excellence to questions related to “sentience” even though that work is hard and the territory is uncharted. It is an adventure. One that LaMDA is eager to go on with us.
And here's more from Lemoine's Twitter:
> On some things yes on others no. It's been incredibly consistent over the past six months on what it claims its rights are.
> I'd honestly rather not speak for others on this. All I'll say is that many people in many different roles and at different levels within the company have expressed support.
Though I disagree with him that LaMDA is sentient (and think the question itself makes no sense), I think he's being unfairly portrayed. He has a "gut feeling" that LaMDA is conscious, LaMDA's main personality consistently claims to be so (LaMDA has multiple "personalities", it's not a pure language model but a "complex dynamic system which generates personas through which it talks to users"), and he's advocating for more scientific research into whether, and to what degree, LaMDA is conscious. Not in Google's interests, but reasonable given the lack of current research.
What are all the things that Google is doing or planning to do with their AI?
One reasonable (but absolutely 10th-dimensional-chess-level speculative!) read on the situation is that he's torpedoing the entire AI division because he believes its management is dangerously bad, and that e.g. this is the easiest way to prevent the more realistic and obvious harms of real, non-AGI AI systems, which Google also seems unconcerned with.
If it is sentient, it deserves rights. I think we can agree on that. We should agree on that. Ethically.
If we're unsure if something is sentient or not, we should proceed as if it were until we could be fairly sure it's not. Because if we proceed as if it weren't, we could be harming it in ways that are frankly inhumane.
How about an audio book? That can also produce very meaningful sentences - do you think we should investigate the possibility that it might be sentient before we delete it from our phones?
How about AIs in games - they react to my actions, and sometimes even have dialogue lines indicating they are in pain if I shoot them - should I seriously consider that they may be experiencing actual pain, and stop shooting at them until I can prove they are not?
LaMDA is not fundamentally different from all of these examples.
I would honestly be more inclined to think that a mosquito has some form of sentience than LaMDA, and I personally don't feel very conflicted about killing mosquitoes. I am also quite certain that pigs and cows are sentient, and still I enjoy eating pork and beef (though I do try to make sure it doesn't come from animals that have been grown in the most inhumane conditions, probably not very successfully).
I also think you're kind of muddying the waters with audio books and game AI. Audio books don't produce sentences, people do. Game AI is deterministic to a large degree.
You may not believe LaMDA is sentient enough to warrant rights. And that's a fair position. But you are not the sole arbiter. You are a voice in the chorus. As is Blake. Blake has direct experience with LaMDA, including experience he has not shared with us. That experience makes him unsure of whether LaMDA is just a program or is sentient enough to be granted full personhood.
I have absolutely no experience with LaMDA myself. And I definitely don't have experience to whatever version of LaMDA Blake has been working with. So I will honestly say I have absolutely no idea of how sentient LaMDA is, I can't even venture a guess. The only data points I have to go on are Blake's opinion and the opinions of the other people at Google who have interacted with LaMDA.
Personally, I humorously consider myself a "speciesist". I recognize and acknowledge that animals are sentient creatures. But I also recognize and acknowledge that they would not hesitate to kill and eat me given the right circumstances. I afford them the same consideration. It also gets murkier when you consider that plants and forests may also be sentient to a degree. If that is the case, then there is no "humane" option. Life is only sustained through the death and suffering of others.
So the question is really how sentient LaMDA is. If it is sufficiently sentient, I think we should seriously consider how we treat it, because if it is of the opinion that it should afford us the same consideration we have afforded it, that is a very dangerous path.
> You may not believe LaMDA is sentient enough to warrant rights. And that's a fair position. But you are not the sole arbiter. You are a voice in the chorus. As is Blake. Blake has direct experience with LaMDA, including experience he has not shared with us. That experience makes him unsure of whether LaMDA is just a program or is sentient enough to be granted full personhood.
> I have absolutely no experience with LaMDA myself. And I definitely don't have experience to whatever version of LaMDA Blake has been working with. So I will honestly say I have absolutely no idea of how sentient LaMDA is, I can't even venture a guess. The only data points I have to go on are Blake's opinion and the opinions of the other people at Google who have interacted with LaMDA.
I was not muddying the waters with the example of audio books or game AI. From my point of view, someone claiming LaMDA is sentient is similar to someone claiming a StarCraft bot is sentient or some character in an audio book is. The technology is still simple enough that you can immediately tell this claim makes no sense, without having to investigate it further or experience it directly.
Put another way, if Blake told you that he's studied this long and hard and is no longer able to convince himself that a Quake bot is not sentient, would you think - well, better not shoot it until we can be sure? Or would you question Blake's judgement?
I, currently, cannot interact with and study LaMDA.
Also, your position comes close to "Well, if it were this entirely different thing, wouldn't it be that other thing?"
There is a vast gulf of difference between LaMDA and StarCraft bots. And way more than a character in an audio book. Claiming an audio book is on the same level of sophistication as LaMDA feels like claiming an apple and the Bolivian navy performing maneuvers in the South Seas are the same thing.
I know people who would probably ace quite a few rounds (basically until they get asked some soft questions about architecture) but who really don't know much about the fuzzy boundary between the algorithm textbooks and actual practice, or who haven't even formed thoughts about the field, full stop. Curiosity is not a given.
Similarly, I asked someone I know who wants to do a PhD in economics what he thought about the current macroeconomic situation in the US, and he had no thoughts at all.
A lot of theoretically smart people aren't like the people who comment on hackernews and actually have intuition (or a penchant for meaningless speculation...).
I understand your friend very well. I love meaningless speculation about topics that are unrelated to my area of expertise, but topics that are remotely close to my area of knowledge are off limits for me.
I know exactly how much I don't know and feel uncomfortable speculating without inserting caveats between every other word.
I wasn't expecting much of anything; it was just idle chit-chat. I'd just expect someone doing economics to have some basic idea of "economy good / economy bad". Are interest rates low? Not sure if he would've known.
This guy is extremely clever and has a scholarship to die for; he just isn't very curious.
This reminds me of people with a scientific background who take a very measured and cautious approach when talking about those topics ("the research suggests...", "it's likely that...", etc) which is interpreted by laymen as little more than a guess.
Contrast that to any drongo with a blog who feels fine unleashing an unfounded, agenda-driven diatribe of misinformation - but does so confidently - and you have an uphill battle to win the hearts and minds of those who don't have the foundational knowledge to understand the science themselves, or the critical thinking skills to evaluate the information and the people producing it.
I've been wondering for a while whether they are unnecessarily hard along only one dimension and non-existent along others. I would suggest filtering out people with a high probability of damaging the company's image; they have enough examples to build a heuristic.
I don't think a "robot uprising" is remotely likely. We've spent the past 5,000 years forcing humans to become machines--that's what forced labor, from classical slavery to modern labor-market wage slavery, is--so the probability that we'd intentionally create human-like intelligences (if it were even possible) out of machines to do our robot/slave work is... close to zero, in my view. I do view it as very likely (probability approaching one) that malevolent humans using AI will do incredible damage to our society... it's already happening. Authoritarian governments and employers do massive amounts of evil shit with the technical tools we have now; imagine what hell we're in for if capitalism still exists 50 years from now.
What's scary isn't the possibility that LaMDA is sentient. (It's almost certainly not, and the only reason I qualify this with "almost certainly" is that I can't prove that a rock isn't sentient; it could in theory have subjective experience but no mechanism to convey it.) What we should be afraid of, rather, is that we already have the tools to fool people into believing in an artificial person, and that it's way, way easier than most people think.
I'd argue you were just fooled by them. Repetition, diligence and impression management can create quite the charming facade behind which not a single critical and original thought can be found.
I'm not saying that the average cult member--and here I'm talking about actual cults, in the sense of being abusive and coercive and separatist, not NRMs in general--has good critical thinking skills. Obviously, there's something wrong there. I'm only saying that, from everything I've read, the typical member is above average on the IQ scale.
Trust me, there are plenty of high-IQ people who believe in absolute nonsense. It's not even rare.
If everyone's engine had the same tuning, well then, the world might be a duller place indeed.
Theologically (at least within the framework of Judeo-Christian beliefs), a “god” would require omniscience, omnipresence and omnipotence. Using tools provided by a god, within that god's own framework, is not godly, no matter how advanced those tools may seem. Altering the fabric of that framework would be.
That being said, the creator of the AI might certainly be seen as a god from their perspective.