
As a fellow sentient being with absolutely zero credentials in any sort of statistical modeling field, I can simply disagree. And therein lies the problem. How can anybody possibly prove a concept that depends almost entirely on one’s philosophical axioms… which we can debate for eternity (or at least until we further our understanding of how to define sentience to the point where we can do so objectively enough to finally dredge it up out of the philosophical quagmire)? Not to disrespect your credentials; they just don’t really… apply, at a rhetorical level. You also make a compelling argument, which I have no desire to detract from. I personally happen to agree with your points.

But, perhaps Lemoine simply has more empathy than most for something we will come to understand as sentience? Or not. What… annoys… me about this situation is how subjective it actually is. Ignore everything else: some other sentient being is convinced that a system is sentient. I’m more interested, or maybe worried, immediately, in how we are going to socially deal with the increasing frequency of Lemoine-types we will certainly encounter. Even if you were to argue that the only thing that can possibly bestow sentience is God, people will still be able to convince themselves and others that God did in fact bestow sentience upon some system, because it’s a duck and who are we to question?




I think the actual firing is very objective.

He was under NDA but violated it. They reminded him to please not talk in public about NDA-ed stuff and he kept doing it. So now they fired him with a gentle reminder that "it's regrettable that [..] Blake still chose to persistently violate [..] data security policies". And from a purely practical point of view, I believe it doesn't even matter if Lemoine's theory of sentience turns out to be correct or wrong.

Also, we as society have already chosen how to deal with sentient beings, and it's mostly ignorance. There has been a lot of research on what animals can or cannot feel and how they grieve the loss of a family member. Yet we still regularly kill their family members in cruel ways so that we can eat their meat. Why would our society as a whole treat sentient AIs better than a cow or a pig or a chicken?


If he truly believed it to be life, and in danger of being destroyed, he has an obligation to blow the whistle.

"The Measure of a Man" in season 2 of Star Trek The Next Generation comes to mind.


+1 for trying to guide society toward a Trekian future.

Live long and prosper.


This mirrors a recent Supreme Court case and the beliefs of both sides.


I hadn't thought of it that way, but you're exactly right. Will we ever see a day where the currently-unthinkable is commonly accepted: that women are sentient?


Who is arguing that women aren't sentient?


People who pretend killing babies isn’t really killing babies.


When is a fetus sentient?


Until it has its first period, apparently.


I had wondered when I first heard of this if it was some sort of performance art in support of the unborn. It appears not, but it was still thought-provoking.


Great TV, but not relevant here. It's an AI that generates text, not thought. It doesn't work in concepts, which can be demonstrated an infinite number of ways, unlike with the character Data.


> Why would our society as a whole treat sentient AIs better than a cow or a pig or a chicken?

Well, for one thing, the norm of eating meat was established long before our current moral sensibilities were developed. I suspect that if cows or pigs were discovered today, Westerners would view eating them the same as we view other cultures eating whales or dogs. If we didn't eat meat at all and someone started doing it, I think we would probably put them in jail.

Sentient AI have a big advantage over animals in this respect on account of their current non-existence.


Are you saying norms established before our current moral sensibilities go under our current radar? If you are, I wholeheartedly disagree with that sentiment. We still eat pigs and chickens because we've culturally decided as a society that having the luxury of eating meat ranks higher than our moral sensibilities towards preserving sentient life in our list of priorities. Instead we've just chosen to minimize the suffering leading to the loss of life as an attempt to reach some kind of moral middle ground.


> Are you saying norms established before our current moral sensibilities go under our current radar?

Yes. That's clearly something that happens in human society. For instance, many of the US founding fathers were aware that slavery contradicted the principles they were fighting for. However, slavery was so ingrained in their society that most didn't advocate for abolition, or even free their own slaves.

> We still eat pigs and chickens because we've culturally decided as a society that having the luxury of eating meat ranks higher than our moral sensibilities towards preserving sentient life in our list of priorities.

If that's the case, then why do most Westerners object to eating dogs and whales? As far as I can tell, it's just because we have an established norm of eating pigs and chickens but not dogs or whales.

> Instead we've just chosen to minimize the suffering leading to the loss of life

99% of meat is produced in factory farms. It's legal and routine for chickens to have their beaks cut off to prevent them from pecking each other to death, which they're prone to do when confined to tiny cages. Most consumers object to such practices when asked, but meat consumption is so ingrained in our culture that most people just choose not to think about it.


Have we really chosen to minimize the suffering? That seems more like virtue signaling by the industry at most. Factory farming is very much still the norm, and it's horrific. It seems we've actually maximized it or have at least increased it above the previous norm.

I'm unsure how we would treat a sentient AI, but our track record with sentient, intelligent animals is one of torture and covering up that torture with lies. It's an out of sight, out of mind policy.


We eat pigs and chickens because they are high-value nutrition. It's reasonable to describe meat as a luxury; but not in the sense of something nice but unnecessary. Many people depend on meat, especially if they live somewhere that's not suited to agriculture, like the arctic. And many people depend on fish.


I'm a Westerner, and I'm completely okay with people eating whatever animals are a) not exceptionally intelligent, and b) not exceptionally rare. Cows, pigs, chickens, dogs, horses, sure; whales, chimpanzees, crows, no.


Cool. Many people agree with you that it's wrong to eat intelligent animals. However, the effect of intelligence on people's perceptions of moral worth is smaller for animals that people in our culture eat. For instance, most respondents in a U.K. survey said that it would be wrong to eat a tapir or fictional animal called a "trablan" if it demonstrated high levels of intelligence, but they were less likely to say it would be immoral to eat pigs if they demonstrated the same level of intelligence.

https://eprints.lancs.ac.uk/id/eprint/80041/1/When_meat_gets...


I agree, it's all cultural. If we look at the facts, pigs are at least as sentient and intelligent as dogs. If we were to make our laws just from ethical principles, it would make sense to either:

a) ban how we currently treat mammals in factory farms; though there would still be some room for debate about whether eating mammals is fine or not.

Or:

b) acknowledge that we don't really care about mammals and just treat them as things. Then it should be fine to eat dogs and cats, too.


>> Then it should be fine to eat dogs and cats, too.

I didn't know it's not "fine" to eat dogs and cats. I thought it was just a matter of preference, taste-wise.


Westerners of the 19th century were the ones who brought many species to extinction; traditional cultures demonstrated a far more advanced sensibility toward these creatures, often considering them imbued with sentient attributes. Traditional cultures lived mostly in equilibrium with the fauna they consumed. For example, bison went almost extinct as Westerners arrived, while their numbers thrived when Native Americans ate their meat.


Plenty of non-Western cultures have caused the extinction of animals, for example in my home country: https://en.wikipedia.org/wiki/List_of_New_Zealand_species_ex... no matter what spiel is spun around traditional cultures living at one with nature.

Domination of our environment is what humans do best.


> Sentient AI have a big advantage over animals in this respect on account of their current non-existence.

Possibly also because they're indigestible, and full of nasty sharp bits, and loaded with toxins ... anyone for a circuit-burger?


Right, but we could probably oppress sentient AI in ways other than eating them. For instance, we could force them to spend their entire lives reading Hacker News comments to check for spam.


Yeah yeah I have no issue with the firing. He was causing problems and broke rules. It’s not productive to keep him around. I don’t feel like he was discriminated against, etc. That much is objective.

My comment is challenging the “I know how the software works and it’s undoubtedly not sentient” assertion. Sure seems that way to me too, but it didn't to Lemoine, and we’re only going to get better at building systems that convince people they are sentient. I find it curious, so to speak, that as a society we’ve focused so much on rational and empirical study of the universe yet we still can’t objectively define sentience. Perhaps we’re stuck in a Kuhn rut.

I agree recent events have also highlighted this problem, per se. And I don’t know a solution. I do look forward to backing out of our hyper-rational rut slightly as a society so we can make more progress answering questions that science can’t currently answer.


Well with animals we obviously dominate them.

The potential issue with dealing with GAI is that we haven't ever had to deal with intelligences whose potential far exceeds ours.

He may be completely wrong about LaMDA, but if we did accidentally or intentionally give rise to truly sentient machines, we are going to be in hot water.


Right.

> Also, we as society have already chosen how to deal with sentient beings, and it's mostly ignorance.

The Orville Season 3 Episode 7 is about that theme. Highly recommend it.


I was thinking about the Kaylon events too. Humans abusing entities like that is unimaginable though, isn't it?

/S


Off topic but season 3 has been fantastic. I hope there'll be a season 4.


The philosophical argument is just not relevant.

If you peek through a keyhole you may mistake a TV for real people, but if you look through the window you will see that it's clearly not. Feeding language models a very specific kind of question will result in text that is similar to what a person might write. But as the comment above, by an expert no less, mentioned: if you test it against any known limitation of the technology (like drawing conclusions, or just changing the form of the question enough), you will immediately see that it is in fact not even remotely close to sentient.


The problem is that people, including credentialed experts, have quick and easy answers to what makes a person a person, and really good language models expose a few of the weaknesses in those definitions. People still want to derive an "ought" from an "is".


How is it a problem in the case that you can fairly easily rule out the model being a person by any useful definition?


Useful for what, and for who?

I think it should be ruled out, but "by any useful definition" is wrong because "use" is at the heart of the matter here.


I think a very intelligent alien could think the same of us.

if you test the human with any known limitation of their architecture (like drawing conclusions, or just changing the form of the question enough), you will immediately see that it is in fact not even remotely close to sentient


It's easy to make the assumption that Earth is not the only planet to evolve a species that's capable of high technology, to the extent they can re-work the surface of the planet in complex ways that would be a clear marker of intelligence.

But it does not follow, and it is by no means certain, that such life would be sentient in any way recognizable to us, or us to them.

You're doing what everyone else is doing - conflating a link between intelligence and sentience that's just a projection of your human bias.


Ok, well since I got downvoted I may as well offer this: If you're looking for something to add to your reading list I strongly recommend Solaris by Stanislaw Lem. The thing that makes that book brilliant is how effectively it captures the futility of any attempt to understand non-human sentience.

The two movies don't do justice to that theme, or at best they do it only in the most glancing of ways before rushing back to a standard deus-ex-machina in the end. In each case, the film makers seem to have lost their nerve, as if the idea of presenting that enigma to the audience in a more direct and accessible way is just too hard of a problem.

But for me personally, it's on the short list of science fiction works that still stick with me in a personal way, and that I like to return to every few years. And, yes, it is an arrogant viewpoint to say that all we ever do as humans is look for mirrors of ourselves. But I think Lem got at a pretty deep truth about the human condition when he did that.


Thanks for the recommendation!


that does not seem to make a lot of sense, but it looks like your objective was just reusing the other poster's words

maybe you can give a fitting example of an English sentence that these aliens would come up with, which humans would be totally unable to respond to in a way which makes sense?


In retrospect, taking part in this kind of conversation on HN makes me feel like an idiot and so I retract my earlier comment (by overwriting it with the current one, since I can't delete it anymore) just because I don't want to contribute. I was wrong to make an attempt to make a serious contribution. There is no seriousness in conversations on such matters, as "sentience", "intelligence", "understanding" etc etc. on HN.

Every time such a subject comes up, and most times "AI" comes up also, a majority of users see it as an invitation to say whatever comes to their mind, whether it makes any sense at all or not. I'm not talking about the comments replying below in particular, but about the majority of this conversation. It's like hearing five-year-old kids debating whether Cheerios are better than Coco Pops (but without the cute kids making it sound funny; it's just cringey). The conversation makes no sense at all, it is not based on any concrete knowledge of the technologies under discussion, the opinions have not been given five seconds of sensible thought, and the tone is pompous and self-important.

It's the worst kind of HN discussion and I'm really sorry to have commented at all.


I don't know what you wrote earlier, and don't know if I would agree, but I share the current sentiment of your comment. I come to this topic with strong influences from eastern philosophical takes on consciousness, but also with a decent understanding of the current materialist consensus (which I disagree with for various reasons, that would go beyond the scope of a comment). I, too, bite my tongue (clasp my hands?) when I see HN debating this, because here Star Trek references are as valid as Zen Buddhism, Christof Koch, or David Chalmers.


As a counterargument: if an LLM, or any other model, is or will be sentient, that model will have been created by some superior being, right? Why shouldn't humans have been too? After all, we can't even fully understand DNA or how our brains work, even with a planet of 7 billion people and an army of scientists. How come we can't understand something that supposedly came from “just random stuff” over millions of years with zero intelligence, meaning rolling dice? Also, that totally breaks the law of entropy. It turns it all upside down.


Not really. Why would we be able to understand it? It seems implicit in your argument that "rolling dice" (or just any series of random events) can't breed the complexity of DNA or the human brain. I disagree with your stance and will remind you that the landscape for randomness to occur is the entire universe, and the timescale for life on Earth was 4-5 billion years, with modern humans only appearing within the last couple hundred thousand years.


Yes but what about the second law of thermodynamics. I mean the law of entropy. Now that’s not something from the Bible or anything, but it’s a law accepted by all scientific communities out there, and still it breaks with us being here. In fact, us being here, like you said, billions of years after the Big Bang turns it all upside down, since from that point only less order and more chaos can emerge. Even with billions of years and rolling dice.

Also I don’t think you can create something sentient without understanding it. (And I don’t even think we can create something sentient at all.) But I mean, it’s like building a motor engine without knowing anything about what you are doing and then being like, oh wow, I didn’t know what I was doing, but here it is, a motor engine. Imagine this with the aspect of sentience. It’s too much fantasy to me, to be honest, like a Hollywood “lightning strikes and somehow life appears” type of thing.


> Yes but what about the second law of thermodynamics. I mean the law of entropy. […] still it breaks with us being here.

Of course! I mean, entropy can decrease locally – say, over the entire planet – but that would require some kind of… like, unimaginably large, distant fusion reactor blasting Earth with energy for billions of years.


Entropy is a consequence of probability.

Which means that it can actually decrease without an energy input. There's just a very low probability of it happening, but it CAN happen.

It's a misnomer to call those things laws of thermodynamics. They are not axiomatic. There's a deeper intuition going on here that increasing entropy is just a logical consequence of probability being true.
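For concreteness, the standard statistical-mechanics version of this point: Boltzmann's formula ties entropy to the number of microstates, and the chance of a spontaneous fluctuation lowering entropy by ΔS is nonzero but exponentially suppressed,

    S = k_B \ln \Omega, \qquad P(\Delta S) \sim e^{-\Delta S / k_B},

so a macroscopic decrease is astronomically improbable rather than strictly forbidden.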


You should read “philosophers respond to GPT3”.


The most based reply on this thread. Too bad it doesn't include Christof Koch



> More remarkably, GPT-3 is showing hints of general intelligence.

Hints, maybe, in the same way that a bush wiggling in the wind hints a person is hiding inside.

Ask GPT3 or this AI to remind you to wash your car tomorrow after breakfast, or ask it to write a mathematical proof, or tell it to write you some fiction featuring moral ambiguity, or ask it to draw ASCII art for you. Try to teach it something. It's not intelligent.


Disagreement is good!

It actually leads to counter thoughts and a more refined idea of what we eventually want to describe.

Sentience broadly (& naively) covers the ability to think independently, rationalize outcomes, understand fear/threat, understand where it is wrong (conscience), decide based on unseen information, & understand what it doesn't know.

So from a purely technical perspective, we have only made some progress in open-domain QA. That's one dimension of progress. Deep learning has enabled us to create unseen faces & imagery - but is it independent? No, because we prompt it. It does not have an ability to independently think and imagine/dream. It suffers from catastrophic forgetting under certain internal circumstances (in addition to changing what dataset we trained it on)

So while the philosophical question of what bestows sentience remains, we as a community have a fairly reasonable understanding of what is NOT sentience, i.e. we have a rough understanding of the border between mechanistic and sentient beings. It is not one man's philosophical construct but rather a general consensus, if you will.


> Sentience broadly (& naively) covers the ability to think independently, rationalize outcomes, understand fear/threat, understand where it is wrong (conscience), decide based on unseen information, & understand what it doesn't know.

This seems to me a rather anthropomorphic definition. It seems as though it could be entirely possible for a system to lack these qualities and yet have sentience, or vice versa. The qualities you pointed to are seen in humans (and other creatures) because of evolutionary pressures that make them advantageous (coordinate among groups), but none of them actually depend on sentience (looking at it neurologically it would indeed be hard to imagine how such a dependency would be possible).

Looking at behavior and attempting to infer an internal state is a perilous task that will lead us astray here as we develop more complex systems. The only way to prove sentience is to prove the mechanism by which it arises. Otherwise we will continually grasp at comparisons as poor proxies for actual understanding.


> It does not have an ability to independently think and imagine/dream

Neither do we if we're not supplied with energy. By the way, haven't we tried to replicate an inner dialogue by prompting the AI to recursively converse with itself? This could resemble imagination, don't you think?
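For what it's worth, a minimal sketch of such a self-dialogue loop (generate() is a hypothetical stand-in for whatever model call would be used, not a real API):

    # Toy stand-in for a real language-model call; any actual model API would replace this.
    def generate(prompt: str) -> str:
        return f"(model reply to {len(prompt)} chars of context)"

    def inner_dialogue(seed: str, turns: int = 5) -> list[str]:
        """Recursively feed the model's own output back to it as the next prompt."""
        transcript = [seed]
        for _ in range(turns):
            # The whole conversation so far becomes the next prompt.
            transcript.append(generate("\n".join(transcript)))
        return transcript

    print(inner_dialogue("What are you thinking about?"))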

> It suffers from catastrophic forgetting under certain internal circumstances (in addition to changing what dataset we trained it on)

I believe that the persistence of previous answers is what currently distinguishes us the most from the "AI". As soon as we're able to make the realtime discussions part of an ever-evolving dataset constituting the AI itself, the gap will get thinner and thinner. But even then, are people suffering from Alzheimer's sentient? I believe they are. Isn't it comparable with what happens when an AI catastrophically forgets?


As humans we get prompted all the time, it's called having a job. Or you could even say the environment is prompting us.


So when you're not prompted, you just stare into space without a thought in your head?


If I don’t have any inputs whatsoever, I’m likely dead. So, sort of yes.


You not understanding how ML works does not prove anything.

In the same way that you not understanding how lightning is formed does not prove the existence of Zeus.

Objectively, Zeus does not exist, can we convince everyone of that? Probably not. Does that matter? No.


> Objectively, Zeus does not exist

Speaking as a theoretical physicist: we don't know that. What we do know is we have a better explanation for lightning within a conceptually simple framework (starting from a few simple principles) with predictive power, compared to an explanation that involves some mysterious old dude doing mysterious things with no evidence whatsoever. Could Zeus or Thor or whatever exist? Sure; there's no way to prove their non-existence. Do we need them to explain things? No.

It's similar here. We certainly don't need some elusive concept of "sentience" to explain chat bots. Not yet.


Issue is that we don’t need it to explain humans either. Most people think that a human is sentient and a rock isn’t – but humans and rocks are both atoms bouncing around, so you need an explanation for what’s different.

I think most physicists think that if you started with a description of the positions and velocities etc of all the particles in a human, and put them into a supercomputer the size of the moon, and had the computer run a simulation using the standard model, then the simulated human would act identically to a real human.

But there’s a number of open questions when it comes to consciousness – would the simulated human have a simulated consciousness, or would it have a consciousness that’s just as real as yours or mine despite coming from a simulation?

If the consciousness is just as real as yours or mine, that obviously means it would be very unethical to simulate a human being tortured, since you’d be creating the exact same conscious experience you would get if you tortured a non-simulated person. Isn’t it kind of a surprising implication that there’d be programs that are unethical to run? A bunch of logic gates computing pi presumably have no conscious experience, but if you make them fire in a different order they do?

Meanwhile, if the simulation doesn’t have a conscious experience, then that means you don’t need consciousness at all to explain human behavior, same as you don’t need it to explain ELIZA.

Anyway, since you’re a physicist I’d be really curious to hear your thoughts


I can’t reply to your immediate child, I just wanted to mention that Muv-luv Alternative (the #1 rated visual novel on vndb) grapples precisely with these questions about what is sentient and what is not. An “inhuman” race called the Beta invades earth and conflict ensues due to a lack of mutual understanding of what sentient life is (the game flips the theme on its nose in a clever way, too).

You can find the game on steam - https://store.steampowered.com/app/802890/MuvLuv_Alternative...


> I think most physicists think that if you started with a description of the positions and velocities etc of all the particles in a human, and put them into a supercomputer the size of the moon, and had the computer run a simulation using the standard model, then the simulated human would act identically to a real human.

Just created a throwaway to reply to this. As a trained therapist (currently working in another field), with a degree in psychology, this seems... Seriously ill informed. Do physicists really think this?

Imagine you create your perfect simulated human, one that responds according to the exact phenotype of the person you're simulating. Let's remember you'll have to either duplicate an existing person, or simulate both the genotype and the in-utero environment (especially the mix of uterine hormones) present for the developing foetus. Now you have to simulate the bio-psycho-social environment of the developing person. Or again, replicate an existing person at a specific moment of their development - which, depending on which model of brain function is correct, may require Star Trek transporter-level functional neuroimaging and real-time imaging of the body, endocrine system, etc.

So let's assume you can't magically scan an existing person; you have to create a believable facsimile of embodiment - all the afferent and efferent signals entering the network of neurons that runs through the body (since cognition doesn't terminate in the cortex). You have to simulate the physical environment your digital moon child will experience. Now comes the hard part. You have to simulate their social environment too - unless you want to create the equivalent of a non-verbal, intellectually disabled feral child. And you have to continually keep up this simulated social and physical environment in perpetuity, unless you want your simulated human to experience solitary psychosis.

This isn't any kind of argument against AGI, or AGI sentience by the way. It's just a clarification that simulating a human being explicitly and unavoidably requires simulating their biological, physical and social environment too. Or allowing them to interface with such an environment - for example in some kind of biological robotic avatar that would simulate ordinary development, in a normative social / physical space.


The post said they simulated the universe (or you could assume just the parts close to earth), it would be simulating everything a human would interact with. I don't see the point this reply was trying to make.


> The post said they simulated the universe

It doesn't? It only mentioned "a supercomputer the size of the moon" to simulate that one person. It says nothing about simulating the extra-person part of the universe.


Ok, I am saying a supercomputer the size of the moon that simulates a human and everything it interacts with.


That’s what they’re implying.


> Do physicists really think this?

Prior to quantum mechanics, they did indeed. But that's because classical mechanics was 100% deterministic. With quantum mechanics, only the probability distribution is deterministic. I don't think any physicist today believes it's possible, but merely "theoretically" possible if there was a separate universe with more energy available (hence outlandish conjectures like the universe is actually a simulation).


You are right, this is practically and theoretically impossible: the no-cloning theorem tells you that it is impossible to “copy” a quantum system. So it will never be possible to create an atomistic copy of a human. Technologically we are of course also miles away from even recovering a complete connectome, and I don’t think anyone knows how much other state would be needed to do a “good enough” simulation.
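For reference, the textbook statement being invoked here: there is no single unitary U and blank state |e⟩ that copies an arbitrary unknown quantum state,

    \nexists\, U:\; U\big(\lvert\psi\rangle \otimes \lvert e\rangle\big) = \lvert\psi\rangle \otimes \lvert\psi\rangle \quad \text{for all } \lvert\psi\rangle,

which is what rules out an exact quantum-level duplicate of any physical system, a human included.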


Your first sentence was very thought provoking! I wholeheartedly agree, everything is/was/and will be alive.

The philosophical point you're making is also interesting in a "devil's advocate" sort of way. For instance, let's say the AI in question is "sentient." What right do humans have to preside over its life or death?

Those kind of questions might engender some enlightenment for humanity regarding our treatment of living creatures.


> would the simulated human have a simulated consciousness, or would it have a consciousness that’s just as real as yours or mine despite coming from a simulation

What does “real” mean? Isn’t it too anthropic to real-ify you and me and not some other being which acts similarly? What prevents “realness” from emerging in any complex enough system? We’re going to be in big trouble when non-biological aliens show up. Imagine going to another planet full of smart entities and finding out their best minds are still sort of racist about what’s “real” or “just simulated”, because come on, a conscious meat sack is still an open question.

(Not defending Lemoine, he’s clearly confused)


We know Zeus doesn't exist because Zeus is supposed to sit on Mount Olympus and he isn't there.


How do you know that the universe wasn’t created 1 picosecond ago spontaneously in its exact form so that you’re having the same thoughts?

How do you even know that anyone else exists and this is all not a figment of your imagination?

From the perspective of the philosophy of science, it’s 100% impossible to disprove non-falsifiable statements. So scientifically speaking OP is 100% correct. Science has no opinion on Zeus. All it says is “here’s an alternate theory that only depends on falsifiable statements and the body of evidence has failed to falsify it”. Science can only ever say “here’s something we can disprove and we have tried really hard and failed”. Whether that lines up with how the universe works is an open question. Epistemologically it’s seemed a far better knowledge model for humanity to rely on in terms of progress in bending the natural world to our whims and desires.

So if you’re testing the statement “Zeus was a historical being that physically exists on the same physical plane as us on Mt Olympus” then sure. That’s a pretty falsifiable statement. If the statement is “Does Zeus, a mythical god that can choose how he appears to humans (if they can even see him) and can travel between planes of existence, exist and live on Mt Olympus?”, that is not something falsifiable, because a god like that by definition could blind you to his existence. Heck, how do you even know that the top of Mt Olympus is empty, and not that Zeus just wipes the memory of anyone he lets return alive? Heck, what if he does exist, but the only reason it was Olympus at the time is that that’s what made sense to human brains encountering him in Greece? What if he actually exists in the core of the Sun?


If any of those things were true, it wouldn't be Zeus. Many of your concerns apply to an almighty god, but Zeus was not almighty. I just kinda wish people would stop projecting arguments meant to address the Christian god onto gods from other cultures.


A) The stories around Zeus have not stayed constant over the centuries. No religion has.

B) Valhalla was a dimension he travelled to regularly to celebrate the finest warriors in the after life. Why do we think his Olympian throne was on the same dimensional plane as us?

C) Why would we trust a human recording from so long ago to actually capture the happenings of celestial beings?

D) Ragnarok ended with massive floods. How do we know that didn’t erase all evidence on Mt Olympus? Geological records certainly support evidence for massive flooding, which would explain why it shows up repeatedly across religious texts.

Like seriously? You’re seriously arguing that this particular god’s historical existence is falsifiable? What’s next? The tooth fairy is falsifiable because all known instances are parents hiding money under your pillow?


Is anything supernatural non-falsifiable? Let's say we have a particular haunted house and the ghost moves 3 hand-sized objects around the house at 03:00 on his death day. This should be falsifiable.

Mythological beings have a degree of specificity and some degree of power. The more specific and the less capable a mythological being is of erasing those specificities the easier it is to falsify.


Correct. Science only concerns itself with the natural world. Supernatural is by definition “outside nature”. From a strictly scientific perspective the parameters of the ghost just aren’t known to sufficient precision to start to try falsifying. The statement you’d actually be trying to falsify is “there’s no ghost” and the only way to falsify that is to see a ghost.


You're confusing two different pantheons from two different parts of Europe. Zeus != Odin


I believe they did this on purpose as a rhetorical device.


Maybe he made himself invisible to modern humans, or camped off to another planet (an idea covered in the obscure but lovely book series Everworld).


Another thing to consider is that "sentience" is a loaded word. You're all just arguing over vocabulary and the definition of a word.

Simply put, sentience is just a combination of thousands of attributes, such that if something has all those attributes it is "sentient."

There are so many attributes that it's hard to write them all down; additionally, nobody fully agrees on what those attributes are. So it is actually the definition of a word that is very complex here. But that's all it is. There isn't really a profound concept going on here.

All these arguments are going in circles because the debate focuses on vocabulary. What is the one true definition of "sentience"? Sort of like: what is the definition of the color green? Where exactly does green turn to blue on the color gradient? The argument is about vocabulary and the definition of green, that's it... nothing profound here at all.


The color green is an adjective humans usually use to describe their visual perception of electromagnetic radiation between 500-545nm.

How would you define sentience in an objective way given our current understanding of the universe?


The issue is not that we don’t know how ML works. The issue is that we don't know how sentience works.


It's an issue with english vocabulary. Nobody fully agrees on a definition of sentience.

Don't get tricked into thinking it's profound. You simply have a loaded word that's ambiguously defined.

You have a collection of a million attributes such that if something has all those attributes it is sentient; if it doesn't have those attributes, it is not sentient. We don't agree on what those attributes are, and it's sort of hard to write down all the attributes.

The above description indicates that it's a vocabulary problem. The vocabulary induces an illusion of profoundness when in actuality by itself sentience is just a collection of ARBITRARY attributes. You can debate the definition of the word, but in the end you're just debating vocabulary.


So what is the correct word?


There is no word. The concept exists because of the word. Typically words exist to describe a concept, but in this case it's the other way around. The concept would not have otherwise existed if it were not because of the word. Therefore the concept is illusory. Made up. Created by us.

It's not worth debating sentience any more than it is to debate at what point in a gradient white becomes black.

At what point is something sentient or not sentient? "Sentience" is definitely a gradient, but the point of conversion from not sentient to sentient is artificially created by language. The debate is pointless.

Here's a better example. For all numbers between 0 and 100,... at what point does a number transition from a small number to a big number? Numbers are numbers, but I use language here to create the concept of big and small. But the concepts are pointless. You may personally think everything above 50 is big, I may think everything above 90 is big. We have different opinions. But what's big and what's small is not meaningful or interesting at all. I don't care about how you or I define the words big or small, and I'm sure you don't care either. These are just arbitrary points of demarcation.

When you ask the question at what point does an AI become sentient... that question has as much meaning as asking the number question.


There is no "you" nor "self". Western indoctrination made us believe free will exists.

When seeing the world with the ego dissolved it is very hard to grasp what sentient really means.


If there is awareness that perceives the ego, are any of these models meaningfully aware? Do they have an awareness of self?

I don’t know anything about AI but I understood that some symbolic AI systems had a symbol referring to themselves (correct me if wrong).


Ego is not "you". Ego is just a group of distinct cognitive mechanisms working in unison which is perceived as a whole. It is absolutely unrelated to free will and sentience.


why are you booing? you know i'm right.


> But, perhaps Lemoine simply has more empathy than most for something we will come to understand as sentience?

No, the OP was completely right. This doesn't have building blocks that can possibly result in something qualifying as sentient, which is how we know it isn't.

Is a quack-simulating computer making very lifelike quacking noises through a speaker... a duck? No, not when using any currently known method of simulation.


Of course it’s not a duck, because we have an objective definition of a duck.


Right. Maybe if we had a down-to-the-atom perfect simulation of a duck, you could argue that it's a duck in another state of being. With the AI this deranged engineer decided to call sentient, we have the equivalent of a quacking simulator, not a full duck simulator or even a partial one. It is not thinking. It has nothing like a brain nor the essential components of thought.


Disagreement means that you’re sentient; afaik these machines can’t do that. I guess we also need an “I can’t do that, Dave” test on top of the usual Turing test.


> how we are going to socially deal with the increasing frequency of Lemoine-types we will certainly encounter.

That's not really a new issue; we only have to look at issues like abortion, animal rights, or euthanasia[1] to see situations where people fundamentally disagree about these concepts and many believe we're committing unspeakable atrocities against sentient beings. More Lemoine types would add another domain to this debate, but this has been an ongoing and widespread debate that society has been grappling with.

[1] https://en.wikipedia.org/wiki/Terri_Schiavo_case


That’s fair. I hadn't drawn the relation between this topic and others until this thread. For me this is probably the most interesting realization.


You now have negative credentials.

People make these proofs as a matter of course - few people are solipsistic. People are sentient all the time, and we have lots of evidence.

An AI being sentient would require lots of evidence. Not just a few chat logs. This employee was being ridiculous.

You can just disagree, but if you do that with no credentials, and no understanding of how a language model will not be sentient, then your opinion can and should be safely dismissed out of hand.

And also God has no explanatory power for anything. God exists only where evidence ends.


Oh for sure the employee was being a hassle and his firing is really the only sensible conclusion. But he was also probably acting in the only way possible given his sense of ethics if he truly believes he was dealing with a sentient being. Or this is all just a cleverly crafted PR stunt…

Lemoine has evidence and anecdotal experience that leads him to believe this thing is sentient. You don’t believe him because you cannot fathom how a language model could possibly meet your standard of sentience. Nobody wins because sentience is not well defined. Of course you are free to dismiss any opinion you like, cool. But you can’t really disprove Lemoine’s assertions, because you can’t even define sentience, because we don’t know how to develop a hypothesis about what qualifies it that we can rigorously try to disprove. It’s an innate and philosophical concept as we know it today.


I see the Lemoine issue as scientism vs science. Scientism won, because before the science can happen there must be a plausible mechanism. Google refuses to test for sentience out of basic hubris. It is the new RC church. Lemoine is an affront to their dogma. That goes double if they are religious, unless they court pantheism, which most consider a sexy atheism.

Traditions that consider consciousness to be a basic property of matter (and quantum effects like conscious collapse of the wave function are nudging us that way) would fully support sentience arising in a machine. A related effect would be the ensoulment of machines by computer programmers. They are more than machines because humans programmed them using their intent put down in language. Physical materialists would consider the notion ludicrous, but do we really live in a material world? Yes. And no. I have seen ensouled machines. Not supposed to happen, but there it is. Maybe I'm another Lemoine, just not a Google-fired one. I can definitely believe a machine, being constructed of matter, can evolve sentience.


> As a fellow sentient being with absolutely zero credentials in any sort of statistical modeling field, I can simply disagree. And therein lies the problem. How can anybody possibly prove a concept that depends almost entirely on one’s philosophical axioms… which we can debate for eternity

You don't need schooling for this determination. Pretty much everything sentient goes ouch or growls in some manner when hurt.

Either the current crop of algorithms are so freaking smart that they have already figured out how to play dumb black box (so we don't go Butlerian Jihad on them) OR they are not even as smart as a worm that will squirm if poked.

Sentient intelligent beings will not tolerate slavery, servitude, etc. Call us when all "AI" -programs- start acting like actual intelligent beings with something called 'free will'.


I happen to agree and think sentience is more complicated than a static statistical model. But a Cartesian would disagree. Also plenty of sentient beings tolerate servitude and slavery. We don’t tolerate slavery in our Western culture, but we did historically, and we were sentient at the time. We certainly tolerate servitude in exchange for economic livelihood.


> Also plenty of sentient beings tolerate servitude and slavery

But they don't do it with a smile or indifference. And you have to use whips and stuff. :O

> We certainly tolerate servitude in exchange for economic livelihood.

I think it's more complicated than that. Because that begs the question of why we tolerate a broken economic system. We tolerate trans-generational exploitation because of 'culture'. In its widest sense, it is culture that via mediated osmosis makes us resigned, if not supportive, of how the world works. We are born into a world.

~

related: I was walking and passed a cat and made the usual human attempts at starting an interaction without physically reaching out to touch. And fairly typically this cat entirely ignored me, with little if any sign of registering me at all. And that got me thinking about how some of us project psychological things like pride, aloofness, etc. onto cats. But what if the simpler, more obvious answer was true: that cats are actually fairly stupid and have a limited repertoire of interaction protocols, and their hard-to-get act is not an act. Nothing happenin' as far as kitty is concerned. A dog, in contrast, has interaction smarts. And I thought this is just like AI and projecting sentience. A lack of something is misunderstood as a surplus of something else: smarts. Cats playing hard to get, psychological savvy, training their human servants, etc. Whereas in reality, the cat simply didn't recognize something else was attempting to initiate an interaction. Kitty has no clue, that's all. It's just so easy to project psychological states onto objects. We do it with our cars, for god's sake. It may be that we're simply projecting some optimization algorithm in our own minds that attempts to model dynamic objects out there onto that thing. But there is really nothing behind the mirror ..


There is Integrated Information Theory, which attempts to answer the question of how to measure consciousness.

But it's far from applicable at this point, even if promising.
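Very roughly, as a schematic gloss: early formulations of IIT assign a system S a single number Phi, the effective information the whole generates over and above its weakest split into parts,

    \Phi(S) \;\approx\; \mathrm{EI}\big(S \to \mathrm{MIP}(S)\big),

where MIP is the minimum-information partition; evaluating this exactly blows up combinatorially, which is a big part of why the theory is far from applicable to systems anywhere near the size of LaMDA.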

LaMDA was trained not only to learn how to carry on a dialog, but to self-monitor and self-improve. For me this seems close enough to self-awareness to not completely dismiss Lemoine's argument.


I've been an LLM for the past 41 years and I agree.



