AI Is Not a Horse (georgestrakhov.com)
17 points by georgestrakhov on Jan 22, 2023 | 77 comments



Desktop computers have been around since the 1960s, and Apple's first computer was released in 1976. I think people understand what computer programs are without some clumsy or awkward metaphor. ChatGPT isn't sentient; it's a large language model (LLM) that performs better than a Markov chain, but the underlying principle isn't so far off, and it's still a computer program. One that you can anthropomorphize readily, but it's still just a computer program, albeit a modern and quite impressive one. It's a tool, like a hammer.
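To make the Markov-chain comparison concrete, here's a minimal sketch in Python (my own illustration with a toy corpus, not anything from ChatGPT's actual implementation): learn which word tends to follow which, then generate text by sampling from those counts. An LLM conditions on far more context with a neural network, but it is still, in the end, a program producing likely continuations.

    import random
    from collections import defaultdict, Counter

    def train_bigrams(text):
        """Count, for each word, how often each possible next word follows it."""
        words = text.split()
        counts = defaultdict(Counter)
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
        return counts

    def generate(counts, start, length=10):
        """Walk the chain: repeatedly sample a next word weighted by its count."""
        word, out = start, [start]
        for _ in range(length):
            followers = counts.get(word)
            if not followers:
                break
            word = random.choices(list(followers), weights=list(followers.values()))[0]
            out.append(word)
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the rug"
    print(generate(train_bigrams(corpus), "the"))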


From what I learned about neural networks (not much really), any specific NN is not a computer program in our programmer's sense. It's higher-order data in a commonly used format/structure, reducible by a computer program when combined with an input. You may rightly claim equivalence, but regular people usually imagine loops, conditions, some low-code-like schematic diagrams or a cryptic text when you say that. At least that's what I think most of them do. They definitely don't think of hyperdimensional gradient descent optimization matrices or whatever's used there today.
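A minimal sketch of that distinction (toy layer sizes, with random numbers standing in for learned weights): the only "program" in the traditional sense is a short, generic forward pass; everything that distinguishes one network from another lives in the arrays it loops over.

    import numpy as np

    # The "program" part: one short, generic forward pass. It is the same
    # code no matter what the network has learned.
    def forward(layers, x):
        for W, b in layers:
            x = np.maximum(0.0, x @ W + b)   # linear layer followed by ReLU
        return x

    # The "data" part: just arrays of numbers (here random, standing in for
    # whatever gradient descent would have produced).
    rng = np.random.default_rng(0)
    model = [
        (rng.standard_normal((4, 8)), rng.standard_normal(8)),
        (rng.standard_normal((8, 2)), rng.standard_normal(2)),
    ]

    print(forward(model, rng.standard_normal(4)))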

While I agree that ChatGPT isn't sentient (in however broad a definition of the term we'd like to use that could still make sense), neither is it a traditional program, nor is it non-sentient because it's a program, imo.


Who's to say that a brain is not a kind of a computer and that all humans are a sort of a computer program? Going from "computer program" to "not sentient" is a leap of logic I'd like to hear more about.


> Who's to say that a brain is not a kind of a computer and that all humans are a sort of a computer program?

An interesting insight from a book I won't remember:

When clockwork was the most advanced technology, many books talked about how people are just like clockwork. When steam was the most advanced, many books talked about how humans are just like steam engines. When cars were the most advanced, people were just like cars. And now we're just like computers.

And the ancients talked about how people are just like flowing this or that. Because waterworks were the advanced tech of the time.


Today, synthesizing viruses and modifying DNA is at the forefront of technology:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6171941/

For some reason, books don't talk about how humans are just like DNA though. But the logical extension of those technologies is to design and build biological organisms.

Obviously, if that ever happens, humans will say that humans are like those organisms.

But then someone will remember the quote. And because the quote has been true in the past, and humans were nothing like clockwork, steam engines, or computers, then, by extension, humans are also not like biological organisms.

Which is nonsense, unless you assume that humans contain something metaphysical and unreproducible.

If not, then you have to agree that at some point the distinction will blur. But at which point?


> the logical extension of those technologies is to design and build biological organisms

A friend of mine was doing her PhD on "implementing NAND gates in DNA" almost 10 years ago now. Microbiology is wild.

I think the distinction will be that humans aren't like designed/programmed biological organisms. At least until we are ...


Yeah this is largely the availability bias of techies


The trouble is going from a mechanism to consciousness. It's really not clear why consciousness is required for anything at all or how it could possibly emerge. Yet — à la Descartes — surely the awareness of our own thought is the thing we can be most certain of. Everything else could be an illusion except for that.


Even the awareness of our own thoughts, and certainly the sensation that we're somehow "in control of them" (vs being determined by the laws of physics) is almost certainly largely an illusion...


Who’s to say that computations occur at all? Maybe reality is just one great big static lookup table?


The leap of logic is using a simile to draw a parallel between a human and a computer program.

You don’t even have an argument. Just a who’s-to-say.


Au contraire, the leap of logic is assuming that computer programs could never be sentient. We simply don’t have any proof either way, so we have to assume the possibility.


Sorry about the blasé reply, then.

> Au contraire, the leap of logic is assuming that computer programs could never be sentient.

That’s not quite what I meant. I don’t categorically dismiss the possibility.

> We simply don’t have any proof either way, so we have to assume the possibility.

Like God?


> ChatGPT isn't sentient

Perhaps, perhaps not.

Although I lean towards you on this, we don't have a definition good enough to test against, so we can't tell if it is or isn't.

I was recently thinking about the philosophy course I took 22 years ago; Descartes saying that every experience he ever had could be illusion or insanity, and the only thing he could know was "I think, therefore I am"; and A. J. Ayer a few centuries later saying no, all you can say is "There is a thought now", because any memory or certainty you have of the past is also capable of being insanity.

So… how can we be sure that LLMs or image generators or any other AI don't have the feeling of an instantaneous now? That they don't have "a thought now"?

We can't rely on the fact that they will, if asked, hallucinate a story about when they were switched off: we also do that, not just with dreams when we sleep, but with our tales of reincarnation and afterlives.


> Although I lean towards you on this, we don't have a definition good enough to test against, so we can't tell if it is or isn't.

I don't think a definition of consciousness is possible in any objective sense, as consciousness is, by definition, a subjective experience.

A more useful question is why ask if LLMs are conscious?

ex:

1. Are you trying to decide whether shutting them off is ethical?

2. Are you trying to figure out how to manipulate them into answering your questions correctly?

3. Are you trying to answer whether they can replicate anything a human can do?

When the contextual question that prompts the word "consciousness" is defined, we can make progress in determining whether something is "conscious".

For example, if it's a question of ethics, then you follow up by defining your ethical framework.


I think the ethical question is the only one that really requires even discussing consciousness.

Unless there are externally observable things that can't or are unlikely to be done without consciousness, which doesn't seem to be a solved question either.


I don't think ethics involves consciousness either; you can't inspect other people's consciousness, so how can you put ethical weight on it?

You can assume that other people are conscious like you, and so you put ethical value on that. But then you're really putting ethical value on "being like me", irrespective of whether other people are or are not conscious.


> But then you're really putting ethical value on "being like me", irrespective of whether other people are or are not conscious.

I can't find fault with that, but I wish it were not so. I want it to be about consciousness being special rather than brain-structure chauvinism.


"Unless there are externally observable things that can't or are unlikely to be done without consciousness..."

If there weren't, why would consciousness have evolved in the first place? It seems to me not unlikely that the only way we can accurately surmise what's going on inside the heads of other people (an obvious survival advantage) is to have that subjective experience of what's going on inside our own.


But even the ethical question is straightforward.

LLMs definitely don't suffer, and we are happy "shutting off" a dog, which both can suffer and has some greater level of consciousness. So, yes, it's fine to shut off LLMs.


Society may be happy doing that to a dog; I am not. Nor am I happy doing that to animals which are turned into dog meat. Contradictions abound with more than a superficial glance at many moral topics, in part due to our lack of omniscience, omnipotence, or omnibenevolence.


But we aren’t talking about you. We are talking about us, society. And it’s unequivocal.


Society is made up of individuals.

200 years ago, some societies were fine with slavery. Enough individuals rejected it and society changed.

What you say only works insofar as "ethics" means only the rules of your society and not of any person or group; this would also reject the "E" from PETA, the US Declaration of Independence, and the Suffragettes.


Okay. So we kill and keep dolphins captive. We eat octopus. We run experiments on monkeys.

This happens globally. For 1000s of years. There is no global or even local movement to treat conscious animals any better… which is exactly my point: AI in its current state is less than that. So until we as a society want to tackle consciousness in animals, I don't think we will tackle consciousness in AI.


> For example, if it's a question of ethics, then you follow up by defining your ethical framework.

A. I oppose forcing sentient beings into unpaid labor and/or murdering them.

B. We do not know (to a satisfactory degree) what constitutes sentience.

C. Given B, we do not know whether LLMs are sentient. [1]

D. Given A and C, we should exercise extreme caution when it comes to making LLMs widely available, at least until B is resolved.

[1] Indeed, solipsists posit that we cannot even be certain that other humans are sentient.


There are a number of issues with your reasoning here.

The main one that my previous post was getting at was (A) is not an axiom in your ethical framework; it's derived. And I think you'll find if you try and build up to (A) from base principles, sentience won't be a condition of the ethics of an action.

A secondary one is that (C) is true regardless of what you replace LLMs with, because you've outright stated that you don't have a means to judge what sentience is. You could take your argument and use it for:

- Humans

- Cats

- Ants

- Car engines

- Rocks

- Air

And it will hold, since you aren't relying on any particular properties of LLMs to make your case.


(A) is just sentiocentrism, which isn’t a novel ethical view. There’s a pretty extensive Wikipedia page on it [1].

Regarding your second point, indeed we can’t truly know whether or not anything is sentient, so we must reason in terms of probabilities.

We can observe some degree of understanding and emotion in humans, cats (and other animals), and LLMs. We also consider (in the case of humans and animals) that their brains are probably biologically similar to ours, and (in the case of LLMs) that their neural nets are weakly modeled after our brains.

When we consider car engines, rocks, or air, we cannot observe any degree of understanding or emotion (though indeed we can never be certain). We also do not observe any probable similarities to our brains.

Therefore, we arrive at the conclusion that P(other humans sentient) > P(cats sentient) > P(ants sentient) >> P(engines sentient) > P(rocks or air sentient).

This leads us to apply the precautionary principle [2] to humans, animals, and LLMs, but not to engines, rocks, or air.

But indeed, ethical questions around the uncertainty of sentience aren’t novel either. Wikipedia also has a page on this topic [3].

[1] https://en.wikipedia.org/wiki/Sentiocentrism

[2] https://en.wikipedia.org/wiki/Precautionary_principle#Animal...

[3] https://en.wikipedia.org/wiki/Ethics_of_uncertain_sentience


Pre-civilization animism like assigning will and consciousness to trees is more common-sensical compared to what you are doing.


Argument by insult isn’t appropriate.

What exactly do you find "more common-sensical" about imagining a tree is conscious than something that can create English sentences describing the world?


Not so much an argument as a statement.

A tree is made of living matter, like fauna. An AI is a program which runs on silicon, and I don't think scientists have found silicon-based lifeforms on Earth.


The only way that statement would be relevant is if souls existed.

I don't believe in souls.

To your previous comment upthread, GPT-3 is about the complexity of a rat brain.

Do rats have feelings? I don't know, as we share a common ancestor it might just be an anthropomorphisation of shared mammalian reflexes that makes me think rats act like they have them.

Does GPT-3? I don't know, but it's a very good actor, so I can't even infer anything from the fact it can act like it has them — after all a VHS player can also perfectly reproduce human emotional affect.

Is GPT-3 more like a VHS or more like a rat? I see aspects of both in it. I don't see a way to more than guess at which aspect makes a difference.

A rat brain has to keep doing rat things to stay alive, but GPT-3 can use the same parameter space to instead communicate in Spanish and Mandarin, to answer physics questions and write poetry and Python. The "staying alive" process might, or might not, have anything to do with consciousness, but even if consciousness in vivo is about the process of staying alive, there is no obvious reason to think the process/structure cannot be replicated in silico.

Trees can't do any of those things that LLMs and diffusion models can do.


>> The only way that statement would be relevant is if souls existed.

I think the OP is saying that carbon is not like silicon and perhaps carbon is special in a way that silicon isn't. They're not saying that humans are special because they have souls, or something along those lines.


Yep, pretty much.


>I don't believe in souls.

Why not?

I mean I could presume a lot on your behalf, like how the concept is non-falsifiable, lack of evidence that can be replicated, past trauma from people you trusted who insisted upon it but then behaved hypocritically, etc. but every time someone says they don't believe in souls I can't help but ask why not.

You don't have to answer but I'm always interested to hear what others say.

Disclaimer: I personally believe in something akin to souls, not individualized, and it's hard to explain my ideas so I like to hear what others think before I say what I think.


The more I learned about the world, the less room there has been for souls.

Souls fit fine in a world where ghosts, mediums, possession by spirits, apparitions from the afterlife, dryads, and/or the katabasis of Orpheus are taken seriously. But each of these has turned out to have been mundane (in the old sense of the word) or pure fiction.


> Souls fit fine in a world where ghosts, mediums, possession by spirits, apparitions from the afterlife, dryads, and/or the katabasis of Orpheus are taken seriously. But each of these has turned out to have been mundane (in the old sense of the word) or pure fiction.

And yet you believe there could be a ghost in the machine…


During my childhood, I moved around frequently, and one of the valuable lessons this taught me is that a significant portion of a culture's understanding of the world is largely arbitrary. From my point of view, concepts like souls fall firmly in this category.


And concepts like conscious machines?


I'm a materialist, so that matter can think I take as possible a priori, simply because we do, and we're made of matter. And since I think consciousness (however one ultimately defines it) is a physical process, the particulars of any specific substrate might be interesting, but none of it makes it surprising that consciousness might exist there.


Makes sense. Thanks.


> The only way that statement would be relevant is if souls existed.

I resent your presumptuous attitude.

My argument has got nothing to do with souls. And I don’t even personally believe in them.

> Trees can't do any of those things that LLMs and diffusion models can do.

LLMs aren't mystical. We know what they are made of. So why fantasize that they are conscious? It's like an extra, unnecessary step.


What makes it “common sense” that being made of cells makes you conscious but being able to learn and speak does not?


The made-of-cells thing is an old superstition that merely makes relatively more sense[1] than this modern techy superstition: a human is living matter and is conscious, so why not trees as well?

The techy superstition, however, involves people looking at AI programs, acknowledging that we know how they work, and then for some odd reason wanting to insist that they could now, or at some point, gain the emergent property of consciousness. Based on what? Well, because they "learn". And "speak". What are these two words? Metaphors based on what humans do. So some computer programs can do things that we metaphorically compare to our own skills. And that imbues them with consciousness? How?

At this point you get the response that all theists insist upon in the end, when it all comes down to it:

Why not? Incredulity, in other words. Why not God? Why not a ghost in the machine? Can you disprove that there is a ghost in the machine?

[1] Not in an absolute sense


"> ChatGPT isn't sentient Perhaps, perhaps not."

I'm pretty sure it's not, but I have to admit that it does a reasonable job in many situations of passing a Turing test. (Not all, it fails for example at counting.) My guess is that if Alan Turing interacted with one, he'd conclude that he needed to go back to the drawing board on his test for sentience.


> how can we be sure that LLMs or image generators or any other AI don't have the feeling of an instantaneous now

I don’t think I had seen anyone yet suggesting that image generators could have feelings.


Neither have I, but why would one be different from the other?


It isn't.


That's a pretty good observation though - it's only because ChatGPT spits out language that appears to convey some of the sorts of thought patterns that humans have that we're even having this discussion, but is it even relevant? I would think a chimpanzee or a dolphin (or a human unable to use language for some reason) would far more obviously qualify as sentient than an LLM. And as you say, a similar computational model that generates images or music instead doesn't even provoke the debate about possible sentience.

Personally I don't think there's any doubt at this point - ChatGPT itself constantly reminds you in no uncertain terms that it's "just a language model". I've never seen it display anything remotely like emotion or hopes/desires, or seen it attempt to plead or engage with you in any sort of way. It can't initiate anything (unprompted, it will never just bring up something to talk about), it shows no curiosity, fails to appreciate or recognise humour, never seems to show uncertainty, and frankly has no "personality" (at best there are a few particular turns of phrase it tends to use that I might associate with it, but compared to the hundreds of cases where I've got to know actual humans purely through online text-based interactions, there's really no comparison).

Once we start to see those sorts of clear indicators of human-like sentience (probably sooner than we expect!) then it's worth revisiting the discussion.


> we don't have a definition

Don't worry, these goalposts are not meant to stay in place for too long.


> but it's still just a computer program

Maybe you are not impressed by the answer to "how does it work?", but we need to think about "what does it learn?" — what data, what skills, what limitations. It's not just any data; it is our whole culture as a dataset. Does that give it some kind of special properties?


You're absolutely right that we should try to understand LLMs. That's the point of science, after all, to understand stuff.

But the truth is that very few people are actually trying to understand LLMs. Rather, what the vast majority do is poke them unsystematically and then ooh and aah about all the things they can do that earlier language models couldn't.

Careful analysis? Thorough, systematic experimentation? Theoretical results? I have yet to see any of that, and of course it's a joke to talk about theoretical results when it comes to machine learning research these days.

It's as if we don't have any computer science, any theories of information, complexity, computation, learnability, etc, that we could muster to try and explain how a complex computer program works. So are people really trying to understand or are they just happy to ride the hype train? Because I obviously think it's the latter.


Hmm, my metaphor for AI is a statistical prediction machine that produces maximally likely output for a given input, conditioned on some historical data.
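As a toy illustration of that metaphor (the contexts and probabilities below are made up): condition on the input, look up a distribution estimated from historical data, and emit the single most likely continuation.

    # "Historical data", boiled down to conditional next-word frequencies
    # (made-up numbers for illustration).
    history = {
        "the cat": {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
        "cat sat": {"on": 0.8, "quietly": 0.2},
    }

    def predict(context):
        """Return the maximally likely continuation of the context, or None if unseen."""
        dist = history.get(context)
        return max(dist, key=dist.get) if dist else None

    print(predict("the cat"))  # -> sat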


Your metaphor fails to indicate the dependence of AI on data. AI can't appear in a void. There must be an environment producing its data, then AI development is conditioned on the environment. We forget about that when we just focus on the learned model. But without all the external support structure there is no AI.

Let's take a simple case - AlphaZero. Its environment was just a Go board, and rules simple. Knowing who won the game easy to test at the end of the game. So this led to a massive number of self-play games. In a few days AZ beat humans, and it was our own game, with 2 millennia head start.

This is how much the environment and the external support structure influences AI development. "Maximally likely output for a given input" depends on it.


"Your metaphor fails to indicate the dependence of AI on data." I'm not sure how you conclude that; the post you're responding to says exactly "...conditioned on some historical data." No historical data, no conditioning, and therefore no AI, just random bits.

And fwiw, you could say much the same about human beings. There must be an environment producing our oxygen, water, food. Indeed, it's not clear what a single human raised in isolation would be like, even assuming the oxygen, water, food. Not very intelligent seeming, I would guess; perhaps not even with much of a language. Fortunately, no one has performed the experiment, but we've come close: deaf people raised by non-deaf parents in a non-deaf community appear to have a sparse sort of sign language. Only when they're put together with other deaf people does a more typical, fluent sign language arise, with a full grammar. (This we know from the deaf people in Nicaragua.)


Well, maybe for DNNs, but there can be deterministic systems learned through RL that aren't stochastic. But it's a good enough metaphor for a big part of what people talk about when they say AI.


I've been using "AI is algorithmic sleight of hand."


In what sense is AI “sleight of hand”, but your intelligence isn’t?


Real human intelligence comes from a place we don't understand yet. I recall experiments being done where they could identify where a "thought" started but not how it was initiated. I'll have to find the research now. Whereas AIs are just algorithms which read data and process it into schemas, i.e. we know the beginning, middle, and end, and it's nothing more than a facsimile of what we know as real, bona fide intelligence in a human.


I’d say FGPM for fairly good pattern matching would be a nice descriptor.


I don't think we need a metaphor to understand the relationship between humans and AIs; AIs are tools used by humans like many others.


Ah yes, and the Internet is just a series of tubes.


I always thought the criticism of that quote was misplaced. In networking it's extremely common for us to refer to connections as "Pipes". It's even embedded in many of the programming languages exactly as such. It's also important to note, that he never used the word "just" - that's a misquote.


I don't know what your strange definition of the internet has to do with my comment, but just to stay on topic: the internet is also a tool we use, to communicate or inform ourselves. We are talking about the relationship between thing X and humans, not its definition.


It’s a metaphor (a bad one) made famous by a US Senator. I used it to invoke the importance of metaphor. We absolutely need metaphors (good ones) to explain technology to not only the public, but also to the people who legislate.

The Internet is not just a tool in the same way that a hammer or a dishwasher is a tool, because it has had a totally transformative effect on humanity. It's on the level of fire, gunpowder, or the printing press. And there are inklings that the same argument could be made for AI. So your comment is just completely missing the point.

“The medium is the message.”


One should use good (hopefully) metaphors to explain technology, as you said; this is done all the time. But here we want to define the relationship between humans and thing X, and I would say there is no difference between the relationship with AIs and the relationship with all other tools. I would not treat a hammer differently from a printing press because it has had a less transformative effect on humanity.


> I would not treat a hammer differently from a printing press because it has less transformative effect on humanity

Total nonsense. There’s a reason the first amendment says nothing about hammers.

To use a metaphor, it’s like saying there’s no reason to treat 1.5 and pi differently, because they’re both just numbers.


Tools are tools; their importance in history doesn't change the relationship we have with them. You can say a particular tool changes the relationship between humans when it is used, but it feels kind of delusional to believe that I will somehow have a different relationship with two inanimate objects like a hammer and a printing press. I also noticed that your arguments are always centered on the United States.

EDIT: you added an edit with a metaphor about numbers, but also no, I would not have a different relationship with two abstract concepts like 1.5 or pi; I would use them when necessary.


None of this “AI” is actually AI. When we do finally meet one, we’ll probably be its gut bacteria.


AIs, once they become better at business than humans, will be in charge. That is implicit in the concept of corporate capitalism.


The shareholders will be in charge. Of course, it’s possible that trading firms will choose to outsource the job to AIs.


A Nick Cave and Hacker News intersection is not something I saw coming.


Oh, I think on a long enough timeline the master/slave metaphor will prove apt, possibly twice.


AI is part of a centaur, per Kasparov.


I’m curious, does the author actually have any horses?


As a matter of fact, I don't. But my daughter does horse riding and volunteers at the stables and the whole thing is quite beautiful. Though it smells. Something to add to the metaphor maybe.


Are you gatekeeping a metaphor?


Don’t be rude please.



