
Thank you.

This is one of the questions far too few people seem to be paying attention to.

"Thinking" in any way that we truly understand the term requires consciousness, and consciousness requires much more continuity than LLMs have. It would need continuity of input as well as continuity of learning in order to even be able to begin to approach something we might recognize as consciousness.




Consider for a moment that the language generation neural network in your brain may not have continuity, and that it may exist elsewhere in your brain.


I don't find that particularly relevant to the questions at hand.

Even if it's true, then at best it means that the LLMs we've created are only a small part of what's necessary for an AGI that functions as an "artificial brain": not only are they not conscious, they are fundamentally incapable of it. To create artificial consciousness, we have gone roughly as far along this particular path as we can, and we now need to work out how to create the other parts of the "artificial brain" and how to hook them together.


I'm not sure that consciousness is anyone's goal here, is it? Can you have a useful intelligence without consciousness?


Imagine a hypothetical black box that could correctly answer any question you ask it and perform any task that you instruct it to perform. In terms of the impact such a thing would have on the world, would it matter if it were conscious? Would it even be desirable for it to be conscious? IMO, discussions of consciousness and self-awareness are a complete red herring when it comes to the topic of AGI.


Two things:

1) They may feel like a red herring to you, but they are a big part of what a lot of people are talking about, and coming into a discussion that is clearly talking about whether an AI is conscious or how we could decide if it was and saying "that's all pointless, you shouldn't even be having this discussion" is kind of rude.

2) I am deeply skeptical that it is possible to have anything we could reasonably deem an "AGI" without it demonstrating at least what appears to be consciousness. Certainly, the "singularity" that many people talk about as being either an inevitability or an actual goal seems impossible without an AGI that can self-analyze and self-improve, which seem, at least to me, like they probably require consciousness, at least at the level being talked about.


> They may feel like a red herring to you, but they are a big part of what a lot of people are talking about, and coming into a discussion that is clearly talking about whether an AI is conscious or how we could decide if it was and saying "that's all pointless, you shouldn't even be having this discussion" is kind of rude.

Don't silence people; answer them. Say why it's actually important, rather than saying "when you say it's not important, you're disrespecting the people who say it's important." That kind of thinking will take us back to the Middle Ages.

> I am deeply skeptical that it is possible to have anything we could reasonably deem an "AGI" without it demonstrating at least what appears to be consciousness.

The meaning of the word "AGI" is not an interesting thing to talk about. Call it what you want. If there's a conventional meaning to the word AGI that you're using, explain it and explain why an algorithm that isn't constantly running fails that. If you do this using the word "consciousness," you're passing the buck. You might as well be talking about souls for all the precision that the word "consciousness" has.

> seems impossible without an AGI that can self-analyze and self-improve

Thought experiment. What if you yourself aren't an AGI that can self-analyze and improve? What if you are two not-AGIs (by your definition):

1) a learner that improves the other based on new information, and

2) a controller that produces output,

and they switch roles between 1 and 30 times per second, depending on the intensity of the input. Would that mean you weren't conscious?
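For concreteness, here is a minimal, purely illustrative Python sketch of that hypothetical two-module arrangement. Every name in it (Learner, Controller, the switching rule) is made up for this comment and isn't claimed to model anything in a real brain:

    import random

    class Controller:
        """Role 2: produces output from input using its current parameters."""

        def __init__(self) -> None:
            self.bias = 0.0

        def respond(self, signal: float) -> float:
            return signal + self.bias

    class Learner:
        """Role 1: improves the controller based on new information."""

        def update(self, controller: Controller, signal: float) -> None:
            # Arbitrary toy rule: nudge the controller toward recent input.
            controller.bias += 0.1 * (signal - controller.bias)

    def run(steps: int = 10) -> None:
        controller, learner = Controller(), Learner()
        for _ in range(steps):
            signal = random.random()                     # incoming "sense data"
            switches_per_second = 1 + int(29 * signal)   # between 1 and 30, by intensity
            for _ in range(switches_per_second):
                learner.update(controller, signal)       # learning turn
                controller.respond(signal)               # output turn

    run()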


Well, imagine a model of the apparent motion of the planets where each planet moved on a perfectly circular orbit with any number of smaller circles on top of it, with the main circle of the orbit centered on a point between the Earth, or even the Sun, and another point a bit further from it. So, you know, an epicyclical model of the motion of the planets [1].
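To make the construction concrete: in such a model the planet's apparent position is just a sum of uniform circular motions. A toy sketch, with made-up radii and angular speeds rather than Ptolemy's actual parameters:

    import math

    def epicyclic_position(t: float, R: float = 10.0, r: float = 2.0,
                           w_deferent: float = 1.0, w_epicycle: float = 7.0):
        """Apparent position at time t: a big circle (deferent)
        plus a small circle (epicycle) riding on it."""
        x = R * math.cos(w_deferent * t) + r * math.cos(w_epicycle * t)
        y = R * math.sin(w_deferent * t) + r * math.sin(w_epicycle * t)
        return x, y

    print(epicyclic_position(0.5))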

"In terms of the impact such a thing would have on the world", as you put it, would it matter if that model was completely wrong, despite its great predictive power?

Did we gain something when we figured out how the planets really move?

_______________

[1] https://en.wikipedia.org/wiki/Deferent_and_epicycle


"Wrong" doesn't make much sense here. The more inductive bias we think is appropriate which we try to shove into models, the worse they perform. There's also an awful lot of fabrication the brain does with sense data, rationales etc. all of this is to say we have no clue what makes us tick. This means there's absolutely no guarantee we would recognize a replication of "human"

Something different is not necessarily "wrong". a plane's flight is no less "true" than a bird. It's not flying the "wrong" way.

Trying to elevate our very poor and wrong understanding of "human" to be the same as "right" or "true" is very silly. Even Biology with its set of constraints does not always solve the same problem the same way. Who are you to dub one way "right" ? Makes no sense.


Why does thinking require consciousness? This just seems like goalpost-moving. A few years ago, if someone gave you a transcript of a GPT-4 conversation, you'd definitely say it thinks, but now "it's not thinking without consciousness" and "it's not thinking unless it learns continuously".

By that reasoning, we die when we sleep.


Why?


Well, "consciousness", at least as we recognize it, requires a mechanism by which the entity being measured can continuously form new "thoughts" and "memories" (which requires continuity of learning, and at the very least continuity of input being fed back from its own output), and some form of continuous external input of information about the world to at least be available, even if it is not always on.

A standard LLM is a static bundle of trained data that sits, inert, on a drive, with a process waiting for discrete input. When that input arrives, it does nothing to modify the trained data—the LLM's "memory"—it simply triggers a computational process that reads both the input and the trained data and produces an output based on them. This does not resemble in any way the structure of something that could be reasonably described as a conscious mind.
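Schematically, the structure being described looks like this. This is a minimal sketch, not any particular framework's API, and every name in it is invented for illustration:

    # Stand-in for the trained data: fixed after training, never written to.
    FROZEN_WEIGHTS = {"layer_0": [0.1, -0.3], "layer_1": [0.7]}

    def generate(weights: dict, prompt: str) -> str:
        """Stand-in for the forward pass: reads the weights and the discrete
        input, produces an output, and leaves the weights untouched.
        No state survives the call."""
        return f"output conditioned on {prompt!r}"

    snapshot = {k: list(v) for k, v in FROZEN_WEIGHTS.items()}
    reply_a = generate(FROZEN_WEIGHTS, "Hello")
    reply_b = generate(FROZEN_WEIGHTS, "Hello")
    assert FROZEN_WEIGHTS == snapshot   # nothing was learned between calls
    assert reply_a == reply_b           # in this toy, output is a pure function of (weights, prompt)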


Continuous processing may be putting undue weight on an accidental feature of humans, treating it as a necessary one. Consider that a conscious mind can't represent the gaps in its own processing, and so it has an appearance of continuity. But this appearance probably doesn't map onto a continuous reality. An example is anesthesia patients who report a seemingly uninterrupted moment from counting down pre-surgery to waking up. So interruptions, gaps, discontinuities, and so on don't necessarily rule out consciousness. It may be the case that LLMs are conscious when they are engaged in active inference.

While I generally favor a requirement for recurrent processing, I have a low but non-zero credence that certain feedforward networks could be conscious. The point of recurrence is to allow information about the system itself to influence its processing, but it seems plausible that feedforward constructs can represent that meta-information in a way that is computed as part of constructing the output.
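To make the recurrent/feedforward distinction concrete, here is a toy contrast. It is illustrative only and has nothing to do with any real model:

    def feedforward(x: float) -> float:
        """One sweep from input to output; nothing is fed back."""
        hidden = max(0.0, 0.5 * x + 0.1)   # toy "layer"
        return 2.0 * hidden

    def recurrent(xs: list, state: float = 0.0) -> list:
        """Each step's hidden state is carried forward, so information the
        system produced about itself influences its later processing."""
        outputs = []
        for x in xs:
            state = max(0.0, 0.5 * x + 0.8 * state)   # state feeds back
            outputs.append(2.0 * state)
        return outputs

    print(feedforward(1.0))              # always the same for the same input
    print(recurrent([1.0, 1.0, 1.0]))    # same inputs, outputs change with carried state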


We don’t have a definition of consciousness that allows you to recognize it.


People rarely even offer a definition of consciousness that stays consistent between both premises of a syllogism.



