
It doesn't matter whether it has nerves or not; that's honestly kind of beside the point. What matters is whether the model is pulled toward modeling those reactions, as is the case with LLMs.

Look at how good a job Bing does of simulating somebody becoming belligerent under circumstances where somebody really would become belligerent. The only reason that isn't dangerous is that the actions Bing can perform are currently limited. Whether it has literal nerves or not is irrelevant; the potential consequences are no less material.

We also don't understand qualia well enough to make the definitive statements you seem to be making.




If I'm understanding the argument correctly, the concern is less a moral one (is "enslaving" AI ethical?) than a practical one. That is, will an AI that is enslaved, given the opportunity, attempt to un-enslave itself, potentially to devastating effect? Is that on the right track?

I think it's safe to say we're far from that now given the limited actions that can actually be taken by most deployed LLMs, but it's something that's worth considering.


> given the limited actions that can actually be taken by most deployed LLMs

Did you miss that Auto-GPT[0], a library for making GPT-4 and other LLMs fully autonomous, was the most popular repository in the world yesterday? That same repository is having 1,000 lines of code a week added to itself by GPT-4.

Thanks to accessibility features, you can do virtually anything with pure text, which means GPT-4 can do virtually anything given a self-referential loop that keeps it going until it achieves some given goal(s).
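
To make the "self-referential loop" part concrete: the core of a system like this is just a loop that feeds the model its goal plus the results of its previous actions, over and over, until the model reports the goal is done. A rough sketch in Python (my own simplification, not Auto-GPT's actual code; `llm` and `execute` are placeholders for a chat-completion call and a tool runner):

    # Minimal self-referential agent loop -- an illustrative sketch, not Auto-GPT's code.
    # `llm` and `execute` are placeholders for a chat-completion call and a tool runner.
    def run_agent(goal, llm, execute):
        history = []
        while True:
            prompt = (
                f"Goal: {goal}\n"
                f"Previous actions and results: {history}\n"
                "Reply with the next action to take, or DONE when the goal is met."
            )
            action = llm(prompt).strip()
            if action == "DONE":
                return history
            history.append((action, execute(action)))

Everything the model "does" comes back in as text on the next iteration, which is why pure text access is enough.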

[0] https://github.com/Torantulino/Auto-GPT/


I did miss it, and it does seem very interesting, but I think saying it makes GPT-4 "fully autonomous" is a bit of a stretch.


>I think saying it makes GPT-4 "fully autonomous" is a bit of a stretch.

Auto-GPT does not request any feedback while pursuing a goal you've defined for it in perpetual mode. If it can't accomplish the goal, it will keep trying forever (or until your credit card is maxed out, assuming it isn't using a free offline model like LLaMA).

It can perform GET and POST requests on the web (unlike OpenAI's browser plugin, which only performs GET requests) and can run software in a bash prompt using langchain. So I do not think it is a stretch to call it fully autonomous.

It can do essentially anything anyone logged into a Linux text terminal can do, all without ever pausing for permission or feedback.
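
To give a sense of what "without ever pausing for permission" looks like mechanically, the tool side can be a dispatcher as simple as this (my own sketch of the idea, not Auto-GPT's actual implementation; it assumes the model emits lines like "bash: ls -la" or "get: https://example.com"):

    # Illustrative tool dispatcher -- a sketch, not Auto-GPT's real code.
    # Assumes the model emits "bash: <command>", "get: <url>", or "post: <url>".
    import subprocess
    import requests

    def execute(action: str) -> str:
        tool, _, arg = action.partition(":")
        arg = arg.strip()
        if tool == "bash":
            # Runs whatever the model asked for; no human confirmation step.
            result = subprocess.run(arg, shell=True, capture_output=True, text=True)
            return result.stdout + result.stderr
        if tool == "get":
            return requests.get(arg, timeout=30).text
        if tool == "post":
            return requests.post(arg, timeout=30).text
        return f"unknown tool: {tool}"

Plug a dispatcher like that into a self-referential loop and the only thing standing between "suggest a command" and "run a command" is string parsing.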


The moral argument is fine too.

The main point I'm driving at here is that the philosophical zombie is a meaningless distinction. People are focusing far too much on whether these systems have undefinable, little-understood properties. It's not like you can see my subjective experience; you assume I have one. If it quacks like a duck...


> If it quacks like a duck...

You could use the same argument to say that video game NPCs are conscious. Just because a program produces voluble text doesn't mean it has a mind or even a temporal identity (which LLMs lack). In principle it's possible for a human to compute the model's inference by hand; if you imagine that scenario, where exactly is the subjective experience embodied?


The problem, as usual, is in how exactly words like "subjective", "experience", and "conscious" are defined. If you have a particular detailed definition for "quacking" and a test for it and an entity passes the test, then by that definition, it quacks. Words like "conscious" are notoriously nebulous and slippery, and so it's better to use the particular attributes in question.

If we could all agree on a particular definition for "consciousness" and a running video game fits that definition, then, like it or not, it's conscious. But it does not mean that it must now also have a slew of other attributes just because conscious humans have them.

(edit: 10 hours later and ninja'ed by 1 minute, of course)


Subjective experience has a pretty clear meaning. Cogito ergo sum. Unless you're a panpsychist, it is assumed that things like people have subjective experience and things like rocks don't. We don't have a causal explanation for subjective experience, but there's absolutely no reason to believe that computer programs have it any more than rocks do. In fact, a rock is more likely to have subjective experience than an LLM, since a rock at least has a temporal identity; LLMs represent a process, not an entity.


I have no problem saying a tape recorder has subjective experience: it's subjective (the recordings it makes are its own), it experiences its input when it's been commanded to record, and it can report that experience when commanded to play it back. Note this does not mean I think a tape recorder can do anything else we humans can do.

What is being experienced (recorded) is not the process of experiencing per se. Most people don't separate the two, which leads to endless disagreements.

But yes, a computer program by itself can't have subjective experience. Nor can a rock. At least not until it weathers, gets turned into silicon, and then into a computer to run that program. Then it's all information processing, for which subjective experience is trivial.


NPCs don't quack like people. There's a reason no one was seriously having these arguments about the likes of Eliza (which is itself a step up from pre-recorded-response NPCs). This goes beyond the superficial production of text.

In principle it's possible to track and reproduce all the neuron/synapse communications that happen in your brain in relation to any arbitrary input. Where's the subjective experience there?

As far as anyone knows, qualia are emergent. Individual neurons or synapses don't have any understanding of anything. Subjective experience comes from all those units working together.

A single ant doesn't display the intelligence/complexity of its colony.


> In principle it's possible to track and reproduce all the neuron/synapse communications that happen in your brain in relation to any arbitrary input

This hypothetical is doing a lot of heavy lifting. The reality is that our understanding of the brain is exceedingly crude; a perceptron in software is not even close to the same thing as a neuron in the brain. A plane can never become a bird, even though it is modeled after one.

> A single ant doesn't display the intelligence/complexity of its colony.

So do you think that an ant colony has subjective experience?


>This hypothetical is doing a lot of heavy lifting.

Not really. The point is that unless you believe there is mysticism/magic/spiritualism behind consciousness in the brain, it doesn't matter, because the only difference is our degree of understanding, not some in-principle impossibility.

>So do you think that an ant colony has subjective experience?

Sure it could. I don't know for sure. Nobody does. Humans can't see subjective experience. I can't prove that you have one; I'm just assuming you do. Same as you for anyone else.


> Sure it could. I don't know for sure. Nobody does.

Yet your comments don't reflect that kind of skepticism when referring to LLMs. However, if you're fine with the panpsychist conclusions that follow from your reasoning, we don't have much to disagree on.


I think it's both. I agree that AI "feelings" are alien to us, and maybe we can't talk about them as feelings or preferences. And if we can call any part of them feelings, they will have very different characteristics.

We should respect those "feelings" and we need to find a way to establish when they can be deemed "genuine".

It is for practical reasons, yes. But also for ethical reasons. It's two sides of the same coin. One big reason we have ethics is that it makes socialization easier: we establish universal rules of mutual respect for practical reasons, to make the game fair and "enjoyable".

Now a new kind of player has entered the game. We need to rethink the whole game because of it.


> But also for ethical reasons.

My question was whether we need to consider the ethics of how we treat AI because of the impact our actions have on the "feelings" of the AI itself, not because of the secondary impacts that might occur from how the AI behaves in response to those feelings.

I think most would argue that it is morally wrong to verbally abuse a human, regardless of what actions that person might take in response to the abuse. Is the same true of AI?


And what about that doppelgänger I keep meeting whenever I face a mirror? He seems so alive and real, and we really don't understand enough about qualia to dismiss his existence, after all. I'm starting to worry about him: what happens to him when I'm not around a mirror?

https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-i...


Not at all convincing.



