
> Nobody can make a statement that LLMs obviously have zero understanding of the world, nobody can make a statement that LLMs are just stochastic parrots, because we don't really get what's going on internally

For such strong statements that they do have an understanding of the world, and are not simply stochastic parrots (arguably the null hypothesis), the burden of proof is on the LLM proponents. Precious little proof has been provided, and stating that nobody knows what goes on inside obviously does not add to that.




> stating that nobody knows what goes on inside obviously does not add to that.

No one is saying that LLMs absolutely understand the world. But many people are saying that some degree of understanding is a possibility likely enough to warrant further investigation and speculation. When someone says nobody knows what's going on, they are simply acknowledging this possibility.

Failing to recognize this, and outright dismissing the possibility of anything beyond a stochastic parrot, does not add anything either.

What is the burden of proof that you yourself are not a stochastic parrot? Seems like we can't tell either; we can only guess from your inputs and outputs. This blurriness in even proving your sentience makes the output of LLMs that much more interesting. Do you seriously need to assign a burden of proof when there is clearly something very compelling going on with the output of LLMs?


Saying 'we don't know how human intelligence works AND we don't know how AI works IMPLIES human intelligence EQUALS AI' is clearly a logical fallacy, sadly one heard far too often on HN, given that people here should know better.


Except this was never said.

What was said is that intelligent output from an LLM implies a "possibility" (keyword) of intelligence.

After all, outputs and inputs are all that we use to assume that you, as a human, are intelligent. As of this moment we have no other way of judging whether something is intelligent or not.

You should read more carefully.


> What was said is that intelligent output from an LLM implies a "possibility" (keyword) of intelligence.

No it doesn't, because you can break down how they "learn" and how they generate output from their models, and thought or intelligence doesn't occur at any step of the process.

It's like the Mechanical Turk, the famous chess "automaton" that was actually a small guy hidden inside the cabinet. If you just show it to someone who treats it as a black box, sure, they might wonder whether the machine understands chess. But once you know there's a little guy in there, you know for a fact that it doesn't.


No, you can't break it down. Experts don't fully understand the high-level implications of an LLM. This is definitive: we have no theoretical model of what an LLM will output. We can't predict it at all, therefore we do not fully understand LLMs at a high level.


'Possibility' - thus as per my original point, the burden of proof is on the proponents.

'outputs and inputs' - that is reduction almost to absurdity; clearly human intelligence is rather more than that. Again, we come back to 'we don't understand human intelligence, therefore something else we don't understand, but which seems to mimic humans under certain conditions, is also intelligent'.


The only thing absurd here is your argument. Short of mind reading, inputs and outputs are the only things we have to determine what is intelligent. Go ahead, prove to me that you are an intelligent being without emitting any output, and I'll flip my stance 100 percent and believe you.

That is the whole point of the Turing test. Turing developed it precisely because we can't determine what is intelligent through telepathy; we can only compare outputs and inputs.

>- thus as per my original point, the burden of proof is on the proponents

There are no proponents claiming that LLM intelligence is definitely real. There are only proponents saying it is possibly real.

A 'burden of proof' is a label humans assign to random people for no apparent reason. If something talks like a human, then the possibility becomes open by common sense; 'burden of proof' is just a random tag you are using here.

But again, no one is making the claim that LLMs are conscious. You, however, seem to be making the claim that they aren't. You made a claim. Great, looks like it's your burden now. Or perhaps this whole burden thing is just stupid, and we should all use common sense to investigate what's going on rather than making baseless claims and then throwing burdens on everyone else.


I think the Turing Test has a lot to answer for in the current fandango. It (and your input/output argument) boils down to 'if it can't be measured, it cannot exist', which does not hold up to philosophical scrutiny.

Burden of proof is a well-established legal and scientific concept that puts the onus on one side of the debate to show they are right, and if they are unable to prove that, then the other side is automatically given the 'judgement'. For example, if someone claimed there was life on the Moon, it would be on them to prove it; otherwise the opposite would quite rightly be assumed (after all, the Moon is an apparently lifeless place). Another example: a new drug has to be proven safe and effective before it can be rolled out, instead of others having to prove it is NOT safe and effective to STOP the rollout.


Nobody said if it can't be measured it doesn't exist. Nothing of this nature was said or implied.

What I do believe is that if it can't be measured, then its existence is only worthwhile and relevant to you. It is not worthwhile to talk about unmeasurable things in a rigorous way. We can talk about unmeasurable things hypothetically, but topics like whether something is intelligent or not, where we need definitive information one way or another, require measurements and communication in a shared reality that is interpretable by all parties.

If you want to make a claim outside of our shared reality, then sure, be my guest. Let's talk about religion and mythology and all that stuff; it's fine. However...

There's a hard demarcation between that stuff and science, and a reason why people on HN tend to stick with science before jumping off the deep end into philosophy or religion.

My point on burden of proof was lost on you. Who the burden is placed on is irrelevant to the situation. Imagine we see a house explode, and I then claim that because I saw a house explode, an actual house must have exploded. Then you conveniently declare that since I made the claim, the burden is on me to prove it. What? Do you see the absurdity there?

We see AI imitating humans pretty well. I make a soft claim that maybe the AI is intelligent, and suddenly some guy is like, "the burden of proof is on you to prove that AI is intelligent!"

Bro. Let's be real. First, no definitive claim was made; second, it's reasonable speculation regardless of burdens. The burden of proof exists in medicine to prevent the distribution of unsafe drugs and save lives; people do not use the burden of proof to prevent reasonable speculation.


>> What is the burden of proof that you yourself are not a stochastic parrot?

Because the person you're talking to is a human?


Am I? How do you know this isn't output generated by an LLM?


Well, you tell me: was it?

I assume we're having a good faith conversation?


We are. But the point is you can't tell. You are entirely relying on my output to make an identification.


Really? I thought I was relying on the intuition that most comments on this site are unlikely to be generated by an LLM.

Also, I thought your point was "What is the burden of proof that you yourself are not a stochastic parrot?".


[flagged]


>> Go use that on your philosophy friends

Don't be an asshole.


Having read your comment again, I think the key word here is 'speculation', in all its (in)glorious forms.


There's a difference between wild speculation and reasonable speculation with high likelihood.

For example: I speculate that you are male, and it's highly likely I'm right. The speculation I'm doing here is of the same nature as the speculation about intelligence.

The angle you're coming at this from is that any opinion other than "LLMs are stochastic parrots" is completely wild speculation. The irony is that you're doing this without realizing that your position is itself speculation.


What do you mean by the "stochastic parrots" (null) hypothesis in this case? Cards on the table, I think by any reasonable interpretation it's either uninformative or pretty conclusively refuted, but I'm curious what your version is.


I mean that it simply surfaces patterns in the training data.

So responses will be an 'aggregation' (obviously more complex than that) of similar prompt/response pairs from the training corpus, with some randomness thrown in to make things more interesting.
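To make that concrete, the narrowest version of what I mean would look something like a toy Markov chain (a rough Python sketch, obviously nothing like a real LLM internally):

    import random
    from collections import defaultdict

    def train(words):
        # Record which word followed which in the "training corpus".
        table = defaultdict(list)
        for prev, nxt in zip(words, words[1:]):
            table[prev].append(nxt)
        return table

    def generate(table, word, length=20):
        # Resurface seen patterns, with some randomness thrown in.
        out = [word]
        for _ in range(length):
            followers = table.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    corpus = "the parrot repeats the patterns the corpus contains".split()
    print(generate(train(corpus), "the"))

An LLM is of course vastly more complex, but on this view it's the same idea scaled up.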


"Surfaces patterns in the training data" seems not to pin things down very much. You could describe "doing math" as a pattern in the training data, or really anything a human might learn from reading the same text. I suspect you mean simpler patterns than that, but I'm not sure how simple you're imagining.

A useful rule of thumb, I think, is that if you're trying to describe what LLMs can do, and what you're saying is something that a Markov chain from 2003 could also do, you're missing something. In that vein, I think talking about building from a "similar prompt/response from the training corpus", though you allow "complex" aggregation, can be pretty misleading in terms of LLM capabilities. For example, you can ask a model to write code, run the code and give the model the error message, and then the model will quite often be able to identify and correct its mistake (true for GPT-4 and Claude at least). Sure, maybe both the original broken solution and the fixed one were in the training corpus (or something similar enough was), but it's not randomness taking us from one to the other.
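Concretely, the loop I mean looks roughly like this (a sketch only; ask_model() is a hypothetical stand-in for whichever model API you use, not a real library call):

    import subprocess, sys, tempfile

    def ask_model(prompt: str) -> str:
        # Hypothetical placeholder: wire this up to GPT-4, Claude, etc.
        raise NotImplementedError

    def run_python(code: str) -> tuple[bool, str]:
        # Run the candidate code in a subprocess and capture any traceback.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=30)
        return proc.returncode == 0, proc.stderr

    def solve(task: str, max_rounds: int = 3) -> str:
        code = ask_model(f"Write a Python script that {task}. Code only.")
        for _ in range(max_rounds):
            ok, err = run_python(code)
            if ok:
                return code
            # Feed the error back; the model quite often fixes its own bug.
            code = ask_model(f"This code:\n{code}\nfailed with:\n{err}\n"
                             f"Reply with corrected code only.")
        return code

The point is that whatever takes the model from the error message back to a working fix, it isn't the sampling randomness.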


There is a big difference between 'doing math' by repeating/elaborating on previously seen patterns, and by having an intuitive grasp of what is going on 'under the hood'. Of course our desktop calculators work (very well) on the latter principle.

As you say, both the broken and correct solutions were likely in the training corpus (and indeed the error message), so really we are doing a smoke and mirrors performance to make it look like the correct solution was 'thought out' in some sense.


I think dismissing problem-solving as "smoke and mirrors" based on regurgitating training data will give you a poor predictive model for what else models can do. For example, do you think that if you change the variable names to something statistically likely to be unique in human history, the ability will break?

As for pattern recognition vs. intuitive grasp: I don't think I follow. I would call pattern recognition part of intuition, unlike logically calculating out the consequences of a model. But on the other hand, I would not say that a desktop calculator "grasps" anything; it is not able on its own to apply its calculating ability to real-world instantiations of mathematical problems in the way that humans (and sometimes LLMs) can.



