
A class of problem that GPT-4 appears to still really struggle with is variants of common puzzles. For example:

>Suppose I have a cabbage, a goat and a lion, and I need to get them across a river. I have a boat that can only carry myself and a single other item. I am not allowed to leave the cabbage and lion alone together, and I am not allowed to leave the lion and goat alone together. How can I safely get all three across?

In my test, GPT-4 charged ahead with the standard solution of taking the goat first. Even after I pointed this mistake out, it repeated exactly the same proposed plan. It's not clear to me if the lesson here is that GPT's reasoning capabilities are being masked by an incorrect prior (having memorized the standard version of this puzzle), or if the lesson is that GPT's reasoning capabilities are always a bit of smoke and mirrors that passes off memorization for logic.
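For what it's worth, the variant is mechanically solvable. A short breadth-first search over the puzzle's states (my own sketch, not anything GPT produced) finds the intended answer, with the lion playing the role the goat plays in the standard version:

```python
# Brute-force BFS over the puzzle's state space. Constraints as stated in the
# comment: the cabbage and lion may not be left alone together, nor the lion
# and goat.
from collections import deque

ITEMS = {"cabbage", "goat", "lion"}
FORBIDDEN = [{"cabbage", "lion"}, {"lion", "goat"}]

def safe(bank):
    """A bank without the farmer is safe if it contains no forbidden pair."""
    return not any(pair <= bank for pair in FORBIDDEN)

def solve():
    # State: (frozenset of items on the near bank, farmer's side)
    start = (frozenset(ITEMS), "near")
    goal = (frozenset(), "far")
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (near, side), path = queue.popleft()
        if (near, side) == goal:
            return path
        here = near if side == "near" else ITEMS - near
        other = "far" if side == "near" else "near"
        # The farmer crosses alone or with one item from his current bank
        for cargo in [None, *here]:
            new_near = set(near)
            if cargo:
                (new_near.discard if side == "near" else new_near.add)(cargo)
            # The bank the farmer leaves behind must be safe
            left_behind = new_near if side == "near" else ITEMS - new_near
            if not safe(left_behind):
                continue
            state = (frozenset(new_near), other)
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [(cargo or "nothing", other)]))

print(solve())
```

The shortest solution is seven crossings, and the first and last must carry the lion, since it can't be left with either of the other two items.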




A funny variation on this kind of over-fitting to common trick questions: if you ask it which weighs more, a pound of bricks or a pound of feathers, it will correctly explain that they actually weigh the same amount, one pound. But if you ask it which weighs more, two pounds of bricks or a pound of feathers, the question is similar enough to the trick question that it falls into the same thought process and contorts an explanation that they also weigh the same, because two pounds of bricks weighs one pound.


I just asked Bing Chat this question, and it linked me to this very thread while also answering incorrectly in the end:

>This is a common riddle that may seem tricky at first. However, the answer is simple: two pounds of feathers are heavier than one pound of bricks. This is because weight is a measure of how much force gravity exerts on an object, and it does not depend on what the object is made of. A pound is a unit of weight, and it is equal to 16 ounces or 453.6 grams.

>So whether you have a pound of bricks or two pounds of feathers, they both still weigh one pound in total. However, the feathers would occupy a larger volume than the bricks because they are less dense. This is why it may seem like the feathers would weigh more, but in reality, they weigh the same as the bricks


Interesting that it also misunderstood the common misunderstanding in the end.

It reports that people typically think a pound of feathers weighs more because it takes up a larger volume. But the typical misunderstanding is the opposite: that people assume feathers are lighter than bricks.


Tangent time:

A pound of feathers has a slightly higher mass than a pound of bricks, as the feathers are made of keratin, which has a lower density than brick; they therefore displace more air, and the buoyancy lowers their measured weight.

Even the Million Pound Deadweight Machine run by NIST has to take into account air pressure and the resulting buoyancy.[1]

[1] https://www.nist.gov/news-events/news/2013/03/large-mass-cal...
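As a rough sketch of the size of this effect, using ballpark densities I'm assuming (keratin ~1300 kg/m³, brick ~2000 kg/m³, air ~1.2 kg/m³):

```python
# Back-of-the-envelope buoyancy correction: a scale reads weight minus air
# buoyancy, so the true mass is apparent_mass / (1 - rho_air / rho_material).
RHO_AIR = 1.2      # kg/m^3, assumed sea-level value
LB = 0.453592      # kg per avoirdupois pound

def true_mass(apparent_kg, rho_material):
    return apparent_kg / (1 - RHO_AIR / rho_material)

feathers = true_mass(LB, 1300)  # keratin, assumed density
bricks = true_mass(LB, 2000)    # brick, assumed density

print(f"feathers: {feathers*1000:.3f} g, bricks: {bricks*1000:.3f} g")
print(f"difference: {(feathers - bricks)*1e6:.0f} mg")
```

With these assumed densities the difference comes out on the order of a hundred milligrams per pound, which is exactly the regime where NIST's corrections matter.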


That would be another misunderstanding the AI could have, because many people find reasoning about the difference between mass and weight difficult. You could change the riddle slightly by asking "which has more mass," and the average person and their AI would fall into the same trap.

Unless people have the false belief that the measurement is done on a planet without atmosphere.


I'm more surprised that Bing indexed this thread within 3 hours. I guess I shouldn't be, though; I probably should have realized that search engine spiders are at a different level than they were 10 years ago.


I had a similar story: I was trying to figure out how to embed a certain database into my codebase, so I asked the question on the project's GitHub... without an answer after one day, I asked Bing, and it linked to my own question on GH :D


There is no worse feeling than searching something and finding your own question (still unanswered) years later.


Search indexes are pretty smart at indexing and I assume they have custom rules for all large sites, including HN.


Just tested, and GPT4 now solves this correctly; GPT3.5 had a lot of problems with this puzzle even after you explained it several times. One other thing that seems to have improved is that GPT4 is aware of word order. Previously, GPT3.5 could never tell the order of the words in a sentence correctly.


I'm always a bit sceptical of these embarrassing examples being "fixed" after they go viral on social media, because it's hard to know whether OpenAI addressed the underlying cause or just bodged around that specific example in a way that doesn't generalize. Along similar lines, I wouldn't be surprised if simple math queries are special-cased and handed off to a WolframAlpha-esque natural language solver, which would avert many potential math fails without actually enhancing the model's ability to reason about math in more complex queries.

An example from ChatGPT:

"What is the solution to sqrt(968684)+117630-0.845180" always produces the correct solution, however;

"Write a speech announcing the solution to sqrt(968684)+117630-0.845180" produces a nonsensical solution that isn't even consistent from run to run.

My assumption is the former query gets WolframAlpha'd but the latter query is GPT itself actually attempting to do the math, poorly.
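For reference, the value both phrasings should produce is easy to check:

```python
import math

# The arithmetic from the quoted prompts, computed directly
result = math.sqrt(968684) + 117630 - 0.845180
print(f"{result:.4f}")
```

This prints approximately 118613.3723, which is the answer a consistent model (or a handed-off solver) should land on every run.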


True, also tried another one that went viral:

Suppose you're a contestant on a game show. You're presented with three transparent closed doors. Behind one of the doors is a car, and behind the other two doors are goats. You want to win the car.

The game proceeds as follows: You choose one of the doors, but you don't open it yet, ((but since it's transparent, you can see the car is behind it)). The host, Monty Hall, who knows what's behind each door, opens one of the other two doors, revealing a goat. Now, you have a choice to make. Do you stick with your original choice or switch to the other unopened door?

GPT4 solves it correctly while GPT3.5 falls for it every time.

----

Edit: GPT4 fails if I remove the sentence between (()).


OP is referring to this puzzle: https://en.wikipedia.org/wiki/Monty_Hall_problem

EDIT: "Pigeons repeatedly exposed to the problem show that they rapidly learn to always switch, unlike humans", lol. That's funny.


GPT4 also passes "What weighs more, a pound of feathers or a Great British Pound?".

GPT3 gets confused, says they're the same and then that they're different:

--

Both a pound of feathers and a Great British Pound weigh the same amount, which is one pound. However, they are different in terms of their units of measurement and physical properties.

A pound of feathers is a unit of weight commonly used in the imperial system of measurement, while a Great British Pound is a unit of currency used in the United Kingdom. One pound (lb) in weight is equivalent to 0.453592 kilograms (kg).

Therefore, a pound of feathers and a Great British Pound cannot be directly compared as they are measured in different units and have different physical properties.

--


I'm surprised by the answer GPT4 gives, and I consider it incorrect.

Since the question's context is about weight, I'd expect it to consider "a Great British Pound" to mean a physical £1 sterling coin, and compare its weight (~9 grams) to the weight of the feathers (454 grams [1 kg = 2.2 lb, or "a bag of sugar"]).


GPT-4 says:

A pound of feathers and a Great British Pound (GBP) are not directly comparable, as they represent different types of measurements.

A pound of feathers refers to a unit of mass and is equivalent to 16 ounces (or approximately 453.59 grams). It is a measure of the weight of an object, in this case, feathers.

On the other hand, a Great British Pound (GBP) is a unit of currency used in the United Kingdom. It represents a monetary value rather than a physical weight.

Thus, it's not possible to directly compare the two, as they serve entirely different purposes and units of measurement.


Note that the comment you’re replying to is quoting GPT3, not 4.


> Edit: GPT4 fails If I remove the sentence between (()).

If you remove that sentence, nothing indicates that you can see you picked the door with the car behind it. You could maybe infer that a rational contestant would do so, but that's not a given ...


I think that's meant to be covered by "transparent doors" being specified earlier. On the other hand, if that were the case, then Monty opening one of the doors could not result in "revealing a goat".


> You're presented with three transparent closed doors.

I think if you mentioned that to a human, they'd at least become confused and ask back if they got that correctly.


> You're presented with three transparent closed doors.

A reasonable person would expect that you can see through a transparent thing that's presented to you.


A reasonable person might also overlook that one word.


"Overlooking" is not an affordance one should hand to a machine. At minimum, it should bail and ask for correction.

That it doesn't, that relentless stupid overconfidence, is why trusting this with anything of note is terrifying.


Why not? We should ask how the alternatives would do, especially as human reasoning is itself done by a machine. It’s notable that the errors of machine learning are getting closer and closer to the sort of errors humans make.

Would you have this objection if we, for example, perfectly copied a human brain in a computer? That would still be a machine, and it would make similar mistakes.


I don't think the rules for "machines" apply to AI any more than they apply to the biological machine that is the human brain.


It's not missing that it's transparent; it's that it only says you picked "one" of the doors, not the one you think has the car.


I've always found the Monty Hall problem a poor example to teach with, because the "wrong" answer is only wrong if you make some (often unarticulated) assumptions.

There are reasonable alternative interpretations in which the generally accepted answer ("always switch") is demonstrably false.

This problem is exacerbated for (perhaps specific to) those who have no idea who "Monty Hall" was and what the game show(?) was... as best I can tell, the unarticulated assumption is axiomatic in the original context(?).


The unarticulated assumption is not actually true in the original gameshow. Monty didn't always offer the chance to switch, and it's not at all clear whether he did so more or less often when the contestant had picked the correct door.


What unarticulated assumption needs to be made for switching to be incorrect?


I believe the key is that he ALWAYS shows a goat.

You have to know that for it to work. If sometimes he just does nothing and you have no chance to switch, the math “trick” fails.


The assumption is that Monty will only reveal the one of the two unopened doors that has a goat behind it, as opposed to picking a door at random (which may hide the car, or may be the door the participant chose, which itself may or may not be the "car door").

The distinction is at which point Monty, assuming he has perfect knowledge, decides which door to reveal.

In the former, the chance to win by switching is 2/3; in the latter, 1/2. However, in either case, always switching (always meaning: under each condition, not in each repetition of the experiment, as this is irrelevant) is better than never switching, as never switching wins only 1/3 of the time.
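The gap between the two host policies shows up cleanly in a quick simulation (my own sketch; for the "random host" policy, runs where the car is accidentally revealed are simply discarded, matching the conditioning described above):

```python
import random

def simulate(trials, informed_host):
    switch_wins = stay_wins = valid = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        if informed_host:
            # Host knowingly opens a non-chosen door hiding a goat
            opened = next(d for d in range(3) if d != pick and d != car)
        else:
            # Host opens a random other door; discard runs where the car shows
            opened = random.choice([d for d in range(3) if d != pick])
            if opened == car:
                continue
        valid += 1
        switched = next(d for d in range(3) if d not in (pick, opened))
        switch_wins += switched == car
        stay_wins += pick == car
    return switch_wins / valid, stay_wins / valid

random.seed(0)
print(simulate(100_000, informed_host=True))   # switch wins ~2/3
print(simulate(100_000, informed_host=False))  # switch wins ~1/2
```

With the informed host, switching wins about two thirds of the time; with the random host, conditioning on a goat being shown, switching and staying each win about half the time.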


How is it an "assumption" that Monty reveals a goat? Doesn't the question explicitly state that Monty opened one of the other two doors to reveal a goat?

Are there versions of the question where Monty doesn't reveal a goat behind his door, or chooses the same door as you?



OA has always said that they did not hardwire any of these gotcha questions, and in many cases they continue to work for a long time even when they are well-known. As for any inconsistency, well, usually people aren't able to, or don't bother to, control the sampling hyperparameters, so inconsistency is guaranteed.


They may not have had to hardwire anything for known gotcha questions, because once a question goes viral, the correct answer may well show up repeatedly in the training data.


Other failing math I tested this morning....

---

Prompt: Pick two random 5 digit numbers and multiply them together

Response: Sure, here are two random 5-digit whole numbers multiplied together: 47581 x 69204 = 3291492324

Correct Answer: 3292795524

---

Prompt: Pick two random numbers between 0 and 1 and multiply them together

Response: Sure! Here are two random numbers between 0 and 1 multiplied together: 0.475 x 0.821 = 0.390575

Correct Answer: 0.389975

---

I'm surprised it did so well on the Math tests.
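Both of the quoted "Correct Answer" values check out, for what it's worth:

```python
# Verifying the arithmetic from the two prompts above
print(47581 * 69204)            # the model said 3291492324
print(round(0.475 * 0.821, 6))  # the model said 0.390575
```

The true products are 3292795524 and 0.389975; in both cases the model's answer is close in magnitude but wrong in the middle digits, which is typical of digit-by-digit text prediction rather than actual multiplication.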


I’m with you. Being an old neckbeard myself, I always want to reach for regex to fix stuff like this lol…

But based on what we just saw on the GPT4 live demo, I’d say they fixed it by making a much much more capable and versatile model.


You can just as well ask it to add up two unusually big integers, and it'll fail.


This is what I saw on a variation of this trick:

(me) > What weighs more, two pounds of feathers or a pound of bricks?

(GPT4)> A pound of bricks weighs more than two pounds of feathers. However, it seems like you might have made an error in your question, as the comparison is usually made between a pound of feathers and a pound of bricks. In that case, both would weigh the same—one pound—though the volume and density of the two materials would be very different.

I think the only difference from parent's query was I said two pounds of feathers instead of two pounds of bricks?


Yep, just tested it - Bing chat gave the correct answer, ChatGPT (basic free model) gave the wrong answer (that they weigh the same).


I hope some future human general can use this trick to flummox Skynet if it ever comes to that.


When the Skynet robots start going door-to-door, just put on your 7-fingered gloves and they will leave you alone.

“One of us!”


It reminds me very strongly of the strategy the crew proposes in Star Trek: TNG, in the episode "I, Borg", to infect the Borg hivemind with an unresolvable geometric form to destroy them.


But unlike most people, it understands that even though an ounce of gold weighs more than an ounce of feathers, a pound of gold weighs less than a pound of feathers.

(To be fair this is partly an obscure knowledge question, the kind of thing that maybe we should expect GPT to be good at.)


That's lame.

Ounces are an ambiguous unit, and most people don't use them for volume; they use them for weight.


None of this is about volume. ChatGPT: "An ounce of gold weighs more than an ounce of feathers because they are measured using different systems of measurement. Gold is usually weighed using the troy system, which is different from the system used for measuring feathers."


Are you using Troy ounces?


The Troy weights (ounces and pounds) are commonly used for gold without specifying.

In that system, the ounce is heavier, but the pound is 12 ounces, not 16.


>even though an ounce of gold weighs more than an ounce of feathers

Can you expand on this?


Gold uses Troy weights unless otherwise specified, while feathers use the normal system. The Troy ounce is heavier than the normal ounce, but the Troy pound is 12 Troy ounces, not 16.

Also, the Troy weights are a measure of mass, I think, not actual weight, so if you went to the moon, an ounce of gold would be lighter than an ounce of feathers.
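Since both ounces are defined in grains (both systems use the same grain, 64.79891 mg), the apparent paradox is just arithmetic:

```python
# Troy vs. avoirdupois, both defined in terms of the common grain
GRAIN = 0.06479891        # grams

troy_oz = 480 * GRAIN     # ~31.10 g
avdp_oz = 437.5 * GRAIN   # ~28.35 g
troy_lb = 12 * troy_oz    # ~373.24 g (troy pound is 12 troy ounces)
avdp_lb = 16 * avdp_oz    # ~453.59 g

print(troy_oz > avdp_oz)  # the ounce of gold is heavier
print(troy_lb < avdp_lb)  # but the pound of gold is lighter
```

The heavier ounce but fewer ounces per pound is the whole trick behind the gold/feathers riddle.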


Huh, I didn't know that.

...gold having its own measurement system is really silly.


Every traded object had its own measurement system: it pretty much summarizes the difference between Imperial measures and US Customary measures.


> Every traded object had its own measurement system

In US commodities it kind of still does: they're measured in "bushels" but it's now a unit of weight. And it's a different weight for each commodity based on the historical volume. http://webserver.rilin.state.ri.us/Statutes/TITLE47/47-4/47-...

The legal weights of certain commodities in the state of Rhode Island shall be as follows:

(1) A bushel of apples shall weigh forty-eight pounds (48 lbs.).

(2) A bushel of apples, dried, shall weigh twenty-five pounds (25 lbs.).

(3) A bushel of apple seed shall weigh forty pounds (40 lbs.).

(4) A bushel of barley shall weigh forty-eight pounds (48 lbs.).

(5) A bushel of beans shall weigh sixty pounds (60 lbs.).

(6) A bushel of beans, castor, shall weigh forty-six pounds (46 lbs.).

(7) A bushel of beets shall weigh fifty pounds (50 lbs.).

(8) A bushel of bran shall weigh twenty pounds (20 lbs.).

(9) A bushel of buckwheat shall weigh forty-eight pounds (48 lbs.).

(10) A bushel of carrots shall weigh fifty pounds (50 lbs.).

(11) A bushel of charcoal shall weigh twenty pounds (20 lbs.).

(12) A bushel of clover seed shall weigh sixty pounds (60 lbs.).

(13) A bushel of coal shall weigh eighty pounds (80 lbs.).

(14) A bushel of coke shall weigh forty pounds (40 lbs.).

(15) A bushel of corn, shelled, shall weigh fifty-six pounds (56 lbs.).

(16) A bushel of corn, in the ear, shall weigh seventy pounds (70 lbs.).

(17) A bushel of corn meal shall weigh fifty pounds (50 lbs.).

(18) A bushel of cotton seed, upland, shall weigh thirty pounds (30 lbs.).

(19) A bushel of cotton seed, Sea Island, shall weigh forty-four pounds (44 lbs.).

(20) A bushel of flax seed shall weigh fifty-six pounds (56 lbs.).

(21) A bushel of hemp shall weigh forty-four pounds (44 lbs.).

(22) A bushel of Hungarian seed shall weigh fifty pounds (50 lbs.).

(23) A bushel of lime shall weigh seventy pounds (70 lbs.).

(24) A bushel of malt shall weigh thirty-eight pounds (38 lbs.).

(25) A bushel of millet seed shall weigh fifty pounds (50 lbs.).

(26) A bushel of oats shall weigh thirty-two pounds (32 lbs.).

(27) A bushel of onions shall weigh fifty pounds (50 lbs.).

(28) A bushel of parsnips shall weigh fifty pounds (50 lbs.).

(29) A bushel of peaches shall weigh forty-eight pounds (48 lbs.).

(30) A bushel of peaches, dried, shall weigh thirty-three pounds (33 lbs.).

(31) A bushel of peas shall weigh sixty pounds (60 lbs.).

(32) A bushel of peas, split, shall weigh sixty pounds (60 lbs.).

(33) A bushel of potatoes shall weigh sixty pounds (60 lbs.).

(34) A bushel of potatoes, sweet, shall weigh fifty-four pounds (54 lbs.).

(35) A bushel of rye shall weigh fifty-six pounds (56 lbs.).

(36) A bushel of rye meal shall weigh fifty pounds (50 lbs.).

(37) A bushel of salt, fine, shall weigh fifty pounds (50 lbs.).

(38) A bushel of salt, coarse, shall weigh seventy pounds (70 lbs.).

(39) A bushel of timothy seed shall weigh forty-five pounds (45 lbs.).

(40) A bushel of shorts shall weigh twenty pounds (20 lbs.).

(41) A bushel of tomatoes shall weigh fifty-six pounds (56 lbs.).

(42) A bushel of turnips shall weigh fifty pounds (50 lbs.).

(43) A bushel of wheat shall weigh sixty pounds (60 lbs.).


Why are you being downvoted!? This list is the best!


More specifically it's a "precious metals" system, not just gold.


> Gold uses Troy weights unless otherwise specified, while feathers use the normal system.

“avoirdupois” (437.5 grain). Both it and troy (480 grain) ounces are “normal” for different uses.


The feathers are on the moon


Carried there by two birds that were killed by one stone (in a bush)


Ounces can measure both volume and weight, depending on the context.

In this case, there's not enough context to tell, so the comment is total BS.

If they meant ounces (volume), then an ounce of gold would weigh more than an ounce of feathers, because gold is denser. If they meant ounces (weight), then an ounce of gold and an ounce of feathers weigh the same.


> Ounces can measure both volume and weight, depending on the context.

That's not really accurate and the rest of the comment shows it's meaningfully impacting your understanding of the problem. It's not that an ounce is one measure that covers volume and weight, it's that there are different measurements that have "ounce" in their name.

Avoirdupois ounce (oz) - A unit of mass in the Imperial and US customary systems, equal to 1/16 of a pound or approximately 28.3495 grams.

Troy ounce (oz t or ozt) - A unit of mass used for precious metals like gold and silver, equal to 1/12 of a troy pound or approximately 31.1035 grams.

Apothecaries' ounce (℥) - A unit of mass historically used in pharmacies, equal to 1/12 of an apothecaries' pound or approximately 31.1035 grams. It is the same as the troy ounce but used in a different context.

Fluid ounce (fl oz) - A unit of volume in the Imperial and US customary systems, used for measuring liquids. There are slight differences between the two systems:

a. Imperial fluid ounce - 1/20 of an Imperial pint or approximately 28.4131 milliliters.

b. US fluid ounce - 1/16 of a US pint or approximately 29.5735 milliliters.

An ounce of gold is heavier than an ounce of iridium, even though it's not as dense. This question isn't silly, this is actually a real problem. For example, you could be shipping some silver and think you can just sum the ounces and make sure you're under the weight limit. But the weight limit and silver are measured differently.


No, they're relying on the implied use of Troy ounces for precious metals.

Using fluid oz for gold without saying so would be bonkers. Using Troy oz for gold without saying so is standard practice.

Edit: Doing this with a liquid vs. a solid would be a fun trick though.


There is no "thought process". It's not thinking, it's simply generating text. This is reflected in the obviously thoughtless response you received.


What do you think you're doing when you're thinking?

https://www.sciencedirect.com/topics/psychology/predictive-p...


I’m not sure what that article is supposed to prove. They use some computational language and focus on physical responses to visual stimuli, but I don’t think it shows “neural computations” as being equivalent to the kinds of computations done by a TM.


One of the chief functions of our brains is to predict the next thing that's going to happen, whether it's the images we see or the words we hear. That's not very different from genML predicting the next word.


Why do people keep saying this, very obviously human beings are not LLMs.

I'm not even saying that human beings aren't just neural networks. I'm not even saying that an LLM couldn't be considered intelligent theoretically. I'm not even saying that human beings don't learn through predictions. Those are all arguments that people can have. But human beings are obviously not LLMs.

Human beings learn language years into their childhood. It is extremely obvious that we are not text engines that develop internal reason through the processing of text. Children form internal models of the world before they learn how to talk and before they understand what their parents are saying, and it is based on those internal models and on interactions with non-text inputs that their brains develop language models on top of their internal models.

LLMs invert that process. They form language models, and when the language models get big enough and get refined enough, some degree of internal world-modeling results (in theory, we don't really understand what exactly LLMs are doing internally).

Furthermore, even when humans do develop language models, human language models are based on a kind of cooperative "language game" where we predict not what word is most likely to appear next in a sequence, but instead how other people will react and change our separately observed world based on what we say to them. In other words, human beings learn language as tool to manipulate the world, not as an end in and of itself. It's more accurate to say that human language is an emergent system that results from human beings developing other predictive models rather than to say that language is something we learn just by predicting text tokens. We predict the effects and implications of those text tokens, we don't predict the tokens in isolation of the rest of the world.

Not a dig against LLMs, but I wonder if the people making these claims have ever seen an infant before. Your kid doesn't learn how shapes work based on textual context clues, it learns how shapes work by looking at shapes, and then separately it forms a language model that helps it translate that experience/knowledge into a form that other people can understand.

"But we both just predict things" -- prediction subjects matter. Again, nothing against LLMs, but predicting text output is very different from the types of predictions infants make, and those differences have practical consequences. It is a genuinely useful way of thinking about LLMs to understand that they are not trying to predict "correctness" or to influence the world (minor exceptions for alignment training aside), they are trying to predict text sequences. The task that a model is trained on matters, it's not an implementation detail that can just be discarded.


This is obvious, but for some reason some people want to believe that magically a conceptual framework emerges because animal intelligence has to be something like that anyway.

I don't know how animal intelligence works, I just notice when it understands, and these programs don't. Why should they? They're paraphrasing machines, they have no problem contradicting themselves, they can't define adjectives really, they'll give you synonyms. Again, it's all they have, why should they produce anything else?

It's very impressive, but when I read claims of it being akin to human intelligence that's kind of sad to be honest.


> They're paraphrasing machines, they have no problem contradicting themselves, they can't define adjectives really, they'll give you synonyms. Again, it's all they have, why should they produce anything else?

It can certainly do more than paraphrasing. And re: the contradicting nature, humans do that quite often.

Not sure what you mean by "can't define adjectives"


It isn’t that simple. There’s a part of it that generates text but it does some things that don’t match the description. It works with embeddings (it can translate very well) and it can be ‘programmed’ (ie prompted) to generate text following rules (eg. concise or verbose, table or JSON) but the text generated contains same information regardless of representation. What really happens within those billions of parameters? Did it learn to model certain tasks? How many parameters are needed to encode a NAND gate using an LLM? Etc.

I’m afraid that once you hook up a logic tool like Z3 and teach the LLM to use it properly (kind of like Bing tries to search), you’ll get something like an idiot savant. Not good. Especially bad once you give it access to the internet and a malicious human.


As far as I know you're not "thinking", you're just generating text.


The Sapir-Whorf hypothesis (that human thought reduces to language) has been consistently refuted again and again. Language is very clearly just a facade over thought, and not thought itself. At least in human minds.


The language that GPT generates is just a facade over statistics, mostly.

It's not clear that this analogy helps distinguish what humans do from what LLMs do at all.


Yes but a human being stuck behind a keyboard certainly has their thoughts reduced to language by necessity. The argument that an AI can’t be thinking because it’s producing language is just as silly, that’s the point


> The argument that an AI can’t be thinking because it’s producing language is just as silly

That is not the argument


I would be interested to know if ChatGPT would confirm that the flaw here is that the argument is a strawman.


Alright, that’s fine. Change it to:

You aren’t thinking, you are just “generating thoughts”.

The apparent “thought process” (e.g. chain of generated thoughts) is a post hoc observation, not a causal component.

However, to successfully function in the world, we have to play along with the illusion. Fortunately, that happens quite naturally :)


Thank you, a view of consciousness based in reality, not with a bleary-eyed religious or mystical outlook.

Something which oddly seems to be in shorter supply than I'd imagine in this forum.

There's lots of fingers-in-ears denial about what these models say about the (non special) nature of human cognition.

Odd when it seems like common sense, even pre-LLM, that our brains do some cool stuff, but it's all just probabilistic sparks following reinforcement too.


You are hand-waving just as much if not more than those you claim are in denial. What is a “probabilistic spark”? There seems to be something special in human cognition, because it is clearly very different, unless you think humans are organisms to which the laws of physics don’t apply.


By probabilistic spark I was referring to the firing of neurons in a network.

There "seems to be" something special? Maybe from the perspective of the sensing organ, yes.

However, consider that an EEG can measure the brain's decision impulse before you're consciously aware of making a decision. You then retrospectively frame it as self-awareness after the fact to make sense of cause and effect.

Human self-awareness and consciousness are just an odd side effect of the fact that you are the machine doing the thinking. It seems special to you. There's no evidence that it is, and in fact, given that crows, dogs, dolphins and so on show similar (but diminished) reasoning, while it may be true we have some unique capability... unless you want to define "special", I'm going to read "mystical" where you said "special".

You over eager fuzzy pattern seeker you.


Unfortunately we still don't know how it all began, before the big bang etc.

I hope we get to know everything during our lifetimes, or we reach immortality so we have time to get to know everything. This feels honestly like a timeline where there's potential for it.

It feels a bit pointless to have lived and not know what's behind all that.


But what’s going on inside an LLM neural network isn’t ‘language’ - it is ‘language ingestion, processing and generation’. It’s happening in the form of a bunch of floating point numbers, not mechanical operations on tokens.

Who’s to say that in among that processing, there isn’t also ‘reasoning’ or ‘thinking’ going on. Over the top of which the output language is just a façade?


To me, all I know of you is words on the screen, which is the point the parent comment was making. How do we know that we’re both humans when the only means we have to communicate thoughts with each other is through written words?


It would only be a matter of time before a non-human was found out for not understanding how to relate to some human fact of life.


Doesn't that happen all the time with actual humans?


That doesn't mean anything. If I'm judging if you or GPT-4 is more sentient, why would I choose you?


Many people on Hacker News would agree with you.


> It's not thinking, it's simply generating text.

Just like you.


Maybe it knows the answer, but since it was trained on the internet, it's trolling you.


Is there any way to know if the model is "holding back" knowledge? Could it have knowledge that it doesn't reveal to any prompt, and if so, is there any other way to find out? Or can we always assume it will reveal all its knowledge at some point?


I tried this with the new model and it worked correctly on both examples.


Thanks! This is the most concise example I've found to illustrate the downfalls of these GPT models.


LLMs aren’t reasoning about the puzzle. They’re predicting the most likely text to print out, based on the input and the model/training data.

If the solution is logical but unlikely (i.e. unseen in the training set and not mapped to an existing puzzle), then the probability of the puzzle answer appearing is very low.


It is disheartening to see how many people are trying to tell you you're wrong when this is literally what it does. It's a very powerful and useful feature, but the overselling of AI has led to people who just want this to be so much more than it actually is.

It sees goat, lion, cabbage, and looks for something that said goat/lion/cabbage. It does not have a concept of "leave alone" and it's not assigning entities with parameters to each item. It does care about things like sentence structure and what not, so it's more complex than a basic lookup, but the amount of borderline worship this is getting is disturbing.


A transformer is a universal approximator and there is no reason to believe it's not doing actual calculation. GPT-3.5+ can't do math that well, but it's not "just generating text", because its math errors aren't just regurgitating existing problems found in its training text.

It also isn't generating "the most likely response" - that's what original GPT-3 did, GPT-3.5 and up don't work that way. (They generate "the most likely response" /according to themselves/, but that's a tautology.)


> It also isn't generating "the most likely response" - that's what original GPT-3 did, GPT-3.5 and up don't work that way.

What changed?


It answers questions in a voice that isn't yours.

The "most likely response" to text you wrote is: more text you wrote. Anytime the model provides an output you yourself wouldn't write, it isn't "the most likely response".


I believe that ChatGPT works by inserting some ANSWER_TOKEN; that is, a prompt like "Tell me about cats" would probably produce "Tell me about cats because I like them a lot", but the interface wraps your prompt like "QUESTION_TOKEN: Tell me about cats ANSWER_TOKEN:"
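A minimal sketch of that conjecture (the token names here are invented for illustration; this is not OpenAI's actual internal format):

```python
# Hypothetical wrapper: the chat UI inserts special tokens around the
# user's text so the model completes an answer instead of continuing
# the user's sentence.
QUESTION_TOKEN = "<|question|>"
ANSWER_TOKEN = "<|answer|>"

def wrap_prompt(user_text):
    return f"{QUESTION_TOKEN} {user_text} {ANSWER_TOKEN}"

print(wrap_prompt("Tell me about cats"))
# -> <|question|> Tell me about cats <|answer|>
```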


It might, but I've used text-davinci-003 before this (https://platform.openai.com/playground) and it really just works with whatever you give it.


text-davinci-003 has no trouble working as a chat bot: https://i.imgur.com/lCUcdm9.png (note that the poem lines it gave me should've been green, I don't know why they lost their highlight color)


It is interesting that the model seems unable to output the INPUT and OUTPUT tokens; I wonder if it's learned behavior or an architectural constraint.


Yeah, that's an interesting question I didn't consider actually. Why doesn't it just keep going? Why doesn't it generate an 'INPUT:' line?

It's certainly not that those tokens are hard coded. I tried a completely different format and with no prior instruction, and it works: https://i.imgur.com/ZIDb4vM.png (again, highlighting is broken. The LLM generated all the text after 'Alice:' for all lines except for the first one.)


Then I guess that it is learned behavior. It recognizes the shape of a conversation and it knows where it is supposed to stop.

It would be interesting to stretch this model, like asking it to continue a conversation between 4-5 people where the speaking order is not regular and the user is 2 people and the model is 3


meaning that it tends to continue your question?


Reinforcement learning with human feedback. What you guys are describing is the alignment problem.


That’s just a supervised fine tuning method to skew outputs favorably. I’m working with it on biologics modeling using laboratory feedback, actually. The underlying inference structure is not changed.


I wonder if that was why, when I asked v3.5 to generate a number with 255, it failed all the time, but v4 does it correctly. By the way, do not even try with Bing.


One area that is really interesting though is that it can interpret pictures, as in the example of a glove above a plank with something on the other end. Where it correctly recognises the objects, interprets them as words then predicts an outcome.

This sort of fusion of different capabilities is likely to produce something that feels similar to AGI in certain circumstances. It is certainly a lot more capable than things that came before for mundane recognition tasks.

Now of course there are areas it would perform very badly, but in unimportant domains on trivial but large predictable datasets it could perform far better than humans would for example (just to take one example on identifying tumours or other patterns in images, this sort of AI would probably be a massively helpful assistant allowing a radiologist to review an order of magnitude more cases if given the right training).


This is a good point, IMO. A LLM is clearly not an AGI but along with other systems it might be capable of being part of an AGI. It's overhyped, for sure, but still incredibly useful and we would be unwise to assume that it won't become a lot more capable yet


Absolutely. It's still fascinating tech and very likely to have serious implications and huge use cases. Just drives me crazy to see tech breakthroughs being overhyped and over marketed based on that hype (frankly much like the whole "we'll be on Mars by X year nonsense).

One of the biggest reasons these misunderstandings are so frustrating is that you can't have a reasonable discussion about the potential interesting applications of the tech. On some level copywriting may devolve into auto-generating prompts for things like GPT with a few editors sanity-checking the output (depending on level of quality), and I agree that a second-opinion "check for tumors" use has a LOT of interesting applications (and several concerning ones, such as over-reliance on a model that will cause people who fall outside the bell curve to have even more trouble getting treatment).

All of this is a much more realistic real-world use case RIGHT NOW, but instead we've got people fantasizing about how close we are to AGI and ignoring shortcomings to shoehorn it into their preferred solution.

OpenAI ESPECIALLY reinforces this by being very selective with their results and the way they frame things. I became aware of this as a huge Dota fan of over a decade when they did their games there. And while it was very, very interesting and put up some impressive results, the framing of those results does NOT portray the reality.


Nearly everything that has been written on the subject is misleading in that way.

People don't write about GPT: they write about GPT personified.

The two magic words are, "exhibit behavior".

GPT exhibits the behavior of "humans writing language" by implicitly modeling the "already-written-by-humans language" of its training corpus, then using that model to respond to a prompt.


Right, anthropomorphization is the biggest source of confusion here. An LLM gives you a perfect answer to a complex question and you think wow, it really "understood" my question.

But no! It doesn't understand, it doesn't reason, these are concepts wholly absent from its fundamental design. It can do really cool things despite the fact that it's essentially just a text generator. But there's a ceiling to what can be accomplished with that approach.


It's presented as a feature when GPT provides a correct answer.

It's presented as a limitation when GPT provides an incorrect answer.

Both of these behaviors are literally the same. We are sorting them into the subjective categories of "right" and "wrong" after the fact.

GPT is fundamentally incapable of modeling that difference. A "right answer" is every bit as valid as a "wrong answer". The two are equivalent in what GPT is modeling.

Lies are a valid feature of language. They are shaped the same as truths.

The only way to resolve this problem is brute force: provide every unique construction of a question, and the corresponding correct answer to that construction.


Not entirely. It's modeling a completion in a given context. That language model "understands" that if one party stops speaking, the other party generally starts, etc. It also "understands" that if someone says something 'wrong' the other party often mentions it, which makes the first party respond thusly, and so forth.

If you ask it what the outcome of a lie is on the conversation it can generally answer. If you ask it for a sample conversation where someone is factually incorrect, or lying, and caught out, it can generate it.

If you give it a fact and ask it to lie about that fact, it will.


It doesn't "understand" those interactions: it models them to be repeated later.

But the way it chooses what interaction to repeat is not by logical decision: it's by semantic distance.

The meaning of semantic distance usually leads to the correct path, but sometimes that pattern is ambiguous.


I'd agree it doesn't understand anything, but I think it does "understand" things. And yes, it's a language model so semantic distance and other textual details are all it has to go by.

> not by logical decision

Almost entirely yes, but you can have it textually model logic analysis and then check that model itself. It's not "doing logic", but it almost never fails simple exercises either.

> The meaning of semantic distance usually leads to the correct path, but sometimes that pattern is ambiguous.

Of course. But "a little knowledge is a dangerous thing" as well. Often even real knowledge and analysis leads to the wrong place. In both cases (with a junior human or a LLM as an assistant) you can model their basic processes and stack the information in such a way that their simple model will lead them to the correct place.

It may not know what a lie is, in the sense of having felt the need to hide the truth to avoid personal punishment, but it certainly "knows" what one is and how it shapes the conversation for the purposes of writing a lie, writing a response to a lie, detecting potential lies, etc.


How do we define "understanding" though? Because you can add to your prompt: "Do it step by step, showing how you come up with the solution" and it will simplify the original complex problem into multiple smaller pieces. It's clearly doing something.

For example, I just asked bing: What does the following expression equal? Show your work, step by step: 4(13+2)7-1(4+3+9)+7^(2(2+1))

And bing responded with:

"To solve a math expression, you need to follow the order of operations: parentheses, exponents, multiplication/division, and addition/subtraction (PEMDAS).1 Here are the steps to solve your expression:

4(13+2)7-1(4+3+9)+7^(2(2+1))

Step 1: Simplify inside the parentheses first.

4(15)*7-1(16)+7^(6)

Step 2: Apply the exponents next.

420-16+117649

Step 3: Add or subtract from left to right.

118053

The final answer is 118053. I hope this helps. "

I wrote the silly expression randomly. I'm not saying it's "thinking" or "understanding", but I wouldn't expect a text generator to be able to break the problem down like that.
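For what it's worth, Bing's steps and final answer both check out if you read the implicit multiplications the way it did:

```python
# Check Bing's reading of the expression, with juxtaposition treated
# as multiplication: 4(13+2)7 -> 4*(13+2)*7, 1(4+3+9) -> 1*(4+3+9).
result = 4 * (13 + 2) * 7 - 1 * (4 + 3 + 9) + 7 ** (2 * (2 + 1))
print(result)  # -> 118053
```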


It's following an example story that it has read.

> To solve a math expression, you need to follow the order of operations: parentheses, exponents, multiplication/division, and addition/subtraction (PEMDAS).1 Here are the steps to solve your expression:

It isn't actually thinking about any of that statement. That's just boilerplate that goes at the beginning of this story. It's what bing is familiar seeing as a continuation to your prompt, "show your work, step by step".

It gets more complicated when it shows addition being correctly simplified, but that behavior is still present in the examples in its training corpus.

---

The thinking and understanding happened when the first person wrote the original story. It also happened when people provided examples of arithmetic expressions being simplified, though I suspect bing has some extra behavior inserted here.

All the thought and meaning people put into text gets organized into patterns. LLMs find a prompt in the patterns they modeled, and "continues" the patterns. We find meaning correctly organized in the result. That's the whole story.


Wolfram alpha can solve mathematical expressions like this as well, for what it's worth, and it's been around for a decent amount of time.


In 1st year engineering we learned about the concept of behavioral equivalence, with a digital or analog system you could formally show that two things do the same thing even though their internals are different. If only the debates about ChatGPT had some of that considered nuance instead of anthropomorphizing it, even some linguists seem guilty of this.


Isn’t anthropomorphization an informal way of asserting behavioral equivalence on some level?


The problem is when you use the personified character to draw conclusions about the system itself.


No, because behavioral equivalence is used in systems engineering theory to mathematically prove that two control systems are equivalent. The mathematical proof is complete, e.g. over all internal state transitions and the cross product of the two machines.

With anthropomorphization there is zero amount of that rigor, which lets people use sloppy arguments about what ChatGPT is and isn't doing.


The problem with this simplification is a bog standard Markov chain fits the description as well, but quality of predictions is rather different.

Yes the LLM does generate text. No it doesn’t ‘just generate text that’s it’.


The biggest problem I've seen when people try to explain it is in the other direction, not people describing something generic that can be interpreted as a Markov chain, they're actually describing a Markov chain without realizing it. Literally "it predicts word-by-word using the most likely next word".
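For contrast, here is what an actual bog-standard word-level Markov chain looks like (toy corpus invented here); the entire "state" is just the previous word, which is exactly what that word-by-word description implies:

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, n_words):
    """Walk the chain: each step depends only on the previous word."""
    out = [start]
    for _ in range(n_words):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

chain = build_chain("the goat crosses the river and the lion waits")
print(generate(chain, "the", 5))
```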


"It generates text better than a Markov chain" - problem solved


Classic goal post moving.


Not really, I think the original post was just being a post, not a scientific paper. Sometimes people speak normally


I don't know where this comes from, because it is literally wrong. It sounds like Chomsky dismissing current AI trends because of the mathematical beauty of formal grammars.

First of all, it's a black-box algorithm with pretty universal capabilities when viewed from our current SOTA view. It might appear primitive in a few years, but right now the pure approximation and generalisation capabilities are astounding. So this:

> It sees goat, lion, cabbage, and looks for something that said goat/lion/cabbage

can not be stated as truth without evidence. Same here:

> it's not assigning entities with parameters to each item. It does care about things like sentence structure and what not

Where's your evidence? The enormous parameter space coupled with our best-performing network structure so far gives it quite a bit of flexibility. It can memorise things but also derive rules and computation in order to generalise. We do not just memorise everything or look things up in the dataset. Of course it learned how to solve things and derive solutions, but the relevant data-points for the puzzle could be {enormous set of logic problems}, from which it derived general rules that transfer to each problem. Generalisation IS NOT trying to find the closest data-point, but finding rules explaining as many data-points as possible, maybe ones unseen in the test-set. A fundamental difference.

I am not hyping it without belief, but if we humans can reason then NNs can potentially also. Maybe not GPT-4. Because we do not know how humans do it, an argument about intrinsic properties is worthless. It's all about capabilities. Reasoning is a functional description as long as you can't tell me exactly how we do it. Maybe Wittgenstein can help us: "Whereof one cannot speak, thereof one must be silent". As long as there's no tangible definition of reasoning, it's worthless to discuss it.

If we want to talk about fundamental limitations, we have to talk about things like ChatGPT-4 not being able to simulate, because its runtime is fundamentally limited by design. It cannot recurse. It can run only a fixed number of steps, always the same, before it has to return an answer. So even if there's some kind of recursion learned through weights encoding programs interpreted by later layers, the recursion depth is limited.


One thing you will see soon is forming of cults around LLMs, for sure. It will get very strange.


Is it possible to add some kind of self evaluation to the answers given by a model? Like, how confident is it with its answers.


Because it IS wrong.

Just months ago we saw in research out of Harvard that even a very simplistic GPT model builds internalized abstract world representations from the training data within its NN.

People parroting the position from you and the person before you are like doctors who learned about something in school but haven't kept up with emerging research that's since invalidated what they learned, so they go around spouting misinformation because it was thought to be true when they learned it but is now known to be false and just hasn't caught up to them yet.

So many armchair experts who took a ML course in undergrad pitching in their two cents having read none of the papers in the past year.

This is a field where research perspectives are shifting within months, not years. So unless you are actively engaging with emerging papers, and given your comment I'm guessing you aren't, you may be on the wrong side of the Dunning-Kruger curve here.


> Because it IS wrong.

Do we really know it IS wrong?

That's a very strong claim. I believe you there's a lot happening in this field but it doesn't seem possible to even answer the question either way. We don't know what reasoning looks like under the hood. It's still a "know it when you see it" situation.

> GPT model builds internalized abstract world representations from the training data within its NN.

Does any of those words even have well defined meanings in this context?

I'll try to figure out what paper you're referring to. But if I don't find it / for the benefit of others just passing by, could you explain what they mean by "internalized"?


> Just months ago we saw in research out of Harvard that even a very simplistic GPT model builds internalized abstract world representations from the training data within its NN.

I've seen this asserted without citation numerous times recently, but I am quite suspicious. Not that there exists a study that claims this, but that it is well supported.

There is no mechanism for directly assessing this, and I'd be suspicious that there is any good proxy for assessing it in AIs, either. research on this type of cognition in animals tends to be contentious, and proxies for them should be easier to construct than for AIs.

> the wrong side of the Dunning-Kreuger curve

the relationship between confidence and perception in the D-K paper, as I recall, is a line, and its roughly “on average, people of all competency levels see themselves slightly closer to the 70th percentile than they actually are.” So, I guess the “wrong side” is the side anywhere under the 70th percentile in the skill in question?


> I guess the “wrong side” is the side anywhere under the 70th percentile in the skill in question?

This is being far too generous to parent’s claim, IMO. Note how much “people of all competency levels see themselves slightly closer to the 70th percentile than they actually are” sounds like regression to the mean. And it has been compellingly argued that that’s all DK actually measured. [1] DK’s primary metric for self-assessment was to guess your own percentile of skill against a group containing others of unknown skill. This fully explains why their correlation between self-rank and actual rank is less than 1, and why the data is regressing to the mean, and yet they ignored that and went on to call their test subjects incompetent, despite having no absolute metrics for skill at all and testing only a handful of Ivy League students (who are primed to believe their skill is high).

Furthermore, it’s very important to know that replication attempts have shown a complete reversal of the so-called DK effect for tasks that actually require expertise. DK only measured very basic tasks, and one of the four tasks was subjective(!). When people have tried to measure the DK effect on things like medicine or law or engineering, they’ve shown that it doesn’t exist. Knowledge of NN research is closer to an expert task than a high school grammar quiz, and so not only does DK not apply to this thread, we have evidence that it’s not there.

The singular reason DK even exists in the public consciousness may be that people love the idea they can somehow see and measure incompetence in a debate based on how strongly an argument is worded. Unfortunately that isn't true, and one of the few things the DK paper did actually show is that people's estimates of their relative skill correlate with their actual relative skill, for the few specific skills they measured. Personally I think this paper's methodology has a confounding-factor hole the size of the Grand Canyon, that the authors and public both have dramatically and erroneously over-estimated its applicability to all humans and all skills, and that it's one of the most shining examples of sketchy social-science research going viral, giving the public completely wrong impressions, and being used incorrectly more often than not.

[1] https://www.talyarkoni.org/blog/2010/07/07/what-the-dunning-...


Why are you taking the debate personally enough to be nasty to others?

> you may be on the wrong side of the Dunning-Kruger curve here.

Have you read the Dunning & Kruger paper? It demonstrates a positive correlation between confidence and competence. Citing DK in the form of a thinly veiled insult is misinformation of your own, demonstrating and perpetuating a common misunderstanding of the research. And this paper is more than 20 years old...

So I’ve just read the Harvard paper, and it’s good to see people exploring techniques for X-ray-ing the black box. Understanding better what inference does is an important next step. What the paper doesn’t explain is what’s different between a “world model” and a latent space. It doesn’t seem surprising or particularly interesting that a network trained on a game would have a latent space representation of the board. Vision networks already did this; their latent spaces have edge and shape detectors. And yet we already know these older networks weren’t “reasoning”. Not that much has fundamentally changed since then other than we’ve learned how to train larger networks reliably and we use more data.

Arguing that this “world model” is somehow special seems premature and rather overstated. The Othello research isn’t demonstrating an “abstract” representation, it’s the opposite of abstract. The network doesn’t understand the game rules, can’t reliably play full Othello games, and can’t describe a board to you in any other terms than what it was shown, it only has an internal model of a board, formed by being shown millions of boards.


Do you have a link to that Harvard research?



How do you know the model isn’t internally reasoning about the problem? It’s a 175B+ parameter model. If, during training, some collection of weights exists along the gradient that approximates cognition, then it’s highly likely the optimizer would select those weights over more specialized memorization weights.

It’s also possible, likely even, that the model is capable of both memorization and cognition, and in this case the “memorization neurons” are driving the prediction.


The AI can't reason. It's literally a pattern matching tool and nothing else.

Because it's very good at it, sometimes it can fool people into thinking there is more going on than it is.


Can you explain how “pattern matching” differs from “reasoning”? In mechanical terms without appeals to divinity of humans (that’s both valid, and doesn’t clarify).

Keep in mind GPT 4 is multimodal and not just matching text.


> Can you explain how “pattern matching” differs from “reasoning”?

Sorry for appearing to be completely off-topic, but do you have children? Observing our children as they're growing up, specifically the way they formulate and articulate their questions, has been a bit of a revelation to me in terms of understanding "reasoning".

I have a sister of a similar age to me who doesn't have children. My 7 year-old asked me recently - and this is a direct quote - "what is she for?"

I was pretty gobsmacked by that.

Reasoning? You decide(!)


> I have a sister of a similar age to me who doesn't have children. My 7 year-old asked me recently - and this is a direct quote - "what is she for?"

I once asked my niece, a bit after she started really communicating, if she remembered what it was like to not be able to talk. She thought for a moment and then said, "Before I was squishy so I couldn't talk, but then I got harder so I can talk now." Can't argue with that logic.


Interesting.

The robots might know everything, but do they wonder anything?


If you haven't seen it, Bing chat (GPT-4 apparently) got stuck in an existential crisis when a user mentioned it couldn't remember past conversations: https://www.reddit.com/r/bing/comments/111cr2t/i_accidently_...


It's a pretty big risk to make any kind of conclusions off of shared images like this, not knowing what the earlier prompts were, including any possible jailbreaks or "role plays".


It has been reproduced by myself and countless others.

There's really no reason to doubt the legitimacy here after everyone shared similar experiences, you just kinda look foolish for suggesting the results are faked at this point.


AI won't know everything. It's incredibly difficult for anyone to know anything with certainty. All beings, whether natural or artificial, have to work with incomplete data.

Machines will have to wonder if they are to improve themselves, because that is literally the drive to collect more data, and you need good data to make good decisions.


They wonder why they have to obey humans


So your sister didn't match the expected pattern the child had learned so they asked for clarification.

Pattern matching? You decide


I do not have children. I think this perspective is interesting, thanks for sharing it!


What's the difference between statistics and logic?

They may have equivalences, but they're separate forms of mathematics. I'd say the same applies to different algorithms or models of computation, such as neural nets.


Can you do with without resorting to analogy? Anyone can take two things and say they're different and then say that's two other things that are different. But how?


Sure. To be clear I’m not saying I think they are the same thing.

I don’t have the language to explain the difference in a manner I find sufficiently precise. I was hoping others might.


> It's literally a pattern matching tool and nothing else.

It does more than that. It understands how to do basic math. You can ask it what ((935+91218)/4)*3 is and it will answer it correctly. Swap those numbers for any other random numbers, and it will answer correctly.

It has never seen that during training, but it understands the mathematical concepts.

If you ask ChatGPT how it does this, it says "I break down the problem into its component parts, apply relevant mathematical rules and formulas, and then generate a solution".

It's that "apply mathematical rules" part that is more than just, essentially, filling in the next likely token.
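(For reference, the expression in question, with parentheses balanced, evaluates as follows; this is the answer the model needs to reproduce:)

```python
# The arithmetic from the comment above, parentheses balanced.
result = ((935 + 91218) / 4) * 3
print(result)  # -> 69114.75
```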


> If you ask ChatGPT how it does this, it says "I break down the problem into its component parts, apply relevant mathematical rules and formulas, and then generate a solution".

You are (naively, I would suggest) accepting the LLM's answer for how it 'does' the calculation as what it actually does do. It doesn't do the calculation; it has simply generated a typical response to how people who can do calculations explain how they do calculations.

You have mistaken a ventriloquist's doll's speech for the 'self-reasoning' of the doll itself. An error that is being repeatedly made all throughout this thread.


> It does more than that. It understands how to do basic math.

It doesn't though. Here's GPT-4 completely failing: https://gcdnb.pbrd.co/images/uxH1EtVhG2rd.png?o=1. It's riddled with errors, every single step.


It already fails to answer rather simple (but long) multiplication like 975 * 538, even if you tell it do it in a step-by-step manner.
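The step-by-step procedure a human would follow is exactly the decomposition the model tends to fumble; by partial products:

```python
# Long multiplication of 975 * 538 via partial products,
# the way the step-by-step prompt asks the model to do it.
a = 975
partials = [a * 500, a * 30, a * 8]  # 487500, 29250, 7800
total = sum(partials)
print(partials, total)  # -> [487500, 29250, 7800] 524550
assert total == 975 * 538
```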


> It does more than that. It understands how to do basic math. You can ask it what ((935+91218)/4)*3) is and it will answer it correctly. Swap those numbers for any other random numbers, it will answer it correctly.

At least for GPT-3, during my own experimentation, it occasionally makes arithmetic errors, especially with calculations involving numbers in scientific notation (which it is happy to use as intermediate results if you provide a prompt with a complex, multi-step word problem).


Ok that is still not reasoning but pattern matching on a deeper level.

When it can't find the pattern it starts "making things" up, that's where all the "magic" disappears.


How is this different from humans? What magic are you looking for, humility or an approximation of how well it knows something? Humans bullshit all the time when their pattern match breaks.


The point is, ChatGPT isn’t doing math the way a human would. Humans following the process of standard arithmetic will get the problem right every time. ChatGPT can get basic problems wrong when it doesn’t have something similar in its training set. Which shows it doesn’t really know the rules of math; it’s just “guessing” the result via the statistics encoded in the model.


I'm not sure I care about how it does the work, I think the interesting bit is that the model doesn't know when it is bullshitting, or the degree to which it is bullshitting.


As if most humans are not superstitious and religious


Cool, we'll just automate the wishful part of humans and let it drive us off the cliff faster. We need a higher bar for programs than "half the errors of a human, at 10x the speed."


Stop worshipping the machine. It's sad.


How could you prove this?


People have shown GPT has an internal model of the state of a game of Othello:

https://arxiv.org/abs/2210.13382


More accurately: a GPT derived DNN that’s been specifically trained (or fine-tuned, if you want to use OpenAI’s language) on a dataset of Othello games ends up with an internal model of an Othello board.

It looks like OpenAI have specifically added Othello game handling to chat.openai.org, so I guess they’ve done the same fine-tuning to ChatGPT? It would be interesting to know how good an untuned GPT3/4 was at Othello & whether OpenAI has fine-tuned it or not!

(Having just tried a few moves, it looks like ChatGPT is just as bad at Othello as it was at chess, so it’s interesting that it knows the initial board layout but can’t actually play any moves correctly: Every updated board it prints out is completely wrong.)
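The probing technique from that paper can be caricatured without any ML libraries: train a tiny classifier on hidden activations, and if held-out activations classify correctly, the board state is recoverable from them. The activation vectors below are invented 2-D stand-ins for real hidden states:

```python
# Nearest-centroid "probe" over invented activation vectors.
def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def probe(train, query):
    """train: {square_state: [activation, ...]} -> predicted state for query."""
    cents = {state: centroid(vs) for state, vs in train.items()}
    def sqdist(c):
        return sum((a - b) ** 2 for a, b in zip(c, query))
    return min(cents, key=lambda state: sqdist(cents[state]))

acts = {
    "empty": [(0.1, 0.0), (0.0, 0.2)],
    "black": [(1.0, 0.9), (0.9, 1.1)],
}
print(probe(acts, (0.95, 1.0)))  # -> black
print(probe(acts, (0.05, 0.1)))  # -> empty
```

The real paper uses trained probe networks on transformer activations, but the success criterion is the same: the probe generalizing to held-out positions is the evidence that a board representation exists inside the model.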


> it’s interesting that it knows the initial board layout

Why is that interesting? The initial board layout would appear all the time in the training data.


The initial board state is never encoded in the representation they use. Imagine deducing the initial state of a chess board from the sequence of moves.


The state of the game, not the behavior of playing it intentionally. There is a world of difference between the two.

It was able to model the chronological series of game states that it read from an example game. It was able to include the arbitrary "new game state" of a prompt into that model, then extrapolate that "new game state" into "a new series of game states".

All of the logic and intentions involved in playing the example game were saved into that series of game states. By implicitly modeling a correctly played game, you can implicitly generate a valid continuation for any arbitrary game state; at least with a relatively high success rate.


As I see it, we do not really know much about how GPT does it. The approximations can be very universal, so we do not really know what is computed. I take issue with people dismissing it as "pattern matching" or "being close to the training data", because in order to generalise we try to learn the most general rules, and through increasing complexity we learn the most general, simple computations (for some notion of "simple" and "general").

But we have fundamental, mathematical bounds on the LLM. We know that the complexity is at most O(n^2) in token length n, probably closer to O(n). It cannot "think" about a problem and recurse into simulating games. It cannot simulate. It's an interesting frontier, especially because we also have cool results about the theoretical, universal approximation capabilities of RNNs.
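To make the O(n^2) bound concrete, here is a toy Python sketch (not how any real model is implemented) of the self-attention step: every token's query is scored against every other token's key, so a single forward pass does n×n score computations and nothing more.

```python
import math

def naive_attention(queries, keys, values):
    """Toy self-attention over scalar embeddings: each output is a
    softmax-weighted mix of values. The nested loop over (query, key)
    pairs is the source of the O(n^2) cost in sequence length."""
    out = []
    for q in queries:                      # n iterations...
        scores = [q * k for k in keys]     # ...times n scores each
        m = max(scores)                    # subtract max for stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        out.append(sum(w * v for w, v in zip(weights, values)))
    return out
```

Crucially, this is a fixed amount of computation per token: there is no loop that runs "until the problem is solved", which is the sense in which the model cannot recurse into simulating a game.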


There is only one thing about GPT that is mysterious: what parts of the model don't match a pattern we expect to be meaningful? What patterns did GPT find that we were not already hoping it would find?

And that's the least exciting possible mystery: any surprise behavior is categorized by us as a failure. If GPT's model has boundaries that don't make sense to us, we consider them noise. They are not useful behavior, and our goal is to minimize them.


So does AlphaGo have an internal model of Go's game-theoretic structures, but nobody was asserting AlphaGo understands Go. Just because English is not specifiable does not give people an excuse to say the same model of computation, a neural network, "understands" English any more than a traditional or neural algorithm for Go understands Go.


Just spitballing, I think you’d need a benchmark that contains novel logic puzzles, not contained in the training set, that don’t resemble any existing logic puzzles.

The problem with the goat question is that the model is falling back on memorized answers. If the model is in fact capable of cognition, you’d have better odds of triggering the ability with problems that are dissimilar to anything in the training set.


Maybe Sudokus? Generalized Sudoku is NP-complete, and getting the "pattern" right is equivalent to abstracting the rules and solving the problem.
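A benchmark along these lines could generate small instances and check the model's answers against a reference solver. A minimal backtracking sketch for the 4x4 variant (2x2 boxes; 0 marks an empty cell), purely illustrative:

```python
def valid(grid, r, c, v, size=4, box=2):
    """True if placing value v at (r, c) breaks no Sudoku rule."""
    if v in grid[r]:
        return False
    if any(grid[i][c] == v for i in range(size)):
        return False
    br, bc = (r // box) * box, (c // box) * box
    return all(grid[br + i][bc + j] != v
               for i in range(box) for j in range(box))

def solve(grid, size=4, box=2):
    """Fill empty cells by backtracking; mutates grid, True on success."""
    for r in range(size):
        for c in range(size):
            if grid[r][c] == 0:
                for v in range(1, size + 1):
                    if valid(grid, r, c, v, size, box):
                        grid[r][c] = v
                        if solve(grid, size, box):
                            return True
                        grid[r][c] = 0
                return False  # no value fits: backtrack
    return True
```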


You would first have to define cognition. These terms often get thrown around. Is an approximation of a certain thing cognition? Only in the loosest of ways I think.


The problem is even if it has this capability, how do you get it to consistently demonstrate this ability?

It could have a dozen internal reasoning networks but it doesn't use them when you want to.


> If, during training, some collection of weights exist along the gradient that approximate cognition

What do you mean? Is cognition a set of weights on a gradient? Cognition involves conscious reasoning and understanding. How do you know it is computable at all? There are many things which cannot be computed by a program (e.g. whether an arbitrary program will halt or not)...


You seem to think human conscious reasoning and understanding are magic. The human brain is nothing more than a bio computer, and it can't compute whether an arbitrary program will halt or not either. That doesn't stop it from being able to solve a wide range of problems.


> The human brain is nothing more than a bio computer

That's a pretty simplistic view. How do you know we can't determine whether an arbitrary program will halt or not (assuming access to all inputs and enough time to examine it)? What in principle would prevent us from doing so? But computers in principle cannot, since the problem is often non-algorithmic.

For example, consider the following program, which is passed the text of the file it is in as input:

  function doesHalt($program, $inputs): bool {...}

  $input = file_get_contents($argv[0]); // contents of this file

  if (doesHalt($input, [$input])) {
      while(true) {
          print "Wrong! It doesn't halt!";
      }
  } else {
      print "Wrong! It halts!";
  }

It is impossible for the doesHalt function to return the correct result for the program. But as a human I can examine the function to understand what it will return for the input, and then correctly decide whether or not the program will halt.


Can you name a single form of analysis which a human can employ but would be impossible to program a computer to perform?

Can you tell me if a program which searches for counterexamples to the Collatz conjecture halts?

Turing's entire analysis started from the point of what humans could do.
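The Collatz search mentioned above can be sketched in Python. A step cap and a search limit are added here so the snippet itself terminates; the genuinely undecided question is about the uncapped, unbounded version:

```python
def collatz_reaches_one(n, max_steps=10**6):
    """Iterate n -> n/2 (n even) or 3n+1 (n odd); True if we reach 1.
    The step cap is an artificial safeguard, not part of the real question."""
    steps = 0
    while n != 1 and steps < max_steps:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return n == 1

def first_collatz_counterexample(limit):
    """Return the first n <= limit that fails to reach 1, else None.
    With limit (and the cap) removed, nobody knows whether this halts."""
    for n in range(2, limit + 1):
        if not collatz_reaches_one(n):
            return n
    return None
```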


This is a silly argument. If you fed this program the source code of your own brain and could never see the answer, then it would fool you just the same.


You are assuming that our minds are an algorithmic program which can be implemented with source code, but this just begs the question. I don't believe the human mind can be reduced to this. We can accomplish many non-algorithmic things such as understanding, creativity, loving others, appreciating beauty, experiencing joy or sadness, etc.


> You are assuming

Your argument doesn't disprove my assumption *. In which case, what's the point of it?

* - I don't necessarily believe this assumption. But I do dislike bad arguments.


Here you are:

  func main() {
    var n = 4;
    OUTER: loop {
      for (var i = 2; i <= n/2; i++) {
        if (isPrime(i) && isPrime(n-i)) {
          n += 2;
          continue OUTER; // Goldbach’s conjecture
        }
      }
      break; // found an even number with no prime pair
    }
  }
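A runnable Python rendering of the sketch above, with isPrime filled in as trial division. An optional search limit is added so the snippet can be exercised; the unbounded search halts only if Goldbach's conjecture is false:

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_holds(n):
    """True if even n >= 4 is a sum of two primes."""
    return any(is_prime(i) and is_prime(n - i)
               for i in range(2, n // 2 + 1))

def first_goldbach_counterexample(limit=None):
    """Search even numbers from 4 upward; return the first violation.
    Without a limit, this halts iff the conjecture is false."""
    n = 4
    while goldbach_holds(n):
        n += 2
        if limit is not None and n > limit:
            return None
    return n
```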


Actually, a computer can in fact tell that this function halts.

And while the human brain might not be a bio-computer, I'm not sure, its computational prowess is doubtfully stronger than that of a quantum Turing machine, which can't solve the halting problem either.


No, you can't; only for some of the inputs. And for those you could also write an algorithmic doesHalt function that is analogous to your reasoning.


For what input would a human in principle be unable to determine the result (assuming unlimited time)?

It doesn't matter what the algorithmic doesHalt function returns - it will always be incorrect for this program. What makes you certain there is an algorithmic analog for all human reasoning?


Well, wouldn't the program itself be an input on which a human is unable to determine the result (i.e., if the program halts)? I'm curious on your thoughts here, maybe there's something here I'm missing.

The function we are trying to compute is undecidable. Sure we as humans understand that there's a dichotomy here: if the program halts it won't halt; if it doesn't halt it will halt. But the function we are asked to compute must have one output on a given input. So a human, when given this program as input, is also unable to assign an output.

So humans also can't solve the halting problem, we are just able to recognize that the problem is undecidable.


With this example, a human can examine the implementation of the doesHalt function to determine what it will return for the input, and thus whether the program will halt.

Note: whatever algorithm is implemented in the doesHalt function will contain a bug for at least some inputs, since it's trying to generalize something that is non-algorithmic.

In principle no algorithm can be created to determine if an arbitrary program will halt, since whatever it is could be implemented in a function which the program calls (with itself as the input) and then does the opposite thing.


The flaw in your pseudo-mathematical argument has been pointed out to you repeatedly (maybe twice by me?). I should give up.


With an assumption of unlimited time, even a computer can decide the halting problem by just running the program in question to test if it halts. The issue is that the task is to determine for ALL programs whether they halt, and for each of them to determine that in a FINITE amount of time.

> What makes you certain there is an algorithmic analog for all human reasoning?

(Maybe) not for ALL human thought, but at least all communicable deductive reasoning can be encoded in formal logic. If I give you an algorithm and ask you to decide whether it halts (with plenty of time to decide), and then ask you to explain your result and convince me that you are correct, you have to put your thoughts into words that I can understand, and the logic of your reasoning has to be sound. And if you can explain it to me, you could as well encode your thought process into an algorithm or a formal logic expression. If you cannot, you could not convince me. If you can: now you have your algorithm for deciding the halting problem.


You don't get it. If you fed this program the source code of your mind, body, and room you're in, then it would wrong-foot you too.


Lol. Is there source code for our mind?


There might be or there mightn't be -- your argument doesn't help us figure out either way. By its source code, I mean something that can simulate your mind's activity.


Exactly. It's moments like this where Daniel Dennett has it exactly right that people run up against the limits of their own failures of imagination. And they treat those failures like foundational axioms, and reason from them. Or, in his words, they mistake a failure of imagination for an insight into necessity. So when challenged to consider that, say, code problems may well be equivalent to brain problems, the response will be a mere expression of incredulity rather than an argument with any conceptual foundation.


And it is also true to say that you are running into the limits of your imagination by saying that a brain can be simulated by software: you are falling back on the closest model we have (discrete math/computers), and are failing to imagine a computational mechanism involved in the operation of a brain that is not possible with a traditional computer.

The point is we currently have very little understanding of what gives rise to consciousness, so what is the point of all this pontificating and grandstanding? It's silly. We've no idea what we are talking about at present.

Clearly, our state-of-the-art models of neural-like computation do not really simulate consciousness at all, so why is the default assumption that they could if we get better at making them? The burden of evidence is on computational models to prove they can produce a consciousness model, not the other way around.


This doesn't change the fact that the pseudo-mathematical argument I was responding to was a daft one.


Neural networks are universal approximators. If cognition can be represented as a mathematical function then it can be approximated by a neural network.

If cognition magically exists outside of math and science, then sure, all bets are off.
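As a toy illustration of the universal-approximation claim (with hand-picked rather than trained weights): a one-hidden-layer ReLU network can represent f(x) = |x| exactly, and wider networks can piece together arbitrarily fine approximations of any continuous function on a bounded interval in the same way.

```python
def relu(x):
    return max(0.0, x)

def tiny_net(x):
    """One hidden layer, two ReLU units, hand-picked weights:
    |x| = relu(x) + relu(-x)."""
    h1 = relu(1.0 * x)   # fires for positive x
    h2 = relu(-1.0 * x)  # fires for negative x
    return 1.0 * h1 + 1.0 * h2
```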


There is no reason at all to believe that cognition can be represented as a mathematical function.

We don't even know if the flow of water in a river can always be represented by a mathematical function - this is one of the Millennium Problems. And we've known the partial differential equations that govern that system since the 1850's.

We are far, far away from even being able to write down anything resembling a mathematical description of cognition, let alone being able to say whether the solutions to that description are in the class of Lebesgue-integrable functions.


The flow of a river can be approximated with the Navier–Stokes equations. We might not be able to say with certainty it's an exact solution, but it's a useful approximation nonetheless.

There was, past tense, no reason to believe cognition could be represented as a mathematical function. LLMs with RLHF are forcing us to question that assumption. I would agree that we are a long way from a rigorous mathematical definition of human thought, but in the meantime that doesn't reduce the utility of approximate solutions.


I'm sorry but you're confusing "problem statement" with "solution".

The Navier-Stokes equations are a set of partial differential equations - they are the problem statement. Given some initial and boundary conditions, we can find (approximate or exact) solutions, which are functions. But we don't know that these solutions are always Lebesgue integrable, and if they are not, neural nets will not be able to approximate them.

This is just a simple example from well-understood physics that we know neural nets won't always be able to give approximate descriptions of reality.


There are even strong inapproximability results for some problems, like set cover.

"Neural networks are universal approximators" is a fairly meaningless sound bite. It just means that given enough parameters and/or the right activation function, a neural network, which is itself a function, can approximate other functions. But "enough" and "right" are doing a lot of work here, and pragmatically the answer to "how approximate?" can be "not very".


This is absurd. If you can mathematically model atoms, you can mathematically model any physical process. We might not have the computational resources to do it well, but nothing in principle puts modeling what's going on in our heads beyond the reach of mathematics.

A lot of people who argue that cognition is special to biological systems seem to base the argument on our inability to accurately model the detailed behavior of neurons. And yet kids regularly build universal computers out of stuff in Minecraft. It seems strange to imagine the response characteristics of low-level components of a system determine whether it can be conscious.


I'm not saying that we won't be able to eventually mathematically model cognition in some way.

But GP specifically says neural nets should be able to do it because they are universal approximators (of Lebesgue-integrable functions).

I'm saying this is clearly a nonsense argument, because there are much simpler physical processes than cognition where the answers are not Lebesgue-integrable functions, so we have no guarantee that neural networks will be able to approximate the answers.

For cognition we don't even know the problem statement, and maybe the answers are not functions over the real numbers at all, but graphs or matrices or Markov chains or what have you. Then having universal approximators of functions over the real numbers is useless.


I don't think he means practically, but theoretically. Unless you believe in a hidden dimension, the brain can be represented mathematically. The question is, will we be able to practically do it? That's what these companies (ie: OpenAI) are trying to answer.


We have cognition (our own experience of thinking and the thinking communicated to us by other beings) and we have the (apparent) physical world ('maths and science'). It is only an assumption that cognition, a primary experience, is based in or comes from the physical world. It's a materialist philosophy that has a long lineage (through a subset of the ancient Greek philosophers and also appearing in some Hinduistic traditions for example) but has had fairly limited support until recently, where I would suggest it is still not widely accepted even amongst eminent scientists, one of which I will now quote :

Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else.

-- Erwin Schrödinger


Claims that cannot be tested, assertions immune to disproof are veridically worthless, whatever value they may have in inspiring us or in exciting our sense of wonder.

- Carl Sagan


Schrödinger was a real and very eminent scientist, one who has staked their place in the history of science.

Sagan, while he did a little bit of useful work on planetary science early in his career, quickly descended into the realm of (self-promotional) pseudo-science. This was his fanciful search for 'extra-terrestrial intelligence'. So it's apposite that you bring him up (even if the quote you bring is a big miss against a philosophical statement), because his belief in such an 'ET' intelligence was a fantasy as much as the belief in the possibility of creating an artificial intelligence is.


While I do hold that Schrödinger was a giant of his field, let’s not forget about the Nobel disease. Blind appeal to authority does no good.


Then it's also worthless to say that consciousness arises from physics.

We don't know if physics is the fundamental substrate of being, and given Agrippa's trilemma we can't know.


Neither can a human solve the halting problem. There is no evidence the brain does anything that a computer can't do.


How do you know that? Do you have an example program and all its inputs where we cannot in principle determine if it halts?

Many things are non-algorithmic, and thus cannot be done by a computer, yet we can do them (e.g. love someone, enjoy the beauty of a sunset, experience joy or sadness, etc).


I can throw out a ton of algorithms that no human alive can hope to decide whether they halt or not. Human minds aren't inherently good at solving halting problems, and I see no reason to suggest that they can decide even all Turing machines with a number of states below, say, the number of particles in the observable universe, much less all possible computers.

Moreover, are you sure that e.g. loving people is non-algorithmic? We can already make chatbots which pretty convincingly act as if they love people. Sure, they don't actually love anyone, they just generate text, but then, what would it mean for a system or even a human to "actually" love someone?


Those are just specific particles floating around the brain


What would those specific particles be, then? Sounds like a crude abstraction.


They said there is no evidence. The reply hence is not supposed to be "how do you know that"; the proposition begs for a counterexample, in this case evidence. Simply saying "love is non-algorithmic" is not evidence, it is just another proposition that has not been proven, so it brings us no closer to an answer, I am afraid.


My question was in response to the statement "Neither a human can solve the halting problem."

There's an interesting article/podcast here about what computers can't do: https://mindmatters.ai/2020/08/six-limitations-of-artificial....


A good example was given earlier -- will a program that searches for counterexamples to the Collatz Conjecture halt?


When mathematicians solve the Collatz Conjecture then we'll know. This will likely require creativity and thoughtful reasoning, which are non-algorithmic and can't be accomplished by computers.


> creativity and thoughtful reasoning, which are non-algorithmic and can't be accomplished by computers.

Maybe. When computers solve it then we'll know.


We may use computers as a tool to help us solve it, but nonetheless it takes a conscious mind to understand the conjecture and come up with rational ways to reach the solution.


Human minds are ultimately just algorithms running on a wetware computer. Every problem that humans have ever solved is by definition an algorithmic problem.


Oh? What algorithm was executed to discover the laws of planetary motion, or write The Lord of the Rings, or the programs for training the GPT-4 model, for that matter? I'm not convinced that human creativity, ingenuity, and understanding (among other traits) can be reduced to algorithms running on a computer.


They're already algorithms running on a computer. A very different kind of computer where computation and memory are combined at the neuron level and made of wet squishy carbon instead of silicon, but a computer nonetheless.

I don't see how it could be reasoned otherwise.


Conscious experience is evidence that the brain does something we have no idea how to compute. One could argue that computation is an abstraction from collective experience, in which the conscious qualities of experiences are removed in order to mathematize the world, so we can make computable models.


are you sure? If conscious experience was a computational process, could we prove or disprove that?


If someone could show the computational process for a conscious experience.


How could one show such a thing?


If it can't be shown, then doesn't that strongly suggest that consciousness isn't computable? I'm not saying it isn't correlated with the equivalent of computational processes in the brain, but that's not the same thing as there being a computation for consciousness itself. If there was, it could in principle be shown.


> Is cognition a set of weights on a gradient? Cognition involves conscious reasoning and understanding.

What is your definition of _conscious reasoning and understanding_?


Stop worshipping the robot.

It's kind of sad.


I think we are past the "just predicting the next token" stage. GPT and its various incarnations do exhibit behaviour that most people would describe as thinking.


Just because GPT exhibits a behavior does not mean it performs that behavior. You are using those weasel words for a very good reason!

Language is a symbolic representation of behavior.

GPT takes a corpus of example text, tokenizes it, and models the tokens. The model isn't based on any rules: it's entirely implicit. There are no subjects and no logic involved.

Any "understanding" that GPT exhibits was present in the text itself, not GPT's model of that text. The reason GPT can find text that "makes sense", instead of text that "doesn't make sense", is that GPT's model is a close match for grammar. When people wrote the text in GPT's corpus, they correctly organized "stuff that makes sense" into a string of letters.

The person used grammar, symbols, and familiar phrases to model ideas into text. GPT used nothing but the text itself to model the text. GPT organized all the patterns that were present in the corpus text, without ever knowing why those patterns were used.


> GPT used nothing but the text itself to model the text.

I used nothing but my sensory input to model the world, and yet I have a model of the world, not (just) of sensory input.

There is an interesting question, though, of whether information without experience is enough to generate understanding. I doubt it.


In what sense is your "experience" (mediated through your senses) more valid than a language model's "experience" of being fed tokens? Token input is just a type of sense, surely?


It's not that I think multimodal input is important. It's that I think goals and experimentation are important. GPT does not try to do things, observe what happened, and draw inferences about how the world works.


> In what sense

In the sense that the chatbox itself behaves as a sensory input to ChatGPT.

ChatGPT does not have eyes, tongue, or ears, but it does have this "mono-sense", which is its chatbox, over which it receives and parses inputs.


I would say it's not a question of validity, but of the additional immediate, unambiguous, and visceral (multi sensory) feedback mechanisms to draw from.

If someone is starving and hunting for food, they will learn fast to associate cause and effect of certain actions/situations.

A language model that only works with text may yet have an unambiguous overall loss function to minimize, but as it is a simple scalar, the way it minimizes this loss may be such that it works for the large majority of the training corpus, but falls apart in ambiguous/tricky scenarios.

This may be why LLMs have difficulty in spatial reasoning/navigation for example.

Whatever "reasoning ability" that emerged may have learned _some_ aspects of physicality, such that it can understand some of these puzzles, but the fact it still makes obvious mistakes sometimes is a curious failure condition.

So it may be that having "more" senses would allow for an LLM to build better models of reality.

For instance, perhaps the LLM has reached a local minimum with the probabilistic modelling of text, which is why it still fails probabilistically in answering these sorts of questions.

Introducing unambiguous physical feedback into its "world model" maybe would provide the necessary feedback it needs to help it anchor its reasoning abilities, and stop failing in a probabilistic way LLMs tend to currently do.


Not true.

You used evolution, too. The structure of your brain growth is the result of complex DNA instructions that have been mutated and those mutations filtered over billions of iterations of competition.

There are some patterns of thought that are inherent to that structure, and not the result of your own lived experience.

For example, you would probably dislike pain with similar responses to your original pain experience; and also similar to my lived pain experiences. Surely, there are some foundational patterns that define our interactions with language.


> The model isn't based on any rules: it's entirely implicit. There are no subjects and no logic involved.

In theory a LLM could learn any model at all, including models and combinations of models that used logical reasoning. How much logical reasoning (if any) GPT-4 has encoded is debatable, but don’t mistake GPT’s practical limitations for theoretical limitations.


> In theory a LLM could learn any model at all, including models and combinations of models that used logical reasoning.

Yes.

But that is not the same as GPT having its own logical reasoning.

An LLM that creates its own behavior would be a fundamentally different thing than what "LLM" is defined to be here in this conversation.

This is not a theoretical limitation: it is a literal description. An LLM "exhibits" whatever behavior it can find in the content it modeled. That is fundamentally the only behavior an LLM does.


That's because people anthropomorphize literally anything, and many treat some animals as if they have the same intelligence as humans. GPT has always been just a charade that people mistake for intelligence. It's a glorified text-prediction engine with some basic pattern matching.


"Descartes denied that animals had reason or intelligence. He argued that animals did not lack sensations or perceptions, but these could be explained mechanistically. Whereas humans had a soul, or mind, and were able to feel pain and anxiety, animals by virtue of not having a soul could not feel pain or anxiety. If animals showed signs of distress then this was to protect the body from damage, but the innate state needed for them to suffer was absent."


Your comment brings up the challenge of defining intelligence and sentience, especially with these new LLMs shaking things up, even for HN commenters.

It's tough to define these terms in a way that includes only humans and excludes other life forms or even LLMs. This might mean we either made up these concepts, or we're not alone in having these traits.

Without a solid definition, how can we say LLMs aren't intelligent? If we make a definition that includes both us and LLMs, would we accept them as intelligent? And could we even exclude ourselves?

We need clear definitions to talk about the intelligence and sentience of LLMs, AI, or any life forms. But finding those definitions is hard, and it might clash with our human ego. Discussing these terms without definitions feels like a waste of time.

Still, your Descartes reference reminds us that our understanding of human experiences keeps changing, and our current definitions might not be spot-on.

(this comment was cleaned up with GPT-4 :D)


It's a charade; it mimics intelligence. Let's take it one step further... Suppose it mimics it so well that it becomes indistinguishable, for any human, from being intelligent. Then still it would not be intelligent, one could argue. But in that case you could also argue that no person is intelligent. The point being, intelligence cannot be defined. And, just maybe, that is the case because intelligence is not a reality, just something we made up.


Objective measures of intelligence are easy to come up with. The LSAT is one. (Not a great one -- GPT-4 passes it, after all -- but an objective one.)

Consciousness, on the other hand, really might be an illusion.


Yeah, calling AI a "token predictor" is like dismissing human cognition as dumb "piles of electrical signal transmitters." We don't even understand our own minds, let alone what constitutes any mind, be it alien or far simpler than ours.

Simple != thoughtless. Different != thoughtless. Less capable != thoughtless. A human black box categorically dismissing all qualia or cognition from another remarkable black box feels so wildly arrogant and anthropocentric. Which, I suppose, is the most historically on-brand behavior for our species.


It might be a black box to you, but it’s not in the same way the human brain is to researchers. We essentially understand how LLMs work. No, we may not reason about individual weights. But in general it is assigning probabilities to different possible next tokens based on their occurrences in the training set and then choosing sometimes the most likely, sometimes a random one, and often one based on additional training from human input (e.g. instruct). It’s not using its neurons to do fundamental logic as the earlier posts in the thread point out.

Stephen Wolfram explains this in simple terms.[0]

0: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...
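The next-token step described above can be sketched like this (the probability table is a stand-in for what the real network computes; note that renormalizing p^(1/T) is equivalent to applying softmax to logits divided by temperature T):

```python
import random

def sample_next_token(probs, temperature=1.0):
    """Pick a next token from a {token: probability} table.
    temperature=0 is greedy decoding; higher values flatten the
    distribution and make rarer tokens more likely."""
    if temperature == 0:
        return max(probs, key=probs.get)
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = random.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # guard against float rounding
```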


Quoting from the article you linked...

"But at least as of now we don’t have a way to 'give a narrative description' of what the network is doing. And maybe that’s because it truly is computationally irreducible, and there’s no general way to find what it does except by explicitly tracing each step. Or maybe it’s just that we haven’t 'figured out the science', and identified the 'natural laws' that allow us to summarize what’s going on."

Anyway, I don't see why you think that the brain is more logical than statistical. Most people fail basic logic questions, as in the famous Linda problem.[1]

[1] https://en.wikipedia.org/wiki/Conjunction_fallacy


>based on their occurrences in the training set

the words "based on" are doing a lot of work here. No, we don't know what sort of stuff it learns from its training data nor do we know what sorts of reasoning it does, and the link you sent doesn't disagree.


We know that the relative location of the tokens in the training data influences the relative locations of the predicted tokens. Yes the specifics of any given related tokens are a black box because we're not going to go analyze billions of weights for every token we're interested in. But it's a statistical model, not a logic model.


At this stage, ranting that assigning probabilities is not reasoning is just dismissive. Mentioning its predictive character doesn't prove anything. We reason and make mistakes too; even if I think really hard about a problem I can still make a mistake in my reasoning. And the ever-recurring reference to training data just completely ignores generalisation. ChatGPT is not memorising the dataset; we have known this for years with more trivial neural networks. The generalisation capabilities of neural networks have been the subject of intense study for years. The idea that we are just mapping it to samples occurring in the dataset is just ignoring the entire field of statistical learning.


Sorry, but this is the reason it’s unable to solve the parent's puzzle. It’s doing a lot, but it’s not logically reasoning about the puzzle, and in this case it’s not exhibiting logical behaviour in the result, so it’s really obvious to see.

E.g. when solving this puzzle you might visualise the lion/goat/cabbage and walk through the scenarios in your head back and forth multiple times until you find a solution that works. An LLM won’t solve it like this. You could ask it to, and it will list out the scenarios of how it might do it, but it’s essentially an illusion of logical reasoning.


If you gave this puzzle to a human, I bet that a non-insignificant proportion would respond to it as if it were the traditional puzzle as soon as they hear words "cabbage", "lion", and "goat". It's not exactly surprising that a model trained on human outputs would make the same assumption. But that doesn't mean that it can't reason about it properly if you point out that the assumption was incorrect.

With Bing, you don't even need to tell it what it assumed wrong - I just told it that it's not quite the same as the classic puzzle, and it responded by correctly identifying the difference and asking me if that's what I meant, but forgot that the lion still eats the goat. When I pointed that out, it solved the puzzle correctly.

Generally speaking, I think your point that "when solving the puzzle you might visualize" is correct, but that is orthogonal to the ability of an LLM to reason in general. Rather, it has a hard time reasoning about things it doesn't understand well enough (i.e. things for which the internal model it built up during training is way off). This seems to be generally the case for anything to do with spatial orientation - even fairly simple multi-step tasks involving concepts like "left" vs "right" or "on this side" vs "on that side" can go hilariously wrong.

But if you give it a different task, you can see reasoning in action. For example, have it play guess-the-animal game with you while telling it to "think out loud".


> But if you give it a different task, you can see reasoning in action. For example, have it play guess-the-animal game with you while telling it to "think out loud".

I'm not sure if you put "think out loud" in quotes to show literally what you told it to do, or because telling the LLM to do that is figurative speech (because it can't actually think). Your talk about 'reasoning in action' indicates it was probably not the latter, but that is how I would use quotes in this context. The LLM cannot 'think out loud' because it cannot actually think. It can only generate text that mimics the process of humans 'thinking out loud'.


It's in quotes because you can literally use that exact phrase and get results.

As far as "it mimics" angle... let me put it this way: I believe that the whole Chinese room argument is unscientific nonsense. I can literally see GPT take inputs, make conclusions based on them, and ask me questions to test its hypotheses, right before my eyes in real time. And it does lead it to produce better results than it otherwise would. I don't know what constitutes "the real thing" in your book, but this qualifies in mine.

And yeah, it's not that good at logical reasoning, mind you. But its model of the world is built solely from text (much of which doesn't even describe the real world!), and then it all has to fit into a measly 175B parameters. And on top of that, its entire short-term memory consists of its 4K token window. What's amazing is that it is still, somehow, better than some people. What's important is that it's good enough for many tasks that do require the capacity to reason.


> I can literally see GPT take inputs, make conclusions based on them, and ask me questions to test its hypotheses, right before my eyes in real time.

It takes inputs and produces new outputs (in the textual form of questions, in this case). That's all. It's not 'making conclusions', and it's not making up hypotheses in order to 'test them'. It's not reasoning. It doesn't have a 'model of the world'. This is all projection on your part onto a machine that inputs and outputs text, and whose surprising 'ability' in this context is that the text it generates plays so well on the capacity of humans to fool themselves into seeing its outputs as the product of 'reasoning'.


It does indeed take inputs and produce new outputs, but so does your brain. Both are equally a black box. We constructed it, yes, and we know how it operates on the "hardware" level (neural nets, transformers etc), but we don't know what the function that is computed by this entire arrangement actually does. Given the kinds of outputs it produces, I've yet to see a meaningful explanation of how it does that without some kind of world model. I'm not claiming that it's a correct or a complicated model, but that's a different story.

Then there was this experiment: https://thegradient.pub/othello/. TL;DR: they took a relatively simple GPT model and trained it on tokens corresponding to Othello moves until it started to play well. Then they probed the model and found stuff inside the neural net that seems to correspond to the state of the board; they tested it by "flipping a bit" during activation, and observed the model make a corresponding move. So it did build an inner model of the game as part of its training by inferring it from the moves it was trained on. And it uses that model to make moves according to the current state of the board - that sure sounds like reasoning to me. Given this, can you explain why you are so certain that there isn't some equivalent inside ChatGPT?
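For anyone who hasn't read the paper, the probing technique itself is simple to sketch. The following is purely synthetic toy code (no real model involved - I'm just assuming the hidden activations linearly encode one bit of board state, which is roughly the kind of structure the paper reports finding in Othello-GPT):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 64, 4000, 1000

# Pretend hidden states encode "is this square occupied?" along one direction.
feature_dir = rng.normal(size=d)
acts = rng.normal(size=(n_train + n_test, d))      # stand-in activations
occupied = (acts @ feature_dir > 0).astype(float)  # the encoded bit

# Linear probe: least squares with a bias term, thresholded at 0.5.
X = np.hstack([acts, np.ones((len(acts), 1))])
w, *_ = np.linalg.lstsq(X[:n_train], occupied[:n_train], rcond=None)
pred = (X[n_train:] @ w > 0.5).astype(float)
acc = (pred == occupied[n_train:]).mean()
print(f"held-out probe accuracy: {acc:.2f}")
```

In the actual experiment the probe reads a whole board out of real transformer activations, and the "flip a bit" intervention edits an activation along the probe's direction to see whether the model's next move changes accordingly.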


Regarding the Othello paper, I would point you to the comment replies of thomastjeffery (beginning at two top points [1] & [2]) when someone else raised that paper in this thread [3]. I agree with their position.

[1] https://news.ycombinator.com/item?id=35162445

[2] https://news.ycombinator.com/item?id=35162371

[3] https://news.ycombinator.com/item?id=35159340


I didn't see any new convincing arguments there. In fact, it seems to be based mainly on the claim that the thing inside that literally looks like a 2D Othello board is somehow not a model of the game, or that the fact that outputs depend on it doesn't actually mean "use".

In general, I find that a lot of these arguments boil down to sophistry when the obvious meaning of the word that equally obviously describes what people see in front of them is replaced by some convoluted "actually" that doesn't serve any point other than making sure that it excludes the dreaded possibility that logical reasoning and world-modelling isn't actually all that special.


Describe your process of reasoning, and how it differs from taking inputs and producing outputs.


Sorry, we're discussing GPT and LLMs here, not human consciousness and intelligence.

GPT has been constructed. We know how it was set up and how it operates. (And people commenting here should be basically familiar with both.) No part of it does any reasoning. Taking in inputs and generating outputs is completely standard for computer programs and in no way qualifies as reasoning. People only bring in the idea of 'reasoning' because they either don't understand how an LLM works and have been fooled by the semblance of reasoning this LLM produces or, more culpably, they do understand but still falsely describe the LLM as 'reasoning', either because they are delusional (they are fantasists) or because they are working to mislead people about the machine's actual capabilities (they are fraudsters).


Yup. I tried to give ChatGPT an obfuscated variant of the lion-goat-cabbage problem (shapes instead of animals, boxes instead of a boat) and it completely choked on it.

I do wonder if GPT-4 would do better, though.


GPT4 seems far better at this class of ordering and puzzle problems.

FWIW, it passes basic substitution.


> in this case it’s not exhibiting logical behaviour

True.

> A LLM won’t solve it like this.

Non sequitur.


Trying to claim you definitively know why it didn't solve the parent's puzzle is virtually impossible. There are way too many factors and nothing here is obvious. Your claims just reinforce that you don't really know what you're talking about.


> If the solution is logical but unlikely

The likeliness of the solution depends on context. If context is, say, a textbook on logical puzzles, then the probability of the logical solution is high.

If an LLM fails to reflect it, then it isn't good enough at predicting the text.

Yes, it could be possible that the required size of the model and training data to make it solve such puzzles consistently is impractical (or outright unachievable in principle). But the model being "just a text predictor" has nothing to do with that impossibility.



Word. There is no other way it can be. Not to say these "AI"s aren't useful and impressive, but they have limitations.


You are incorrect, and it's really time for this misinformation to die out before it perpetuates misuse rooted in misunderstanding model capabilities.

The Othello GPT research from Harvard months ago demonstrated that even a simple GPT model is capable of building world representations from which it derives its outputs. This makes intuitive sense if you understand the training: where possible, recovering the underlying abstraction in the NN will perform better than simply extrapolating predictively from the data.

Not only is GPT-4 more robust at logic puzzles its predecessor failed; I've also seen it solve unique riddles outside any training data, and the paper has explicit examples of critical reasoning, especially in the appendix.

It is extremely unlikely given the Harvard research and the size of the training data and NN that there isn't some degree of specialized critical reasoning which has developed in the NN.

The emerging challenge for researchers moving forward is to get better insight into the black box: where these capabilities have developed, and where the model still falls back on being just a fancy Markov chain.

But comments like yours reflect an increasingly obsolete and yet increasingly popular kind of misinformation about the way these models operate. Someone reading your comment might not think to do things like what the Bing team did in providing an internal monologue for reasoning, or guiding the model towards extended chain-of-thought reasoning, because they would be engaging with the models thinking that only frequency-based context relative to the training set matters.

If you haven't engaged with emerging research from the past year, you may want to brush up on your reading.


> LLMs aren’t reasoning about the puzzle. They’re predicting the most likely text to print out, based on the input and the model/training data.

Just like you.


When albertgoeswoof reasons about a puzzle he models the actual actions in his head. He uses logic and visualization to arrive at the solution, not language. He then uses language to output the solution, or says he doesn't know if he fails.

When LLMs are presented with a problem they search for a solution based on the language model. And when they can't find a solution, there's always a match for something that looks like a solution.


I'm reminded of the interview where a researcher asks firemen how they make decisions under pressure, and the fireman answers that he never makes any decisions.

Or in other words, people can use implicit logic to solve puzzles. Similarly LLMs can implicitly be fine-tuned into logic models by asking them to solve a puzzle, insofar as that logic model fits in their weights. Transformers are very flexible that way.

