For the curious, here was the conversation I had:

ME: What's another saying similar to: "The cat is out of the bag" or "The genie is out of the bottle"?

ChatGPT: Another similar saying is "the toothpaste is out of the tube." Like the other two expressions, this phrase conveys the idea that something has been revealed or unleashed, and it cannot be undone or put back into its original state.

ME: Can you invent a new phrase that has the same meaning, which is completely new?

ChatGPT: Certainly! Here's a new phrase with a similar meaning: "The confetti has left the cannon." This phrase suggests that once something is out in the open or has happened, it's impossible to reverse or contain it, much like confetti that has been shot out of a cannon during a celebration.

ME: What's the name for this kind of expression? Aphorism? Fable?

ChatGPT: This kind of expression is known as an idiom. Idioms are phrases or expressions with a figurative meaning that is different from their literal meaning. They are commonly used in everyday language to convey a specific message or sentiment in a more colorful or memorable way.




I don’t understand why people aren’t more impressed that it clearly understands, and can even invent, idioms. That shows some real intelligence.


It’s because they’re mistaken in thinking human intelligence isn’t itself learned stochastic expectation.


This seems backward to me. Wouldn’t you be less impressed by ChatGPT if you thought that human intelligence worked the same way as LLMs?

If humans have some special sauce different from the computer, then it’s crazy that ChatGPT can emulate human writing so well. If humans are also just statistical models, then of course you can throw a big training set at some GPUs and it’ll do the same thing. Why should we be surprised or impressed by idioms?


Well, I don’t think they work exactly like the human mind, because we are likely a more complex ensemble model. But (a) it’s impressive because it achieves a goal computing has pursued for generations, and as a computer scientist I’m excited; (b) its corpus of learning is the amazing collection of everything we built with the modern internet, so its breadth and depth are astounding. The act of creating artificially what we have naturally is itself exciting, though.


I suppose it depends on the extent to which you're impressed by the concept of LLMs; I can see it both ways.

To explain the other way of seeing it: if human intelligence is the same, then holy crap, they cracked robotic 'human' intelligence; it works exactly the same way.


That's how the brain evolved, after all: take incoming input, process it according to previous patterns, and output a reaction to the peripheral nervous system that will enhance survival. The evolutionary purpose of a cortex is to store a statistical model of an animal's environment based on past stimuli. Human intelligence is just a complicated permutation of these initial systemic parameters.


That’s why people have to take courses in reasoning and logic to be able to successfully apply it. It’s not like reasoning and logic are natural human skills. They’re learned through reinforcement learning by repeatedly exposing our brain to patterns again and again until we can train our stochastic expectation juice to not hallucinate an answer.


And it takes ~20 years to train a new brain so it can coherently answer questions about a wide variety of topics. Even worse, you can't even copy-paste it once you're done!


It arguably needs much less training data though.


What we shouldn't do is anthropomorphise it too much. While LLMs can express themselves and interact with us in natural language, their minds are very different from ours: they never learned by having an embodied self, and they can't continuously learn and adapt the way we do. Once a conversation is over, it's as if it never existed, unless it's captured for a future training cycle.

Right now, their ability to learn is severely limited. And, yet, they outcompete us quite easily in a lot of different tasks.


Agreed. There are a hundred different kinds of information processing that go into a human-like mind, and we've kinda-sorta built one piece. And there are a lot of pieces that it would neither be sane nor useful to build (eg. internalized emotions), so we might not see an AI with all the pieces for a very long time ("never" is probably too much to hope for).


It's amusing that our first contact with a completely alien intelligence is with one of our own making.


How? Organisms with brains process every second of their lives; is that not training data on a level comparable with current AI models?


From a pure data-volume point of view, yes, but relatively little of that would seem to be relevant to our intellectual capacities. If GPT were a robot moving autonomously around the world with a full visual, auditory, and tactile apparatus, it might be a bit different.


Hm, I'm not sure how most of that data would be irrelevant; could you clarify? I think all of that data, along with interacting with the environment, creates the level of knowledge and intelligence we have today.


I'm definitely impressed when I see things like this. This is much more impressive than writing five paragraphs of nonsense about a two-sentence prompt.


Is it?

There are many mentions of confetti cannons on the web, along with explanations of how they work (saying something like confetti shoots out of the cannon). ChatGPT just picked a random thing (confetti) and completed the pattern "X out of Y" with the thing confetti comes out of.

It's easy. The cereal is out of the box. The helium is out of the balloon. The snow is out of the globe.
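To caricature the claim in code (a toy sketch only, not how an LLM actually works; the pairs list is made up):

    import random

    # Toy "pattern completion": pick a learned (contents, container)
    # pair and fill in the "The X is out of the Y" template.
    pairs = [
        ("cat", "bag"),
        ("genie", "bottle"),
        ("toothpaste", "tube"),
        ("confetti", "cannon"),
    ]

    x, y = random.choice(pairs)
    print(f"The {x} is out of the {y}.")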

And it's exactly the one thing LLMs are trained to do: reproduce patterns of words. They are (perhaps) already better than humans at that one specific skill (another win for AI research) but I don't think it's a sign of general intelligence.


Like you, I thought these pieces of software and data were little more than statistics-based text generators. But it turns out that this is a Category Mistake.

There was an argument made by Raphaël Millière in a recent Mindscape Podcast [1] with Sean Carroll that finally landed for me. He used the example that human beings are driven to eat and reproduce, so by that argument all humans are just eating and reproducing machines. "Ah! But we developed other capabilities along the way to allow us to be good at that!" And that's the point.

GPT-4, for example, is very, very good at producing pleasing and useful output for a given input. It uses a simulated neural net to do that. Why would one assume that, on the way toward becoming excellent at that task, a neural net wouldn't also acquire other abilities that we associate with reasoning or cognition? When we test GPT-4 for these things (like Theory of Mind), we actually find them.

"Ah hah!" you say, "Humans are set up to learn from the get go, and machines must be trained from scratch." However if you consider the entirety of our genetic legacy together with our childhoods, those are our equivalent "training" from scratch.

I don't think it can be easily dismissed that we're seeing something significant here. It's not human-level intelligence yet. Part of the reason for that is that human brains are vastly more complex than any LLM at the moment (hundreds of trillions of "parameters" in LLM-speak, along with other advantages). But we're seeing the emergence of something important.

[1] https://www.youtube.com/watch?v=aUJOcVPdDvg


That emergence is precisely what I'm looking for evidence of.

Human beings evolved to eat and reproduce and yet here we are, building computers and inventing complex mathematical models of language and debating whether they're intelligent.

We're so far from the environment we evolved for that we've clearly demonstrated the ability to adapt.

ChatGPT doing well at a language task isn't demonstrating that same ability to adapt because that's the task it was designed and trained to do. ChatGPT doing something completely different would be the impressive example.

In short: I don't categorically reject the possibility that LLMs might become capable of more than being "statistics-based text generators", I simply require evidence.


This paper from late last year shows that LLMs are not "just" stochastic parrots, but they actually build an internal model of the "world" that is not programmed in, just from trying to predict the next token.

https://ar5iv.labs.arxiv.org/html/2210.13382

PS: More research has been done since that confirmed and strengthened the conclusion.


Another poster shared a link to this paper last week for Theory of Mind: https://arxiv.org/abs/2302.02083

We're seeing those other capabilities emerge, like being able to play chess even though it's not been trained to do so. That is, these LLMs are displaying emergent abilities associated with reasoning.

These LLMs aren't R. Daneel Olivaw or R2D2 (which is what I think of when I think of the original term for AI, and what we took to calling AGI). We're closer to seeing the just-the-facts AIs we encounter in Blindsight. Intelligence without awareness.

Funny that we still have to use science fiction to make our comparisons, because our philosophy of intelligence, mind, and consciousness is insufficient to speak on the matter clearly.


The issue I see here is that you are doing a worse job at this than ChatGPT. Creating idioms is hard; that is why we left most of them to Shakespeare.

- I regularly return cereal to its box.

- "helium" and "balloon" have a more awkward rhythm than "confetti" and "cannon". It also loses the connotations of sudden, explosive and exciting change.

- Snow & globe I'm not even sure what that means in practice. It has poor prospects as an idiom. Is the snow even known for leaving globes?


> "helium" and "balloon" have a more awkward rhythm than "confetti" and "cannon". It also loses the connotations of sudden, explosive and exciting change.

Not only that, but "the confetti has left the cannon" is alliterative, which makes the phrase even more poetic.


I actually agree, it's a very good phrase.

But I do think it's cherry picking the most impressive example. I repeated the dialog (and some variations), each time asking for a completely new idiom, and ChatGPT responded with several phrases that aren't new at all:

"The toothpaste is out of the tube"

"The lid has been lifted"

"The secret has been spilled"

"The arrow has left the bow"


> But I do think it's cherry picking the most impressive example.

Yes, but don't we cherry pick from what humans have said, too? I'm sure there have been many dumb and obvious proverbs that didn't survive.

> ChatGPT responded with several phrases that aren't new at all

Are you using the GPT-4 version of ChatGPT? That's what GP used.


It also attests to the ramifications: the magnificent difficulty of cleaning up confetti. You will keep occasionally finding confetti around the house for months or years.


To continue with some human-brain-powered pedantry for fun, I’d argue that “the snow is out of the globe” is not a valid idiom to suit the “X out of Y” pattern.

We have multiple example pairings of X and Y, but the common components are that putting X back in Y would be impossible or extremely difficult, and that X is in some way meant to no longer be contained inside Y, whether as a desired outcome, its normal function, or the natural, expected, and thus inevitable result. Cats want to escape, helium leaks, confetti is expelled to have the desired effect, and toothpaste is squeezed out to use it…

For the snow to come out of a snow globe you have to smash it, which is not normal usage, is not normally meant to happen, and shouldn’t happen by itself. That makes “the snow is out of the globe” not a proper member of this “family” of idioms. (Also, I’m not sure there’s an agreed-upon collective noun for idioms.)


"The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence."

https://en.wikipedia.org/wiki/AI_effect


Better that than the opposite effect: assuming that because a system solves a single problem very well, it is intelligent.

Is Stockfish intelligent?

Is a system with A* pathfinding intelligent?

I would define intelligence as the ability to solve a wide variety of novel problems. A system built to be excellent at a single task may be better than humans at that task but still lack "intelligence".
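To make the contrast concrete, here is roughly what a single-task system like A* amounts to (a minimal sketch on a 4-connected grid; the grid encoding and function name are just for illustration):

    import heapq

    def a_star(grid, start, goal):
        # Shortest path on a grid of 0s (free) and 1s (walls),
        # guided by a Manhattan-distance heuristic.
        def h(p):
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

        frontier = [(h(start), 0, start, [start])]  # (estimate, cost, node, path)
        visited = set()
        while frontier:
            _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            r, c = node
            for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nbr
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                    heapq.heappush(frontier, (cost + 1 + h(nbr), cost + 1, nbr, path + [nbr]))
        return None  # no path exists

It finds optimal paths more reliably than any human, and has nothing whatsoever to say about anything else.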

We still don't know what that even is exactly, but historically people consistently underestimate how difficult it is.

Not even knowing how to approach it, researchers work on solving single specialized problems instead and make little progress on whatever "intelligence" is.

(If you'd prefer a different definition of intelligence under which Stockfish and GPT are intelligent, then what would you call the ability to solve a wide variety of novel problems? Feel free to substitute that word for "intelligence" above if you'd like to understand what I'm saying.)


> Is Stockfish intelligent?

> Is a system with A* pathfinding intelligent?

I'm not sure if we should get stuck on definitions of intelligence.

The fact is that these tools are useful, as are the currently existing AIs. The latter can also pass for humans in many ways, while the algorithms you mentioned can only pass for humans in very narrow domains. Both can exceed human performance in some ways.

Eventually, AIs may be indistinguishable from humans, or may convince humans that they should be treated differently from "mere" programs and algorithms, and at that point we will have entered a new era; call it what you will.


We'll have to "get stuck on" definitions of intelligence if we want to talk precisely about what LLMs are capable of.


Not necessarily, as we can just evaluate them on their performance as we give them ever greater challenges.

To do this we do not need to consider whether they're intelligent at all.


In a way, this comment perfectly encapsulates why the argument "machines will never replicate human behavior" is so ridiculous. Instead of engaging with the discussion and topic, you chose a position, and then tried to justify it without really thinking about why one example works and the other one doesn't. In doing so you're literally showing that for certain topics, machines are already more capable than some humans.


I didn't say "machines will never replicate human behavior" so I don't think you're engaging with what I said.


Sure, you didn't say that. You merely said:

> And it's exactly the one thing LLMs are trained to do: reproduce patterns of words. They are (perhaps) already better than humans at that one specific skill (another win for AI research) but I don't think it's a sign of general intelligence.

while completely missing why the machine did a better job.


Is it so impressive that “the fart left the butt”? :)


That’s brilliant. Though personally I prefer "The SARS-CoV-2 virus has left the lab."


Not exactly sure why it would be surprising that it can come up with a convincing idiom when it can produce remarkably good _poetry_.


I think the thrilling part is that it's a somewhat atomic concept that can somewhat convincingly be shown not to exist in the training data. Poetry is more impressive if it's as original, but it's harder to show that it isn't just stitched together from the training data.


I just asked GPT-4 to come up with more such "provably original idioms":

""" Here are a few more examples of idioms with meaningful and provable atomic originality:

"The kite has touched the stars" - This phrase could mean that someone has achieved a seemingly impossible goal or reached a level of success that was thought to be unattainable.

"The paint has mingled on the canvas" - This idiom might convey the idea that once certain decisions are made or actions taken, the resulting outcome can't be easily separated or undone, similar to colors of paint that have blended together on a canvas.

"The clock has chimed in reverse" - This expression could be used to describe a situation where something unexpected and unusual has occurred, akin to the unlikely event of a clock chiming in reverse order.

"The flower has danced in the wind" - This phrase could signify that someone or something has gracefully and nimbly adapted to changing circumstances, just as a flower might sway and move in response to the wind. """


Yeah but can it beatbox?


That is impressive, especially given that neither Google nor DuckDuckGo find any previous trace of that idiom.


did you search in different languages?


Conspiracy theory: this idiom used to be widespread on the internet, but a shadowy cabal of AI maximalists erased millions of such expressions from search engines so GPT could appear to re-invent them.

It is a good expression though -- evocative but not gross or violent. You could imagine many less successful analogies to something ejecting something else.


That is not a conspiracy theory... it is alternate reality fan fiction.

While making up "what ifs" can be fun, it doesn't merit either of the words "conspiracy" or "theory".



