
> Um, what?

Gotcha, you're not actually interested in conversation.




No, I literally have no idea what you are talking about, and how could I? What a projection.


https://openai.com/research/multimodal-neurons

When you ask a human a question that involves a concept - in the article above it's Halle Berry, because it's a funny discovery, but it could be something as broad as science - you can often map that concept to specific neurons or groups of neurons. Even if the question doesn't contain the word "science", it still lights up that neuron if it's about science. The same is true of neural networks: they eventually develop neurons that mean something, conceptually.

It's not always true that the neurons neural networks develop are the same ones humans have developed, but it is true that they aren't thinking purely in words; they have a map of how concepts relate and interact with one another. That's a type of meaning, and it's a real model of the world. It's not the same one we have, and it's not even close to perfect, but neither is ours.
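To make the idea concrete, here's a minimal probing sketch, assuming the Hugging Face transformers package and GPT-2 as a stand-in (the linked paper actually studied CLIP). The layer and unit indices below are arbitrary placeholders, not known "concept neurons"; the point is only that individual hidden units have activations you can compare across prompts that do and don't touch a concept.

    import torch
    from transformers import GPT2Tokenizer, GPT2Model

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
    model.eval()

    LAYER, UNIT = 8, 300   # placeholder indices, for illustration only

    def unit_activation(text):
        ids = tok(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids)
        # hidden_states[LAYER] has shape (batch, seq_len, hidden);
        # average one unit's activation over the sequence.
        return out.hidden_states[LAYER][0, :, UNIT].mean().item()

    print(unit_activation("The experiment confirmed the hypothesis."))
    print(unit_activation("The cat slept on the warm windowsill."))

A unit that fires consistently for one theme and not others, across many prompts, is the kind of "concept neuron" the paper describes (there in CLIP, responding across images and text).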


> they have a map of how concepts relate and interact with one another

Yeah, but not one that operates the way Chomsky described. It can't tell you whether the earth is flat or not; humans figured that out. ChatGPT can only tell you what other humans already said. It doesn't matter that it does so based on a neural net. You completely missed the point.


ChatGPT can tell you the earth is round. You can ask it yourself.

If you’re saying ChatGPT can’t look at the cosmos and deduce it, well, it doesn’t have access to visual input, so that’s not the dunk you think it is.

If you’re saying ChatGPT can’t learn from what you tell it, that’s a design decision by OpenAI, not inherent to machine learning.

There are absolutely models that can do primitive versions of deducing the earth’s roundness, and ChatGPT can deduce things based on text (e.g. you can ask it to play a chess position that’s not in its training set and it will give reasonably good answers most of the time).
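For what it's worth, that experiment is easy to try yourself; here's a rough sketch using the official openai Python SDK (v1.x). The model name and FEN string are just placeholders, and there's nothing chess-specific in the API call: you hand the model a string and get a string back.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Italian Game after 3.Bc4, Black to move (an arbitrary example position)
    fen = "r1bqkbnr/pppp1ppp/2n5/4p3/2B1P3/5N2/PPPP1PPP/RNBQK2R b KQkq - 3 3"

    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder; any chat model works here
        messages=[{
            "role": "user",
            "content": f"Here is a chess position in FEN: {fen}. "
                       "Suggest a reasonable move for Black and briefly explain why.",
        }],
    )
    print(resp.choices[0].message.content)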


Lol, you can't just end-run the conversation by calling what ChatGPT does "learning" and then leaving it at that. That's not an argument, it's a tautology.

> ChatGPT can deduce things

No, it can't, because it doesn't understand anything about chess; it's just determining the best response based on the information fed into it. It's not discerning rules.

You are just fundamentally ignoring Chomsky's point as a means of trying to rebut him. It doesn't work like that. He gave a fairly explicit example of what intelligence is and why ChatGPT does not express it, and your response is basically "but ChatGPT is intelligent because it gave me an answer to something I asked it". These conversations would be so much better if they didn't constantly revolve around this sort of basic failure to launch in thinking through these problems.


> If you’re saying ChatGPT can’t learn from what you tell it, that’s a design decision by OpenAI, not inherent to machine learning.

Well, if "machine learning" means "keep throwing more data at an ever-larger network of statistical weights" in the hope that it better approximates actually knowing things, then the inability to learn is inherent to machine learning.

The argument is that these networks don't understand the concepts we use to generate our intelligence. They don't understand what the world is, or what an object is. They only understand statistical associations between words.

The symptoms of this manifest as hallucinations that the model can't correct with its own fact-checking. It can happily tell you the world is flat at some point, even if it told you it was round earlier, because there's no concept of "world" upstairs. It's just math on strings.

When it's computing that chess position, it's not picturing a board, a game, and objectives like capturing pieces and defeating an opponent; the argument is that it's just doing stats on text (chess-notation text). It's a black box stuffed to the brim with brute-force "intelligence" from its training data, and it uses that to seem intelligent in the sense that we're intelligent: actually able to learn and reason with concepts.
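As a toy illustration of "stats on strings" (using GPT-2 via the transformers library as a stand-in, since ChatGPT's internals aren't public): given a PGN-style move prefix, the model just ranks possible next tokens by probability. There is no board object anywhere in the computation.

    import torch
    from transformers import GPT2Tokenizer, GPT2LMHeadModel

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    lm = GPT2LMHeadModel.from_pretrained("gpt2")
    lm.eval()

    prefix = "1. e4 e5 2. Nf3 Nc6 3."
    ids = tok(prefix, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits[0, -1]      # scores for the next token only
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, 5)
    for p, i in zip(top.values, top.indices):
        print(f"{tok.decode([int(i)]).strip()!r}: {p.item():.3f}")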

The reason we don't need as much training data, and actually learn concepts, may be that our brains use a similar-yet-different (we have no idea) mechanism in which concepts can actually be represented, while language models are boiling the ocean trying to encode abstract concepts in the extremely inexpressive terms of a statistical circuit. If that's all language models can do, they'll never be able to encode concepts the way our brains can; you'd need a galaxy-sized computer to store the concepts a quarter of a human brain can, say, unless you use a better method. Think of trying to write code directly in machine binary: it'll take you forever compared to doing it in Python.



