
Just because you don't see the hard lines doesn't mean they aren't there. We are deluding ourselves by avoiding a hard definition of intelligence so we can keep believing that we are creating AI when it's really nothing of the sort.



Just because you can't see unicorns doesn't mean they aren't there, but at some point you have to give up the search. It's fine to talk about how, broadly speaking, rats are more intelligent than ants, plants or microbes (which are basically I/O rules with a body), chimpanzees more so than rats, humans more so than chimps etc. But in general there's a ton of overlap and the qualities we associate with intelligence – memory, planning, self-awareness, tool use, whatever – are only loosely correlated continua.

There are a few more binary measures in intelligence research, such as the mirror test, but at best they're only a small piece of the puzzle. There's no sudden point where everything clicks into place.

Of course, if you have such a good definition of intelligence, feel free to enlighten me.


Well I would say that "intelligence" is learning and inference with causal models rather than just predictive or correlative models. You can then cash it all out into a few different branches of cognition, like perception (distinguishing/classifying which available causal models best match the feature data under observation), learning (taking observed feature data and using it to refine causal models for greater accuracy), inference (using causal models to make predictions under counterfactual conditions, which can include planning as a special case), and the occasional act of conceptual refinement/reduction (in which a model is found of how one model can predict the free parameters of another).
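
To make that concrete, here's a minimal toy sketch of my own (not a formalism the definition above commits to): a two-cause "wet grass" model where the class methods roughly play the roles of learning and of inference under an intervention. All the names (ToyCausalModel, do_sprinkler, etc.) are just illustrative.

    import random

    class ToyCausalModel:
        """Hypothetical model: grass is wet if it rained or the sprinkler ran."""
        def __init__(self, p_rain=0.3, p_sprinkler=0.2):
            self.p_rain, self.p_sprinkler = p_rain, p_sprinkler

        def sample(self, do_sprinkler=None):
            # Forward simulation from the causal structure.
            rain = random.random() < self.p_rain
            sprinkler = do_sprinkler if do_sprinkler is not None else random.random() < self.p_sprinkler
            return {"rain": rain, "sprinkler": sprinkler, "wet": rain or sprinkler}

        def fit(self, observations):
            # "Learning": refine the model's free parameters from observed feature data.
            self.p_rain = sum(o["rain"] for o in observations) / len(observations)
            self.p_sprinkler = sum(o["sprinkler"] for o in observations) / len(observations)

        def predict_wet(self, do_sprinkler=None, n=10000):
            # "Inference": predict an outcome, optionally under a counterfactual
            # condition (forcing the sprinkler on), with planning as a special case.
            return sum(self.sample(do_sprinkler)["wet"] for _ in range(n)) / n

    world = ToyCausalModel(p_rain=0.4, p_sprinkler=0.1)
    model = ToyCausalModel()
    model.fit([world.sample() for _ in range(5000)])   # learning from observation
    print(model.predict_wet())                         # ordinary prediction
    print(model.predict_wet(do_sprinkler=True))        # "what if we turned it on?"

Perception in this picture would be the step of deciding which such model best matches the incoming data in the first place.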


It's an interesting perspective, but the thing about this kind of definition is that it's very much focused on the mechanism of intelligence rather than the behaviour that the mechanism produces – which flies in the face of our intuition about what intelligence is, I think.

If we found out that one species of chimp learns sign language through a causal model while another learns it through an associative one (for example), we wouldn't label one more or less intelligent than the other, because it's the end result that matters – don't you think?

Likewise, arguably the ultimate goals of AI are behavioural (machines that can think/solve problems/communicate/create etc.), even if it's been relatively focused on mechanisms lately. Any particular kind of modelling is just a means to that end. Precisely what that end is is still a bit hard to pin down, though.



What do you mean by "associative model"? That doesn't map to anything I've heard of in cognitive science, statistics, machine learning, or Good Old-Fashioned AI.

But actually, I would expect different behaviors from an animal that learns language via a purely correlational and discriminative model (like most neural networks) versus a causal model. Causal models compress the empirical data better precisely because they're modelling sparse bones of reality rather than the abundant "meat" of correlated features. You should be able to generalize better and faster with a causal model than with a discriminative, correlative one.
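
A toy sketch of my own (not from the parent comment) of why the two can behave differently: when a hidden common cause drives the data, a purely correlational conditional breaks under intervention, while a model of the causal structure would not. The names and numbers are made up for illustration.

    import random

    def world(do_treatment=None):
        # Hidden common cause drives both treatment and outcome in the observed data.
        confounder = random.random() < 0.5
        treatment = confounder if do_treatment is None else do_treatment
        outcome = confounder            # outcome ignores the treatment entirely
        return treatment, outcome

    obs = [world() for _ in range(10000)]

    # Correlative model: estimate P(outcome | treatment) straight from observation.
    treated = [o for t, o in obs if t]
    print("correlational estimate:", sum(treated) / len(treated))            # ~1.0

    # Under the intervention do(treatment=True) the hidden cause is untouched,
    # so the true outcome rate stays around 0.5; only a model of the causal
    # structure gets that right from the same observational data.
    interv = [world(do_treatment=True) for _ in range(10000)]
    print("rate under intervention:", sum(o for _, o in interv) / len(interv))  # ~0.5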


I think I meant correlational, but it was really just a placeholder for "a different model". You could replace the chimp with some kind of alien whose thinking model is completely, well, alien to us – but still proves its intelligence by virtue of having a spaceship.

I'm not necessarily saying that different models lead to exactly the same behaviour. Clearly, chimps' models don't generalise as well as ours do and they don't have a language model that matches ours in capability, for example, which leads to different behaviour. But given that their behaviour is generally thought of as less intelligent as opposed to not intelligent at all, it seems like the mechanism itself is not the important thing.


You are repeating the same mistake as before. There is a significant difference between admitting we don't know where those lines are and claiming that they don't exist.


It's not a mistake – I see the difference and I am claiming that those lines don't exist. There's plenty of evidence for that, including the continuous range of intelligent behaviour from plants to humans. It's just an empirical fact that there's no hard line.

Of course, that could be wrong – new evidence may come to light, after all. But even so, it doesn't make any sense to say that trying to understand and replicate intelligence is deluded, just because we don't know where that line is – because figuring out what intelligence is is exactly the problem that people are trying to solve. AI is one part of that, along with more empirical research in fields like cognitive science and biology.

Are people researching quantum gravity deluding themselves because they don't yet have a hard definition of quantum gravity? Figuring that out is exactly the point!


If you are going in the wrong direction, you won't get to your destination by going faster. You should stop and rethink, and you won't do that unless you can admit that you might be wrong.

How much research had you done before you assertively proclaimed that those lines don't exist? Because it looks nothing like a smooth transition to me.



