Hacker News
Moravec's Paradox (wikipedia.org)
155 points by jkuria on Aug 15, 2019 | 87 comments



And this is why logistics companies aren't replacing humans with robots very quickly.

Simply put, robots today do not have the combination of speed, accuracy, gentleness, strength and 'hand-eye' coordination needed to do simple tasks that a 6 year old could learn in minutes.

You want a 4 ton pallet lifted from spot A to spot B? There's a robot arm that can do that same task over and over flawlessly. You move that pallet 12 inches to the left and rotate it 10 degrees? Best case, it realizes it can't do it and stops trying. Worst case, you have 4 tons of probably-broken goods thrown all over the place.

There are always examples of specific subsets being done well: a robot that can grab and align parts quickly (so long as they don't overlap and they're the expected parts); a robot that can carefully reach into a cluttered container and grab just one item (but only one item per minute vs. a human's 20); a robot that can carefully lift an egg without damaging it (provided the egg is within acceptable size and weight bounds).

You want the whole package of skills? For now, you need a human.


Re: "You move that pallet 12 inches to the left and rotate it 10 degrees? Best case, it realizes it can't do it and stops trying. Worst case, [chaos]"

Maybe it only has to be 90% accurate. A remote-control human operator can help those who get stumped. Record the problem scenarios for later analysis to improve the next version. The bot will gradually improve, and human intervention will gradually shrink.

This applies to cars also. Autonomous cars can contact humans when stumped, for example by lane closures or officers directing traffic. This includes both the passenger (if qualified) and remote-control operators.

The bot will be conservative, along the lines of "when in doubt, ask for help". As it learns to solve problems via experience-based tuning (above), there will be fewer times it has to ask a human.
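
For concreteness, here is a minimal sketch of that "when in doubt, ask for help" loop. Everything in it (the confidence model, the 0.9 threshold, the job fields) is an illustrative placeholder, not how any real logistics stack works:

    from dataclasses import dataclass

    CONFIDENCE_THRESHOLD = 0.9  # the "maybe it only has to be 90% accurate" idea

    @dataclass
    class PalletJob:
        offset_inches: float   # how far the pallet has drifted from the taught pose
        rotation_deg: float

    def plan_confidence(job: PalletJob) -> float:
        # Stand-in for a real perception/planning stack: confidence drops as the
        # pallet deviates from the pose the robot was programmed for.
        return max(0.0, 1.0 - 0.02 * abs(job.offset_inches) - 0.03 * abs(job.rotation_deg))

    def handle(job: PalletJob, hard_cases: list) -> str:
        if plan_confidence(job) >= CONFIDENCE_THRESHOLD:
            return "robot executes its own plan"
        hard_cases.append(job)                    # recorded for later analysis / retraining
        return "escalate to a remote human operator"

    hard_cases: list = []
    print(handle(PalletJob(0.0, 0.0), hard_cases))    # robot executes its own plan
    print(handle(PalletJob(12.0, 10.0), hard_cases))  # escalate to a remote human operator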


"it is comparatively easy to make computers exhibit adult level performance on intelligence tests"

Is this even remotely close to being true? I'm not aware of any AI projects that can match an average human score on a standardised IQ test.

In fact I'm not aware of any AI projects that can understand how an IQ test paper works without prior coaching.

Most humans won't just attempt the problems but will be able to work out what the questions are asking, even if they have never seen those kinds of questions before.

So this seems wholly wrong to me. I don't disagree that basic motor skills and perceptions are an important and difficult part of the puzzle, but I absolutely disagree that abstract thinking is a comparatively trivial solved problem - or that board game solvers are anything other than a warm-up exercise.


These projects exist, and are so easy that we don't even bother to use AI on them. As an example of high-level reasoning, Metamath has a body of theorems of ZFC using classical logic [0]. This body of theorems is written out using entirely formal (symbolic, syntactic) reasoning. Over 32,000 theorems are checked in total, at a rate of around 6 kilotheorems per second.

If we recall that proof search is (Turing-)undecidable, for the systems that we care about, then it becomes unsurprising that we do not put AI to the task, other than as a light at the end of the tunnel, since proof search is not only AI-hard, but as impossible for AIs as for humans. That said, there have been constant improvements to the automated generation of human-readable proofs, and the papers have the same shape as AI research papers from previous generations of AI, with expert systems and inference engines [2][3].

Your focus on IQ tests is a little unhealthy [1].

[0] http://us.metamath.org/mpeuni/mmset.html

[1] https://en.wikipedia.org/wiki/Intelligence_quotient#History

[2] https://www.ijcai.org/Proceedings/15/Papers/172.pdf

[3] http://poincare.matf.bg.ac.rs/~janicic/papers/2014-amai-afta...
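
To see why checking proofs (as opposed to searching for them) is so cheap, here is a toy checker for a system with hypotheses plus modus ponens. It bears no resemblance to Metamath's actual format; the point is just that verification is a single linear pass with no search:

    # Each proof line is either a hypothesis or follows from two earlier lines by
    # modus ponens; checking is one pass over the list, with no search at all.
    def check_proof(hypotheses, proof):
        derived = []
        for formula, justification in proof:
            if justification == "hyp":
                ok = formula in hypotheses
            else:                                   # ("mp", i, j): lines i and j hold A and A -> B
                _, i, j = justification
                a, implication = derived[i], derived[j]
                ok = implication == ("->", a, formula)
            if not ok:
                return False
            derived.append(formula)
        return True

    # Example: from p and p -> q, derive q.
    hyps = {"p", ("->", "p", "q")}
    proof = [("p", "hyp"),
             (("->", "p", "q"), "hyp"),
             ("q", ("mp", 0, 1))]
    print(check_proof(hyps, proof))  # True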


The SI unit for “theorem / second” is the Erdos.


A standard RPM[1] test should be a fairly simple problem for a good old-fashioned reasoning system to solve if you first remove the sensorimotor aspects of it! That is, you have to translate the test into a purely logical representation; otherwise what you have is just an instance of Moravec's paradox.

[1] https://en.m.wikipedia.org/wiki/Raven%27s_Progressive_Matric...
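
As a toy illustration of that translation, with the sensorimotor part assumed away entirely: the cells below arrive pre-encoded as attribute values, so the remaining "reasoning" is trivial rule induction (constant along the row, or an arithmetic progression). Producing such a representation from the printed figures is the part Moravec's paradox says is hard:

    # A Raven's-style 3x3 matrix, already abstracted into attributes per cell;
    # the bottom-right cell is the one to predict.
    matrix = [
        [{"shape": "circle", "count": 1}, {"shape": "circle", "count": 2}, {"shape": "circle", "count": 3}],
        [{"shape": "square", "count": 1}, {"shape": "square", "count": 2}, {"shape": "square", "count": 3}],
        [{"shape": "triangle", "count": 1}, {"shape": "triangle", "count": 2}, None],
    ]

    def predict_missing(matrix):
        row, answer = matrix[2], {}
        for attr in matrix[0][0]:
            a, b = row[0][attr], row[1][attr]
            # Either the attribute is constant along the row, or it progresses.
            answer[attr] = a if a == b else b + (b - a)
        return answer

    print(predict_missing(matrix))  # {'shape': 'triangle', 'count': 3}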


That's a problem I have with the paradox. While it's true for some high-level cognitive skills, it's not true in general. General-purpose AI has proven to be as hard as or harder than the sensory-motor skills. Otherwise, we would have HAL-like computers.


Put it in historical context: the mainframes of Moravec's day were less powerful than the microcontroller that controls the windshield wipers in your car. The fact that guys like Moravec were able to build checkers computers, things that could "talk" (at least in the M-x Emacs sense), things that could diagnose infectious diseases as well as a doctor, and things that did applied mathematics (those integrals didn't integrate themselves back in the day) is absolutely astounding.

The fact that we still can't generally solve crap like the lost-robot problem, which insects are able to handle, should also strike you as astounding.

That's what Moravec's paradox means. Stuff humans find difficult because it demands logic, abstraction, and theory is often easy for computers, while stuff that bugs can do naturally is still impossible for computers.


Indeed. At the time the paradox was proposed, there had not been much progress in natural language understanding, so the scope of what it considers to be AI has at least one important part missing.


Language is a low-level task. You don't think about decoding "purple" or associating it with a color; that's automatic. Similarly for decoding letter shapes, etc.

Even forming responses is very automatic. You yell "Stop"; you don't think about what's the best phrase to convey that they are about to hit a car.

Though, like breathing, you can pay attention to it or let it run while doing something else.


To the contrary, of all the cognitive capabilities that evolution has bestowed on us, I think language is clearly the highest.


Defining your own scale is fine. But, don’t be confused when you are the only one using it.

High vs low level tasks are defined in terms of what you can do while doing the task. You can be thinking about a math problem for example while talking with someone.


I will agree that tasks like parsing sounds and forming words are low-level tasks, but my original post was about understanding the meaning of language, and it is not just in my own private lexicon that this is regarded as a higher cognitive process, as a brief search will reveal (e.g. [1].)

With regard to your ranking test, is it talking with someone or thinking about a math problem that gets ranked as low-level? It would seem to work either way, in which case, I guess it would be both.

[1] https://www.thefreedictionary.com/higher+cognitive+process


Medicine, psychology, and neuroscience all have different definitions of this stuff.

understanding the meaning of language

Understanding is vague in these terms. You can notice someone calling your name while distracted with other activities. That's an automatic process, whereas assigning meaning to each word in English is a difficult task. How do you teach a computer what cells are, in the abstract?

There is also some high level language decoding where people extract meaning from poetry etc. But, we need to get the low level task done first.

PS: As to ranking, there are a lot of high-level tasks you can't do at the same time. They are often described as things that take concentration, but few language tasks make this list.


> Medicine, psychology, and neuroscience all have different definitions of this stuff.

Well, maybe, but is it conventional usage in any of those fields to call the cognitive processes involved in producing and understanding linguistic communication low-level? Regardless, my ranking it high-level is not just my private usage, as you claimed. In fact, I would guess that it is a more conventional ranking than the one that seems to be implied by your method.

> How do you teach a computer what cells are in the abstract?

Well, precisely, it's at a higher level of abstraction than just picking out the word (I am pretty sure that's a conventional usage of 'higher'.)

>There is also some high level language decoding where people extract meaning from poetry, etc.

Poetry may be particularly obtuse, but that does not make all other meaning-extraction low-level. A lot of it, IMHO, belongs in that 'etc.'

PS to your PS: There are also concentration-demanding motor skills. And the point about your ranking method is that it seems to put thinking about a math problem at the same level as talking with someone, which you are claiming is low-level.


> thinking about a math problem at the same level as talking with someone

Plenty of people can hold a simple conversation while doing differential equations. As the conversation’s complexity increases that stops being the case.

There is clearly a difference when talking to a distracted person. But, decoding “Do you want a soda?” and responding appropriately seems to require little if any attention.

Neuroscience gives support for this in terms of which parts of the brain are involved in different aspects of language.


I am interested in whether you consider doing differential equations a simple task here. Presumably so, as your method puts it on a par with simple conversations.


No, you can't do multiple high-level tasks at the same time. Low-level automatic tasks like breathing and balance are fine to do at the same time.

Being able to do differential equations and simple conversations but not complex conversations means simple conversations are lower level.

Thus, it’s not language that makes complex conversations a high level task.


You seem to be missing the point here - your method is symmetrical, and it would be begging the question to break the symmetry the way you want it to go. One can do simple math (calculus, even) while having a complicated conversation -- and if you cannot, it could be because complicated conversations are so intellectually demanding that you can't even do simple math concurrently.

And your argument is also beside the point, now that you have restricted it to simple conversations.


The argument is not limited to math. You also can’t learn to play the violin while holding a deep unrelated conversation. This extends to a huge range of activities.

But, you don’t lose language skills during these activities.

Thus it’s (A, B, C, ..., or Z) and language. Thus language is clearly the lesser activity.

PS: People can often count or do memorized addition etc, but doing complex math tasks is off the table. It’s odd that some things even prevent you from recognizing symbols.


You need to show that, unlike the math case, it does not work the opposite way in each case.

What a coincidence it is that out of all A..Z , you should have picked math, the one that doesn't work, as your first example!


> The one that does not work

How so? You can recall facts, not do math. Try adding, say, 2437 + 5627 while learning to juggle. It's fine if you have already learned to juggle or have memorized the answer, but not if you are both doing the computation and learning to juggle at the same time.

On the other hand people will hold a simple conversation while practicing juggling.


How so? Just reread my second-to-last post. It, in its entirety, addresses how so.

If the math example works, why expand the scope to A..Z? Simply having math as an exception renders it pointless.

https://news.ycombinator.com/item?id=20708638


>One can do simple math (calculus, even) while having a complicated conversation<

As I said, you can't do math. You can recall facts, like what the capital of France is or what 3 * 7 is, but you can't do computation requiring working memory. So, at most you're making a semantic argument based on misrepresenting what doing math is.

This is not a semantic argument; we can argue about the difference between practicing juggling vs. learning juggling, but that's all been worked out in the literature by people doing decades of experiments.


So you are saying that complicated conversations are so cognitively demanding that it is not possible to do any sort of math while engaged in one. On the other hand, math is not so cognitively demanding that you can't have a simple conversation concurrently.


Yes, of course, these are the limits; not everything requires absolute concentration. And conversation quickly passes that "simple" threshold.


OK, with this new-found agreement, let's revisit the original issue:

You claimed that my categorization of language (and, specifically, the cognitive processes involved in understanding linguistic communication, which is what I was discussing in the post you replied to) as high-level, was contrary to professional usage, where it is, you claim, regarded as a low-level capability. In support of that claim, instead of doing the obvious -- offering some citations (as I had done) -- you present, as the accepted way to make the distinction, a test: language is lower-level than mathematics because you can have a conversation while doing mathematics in your head.

Or should that be that you can do mathematics in your head while having a conversation? The first problem with this test is its symmetry: in itself, it does not rank the one thing over the other, as you can switch them around (for that matter, it cannot even establish that they are at different levels.) To try and get around that problem, you insist that mathematics is a high-level one, and so language must be low-level.

Implicit in this move is the corollary that, by definition, one can do at most one high-level task at a time. Perhaps that is explicitly what people mean when they refer to a cognitive capability as high-level (though I doubt it), but if so, it is an assumption that should be possible to verify empirically (and you should do so, if you want to make use of it).

Therefore, your claim that mathematics is the high-level capability is begging the question, as, through the corollary of the previous paragraph, it is tantamount to an a priori claim that language comprehension is the lower-level one. You have not shown that language comprehension is lower level; you have asserted it.

To try to avoid the problem that mathematics is not feasible concurrently with all conversations, you next modified the claim to include only simple conversations. Not only does this render the test moot, as it excludes language in general (the observation that a simple conversation is easier than calculus is trivial and uninformative in the general case), it is also a tendentious move; for one thing, you have not made a similar bifurcation of mathematics. Also, as you acknowledge at one point, it implies that, in your way of looking at the issue, simple language comprehension (or, for that matter, simple mathematics) is on a different cognitive level than difficult language comprehension or mathematics (note that there is a four-way comparison to be made there.) Again, maybe most people think that is how the concept of cognitive levels is to be understood, but I doubt they do -- and if they do, then it is sufficient for the point I was making in my original post.

Now you have come to say that complicated conversations are so cognitively demanding that it is not possible to do any sort of math while engaged in one, while math is not so cognitively demanding that you can't have a simple conversation concurrently. From the corollary of your position (as explained three paragraphs up), that would make mathematics lower-level than language comprehension! This follows from your own position because the latter is capable of excluding the former, but not vice-versa.

This just goes to show that your test is not achieving what you think it does. It is possible that you are coming from some actual science, but if so, it is getting lost in your insistence on defending this test. More relevantly, it is possible that people working in the field do, in fact, regard language comprehension as a low-level cognitive capability, but this is not the way to show they do.


I think it is true if you put proper emphasis on the word "comparatively". It's maybe better to state it as "It is much harder to make computers control movement as well as a toddler, than it is to make them exhibit adult level performance on intelligence tests."

Or, to put it another way, the hardest part about teaching a computer to do a classical IQ test is how to hold the pencil, not how to find the right answers.


I believe test-taking bots fail when a question assumes human experience or "common sense". If the question starts out, "If you eat half of an apple...", the bot won't naturally understand "eat" in terms of what it does to an apple. It has to be programmed with some kind of model or heuristics for eating ahead of time.


The point is that you could train AI to perform well at IQ tests but it doesn't mean that it demonstrated actual intelligence.


This is a case of where our intuitions and expectations are deeply flawed. It's not a paradox.

I think the explanation of it is pretty simple. To ourselves, the unconscious things like vision and bodily movement tend to feel effortless. Whereas the consciously controlled things like reasoning involve conscious effort, and they can feel difficult to us. The "paradox" comes from assuming that the amount of effort or difficulty that we consciously experience corresponds to the actual degree of sophistication of the processing involved.


"Paradox" does not mean "contradiction", it means "appears to be a contradiction (at least at first glance) whether or not it actually is".

This result is highly unintuitive to humans, and is hence a paradox.


Newton's laws of motion would be paradoxes according to your definition. An educated modern person is very familiar with them, but they are highly unintuitive to the uninitiated - this is why it took so long to invent them and why they were such a big deal.

There's a distinction I've made in another comment between appearing to be contradictory to reasonably-well established knowledge vs appearing to be contradictory to intuitions. I don't think the latter are paradoxes, and I think the case in question is one of these.


It is possible to fall (essentially) forever without hitting the ground. In fact, the ISS does exactly that.

In addition an object can be "in freefall" whilst travelling upwards. A ball thrown upwards begins "falling" the moment it leaves your hand.

(And this is not just an arbitrary definition of the word "fall" - if you are catapulted out of a slingshot, you feel weightless from the moment you leave the slingshot, not just when you begin to come back down)

These are direct consequences of Newton's laws of motion, and they are unintuitive to most modern people, even those who learned Newton's laws in high-school science.

They are apparent contradictions, and hence I would classify them as paradoxes.


Quine had three classes of paradox:

> A veridical paradox produces a result that appears absurd but is demonstrated to be true nonetheless. Thus the paradox of Frederic's birthday in The Pirates of Penzance establishes the surprising fact that a twenty-one-year-old would have had only five birthdays if he had been born on a leap day. Likewise, Arrow's impossibility theorem demonstrates difficulties in mapping voting results to the will of the people. The Monty Hall paradox demonstrates that a decision which has an intuitive 50–50 chance is in fact heavily biased towards making a decision which, given the intuitive conclusion, the player would be unlikely to make. In 20th-century science, Hilbert's paradox of the Grand Hotel and Schrödinger's cat are famously vivid examples of a theory being taken to a logical but paradoxical end.

> A falsidical paradox establishes a result that not only appears false but actually is false, due to a fallacy in the demonstration. The various invalid mathematical proofs (e.g., that 1 = 2) are classic examples, generally relying on a hidden division by zero. Another example is the inductive form of the horse paradox, which falsely generalises from true specific statements. Zeno's paradoxes are 'falsidical', concluding, for example, that a flying arrow never reaches its target or that a speedy runner cannot catch up to a tortoise with a small head-start.

> A paradox that is in neither class may be an antinomy, which reaches a self-contradictory result by properly applying accepted ways of reasoning. For example, the Grelling–Nelson paradox points out genuine problems in our understanding of the ideas of truth and description.

> A fourth kind, which may be alternatively interpreted as a special case of the third kind, has sometimes been described since Quine's work.

> A paradox that is both true and false at the same time and in the same sense is called a dialetheia. In Western logics it is often assumed, following Aristotle, that no dialetheia exist, but they are sometimes accepted in Eastern traditions (e.g. in the Mohists,[12] the Gongsun Longzi,[13] and in Zen[14]) and in paraconsistent logics. It would be mere equivocation or a matter of degree, for example, to both affirm and deny that "John is here" when John is halfway through the door but it is self-contradictory simultaneously to affirm and deny the event.

https://en.wikipedia.org/wiki/Paradox


I don't think it's a paradox by any of those definitions. Why do you think it is?


Clearly number 1: an unintuitive, unexpected result that is nevertheless true. Other examples include Simpson's paradox, Braess' paradox, or the mentioned Monty Hall paradox.


I agree with that as the best fit to Quine's categories, but it is also paradoxical (though not a logical paradox) in that one might think that such a result would have far-reaching implications for cognitive science and AI, but, on reflection, it is not a disruptive result.


There are paradoxes that are unintuitive because they appear contrary to other knowledge that we seem reasonably certain of. Then there are things that people find unintuitive or unexpected simply because they run contrary to their intuitions and assumptions. Like, for numerous people in history, the idea of a spherical Earth or how far away the stars really are. These latter cases aren't paradoxes.

Is there some established knowledge that the relative difficulties of these AI tasks appear to run contra to? I believe it is just intuition and assumptions they run contra to.


There's no clear boundary between assumptions and knowledge. Spherical Earth and distant stars feel no longer paradoxical because we have internalized the more correct model. However, the latter is actually a good example of a result that was paradoxical in its time! Early attempts to measure stellar parallax[1] were unsuccessful, which was at odds with the heliocentric model, unless stars were actually unimaginably far away, something that was deemed implausible at the time.

It seems that most paradoxes of this type result either from having an accurate-seeming model that nevertheless fails to predict an experimental result, or from making an intuitive but incorrect assumption that a model predicts something which, in fact, it does not.

Things like the stellar parallax and Olbers' paradox are examples of the former class, whereas Simpson's paradox and the Monty Hall problem are instances of the latter.

[1] https://en.wikipedia.org/wiki/Stellar_parallax#Early_theory_...


There doesn't have to be a clear boundary to say that, for example, there's a difference between unfounded assumptions and beliefs that have been tested to different degrees of rigour. What basis is there for the belief that abstract reasoning ought to involve more sophisticated processing? I think it's just an assumption with no solid basis (my initial comment gives the reason I believe that assumption exists).

Yes, the stars thing may have felt paradoxical but it isn't itself a paradox. The thing is that if all it takes for something to be considered a paradox is for it to feel paradoxical to a sufficient number of people, then a lot of well-established bits of scientific knowledge -- that are in no way paradoxes to people who have the relevant undergraduate degrees -- are paradoxes because they feel that way to the population at large.


Well, outside of math, "paradox" is not a fundamental ontological category. What is a paradox now may not be so in a hundred years, and hindsight is always 20:20.

Paradoxes of this type are definitely paradoxes if they contradict something that appears to be common sense and a consensus position of some sort, irrespective of the level of rigor. The Monty Hall problem confuses even career statisticians, but that's not because the common but wrong answer is somehow based on sound statistical knowledge; it's simply because the answer seems so intuitive and self-evident that we don't bother to "shut up and calculate".

It seems to me that to count as a paradox, a result has to be contradictory to, not just different from, what is expected; and the result has to be especially surprising; in Bayesian terms, the expected outcome must have had a high prior probability.


I'm saying, for reasons given in my initial comment, that it does not appear in any way contradictory or surprising to people like me who don't hold certain assumptions. Of course other people may find it paradoxical. We're in agreement that it's not a fundamental ontological category, i.e. that it's part of a map not the territory.

And it's possible to argue about whether a particular map is flawed or not, such as it being flawed because it contains certain flawed assumptions. If we can show that a map in which it seems paradoxical is flawed, and show that a different map, in which it doesn't seem paradoxical, is more accurate, then this can be grounds for arguing that the phenomenon is not deserving of being called paradoxical.


The fact that stars appear close despite being incredibly far away, and the fact that the earth appears flat (and that we don't fall off) despite being spherical are both, indeed, paradoxes.


Why do you think that? Is there, to you, anything unintuitive or unexpected, relative to intuitions, that is not a paradox?


I like that question!

Hmm, I'd say to be a paradox, it needs to be so unexpected that it's an apparent contradiction. Merely slightly unexpected is not sufficient.

"I did not expect you to arrive so early" is not a paradox.

"The left facing arrow on your website takes me to the next page." whilst both unexpected and unintuitive, is not a paradox.

Some edge cases:

"I am speaking to you in person whilst I am watching you on a live broadcast on television" is maybe a paradox.

A statement like "Fruit juice is as unhealthy as sugary soft drinks." could be a paradox for many people.


> I'd say to be a paradox, it needs to be so unexpected that it's an apparent contradiction.

Thinking about the topic of paradoxes some more, I would say part of my objection is in the following.

You could distinguish between two sorts of paradoxes

1) where the phenomenon P itself is genuinely paradoxical

2) where the phenomenon P is not itself paradoxical, but where our knowledge of the world is such that our picture of P appears to be paradoxical.

In cases of 2), I object to saying that P is paradoxical. The reason for my objection is that doing so would be confusing the map for the territory. What is apparently paradoxical really has nothing to do with P itself. It is all a matter of a particular person's knowledge relating to P.

In those cases, P is in no way paradoxical, and it is misleading to call it paradoxical. It would be less misleading to say that some group of people have flawed knowledge relating to P such that there is an apparent contradiction in their knowledge.

In short, in cases of 2), P is not paradoxical, rather, certain people's beliefs about P contain illusory contradictions.


Calling something a "paradox" is statement about the map, and is relative to a person. I.e., something can be paradoxical to you but not to me.

Calling something a "contradiction", on the other hand, is a statement about the territory.

> some group of people have flawed knowledge relating to P such that there is an apparent contradiction in their knowledge.

> certain people's beliefs about P contain illusory contradictions.

These are both (specific forms of) paradoxes.

Actual contradiction is also (a different, specific form of) paradox.

This is a common usage of the word paradox, and is not incorrect, but "apparent contradiction" is also in common use, also correct, and is the way in which the word is used in the term "Moravec's Paradox".

EDIT: Both usages of the term are compatible, in the same way that "dog" and "Labrador" are both correct terms with which to refer to a Labrador (but only one is correct when referring to, e.g., a Husky).


I'm arguing that it is neither paradoxical nor contradictory, and that there is an error (outlined in my initial comment) in the maps in which it seems such.


I really recommend the book Consciousness and the Brain as a summary of modern neuropsychology. The long story short is that conscious thought is only a small fraction of the brain. There's a tremendous amount of pre-conscious machinery that pre-processes and evaluates sensory input.

The role of consciousness is only to act as a sort of "global workspace" to reconcile any discrepancies that can't easily be evaluated and explained by the pre-conscious modules of the brain. As such the vast majority of sensory input never even makes it to the conscious level.


The idea that there must be a connection between robotic capabilities and human evolution is also a naturalistic misstep. It's impossible for humans to create accurate metric maps, so we prefer topological maps, but in robotics this is reversed. Robots and humans do not have the same premises.


There doesn't have to be such a connection. Of course different approaches are going to have their pros and cons relative to different tasks, but that doesn't mean we can overlook the inherent difficulty of the task in question. It's a mistake to assume that spatial navigation and interaction with the physical world, for example, must be inherently simpler tasks than abstract reasoning.


What I'm saying, basically, is that achieving true artificial intelligence does not mean mimicking human abilities and modes of operation. Grasping a pen is easy _for humans_. I am attacking the idea that human abilities should somehow serve as a benchmark or landmark for robotics and AI. Academia is really caught in that idea, and it is often used to justify research directions and whole fields.


Ok, but I don't see the relevance of that point to what I said in my comment.


> I think the explanation of it is pretty simple. To ourselves, the unconscious things like vision and bodily movement tend to feel effortless. Whereas the consciously controlled things like reasoning involve conscious effort, and they can feel difficult to us.

However! Maybe the reasoning feels difficult for an evolutionary reason, because it actually consumes a lot of energy. That would contradict the claim that the processing involved is simple.


This is also why brain mass is roughly proportional to body mass, but only weakly related to general intelligence. A blue whale has a 7 kg brain, about 5x the size of the typical human brain, but doesn't appear to have even human-level intelligence, let alone 5x superhuman intelligence.


It's better to compare the weight using brain/body^(2/3) than using brain/body directly. With that correction most of the paradox disappears. More info at https://en.wikipedia.org/wiki/Encephalization_quotient
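
To make the correction concrete, a rough back-of-the-envelope comparison (the masses are ballpark figures in kilograms, not careful data):

    # Brain mass divided by body mass^(2/3), per the comment above; absolute brain
    # size mostly washes out once body size is accounted for.
    animals = {
        "human":      {"brain_kg": 1.35, "body_kg": 65},
        "blue whale": {"brain_kg": 7.0,  "body_kg": 130_000},
    }

    for name, m in animals.items():
        ratio = m["brain_kg"] / m["body_kg"] ** (2 / 3)
        print(f"{name:10s} brain/body^(2/3) = {ratio:.4f}")
    # human      brain/body^(2/3) = 0.0835  (roughly)
    # blue whale brain/body^(2/3) = 0.0027  (roughly)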


From my understanding, nerve density in mammals is mostly the same. So a large body mass means more nerves that have to communicate with the central nervous system.

Also, from my understanding, some people have argued that fish can't feel pain, or are less sensitive to it, due to their lower nerve density. A whale is a mammal, so I would assume it's going to have a similar nerve density. That means the brain has to be able to take input from a larger volume of body, so it makes sense for the brain in large animals to be larger.



The brain size comparison and its relationship to body mass is super interesting.

The intelligence comparison is a little bit too simplistic though. How do you define intelligence? Are we humans really "more intelligent" than whales? There are a lot of human behaviors that are not intelligent at all, and which seem to be a lot less intelligent than what whales do (and pretty much any other animal on the planet for that matter).


Such as?


Such as war and indiscriminate destruction of our environment


What makes this https://en.wikipedia.org/wiki/Gombe_Chimpanzee_War different from human war? And other species certainly do destroy their environments https://en.wikipedia.org/wiki/Invasive_species it's just that we don't consider them morally culpable.


Great links and examples, very good point. Which brings us back to the original point: are humans actually more intelligent than animals? I don't think so. It's a human self-aggrandizing illusion to believe we are somehow "better" in any way to animals.


Does a whale have more muscles to control than a human? Is its environment more complex to navigate - swimming through water vs. balancing, walking, running, and jumping? Does it have more bones, making its body more complex to maneuver? Are its eyes larger, with many more rods and cones, looking at a more complex environment and requiring greater neuron counts to interpret?

I think the answer to most (though not all) of these questions is no. Just because a body is bigger doesn't necessarily mean it is more complex to control, surely? For the most part, large mammals have the same number of muscles and bones as small ones, and past a certain body size, eyes don't get particularly larger.

I seem to recall that even very large dinosaurs could have comparatively tiny brains compared to mammals - did they control their bodies clumsily and observe the world much less acutely than mammals?

So I have always found this explanation a bit suspect - it is a "just so" story: we observe that whales and elephants don't seem all that smart yet have bigger brains and bodies, but that doesn't make the explanation true by itself. Has there been any actual research that supports the hypothesis that large bodies require bigger brains to control?


Brain volume largely correlates with body size [0] in mammals. More muscle cells require more nerves to control them, which requires more coordination and hence a larger brain.

How the mammalian brain divvies up this process depends on environmental factors and the body plan. Dolphins and whales have specific parts of the brain used for echolocation, and water pressure affects the volume of the brain. Elephants have larger areas devoted to using their trunk. Aye-ayes have large visual cortices, as they hunt insects in the dark for food.

[0] https://en.wikipedia.org/wiki/Brain_size


Hmmh, but a whale's sensori-motor skills are quite rudimentary compared to a human's. A human can manipulate objects, walk, run, dance, swim, jump. A whale can only swim and maybe do a fairly restricted set of dives?


Have a look at dolphins playing with bubble rings - https://www.youtube.com/watch?v=bT-fctr32pE - and making labyrinths out of suspended mud to catch fish with - https://www.youtube.com/watch?v=bzfqPQm-ThU


We are talking about whales, not dolphins. Dolphins have always been a subject of research for their higher-than-average intelligence.


Don't forget that pretty much every part of your body, and of a whale's, has nerves in it to detect pain and contract muscles.

More body mass means more nerves that need to send and receive information from the central nervous system.

So even assuming a constant number of nerve endings per cm^3 of flesh in both whales and humans, a whale will have significantly more.


Whales can do the DSP for sonar, humans can’t.


There are blind people who can use what is essentially echolocation to get an idea of their surroundings.


> "In general, we're least aware of what our minds do best", he wrote, and added "we're more aware of simple processes that don't work well than of complex ones that work flawlessly"

Fascinating, that could in a way be interpreted to imply that our conscious attention/perception is focused on the things that our minds cannot predict/control and everything else is "automated" (subconscious).

That could also be related to our perception of uncertainty and the way we deal with it. Our attention tends to be focused on what we are the most uncertain of.


To expand on your point, anecdotally it seems that the human mind works in the same way in other contexts, too. For example:

- "Grass is always greener on the other side"

- "You only fully appreciate great things when you lose them"

- All the people getting anxious or even depressed because social media constantly barrages us with the perfect pictures of perfect people's perfect lives, all the while forgetting that on many objective scales the fact that you're even able to get depressed by something like that puts you in the jackpot-winners category in the lottery of life compared to a huge percent of the world's population.


I think this has a related outcome in the commercialisation of automated systems. It seems that it is much easier to create automated systems that outperform highly paid domain knowledge specialists at diagnostic tasks, than it is to get them to reliably deliver pizza as well as an average human with a moped.


Exactly. This is what Steven Pinker says in the linked article:

The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.[7]


To be fair it's not quite that simple.

Automation has already been heavily cutting into so-called low-skill jobs since the Industrial Revolution. Or, by some definitions, since the Agricultural one somewhat earlier.

But the examples of the gardener and cook that were in your citation are actually quite good, I admit.


The secret to automating repetitive jobs is often finding a different way to do it that provides a similar output, but is more amenable to machinery. A sewing machine doesn't sew at all like most humans, a dishwasher doesn't wash like you would with a sponge etc.

Gardening is hard to automate, but harvesting of some things (grains, mostly) is heavily automated. Cooking is more amenable to augmentation than full automation, but there are exceptions, mostly around processed foods.


    we're least aware of what our minds do best
I'm not sure we are aware of anything.

If you ask me:

    How much is 2x3?
I might think:

    That's easy! I just have to imagine 3 stones.
    Then I imagine two piles of 3 stones.
    Then I count all those stones. 6!
But how did I come up with that? How did I even figure out what the question was about? It's all unconscious, and so far we have no clue how to program a system that grasps even these types of easy questions.

It could be a nice approach to AI to try to breed (via AI/genetic algorithms/whatever) a system that can solve math questions, independent of how they are phrased.


To be fair, you have been taught what the symbols 2, x, 3, and 6 mean, and what multiplication means, and how it can be thought of as n sets of m items. 2 and 3 are small enough numbers that subitizing kicks in, which does indeed appear to be an evolved, hardcoded mechanism. But what about 7x9? Most of us have either memorized it, or have to break it down into more elementary operations, using an algorithm that has been, again, taught to us. Never mind something like 435x928.

It is certainly possible to teach a machine how to reason about algebra from the first principles, allowing it to solve much, much more complex problems than multiplying small integers. The program doesn't know how it is executed, either, but it does have a much better understanding of what multiplying actually is than most humans.
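
As a hedged sketch of what "from first principles" can look like: numbers built from zero and successor, with addition and multiplication defined by recursion rather than looked up. This is a generic Peano-style toy, not any particular theorem prover's encoding:

    # Peano numerals: ZERO is the empty tuple, succ(n) wraps n in a tuple.
    ZERO = ()
    def succ(n): return (n,)

    def add(a, b):
        return a if b == ZERO else succ(add(a, b[0]))       # a + S(b) = S(a + b)

    def mul(a, b):
        return ZERO if b == ZERO else add(mul(a, b[0]), a)  # a * S(b) = a*b + a

    def to_int(n):
        return 0 if n == ZERO else 1 + to_int(n[0])

    two = succ(succ(ZERO))
    three = succ(two)
    print(to_int(mul(two, three)))  # 6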


The mind is the layer that we use to make sense of the world. It is designed to see outside, because of the way the world works, but it is difficult to see inside.

Every concept that we can come up with is articulated in terms of concepts that have been stored in the mind's memory (inference). The funny thing is that we are born with almost empty minds; as we grow older, they start to fill up, to the point where we can't even think without using language, which is also not inborn but is acquired here on earth through the mind.

The problem is to see behind this layer. If we observe the mind through the mind, we only see that it is a mechanism for the storage, processing, and inference of the stored data. Learning is simply finding new inferences between data or acquiring new data from the senses that are connected to the mind.

But as humans we are biologically limited; our senses cannot even pick up everything, which already implies that what we know through the mind will always be limited (unless you believe in evolution and that our senses will evolve).

Meditation does this with a sort of denial of the knowledge that we acquired through the mind (closely related to the nature of the ego); that is why highly spiritual people often say perceived reality is an illusion, and to some, a deity is a way to see outside of the box.

AI is trying to simulate the mind with what we know about it through it.

I also feel like this is what the incompleteness theorem is about: we have some sort of axioms of intelligence that we got through the mind, but these finite axioms cannot prove all the dynamics of intelligence.


I would say the biggest challenge with general AI is that our brains are NOT empty when we’re born, and we have no way to map those connections. We don’t have memories yet, but we’re wired to recognize faces, learn languages and use sensory data. If a baby’s brain was empty, we could most likely bootstrap a human on an exaflop supercomputer today by feeding it the things a baby would see/hear.


As has often been pointed out:

A cleaning lady is going to have her job longer than a radiologist.


It was probably only a coincidence with the posting of this link that, this morning, someone from IP 46.123.245.139 changed "Hans Moravec" to "Matej Moravec" in the Wikipedia article. Right, @gardkor?


What's the background to this comment? At least for me, this comment, together with the calling out by name of a specific user, is context free—besides why anyone, or this particular person, would do this, I have no idea what the significance of 'this' (the substitution of a different first name) even is—and seems not to contribute much to this conversation except for casting aspersions (possibly well deserved or possibly not; without context I have no way of knowing).



