Hacker News
Jeff Hawkins Is Finally Ready to Explain His Brain Research (nytimes.com)
491 points by tysone 65 days ago | 69 comments



This seems to be the article that gives more information: A Theory of How Columns in the Neocortex Enable Learning the Structure of the World: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5661005/

Excerpt: A simple thought experiment may be useful to understand our model. Imagine you reach your hand into a black box and try to determine what object is in the box, say a coffee cup. Using only one finger it is unlikely you could identify the object with a single touch. However, after making one contact with the cup, you move your finger and touch another location, and then another. After a few touches, you identify the object as a coffee cup. Recognizing the cup requires more than just the tactile sensation from the finger, the brain must also integrate knowledge of how the finger is moving, and hence where it is relative to the cup. Once you recognize the cup, each additional movement of the finger generates a prediction of where the finger will be on the cup after the movement, and what the finger will feel when it arrives at the new location. This is the first problem we wanted to address, how a small sensory array (e.g., the tip of a finger) can learn a predictive model of three dimensional objects by integrating sensation and movement-derived location information.
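The narrowing-by-touches process in the excerpt can be sketched as a toy. To be clear, the object models, locations, and feature names below are invented for illustration and are not Numenta's actual representation:

```python
# Toy sketch: recognizing an object from a sequence of
# (movement-derived location, sensed feature) observations,
# as in the coffee-cup thought experiment.
OBJECTS = {
    "coffee cup": {(0, 0): "curved", (0, 1): "rim", (1, 0): "handle"},
    "bowl":       {(0, 0): "curved", (0, 1): "rim", (1, 0): "curved"},
}

def recognize(observations):
    """Narrow down candidate objects as (location, feature) pairs arrive."""
    candidates = set(OBJECTS)
    for location, feature in observations:
        candidates = {name for name in candidates
                      if OBJECTS[name].get(location) == feature}
    return candidates

# A single touch is ambiguous; adding a second, location-tagged touch resolves it.
print(recognize([((0, 0), "curved")]))                      # both objects remain
print(recognize([((0, 0), "curved"), ((1, 0), "handle")]))  # only the cup remains
```

The key point the excerpt makes is captured here: the sensed feature alone ("curved") is useless without the location the finger moved to; the pair is what discriminates objects.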


I think the paper that the NYT article is referencing is actually this one, published yesterday: A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex

https://www.biorxiv.org/content/early/2018/10/13/442418

Jeff Hawkins is scheduled to give a talk at the Human Brain Project Open Day[1] in Maastricht, NL about this paper.

The one linked in the parent is from last year, related to the mentioned story of him running his finger across the coffee cup.

[1]: https://www.hbpopendaysummit-2018.org/programme/keynote-spea...


This paper, which was inspired by Hawkins' work, is interesting to me:

Feynman Machine: A Novel Neural Architecture for Cortical And Machine Intelligence

https://www.aaai.org/ocs/index.php/SSS/SSS17/paper/download/...

Details in the previous paper: https://arxiv.org/pdf/1609.03971



This recent talk by Jeff Hawkins at the Simons Foundation breaks it down quite well, and includes a live re-enactment of his coffee cup "epiphany".

Does the Neocortex Use Grid Cell-Like Mechanisms to Learn the Structure of Objects?

https://www.youtube.com/watch?v=zVGQeFFjhEk

Russell Epstein also makes a case for a predictive mechanism in "reorientation". Familiarizing ourselves with new physical locations may be equivalent, neurally and cognitively, to the kind of "context switch" that occurs when we switch from using one language to another.

The Neurocognitive Basis of Spatial Reorientation

https://www.cell.com/current-biology/fulltext/S0960-9822(18)...



I think the visualization he used in the book was perfect. Imagine turning around and seeing an apple on your desk. You would be surprised, and that surprise reveals the implicit modeling we do of our reality.


Quite a few older thinkers have proposed models for that kind of prediction, e.g. Piaget. Gary Drescher's book "Made-Up Minds" presents his PhD work: building a simulated environment and a robot in it that constructs concepts about its environment through exploration.


Reminds me of the old mind-palace tricks. Lots of mystic traditions rely on speaking about concepts as being close to or far from each other in 3D space, even when the concepts have no physical representation.

The idea that the columns' main function is to relate things in space makes a lot of sense.


>This is the first problem we wanted to address, how a small sensory array (e.g., the tip of a finger) can learn a predictive model of three dimensional objects by integrating sensation and movement-derived location information.

This doesn't follow from the above, but the idea of a pattern-matching machine is certainly compelling. I do see the parallels with FPGAs: even if the structure is the same, the function can be completely different, so "understanding" the code that is running is a very hard problem, and similar across people only in certain ways.


The premise of this research, that the cortex is broken up into discontinuous “columns”, is only partly true. If you record several neurons that are lined up one on top of another in different layers of the cortex, they will usually respond to the same stimulus or activity: for example, responding to touch on the same finger.

But the word “column” implies discontinuous modules. You can actually observe this in sensory cortex of mice and rats, where each whisker has a 1-1 relationship with a “column” of neurons that primarily respond to that whisker. This observation is what originally led neuroscientists to propose the concept of the column as a fundamental unit of organization of the cortex.

But it turned out that lots of other brain regions don’t have this kind of discontinuous organization. Neuroscientists still use the term “cortical column” but they’re really just referring to the fact that vertically aligned neurons seem to be closely connected.

When the Numenta people talk about “cortical columns” they seem to be describing the original idea of cortical columns that didn’t pan out. It’s really weird.


Not to mention that the blanket assertion that 'columnar' structure is similar across cortex is simply not accurate. Humans seem to have ~180 distinct cortical regions [0], and when we start digging into the actual cytoarchitecture they all start to look different. Yes it is possible to try to shove them all into the '6 layer' model, but if we weren't imposing an extremely strong prior for 6 onto these cell populations we would think that they were all radically different, not uniformly the same as many computationalists and theorists would like to believe.

0. Glasser and Van Essen https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4990127/


Knowing practically nothing of this, and more specifically of the cytoarchitectural makeup of various cortical regions, I am curious about the following questions (I don't have a mature lexicon for formulating questions on this subject, so please take an ELI5 perspective on them):

1. Do the stimuli to which a region is responsive dictate the cytoarchitecture found in that region? Or conversely, does the cytoarchitecture dictate the response properties of that region?

2. Can we find commonality in cytoarchitecture for regions that respond to a certain stimulus across species? (Is 'touch' always handled by the same cytoarchitecture regardless of mice/men/mammal/reptile/etc.?)

3. Re: 2. Is 'touch' always handled by the same cytoarchitecture in all humans? What, if any, differences in cytoarchitecture may be found in the language-handling regions of people who speak Japanese vs. French? Or deaf vs. hearing?

4. Is the mass of a particular region, along with its cytoarchitecture, correlated with one's intelligence?

Lots of questions....


1. A fascinating question to which we do not have a good answer. We fundamentally do not understand the connection between which cells compose a circuit and how that circuit relates to the spatial and temporal characteristics of the information it processes. To make an analogy to electronics: we can look at the retina and say "oh, this must do something spatial," and we can look at the tympanic membrane and say "ah! this must do something with frequency," but if you compare the purely cellular structures of visual and auditory cortex, you might notice a bunch of spiny stellate cells in layer 4 of one region that are not there in the other, and what that means, we haven't the faintest idea. Note also that there have been many reports of people with sensory deficits (e.g., blindness or deafness) having "gain of function" in other cortical regions. It is not clear that this is actually what is going on.

2. Yes. This we do have very strong evidence for. Visual, auditory, somatosensory, olfactory, etc. cortex are present in all mammals, and the connectivity patterns between them and other brain structures are conserved. Birds do not have laminar cortex but instead have nuclear (or nucleated) cortex, where functional units seem to be organized into nuclei or bundles of cells rather than layers. However, the genes, connectivity, and function of these units seem to be strikingly similar to those of laminar mammalian cortex [0].

3. Cortex is problematic. All vertebrates have basically the same peripheral circuitry for measuring the world. Rods, cones, Merkel discs (touch), taste buds, and olfactory sensory neurons are all highly conserved. So in that sense the answer is yes. When we get to cortex, things change. There is no evidence of fundamental cytoarchitectural differences between speakers of different languages. There might be between hearing and deaf people, especially if the individual has been deaf from birth. See Carla Shatz's work on cortical development [1].

4. No evidence for this. We don't have a very good understanding of how or even whether certain gross anatomical features are related to intelligence. The Human Connectome Project is probably the closest to having population data that could answer the question. Otherwise the geneticists are way ahead [2], but we don't know how their results are manifest in the brain.

0. http://rstb.royalsocietypublishing.org/content/370/1684/2015... 1. https://www.ncbi.nlm.nih.gov/pubmed/8895456 2. https://www.nature.com/articles/mp2014105


Thank you for this response.

The other question/statement i have is;

Is there a discrete number of cytoarchitectural patterns that can or do exist, and are they catalogued? Assuming that through various amino acids and proteins we can get a stem cell to present as various tissue types, can we get one to present as a neuron? But then the cytoarchitecture is probably dictated by genetic coding and not amino acids/proteins, so steering cell clusters to create a particular 'brain circuit' is decades away, I assume? Unless there is a way to scaffold the desired outcome by putting new stem cells/neurons next to others which are already arranged in the desired format?


1. To slightly change your question with regard to the structure of microcircuitry in the brain, of which cytoarchitecture is a purely anatomical component: we are just starting (as in about a decade or two in, but nothing systematic). Cajal and Brodmann did the pure cytoarchitectural studies over 100 years ago, and that is well understood. There are many ongoing projects to gather and characterize this across all human brain regions; most of the effort in the community centers around the BigBrain project [0], but not all of it is publicly available yet, since many of the projects that make use of that data are just finishing their first 'grad student' cycle. On a more fundamental level, we are just starting to do a systematic survey of the types of neurons in the brain (smaller studies have been done for decades but have been very hard to compare [1]). We need that as a foundation to be able to meaningfully catalogue the circuits composed of them. In a bad analogy: we need names for the basic discrete circuit components of the brain (resistor, capacitor, transistor, etc.) before we can come up with something like a Horowitz and Hill for neural circuits.

2. Yes, people are working on this, but whether the neurons that are created are actually like the ones in a living brain requires much more research. The search term for this is 'neural conversion' and the key paper is [2].

3. It is either much simpler or much more complicated. Axons send projections along signalling gradients during development and have something akin to a lock-and-key system (made of cell surface proteins) that helps them hit the correct targets. If you can make the signal and design the lock and key, in theory it would work. The issue, of course, is that we have zero idea what sticking a specific new connection in will do, though there is a paper suggesting that pure connectivity changes can produce a gain of function [3]. The other area to look in for related work is axonal repair (spinal cord repair), and that is where all the theory seems to go out the window, because there are so many signals the neurons listen to. If you dig in there you will see that people have tried scaffolds, signals, stem cells, and all other manner of hocus pocus to get it to work, because the search space they are in is terrifyingly huge.

0. https://en.wikipedia.org/wiki/BigBrain 1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3619199/ 2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2756723/ 3. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5774341/


I don't know where you get this idea; they talk a lot about the connections between columns and how they reinforce each other.


It doesn't seem to me like they're suggesting that columns are "discontinuous" at all; On Intelligence, along with lectures Jeff Hawkins has given, describes in detail how columns are interconnected. What they describe sounds to me much more like "vertically aligned" and "closely connected" than "discontinuous."


I assume that you are referring to cortical columns (aka hypercolumns, aka cortical modules) and not cortical minicolumns?


Doesn't matter. What matters is whether the idea has engineering utility.

Jeff Hawkins is not a biologist or a neuroscientist (though he would probably qualify) trying to explain how the brain works. He is an engineer using whatever bits he sees as potentially useful.


The entire premise of his research is that understanding how the brain actually works is key to developing an artificial general intelligence.


No, I think that's the route DeepMind and the rest went with. Jeff Hawkins seems to want the more biologically plausible model.


His book "On Intelligence" is already extremely good at explaining the basic principles of how the neocortex works. I still find it one of the most influential reads ever, and I have been following Numenta's work ever since.

It's been a bit frustrating that after 15 years they still haven't made headlines with something more practical and groundbreaking, so I'm a bit excited to hear what will be unveiled at this conference tomorrow.


Aside from anything else, you've gotta love a book on something as poorly-understood as "intelligence" that comes with an appendix of "Testable Predictions".

His theories may be right or wrong, but he's doing science right.


I am very interested in the book. Considering that other comments have said his ideas are not widely seen as worth considering by the academic community, I want to compare it with the ideas that hold more merit in the eyes of academics. Could anyone point me to a good book about some of the more widely accepted theories?


I LOVE this book. I wish it had a place on school bookshelves when it first came out, as the concepts are extremely easy to follow. He provides some very simple real-world examples to explain the principles, and it's so damn interesting I found it hard to put the book down when I finished a chapter.


It’s weird how this headline and article make it sound like Hawkins is being coy about his theories, or prefers to work in secret rather than explain them. But On Intelligence was published years ago and contained a section on testable predictions. Likewise, Numenta has published papers and made source code available.

So it doesn’t seem to me that the problem is that Hawkins has been uncommunicative; it’s that the wider academic and research community just hasn’t been interested. Perhaps that’s simply because his theories don’t have merit, but it’s also hard to shake the feeling that it’s more because his career, and therefore his theories, hasn’t fit the standard academic mould.

It seems to me that, if you want to be an influential theorist, you’re expected to study at a prestigious university, work with an older, respected researcher while getting your PhD, then become a professor, setup a research lab, mentor some PhD students of your own, then finally maybe spin off your research into some startups, or take a high-paying research position at a big company.

Hawkins hasn’t followed this path, so he has no network of peers and former students to evangelise his work and carry it forward. He’s done research and published it, like the scientific ideal says you should, but he’s missing the structure necessary to make it land in the research world.

It’s like a sort-of reverse cargo cult. He’s done the work of constructing a real airstrip and working planes, but he can’t persuade anybody to fly, because they’ve never seen or heard of air travel before.


It's not always like that. If his ideas worked, or somehow led to better understanding, his models would be adopted (like how neuroscientists are now diving into deep learning). But they haven't been for decades, and at some point people just stop listening.


It’s been a while since I read ‘On Intelligence’, but I think I remember it containing a pretty good primer on his ideas, and references to an open-source implementation of his predictive algorithm, with an invitation to apply it to specific domains because his team had no chance of exploring all the applications.

If tweets were searchable, you’d find I was really taken with this book; I think I made a pretty grand statement that future generations would compare it to work like The Origin of Species.

Now I’m interested to dig up my copy and read again :)


I just finished my first read through of that book last night after having had it in my “backlog” for years. I’m kicking myself for not reading it earlier. It really was a fantastically intriguing read and definitely recommend it to anyone interested in the subject matter.


Do you feel it's likely that he's created something that works, and that rather than using or elaborating upon his work, and thus bettering their own careers, everyone has thought, "Wait a second, isn't this the Palm Pilot guy?" and discarded the idea?

Instead of just cutting out the middle bit (getting a PhD, becoming a professor, setting up a research lab, and mentoring students for decades) and leaping straight to "make a startup from an idea that works", every single person decided to go the hard road because it seems proper?

Your cargo cult idea works - he's arranged all the trappings of being an influential theorist - funding and research labs and media coverage - except perhaps for the one thing that lets him fly: a worthwhile idea.


Their paper from a few days ago is here: https://www.biorxiv.org/content/biorxiv/early/2018/10/13/442... The rest of them, and the code, can be found at https://numenta.com/

The ideas presented and the level of detail in their theory are such that the most likely response from everyone else is just 'nice theory' and a shrug.

Hawkins and co. have been working on this for years. Numenta sends frequent newsletters and tries to keep people interested, but there is little concrete there except some interesting ideas.


After reading “On intelligence” and being disappointed by people eagerly associating ML with AI, I think this guy and his team may be onto something. It may not be much, but I welcome any attempt/perspective that is different from the “mainstream” approaches.


I have been following Jeff's work since 2005; only time will tell if it will bear fruit. But science will be immensely grateful to Jeff for igniting the passions of several AI researchers during the deepest and darkest of the AI winters (mid-2000s). I was one of them, and I pursued research at MIT.


It was so exhilarating, that promise of maybe knowing ourselves. I remember watching the lectures on Youtube and getting On Intelligence from the library at another university department.

I was a student of linguistics when I discovered it, having started a move from theoretical linguistics to a master's in more computational stuff just as machine learning was beginning to become more widely used. Hawkins was one of a few things that combined to make me so completely disenchanted with the entire field of computational linguistics, especially as it was practiced at my university, that I just left and became a regular software developer. I decided I'd rather have to center things in CSS than throw progressively fancier maths at bag-of-words models and poorly annotated corpora while pretending it had anything to do with thinking machines.

(I don't regret this move at all since another major factor was I didn't want to go do a PhD somewhere far away; I do somewhat regret going into computational stuff in the first place, though - before that I wanted to become one of those Indiana Jones linguists who go to the mountains and write grammars of languages only spoken by a handful of 80-year-olds... that would have been really fun)


This will be exciting. His book "On Intelligence" was my favorite book on how the brain works neurally and how that relates to computers. Much better than Ray Kurzweil's books, in my opinion.


I can recommend The Mind is Flat https://www.amazon.com/Mind-Flat-Illusion-Mental-Improvised-... which is more high level, but still seems compatible with this theory.


I’ve been following Numenta’s work since 2010, and today I’m happy to see this at the top of HN. Hopefully the deep learning community longing for AGI will finally take a look at Numenta’s work. I’d also like to shout out Matt, who’s been involved in the open source work as well as the amazing HTM School YouTube videos, which I highly recommend for anyone interested in learning about the underlying concepts: https://www.youtube.com/playlist?list=PL3yXMgtrZmDqhsFQzwUC9...


Thank you!


To prove that the brain runs a single learning algorithm would actually be a bigger breakthrough than going from there to determining the algorithm itself. What a profound thing it would be to learn that the mechanics of the brain are fundamentally simple but applied on a massive scale of data, time, and parallelism. I applaud the effort of trying to uncover this model -- embracing the idea that The Brain is comprehensible despite A Brain being impenetrably complex is a liberating concept.


> To prove that the brain runs a single learning algorithm would actually be a bigger breakthrough

I don't think it's provable in this form, you'd also have to specify what "single algorithm" is. Because algorithms themselves are delineated by human categorizing; what you call a "single" algorithm can be subdivided into several simple(r) algorithms, and at the same time can be just one part of what someone else will call "a single algorithm".


Surely there has to be a single learning algorithm for most of the skills a modern human learns. There has been no time to evolve specialized algorithms for things like chess, programming, riding a bike, etc., and yet we can easily learn these skills.


"There has been no time to evolve specialized algorithms for things like chess, programming, riding a bike, etc., and yet we can easily learn these skills."

Not necessarily. We get a mix of innate head start and training from parents/others for a really, really long time before we can do that. The brain's learning style and speed even change over time, going from rapid, intuitive learning to slower, more stable learning driven by a combination of ourselves and others. At some point our mind is pretty well formed, and our own thoughts and explorations drive much of our behavior. And somewhere between the kid and adult stages we start easily learning new skills.

There's probably a mix of general, specialized, and stuff in between algorithms. It changes over time, too, instead of being a static, pre-trained model.


You could conceivably have different algorithms at different layers - e.g., early inputs directly from sensors vs deeper representation layers.

You could also easily have different algorithms for different senses, evolved at various historical stages...


They’re all just optimizable functions, right?


“When the brain builds a model of the world, everything has a location relative to everything else,” Mr. Hawkins said. “That is how it understands everything.”

This is an exciting idea to me. I don’t fully understand it, but it resonates on some intuitive level. Makes me think of spatial audio, echolocation, our sense of balance, all these forms of intelligence that rely on understanding space.

Random trivia- I interviewed with Numenta back in 2007. Got an offer, was excited by the vision, but I didn’t see how they would have any kind of product within the next 5 years. 11 years later, they’re maybe about to launch something. Much respect for them pushing through on this wildly ambitious idea. I’m rooting for them.


>Makes me think of spatial audio, echolocation, our sense of balance, all these forms of intelligence that rely on understanding space.

These are all brainstem functions. No cortex required!

Binaural hearing is basically sorted out by the time you hit thalamus, and most of the heavy lifting is done in the inferior colliculus, medial superior olive and dorsal cochlear nucleus.


How is black related to loud?


In my mind, black is straight down, while loud is in front of me a little above eye level, rotated clockwise.

You can map any set of concepts into an N-dimensional space if you have a similarity metric between them. If some concepts are anchored in space, they define the overall layout. It could be that loud is related to large, which is related to tall, which is naturally anchored above eye level.
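That mapping claim can be illustrated with classical multidimensional scaling, which recovers coordinates from nothing but a pairwise dissimilarity matrix. The concepts and the distances below are invented for the example:

```python
import numpy as np

# Hypothetical pairwise "dissimilarity" between four concepts.
concepts = ["loud", "large", "tall", "quiet"]
D = np.array([[0.0, 1.0, 2.0, 4.0],
              [1.0, 0.0, 1.0, 3.0],
              [2.0, 1.0, 0.0, 2.0],
              [4.0, 3.0, 2.0, 0.0]])

# Classical MDS: double-center the squared distances, then eigendecompose.
n = len(D)
J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]            # largest eigenvalues first
coords = vecs[:, order[:2]] * np.sqrt(np.maximum(vals[order[:2]], 0))

for name, (x, y) in zip(concepts, coords):
    print(f"{name:6s} {x:+.2f} {y:+.2f}")
```

Any similarity metric over the concepts yields such a matrix, and the recovered layout preserves the given distances as well as the chosen dimensionality allows, which is the sense in which "any set of concepts" can be mapped into space.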


- Black bears

- Helicopters

- Heavy metal

- Clanky furnaces

- Stompy boots

Just a few.


I do like this all encompassing approach and was a fan of his book "On Intelligence"

In computer hardware and software terms: we don't necessarily need to clone an x86 CPU, glue and I/O chips, and run an actual copy of the firmware, bootloader, and Windows. If we understood just one of these (how the x86-like CPU works) and could implement a functional simulator that behaves similarly enough, that would be extraordinarily useful. ANNs are a primordial stab at this.

But this article is incredibly light on detail. I hope he does something like a follow up book to explain what he has learned in the time since.


I never really know what to make of Hawkins' research, but I'll always be grateful to him for his contributions to starting the Redwood Center at Berkeley.


Cortical column: https://en.wikipedia.org/wiki/Cortical_column

> In the neocortex 6 layers can be recognized although many regions lack one or more layers, fewer layers are present in the archipallium and the paleopallium.

What this means for optimal artificial neural network architectures and parameters, with regard to logic, reasoning, and inference, will be interesting to learn about.

According to "Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function" https://www.frontiersin.org/articles/10.3389/fncom.2017.0004... , the human brain appears to be [at most] 11-dimensional (11D); in terms of algebraic topology https://en.wikipedia.org/wiki/Algebraic_topology

Relatedly,

"Study shows how memories ripple through the brain" https://www.ninds.nih.gov/News-Events/News-and-Press-Release...

> The [NeuroGrid] team was also surprised to find that the ripples in the association neocortex and hippocampus occurred at the same time, suggesting the two regions were communicating as the rats slept. Because the association neocortex is thought to be a storage location for memories, the researchers theorized that this neural dialogue could help the brain retain information.


Re: Topological graph theory [1], is it possible to embed a graph on a space filling curve [2] (such as a Hilbert R-tree [3])?

[1] https://en.wikipedia.org/wiki/Topological_graph_theory

[2] https://en.wikipedia.org/wiki/Space-filling_curve

[3] https://en.wikipedia.org/wiki/Hilbert_R-tree

[4] https://github.com/bup/bup (git packfiles)
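One way to make the question concrete, as a sketch rather than an answer: map each node's 2D position to its index on a Hilbert curve, so nodes close in the plane tend to get close 1D indices, and store edges as index pairs. The node names and positions below are invented; `xy2d` is the standard grid-to-Hilbert-index conversion:

```python
def xy2d(n, x, y):
    """Map (x, y) in an n-by-n grid (n a power of 2) to its Hilbert-curve index."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so the curve stays continuous.
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Hypothetical graph: nodes with 2D positions, edges as name pairs.
nodes = {"a": (0, 0), "b": (0, 1), "c": (3, 3)}
edges = [("a", "b"), ("b", "c")]
index = {name: xy2d(4, *pos) for name, pos in nodes.items()}
print(index)  # spatially close nodes ("a", "b") get nearby indices
```

This preserves spatial locality of nodes, which is what Hilbert R-trees exploit; it does not embed the edges themselves, so graph adjacency and spatial adjacency only coincide when the graph's edges connect spatially close nodes.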


  before the world can build artificial intelligence, it must
  explain human intelligence so it can create machines that
  genuinely work like the brain
Why must intelligent machines work like the human brain? It’s just an accidental result of evolution and doesn’t define what ‘intelligence’ is.


We have no definition of intelligence that doesn't refer to human behaviour.

In part this is a curse on ML/AI, since so many of the terms that are used to define the subject and goals of the field are pre-scientific, even religious in origin - consciousness and will are two other examples.


I'm no expert but Jeff Hawkins' theories seem a bit dated to me.

The concept described in this article is the same one in his book "On Intelligence", which was written long ago, before we knew the impact of a neural network's arrangement and architecture on its performance.

Examples like the paper describing a task-specific region of the brain that handles face recognition are not supported by his model.


I don't buy the "one algorithm" idea. I think small changes from region to region can generate completely different dynamics - for example different types of neuron cells, different use of neurotransmitters, or different responses in time. It might look the same under a microscope but hyper-parameters differ between regions. In other words it is a parametrised family of algorithms, not just one.

What would interest me more than the one algorithm is to find out the loss functions that steer our brain into fast learning after birth. We have prior knowledge about the world encoded in the structure of the brain and our loss functions (what the brain optimises for). The brain has many such loss functions - it can't possibly do all it does with just one like neural networks, and they are genetically evolved. These priors could be readily transferred to AI models.


There are no ‘loss functions’; our brain doesn’t optimize in the machine learning sense.


I think that if it learns, it must have loss functions; we just don't know how they are implemented. If we did, we could replicate the brain's priors. They are probably distributed, unlike ML loss functions, which distill everything to a single number.


Well, if it learns by remembering sets of features and matching new stimuli against this "database", then we can talk about distance functions in associative memory, and the problem of optimization reduces to a problem of matching (as well as generalizing, i.e., constructing abstract representations from simpler ones, as in "chunking"). Brain priors would be the distributed representations of stimulus-response pairs stored in memory.
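A minimal sketch of that framing, with invented stimuli and responses: "learning" is just storing stimulus-response pairs, and "recall" is matching a new stimulus against the store with a distance function.

```python
import numpy as np

class AssociativeMemory:
    """Toy associative memory: store (stimulus, response) pairs,
    recall the response of the nearest stored stimulus
    under Euclidean distance."""

    def __init__(self):
        self.stimuli = []
        self.responses = []

    def learn(self, stimulus, response):
        self.stimuli.append(np.asarray(stimulus, dtype=float))
        self.responses.append(response)

    def recall(self, stimulus):
        stimulus = np.asarray(stimulus, dtype=float)
        dists = [np.linalg.norm(stimulus - s) for s in self.stimuli]
        return self.responses[int(np.argmin(dists))]

mem = AssociativeMemory()
mem.learn([1.0, 0.0, 0.0], "red")
mem.learn([0.0, 1.0, 0.0], "green")
print(mem.recall([0.9, 0.2, 0.0]))  # a noisy stimulus still matches "red"
```

There is no gradient or explicit loss here; the only "optimization" is the argmin over the distance function, which is the reduction to matching the comment describes.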


The idea that much (all?) of human thought, need, fear, etc. is associated with (identifiable with?) sensation having a definite location in the body is one that should be familiar to anyone who has practised meditation as taught to Vipassana students, or basic relaxation as taught to method actors.

And perhaps someone will be able to correct me on this, but didn't Kant treat spatial awareness as a priori and presumably rather fundamental?


In linguistics there is also work that attempts to understand the formation of abstractions from sensorimotor concepts. Here's one example:

http://fas-philosophy.rutgers.edu/goldman/Spring%202008%20Se...


“You do not have to emulate the entire brain,” he said. “But you do have to understand how the brain works and emulate the important parts.”

Nonsense.

What % parts of a bird were necessary to understand for the first successful aircraft?

It may be anywhere from very helpful to unnecessary; no one knows yet.


I'm not sure I'm following your point. What percentage of parts of a bird were necessary for the first successful flight? I don't know, less than 1%? Maybe less than 0.1%?

But you do have to understand [generally] how a bird works to [generally] emulate its wings [and fly].


For every biologically inspired invention, there are plenty more that were invented by understanding principles of engineering and science.

Evolution has given us some incredible stuff, but there could exist simpler, more elegant solutions. A barn door will give you lift, just not as efficiently as a NACA airfoil.

Evolutionary mimicry has its place in engineering, no doubt. But arrogantly stating that we must emulate biological systems is blatantly and demonstrably false. Otherwise, we wouldn't have nuclear power, superconductivity, lightbulb, sewing machines and computers.


Did the original source actually arrogantly state that? I suspect it had some nuance to it.


I think you are disagreeing with the rather absolute nature of Hawkins' statement while accepting that he may be on to something. My take is: we know that the brain works. OTOH the current ML-oriented focus in AI is apparently promising but leads to some surprising and disturbing anomalies - thinking in particular of how an image can be minutely "gamed" to fool a classifier into completely misidentifying it. So to me (as an outside amateur admittedly) it seems like we are missing some essential meta-information about how cognition really works, and that the known precedent of the brain is the obvious place to look.


Sure, agree with everything you've said. Moreover, understanding how the brain truly works, think of how much value that could have for medicine, psychology, etc.

The point is, history has shown over and over, it may be more or less helpful in building the best general AI we can. We just don't know.

Making an unqualified declaration that it's required is hard not to see as hyperbole at this point in history.

So is his one statement wrong? Yes. Do I want to stop him? Heck no, giddy up. I hope he makes big breakthroughs.



