Hacker News

I think it's clear that an important element of AGI simply involves traversing large search spaces: this is essentially what deep neural nets do with their data as they train and return classification results... and it's not unreasonable to think they can already do this with performance on par with the human brain.

The problem is that there's a lot of additional "special sauce" we're missing to turn these classification engines into AGI, and we don't have a clue whether that "special sauce" is computationally intensive or not. My guess is "no": the human cortex seems so uniform in structure that it's probably mostly devoted to the rather pedestrian search part, not the "special sauce" part.

(disclosure: I'm not a neuroscientist or AI researcher)




Actually, I think there's good reason to believe the opposite: the "special sauce" missing from current machine learning approaches will in fact be very computationally intensive.

I'm thinking of two things in particular that current ML approaches lack but that are ubiquitous (well, arguably, for the second...) in neuroscience: feedback and the modeling of uncertainty.

A proper Bayesian approach to uncertainty amounts to taking an expectation over models; that's an extra order of magnitude of compute.
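To make that concrete, here's a minimal sketch of the idea using a deep ensemble as a crude stand-in for a posterior over models (all names and numbers here are illustrative, not from any particular paper): the Bayesian predictive mean is approximated by averaging K models, so you pay roughly K forward passes where a point estimate pays one.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):
    # One "model": logistic regression with weight vector w.
    return 1.0 / (1.0 + np.exp(-x @ w))

# Hypothetical setup: K independently initialized models stand in
# for K samples from a posterior over weights.
K = 10
x = rng.normal(size=(5, 3))                # 5 inputs, 3 features
models = [rng.normal(size=3) for _ in range(K)]

# Predictive mean ~ expectation over models: K times the compute
# of a single forward pass.
preds = np.stack([predict(w, x) for w in models])  # shape (K, 5)
mean = preds.mean(axis=0)        # Bayesian predictive mean
spread = preds.std(axis=0)       # model disagreement = uncertainty
```

The `spread` is the payoff: a single network gives you `mean` alone, with no signal about how much the plausible models disagree.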

Feedback is also likely to be expensive. Currently, all we really know how to do is 'unroll' feedback over time and then proceed as we would with a feedforward network.
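A tiny sketch of what 'unrolling' means here (toy dimensions of my own choosing): a recurrent update applied T times is equivalent to T stacked feedforward layers with shared weights, so the cost of handling feedback this way grows linearly with how far you unroll.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 4
W = rng.normal(size=(n, n)) * 0.1   # recurrent (feedback) weights
x = rng.normal(size=n)              # constant input
T = 6                               # number of unrolled steps

# Feedback view: the state h keeps feeding back into itself.
h = np.zeros(n)
for _ in range(T):
    h = np.tanh(W @ h + x)

# Unrolled view: the same computation as T feedforward layers
# sharing the weight matrix W -- compute scales linearly with T.
def layer(h_prev):
    return np.tanh(W @ h_prev + x)

h_unrolled = np.zeros(n)
for _ in range(T):
    h_unrolled = layer(h_unrolled)

assert np.allclose(h, h_unrolled)
```

The catch, of course, is that T is fixed in advance when you unroll, whereas real feedback loops don't come with a step count.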

Also keep in mind that inference and training are pretty different things; inference is fast, but training takes a lifetime.

> and it's not unreasonable to think they can already do this with a performance on par with the human brain.

I disagree for the reasons stated above; we know we're missing a big piece because of the lack of feedback and modeling of uncertainty.

I do happen to be a neuroscientist and an ML researcher... but I think that just means I'm slightly more justified in making wild prognostications, which is totally what this is. Ultimately I'm still just some schmuck; why should you believe me?

Nothing I have said in this comment can be considered scientific fact; we just don't know. But I have a feeling...


> I'm thinking of two things in particular that current ML approaches lack but that are ubiquitous (well, arguably, for the second...) in neuroscience: feedback and the modeling of uncertainty.

You'll be interested, then, in a recent post about a new paper on an "AI physicist": https://news.ycombinator.com/item?id=18381827



