I am thinking of two things in particular that current ML approaches lack but that are ubiquitous (well, arguably for the second...) in neuroscience: feedback and modeling of uncertainty.
A proper Bayesian approach to uncertainty amounts to an expectation over models rather than a single point estimate; that's roughly an extra order of magnitude of compute.
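To make the cost concrete, here is a minimal sketch of that expectation over models, assuming we already have samples from a posterior over weights (the posterior itself, and the linear "model", are placeholders for illustration):

```python
import numpy as np

# Hypothetical sketch: Bayesian prediction as an expectation over models.
# Each "model" here is just a weight vector; a real posterior would be
# learned, but the cost structure is the same: one forward pass per sample.

rng = np.random.default_rng(0)
n_models, n_features = 10, 4

# Pretend these are samples from a posterior over model weights.
posterior_samples = rng.normal(size=(n_models, n_features))

x = np.ones(n_features)  # a single input

# One prediction per posterior sample -> n_models forward passes
per_model_preds = posterior_samples @ x

# The Bayesian predictive mean is the average over models;
# the spread across models is the model uncertainty.
pred_mean = per_model_preds.mean()
pred_std = per_model_preds.std()
```

The point is in the shape of `per_model_preds`: every prediction costs one forward pass per posterior sample, which is where the extra order of magnitude comes from.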
Feedback is also likely to be expensive. Currently, all we really know how to do is 'unroll' feedback over time and then proceed as if the network were feedforward.
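A toy sketch of what I mean by unrolling, with made-up weight matrices: a layer whose output feeds back into its input becomes T copies of an ordinary feedforward step, one per time step, so the compute scales with T.

```python
import numpy as np

# Hypothetical sketch: 'unrolling' a feedback connection over time.
# W_in and W_fb are illustrative random weights, not trained parameters.

rng = np.random.default_rng(1)
n = 3
W_in = rng.normal(size=(n, n))   # feedforward weights (assumed)
W_fb = rng.normal(size=(n, n))   # feedback weights (assumed)

x = rng.normal(size=n)           # a fixed input
h = np.zeros(n)                  # feedback state starts at zero

T = 5                            # number of unrolled time steps
for _ in range(T):
    # Each unrolled step is just an ordinary feedforward computation;
    # the previous output h re-enters as part of the input.
    h = np.tanh(W_in @ x + W_fb @ h)
```

Training through this (backpropagation through time) has to traverse all T copies, which is part of why feedback is expensive with current methods.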
Also keep in mind that inference and training are pretty different things; inference is fast, but training takes a lifetime.
> and it's not unreasonable to think they can already do this with a performance on par with the human brain.
I disagree for the reasons stated above; we know we're missing a big piece because of the lack of feedback and modeling of uncertainty.
I do happen to be a neuroscientist and an ML researcher... but I think that just means I am slightly more justified in making wild prognostications... which is totally what this is... Ultimately, though, I'm still just some schmuck; why should you believe me?
Nothing I have said in this comment can be considered scientific fact; we just don't know. But I have a feeling...
You'll be interested in a recent post about a new paper for an AI physicist then: https://news.ycombinator.com/item?id=18381827