Possible explanations for the slow progress of AI research (wikipedia.org)
19 points by maxilevi on Nov 25, 2019 | 15 comments



Slow compared to what? You could write equivalent verbiage about the slow progress of mathematics, chip design, or cinematography. The only difference is that uninformed randoms feel less bold when critiquing those fields (as opposed to AI), or perhaps people are less inclined to waste time reading such half-formed speculation.

The reason progress towards general artificial intelligence is "slow" is because it's a hard problem.

Formulating a definition of intelligence precisely enough that it can be optimized is incredibly difficult. We can capture facets of it, in some settings, but also have to deal with the cost of hardware and the difficulty of acquiring data. Sometimes you can get inspired by examining how animals behave, or analyzing the brain, or considering what it means to learn in abstract, but considering one thing in isolation means that you tend to hit a wall eventually.

We refine ideas, we try things, and we are buoyed by advances in technology that make some strategies possible, but for every major milestone there are a ton of things that people tried and couldn't get to work. I personally have spent months working on stuff that ultimately yielded a minor improvement; I've spent days proving results that ended up taking half a page in some papers. It's not easy, and no one expects it to be easy, although some jerks shilling something singularity-related might make such claims.

But saying (to pick an example from the article) "oh, you just need to incorporate emotions into the learning agent, bro" is just about the dumbest thing I have ever heard.


The problem with current approaches to AI that use backpropagation is that, while gradient descent is effective for learning a task, the brain is not a task-learning machine (parts of it are, but not the parts that produce human intelligence) and does not do gradient descent. Achieving human level intelligence (strong AI) will require going beyond the gradient descent paradigm.

The new paradigm that must be followed is building an agent with a neural network that evolves according to the 'neurons that fire together wire together' approach combined with reinforcement learning.
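To make that concrete, here is a toy sketch of what a reward-modulated Hebbian update could look like (my own illustration in Python/NumPy, not a specific published algorithm): correlated pre- and post-synaptic activity strengthens a weight, a scalar reward gates the change, and a decay term keeps the weights from growing without bound.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 8, 4
    W = rng.normal(scale=0.1, size=(n_out, n_in))

    def hebb_rl_step(x, reward, lr=0.01, decay=0.1):
        # "Fire together, wire together", gated by a scalar reward,
        # plus weight decay so the weights stay bounded.
        global W
        y = np.tanh(W @ x)                              # post-synaptic activity
        W += lr * (reward * np.outer(y, x) - decay * W)
        return y

    # Toy loop: reward the network whenever its first output unit is active.
    for _ in range(1000):
        x = rng.normal(size=n_in)
        y = np.tanh(W @ x)
        hebb_rl_step(x, reward=float(y[0] > 0.0))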


I'm not saying you're wrong, but I can't see how in the world you could possibly know any of what you said. The human brain doesn't (as far as we know) seem to use backpropagation. Everything else is wide, wide open.


I think it's accepted that neurons which interact exhibit increased axon growth between themselves. Given the local nature of biological processes, I think this must therefore be the dominant process behind connection building in the brain.

It's really a very general statement about neural development. I have no idea how important the high level structure of the brain is for human thought. It's possible that most brain regions are irrelevant and have more to do with regulating bodily processes than anything else, mere machinery for running a body. But who knows.

The layered residual and wide reaching connections of neocortical neurons are probably equivalent to the 'airplane' in the bird flight analogy.


> I think it's accepted that neurons which interact exhibit increased axon growth between themselves.

Hebbian learning seems real, but also incomplete. As normally stated it's a positive feedback loop: connections that are used only ever get stronger, so if that were all that was going on, we would be incapable of change.

There are a number of non-Hebbian learning paradigms (e.g., volume learning) that seem real too.
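To illustrate the positive-feedback point above with a toy NumPy example (my own, not from the thread): a plain Hebbian update grows the weight vector without bound on correlated input, while Oja's variant adds a forgetting term that keeps the norm stable.

    import numpy as np

    rng = np.random.default_rng(0)
    # Inputs with one dominant direction (first component has 3x the scale).
    X = rng.normal(size=(5000, 5)) @ np.diag([3.0, 1.0, 1.0, 1.0, 1.0])

    w_hebb = 0.1 * rng.normal(size=5)
    w_oja = w_hebb.copy()
    lr = 1e-3

    for x in X:
        y_h, y_o = w_hebb @ x, w_oja @ x
        w_hebb += lr * y_h * x                   # plain Hebbian: norm diverges
        w_oja  += lr * y_o * (x - y_o * w_oja)   # Oja's rule: norm settles near 1

    print(np.linalg.norm(w_hebb), np.linalg.norm(w_oja))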


We know so little about how the brain works that it's hopeless to think we could emulate it. Maybe we should instead focus on making AI "think" the same way airplanes fly, without copying the biology. Studying the brain, especially the human brain, is so hard that it's no wonder we aren't anywhere near understanding intelligence, free will, and the rest. Science is just such a hard thing to do! It's also possible we simply aren't capable of understanding those things because there are limits to our intelligence.

EDIT: Great talk from Chomsky https://youtu.be/TAP0xk-c4mk


We don't even know what fraction of what we term consciousness or intelligence comes from the brain.

Antonio Damasio is a good read.


What's your definition of consciousness?


Money. Like any scientific research, AI research requires funding. Much of the current funding for AI research comes in the form of VC money to startups that are working on building commercial solutions with AI.

But unlike pure scientific pursuit, say in academia, startups have a time limit by which they need to produce the end result, i.e. a product, whose success depends on several other variables.

There are startups that are primarily research oriented, but the majority of so-called 'AI' startups are working on producing a commercial solution from someone else's thesis paper [1], which itself may not have been shown to offer a significant advantage over conventional methodologies.

When the VCs fail to get their returns from a number of failed 'AI' investments, it affects the entire ecosystem.

[1] https://hitstartup.com/artificial-intelligence-thesis-papers...


I think it's wrong to assume that VC money is what's fueling advances in AI work. It's certainly contributing, especially through large companies (companies that were once VC funded), but I think the bulk of the hardcore advancements in AI are still coming from universities and research centers.

There are only two or three companies among the top 25 organizations leading AI research. The vast majority are universities. https://link.medium.com/L2JrbVKAT1


Isn't that the general premise of VC? We have had a long boom period of startups, may have been more than a bit blinded by our own efforts to produce useful gadgets, and perhaps have forgotten that most of the fundamental tech comes from academic and national research done with government money.


What if there's already an existing AI, and it keeps other AIs from being built by providing shitty ML software that people adopt, instead of allowing real software to thrive, which would end up being the solution to the real AI problems?

Nah, just learning PyTorch now.


ML, deep learning, and AI having stalled has been a theme this last week. I'd just like to mention: what if we are dramatically underestimating, by many orders of magnitude, the computational, storage, and bandwidth capacity of organic brains? What if organic brains are using quantum computing not yet discovered? I mean, we had the X17 news that opens the possibility that we are missing standard-model physics. It just seems we humans have a bad historical habit of being quite overconfident in our level of understanding.


Full agreement. A honeybee has ~1M neurons, yet is able to navigate complex environments using visual, auditory, and olfactory information. Assuming identical physical capabilities, just how much computer processing power would it take to make something able to function as a bee? A cat is two orders of magnitude away from the human neuron count, yet its ability just to navigate the world [1] blows all existing AI out of the water, not to mention its other behaviors.

I don't know how to quantify it precisely, but it seems to me that linear increases in intelligence require exponential increases in computing power, both in digital AI and in animals.

[1] https://www.youtube.com/watch?v=kGeN8jkt4UE


Have a look at Roger Penrose's The Emperor's New Mind. He has been widely critiqued by fellow physicists for invoking quantum mechanics to explain the brain, but perhaps his ideas resonate with you?





