The reason progress toward general artificial intelligence is "slow" is that it's a hard problem.
Formulating a definition of intelligence precisely enough that it can be optimized is incredibly difficult.
We can capture facets of it, in some settings, but also have to deal with the cost of hardware and the difficulty of acquiring data.
Sometimes you can get inspired by examining how animals behave, analyzing the brain, or considering what it means to learn in the abstract, but focusing on any one of these in isolation means you tend to hit a wall eventually.
We refine ideas, we try things, and we are buoyed by advances in technology that make some strategies possible, but behind every major milestone there are a ton of things that people tried and couldn't get to work.
I personally have spent months working on stuff that ultimately yielded a minor improvement; I've spent days proving results that ended up as half a page in some paper.
It's not easy, and no one expects it to be easy, although some jerks shilling something singularity-related might make such claims.
But saying (to pick an example from the article) "oh, you just need to incorporate emotions, bro" as if you could simply bolt them onto a learning agent is just about the dumbest thing I have ever heard.
The new paradigm that must be followed is building an agent with a neural network that evolves according to the 'neurons that fire together wire together' approach combined with reinforcement learning.
It's really a very general statement about neural development. I have no idea how important the high level structure of the brain is for human thought. It's possible that most brain regions are irrelevant and have more to do with regulating bodily processes than anything else, mere machinery for running a body. But who knows.
The layered, residual, and wide-reaching connections of neocortical neurons are probably equivalent to the 'airplane' in the bird-flight analogy.
Hebbian learning seems real, but also incomplete. As normally stated it's a positive feedback loop: if that were all that was going on, we would be incapable of change.
There are a number of non-hebbian learning paradigms (e.g., volume learning) that seem true too.
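That runaway-feedback problem is easy to see in a toy simulation. Here's a minimal sketch (made-up 2-D data and learning rate, purely for illustration) comparing a raw Hebbian update with Oja's rule, a standard variant that adds a decay term so the weight norm stays bounded:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy correlated 2-D inputs (hypothetical data, for illustration only).
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.0], [1.0, 0.5]])

eta = 0.005                        # learning rate (arbitrary choice)
w_hebb = 0.1 * rng.normal(size=2)  # small random initial weights
w_oja = w_hebb.copy()

for x in X:
    y = w_hebb @ x
    w_hebb += eta * y * x          # pure Hebbian: "fire together, wire together"
                                   # positive feedback -> |w| grows without bound

    y = w_oja @ x
    w_oja += eta * y * (x - y * w_oja)  # Oja's rule: decay term keeps |w| near 1

print(np.linalg.norm(w_hebb), np.linalg.norm(w_oja))
```

The pure Hebbian weights blow up geometrically, while Oja's decay term pins the norm near 1 and the weights settle along the principal direction of the input, which is one illustration of why Hebb's rule alone can't be the whole story.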
EDIT: Great talk from Chomsky https://youtu.be/TAP0xk-c4mk
Antonio Damasio is a good read.
But unlike pure scientific pursuit, as in academia, startups have a time limit by which they need to produce an end result, i.e., a product, and its success depends on several other variables.
There are startups that are primarily research oriented, but the majority of so-called 'AI' startups are working on producing a commercial solution from someone else's thesis, which itself may not have been shown to offer a significant advantage over conventional methodologies.
When the VCs fail to get their returns from a number of failed 'AI' investments, it affects the entire ecosystem.
There are only two or three companies in the top 25 organizations leading AI research. The vast majority are universities. https://link.medium.com/L2JrbVKAT1
Nah, just learning PyTorch now.
I don't know how to quantify it precisely, but it seems to me that linear increases in intelligence require exponential increases in computing power, both in digital AI and in animals.