
I think we're still lacking the conceptual framework, the grand scheme if you will.

AGI probably requires some kind of hierarchy of integration, and current AI is only the bottom level. We probably need to build a heck of a lot of levels on top of that, each one doing more complex integration, and likely some horizontal structure as well, with various blocks coordinating with each other.

That's my intuition as well. I'd expect AGI to arise once we combine enough different tools into an integrated whole. So for classification of sensory input, you'd build in a CNN or the like. You'd build in something else (an RNN?) for NLP. You'd use something else again for planning (which might also serve as the highest-level controller, deciding "what sort of tool should I apply to solve goal X?").
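The "toolbox plus top-level controller" idea above can be sketched in a few lines. Everything here is a hypothetical stand-in (the stub classifiers, the goal names, the `Controller` class are all made up for illustration), not a real architecture:

```python
# Sketch: specialized modules behind a common interface, with a
# top-level controller that routes each goal to the right tool.

from typing import Callable, Dict

def image_classifier(x: bytes) -> str:
    # stand-in for a CNN: always answers "cat"
    return "cat"

def text_parser(x: str) -> list:
    # stand-in for an RNN/NLP module: naive whitespace tokenization
    return x.split()

class Controller:
    """Top-level planner: picks which tool to apply to a given goal."""

    def __init__(self) -> None:
        self.tools: Dict[str, Callable] = {}

    def register(self, goal: str, tool: Callable) -> None:
        self.tools[goal] = tool

    def solve(self, goal: str, data):
        if goal not in self.tools:
            raise ValueError(f"no tool registered for goal {goal!r}")
        return self.tools[goal](data)

controller = Controller()
controller.register("classify_image", image_classifier)
controller.register("parse_text", text_parser)

print(controller.solve("parse_text", "what sort of tool"))
# ['what', 'sort', 'of', 'tool']
```

The hard part, of course, is not the dispatch table but making the controller itself learned rather than hand-wired.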

I wonder what this would look like? What would be some ways one could connect different types of AI systems in a loose and fuzzy way so that they can use each others' output in a meaningful way?
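One classic answer to this question is a shared "blackboard": components read whatever partial results they can use and post their own, without knowing each other's interfaces. A minimal sketch, with entirely hypothetical component names:

```python
# Sketch of the blackboard pattern for loosely coupling AI components.

class Blackboard:
    def __init__(self):
        self.facts = {}

    def post(self, key, value):
        self.facts[key] = value

    def get(self, key):
        return self.facts.get(key)

# Components share only the blackboard, never each other's APIs.
def vision_component(bb):
    # pretend a classifier just labeled the current camera frame
    bb.post("scene_label", "kitchen")

def language_component(bb):
    label = bb.get("scene_label")
    if label is not None:
        bb.post("utterance", f"I appear to be in a {label}.")

bb = Blackboard()
vision_component(bb)
language_component(bb)
print(bb.get("utterance"))
# I appear to be in a kitchen.
```

The "loose and fuzzy" part is exactly what this buys you: components can be added or removed without rewiring the others.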

I like this line of reasoning: you're basically saying that we've found effective ways of emulating some of the sensory parts of a central nervous system. That seems intuitively right; we can classify things and use the outputs of that classification in some pre-determined ways, but there's no higher-level reasoning.

There are many hierarchical AI systems already, though. For example, hierarchical hidden Markov models.
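To make the idea concrete, here's a toy two-level generative process in the spirit of a hierarchical HMM: a top-level chain over "regimes" selects which low-level transition matrix produces the observed symbols. The state names and probabilities are made up, and this only shows sampling, not inference:

```python
# Toy two-level hierarchical Markov process: the top-level regime
# chooses the bottom-level symbol dynamics at every step.

import random

random.seed(0)

# Top level: two regimes, each tending to persist.
regime_transition = {
    "calm":  {"calm": 0.9, "storm": 0.1},
    "storm": {"storm": 0.8, "calm": 0.2},
}

# Bottom level: symbol transitions conditioned on the current regime.
symbol_transition = {
    "calm":  {"a": {"a": 0.8, "b": 0.2}, "b": {"a": 0.7, "b": 0.3}},
    "storm": {"a": {"a": 0.2, "b": 0.8}, "b": {"a": 0.3, "b": 0.7}},
}

def step(row):
    """Sample the next state from a {state: probability} row."""
    r, acc = random.random(), 0.0
    for state, p in row.items():
        acc += p
        if r < acc:
            return state
    return state  # guard against floating-point rounding

def sample(n, regime="calm", symbol="a"):
    out = []
    for _ in range(n):
        regime = step(regime_transition[regime])
        symbol = step(symbol_transition[regime][symbol])
        out.append(symbol)
    return out

print("".join(sample(20)))
```

Real hierarchical HMMs also let a sub-chain "return" control to its parent, which is what gives them their recursive structure; this sketch flattens that away.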

Pretty sure DeepMind made an RL system a bit like that.

Also reminds me of Ogma AI's technology.

Well, a scooter and the Falcon Heavy are both "vehicles", but that doesn't mean the scooter will ever do what the rocket does.
