Neural networks are great at pattern recognition. Architectures like LSTMs extend that pattern recognition through time, so they can develop "memories". This is useful for tasks like understanding text, where the meaning of one word often depends on the previous few words.
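To make that concrete, here is a minimal LSTM cell written from scratch in NumPy (an illustrative sketch with random weights, not a trained model): the cell state `c` and hidden state `h` are carried from one time step to the next, which is the mechanism that lets the network "remember" earlier inputs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W, U, b hold the weights for all four gates,
    stacked in order: input gate i, forget gate f, output gate o,
    candidate update g."""
    H = h.shape[0]
    z = W @ x + U @ h + b            # pre-activations for all gates, shape (4*H,)
    i = sigmoid(z[0:H])              # input gate: how much new info to write
    f = sigmoid(z[H:2*H])            # forget gate: how much old memory to keep
    o = sigmoid(z[2*H:3*H])          # output gate: how much memory to expose
    g = np.tanh(z[3*H:4*H])          # candidate cell update
    c = f * c + i * g                # cell state mixes old memory with new input
    h = o * np.tanh(c)               # hidden state passed to the next time step
    return h, c

rng = np.random.default_rng(0)
D, H = 3, 4                          # input size, hidden size (arbitrary)
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for t in range(5):                   # feed a short sequence of random inputs
    x = rng.normal(size=D)
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)                       # final hidden state depends on the whole sequence
```

The key point is the line `c = f * c + i * g`: the memory here lives *inside* the weights and states of the cell, which is exactly the limitation the external-memory work below tries to lift.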
But how can a neural network know "facts"?
Humans have things like books, or the ability to ask others for things they don't know. How would we build something analogous to that for neural network-powered "AIs"?
There's been a strand of research on this, mostly coming out of Jason Weston's work on Memory Networks. The DNC paper builds on that line with a new form of external memory, and shows that it can perform well on some pretty difficult tasks, including graph tasks like traversing the London Underground map.
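The core idea shared by Memory Networks, Neural Turing Machines and the DNC is that memory is a separate, differentiable matrix that the controller reads by content: it emits a "key" vector, and the read result is a softmax-weighted blend of the memory rows most similar to that key. A toy sketch of that content-based read (names and the sharpness parameter `beta` are illustrative, not from the paper):

```python
import numpy as np

def content_read(memory, key, beta=10.0):
    """memory: (N, W) matrix of N slots; key: (W,) query vector.
    Returns a soft, differentiable read over the slots; beta sharpens
    the attention distribution toward the best-matching slot."""
    # cosine similarity between the key and every memory slot
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)
    w /= w.sum()                     # soft attention weights over the slots
    return w @ memory                # weighted read vector, shape (W,)

# Toy memory holding three stored "facts" as one-hot rows
memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
read = content_read(memory, key=np.array([0.9, 0.1, 0.0]))
print(read)  # dominated by the first row, since the key matches it best
```

Because every step is differentiable, the controller can learn by gradient descent *what* to store and *which* keys to query with, which is what lets a DNC learn graph tasks like the Underground traversal end to end.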
One good quote showing how well it works:
In this case, the best LSTM network we found in an extensive hyper-parameter search failed to complete the first level of its training curriculum of even the easiest task (traversal), reaching an average of only 37% accuracy after almost two million training examples; DNCs reached an average of 98.8% accuracy on the final lesson of the same curriculum after around one million training examples.
Would this be an apt metaphor: LSTMs are like a student who has to memorize how to do every problem before the test, while a DNC learns how to take the test but can also look at its notes?
I guess it is unlikely that one could have an AGI without some kind of memory, so there is that.
Memorizing is just one of the actions such an agent is able to perform. Attention is another. It would also need to be able to simulate the world, and the people and systems it interacts with (to know how they behave), in order to reason and plan.
In short, an AGI would need: sensing (deep neural nets for vision, audio and other modalities), attention, memory, estimating the desirability and effects of various actions (a kind of imagination), an extensive database of common known facts, and the ability to act (for example by speech and movement).
Many of these systems have been demonstrated. Sensing, attention and memory are commonplace in ML papers. Creativity is demonstrated in generative models that can write text, compose music and paint. The ability to predict the future and reason about it was demonstrated in AlphaGo. Speech and motor control are under development. We have most of the necessary blocks, but nobody has put them together to form a functioning general AI yet.
My preferred one is "An AGI is one which knows which are sensible questions to ask".
That's because it seems to me that most "AI-lite"-type goals are procedural. AGI needs to have agency.