I found this particularly interesting. I have a young daughter, and I find it fascinating how quickly and reliably she can learn an object and then recognise others in a way a computer just can't do at all. E.g. she can see a very rough line drawing of an animal, then recognise that animal in real life.
It's also interesting the way I can explain new objects or animals based on ones she already knows and then have her recognise them easily. E.g. a tiger is like a lion, but has no mane and does have stripes.
This comes very naturally when teaching and learning and feels very uniquely ‘human’ compared to the rigidity of computer vision I’ve seen so far.
This project blows my mind, by the way. I still think it's the coolest thing to come out of AI research so far.
I'm astounded. Zero-shot visual reasoning?? As an emergent capability?? Um. wat.
>"GPT-3 can be instructed to perform many kinds of tasks solely from a description and a cue to generate the answer supplied in its prompt, without any additional training. For example, when prompted with the phrase “here is the sentence ‘a person walking his dog in the park’ translated into French:”, GPT-3 answers “un homme qui promène son chien dans le parc.” This capability is called zero-shot reasoning. We find that DALL·E extends this capability to the visual domain, and is able to perform several kinds of image-to-image translation tasks when prompted in the right way."
Is there any way to try it out easily? Everything on that page looked like it was pre-rendered (constrained-choice mad libs).
It does appear pre-rendered and my guess is the blog might only showcase the use cases that worked the best. Or it might take a really long time to generate results.
Given some text options and an image, CLIP will predict/guess which of the texts is a description of the image.
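The code and weights for CLIP are on GitHub, so you can poke at this directly. Here's a minimal sketch, roughly following the openai/CLIP README (the image path and candidate captions are placeholders):

    # Zero-shot "which caption fits this image?" with CLIP.
    # Assumes: pip install from github.com/openai/CLIP; photo.jpg is any image.
    import torch
    import clip
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
    text = clip.tokenize(["a photo of a lion",
                          "a photo of a tiger",
                          "a rough line drawing of a tiger"]).to(device)

    with torch.no_grad():
        # CLIP scores every (image, caption) pair in a shared embedding space
        logits_per_image, logits_per_text = model(image, text)
        probs = logits_per_image.softmax(dim=-1).cpu().numpy()

    print("caption probabilities:", probs)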
It handles images in various art styles well?
See: https://openai.com/blog/clip/
The reasoning required for making the building blocks image is really impressive.
It's funny, isn't it, how we're so amazed at these abilities, but even a fairly slow-witted child could do all this stuff.
While Numenta doesn't have the amazing results of DeepMind and friends, I think they are doing really great stuff in the space of biologically plausible intelligence.
A new book by Jeff Hawkins is coming out.
"Brilliant....It works the brain in a way that is nothing short of exhilarating." ― Richard Dawkins.
Assuming consciousness is physical, surely we can track any and every physical mechanism in the brain "accessible" to consciousness as it builds itself up. How can we still not take that set of physical mechanisms and build a lasting model of consciousness?
I have a personal bet that no progress will be made on consciousness in my lifetime. Maybe one of the theories we have is correct and we just can't test it properly. I could easily be wrong. But it seems like we lack the imagination, not the technical ability, to find the answer. And from my outside perspective, not much has changed in 30 years. Look, I'm genuinely puzzled by science's inability to produce a working model of consciousness. I'm not asking rhetorically why we have no lasting models; I honestly don't know what the reason could be.
To me these are even more "woo" than string theory, which gets lambasted around the popular alt big-science channels.
Having a model does not mean you can necessarily run the computations to sufficient accuracy to make good predictions. Even with something as well-understood as gravity, N-body problems are still quite difficult and the calculations must be numerically approximated. Those approximations work fine for most systems, but they diverge rapidly once the system is chaotic, because tiny perturbations get amplified exponentially.
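To make that concrete, here's a toy sketch (my own illustration, nothing rigorous: three equal-mass bodies, G = 1, leapfrog integration) where nudging a single coordinate by one part in a billion sends two otherwise identical simulations to measurably different states:

    # Toy 3-body sketch: two runs differ only by a 1e-9 nudge to one
    # coordinate; in a chaotic configuration the runs drift apart.
    import numpy as np

    def accelerations(pos):
        # pairwise gravitational acceleration, G = m = 1, lightly softened
        acc = np.zeros_like(pos)
        for i in range(len(pos)):
            for j in range(len(pos)):
                if i != j:
                    r = pos[j] - pos[i]
                    acc[i] += r / (np.linalg.norm(r) ** 3 + 1e-6)
        return acc

    def simulate(pos, vel, dt=1e-3, steps=20000):
        pos, vel = pos.copy(), vel.copy()
        acc = accelerations(pos)
        for _ in range(steps):          # leapfrog (kick-drift-kick)
            vel += 0.5 * dt * acc
            pos += dt * vel
            acc = accelerations(pos)
            vel += 0.5 * dt * acc
        return pos

    pos0 = np.array([[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]])
    vel0 = np.array([[0.0, 0.4], [-0.35, -0.2], [0.35, -0.2]])
    pert = pos0.copy()
    pert[0, 0] += 1e-9                  # one part in a billion

    a, b = simulate(pos0, vel0), simulate(pert, vel0)
    print("separation between the two runs:", np.linalg.norm(a - b))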
A few different photons impinging on an eye can completely change the state of the brain a few seconds later.
I wonder, because if there were such standards, they should apply to the simulated copy of the worm's mind as well.
It depends on whether the simulation itself is conscious, which is an open and much-debated question in the philosophy of mind (is our wetware the only feasible substrate?). If the answer is no, it's not capable of suffering and not worthy of ethical concern; but as of yet we don't know the answer. We may of course wish to assume that it is conscious, just for the sake of risk aversion.
Imagine a worm hell webpage that had a full neural simulation of the worm with its pain receptors all firing constantly. Would it be immoral to open that page and leave it running? Imagine instead of getting rickrolled by a link, you get wormhelled by a link and cursed with the knowledge your computer just instantiated a worm spirit to hurt it. What if someone made a webpage that had that secretly embedded in the background?
(What about the opposite: what if you had a worm heaven simulation in a webpage? Are you doing anything good by leaving that running for a while and then closing it? Does it make a difference if there's persistence or not?)
Imagine if you placed the worm hell or heaven simulation in a smart contract on a blockchain and made transactions to execute it. The simulation would be executed by every node in the network that verifies the transaction, and it would be executed again by anyone that verifies the blockchain any time in the future.
Certain compressed file and video formats contain instructions for computing the decompressed results. Imagine encoding the worm hell or heaven simulation in a short video file, so that merely viewing the video file causes the computation to happen on your device.
Well this is pretty moot anyway while we're talking about worms specifically, but at some point it might be doable with higher animals like mice. Or humans. Substitute different animals for worm in any of this.
My current thinking is that computations that don't use their results to meaningfully affect the outside world are meaningless. Nobody who opens the webpage that has it secretly in the background without realizing it is committing any sin. Creating or deliberately accessing the worm hell webpage (without entangling the results meaningfully with the external world) is bad only to the degree that it normalizes the idea in your mind of hurting beings that are fully entangled in our world.
If you simulated a worm hell, and then physically instantiated the resulting worm in real life (say by writing its neural data into the neurons of a real live worm), then you should be considered as having done the moral equivalent of torturing a live worm. If you run worm hell and then give the resulting worm full internet access, then you've done the moral equivalent of torturing a live worm to the degree that worms can live, socialize, and empathize online (which is not much in the case of worms, but is significant for humans).
If you run worm hell and then persist the result, it's bad in the sense that there's continuous potential for the situation to become one of the above. Sharing the results of a worm hell simulation is worse, because it increases the potential for the above.
I don't think there's any moral difference between running two identical deterministic worm hell simulations and saving the results, vs running the worm hell simulation once and saving the results and then copying the results file once. (I don't think copying the results file makes much difference except as much as it increases the risk of sharing the results.)
To me at least, understanding is quantified by the difference between the number of bits in your model and the number of bits in the thing you want to understand. "True understanding" is when you've gotten the model down to a small enough size that you can fit it in your head. But there is no a priori law that everything interesting in the universe should be able to fit in your head. We've just been lucky so far. Maybe we will keep making progress understanding how consciousness works, but never shrink it down far enough that we are satisfied with the "explanation".
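A toy way to see the bits framing (my own example, using zlib as a stand-in for "finding the model"): data generated by a tiny rule shrinks to almost nothing, while data with no rule to find barely shrinks at all.

    # "Understanding as compression": a regular megabyte vs. a random one.
    import os
    import zlib

    regular = bytes(range(256)) * 4096      # ~1 MiB produced by a one-line rule
    random_ = os.urandom(len(regular))      # ~1 MiB with no rule to find

    print(len(zlib.compress(regular)))      # a few KiB: the "model" is tiny
    print(len(zlib.compress(random_)))      # ~1 MiB: nothing to understand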
You mention our understanding of physics. That understanding only got pinned down and refined by spending billions on large research projects.
The only brain-related research project I know of with a budget even close to what we routinely spend on physics is the Human Brain Project, and that only started in 2013. It's not expected to be complete until 2023, and even then it is only laying the foundation for more ambitious projects to build on.
To add to other replies, there's a gap in our ability to measure these mechanisms.
We can see the big picture with bad resolution (e.g. lesion studies, EEG, fMRI), or take tiny keyhole peeps with somewhat better resolution (e.g. single-neuron recording or MEAs). Moreover, each of these methods gives us just one narrow kind of indirect data (e.g. blood oxygenation, or electrical activity at a few synapses) out of a rich functional context.
They are amazing but perhaps still far too limited to inform truly detailed models and understanding.
Building an artificial brain out of billions of artificial neurons and synapses is a worthy goal, but it could be like building a working computer: having the computer does not tell us what is running on it. Software is the ghost in the machine. We don't want to simulate neurons and synapses, we want to simulate the programs that are running in actual brains. But what are they? And how could we find that out?
I believe we'll finally figure out some 'Maxwell's Equations of Consciousness', stare at them for a while, then collectively say 'ah crap' because seeing the maths doesn't equate to understanding what it means.
(Perhaps, with a bit more "arbitrary horizontal diversification based on subsets of inputs" rather than mainly "specialization by function".)
You could view it as 'thousand brains' with lots of weight sharing for efficiency+regularization.
There is no global “teaching signal” or “delta rule” error correction. Learning via reward and punishment is the wrong level of abstraction for fundamental cognitive tasks like visual object recognition or “parsing” an auditory signal.
Sometimes people mention dopamine as a kind of reinforcement signal, but it operates on a completely different time scale, orders of magnitude slower than any iterative optimization model would require.
And the energy and time spent on iterative optimization in ANNs are not available to living organisms with constrained resources.
If you're interested in an authoritative opinion on what kind of learning is biologically plausible, see e.g. Prof. Edmund Rolls's recent book, “Brain Computations”.
It's about computing the amount by which you adjust the weight.
And unless there's been a major development in neuroscience that I'm not aware of, backprop is not the way the brain does it.
- there's not a "correct" training prediction
- there's not a "final layer"
Recently they talked about grid cell models: https://twitter.com/Numenta/status/1357836938955218945
and reviewed the new 'Tolman-Eichenbaum Machine' memory model from James Whittington's lab: https://twitter.com/Numenta/status/1362187375900651526
The way an artificial neural network actually functions can most certainly be interpreted as multiple sub-systems "voting" to reach a conclusion.
You can even explicitly design architectures to perform that task exclusively (mixture of experts).
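A minimal PyTorch sketch of that idea (my own toy, with made-up sizes): each "expert" is a small MLP, and a learned gate decides how much each one's vote counts.

    # Mixture of experts: N small nets vote; a gate weights the votes.
    import torch
    import torch.nn as nn

    class MixtureOfExperts(nn.Module):
        def __init__(self, in_dim, out_dim, n_experts=8, hidden=32):
            super().__init__()
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                              nn.Linear(hidden, out_dim))
                for _ in range(n_experts)])
            self.gate = nn.Linear(in_dim, n_experts)

        def forward(self, x):
            votes = torch.stack([e(x) for e in self.experts], dim=1)  # (B,E,out)
            weights = torch.softmax(self.gate(x), dim=-1)             # (B,E)
            return (weights.unsqueeze(-1) * votes).sum(dim=1)         # weighted vote

    model = MixtureOfExperts(in_dim=16, out_dim=10)
    print(model(torch.randn(4, 16)).shape)    # torch.Size([4, 10])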
Since ANNs were loosely modeled on the way we assumed the brain works ... back in the '50s ... how is this an original idea?
Am I missing something?
FWIW there definitely was an old paper where the animal model (IIRC a ferret?) was deafened and had its visual cortex ablated, and then they wired the lower-level visual regions to the auditory cortex, quite early in development. The animal could sort of see, and the auditory cortex developed a bunch of visual-cortex-like properties, which is pretty awesome. But at the same time it was certainly not the same thing as using the visual cortex: if you look at the actual behavior, the animal had clear deficits in its visual capabilities. Of course that is still far from a perfect version of the experiment being described.
Anyway, I read that a long while ago but here is a link to a broader review written by the researcher that did that work, in case anyone is interested in digging further: https://pubmed.ncbi.nlm.nih.gov/16272112/
Further, each of the thousand neural nets could have a different learning algorithm.
Once you start thinking about how voting could be implemented, you quickly realize that one of the best ways to do it is to connect each of the 1000 networks to each of the N outputs with some sort of weights (uniform, in the trivial version), which effectively turns the 1000 networks into one network with an extra N-unit layer and no weight reuse (probably a disadvantage) in all but the last layer.
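A quick numpy check of that equivalence (made-up sizes): averaging the votes of 1000 nets is literally the same computation as appending one uniform-weight layer.

    # Ensemble voting == one extra layer with uniform weights.
    import numpy as np

    K, out_dim = 1000, 10
    votes = np.random.randn(K, out_dim)        # each net's output for one input

    ensemble = votes.mean(axis=0)              # "1000 networks voting" view
    extra_layer = np.full(K, 1.0 / K) @ votes  # "one bigger network" view

    print(np.allclose(ensemble, extra_layer))  # True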
Numenta Platform for Intelligent Computing - https://news.ycombinator.com/item?id=24613866 - Sept 2020 (23 comments)
Jeff Hawkins: Thousand Brains Theory of Intelligence [video] - https://news.ycombinator.com/item?id=20326396 - July 2019 (94 comments)
The Thousand Brains Theory of Intelligence - https://news.ycombinator.com/item?id=19311279 - March 2019 (37 comments)
Jeff Hawkins Is Finally Ready to Explain His Brain Research - https://news.ycombinator.com/item?id=18214707 - Oct 2018 (69 comments)
IBM creates a research group to test Numenta, a brain-like AI software - https://news.ycombinator.com/item?id=9401697 - April 2015 (19 comments)
Jeff Hawkins: Brains, Data, and Machine Intelligence [video] - https://news.ycombinator.com/item?id=8804824 - Dec 2014 (15 comments)
Jeff Hawkins on the Limitations of Artificial Neural Networks - https://news.ycombinator.com/item?id=8544561 - Nov 2014 (16 comments)
Numenta Platform for Intelligent Computing - https://news.ycombinator.com/item?id=8062175 - July 2014 (25 comments)
Numenta open-sourced their Cortical Learning Algorithm - https://news.ycombinator.com/item?id=6304363 - Aug 2013 (20 comments)
Palm founder Jeff Hawkins on neurology, big data, and the future of AI - https://news.ycombinator.com/item?id=5917481 - June 2013 (6 comments)
Numenta releases brain-derived learning algorithm package NuPIC - https://news.ycombinator.com/item?id=5814382 - June 2013 (59 comments)
The Grok prediction engine from Numenta announced - https://news.ycombinator.com/item?id=3933631 - May 2012 (25 comments)
Jeff Hawkins talk on modeling neocortex and its impact on machine intelligence - https://news.ycombinator.com/item?id=1945428 - Nov 2010 (27 comments)
Jeff Hawkins' "On Intelligence" and Numenta startup - https://news.ycombinator.com/item?id=59012 - Sept 2007 (3 comments)
The Thinking Machine: Jeff Hawkins's new startup, Numenta - https://news.ycombinator.com/item?id=3539 - March 2007 (3 comments)
When neural nets seriously hit the scene 30-odd years ago (I read a lot of the early papers, even having to mail off to get some in those pre-Internet days), no one actually thought they were the way the brain works, just that they presented the opportunity to simulate a tiny slice of the way brains might work.
As Ted Nelson so perfectly puts it, "Everything is deeply intertwingled."