In case anyone is looking, here is the paper they presented: https://drive.google.com/file/d/13u14zNsi7uthLpwCoMrckwhasau...
But isn't what they call sleeping basically regularization? In particular, something like mini-batching and dropout?
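For concreteness, here's roughly what I mean by dropout acting as regularization — a minimal sketch, with made-up shapes and rates, not anything from the paper:

    # Inverted dropout: randomly silence units during training and
    # rescale so the expected activation is unchanged at inference.
    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(activations, p_drop=0.5, training=True):
        if not training:
            return activations
        mask = rng.random(activations.shape) >= p_drop
        return activations * mask / (1.0 - p_drop)

    h = rng.standard_normal(8)   # a hypothetical hidden-layer activation
    print(dropout(h))            # roughly half the units are zeroed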
The model that they are creating is a sparse model. This means that, for the vast majority of possible inputs, the machine should produce no hypothesis whatsoever about what it is looking at. A good sparse model only produces a representation when the input lies in the space of valid inputs: given a valid input, it should come up with a representation of that input; given garbage, it should prefer to produce nothing.
Signal is pretty easy to find: you just open your eyes, explore your environment, and update your weights accordingly.
We would also like to train the model to recognize when it is looking at nothing, but we have to be pretty careful about what we show it. If any signal is present at all, we definitely don't want to discourage our model from finding it; that was, after all, the point of the model in the first place. Fortunately, because of the size of the input space, we can be certain that if we completely shut off the inputs and feed the model noise instead, the result is very, very far from any valid input, and anything it includes in its representation was included in error.
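To make that concrete, here is a very rough sketch of the idea — not the paper's actual training procedure; the architecture, threshold, and learning rate are all invented:

    # Alternate "wake" steps on real inputs with "sleep" steps on pure
    # noise, where any nonzero representation counts as an error.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden = 64, 16
    W = rng.standard_normal((n_hidden, n_in)) * 0.01
    lr = 0.01

    def encode(x, threshold=1.0):
        a = W @ x
        return np.where(a > threshold, a, 0.0)   # sparse: most units stay silent

    def wake_step(x):
        """Wake phase: Hebbian-style strengthening of units that respond
        to a real input."""
        global W
        W += lr * np.outer(encode(x), x)

    def sleep_step():
        """Sleep phase: show pure noise; any active unit is in error, so
        push its weights down until noise evokes no representation."""
        global W
        x = rng.standard_normal(n_in)             # garbage input
        W -= lr * np.outer(encode(x), x)          # suppress responses to noise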
> “The issue of how to keep learning systems from becoming unstable really only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself,” said Los Alamos computer scientist and study coauthor Garrett Kenyon. “The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”
- Why do we dream?
- Why do we process information while we sleep?
- Why is lack of sleep associated with serious health problems?
- Why is the amount of sleep so long (about a third of the day in the most common case), yet so variable from person to person and by age?
A model of the brain as a neural network that's constantly learning, but that needs a restart every so often for stability, doesn't seem convincing in light of the other things we think we know about sleep.
2) Well, it would get in the way of actually living. If you're walking down some stairs and your brain's motor cortex suddenly starts consolidating a skill, you could fall and hurt yourself.
3) I don't think we know that. Possibly we had to sleep for memory-related reasons, and our bodies just evolved to use that time for other sorts of important housekeeping, since we weren't moving around doing stuff anyway.
4) Learning is very important and takes time. In addition to the procedural memory consolidation in REM sleep, we also move long-term memory to a more stable form of storage in other phases. But neural nets don't really have anything analogous to a brain's explicit memory, so this doesn't really bear on that.
Sleep is primarily an energy-conserving activity; being able to survive periods of low energy availability (winter, for example) is an evolutionary advantage.
The book Why We Sleep made me think of this hypothesis.
He states that dreaming likely came before consciousness too, which I found interesting.
As in, our early brains constructed reality using the same mechanism as dreaming.
I'm very excited about the budding neuromorphic phase of generality research, precisely because of these kinds of discoveries.
So I’d argue that ML systems already “sleep”; it just looks more like dolphin sleep, due to their similar need for continuous operation.
Yesterday I was listening again to some videos of rain, to help me relax. I know very little about what happens when we sleep or why we dream. But if it is sort of like exposing our brains to soft noise, then that helps me understand why rain relaxes me.
Definitely not as some sort of half-baked ML project, though, that pushes simplified articles at demographics it assumes can't read.
You're proposing something like "Accept-Semantic-Complexity", or maybe extending Accept-Language somehow to include an acceptable complexity. It could work in principle, but I doubt anyone would use it. Many websites already have a tendency to ignore Accept-Language and instead use some fuzzy region detection when they offer multiple languages.
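For illustration, here is roughly what that could look like from the client side — with the big caveat that "Accept-Complexity" is entirely invented and no server honors it:

    # Hypothetical sketch only: "Accept-Complexity" is not part of any
    # HTTP spec; this just shows what such a negotiation might look like.
    import requests

    resp = requests.get(
        "https://example.com/article",          # placeholder URL
        headers={
            "Accept-Language": "en-US,en;q=0.9",
            "Accept-Complexity": "novice;q=1.0, expert;q=0.2",  # invented header
        },
    )
    print(resp.status_code)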
That said, expandable context and background boxes are still a great idea.
Think about how real-time guarantees on GC are bad for throughput.
Evolution can get "stuck" when optimizations become obligatory. One theory of "hot-bloodedness" (which is not one thing, nor something that evolved once, to be clear) is that large species that passively maintained some body temperature then shrank and were forced to actively maintain it.
It is a simple, easy-to-state hypothesis of what sleep is doing in organic brains, and you should note that there is an extreme paucity of those.
Perhaps the artificial brain referred to in the article is not the kind of artificial brain you're used to reasoning about, but the goal of these researchers is not to optimize performance on ImageNet; it is to discover how the brain actually organizes itself. You should give it a read.
“AI may need friends too”
“AI may need training too!”
Called energy, which is electricity.
>“AI may need friends too”
Called a network and nodes.
>“AI may need training too!”
GPT-3 really does mean Generative Pre-trained Transformer.
Yet today, cutting-edge deep learning technology is based on a crude and increasingly inaccurate model of neurons.
If we're now making discoveries that are revealing artificial processes that are similar to our own, it's a sign we're headed in the right direction.
I'd dispute that any time, any day: just because we define what intelligence is does not mean that we are intelligent.
The useful things that we want to get out of an AI system, i.e., generalized learning of abstract concepts, are most clearly demonstrated by the human brain.
Since we now seem to prefer downvotes over discussion, I'll just leave this with my own speculation: the reason for this strange avoidance of the brain is that it's a dead end for both academia and industry.
It's much easier and more profitable to expand on already existing machine learning technologies than to try and find some revolutionary breakthrough in neuroscience.
EDIT: Wow, you changed your comment so much that my comment makes almost no sense anymore.
Machinery does not need to sleep. Networks do not need to sleep. Roads do not need to sleep.
Living creatures do.
Machines and living creatures are not the same thing; they do not 'learn' the same way.
So, yeah, in a way a brain is like a computer; it works as a metaphor. But no, a brain is not a computer in reality. So it's a poor use of the metaphor.
Or, put another way, this sort of stuff comes from conflating the map with the terrain: a map can be useful at one level, but it is not the terrain itself, so it's useless at a different level of observation.
what is this pls?
A neuromorphic processor is a processor that tries to function as the brain does. I say "tries to" because the models are still very simplified versions of how we think a neuron works. The spiking part comes from one of these models: a neuron will only send an electrical signal to another neuron once a certain threshold is met, and this is what is meant by a spike. What this allows us to do is add a temporal/time component to a neural network. You've probably heard that "neurons that fire together, wire together". A lower threshold means that the neuron you are sending the signal to is more relevant to whatever thought process is going on right now, and vice versa. New input can change these thresholds.
The biggest promise of spiking neuromorphic computing seems to be a massive reduction in energy usage while still offering decent accuracy. For example, you could use it to train a neural network to 80% accuracy, after which you'd let another type of network take over to get to 95%. This field is still in its infancy, though, so expect things to change and improve fast.
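If it helps, here is a minimal leaky integrate-and-fire neuron, one of the standard simplified spiking models I mean above; all the constants are illustrative:

    # Integrate input current over time; emit a spike (1) whenever the
    # membrane potential crosses the threshold, then reset.
    import numpy as np

    def lif(inputs, threshold=1.0, leak=0.9, v_reset=0.0):
        v, spikes = 0.0, []
        for i in inputs:
            v = leak * v + i           # leaky integration of input current
            if v >= threshold:
                spikes.append(1)       # threshold crossed: spike...
                v = v_reset            # ...and reset the potential
            else:
                spikes.append(0)
        return spikes

    print(lif(np.full(20, 0.3)))  # weak steady input -> sparse, regular spikes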
i.e. Hebbian learning
Usually with a Hopfield network or something similar.
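For anyone curious, here is a quick sketch of that classic Hebbian/Hopfield combination — the patterns and sizes are made up, and real implementations vary:

    # Store patterns with the Hebbian outer-product rule ("neurons that
    # fire together, wire together"), then recall from a corrupted cue.
    import numpy as np

    patterns = np.array([[1, -1, 1, -1, 1, -1],
                         [1, 1, 1, -1, -1, -1]])
    n = patterns.shape[1]

    # Hebbian storage: W_ij grows when units i and j are co-active.
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)

    # Recall: start from a noisy cue and let the network settle.
    state = np.array([1, -1, 1, -1, 1, 1])   # first pattern, last bit flipped
    for _ in range(5):
        state = np.sign(W @ state)
    print(state)                              # converges to the stored pattern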
My personal take is that biologically plausible approaches are attempting to discover (and in lieu of understanding, harness) the algorithms (or mechanisms that cause emergent intelligence) of our brains.
Modern airplanes don't flap their wings either.
However, it seems increasingly likely that the empirical question may be answered at some point soon!