This hasn't been true for at least 20 years. There's a classic paper from 1993 (O'Keefe and Recce) showing evidence of this in rats, and it has been an active area of discussion both before and since. (Note that this isn't to say that rate isn't important, but as a community no one has believed for many years that all the information is encoded in rate.)
There's lots of good explainers here that link to relevant research: http://www.scholarpedia.org/article/Encyclopedia:Computation...
Lowly mice likely have better olfactory capabilities than us. It wouldn't surprise me if their brains can handle some very specific things better than we do.
Anecdotally: in front of us, a woman with a dog is leaving the dog park, and the dog pulls in the opposite direction from the one the woman tries to go. This goes on for a few seconds until the woman says, "Oh! You're right, today we parked over there," and follows the dog.
And on two occasions, separated in time and space, I saw a raccoon confidently crossing a heavy-traffic boulevard (once on Page Mill in Palo Alto, once on Geary in SF. The Geary crossing was around 6-7pm, getting dark, high traffic; such a crossing presents some cognitive challenge even for humans). The raccoon followed the basic procedure we are taught in childhood: check for traffic (and the raccoons were checking in the correct direction), wait for a sufficient opening, cross to the middle, check and wait again, cross.
Thank you, thank you, thank you.
This fills an important gap.
Everyone else has realized that both happen depending on how that particular part of the nervous system works, or even what particular kind of information is flowing through it.
This title reads like it was written by a rate coder who woke up one day and was like "Woah, you mean ... there might be more to it than just averaging spike counts per unit time???"
edit: Hah, they are literally the first two headings in the Wikipedia article on neural coding.
I think there almost certainly must be neural behavior which codes fairly simply based on rate, but it's very difficult to believe that there isn't neural computation based on precise timing relationships.
Source? I'm not a neuroscientist, but I'd have thought that I'd have heard of this if there were legit evidence. Are you saying that neurons might use RNA as a sort of "long-term" memory of activation patterns? This seems really unlikely! But again, I'm not a neuroscientist.
Changes to DNA would be the long term changes, not RNA. But yes, look up epigenetics, and especially DNA methylation.
There was an article on this that I don't know how to find again - the point being that neurons are not unique in their capacity for computation; they are only the most evolved form of it.
The structure of neurons allows them to achieve "adjacency" to other cells which would be far from one another in most tissues, and electrical transmission of the action potential just allows neurons to control that chemical signaling in a very precise and directed way.
O'Keefe and Recce in 1993: https://onlinelibrary.wiley.com/doi/abs/10.1002/hipo.4500303...
... [F]iring consistently began at a particular phase as the rat entered the field but then shifted in a systematic way during traversal of the field, moving progressively forward on each theta cycle... The phase was highly correlated with spatial location... [B]y using the phase relationship as well as the firing rate, place cells can improve the accuracy of place coding.
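A hedged way to picture the phase code the paper describes: if spike phase falls roughly linearly from field entry to field exit, then phase alone carries a within-field position estimate that rate alone cannot. The linear mapping and normalized field extent below are idealized assumptions, not the paper's fitted model.

```python
# Hypothetical sketch of phase coding of position (O'Keefe & Recce, 1993):
# spikes occur at progressively earlier theta phases as the animal crosses
# the place field. Assume (an idealization) phase falls linearly from
# ~360 deg at field entry to ~0 deg at field exit.

FIELD_START, FIELD_END = 0.0, 1.0  # normalized field extent (assumption)

def position_from_phase(phase_deg):
    """Map spike phase (degrees) to a normalized position in the field,
    assuming a linear phase-position relationship."""
    frac = 1.0 - (phase_deg % 360.0) / 360.0
    return FIELD_START + frac * (FIELD_END - FIELD_START)

# Under this idealization, a spike at 270 deg implies the animal is
# about a quarter of the way through the field.
print(position_from_phase(270.0))  # 0.25
```

Combining this phase readout with the firing rate is exactly the "improved accuracy of place coding" the quoted passage refers to.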
However, the controllers all include high-resolution timers. If you need to transmit an analog value, rather than bit-coding it over 12 discrete digital IO, a clever programmer might turn on a digital output for the desired number of milliseconds, or select between 10 or 16 or however many states you want to represent with your one wire using a predefined list of durations. You can convey far richer information this way!
Always interesting to see what researchers are discovering in the automated control system that is biology... sometimes we can discover techniques for use in industry with biomimicry, sometimes biology we didn't know about seems to imitate technology we developed independently.
Did you describe PWM?
A PLC and a robot might interchange digital signals such as "In cycle", "Part present", "Faulted", "Clear of fixture", "Screw present", and "Cycle start". Hopefully the original designers also pulled a couple spare wires in case you need to add another sensor. But the old equipment is being asked to do something new, requirements now say you need to transmit, perhaps, which of dozens of part numbers to select from, or the touch point at which a sensor fired between 10 and 100mm. That's a binary signal with 6 or 7 bits, which means a $200 8-channel output card, multiconductor cable, and a $200 input card. When possible, you'd buy the fieldbus option cards from the factory and pull network cables. But in a pinch, with the more typical two spare inputs and two spare outputs, you can move some data to the time domain and work out "If aux1 is pulsed high for 680ms, the new sensor tripped at 68mm". One wire, one bit of state (0V or 24V), many values transmitted.
You could extend this concept with a clock signal, data signal, and transmit arbitrary serial data, but that's going a bit too far for most maintenance techs. A pair of timers is comprehensible.
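The one-wire, time-domain trick described above can be sketched in a few lines. The 10 ms-per-unit scale and function names are illustrative assumptions, not any real PLC API:

```python
# Sketch of encoding a value as the duration of a high pulse on one wire,
# then decoding it back from the measured duration. Scale is an assumption:
# 10 ms of pulse per unit (e.g. per mm of sensor travel).

MS_PER_UNIT = 10

def encode_pulse_ms(value_units):
    """Duration (ms) to hold the output high to transmit value_units."""
    return value_units * MS_PER_UNIT

def decode_pulse_ms(measured_ms):
    """Recover the value from a measured pulse width, rounding off jitter."""
    return round(measured_ms / MS_PER_UNIT)

# "Pulsed high for 680 ms" -> sensor tripped at 68 mm, as in the example
# above; a few ms of timer jitter rounds away.
print(decode_pulse_ms(683))  # 68
```

One wire, one bit of electrical state, many values - at the cost of latency proportional to the value transmitted.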
I've spent so much time debugging weirdness getting into signals from things like lightning strikes, AM radio stations, and nearby switching power supplies....
"The phenomenon is called phase precession. It’s a relationship between the continuous rhythm of a brain wave — the overall ebb and flow of electrical signaling in an area of the brain — and the specific moments that neurons in that brain area activate. A theta brain wave, for instance, rises and falls in a consistent pattern over time, but neurons fire inconsistently, at different points on the wave’s trajectory. In this way, brain waves act like a clock, said one of the study’s coauthors, Salman Qasim, also of Columbia. They let neurons time their firings precisely so that they’ll land in range of other neurons’ firing — thereby forging connections between neurons."
My understanding of what they're saying is that it's related to "neurons that fire together wire together". Given different signal travel distances, it's necessary for neurons to fire at different times if they're to arrive at a given destination at the same time (in order to "wire together"). They achieve this timing by firing in synchrony with the theta brain waves that travel across regions.
So, with this understanding, I guess you could say the timing is encoding information, but really in essence it's only the relative spatial position - within the cortex - of the firing neuron that's being "encoded". A more useful way to view it is just that firing times are synchronized to theta waves in order to achieve larger scale coordination that compensates for signal travel distances.
Firing just before the recipient neuron fires strengthens the bond, and firing afterwards/off tempo weakens the bond.
It's an elegant concept, because it handles neural weights, a non-linear activation function, and clock speed with a simple, distributed rule.
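The "fire just before strengthens, fire just after weakens" rule is usually formalized as spike-timing-dependent plasticity (STDP). A minimal sketch, with illustrative constants that are not fitted to any real neuron:

```python
import math

# Minimal STDP sketch: dt = t_post - t_pre.
# Pre firing just BEFORE post (dt > 0) strengthens the synapse (causal);
# pre firing just AFTER post (dt < 0) weakens it (acausal).
# Amplitudes and time constant below are illustrative assumptions.

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # decay time constant, ms

def stdp_dw(dt_ms):
    """Weight change for a pre/post spike pair separated by dt_ms."""
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU)
    return -A_MINUS * math.exp(dt_ms / TAU)

print(stdp_dw(5.0) > 0)    # True: firing just before strengthens
print(stdp_dw(-5.0) < 0)   # True: firing just after weakens
```

Note the exponential decay: pairs separated by much more than TAU barely change the weight, which is what makes precise timing matter.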
This phase precession mechanism being discussed is what allows inputs arriving from different distances (i.e. with different signal travel times) to arrive close to the same time such that the recipient fires. Once it fires, then "fire together, wire together" can strengthen/weaken the synapses as appropriate.
The phase precession is how place cell A overlaps firing with place cell B when the individual is moving from location A to B, strengthening the connection (to create a mental link/memory between the two?)
In the place cell instance, it seems one effect of this mechanism might be for place cell firing to act as a predictive input for adjacent place cells (a pretty solid real-word prior - you can't be "here" unless you just came from somewhere adjacent!), and another might be to make prior-and-current place simultaneously available which could be used to learn direction of travel.
If this is a general (or at least widespread) mechanism, not limited to place cell firing, then it opens up all sort of learning possibilities by bringing together (in time) inputs that would otherwise be asynchronous.
It feels like that, because it is like that: there's probably way more which isn't known about the brain yet, than things known with a decent amount of certainty. Just skimming over recent papers in the fundamental area, a lot of that summarizes as 'So here we found that area A is connected to area B and modulated when things happen in C, so attributes to function D. But way more research needed, and unsure what this means in relation to E and F'.
Kidneys probably meet that criterion - we have dialysis machines that allow someone to survive without kidneys almost perfectly.
Yet the idea of replacing someone's brain with something artificial and having them function as normal is still a LOOOOOOONG way off.
Something I've been thinking a lot about lately: Implicit in statements like this is the idea of a system. That some complex-seeming artifact is actually composed of a relatively smaller number of essential things and all of the observed complexity is just emergent properties of the simpler underlying system. Find the handful of hidden rules and you can build back up to the whole thing from first principles.
For example, if you were to learn chess purely by watching people play, it would be a huge struggle at first. Does how they hold the pieces matter? What role does timing play? Why does one player rest their head on their cheek while staring at the board? Eventually you start to figure out which actions are essential and which aren't. It doesn't matter where inside a square a piece is placed. All pawns are behaviorally equivalent, etc.
We really, really like systems. So much so that we tend to assume everything is one. But I see no evidence to assume that biology and evolution work that way. Evolution is a semi-random walk over the phenotype space, and fitter organisms are discovered through entirely random mutation. It may be that a kidney mostly filters blood, but also does a little of this other thing, and the fact that it pushes your small intestine out of the way is important, and also and also and also...
We can increase our understanding by learning more, but there may simply be no "first principles" for what makes an organism tick and almost all of its complexity may be irreducible. There may be absolutely no separation between "fundamental property" and "implementation detail". It may be that no terms in the grand equation of life cancel out.
Of course evolution is also messy and isn't operating out of a playbook of decomposable single-function parts. Experiments with evolving electronic hardware have resulted in circuits taking advantage of all sorts of nasty non-linear analog effects, as you might expect.
Still, given the inherently incremental nature of evolution, it is highly likely to result in a system of parts operating with some degree of independence to each other. While there are still many aspects of the brain's functioning we don't understand, it's pretty apparent that it is composed of functional parts like this - cortex, hippocampus, cerebellum, basal ganglia, etc.
But this is NOT true of all biology. I picked molecular biology as an example for exactly this reason. It’s driven by evolution with all its messiness, but yet it DOES have some reducible complexity. There really is DNA that is transcribed via certain molecules to RNA which is transcribed by other molecules into protein via a sequence of certain amino acids coded by the DNA base pairs. There is reducible complexity in spite of the crazy messiness of evolution, and it ends up looking a lot like some of our engineered systems in some instances (ie we can use the language of information theory and bits to describe encoding of genomes). We are able to use this to actually develop vaccines specifically using mRNA as a delivery mechanism, with specific, engineered changes to the transcribed viral protein spike to improve the vaccine’s effectiveness.
What I see in neuroscience looks a lot like genomics and inheritance before the discovery of DNA. And the insistence that “biological systems are entirely non reducible complexity” feels just a bit too much like a cop-out. This is not magic. If you are a neuroscientist optimistic about the field, then you must also believe there is some reducible complexity in there that will be discovered. I do get the feeling, based on research progress like in this article, that there really are some breakthroughs coming in really understanding what’s going on.
What insistence do you refer to?
This only means we know what kidneys do, not how they do it.
(But probably we do... I am not a biologist.)
The role of microtubules, for example, is mostly ignored, even though they may explain the complexity of cognition displayed by relatively "simple" brains.
It's possible those microtubules in our brains are 1 dimensional superconductors, and thus might be capable of holding Qubits. We might have quantum memory.
It's also important to note that quantum computers are just faster classical computers, they are still Turing machines. Many people who are looking for some non-computable element of consciousness in quantum effects in microtubules seem to forget that.
They can encode exponentially many states simultaneously, and are non-deterministic.
But what is clear is that there is no computation that can be done on a QC that can't be done on a TM.
The role of non-determinism in computation is more debatable. There is even a notable MIT professor (who coined the term Actor model) who claims that real computers, like the Android phone I'm writing this on, are in fact not Turing machines, because of non-determinism / parallelism. He also claims that Godel's incompleteness theorem doesn't apply to actual mathematics, so I would take his words with a huge spoonful of salt.
So for an 8-bit key, for example, you would need to run the same operation 256 times; 65,536 times for a 16-bit key; and 4,294,967,296 times for a 32-bit one. So in essence quantum computation just lets you cheat on massively parallel processes, but it is not a hyper-Turing machine or a Turing oracle. In fact there are many algorithms where you get no gain from using qubits other than a higher energy bill.
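The counts above are the classical brute-force costs. For comparison, Grover's algorithm on a quantum computer brings key search down to roughly the square root of that - a quadratic speedup, not a free lunch. A rough sketch of the scaling:

```python
# Brute-force search scaling for an n-bit key: a classical machine tries
# ~2**n candidates, while Grover's algorithm needs on the order of
# 2**(n/2) queries. Order-of-magnitude sketch only; constants omitted.

def classical_trials(bits):
    return 2 ** bits

def grover_queries(bits):
    return 2 ** (bits // 2)

for n in (8, 16, 32):
    print(n, classical_trials(n), grover_queries(n))
# 8 256 16
# 16 65536 256
# 32 4294967296 65536
```

Even with the quadratic speedup the cost is still exponential in the key length, which is why doubling symmetric key sizes is considered an adequate response to Grover.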
Qubits aren't magic either and the amount of data they can store is well known, it's just a complex bit instead of a real one.
It's important to note that probabilistic Turing machines are different from quantum Turing machines in terms of efficiency, as far as it is known today. But still, that doesn't change the fact that they can (inefficiently) simulate each other.
 - https://doi.org/10.1007/BF02650179
However, Feynman does not claim what you say he claims. His whole article is about efficiently simulating QM on a classical computer, and he shows that is not possible given what we understand of quantum probability today, at least (he does this by explicitly asking for a computer which does not grow exponentially in size with the size of the physical system it wants to simulate).
In modern CS terms, what he is discussing is whether the complexity classes BPP and BQP are equal or not (as far as we know today, just as he claims, BPP seems to be a smaller subset of BQP, but this is not proven).
This is all perfectly in line with my claim, and is in fact explicitly there in Feynman's paper: a classical computer can perfectly simulate a quantum system/computer, but requires exponential time/space to do so (as far as we know).
Isn't there a point where we could say we can't really simulate powerful enough quantum computers, because they can "decode" the patterns behind any pseudorandom computation so quickly?
Like, no one in their right mind would be using classical computers at that point for anything except basic computation and word processing. We might as well say humans with pen and paper can simulate quantum computers. Quantum computer capabilities may outstrip classical computers nearly as much as they outstrip humans with pen and paper.
"Simulation" kind of loses its meaning when taken far enough.
But then I also see that "classical computers can never be built big enough to explore more than 400, actually more than, probably 100 qubits, 100 qubits doesn’t seem like very much. No classical computer can do the calculation of following what 100 qubits do" https://blog.ycombinator.com/leonard-susskind-on-richard-fey...
Are we really in no danger of quantum computers being able to decipher the patterns behind these traditional sources of noise? There is no pattern to quantum randomness. Aren't we going to have to switch to truly random sources of noise eventually, instead of pseudorandom ones (anything non-quantum)?
But this chaos is deterministic. Chaos means highly sensitive to initial conditions and involves nonlinearity, but it is still entirely classical and deterministic. In the real world we do not measure things precise enough to keep track of a lava lamp's deterministic evolution, but it is there within the chaos.
So I am wondering if a quantum computer - which Susskind says takes only 100 qubits to outperform any Turing machine ever constructable - may one day do better at picking out the deterministic patterns behind the chaos of things like lava lamps. And if that happens, we may need more extreme versions of chaotic systems to keep secrets. And since quantum randomness is the only true randomness in the universe, forever indiscernible in principle, will all deterministic, chaotic means of adding "randomness" one day be replaced with quantum sources of randomness, due to how powerful quantum computers are?
*Now you could say even the lava lamp involves quantum randomness, because everything is ultimately quantum. But because it is so macroscopic, it behaves more classically than a smaller quantum system.
It's very important to understand that this only applies to a limited set of algorithms. QCs are not universal accelerators. In particular, if picking out these deterministic patterns from the apparent chaos were an NP-hard problem, the QC would (as far as we know) be just as slow as any other computing machine.
You're also misunderstanding how chaotic systems work. With a chaotic system, even if you know the precise time evolution rules, you're not going to be able to predict the outcome at time T, because a tiny difference in the initial conditions, or a tiny interference from the outside world, will mean vastly different outcomes.
In fact, QCs would be particularly BAD at predicting the outcome of a chaotic system, because QCs can only give answers up to some error bound, unlike classical computers which can perform exact calculations. But the error introduced by the QC itself is probably going to compound the imprecision in the initial measurements of your chaotic system.
One final note that is important to state: the problem with predicting chaotic systems is not physical or computational, it is mathematical. You can have even simple systems whose solution can vary orders of magnitude more than a variance in the parameters. Solving such a system is easy and fast, but the solution is physically meaningless: a 0.01% error in the measurements can mean that you solution is off by a factor of 100.
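The point about sensitivity can be illustrated with the logistic map, a textbook chaotic system: each step is exact and cheap to compute, yet a 0.01% difference in the initial condition quickly produces completely different trajectories. (The starting point and perturbation below are arbitrary choices.)

```python
# Chaos demo with the logistic map x -> r*x*(1-x), r = 4 (fully chaotic).
# Two runs whose initial conditions differ by 0.01% diverge to an
# order-one gap within a few dozen iterations.

def trajectories(x0, eps, r=4.0, steps=50):
    """Iterate the map from x0 and x0*(1+eps); return the per-step gaps."""
    a, b, gaps = x0, x0 * (1.0 + eps), []
    for _ in range(steps):
        a = r * a * (1.0 - a)
        b = r * b * (1.0 - b)
        gaps.append(abs(a - b))
    return gaps

gaps = trajectories(0.3, 1e-4)  # initial conditions differ by 0.01%
print(f"gap after 1 step: {gaps[0]:.1e}, max gap over 50 steps: {max(gaps):.2f}")
```

Nothing here is quantum or even imprecise arithmetic-wise; the error amplification is a mathematical property of the map itself, which is the "final note" above.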
I was thinking along the lines of running current observed data backwards to finely tuned initial conditions. That must require lots of computational power. Are we sure quantum computers and quantum algorithms won't speed this part up? That has to be somewhat isomorphic to factoring large numbers, where I thought QCs do have a potential advantage. But maybe chaos is distinct from factoring; I don't know much about how it would be modeled computationally. I realize most of the battle is getting proper initial conditions. But quantum computing is also coming at a time when quantum sensing is growing. To me, with quantum computing speedups and quantum sensors, we have the two ingredients necessary to make progress on chaotic systems: better initial conditions and better computational methods. That was my thought. Sorry for the ramble.
I don't think the argument is about efficiency. "A classical computer can perfectly simulate a quantum system/computer" is not explicitly there; it's an argument against that. It seems to me you're saying anything that's not strictly proving BQP > BPP supports something else.
> The rule of simulation that I would like to have is that the number of computer elements required to simulate a large physical system is only to be proportional to the space-time volume of the physical system. I don't want to have an explosion. That is, if you say I want to explain this much physics, I can do it exactly and I need a certain-sized computer. If doubling the volume of space and time means I'll need an exponentially larger computer, I consider that against the rules (I make up the rules, I'm allowed to do that).
He emphasizes this again in the section about computing the probabilities:
> We emphasize, if a description of an isolated part of nature with N variables requires a general function of N variables and if a computer simulates this by actually computing or storing this function then doubling the size of nature (N->2N) would require an exponentially explosive growth in the size of the simulating computer. It is therefore *impossible, according to the rules stated, to simulate by calculating the probability.* [emphasis mine]
So when he uses the term 'computer' he doesn't mean 'abstract Turing machine', he explicitly means 'realizable/efficient Turing machine'.
This also has interesting (morbid?) implications for how long our memories last after death.
Seems like neuroscience is still in the phase of reducing misunderstanding.
What if each time a neuron sends a signal to another neuron the potential required for that connection decreases slightly? So the next time there's an action potential between neurons it's more likely to go where it has already gone. In other words, frequent connections make that same pathway more readily traversed. Could it be that a memory is simply a new signal traveling down a 'worn path'?
I realize that there needs to be some way for this resistance change to be reset over time, so is it possible that dreams serve the purpose of writing somewhat random data, like wear leveling or trimming on an SSD?
This is just pure speculation and I have "Hello World" levels of knowledge about neuroscience. But hey, it's fun to speculate, right?
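The "worn path" speculation above can be written down as a toy model: each traversal lowers the connection's threshold a little, and a slow decay drifts it back toward baseline (the "reset" role the comment assigns to dreaming). Purely illustrative; all constants are made up.

```python
# Toy "worn path" model: a signal crossing a connection wears its
# threshold down, making that path easier next time; idle time lets the
# threshold drift back toward baseline. Constants are arbitrary.

BASELINE = 1.0    # resting threshold (arbitrary units)
WEAR = 0.1        # how much each traversal lowers the threshold
DECAY = 0.02      # per-tick drift back toward baseline
FLOOR = 0.1       # thresholds can't wear away entirely

def traverse(threshold):
    """Signal crosses the connection: wear the path down a bit."""
    return max(FLOOR, threshold - WEAR)

def rest(threshold):
    """One idle tick: threshold drifts back toward baseline."""
    return threshold + DECAY * (BASELINE - threshold)

t = BASELINE
for _ in range(5):        # a frequently used path...
    t = traverse(t)
print(t < BASELINE)       # True: the worn path is now easier to traverse
```

The real biological counterpart of this kind of use-dependent strengthening is long-term potentiation, which the sibling comment links.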
See here: https://en.wikipedia.org/wiki/Long-term_potentiation
Funnily, I think AI is potentially a really good tool for understanding neurology further. There is so much disparate research out there, from the neuron, to the network, to the brain itself; it would be interesting to feed it all into GPT-3, both the research papers and a large compendium of imaging and firing maps, and then ask it to look for connections. I'd be ridiculously interested in working on that (time to get a PhD?).
It's recently been suggested there is nothing special about dreaming - it's just a confluence of human interpretation with a mechanism to stop you from going blind.
I only have an AI understanding here, so this could be too simple.
E.g. put all of our best analysis tools to the task against an operating Pentium chip and see if we can determine from first principles that it is running a W98 screen saver.
That would give us a small sense of the challenge we are facing.
It really lacks nuance
We have tools that allow us some degree of insight, but honestly, it is incredibly difficult and progress is slow and staggered.
https://pubmed.ncbi.nlm.nih.gov/25391792/ -- "Two-photon excitation microscopy and its applications in neuroscience"
https://www.nature.com/articles/s41598-018-26326-3 -- "Three dimensional two-photon brain imaging in freely moving mice using a miniature fiber coupled microscope with active axial-scanning"
Sure, it's localized and you cannot go deep, but there is so much to learn that that is plenty at this point.
Leave out the "completely" and it's a different story, though: it's perfectly possible to 'dig in while functioning', i.e. inspect small parts using electrode arrays; that will not kill the patient and does only minimal damage (the couple of cells killed by that are nothing in comparison with the entirety). Non-invasive fMRI techniques have also come a long way, but temporal resolution is low. But as you say: difficult, slow, and by no means a 'complete' view. On the other hand, I have no idea how one would even begin to handle the insane amount of data that would come from inspecting a complete brain. So what goes on now - tackling smaller areas/connections thereof one by one - is not even that bad of an approach.
Our ability to know how a brain works on the level of how well we know, say, the combustion engine works is severely limited by the fact that we're dealing with living beings and that the state of consciousness of the subject matters.
Sort of, but to people not knowing anything about it it might sound as if it's impossible to do anything at all in vivo so I added some information about what is possible if you do not want a 'complete' recording.
reminds me of an argument i had with my sister, a psych major, about whether psychology is actually a science. her answer was that it could be, but it would break too many laws, violate human rights, and ethics boards would frown at you for cloning hundreds of humans to raise in identical situations to do a/b testing on.
Watch for firing patterns encoded in mRNA for pattern matching, and to help predict what might come next in a sequence. That was the other part of my theory.
This idea is also borne out in most real neural systems, where firing rate is well correlated with various sorts of feature presentation.
But at faster timescales other things seem to be going on.
So either the brain is entirely random, or so exquisitely determined that we can’t possibly figure out its code from statistics on the bits alone. I put my bets on the latter.
Shape of the neurons is the memory.
A fetal brain doesn't have ridges and grooves; it is smooth, almost fluid. As we learn, we develop ridges. This is just a fact to suggest that neurons (the neural mass as a whole) take shape as we learn. Once we establish a simple yet truthful foundation, we can pile things on top of it for the missing pieces.