How the mouse's brain is scanned, very intrusively.[2] That's both impressive and scary. They're scanning the surface of part of the brain at a good scan rate and in high detail. They're seeing the activation of individual neurons. This is much finer detail than non-intrusive functional MRI scans.
Does the data justify the conclusions? The maze being used is a simple T-shaped maze. The "state machine" supposedly learned is extremely simple. They conclude quite a bit about the learning mechanism from that. But now that they have this experimental setup working, there should be more results coming along.
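For a sense of how little structure that is, the whole learned "state machine" for a T-maze could be written down as something like the toy table below (state and action names are mine, not the paper's):

```python
# Hypothetical sketch of how small a T-maze state machine is; the labels are
# illustrative, not taken from the paper.
T_MAZE = {
    # (state, action) -> next state
    ("start",     "forward"): "stem",
    ("stem",      "forward"): "junction",
    ("junction",  "left"):    "left_arm",
    ("junction",  "right"):   "right_arm",
    ("left_arm",  "forward"): "left_reward",
    ("right_arm", "forward"): "right_reward",
}

def step(state, action):
    """Advance the toy maze; stay put on a move the maze doesn't allow."""
    return T_MAZE.get((state, action), state)

print(step(step(step("start", "forward"), "forward"), "left"))  # -> left_arm
```

A handful of states and one real decision point, which is why it's fair to ask how far the conclusions generalize.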
1. We can observe how the state machine gets generated: first just a jumble of locations in a hub-and-spokes topology (no correlations), then some pairwise correlations start appearing, making a kind of beads-on-a-string topology, and then finally the mental model snaps marvelously into two completely separate paths that meet at the ends. It's amazing to see these mental models get formed in vivo out of initially unstructured perceptions.
2. In addition to standard HMM modeling, the authors find that a "biologically plausible recurrent neural network (RNN) trained using Hebbian learning" can mimic some of this (though not exactly). More interestingly, they find that LSTMs and transformers cannot. Which makes sense structurally, but it's a good reminder for those who believe the anthropomorphic hype that transformers have memory or anything of the sort (they don't :) ).
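For anyone unfamiliar with the term, here is a minimal sketch of what "Hebbian learning in a recurrent network" usually means, with made-up place populations for the two maze arms; it is not the paper's model, just the textbook rule that repeatedly co-active units become more strongly coupled:

```python
import numpy as np

# Textbook Hebbian rule on a small rate-based recurrent network (illustrative,
# not the paper's model): dW ~ post * pre, plus decay to keep weights bounded.
n_units, eta, decay = 20, 0.05, 0.01
W = np.zeros((n_units, n_units))

# Two hypothetical "place" populations, one per maze arm (non-overlapping).
left_arm = np.zeros(n_units);   left_arm[:6] = 1.0
right_arm = np.zeros(n_units);  right_arm[12:18] = 1.0

def run_trial(pattern, steps=10):
    """Drive the network with one input pattern, updating W in place."""
    global W
    r = pattern.copy()
    for _ in range(steps):
        r = np.tanh(W @ r + pattern)            # recurrent dynamics + external drive
        W += eta * np.outer(r, r) - decay * W   # Hebbian growth with weight decay
        np.fill_diagonal(W, 0.0)                # no self-connections

for _ in range(50):                             # interleave experience of both arms
    run_trial(left_arm)
    run_trial(right_arm)

within = W[np.outer(left_arm, left_arm) > 0].mean()
across = W[np.outer(left_arm, right_arm) > 0].mean()
print(f"mean coupling within an arm: {within:.2f}, across arms: {across:.2f}")
```

Units that fire together end up wired together, while the two arms stay uncoupled, a cartoon of the "two separate paths" structure described in point 1.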
Couldn't memoryless neural networks still learn the next-state function of a finite-state machine, depending on the training algorithm? Especially if the eventual use of such networks is to be called over and over again to generate the next token; conceptually that seems analogous to finitely unrolling a while loop or a computer pipeline.
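That's exactly how it can work in a toy setting, as long as the current state is fed back in as part of the input on every call. The sketch below uses a made-up transition table and a single softmax layer, nothing to do with the paper:

```python
import numpy as np

# A memoryless model learning an FSM's next-state function f(state, action).
# Toy T-maze-ish labels, chosen for illustration only.
states = ["start", "junction", "left_arm", "right_arm"]
actions = ["forward", "left", "right"]
table = {("start", "forward"): "junction",
         ("junction", "left"): "left_arm",
         ("junction", "right"): "right_arm"}     # any other (state, action): stay put

def encode(s, a):
    x = np.zeros(len(states) + len(actions))     # one-hot state, then one-hot action
    x[states.index(s)] = 1.0
    x[len(states) + actions.index(a)] = 1.0
    return x

X = np.array([encode(s, a) for s in states for a in actions])
y = np.array([states.index(table.get((s, a), s)) for s in states for a in actions])
targets = np.eye(len(states))[y]

# One softmax layer trained by plain gradient descent is enough for this tiny task.
W = np.random.default_rng(0).normal(scale=0.1, size=(X.shape[1], len(states)))
for _ in range(5000):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.5 * X.T @ (p - targets) / len(X)      # cross-entropy gradient step

# "Unrolling": call the memoryless net repeatedly, feeding its own output back in.
s = "start"
for a in ["forward", "left"]:
    s = states[int(np.argmax(encode(s, a) @ W))]
print(s)   # should end up at left_arm
```

The network itself keeps no memory between calls; the memory lives entirely in the loop that feeds the predicted state back in, which is also roughly what happens when a transformer is run autoregressively over its own output.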
So technically we have the technology to simulate a human brain. Just not anywhere near real time. And not at any semblance of reasonable cost. And not guaranteed to simulate the important parts.
In principle that seems plausible. Assuming that there is nothing immaterial about consciousness in the brain (not unreasonable), one may conclude that we are biological information-processing mechanisms, which should then be computable by computers. But that goes against any person's intuition of what it is to be inside a brain, doesn't it?
Beyond the philosophical problem, there's quite a bit of science fiction that has already thought about some scenarios once we become able to create digital humans. If you've got just a few minutes read Lena [1], it's a fun uncanny read.
If you mean it in the trivial sense that "computers must in principle be able to simulate a brain", then yes, and that has been obvious for many decades already.
If you want to say that we know what algorithms to use to simulate a brain, then no, and this paper is one advance on the goal of knowing those algorithms. But it does not go all the way there.
The civilian level is the state of the art. The chip industry is at the cutting edge, there is nothing beyond it that is available at scale, and in this instance: scale matters!
There are some small exceptions of course: RSFQ digital logic is insanely fast (hundreds of gigahertz), but nobody has scaled it to large integrated circuits.
Supercomputers are built with somewhat esoteric parts, but not secret unobtainium. At least in principle the same RDMA switches and network components are commercially available. Similarly, the specialised CPUs like the NVIDIA Grace Hopper are available, although I doubt any wholesalers have it in stock!
To believe otherwise is to believe that governments (plural!) have secretly hidden tens of billions in cutting-edge chip fabs, tens of billions more in chip design shops, and so on.
In reality the government buys their digital electronics from the same commercial suppliers you and I do.
Only a handful of specialised circuits are made in secret, such as radar amplifiers.
Are processors and processor speed the only limiting factor for such applications? (Probably they are fast enough anyway and could be a non-factor; communication between neurons is not that fast compared to clock speeds, if I remember correctly.)
Especially in an era in which the recorded data can be fed to an algorithm that can approximate dynamic brain maps with more or less accuracy?
One concern is the lack of ethics, or more accurately, the different ethical considerations in the spy agencies.
They have every motivation to capture personal phone calls and text chats in bulk and run them all through an LLM-like training regime so that they can ask it questions like: “Does so-and-so plan a terrorist attack?”
Somewhere in an NSA data centre there is a model being trained on your emails, right now.
This is a misconception that shows up now and then.
In the 1940s through the 1970s, the military really did have a broad tech edge over the civilian market. The USAF was once the largest buyer of transistors. In the 1980s, the civilian electronics market became much larger and passed the military market. This upset some military people. Articles about "premature VHSIC integration" appeared, complaining that civilian electronics was ahead of military devices.
There were a lot of minicomputers in DoD systems for years after everybody else was using smaller and cheaper microprocessors. Some stuff was even older. The USAF's satellite control facility and NORAD at Cheyenne Mountain had the same consoles NASA used for 1960s Apollo well into the late 1980s.
We had a very early Sun workstation, one with an auxiliary color monitor. Someone put a world map on the screen and overlaid it with the current positions of USAF satellites and ground stations, as a demo. A visiting USAF general saw this and demanded that the entire system be immediately shipped to the USAF's satellite control facility. They were still using big manual plotting boards, updated by people reading printouts, to track what was coming into range of each ground station. So the USAF got the Sun system, and it was immediately replaced by a new one.
There was some cool stuff. I got to see a system in the 1980s where you could look at stored photos and do pan, zoom, and rotate. The UI was a zoom lever, a pan joystick, and a rotate knob. Took a half rack of custom electronics. They were building something like Google Earth for satellite photos. Now, of course, everybody has that capability.
There were niches where DoD tried to stay ahead. NSA put much effort and money into cryogenic computing. They had gigahertz electronics in the 1960s. There were several generations of NSA cryogenic technology, but each time, the commercial market pulled ahead with a cheaper and faster technology.[1]
If you read DARPA solicitations, you can see where DoD is trying to get ahead. Non-GPS precision navigation is a big thing, for example.
This was the typical pattern by 1990. DoD would be ahead in some very narrow niche for which there was little commercial market, but overall, behind commercial technology. I've been out of that world for many years, but from what I hear, it's still pretty much like that in the land of classified tanks.
The rule in the house is we don't say "I don't know." If we don't know something, we are required to think about it and then ask a question.
Recently, he asked how an audio-recording dog trainer worked, specifically how it "went back up", because he couldn't see the internals. He knew that it went down and back up, and he knew that it was not electronic but mechanical. I asked him to think about it. He thought, and I could see his mind working, going through everything he knew of where a toy of his would go down and up. He sat for around 20 seconds and asked, "Is it a spring?" I was quite impressed that, at 4 years old, he was able to come to this conclusion.
There is a map we create: a list of things that go up and down. From that list of things that can go up and down, knowing it was not a pulley or a plunger, because it returned to its original state, he was able to narrow it down to the one object that would work.
The biggest jumps in my education have been directly related to people mapping concepts and ideas instead of memorization. It's like the idea that everything is a file. From that framework you can pull out questions like: can I read it, do I have permission to read it, can I write it? So now when someone explains some new shiny object to me, like a fancy whiz-bang database, I ask a couple of questions and generally know how it works.
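A toy version of that "everything is a file" questioning, with example paths (Unix-like system assumed):

```python
import os
import stat

# Ask the same few questions of any path, whether it's a config file, a device,
# or a directory. The paths below are just examples.
def ask_the_usual_questions(path):
    st = os.stat(path)
    return {
        "is it a directory?": stat.S_ISDIR(st.st_mode),
        "can I read it?": os.access(path, os.R_OK),
        "can I write it?": os.access(path, os.W_OK),
        "how big is it?": st.st_size,
    }

for p in ["/etc/hosts", "/dev/null", "/tmp"]:
    print(p, ask_the_usual_questions(p))
```

The same handful of questions works on wildly different things, which is the whole point of the framework.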
That's something I do too, but a bit modified: instead of going to someone about a problem, we try to figure it out ourselves first, and if we can't, we go and ask for help while explaining our original solution.
It seems to reflect the general way we understand the brain, right? Firing together, wiring together? Then ~ abracadabra ~ meaningful blobs of buzzy brain stuff emerge from seemingly simple rules? It seems beautifully pure that mind maps are literally "mind maps" in a sense, a bit like how we have grid cells arranged proximally to mirror physical spaces as we walk through them.
They have the largest hippocampus-to-brain ratio, and they have 3D spatial memory of all their food-source locations as well as (in the case of ravens) who their human enemies are.
The hummingbird's hippocampus allows for an astonishing memory, giving it a 3D map of all the food nodes it knows of.
What I wonder is whether the flapping of the wings is so far below the forefront of the bird's intentions that ALL of its awareness is in its intent to find food: she looks and steers her head wherever, and the hippocampus just complies and manages flying the bird. Think of the hippocampus as an in-flight navigator that the bird's frontal perception can dial in and direct.
What would be interesting would be to map out feeding locations, then see if they follow the same route.
(P.S. I've captured hummingbirds and even taken them in an Uber, had them fly around inside the car on a freeway... and had one that met me at break time daily.) They are my spirit animal...
But I think their hippocampus-to-brain ratio is the key to their memory.
A hidden state machine plus a neural net appears to be similar to how mice learn to navigate a maze.
If you hold them still and probe their brain while they navigate in VR you see a state-machine map appear in their mind. That map varies if the VR map varies.
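To make "a state-machine map appears" concrete in the crudest possible way: if you can decode a discrete state per time step (by whatever method), the map is just the empirical transition matrix. The labels and the decoded sequence below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Build a transition matrix from a toy sequence of decoded maze states.
states = ["stem", "junction", "left_arm", "right_arm"]
decoded = ["stem", "stem", "junction", "left_arm", "left_arm",   # lap 1
           "stem", "junction", "right_arm", "right_arm",         # lap 2
           "stem", "junction", "left_arm"]                       # lap 3

idx = {s: i for i, s in enumerate(states)}
counts = np.zeros((len(states), len(states)))
for a, b in zip(decoded, decoded[1:]):
    counts[idx[a], idx[b]] += 1

# Row-normalize to get transition probabilities; rows with no data stay zero.
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(np.round(P, 2))
```

If the VR maze changes, the decoded sequences change and so does the estimated transition structure, which is presumably what "the map varies if the VR map varies" looks like in the data.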
Aah, true, and therein lies the rub. I definitely think the structure and function of cortical columns as described by Numenta is getting close to the mark on a part of it. It seems to me like Transformer models are doing something very similar to this, where each attention head is roughly analogous to a cortical column. You can't build a mind on prediction alone, though (or can you?) so we need helper structures like the hippocampus.
Prediction is an essential part. Basically, from the state you have plus the inputs from the sensors, you generate a prediction. This prediction is compared to the "real world", and the state is adjusted until the prediction matches what is actually observed.
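A bare-bones version of that loop, as I read it (not any specific brain model): an internal state generates a prediction of the sensory input, the prediction is compared with what the sensors report, and the state is nudged until the mismatch shrinks. The generative map G and the numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 3))                 # assumed map: internal state -> predicted sensors
true_state = np.array([1.0, -0.5, 2.0])
sensors = G @ true_state + 0.05 * rng.normal(size=8)   # what the "real world" reports

state = np.zeros(3)                         # initial guess about the world
step = 1.0 / np.linalg.norm(G, 2) ** 2      # safe step size for this quadratic error
for _ in range(300):
    prediction = G @ state                  # predict what the sensors should say
    error = sensors - prediction            # compare with what they actually say
    state += step * G.T @ error             # adjust the state to reduce the mismatch

print(np.round(state, 2), "vs true", true_state)   # should land close, up to sensor noise
```

It's just iterative error correction, but that predict/compare/adjust shape is the core of most predictive-coding stories.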
2020-12: Some research done with human subjects regarding how the brain reacts when we're reading code.
> The researchers saw little to no response to code in the language regions of the brain. Instead, they found that the coding task mainly activated the so-called multiple demand network. This network, whose activity is spread throughout the frontal and parietal lobes of the brain, is typically recruited for tasks that require holding many pieces of information in mind at once, and is responsible for our ability to perform a wide variety of mental tasks. [1]
My naive perspective is that the foundational properties of the brain are probably really similar between mice and humans. For example, we have put human brain cells into rats and the brain cells have done... something.
The chemistry is probably different in a bunch of ways "rats evolved to use this hormone to feel X, we use it to tell us Y" or some other such thing, but structurally I'd imagine that neurons function similarly.
even if there are structural differences, just the fact that we can establish a loose isomorphism between our knowledge about computational models and the cognitive processes of any living creature seems like a really profound step forward for cognitive science if it holds up.
There are some important differences between mouse and human hippocampi, including different long-range connections... However, the overall patterns of organization across the hippocampal subfields, e.g. heavy recurrence in CA3, sparse separation of signals in the dentate gyrus, etc., are very similar in structure and response patterns between species. ...Gotta love the spiking data from human epilepsy patients.
[1] https://www.biorxiv.org/content/10.1101/2023.08.03.551900v2....
[2] https://bmcbiol.biomedcentral.com/articles/10.1186/s12915-01...