The belief that particular parts of the brain perform specific functions goes back a long way. One big factor was the prevalence of brain injuries during WWI, which let scientists draw conclusions like "damage in this region results in an inability to do this." That level of understanding has, to varying extents, carried forward since, because how the brain really works is still extremely mysterious.
Keep in mind that scientists could seemingly localize a function to a group of neurons at a given moment. But you're right, that sort of thinking is too simple.
Luria goes into this question of localization in The Working Brain (a book that's now quite old). Overall, neuroscience has been trying to understand the brain for a long time and not doing very well.
There are some difficulties. For example, if the brainstem is also involved in a function, it's hard to tell, because when the brainstem is damaged you're usually too dead to test the function. But I think if you started studying the brain without any prejudices about localization of function, you'd still end up finding a huge amount of it.
If an engineer designs a computer that computes on encrypted information, then from a naive observer's perspective, the entire contents of memory are rewritten with random values at each step. Yet there is a mathematical relationship between those seemingly random bits and whatever is encoded. Add some error correction and non-deterministic parallel processing, and it might look pretty much like noise if you don't have the decoder ring.
Something like that is the only explanation I can come up with. Information is stored at a level of abstraction in a structure that is maintained by being continuously re-encoded in the ephemeral computations at a lower level.
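A toy sketch of that idea, with one-time-pad re-encryption standing in for the fancier homomorphic case: the stored bytes change completely at every step, yet the decoded content never does. (Everything here, including the stored value, is invented for illustration.)

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# The "memory": a value stored under a one-time pad.
plaintext = b"apple smell"
key = secrets.token_bytes(len(plaintext))
memory = xor_bytes(plaintext, key)

# Each "step", re-encrypt under a fresh pad. To an observer watching
# only `memory`, every step looks like a total random rewrite...
for _ in range(5):
    new_key = secrets.token_bytes(len(plaintext))
    memory = xor_bytes(xor_bytes(memory, key), new_key)  # decrypt, re-encrypt
    key = new_key

# ...yet the encoded information never changed.
assert xor_bytes(memory, key) == plaintext
```

The lower-level bits churn constantly; only the relationship between bits and key (the "decoder ring") is stable.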
Perhaps analog FPGAs are an interesting window: https://news.ycombinator.com/item?id=23432601
It might very well be a feature of this drift that old memories or information are gradually adapted. After all, evolutionarily speaking, you wouldn't expect the brain to optimize for archiving but for making fitness-optimizing decisions in the world.
The behaviour observed is puzzling if your model says that the mouse brain first trains on the smell, then once it is learned the memory becomes "locked in", and later encounters with the smell are just read-only "recall and recognition" operations, just like how we might modify a computer's memory and then later use the memory contents in a read-only way.
But even as a layman on the subject, I can see that learning a smell and later trying to identify the same smell aren't two separate kinds of tasks. If I smell a rose for the 1,000th time, I'm still learning about its smell. Why would a brain not change?
By the way, I wonder how the NeuraLink team will work around that.
However, starting to think of the brain as if it should resemble the organization of a microprocessor, with 'ALUs', working memory, well-defined boundaries, etc., is definitely wrong, and a misapplication of the notion of Turing equivalence. It is nevertheless somewhat common, especially in pop culture.
Also, that could solve many of their problems. Maybe things did not work out very well initially precisely because of this unsuspected, huge brain plasticity.
What we learn in this article is that the mapping changes a lot, and quickly.
This is an important difference: previously it might have been enough to converge slowly towards the right mapping, but now the retraining has to happen quickly enough to become useful before the mapping shifts again.
(To sum up my remark: the retraining has to be fast enough to not become obsolete before it can be used, given the "representation drift".)
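A toy illustration of that constraint, with the "neural code" modeled as a slowly rotating 2-D vector and the implant's decoder as a nearest-template classifier. The 10°/day drift rate, the two smells, and all other values are made up for illustration:

```python
import math

def rotate(v, theta):
    """Rotate a 2-D vector by theta radians (our stand-in for drift)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def classify(v, templates):
    """Nearest-template decoder: highest dot product wins."""
    return max(templates, key=lambda k: v[0] * templates[k][0] + v[1] * templates[k][1])

# Day 0: the brain's code for two smells, snapshotted by two decoders.
code = {"apple": (1.0, 0.0), "grass": (0.0, 1.0)}
stale = dict(code)   # decoder trained once and never updated
fresh = dict(code)   # decoder retrained every day

DRIFT = math.radians(10)  # assumed drift rate: 10 degrees per day
for day in range(12):
    code = {k: rotate(v, DRIFT) for k, v in code.items()}
    fresh = dict(code)    # daily retraining tracks the drift

# After 120 degrees of accumulated drift, the stale decoder misreads
# the "apple" pattern, while the regularly retrained one still gets it.
print(classify(code["apple"], stale))
print(classify(code["apple"], fresh))
```

The point of the sketch: whether decoding survives depends entirely on the retraining interval relative to the drift rate, which is the remark above in code form.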
Assume NeuraLink transmits an encoded signal that there's a dog in front of you. Without "representation drift" (or rather its underlying mechanism), I don't think your brain would ever notice that this signal has anything to do with dogs. The wire is inserted into a random place; it is not designed to be repositioned to fire up your "dog" neurons, yet it needs to send the "dog" signal.
As for the assumption that the mapping is stable, there's no such thing at all. First and foremost, the brain itself must learn to interpret signals from the wire and send them back. Any mechanism on the wire's side would mostly be there to establish a feedback loop, so the brain can learn to do that (which it will do by rearranging its internal structure).
I always assumed that NeuraLink allowed output from the brain to external interfaces, not input into the brain.
I have seen the public presentations of NeuraLink by Elon Musk, and the focus was always to extract info from the brain, not the other way around.
Do you have some info related to your interpretation?
So now I understand the misunderstanding, sorry.
I agree with you that in this situation (cursor control), nothing changes regarding the learning task for the brain.
I’ve been meaning to coin a fallacy for this. Every era is absolutely certain that fundamental reality is just like [currently dominant technology] and always has been. I expect the fundamental reality of 2200 to be some kind of mystical networked entity, probably either deistic or pantheistic in nature.
I cringe when this mindset worms its way into technical terms. I remember cringing when the term 'genetic algorithm' became popular. (It's a great optimization technique for some things, but it has little to do with biological evolution.) Likewise, 'neural nets'. Aaargh... incredible technology, but nothing resembling an actual neuron.
From the early decades, the term 'computer', I feel, kind of hampered people's ideas about what a computer could do. Specifically, the term 'computer' or 'computor' originally referred to a person who added up figures: it was a job description in the late 1800s and early 1900s. Did the term's prior usage affect what early computers were used for?
What I have been missing is a nice list with previous examples of this line of thinking. The only good one that comes to mind is how seafaring and the ubiquity of maps was one such moment in history. I'd be curious if you have other examples.
I do think these types of analogies don't really work for the brain, though, or at least only scratch the surface. (Clearly the brain networks and computes, but there's much more going on.)
I'm a physicalist, but I suspect it might take longer to fully understand the brain than it'll take to fully understand the universe. (At least to a feasible level of universe detail; the difference is there'll probably be a point where we'll know we know the brain, but we won't necessarily ever be able to know if we've fully cracked the universe at the root, even if we can explain all observed phenomena.)
I agree. In short, they've discovered that they're measuring the wrong thing. Since we don't understand a lot about the brain, that should be unsurprising.
The same thing happens in computers: in many systems, a specific memory cell at a low level can easily be remapped to different uses over time.
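A minimal sketch of that kind of remapping, using a plain list as "physical memory" and a dict as the indirection table (the names and values are invented for illustration):

```python
# Physical "memory" plus a mapping table, loosely like a page table, or
# a hypothetical logical layer sitting on top of individual neurons.
memory = [None] * 8
page_table = {"greeting": 2}   # logical name -> physical cell
memory[2] = "hello"

def read(name):
    """Look up a logical name through the indirection table."""
    return memory[page_table[name]]

# Remap: move the contents to a different cell and update the table.
memory[5] = memory[2]
memory[2] = None
page_table["greeting"] = 5

assert read("greeting") == "hello"   # same information, new location
```

Anyone probing the physical cells directly would see cell 2 "change its function", while the logical content never moved at all.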
An image is the same image on a different screen; a code stays the same even if produced by different devices.
In the article, it seems the word "pattern" is used not in the sense of an actual pattern, i.e., a shape or design independent of what produces it, but to indicate a specific group of neurons.
For me (without knowing anything about it), I would think new sensory stimuli are encoded by "smart" neurons, and that gradually, as the stimulus is experienced again and again, less and less "smart" (or versatile, or performant) neurons are tasked with encoding it.
But there is no reason to expect that the computational model has any resemblance to the one used in PCs. There are good reasons to believe that the brain's functions are not based on binary logic circuits, that (some) memories are not simple storage etc.
> Put it this way: The neurons that represented the smell of an apple in May and those that represented the same smell in June were as different from each other as those that represent the smells of apples and grass at any one time.
> This is, of course, just one study, of one brain region, in mice. But other scientists have shown that the same phenomenon, called representational drift, occurs in a variety of brain regions besides the piriform cortex. Its existence is clear; everything else is a mystery.
On top of that, layer the fact that the brain constantly tries to predict what is going to happen (based on a model of the world). So the experience of smelling an apple the first time is different from smelling it subsequently. Smelling an apple is also heavily influenced by visual input.
So I don’t find their “findings” odd at all. I also think the whole “we have neurons for X” idea is an old theory that maybe we should move away from.
- Find a newly published paper
- Assume it’s true
- Build clickbait title
- Write 2,000 words, make sure to include references to old Greek philosophers
That's not a good headline, and I'm not a journalist. But the practice of withholding information crucial to the story, while using hyperbole generic enough to remain factually accurate, to attract experts and laymen alike to read an article (and, more importantly, to generate ad revenue; who cares about readership!?) is the definition of clickbait.
Here's my own rule of thumb: if I have to read the article to understand the broad premise, the headline failed at its job journalistically, even if it gets all the hits and ad revenue in the world.
I get that the headline is constructed as and meant to be clickbait. But it somehow manages to fit that clickbait mold while being completely bland and boring. Something like “You won’t believe this weird thing neuroscientists can’t explain” (which wouldn’t have made it past the Atlantic editors, hopefully) at least makes an effort to convince you the article is interesting by outright telling you it is.
But you're right, it's not clickbaity, except for the elephant in the room: there are way more things neuroscientists can't explain than things they can.
I have read/watched quite a few occasions where scientists (mostly physicists) show childlike excitement describing a newly encountered phenomenon that they can't fit into the current model. My grasp of contemporary physics is next to nothing, but my guess is the scientists get excited because it opens up an uncharted path. It's an opportunity to expand the envelope of knowledge.
This is hilarious. Thanks for sharing this. It brightened my day.
This should be a more permanent link I hope.
What? I'm a scientist, I often don’t know what’s going on! I see it more as my duty to help others understand that that is ok, and often even a good thing because that is when the fun starts. Science is a method, not a “state”. States are for religions, they “know” what’s going on because "it has been written!". Us scientists are always writing.
It's fine for many people, but some, like this writer, still can't shake the idea that the world must have omniscient oracles.
I think, however, the intersection of pop culture and science is what makes us queasy about scientific confusion. Often, while we don't know some things completely, we know them quite well enough (anthropogenic global warming from carbon emissions, lung cancer from cigarette smoking, etc.).
However, if there is scientific uncertainty it often becomes an invitation for quacks and charlatans - which is why scientists are sometimes so hesitant to speculate in public.
Scientists may propose a novel model. Then they check whether the model predicts reality better than the old model within some boundaries. Rinse and repeat.
Although a model will probably seem to be "explaining" reality, this is mostly misleading and in any case unnecessary. Pop-sci will explain anything to anyone anyway; they can even explain Hawking radiation without showing a single equation.
I mean, I can think of a few brain-related questions with no good answers:
- What is consciousness and how does a brain become conscious?
- How and why is one specific consciousness attached to one specific brain (& body)? Why am I controlling my body and not yours?
To take it a bit further, I am not sure if the first question makes a lot of sense either.
I would also note that the second question can make sense, but only if you believe consciousness is not strictly tied to the brain and body.
Why assume consciousness is necessarily linked and/or only a property of isolated individuals (vs a community, possibly multi-species)?
Do we define consciousness as awareness of
1 the general environment,
2 of the self as a distinct part of that environment,
3 of the (Freudian) ego as the self?
Note that 2 & 3 especially make some pretty strong assumptions: is it possible to separate something from its environment (consider the impact of your microbiome)? And for #3, is a person who has temporarily suspended the ego (meditation, a high-flow state, psychedelic drugs) conscious?
Former brain scientist speaking.
So, it seems I'm a brain mostly (or a nervous system?), or at least it seems it's my brain that is talking to you. The question is: are my feet conscious too?
What if it looks like I'm a brain because only the brain can talk?
Regarding the consciousness of multi-organism entities: we are already like a colony of many cells! So it's possible that a group of humans may share consciousness somehow (but this seems different from my own, personal consciousness).
Likewise, we use our environment as an extension of our minds, or at least, spiders do. Perhaps that's what the rest of our bodies is for our brain: just part of the environment.
I took a couple of neuroscience classes.
There are core parts of our understanding that would take a lot of counter evidence to really change. For example, evolution. It's as close to a fact as you can possibly get.
On the flip side, there are always fringe parts of science where we are actively generating new models and theories.
The problem is distinguishing between the two. Yes, you are an idiot if you think the earth is flat. No, you aren't an idiot if you think that we don't have a full understanding of how the brain works.
Scientific "blasphemy" is making a claim with little evidence that runs counter to the current accepted and working models which have mountains of evidence behind them.
It's the "Let's rewrite the whole system" mentality whenever you encounter a small bug in the software. Scientific understanding is generally a bunch of small fixes and tweaks vs rewriting things ever few years.
So yes, evolution by several forms of selection is pretty well established, but by no means do we fully understand how the natural world came to be as it is today. Which is nice, if you ask me.
As you say, we are tweaking this theory.
I think some people confuse scientists with science teachers or science students. Scientists aren’t people who spend their time learning about what we already know, they are people trying to discover things we don’t know.
He won the Nobel prize, but was tremendously unhappy. For, with no more questions to answer, what was the purpose of living?
Fry then asked a question in his savant way: why is the unified theory what it is, instead of something else?
The professor then realized there was another question to ask, one that would take decades to answer, thousands of postgrads, and flaming dump trucks of grant money. He was happy again.
“And, now that I've found all the answers, I realise that what I was living for were the questions!”
(Futurama, Reincarnation, 2nd part)
Religions specifically claim that they know a fundamental truth about existence. Their core tenets rest on knowing a truth that doesn't have supporting evidence and cannot be falsified.
This has nothing to do with religious institutes and everything to do with a fundamental claim of the religious.
Reading your post here, I take it you believe the bible is from god or a higher authority. Do you ever question that belief?
Why are my personal beliefs relevant?
Buddhism, for one, would not only accept that but would say it loud and clear on its own.
When someone says "I don't believe in karma, rebirth, or the teachings of Buddha" are you talking about a Buddhist? What makes a Buddhist if not at least some belief in some of the Buddhist teachings and beliefs. (Granted, the notion of the "sacred" doesn't really exist in Buddhism like it does in abrahamic religions).
Argumentum ad populum again.
Seriously, your country needs a hard grounding in logic, math, and the scientific method. If not, you are doomed. Hard.
For example, let’s say we smell a lychee for the first time in a shop, and the brain stores that memory using a given set of neurons. Later, a hawker starts selling lychees in the local train in Mumbai we commute on everyday. Now memories of the smell of lychees and the smells and experiences of Mumbai local trains become strongly linked. At that point, in a simplified model of the brain, I can imagine why the brain may want to move those memories so they are using neurons that are closer together. Defragmentation!
I am likely terribly wrong, but would love to learn in what way I am wrong :)
As an aside, I wonder if the drift seen in neurons represents thought. Perhaps drift has to occur for the brain to continually think and self improve, even with limited inputs. I wonder how different drift rates in regions of the brain relate to things like better long term memory or creativity.
So, even if it’s not intimately linked with human memory, the piriform cortex may be the origin of the cellular template that later diverged into whichever other brain tissues are responsible for other types of memory.
This kinda sounds like a high school essay.
Though the analogy is too simple, I think of a whirlpool in a stream: we would not be surprised at all to observe that a given water molecule participating in the whirlpool pattern at time t has only a p% chance of still being in the pattern at time t+1.
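A quick simulation of the whirlpool picture: individual members churn in and out at random, yet the pattern itself (its size, here) persists unchanged. All the numbers below are arbitrary.

```python
import random

random.seed(0)
p = 0.8                      # per-step chance a member stays in the pattern
pool = set(range(1000))      # all water molecules / all neurons
pattern = set(range(100))    # the 100 currently forming the whirlpool
original = set(pattern)

for step in range(20):
    stay = {m for m in pattern if random.random() < p}
    # Replace the leavers, so the pattern keeps its size and "shape".
    recruits = random.sample(sorted(pool - stay), len(pattern) - len(stay))
    pattern = stay | set(recruits)

print(len(pattern))              # the pattern persists: still 100 members
print(len(pattern & original))   # but its membership has largely churned
```

The "whirlpool" is perfectly stable at the pattern level while almost no individual member is the same, which is roughly what the drift studies seem to report.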
For at least two reasons. First, the perception of smell is extremely stable in humans. You can immediately recall a smell you smelled as a child, and where you were and what you were doing.
And related to that, smell is a very strong association factor for memories, meaning you much more easily remember something if it smells like before than if it just sounds or looks like before.
If the neurons of this association drift all the time, the question is how those associations persist through the decades. As a programmer, you know that if you start moving associated data around in your database, it's very easy to break your data structures (dangling foreign keys, etc.). A brain isn't digital, but connected things must stay connected somehow.
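A small sketch of that failure mode in an actual database (SQLite via Python's stdlib; the tables and values are invented): move a referenced row without updating the reference, and the association silently dangles.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE smells (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE memories (smell_id INTEGER, note TEXT)")
con.execute("INSERT INTO smells VALUES (1, 'rose')")
con.execute("INSERT INTO memories VALUES (1, 'grandma''s garden')")

# "Drift": the rose record moves to a new id, but the reference in
# `memories` is never updated -- a dangling foreign key.
con.execute("UPDATE smells SET id = 42 WHERE id = 1")

row = con.execute(
    "SELECT s.name FROM memories m LEFT JOIN smells s ON s.id = m.smell_id"
).fetchone()
print(row)   # (None,): the association broke when the data moved
```

Whatever the brain does during drift, it evidently updates "both sides" of the association at once, which is exactly the part that's hard to do in an ordinary database.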
I like how to a layman, every finding is so obvious, but to a scientist it's not obvious because it doesn't match the other things they know about the brain.
If you don't remember a smell, how would you notice? It's not like your brain will throw up an error message. The memory just won't come to mind.
What I don't like: so many humans think they know everything or at least position themselves that way. FYI this isn't an attack on experts.
So no hot take there.
Someone here likened it to how programs can move around in computer memory. If neurons are the "silicon of the brain" or the "PHY" layer, the biggest problem in neuroscience IMHO is that we have virtually no understanding of what is the next layer up, (i.e. the "logical layer"). What is the equivalent of program counters, CPU instructions, etc. in the brain?
The brain is a nonlinear recurrent dynamical system and we are really in the dark as to how to break the dynamics down into understandable subcomponents.
To get an idea of the complexity, this paper by Randall Beer analyzes the possible behaviours of 1 and 2 neuron circuits:
Beer, R. D. (1995). On the Dynamics of Small Continuous-Time Recurrent Neural Networks. Adaptive Behavior, 3(4), 469–509. doi:10.1177/105971239500300405
Obviously this kind of analysis doesn't scale to 10^10 neurons.
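For a feel of what even the smallest such system looks like, here is a rough Euler-integrated sketch of a two-neuron CTRNN of the kind Beer analyzes. The weights, biases, and time constants below are illustrative, not taken from the paper.

```python
import math

def sigma(x):
    """Logistic activation."""
    return 1.0 / (1.0 + math.exp(-x))

# Two-neuron continuous-time recurrent neural network:
#   tau_i * dy_i/dt = -y_i + sum_j w[i][j] * sigma(y_j + theta_j)
w = [[4.5, -1.0],        # w[i][j]: weight from neuron j onto neuron i
     [1.0,  4.5]]
theta = [-2.25, -2.25]   # biases
tau = [1.0, 1.0]         # time constants
y = [0.1, -0.1]          # initial state
dt = 0.01                # Euler step size

for _ in range(5000):
    dy = [(-y[i] + sum(w[i][j] * sigma(y[j] + theta[j]) for j in range(2))) / tau[i]
          for i in range(2)]
    y = [y[i] + dt * dy[i] for i in range(2)]

print(y)   # where the state ends up depends entirely on the parameters:
           # even two neurons can show multiple attractors or oscillations
```

Beer's point, visible even here, is that the qualitative behaviour (fixed points, limit cycles, bistability) has to be mapped out region by region in parameter space, and that analysis explodes combinatorially long before 10^10 neurons.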
1. seek new, potentially-massive rewards, stimulation rather than do the same thing over and over again without ever trying anything new
2. improve processing of stimuli that are more frequent
Why would a smell be any different? Each time I smell an apple, or taste wine, or examine a painting, I am a different person, and I also discover more nuance about the subject.
What is surprising is that the location of the subset of neurons that reacts to a specific thing (say, the smell of wine) changes over time. So neurons that were used to identify the smell of wine at time T might be doing something completely unrelated six months later, and the task of identifying the smell of wine is now handled by another subset of neurons in the brain.
It is surprising, and on the surface it seems inefficient, since your brain has to continually transfer knowledge from one region to another rather than just adjusting the existing region to new learnings.
What if a neuron (actually a cortical column) could learn thousands of Xs and fire only when specific environmental conditions are met? What if you constantly wire and rewire the connections between neurons? What if multiple areas can do the same thing and arrive at a conclusion through consensus? What if your brain can predict a lot of things and just discards sensory input when it matches the expected prediction?
Instead of looking at individual neuron response properties, look for population codes. This would actually be an interesting experiment with an artificial network: observe how the population activation drifts over time (but remains identifiable) as the network is fed more and more data.
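One crude version of that experiment, in plain Python: a perceptron whose weights are perturbed every epoch (standing in for biological noise) and then retrained on the same separable task. The parameters wander indefinitely, yet the input-output function stays perfectly decodable. All numbers here are made up.

```python
import random

random.seed(1)

# Toy task: classify 2-D points by the sign of x0 - x1 (linearly separable).
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
label = lambda x: 1 if x[0] - x[1] > 0 else -1

def predict(w, x):
    return 1 if w[0] * x[0] + w[1] * x[1] > 0 else -1

def accuracy(w):
    return sum(predict(w, x) == label(x) for x in data) / len(data)

w = [0.5, -0.5]
snapshots = []
for epoch in range(20):
    # "Biological noise": perturb every weight...
    w = [wi + random.gauss(0, 0.3) for wi in w]
    # ...then let ongoing learning (the perceptron rule) repair the
    # function; on separable data this loop is guaranteed to terminate.
    while any(predict(w, x) != label(x) for x in data):
        for x in data:
            err = label(x) - predict(w, x)
            w = [wi + 0.05 * err * xi for wi, xi in zip(w, x)]
    snapshots.append(tuple(w))

print(accuracy(w))                   # 1.0 by construction: still decodable
print(snapshots[0], snapshots[-1])   # yet the parameters keep wandering
```

The readout stays identifiable epoch after epoch even though no individual weight is stable, which is loosely the population-code story: the code drifts, the decodable structure doesn't.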
Might the entire Neuralink concept be based on a flawed assumption?
Is this a trend for weird, meaningless headlines?
Add to that the subheading. The combination made me back out straight away in protest.
Poor poor mice. Vivisection is done for the 'greater good', but not for the greater good of the millions (billions?) of critters who have suffered it. Yes I know that modern physiology is premised on it as method, the knowledge of organisms as they live not as they are dead, but what a toll.
Q: Which is heavier, 100kg of lead or 100kg of feathers?
A: The feathers, as you have to carry around what you did to those poor birds...
The sort of reasoning that neuroscientists are struggling with is that very same kind of reasoning that computer scientists and practitioners use when thinking about ANN/PDP and deep learning.
THAT would be a great title. They are trying to sell this idea all the time.
The brain probably isn't a quantum computer, or else we'd be able to factor integers quickly in our heads.
According to philosopher Paavo Pylkkänen, Bohm's suggestion of the quantum mind "leads naturally to the assumption that the physical correlate of the logical thinking process is at the classically describable level of the brain, while the basic thinking process is at the quantum-theoretically describable level". 
Factoring integers is a logical operation (thus not performed at the quantum level). But an operation like identifying an object or a smell (which is what this article is about) could be performed at a deeper level using quantum mechanics.
I can move information from one memory location to another but the info can be identical.
That's kind of what makes the field so exciting: what we don't know is almost all of it! Plus it's not some peripheral thing; it's right there at the core of everything we are, individually and collectively. Sadly it's had more than its fair share of charlatans and credulous followers. Sure, it's the Atlantic; nobody expects them to be any good, right? Well, we probably shouldn't, but sometimes we still do.
If my boss came round and said, "This is adenozine, he's supposed to know what's going on," I'd be pissed. That's just dumping blame.
You change a single pixel in an image, for example, and you get a completely different result.
just like in SSD ;-)