Hacker News
The brain isn’t supposed to change this much (theatlantic.com)
196 points by tardismechanic 10 days ago | 177 comments

I find the assumption built into this article's incredulity odd, and I doubt I would ever have made it myself. Why would we ever expect qualia to be based on specific and static combinations of neurons? The fact alone that memories last a finite amount of time should be reason enough to conclude (given our limited number of neurons) that that's not how neurons work, at least insofar as the qualia of memory and of present experience are the same, which seems like a sane null hypothesis to me. I think there is a strong cognitive bias in the present culture toward thinking of the brain like a computer or hard drive. It's not, and I would have been more surprised if the scientists here had made the opposite finding: that the constant qualia (of these mice's smells) were linked to constant combinations of activating neurons. That would contradict existing observations on brain damage and imply uncomfortable things about electrically programming the brain to produce known outcomes.

> Why would we ever expect qualia to be based on specific and static combinations of neurons?

The belief that particular parts of the brain perform specific functions goes back a long way. One big factor is that the prevalence of brain injuries during WWI allowed scientists to draw conclusions like "damage in this region results in an inability to do this", and that level of understanding has, to varying extents, carried forward since, because how the brain really works is still extremely mysterious.

Keep in mind that the scientists could seemingly localize a function to a group of neurons at a given moment. But you're right, that sort of thinking is too simple.

Luria goes into this question of localization in The Working Brain (a book that's now quite old). Overall, neuroscience has been trying to understand the brain for a long time and not doing very well.

When multiple regions are actually needed to work together to support a function as a network, then this shows up in the lesion and injury data also. There's also a lot of data from surgeries to address severe seizures.

There are some difficulties. For example, if the brainstem is also involved in a function, it's hard to tell, because when the brainstem is damaged you're usually too dead to test the function. But I think if you started studying the brain without any prejudices about localization of function, you'd still end up finding a huge amount of it.

Programs are executed at different memory addresses all the time; the hardened Linux kernel even randomizes its own layout. It's like a computer all right, just very different from a von Neumann one: a completely parallelized one. When I started reading the article I expected something mind-blowing, but given the sheer number of neurons it always seemed obvious that they would load-balance, feed back, rewire themselves, and do all sorts of cool, difficult-to-predict stuff all the time.
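That point can be caricatured in a few lines. The class below is a hypothetical illustration (not a model of kernel ASLR, and certainly not of neurons): the stored value hops to a fresh random slot on every read, yet reads through the stable handle never change.

```python
import random

class RelocatingStore:
    """Toy store whose payload moves to a random slot on every access,
    while reads through a stable handle always return the same value."""

    def __init__(self, slots=1024):
        self.slots = [None] * slots       # the "physical" substrate
        self.table = {}                   # stable handle -> current slot

    def put(self, handle, value):
        self._place(handle, value)

    def get(self, handle):
        value = self.slots[self.table[handle]]
        self._relocate(handle)            # the substrate shifts on every read
        return value

    def _place(self, handle, value):
        free = [i for i, v in enumerate(self.slots) if v is None]
        i = random.choice(free)
        self.slots[i] = value
        self.table[handle] = i

    def _relocate(self, handle):
        value = self.slots[self.table[handle]]
        self.slots[self.table[handle]] = None
        self._place(handle, value)

store = RelocatingStore()
store.put("smell-of-apple", "apple")
slots_used = set()
for _ in range(50):
    assert store.get("smell-of-apple") == "apple"  # the read-out is constant
    slots_used.add(store.table["smell-of-apple"])
print(len(slots_used))  # many distinct slots carried the same stable value
```

The analogy is loose, of course: the stable part here is the lookup table, and where the brain keeps its equivalent of that table is exactly the open question.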

I think part of the mystery is that computers can copy binary information losslessly, ad infinitum. So they can move information as often as needed with (almost) no error. We don't think neurons are as reliable, and excessive copying is expected to degrade a memory/representation. It seems like this might work for discrete categorical representations, whose activation is essentially binary, but would gradually fail for anything analog/continuous.
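A tiny simulation of that intuition (purely illustrative; the noise model and numbers are invented): a value copied with additive noise drifts like a random walk, while a thresholded "binary" value is restored to a rail on every copy.

```python
import random

def copy_analog(x, noise=0.05):
    # each copy adds a little Gaussian noise
    return x + random.gauss(0.0, noise)

def copy_binary(x, noise=0.05):
    # noisy copy followed by thresholding: restoration to 0.0 or 1.0
    return 1.0 if copy_analog(x, noise) >= 0.5 else 0.0

random.seed(0)
analog, binary = 0.8, 1.0
for _ in range(10_000):
    analog = copy_analog(analog)
    binary = copy_binary(binary)

print(binary)   # still exactly 1.0: every copy snaps back to the rail
print(analog)   # a random walk away from 0.8: the analog value has drifted
```

Flipping the binary value would need a single noise excursion of ten standard deviations, which is why it survives arbitrarily many copies while the analog value wanders.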

If the computational hypothesis for minds and brains is true (and it is only conjecture, though a tempting one), I can only see one way out. Analogies are a wicked thing to indulge in here, but bear with me.

If an engineer designs a computer that computes on encrypted information, then from a naive observer's perspective the entire contents of memory are rewritten with random values at each step. Yet there is a mathematical relationship between those seemingly random bits and whatever is encoded. Even just adding some error correction and non-deterministic parallel processing might make it look pretty much like noise if you don't have the decoder ring.
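A toy version of that picture (a one-time-pad-style additive cipher, not real homomorphic encryption; the modulus and values are arbitrary): the "memory" word is re-randomized at every step, yet the encoded value is perfectly stable for anyone holding the key.

```python
import secrets

M = 2**64  # toy modulus

def encrypt(value, key):
    return (value + key) % M

def decrypt(cipher, key):
    return (cipher - key) % M

value = 12345
key = secrets.randbelow(M)
cipher = encrypt(value, key)

states = set()
for _ in range(100):
    new_key = secrets.randbelow(M)
    # re-encode under a fresh key without ever writing the plaintext back:
    cipher = (cipher - key + new_key) % M
    key = new_key
    states.add(cipher)

assert decrypt(cipher, key) == value  # one stable value...
print(len(states))                    # ...behind a run of random-looking states
```

To an observer without the key, every snapshot of `cipher` is indistinguishable from noise; the invariant lives only in the relationship between the state and the key.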

Something like that is the only explanation I can come up with. Information is stored at a level of abstraction in a structure that is maintained by being continuously re-encoded in the ephemeral computations at a lower level.

If I understood correctly, I think I agree with your loose analogy. I've tried combining some error correction with ongoing re-learning. It stabilizes things against drift, but not indefinitely. This is all in an extremely preliminary cartoon sketch of a model. https://www.biorxiv.org/content/10.1101/2021.03.08.433413v1....
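For what it's worth, that combination can be sketched in a few lines (a toy redundancy code with periodic re-encoding, not the model from the linked preprint; all parameters are invented): noise keeps flipping bits, and a periodic majority-vote "refresh" re-encodes the current read-out.

```python
import random

def majority(bits):
    return 1 if 2 * sum(bits) > len(bits) else 0

def noisy_step(code, flip_p):
    # each bit flips independently with probability flip_p
    return [b ^ (random.random() < flip_p) for b in code]

def refresh(code):
    # "re-learning": re-encode whatever the current read-out is
    bit = majority(code)
    return [bit] * len(code)

random.seed(1)
n, flip_p = 25, 0.01
code = [1] * n            # the stored memory: one bit, redundantly encoded
survived = True
for t in range(1, 5001):
    code = noisy_step(code, flip_p)
    if t % 5 == 0:
        code = refresh(code)
    if majority(code) != 1:
        survived = False
        break
print(survived)
```

Without the refresh the read-out eventually random-walks to chance; with it the read-out stays stable for a long time, though one unlucky window can still flip the refreshed bit, which matches the "stabilizes, but not indefinitely" caveat.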

Digital abstractions seem practically worthless when the hardware is not explicitly designed to suppress non-binary behavior.

Perhaps analog FPGAs are interesting windows into this: https://news.ycombinator.com/item?id=23432601

The Xilinx 6200 series could safely run randomized bitstreams without damaging itself. Genetic algorithms could be applied to create circuits that worked despite being riddled with what would normally be fatal design flaws.

Finding that mystifying presupposes that the function of the brain is to represent lossless, perfect memories in a static way, rather than, say, to maintain coherence or consensus across experiences. It's not clear at all that altering or evolving memories is bad for the system overall.

It might very well be a feature of this drift that old memories or information are gradually adapted. After all, evolutionarily speaking, you wouldn't expect the brain to optimize for archiving but for making fitness-optimizing decisions in the world.

But human representations do degrade to the extent that we forget detail, and the information also changes. Sure, the odour might be the same and the experimental controls perfect, but an organism probably should change how it processes the odour over time ("new, unfamiliar odour", "smelt that yesterday"... "wonder if anything's happening related to that odour which emerged recently", "meh, background odour"), so some of the information update is additive.

> I think there is a strong cognitive bias in the present culture of thinking of the brain like a computer or hard drive.

Good point.

The behaviour observed is puzzling if your model says that the mice's brains first train on the smell, then once it is learnt the memory becomes "locked in", and later encounters with the smell are just read-only "recall and recognition" operations, just like how we might write to a computer's memory and then later use the memory contents in a read-only way.

But even as a layman on the subject, I can see that learning a smell and later trying to identify the same smell aren't two separate kinds of task. If I smell a rose for the 1000th time, I'm still learning about its smell. Why would the brain not change?

The whole premise of NeuraLink is built upon this assumption and nobody found it weird...

By the way, I wonder how the NeuraLink team will work around that.

I am guessing that what GP meant was that some people have started thinking of the brain not as a theoretical computer (i.e. as something that can be modeled as a Turing machine, the lambda calculus, etc.), but as a microprocessor. The computational nature of the brain is a pretty widely held belief (though not universal), and one that I for one hold.

However, starting to think of the brain as if it should resemble the organization of a microprocessor, with ALUs, working memory, well-defined boundaries, etc., is definitely wrong, and a misapplication of the notion of Turing equivalence. It is nevertheless somewhat common, especially in pop culture.

The brain certainly has regions that are structurally different. The logic of that differentiation certainly does not correspond to a microprocessor's, but the differentiation is there as a fact.

That interface is based upon learning. It simply means the learning has to be ongoing.

Yes, but that might entail a lot of rework, now that it is a moving target.

Also, that could solve many of their problems. Maybe things did not work out very well initially precisely because of this unsuspected, huge brain plasticity.

Not sure I understand what problem plasticity poses to NeuraLink. If anything, that is the mechanism that is supposed to power it. How otherwise would the brain learn the meaning of the signals received from it?

NeuraLink was supposed to learn the mapping, yes, but on the assumption that the mapping was relatively stable, unless I am completely mistaken.

What we learn in this article is that the mapping changes a lot, and quickly.

This is an important difference: previously it could be enough to converge slowly towards the right mapping, but now the convergence has to happen quickly, so the mapping becomes good enough before it drifts and retraining must start again.

(To sum up my remark: the retraining has to be fast enough to not become obsolete before it can be used, given the "representation drift".)

You are conflating low-level I/O (like eyes and NeuraLink) with high-level reasoning.

Assume NeuraLink transmits an encoded signal that there's a dog in front of you. Without "representational drift" (or rather its underlying mechanism), I don't think your brain would ever notice that the signal has anything to do with dogs. The wire is inserted into a random place; it is not designed to be repositioned so as to fire up your "dog" neurons whenever it needs to send a "dog" signal.

As for the assumption of the mapping being stable, there's no such thing at all. First and foremost, the brain itself must learn to interpret signals from the wire and to send them back. Any mechanism on the wire's side would mostly be there to establish a feedback loop, so that the brain can learn to do that (which it will do by rearranging its internal structure).

Wow there is something weird:

I always assumed that NeuraLink was meant to output from the brain to external interfaces, not to input into the brain.

I have seen the public presentations of NeuraLink by Elon Musk, and the focus was always to extract info from the brain, not the other way around.

Do you have some info related to your interpretation?

Is there a difference between the brain learning input and learning output? The brain would need to learn that firing that neuron makes the cursor move right.

The presentation with pigs was all about exporting the brain's representation of olfactory stimuli. They were not demonstrating learning to control cursors.

So now I understand the misunderstanding, sorry.

I agree with you that in this situation (cursor controlling) nothing is changed regarding the learning task for the brain.

> I think there is a strong cognitive bias in the present culture of thinking of the brain like a computer or hard drive.

I’ve been meaning to coin a fallacy for this. Every era is absolutely certain that fundamental reality is just like [currently dominant technology] and always has been. I expect the fundamental reality of 2200 to be some kind of mystical networked entity, probably either deistic or pantheistic in nature.


I cringe when this mindset worms its way into technical terms. I remember cringing when the term 'genetic algorithm' became popular. (It's a great optimization technique for some things, but it has little to do with biological evolution.) Likewise 'neural nets'. Aaargh... incredible technology, but nothing resembling an actual neuron.

From those early decades, the term 'computer', I feel, kind of hampered people's ideas about what a computer could do. Specifically, the term 'computer' (or 'computor') originally referred to a person who added up figures: it was a job description in the late 1800s and early 1900s. Did the term's prior usage affect what early computers were used for?

I've been leaning towards a similar mode of thinking, and like to make the point when people get too obsessed about how the universe is a simulation or a neural network.

What I have been missing is a nice list with previous examples of this line of thinking. The only good one that comes to mind is how seafaring and the ubiquity of maps was one such moment in history. I'd be curious if you have other examples.

I kind of take the opposite view. I suspect all of the analogies - the clockwork, machine, computer, network universe - may all be right. I think all of these advances and corresponding analogies better help us understand nature, and I think people making these analogies did so for good reasons. The development of the technologies opened the door to new types of system conceptualizations.

I do think these types of analogies don't really work for the brain, though, or at least only scratch the surface. (Clearly the brain networks and computes, but there's much more going on.)

I'm a physicalist, but I suspect it might take longer to fully understand the brain than it'll take to fully understand the universe. (At least to a feasible level of universe detail; the difference is there'll probably be a point where we'll know we know the brain, but we won't necessarily ever be able to know if we've fully cracked the universe at the root, even if we can explain all observed phenomena.)

Sure, I’m not saying that these theories were inherently wrong, just that the idea of the universe being akin to [current technology] is almost certainly incomplete at best and extremely misleading at worst.

> Why would we ever expect qualia to be based on specific and static combinations of neurons?

I agree. In short, they've discovered that they're measuring the wrong thing. Since we don't understand a lot about the brain, that should be unsurprising.

The same thing happens in computers; in many systems, a specific memory cell at a low-level can be easily remapped to different uses over time.

Could it not be a pattern? Either a geometric pattern or a sequence pattern, like Morse code...?

An image is the same image on a different screen; a code stays the same even if produced by different devices.

In the article it seems the word "pattern" is used not in the sense of an actual pattern, i.e. a shape or design independent of what produces it, but to indicate a specific group of neurons.

For me (without knowing anything about it), I would think new sensory stimuli are encoded by "smart" neurons, and that gradually, as the stimulus is experienced again and again, less and less "smart" (or versatile, or performant) neurons are tasked with encoding it.

I mean, I think it's reasonable to expect that brains function in an evolved way that should minimize energy use when not outweighed by some other benefit. So I think it's very valid to wonder what the benefit is of 90% or more of the neurons involved in experiencing a specific, simple stimulus changing over a relatively short time.

yeah, especially in 'getting used' to something. If regular air all of a sudden started smelling like roses, it would be immediately noticeable, but it probably wouldn't be long before we couldn't smell anything different about the air. That has lots of implications for neuron firing: for example, people could have feelings of their significant other, sex, happiness, etc. triggered by smelling this rosy air.

Indeed, this criticism is as old as Ned Block's Troubles with Functionalism from 1978.

People of note have seriously spoken of "the neuron to detect X".

For stuff like sight, things are said to be very rigid: you will have a "Jennifer Aniston neuron" that fires every time you see her picture.

It is especially weird that people in this day and age would even think of the brain as a single computer instead of as a network of computers. I have no idea how well that analogy holds, but it does seem like a natural base assumption to have. That image you are viewing in your browser might be on a completely different hard drive in a completely different datacenter from month to month.

I don't think it makes sense to think of the brain as even a network of (microprocessor style) computers. The brain is almost surely a computer in the theoretical sense of the word, i.e. it is equivalent to the lambda calculus or a Turing machine.

But there is no reason to expect that the computational model has any resemblance to the one used in PCs. There are good reasons to believe that the brain's functions are not based on binary logic circuits, that (some) memories are not simple storage etc.

Agreed. What I meant was that networking is the paradigm of the times when it comes to computing, so you would expect THAT to be the base assumption, not a stand-alone computer 20 years out of date.

I mean, even modern PCs and phones are networks of computing units internally.

I hate to quote the article so directly, but to make the headline a little less click-baity:

> Put it this way: The neurons that represented the smell of an apple in May and those that represented the same smell in June were as different from each other as those that represent the smells of apples and grass at any one time.

> This is, of course, just one study, of one brain region, in mice. But other scientists have shown that the same phenomenon, called representational drift, occurs in a variety of brain regions besides the piriform cortex. Its existence is clear; everything else is a mystery.

These are the only two paragraphs worth reading in that article.

I personally feel similarly about a large percentage of articles in magazines like these. You can replace the whole article with 2–3 paragraphs, usually located within the first third of the article.

It almost feels like the writers are just writing for their own benefit, like so they can point to this stuff and talk about how great a writer they are. But no, they're actively doing a bad job. Maybe these kinds of magazines are just trying to entertain, and the actual information is basically secondary. A lot of this feels like science-themed writing, not writing where science is the goal.

not a [neuro]scientist, but is there such a thing as “the smell of an apple”? one would think that sensory input != actual representation and also that representation is context specific.

On top of that layer the fact that the brain constantly tries to predict what is going to happen (based on a model of the world). So the experience of smelling an apple the first time is different from smelling it subsequently. Also smelling an apple is heavily influenced by visual input.

So, I don’t find their “findings” odd at all. I also think the whole “we have neurons for X” idea is an old theory that maybe we should move away from.

How to write an article these days:

- Find a newly published paper

- Assume it’s true

- Build clickbait title

- Write 2,000 words, make sure to include references to old Greek philosophers

- Publish

Honestly, “[Anything-s]cientists have discovered a phenomenon that they can’t explain” is like the least clickbaity title ever. It’s basically equivalent to “Scientists have had a good day at work”.

"Scientists can't explain mechanism behind neurological representational drift" allows for individuals that understand the terminology to skip reading the article.

That's not a good headline; I'm not a journalist. But the practice of withholding information crucial to the story, while using hyperbole in a generic enough way to remain factually accurate, in order to attract experts and laymen alike to read an article (and, more importantly, to generate ad revenue; who cares about readership!?) is the definition of clickbait.

Here's my own rule of thumb: if I have to read an article to understand the broad premise, the headline failed at its job journalistically, even if it gets all the hits and ad revenue in the world.

I think that’s the disconnect here: I don’t see the original headline as attractive at all (or hyperbolic, but that’s irrelevant here). It is not only uninformative (no argument about that), it’s uninformative enough that my reflexive reaction is to shrug and move on. (And I’m not immune to clickbait, but it does have to be enticing.)

No it's not. That is textbook clickbait: "You won't believe this secret that is short enough to put in the headline, but we won't do that."

It seems that the point of my comment failed to get across (I did try to make it more explicit initially, but the result sounded disparaging and I didn’t like it).

I get that the headline is constructed as and meant to be clickbait. But it somehow manages to fit that clickbait mold while being completely bland and boring. Something like “You won’t believe this weird thing neuroscientists can’t explain” (which wouldn’t have made it past the Atlantic editors, hopefully) at least makes an effort to convince you the article is interesting by outright telling you it is.

actually, it's "Scientists have a bad day at work." Scientists explaining a phenomenon is a good day at work.

but you're right, it's not clickbaity, except for the elephant in the room: there are way more things neuroscientists can't explain than things they can.

> actually, it's "Scientists have a bad day at work."

I have read/watched quite a few occasions where scientists (mostly physicists) show childlike excitement describing a newly encountered phenomenon that they can't fit into the current model. My grasp of contemporary physics is next to nothing, but my guess is the scientists get excited because it opens up an uncharted path. It's an opportunity to expand the envelope of knowledge.

yes, and people often laugh after a faceplant, but successfully executing a flip is better.

This is more like tripping yourself over by moving your arm through your torso and knocking your leg out of position. A failure in a sense, but you just did something nobody else knew people could do.

Well, kind of. Finding things you can’t explain (but still feel you should be able to) can be difficult, so I’d say that “skimming papers all day and walking away feeling you haven’t changed the contents of your brain much” is the bad day. But yes, actively reducing humanity’s, or even your own personal, ignorance is a much better day, of course.

This can also be an algorithm for an AI news writer.

When anything like this gets published, I always wonder "what would it look like on a 6502?"


> We can thus conclude they are uniquely necessary for the game: perhaps there is a Donkey Kong transistor or a Space Invaders transistor.

This is hilarious. Thanks for sharing this. It brightened my day.

Getting "ExpiredToken: The provided token has expired. Request signature expired at: 2021-06-11T11:10:33+00:00"


This should be a more permanent link I hope.

"Scientists are meant to know what’s going on, but in this particular case, we are deeply confused."

What? I'm a scientist, I often don’t know what’s going on! I see it more as my duty to help others understand that that is ok, and often even a good thing because that is when the fun starts. Science is a method, not a “state”. States are for religions, they “know” what’s going on because "it has been written!". Us scientists are always writing.

Heh, came here to moan about this also. I bet the scientists interviewed for this story groaned so hard when they read that line. It's like...a Hollywood blockbuster idea of "scientist"

Most religions would say much the same, in fact — if you read the “experts”. So it’s very similar: a laity who believe everything is understood and a priesthood in awe at the mystery.

`knowledge blinds`

This may be better expressed as: "When scientists publish research, they usually have some hypothesis (or hypotheses) to explain it, or maybe a theory they were hoping to prove." But in this unusual case, they admit they are confused.

No, it's an admission by the journalist that he believed scientists were like modern priests, and an expression of his surprise when he realized that science destroys comfortable certainty and instead provides more questions.

It's fine for many people, but some, like this writer, still can't shake the idea that the world must have omniscient oracles.

In fact confusion is good. It shows honesty, and is way more exciting to the scientific community.

I think, however, that the intersection of pop culture and science is what makes us queasy about scientific confusion. Often, while we don't know some things completely, we know them quite well enough (anthropogenic global warming from carbon emissions, lung cancer from cigarette smoking, etc.).

However, if there is scientific uncertainty, it often becomes an invitation for quacks and charlatans, which is why scientists are sometimes so hesitant to speculate in public.

Scientists do not hope to prove theories (except in mathematics, where they have axioms, so they can prove things, sometimes).

Scientists may propose a novel model. Then they check whether the model predicts reality better than the old model within some boundaries. Rinse and repeat.

Although a model will probably seem to be "explaining" reality, this is mostly misleading and by all means unnecessary. Pop-sci will explain anything to anyone anyway, they can even explain Hawking radiation without showing even one equation.

Also, how can it be surprising that a brain-related phenomenon is not well understood?

I mean, I can think of a few brain-related questions with no good answers:

- What is consciousness and how does a brain become conscious?

- How and why is one specific consciousness attached to one specific brain (& body)? Why am I controlling my body and not yours?

I agree with your sentiment in general. However, I will note that the second question only makes sense for non-biological, dualistic notions of consciousness. If consciousness is taken to be a function or phenomenon happening in my brain&body, then there is no question to ask. It would be like asking 'why am I digesting my food and not yours'.

> why am I digesting my food and not yours

haha :D

To take it a bit further, I am not sure if the first question makes a lot of sense either.

I think the first question does make sense. By the same analogy, asking what is digestion precisely and how the stomach digests food is perfectly sensible.

I would also note that the second question can make sense, but only if you believe consciousness is not strictly tied to the brain and body.

Why assume a consciousness requires a brain?

Why assume consciousness is necessarily linked and/or only a property of isolated individuals (vs a community, possibly multi-species)?

Do we define consciousness as awareness of (1) the general environment, (2) the self as a distinct part of that environment, or (3) the (Freudian) ego as the self?

Note that 2 and 3 especially make some pretty strong assumptions: is it possible to separate something from its environment (consider the impact of your microbiome)? And for 3, is a person who has temporarily suspended the ego (meditation, high-flow states, psychedelic drugs) conscious?

Former brain scientist speaking.

We have some evidence that our own consciousness is specially linked to our brains, because brain alterations can alter it in a way that modifying other body parts doesn't. For example, removing a foot vs. removing a portion of my brain will have different impacts on my mind.

So, it seems I'm a brain mostly (or a nervous system?), or at least it seems it's my brain that is talking to you. The question is: are my feet conscious too?

What if it looks like I'm a brain because only the brain can talk?

Regarding the consciousness of multiple organisms: we are already like a colony of many cells! So it's possible that a group of humans may share consciousness somehow (but this seems different from my own, personal consciousness).

Likewise, we use our environment as an extension of our minds, or at least, spiders do[0]. Perhaps that's what the rest of our bodies are for our brains: just part of the environment.

I took a couple of neuroscience classes.

[0] https://www.newscientist.com/article/mg24532680-900-spiders-...

Because consciousness itself degrades as the brain degrades? In more or less predictable ways, even.

Human consciousness does. But we have no idea which other things in our environment are conscious. We can't directly observe even other humans' consciousness.

Absence of evidence is evidence of absence ;)

A radio can also malfunction and stop making sounds, but that doesn't mean the transmission has stopped elsewhere.

A scientist is meant to be curious. Saying you fully understand something is the opposite.

The mainstream media suffers from representational drift as it seeks to build scientists into secular priests whose dicta must be obeyed.

This is exactly what it feels like. Worship at the altar of Science™ or else, anything else is blasphemy.

There's nuance that's often hard to capture.

There are core parts of our understanding that would take a lot of counter evidence to really change. For example, evolution. It's as close to a fact as you can possibly get.

On the flip side, there are always fringe parts of science where we are actively generating new models and theories.

The problem is distinguishing between the two. Yes, you are an idiot if you think the earth is flat. No, you aren't an idiot if you think that we don't have a full understanding of how the brain works.

Scientific "blasphemy" is making a claim with little evidence that runs counter to the current accepted and working models which have mountains of evidence behind them.

It's the "let's rewrite the whole system" mentality whenever you encounter a small bug in the software. Scientific understanding is generally a bunch of small fixes and tweaks, not a rewrite every few years.

I just want to add that yes, evolution by natural selection is a mechanism that can help us understand how the natural world came to be as it is. But not entirely: there is also sexual selection, for example, which leaves species with traits that decrease their pure chances of survival as individuals (think of peacocks dragging along that ridiculous but beautiful tail). And maybe there are more effects yet undiscovered. I know that group selection is still often a center of debate; I like David Sloan Wilson's work on it [0].

So, yes, evolution by several forms of selection is pretty well established, but by no means do we fully understand how the natural world came to be as it is today. Which is nice, if you ask me.

As you say, we are tweaking this theory.

[0] https://en.wikipedia.org/wiki/David_Sloan_Wilson

That was my first thought... if scientists knew what is going on, they would be out of a job.

I think some people confuse scientists with science teachers or science students. Scientists aren’t people who spend their time learning about what we already know, they are people trying to discover things we don’t know.

Imagine how boring the universe would be if we understood everything. I'm sure we'd adapt, but it would be tough for a while, knowing we had accumulated the totality of knowledge.

We would start bothering lesser beings, as the Q do ;)

There was a good Futurama episode about that. The Professor got ahold of a new microscope that let him see down to the smallest particle, and from that he discovered the unified theory.

He won the Nobel prize, but was tremendously unhappy. For, with no more questions to answer, what was the purpose of living?

Fry then asked a question in his savant way: why is the unified theory what it is, instead of something else?

The professor then realized there was another question to ask, one that would take decades to answer, thousands of postgrads, and flaming dump trucks of grant money. He was happy again.

“And, now that I've found all the answers, I realise that what I was living for were the questions!”

(Futurama, Reincarnation, 2nd part)

Religion is the set of laws that lets you do science in peace, and has nothing to do with how science works.

It sure seems like that has not been true historically. Religions are organizations full of people who have worldviews. When science finds evidence contradicting those worldviews, things often become quite a bit less 'peaceful' real quick.

False. Civilization and law.

To say that religion “knows” is to be fundamentally unaware of religion. The core tenet of Judeo-Christian belief systems is one of piety: being humble, accepting the fallibility of humans, and thirsting to overcome those imperfections by studying the universe to better understand the divine principles which we call “truth.”

Yeah... And how many religions would accept someone that says "I don't think we have enough evidence for the existence of god". Or "This holy text appears to have some flaws that make me question whether or not it was divinely inspired".

Religions are specifically claiming that they know a fundamental truth about existence. Their core tenets amount to knowing a truth that doesn't have supporting evidence and cannot be falsified.

We are talking about religion, not religious institutions. According to the Bible, these are not the same. That's another key theme of religion that atheists seem oblivious to. The important point here is to understand that all powerful institutional hierarchies can and will become corrupt. Religion is meant to bypass corrupt religious institutions by design; there's a higher authority.

How do you know the bible should be used as a source of truth? Do you actually question the veracity of biblical claims? How do you determine which parts of the bible are inspired and which aren't?

This has nothing to do with religious institutions and everything to do with a fundamental claim of the religious.

Reading your post here, I take it you believe the bible is from god or a higher authority. Do you ever question that belief?

I take the Bible and related texts to be the key document(s) upon which the religious beliefs in question are ultimately based. For believers, the Bible/Torah/Koran are of more importance than the institutions, the priesthood, etc... If you want to talk about religious beliefs you would do well to focus on the scripture.

Why are my personal beliefs relevant?

> And how many religions would accept someone that says "I don't think we have enough evidence for the existence of god"

Buddhism, for one, would not only accept that someone but would say it loud and clear on its own.

There are still fundamental tenets of Buddhism that are accepted as true without evidence. The fact that Buddhists don't believe in a deity isn't really the point of my post. My point is that, like all religions, part of Buddhism is to know something without evidence.

When someone says "I don't believe in karma, rebirth, or the teachings of Buddha", are you talking about a Buddhist? What makes a Buddhist, if not at least some belief in some of the Buddhist teachings? (Granted, the notion of the "sacred" doesn't really exist in Buddhism like it does in Abrahamic religions.)

He, good joke. Then they should kick the prayers and beliefs out and just start doing... science.

Yes, compared to “anthk”, Newton was an idiot. “He” indeed.

Argumentum ad auctoritatem. Something being said by Newton won't make it true.

No, I was making an empirical argument: most great scientists have been religious. It would not be too hard to quantify this if you scraped a few Wikipedia links.

>most great scientists have been religious

So what?

Argumentum ad populum again.

Seriously, your country needs a hard basis in logic, math and the scientific method. If not, you are doomed. Hard.

It seems to me that this phenomenon of "representational drift" fits well with the results that show that every time a memory is remembered, it is essentially deleted, and re-recorded. Given what we know about how memory and experience make use of large swaths of the same neural pathways, it would make sense that this is the case with re-experiencing sensations too, not just re-remembering them. And if that is the case, then it would make sense that it's possible for these representations to be re-recorded in a different place or in a different way. For example mixed together with other neural wiring that is correlated but wasn't there when the original representation was made (basically de-duplicating the file system). It would also suggest that either pathways are mutating, or different neural pathways are "competing" for the right to be the ones to process a given input (and if it's the latter, then it also implies that there must be some kind of fitness function "how well did this pathway process this input" that allows a new pathway to win out over a previous one).
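The "competing pathways with a fitness function" idea at the end can be sketched as a toy simulation. Everything here (the pathway names, the skill numbers, the reinforcement rate) is invented for illustration and is not a claim about actual neural mechanisms:

```python
import random

random.seed(42)

# Speculative toy model: several pathways try to process the same input,
# a noisy fitness score picks a winner, and the winner is reinforced.
# Over time the pathway handling a given stimulus can therefore change.
pathways = {"A": 0.6, "B": 0.5, "C": 0.4}   # current "skill" per pathway

def process(stimulus):
    # Each pathway's performance is its skill plus processing noise.
    scores = {p: skill + random.gauss(0, 0.2)
              for p, skill in pathways.items()}
    winner = max(scores, key=scores.get)
    pathways[winner] += 0.02                 # reinforce the winner
    return winner

history = [process("smell-of-rose") for _ in range(200)]
print("early winners:", history[:5])
print("late winners: ", history[-5:])
```

With enough noise, the "right to process" an input really does wander between pathways before one pulls ahead.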

Sounds like defragmentation at first sight. It seems likely that the storage algorithm for human memory could use periodic optimization, especially since we recontextualize memories over time.

For example, let’s say we smell a lychee for the first time in a shop, and the brain stores that memory using a given set of neurons. Later, a hawker starts selling lychees in the local train in Mumbai we commute on everyday. Now memories of the smell of lychees and the smells and experiences of Mumbai local trains become strongly linked. At that point, in a simplified model of the brain, I can imagine why the brain may want to move those memories so they are using neurons that are closer together. Defragmentation!

I am likely terribly wrong, but would love to learn in what way I am wrong :)
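The defragmentation analogy can be sketched as a toy program. Purely illustrative: the `defragment` function, the slot layout and the association sets are all invented for the example:

```python
# Toy sketch (not neuroscience): "defragmenting" associated memories so
# that items which co-occur end up in adjacent storage slots.

def defragment(slots, associations):
    """Reorder memory slots so associated items sit next to each other.

    slots: list of item labels (None = free slot).
    associations: list of frozensets naming items that belong together.
    """
    items = [s for s in slots if s is not None]
    ordered = []
    for group in associations:
        for item in items:
            if item in group and item not in ordered:
                ordered.append(item)
    # Items with no association keep their relative order at the end.
    ordered += [i for i in items if i not in ordered]
    # Pad back to the original size with free slots.
    return ordered + [None] * (len(slots) - len(ordered))

before = ["lychee-smell", None, "train-noise", None, "mumbai-commute"]
linked = [frozenset({"lychee-smell", "mumbai-commute"})]
after = defragment(before, linked)
print(after)  # the two associated memories are now adjacent
```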

There is I believe also a theory that suggests that the brain actively rewrites and combines/compresses memories to make them more useful. That also fits in well with this defragmentation concept.

I've had a hypothesis for a while that the brain performs compacting garbage collection while it sleeps at night. While it's probably not that simple, seeing that various sibling posters are hinting at the same notion is interesting.

I had a similar thought, albeit dumbed down just that neurons represent RAM locations, and memory locations can change but the data can be identical to data that was stored in other locations previously...

Reminds me of how, if the speech center of the brain is taken out suddenly, you permanently lose the ability to speak, but if a slow-growing tumor takes out the same region, the brain has enough time to slowly shift that function over to peripheral areas.

As an aside, I wonder if the drift seen in neurons represents thought. Perhaps drift has to occur for the brain to continually think and self improve, even with limited inputs. I wonder how different drift rates in regions of the brain relate to things like better long term memory or creativity.

It’s good to study the piriform cortex to understand memory formation / neuroplasticity, as (IIRC) it’s one of the most primitive “plastic” parts of the brain. In some species (e.g. fruit flies) the piriform cortex is the only part of the brain that changes at all after birth; i.e. it’s the only place where learning could possibly be going on.

So, even if it's not intimately linked with human memory, the piriform cortex may be the origin of the cellular template that later diverged into whichever other brain tissues are responsible for other types of memory.

> Scientists are meant to know what’s going on, but in this particular case, we are deeply confused.

This kinda sounds like a high school essay.

And one that doesn't seem to understand that science is the process by which we discover/uncover things we DON'T know.

I have a similar reaction as bejelentkezni – why would we expect a specific sensation to always activate the same set of neurons over time? It would seem to me that we are observing patterns in a dynamic system. Their relations to other patterns seem more relevant than the specific substrate members participating in the pattern.

While too simple, I think by analogy of a whirlpool current in a stream: we would not be surprised at all to observe that a given molecule of water participating in the whirlpool pattern at time (t) has only a (p)% chance of being in the pattern at time (t+1).
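The whirlpool analogy can be simulated in a few lines. This is a toy model with made-up numbers, only meant to show that a "pattern" can keep its identity while its membership turns over almost completely:

```python
import random

random.seed(0)

# A stable "pattern" of fixed size whose individual members are swapped
# out with probability P_LEAVE at each time step.
POOL = set(range(10_000))   # all available units (molecules / neurons)
SIZE = 100                  # how many units the pattern uses at any moment
P_LEAVE = 0.2               # chance a member leaves per time step

pattern = set(random.sample(sorted(POOL), SIZE))
original = set(pattern)

for t in range(20):
    leavers = {u for u in pattern if random.random() < P_LEAVE}
    pattern -= leavers
    # Recruit fresh units so the pattern itself keeps its size.
    recruits = random.sample(sorted(POOL - pattern), len(leavers))
    pattern |= set(recruits)

overlap = len(pattern & original) / SIZE
print(f"pattern size still {len(pattern)}, "
      f"overlap with the t=0 membership: {overlap:.0%}")
```

The pattern is always there and always the same size, yet after 20 steps it shares almost no members with its starting configuration.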

> why would we expect a specific sensation to always activate the same set of neurons over time?

For at least two reasons. First, the perception of smell is extremely stable in humans. You can immediately recall a smell you smelled as a child, and where you were and what you were doing.

And related to that, smell is a very strong association factor for memories. Meaning you much more easily remember something if it smells like before, rather than if it just sounds or looks like before.

If the neurons of this association drift all the time, the question is how those associations stick through the decades. As a programmer, you know that if you start moving associated data around in your database, it's very easy to break your data structure (dangling foreign keys, etc.). A brain isn't digital, but connected things must stay connected somehow.

I like how to a layman, every finding is so obvious, but to a scientist it's not obvious because it doesn't match the other things they know about the brain.

I mean, if you don't remember a smell, how would you notice? It's not like your brain will throw up an error message. The memory just won't come to mind.

I think the point is that there are smells that we do remember for a lifetime, which must be explained. Either these stable memories aren't mediated by the cells in piriform that Schoonover and Fink recorded, or there is a mechanism that supports drifting yet stable distributed representations.

Or sometimes you get drift, and then you don't remember a smell, and this happens all the time - which is why if we re-smell something, we don't necessarily hit the same neurons, but we also remember old smells, because those didn't drift. Or is that excluded by the experiment?

My understanding was that these experiments didn't completely rule out the possibility of a stable sub-population. Also, the olfactory bulb projects to other areas besides piriform, and those might support long-term stability. My extremely speculative unfounded conjecture is that piriform is actually a novelty encoder, and plays a role in encoding new memories in a changing environment. But, I think the data don't quite support this at this time.

Stable storage mediums also work by refreshing data over time, e.g. DRAM. I can't think of one where data must be refreshed by moving it around physically, but surely there is one.

Delay line memory.
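For the curious: delay-line memory really is storage-by-motion, and can be sketched in a few lines. A toy model, not an emulation of any historical machine:

```python
from collections import deque

# Minimal sketch of delay-line memory: bits are only "stored" by being
# kept in motion, re-injected at the input as they emerge at the output.
class DelayLine:
    def __init__(self, bits):
        self.line = deque(bits)  # bits travelling through the medium

    def tick(self):
        # The bit emerging at the far end is amplified and fed back in.
        bit = self.line.popleft()
        self.line.append(bit)
        return bit

    def read_all(self):
        # Reading means waiting one full circulation.
        return [self.tick() for _ in range(len(self.line))]

mem = DelayLine([1, 0, 1, 1, 0])
for _ in range(12):       # data survives only because it keeps moving
    mem.tick()
print(mem.read_all())     # → [1, 1, 0, 1, 0]
```

The data is intact after any number of ticks, but it is never sitting still anywhere: the "address" of a bit is just its current phase in the loop.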

It’s worth reading the linked paper on the causes/consequences of representational drift: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7385530/

What I like about this, and about existence in general: while we continue to learn new things, sometimes rapidly, sometimes slowly, there is so much we truly don't know, and it's both frightening and exciting. Constantly updating our priors.

What I don't like: so many humans think they know everything or at least position themselves that way. FYI this isn't an attack on experts.

In my experience people that call themselves “expert” think they know everything.

So no hot take there.

TLDR: Representation of the world doesn't seem to be locked to specific neurons or groups of neurons; over time, it drifts around to completely different groups of neurons.

Someone here likened it to how programs can move around in computer memory. If neurons are the "silicon of the brain" or the "PHY" layer, the biggest problem in neuroscience IMHO is that we have virtually no understanding of what is the next layer up, (i.e. the "logical layer"). What is the equivalent of program counters, CPU instructions, etc. in the brain?

The brain is a nonlinear recurrent dynamical system and we are really in the dark as to how to break the dynamics down into understandable subcomponents.

To get an idea of the complexity, this paper by Randall Beer analyzes the possible behaviours of 1 and 2 neuron circuits:

Beer, R. D. (1995). On the Dynamics of Small Continuous-Time Recurrent Neural Networks. Adaptive Behavior, 3(4), 469–509. doi:10.1177/105971239500300405

Obviously this kind of analysis doesn't scale to 10^10 neurons.
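For a feel of what such an analysis is up against, here's a minimal Euler integration of the standard CTRNN equations analysed in that line of work. The weights, biases and time constants below are arbitrary illustrations, not parameters from Beer's paper:

```python
import math

# CTRNN dynamics: tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j)

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(y, w, theta, tau, dt=0.01):
    """One forward-Euler step of the 2-neuron CTRNN."""
    out = [sigma(y[j] + theta[j]) for j in range(2)]
    dy = [(-y[i] + sum(w[j][i] * out[j] for j in range(2))) / tau[i]
          for i in range(2)]
    return [y[i] + dt * dy[i] for i in range(2)]

w = [[4.5, -2.0],      # w[j][i]: weight from neuron j to neuron i
     [2.0, 4.5]]       # self-excitation plus cross-coupling
theta = [-2.25, -2.25]
tau = [1.0, 1.0]

y = [0.1, -0.1]
for _ in range(5000):  # integrate for 50 time units
    y = step(y, w, theta, tau)
print(y)               # state after transients
```

Even this tiny circuit can have multiple fixed points or limit cycles depending on the parameters, which is why the full taxonomy of behaviours takes a paper to work out, and why nothing comparable exists for 10^10 neurons.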

My bet is that reinforcement decay and strengthening exist to promote novelty rewards. Smell 1000 roses and it's a big "meh", but identifying them becomes easier and better with strengthened pathways. This would be adaptive for the organism to:

1. seek new, potentially-massive rewards, stimulation rather than do the same thing over and over again without ever trying anything new

2. improve processing of stimuli that are more frequent

What is so confusing about this? The first time I read a book, it impacts me in a certain way. Every successive time, it hits me differently. I take different life experiences to it, and I understand it more deeply because I've read it before.

Why would a smell be any different? Each time I smell an apple, or taste wine, or examine a painting, I am a different person, and I also discover more nuance about the subject.

That your brain continually learns and evolves is not surprising at all.

What is surprising is that the location of the subset of neurons that reacts to a specific thing (say the smell of wine) changes over time. So neurons that were used to identify the smell of wine at time T, might be doing something completly unrelated 6 months later, and the task of identifying wine smell is now handled by another subset of neurons in the brain.

It is surprising, and on the surface seems inefficient, since your brain has to continually transfer knowledge from one region to another rather than just adjusting the existing region to new learnings.

we are making a big assumption here: that neurons have tasks and are responsible for X.

What if a neuron (actually a cortical column) could learn thousands of Xs and fire only when specific environmental conditions are met? What if you constantly wire and rewire the connections between neurons? What if multiple areas can do something and they arrive at a conclusion through consensus? What if your brain can predict a lot of things and just discard sensory input when it matches the expected prediction?

Well, I do find it’s easier to do significant refactors by copying bits of code into a new file or project rather than trying to make changes in-place.

> If representational drift can happen in the piriform cortex, it may be common throughout the brain.

Instead of looking at individual neuron response properties, look for population codes. This would be actually an interesting experiment with an artificial network, to observe how the population activation drifts over time (but remains identifiable) as the network is fed with more and more data
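A crude toy version of that idea, with a random rotation standing in for drift (everything here is invented for illustration): individual "unit" responses change across sessions, but the relative geometry that a population readout relies on is preserved.

```python
import math
import random

random.seed(1)

# Each stimulus is a point in a 2-unit "population space". Over sessions
# the whole code rotates (every unit's tuning changes), yet the angles
# between stimuli -- what a population readout can use -- stay intact.
stimuli = {"rose": (1.0, 0.0), "lemon": (0.0, 1.0), "smoke": (-1.0, 0.3)}

def rotate(v, a):
    c, s = math.cos(a), math.sin(a)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def angle(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    nu, nv = math.hypot(*u), math.hypot(*v)
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

before = angle(stimuli["rose"], stimuli["lemon"])
for session in range(30):                 # weeks of drift
    a = random.uniform(-0.3, 0.3)         # small random drift per session
    stimuli = {k: rotate(v, a) for k, v in stimuli.items()}
after = angle(stimuli["rose"], stimuli["lemon"])

print(f"rose/lemon angle: {before:.3f} -> {after:.3f} (geometry preserved)")
```

Single-unit tuning curves look unstable here, while the population-level code is perfectly identifiable, which is roughly the experiment being proposed.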

Isn't Neuralink (and other brain interfaces, I assume) also based on the idea that you can pinpoint which neurons are responsible for each function in the brain? If I've read the article correctly, it seems to suggest that they're a moving target. How can permanent implants interface with a moving target?

Might the entire Neuralink concept be based on a flawed assumption?

Perhaps the brain will naturally adjust to Neuralink, and similar technology, via a feedback loop that would tend to keep the Neuralink-monitored neurons more static?

> Neuroscientists have discovered a phenomenon that they can’t explain

Is this a trend for weird, meaningless headlines?

> “Scientists are meant to know what’s going on, but in this particular case, we are deeply confused.”

Add to that the subheading. The combination made me back out straight away in protest.

It's clickbait, nothing more.

Imagine if we discover most of the brain's functionality isn't in the brain or any other identifiable part of the body. Rather the brain just connects to something else, that's outside the known timespace.

I find the whole spin of this article a little odd. Isn't it enough to talk about how interesting representational drift is without spinning fairy tales about science?

I would really like to find a way to incentivize journalists to stop doing that kind of thing. Artfully misleading people shouldn't be seen as acceptable in factual journalism. This should have been rejected and rewritten.

> They needed to develop surgical techniques for implanting electrodes into a mouse’s brain and, crucially, keeping them in place for many weeks.

Poor poor mice. Vivisection is done for the 'greater good', but not for the greater good of the millions (billions?) of critters who have suffered it. Yes I know that modern physiology is premised on it as method, the knowledge of organisms as they live not as they are dead, but what a toll.

Reminds me of the joke I heard the other day:

Q: Which is heavier, 100kg of lead or 100kg of feathers?

A: The feathers, as you have to carry around what you did to those poor birds...

Someone has found a long post title that doesn't spoil anything from the article's content, so HN users have to click to understand what it's about :/

The real question here is: what insights do computing professionals, who work with neural networks or parallel distributed processing as their day job, bring to the table?

The sort of reasoning that neuroscientists are struggling with is the very same kind of reasoning that computer scientists and practitioners use when thinking about ANNs/PDP and deep learning.

"Stupid article tries again to make fun of smart people"

THAT would be a great title. They are trying to sell this idea all the time.

They're not trying to make fun of smart people; they're basically trying to Shyamalan the research, so every next paragraph wows you with its unexpected plot twists.

There are theories [1] that part of the brain's function might work at the quantum level. If true, we probably won't be able to really understand what happens by measuring it this way...

[1] https://en.wikipedia.org/wiki/Quantum_mind

Classical computers also do things "at the quantum level"; CPU gates rely on quantum effects, probably the HDD does too, etc. But it doesn't prevent understanding because that computer part implements a simpler interface.

The brain probably isn't a quantum computer, or else we'd be able to factor integers quickly in our heads.

Not necessarily.

According to philosopher Paavo Pylkkänen, Bohm's suggestion of the quantum mind "leads naturally to the assumption that the physical correlate of the logical thinking process is at the classically describable level of the brain, while the basic thinking process is at the quantum-theoretically describable level". [1]

Factoring integers is a logical operation (thus not performed at the quantum level). But an operation like identifying an object or a smell (which is what the article here is about) could be performed at a deeper level using quantum mechanics.

[1] http://philpapers.org/archive/PYLCQA.1.pdf

Hmm… it seems like that's only true if it doesn't actually mean anything impressive. Your neurons or nose obviously can rely on quantum effects in the same way a modern semiconductor transistor process does. But that doesn't imply anything huge like "consciousness can't be emulated on a classical computer."

there is also the almost-definitely-true idea that the mind is an emergent property of the body. We can't understand emergent phenomena using reductive approaches, in the same way that we can't understand quantum phenomena using classical approaches.

What if neurons are just like storage spots? RAM.

I can move information from one memory location to another but the info can be identical.

That's the question put forward in the article itself, just not framed like yours. If the neurons are just like storage spots, then there has to be some mechanism we don't know yet that does that transfer without losing the information (in the case of RAM, we give the instruction to transfer). That not-yet-known thing is the main mystery here, assuming the neurons really are just storage spots; they could be something entirely different as well.
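The RAM analogy can be made concrete with a toy indirection table standing in for that unknown mechanism. All names here are invented for the sketch:

```python
# Toy sketch of the "moving RAM" idea: data relocates to new addresses,
# and an indirection table plays the role of the hypothetical mechanism
# that keeps references valid through the move.

memory = {}    # address -> data
handles = {}   # stable name -> current address

def store(name, addr, data):
    memory[addr] = data
    handles[name] = addr

def relocate(name, new_addr):
    old = handles[name]
    memory[new_addr] = memory.pop(old)   # move the bytes
    handles[name] = new_addr             # update the indirection

def recall(name):
    return memory[handles[name]]

store("smell-of-wine", 0x10, "tannic, fruity")
before = recall("smell-of-wine")
relocate("smell-of-wine", 0x42)          # the representation drifts elsewhere
after = recall("smell-of-wine")
print(before == after, hex(handles["smell-of-wine"]))
```

The puzzle the article raises maps onto the question of what, in the brain, plays the role of `handles`: drift without such an indirection layer would mean dangling references.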

I'm working on this at the moment; Reconsolidation should work, but requires that representations/memories be periodically re-activated so that the neurons can re-learn how to encode them. Still very preliminary: https://www.biorxiv.org/content/10.1101/2021.03.08.433413v1....

The title reeks of scientific ignorance. Science is all about investigating unexplained things...

Your comment ironically reeks of scientific ignorance itself. Science is as much about confirmation of prior observations and testing of established theories (and in fact more so by sheer volume of studies) as it is investigation of unexplained phenomena.

I spent 2 decades working as a scientist publishing and applying for grant money. If publishing or having a paid position matters to you as a scientist, then you have to do something novel. It doesn't preclude doing what you said - but that certainly isn't enough.

As titles go, this one is pretty special. It's my understanding that neuroscientists themselves claim to understand next to nothing about phenomena in the brain. Meanwhile, large amounts of what they think they might understand carry such large error bars that it won't surprise them to have it all overturned in the future. They claim to have profound second-level ignorance: not even knowing most of what it is that they don't know yet.

That's kind of what makes the field so exciting, what we don't know is almost all of it! Plus it's not some peripheral thing, it's right there at the core of everything we are individually and collectively. Sadly it's had more than its fair share of charlatans and credulous followers. Sure, the Atlantic, nobody expects them to be any good, right? Well we probably shouldn't but sometimes we still do.

I agree with the gist, though I think you may be overstating things a bit. Also, neuroscience is a pretty wide field.

Wouldn't this be similar to how the weights for individual terms in an interpolation polynomial can change when adding a new point, while it still (to some approximation at least) captures all the old points?
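That analogy is easy to check numerically. A sketch using NumPy, with arbitrary data points:

```python
import numpy as np

# Fit an exact polynomial through some points, then add one more point.
# The individual coefficients (the "weights") change completely, yet all
# the old points are still reproduced by the new polynomial.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, 2.0, 0.0, 5.0])

coef_old = np.polyfit(xs, ys, deg=len(xs) - 1)   # exact interpolation

xs2 = np.append(xs, 4.0)                         # one new observation
ys2 = np.append(ys, -3.0)
coef_new = np.polyfit(xs2, ys2, deg=len(xs2) - 1)

print("old coefficients:", np.round(coef_old, 3))
print("new coefficients:", np.round(coef_new, 3))
# The "substrate" (coefficients) changed, but the old data survive:
print(np.allclose(np.polyval(coef_new, xs), ys))
```

Here the old points are reproduced exactly rather than approximately, since we interpolate; with a lower-degree least-squares fit you'd get the "to some approximation" version.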

That little line is so toxic and terrible.

If my boss came round and said, "This is adenozine, he's supposed to know what's going on," I'd be pissed. That's just dumping blame.

Prediction: neurons are more like CFD than ANN.

ANNs are already producing phenomena that match our perception. The key difference is that an ANN is trained, then training stops and it's used, while the brain has to keep learning while in use.

What is CFD?

Probably computational fluid dynamics

How do they know that it's not the neurons themselves slowly drifting position over those timescales?

Maybe it's like SHA256.

You change a single pixel in an image, for example, and you get a completely different result.
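The avalanche effect being alluded to is easy to demonstrate with Python's standard hashlib:

```python
import hashlib

# Flip one bit of the input and the SHA-256 digest changes almost everywhere.
a = bytearray(b"the smell of fresh coffee")
b = bytearray(a)
b[0] ^= 0x01                      # flip a single bit

ha = hashlib.sha256(bytes(a)).hexdigest()
hb = hashlib.sha256(bytes(b)).hexdigest()

differing = sum(c1 != c2 for c1, c2 in zip(ha, hb))
print(ha)
print(hb)
print(f"{differing}/64 hex digits differ")
```

The catch with the analogy is that a hash is deterministic: the same input always yields the same digest, whereas the drift in the article happens with the stimulus held constant.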

Could it be that learned/familiar experience is processed with a different part of the brain?

yes, it is. it adapts. firings get less strong as we get used to a stimulus, and neurons that fire together wire together. so different connections form on more important info.

Is this what happens during sleep? Maybe this is what causes dreams.

I'm (weakly) leaning in this direction as well. But, there are other proposed functions of dreams that are hard to reconcile with this. Some people think that dreams prevent overfitting and/or sample the negative gradient in a contrastive learning algorithm. Perhaps these are the same thing in the end? But I'm not sure how it all fits together.

This is one of the reasons why Neuralink wouldn't work.

It's just moving from short-term memory to long-term memory.

Doesn’t the brain rearrange itself during sleep?

Wear management and prevention?

just like in SSD ;-)

Thanks for the click bait headline :(

can someone comment tldr?


TL;DR: over time the same physical objects/processes fire up different neurons when we experience them. Nothing unexplainable to see here.
