> Neurons might contain something incredible within them.
but the HN title right now is
> Neurons might contain something within them
I guess there's some intensifier filter that removes "AMAZING" and "INCREDIBLE!" and "10 REASONS YOU'LL BE SHOCKED". But I like to imagine that people previously thought that neurons were entirely hollow.
I'm rather fond of what the software came up with here. The title it was given is so bad that I'm having trouble thinking of a better edit myself, and the main article title is even worse. As for the article body:
Surely the blackout on this ferret-experiment cannot last forever.
I think we can maybe wait for a more neutral source?
Edit: sigh. To quote the guidelines:
> Please use the original title, unless it is misleading or linkbait; don't editorialize.
Not the stuff you already know they contain, like cytoplasm and a nucleus and other cell materials.
Gallistel, Hesslow (PI of the ferret study) and colleagues constitute the second, relatively smaller group of neuroscientists, who believe synapses are only an "effect" one sees as a result of learning. The true mechanism is hidden in the nucleus, the cell membrane, or somewhere else inside the cell. So far this group has very few substantially convincing experiments and more hypotheses. The ferret study, published in 2015, is one such experiment in this direction. I am not aware of any more data to prove any of the hypotheses.
But of course even the inherent mechanisms that guide synapse formation and alteration are in the end guided by proteins "inside" the neuron. To me it seems these two groups are looking at the same idea at different steps of the memory learning pipeline.
- X=neural network geometric configuration,
- Y=individual synapses due to the various neurotransmitters,
- Z=cytoskeleton (already suspected to play a role)
It seems reasonable to suspect that there's a lot of duplication in the human brain - as demonstrated by the periodic articles about people who function normally with brains compressed/shrunk to a tiny percentage of normal size. And I guess that duplication comes from the multiple systems all able to provide a similar function.
For example, man with 10% normal brain size: https://www.newscientist.com/article/dn12301-man-with-tiny-b...
There is significant evidence for temporal memory maintained in the hippocampus and entorhinal cortex, just as grid cells in the entorhinal cortex are used as a coordinate system for spatial and abstract navigation, while time cells facilitate "navigation" in the temporal dimension.
- Hippocampal "time cells" bridge the gap in memory for discontiguous events: https://pubmed.ncbi.nlm.nih.gov/21867888/
- Basic anatomy of human memory: https://courses.lumenlearning.com/wsu-sandbox/chapter/parts-...
- Time cells in the human hippocampus and entorhinal cortex support episodic memory: https://www.pnas.org/content/117/45/28463
- Grid cells: http://www.scholarpedia.org/article/Grid_cells
- Time (and space) in the hippocampus: https://www.sciencedirect.com/science/article/pii/S235215461...
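As a toy illustration of the grid-cell idea (my own sketch, not taken from any of the papers above): hexagonal firing fields are often modeled as the rectified sum of three cosine plane waves whose wave vectors are 60 degrees apart. The spacing parameter and normalization here are made up for illustration.

```python
import numpy as np

# Toy grid-cell firing model: a hexagonal firing map emerges from summing
# three cosine plane waves whose wave vectors are 60 degrees apart.
def grid_cell_rate(x, y, spacing=1.0):
    k = 2 * np.pi / spacing
    angles = [0, np.pi / 3, 2 * np.pi / 3]   # three axes, 60 degrees apart
    total = sum(np.cos(k * (x * np.cos(a) + y * np.sin(a))) for a in angles)
    return max(0.0, total / 3.0)             # rectify to a non-negative rate

# Firing peaks at the vertices of a hexagonal lattice; the origin is one.
print(round(grid_cell_rate(0.0, 0.0), 3))  # -> 1.0
```

Evaluating this over a 2-D arena reproduces the characteristic hexagonal firing map that grid-cell recordings show.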
There is also the possibility that RNA is used for memory.
It seems plausible to me that the smaller the detail scale one goes to, the more overall computation is limited by data-movement bandwidth rather than by discrete computation operations. Intel has a couple of extra CPUs just sitting on top of their regular CPUs (the Management Engine and all that). GPU computation is often bandwidth-limited as well. So basically, perhaps it's possible to do storage and computation in multiple fashions in the brain, and nature being nature does all of them, because nothing prevents it. The result is computation that might be bandwidth-efficient but only partly CPU-like, because there isn't bandwidth to do things any other way.
E.g. brain can run on glucose or on ketones; muscles can run on oxygen producing CO₂ or without it producing lactic acid, etc. The body has a number of alternative mechanisms, this may be another such pair.
So it's possible that both mechanisms occur simultaneously; there's just not enough evidence to clearly understand them (yet).
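The bandwidth-vs-compute point above can be put into a back-of-envelope roofline check (the peak numbers here are invented for illustration, not any specific chip): an operation is bandwidth-bound when its arithmetic intensity, in FLOPs per byte moved, falls below peak compute divided by peak bandwidth.

```python
# Roofline sketch with made-up hardware numbers (10 TFLOP/s, 500 GB/s).
def bound_by(flops, bytes_moved, peak_flops=10e12, peak_bw=500e9):
    intensity = flops / bytes_moved      # FLOPs per byte of traffic
    ridge = peak_flops / peak_bw         # ridge point of the roofline
    return "compute" if intensity > ridge else "bandwidth"

# Elementwise float32 vector add: 1 FLOP per 12 bytes (two reads, one write).
print(bound_by(flops=1, bytes_moved=12))  # -> bandwidth
```

With these numbers the ridge point is 20 FLOPs/byte, so anything as traffic-heavy as a vector add is hopelessly bandwidth-bound, which is the GPU situation the comment alludes to.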
In higher dimensions our common thoughts are aggregated into massive socially-shared hyperbrains, each of which is segregated from the others based on both cultural and genetic similarity between specimens (mostly of the same species). Hyperbrains form a trie predicated on the commonality of our thoughts, and the closer you move to its root, the closer you get to our biological origins, until eventually all species merge at the root.
Our individual biological brains then act only as secondary devices, similar to how the L1/L2/L3 CPU caches relate to the main memory of a computer. We use our brains to think only when the communication bridge is unstable, or when our experiences cannot be matched into compatibly vibrating wavelets in the hyperbrain. Our brains are also anchoring devices of the self. While the hyperbrain encodes the shared memories and experiences of entire groups of specimens, our brain is a "diff" between the personal and the communal.
OK, anyway, I had fun making some stuff up, it's not like I understand anything this article says.
Entities would be able to seamlessly "context switch" between the different scales of shared memories/knowledge/mental models, from universal to individual. Hopefully with some rigorous isolation so that only you can ever access your individual mind. Maybe also some vandalism mitigations for those who might want to mess with the universal Neurapedia. Plus some kind of hardware switch that can fully cut the connection at a moment's notice, in the event of some neural 0-day or DoS.
In practice it might be infeasible to make it both seamless and safe from adversarial risks, but people said the same of Wikipedia. Though, the consequences of a manipulated Wikipedia article are probably a little different from the consequences of a manipulated neural interface/network.
Tack on a smattering of Orch-OR (Penrose and Hameroff) and you have something like this, only there are actual papers about it.
I do have to wonder if Hameroff isn't coming at the encoding part from the other direction. His work showed quantum-encoding structures in the microtubules inside neurons which could encode quite a bit of information, even to the point of functioning like little Turing machines. (Hameroff encoded cellular automata on a simulacrum of these structures.) Maybe that "string" the article keeps talking about leads there.
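For flavor, here's a minimal 1-D elementary cellular automaton (Rule 110, a classic Turing-complete rule). This is only a generic CA sketch of the kind of computation being alluded to, not Hameroff's microtubule model.

```python
# Rule 110 elementary cellular automaton on a ring of cells.
# Each new cell is the bit of the rule number indexed by its 3-cell
# neighborhood read as a binary number (left*4 + center*2 + right).
def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 10 + [1] + [0] * 10     # single live cell in the middle
for _ in range(5):
    row = step(row)
print(sum(row))                      # number of live cells after 5 steps
```

Despite the three-line update rule, Rule 110 supports universal computation, which is why "little Turing machines" is not as wild a phrase as it sounds.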
At any rate, when an article uses slashes so frequently like that, you can tell it is more of a rant.
I guess I'm gullible.
Are you sure (y)our hyperbrain didn't make you write this?
On a serious note: I wonder if there is a generator somewhere for this kind of BS. I have bookmarks for several (corpo lingo, resumes, progressive newspeak, etc.), but not for this new-age style.
Honestly, while I felt his idea was overreaching, the general idea that quantum noise can affect such a chaotic system was not entirely unbelievable to me.
William James, father of American psychology, tells of meeting an old lady who told him the Earth rested on the back of a huge turtle.
"But, my dear lady," Professor James asked, as politely as possible, "what holds up the turtle?"
"It's no use, Professor," said the old lady. "It's turtles, turtles, turtles, all the way!"
I've also heard that the anecdote (mentioned in TPR) didn't involve William James, and that it was Bertrand Russell talking to the lady.
Where does Aristotle actually say that time is known as an object of the senses? I assure you he never says this. For Aristotle, time is the measure of change with respect to succession. Time is not a "thing"!
Tabula rasa doesn't mean that mental faculties don't exist. That's not what the maxim "nothing is in the mind that was not first in the senses" means.
The interviewee is silly in his hostility toward Aristotle, especially given his basic lack of understanding.
Edit: also see his article on this from 2015 "Here's Why Most Neuroscientists Are Wrong About the Brain" https://nautil.us/blog/heres-why-most-neuroscientists-are-wr...
>With one caveat: whatever it looks like, it has to be apparent that its form gives it the functional properties of the polypeptides (the class of molecules that DNA belongs to).
But DNA isn't a polypeptide.
Neurons have to have something in them. Even empty space has quantum fluctuations!
A single neuron is a very complex beast. Neurons seem to be more similar to multi-layer perceptrons with multiple nonlinear steps. When a neuron adapts, that's memory at the single-neuron level.
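To make the MLP comparison concrete, here's a cartoon (entirely my own toy, with made-up random weights and thresholds) of a neuron whose dendritic branches each apply their own nonlinearity before the soma sums and thresholds the result:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "neuron as small MLP": inputs are split across dendritic branches,
# each branch applies its own weights and a ReLU-like nonlinearity, and
# the soma sums the branch outputs and thresholds them into a spike.
def dendritic_neuron(x, n_branches=4):
    branches = np.split(x, n_branches)
    weights = [rng.normal(size=b.shape) for b in branches]
    branch_out = [max(0.0, float(w @ b)) for w, b in zip(weights, branches)]
    soma = sum(branch_out)               # soma sums the branch outputs
    return 1.0 if soma > 1.0 else 0.0    # threshold -> spike / no spike

x = rng.normal(size=16)
print(dendritic_neuron(x))               # 0.0 or 1.0
```

The nesting of nonlinearities (branch, then soma) is what makes a single cell behave like a two-layer network rather than a single weighted sum.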
There's this big neuron called a Purkinje cell with a large dendritic tree (its input side).
Parallel fibers (axons from granule cells) and climbing fibers (CF, axons from the inferior olive) synapse onto the Purkinje cell's dendritic tree.
To make a long story short, one type carries a warning signal, which I will call the Start signal, and the other carries an unconditional reflex signal (induced such that it elicits a blink), which I will call the Stop signal.
The training involves a warning start stimulus, then a fixed pause, and then the annoying stop stimulus. After training, it is observed that the neuron statistically responds after a pause commensurate with the trained pause.
Okay, perhaps a single-neuron programmable variable delay is not easy to explain with a conventional synaptically weighted neuron. (At this point I remark to myself that a conventional synaptically weighted neuron might still do that in theory if the input axon is long and winding, used as a delay-line memory, but then it would need to have lots of synapses with the same Purkinje cell, so I reject this brainfart as not biologically plausible.)
Then training was resumed with a second interval, and after this secondary training the following was observed: after receiving a start signal, the Purkinje cell statistically responds with 2 (!) responses corresponding to the trained pauses! To them this is the nail in the coffin for the combinational synaptically weighted neuron model. But this just makes me remember my original tap-filter layout: the same single input axon could function as a delay line with multiple synaptic taps activated at different times.
So I look up the terms parallel fiber and climbing fiber... and guess what: unless the brain is still gestating or has suffered local damage, each Purkinje cell has exactly 1 climbing fiber axon associated with it. And this climbing fiber is long and winding and makes loads of synaptic connections. So the remaining explanation, the delay-line tap-filter brainfart (which I viewed as me clutching at straws to save the synaptically weighted neuron model), turns out to be nearly exactly the layout of the climbing fiber on the Purkinje cell. You can view this as a programmable tap filter, or as an impulse-response convolver.
So instead of deposing the synaptically weighted neuron perspective, I view this as strengthening the usual interpretation! Evolution forced the synaptically weighted neuron model to contort itself into an awkward tapped delay-line filter!
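Here's a toy tapped-delay-line reading of that layout (my own sketch, not from the paper): one input spike travels down the single climbing fiber, and synaptic "taps" at learned positions fire the Purkinje response after the corresponding conduction delays, so one learned interval means one tap and two learned intervals mean two responses.

```python
# Tapped delay line: one input spike -> one response per learned tap delay.
# Delays stand in for conduction time from the spike to each synaptic tap.
def responses(spike_time, tap_delays):
    return sorted(spike_time + d for d in tap_delays)

# After training on two intervals (200 ms and 350 ms), a start signal at
# t=0 yields two responses, matching the dual-pause observation above.
print(responses(0.0, {0.200, 0.350}))  # -> [0.2, 0.35]
```

The point of the sketch is only that a single axon with multiple taps reproduces the "two trained pauses, two responses" observation without abandoning synaptic weighting.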
Addressing will be by fuzzy hash of recently activated records.
I can see the connection thing. It's like
circle -> ball ---\
                   --> lightbulb
white -> light ---/
Is someone saying that maybe a single neuron can "store" something like "white"?
The huge cell is a Purkinje cell. I don’t remember much about neuroscience, so I hope someone else can elaborate.
Later on the interview suggests that every single neuron could store megabytes of information, but this seems more like conjecture to me.