I like how "Due to the generality of our model, our findings could help explain empirical observations of how thermodynamic costs of information transmission make inverse multiplexing energetically favorable in many real-world communication systems." makes it sound like biology is not the real world
I am a neurobiologist and still don't understand the comment. The authors acknowledge biology in the first sentence.
With respect to the claims in the full article, however, I have an issue with this observation:
> Another feature common to many biological and artificial communication systems is that they split a stream of information over multiple channels, i.e., they inverse multiplex. Often this occurs even when a single one of the channels could handle the entire communication load. For example, multiple synapses tend to connect adjacent neurons and multiple neuronal pathways tend to connect brain regions.
I don't see how a "single channel", i.e. a single synapse, would suffice as the entire information output unit of a neuron. Making multiple synaptic connections is fundamental to the way neural networks perform computations.
I think the commenter was trying to point out that when you set up an antithesis between the specific and the general and then relate only the general case to the "real world", the specific case ends up sounding fictitious. Or put another way: "real world" is an empty qualifier in this statement, unless biology is not part of the real world. This is, of course, unintentional, which is why the commenter says "makes it sound" rather than something along the lines of "claims that".
I interpreted that line to mean there are multiple synapses connecting the same two neurons, which might seem redundant. Of course that is still an oversimplification of neural networks, and biological organisms have good reasons for redundancy anyway...
Furthermore, representational drift shows that a single biological neuron's output for a given activation is not stable over time, which implies that there is greater emergent complexity than is modeled with ANNs (which produce stable outputs given fixed training, network topology, parameters, and activation functions that effectively weight the training samples, which usually also contain noise).
> The nervous system learns new associations while maintaining memories over long periods, exhibiting a balance between flexibility and stability. Recent experiments reveal that neuronal representations of learned sensorimotor tasks continually change over days and weeks, even after animals have achieved expert behavioral performance. How is learned information stored to allow consistent behavior despite ongoing changes in neuronal activity? What functions could ongoing reconfiguration serve? We highlight recent experimental evidence for such representational drift in sensorimotor systems, and discuss how this fits into a framework of distributed population codes. We identify recent theoretical work that suggests computational roles for drift and argue that the recurrent and distributed nature of sensorimotor representations permits drift while limiting disruptive effects. We propose that representational drift may create error signals between interconnected brain regions that can be used to keep neural codes consistent in the presence of continual change. These concepts suggest experimental and theoretical approaches to studying both learning and maintenance of distributed and adaptive population codes.
> [...] We examine stimulus representations from fluorescence recordings across hundreds of neurons in the visual cortex using in vivo two-photon calcium imaging and we corroborate previous studies finding that such representations change as experimental trials are repeated across days. This phenomenon has been termed “representational drift”. In this study we geometrically characterize the properties of representational drift in the primary visual cortex [...]
> The features we observe in the neural data are similar to properties of artificial neural networks where representations are updated by continual learning in the presence of dropout, i.e. a random masking of nodes/weights, but not other types of noise. Therefore, we conclude that a potential reason for the representational drift in biological networks is driven by an underlying dropout-like noise while continuously learning and that such a mechanism may be computational advantageous for the brain in the same way it is for artificial neural networks, e.g. preventing overfitting.
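For concreteness, here is a minimal sketch (my own toy, not the authors' model) of what "continual learning in the presence of dropout" can look like: a small network is re-trained on the same fixed task across several "sessions", and the dropout noise keeps nudging the hidden representation around even after task accuracy has stabilized. The sizes, learning rate, and the drift metric (one minus the correlation of hidden activations across sessions) are arbitrary choices for illustration.

```python
# Toy sketch (not from the paper): continual re-training on a fixed task with
# dropout noise keeps changing the hidden representation while accuracy stays put.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, lr = 20, 50, 0.05
X = rng.normal(size=(200, n_in))              # fixed stimuli
y = (X[:, 0] > 0).astype(float)               # fixed task: read out one feature

W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
w2 = rng.normal(scale=0.1, size=n_hidden)

def forward(X, W1, w2, mask):
    a = np.tanh(X @ W1)                       # hidden activations
    h = a * mask                              # dropout applied to hidden units
    return a, h, 1.0 / (1.0 + np.exp(-(h @ w2)))

prev_h = None
for session in range(5):                      # "days" of re-training the same task
    for step in range(500):
        mask = (rng.random(n_hidden) > 0.5) / 0.5    # dropout p = 0.5, rescaled
        a, h, p = forward(X, W1, w2, mask)
        err = p - y                           # gradient of the logistic loss
        g2 = h.T @ err / len(X)
        g1 = X.T @ (np.outer(err, w2 * mask) * (1 - a**2)) / len(X)
        w2 -= lr * g2
        W1 -= lr * g1
    a, h, p = forward(X, W1, w2, np.ones(n_hidden))  # evaluate without dropout
    acc = np.mean((p > 0.5) == y)
    drift = np.nan if prev_h is None else 1 - np.corrcoef(h.ravel(), prev_h.ravel())[0, 1]
    prev_h = h.copy()
    print(f"session {session}: accuracy {acc:.2f}, representation change {drift:.3f}")
```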
> [...] And when the scientists trained software called a "decoder" to guess which direction the animals were holding in mind, the decoder was relatively better able to do it based on the electric fields than based on the neural activity.
> This is not to say that the variations among individual neurons is meaningless noise, Miller said. The thoughts and sensations of people and animals experience, even as they repeat the same tasks, can change minute by minute, leading to different neurons behaving differently than they just did. The important thing for the sake of accomplishing the memory task is that the overall field remains consistent in its representation.
> "This stuff that we call representational drift or noise may be real computations the brain is doing, but the point is that at that next level up of electric fields, you can get rid of that drift and just have the signal," Miller said.
> The researchers hypothesize that the field even appears to be a means the brain can employ to sculpt information flow to ensure the desired result. By imposing that a particular field emerge, it directs the activity of the participating neurons.
> Indeed, that's one of the next questions the scientists are investigating: Could electric fields be a means of controlling neurons?
> Models in theoretical neuroscience are aimed at capturing the essential features of the biological system at multiple spatial-temporal scales, from membrane currents, and chemical coupling via network oscillations, columnar and topographic architecture, nuclei, all the way up to psychological faculties like memory, learning and behavior. These computational models frame hypotheses that can be directly tested by biological or psychological experiments.
> [...] Here we present the first study that rigorously combines such a framework, stochastic thermodynamics, with Shannon information theory. We develop a minimal model that captures the fundamental features common to a wide variety of communication systems. We find that the thermodynamic cost in this model is a convex function of the channel capacity, the canonical measure of the communication capability of a channel. We also find that this function is not always monotonic, in contrast to previous results not derived from first principles physics. These results clarify when and how to split a single communication stream across multiple channels. In particular, we present Pareto fronts that reveal the trade-off between thermodynamic costs and channel capacity when inverse multiplexing. Due to the generality of our model, our findings could help explain empirical observations of how thermodynamic costs of information transmission make inverse multiplexing energetically favorable in many real-world communication systems.
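To make the splitting argument concrete, here is a toy calculation (mine, with a made-up cost function, not the one derived in the paper): if the per-channel thermodynamic cost c(C) is convex in the capacity C with c(0) = 0, Jensen's inequality gives k·c(C/k) ≤ c(C), so spreading a required capacity over k parallel channels never costs more. The quadratic exponent below is purely illustrative; the paper derives its cost function from stochastic thermodynamics and also shows it need not be monotonic.

```python
# Toy illustration (hypothetical convex cost, not the paper's derived one):
# with c convex and c(0) = 0, k * c(C/k) <= c(C), so inverse multiplexing a
# required capacity C_total across k channels never increases the total cost.
def cost(capacity, alpha=2.0):
    return capacity ** alpha   # made-up convex cost per channel

C_total = 1.0
for k in (1, 2, 4, 8):
    total = k * cost(C_total / k)
    print(f"{k} channel(s) at capacity {C_total / k:.3f} each -> total cost {total:.3f}")
```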
> In quantum information theory, quantum discord is a measure of nonclassical correlations between two subsystems of a quantum system. It includes correlations that are due to quantum physical effects but do not necessarily involve quantum entanglement.
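For reference, the usual definition (standard textbook form, not quoted from the article): for a bipartite state ρ_AB, the discord with respect to measurements on A is the gap between the total mutual information and the classical correlation extractable by measuring A:

```latex
I(A\!:\!B) = S(\rho_A) + S(\rho_B) - S(\rho_{AB}), \qquad
J_A(A\!:\!B) = \max_{\{\Pi_a\}} \Big[ S(\rho_B) - \sum_a p_a\, S(\rho_{B|a}) \Big], \qquad
D_A(\rho_{AB}) = I(A\!:\!B) - J_A(A\!:\!B).
```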
Isn't there more entropy if we consider all possible nonlocal relations between bits? Or is the entropy metric independent of redundant coding schemes between points in spacetime?