The fundamental thermodynamic costs of communication (arxiv.org)
83 points by g0xA52A2A on Feb 12, 2023 | 15 comments



I like how "Due to the generality of our model, our findings could help explain empirical observations of how thermodynamic costs of information transmission make inverse multiplexing energetically favorable in many real-world communication systems." makes it sound like biology is not the real world.


Took me a while to understand this comment. I'm not a biologist.

A nerve is a whole big bundle of axons [1]. It looks like each bundle carries hundreds or thousands of connections down to a muscle.

[1] http://www.medicine.mcgill.ca/physio/vlab/other_exps/CAP/ner...


I am a neurobiologist and still don't understand the comment. The authors acknowledge biology in the first sentence.

With respect to the claims in the full article, however, I have an issue with this observation...

> Another feature common to many biological and artificial communication systems is that they split a stream of information over multiple channels, i.e., they inverse multiplex. Often this occurs even when a single one of the channels could handle the entire communication load. For example, multiple synapses tend to connect adjacent neurons and multiple neuronal pathways tend to connect brain regions.

I don't see how a "single channel", i.e. a single synapse, would suffice as the entire information output unit of a neuron. Making multiple synaptic connections is fundamental to the way neural networks perform computations.


I think the commenter was trying to point out that the phrasing makes the specific case sound somehow fictitious: it sets up an antithesis between the specific and the general, and then relates the general case to the "real world". Or, put another way, "real world" is an empty adjective in this statement unless biology is not part of the real world. This is, of course, unintentional, which is why the commenter says "makes it sound" rather than something along the lines of "claims that".


I interpreted that line to mean there are multiple synapses connecting the same two neurons, which might seem to be redundant. Of course that is still an oversimplification of neural networks, and biological organisms have good reasons for redundancy anyway...
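
As a toy illustration of why that redundancy pays off (a minimal sketch of my own in Python, not anything from the paper): model each synapse as a binary symmetric channel that flips a spike/no-spike bit with some probability, send the same stream over several of them, and decode by majority vote.

    import numpy as np

    rng = np.random.default_rng(42)

    def transmit(bits, p_flip, n_channels):
        # Send one bit stream over n independent binary symmetric channels
        # (a crude stand-in for parallel synapses); decode by majority vote.
        noisy = np.array([bits ^ (rng.random(bits.size) < p_flip)
                          for _ in range(n_channels)])
        return (2 * noisy.sum(axis=0) > n_channels).astype(int)

    bits = rng.integers(0, 2, size=100_000)
    for n in (1, 3, 7):  # odd channel counts avoid ties
        err = (transmit(bits, 0.2, n) != bits).mean()
        print(n, "channel(s): error rate ~", round(float(err), 3))
    # 1 channel ~0.2, 3 channels ~0.104, 7 channels ~0.033

Each extra channel buys reliability at an extra energy cost, which is exactly the kind of trade-off the paper's Pareto fronts are meant to capture.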


I suppose that's one way to interpret it. And this does happen in places where it makes biological sense. See the calyx of Held.

https://en.wikipedia.org/wiki/Calyx_of_Held?wprov=sfti1


More like: I have time on my hands to deal with 3 different teams, but do I want to? No thanks, too much energy drain maintaining comms.


Furthermore, studies of representational drift show that a single biological neuron's output for a given activation is not stable over time, which implies that there is greater emergent complexity than is modeled with ANNs (which have stable outputs given the training data, network topology and parameters, and activation functions that effectively weight the training samples, which usually also contain noise).

/? Representational drift brain https://www.google.com/search?q=representational+drift+brain ...

"Causes and consequences of representational drift" (2019) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7385530/

> The nervous system learns new associations while maintaining memories over long periods, exhibiting a balance between flexibility and stability. Recent experiments reveal that neuronal representations of learned sensorimotor tasks continually change over days and weeks, even after animals have achieved expert behavioral performance. How is learned information stored to allow consistent behavior despite ongoing changes in neuronal activity? What functions could ongoing reconfiguration serve? We highlight recent experimental evidence for such representational drift in sensorimotor systems, and discuss how this fits into a framework of distributed population codes. We identify recent theoretical work that suggests computational roles for drift and argue that the recurrent and distributed nature of sensorimotor representations permits drift while limiting disruptive effects. We propose that representational drift may create error signals between interconnected brain regions that can be used to keep neural codes consistent in the presence of continual change. These concepts suggest experimental and theoretical approaches to studying both learning and maintenance of distributed and adaptive population codes.

"The geometry of representational drift in natural and artificial neural networks" (2022) https://journals.plos.org/ploscompbiol/article?id=10.1371/jo... :

> [...] We examine stimulus representations from fluorescence recordings across hundreds of neurons in the visual cortex using in vivo two-photon calcium imaging and we corroborate previous studies finding that such representations change as experimental trials are repeated across days. This phenomenon has been termed “representational drift”. In this study we geometrically characterize the properties of representational drift in the primary visual cortex [...]

> The features we observe in the neural data are similar to properties of artificial neural networks where representations are updated by continual learning in the presence of dropout, i.e. a random masking of nodes/weights, but not other types of noise. Therefore, we conclude that a potential cause of the representational drift in biological networks is an underlying dropout-like noise while continuously learning, and that such a mechanism may be computationally advantageous for the brain in the same way it is for artificial neural networks, e.g. preventing overfitting.
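
That dropout result is easy to reproduce in a toy setting (a minimal sketch of the idea, not the paper's code): train an overcomplete linear network with dropout on the hidden units, and the hidden weights keep wandering while the output error stays low.

    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_hid = 10, 40                      # overcomplete hidden layer
    w_true = rng.normal(size=d_in)            # ground-truth linear map to learn
    W = 0.1 * rng.normal(size=(d_hid, d_in))  # hidden "tuning", i.e. the representation
    c = 0.1 * rng.normal(size=d_hid)          # readout weights
    lr, p_drop = 0.02, 0.5

    snapshots, losses = [], []
    for step in range(20_001):
        X = rng.normal(size=(64, d_in))
        y = X @ w_true
        mask = (rng.random(d_hid) > p_drop) / (1 - p_drop)  # inverted dropout
        h = (X @ W.T) * mask                  # hidden activity with units dropped
        err = h @ c - y
        losses.append((err ** 2).mean())
        g_c = 2 * h.T @ err / len(y)          # mean-squared-error gradients
        g_W = 2 * np.outer(mask * c, err @ X) / len(y)
        c -= lr * g_c
        W -= lr * g_W
        if step % 2_000 == 0:
            snapshots.append(W.copy())

    print("late loss:", np.mean(losses[-1000:]))    # settles near its floor
    print("drift between snapshots:",
          [round(float(np.linalg.norm(b - a)), 2)
           for a, b in zip(snapshots, snapshots[1:])])  # stays well above zero

Performance is carried by the population readout, so the per-unit weights are free to diffuse under the dropout noise, which is the paper's proposed mechanism in miniature.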

"Neurons are fickle: Electric fields are more reliable for information" (2022) https://www.sciencedaily.com/releases/2022/03/220311115326.h... :

> [...] And when the scientists trained software called a "decoder" to guess which direction the animals were holding in mind, the decoder was relatively better able to do it based on the electric fields than based on the neural activity.

> This is not to say that the variations among individual neurons are meaningless noise, Miller said. The thoughts and sensations people and animals experience, even as they repeat the same tasks, can change minute by minute, leading to different neurons behaving differently than they just did. The important thing for the sake of accomplishing the memory task is that the overall field remains consistent in its representation.

> "This stuff that we call representational drift or noise may be real computations the brain is doing, but the point is that at that next level up of electric fields, you can get rid of that drift and just have the signal," Miller said.

> The researchers hypothesize that the field even appears to be a means the brain can employ to sculpt information flow to ensure the desired result. By imposing that a particular field emerge, it directs the activity of the participating neurons.

> Indeed, that's one of the next questions the scientists are investigating: Could electric fields be a means of controlling neurons?

/? representational drift site:github.com https://www.google.com/search?q=representational+drift+site%...


Computational neuroscience: https://en.wikipedia.org/wiki/Computational_neuroscience :

> Models in theoretical neuroscience are aimed at capturing the essential features of the biological system at multiple spatial-temporal scales, from membrane currents, and chemical coupling via network oscillations, columnar and topographic architecture, nuclei, all the way up to psychological faculties like memory, learning and behavior. These computational models frame hypotheses that can be directly tested by biological or psychological experiments.


Are you making some sort of point, or just dumping content?


Perhaps I was too polite. The collapsed entropy (absent real-world noise per observation) of the binary relations in the brain is a useful metric.


Interesting take on being impolite: making a basic statement using overtly pretentious and opaque language?


I only meant that it would make more sense to replace 'real-world' here with something like 'human technological'.


> [...] Here we present the first study that rigorously combines such a framework, stochastic thermodynamics, with Shannon information theory. We develop a minimal model that captures the fundamental features common to a wide variety of communication systems. We find that the thermodynamic cost in this model is a convex function of the channel capacity, the canonical measure of the communication capability of a channel. We also find that this function is not always monotonic, in contrast to previous results not derived from first principles physics. These results clarify when and how to split a single communication stream across multiple channels. In particular, we present Pareto fronts that reveal the trade-off between thermodynamic costs and channel capacity when inverse multiplexing. Due to the generality of our model, our findings could help explain empirical observations of how thermodynamic costs of information transmission make inverse multiplexing energetically favorable in many real-world communication systems.

https://arxiv.org/abs/2302.04320
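
The "convex function of the channel capacity" part is what does the work there: if the cost f(C) of running one channel at capacity C is convex with f(0) = 0, then f(C/k) <= f(C)/k, so k channels at C/k each never cost more than one channel at C. Toy numbers (with a made-up convex cost curve, not the paper's model):

    import numpy as np

    def h2(p):
        # binary entropy in bits
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    def bsc_capacity(p_flip):
        # binary symmetric channel: C = 1 - H2(p)
        return 1 - h2(p_flip)

    f = lambda C: C ** 2      # hypothetical convex cost with f(0) = 0

    C_total = bsc_capacity(0.11)   # ~0.5 bits per use of required capacity
    for k in (1, 2, 4):
        print(k, "channel(s): total cost", round(k * f(C_total / k), 4))
    # 1 -> 0.25, 2 -> 0.125, 4 -> 0.0625: splitting is strictly cheaper here

The interesting claim is that the real cost function is not always monotonic, so the answer is a Pareto front rather than "always split".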

What is the Shannon entropy interpretation of e.g. (quantum wave function) amplitude encoding?

"Quantum discord" https://en.wikipedia.org/wiki/Quantum_discord

> In quantum information theory, quantum discord is a measure of nonclassical correlations between two subsystems of a quantum system. It includes correlations that are due to quantum physical effects but do not necessarily involve quantum entanglement.

Isn't there more entropy if we consider all possible nonlocal relations between bits? Or, which entropy metric is independent of redundant coding schemes between points in spacetime?
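
On the first question, one concrete reading: for a pure state with amplitudes a_i, a computational-basis measurement gives outcomes with Born probabilities |a_i|^2, so a natural Shannon entropy of an amplitude encoding is H(|a_i|^2). That equals the von Neumann entropy of the dephased (post-measurement) state, while the pure state itself has von Neumann entropy zero. A quick sanity check (my sketch, not from the paper):

    import numpy as np

    def shannon(p, eps=1e-12):
        p = p[p > eps]            # drop zero (and numerically tiny) outcomes
        return float(-(p * np.log2(p)).sum())

    def von_neumann(rho):
        return shannon(np.linalg.eigvalsh(rho))

    a = np.array([1.0, 2.0, 2.0, 4.0])
    a /= np.linalg.norm(a)        # amplitude-encode 4 values into 2 qubits

    p = np.abs(a) ** 2            # Born-rule outcome probabilities
    print(shannon(p))                          # ~1.44 bits
    print(von_neumann(np.outer(a, a.conj())))  # 0.0: the pure state
    print(von_neumann(np.diag(p)))             # ~1.44 bits: the dephased state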


Does this mean inverse muxing is how negentropy exists in general?



