
Is Information in the Brain Represented in Continuous or Discrete Form? - gballan
https://arxiv.org/abs/1805.01631
======
aurelian15
_Rushton (1961) concluded that the neural signaling of a typical human
myelinated nerve fiber spanning, say, between a finger and the spinal cord
cannot employ a continuous representation due to the presence of noise.
Despite these seminal works, computational models based on continuous
representation dominate present day neuroscience literature – for example,
continuous attractor networks (Eliasmith, 2005; Wang, 2009)._

Sure, we know from basic information theory that a noisy channel can carry
only a finite amount of information, so any noisy system is effectively
discrete. However, depending on the magnitude of the noise, I don't see why we
should not describe models in terms of continuous variables.

As an analogy, consider the average numerical computer program. We often
derive the underlying math in terms of real-valued vector spaces. However,
when implementing the program on a computer system, all variables are
ultimately discrete. One reason for not thinking about our problems in terms
of discrete objects is that the math just tends to get incredibly complicated
as soon as objects are discrete, and, in most cases, the computational
substrate is fine-grained enough anyway.

Similarly, if noise limits the effective resolution of individual signals or
representations in the brain to 5 bits (number taken from the paper), that (in
my opinion) still does not mean that we should stop describing computational
neuroscience models in continuous spaces -- at least not if they are
validated against the empirically measured amount of noise, which is exactly
what Eliasmith's lab (cited above) does (full disclosure: I'm one of his
students). Furthermore, as soon as you code information in populations of
neurons, the noise on individual connections becomes less important. One could
argue that whenever the brain needs precise computations, it dedicates more
neural resources to the problem; the noise will then "average out".
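To make the "average out" point concrete, here's a quick toy sketch (my own
made-up numbers, not from the paper): with independent Gaussian noise on each
neuron, the spread of a population-mean estimate shrinks roughly as 1/sqrt(N):

```python
import random
import statistics

random.seed(0)
SIGNAL = 0.7      # the "true" value being represented (made-up number)
NOISE_SD = 0.2    # per-neuron noise magnitude (also made up)

def population_estimate_sd(n_neurons, trials=5000):
    """Spread of the population-mean estimate across many trials."""
    estimates = []
    for _ in range(trials):
        readings = [SIGNAL + random.gauss(0, NOISE_SD) for _ in range(n_neurons)]
        estimates.append(sum(readings) / n_neurons)
    return statistics.stdev(estimates)

for n in (1, 10, 100):
    # Expect roughly NOISE_SD / sqrt(n): ~0.2, ~0.06, ~0.02
    print(f"N={n:3d}  sd of estimate ~ {population_estimate_sd(n):.3f}")
```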

~~~
noobermin
>I don't see why we should not describe models in terms of continuous
variables.

You're being way too polite in your wording; as your next paragraph states,
every field that uses computers has done it this way since computers were
invented. Allow me to push it further: AFAIC, almost all hand-calculated math
is discrete. The numbers you can write on a page are countable, in fact
finite, so all _actually calculated_ math is "discrete" in some sense. No one
ever really touches the full continuum of R other than abstractly: in real
calculations, whenever you truncate pi or sqrt(2), or more generally calculate
with a fixed set of digits, you are doing _discrete_ _math_. So yes, Q is
dense in R, which helps, and you could write out more digits if you really
wanted to, but even then, people restrict themselves to small, finite subsets
of Q, and even resort to things like logarithms/orders of magnitude to keep
that set as small as possible, since our brains can't take all the hairiness.
But still, that set is large enough to do the things we care about anyway.
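That "fixed set of digits" regime is easy to reproduce: Python's decimal
module lets you pin the working precision, so every value really is drawn from
a finite grid of rationals (my example, just to illustrate the point):

```python
from decimal import Decimal, getcontext

getcontext().prec = 6         # six significant digits, like hand calculation

root2 = Decimal(2).sqrt()     # sqrt(2) truncated: a rational stand-in
print(root2)                  # 1.41421
print(root2 * root2)          # 1.99999, not 2: the finite grid shows through
```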

However, when people want to think _abstractly_ rather than explicitly
(calculating), it's easier to take the limit as the gaps go to zero and deal
with clean, C^\infty functions. I mean, that's literally what people mean (at
least, that's how my experimental physicist brain thinks of it) when they say
analysis/calculus is about approximations. I deal with plasma in the hot,
nonquantum limit, so my ions and electrons are in fact discrete particles.
Whenever I calculate an electric field or a magnetic field for the
distribution, I ignore the fact that there will be noise on the scale of
individual particles and replace that noisy function with a clean, smooth
function, which tends to be a good approximation. In fact, this is how almost
every classical physics problem is solved: you ignore the structure at the
particle level and pretend it is "continuous". Same with materials
engineering, and so on.
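A toy version of that move, with nothing plasma-specific about it (the
distribution and bin counts are arbitrary choices of mine): histogram a cloud
of discrete particles into a noisy density, then replace it with a smoother
function via a moving average:

```python
import random

random.seed(1)
# 10,000 discrete "particles" sampled from an underlying smooth distribution
particles = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# A histogram is a noisy, particle-level estimate of the density
BINS, LO, HI = 50, -4.0, 4.0
width = (HI - LO) / BINS
counts = [0] * BINS
for x in particles:
    i = int((x - LO) / width)
    if 0 <= i < BINS:
        counts[i] += 1

def smooth(ys, k=2):
    """Replace a noisy function with a moving-average approximation."""
    out = []
    for i in range(len(ys)):
        window = ys[max(0, i - k): i + k + 1]
        out.append(sum(window) / len(window))
    return out

smoothed = smooth(counts)
# The smoothed curve tracks the bulk shape while damping bin-level jitter
print(max(counts), round(max(smoothed), 1))
```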

~~~
integration
Forgive my ignorance, but isn’t it true that we don’t know what elementary
particles are made of? In other words, doesn’t it appear that matter is both
continuous and discrete, and that we could conceivably find particles that
comprise elementary particles... and so on? Is there a name for this paradox?

~~~
noobermin
I mean, the standard model says they are fundamental, although there are
theories that they may be "made up" of other things, as in string theory,
though those theories have (imo) struggled to compare to experiment, or demand
experiments that are infeasible today. Regarding "continuous and discrete":
quantum mechanically, electrons aren't definite in space; you might be
referring to that. In plasma physics we sit above the quantum limit
(neglecting p and x variance, or more correctly, the variances' product well
exceeds \hbar), so we treat them as discrete particles, which, like all of
classical physics, is an approximation too.

My point is that even in classical physics, I usually don't care about
fluctuations on small scales which will be noisy, so on top of the classical
approximation I make _another_ approximation where I replace a noisy function
with a smoother function that well approximates the noisy one. Smoothing out
the noise is an important tool for theoretical understanding (as OP's student
pointed out here), but it's important to remember it's just an approximation.

EDIT: re the other replier. Another example: I treat ions as "fundamental"
too, since we don't reach energies and conditions where their internal nuclear
structure matters, only ionization.

------
nanna
It frustrates me that the engineering community has so little recollection of
its own history that it doesn't even contextualise /this/ question as one of
the founding problematics of its own specialism, and of our cybernetic age in
general. The question of whether the brain functions discretely ('digitally')
or continuously ('analogue') was one of the three main debates at the Macy
Meetings in the 1940s. [0]

For example Warren McCulloch and Walter Pitts effectively staked their careers
on the brain being a fully digital network, as a computer. But John Von
Neumann was more cautious, arguing that the digital function of the brain
rested on a chemical, analogue foundation, and that it was unclear whether
messages were coded in a digital or analogue way. Julian Bigelow argued that
mathematicians and physicists preferred to ignore the biological structure of
neurons and identify them with their digital operation.

This strikes at the heart of the difference between analogue and digital, and
in my opinion it's a philosophical, not just physiological, problem.

[0] For anyone interested in this history I'd highly recommend Ronald R.
Kline's work, especially The Cybernetic Moment (p 46-47)

~~~
gldalmaso
Interestingly, the structure of DNA was discovered in the 1950s. In what ways
could it have shaped that discussion?

~~~
nanna
Historically speaking, I'm not sure, but I'd imagine the same debate would be
had there too. This isn't just an epistemological question, but the product of
multiple scientific revolutions (the return of atomism in the late 19C,
quantum mechanics, neurophysiology, cybernetics...), so, to take a Kuhnian
position, the innovations of the 1950s especially are well within the same
epistemological regime.

------
mannykannot
An interesting statement here:

'One answer, as outlined by VanRullen and Koch (2003), is that continuous
representation “cannot satisfactorily account for a large body of
psychophysical data”. For example, 1 cent does not typically have much value
to most people. However, a person may decide to buy a product if priced at
$1.99 – yet, refuse to buy the same product if priced 1 cent higher at $2.00.
Such an abrupt (or step) change in the brain’s purchasing decision cannot be
modeled using a continuous representation despite extensive attempts to do so
(Basu, 1997).'

The tacit assumption made here seems to be that our concept of number is
hard-coded into our brains at the physical signal level, so that when we think
of the number 2, some part of our brain generates a physical signal that has
the quantity of 2. (Absent that assumption, why would one think that this
experiment tells us anything about how signals are physically encoded in the
brain?)

AFAIK, it is generally thought that our concept of numbers is at a more
abstract, symbol-manipulation level (especially when, as in this example,
fractions are involved). It seems to me that this paradox (which looks like a
variant of the sorites paradox, e.g. what is the minimum size of a heap of
sand?) is resolved if we consider people's tendency to disregard the pennies
in a monetary amount.

Caveat / mea culpa: I have not read the papers referenced in the quote.

~~~
chongli
It's really surprising to me that people still fall for the $1.99 trick. When
I see any price I instantly round it up to the nearest dollar, hundred,
thousand, etc. It makes comparisons so much easier!

~~~
CPLX
I bet you don't. In the sense that if you were under extreme surveillance and
someone analyzed all your purchasing decisions there would be a detectable
difference in your behavior based on the "99 cent trick".

It's a pretty well-known cognitive bias to overestimate one's resistance to
cognitive biases.

------
lend000
> we show that information cannot be communicated reliably between neurons
> using a continuous representation

It seems like this doesn't take into account spiking frequencies (where the
frequency is a continuous variable), nor the potential for signals with
different average magnitudes (even if each signal is inconsistent due to the
presence of noise) to have a statistically significant effect over many
iterations.

~~~
mannykannot
In my view, there seems to be some confusion in the paper between 'discrete'
and 'digital': "Furthermore, in the present work, the terms continuous and
analog are treated as equivalent in an engineering sense, _as are the terms
discrete and digital._" [my emphasis]. A similar conflation is seen in
several of the quotes from other work in the introduction of the paper.

I think there is a distinction between 'discrete', where the signal is encoded
as a non-continuous physical property, and 'digital', which adds the concept
of place value to discreteness. This distinction is relevant, for example,
where the authors contrast the ability of digital recordings to resist the
degradation that analog recordings suffer from. That resistance, however,
depends quite substantially on error-correcting codes, for which place-value
matters.
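For what it's worth, the resistance-via-coding point can be sketched with the
classic Hamming(7,4) code (my own minimal sketch, not from the paper): four
data bits get three parity bits, and any single flipped bit is located by the
parity syndrome and repaired:

```python
def encode(d1, d2, d3, d4):
    """Hamming(7,4): place parity bits at positions 1, 2, 4 (1-indexed)."""
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(code):
    """Recompute parities; the syndrome is the 1-indexed error position."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # covers positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # covers positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # covers positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1              # flip the offending bit back
    return c

word = encode(1, 0, 1, 1)
for i in range(7):                   # flip each bit "in transit", one at a time
    noisy = list(word)
    noisy[i] ^= 1
    assert correct(noisy) == word    # every single-bit error is repaired
print("all single-bit errors corrected")
```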

------
vinceguidry
My intuition is that it's represented ultimately in a discrete form, but in a
symbolically-compressible way, such that continuities can be represented
extremely cleanly.

Let's take a contrived example. You know that if you press on the gas pedal of
the car, the car moves forward. You also know that if you press hard, it
jerks. Your brain doesn't represent all the intermediate states between light
and hard pressing, it instead represents the continuity between the two
symbolically.

We use these continuity representations at speed to do things in the real
world, and breaks in those continuities really trip us up, and force us to
slow down until a new representation of the continuity can be formed. When it
happens to me, it almost feels like I'm "repacking" the information back into
my brain.

But if you meditate on how you learn things, you can come to the conclusion
that all knowledge 'feels' the same way in the mind. For me, this knowledge is
discrete; I've even been able to articulate some 'operations' that one can
apply directly to conceptual units.

If you think about it, an "intuitive" understanding of something means
precisely this: you understand the system as a whole without having to think
hard about what it's doing in parts. Outputs are mapped cleanly to inputs in
the mind.

~~~
milesvp
I very much disagree with most of your examples. Most people don't seem to
have a good sense of how associative memories work. The brain is so good at
lying to us that we think we have all these discrete pieces of knowledge, but
the reality, from my studies, is less tidy. When we 'learn' something, what's
happening is that a bunch of neurons are being stimulated by inputs from all
over your body. What ends up happening is that you get a neuronal pattern that
is largely repeatable given similar enough inputs (it's why we can roughly
reconstruct images someone is looking at from fMRI brain scans). But there is
a problem when we 'learn' something: if we are in a different context and many
of our sensory inputs are significantly different, then we may be completely
unable to recall the thing we learned, or be unable to apply that knowledge,
because not enough of the neurons fired to 'pattern match' our knowledge. What
this means is that if we want an intuitive understanding of something, we
probably need to have a lot of concrete examples to pull from, in as many
different contexts as we can.

To get back to your gas pedal example: I'm fairly positive that the brain
needs to map as many of the degrees of hardness of pressing to perceived
acceleration as possible to get a good intuition for the force necessary to
achieve a given acceleration. And it's even 'worse' than you realize, because
your brain also needs to map out pedal depression vs. environment (hills and
turns affect acceleration), and depression vs. car load. And, if you want to
be a really good driver, you need to collect all this data in different cars.

Now we humans seem to have the ability to abstract away this underlying
machinery to some degree, largely thanks to the wiring of our neocortex, so
after the age of 5 you can start to interpolate where unknown data points
might lie on some spectrum given a few reference data points. But even then it
won't be intuitive until we've done the exercise many times with different
data.

~~~
vinceguidry
I think when dealing with physical modeling of this sort, obviously we're
going to need lots of examples to get a full enough picture of reality. I play
ping-pong, so I have to be really cognizant of this, as my mind has to pattern
match the way the ball works hundreds of times in a game.

But if I truly had to see the pattern on every single angle, every single type
of spin, every different paddle, every different table, every different room,
then the sheer combinatorial complexity would make ping pong impossible to
improve at. Talking about and coming up with insights on better play would
also be impossible.

It's relatively easy to break the symbolic continuities that the brain stores
with new types of inputs. That doesn't mean those symbolic continuities don't
actually exist, or that the brain won't constantly seek them out.

------
panic
I don't understand the argument here. Obviously you need some sort of
amplification process to avoid noise. But there are amplifying processes that
don't follow the "discrete" Shannon model. Like, I'm pretty sure neurons
themselves are an example -- they maintain a stable voltage (even with noise)
until they depolarize and spike. The process of depolarization is
fundamentally analog, integrating together all the synaptic inputs (and
anything else that affects the voltage in the cell). It can't be modeled using
a sequence of discrete symbols, but it's also stable over time in the presence
of noise.
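That hybrid picture is what leaky integrate-and-fire models capture: a
continuously integrating membrane voltage with a discrete, all-or-nothing
spike on threshold crossing. A minimal sketch (parameter values are
illustrative, not physiological):

```python
def lif(inputs, dt=1.0, tau=20.0, threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: analog integration, discrete spike output."""
    v = 0.0
    spike_times = []
    for t, drive in enumerate(inputs):
        v += dt * (-v / tau + drive)   # continuous leak + input integration
        if v >= threshold:             # discrete event: spike and reset
            spike_times.append(t)
            v = v_reset
    return spike_times

# Constant drive: the voltage ramps smoothly, but the output is a spike train
print(lif([0.08] * 100))   # regularly spaced spikes; none at all if drive is 0
```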

~~~
Maybestring
>It can't be modeled using a sequence of discrete symbols, but it's also
stable over time in the presence of noise.

Did you mean can be?

When you learned to integrate continuous functions, I'm certain you learned
to do it while scratching sequences of discrete symbols onto paper.

~~~
panic
Using discrete symbols to describe a continuous model is different than using
a discrete model. In this case, the paper is arguing that a particular
discrete model based on sequences of discrete symbols (Shannon information
theory) applies to the problem, not an arbitrary model including things like
integrals.

------
macawfish
I have a feeling, based on years of reading peer-reviewed material, that the
brain can store quantum information or something mathematically similar. Go
ahead and downvote me all the way to the loony bin if you'd like... Or read
this article:
[https://www.quantamagazine.org/a-new-spin-on-the-quantum-brain-20161102/](https://www.quantamagazine.org/a-new-spin-on-the-quantum-brain-20161102/)

There have also been empirically successful applications of quantum
theoretical ideas to cognitive studies:

[https://en.m.wikipedia.org/wiki/Quantum_cognition?wprov=sfla...](https://en.m.wikipedia.org/wiki/Quantum_cognition?wprov=sfla1)

So while I appreciate that the article is taking into account the plausibility
of "both" possibilities, it strikes me to be an uncannily "classical"
question.

~~~
red75prime
Computers can store structures designed to be mathematically similar to
quantum information too (see [0] for example). It doesn't mean that computers
can efficiently perform quantum computations. If there's evidence that humans
can efficiently solve problems in BQP [1] complexity class, like cracking RSA
encryption, I'd like to hear it.

[0] [http://quantum-studio.net/](http://quantum-studio.net/) [1]
[https://en.wikipedia.org/wiki/BQP](https://en.wikipedia.org/wiki/BQP)

~~~
macawfish
I don't know of an example from BQP, but 3d protein folding is NP-complete and
humans have outperformed computers at protein folding. Although I don't know
how you'd even directly compare humans and computers. I suppose that's a big
part of what this article is grappling with.

The interesting thing to me is not whether humans can solve exactly the same
problems as quantum computers with lots of qubits (quantum computers are not
identical to all of quantum theory!). The interesting question to me is
whether humans can hold and pass around irreducibly probabilistic states
without collapsing them.

~~~
red75prime
Contemplation of brain cloning and consciousness creates a feeling of paradox,
and then the no-cloning theorem comes to mind. Right?

~~~
red75prime
If it is indeed your line of reasoning, I don't understand how it is supposed
to work. If the quantum state doesn't influence macroscopic behavior, then
it's useless. If it does, then it will collapse.

~~~
macawfish
I was being unclear. I'm wondering if the brain might hold probabilistic
states for extended ("macroscopic") periods of time so that they can be
observed/collapsed later, but not immediately.

------
analog31
This topic is admittedly way over my head, but the analogy that comes to mind
for me is audio recording and transmission. Back in the day, it was done with
entirely analog systems, yet FM radio produced an intelligible output despite
being bathed in a sea of noise.

Granted, after enough generations of copying, or if stored over a long enough
period, the signal would begin to get washed out... kind of like the
information in my brain.

~~~
jonnybgood
That analogy doesn’t work. Radio is not drowned in a sea of noise, because of
how frequencies are allocated. Or I should say that unless there are other
sources transmitting on a particular frequency, noise will not affect
communication.

------
tabtab
It seems we are using terms meant for manufactured items that probably don't
apply to the brain. Neurons do what neurons do, regardless of the label. You
can model them (to a sufficient approximation) using both analog and digital
means. If forced to classify neurons with "analog versus digital", I'd say
they resemble our analog machinery more than our digital machines. A cup of
coffee may change the "calculations" our brain makes, which is not a typical
feature of our digital machines (unless they have what we'd call a defect).

------
jonbarker
Probably discrete, but a related question is what constitutes the discrete
element? Science just discovered that it's not just neuron synapses, as
previously believed. Recent discoveries with snails show that reflexive
'memories' are transferable from one snail to another via injection, so our
synapse-only model was likely incomplete, at least for simpler organisms:
[https://www.cnn.com/2018/05/17/health/snail-memory-rna-science-study-trnd/index.html](https://www.cnn.com/2018/05/17/health/snail-memory-rna-science-study-trnd/index.html)

------
bitL
Is there any abstraction of Deep Learning on discrete domains? Using discrete
calculus from Concrete Mathematics perhaps? Or better a mixed
discrete/continuous one? I guess optimization there would be killer...

~~~
pests
There are many different ways to represent discrete domains in deep learning.
One example is entity embeddings, such as word2vec, which turn a finite list
of entities (words, in word2vec's case) into vectors of continuous values.
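Mechanically it's just a lookup table from discrete tokens to continuous
vectors; here's a toy version with hand-picked (not learned) vectors, just to
show the shape of the idea:

```python
import math

# Discrete tokens -> continuous vectors. In word2vec these are learned;
# the values below are hand-picked for illustration.
embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Related entities sit close together in the continuous space
print(cosine(embeddings["cat"], embeddings["dog"]))  # close to 1
print(cosine(embeddings["cat"], embeddings["car"]))  # much lower
```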

~~~
bitL
I know. Though if you want to represent complex DAGs or do some inner
combinatorics, it doesn't generalize that well. DeepWalk and its variations
are only for toy-sized graphs.

------
zerostar07
> signifies a major demarcation from the current understanding of the brain’s
> physiology

There's always been a debate about whether the brain uses a (discrete) spike
code or a more analog rate code.

Then there's also this paper, which found 26 distinguishable levels of synapse
strength in total:
[https://elifesciences.org/articles/10778](https://elifesciences.org/articles/10778)

------
DoctorOetker
Continuity is not a property of sets but of functions: a function can satisfy
the continuity condition.

Sets can satisfy a denseness property.

------
hestefisk
This posits a very mechanistic view of human cognition, assuming that the
emergent parts of our thoughts can be reduced to discrete or continuous
numbers. Thoughts, information, knowledge are probably both continuous and
discrete at the same time. Cognition is a biologically emergent process, not a
number manipulation exercise.

To that end Shannon’s theory of information needs an overhaul.

------
KasianFranks
It's always continuous. We get better similarity measurements with
floating-point numbers.

~~~
kevin_thibedeau
Floating point numbers are not continuous.

~~~
hderms
Yeah, that is important to recognize, but floating point numbers could
obviously sample continuous measurements with less loss than integers.

~~~
yorwba
Only if the measurements span several orders of magnitude and you want to keep
the relative error low. Otherwise you'd be better off with fixed-point
numbers, which are spread out more evenly (about half of the IEEE floats are
less than 1 in magnitude).

And if you don't limit yourself to linear mappings, you can probably do even
better with a non-uniform integer encoding.
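The trade-off is visible with math.ulp: a float's absolute resolution scales
with its magnitude (roughly constant relative error), while a fixed-point grid
is uniform. A quick sketch (the 16-fractional-bit Q-format here is an
arbitrary choice of mine):

```python
import math

# Floats: spacing between adjacent representable values grows with magnitude
for x in (0.001, 1.0, 1000.0):
    print(f"float spacing near {x:>8}: {math.ulp(x):.3e}")

# Fixed point: uniform grid, constant absolute resolution everywhere
FRAC_BITS = 16
SCALE = 2 ** FRAC_BITS

def to_fixed(x):
    return round(x * SCALE)      # quantize onto the uniform grid

def from_fixed(n):
    return n / SCALE

for x in (0.001, 1.0, 1000.0):
    err = abs(from_fixed(to_fixed(x)) - x)
    print(f"fixed-point error at {x:>8}: {err:.3e}")
```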

------
toolslive
Is there anything at all in the physical universe that's continuous?

------
bra-ket
Why doesn't a paper on information processing in the brain mention grid cells?
[http://www.scholarpedia.org/article/Grid_cells](http://www.scholarpedia.org/article/Grid_cells)

------
snowsilence
No. (Betteridge's law of headlines)

~~~
JackFr
Clever.

