Main takeaway, very interesting analogy. Ant colonies are great examples of complex systems with emergent large-scale behavior. Indeed the same could be said about networks of neurons. Interesting to think of an ant colony as the sum of oscillations of signals.
> Every morning, the shape of the colony’s foraging area changes, like an amoeba that expands and contracts.
Sounds like an emergent macroscopic "heartbeat" of the colony.
> In an older, larger colony, each ant has more ants to meet than in a younger, smaller one, and the outcome is a more stable dynamic.
It makes sense that small perturbations would temporarily morph the heartbeat, but it would probably snap back into the default oscillation pretty quickly. It would be interesting to see whether a small colony is as resilient to small perturbations as a large colony is to large ones, holding some adjusted ratio of perturbation size to colony size constant.
> individual ants live at most a year.
This comes as a surprise to me.
Indeed. I've long thought that an ant colony should be seen as a single individual, rather than a group. One part which can procreate, like the reproductive system. Another which can fight off invaders, like white blood cells, or perhaps muscles. The anthill, in turn, is like a body; constructed by the cells and neurons, and protecting the system as a whole.
Consciousness is an emergent phenomenon, and a collection of consciousnesses is the noosphere. Even though each of your organs makes its own decisions and contributes individually to your "consciousness", you still consider the entire thing your "self". Your decisions are made by analyzing multiple conflicting, distributed signals given off by your organs.
Organizations are no different, assuming identity, autonomy, and motivation.
But my theory falls apart with the reproductive system. We don't really reproduce other 'countries', while with ant colonies a single entity produces all the "cells", and also all the "embryos" that start their own ant colonies.
In that sense an ant colony uses asexual reproduction if looked at as a whole.
Thanks for the new insights! :)
Sure we do. The UK has a whole lot of offspring, for instance.
Partly reproduction, partly conquering. As usual the place is not empty.
Countries definitely do reproduce, usually on a longer cycle than that of the humans that make them up.
Oh, we still lay the eggs of new countries in the corpses of ones we kill.
It is a fascinating book, and I can recommend reading it, though having been published in 1925, it is possible some of the information is out-of-date.
"White Ant" actually refers to termites, though the same principles apply. The book was originally published in Afrikaans. According to Wikipedia:
> [The book] was plagiarised by Nobel laureate Maurice Maeterlinck, who published La Vie des Termites (translated into English as The Life of Termites or The Life of White Ants), an entomological book, in what has been called "a classic example of academic plagiarism" by University of London's professor of biology, David Bignell.
Following a reference led to a page which I unfortunately don't have time to read in its entirety right now, but which, from a skim, seems to have some interesting further information on termites:
I thought it was a data entry error, but no, it looks like that's the legitimate price range for NEW hardcovers (the actual listed ones are even more!):
It's either a ripoff, or some sort of collector's edition original versions. Though if it were the latter, I'd have expected it to be advertised as such.
The great E.O. Wilson has had similar ruminations:
I think this is the main argument for looking at a colony as a distributed individual organism.
The same can be said of human organizations, too. Organizations have distinct behaviors ("company culture" comes to mind) which can be totally out of the control of individuals, if the org is big/complex enough.
God I love ants so much. Such a philosophically interesting creature.
Or perhaps like a Fourier decomposition of a complex waveform. Whereby each ant essentially becomes a constituent "wave" in a complex signal.
You might have to consider several such "waves" in ant colonies — maybe one electrically defined for a certain type of information; another chemically defined for another domain; etc.
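As a toy illustration of that decomposition idea (not a model of real colonies; the sample rate, frequencies, and amplitudes below are all invented), you can sum two sine "waves" and recover them from the combined signal with an FFT:

```python
import numpy as np

fs = 1000                     # sample rate (Hz), assumed
t = np.arange(0, 1, 1 / fs)   # one second of "colony signal"

# Two constituent "waves", e.g. one chemical and one electrical channel
signal = 1.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# The FFT recovers the hidden constituents from the summed signal
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))  # -> [5.0, 40.0]
```

The two spectral peaks land exactly at the original frequencies, even though neither "wave" is visible in the raw sum.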
But yes, he does great work...
The group knows how to build the structure even if no one individual can explain every detail.
And to an observer they'd see humans arrive each morning, spread out to accomplish tasks and then return to their cars.
Once you start getting into 5-phase, 2.5-year-long projects, it helps to have a dedicated PM overseeing every detail and coordinating between the engineer/owners/superintendents.
Heaven forbid you get into a government contract, where you now have 10x the paperwork/submittals and RFIs compared to a private job; there it really helps to have one person pretty much memorize the spec book and know where to find everything when needed.
The parents would be around and healthy to help their kids emotionally and/or financially through their young adult years and the kids would be less likely to have to take care of their parents while trying to get their own life off the ground.
The difference between contact among ants and contact among neurons is that with ants, it isn't the same individuals contacting each other each time.
A larger pool of workers also means the colony is less likely to suffer catastrophic setbacks, for example when it sends a good portion of the pool to a promising food source and those ants get washed away, fall prey to predators, or whatever.
The idea of memory as persistent echoes of neuron firings in spacing & intensity is fascinating.
Were you expecting them to live longer or shorter? (I would likely have guessed six months.)
> The main idea behind complex systems is that the ensemble behaves in ways not predicted by the components. The interactions matter more than the nature of the units. Studying individual ants will never (one can safely say never for most such situations), never give us an idea on how the ant colony operates. For that, one needs to understand an ant colony as an ant colony, no less, no more, not a collection of ants. This is called an “emergent” property of the whole, by which parts and whole differ because what matters is the interactions between such parts. And interactions can obey very simple rules. The rule we discuss in this chapter is the minority rule.
> The best example I know that gives insights into the functioning of a complex system is with the following situation. It suffices for an intransigent minority –a certain type of intransigent minorities –to reach a minutely small level, say three or four percent of the total population, for the entire population to have to submit to their preferences. Further, an optical illusion comes with the dominance of the minority: a naive observer would be under the impression that the choices and preferences are those of the majority. If it seems absurd, it is because our scientific intuitions aren’t calibrated for that (fughedabout scientific and academic intuitions and snap judgments; they don’t work and your standard intellectualization fails with complex systems, though not your grandmothers’ wisdom).
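The minority rule is easy to sketch numerically. In this toy simulation (all parameters invented for illustration), flexible majority members accept either option while intransigent members accept only theirs, so any group sharing one dish serves the minority option whenever it contains at least one intransigent member:

```python
import random

random.seed(0)

def minority_share(minority_frac=0.04, group_size=8, trials=10_000):
    """Fraction of groups forced to the minority's option.

    Flexible majority members accept either option; intransigent
    minority members accept only theirs, so a group sharing one
    dish serves the minority option if it has any intransigent member.
    """
    forced = sum(
        any(random.random() < minority_frac for _ in range(group_size))
        for _ in range(trials)
    )
    return forced / trials

# Theory: 1 - 0.96**8 is about 0.28, i.e. a 4% intransigent minority
# captures roughly 28% of tables of eight; larger shared pools
# (whole supply chains, say) push this toward 100%.
print(round(minority_share(), 2))
```

The "optical illusion" in the quote falls out directly: an observer counting what gets served would badly overestimate how many people actually prefer the minority option.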
Of course, in this context the grandmothers' wisdom is tradition in some form, since it is passed down through generations. The same goes for many practices in religion, some of which might have been useful at some time (like not eating pork, because pigs will eat anything they find and one would get sick more quickly).
I live in Thailand and my girlfriend is Buddhist. Often I just go with the flow with regard to Buddhist practices, even as a non-believer, because there might be some real use to these practices that I don't understand as a non-believer. At the very least it makes the Thai people in our village accept me more when they see me doing the same things as my girlfriend at our local temple (burning incense, "praying" to some statue, etc.).
I wish Taleb would give more on identifying and learning at a system-wide level. For abstractions and "less than obvious" spheres, it becomes difficult to separate the forest from the trees: is the forest the system, or the genus of the plant in question, or its bordering systems, etc.? Behaviors and patterns which are emergent only at the individual level make '10,000-foot views' harder to perceive, let alone examine and extrapolate "very simple rules" from.
It's like a world chess master or an NBA player telling others to "play better!" What Taleb means here, imho, is that too many scientists fail to propose models that fit his mathematical perception, his world view; but if it were that simple, he'd have written a book called "Systems Thinking".
He touches a lot on how he views things, so you can infer a lot of his mental framework from reading e.g. the Black Swan or Antifragile — both great in their own respect. But simple rules on this topic, that would/will be groundbreaking.
I honestly pride myself on a "transdisciplinary" mind (which comes with a lot of "impostor-of-all-trades" syndrome, but meh; it's also humbling to realize the path to knowledge may not be the most rewarding short-term path). Taleb is one of those relatively "wide" minds: he's able to speak with substance on a lot of domains, but like many abstract thinkers he displays a lot of casualness toward the difficulty of actual implementation.
It's great to talk about systems, but the reality is often about refactoring horrible codebases, and if it works you'd rather spend more money on the actual mission than on making things and concepts prettier. Even, and especially, at the edge.
My 2 cts, obviously. TL;DR: I wouldn't read much into it. It's one of those things we only hear because the person saying it is famous, not because the idea has that much velocity.
I've researched a lot of human-made content on "being better": from ancient religions and myth to modern self-development, by way of classic philosophies and hordes of thinkers, whatever I could put my eyes on.
It's obviously just anecdotal, probably, but there is a lot of truth and a lot of good in very ancient principles and philosophies. Just yesterday I was reading about how "restrictions" in the Jewish tradition are meant to essentially implement what we'd call "self-discipline" today, which is a clear marker of one's ability to succeed in most of life's endeavors: conducting projects, maintaining relationships, etc. It's basically the idea that a ritualistic frame of lifelong habits is an extremely strong basis from which to implement whatever change you will require in life, and to resist things that tempt you but shouldn't. It's just training, basically, and all validated by modern neuro-cognitive science and psychology.
Hindsight is 20/20 but it gives a sense of "how true" some ancestral, or random, or even anecdotal idea or principle may be, and why it "endures" and resonates for so long across people, centuries.
Buddhism is notorious for how close it is to a lot of what we call cognitive therapy (or training, if you're not 'sick' but after improvement). The Tao Te Ching in particular, much like Stoicism in the Western world, has been a treasure of happiness and greatness for countless people across the ages. I really wouldn't refrain from "discovering" the essence of Buddhism; it can only be an addition to your own philosophical distinctiveness. ;-)
And your girlfriend probably loves that you're discovering it through this medium, too.
I think that ant example answers that question with a strong "yes".
Think of Conway's Game of Life, which has been shown to be Turing-complete. The various "creatures" that exist in its worlds exist even without the program being run. We run the program only to observe them... the execution is for the external observer's benefit, the creatures exist in the "mathematical space" of the cellular automaton, not on the computer where the simulation is run. If an algorithm can be conscious, then so can one of this game's creatures, and its consciousness will be observing and interacting with the Game's mathematical reality, which will seem quite physical to that creature even without ever being simulated on any "physical" (to us) hardware.
Maybe all of existence is like that, no? There are people who say that maybe our Universe is a simulation. I say sure, but it doesn't need to be "running" on anything... it just exists because the rules it follows exist mathematically. It does get simulated, to an approximation however... inside our conscious observation of it! The Universe "exists" mathematically, but a subset of it "runs" in the brains of the observers to which it gives rise... it is the ultimate strange loop, forever eating its own tail.
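For the curious, the whole rule set fits in a few lines, and a glider really does reconstitute itself one cell diagonally every four generations, whoever (or whatever) runs the rules. A minimal sketch:

```python
from collections import Counter

# Minimal Game of Life step: a set of live (x, y) cells in,
# the next generation out.
def step(live):
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After four steps the same glider reappears, shifted by (1, 1).
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # -> True
```

The glider's trajectory is fully determined by the rules and the starting pattern; the computer is just one way of looking at it.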
Well, these are not just algorithms; they are actual agents, embodied and embedded in the environment. Each action or movement can change the information the algorithm learns from, so different previous experiences mean different agents. Also, the learning process relies on noise, which can cause different outcomes. You would have to reproduce the whole environment to get a situation where an agent produces the same results given the same input.
Also, laws of physics are like a fixed algorithm our brains run on.
> So where is the consciousness, in the execution, or in the algorithm?
Consciousness is in the triad formed by environment, agent, and reward signals. It's a continuous loop of perception, judgement, and action, followed by observing the reward signals. The purpose of this loop, for biological agents, is self-reproduction, so it is a self-reliant ultimate purpose; it needs no external purpose beyond this one.
Well, the ideal form one projects onto one's representation of an algorithm surely might feel like that. The execution of software on a perpetually moving execution stack, in a broader network of extremely volatile context, is another affair.
You’ve put the idea more succinctly than I could have, so thanks for that.
Most of these ideas came from Jeff Hawkins' Thousand Brains theory.
Is a human in deep freeze considered conscious?
In the second book, her brain is then duplicated across multiple colonies. Because the ant computer isn't as powerful as silicon based computers, one of her instances later realizes that she is a significantly compressed version of herself and that she doesn't have most of the old memories, capabilities or capacity for emotions that her human self had, presumably in large part also due to being transferred between three different substrates (flesh, silicon, ants).
The even stronger analogy between OP's article and the book is what happens before the spiders cultivate the ants, and the knowledge that the spiders take from them. Before the spiders were building ant-computers the collective ant colony was making its own conscious decisions and was actually winning the intelligence race for a while.
But the observation still doesn't answer whether either a group of humans, a digital neural network or an ant colony actually is conscious.
"conscious" is a language construct that defines an abstract concept that is only vaguely defined. Why? Because we haven't found a falsifiable theory that allows us to objectively state "x or y are conscious or not conscious"
One big hurdle is that if you assert that an ML algorithm is conscious, then you open the door to all kinds of implications that many humans don't like or disagree on. Such as whether we are unique and special, or that we may also be deterministic automatons and free will doesn't exist.
At that point, you get caught up in modern philosophical debates which have started with Kant and Hume.
That's only scratching the surface of the implications. The real question arises when we have to decide what moral rights consciousness proffers.
A lot of arguments for and against vegetarianism, for instance, focus on whether animals suffer in a way that we would understand as suffering. A lot of arguments for terminating a comatose life center around the lack of consciousness.
If you suddenly declare that a piece of software is conscious, you have to grapple with what it means for all these questions—and for that matter with what we do with the software itself. Is the software enslaved, having to perform the same thing over and over again? Does turning it off kill it? Does it want anything? Specifically, something other than what its lot is? Should it have a voice in deciding how it is used?
Should it get the vote?
Should agglomerations of humans? Say, corporations?
Should they get free speech?
Sci-fi books have explored a lot of these subjects, of course, but reality will be different, and we'll have to deal with it on its own terms.
But we could quickly settle this question if we checked some things first: does it need anything (have necessities)? Does it learn? Can it act on its environment? Can it evolve? Is it part of a group, or alone? If the answers are no, then it's probably not conscious.
All the conscious agents I know have one and the same ultimate goal - to exist and reproduce themselves. All their other goals are just subgoals.
By my definition a virus and AlphaGo which was trained as a population of interacting agents with a winner survives rule are both conscious of their environment, which is just a board for AG.
If life was a cartoon maybe.
Although I'm sure many people would argue that if it believes it's conscious, we have no right to dispute it, since we cannot know, and it may be essentially the same as us.
Ever threatened legal action against a multinational corporation?
I lean on defining the concept based on adaptation to the environment - consciousness is the function that adapts an agent to its environment. Its purpose is to safeguard the agent against external perturbations and achieve its own goals.
For example, how would we get food without consciousness? How would reproduction work? Consciousness has a vital role here. Evolution works at a slow pace; consciousness is required for quick adaptation, otherwise the penalty is death.
I think consciousness is being made into something transcendent, or unfalsifiable, or essentially different than physical processes because we like to make ourselves feel special in comparison to the world.
(Why should I care what happens to other people, and try to avoid harming them? Because they're clearly conscious, capable of joy, suffering, etc. What about other animals, like dogs and cows and chickens? That seems pretty obvious too, given our biological and behavioral similarities. What about molluscs? Hmm, there are some potentially important differences there. What about rocks? I can't know, but they have none of the features I usually associate with consciousness, and if they do have internal experience I have no idea what determines it, so I might as well continue to assume not.)
As soon as you define consciousness in functional terms, you make it tractable, but you also detach it from the thing we were originally wondering about. (The problem is that purely materialist or functionalist explanations always run into the question 'but why does there have to be internal subjective experience associated with these things/events/systems, and not others?'. Consciousness itself, in the 'qualia' sense, never plays a functional role in these explanations -- and if we weren't already assuming its existence, they would give us no hint that it exists.)
It's also fun to think of yourself as a giant city of smaller organisms - you've got white blood cell cops, you've got red blood cells which are basically cars on arterial highways, you've got factories and garbage trucks and libraries.
I never took any Chinese lesson. However, suppose I obtain a huge instruction manual that tells me exactly which sequence of Chinese characters I should use to reply to any sequence of Chinese characters you (a native Chinese speaker) give me. Do I "know/understand" Chinese?
How big is the book?
Seems like a nitpick about a theoretical concession in a thought experiment, right? No, it's actually very important.
If the book contains simple instructions like "if you receive character A then reply with B", etc., in a Choose-Your-Own-Adventure style, then it would have to be exponentially large. Too large to be able to carry out more than a few words of conversation, and certainly you would not be able to converse about math, e.g. you're asked what 一 plus 二 is and you say 三, etc. The book would rapidly become larger than the planet Earth at any reasonable conversation depth. This is not just a practical problem.
Okay, so the book isn't a simple lookup table, fine. You'll have to have a piece of scrap paper and write down things to refer to them later. But once you do that, it's obvious that you've created a system of memory. It undermines the whole force of the thought experiment. The book was supposed to contain all the brains and you were just supposed to mechanically follow along without understanding what you're doing. But now you're doing complicated things like solving math problems and then converting the answers into Chinese. In order to give the book a finite size, we've given you a lot of work to do, and now it's totally reasonable to say that you do know how to write Chinese. You write Chinese by looking it up, the same way that real translators do!
The Chinese room thought experiment is much discussed but it's a pretty poor thought experiment. It handwaves away all the important parts of language in order to make an inscrutable point about machine intelligence. It neither sheds light on machine intelligence nor language.
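To put a rough number on the lookup-table explosion (both figures are assumptions: ~3,000 common characters, ~10^80 atoms in the observable universe):

```python
CHARS = 3_000     # assumed: roughly the set of common Chinese characters
ATOMS = 10 ** 80  # assumed: atoms in the observable universe

# A pure lookup table keyed on the entire conversation so far needs
# CHARS ** n entries to cover every possible exchange of length n.
depth = 1
while CHARS ** depth < ATOMS:
    depth += 1

# The table outgrows the atom count of the observable universe after
# about two dozen characters of conversation history.
print(depth)  # -> 24
```

So even under generous assumptions, a pure table can only sustain a couple of sentences before it stops fitting in the universe, which is the point about needing scratch paper (i.e. memory) above.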
Let me ask you another question: when I throw a ball at you, your subconscious brain has to solve a differential equation to know how to use your muscles to catch that ball.
Do you know how to catch a ball if you don't know the math but still can catch it?
Or the system knows how to translate from English to Chinese. Human language isn't simply about following translation rules, though. It's also about communication and expressing thought. Or participating in language games.
> Let me ask you another question - when I throw a ball at you your subconscious brain has to solve a differential equation to know how to use muscles to catch that ball.
Why suppose the neural network needs to solve differential equations? Is that the only way to learn to catch a ball?
Yes. You need to decide where to put your hand, how to orient it, etc in reaction to the ball movement.
The answer is a solution to the differential equations, and you cannot consistently get a good answer to an equation if you don't solve it.
The solution probably isn't symbolic but numeric, but that hardly changes anything - you still need a lot of math to consciously solve such an equation numerically.
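For what it's worth, the "numeric rather than symbolic" point can be sketched directly: a tiny Euler integration of the ball's equations of motion (no air resistance; all throw parameters invented) that predicts where the ball lands:

```python
# Hedged sketch: the computation a catch implicitly requires, done
# explicitly as a tiny Euler integration of dx/dt = vx, dy/dt = vy,
# dvy/dt = -g until the ball reaches the ground.
def landing_x(x0, y0, vx, vy, g=9.81, dt=1e-4):
    x, y = x0, y0
    while y > 0:
        x += vx * dt
        vy -= g * dt
        y += vy * dt
    return x

# A ball released 2 m up at 10 m/s horizontal, 5 m/s vertical lands
# about 13.3 m away (matching the closed-form quadratic solution).
print(round(landing_x(0.0, 2.0, 10.0, 5.0), 1))  # -> 13.3
```

Whether the brain does anything like this stepping is exactly what's in dispute, of course; the sketch only shows what "solving it numerically" means.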
> Or the system knows how to translate from English to Chinese. Human language isn't simply about following translation rules, though. It's also about communication and expressing thought. Or participating in language games.
The human doesn't get to decide what to do in the Chinese room experiment; (s)he is just a dumb CPU that does table lookups in a huge book. Every possible response to a sequence of previous messages is already written in that book (so it must be quite big ;) ).
You could completely automate it, remove the human and nothing would change from the outside.
Do you agree at least that a xor gate is doing math?
Right, but my point is that following a bunch of translation rules from one language to another is not the same thing as understanding a language. That's not what humans are doing when they use language.
> The solution probably isn't symbolic but numeric, but that hardly changes anything - you still need a lot of math to consciously solve such an equation numerically.
This is assuming the brain is doing math. Even deeper than that, it's assuming that math is something more than a specialized human language. That math exists in nature to be harnessed by neurons.
Also the Chinese room would express emotions, do word games etc, whatever's appropriate in the context. The people doing the lookups wouldn't know that they are writing a joke, but who cares about them?
> This is assuming the brain is doing math.
Well, of course the brain is doing math. See: 234 - 123 = 111. This is math; my brain did it.
> Even deeper than that, it's assuming that math is something more than a specialized human language
If neurons consciously arriving at solutions to math problems counts as math, then why doesn't neurons arriving at the same solutions subconsciously count too?
I posited this years ago, and the conclusion I reached is that what we call "math" is the linguistic expression of what is already intuitive to us.
The other example I used was a mother cat will seek out a stray kitten...but how can she possibly count & keep track of the number of kittens she should have? The answers are at once obvious but deceptively difficult to put into words.
> that tells me exactly which sequence of Chinese characters I should use to reply to any sequence of Chinese characters you give me
Then the question becomes - can a language model understand the meaning of the text it generates? Or does it assign its own meaning to the data?
I'd argue that there's a major difference between consciousness and self awareness or sentience.
He proposes that all sentient beings are networks of conscious agents, the simplest conscious agent being binary (it has a world, and can only act in two ways).
Any composition of conscious agents is itself a conscious agent.
Interestingly, "the world" of each conscious agent might be only other conscious agents.
That's a very reductionist viewpoint and I'm not sure how correct it is. It's like saying water is wet because its hydrogen and oxygen atoms are wet. But who knows, it might turn out to be true. It will be interesting to know what the real cause of consciousness is, though. Hopefully the question of consciousness will be definitively answered within our lifetimes.
Then again it remains very fashionable to steal ancient ideas from East and market them as "brand new" genius inventions of Westerners. The amount of uncited plagiarism that occurs in this manner is quite simply astonishing.
What is "stealing" in a world where everything is one and the conscious universe is trying to understand itself in the best way? Why would it assign negative connotation to copying information, if that information is in fact truthful!?
This was surprising to me. Until now, I was under the impression that all ants used pheromones, which leads to coordination not with each other but through the environment.
At some point in the Children of Time book, ant colonies become domesticated and are cultivated into general computation "devices".
I definitely recommend the first book, but have yet to finish the second one.
Essentially, humans are just a more advanced version of ants. No one understands the vast amount of knowledge we've gathered, but this knowledge has allowed us to be able to sustain our vastly growing population numbers. Without this 'specialization of knowledge' or given some apocalyptic scenario, our ability to sustain our numbers would drastically decrease.
I didn't enjoy To Kill a Mockingbird, but I think it's good for kids to read it.
Individual ants... except for the queen. How likely is it that the queen is acting as the memory of the colony?
(I don't think this is actually the answer—I agree that it's more likely that the memories are being held in the collective—but it has to be ruled out somehow.)
I remember reading that ants counted steps to find their way home. Perhaps I'm remembering incorrectly or maybe it was false?
Either way, cool article. Emergence is a cool property that shows up everywhere!
There are no revelations about memory or collective consciousness to be found in this article. Every Occam in the world would infer human-undetectable scent trails from the evidence presented here, not some cosmic revelation about “how memory works” like the title and first paragraphs of the article heavily imply.
@pg @dang this title, and the article in general, is outrageously misleading.
Ants produce these higher level patterns not because there's some magical thing that "emerges", but because they are precisely evolved to coordinate with each other to create those patterns.
It seems more and more to me like this hierarchy goes all the way up (to the single superorganism that is the universe) and all the way down (to the presence or absence of fermions in particular states, inducing a duality and thus a basis of computation via the Pauli exclusion principle). If this hierarchy is consistent across all scales, then we can conclude that if consciousness exists at one level then it exists at all levels. Sentience/awareness is a different question, mind you, and "memories" are associated with awareness of past events.
I'm also starting to believe that "consciousness" in terms of directed will doesn't truly exist, and that only "experience" exists. The rest (wants, desires, opinions, will) are electrochemical reactions which respond to local changes in the environment, although we experience them as much more than that for ultimately self- (and macrosystem-)serving reasons. These electrochemical reactions are present because they have over time become more important in the processes necessary for the propagation of whatever they're supporting. This is all very vague and hand-wavy, but this article on the thermodynamic theory of life might be clearer.
In the discussion yesterday about this topic on HN I brought up the example of ant colonies in an attempt to spur discussion in this direction.
Do you have any other suggested reading?
Beyond that, unfortunately most of my exposure to the ideas related to panpsychism comes in fits and bursts, and usually pieces which aren't about panpsychism inspire my pondering more. Subjects include: animal consciousness/experience; the apparent intelligence of complex systems, whether man-made or independently-arising; autonomic, pre-conscious behavior in humans; computational theory, especially in physical systems; emergence and complexity writ large; complex adaptive systems in general.
Unfortunately I haven't done a lot of seeking out books on this topic. Nautilus and Aeon magazines (the latter is linked in OP) have thought-provoking stuff which touch on these topics more often than you'd think.
That is patently wrong. A corporation exists to distribute risk and accomplish a task, ideally in a way that creates value for backers; but it is not a given that all profit-creating avenues of behavior are desired or worth the egregious cost in negative externalities, or even that a corporation must generate profit.
And unfortunately those control mechanisms you mention seem to be failing with alarming regularity due to regulatory capture.
I think that you are technically right, in that there can exist not-for-profit corporations. But for-profit corporations, which is what most people think about when they say "corporation" are generally legally required to put profit above any other value, assuming they are operating legally.
I completely agree with you that this is not a necessary way of organizing human society, and we are seeing more and more that the current for-profit system is disastrous for the environment and for society in general - especially given the inefficiency of regulation that you also mention.
No, they aren't. They do have a fiduciary duty to act in the interests of shareholders, which means not taking actions that are unexpected and obviously harmful to shareholders, like paying all the company's revenues to another company wholly owned by the CEO. But that duty to shareholders actually obligates them to take into account factors other than profit, whether that's mitigating risks or abiding by a shareholder resolution to follow a 'socially responsible' business practice that costs a lot of profit. Management also has enough discretion to design and follow its own 'socially responsible' business practices, or to decline to enter a profitable sector it doesn't want to be involved in. No executive has ever been penalised for not putting profit above every other value.
Companies' pursuit of profit is much less driven by legal obligation and much more by the fact that greater profitability tends to generate greater returns for management as well as shareholders.