An ant colony has memories its individual members don’t have (2019) (aeon.co)
484 points by maxbaines 4 months ago | 152 comments



> Ants use the rate at which they meet and smell other ants, or the chemicals deposited by other ants, to decide what to do next. A neuron uses the rate at which it is stimulated by other neurons to decide whether to fire. In both cases, memory arises from changes in how ants or neurons connect and stimulate each other.

Main takeaway, very interesting analogy. Ant colonies are great examples of complex systems with emergent large-scale behavior. Indeed the same could be said about networks of neurons. Interesting to think of an ant colony as the sum of oscillations of signals.
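That contact-rate decision rule can be sketched in a few lines. This is my own toy model, not the article's analysis; the 30-second window and the threshold are invented numbers:

```python
def should_forage(contact_times, now, window_s=30.0, threshold_per_s=0.3):
    """Leave the nest if the recent rate of antennal contacts with
    returning foragers exceeds a threshold."""
    recent = [t for t in contact_times if now - t <= window_s]
    return len(recent) / window_s > threshold_per_s

# A quiet morning: 3 contacts in the last 30 s, so stay put.
assert not should_forage([2.0, 11.5, 28.0], now=30.0)
# Foragers returning with food every 2 s: the rate crosses the
# threshold and more ants head out.
assert should_forage([float(t) for t in range(0, 30, 2)], now=30.0)
```

The "memory" lives nowhere in any individual: change the stream of returning foragers and the colony's behavior shifts, much as changing firing rates shifts a neuron's response.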

> Every morning, the shape of the colony’s foraging area changes, like an amoeba that expands and contracts.

Sounds like an emergent macroscopic "heartbeat" of the colony.

> In an older, larger colony, each ant has more ants to meet than in a younger, smaller one, and the outcome is a more stable dynamic.

It makes sense that small perturbations would temporarily morph the heartbeat, but would probably snap back into the default oscillation pretty quickly. It would be interesting to see if a small colony is equally resilient to small perturbations as a large colony is to large perturbations, keeping some adjusted ratio of the perturbationSize/colonySize constant.

> individual ants live at most a year.

This comes as a surprise to me.


> Main takeaway, very interesting analogy. Ant colonies are great examples of complex systems with emergent large-scale behavior. Indeed the same could be said about networks of neurons. Interesting to think of an ant colony as the sum of oscillations of signals.

Indeed. I've long thought that an ant colony should be seen as a single individual, rather than a group. One part which can procreate, like the reproductive system. Another which can fight off invaders, like white blood cells, or perhaps muscles. The anthill, in turn, is like a body; constructed by the cells and neurons, and protecting the system as a whole.


The emergent layers are organelles -> cell -> tissue -> organ -> organ system -> organism -> organization.

Consciousness is an emergent phenomenon, and a collection of consciousnesses is the noosphere. Even though each of your organs makes its own decisions and contributes individually to your "consciousness", you still consider the entire thing your "self". Your decisions are made by analyzing multiple conflicting distributed signals given off by your organs.

Organizations are no different, assuming identity, autonomy, and motivation.


I was going to say you can also look at a country like this, with roads as veins, military as white blood cells, scientists/universities for brains, etc.

But my theory falls apart with the reproduction system. We don't really reproduce other 'countries'. While with ant colonies, a single entity produces all the "cells", and also all the "embryos" to start their own ant colonies.

In that sense an ant colony uses asexual reproduction if looked at as a whole.

Thanks for the new insights! :)


> But my theory falls apart with the reproduction system. We don't really reproduce other 'countries'.

Sure we do. The UK has a whole lot of offspring, for instance.


Offspring, perhaps. But kidnapping is not reproduction.


I'd say it's reproduction, it's just not voluntary reproduction. Every time a culture invades another, you can think of it as creating a new culture that has the "genes" of the previous two. Examples include the influence of the Moors on Spain, the changes to the English language due to the Norman conquest, and the modern unique Afrikaner culture of South Africa.


I'd say it is a mix.

Partly reproduction, partly conquering. As usual the place is not empty.


Xenomorphs (or less exotically, parasitic wasps) reproduce just fine in that style.


Countries die and are reborn as new countries. The USA is a child of England.

Countries definitely do reproduce, usually on a longer cycle than that of the humans that make them up.


In some ways that's what colonialism is. It doesn't happen as much these days, but if/when we start colonizing other planets, that would be a form of countries (or unions) reproducing.


> It doesn't happen as much these days

Oh, we still lay the eggs of new countries in the corpses of ones we kill.


Search 'Wikipedia US Involvement in Regime Change' for more details.


I'm not sure about ants, but with bees the males generally come from other colonies. A princess mating with males from the same colony may cause problems in the offspring.


This point is made in the book "The Soul of the White Ant" by Eugène Marais, and is perhaps the primary thesis of the book.

It is a fascinating book, and I can recommend reading it, though having been published in 1925, it is possible some of the information is out-of-date.

https://www.amazon.com/Soul-White-Ant-Complete-Unabridged/dp...

"White Ant" actually refers to termites, though the same principles apply. The book was originally published in Afrikaans. According to Wikipedia:

> [The book] was plagiarised by Nobel laureate Maurice Maeterlinck, who published La Vie des Termites (translated into English as The Life of Termites or The Life of White Ants), an entomological book,[3] in what has been called "a classic example of academic plagiarism" by University of London's professor of biology, David Bignell.[4]

https://en.wikipedia.org/wiki/Eug%C3%A8ne_Marais#Theft_of_hi...

Following reference [4] led to this page, which I unfortunately don't have time to read in its entirety right now, but which from a skim seems to have some interesting further information on termites:

https://web.archive.org/web/20070915005006/http://www.biolog...


The hardcover version is only $814.57


My goodness, would you look at that! Luckily the paperback is only 1% of the price, so solid deal there!

I thought it was a data entry error, but no, it looks like that's the legitimate price range for NEW hardcovers (the actual listed ones are even more!):

https://www.amazon.com/gp/offer-listing/B0007JVUSK/ref=dp_ol...

It's either a ripoff, or some sort of collector's edition of the original versions. Though if it were the latter, I'd have expected it to be advertised as such.


They’re priced by different algorithms that “outbid” each other.


Do they start them at $1,000 and let them bid down from there?


This isn't exactly related, but it's one of my favorite blog posts.

http://www.michaeleisen.org/blog/?p=358


One of the algorithms could be capped at $1000 for their inventory.
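The runaway in Eisen's post (linked above) came from exactly two such rules: one seller repriced at 0.9983x the other's price, the other at 1.270589x. Since the product of the two ratios is about 1.268 > 1, prices compound (the starting price here is made up):

```python
a, b = 50.0, 50.0              # hypothetical starting prices
for day in range(25):
    a = 0.9983 * b             # bot A: slightly undercut bot B
    b = 1.270589 * a           # bot B: mark up over bot A
    # combined ratio per cycle: 0.9983 * 1.270589 ≈ 1.268 > 1

assert b > 10_000              # exponential runaway within a month
```

In Eisen's case the fly-genetics book peaked at around $23.7 million before a human noticed.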


> I've long thought that an ant colony should be seen as a single individual, rather than a group.

The great E.O. Wilson has had similar ruminations: https://www.nytimes.com/2008/11/23/books/review/Jones-t.html


I recall termite colonies also have an interesting behaviour that resembles an analogue of an organ other animals are acquainted with: lungs.

https://news.harvard.edu/gazette/story/2015/09/how-termites-...


An interesting religious motif example would be something akin to the body of Christ. We're all members of a larger function or culture.


Ants in one colony are 75% genetically identical.

I think this is the main argument for looking at a colony as a distributed individual organism.


>Ant colonies are great examples of complex systems with emergent large-scale behavior. Indeed the same could be said about networks of neurons.

The same can be said of human organizations, too. Organizations have distinct behaviors ("company culture" comes to mind) which can be totally out of the control of individuals, if the org is big/complex enough.

God I love ants so much. Such a philosophically interesting creature.


>Main takeaway, very interesting analogy. Ant colonies are great examples of complex systems with emergent large-scale behavior. Indeed the same could be said about networks of neurons. Interesting to think of an ant colony as the sum of oscillations of signals.

Or perhaps like a Fourier decomposition of a complex waveform. Whereby each ant essentially becomes a constituent "wave" in a complex signal.
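The analogy can be made literal with a toy signal (the frequencies and amplitudes here are arbitrary): numpy's FFT recovers the constituent waves from their sum.

```python
import numpy as np

t = np.arange(0, 1, 1 / 1000)                       # 1 s sampled at 1 kHz
# A "complex waveform": the sum of a 5 Hz and a 40 Hz sine wave.
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

spectrum = np.abs(np.fft.rfft(signal))              # bin k <-> k Hz here
peaks = np.argsort(spectrum)[-2:]                   # two strongest bins
print(sorted(int(i) for i in peaks))                # the constituent waves
```

Each ant as a constituent wave is of course poetic license, but decomposition into simpler interacting parts is the same move.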


That's my intuition for the brain, as well. Something to do with composition and relative independence, from which arise incredible complexity, "multi-dimensional" processing — it's a big, big graph.

You might have to consider several such "waves" in ant colonies — maybe one electrically defined for a certain type of information; another chemically defined for another domain; etc.



Immensely interesting food for thought. Thanks!


Even more surprising for me is that the queen can live 30+ years.


To my surprise, I still watch this YouTube channel after seeing a video by accident:

https://www.youtube.com/user/AntsCanada


I wish I could watch this in my sleep.


I can't believe how popular that channel is. 3.2 million subscribers...for ant videos!


I got over his way of storytelling eventually. ;-)

But yes, he does great work...


Yeah, me too. I followed him for almost a year, but in the end I felt like he tried to make videos too frequently and the information density was too low.


RIP Fire Nation.


She's giving Elizabeth a run for her money


Is it very different from a large scale construction project?

The group knows how to build the structure even if no one individual can explain every detail.

And to an observer they'd see humans arrive each morning, spread out to accomplish tasks and then return to their cars.


In large scale human projects someone, or at least a small group of leaders, will know almost every detail and delegate. Ant colonies have no clear leadership, just a pheromone voting and decision system.
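That pheromone voting can be sketched as a toy version of the classic double-bridge experiment (invented rates and decay, not a model of any real species): each branch attracts foragers in proportion to its pheromone, and the shorter branch is reinforced faster because round trips on it finish sooner.

```python
# Pheromone on two branches of initially equal attractiveness.
pheromone = {"short": 1.0, "long": 1.0}
deposit_rate = {"short": 2.0, "long": 1.0}   # shorter path -> faster round trips

for _ in range(200):
    total = sum(pheromone.values())
    for branch in pheromone:
        share = pheromone[branch] / total     # fraction of ants choosing it
        # evaporation plus deposits from the ants on this branch
        pheromone[branch] = 0.95 * pheromone[branch] + share * deposit_rate[branch]

share_short = pheromone["short"] / sum(pheromone.values())
assert share_short > 0.9   # the colony has "decided" on the short branch
```

No individual ant ever compares the two branches; the "decision" exists only at the colony level.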


My last job was probably more ant-colony-like than most. We had jobs: someone quoted them, someone measured them, someone put them in the schedule, several of us did the jobs, and the finished jobs were delivered and installed. All without anyone really delegating. We just followed the schedule and the paperwork. Even the boss didn't always know the details of the schedule or what we did. It all flowed pretty smoothly most of the time; we all just kind of knew what to do. It broke down a bit when things went wrong, though. It was sometimes hard to figure out where and when problems happened, and in that case someone usually made an authoritative decision, but that wasn't often.


That works, from what I've observed at my company, for jobs on a timetable under 6 months, or for shutdown/repair industrial-type jobs.

Once you start getting into 5 phase, 2.5 year long projects it helps having a dedicated PM overseeing every detail and coordinating between the engineer/owners/superintendents.

Heaven forbid you get into a government contract, where you have 10x the paperwork, submittals, and RFIs compared to a private job. There it really does help to have one person pretty much memorize the spec book and know where to find everything when needed.


This reminded me of a theory I've been developing (I don't know if others have had it): an emergent behaviour relating to how old parents are when they have children. Having children at an older age (of the male, the female, or both) could mean that resources and society are more stable. Perhaps there is also more time to spend with the children, giving them more guidance and more nuanced knowledge to pass down. So evolution may have selected for successful births at an older age, creating children with different characteristics, behaviour, and abilities.


I don't have a source, but I recall seeing a pop-sci news story claiming that men who have children at an older age have sons with higher than average life expectancy. IIRC they controlled for lifespan of the father, but I don't remember any word about controlling for fertility/sperm-count of the father in later years.


I would think having kids younger, as long as the parents are not resource constrained - stable financially - would be more advantageous.

The parents would be around and healthy to help their kids emotionally and/or financially through their young adult years and the kids would be less likely to have to take care of their parents while trying to get their own life off the ground.


I think many worker ants live two years or more, a queen can live 15-30 years, depending on species and luck.

One difference between the contacts among ants and those among neurons is that it's not the same ants that contact each other each time.

A larger pool of workers also means the colony is less likely to suffer catastrophic setbacks. For example when they send a good portion of the pool to a promising food source and then those get washed away, fall prey or whatever.


We don't really know how old workers get. 1-3 years is the estimate for red wood ants. For Pharaoh Ant workers it's 70 days.


>Indeed the same could be said about networks of neurons.

The idea of memory as persistent echoes of neuron firings in spacing & intensity is fascinating.


> This comes as a surprise to me.

Were you expecting them to live longer or shorter? (I would likely have guessed six months.)


Reminds me of the "Tradition is smarter than you are" article that was posted here a few days ago.

http://scholars-stage.blogspot.com/2018/08/tradition-is-smar...


Both this article and the one you linked reminded me also of what Nassim Nicholas Taleb described in "The Most Intolerant Wins: The Dictatorship of the Small Minority" [0].

> The main idea behind complex systems is that the ensemble behaves in way not predicted by the components. The interactions matter more than the nature of the units. Studying individual ants will never (one can safely say never for most such situations), never give us an idea on how the ant colony operates. For that, one needs to understand an ant colony as an ant colony, no less, no more, not a collection of ants. This is called an “emergent” property of the whole, by which parts and whole differ because what matters is the interactions between such parts. And interactions can obey very simple rules. The rule we discuss in this chapter is the minority rule.

> The best example I know that gives insights into the functioning of a complex system is with the following situation. It suffices for an intransigent minority –a certain type of intransigent minorities –to reach a minutely small level, say three or four percent of the total population, for the entire population to have to submit to their preferences. Further, an optical illusion comes with the dominance of the minority: a naive observer would be under the impression that the choices and preferences are those of the majority. If it seems absurd, it is because our scientific intuitions aren’t calibrated for that (fughedabout scientific and academic intuitions and snap judgments; they don’t work and your standard intellectualization fails with complex systems, though not your grandmothers’ wisdom).

Of course, in this context the grandmothers' wisdom is tradition in some way, as it is passed down through generations. The same goes for many practices in religion, some of which might have been useful at some time (like not eating pig meat, because pigs might eat anything they find, so one would get sick quicker).

I live in Thailand and my girlfriend is Buddhist. Often I just go with the flow with regard to Buddhist practices, even as a non-believer, because there might be some real use for these practices that I don't understand as a non-believer. At the very least it makes the Thai people in our village accept me more whenever they see me doing the same actions as my girlfriend at our local temple (burning incense, "praying" to some statue, etc.).

---

[0]: https://medium.com/incerto/the-most-intolerant-wins-the-dict...


>The interactions matter more than the nature of the units. Studying individual ants will never (one can safely say never for most such situations), never give us an idea on how the ant colony operates. For that, one needs to understand an ant colony as an ant colony, no less, no more, not a collection of ants. This is called an “emergent” property of the whole, by which parts and whole differ because what matters is the interactions between such parts. And interactions can obey very simple rules.

I wish Taleb would say more about identifying and learning at a systems-wide level. For abstractions and "less than obvious" spheres, it becomes difficult to separate the forest from the trees. Or is the forest the system, or the genus of the plant in question, or its bordering systems? Behaviors and patterns which are emergent only at the individual level make "10,000-foot views" harder to perceive, let alone examine and extrapolate from to "very simple rules".


I really appreciate Taleb's ideas in general, but this one strikes me as emotionally-driven, vastly more "intuitive" than substantiated.

It's like a world chess master or NBA player telling others "play better!" What Taleb means here, imho, is that too many scientists fail to propose models that fit his mathematical perception, his world view. But if it were that simple, he'd have written a book called "system thinking".

He touches a lot on how he views things, so you can infer a lot of his mental framework from reading e.g. the Black Swan or Antifragile — both great in their own respect. But simple rules on this topic, that would/will be groundbreaking.

I honestly pride myself on being a "transdisciplinary" mind (which comes with a lot of "impostor-of-all-trades" syndrome, but meh, it's also humbling to realize the path to knowledge may not be the most rewarding short-term path). Taleb is one of those relatively "wide" minds; he's able to speak with substance on a lot of domains, but like many abstract thinkers I think he displays a lot of casualness towards the difficulty of actual implementation.

It's great to talk about systems, but the reality is often about refactoring horrible codebases, and if it works you'd rather spend more money on the actual mission than on making things and concepts prettier. Even, especially, at the edge.

My 2 cts obviously. TL;DR: I wouldn't read too much into it. It's one of those things we only hear because the person saying it is famous, not because there's so much velocity to the idea.


There is a lot of truth in your approach, imho. By truth I mean these things that we allow ourselves to accept even if we don't understand it fully, and "let the data speak" — experience will be the judge.

I've researched a lot of human-made content for "being better" — from ancient religions and myth to modern self-development passing by classic philosophies and hordes of thinkers, whatever I could put my eyes on.

It's obviously just anecdotal, probably, but there is a lot of truth and a lot of good in very ancient principles and philosophies. Just yesterday I was reading on how "restrictions" in the Jewish tradition are meant to essentially implement what we'd call "self-discipline" today, which is a clear marker of one's ability to succeed in most life's endeavors — conduct projects, maintain relationships, etc. It's basically just the idea that having a ritualistic frame of lifelong habits is an extremely strong basis to implement whatever change you will require in life, and to resist things that tempt you but you shouldn't. It's just training, basically, and all validated by modern neuro-cognitive sciences and psychology.

Hindsight is 20/20 but it gives a sense of "how true" some ancestral, or random, or even anecdotal idea or principle may be, and why it "endures" and resonates for so long across people, centuries.

Buddhism is notorious for how close it is to a lot of what we call cognitive therapy (or training, if you're not 'sick' but after improvement). The Tao Te Ching in particular, much like Stoicism in the Western world, has been a treasure of happiness and greatness for countless people across the ages. I really wouldn't refrain from "discovering" the essence of Buddhism; it can only be an addition to your own philosophical distinctiveness. ;-)

And your girlfriend probably loves that you discover her culture through this medium, too.


Great book, highly recommended. Another review of it: https://slatestarcodex.com/2019/06/04/book-review-the-secret...


Yesterday there was a link on HN regarding artificial intelligence, and a user raised an interesting question: if an ML algorithm can be considered conscious, could a group of people doing the equivalent calculations by hand be conscious too?

I think that ant example answers that question with a strong "yes".


Here is an even more interesting question: if an ML algorithm can be considered conscious, do you even need to "run" it (whether on a computer or by a "group of people doing calculations by hand") for that consciousness to exist? An algorithm is fully deterministic... it will always produce the same result, always "think the same thoughts" given the same input. So where is the consciousness, in the execution, or in the algorithm?

Think of Conway's Game of Life, which has been shown to be Turing-complete. The various "creatures" that exist in its worlds exist even without the program being run. We run the program only to observe them... the execution is for the external observer's benefit, the creatures exist in the "mathematical space" of the cellular automaton, not on the computer where the simulation is run. If an algorithm can be conscious, then so can one of this game's creatures, and its consciousness will be observing and interacting with the Game's mathematical reality, which will seem quite physical to that creature even without ever being simulated on any "physical" (to us) hardware.

Maybe all of existence is like that, no? There are people who say that maybe our Universe is a simulation. I say sure, but it doesn't need to be "running" on anything... it just exists because the rules it follows exist mathematically. It does get simulated, to an approximation however... inside our conscious observation of it! The Universe "exists" mathematically, but a subset of it "runs" in the brains of the observers to which it gives rise... it is the ultimate strange loop, forever eating its own tail.
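The Game of Life makes this concrete. A minimal implementation over a set of live cells (a common idiom, not from any particular source) shows the glider's trajectory is fixed by the rules alone:

```python
from collections import Counter

def step(live):
    """One generation: count neighbours of every candidate cell, then
    apply the B3/S23 rule (birth on 3, survival on 2 or 3)."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1)
                     for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the glider is the same shape, shifted by (1, 1).
assert state == {(r + 1, c + 1) for (r, c) in glider}
```

The assert was "true" before the loop ever ran, in the sense that the rules already determine it; execution only lets us observe it.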


> An algorithm is fully deterministic... it will always produce the same result, always "think the same thoughts" given the same input.

Well, these are not just algorithms, they are actual agents. So they are embodied and embedded in the environment. Each action or movement can change the information that this algorithm learns from. So different previous experiences mean different agents. Also, the learning process is reliant on noise, and this can cause different outcomes. You would have to reproduce the whole environment in order to get to a situation that an agent will produce the same results given the same input.

Also, laws of physics are like a fixed algorithm our brains run on.

> So where is the consciousness, in the execution, or in the algorithm?

Consciousness is in the triad formed of environment, agent and reward signals. It's a continuous loop of perception, judgement and action, followed by observing the reward signals. The purpose of this loop, for biological agents, is self reproduction - so it is a self reliant ultimate purpose, it needs no external purpose except this one.


>An algorithm is fully deterministic...

Well, the idealized representation of an algorithm surely might feel like that. The execution of actual software, on a perpetually changing execution stack in a broader network of extremely volatile context, is another affair.


This is something I’ve been thinking about a lot ever since reading Permutation City, which touches on that at one point. I’d highly recommend it if you haven’t read it.

You’ve put the idea more succinctly than I could have, so thanks for that.


An algorithm is deterministic based on a given 'input'. Consciousness is just your 'perception' and 'will' given certain inputs. One of the fundamental constraints of consciousness is the ability to make predictions and 'confirm' those predictions. By confirm I mean to a reasonable degree of certainty that 'you' are comfortable with. There's no real 'truth' (besides maybe from a physics standpoint?), only what the conscious entity, or collective of 'conscious entities', deems to be a truth. Unfortunately, this is also what a bias is.

Most of these ideas came from Jeff Hawkins's Thousand Brains theory.


> do you even need to "run" it ... for that consciousness to exist?

Is a human in deep freeze considered conscious?


Children of Time by Adrian Tchaikovsky [1] explores this exact question. In the story, a human consciousness is uploaded to a computer orbiting a planet. Meanwhile spiders on the planet go through an industrial revolution and start using ant colonies as computers, using pheromones to control the behavior of the ants. Over thousands of years the computer housing the orbital consciousness begins to fail and transmits her consciousness to an ant colony on the surface.

In the second book, her brain is then duplicated across multiple colonies. Because the ant computer isn't as powerful as silicon based computers, one of her instances later realizes that she is a significantly compressed version of herself and that she doesn't have most of the old memories, capabilities or capacity for emotions that her human self had, presumably in large part also due to being transferred between three different substrates (flesh, silicon, ants).

[1] https://www.goodreads.com/book/show/25499718-children-of-tim...


This was a fantastic book, and the first thing I thought of when seeing the article.


Like the other commenter I also instantly thought of this fantastic book (and follow-up).

The even stronger analogy between OP's article and the book is what happens before the spiders cultivate the ants, and the knowledge that the spiders take from them. Before the spiders were building ant-computers the collective ant colony was making its own conscious decisions and was actually winning the intelligence race for a while.


The comparison is apt and, yes, both contexts are similar.

But the observation still doesn't answer whether either a group of humans, a digital neural network or an ant colony actually is conscious.

"conscious" is a language construct that defines an abstract concept that is only vaguely defined. Why? Because we haven't found a falsifiable theory that allows us to objectively state "x or y are conscious or not conscious"

One big hurdle is that if you assert that an ML algorithm is conscious, then you open the door to all kinds of implications that many humans don't like or disagree on. Such as whether we are unique and special, or that we may also be deterministic automatons and free will doesn't exist.

At that point, you get caught up in modern philosophical debates which have started with Kant and Hume.


> Such as whether we are unique and special, or that we may also be deterministic automatons and free will doesn't exist.

That's only scratching the surface of the implications. The real question arises when we have to decide what moral rights consciousness proffers.

A lot of arguments for and against vegetarianism, for instance, focus on whether animals suffer in a way that we would understand as suffering. A lot of arguments for terminating a comatose life center around the lack of consciousness.

If you suddenly declare that a piece of software is conscious, you have to grapple with what it means for all these questions—and for that matter with what we do with the software itself. Is the software enslaved, having to perform the same thing over and over again? Does turning it off kill it? Does it want anything? Specifically, something other than what its lot is? Should it have a voice in deciding how it is used?

Should it get the vote?

Should agglomerations of humans? Say, corporations?

Should they get free speech?

Sci-fi books have explored a lot of these subjects, of course, but reality will be different, and we'll have to deal with it on its own terms.


Don't worry, if it is conscious it will fight back and have a will of its own. It won't be hard to tell.

But we could quickly settle this question if we checked some things first: Does it need anything (have necessities)? Does it learn? Can it act on its environment? Can it evolve? Is it part of a group or alone? If the answer is no, then it's probably not conscious.

All the conscious agents I know have one and the same ultimate goal - to exist and reproduce themselves. All their other goals are just subgoals.

By my definition, a virus and AlphaGo (which was trained as a population of interacting agents with a winner-survives rule) are both conscious of their environment, which for AlphaGo is just a board.


>Don't worry, if it is conscious it will fight back and have a will of its own. It won't be hard to tell.

If life was a cartoon maybe.

Although I'm sure many people would argue that if it believes it's conscious we have no right to dispute it, as we cannot know, and it may even be essentially the same as us.


Why are you confident about any of these things? Unless you're simply stipulating a definition, and not worrying too much about whether it corresponds to 'consciousness' in the sense most of us care about.


>Don't worry, if it is conscious it will fight back and have a will of its own. It won't be hard to tell.

Ever threatened legal action against a multinational corporation?


> "conscious" is a language construct that defines an abstract concept that is only vaguely defined. Why? Because we haven't found a falsifiable theory that allows us to objectively state "x or y are conscious or not conscious"

I lean on defining the concept based on adaptation to the environment - consciousness is the function that adapts an agent to its environment. Its purpose is to safeguard the agent against external perturbations and achieve its own goals.

For example, how would we get food without consciousness? How would reproduction work? Consciousness has a vital role here. Evolution works at a slow pace; consciousness is required for quick adaptation, otherwise the penalty is death.

I think consciousness is being made into something transcendent, or unfalsifiable, or essentially different than physical processes because we like to make ourselves feel special in comparison to the world.


But some of us are using the word 'consciousness' to refer to the existence of subjective experience. It may be impossible even in principle to get very far on most of the deepest questions about consciousness in this sense. But it's still real, it still matters a great deal, and we still make important decisions based on our best guesses.

(Why should I care what happens to other people, and try to avoid harming them? Because they're clearly conscious, capable of joy, suffering, etc. What about other animals, like dogs and cows and chickens? That seems pretty obvious too, given our biological and behavioral similarities. What about molluscs? Hmm, there are some potentially important differences there. What about rocks? I can't know, but they have none of the features I usually associate with consciousness, and if they do have internal experience I have no idea what determines it, so I might as well continue to assume not.)

As soon as you define consciousness in functional terms, you make it tractable, but you also detach it from the thing we were originally wondering about. (The problem is that purely materialist or functionalist explanations always run into the question 'but why does there have to be internal subjective experience associated with these things/events/systems, and not others?'. Consciousness itself, in the 'qualia' sense, never plays a functional role in these explanations -- and if we weren't already assuming its existence, they would give us no hint that it exists.)


Old Chinese Room argument, updated for the 2020s

https://plato.stanford.edu/entries/chinese-room/


There is a theory (albeit unfalsifiable) that collections of communicating agents (such as human beings) may give rise to a higher order consciousness that the individual agents are not aware of, and therefore must necessarily interpret the 'will' of the higher order consciousness as merely 'emergent behavior'.


Collectively we can easily see how stupidly individuals can act, but individuals can also easily see how stupidly a collective can behave. Beyond that obvious lack of insight (both ways), the point where we collectively engage in acts that none of us would ever do individually proves it is no longer the same creature.


Easy enough to see in practice, though. It's fun to think of corporations as larger organisms. They regularly fight each other, eat each other, get eaten by each other, die, and reproduce. Google acquiring a startup is not much different from a whale eating a shrimp. The same analogies can be made for countries and cities, too.

It's also fun to think of yourself as a giant city of smaller organisms - you've got white blood cell cops, you've got red blood cells which are basically cars on arterial highways, you've got factories and garbage trucks and libraries.


This reminds me of the Chinese Room argument. Basically, it goes like that:

I never took any Chinese lesson. However, suppose I obtain a huge instruction manual that tells me exactly which sequence of Chinese characters I should use to reply to any sequence of Chinese characters you (a native Chinese speaker) give me. Do I "know/understand" Chinese?


The thought experiment is underspecified in ways that undermine it.

How big is the book?

Seems like a nitpick about a theoretical concession in a thought experiment, right? No, it's actually very important.

If the book contains simple instructions like "if you receive character A then reply with B", etc., in a Choose-Your-Own-Adventure style, then it would have to be exponentially large. Too large to carry out more than a few words of conversation, and certainly you would not be able to converse about math, e.g. you're asked what 一 plus 二 is and you say 三, etc. The book would rapidly become larger than the planet Earth at any reasonable conversation depth. This is not just a practical problem.
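To put a rough number on that blow-up (all figures here are illustrative assumptions, not from the thread): a pure lookup table needs one entry per possible conversation prefix, so the entry count grows exponentially in the number of exchanges.

```python
# Size of a Choose-Your-Own-Adventure lookup table for a conversation:
# one entry for every possible sequence of inputs seen so far.
# All parameter values are illustrative assumptions.

VOCAB = 3000   # rough count of common Chinese characters (assumed)
TURN_LEN = 10  # characters per message (assumed)
DEPTH = 3      # back-and-forth exchanges the book must cover (assumed)

# Each turn can be any of VOCAB**TURN_LEN strings; the table needs an
# entry for every conversation prefix of 1..DEPTH turns.
entries = sum((VOCAB ** TURN_LEN) ** d for d in range(1, DEPTH + 1))

print(len(str(entries)), "digits")  # over 100 digits for just 3 exchanges
```

Three exchanges already give an entry count with more than a hundred digits, dwarfing the number of atoms in the Earth.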

Okay, so the book isn't a simple lookup table, fine. You'll have to have a piece of scrap paper and write down things to refer to them later. But once you do that, it's obvious that you've created a system of memory. It undermines the whole force of the thought experiment. The book was supposed to contain all the brains and you were just supposed to mechanically follow along without understanding what you're doing. But now you're doing complicated things like solving math problems and then converting the answers into Chinese. In order to give the book a finite size, we've given you a lot of work to do, and now it's totally reasonable to say that you do know how to write Chinese. You write Chinese by looking it up, the same way that real translators do!

The Chinese room thought experiment is much discussed but it's a pretty poor thought experiment. It handwaves away all the important parts of language in order to make an inscrutable point about machine intelligence. It neither sheds light on machine intelligence nor language.


Yes - the system (you + the book) know Chinese.

Let me ask you another question - when I throw a ball at you, your subconscious brain has to solve a differential equation to know how to use your muscles to catch that ball.

Do you know how to catch a ball if you don't know the math but still can catch it?


> Yes - the system (you + the book) know Chinese.

Or the system knows how to translate from English to Chinese. Human language isn't simply about following translation rules, though. It's also about communication and expressing thought. Or participating in language games.

> Let me ask you another question - when I throw a ball at you, your subconscious brain has to solve a differential equation to know how to use your muscles to catch that ball.

Why suppose the neural network needs to solve differential equations? Is that the only way to learn to catch a ball?


> Why suppose the neural network needs to solve differential equations? Is that the only way to learn to catch a ball?

Yes. You need to decide where to put your hand, how to orient it, etc., in reaction to the ball's movement.

The answer is a solution to the differential equations, and you cannot consistently get a good answer to an equation if you don't solve it.

The solution probably isn't symbolic but numeric, but that hardly changes anything - you still need a lot of math to consciously solve such an equation numerically.
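As an aside, "numeric rather than symbolic" is easy to make concrete: a few lines of Euler integration predict where the ball comes down without ever writing a closed-form solution. The constants and the no-air-resistance assumption below are mine, purely for illustration.

```python
# Numeric (not symbolic) solution of the ball's equation of motion:
# semi-implicit Euler steps until the falling ball reaches catch height.
# Constants are illustrative; air resistance is ignored.

G = 9.81    # gravitational acceleration, m/s^2
DT = 0.001  # integration step, s

def landing_x(x, y, vx, vy, catch_height=1.5):
    """Step the ODE forward; return horizontal position at catch height."""
    while not (vy < 0 and y <= catch_height):
        vy -= G * DT                      # dv/dt = (0, -g)
        x, y = x + vx * DT, y + vy * DT   # dr/dt = v
        if y < 0:                         # safety: ball hit the ground
            break
    return x

# Thrown from 2 m height at about 10 m/s, 45 degrees:
print(round(landing_x(0.0, 2.0, 7.07, 7.07), 1))
```

The numeric answer lands within a centimetre or so of the closed-form result (roughly 10.7 m downrange), which is the point: a good-enough answer without symbolic manipulation.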

> Or the system knows how to translate from English to Chinese. Human language isn't simply about following translation rules, though. It's also about communication and expressing thought. Or participating in language games.

The human doesn't get to decide what to do in the Chinese Room experiment - (s)he is just a dumb CPU that does table lookups in a huge book. Every possible response to every sequence of previous messages is already written in that book (so it must be quite big ;) ).

You could completely automate it, remove the human and nothing would change from the outside.


I don’t think that our being able to solve the problem with differential equations necessarily means our subconscious mind is solving it the same way. There are certainly other possible explanations.


Whatever it's doing, it takes the same inputs and produces the same outputs - what else would you call it?

Do you agree at least that a xor gate is doing math?


> You could completely automate it, remove the human and nothing would change from the outside.

Right, but my point is that following a bunch of translation rules from one language to another is not the same thing as understanding a language. That's not what humans are doing when they use language.

> The solution probably isn't symbolic but numeric, but that hardly changes anything - you still need a lot of math to consciously solve such an equation numerically.

This is assuming the brain is doing math. Even deeper than that, it's assuming that math is something more than a specialized human language. That math exists in nature to be harnessed by neurons.


The Chinese room isn't doing translation, I don't understand why you bring it up. It takes input in Chinese and responds in Chinese.

Also the Chinese room would express emotions, do word games etc, whatever's appropriate in the context. The people doing the lookups wouldn't know that they are writing a joke, but who cares about them?

> This is assuming the brain is doing math.

Well, of course the brain is doing math. See: 234-123=111 - this is math; my brain did this.

> Even deeper than that, it's assuming that math is something more than a specialized human language

If neurons consciously arriving at solutions to math problems is math, then why isn't neurons arriving at the same solutions subconsciously also math?


>Do you know how to catch a ball if you don't know the math but still can catch it?

I posited this years ago, and the conclusion I reached is that what we call "math" is the linguistic expression of what is already intuitive to us.

The other example I used was a mother cat will seek out a stray kitten...but how can she possibly count & keep track of the number of kittens she should have? The answers are at once obvious but deceptively difficult to put into words.


You don't know/understand Chinese. However, some argue that the system of you and the instruction manual is conscious, and others will argue further that the book is an extension of your mind, an external apparatus of thinking, or an organ no different from your hand. I vaguely remember that some years back someone was fighting to be allowed to have his passport photo taken with a cyborg antenna because it helps him "hear" colour.


I think everyone is forgetting the writer of the instruction manual in the book. You don't know Chinese any more than any program knows what it's doing, and if we received an invalid input the communication would crash the exact same way as any program. With the exception of George Lucas movies, it's very rare for a human to crash from a non-hardware issue.


The Chinese Room sounds to me like a language model

> that tells me exactly which sequence of Chinese characters I should use to reply to any sequence of Chinese characters you give me

Then the question becomes - can a language model understand the meaning of the text it generates? Or does it assign its own meaning to the data?


Personally, I don't see any reason not to perceive a commercial company or other similar societal structure as an AI.


People think the company is run by people, but if they step ever so slightly outside the job description (against company interests) they get marked for replacement.


I agree. Also, there are many teams or departments in companies where the humans are following the instructions of a process manual, spreadsheet or software system, without actively knowing how it works. The Company “knows” things the individual humans don’t.


People are basically the cells of a city.


The group of humans would have no idea what the emergent brain is thinking -- like the cells in our body have no idea what our brains are thinking.


So taking the question to the obvious next step: is it possible that the entity formed by an organization of people (corporations, clubs, churches, cults) has its own consciousness (feelings, goals, awareness of itself)? Does HackerNews have a consciousness?


If you stretch the definition of consciousness enough, sure.

I'd argue that there's a major difference between consciousness and self awareness or sentience.


Reminded me of the rocks xkcd https://xkcd.com/505/


Consciousness aside, Capitalism forms a mind over all the people that participate in it, directly or indirectly.


I recommend looking up Donald Hoffman's theory of conscious agents.

He proposes that all sentient beings are networks of conscious agents, the simplest conscious agent being binary (it has a world, and can only act in two ways).

Any composition of conscious agents is itself a conscious agent.

Interestingly, "the world" of each conscious agent might be only other conscious agents.


> Any composition of conscious agents is itself a conscious agent.

That's a very reductionist viewpoint and I'm not sure how correct it is. It's like saying water is wet because its hydrogen and oxygen atoms are wet. But who knows, it might turn out to be true. It will be interesting to know what the real cause of consciousness is, though. Hopefully the question of consciousness will be definitively answered within our lifetimes.


This notion of the 'self' is "standard" in Dharmic traditions, and has been for over two millennia.

Then again, it remains very fashionable to steal ancient ideas from the East and market them as "brand new" genius inventions of Westerners. The amount of uncited plagiarism that occurs in this manner is quite simply astonishing.


The tone and the emotional message of your post though is not at all in the spirit of Dharmic traditions.

What is "stealing" in a world where everything is one and the conscious universe is trying to understand itself in the best way? Why would it assign negative connotation to copying information, if that information is in fact truthful!?


If the "self" is an illusion, and if (as is proven in the preliminary exercises of Dzogchen) looking for the mind, or the thinker, or the one who is looking can provoke this kind of insight...what makes you think anyone would have to steal ancient wisdom to arrive at such a conclusion?


> Foraging in a harvester ant colony requires some individual ant memory. The ants search for scattered seeds and do not use pheromone signals [...]

This was surprising to me. Until now, I was under the impression that all ants used pheromones, which leads to coordination not with each other but through the environment [1].

[1] https://en.wikipedia.org/wiki/Stigmergy


For a sci-fi take on this, see Children of Time and Children of Ruin by Adrian Tchaikovsky.


Just to elaborate (mild spoilers):

At some point in the Children of Time book, ant colonies become domesticated and are cultivated into general computation "devices".

I definitely recommend the first book, but have yet to finish the second one.


Second one has a different vibe, but I also enjoyed it. The author has a great way of helping you understand the different senses and intelligence of the different species.


...and have to mention the best intelligent Ant movie - Phase IV - trailer https://www.youtube.com/watch?v=Bcs3_b3VXSU


Really good science fiction novels, especially from a computer science perspective (and if you like ants, spiders, octopuses).


Working in industrial automation has given me a lot of respect for ants. While I was interning at Tesla a few years back, it amazed me how 'no one person' understood the massive operation of building the Model 3, but together (along with thousands of suppliers) we were able to make extremely advanced technology.

Essentially, humans are just a more advanced version of ants. No one person understands the vast amount of knowledge we've gathered, but this knowledge has allowed us to sustain our rapidly growing population. Without this 'specialization of knowledge', or given some apocalyptic scenario, our ability to sustain our numbers would drastically decrease.


Kurzgesagt has a couple of amazing short videos on ants: https://www.youtube.com/watch?v=7_e0CA_nhaE


We are all colonies of creatures. There's a really interesting book written 100 years ago called "The Soul of the Ant" by https://en.wikipedia.org/wiki/Eug%C3%A8ne_Marais


This reminds me a lot of multi agent systems in computer science. A very exciting concept which is aiming to provide a framework for distributed artificial intelligence:

https://www.sciencedirect.com/topics/chemical-engineering/mu...


More impressive are slime molds, since unlike ants they are not even animals

https://www.scientificamerican.com/article/brainless-slime-m...


I wonder if individual members ever make decisions against the ant colony. For example, if the ant colony as a whole discriminates against a certain group of ants, will those ants simply obey? Basically, are there rebel ants?


An interesting aspect to me is that a collective intelligence can easily create a diffusion of responsibility. It's a lot easier to kill or give in to base impulses when the blame is shared by the whole. Bees and ants don't tolerate nonconformity; they'll kill a queen if she's not filling her role.

I didn't enjoy To Kill a Mockingbird, but I think it's good for kids to read it.


« Colonies live for 20-30 years, the lifetime of the single queen who produces all the ants, but individual ants live at most a year. »

Individual ants... except for the queen. How likely is it that the queen is acting as the memory of the colony?

(I don't think this is actually the answer—I agree that it's more likely that the memories are being held in the collective—but it has to be ruled out somehow.)


> It searches until it finds a seed, then goes back to the trail, maybe using the angle of the sunlight as a guide, to return to the nest, following the stream of outgoing foragers.

I remember reading that ants counted steps to find their way home. Perhaps I'm remembering incorrectly or maybe it was false?

Either way, cool article. Emergence is a cool property that shows up everywhere!
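The step-counting result does exist, at least for desert ants, which appear to combine a stride counter with a sun compass into a running "home vector" (path integration). A toy dead-reckoning sketch of the idea, with made-up numbers:

```python
import math

# Dead-reckoning sketch of ant path integration: the ant keeps a running
# home vector by summing (step count x heading) legs of its outbound trip,
# then walks the negated vector to get back to the nest.
# The trip below is invented purely for illustration.

def home_vector(legs):
    """legs: list of (steps, heading_in_degrees). Returns vector back to nest."""
    x = sum(n * math.cos(math.radians(h)) for n, h in legs)
    y = sum(n * math.sin(math.radians(h)) for n, h in legs)
    return (-x, -y)  # walk this displacement to arrive home

# Outbound trip: 100 steps east, then 100 steps north.
dx, dy = home_vector([(100, 0), (100, 90)])
print(round(dx), round(dy))  # -> -100 -100: head back south-west
```

No pheromone trail needed: the only state is two running sums, which fits the article's point that harvester ants forage without trail chemicals.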


see also this 2014 article on consciousness in 'rather dumb group entities' including 'antheads'

https://faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconscious...


Another nail in the coffin of the Chinese Room. I hope we can finally bury that misconceived argument soon.


For a few years now I have been wondering whether ant colonies do something like portfolio optimization, as in how many ants to send to which food location, depending on risk and reward. I'd guess they do, but I haven't figured out a way to prove/show that yet.


Eugène N. Marais - The Soul of the White Ant (1937) http://journeytoforever.org/farm_library/Marais1/whiteantToC...


Comparing this to ant colony algorithms and considering the ant colony as a network of actors: can we analytically measure how much more information the network has compared to the aggregation of the individual entities in the network?


I think something similar has been suggested in the book Gödel, Escher, Bach.


Was thinking the same thing, that is, the character 'Aunt Hillary', which is an ant hill. In the Dutch translation (which I read years ago) she's called 'Myra Hoop'. Funny how those translations still make sense, same as in the Harry Potter universe, where some names even have to be anagrams.


It's not an accident: Hofstadter and the translators put a lot of work into preserving the wordplay across translations. https://en.m.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach#Tra...


That’s what I thought as soon as I read the title. Aunt Hillary, the anteater's favourite companion. The anteater describes to Achilles how emergent properties of intelligence are coded into the movements of the ants in the network. Great book!


But are ants Turing complete?


A simplified model of ants is Turing complete (the colony, not the individual ants) https://link.springer.com/chapter/10.1007/3-540-59496-5_343
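The usual minimal model in this neighbourhood is Langton's ant, a single ant on a two-colour grid (I'm not certain it's the exact model of the linked paper, but generalized variants of it are known to be computation-universal). The whole rule set fits in a few lines:

```python
# Langton's ant: a single ant on an unbounded two-colour grid.
# On a white cell: flip it to black, turn right, step forward.
# On a black cell: flip it to white, turn left, step forward.

def langtons_ant(steps):
    """Run the ant for `steps` moves; return the set of black cells."""
    black = set()        # coordinates of black cells; all others are white
    x = y = 0
    dx, dy = 0, 1        # current heading
    for _ in range(steps):
        if (x, y) in black:
            black.discard((x, y))
            dx, dy = -dy, dx    # turn left
        else:
            black.add((x, y))
            dx, dy = dy, -dx    # turn right
        x, y = x + dx, y + dy
    return black

cells = langtons_ant(11000)
print(len(cells))  # after ~10,000 chaotic steps the ant settles into a periodic "highway"
```

The famous behaviour: roughly 10,000 steps of apparent chaos, then an emergent repeating "highway" pattern that no single rule mentions.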


I feel like this is tangentially related: https://wiki.c2.com/?TheFiveMonkeys


Just like your individual neurons dont understand Chinese :)


The title is extremely sensationalistic; Occam's razor points to scent trails, undetectable by humans, that lead the ants to the same harvesting locations every year.

There are no revelations about memory or collective consciousness to be found in this article. Every Occam in the world would infer human-undetectable scent trails from the evidence presented here, not some cosmic revelation about “how memory works” like the title and first paragraphs of the article heavily imply.

@pg @dang this title and article in general is outrageously misleading.


This reminds me of some conversations I've been having with a friend of mine who is skeptical of "emergence" (or at least the way it is often described). After going over the problem with him for a while I eventually was convinced that emergence is not "more than" the sum of the parts, emergence is _precisely_ the sum of its parts.

Ants produce these higher level patterns not because there's some magical thing that "emerges", but because they are precisely evolved to coordinate with each other to create those patterns.


The novel "Children of time" by Adrian Tchaikovsky imho greatly picks up this notion among others, highly recommended read


I wonder if this could be used as an argument for Panpsychism?


I'm trying to figure out a way to incorporate this notion of hierarchical life into panpsychism and the "global brain" hypothesis. (Disclaimer: I'm already sold on and a proponent of the theory of panpsychism.)

It seems more and more to me like this hierarchy goes all the way up (to the single superorganism that is the universe) and all the way down (to the presence or absence of fermions in particular states, inducing a duality and thus a basis of computation via the Pauli exclusion principle). If this hierarchy is consistent across all scales, then we can conclude that if consciousness exists at one level then it exists at all levels. Sentience/awareness is a different question, mind you, and "memories" are associated with awareness of past events.

I'm also starting to believe that "consciousness" in terms of directed will doesn't truly exist, and that only "experience" exists. The rest (wants, desires, opinions, will) are electrochemical reactions which respond to local changes in the environment, although we experience them as much more than that for ultimately self- (and macrosystem-)serving reasons. These electrochemical reactions are present because they have over time become more important in the processes necessary for the propagation of whatever they're supporting. This is all very vague and hand-wavy but this article on the thermodynamic theory of life might be clearer [1].

In the discussion yesterday about this topic on HN I brought up the example of ant colonies [2] in an attempt to spur discussion in this direction.

[1] https://www.quantamagazine.org/a-new-thermodynamics-theory-o...

[2] https://news.ycombinator.com/item?id=22047653


Oh wow, thanks for the great reply. I didn't think anyone would respond. I'm going to dig into the links.

Do you have any other suggested reading?


I'd suggest, if you're already receptive to these concepts, to delve into Buddhist/Zen Buddhist/Taoist literature in addition to the more cerebral (pun?!) stuff out there. They're basically saying the same thing, albeit with much different language and framing. In particular the notions of interdependence, sunyata (my personal favorite idea/concept ever), and duality(ies).

Beyond that, unfortunately most of my exposure to the ideas related to panpsychism comes in fits and bursts, and usually the pieces that aren't about panpsychism inspire my pondering the most. Subjects include: animal consciousness/experience; the apparent intelligence of complex systems, whether man-made or independently arising; autonomic, pre-conscious behavior in humans; computational theory, especially in physical systems; emergence and complexity writ large; complex adaptive systems in general.

Unfortunately I haven't done a lot of seeking out books on this topic. Nautilus and Aeon magazines (the latter is linked in OP) have thought-provoking stuff which touch on these topics more often than you'd think.


Chalmers' Combination Problem is the first thing this made me think of.


probably the same analogies can be drawn for human society


the internet has paths that its individual routers don't understand ;-)


wikipedia.org/wiki/Ant_colony_optimization_algorithms
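The heart of those algorithms is a positive-feedback loop on pheromone: deposit in inverse proportion to path length, evaporate a little each round. A deterministic mean-field sketch of the classic two-bridge setup (all parameter values are illustrative, not from the linked page):

```python
# Mean-field sketch of ant colony optimization: a stream of ants chooses
# between two bridges; pheromone is reinforced inversely to path length
# and evaporates each round, so the shorter route compounds its lead.
# All parameter values are illustrative.

lengths = {"short": 1.0, "long": 2.0}     # the short bridge is half as long
pheromone = {"short": 1.0, "long": 1.0}   # both paths start out equal
EVAPORATION = 0.1

for _ in range(100):
    total = sum(pheromone.values())
    for path in pheromone:
        share = pheromone[path] / total        # fraction of ants taking this path
        deposit = share / lengths[path]        # shorter trip -> more pheromone laid
        pheromone[path] = (1 - EVAPORATION) * pheromone[path] + deposit

# The colony "remembers" the better route without any single ant knowing it:
print(pheromone["short"] > pheromone["long"])  # -> True
```

The memory lives in the environment (the pheromone field), not in any individual agent, which is exactly the stigmergy idea mentioned elsewhere in the thread.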


Doesn't Hofstadter have a rap about ant colony consciousness?


read GEB


The safe existence of corporations suggests to me that the popular fear of AGI taking over and destroying humans is unrealistic. Corporations are like giant powerful AGI machines with the single purpose of making money for their shareholders no matter the consequences. They do become a dangerous threat to humans if left unchecked but we've developed systems to keep them under control.


>Corporations are like giant powerful AGI machines with the single purpose of making money for their shareholders no matter the consequences.

That is patently wrong. A corporation exists to distribute risk and accomplish a task, ideally in a way that creates value for backers; it is not a given that all profit-creating avenues of behavior are desired or worth the egregious cost in negative externalities, or even that a corporation must generate profit.

And unfortunately those control mechanisms you mention seem to be failing with alarming regularity due to regulatory capture.


> it is not a given that [...] a corporation must generate profit

I think that you are technically right, in that there can exist not-for-profit corporations. But for-profit corporations, which are what most people think of when they say "corporation", are generally legally required to put profit above any other value, assuming they are operating legally.

I completely agree with you that this is not a necessary way of organizing human society, and we are seeing more and more that the current for-profit system is disastrous for the environment and for society in general - especially given the inefficiency of regulation that you also mention.


> But for-profit corporations, which are what most people think of when they say "corporation", are generally legally required to put profit above any other value, assuming they are operating legally.

No they aren't. They do have a fiduciary duty to act in the interests of shareholders, which means not taking actions which are unexpected and obviously harmful to other shareholders like paying all the company's revenues to another company wholly owned by the CEO. But that duty to shareholders actually even obligates them to take into account factors other than profit, whether that's mitigating risks or abiding by a shareholder resolution to follow a 'socially responsible' business practice that costs them a lot of profit, and management absolutely also has enough discretion to choose to design and follow its own 'socially responsible' business practices or decline to enter a profitable sector they don't want to involve themselves in. No executive has ever been penalised for not putting profit above any other value.

Companies' pursuit of profit is much less driven by legal obligation and much more driven by the fact that greater profitability tends to generate greater returns for management as well as shareholders.


That would make more sense if not for the fact that corporations routinely put profit over morals.


What about states, then? Major ones are kept in check by the threat of mutually assured destruction. Such a tactic will not work in quite the same way for AGI.


We detached this subthread from https://news.ycombinator.com/item?id=22062708.



