One of my favorite excerpts:
> Harland says Portia’s eyesight is the place to start. Jumping spiders already have excellent vision and Portia’s is ten times as good, making it sharper than most mammals. However being so small, there is a trade-off in that Portia can only focus its eyes on a tiny spot. It has to build up a picture of the world by scanning almost pixel by pixel across the visual scene. Whatever Portia ends up seeing, the information is accumulated slowly, as if peering through a keyhole, over many minutes. So there might be something a little like visual experience, but nothing like a full and “all at once” experience of a visual field.
 - https://en.wikipedia.org/wiki/Portia_(spider)
 - http://www.dichotomistic.com/mind_readings_spider%20minds.ht...
 - http://news.nationalgeographic.com/2016/01/160121-jumping-sp...
Not... if you don't say that...
And I'd wager that people who would read about Portia, and would read Blindsight and Echopraxia, are the type to think long and hard about such topics anyway.
In general, I love a good novel with a bibliography.
Also, I got interested and tried to find some documentation on Portia labiata after reading Watts' book, and there's very, very little, but what he says appears to be true. They do have telescopic eyes with vibrating retinas. They do spend ages painstakingly scanning a scene before moving. They are absurdly intelligent for insects. I don't know if Watts' suggestion that they timeshare pattern-matching areas of their brains is true, but it seems plausible.
Here's a good article: http://www.dichotomistic.com/mind_readings_spider%20minds.ht...
And a nice fluffy white one with very stubby legs: http://www.jumpingspiders.co.za/images/main/187.jpg
Via https://www.reddit.com/r/awwnverts/comments/5o7nmg/my_favori... and https://www.reddit.com/r/awwnverts/comments/5lar18/floof_spi...
Ugh, unfortunately that trick doesn't work for every ugly scary image.
This breaks down in Australia, where the spiders are as big as your head, or in Puerto Rico, where the foot-long centipedes eat bats. Those aren't bugs, those are nightmare fuel.
But the OP is right, of course. Spiders are not insects. Although I suspect that most people I'd talk to in the UK would call them such. I'll have to hand in my Pedant's Society card (which is actually made out of plastic)...
The American use of "critter" also bemuses.
"terrestrial arthropods, such as centipedes, millipedes, scorpions, and spiders, are sometimes confused with insects"
In real life many people call spiders insects, either from ignorance or from not caring about the distinction.
To refer to both insects and spiders & similar in one term you could use the (rather less formal) terms "bugs" or "creepy crawlies". I can't think of anything more formal off the top of my head.
You know what I'm going to say don't you.
The last I saw was what looked like a video game based on Blindsight.
No news on its status, alas.
I'm now more excited for this than for The Winds of Winter, wow.
Wow, this guy sure knows how to make a statement.
It can mimic the footsteps of other spiders by plucking the web of the spider it's going to kill.
Doesn't the name come from Portia in The Merchant of Venice, a devious character?
Jumping spiders are cool; we have the stripy ones in my region. Spiders give me the creeps, but not jumping spiders. I think it's due to the shorter legs and small size.
> Other bumblebees learned by observing trained demonstrators from a distance. Only a small minority solved the task spontaneously. The experiments suggest that learning a nonnatural task in bumblebees can spread culturally through populations.
This makes it reasonable to think of the hive as one individual!
Just like we are made of individual cells and organs, an ant hill is made up of individual ants. That the parts can move separately is a fairly superficial difference.
Those numbers are true if a female mates once. Honeybee (Apis spp.) queens that lead successful colonies, for example, typically mate with a dozen or more male partners. This has beneficial genetic effects but is probably done just to satisfy sperm storage requirements since they only perform one mating flight. The result of this is that the thousands of sisters in the hive have many different fathers, reducing the worker-to-worker relatedness.
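To put a rough number on that relatedness effect, here's a back-of-the-envelope sketch (assuming equal paternity shares among the k drones, which real colonies only approximate):

```python
# Sketch: average worker-to-worker relatedness in a haplodiploid hive
# whose queen mated with k drones, assuming equal paternity shares.
# Full sisters share r = 0.75 (identical paternal genes plus half the
# maternal ones); half-sisters share only the maternal side, r = 0.25.
def avg_relatedness(k: int) -> float:
    p_same_father = 1 / k
    return p_same_father * 0.75 + (1 - p_same_father) * 0.25

for k in (1, 2, 12):
    print(f"{k:>2} fathers: r = {avg_relatedness(k):.3f}")
# 1 father: 0.750, 2 fathers: 0.500, 12 fathers: 0.292 -> tends to 0.25
```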
Bee reproductive setups are really diverse, with honey bees at one end, solitary bees at the other, and just about everything else in between.
Eusocial species behavior evolved from fully-functioning individual insects (solitary bees and wasps that still exist, for example) as a behavioral adaptation. A human organ is not a self-surviving entity with social behavior.
In a honeybee hive, queens typically mate with many male partners (10+). While all individuals in the hive are related, the female workers that make up the majority are each diploid (under haplodiploid sex determination) and may have any of several fathers.
It makes me think we are missing something when creating artificial neural networks, which need many more neurons to achieve just this one specific task. Maybe artificial neurons are too simplified compared to biological ones; maybe our training process could be much more efficient?
First, it's important to keep in mind the difference between artificial "neurons" and real neurons. Real neurons, with their complicated dendritic arbors, are much more complicated than anything you'll see in a typical ANN. So there isn't a one-to-one correspondence between the "few hundred or thousand" neurons in a bee and the number of units in an ANN. Now, is there a one-to-a-thousand correspondence? I don't know. There's probably research on it, but I'm unfamiliar with it. Certainly for some neurons even a thousand-unit ANN would seem inadequate (look at the arborization of a Purkinje cell, for example).
Point two: absolutely, modern ANNs are missing something fundamental. I would wager obscenely large amounts of money that they are missing more than one fundamental idea, and I doubt I could find another neuroscientist who'd take that wager. What are ANNs missing? Obviously I don't know, or I would have published it already. But I'll guarantee you the first step is recurrence. Hell, intelligent recurrence might be the only thing missing and I'd lose my bet. But recurrence is hard. And anyway, going back to point one, even the simple facial recognition a bee does with only a thousand neurons would take a few hundred thousand to a few tens of millions of modularly, recurrently connected "neurons." Not exactly a laptop simulation.
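For what it's worth, the recurrence idea in its most minimal form looks something like this (a toy Elman-style unit in numpy; the sizes and the tanh nonlinearity are arbitrary choices for illustration, not a model of anything biological):

```python
import numpy as np

# Minimal recurrent unit: the hidden state feeds back into itself each
# timestep, so the current state depends on the whole input history.
rng = np.random.default_rng(0)
n_in, n_hidden = 8, 32
W_in = rng.standard_normal((n_hidden, n_in)) * 0.1       # input -> hidden
W_rec = rng.standard_normal((n_hidden, n_hidden)) * 0.1  # hidden -> hidden (the loop)

h = np.zeros(n_hidden)
for x in rng.standard_normal((20, n_in)):  # a 20-step input sequence
    h = np.tanh(W_in @ x + W_rec @ h)      # state carries context forward
print(h[:4])
```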
Look at the eyes of bees. Very different from our own (and from the cameras we build) and perhaps very specialized to the limited set of tasks that bees carry out?
"Eye smarter than scientists believed: neural computations in circuits of the retina." Gollisch, Tim, and Markus Meister. Neuron 65.2 (2010): 150-164.
That probably doesn't answer the question of how a bee's eyes work, though.
And different species have very different neural topologies: I've heard it said that Octopoda act more like an eight-member swarm intelligence than an intelligent eight-armed creature. Because their neurons are much more diffusely distributed, with most in their arms, they do much more processing there, and the central brain acts more like a coordination unit than anything else.
We are in the very early stages of another "AI Spring". I have the inkling of an opinion that if we are to advance further in the GAI direction, it will be by applying these same kinds of large-dataset tools to other ML models of the past, much as we have done with neural networks, and also by seeking to connect and unify these various parts into a whole. I don't think we should throw the baby out with the bathwater; all of our past ML systems have validity. It's more that where they fit in the overall scheme may be what's lacking.
I do know that such approaches have been tried in the past, but I don't think applying today's tools to yesterday's models has been pursued as strongly, given all the hype and money (and success!) being poured into neural networks today.
Many of his criticisms still stand even in the face of his failure to predict those topological advances. And his criticisms weren't even the derogatory kind... at their most ideological, they were an attempt to argue that conceptually simple neural networks are not complex enough to capture the vast complexity of general intelligence. He still saw their place, as do I: NNs have performed remarkably in areas of sensory perception and processing, but still lag behind many other methods at higher-level tasks like learning a mathematical model of a physical process. After reading a lot of Minsky, I'm pretty sure most of the advances in AI over its entire history are due to AI Winters crushing the dreams of Neats and forcing them back to Scruffier methods.
And I'm right there with you. I'd love to see a new revolution in Expert Systems, Logic Programming, or tree-based models. Hell, we're kinda seeing an (IBM-centric) revolution in Symbolic AI and Logic Programming with Watson/UIMA. But I want more!
Watson appears to be a framework for applications using classical GOFAI techniques, so I would hesitate to term it "revolutionary". AFAICT its emergence is due to faster von Neumann hardware, not new algorithms. Not that NNs couldn't be rolled into the mix, too, of course.
I believe the current interest in NNs is an AI diversion, something to do until a breakthrough occurs. Now we find that bees can read faces, pull strings to get nectar, count, etc. So what? Are we closer to something that can navigate the world, solve problems like we do, using language to explain how it was done and answer questions?
IOW I await the first version of the Odyssey written by an AI, once its excursions are complete (Kindle version: a Google car describing the perils of its cross-country trip from NY to LA).
We Really Don't Know How to Compute!
Consider a vinyl record and record player. It's super simple - a long groove with smooth bumps and a needle that slides in the groove. Recording and playback via analog methods are super simple.
Compare that to a compact disc or an MP3 file and the complexity required to encode, store, and play back the sound.
BEAM robotics (from Biology, Electronics, Aesthetics and Mechanics) is a style of robotics that primarily uses simple analogue circuits, such as comparators, instead of a microprocessor in order to produce an unusually simple design.
BEAM robots may use a set of the analog circuits, mimicking biological neurons, to facilitate the robot's response to its working environment.
https://books.google.com/books?id=7KkUAT_q_sQC (there appear to be excerpts available for free as PDFs)
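As a toy illustration (entirely hypothetical, just to show the flavor of the comparator idea), the control law of a BEAM photovore is about this much logic, done in hardware with an op-amp instead of code:

```python
# Hypothetical simulation of a BEAM-style photovore: two light sensors,
# one comparator, no stored state. The hardware version does this with
# an op-amp comparator; here we just model the control law.
def steer(left_light: float, right_light: float) -> str:
    # comparator output: turn toward whichever sensor sees more light
    return "turn left" if left_light > right_light else "turn right"

print(steer(0.8, 0.3))  # -> turn left
```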
Maybe tin foil hats make good heatsinks...
Interestingly enough, there is some evidence that ketone metabolism is more thermodynamically efficient (in Gibbs free energy terms) than glucose metabolism in the brain (cf. Dr. Richard Veech's work). You can measure a lower temperature gradient between the tongue and the brain.
But I'm not a doctor or a scientist, so I could be completely wrong. Hopefully someone more informed will weigh in.
If you get hotter than your behavioural adaptations (shade seeking, splashing yourself with water, turning on the air conditioner) plus your own inbuilt capabilities (vasodilation, sweating) can handle, then you pretty much overheat and die.
So endogenous human thermal regulation is efficient enough to deal with the normal range of terrestrial environments inhabited by humans. And when it can't (if you're naked in the arctic, for example), you die.
Yet another time when my Political Science education fails me. What should I take away from this?
(I'll add that there's no need to make this argument from thermodynamic principles. Your brain has a finite number of neurons, and each neuron can only fire a few times a second, so that limits the number of computations that your brain can do per second as well. And that limit is a lot tighter than the thermodynamic limit.)
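Roughly, with order-of-magnitude numbers (all of them assumptions, not measurements), the comparison looks like this:

```python
import math

# Spike-count bound on brain "operations" per second.
n_neurons = 8.6e10          # ~86 billion neurons in a human brain
firing_rate = 100.0         # spikes/sec; a generous upper bound, sustained rates are far lower
spike_bound = n_neurons * firing_rate           # ~8.6e12 "ops"/sec

# Thermodynamic (Landauer) bound on a 20 W budget at body temperature:
k_B = 1.380649e-23                              # Boltzmann constant, J/K
landauer_cost = k_B * 310 * math.log(2)         # ~3e-21 J per erased bit
thermo_bound = 20 / landauer_cost               # ~6.7e21 erasures/sec

print(f"spike-count bound:   {spike_bound:.1e} per second")
print(f"thermodynamic bound: {thermo_bound:.1e} per second")
# The spike-count limit is ~nine orders of magnitude tighter.
```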
Just that you should not omit, with "...", the part of the statement that was most important for you. You haven't studied physics; you see the formula together with its name (which is the most likely Wikipedia entry), you throw the most useful bit away, and then you ask what the formula means without the name, even though the name would have given you an explanation.
> it takes k T log 2 joules to erase one bit of information...and our brain uses around 20 Watts of power
> it takes k T log 2 joules to erase one bit of information (Landauer's principle), and our brain uses around 20 Watts of power.
Our brains are (very) far from being the perfect computing material (so are our chips).
The theoretical minimum is only useful here to show that our brains cannot do work for free. (Irreversible) computation always has a cost.
The 20 watts are the important number. Our brains have a certain energy budget and have to perform within that constraint.
The comment mentions "Landauer's principle", which provides some explanation: https://en.wikipedia.org/wiki/Landauer's_principle
Here as well, you could have made your comment better if you just omitted this part: "which you'd find if you'd attempt to read what you avoided... twice."
The way he quoted the comment, it's not even visible that "the comment mentions" it, as you suggest. He kept the formula of the principle and edited out only its name, which is the exact title of the Wikipedia entry.
What do you find condescending, when the person asking doesn't show that he even tried to read either what the original sentence pointed to or what the previous answer directly links, both in such an obvious place as Wikipedia?
I see it more as either intentional trolling, or as somebody who could actually profit from learning to read what's already written, and I don't see that I'm wrong here. But I'd welcome it if you explained exactly why I should pretend in my answer that somebody didn't do what he did, or, more importantly, what most of the readers would benefit from. I'm much more a reader than an active writer here, so that's an important question for me.
I very often ask here for clarification of statements for which I can't easily find the answers. But here, the editing-out in a single sentence couldn't be more blatant.
I'll be sure to offer my opinion on the breadth and depth of your research.
I'm very impressed by how masterfully you contributed to the discussion.
It seems crazy that it can be that close given the minuscule energy that information carries; to me, that ~8 mW is a completely mind-boggling amount.
The most reasonable explanation is that my calculation was wrong, but at the time that didn't appear to be the case, although it probably is anyway :)
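For reference, here's the shape such a calculation would take; the bit-erasure rate below is invented purely to land near ~8 mW, since the actual inputs to that calculation aren't shown:

```python
import math

# Hypothetical reconstruction of that kind of estimate: power implied by
# the Landauer limit for an assumed rate of bit erasures. The rate below
# is chosen purely so the output lands near ~8 mW; the original
# calculation may have assumed something else entirely.
k_B, T = 1.380649e-23, 310              # Boltzmann constant (J/K), body temperature (K)
cost_per_bit = k_B * T * math.log(2)    # ~3e-21 J per erased bit
bits_per_sec = 2.7e18                   # assumed erasure rate (made up)
print(f"{bits_per_sec * cost_per_bit * 1e3:.1f} mW")  # -> ~8.0 mW
```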
Trying to avoid spoilers here, so ROT13 - Gur nyvraf rapbhagrerq va gur abiry ner abg pbafpvbhf, naq uhzna pbafpvbhfarff vf cerfragrq nf na ribyhgvbanel fvqr rssrpg / zvfgnxr juvpu erdhverf hf gb jnfgr uhtr nzbhagf bs cbjre guvaxvat nobhg guvaxvat, vafgrnq bs whfg guvaxvat. Bhe phygher, oebnqpnfg vagb fcnpr, nccrnef yvxr n QQBF nggnpx bs vafnar vachgf gb gur aba-pbafpvbhf nyvraf orpnhfr bs ubj zhpu rssbeg vg gnxrf gb cnefr guebhtu vg nyy.
(The neurological phenomenon of blindsight is also very, very interesting, and suggests that our brains may do more work than is strictly required.)
Also, give them money and the idea that they will have to amass a big pile of this paper before they die, instead of gathering the stupid nectar...
Are you serious? You can pick up your mobile phone and arrange to have nectar come to you in any amount you require (as long as you have the money to afford it).
In fact, to a honeybee, nectar is pretty much exactly analogous to money.
Can you or your newborn eat money?
I think not.
I see a brain like an FPGA or a programming language: it can become anything, but it has to be led in the right direction.
Like, some people built Facebook with PHP, and some people hacked together one of a million CMSes.
Some people wrote an OS in C, and I wrote an Asteroids clone.
After reading the article, I was impressed by the insects, not by the BBC explainers.
"It does so by way of neuroscience’s favourite analogy: comparing the brain to a computer. Like brains, computers process information by shuffling electricity around complicated circuits."
Also, neurons interface using chemical messages in the form of many different neurotransmitters. The electrical phenomenon of cellular depolarization might as well be an implementation detail. If anything resembles a wire, it's the axon.
The research on nematodes, fruit flies and dragonflies - non-colony creatures - seems to contradict this, except none of their behaviors seemed as complex to me as the problem the bees were solving. But that might just be my programmer's brain over-simplifying the seek and flee behaviors I've put into NPCs.
None of this is "clever" -- these are amazing mechanisms they've been selected for, but it's not figuring things out, using logic, or using tools. It's much more akin to a simple learning network playing Super Mario Brothers and dying frequently until it eventually succeeds. It shouldn't surprise us that it only takes a few hundred thousand neurons to do (including a tiny, low-res, colorless visual cortex and olfactory system that maintain a tiny, low-res representation of their surroundings).
I'm not dismissing the wonders of nature, just trying to add some detail to a write-up that glosses over _how_ these things are working.
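To make the "dying frequently until it eventually succeeds" analogy concrete, here's a toy hill-climbing sketch; the action set and reward are invented for illustration and have nothing to do with the bee experiments themselves:

```python
import random

# Toy trial-and-error learner: mutate a fixed action sequence and keep
# the mutation whenever it scores at least as well. No logic, no world
# model -- just selection on outcomes, which is the point above.
ACTIONS = ["left", "right", "jump", "wait"]
TARGET = ["right", "right", "jump", "right", "jump"]  # stand-in for "clearing the level"

def score(seq):
    return sum(a == b for a, b in zip(seq, TARGET))

best = [random.choice(ACTIONS) for _ in TARGET]
for attempt in range(1, 1001):           # each failed run is a "death"
    trial = list(best)
    trial[random.randrange(len(trial))] = random.choice(ACTIONS)
    if score(trial) >= score(best):
        best = trial
    if score(best) == len(TARGET):
        print(f"survived after {attempt} deaths")
        break
```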