The wasp found another way in, through an open window in another room on the second floor: the room next door. So I just closed that window too.
Again, it found an entry through another window. But this time, it started to impress me: it came from a room across the corridor! The wasp flew to the back of the house, found an open window, entered it and mapped the path through the corridor to my room.
I then closed all the windows on the second floor of the house and bet it was over. I was wrong: the wasp came in through the front door, flew upstairs and found the room. It was even capable of doing the path in reverse, so it could come in and out of my room.
I closed all the doors and windows on the first floor, but forgot to close the small bathroom window. This was enough for the small wasp to map an entirely new path to my room again, almost immediately!
Once it entered my room again, I opened the window again. It was smart enough to use it, as if it were simply choosing the shortest path.
So even the small brain of a wasp is able to run "computer vision" algorithms and fast "machine learning" algorithms. It can even run A* perfectly, and merge input from various sensors to perform superb navigation, using very little energy, producing almost no heat, in a very lightweight package, in real time, with very low latency!
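The A* comparison is tongue-in-cheek, of course, but for anyone who hasn't seen it: the algorithm is small enough that a minimal sketch fits in a comment. This assumes a 4-connected grid of 0s (open) and 1s (walls), with Manhattan distance as the heuristic:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid; 0 = open, 1 = wall.
    Manhattan distance is an admissible heuristic here."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # no path: all the windows are closed

# a wall down the middle forces a detour around the bottom
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (0, 2))
print(len(path) - 1)  # shortest path length: 6
```

The wasp, presumably, does this without a priority queue.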
I'm glad this critter is small. Nature is scary.
One wasp transmitted a map of your house to another, and another one!
It is clear and easy to see (easier than that d* b* of a queen when you need her, most often), and it tells a lot about the cognitive abilities of a social group, the hive: how they can give each other complex directions by twerking.
Kids love it too, and will often play around, wiggling their behinds (a cousin called it bee-twerking) and giggling at how they are showing each other where the sweets are by dancing.
Of course bees can communicate with each other, and probably wasps can too.
This little one didn't mind a bit when I put my macro lens an inch or two away, and even did some tricks to show off!
Now I'm actually curious about the efficiency of the computation. If we could somehow map neurons to transistors (i.e. the logical calculations/algorithms, not the structure, and not simulating neurons with transistors), which would be more efficient, the brain or the hardware?
But more generally, I'd ask: if you can train a conscious animal to execute the 8 Brainfuck instructions (+, -, <, >, etc.), could they also develop a writing system and express higher-order language concepts? Asking why animals don't write assumes they can't, but maybe we just need some kind of intermediate symbolic phoneme concept. Like BF for computation, there may be some basic unit instructions that you can combine and parse into anything, and if a border collie can distinguish words for hundreds of different things, I wonder if what we're missing is a set of metaphors that represent this underlying core instruction set.
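For the curious, those 8 instructions really are that small a core: a complete BF interpreter fits in a few lines. This is a sketch that skips the `,` input instruction for brevity:

```python
def bf(code, tape_len=30000):
    """Minimal Brainfuck interpreter (',' input omitted for brevity)."""
    tape, ptr, out, pc = [0] * tape_len, 0, [], 0
    # precompute matching bracket positions for '[' and ']'
    stack, match = [], {}
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            match[i], match[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == '+':   tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0: pc = match[pc]  # skip loop body
        elif c == ']' and tape[ptr] != 0: pc = match[pc]  # repeat loop body
        pc += 1
    return ''.join(out)

# 2 + 3, computed by draining cell 0 into cell 1, then printing cell 1
result = bf('++>+++<[->+<]>.')
print(ord(result))  # 5
```

Whether a border collie could be taught to close its own brackets is left as an open research question.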
Things like musical notation and knitting stitch patterns are also thought to be Turing complete, which implies the representation doesn't matter for computation, and computation is sufficient for representing most things. So the idea of finding some underlying instruction set for producing languages across species seems both fantastic and loosely plausible.
And another: https://www.youtube.com/channel/UCEa46rlHqEP6ClWitFd2QOQ
Of course as with the guy in the Chinese Room that doesn't speak Chinese, none of the individual animals would have any idea what the overall system was doing.
Anyway, fantasy/sci-fi stories aside, the interesting analogy with the story is this: just because I'm not smart enough to debug the system, given no blueprints or explanation, does not mean an anthill is not running some novel and possibly interesting computational problem. Or a bees' nest, or giant fungal mycelia in the dirt, or herds of lobsters.
There are randomness-detection and evaluation algorithms (distantly related to compression algorithms), and now that we have big data, I suspect in the next decades those algorithms will be rubbed up against it and we'll discover interesting things about hive insects: what the hive is thinking, as opposed to what individual insects are thinking. Or run it on dolphins, or on ancient human dwelling architecture.
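The compression connection is real: a sequence that compresses well is, almost by definition, not very random. A crude sketch of the idea using zlib, under the (big) assumption that compressibility is a usable stand-in for structure in the signal:

```python
import random
import zlib

def compressibility(data: bytes) -> float:
    """Ratio of compressed size to original size. Near 1.0 suggests
    high entropy; near 0.0 suggests lots of repeated structure."""
    return len(zlib.compress(data, 9)) / len(data)

# structured "signal": the same waggle over and over
structured = b"waggle " * 1000

# noisy "signal": uniformly random bytes
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(7000))

print(compressibility(structured))  # tiny: highly structured
print(compressibility(noisy))       # roughly 1.0: incompressible
```

Real randomness-test suites are far more careful than this, but the intuition (structure = compressible = not random) is the same one behind Kolmogorov complexity.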
And it was an "IBM 1602", although pretty obviously based very closely on the actual 1620.
The real-world 1620 was a '50s-era RTL scientific computer (IBM used to separate math processors from business processors, until the famous 360, which did everything pretty well). It used the SMS-style hardware cards that everything IBM made used until the 360-era SLT cards. Each SMS card holds enough circuits to be roughly one 7400-series TTL chip.
He also has another series, which he just finished, that is very good as well: it's about a family with the ability to step between different timelines, and it's an interesting deconstruction of the trope.
While it starts as a fantasy series, it eventually comes out that their magic abilities are remnants of advanced nanotech from an interdimensional empire. Apparently a publisher he didn't like working with had the rights to his next sci-fi series, so he had to start it as fantasy to get around the contract.
Strike "animals" out, and it sounds like any worker in a big enough org. (Maybe I should stop reading Kafka)
That just means you need a redundant array of parrots operating in parallel, with a regulator to detect different answers to the same problem and retry.
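A redundant array of parrots is basically triple-modular redundancy with retry. A sketch of the regulator, where `ask_parrot` is a hypothetical stand-in for one unreliable worker (errors here are modeled as unique nonsense, so wrong answers never collude into a majority):

```python
import random
from collections import Counter

def ask_parrot(question, error_rate=0.2):
    """Hypothetical unreliable worker: usually squawks the right
    answer, occasionally some effectively-unique nonsense."""
    if random.random() < error_rate:
        return f"squawk {random.random():.9f}"  # garbled, won't match others
    return "polly wants " + question

def regulator(question, parrots=5, retries=10):
    """Ask a batch of parrots; accept an answer only if a strict
    majority agrees, otherwise retry the whole batch."""
    for _ in range(retries):
        votes = Counter(ask_parrot(question) for _ in range(parrots))
        answer, count = votes.most_common(1)[0]
        if count > parrots // 2:
            return answer
    raise RuntimeError("parrot quorum never agreed")

random.seed(42)
print(regulator("a cracker"))  # polly wants a cracker
```

With a 20% per-parrot error rate, the chance that a batch of 5 fails to produce a correct majority is under 6%, so 10 retries all failing is astronomically unlikely.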
I'm glad you figured that 64 cat-cores is basically pure chaos. Well, for you, for the cores it's just their way of life: do random stuff for no apparent reason. Which is like the exact opposite of a computer.
The laser system has promise, but you need a source of random data to drive the movement of the laser pointer (maybe strap them to other cats?). Otherwise, an attacker who can predict the movement of your pointers may be able to predict the output of your entropy stream after observing the behavior of your cat colony (certain cats may prefer to hunt at different times of the day, or prefer to lay in wait and pounce vs giving chase, etc).
Alternatively, you could observe a field (bed) of cats and record their chirality, assigning 0 to sinistral cats and 1 to dextral cats. The science seems to indicate no preference for one or the other per cat, so your readings should be random. However, you may only observe one or two bit-flips per hour and during transitions it may take some time for the system to settle back down to a steady-state.
Good point. Would strapping a laser pointer to the back of one of the cats work to seed the system? After that I could divert some output into g-code movement instructions to the automated lasers.
One thing I'm unsure of is room size. I figure I could map 1/0 to each tile in a grid based on the presence of a cat, but that means with too large or too small a room I'll have varying degrees of bit density. The stakes are pretty high too, since I'm pretty sure that putting cats in the mix might allow me to exceed the theoretical maximum of 1 bit of entropy per bit.
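If the tile occupancy turns out to be biased (cats cluster on the one warm spot), von Neumann's classic trick turns a biased-but-independent bit stream into an unbiased one, at the cost of throughput. A minimal sketch:

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: read bits in pairs; 01 -> 0, 10 -> 1,
    and 00/11 pairs are discarded. If input bits are independent
    with a constant bias p, both kept outcomes have probability
    p*(1-p), so the output is unbiased."""
    out = []
    for b1, b2 in zip(bits[::2], bits[1::2]):
        if b1 != b2:
            out.append(b1)
    return out

# heavily biased occupancy readings: mostly 0 (cats all on the bed)
biased = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1]
print(von_neumann_extract(biased))  # [0, 1, 0]
```

Note the throughput hit: a heavily biased stream discards most pairs, which fits your observation of only a couple of bit-flips per hour.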
Equally important, you must respond “No” when asked whether you want the “Neutering” feature enabled. This applies to both cats.
Cat ladies are evil nihilists.
I would recommend using a cat if available - otherwise, depending on the chair and on the weight of the tester, this might lead to permanent damage to the keyboard. I would definitely refrain from doing this if said keyboard is attached to a laptop.
I really like the way the community dealt with this.
(Disclaimer: My SO and I are also a foster home for cats. The cat behavior described is relatively normal, especially for one cat model currently "in use" here.)
> This bug report is a duplicate of: Bug #1538615: Cat causes login screen to hang.
I'll test this with my cat named Turing this weekend to see if he gives better results.
I love this, because I'm convinced this is how we all learn.
How my cats learn too, by the way. Good thing they have nine lives, because the number of pans, doors, cupboards, and pieces of firewood that have fallen on top of them, just for the sake of "I want to know if I can misuse this new pile", is staggering.
Cats are smart critters, so I think it would be possible to train one to do a set of operations that's actually Turing complete. Cat behaviourists and comp sci folks could work together on this.
FIRST: The great potential for Catputation has been recognized since pre-hisssstory, by the early innovators known as Egyptians who awed at the great possibilities and awww'd at the great paws-abilities.
In those days, Catputers were primarily used to perform primitive functions, such as alerting household members of impending dawn (known to those in the field as "the early-morning zooomies") and high-frequency generation before electronic-oscillators were discovered (known to those in the field as "ddawww, the widdle kittdy is purrrring!").
Unfortunately, Catputation was limited in those days to largely clumsy, analog tasks. For example, Catputers had difficulty playing nicely with some peripheral systems, e.g. mice. It wasn't until much later that [the first digital-logic gate was invented, allegedly by Isaac Newton](https://en.wikipedia.org/wiki/Pet_door#History), paving the way for more advanced systems.
However, as we all know today, cats are fundamentally clawtum-mechanical creatures.
Early on in clawtum-mechanical discovery, [Bose and Einstein were studying Dogputation](https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_statisti...). Dogputers have quanta which can share the same space, [forming a stack](https://en.wiktionary.org/wiki/dogpile) (known to those versed in the art as "dawwwww, look at dey widdle puppehs!"). This would later lead to [online-enthusiasts stockpiling Dogputation](https://en.wikipedia.org/wiki/Dogecoin).
However, in trying to form a stack for Catputation, Pauli made two simultaneous, serendipitous discoveries. First, they discovered that [kitties shouldn't be stacked up in the same place](https://en.wikipedia.org/wiki/Pauli_exclusion_principle). And secondly, that violating this newly-discovered rule can result in many little bytes forming a larger unit (a kill-0-byte). Together, these discoveries led to [an initial theory of clawtum-Catputation](https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac_statistics).
Progressing forward, optical computing became all the rage, and it was discovered that [optical-signaling interconnects were effective in facilitating high-speed Catputation](https://www.youtube.com/watch?v=aLagODygcbs).
There's more hisssstory to be told, though that's it for now!
-- They have a remarkably complicated inner life. This appeals to creative types, who also often have a lot going on mentally.
-- They are conducive to the lifestyle, apart from wanting to stand on my keyboard. They will sit in the cat tree or on my lap for a long time, just wanting to be near. They will remind you when it's time to stand up and stretch.
Well, first, cats most likely do have intentionality. But mostly, why do you think intentionality is a requirement for Turing completeness?