Is my cat Turing-complete? (belaycpp.com)
286 points by zdyn5 12 days ago | 88 comments





The other day there was a wasp going in and out of my room on the second floor by the window. I closed the window because it was annoying me.

The wasp found an entry through another open window in another room on the second floor, the room next door. So I just closed that other window as well.

Again, it found an entry through another window. But this time it started to impress me: it came from a room across the corridor! The wasp flew to the back of the house, found an open window, entered it, and mapped the path through the corridor to my room.

I then closed all the windows on the second floor of the house and bet it was over. I was wrong: the wasp came in through the front door, flew upstairs and found the room. It was even capable of retracing the path in reverse so it could come in and out of my room.

I closed all the doors and windows on the first floor, but forgot to close the small bathroom window. This was enough for the small wasp to map an entirely new path to my room again, almost immediately!

Once it entered my room again, I opened my window again. It was smart enough to switch to using it, as if it were simply choosing the shortest path.

So, even the small brain of a wasp is able to run "computer vision" algorithms and fast "machine learning" algorithms. It is even able to run A* perfectly and merge input from various sensors to perform superb navigation, using very little energy, producing almost no heat, in a very lightweight package, in real time, with very low latency!

I'm glad this critter is small. Nature is scary.


The wasp might be using smell to figure out the way into your house; any odor would come out of any open window.

It might be using A* then :P

Exactly! The intensity of the smell is a perfectly fine consistent heuristic.
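
For anyone who wants the joke made concrete, here is a minimal, hypothetical sketch of A* where the heuristic is meant to stand in for "smell intensity"; the names (rooms, smell) are illustrative and not from the thread:

  # Minimal A* sketch: nodes are rooms, edge costs are flight distances,
  # and the heuristic plays the role of "smell intensity" (stronger smell
  # = closer to the goal = lower estimated remaining cost).
  import heapq

  def a_star(start, goal, neighbors, heuristic):
      # Each frontier entry: (estimated total cost, cost so far, node, path)
      frontier = [(heuristic(start), 0, start, [start])]
      best = {}                      # cheapest known cost to reach each node
      while frontier:
          _, cost, node, path = heapq.heappop(frontier)
          if node == goal:
              return path
          if node in best and best[node] <= cost:
              continue
          best[node] = cost
          for nxt, step in neighbors(node):
              heapq.heappush(frontier, (cost + step + heuristic(nxt),
                                        cost + step, nxt, path + [nxt]))
      return None                    # every window is closed

Consistency here just means the smell estimate never drops by more than the actual flight cost between neighbouring windows, which is what guarantees A* returns a shortest path.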

What if I told you it wasn't the same wasp?

One wasp transmitted a map of your house to another, and then another one!


Your comment may sound like a joke but bees are known to transmit direction and distance to one another and are also able to solve complex problems:

  https://www.popsci.com/science/article/2010-10/bees-beat-computers-ability-solve-complex-math-problem/

  https://en.wikipedia.org/wiki/Waggle_dance

I'm a beekeeper, and one of the most profound things to show people when they (anxiously) look over my shoulder is this "bee waggle".

It is clear and easy to see (easier than that d* b* of a queen when you need her, most often), and it tells a lot about the cognitive abilities of a social group, the hive. How they can give each other complex directions by twerking.

Kids love it too, and will often play around, wiggling their behinds (a cousin called it bee-twerking) and giggling about how they are showing each other where the sweets are by dancing.


And as per usual, there's a Futurama episode covering the bee dance: the one where Bender, with his fantastic dancing skills, goes out to communicate with space bees.

Yep, it was intended as a joke, halfway between a reference to The Matrix and the well-known joke about the Scottish black sheep: https://www-users.cs.york.ac.uk/susan/joke/3.htm

Of course bees can communicate with each other, and probably wasps can too.


Maybe it spoke with the same accent

It had a red mohawk.

Was it a paper wasp? They are so cute and friendly.

This little one didn't mind a bit when I put my macro lens an inch or two away, and even did some tricks to show off!

https://geary.smugmug.com/Macro/i-VsCPgMS/A

https://geary.smugmug.com/Macro/i-3xN7pMk/A


Can't tell, but looks similar.

Genuinely thought the last line in this was going to be "and then I found a wasps' nest in the room next door!"

> So, even the small brain of a wasp is able to run "computer vision" algorithms and fast "machine learning" algorithms.

Now I'm actually curious about the efficiency of the computation. If we could somehow map neurons to transistors (i.e. map the logical calculations/algorithms, not the structure, and not simulate neurons with transistors), which would be more efficient, the brain or the hardware?


You may have already, but you'd enjoy reading 'Conscious Mind, Resonant Brain' by Stephen Grossberg.

Thanks for the tip, I'd never heard of it. I hope you found my comment in a similar spirit, or at least with some closely related ideas.

You might want to consider the “screen” algorithm :-)

Could there be a wasp nest outside your window or worse inside your room?

I was almost out the door to pet shops and animal shelters so I could build a 16 cat-core setup over Thanksgiving. Then I figured I should at least read the full article before going through the trouble. It's a good thing too. Now I know that I should probably go for 64 cat-cores and treat it more like a quantum computer with the potential for rapid decoherence.

Like Searle's Chinese Room, but with a cat in it, where observing it or asking it to do anything at all necessarily yields nothing, because it's a cat. Searledinger's Catbox?

But more generally I'd ask: if you can train a conscious animal to execute the 8 brainfuck instructions (+, -, <, >, etc.), could it also develop a writing system and express higher-order language concepts? Asking why animals don't write assumes they can't, but maybe we just need some kind of intermediate symbolic phoneme concept. Like BF for computation, there may be some basic unit instructions that you can combine and parse into anything, and if a border collie can distinguish words for hundreds of different things, I wonder whether what we're missing is a set of metaphors that represents this underlying core instruction set.

Things like musical notation and knitting stitch patterns are also thought to be Turing complete, which implies that the representation doesn't matter for computation, and computation is sufficient for representing most things. So the idea of finding some underlying instruction set for producing languages across species seems both fantastic and loosely plausible.
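
To make the "8 brainfuck instructions" point concrete, here's a minimal sketch of the whole machine the executor (cat, collie, or CPU) would have to run; the only liberty taken is stubbing out input (','):

  # The entire brainfuck instruction set: > < + - . , [ ]
  def run_bf(code, tape_len=30000):
      tape, ptr, pc, out = [0] * tape_len, 0, 0, []
      # Precompute matching brackets for [ and ]
      stack, match = [], {}
      for i, c in enumerate(code):
          if c == '[':
              stack.append(i)
          elif c == ']':
              j = stack.pop()
              match[i], match[j] = j, i
      while pc < len(code):
          c = code[pc]
          if c == '>': ptr += 1
          elif c == '<': ptr -= 1
          elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
          elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
          elif c == '.': out.append(chr(tape[ptr]))
          elif c == ',': tape[ptr] = 0          # input stubbed out
          elif c == '[' and tape[ptr] == 0: pc = match[pc]
          elif c == ']' and tape[ptr] != 0: pc = match[pc]
          pc += 1
      return ''.join(out)

  # e.g. run_bf('++++++++[>++++++++<-]>+.') returns 'A'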


There are several YouTube channels where people gradually train cats and dogs to use boards full of buttons (20+ buttons in some cases) to communicate their wants. It definitely seems to work in general but doesn't make the animals any less themselves... one of the funnier examples being a video of a cat repeatedly pressing the "outside" button, staring at a yard full of snow, and then stalking back and pressing the "mad" button several times. Simple compound phrases (for example, "play friend" or "love grandma") are fairly common, but any kind of actual syntax is basically nonexistent.

All of the videos of these kinds I've seen were clearly fake.


I'm sure you could train dogs, monkeys, or parrots to reasonably reliably perform logical operations. From that you could build a simple processor, but the error rate might be too high for it to scale much. It's not so much getting them to do the tasks, it's getting them all to reliably do them at the same time in synchronisation.

Of course as with the guy in the Chinese Room that doesn't speak Chinese, none of the individual animals would have any idea what the overall system was doing.


There's a classic Charles Stross short story where, to spoil a little of it, some old men apparently "randomly" playing a board game to pass the time are performing combinatorial magick to fight a demonically possessed... well, I don't want to spoil the entire story. The background of the story is that magick is real and based on math and computer logical operations and such. cstross used to post here occasionally and worked as a sysadmin in the olden days.

Anyway, aside from fantasy/sci-fi stories, the interesting analogy with the story is that just because I'm not smart enough to debug a system given no blueprints or explanation does not mean an anthill is not running some novel and possibly interesting computational problem. Or a bees' nest, or giant fungal mycelia in the dirt, or herds of lobsters.

There are randomness detection and evaluation algos (distantly related to compression algos), and now that we have big data I suspect the coming decades will see those algos rubbed up against it, and we'll discover interesting things about hive insects and what the hive is thinking, as opposed to what individual insects are thinking. Or run it on dolphins, or on ancient human dwelling architecture.
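
As a rough illustration of the compression connection, one of the simplest randomness-evaluation tricks is just to compare compressed size to original size; here's a small sketch using zlib as a stand-in:

  # Structured (non-random) data compresses well; data that looks random
  # barely compresses at all, so the ratio is a crude randomness score.
  import os, zlib

  def compressibility(data: bytes) -> float:
      """Compressed size / original size: low for structured data, ~1 for noise."""
      return len(zlib.compress(data, 9)) / len(data)

  print(compressibility(b"meow " * 1000))    # highly structured, ratio << 1
  print(compressibility(os.urandom(5000)))   # looks random, ratio close to 1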


Can you share the name of that short story?

Found it: "Down on the Farm"

https://www.tor.com/2008/07/20/down-on-the-farm/

And it was an "IBM 1602" although pretty obviously based very closely on the actual 1620.

The real-world 1620 was a '50s-era RTL scientific computer (IBM used to separate math processors from business processors, until the famous "360" that did everything pretty well). It used the SMS-style hardware cards that everything IBM made used until the 360-era SLT cards. Each SMS card holds enough circuits to be roughly one 7400-series TTL chip.


Not sure of the exact short story he's referring to, but it must take place in his Laundry Files series

https://en.m.wikipedia.org/wiki/The_Laundry_Files

He also has another series, which he just finished and which is very good as well, about a family with the ability to step between different timelines; it's an interesting deconstruction of the trope.

https://en.m.wikipedia.org/wiki/The_Merchant_Princes

********Spoilers*******

.

.

.

While it starts as a fantasy series, it eventually comes out that their magic abilities are remnants of advanced nanotech from an interdimensional empire. Apparently a publisher he didn't like working with had the rights to his next sci-fi series, so he had to start it as fantasy to get around the contract.


> none of the individual animals would have any idea what the overall system was doing

Strike "animals" out, and it sounds like any worker in a big enough org. (Maybe I should stop reading Kafka)


> but the error rate might be too high for it to scale much

That just means you need a redundant array of parrots operating in parallel, with a regulator to detect different answers to the same problem and retry.
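
A hypothetical sketch of that regulator, with ask_parrot() standing in for any noisy gate (all names are made up, not from the thread):

  # Redundant array of parrots: ask several unreliable evaluators the same
  # question in parallel, take a strict-majority vote, and retry on a tie.
  from collections import Counter

  def reliable_and(a, b, ask_parrot, replicas=5, max_retries=10):
      for _ in range(max_retries):
          votes = Counter(ask_parrot(a, b) for _ in range(replicas))
          answer, count = votes.most_common(1)[0]
          if count > replicas // 2:
              return answer
      raise RuntimeError("parrots could not reach consensus")

With an odd number of replicas and a per-parrot error rate below 50%, majority voting drives the overall error rate down exponentially in the number of replicas, which is the only reason this could scale at all.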


> 64 cat-cores ... rapid decoherence

I'm glad you figured out that 64 cat-cores is basically pure chaos. Well, chaos for you; for the cores it's just their way of life: do random stuff for no apparent reason. Which is like the exact opposite of a computer.


Yes, if I can't get the system up & running for computational tasks, I figure I can always pivot the business model and sell its output as a world-class entropy source. My only problem is the lack of a moat around the model since anyone with a steady supply of cats, automated laser pointers, and a few hundred $/month in kitty kibble can reproduce my setup and make the whole thing a race to the bottom on price. I think at that point I could turn my accumulated knowledge into an excellent cat-entropy system servicing business, fine-tuning and optimizing other people's setups, advising on breed mixtures, laser speed, etc.

The business seems fraught with hairy (furry?) problems.

The laser system has promise, but you need a source of random data to drive the movement of the laser pointer (maybe strap them to other cats?). Otherwise, an attacker who can predict the movement of your pointers may be able to predict the output of your entropy stream after observing the behavior of your cat colony (certain cats may prefer to hunt at different times of the day, or prefer to lie in wait and pounce vs. giving chase, etc.).

Alternatively, you could observe a field (bed) of cats and record their chirality, assigning 0 to sinistral cats and 1 to dextral cats. The science[1] seems to indicate no preference for one or the other per cat, so your readings should be random. However, you may only observe one or two bit-flips per hour and during transitions it may take some time for the system to settle back down to a steady-state.

[1] http://blog.dimview.org/math/2017/07/28/cat-chirality.html
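
If the per-cat coin turned out to be slightly biased after all, a classic von Neumann extractor would clean up the stream, at the cost of discarding even more of the already scarce bits; a sketch, with read_cat_bit() as a hypothetical chirality sensor:

  # Von Neumann debiasing: read raw bits in pairs; 01 -> 0, 10 -> 1,
  # discard 00 and 11. The output is unbiased as long as the raw bits
  # are independent, but at least half of them get thrown away.
  def debiased_bits(read_cat_bit, n):
      out = []
      while len(out) < n:
          a, b = read_cat_bit(), read_cat_bit()
          if a != b:
              out.append(a)
      return out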


> lasers

Good point. Would strapping a laser pointer to the back of one of the cats work to seed the system? After that I could divert some output into g-code movement instructions to the automated lasers.

One thing I'm unsure of is room size. I figure I could map 1/0 to each tile in a grid based on the presence of a cat, but that means with too large or small of a room I'll have varying degrees of bit density. Stakes are pretty high too, since I'm pretty sure that putting cats in the mix might allow me to exceed the theoretical maximum of 1 bit of entropy per bit.


You could also buy just two, then wait a while, and through some inexplicable magic you'll have 64 cores of cat.

I've heard that in order for this to work it is important to ensure that one of the cats is the Model M version and the other is a Model F version.

Equally important, you must respond “No” when asked whether you want the “Neutering” feature enabled. This applies to both cats.


But be aware the resulting catsplosion will grind the simulation to a halt at some point

Are these the big/little cores I've been hearing so much about?

Ah yes, the famous "Meow's Law" of kitty density doubling.

So, the 'cat-bit' is worth 1 if the cat is alive and 0 if the cat is dead? Now I understand why quantum computing requires so much funding...

Eh, cats are cheap. Most of the money is spent on drugs and liquor. Quantum computing is well known for its party atmosphere.

Going for the 5dm or 3dm node size?

So the crazy cat ladies are really hardcore compute nerds that are just letting their cores rest?

They're trying to use the unique nature of cats to exceed the theoretical maximum of 1:1 bits of entropy. This would create an entropic contagion event that would spread a heat death out into the universe in all directions.

Cat ladies are evil nihilists.


Apple makes their own cats nowadays.

There's the makings of a science fiction novel in there, somewhere.

Do these cat-cores need cryogenic cooling?

Cats are a superset of Turing completeness. Cats are "undefined behaviour" https://bugs.launchpad.net/ubuntu/+source/unity/+bug/1463112

> To replicate: In unity hit ctrl-alt-l, place keyboard on chair. Sit on keyboard.

I would recommend using a cat if available - otherwise, depending on the chair and on the weight of the tester, this might lead to permanent damage to the keyboard. I would definitely refrain from doing this if said keyboard is attached to a laptop.


Thank you so very much. This just landed in my bookmark list of great bug reports that made my day.

I really like the way the community dealt with this.

(Disclaimer: My SO and I are also a foster home for cats. The cat behavior described is relatively normal, especially for one cat model currently "in use" here.)


I think my favourite part about the bug report is that it was closed as a duplicate!

> This bug report is a duplicate of: Bug #1538615: Cat causes login screen to hang.


Chaos Cat - the final QA tester. Ensure that your service can survive it.

Chaos Cat, a.k.a. the fluffy fuzzer...

> Though they can, maybe cats are not designed to execute code after all?

I'll test this with my cat named Turing this weekend to see if he gives better results.


> one of the best ways to avoid misuses of a feature is to perform these misuses once, in my opinion

I love this, because I'm convinced this is how we all learn.

That's how my cats learn too, by the way. Good thing they have nine lives, because the number of pans, doors, cupboards, and pieces of firewood that have fallen on top of them just for the sake of "I want to know if I can misuse this new pile" is staggering.


Really did think this was going to be about the Unix command 'cat' being Turing complete, which would make sense.

The ops don't seem to compose though.

Cats are smart critters, so I think it would be possible to train one to do a set of operations that's actually Turing complete. Cat behaviourists and comp sci folks could work together on this.


There is a guy in Russia who trained cats to do all kinds of circus tricks. Going from personal experience, a cat would be better suited to getting a human to do the calculation for the cat than the other way round. If said cat is bored enough to bother trying, that is.

I tried to repeat this experiment and my cat had a Segmentation Fault

You mean an undefined period of Sleep()?

You can expect a core dump.

Only if a sandbox is available for that routine

CATs are implemented in a Turing-complete language called Chialisp: https://chialisp.com/docs/puzzles/cats/

What the author calls cat computing I like to call crab computing. I think crabs are a better analogy because crabs have successfully been used to perform logic operations in a lab and cats have not.

Warning, don't click if you need to be productive. https://www.reddit.com/r/crabcats

And if you don't like Reddit's new interface: https://old.reddit.com/r/crabcats/

It's apparent that the author of this article has independently discovered the field of Catputation. However, this field has a long and rich hissssstory, which I'll try to summarize as time allows (or until I'm bored or see something shiny).

FIRST: The great potential for Catputation has been recognized since pre-hisssstory, by the early innovators known as Egyptians who awed at the great possibilities and awww'd at the great paws-abilities.

In those days, Catputers were primarily used to perform primitive functions, such as alerting household members of impending dawn (known to those in the field as "the early-morning zooomies") and high-frequency generation before electronic-oscillators were discovered (known to those in the field as "ddawww, the widdle kittdy is purrrring!").

Unfortunately, Catputation was limited in those days to largely clumsy, analog tasks. For example, Catputers had difficulty playing nicely with some peripheral systems, e.g. mice. It wasn't until much later that [the first digital-logic gate was invented, allegedly by Isaac Newton](https://en.wikipedia.org/wiki/Pet_door#History), paving the way for more advanced systems.

However, as we all know today, cats are fundamentally clawtum-mechanical creatures.

Early on in clawtum-mechanical discovery, [Bose and Einstein were studying Dogputation](https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_statisti...). Dogputers have quanta which can share the same space, [forming a stack](https://en.wiktionary.org/wiki/dogpile) (known to those versed in the art as "dawwwww, look at dey widdle puppehs!"). This would later lead to [online-enthusiasts stockpiling Dogputation](https://en.wikipedia.org/wiki/Dogecoin).

However, in trying to form a stack for Catputation, Pauli made two simultaneous, serendipitous discoveries. First, they discovered that [kitties shouldn't be stacked up in the same place](https://en.wikipedia.org/wiki/Pauli_exclusion_principle). And secondly, that violating this newly-discovered rule can result in many little bytes forming a larger unit (a kill-0-byte). Together, these discoveries led to [an initial theory of clawtum-Catputation](https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac_statistics).

Progressing forward, optical computing became all the rage, and it was discovered that [optical-signaling interconnects were effective in facilitating high-speed Catputation](https://www.youtube.com/watch?v=aLagODygcbs).

There's more hisssstory to be told, though that's it for now!


Cats do not run optimal algorithms:

http://www.threepanelsoul.com/comic/on-a


Wow, that took me on a quick tour from "hey, I remember when that comic was running" through "hey, that comic is running again" to "hey, turns out that comic never stopped running?"

That cat looks a lot like my Ashley, who died nearly 14 years ago but is still missed. White chest patch, white toe tips, and all.

Does it have an infinite tape?

People keep forgetting you need infinite memory to be Turing complete. On a machine with finite memory, all operations are O(1).

Almost, but there are still examples of non-termination, unless I am misremembering a definition (e.g. 20 GOTO 10).

All operations are O(1)?

The busy beaver function of (a finite number) is hilariously big, but it is a constant. So there is a constant bound on the running time of all terminating programs that use finite memory.
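
A minimal sketch of why such a bound exists, assuming a deterministic machine exposed through a hypothetical step() function: with only finitely many configurations, any run that outlasts that count has repeated a configuration and will never halt.

  # Halting is decidable for a finite-memory machine: run it for at most
  # num_configs steps; a terminating run never repeats a configuration,
  # so if it hasn't halted by then, it never will.
  def halts(step, initial, num_configs):
      """step(config) returns the next config, or None once halted (hypothetical API)."""
      config = initial
      for _ in range(num_configs):
          if config is None:
              return True
          config = step(config)
      return config is None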

I'll believe a cat is Turing complete when someone ports Doom to a cat

Honestly horses and carts are pretty good autonomous vehicles.

Why do programmers like cats so much?

-- Easy maintenance. They need food and cuddles, but they don't need to be let out to potty ten times a day or require excess walks.

-- They have a remarkably complicated inner life. This appeals to creative types, who also often have a lot going on mentally.

-- They are conducive to the lifestyle, apart from wanting to stand on my keyboard. They will sit in the cat tree or on my lap for a long time, just wanting to be near. They will remind you when it's time to stand up and stretch.


The null hypothesis here is that everybody likes cats.

Indeed. Cats and dogs are part of endless marketing materials for a reason. Their appeal is about as generic as it gets (something something everyone loves babies).

Cats aren't Turing complete because they don't have intentionality. Everything the cat was doing was a result of instinct.

Not much gap between instinct and intention. You could say instincts are default intentions selected by evolution.

> Cats aren't Turing complete because they don't have intentionality. Everything the cat was doing was a result of instinct.

Well, first, cats most likely do have intentionality. But mostly, why do you think intentionality is a requirement for Turing completeness?


You might argue that animals don't have intentions (especially if you've never observed one closely). However, it seems difficult indeed to believe that a cat has less intentionality than an abstract Turing machine -- or your desktop computer, for that matter.

Actual Turing machines (okay, finite state approximations of them, which are the only ones that can physically exist...) have externally imposed intentionality.

Haha, awesome, well played getting the front page on HN!


