MENACE (mscroggs.co.uk)
63 points by CarolineW on Oct 21, 2017 | 20 comments



Fred Saberhagen wrote a short story (Without a Thought) in which this little "computer" played a role. Published in 1963; I always wondered where the idea came from.


Also appears in the classic AI novel The Adolescence of P1 by Thomas Ryan.


That's right, I'd forgotten.

Still a good read . . . though I have to keep multiplying the numbers the author throws around for capacity (so many megabytes here and there) by about a billion, to keep from giggling. When the book was written, a megabyte was a really big deal.


What I usually tell myself is that maybe they changed the byte size in the future, so that a “byte” is no longer 8 bits.

(Historical note: A “word” is nowadays usually four bytes, i.e. 32 bits, but older mainframe computers had varying word sizes; 36 bits for a word was not uncommon.)


Sadly, P1 is set in the 1970s and has figures appropriate for IBM systems of that era. Near the end, an experimental cryogenic memory system is attached:

*

Testing indicated that the results of a live experiment would be a data storage unit with 87 percent addressability and sufficient structural integrity to allow a map of the "dead" areas to be constructed.

The interface being designed to link the cryostat to the 360/50 was geared to address just over 16 million bytes. It was understood that this capability would grossly understate the estimated final capacity of the cryogen. Modifications were under way to add eight more addressing bits to the computer. This would allow it to address over 4×10^9 separate locations, or in excess of 4 billion addresses. This was expected to be more in keeping with the ultimate capacity of the cryogen at 150 millimeters diameter by 400 millimeters high. A six-inch cylinder, sixteen inches tall.

*

The System/360 series started with 24 bits of addressing and moved to 32 bits later on, so we're looking at a whopping 4GB. Even if the storage system uses 32-bit words, that's just 16GB.
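
A quick check of the arithmetic behind those figures (plain JS, runnable anywhere):

    // 24-bit addressing: "just over 16 million bytes"
    console.log(Math.pow(2, 24));             // 16777216
    // eight more address bits: "over 4x10^9 separate locations"
    console.log(Math.pow(2, 24 + 8));         // 4294967296, i.e. 4 GB of bytes
    // if each address instead held a 32-bit (4-byte) word:
    console.log(Math.pow(2, 32) * 4 / Math.pow(2, 30), "GB");  // 16 GB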



Can someone help me understand why it has to look like this?

  for(var i=0;i<9;i++){
    for(var k=0;k<i;k++){
      for(var m=0;m<k;m++){
        for(var j=0;j<9;j++){
          if(i!=j && k!=j && m!=j){
            for(var l=0;l<j;l++){
              if(i!=l && k!=l && m!=l){
                for(var n=0;n<l;n++){
                  if(i!=n && k!=n && m!=n){
                    pos=""
                    dummy_moves=Array()
                    for(var a=0;a<9;a++){
                      if(a==i || a==k || a==m){
                        pos+="1"
                        dummy_moves.push(0)
                      } else if(a==j || a==l || a==n){
                        pos+="2"
                        dummy_moves.push(0)
                      } else {
                        pos+="0"
                        dummy_moves.push(s[3])
                      }
                    }
                    add_box(pos,dummy_moves)
  }}}}}}}}}


Using continue; could reduce the depth a bit, but it's still gnarly. https://gist.github.com/anonymous/f3607a28a4c8ad1322f13a2d10...


I have a ridiculous urge to refactor this.

It's enumerating all of the boards, legal or not. Judging by the letters, it's doing one player's moves at first, in a way that's trying to avoid repeats. If I were to do this, I'd have, well, menace_x and menace_o, and build move n from move n-1. Depth-first would also work, really, as we're talking about only 300-ish boards.
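
Something like this sketch, say (a hypothetical rewrite, not the project's actual code; it assumes the original's add_box(pos, moves) callback and bead count s[3], builds each move's boards from the previous move's, and ignores wins just like the original):

    // Hypothetical refactor: generate move n's boards from move n-1's,
    // instead of hand-rolling nested loops for each move count.
    function enumerateBoards(add_box, s) {
      var seen = {};
      var frontier = ["000000000"];   // move 0: the empty board
      for (var move = 0; move < 9; move++) {
        var mark = (move % 2 === 0) ? "1" : "2";   // player 1 moves first
        var next = [];
        frontier.forEach(function (pos) {
          var moves = [];
          for (var a = 0; a < 9; a++) {
            // beads only on empty squares, as in the original dummy_moves;
            // s[3] copied from the quoted snippet (the real code presumably
            // varies the bead count by stage)
            moves.push(pos[a] === "0" ? s[3] : 0);
          }
          add_box(pos, moves);
          for (var b = 0; b < 9; b++) {
            if (pos[b] !== "0") continue;
            var child = pos.slice(0, b) + mark + pos.slice(b + 1);
            if (!seen[child]) {   // many move orders reach the same board
              seen[child] = true;
              next.push(child);
            }
          }
        });
        frontier = next;
      }
    }

The dedupe table does the same job as the original's descending loop indices: each board gets added once no matter which move order produced it.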


This suggests a thought experiment for those worried about superhuman AI. If you add more matchboxes to this setup while teaching it to play something else, is it going to become intelligent eventually? Maybe at a billion matchboxes?

If that doesn’t seem believable, why be worried about machine learning algorithms running on digital computers instead of matchbox arrays?


> If you add more matchboxes to this setup while teaching it to play something else, is it going to become intelligent eventually?

No, because the design uses a different matchbox for every possible state of the game. If you taught it to play chess, a billion matchboxes wouldn't be enough to get four moves in.[1] Yet digital computers can play chess.
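
For a sense of the scale: tic-tac-toe has at most 3^9 board strings, while chess's game tree blows past a billion nodes within seven plies. The well-known perft node counts make this concrete (they overcount distinct positions, since transpositions repeat, but the ballpark holds):

    // naive tic-tac-toe bound: each of 9 squares is empty, X or O
    console.log(Math.pow(3, 9));   // 19683 (MENACE gets by with ~300 boxes)

    // chess game-tree node counts from the start position (known perft values)
    var perft = [1, 20, 400, 8902, 197281, 4865609, 119060324, 3195901860];
    perft.forEach(function (nodes, ply) {
      console.log("ply " + ply + ": " + nodes);
    });
    // ply 7 (three and a half moves) already exceeds 3 billion, so a billion
    // one-box-per-position matchboxes run out before four full moves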

On the other hand, if you switched to a completely different design that uses matchboxes to simulate an artificial neural network… a design that would require opening each matchbox on each time step…

…and pumped the number of boxes up to 100 billion (number of neurons in the human brain) or higher…

…and ran the simulation for, perhaps, trillions of time steps (need to simulate evolution, not just the operation of the brain during a single lifetime)…

…and changed the goal from playing a game to something that would require more general logical reasoning…

…then who knows, it might well become intelligent (though it's hard to be sure). But at that point I'd say the lack of believability has more to do with the difficulty of performing 100,000,000,000,000,000,000,000 matchbox operations, than the likelihood of achieving useful results if you did.

[1] http://www.bernmedical.com/blog/how-many-possible-move-combi...



This comment is addressing weak arguments as if they are strong ones. Of course you are not going to create an artificial general intelligence by taking a machine-learning algorithm and adding more processing power. That doesn't mean there aren't plenty of good reasons to be worried about AGI!

I mean, yes, I realize plenty of people seem to think that you can in fact do that, and it is worth pointing out to these people that they are wrong, but that is not an argument against the more general thesis that there is good reason to be worried about AGI.


I don't think you've presented a good argument that arbitrarily large amounts of computing power don't result in intelligence.


I think this is a sort of sleight of hand that preys upon people's inability to wrap their heads around large numbers.

I'm reminded of this classic xkcd: https://xkcd.com/505

It feels completely insane that our universe could be simulated by shuffling rocks around. And yet we know, incontrovertibly, that the rocks can do anything a modern CPU can. You just need a lot of rocks -- where "a lot" is not 1 billion, not even close. Scale is a helluva drug.


There is a huge gap between special-purpose automatons (which this matchbox array plus its operator essentially is) and general-purpose computers. You can pretend that a Turing machine is a universal computer, but in the real world any physical Turing machine would be incapable of doing anything outside its purpose.

The human mind is considered the most "generic" device in the world: the emergence of an equally generic AI, one that can handle any task with human-level generality and huge computing power, will shift the game towards AI dominance. It's like inventing an Oracle that predicts the best path to take: it would accumulate influence over our lives even if at its core it's just a dumb neural network with zero self-awareness and no "sci-fi sentience". Like a paperclip optimizer gone generic.

People seeking a Deus ex Machina will consider the advice of a computer more reliable than human advice, and will blame any failures on flaws in the algorithms and the human programming. Most people can't handle the idea that a program can be wrong by design, not by flawed intent in its programming. So such an Oracle would be considered a flawless machine that can do no wrong, reliance on "flawed" human experts would diminish, and we would end up living under the control of some weird AlphaGo-based paperclip maximizer.


> but in the real world any physical Turing machine would be incapable of doing anything outside its purpose.

If what the brain is doing is computational, then it's not doing anything more powerful than a Turing Machine could do. There would be some Turing Machine design that could do the same processing.


Sounds like the Chinese room thought experiment [1].

[1] https://en.wikipedia.org/wiki/Chinese_room#Chinese_room_thou...


The problem with that experiment is that you shouldn't be concerned with whether the man in the room knows Chinese, but with whether the room and the man as a system know Chinese.


Imagine the computer program is a neural network that has been trained on thousands of hours of Chinese video. If the human replicates the computer program, including the training phase, he may well have learned Chinese by the end.



