NES Tetris AI hits 102M points and level 237 (youtube.com)
197 points by zdw on Nov 27, 2021 | 74 comments



Meanwhile, human NES Tetris play is also getting unbelievably good, including a whole new method of 'rolling' your fingers across the back of the controller.

Here's a video of former world champion 'hypertapper' Joseph versus the newly ascendant 'roller' Huff, from the world championship a couple of weeks ago.

They both reach a million points (called a 'max out' because the unmodified game stops counting after that) at around 9 minutes into the video:

https://youtu.be/wELh6bO_ze0?t=9m


What does the finger rolling accomplish? I see one player doing it in the video, but it's not clear why. IIRC, there are no buttons on the back.


I was curious as well and found this video explanation[1]. The idea is that tapping on the back of the controller pushes it up into the player's thumb, which is resting on the D-pad, causing a button press.

[1] https://youtu.be/n-BZ5-Q48lE


A detail I didn't see in the other replies: Tetris technically lets you push the button every frame, so 60 moves per second would be optimal if you ignore the physical limits of the controller etc. With the oldest way of playing, called DAS (I forget what it stands for), the game starts repeating inputs really fast after a couple of frames, but you have to buffer inputs carefully to make pieces move the way you want efficiently. Rolling is the fastest way found so far to press the direction buttons and get as many presses per second as possible to better position pieces.


I think you mean 30 moves per second, as Tetris (like most games) doesn't count two consecutive frames of "button pressed" as two button presses, but a press-and-hold. Still faster than the 10Hz of DAS, but by necessity a full, separate button press requires at least one frame of "button pressed" separated by one frame of "button not pressed".
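
Quick toy illustration of why it caps at 30 (a sketch, not the game's actual input handling): a new press only registers on a frame where the button is down but wasn't down the frame before, so the densest possible pattern is down/up alternating every frame.

    # Sketch of per-frame edge detection; not NES Tetris's actual input code.
    def count_presses(frames):
        """frames: one bool per 60 Hz frame, True = button held down."""
        presses, prev = 0, False
        for held in frames:
            if held and not prev:  # rising edge = one distinct press
                presses += 1
            prev = held
        return presses

    one_second = [i % 2 == 0 for i in range(60)]  # fastest possible alternation
    print(count_presses(one_second))              # -> 30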


You're right, 30 per second. Been a while since I followed NES Tetris. Thanks for the correction.


Delayed Auto Shift. In NES Tetris the rate is 10 moves per second, so to beat it in speed you have to press the D-pad at more than 10 Hz.


They hold one finger over the desired button and the finger roll on the back pushes the controller into the first finger, generating multiple taps faster than you could with a single finger.


It's basically like bump firing but for controllers instead of guns. World-class revolver shooters also use this kind of method.


You rest a finger from the other hand on the button in front, so that rolling presses the button. That way you can use more than one finger to press, which allows faster pressing. The fastest rollers can hit the button around 20 times a second, whereas by holding the button down (i.e. manipulating DAS, the traditional technique) you can only reach 10 Hz.


Here's the world NES Tetris records leaderboard.

https://docs.google.com/spreadsheets/d/1ZBxkZEsfwDsUpyire4Xb...

Looks like Huff finally beat HydrantDude's 1.6 million.


What is Huff actually doing with his gloved right hand? Can’t really tell in the video.


Looks like he's flicking the back with multiple fingers one after another, so instead of pushing down on the dpad, he's pushing the controller into his left hand. I guess it's a faster way to multi tap it.


I wondered whether this was using a machine-learning-style AI that was, among other things, learning the state of the random number generator so it could accurately predict pieces more than one turn out. (And if not, how would you prove that it wasn't? Perhaps tweak the game RNG and see if the AI performs badly?)

Looking at the github repo, it looks like it's actually more of a classical AI doing traditional game tree search. There is some interesting code around the RNG, though: apparently the RNG does make certain piece sequences more likely than others, and there's a lookup table for the probability of the next piece given the current piece:

https://github.com/GregoryCannon/StackRabbit/blob/master/src...

I suppose one could extend this to a 3-dimensional lookup table with the probabilities of the next piece given the last two pieces, or to 4 or 5 dimensions, or (if you had infinite resources) 100. At some point you'd know enough to be able to predict the next piece with 100% accuracy.
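
For what it's worth, that generalization is just an order-k lookup table (a Markov model over piece IDs). A rough sketch of how you'd build one from an observed sequence; the piece history below is made up for illustration:

    # Rough sketch of an order-k lookup table; the observed sequence is made up.
    from collections import Counter, defaultdict

    def build_table(pieces, k=2):
        """Map the last k pieces -> Counter of which piece came next."""
        table = defaultdict(Counter)
        for i in range(k, len(pieces)):
            table[tuple(pieces[i - k:i])][pieces[i]] += 1
        return table

    def next_piece_probs(table, last_k):
        counts = table[tuple(last_k)]
        total = sum(counts.values()) or 1
        return {piece: n / total for piece, n in counts.items()}

    observed = list("TJZOSLIITJLOSZT")          # stand-in history of piece IDs
    table = build_table(observed, k=2)
    print(next_piece_probs(table, ["I", "I"]))  # {'T': 1.0} for this toy data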


The NES version is quirky, as your link points out. It's best described as an 8-sided die, where a roll of 8 or of the previous piece triggers a single reroll. The probability of the next piece being an "I" does not increase after a long drought.

Newer versions of Tetris deal random permutations from a 7-tetromino bag, so the longest possible drought between two "I" pieces is 12 tetrominoes.
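
A quick simulation of the NES scheme described above (simplified: the real reroll reuses the raw RNG bytes rather than a clean uniform draw) shows how mild the bias is; repeats come up roughly 3-4% of the time instead of the ~14% a memoryless 7-sided roll would give:

    # Simplified model of the "8-sided die with one reroll" description above.
    import random

    PIECES = "TJZOSLI"

    def next_piece(prev, rng=random):
        roll = rng.randrange(8)              # the 8-sided die
        if roll == 7 or (prev is not None and PIECES[roll] == prev):
            return PIECES[rng.randrange(7)]  # single reroll; its result stands
        return PIECES[roll]

    prev, repeats, N = None, 0, 100_000
    for _ in range(N):
        piece = next_piece(prev)
        repeats += piece == prev
        prev = piece
    print(repeats / N)  # ~0.036, i.e. about 1/28, vs 1/7 for a uniform roll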


If it is a pseudo-RNG then you can try an autoregressive seq2seq model and see how well it does.

If it is a true RNG, then good luck, it would be unpredictable by definition.


Anyone know what would be a good source of entropy on an NES system for a good RNG?


I'm guessing the only unpredictable thing available is the user, so things like input timings.
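
That's indeed a classic 8-bit approach: keep a pseudo-RNG (often a shift-register/LFSR) ticking every frame and let the player's unpredictable reaction time decide where in the sequence the game lands. A toy sketch of the idea; the taps and seed are arbitrary, not any particular game's routine:

    # Toy sketch of seeding from input timing; taps/seed are arbitrary.
    def lfsr_step(state):
        """One step of a 16-bit shift-register RNG."""
        bit = ((state >> 15) ^ (state >> 14)) & 1
        return ((state << 1) | bit) & 0xFFFF

    def seed_from_player(frames_until_first_press, state=0xACE1):
        # Tick once per rendered frame; the human decides how many ticks
        # happen before the game grabs the state as its seed.
        for _ in range(frames_until_first_press):
            state = lfsr_step(state)
        return state

    print(hex(seed_from_player(137)))  # a player pressing Start ~2.3 s after boot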


Me: Clicks on video, sees it's 25 minutes long. Thinks, "I'll watch a few minutes and see if it gets interesting".

Me, 25 minutes later: Dang, that was cool and a fun nostalgia trip!


The "Classic Tetris World Championship" is my favorite and personally the most entertaining e-sports championship I watch. I highly recommend it.


I love that there's something innate about our desire to do things like these. I caught the bug myself with C# a couple of years ago, on a project I made to solve Minesweeper (https://github.com/Loufe/GroundPenetratingRadar). Very much a "journey"-oriented and not "destination"-oriented endeavour.


ha! great name :)


Came for the Tetris, stayed for the named color palettes. At first I thought these were official names, but "Mexico According to Hollywood" kinda gave it away (level C8 or 182).


What a graceful way to handle levels beyond what you had the resources to include in the game. They could have wrapped around, but that would have been boring; they could have switched to random generation, but that would have required additional code. Instead, just let it slowly cycle through ROM and use whatever is there.

I'm not sure it was on purpose, but a nice solution.


That was really neat to watch. I started playing Tetris when it was first released and still play it on my Nintendo years later. My kids try but I can always beat their score and they think I am a master of the game. I can’t wait to show them this video. Very awesome to watch.


One of the few titles that are less click-baity than they could be. It reached the level that broke and froze the game...


I just played the SNES version of Tetris today before seeing this. Got 123 lines, which I considered a pretty good run. :)


What causes it to freeze where it does?


I've heard somewhere that the CPU didn't have multiplication built in, so calculating how much level bonus to add required a loop of additions. This was also an era where timing between frames was delicate, and if the housekeeping part of scorekeeping takes too long, weird things happen: it drops the ball on something else and seizes up.
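
If that's right, the cost scales with the level number; something like this in spirit (a sketch of the idea, not the actual routine, though the base point values are the standard NES ones):

    # Sketch only: with no multiply instruction, "base * (level + 1)" becomes
    # a loop, so the work grows linearly with the level number.
    LINE_CLEAR_BASE = {1: 40, 2: 100, 3: 300, 4: 1200}  # points at level 0

    def line_clear_points(lines_cleared, level):
        base, total = LINE_CLEAR_BASE[lines_cleared], 0
        for _ in range(level + 1):   # level 237 -> 238 passes of additions,
            total += base            # all within a ~29,780-cycle NTSC frame budget
        return total

    print(line_clear_points(4, 237))  # 285,600 points for a tetris at level 237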


I guessed that it was a math loop causing the slowdown, but I wonder what actually killed it. Naively I would have thought that you just missed a frame like many other games on the NES (see SMB1). It is surprising that a missed frame actually kills the game.


Maybe an integer overflow somewhere? Either that or the NES is just too bored at this point and cries uncle.


"cries uncle"? I am not a big fan of idioms but this one is bad. What does it mean?

Edit: keep piling on me, guys. That'll teach me to like obscure idioms


It's intentionally obscure; it's basically a safeword for young kids who enjoy torturing each other in one way or another. You're getting piled on for being weirdly critical about something you clearly don't understand.


It’s where your bigger cousin is giving you a noogie and you want his dad to help.


I always understood the usage of "to cry uncle", but never understood the origin until you described it this way. I guess, being the oldest cousin in my family, and not being the kind of person to beat up my younger cousins, it never came up.


I don't think that's the actual origin, Wikipedia says it likely comes from a 19th century joke about a parrot. https://en.wikipedia.org/wiki/Say_Uncle


from the Iowa Citizen of 9 October 1891:

> A gentleman was boasting that his parrot would repeat anything he told him. For example, he told him several times, before some friends, to say “Uncle,” but the parrot would not repeat it. In anger he seized the bird, and half-twisting his neck, said: “Say ‘uncle,’ you beggar!” and threw him into the fowl pen, in which he had ten prize fowls. Shortly afterward, thinking he had killed the parrot, he went to the pen. To his surprise he found nine of the fowls dead on the floor with their necks wrung, and the parrot standing on the tenth twisting his neck and screaming: “Say ‘uncle,’ you beggar! say uncle.’”


It's not a particularly obscure idiom (lots of pop culture tv shows make reference to it), and calling it "bad" makes zero sense.


You know it's an idiom, and refuse to just Google it?



To admit defeat


He says earlier in the video it's building up memory pressure, so it's some sort of (obviously unconsidered) garbage collection issue.


Garbage collection on an NES? AFAIK lots of (most) games were written in asm, not in very high-level languages.


Going a bit off topic here, but there's no reason you couldn't use a garbage collector to manage heap memory for your assembly language program.

On the NES, such a memory management approach would be particularly insane.


Yes, that's what I meant.


The NES contains 2 KB of RAM. Although you can extend that in the cartridge, I doubt games use garbage collection.

Why would you anyway? It's such a basic, predictable game. There's a fixed-size playing field that contains only one fixed-size moving object.


Better would've been to suggest a memory leak

But the level being near 256 suggests an integer overflow


I love to see all the Nintendo preservation, enhancement and AI research ;)

What's the best way to programmatically interface with an NES ROM in 2022? JSNES, from which you could run tensorflow.js, seems perfect for browsers. But nes-py integrates with OpenAI Gym environments:

https://github.com/Kautenja/nes-py/wiki/Creating-Environment...
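
If memory serves, nes-py then gives you the usual (old-style) Gym loop, roughly like this; the ROM path and action list are placeholders, and rewards come from subclassing NESEnv and reading RAM, as the wiki above describes:

    # Rough sketch from memory; ROM path and action list are placeholders.
    from nes_py import NESEnv
    from nes_py.wrappers import JoypadSpace

    env = JoypadSpace(NESEnv('Tetris.nes'),
                      [['NOOP'], ['left'], ['right'], ['down'], ['A'], ['B']])

    state = env.reset()
    for _ in range(1000):                             # short random rollout
        state, reward, done, info = env.step(env.action_space.sample())
        if done:
            break
    env.close()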


Gym-Retro works up to Python 3.8 at the moment.

https://www.youtube.com/watch?v=sgEIoOQgjFg shows how to integrate an AI using gym-retro and stable-baselines with pygame, so you can fight it.
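
Once a game is integrated, the gym-retro side looks about the same (this is close to the example in its docs; the bundled demo game is shown, and other ROMs have to be imported with "python3 -m retro.import" first):

    # Close to the gym-retro docs example; Tetris itself would need its ROM
    # imported and an integration defined first.
    import retro

    env = retro.make(game='Airstriker-Genesis')  # demo game bundled with gym-retro
    obs = env.reset()
    for _ in range(1000):
        obs, reward, done, info = env.step(env.action_space.sample())
        if done:
            obs = env.reset()
    env.close()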


I didn't know the move at 21:25-21:26 was possible. The player rotates a Z piece from vertical into its final position. If you were playing with physical blocks, this rotation wouldn't be possible.


Yeah, Z-spins give bonus points in some later variants of Tetris.


In standard / official versions of the game they (including S-, L-, and J-spins) don't give bonus points (only T-spins do) and just count as normal line clears.


Ah, yes I was thinking of T spins. My bad.


These are called twists: https://harddrop.com/wiki/Twist


There's an interesting page about Tetris AIs: https://web.archive.org/web/20081101211903/http://www.colinf...

It would be interesting to see how the algorithm featured in the video compares to those described on the linked page, but it would need a different runtime, as the NES crashing at just over 3,000 lines is a few orders of magnitude too short.


I’m not sure why this is called an AI. I feel like a deterministic algorithm to play Tetris in an efficient way is not super complicated to write.


> It's not AI if I can understand what it's doing

- found on the internet, 2021


Not what I said or meant at all. I’m arguing about the proper use of the “intelligence” noun here. Does this really show a “capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. ”(Wikipedia)? I don’t think so. All I see here is a simple list of steps that a machine keeps on repeating.


We have used "AI" to denote the computer player in games for as long as I've been alive, and I'm pushing my late 30s. This is not new ground, and I'm a bit surprised you're choosing this hill to make your stand.

e: This is precisely why there's the distinction of "artificial general intelligence"


It's not strange; it's precisely because AI is used as a term everywhere now that we should be skeptical of using it that way.


“Everywhere now”? The title’s usage of “AI” has been common even before my parents were born: https://en.m.wikipedia.org/wiki/Artificial_intelligence_in_v...


There is the intelligence required to play the game and the intelligence required to learn it. As long as the activity of playing Tetris can be labeled as relying on intelligence, then the machine can be labeled as an excised expression of said intelligence.


You're right. In my little joke I implied something other than what you said. It's a low-to-middle-complexity piece of control software.


That's a strange takeaway from this.

Ecco simply missed the fact that the Tetris AI used here is actually exactly that: a deterministic algorithm that uses heuristics to search for the best placement options.

This is not deep learning, in case you're wondering.
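
For the curious, the overall shape of that kind of search is: enumerate every (rotation, column) for the current piece, score each resulting board with a handful of hand-tuned features, and keep the best. In outline (the weights, features, and placements() helper here are illustrative stand-ins, not StackRabbit's actual evaluation or move generator):

    # Illustrative only: weights, features, and placements() are stand-ins.
    ROWS, COLS = 20, 10

    def evaluate(board):
        """Score a ROWS x COLS grid of 0/1 cells; higher is better."""
        heights = [next((ROWS - r for r in range(ROWS) if board[r][c]), 0)
                   for c in range(COLS)]
        holes = sum(1 for c in range(COLS) for r in range(ROWS)
                    if not board[r][c] and any(board[r2][c] for r2 in range(r)))
        bumpiness = sum(abs(heights[c] - heights[c + 1]) for c in range(COLS - 1))
        return -0.5 * sum(heights) - 3.0 * holes - 0.2 * bumpiness

    def best_move(board, piece, placements):
        # placements(board, piece) yields (move, resulting_board) pairs for
        # every legal rotation/column; keep whichever board scores highest.
        return max(placements(board, piece), key=lambda mb: evaluate(mb[1]))[0]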


https://github.com/GregoryCannon/StackRabbit if you'd like to test your casually flippant statement.


All right then. Where's yours?


Didn't even know there was Tetris on the Famicom. I thought Bejeweled and Dr. Mario had the niche covered.

Honestly, I am not super impressed. Where are the Quake and StarCraft bots? DeepMind hasn't shown anything interesting on that front after their last bot got smacked repeatedly by a couple of low-tier European players.


I don't know if it was released for the Famicom, but Tengen (Atari) had also released their version of Tetris for the NES, which hews closely to the coin-op Tetris. It was famously removed from the market because of licensing clarifications: Tengen didn't have the license to release Tetris on home gaming platforms, like they thought they did.


The point of the StarCraft bot was never to be able to beat humans; it was about getting headlines and moving on. To be honest, getting good at StarCraft isn't really an interesting problem for AI to solve: you can probably do it if you throw enough GPUs at the problem, but you learn pretty much nothing in the process.


I'm skeptical of your claim that StarCraft is not an interesting problem for AI.

For many problems, the AI is given as much information as possible about the state of the system to make decisions. The AI knows the entire state of the game in Tetris or Go or chess.

StarCraft/RTSs are designed specifically to withhold information from the player and force you to balance info gathering, building, and countering new information, which is a massive difference from the currently impressive AlphaGo and such.


Strongly disagree. They were clearly pushing very hard right until their precious bot was defeated repeatedly by - again - a low-tier European "pro".

I still remember the principal engineer (somewhat curly hair and glasses) looking extremely pissed after the matches. Funny how they switched priorities right after that. 100% completely unrelated, as you say.

Also, as NikolaeVarius pointed out, SC is a game of imperfect information. If that's not interesting from an AI perspective, I don't know what is.


> starcraft isn't really an interesting problem for AI to solve

I completely disagree. This is much more interesting than the toy problems that most academics work on.

RL problems in this space are inherently difficult since you only get 1 bit of "ground truth" information (win/loss) at the end of a game and have to guess which of your actions contributed to that outcome. Sure, you can throw hardware at the problem, but you've got a huge incentive to minimize costs.

It’s only recently that the default consensus for RL problems changed from “this is an inherently creative endeavor and needs human input” to “you can probably do it if you throw enough GPUs at the problem”.


I'd say SC2 is still in the "inherently creative" part. Even with the very interesting AlphaStar League partially solving the self-play diversity and exploration problem, it's not at all obvious that just sinking in another 10-100x TPU-time into AlphaStar would patch up all of the errors and poor strategy AS exhibited - in fact, I'd bet heavily against it, since spending a large-but-still-feasible amount of compute doesn't appear to have fixed OA5 or AlphaGo's similar-looking problems, but AlphaZero was required.

Now, applying MuZero might be powerful to solve SC2 with much more compute, but that is still not something many DRL researchers would give you large odds on, and you still probably need to come up with some ways to abstract it and condense the game tree into something tractable for planning over, especially with the partial information making modeling the future much harder. (VQ-VAE and related generative modeling approaches are promising in this regard... but still no slam dunk. If DM announced tomorrow that MuZero+VQ-VAE had defeated AS and a bunch of human pros, people would be extremely shocked, even as they scrambled to tweet about how they saw it coming all along & also are very offended by the waste of CO2 and how unreplicable it is on a grad student budget.)


Name a system that Tetris hasn't been ported to. Maybe the LaserActive?



