I quite like the presentation format of his online lectures (example: [1]): three synced panels, (1) video of him explaining / board work, (2) notes, (3) slides. When he changes topic, the notes and slide panels advance as well. And I think this has been around for ~10 years now.
The definition of "Standard Tetris" is an idealized model of the most important characteristics and behaviors of the first IBM-PC implementation of the Tetris game (circa 1986-1988).
The idealized model is based upon inferring the apparent intentions of the developers of the first IBM-PC implementation of the Tetris game.
For example, it seems reasonable to infer that the developers of the first IBM-PC implementation of the Tetris game intended to select the shape of each new falling piece "randomly", and that the use of the Borland C implementation of the rand() function was merely a practical approximation of the intention. The definition of "Standard Tetris" specifies that the shape of each new falling piece is to be selected "randomly". This ideal behavior cannot be achieved by any implementation, but implementations can approximate the ideal behavior.
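For concreteness, here is a sketch of the linear congruential generator usually attributed to Borland C's rand(), with piece selection as the early PC port plausibly did it. The multiplier/increment constants are the ones commonly cited for Borland's runtime; treat them, and the `% 7` selection, as assumptions for illustration, not something verified against the original 1986-88 binary.

```python
class BorlandRand:
    """LCG commonly attributed to Borland C's rand().
    Constants are the usually cited ones (an assumption, not verified
    against the original Tetris binary)."""

    def __init__(self, seed=1):
        self.state = seed & 0xFFFFFFFF

    def rand(self):
        # 32-bit linear congruential step; rand() returns bits 16..30.
        self.state = (self.state * 22695477 + 1) & 0xFFFFFFFF
        return (self.state >> 16) & 0x7FFF

    def next_piece(self):
        # Hypothetical piece selection: rand() mod 7.
        return "IJLOSTZ"[self.rand() % 7]
```

Note that `rand() % 7` is slightly biased, which is one concrete way a real implementation only approximates the ideal "select randomly".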
Yes, so much research in Computer Science has been based on Tetris that someone found it worthwhile to create a specification for the game.
You're probably thinking of the "Random Generator", which the Tetris Company has required for new Tetris games since around 2001. Earlier Tetris games did not necessarily use this system (the NES and Game Boy versions certainly did not), and the other popular variant, Tetris: The Grand Master, has its own system. Sources below:
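For reference, the "Random Generator" is just a shuffled seven-piece bag dealt out piece by piece and then refilled; a minimal sketch (the function name is mine, not official):

```python
import random

def bag_randomizer(rng=None):
    """Guideline-style "Random Generator": shuffle all seven
    tetrominoes, deal the bag out, refill, repeat."""
    rng = rng or random.Random()
    while True:
        bag = list("IJLOSTZ")
        rng.shuffle(bag)
        yield from bag
```

A consequence of the bag: every window of seven pieces contains each tetromino exactly once, so you can never wait more than 12 pieces between consecutive I pieces.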
Not random; there's no notion of randomness here. It's worst-case analysis on a Tetris board provided as part of the input to the algorithm (i.e., with input length proportional to the board's size, as opposed to encoding its dimensions as binary integers, which would not give an easier computational result), alongside a sequence of tetrominoes; the question is to devise a strategy of player moves that achieves a standard objective, e.g. the maximum number of rows cleared, or achieves that objective to within an approximation factor.
That it is NP-complete says that verifying a sequence indeed attains this maximum is easy (even if we don't know the maximum beforehand), but that we are unlikely to find an efficient algorithm for the problem itself.
The authors note that there is a "relatively simple dynamic program [that] solves the case of a constant-size gameboard in time polynomial in the number of pieces." So what you seem to be thinking of is solvable quite efficiently.
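A toy version of what such a DP can look like. Everything here is illustrative, not from the paper: a tiny board, only O and I pieces, and a simplified drop rule that ignores whether the fall path is blocked. The point is that a fixed W×H board has at most 2^(W·H) configurations, a constant, so tabulating "best rows cleared per reachable state" is polynomial in the number of pieces:

```python
W, H = 4, 6        # tiny fixed board; state count is exponential in W*H,
                   # but that's a constant once the board size is fixed

SHAPES = {         # cells as (dx, dy), dy = 0 is the piece's bottom row
    "O": {(0, 0), (1, 0), (0, 1), (1, 1)},
    "I": {(0, 0), (1, 0), (2, 0), (3, 0)},   # flat I only, for brevity
}

def place(state, shape, x, y):
    cells = frozenset((dx + x, dy + y) for dx, dy in shape)
    if all(0 <= cx < W and 0 <= cy < H for cx, cy in cells) and not (cells & state):
        return cells
    return None

def drop(state, shape, x):
    """Rest the piece at the lowest free position at column offset x.
    (Simplified: ignores whether the drop path from the top is blocked.)"""
    landed = next((c for c in (place(state, shape, x, y) for y in range(H)) if c), None)
    if landed is None:
        return None
    filled = state | landed
    full = {r for r in range(H) if all((c, r) in filled for c in range(W))}
    kept = ((c, r) for c, r in filled if r not in full)
    # Shift each surviving cell down by the number of cleared rows below it.
    new_state = frozenset((c, r - sum(1 for f in full if f < r)) for c, r in kept)
    return new_state, len(full)

def max_clears(pieces):
    """DP over reachable board states: state -> best rows cleared so far."""
    states = {frozenset(): 0}
    for p in pieces:
        nxt = {}
        for state, cleared in states.items():
            for x in range(W):
                result = drop(state, SHAPES[p], x)
                if result is not None:
                    s2, k = result
                    nxt[s2] = max(nxt.get(s2, -1), cleared + k)
        if not nxt:
            return None        # every line of play tops out
        states = nxt
    return max(states.values())
```

For example, `max_clears(["I", "I", "O", "O"])` finds that each I clears the bottom row and the two O pieces can pair up to clear two more.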
Bag randomizers are a modern thing, imposed by the Tetris Company on all Tetris games released for the last couple of decades. Classic Tetris almost universally used a plain RNG, sometimes with a memory mechanism to try to prevent runs (i.e., remember the last piece and reroll up to N times to get a non-duplicate piece; I believe NEStris used a one-piece memory).
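That reroll-with-memory scheme can be sketched as below. This is the general idea only; the actual NES routine differs in detail (as usually described, it rolls 0-7 and also rerolls on 7), and names here are mine:

```python
import random

def history_randomizer(rerolls=1, rng=None):
    """Classic-style randomizer with one-piece memory: roll a piece,
    and if it repeats the previous one, reroll up to `rerolls` times.
    A sketch of the scheme, not the exact NES logic."""
    rng = rng or random.Random()
    pieces = "IJLOSTZ"
    last = None
    while True:
        piece = rng.choice(pieces)
        for _ in range(rerolls):
            if piece != last:
                break
            piece = rng.choice(pieces)  # duplicates can still slip through
        last = piece
        yield piece
```

Note the rerolls only reduce the probability of back-to-back duplicates; they don't eliminate them, which is why this gives no guarantee of seeing an I within any fixed number of drops.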
You were definitely not guaranteed to get an I piece within any particular number of drops in any Tetris released in the '80s, afaik. I think bag randomizers came about in Tetris Worlds.
Interesting. I would think ML would do well to copy players. When I play it's a rather boring game; I'm almost always just setting up for the I piece. I can imagine that approximating a player with a complex if/else branch structure would make for complicated code. But I have to wonder whether deep ML would yield a rather impressive implementation (even without seeing the next block). The only thing that kills my game is the ever-increasing speed, which wouldn't matter to ML.
That's the high level; however, it involves thousands of micro-decisions each game, such as situations where you might choose to create a hole to maximize a clear later, or where it is better to go for a Tetris at higher levels versus clearing the lower rows, to avoid the stack getting too high and making it harder or impossible to manipulate the pieces properly.
Imo NES Tetris gets a bit too much attention; there are plenty of better Tetris games if you want to talk about speed (Sega Tetris, TGM, clones like DTet or Cultris, and so forth).
Ironically, TGM gets much faster (NEStris tops out at one row dropped per frame, TGM at 20), but the faster speeds in TGM are much, much easier (and more fun, imo) to control because of various other variables in how dropping and locking work.
Agreed. TGM is probably my favorite Tetris game; once you get used to the rotation it feels very natural without needing to go through the wall kick gymnastics of modern guideline Tetris. Sadly it seems Arika is either not able or not willing to produce more TGM games using their own ruleset. The guideline Tetris ruleset feels ass backwards in comparison, but I guess it really doesn't matter too much.
I never quite hit GM in any TGM game anyways; as fun as it was, I definitely had trouble keeping a clean stack at high speeds.
I got my first GM a couple months ago. Definitely felt like an achievement.
Afaik Arika is not allowed to make non-guideline Tetris games, but the fact that they made Tetris 99 for the Switch has got some rumors going that they reached some kind of agreement, and maybe a TGM4 could actually happen again.
TGM4 didn't fail due to TTC licensing issues; they had a hard time getting arcade vendors to cooperate and distribute cabinets. TGM1 and 2 (plus TAP+) were made before TTC started to enforce their rotation guidelines, and that's why TGM3 and TGM Ace had "World" and "Classic" modes, the former being the TTC-required rotation system and Classic being the (superior, imo) Arika rotation system.
TGM4 is essentially finished and just on hold until Arika figures out how they're going to distribute it. I think you're right, though, that T99 is kind of Arika testing the waters with a console version, and I think they might release it on the Switch at least.
I have formally dealt with approximation algorithms before, and I skimmed the Wikipedia article, yet I did not find anything that contradicts me. Note that in the section on hardness, P and NP (obviously) appear as bounds of solutions. Where is the contradiction?
Further, all of this (approximation algorithms and P/NP results) concerns exact, complete algorithms. ML is a completely different story, as one is never certain whether some optimum is global or local. Hence, ML is a heuristic, while Dijkstra's shortest-path algorithm is a series of steps with a success guarantee (and provable worst-case runtime behavior).
The paper proves that Tetris is NP-hard, but that is only because Tetris is an indefinitely long game. It's still easy for a computer to play as well as a human, and faster.
That is the "even to approximate" part. It means that Tetris is NP-hard AND no polynomial time algorithm can be shown to be within a constant factor of the optimal solution. Eg no matter how good your Tetris bot is, there may be one that is a thousand times better waiting to be discovered.
I thought Tetris was pretty easy for a computer to play: to decide where to put the next piece, just maximize flatness of the surface while penalizing holes.
I'm hence surprised about the hardness of approximation result.
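That greedy rule is easy to write down as an evaluation function; here is a sketch, with weights I made up (real bots tune these and usually add features like aggregate height and well depth):

```python
def surface_score(grid, w_bump=1.0, w_hole=4.0):
    """Score a board for the greedy heuristic: lower is better.
    grid[r][c] truthy = filled, row 0 at the top. Penalizes an uneven
    surface ("bumpiness") and covered empty cells ("holes")."""
    H, W = len(grid), len(grid[0])
    heights, holes = [], 0
    for c in range(W):
        col = [grid[r][c] for r in range(H)]
        top = next((r for r, v in enumerate(col) if v), H)  # first filled row
        heights.append(H - top)
        holes += sum(1 for v in col[top:] if not v)
    bumpiness = sum(abs(a - b) for a, b in zip(heights, heights[1:]))
    return w_bump * bumpiness + w_hole * holes
```

To pick a move, you would try every placement of the current piece and keep the one with the lowest score. This plays passably, but it is exactly the kind of myopic strategy that the inapproximability result says can end up far from optimal on adversarial piece sequences.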
You are right: the authors observe that when the size of the playing field is fixed and the only variable is the sequence of tetrominoes (which may vary arbitrarily in length), Tetris can be solved in polynomial time with a dynamic programming approach.
The authors' contribution is that the problem of choosing the optimal strategy is NP-complete when the playing field is itself variable and provided as an input to the algorithm (as a matrix of 1s and 0s indicating whether or not each cell is filled, or equivalently, with the number of cells in the grid given in unary).
https://en.m.wikipedia.org/wiki/Erik_Demaine
Also, I highly recommend basically any filmed class he's teaching on MIT OpenCourseWare.