What happens if you remove randomness from Doom? (jmtd.net)
450 points by pdw on Apr 23, 2015 | hide | past | web | favorite | 102 comments

Even the "randomness" is deterministic. The fixed, repeating table is an optimization, but it's also completely necessary for multiplayer to work.

Doom does things very differently to how more recent multiplayer FPSes work. Every player's machine has identical state. That's not as superficial as things like score, who is alive/dead, etc. - it's the exact positions of all entities, their metadata... all the way down to the next number to be picked by the random number generator.

This is updated every tick (35 Hz) and every machine needs to synchronize with every other within that tick, or the game slows down. If you've ever played Doom over a modem, you know what it's like when someone has bad settings which result in high latency - in this case, poor frame rate. The data sent between machines consists of the inputs - e.g. rotation, movement, fire, talk - and not higher-level constructions such as motion vectors.
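That input-synchronous lockstep model can be sketched in a few lines of C. Everything here is illustrative (the struct names and fields are not Doom's actual data structures); the point is that identical code fed identical inputs produces bit-identical state on every machine:

```c
#include <assert.h>

/* Hypothetical sketch of Doom-style lockstep: every machine runs the
 * full simulation and only per-player inputs cross the wire. */
typedef struct { int x, y; } player_t;

typedef struct {
    player_t players[2];
    unsigned rng_index;   /* even the RNG cursor is part of shared state */
} game_state_t;

typedef struct { int dx, dy; } ticcmd_t;  /* one player's input for one tic */

/* Advance one tic: identical code + identical inputs = identical state. */
void run_tic(game_state_t *g, const ticcmd_t cmds[2]) {
    for (int p = 0; p < 2; p++) {
        g->players[p].x += cmds[p].dx;
        g->players[p].y += cmds[p].dy;
    }
    g->rng_index = (g->rng_index + 1) & 0xff;
}

/* Two machines stay in sync as long as they see the same inputs. */
int states_equal(const game_state_t *a, const game_state_t *b) {
    return a->players[0].x == b->players[0].x &&
           a->players[0].y == b->players[0].y &&
           a->players[1].x == b->players[1].x &&
           a->players[1].y == b->players[1].y &&
           a->rng_index == b->rng_index;
}
```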

So it's entirely necessary that the "random" number generator is not so random at all. It's fairly hilarious to see the results of messing with it, though :)
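The generator in question is tiny. A minimal sketch of what Doom's m_random.c does (the real rndtable carries 256 hand-picked bytes; only the first few are reproduced here, with the rest defaulting to zero in this sketch):

```c
/* Miniature of Doom's m_random.c: a fixed table and a wrapping index. */
static const unsigned char rndtable[256] = {
    0, 8, 109, 220, 222, 241, 149, 107, 75, 248, 254, 140, 16, 66, 74, 21
    /* ...the real table continues for 256 entries... */
};
static int rndindex = 0;

int P_Random(void) {
    rndindex = (rndindex + 1) & 0xff;
    return rndtable[rndindex];
}

/* Resetting the index replays the exact same sequence -- which is
 * what lets demos and multiplayer stay in sync. */
void M_ClearRandom(void) { rndindex = 0; }
```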

It does make me think that cheats (not that I ever cheated) were missing a trick. If they knew what the next number to be pulled from the random number generator was, they could, for example, delay firing a weapon by a frame or two, to get more damage. Probably other more interesting things I can't think of immediately. If there was ever a bot vs bot Doom competition, this is something you'd definitely want to consider.

Yes. By the way, this feature of Doom allowed nice and symmetric multiplayer over slow modem or network lines. Only small client-side packets were sent through the network (think keystrokes/mouse movement updates).

As a different example Quake was designed using a client/server model with hugely asymmetric packet sizes (~40 bytes for client->server packet and ~800 bytes for the opposite direction).

As a teenager, around 1996-1997, I once tried to re-factor Quake to use the same model as Doom - run a server/client combo on both machines with the servers synchronized using exactly the same random generator seeds / etc. I spent about two months on that, but it proved to be very difficult - the sync between the servers was inevitably lost due to some indeterminism in the code that I couldn't uncover. The maximum I was able to achieve was around two seconds running in sync. And I still don't know where the sync was getting lost ;)

Learned a lot from that experience; big thank you to John Carmack (and the Id Software team) for writing that code as well and as cleanly as they did...

Probably floating-point oddities. The bane of deterministic programming everywhere.

Quake 1 & 2 largely used fixed point arithmetic, for speed, AFAIK.

They've got fixed point code in a few spots (especially where it makes sense if you're e.g. indexing into a texture), but the original Quake source really says float all over the place.

Floating point is deterministic.

I am one of the authors of Fire Fight game, which had the same multi-player model as Doom.

The main source of multi-player sync loss was a different number of rand() calls between machines, e.g. a different animation sequence for your own character vs. an enemy. If your own animation calls rand() one time too many, the next common rand will return different values on your machine vs. the enemy's. And you essentially start playing two different games together ;-)
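This failure mode is easy to demonstrate. In the sketch below (the LCG constants are illustrative, not from any particular game), two "machines" share one deterministic PRNG stream, and machine B makes a single extra call for a local-only animation; from then on the two streams are permanently offset:

```c
/* Two machines sharing one deterministic PRNG stream. */
unsigned machine_a_seed = 42u, machine_b_seed = 42u;

unsigned lcg_next(unsigned *s) {
    *s = *s * 1103515245u + 12345u;   /* seed = seed * m + a */
    return (*s >> 16) & 0x7fff;
}

unsigned roll_a(void) { return lcg_next(&machine_a_seed); }
unsigned roll_b(void) { return lcg_next(&machine_b_seed); }
```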

That being said, we fixed all bugs. There were no additional checks to synchronize state other than keyboards. It was way easier than we anticipated!

Good times!

That was what, 1996?

Unfortunately, many of the edge cases that cause non-deterministic behavior have gotten substantially more exploited by compilers since then.

I really want to disagree. Assuming the code is correct:

- proper handling of system calls and EINTR,
- not calling functions that are illegal in signal handlers,
- proper synchronization to avoid race conditions,
- etc.

it will work just fine.

The problem with today's systems does not lie in compilers. It lies in the vastly increased complexity of the code itself.

Back in '96 we had a few sprites, with some animation sequences and sound effects. The state of an object was kept in a few integers. Collisions were done using a really stupid algorithm that would probe a special mask in the level builder with a circle consisting of 16 points, so we could get a vector pointing away from the wall. Etc.

Today, collisions, physics, presentation - all this is thousands of times more complex. More code, more bugs, more issues.

For that reason, it would be very difficult to write keyboard-state driven multiplayer today.

Ah, you weren't using floating-point. That makes things substantially easier.

W.r.t. floating-point, good luck. See https://news.ycombinator.com/item?id=9430958

Ah, yes, you are correct. We built our own fixed point library.

Couldn't you reset the rand every now and again, say every 10 seconds, on every machine, without too much of an effect?

Nope. Once the sync was lost, you were playing two different games: player A on machine A is in room 1, player A on machine B is in room 2. Resetting rand sequence would not correct the introduced error.

The only way to correct such an error would be to change the model to sync the actual state among machines. But that means completely different code.

All in all, I remember we added the multiplayer code at the very end. If we decided to sync state rather than exchange keyboard+mouse, we would not finish on time.

So is rand() and similar. The point is that Heisenbugs and such at the app level are hard to debug because behavior is not absolutely repeatable under what seem to humans to be identical circumstances.

That most certainly gives the appearance of non-determinism.

And floating point is frequently infamously tricky to deal with, I think is the obvious point without arguing terminology.

Different machines have different internal precision. See http://gafferongames.com/networking-for-game-programmers/flo...

Games I've worked on have used libraries e.g. http://nicolas.brodu.net/programmation/streflop/index.html

If only. Even something as simple as this, where x and y and a are floating-point values:

    if( a == x/y )
      assert( a == x/y );
Is legally allowed to trigger (!). Worse, it's also legally allowed to not trigger.

That's right. The above can be legally compiled to always trigger, or never trigger, or even to sometimes trigger (although this is rare in practice).

Why? C / C++ allow floats to be stored with excess precision. And they lose that excess precision when spilled to RAM. And GCC isn't deterministic about when it spills. So x/y may be pulled out as a common subexpression, kept in an 80-bit register for its first use but spilled (and rounded) before its second, and the assert triggers. And the next time it isn't spilled, and it doesn't. (Effectively, the difference between this:

    temp = x/y            // kept at 80-bit precision in a register
    if( a == temp )
      assert( a == temp );

and this:

    temp = x/y
    if( a == temp )       // compared at full register precision
      assert( a == temp );  // temp spilled and rounded to 64 bits in between
Yes, it'll generally (though the standard doesn't dictate that it will be!) be deterministic in terms of a given example in a single binary always triggering or not triggering, but that is irrelevant, as binaries aren't cross-platform and you aren't always loading exactly the same binaries of the dynamic libraries you've linked to.
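The spill effect itself can be shown portably without relying on x87 hardware. In this model (an analogy, not real compiler behaviour), `float` plays the role of the narrower in-memory format and `double` plays the wider register, so "spilling" is an explicit cast:

```c
/* Portable model of excess precision: the "register" value and the
 * value after a rounding "spill" compare unequal. */
float fx = 1.0f, fy = 3.0f;

double quotient_in_register(void) {
    return (double)fx / (double)fy;       /* extra precision kept */
}

float quotient_spilled(void) {
    return (float)quotient_in_register(); /* rounded on the way to RAM */
}
```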

You can (most of the time) work around this by manually setting FPU precision (note that precision control only narrows the mantissa, not the exponent range, so you can still end up with issues). (Except that, in C++ at least, you have to work deep magic - you have to set it before main even runs, because you can otherwise end up with problems during static initialization.) And in the process you lose compatibility. And you find that random code you call occasionally resets things. Or random code you don't call (a classic example: the printer driver!). And you must use exactly one of float or double throughout. And hope that none of the libraries you use ever resets FPU precision for anything, or uses the other of float or double itself. (And hope that your OS properly saves / restores the FPU everywhere, though this has largely become a non-issue.)

And, worst of all, you've got to hope that the compiler performs constant folding with the same level of care as you've just taken.


Yes, we got bitten by this when using an algorithm with a k-means clustering step in the middle; minor perturbation in the least significant bits could change which cluster an item got assigned to, which would then result in massive changes to the eventual output.

(Relatedly, several ASIC and FPGA tools suffer from nonlinearity so badly that they will let you specify the random seed at the start and/or do multiple runs with different seeds to pick the best result)

That bad? Yow.

This is like a case study in why any computer architecture from the metal to the top should try like hell never to lose determinism.

If only to have deterministic test cases...

Yep, and we (mostly) manage to have determinism all the way from the metal up... Only to lose it at the language level.

Which is why you should never use == with floats/doubles, but always <=, >=, <, > and why modern compilers have warnings you can enable to tell you when you do ==.

While true, that's irrelevant to my point. The same thing occurs with this:

    if( a <= x/y )
      assert( a <= x/y );
Or with any other of the operators you mention (<, <=, >, >=).

Not necessarily. The x87 implementation (which is relevant to everything PC, prior to SSE) of IEEE 754 internally uses an 80-bit extended precision representation for floating point numbers. Numbers are rounded to 64 bits for loads and stores of course...

Now, what do you suppose happens during a context switch, when a second process wants to use floating point math?

You can save/load 80-bit values to memory with the regular x87 instructions, it's just called "extended precision". And the accelerated "save FPU state" instructions save the full 80-bit state. Otherwise it would be pretty nutty.

Where you can get unpredictable (though still repeatable) behaviour is when your compiler spills into 64-bit memory slots, and does it a little differently with each little modification to the code.

For distributed scenarios where the architecture, language and library are unspecified and effectively random, it is not. Not so long ago x87 80-bit precision was common while non-x87 systems typically used single and double precision. Finally, IEEE-754 does not specify transcendental functions, so once you use those your results vary all over the place as approximations vary. The fact that Go does not use the CRT for FP will demonstrate this in the real world.

On a single chip, yes. But different chips handle some corner cases differently, leading to divergence between machines.

Actually with C / C++, it can be different even with the same chip. It can actually be different using the same binary and still be compliant.

Something like this:

    if( a == x/y )
      assert( a == x/y );
Is allowed to assert (!). It's also allowed to not assert. Worse, the following is a legal "optimization" of the above code:

    if( a == x/y )
      if ( getRandomBooleanFromSomewhere() ) 
        assert( a == x/y );
(I'd use rand, but I don't want to get into matters of state here. Just assume that getRandomBooleanFromSomewhere, well, pulls a random boolean from somewhere. No side effects.)

Why, you might ask? It's allowed to use extra precision, but not required to. And it's not required to even be consistent about it. So: if it uses extra precision, and it decides to lose the extra precision (say, by spilling x/y to stack) between the first and second uses, the assert could potentially fire. If it doesn't do both of those, the assert cannot fire. So it's legal to remove the assert, but also legal to keep it. And also legal to randomly decide which to do at runtime.

Now, this is a stupid thing for it to do in this case. But nonetheless, it is allowed. And there are cases where analogous optimizations actually make a plausible amount of sense.

You can work around it to an extent with FPU flags. But that way lies madness. To put it mildly. ("It desyncs when I print a document.")

AIUI the C standard mostly regards floating-point as out of scope / platform-specific. Your vendor should (hopefully) provide stricter specs for how it works on your particular machine.

The particular machine tends to be deterministic.

It's the compiler that's mainly the problem here. Or rather, the language standard.

May not be same on different machines

May not even be the same on the same machine. At least if you're using C or C++ (or anything that ever passes through C or C++, i.e. almost everything.)

Deterministic but too hard to predict is a form of non-determinism, as far as humans are concerned.

It's not even deterministic.

At least for anything that ever passes through C or C++, or uses code that does. (I.e. everything, close enough.)

See: https://news.ycombinator.com/item?id=9431372

The Quake and QuakeWorld source weren't released until 1999.

Yes. It was that leaked version, not the officially released code. Fortunately Id Software was very cool about the leak, looks like they've even accepted patches :)

""" wikipedia John Carmack ... and the Doom source code in 1997. When the source code to Quake was leaked and circulated among the Quake community underground in 1996, a programmer unaffiliated with Id Software used it to port Quake to Linux, and subsequently sent the patches to Carmack. Instead of pursuing legal action, Id Software, at Carmack's behest, used the patches as the foundation for a company-sanctioned Linux port.[citation needed] Id Software has since publicly released the source code to Quake, Quake 2....

Cool. Do you have blog that talks about old game hacks?

>If they knew what the next number to be pulled from the random number generator was, they could, for example, delay firing a weapon by a frame or two, to get more damage. Probably other more interesting things I can't think of immediately. If there was ever a bot vs bot Doom competition, this is something you'd definitely want to consider.

If you're interested in that kind of stuff, check out tasvideos.org, a community dedicated to beating old videogames as fast as possible using that type of knowledge. Since all videogame 'randomness' is really deterministic pseudo-randomness, the kind of RNG-knowing you described is ubiquitous.

In fact, by coincidence, it looks like someone just recently published a DOOM (second episode) run: http://tasvideos.org/2825M.html

Wow! Yes, I am interested in that kind of thing.

Also a coincidence: my brother and I used to do cooperative 2-player speed-runs of Doom/Doom2 as part of a thing called "H2HMud". We had a setup which consisted of a fast 486 and a really slow 386. It ran really slowly - about 1/4 normal rate when you woke up a room full of entities. The 486, however, rendered every frame, which meant the person on the fast machine could happily pick everyone off pixel-perfect, while the slower machine's player would do "support" stuff (flip switches).

I wondered what the "limit" would be if you could slow it down arbitrarily, and re-stitch, and I guess that's what it looks like in the video you posted. Thanks!

My favourite example: http://tasvideos.org/1145M.html completing a game in under 10 seconds by manipulating the RNG to spawn the final quest completion item under the player's initial spawn point.

This isn't an uncommon strategy today. Many online RTS games still work in a synchronized model like this; it's much cheaper for all players to send each other their inputs than for the server to send the positions of possibly thousands of characters constantly. The recent Halo games use a synchronized network model only in their cooperative modes. (It's obvious because lag will cause the game to slow down.)

>It does make me think that cheats (not that I ever cheated) were missing a trick. If they knew what the next number to be pulled from the random number generator was, they could, for example, delay firing a weapon by a frame or two, to get more damage. Probably other more interesting things I can't think of immediately. If there was ever a bot vs bot Doom competition, this is something you'd definitely want to consider.

See http://taeb-nethack.blogspot.com/2009/03/predicting-and-cont... for an example of that sort of thing done in Nethack!

> Even the "randomness" is deterministic.

But isn't rand() deterministic too? It's only governed by the seed.

Only /dev/random is "really" random, AFAIK. And even that is only as good as the nearby entropy.

Indeed, and I should have said "pseudorandom". What I'm trying to say is that it's fully predictable.

Yes, they could have used rand() with an appropriate srand(seed), communicated between machines. They could even have used a DRBG built with an AES cipher, except it would have been too slow. However, even a bad implementation of rand() usually uses an algorithm which is essentially just "seed = seed * m + a", which is slower than a table lookup.

Anyway - the interesting thing is that a 256 entry table is sufficient for game purposes (not obvious to players), is fast, and makes the synchronous multiplayer setup work.

seed = seed * m + a would certainly have been much slower than a table lookup at the time, but that's not the case with current microarchitectures.

Since we're bikeshedding...

>Yes, they could have used rand() with an appropriate srand(seed)

rand() is shared across the whole OS, so another program calling rand() could make you skip a number; or another program could reseed rand. Then you've lost determinism.

Even if that weren't the case, you don't want you FX system to call rand() and "steal" the next number that that AI system was expecting.

With Doom's table, each system can keep a separate index into the table; now calling getRand(myIndex++) from one system doesn't interfere with other systems.
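A rough sketch of that separation, with an illustrative table and the hypothetical getRand(...) interface from the comment above (Doom's real split is similar in spirit: P_Random for gameplay, M_Random for cosmetic effects, each with its own index):

```c
/* One shared table, a separate cursor per subsystem, so a cosmetic
 * effect can't "steal" a number the game logic was counting on.
 * Table values here are illustrative. */
static const unsigned char shared_table[256] = { 3, 141, 59, 26, 53, 58, 97, 93 };

int getRand(int *index) {
    *index = (*index + 1) & 0xff;
    return shared_table[*index];
}
```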

> rand() is shared across the whole OS, so another program calling rand() could make you skip a number; or another program could reseed rand. Then you've lost determinism.

Is there actually any implementation of rand that does this? I feel like this would be a buggy implementation, or at the very least one that goes against the spirit of the standard.

> If srand is then called with the same seed value, the sequence of pseudo-random numbers shall be repeated. If rand is called before any calls to srand have been made, the same sequence shall be generated as when srand is first called with a seed value of 1.

> The implementation shall behave as if no library function calls the rand function.

I stand corrected, thanks.

rand() is per-process and not shared across the OS.

I stand corrected. My point about different systems still stands.

Just a thought: an FPS multiplayer game could use the proof of work generated by the Bitcoin mining network (e.g. a pool share 1/20th the difficulty of a block hash, which would be generated on average once every 30 seconds) as the seed for the RNG. This would be:

* unpredictable in practice (theoretically a player could reliably predict the hash, but only if they overpower the entire Bitcoin mining network, at a cost of billions of dollars in mining hardware)

* globally verifiable, allowing reliable synchronization

The only drawback would be that the time between proof-of-work shares meeting a particular difficulty target is somewhat random - the waiting times are exponentially distributed, only averaging to a particular value.

Is there a problem that solves, or is it just "because Bitcoin!"?

I'm not sure if it solves any problems, but I think it might, for a couple of reasons:

* it's provably fair, with no reliance on a trusted third party to provide a fair random number, with fair being defined as a random number that none of the participants can predict / obtain foreknowledge of.

* propagation might be faster than what can be achieved through the internet, because dedicated broadcast channels are being created for Bitcoin blockchain data, like the BitSat program, which will broadcast Bitcoin blocks to the entire planet via a cluster of 24 nanosatellites, or the Kryptoradio project, which broadcast the data over radio in Finland.

I very much doubt you could synchronize with the bitcoin consensus faster than synchronizing with the other players. If you're willing to communicate with each other then you can generate a random number in a consensus way easily (e.g. everyone generates their own and XOR them all).

How do you prevent one party from delaying the provision of their R to the other players while calculating the final R for themselves?

> If they knew what the next number to be pulled from the random number generator was, they could, for example, delay firing a weapon by a frame or two, to get more damage.

This is similar to a crit hack in TF2: https://wiki.teamfortress.com/wiki/Hacking#Critical_Hacks

This rng cheating can be prevented. Have everyone generate a random number in any way they want. Send that number out along with their inputs. xor all the numbers together. Then deterministically stretch the result into however much randomness is needed.

This approach alone is not enough. A cheating bot could wait until they received the input+number from all other players, then choose their own number as needed to obtain the desired "random" result.

A workaround is to have all parties pre-commit to their numbers by first sending a hash of the number. Each party will reveal their number only after receiving the hashes of everybody else. This adds another round-trip, which is bad for latency.

As a compromise, one could have clients pre-generate a whole sequence of numbers, pre-commit to that sequence by publishing its hash, and then reveal the sequence one number at a time. Then it is possible to detect tampering with the randomness at the end of a game.

However, even with this modification, another source of cheating in a peer-to-peer networked game may be that one party might want to adjust their input after having received the inputs of everybody else. This can again be mitigated with the same commit-before-publish scheme, except that now the loss of latency cannot be prevented.

(Obviously, all of these cheats can be eliminated when you have a trusted server, but in that case you can just let that server generate the randomness...)

You need to additionally add a salt to the hash taken from each peer.

No you don't.

The reason behind needing a salt normally is password reuse / poor entropy. But you don't have either problem with this situation.

Just have each peer generate a random number of sufficient length.

So, roughly. When you need more random numbers:

1) Have every client generate a random number of sufficient length and publish a cryptographic hash of it.

2) Wait for everyone to ack that they have every hash

3) Everyone publishes the random numbers.

4) Every client verifies that everyone's random numbers match the hashes, and computes the final random value as the xor of everyone's random numbers.

As long as the random numbers generated by the clients are long enough / actually randomish, this should work.

AFAIK, this is secure. A client cannot delay until after they know other peoples random numbers, as it'll stall at step 2. A client cannot find someone else's random number from their hash, as inverting the cryptographic hash of a random number of sufficient length is infeasible.

About the only concern with this is spoofed network communication. And that's detectable.
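The commit-reveal round above can be sketched in C. FNV-1a stands in for a real cryptographic hash here purely to keep the sketch self-contained - it is NOT collision-resistant, so a real implementation would use SHA-256 or similar; the protocol shape is what matters:

```c
#include <stdint.h>

/* Toy hash: FNV-1a over the 8 bytes of a 64-bit value.
 * NOT cryptographically secure -- a stand-in for SHA-256. */
uint64_t toy_hash(uint64_t v) {
    uint64_t h = 1469598103934665603ULL;
    for (int i = 0; i < 8; i++) {
        h ^= (v >> (8 * i)) & 0xff;
        h *= 1099511628211ULL;
    }
    return h;
}

/* Step 1: each peer publishes commit = toy_hash(secret).
 * Steps 3-4: secrets are revealed; everyone verifies and XORs them.
 * Returns 0 if any peer's reveal doesn't match its commitment. */
int reveal_and_combine(const uint64_t *secrets, const uint64_t *commits,
                       int n, uint64_t *out) {
    uint64_t acc = 0;
    for (int i = 0; i < n; i++) {
        if (toy_hash(secrets[i]) != commits[i])
            return 0;                 /* peer i changed its number */
        acc ^= secrets[i];
    }
    *out = acc;
    return 1;
}
```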

I can force a bit to 1 by simply doing a preimage attack on that bit; I can calculate two messages (0 and 1) that both have the same hash. Because there's no salt, I can spend as much time as I want doing that, and when it comes time to reveal the numbers (3) I simply wait for everyone else to publish, xor their bits together, and choose which message I want to reveal.

> I can calculate two messages (0 and 1) that both have the same hash.

It's part of the definition of a (secure) cryptographic hash function that this is impossible. For a secure hash function, finding two messages with the same hash cannot be done faster than by birthday collisions, which means that for a 256 bit hash function you need to compute 2^128 hashes, for a 512 bit hash function you need to compute 2^256 hashes. Even 2^128 is a pretty big number.

So you're right: adding the salt prevents the attack of precomputing a collision using an insane amount of computing power. Then again, if that's what you're worried about then you also have to consider AES-128 as broken, which (today) puts you in a rather small minority. I suppose one can never be paranoid enough ;-)

Good point. Yes, if the range of random numbers chosen is small.

If the random numbers being exchanged are N bits long and the length of the hash is also N (say N = 256), then a salt should be unnecessary, right? Inverting an essentially random hash value should be impossible. But it's always possible that I'm missing something.

Very interesting. One could, though, have a fair multiplayer game where each player picks a random number, commits to it (e.g. by hashing it and publishing the hash), then all players reveal their numbers, XOR them all together and use that as a seed for a CSPRNG. With a good CSPRNG, they could even mix player actions into the generator over time, resulting in a truly random game.

>the game slows down

You act like games today don't do this. Most games by Nintendo, notably the latest Super Smash Bros., embarrassingly still do this.

I believe modern hacks for counter strike behave exactly that way.

if I recall correctly, the accuracy spread for bullets is determined by a "random" seed on the client. aim hacks can counteract recoil and bullet spread by "predicting" the next values the client is going to generate.

Not since December. It has been changed so that hit testing is done with a server-side seed. With the drawback that decals and tracers are wrong client-side.

I've only been programming for a couple of years, and I thought I was a genius for coming up with a game engine exactly as you describe here: deterministic randomness. Doom beat me to it by a couple of decades.

Obligatory XKCD: https://xkcd.com/221/

This instantly reminded me of the (true) story where people were able to cheat an online poker game because the site released its source to "prove" its fairness/randomness, showing that the deck was shuffled from a fixed state using a standard RNG with the time of day (in ms) as the seed.

Of course this meant there were only 86,400,000 possible decks, people could generate all the possible decks ahead of time and as soon as cards would be revealed in the game they could easily match the hand being played to the limited set of possibilities. Essentially they had foresight of which cards were coming out. Furthermore, once a few decks were matched up you could vastly reduce the set of possibilities even further by working out the approximate time of day on the server.

This is PlanetPoker[0].

The gist of it is that their algorithm never chose the last card in the array to swap with, so not every ordering of the deck could occur. They also committed the common mistake of implementing the "naive" shuffle, which is not equally distributed.

[0] http://www.cigital.com/papers/download/developer_gambling.ph...
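The naive-shuffle bias is small enough to check exhaustively on a 3-card deck: swapping a[i] with a[rand(3)] for each i yields 3^3 = 27 equally likely execution paths spread over 3! = 6 permutations, and since 27 isn't divisible by 6, some decks must come up more often than others. A sketch:

```c
/* Map a permutation of {0,1,2} to an id in 0..5. */
static int perm_id(const int a[3]) {
    return a[0] * 2 + (a[1] > a[2] ? 1 : 0);
}

/* Count how often each permutation results from the naive shuffle,
 * enumerating all 27 possible sequences of "random" picks. */
void naive_shuffle_counts(int counts[6]) {
    for (int k = 0; k < 6; k++) counts[k] = 0;
    for (int r0 = 0; r0 < 3; r0++)
        for (int r1 = 0; r1 < 3; r1++)
            for (int r2 = 0; r2 < 3; r2++) {
                int a[3] = {0, 1, 2};
                int r[3] = {r0, r1, r2};
                for (int i = 0; i < 3; i++) {   /* swap a[i], a[rand(3)] */
                    int t = a[i]; a[i] = a[r[i]]; a[r[i]] = t;
                }
                counts[perm_id(a)]++;
            }
}
```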

By the birthday property, this means that the exact same deck is very likely to be played out more than once.

> Monsters do not make different death noises at different times; only one is played for each category of monster.

This, BTW, is a really interesting trick that Doom pulled. There's actually only a single death sound for each type of monster.* But for some monsters, the game plays it back at a random speed--also changing the frequency, like an audiotape on fast-forward--so it becomes a brief shriek, a drawn-out groan, or something in between. This gives an impressive variety of sounds without adding extra memory-heavy samples.

Hardly anyone was aware of this as far as I can tell, which is a testament to how well it worked. I only noticed it when playing a goofy little mod that turned zombie sergeants into Energizer bunnies and played the "still going" clip when you killed them. The effect is obvious when there's actual speech involved.

*Not counting the "splatter" sound when you gib any of the gibbable critters.
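A sketch of the playback-speed trick (not Doom's actual mixer code): stepping through the same sample buffer at a different rate changes pitch and duration together, like tape on fast-forward. Here step_q8 is a fixed-point rate in units of 1/256, so 256 means original pitch and 512 means one octave up at half the length:

```c
#include <stddef.h>

/* Resample `in` (n samples) at fixed-point rate step_q8 into `out`
 * (capacity outcap); returns the number of samples written. */
size_t resample(const short *in, size_t n, unsigned step_q8,
                short *out, size_t outcap) {
    size_t written = 0;
    for (unsigned pos = 0; (pos >> 8) < n && written < outcap; pos += step_q8)
        out[written++] = in[pos >> 8];   /* nearest-sample lookup */
    return written;
}
```

Feeding a random step_q8 per playback gives the shriek-to-groan variety described above from a single stored sample.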

On further consideration, I think it only does this with imps and the various zombies, which are the only monsters that you're likely to mow down at a rate of several per second--it might have been specifically intended to stop zombie fragfests from sounding like an echo chamber.

Hi, article author here: it was the pitch-shifting behaviour in particular that I wanted to explore when I modified the random table. Early versions of Doom (<1.4) did it, but the feature was accidentally removed in a rework of the sound code in 1.4 and onwards (including all versions of Doom 2). It was originally applied to all but two sound effects, with special casing for chainsaw sounds (less variance afaik).

There were, however, three distinct zombie death noises, independent of pitch shifting.

Interesting! Are the three zombie death sounds perhaps speed-shifted versions of the same original? My recollection (it's been a while) is that zombies made dramatically different sounds on death but--once I started looking for it--they all seemed to be speed-shifted versions of the same sound, while pitch-shifting in other sounds was subtle enough that I never noticed it. If pitch-shifting normally applied equally to most sounds, that implies that there was something else going on with the zombie noises I recall.

Or maybe I'm completely confused because I haven't played it since the '90s. Might be time to give it another try...

The three zombie death sounds are on this page http://www.wolfensteingoodies.com/archives/olddoom/music.htm

Half Life 1 did the same audio trick too. (It's based on the Quake 1 engine)

When I added friendly monsters and Wolf3D dogs to Doom, I tweaked the randomness to make the dog behavior more life-like.

Friendly monsters go after enemy targets, but absent any targets, they follow you in semi-random motions, but they stay a comfortable distance away from you, except when getting onto a platform or other crowded space.

Disclaimer: I'm the author of MBF.

Also, demo reproduction was a huge issue. We had to do all kinds of versioning to make sure that a demo which was recorded with one version of Doom/Boom/MBF/etc. was faithfully replayed with the current version. One tiny mistake, and the whole demo could go into a tailspin. This sometimes meant we had to preserve bugs like Lost Soul stuck-in-wall bugs.

Oh, and randomness could have fixed the monster-stuck-in-doorjamb bug, which is easily reproduced in E2M4 with the Baron by the blue key. The code assumed that if a monster was "in" a door sector and was blocked, then the door simply needed to be opened, and then the monster could continue in its current direction. Better would be to rotate the monster's direction randomly to help it get out of the sticky situation. That was our fix in Boom.

Beautiful. This article made me smile. I also like that the author gave a simple "how-to" at the end in case I wanted to try it myself (i do!).

It might be better to call this Low-Probability Doom: indeed, it's deterministic, but so would be changing the random number generator to a counter.

The impacts being discussed are mostly the effect of playing the unluckiest game of doom possible. :)

Videos of such experiments would be greatly appreciated.

Took me a while but here you go https://www.youtube.com/watch?v=SvfJoy1rjmo

I'm always impressed by the simple ingenuity of the Doom source code and the immersive environment it creates.

PRNG determinism is widely exploited in TAS (tool-assisted speedrunning: a game played using tools such as frame advance and save states to simulate an "omnipotent" player) to manipulate games into favourable outcomes. Usually the game code is reverse-engineered and a Lua script searches forward through the RNG outcomes to find the ones you want; it is then up to the runner to execute in-game actions that produce that state.

One example of this technique which I particularly like is the "100% Souls" run of Aria of Sorrow, where the normally very grindy process of getting enemy soul drops is trivialized by making sure that every enemy will drop a soul the first time it's killed: https://www.youtube.com/watch?v=DfkJpzBoJ-M

I'm intrigued by this:

> I hex-edited the binary and replaced the random number lookup table with static values

If you have a lookup table that you randomly look up, doesn't that mean you already have a random source? In that case, why do you need a random lookup table at all?

It reads the next number in the list, it's not a random lookup. See https://github.com/id-Software/DOOM/blob/master/linuxdoom-1....

Doesn’t that already make it deterministic though? It’s always the same sequence of “random” numbers, even before replacing the lookup table it should’ve been deterministic.

This is neat... since Doom is open source, why not make the binaries available for a quick romp?

Also, try $80 as well as $00 and $FF?

(Yeah, I'm lazy)

It's very easy to compile any port. I personally recommend trying Chocolate Doom.

yaay it compiled!

Just grabbing an IWAD, I'm looking forward to trying this! Thanks for the suggestion

> The chance of monster in-fighting is never, or a certainty. The player is either mute, or cries in pain whenever he's hurt.

This is interesting. I thought that both of those things were deterministic to begin with--monsters that get hit by other monsters will go after the offender until it's dead or something else hits them, and the player makes the "pained gasp" sound effect every time he's hit.

I guess OP is wrong about Doom's determinism: it is already deterministic, since the random values are predefined. He just set them to a constant, so his random() is literally "int random() { return 4; }". Still deterministic, but not 'random'.

Regular Doom may not be random in the mathematical sense, but it is as far as a player is concerned. In combat, pulls from the "random number" pool are happening fast enough that I don't believe it could be exploited in any meaningful way.

What I meant was, I hadn't thought that infighting target choice and player pain-noise referred to the "random" pool at all. I thought they were 100% predictable in the original game. I wonder where the randomness actually comes in.

Timings of user input probably.

It's interesting that such a simple implementation of randomness works so well, in particular enemy in-fighting and their decision to pursue or not. I'm sure when you're playing, you end up anthropomorphizing many of these decisions.

Note that these decisions might still be "intelligent" (to a certain degree, i.e. not actually random but based on reasoning) without that hack.

But there might be a small chance involved where the entity would do something other than the algorithmically determined action, and depending on how the constant values are interpreted, that override of the game AI now triggers either every time or never.

Wish there had been videos posted!

Yeah, a video would be a good way to demonstrate this, but it would take a lot of editing to do properly. I ran out of time last night and really wanted to get this out. I might post a follow-up video later.

Took me a few days, but here's a video https://www.youtube.com/watch?v=SvfJoy1rjmo

> The chance of monster in-fighting is never, or a certainty.

Still seems a little random, though.
