John Carmack on QuakeWorld latency and business model (1996) (githubusercontent.com)
395 points by Reedx on May 15, 2019 | 162 comments

Carmack's dotplans are always interesting history to read.

If you weren't a gamer (or alive) at the time QuakeWorld came around, you might not appreciate how amazing it was for multiplayer games on the internet. On dial-up, you were lucky to have 150ms latency. Before client-side prediction, that latency applied to every action you took in game, including player movement. Hit the up arrow, and you waited 150-300ms before the game responded and moved your character forward. CSP really was an amazing breakthrough, and it made multiplayer action games feasible on the internet.

This is particularly relevant now that we are entering the era of cloud-based streaming game platforms, like Stadia. The latency problems of the pre-CSP 90s will be rearing their heads again. It's going to be interesting to see how these same problems will be tackled in this new context. Internet speeds are higher now, but so are our expectations.

Sadly, I doubt the SREs at Google will leave us the kind of nice, simple dotplan files that Carmack left for us to read and remember.

Sadly, we live in a world today where some local code editors could really benefit from CSP, because they add 100-200ms of latency to every character you press.

I'm not sure if it's funny or sad that there's more key press latency typing into most local Electron apps than connecting to a Quake 3 server 200 miles away back when I had 56k dial-up in 2000.

If you want to fast-forward to today's internet: with an average internet connection, it takes around 150ms to ping a server in the Netherlands from California. That's over 5,000 miles (8,000 kilometers). Somehow a local key press has the same latency with certain code editors. What have we gotten ourselves into?
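For perspective, here's a back-of-envelope sketch of how much of that 150ms is just physics, assuming light in fiber travels at roughly two-thirds of c over an idealized direct route (real paths add hops and detours on top):

```python
# Theoretical minimum round-trip time over fiber for a given distance.
# Assumes ~200,000 km/s signal speed (light in fiber) and a straight path.
C_FIBER_KM_PER_S = 200_000

def min_rtt_ms(distance_km):
    return 2 * distance_km / C_FIBER_KM_PER_S * 1000

print(round(min_rtt_ms(8000)))  # California -> Netherlands: ~80 ms floor
```

So roughly half of that 150ms ping is unavoidable, and the rest is routing, queuing, and last-mile overhead, which makes the same figure for a local key press all the more absurd.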

Not just Electron either; writing an email in Outlook frequently gives me half-second pauses before the typing catches up with the cursor.

Ah, so the Outlook lag isn't just me. This is incredibly frustrating when typing emails. I even have that "intelligent" predictive service turned off, and yet it constantly seizes up as if it is trying to work out what I'm saying in an email... Just let me type, dammit!

Actually, there's nothing sad about the state of code editors in the world today. I'm not sure which code editors you're regularly using that give you that kind of latency, but there are plenty -- more than ever before, in fact -- that definitely don't have this problem (some of which are even Electron-based!).

My iOS phone freezes approximately twice a day, for 2 seconds, whenever I type text. I remember playing with the keyboard buffer overflow sound on my CPC 464 (as in 64K of RAM and a 4MHz CPU) in the 80s, and it took me longer than that to trigger it.

"The mess we're in" famous talk by joe amstrong should be transformed into a website listing all of those absurdities, as a way to public shame the culprits.

> "The mess we're in" famous talk by joe amstrong

For anyone else: https://www.youtube.com/watch?v=lKXe3HUG2l4

Fantastic watch, thanks for recommending. Sad to hear that Joe Armstrong passed away a few weeks ago.

I discovered recently that there could be some progress to be made wrt latency in editors: https://makepad.github.io/makepad/

// This is Makepad, a work-in-progress livecoding IDE for 2D Design. // This application is nearly 100% Wasm running on webGL.

Typing delay is a pet peeve of mine. This is why I have stuck with Sublime and Vim, even though there are more powerful editors out there like VS Code or PyCharm.

If you want a fast editor, switch to Sublime 3.

I find VS Code to be just about the only electron-based editor I can use without getting frustrated with typing latency. It's usually not noticeable unless the process is chugging for unrelated reasons.

VS Code is Electron-based, but not Atom-based, thankfully.

VS Code typing latency is on the order of 50ms.

It's more than necessary, but not as bad as you are implying.

VS Code is pretty good for an Electron app

Usually that kind of typing lag is caused by something running amok on the machine. Often for me it has been company-installed backup software, to which I send SIGSTOP (not SIGKILL or SIGQUIT, since those cause a restart).
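A minimal sketch of that trick from Python, using a throwaway `sleep` child as a stand-in for the offending backup daemon (Unix only; in practice the PID would come from `pgrep` or similar):

```python
import os
import signal
import subprocess
import time

# Spawn a stand-in for the misbehaving process, then suspend it.
child = subprocess.Popen(["sleep", "60"])
os.kill(child.pid, signal.SIGSTOP)    # suspend: the process never exits,
time.sleep(0.1)                       # so nothing triggers a restart
still_running = child.poll() is None  # True: frozen, not dead

os.kill(child.pid, signal.SIGCONT)    # resume it later when convenient
child.terminate()
child.wait()
print(still_running)
```

The point of SIGSTOP over SIGKILL is exactly what the comment says: a stopped process is still alive, so a watchdog that restarts dead processes sees nothing to restart.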

You can't really do prediction if you are not rendering the game locally, so all you can do is have a lot of servers all over the world and rely on most customers having a low latency fiber link.

For the camera alone, one could do a few tricks on the client, like VR's timewarp/reprojection, but that doesn't work for gameplay actions (like pressing the fire/jump button).

Theoretically the server could speculatively render and transmit a number of different "potential future frames", and the client throws the wrongly predicted frames away.

That's a nice way to burn even more energy and bandwidth though ;)

The Stadia controller connects directly over WiFi to the remote server, though, so it sounds like the Chromecast or whatever isn't even getting the local inputs to do such tricks.

TVs have been so slow for so long anyway. I suspect we will just get a ton of point-and-click style games, or games that are mostly simulations with relatively uninteresting inputs but potentially bigger visuals, or casual games where even 300ms+ latency doesn't make much of a difference. For better or worse, reflex-based games will just not be played on Stadia by serious gamers. We have VR for that now.

> We have VR for that now.

VR actually introduces further latency problems. With a TV, your typical PS4 can cover up latency issues with spectacle, as you mentioned. That's a big reason I suspect so many console games feel like a movie today, with tons of cutscenes and quick time events. I've been playing a lot of Bloodborne lately, and while it's an action game, it's still incredibly slow compared to something like Quake.

But with VR, when you move your head you expect the world to feel as if it is real. A TV is artificial, and latency there is not an intrusion into the experience. With VR, latency is felt on a deeper level, potentially resulting in headaches and nausea.

The funny thing is that John Carmack is riding the state-of-the-art decades later on this front as well: https://www.wired.com/2013/02/john-carmacks-latency-mitigati...

> For better or worse, reflex (time) based games will just not be played on Stadia by serious gamers.

True, but don't overestimate the importance of "serious gamers" to the industry's bottom line. IIRC, mobile gaming is now making more money than all PC and home systems combined.

The servers could predict what the user does in the next 100ms. Not the same kind of CSP, but fits well into "powered by AI" marketing...

They probably would, but the creative players would suffer from it. AI never predicts creativity.

I am now imagining a (probably short) story in which the AI does learn to predict players perfectly, even the creative ones, and ends with a gamer taking his hands off the controller and allowing the AI to play exactly as he would have and wondering what was ever the point.

I think prediction failures are more likely to punish the opponents of the unpredictable guy. In a lot of online shooters, people with 300ms+ ping blink around unpredictably and appear to suddenly murder you out of nowhere, but they don't seem to have any trouble themselves.

No offense but I think you underestimate the predictive potential of millions of hours of game state

None taken. But your claim amounts to saying that creativity is no longer possible, since it was all already done in those "millions of hours of game state". If creativity is still possible, then my argument stands.

Another issue is that machine learning/AI doesn't predict rare events, like earthquakes. So even with all the knowledge in the world, it won't predict a rare, creative move by a player.

But every event, creative or otherwise, is made up of hundreds of smaller events. That complicated wall-jump 360 kill you just did used several input signals. Even if the server-side AI can't predict the exact final outcome, it can definitely help with the intermediate, well-known states for at least some of the input systems.

I say some but I do believe a large enough volume of data can improve the performance of this class of input/states.

Yes, and then you predict something, broadcast it to your clients, and it ends up being wrong so the clients have to roll back. Would not be a good experience.

Perhaps if you train the model on the existing movement and action history of the particular player

Oh, that's a great idea!

Stadia could sell it as an add-on to players which not only don't want to play their games themselves, but also aren't satisfied by watching other people play through their games on YouTube or Twitch. With this add-on, they can finally watch themselves play through their games, without having to lift a finger to, you know, actually play!

They would make a killing on Twitch, where a streamer could just buy someone else's training data and use that instead of playing themselves.

Black market dealers would swap Terabytes of hot RAM with manually inputted data from the best e-sport players in the world. Corporate enforcers would hack those dealers to delete the data.

Call it Sonic Mnemonic.

I thought about it. But first, you need a lot of data to train a model, so it would only work for very heavy players. Second, creativity is not defined only as doing an action that others have rarely or never done before, but also as doing an action that you have rarely or never done before.

The problem there seems twofold: first, you need even beefier hardware to predict, simulate, and render ahead of the player's input, particularly when it has to catch up after a misprediction; and second, mispredictions are going to make the game feel really imprecise at best and jarring at worst.

I'm also skeptical that a model could generate predictions with few enough mispredictions to be viable.

I think the problem here is, the game is made by Studio A, but it's run by Google in the cloud. Studio A probably doesn't have enough of an incentive to put this in the binary, but Google doesn't have the source, so they can't change the game loop.

...what's the point of playing competitive first-person shooters when an AI is at the controls?

And those types of games are the ones that suffer most from input latency.

I actually kind of find the thought hilarious.

In a twitch shooter, you mouse over a visible opponent. Would the AI be more likely to pre-emptively pull the trigger for you? Or more likely to delay and swing the aim past the opponent before shooting at air? I.e., is the training set for predicted user actions based on the experience of players better or worse at the game than you?

That would make the delay much worse if you mispredict the user's action. Games would also have to add an additional 100ms of lag for all important events.

If you had a big enough server, you could render multiple frames for each possible user action...

True, that could work, and the two frames are probably very similar, so wouldn't even require that much more bandwidth. As someone here pointed out, however, the game controller is connected to the cloud directly, so the display doesn't even know of the inputs until the roundtrip is already done.

>we are entering the era of cloud-based streaming game platforms, like Stadia. The latency problems of the pre-CSP 90s will be rearing their heads again. It's going to be interesting to see how these same problems will be tackled

My take is that it'll be a primary competitive point, nearly as important as the available library. Companies that can deliver the service without introducing these issues will succeed, and ones that cannot will fail. If nobody can reliably crack it, cloud gaming won't take off.

Back in the 90s there was no viable competitor aside from LAN parties, and those weren't available to you every evening.

There was modem dial-up between two players, though. Among my circle of friends in 94/95 there was someone who wanted to play DOOM every night. The games were often coordinated during the day at school, or later by phone.

Also, I agree that it is not a given that "cloud gaming" will take off. We have emerging VR, where latency is absolutely critical, even more so than for FPS e-sports.

Yeah I remember that. There were also some gaming-specific low-latency premium-rate dialup services (e.g. Wireplay in the UK) that hosted game servers on-net. With the right TCP/IP settings and Modem firmware, you could get very close to the minimum theoretical modem latencies with barely any jitter, and it made a big difference.

Now, as it was then, the game maker controls the viability of its game's ecosystem. Some game companies think it's a market advantage to have open hosting and moddability, and some don't.

It's funny. Sure, DSL brought the (much) bigger bandwidth, but when moving from ISDN to DSL, ping times increased again, and it made total sense to play ESL matches over telephone dial-up instead of DSL. But either things improved, or it just became the new normal. Anyway, I stopped playing shooters in the early 00s, so I don't really care anymore.

source: German who never had a 56k modem but started with ISDN in 1998 and can't really remember a ping > 100 on EU servers ;)

> but when moving from ISDN to DSL, ping times increased again and it made total sense to play ESL matches over telephone dialup instead of DSL.

Telekom used "interleaving" by default, which provided very slightly faster download speeds. And it caused about 70ms of latency to the first hop.

I had to contact them and ask them to change my ADSL to "fast path". This dropped the latency to maybe 20 ms (IIRC, my memory might fail me on this number).

I think ISDN was about 40 ms to first hop, but again, it's been a long time.

Thanks for the reminder! Yes, I remember fast path. But if memory serves, it wasn't immediately available with the 768kbit plan, only a little later. Or at least the knowledge hadn't spread widely.

I guess it became available a few months after launch? Yeah, you had to know about it and request it from the service number. AFAIK, it wasn't mentioned in any instructions, etc.

I was thinking the other day how simple and elegant dotplans were. They were truly the original social media.

The worst part was having a 200ms ping and getting constantly smoked by the guy with the sub-100ms ping. Hence the acronym LPB (low ping bastard).

I dunno, I actually hated the HPBs more, because at least I could rationalize getting beat by an LPB, i.e. someone with a technological advantage over me. ;-)

I remember when I was a kid in the 90s and my dad connected two computers to play Doom. The connection was so slow it was almost impossible to play. When my brother and I saw each other's characters walking around, my mind was blown. Really a great memory. Wait, you can play the same game from two computers?

I recall staying late in the office at British Telecom and using our super-high-end Oracle Forms dev PCs (£4k) and 20-inch monitors (£2k) to play Doom.

What is CSP?

Someone correct me if I'm wrong, but I believe this was the genesis of client-side prediction for games.

If you don't know what that is, Valve has some good documentation on it (relating to Source engine, but it's the same concept).[1]

Latency was still a problem even with CSP because we didn't yet have lag compensation on the server. So although you got an authoritative server (no cheating) and instant inputs (no round-trip wait), you could still shoot someone on the client and miss them on the server. You had to shoot ahead of them, further if your latency was worse. As far as I know, Source engine was the first to add lag compensation, which has the server rewind the other players to be where you would have seen them on the client.

Since client-side prediction can be performance intensive, and lag compensation can be difficult for anything beyond simple raycast checks, many games still don't do both. Team Fortress 2 for instance does lag compensation on basic guns but not on things like the soldier's rocket or the pyro's flamethrower. Some games simply allow the client to decide who they hit, which removes the need for either solution, but totally opens things up to cheaters without some very careful server-side validation.

[1] https://developer.valvesoftware.com/wiki/Latency_Compensatin...

Source may have been the first engine to formally support lag compensation on the server side, but something similar was already implemented previously as a mod for Quake 3 called "Unlagged". Essentially what it did was maintain a buffer of all player positions over the last X number of milliseconds and whenever a raycast check needed to be done, it looked back in that buffer based on the current ping of the player requesting the raycast.
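A toy sketch of that buffer-and-look-back idea (illustrative only, not Unlagged's actual code; the names and the one-snapshot-per-tick granularity are my assumptions):

```python
from collections import deque

HISTORY_TICKS = 64  # how far back the server remembers positions

class Player:
    def __init__(self):
        self.position = (0.0, 0.0)
        self.history = deque(maxlen=HISTORY_TICKS)  # (tick, position) pairs

    def record(self, tick):
        self.history.append((tick, self.position))

    def position_at(self, tick):
        # Newest recorded position at or before `tick`.
        for t, pos in reversed(self.history):
            if t <= tick:
                return pos
        return self.position  # history too short: fall back to current

def resolve_shot(shooter_ping_ticks, current_tick, target, aim_point):
    # Rewind the target to where the lagged shooter actually saw them,
    # then run the hit test there (equality stands in for a real raycast).
    rewound = target.position_at(current_tick - shooter_ping_ticks)
    return rewound == aim_point

# The target ran from x=0 to x=9; a shooter 3 ticks behind aims at x=6,
# which is exactly where the target appeared on the shooter's screen.
target = Player()
for tick in range(10):
    target.position = (float(tick), 0.0)
    target.record(tick)
print(resolve_shot(3, 9, target, (6.0, 0.0)))  # True: the rewound hit lands
```

Without the rewind, the same shot tested against the target's current position (x=9) would miss, which is exactly the "shoot ahead of them" problem described upthread.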

There is still some documentation left on the internet: https://www.ra.is/unlagged/faq.html https://openarena.fandom.com/wiki/ModCompat/Unlagged

dm17 unlagged instagib. good times.

The engine for the first Half-Life was a customized and modified version of id's engine, so originally they shared not only some principles but also code.

“GoldSrc is a game engine developed by Valve Corporation, first showcased in the 1998 first-person shooter game Half-Life. Elements of GoldSrc are based on a heavily modified version of id Software's Quake engine. “


I searched the entire HN thread and everyone missed a big paper, the Tribes networking model: https://www.gamedevs.org/uploads/tribes-networking-model.pdf

This is one of the most influential papers for multiplayer games (probably #1). Tribes was the first game to have client-side prediction and rollback, and this is what most games use nowadays.

Related note: you can download and play any of the classic Tribes games from https://www.tribesuniverse.com

Here's my humble contribution to provide a clearer explanation, and a simple live demo with source code, of client-side prediction, entity interpolation, and server reconciliation: https://gabrielgambetta.com/client-server-game-architecture.... It's a relatively popular alternative source to Valve's (excellent) documents.

The Wikipedia page has a link to a thingie that says Duke Nukem 3D had clientside prediction.


I clicked your link and at first kind of brushed it off as dubious since the only linked source is a press article, but the article is an interview with Ken Silverman himself! So that's pretty credible.

The source of the released game is available, so it should be possible to check, except that it's a huge mess (check out BUILD.C).[1]

It seems weird that no one has actually verified this, when it could be the first game ever to do CSP. The only other sites I can find mentioning it are just quoting your linked Wikipedia page.

[1] https://github.com/videogamepreservation/dukenukem3d

I have no idea how I remember this, but there's a write-up about the Age of Empires netcode (written ~96, released end-of-97): https://www.gamasutra.com/view/feature/131503/1500_archers_o...

Because of the unique challenges, they did a lot of synchronized client-side simulation (based on minimal state transfer over the net).

... and apparently encountered a lot of the issues with doing that (no random anything!!!).

This isn't the same as client-side prediction, though. In the AoE case, practically the entire game simulation runs on every client; the only inputs to the simulation are commands that are passed between all players and executed in lockstep, and the simulation essentially stops if, for a given turn, no command for that turn has been received.

That gives you really low bandwidth requirements (and randomness is still possible, assuming you just seed things the same on all machines).

But it also means you might have to wait 50-200ms (or some arbitrary sliding window, depending on network conditions) until your click actually registers in-game as a move. Commands are scheduled to be processed far enough in the future (some number of command turns ahead) that, by that point, you will have received every other player's commands for the given turn, and can thus execute all commands for all players locally. That's not ideal if you're playing a twitchy shooter, but it's alright for an RTS.
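A toy lockstep scheduler along those lines (a sketch with invented names, not AoE's code): commands issued on turn T execute on turn T + delay, and a turn only runs once every player's command list, possibly empty, has arrived.

```python
COMMAND_DELAY = 2  # turns between issuing a command and executing it

class LockstepSim:
    def __init__(self, player_ids):
        self.players = list(player_ids)
        self.turn = 0
        self.inbox = {}     # (exec_turn, player) -> list of commands
        self.executed = []  # flat log of commands run so far

    def issue(self, player, command, current_turn):
        key = (current_turn + COMMAND_DELAY, player)
        self.inbox.setdefault(key, []).append(command)

    def receive_empty(self, player, exec_turn):
        # Every player sends a marker each turn, even with no commands,
        # so the others can tell "no commands" apart from "packet lost".
        self.inbox.setdefault((exec_turn, player), [])

    def try_advance(self):
        # Stall (a real game would pause) until every player's data for
        # this turn is in; then execute deterministically in player order.
        if not all((self.turn, p) in self.inbox for p in self.players):
            return False
        for p in self.players:
            self.executed.extend(self.inbox.pop((self.turn, p)))
        self.turn += 1
        return True

sim = LockstepSim(["p1", "p2"])
sim.issue("p1", "move knight", 0)     # issued on turn 0, runs on turn 2
for turn in range(3):
    for p in ("p1", "p2"):
        sim.receive_empty(p, turn)    # per-turn markers from both players
while sim.try_advance():
    pass
print(sim.turn, sim.executed)
```

The `COMMAND_DELAY` here is exactly the "click registers a couple of turns later" latency described above, and because every client executes the same commands in the same order, the simulations stay in sync with almost no state on the wire.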

Here's the quote for those who don't want to deal with the bizarre pressreader interface:

In fact, in some ways Build had even managed to scoop Quake, according to Ken: "People may point out that Quake's networking code was better due to its drop-in networking support, but it did not support client side prediction in the beginning," he explains. "That's something I had come up with first and first implemented in the January 1996 release of Duke 3D shareware. It kind of pisses me off that the Wikipedia article on 'client side prediction' gives credit to Quakeworld due to a lack of credible citations about Duke 3D."

TIL Ken Silverman was 18 freaking years old when he started working on Build.

I feel a bit mean to all involved in linking this but at the same time it's an extremely interesting piece of history; check out GEORGE.TXT, found in a leaked alpha version of Blood (another Build engine game): https://tcrf.net/Proto:Blood/Alpha_Demo#GEORGE.TXT

Damn. Well, that _does_ sound like something an 18-year-old would do!

Wow, there it is!

Yeah, probably. JC created a lot of standards for those who followed. And I remember reading the Valve stuff for the first time; it blew my mind.

The Valve docs there are very well written. It was the first time I really understood all the concepts too.

Overview of Source engine networking: https://developer.valvesoftware.com/wiki/Source_Multiplayer_...

Ways they combat latency: https://developer.valvesoftware.com/wiki/Latency_Compensatin...

Lag compensation (server): https://developer.valvesoftware.com/wiki/Lag_compensation

Prediction (client): https://developer.valvesoftware.com/wiki/Prediction

Any idea why they would not implement lag compo for projectile-based weapons? Too much processing power required?

I think the effect on other players would be pretty jarring, especially if said projectiles / explosions jolted players in the air at high velocity.

IIRC, QW's client-side movement prediction didn't attempt to predict the effects of, for instance, shooting a rocket at your feet / rocket jumping, which meant that on high-latency connections there'd often be a bit of a delay before your view caught up with where the server thought you actually were. This also applied to rockets fired by other players. Lower latency not only made it easier to hit enemy players with rockets, it also made everything seem smoother when people fired rockets at you.

Later versions of QW created from the GPL source release have introduced lag compensation for hitscan weapons, but I'm not aware of any attempts to do so for projectiles.

Lag-compensating projectiles on the client decreases usability more often than you'd think.

Projectiles like rockets apply a lot of force to the player, and that force is not easily predictable. If you fire a rocket and client-side prediction has it leaving your vicinity without collision, but on the server someone steps into the rocket's path so it explodes close enough that the blast changes your position, you're going to have a really bad misprediction of the player's position. And that can of course get waaaay worse if the player launches another rocket while the first rocket's effect on their position has not been sorted out properly.

When you aim at someone and click, you want to hit what you see. This is a big problem with lag compo: you are in the present, but you see other players in the past. To avoid this behavior, a common practice is to send what you see (your target) when you shoot.

Projectiles are simulated on the server.

All weapons are simulated on both the server and the client in the case we're talking about here. It's just that raycast weapons are rewound on the server to compensate for the client's latency, and projectiles aren't. The server has the authority on actual hits and damage done for all weapons.

But why is it called prediction? It's not predicting the future, it's simply simulating the present! Is it because the client is "predicting" where the server thinks the player should be?

Basically yeah.

The server has the authority on where you are, to prevent you cheating by sending it bogus positions. You send inputs (not your position) and the server gives you your new position back.

The problem, of course, is that it takes time for the inputs to reach the server and for your new position to come back. So in a naive implementation you press to move forward, then don't actually move for a while, while you wait for the return signal.

So instead, say it's tick 100 on the client and you get a position from the server marked as tick 90. Instead of moving the player to the tick 90 position, which is in the past for you, you take that and "predict" ahead (a.k.a. run the simulation) 10 more ticks to get to tick 100, including re-running any new inputs you entered during those ticks.

There are situations where the client may predict differently from what the server actually does: maybe you bumped into another player and, due to latency, they were in a different place for you than for the server. In that case you'll either get suddenly teleported to a different place as the server data comes through, or, if you're lucky, the game will smoothly correct you over a few frames.
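That rewind-and-replay loop can be sketched in a few lines (a 1-D toy model with invented names, not any engine's real code):

```python
def simulate(position, move):
    # Shared, deterministic step function run by both client and server.
    return position + move

class PredictingClient:
    def __init__(self):
        self.position = 0
        self.tick = 0
        self.pending = []  # (tick, move) inputs not yet acked by the server

    def press(self, move):
        # Apply the input locally right away instead of waiting a round trip.
        self.position = simulate(self.position, move)
        self.pending.append((self.tick, move))
        self.tick += 1

    def on_server_snapshot(self, server_tick, server_position):
        # Drop inputs the server has already processed...
        self.pending = [(t, m) for t, m in self.pending if t > server_tick]
        # ...then re-predict forward from the authoritative state.
        self.position = server_position
        for _, move in self.pending:
            self.position = simulate(self.position, move)

client = PredictingClient()
for move in (1, 1, 1, 1):        # ticks 0-3, all "move forward"
    client.press(move)
# A delayed snapshot arrives: the server has processed through tick 1
# and agrees with the prediction so far (position 2).
client.on_server_snapshot(1, 2)
print(client.position)           # still 4: ticks 2-3 were replayed
```

Here the snapshot confirms the first two inputs and the client re-applies the remaining two, so the player sees no hitch; a visible correction only happens when the server's position disagrees with what was predicted, like the bumped-into-another-player case above.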

> Is it because the client is "predicting" where the server thinks the player should be?


Predicting it, because it only knows the facts up to ~200 milliseconds in the past.

Complete aside about Carmack and reducing VR latency from https://www.gamasutra.com/view/news/226112/How_John_Carmack_... :

"He gave the example of increasing the refresh rate on the Gear VR during development. He was working, at that time, with its Galaxy S III phone. Android triple-buffers graphics, inducing a 48 millisecond delay into the system -- making VR impossible.

Carmack pulled apart Android to hack that out. Though he'd written several emails to Samsung in attempts to convince them to give back that buffer, "It's easy to argue against an email, but it's much harder to make the argument when you can have the two things and stick them on your face and, 'Tell me this isn't better.'"
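The 48 ms figure lines up with simple arithmetic, assuming it means up to three queued frames at a 60 Hz refresh (my assumption, not stated in the article):

```python
# Worst-case display latency added by triple buffering at 60 Hz.
frame_ms = 1000 / 60   # one frame at 60 Hz is ~16.7 ms
buffered = 3           # triple buffering can queue up to 3 frames
print(round(frame_ms * buffered))  # in the ballpark of the quoted 48 ms
```

(It comes out to exactly 48 ms if you treat a frame as a flat 16 ms.)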

Carmack is probably the most self-consistent, opinionated creator out there, given that so many years on he still sticks to one of his original quotes:

> "Focused, hard work is the real key to success. Keep your eyes on the goal, and just keep taking the next step towards completing it. If you aren't sure which way to do something, do it both ways and see which works better."

He'll even do Samsung's work for them to fulfill the purpose of that quote.

Carmack's brain is so much like a computer that it even does speculative execution branch prediction!

Is Android doing triple buffering wrong? Where does the extra 16ms of delay come from?

Games usually let you control whether and which buffering to use, so for an OS to be stubborn about it and waste Carmack's time over a simple toggle is pretty maddening.

I fondly remember the old Quake 1 days. It was pretty much the last game I played seriously. Reading Carmack's .plans was a regular routine, and while many of the details went over my head, it made us feel part of the whole id experience.

Finally, networked Quake was an amazing experience when it came out. I had just started my first job at an ASIC design company. All the computers were Solaris-based Sun workstations, yet we were able to get networked Quake up and running on them! I just can't imagine such a thing happening today, but maybe that's more to do with me being 20 years older than anything else. For sure no one is porting game engines to Solaris these days ;)

Same story, but not as glamorous, on a Windows NT4 rollout (1997). I also recall us downloading South Park at a (then) stellar speed: it only took half an hour to download a highly compressed half-hour episode, a thing of sheer magic at the time :)

Pretty impressive when you consider this was in 1996!

I really like his ability to take code out and 'shoot it'. Back to the drawing board. It's a quality of a great engineer to be able to reflect on what's been done and admit it's not good enough.

> "While I can remember and justify all of my decisions about networking from DOOM through Quake, the bottom line is that I was working with the wrong basic assumptions for doing a good internet game."

I'm not one for hero worship, but I think that's as close to engineering zen as one can get.

Aka 'I'm brilliant. I thought I was doing the smart thing. Turns out reality was otherwise. I'm changing my approach.'

What's kind of sad and funny is that these assumptions have changed again and no one seems to have noticed. We're not on PPP or SLIP connections anymore, and yet game devs are still writing netcode as though we are. Client-side prediction should have been a temporary hack until low-latency connections were mainstream, not a permanent aspect of all gaming netcode. And yet it is.

Maybe Carmack will wake everyone up again and tell them to stop copying his ancient hack two decades later.

I think gaming netcode engineers would love to ditch CSP, but the problem is even though low-latency connections are mainstream, stable latency connections are not. It's always possible to have lag spikes or temporary slowdowns in your routing or connection.

Second, it's also possible to end up through matchmaking lobbies on a server halfway around the world from one or more of the players in a given match. At some level the limitations of physics still result in noticeable latency and CSP is still mandatory for any fast-paced real-time online multiplayer game.

Within a US city or state it's very rare to have much packet loss at all. It's very easy to spot using the builtin netgraph in Quake Live and other games.

Players that are separated by thousands of miles should not be playing competitive FPS games together. It's physically impossible to make it a consistently good experience.

You must be living under a very pleasant rock if you think that low latency, to the degree that client-side prediction is useless, is mainstream. Not accounting for routing, drops, and muxing/demuxing, and assuming the signal travels at the speed of light as the crow flies, from the west coast of the USA to where I am the latency is 29ms, or roughly two frames.

This would be debilitating in an online game without CSP, where even the latency introduced by screen buffering and vertical sync can make or break your game, and in reality the latency is of course much higher, usually around 100ms or more. Check your assumptions: https://wondernetwork.com/pings

By your numbers then the OP is right. Carmack says:

the bottom line is that I was working with the wrong basic assumptions for doing a good internet game. My original design was targeted at <200ms connection latencies. People that have a digital connection to the internet through a good provider get a pretty good game experience. Unfortunately, 99% of the world gets on with a slip or ppp connection over a modem, often through a crappy overcrowded ISP. This gives 300+ ms latencies, minimum.

So he designed for 200ms, and then the real world was 300+, so it didn't work. Your 100ms would then work fine in that context. Maybe games have gotten more complex, or our expectations higher, so prediction is still worth it even on <50ms connections. But the OP's point, that the assumptions underlying Carmack's redesign have once again shifted, seems correct, at least for that specific design.

> By your numbers then the OP is right.

By my numbers (though as you can clearly see from the link I posted, >200ms latencies are not uncommon), and Carmack's assumptions in 1996 about what latencies would be acceptable.

> So he designed for 200ms and then the real world was 300+ so it didn't work. Your 100ms would then work fine in that context.

He designed for <200ms which by the frame of reference he cites (T1 connection) probably was a lot lower than 200ms on average.

> Maybe games have gotten more complex or our expectations higher so prediction is still worth it even on <50ms connections.

Quake was not a very complex world, but more important (to latency) is that it's a fast paced game by today's standards (IME). It seems more likely to me that perception on what an acceptable hand to eye latency is has changed, just like the perception on acceptable frame rates. To me, joining an online shooter with >100ms latency (even with prevalent client side prediction) feels debilitating, especially if other players have lower latencies.

> But OPs point that the assumptions underlying Carmack's redesign have once again shifted seems correct, at least for that specific design.

Yes, I agree that PPP and SLIP are not common any more. High latency unfortunately is, and client side prediction is still an effective mitigation strategy that makes a huge difference to online play. The specific design used in Quake has of course been superseded.

>200ms latencies are not uncommon

Those numbers match my experience a long time ago with Quake3 based games where you'd pick servers that are local to you and get 10-40ms latencies.

> He designed for <200ms which by the frame of reference he cites (T1 connection) probably was a lot lower than 200ms on average.

I didn't forget the < sign. Designing for <200ms means you can handle the 200ms worst case. But maybe he meant 200ms worst case but much lower average like you are suggesting.

> It seems more likely to me that perception on what an acceptable hand to eye latency is has changed, just like the perception on acceptable frame rates.

Right, makes sense that the expectation today is much higher.

>High latency unfortunately is, and client side prediction is still an effective mitigation strategy that makes a huge difference to online play.

Just for curiosity, how low does it need to go for prediction to no longer be useful? I guess if you want to be able to have players from all over the world play each other then you have to deal with 300+ ms of latency. But if you are willing to do servers on each geography it seems 10-30 ms would be feasible. Would that be enough?

NQ (NetQuake, as the original Quake is sometimes known) 'feels off' at just 10ms. Movement becomes just slightly out of sync with your input.

10-30ms is a bit optimistic, depending on how large you define a geography. Many ADSL/VDSL connections, best case, start with 5ms of latency, and often higher (say 20ms) due to interleaving. Cable tends to be around 10ms IIRC but can suffer from significant jitter which makes things worse. So for a lot of players the servers would need to be in the same city to achieve that target.

NetQuake feels excellent at < 30ms. Even higher is easy to get used to with some practice because it's extremely predictable due to a lack of CSP. And yes, the solution is to have servers within ~500 miles of players so ~20ms avg ping is the norm. Most gamers have cable not DSL connections.

> 10-30ms is a bit optimistic, depending on how large you define a geography.

Just checked my fiber connection and it has 2ms of latency. google.com is 18ms away. I'm in Portugal and seem to be routed to a Google server 400+ km away in Spain. So my memory of playing on 10-20ms to local servers in Portugal is probably accurate. Covering the US or the EU with enough servers so everyone is never more than 30ms away seems feasible. Within easy reach of a startup with just ~50 VMs across different cloud provider locations and for popular games easy for users to setup regional servers themselves.

I get 8ms RTT to servers in my city on a normal cable modem today. People in major cities have nearly LAN-quality latency within ~500 miles.

Packet loss is extremely low, which is easy to tell using a builtin netgraph.

Screen buffering should be reduced as much as possible and vsync is not something competitive players would ever want. Pros all disabled it in Quake Live for sure.

Players on modern connections that are playing on servers in their city do not need CSP.

In practice it's not that bad to have CSP if it's not really doing much (due to all players being low latency) but I know from playing NetQuake vs Quake Live with low latency that it's nicer not to have it at all.

Go listen to actual competitive FPS players, and hear them complain about having 60 ms latency. Listen to them complain about living on one U.S. coast and playing on servers on the opposite one and having 80-100 ms latency. Listen to players in EU who play on NA servers because they don't like players' behavior on EU servers, and they're willing to suffer 150 ms latency. Then visit Australia and listen to players who suffer 300 ms latency to play with people who aren't in the relatively small Australian population for whatever game they're playing.

Then take out CSP from their games and watch what happens.

CSP will always be necessary for any Internet-based game. Only LANs have low enough latency to not need it.

This is why you ping restrict servers. There's no way to make a game competitive when one player has 10ms ping and another has 100ms (10x higher!) ping. It's just bad for everyone.

Which is exactly what competitive Quake Live players do and I suspect CS players probably do. Every player has 15-40ms ping and it's amazing.

I don't get it, many people want to play a game and there just aren't enough players in their city, games want as many players as possible to be playing them.

Your requirement that games ping-restrict servers is an absolute non-starter unless the game is already guaranteed to have hundreds of players active and available near any given player at any time of day, plus evenly distributed, redundant, always-up servers that scale with demand near every player cluster on the planet. I think CSP is a far more realistic route to a good player experience than magicking that player base activity and distribution into existence.

For major games like PUBG and Fortnite this shouldn't be a problem at all. It might mean that it takes 60 seconds of matchmaking to join a new game instead of 6 seconds.

What you get in exchange is a stable and predictable competitive game. I, and many others, experienced endless frustration in PUBG due to players warping around corners and shots that were clear hits missing due to CSP. And I'm not one to complain about this kind of thing unnecessarily. It really sucked.

The PUBG people didn't even geolock their servers at all last time I played, so you could be playing with people with 300ms+ ping which is just miserable.

For less successful games maybe you have to increase the maximum ping from 40-60ms to 80-100ms or something but it should be kept as low as possible. The more players per city, the lower you can make the maximum ping.

I realize that this might make it more difficult for people in Australia (or whereever) to find a game but it seems even more wrong to make the experience bad for everyone all the time. And at least when they do find a game it will be a great experience instead of terrible.

From what I hear, PUBG just has poor code, so let's leave it out of the discussion and compare well-made games.

Of course, there is an upper limit on how much latency CSP and lag compensation can compensate for, and this varies by game. One game might work well up to 100ms, while another might work well up to 50ms.

> I realize that this might make it more difficult for people in Australia (or whereever) to find a game but it seems even more wrong to make the experience bad for everyone all the time. And at least when they do find a game it will be a great experience instead of terrible.

Here you're failing to account for an even more important factor: ability. Locking matches to such low relative pings would greatly restrict the number of players who were eligible to play together. That would greatly increase the skill gap between players, leading to a much worse experience.

Consider, e.g. Overwatch, which has all the Overwatch League pros on the west coast, leading to a skill gap between top players on the east and west coasts (which are usually matchmade within their own region). The more you compartmentalize players, the less likely that good players are going to be able to play with other good players, and vice versa, leading to a frustrating experience for everyone. (Not that Overwatch has good matchmaking in general. It's pretty bad, IME.)

Issues like getting hit after getting behind cover is frustrating, but it's going to happen sometimes--it's the nature of online games. But nothing is more frustrating than having teammates who aren't on the same level, or who don't have the same goals. It really makes the game feel like a complete waste of time. Matchmaking is definitely a harder and more serious problem than network latency now.

> It might mean that it takes 60 seconds of matchmaking to join a new game instead of 6 seconds.

Probably more like 600, even semi-popular games using server locking (like Blizzard's Heroes of the Storm) struggle to complete matchmaking in under 10 minutes for 10 players. Matchmaking is a complex and challenging problem. Constraining MM to both servers <80ms away and clients that are all within 80ms of said servers, barring players with slower connections from playing the game, is a great way to ensure your game will never have PUBG/Fortnite levels of popularity.

As a game developer I'd rather people be able to play the game at all and feel playable than they all have only The Optimal Competitive Experience of Getting Headshots 100% of the time, after 10 minute queue times, or nothing at all. I would also like people who live in places where they can afford a computer, an internet connection, and my game, to play my game, and not just the continental US and EU metropolitan areas. Vietnam, Thailand, China, India are huge markets for games even with questionable internet quality.

Thankfully, most game developers and I concur on this so we all have more games to play. Maybe a compromise would be to allow players to check an option to only match with high connection quality servers/players when queuing, but so few players would check this option that one who does will have a relatively astronomical queue time to the majority who won't.

edit: for the record, I believe most such games running an actual tournament will use ping restrictions (or just run a LAN). Many also allow dedicated hosts to post servers with their own restrictions and constraints on who can join the server. I think that's a fine compromise. CSP shouldn't harm the experience if the latency of all the players is low enough that the prediction is almost instantly overwritten with server-side truth.

Maybe you could elaborate on this?

CSP is still relevant as far as I can tell. Yes, connections in a lot of countries are now on average lower latency, and yes on average CSP might not be required.

However, it's not uncommon to have a couple of high latency connections on a game server (player in another country, roommate is running bit torrent, or just a poor connection). You can't always guarantee low latency, and sometimes connections are interrupted temporarily and packets get lost (assuming UDP).

Short of full sync more often, CSP helps make these unpredictabilities a little more tolerable and transparent for the player.
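For anyone unfamiliar with what CSP actually does, here is a minimal illustrative sketch (not Quake's actual code; all names are made up) of client-side movement prediction with server reconciliation: apply inputs locally right away, remember them, and when an authoritative snapshot arrives, rewind to the server's state and replay the inputs it hasn't acknowledged yet.

```python
from dataclasses import dataclass

@dataclass
class Input:
    seq: int    # sequence number the server acks
    dx: float   # movement delta for this tick

class PredictedClient:
    def __init__(self):
        self.x = 0.0                    # predicted position shown on screen
        self.pending: list[Input] = []  # inputs not yet acked by the server

    def local_input(self, inp: Input):
        self.pending.append(inp)
        self.x += inp.dx  # predict immediately; no waiting a round trip

    def server_snapshot(self, acked_seq: int, server_x: float):
        # Drop inputs the server has already applied...
        self.pending = [i for i in self.pending if i.seq > acked_seq]
        # ...then rewind to the authoritative position and replay the rest.
        self.x = server_x
        for i in self.pending:
            self.x += i.dx

c = PredictedClient()
c.local_input(Input(1, 2.0))
c.local_input(Input(2, 2.0))
c.server_snapshot(acked_seq=1, server_x=2.0)  # server agrees with input 1
print(c.x)  # -> 4.0 (input 2 replayed on top of the server state)
```

When the server's state matches the prediction, the replay is invisible; when it doesn't, the player sees a correction, which is the "warping" described elsewhere in this thread.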

Sure, this is why you have a < 60ms ping restriction or whatver. You simply don't let players play on a server with high latency because then it's a bad experience for everyone else. This is how competitive Quake Live servers worked.

That's not really a great solution, particularly for less popular games where there may not be many local servers, or servers that are not full, or servers that have at least some other players.

Latency is better than it was so prediction times are shorter, but there's still latency so prediction is still important.

> "We're not on PPP or SLIP connections anymore and yet game devs are still writing netcode as though we are."

Your average multiplayer today is significantly more complex than Quake. More data needs to pass through, at higher rates. The network is still very much a bottleneck.

Fair point, but bandwidth has also increased from 56 kbit/s to 100,000 kbit/s. CPUs have had similarly massive increases in speed as well.

A lot of the extra bandwidth usage is also probably in large part due to sloppiness. John Carmack did some impressively smart things in Quake Live to reduce bandwidth usage. I doubt most modern games like PUBG are anywhere near as optimized.

It helps that he was his own manager and didn't have anyone else to convince that this was a good idea :-P

What a great summation of the internet in the 90's: "Client. User's modem. ISP's modem. Server. ISP's modem. User's modem. Client. God, that sucks."

Then followed by Carmack's 90's internet experience: "Ok, I made a bad call. I have a T1 to my house, so I just wasn't familliar with PPP life. I'm adressing it now."


> If it looks feasable, I would like to see internet focused gaming become a justifiable biz direction for us. Its definately cool, but it is uncertain if people can actually make money at it.

Classic! Reminds me of the time I told my dad I wanted to make websites for a living and he replied "hey, that's pretty cool, but I don't think there's enough work available to make it a full time career". (This was circa 1996)

I always preferred regular NetQuake to QuakeWorld and I played a lot of both for many years. The weird jello-jiggly behavior of client-side prediction with 400-600ms difference between players was always horrible.

With NetQuake you could learn to predict everything in your head over time, which is the primary skill competitive players use. No one has a super fast reaction time (even competitive gamers are ~200ms), the best players simply know what's going to happen next better than other players.

I did play thousands of hours of QuakeWorld (Team Fortress mostly) but it was never as fun or competitive feeling as NetQuake.

And once I got an ISDN connection it was even worse to use QuakeWorld because with a perfectly reliable (low jitter) ~50ms ping on NetQuake everything was incredibly smooth. Watching players with 150ms+ ping ("HPBs") warp around the map while you tried to shoot them was no fun.

Today far too many games rely on bad client side prediction. The last game I played seriously was PUBG and they put all of their servers in Ohio. Being on the west coast with 80ms+ ping it was just a terrible experience all the time, but they apparently (and stupidly) assumed it would work well enough due to client side prediction. PUBG could have been so much more fun with ~20ms California servers and a < 60ms ping restriction, which is how most Quake Live servers operated when I used to play that game.

The way to make online competitive games as good as possible is to embrace low latency connections now that most people have them and focus on placing servers in as many cities as possible. The speed of light can't be overcome no matter what we do, so the solution is for people to play with other people that are within ~500 miles.

I hold out hope that eventually netcode authors will realize this fact and finally re-create the incredible long-lost solidity of NetQuake. All they really have to do is stop being so damn clever!

The only thing QW really added over vanilla "netquake" was client side movement prediction. Hitscan weapons (infinite velocity bullets) only let you know if you had hit or missed based on the response from the server. You had to lead your shots a certain amount based on your current latency in order to hit your target.

It was great though. It meant you could bunny hop around corners at extremely high speeds without running into walls.
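The "lead your shots" mechanic described above reduces to simple arithmetic. With no lag compensation, the target position you see is roughly half your RTT stale, and your shot takes another half RTT to reach the server, so you aim roughly a full ping ahead of a moving target. This is an illustrative sketch with made-up numbers, not code from QuakeWorld:

```python
def lead_distance(target_speed_ups: float, ping_ms: float) -> float:
    """How far ahead (in game units) to aim at a target moving at a
    constant speed, assuming no server-side lag compensation."""
    return target_speed_ups * (ping_ms / 1000.0)

# A player strafing at 320 units/s, seen through a 150 ms ping:
print(lead_distance(320, 150))  # -> 48.0 units of lead
```

Which is why the skill of leading shots was itself a function of your connection: players had to recalibrate their aim whenever their ping changed.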

Hm, I think the engineering story is the interesting aspect:

What an elegant client-as-terminal I have for this incredibly popular physics simulation! Oh no! it takes up too much bandwidth. Welp - time to throw out the old lock-step game-loop architecture, and allow each client to stream incremental updates

I'm sure many contemporary engineers had similar problems and considered the same approach, but either shrank from or gave up on rustling together an elegant solution. Like many problems in the early days of 3d/internet gaming, Carmack seems to have rapidly and nicely solved it with imagination & experimentation, all informed by a broad and deep understanding of data structures & algorithms, and presumably an industry-leading amount of experience.

You say "only", but for those of us whose ping on a good day was between 250-300ms, it was miraculous. It is interesting though, how lag compensation on one particular aspect of the game can make such a big difference, even though the lag is still there in full force in other (just as critical) aspects of the game.

Even to this day playing from Australia it's not uncommon to have a 250-300ms ping on US/EU servers (depending on the game may be the closest English servers).

There are some changes in the movement physics which might not be obvious. QW is more forgiving wrt bunnyhopping as you can gain momentum more easily compared to NQ. The speedrunners still use NQ afaik because it is the original single player game.


What were .plan files like? I've only ever heard of Carmack using them -- did other prominent devs also publish their .plans?

I wonder if there'd be any interest in reviving this. I think it would be cool to have something like an RSS feed of the .plan files from various developers I respect/follow. These days you have to settle for reading their Twitter+GitHub issues.

Well, Twitter, Facebook, etc. are the modern replacements for the "status updates" you used to put in your .plan file. .plan was just a simple text file you left in your home directory; it would be displayed to any user who attempted to query you with the 'finger' command. These days the 'finger' protocol is not really used anymore due to security vulnerabilities. But back then, Unix really was a social-media OS, and the internet was a decentralized, federated social network.

(There were actually two such files; .project was used to give a high-level summary of what you were working on, while .plan was used to talk in detail about what you were currently doing at the moment.)

I think Carmack popularized their use at the time. A bunch of news sites started tracking them. Blue's News still has an archive from back then:


Click on a company, a person and then use the drop down to view older entries.

Other technical people used them as well but probably no one more popular than Carmack. I still remember clearly fingering his .plan directly and excitedly reading them. The good ol' days.


I've been keeping personal .plan files since the last time I saw them mentioned on HN. So only a couple of weeks at this stage.

In some ways they reflect the way I've gone about a days work more clearly than other project management/bug tracking software I'm required to use.

I'm a developer on FortressOne - a still actively developed continuation of the original QuakeWorld Team Fortress. We have an extremely active community with games taking place nightly. It's a testament to Carmack that his code from 1996 is still widely used in 2019.


It seems to be the premise of client-side prediction. I submitted a link on HN yesterday that may interest some of you guys: https://news.ycombinator.com/item?id=19908875

Why is this article hosted on github? Obviously it wasn't created there as it was written in 1996. Are all the message boards and forums where this type of material was likely posted really defunct or unusable now? Do we now need to rely on enterprising internet historians to find gems on older platforms just to copy them to the newer platforms? Will Github be around in 20 years? Will the next historian have to unearth it from github just to copy it to the next platform?

> Are all the message boards and forums where this type of material was likely posted really defunct or unusable now?

The original source of this material (and the other articles) was John Carmack's .plan file. See https://en.wikipedia.org/wiki/Finger_protocol

The .plan file is by its nature ephemeral, as the user can change it whenever. But the content was archived.

The GitHub is just the latest mirror.

> Will the next historian have to unearth it from github just to copy it to the next platform?


Hey, repo owner here.

That does seem to be the case here. I had re-hosted these .plan files from a website – which has removed them – that had re-hosted them from Shacknews – which had also removed them.

GitHub seemed like a quick-and-dirty way to re-host these while giving others an easy way of duplicating the original files so they’re less likely to be lost in the ether.

Why not? While this is a bit of an exception, given the prominence of the author, most internet content is always at risk of being wiped away from the record. Let a thousand mirrors bloom.

> In a clean sheet of paper redesign, I would try to correct more of the discrepencies

Sounds like that is what he did with Quake 3 (don't know about Quake 2). At least in Quake 3, the whole game logic is running in a so-called 'Quake Virtual Machine' file (.qvm) that is exactly the same on server and client (even cross-platform compatible). That way he doesn't just predict the client movements but the whole game world.
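The key property of running identical game logic on both sides is determinism: if client and server execute the exact same step function over the exact same inputs, the client can reproduce the server's entire world state, not just its own movement. This toy sketch (purely illustrative, nothing like the actual QVM) shows the idea:

```python
def step(state: dict, inputs: dict) -> dict:
    """One fixed tick of game logic. Must be deterministic and byte-for-byte
    identical on client and server for whole-world prediction to work."""
    return {pid: pos + inputs.get(pid, 0.0) for pid, pos in state.items()}

state_server = {"p1": 0.0, "p2": 10.0}
state_client = dict(state_server)  # client starts from the same snapshot

for tick_inputs in [{"p1": 1.0}, {"p2": -2.0}, {"p1": 1.0, "p2": -2.0}]:
    state_server = step(state_server, tick_inputs)
    state_client = step(state_client, tick_inputs)  # predicts everyone, not just itself

print(state_client == state_server)  # -> True: identical logic, identical worlds
```

In practice the hard parts are floating-point and cross-platform determinism, which is one reason a bytecode VM interpreted identically everywhere is attractive.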

It seems like almost every time something about Carmack is posted, people point to the book "Masters of Doom", about the development of id games (Wolf3d, doom, commander keen). It's pretty interesting read imo: https://www.goodreads.com/book/show/222146.Masters_of_Doom

I recently read the book and it blows my mind how Carmack was able to program so relentlessly. Always moving on to the next thing and doing the impossible. He's definitely on a whole other level.

“I have a T1 to my house...”

In 1996, one would have been lucky to be able to have ISDN, so having a T1 was more like 'money is no object' territory.

In the late 90s a T1 cost about US$1,500/month. It came with a bunch of service guarantees and the like because it was aimed squarely at businesses. A T1 is only 1.5mbps too, fast for the time but in absolute terms not that great. The only other options were 56k modem (48kbps on a good day) and frame relay or ISDN, which was a solid 64kbps and lower latency than a modem but also far more expensive than it should have been (US$200-300/month).

ISDN ("It Still Does Nothing" to give its facetious full name from the time) lines were often bonded, usually to give 128kbps though faster configurations were possible, though of course there was an increased cost for this.

Or you were in university/connected to a university.

The Swedish qwctf scene at least was very clearly divided in 20ms people who went to or worked at, a university, and 150ms people who played from home.

There were some 50-60ms ISDN people too but I can only remember a few.

The office of one of my early programming jobs had an ISDN line. I would carry my home desktop into the office to download things/play games.

The next job I had was in a much bigger company, and the office I was in had equivalent of 70+ T1s. Most of those were to support the phone system though.

Link to the origin repo if you’d like to read the entire .plan archive: https://github.com/ESWAT/john-carmack-plan-archive

Maybe FRP (functional reactive programming) could make it easier to reason about networked shooters and their internal simulation, just like React revolutionized GUIs (it's not an FRP library, but has ideas from it). With FRP it's even possible to build "tickless" game logic. However, for now it has limited practicality, and I've never seen anything complete that was built with FRP.

Is he happy with his Oculus job after the FB deal or will he leave after the Quest comes out?

I always wondered about how he felt about that role. I feel like Carmack could still run one of the best software shops on earth, even if he had to start from scratch. The man invented the concept of "it will be ready when it's done", and has an incredibly positive attitude towards handling difficult (challenging) problems.

If the stars aligned and Carmack decided to get back into making new software-based gaming experiences (especially around the DTX/ATX area), I would be incredibly tempted to investigate opportunities.

Good times. If I remember correctly, it was possible to drop everyone from a quake server (especially team fortress, when your team was about to lose) by spamming chat messages in the console

> I would like to see internet focused gaming become a justifiable biz direction for us.

> Its definately cool, but it is uncertain if people can actually make money at it.

It is interesting to see so many grammar and spelling errors in this short article. Almost sounds like he didn't write this himself.

Was the dot plan written to himself or for his teammates ?

>There are all sorts of other cool stats that we could mine out of the data: greatest frags/minute, longest uninterrupted quake game, cruelest to newbies, etc, etc.

I agree with his use of a double etc there.

pushlatency -120

play'd all the doom/quake... was a pleasure reading this...

I see everyone in this thread reference CSP [0] and I keep thinking they’re all talking about the foundations of core.async or Golang.

[0] https://www.reaktor.com/blog/why-csp-matters-i-keeping-thing...

Presumably by HN readers, saved to archive already 7 times today!


Is there a good browser extension for archiving pages in one click? Firefox.

There's https://warcreate.com/, but it's for Chrome. I don't know the details, but don't both browsers use the same extension API? If so, it's probably possible to port it, or even install it as-is.

It creates .warc files, which are supported by Internet Archive. I don't know if it integrates uploaded .warc files to Wayback Machine, but Archive Team uses this format to upload manually archived sites to Internet Archive.

I do not know about submitting to archive.org, but i use Save Page WE for Firefox to make full local copies of sites (it essentially takes the current DOM state and inlines all external files/dependencies to a single HTML file and then prompts you to save that file which can be opened later):


Thanks for the suggestion. You might be interested in Zotero [0] for a more general solution. It supports saving to HTML, saving to PDF, etc. and allows you to organize everything with tags.

[0] https://www.zotero.org/

There was Scrapbook, now replaced by ScrapbookQ.
