
John Carmack on QuakeWorld latency and business model (1996) - Reedx
https://raw.githubusercontent.com/ESWAT/john-carmack-plan-archive/master/by_day/johnc_plan_19960802.txt
======
SmirkingRevenge
Carmack's dotplans are always interesting history to read.

If you weren't a gamer (or alive) at the time when quakeworld came around, you
might not appreciate how amazing it was for multiplayer games on the internet.
On dial-up, you were lucky to have 150ms latency. Before client-side-
prediction, that latency applied to every action you took in game, including
player movements. Hit the up-arrow, and you wait 150-300ms before the game
responds and moves your character forward. CSP really was an amazing
breakthrough, and made multiplayer action games feasible on the internet.
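The core trick can be sketched in a few lines (illustrative Python with hypothetical names, not id's actual implementation): apply inputs locally right away, then reconcile against the authoritative server state when it arrives.

```python
# Minimal client-side prediction sketch (illustrative, not Quake's actual code).
# The client applies inputs locally right away, remembers the inputs it has
# sent, and when an authoritative server state arrives it rewinds to that
# state and replays any inputs the server hasn't acknowledged yet.

def apply_input(x, move):
    """Toy simulation step: position changes instantly by `move` units."""
    return x + move

class PredictingClient:
    def __init__(self):
        self.position = 0.0
        self.pending = []     # (sequence, move) pairs not yet acked by server
        self.next_seq = 0

    def local_input(self, move):
        # Predict immediately -- no round-trip wait before the player moves.
        self.position = apply_input(self.position, move)
        self.pending.append((self.next_seq, move))
        self.next_seq += 1

    def server_update(self, acked_seq, server_position):
        # Drop inputs the server has already processed...
        self.pending = [(s, m) for s, m in self.pending if s > acked_seq]
        # ...then rewind to the authoritative state and replay the rest.
        self.position = server_position
        for _, move in self.pending:
            self.position = apply_input(self.position, move)

client = PredictingClient()
client.local_input(1.0)    # position shows 1.0 immediately
client.local_input(1.0)    # position shows 2.0
# Server has only processed input 0 so far, and agrees with the client:
client.server_update(acked_seq=0, server_position=1.0)
print(client.position)     # 2.0 -- pending input 1 was replayed on top
```

If the server disagrees (say it reports a lower position because the player hit a wall), the same reconciliation step snaps the client back, which is the "jello-jiggly" correction some commenters below describe.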

This is particularly relevant now that we are entering the era of cloud-based
streaming game platforms, like Stadia. The latency problems of the pre-CSP
90's will be rearing their heads again. It's going to be interesting to see how
these same problems will be tackled in this new context. Internet speeds are
higher now, but so are our expectations.

Sadly, I doubt we'll get the kind of nice, simple dotplan files from the SREs
at Google that Carmack left for us to read and remember.

~~~
nickjj
Sadly we live in a world today where some local code editors could really
benefit from CSP because they add 100-200ms of latency to every character you
press.

I'm not sure if it's funny or sad that there's more key press latency typing
into most local Electron apps than connecting to a Quake 3 server 200 miles
away back when I had 56k dial-up in 2000.

If you want to fast forward to today's internet, with an average internet
connection it takes around 150ms to ping a server in the Netherlands from
California. That's over 5,000 miles (8,000 kilometers). Somehow a local key
press has the same latency with certain code editors. What have we gotten
ourselves into.

~~~
gameswithgo
VS Code typing latency is on the order of 50ms.

It's more than necessary, but not as bad as you're implying.

~~~
dep_b
VS Code is pretty good for an Electron app

------
Nition
Someone correct me if I'm wrong, but I believe this was the genesis of client-
side prediction for games.

If you don't know what that is, Valve has some good documentation on it
(relating to Source engine, but it's the same concept).[1]

Latency was still a problem even with CSP because we didn't yet have lag
compensation on the server. So although you got an authoritative server (no
cheating) and instant inputs (no round-trip wait), you could still shoot
someone on the client and miss them on the server. You had to shoot ahead of
them, further if your latency was worse. As far as I know, Source engine was
the first to add lag compensation, which has the server rewind the other
players to be where you would have seen them on the client.

Since client-side prediction can be performance intensive, and lag
compensation can be difficult for anything beyond simple raycast checks, many
games still don't do both. Team Fortress 2 for instance does lag compensation
on basic guns but not on things like the soldier's rocket or the pyro's
flamethrower. Some games simply allow the client to decide who they hit, which
removes the need for either solution, but totally opens things up to cheaters
without some very careful server-side validation.

[1]
[https://developer.valvesoftware.com/wiki/Latency_Compensating_Methods_in_Client/Server_In-game_Protocol_Design_and_Optimization#Client_Side_Prediction](https://developer.valvesoftware.com/wiki/Latency_Compensating_Methods_in_Client/Server_In-game_Protocol_Design_and_Optimization#Client_Side_Prediction)
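The server-side rewind described above can be sketched roughly like this (hypothetical names and toy one-dimensional positions, not Source engine code): the server keeps a short history of player positions and, when a shot arrives, rewinds targets to where the shooter saw them before running the hit test.

```python
# Sketch of server-side lag compensation for hitscan weapons (hypothetical
# names, not Source engine code). The server keeps ~1 second of position
# history per player; when a shot arrives it rewinds the target to the time
# the shooter actually saw it (now minus the shooter's latency).

HISTORY_SECONDS = 1.0

class LagCompensatedServer:
    def __init__(self):
        # player_id -> list of (timestamp, position), newest last
        self.history = {}

    def record(self, player_id, timestamp, position):
        snaps = self.history.setdefault(player_id, [])
        snaps.append((timestamp, position))
        # Keep only the last ~1 second of history.
        self.history[player_id] = [
            (t, p) for t, p in snaps if timestamp - t <= HISTORY_SECONDS
        ]

    def position_at(self, player_id, when):
        # Use the newest snapshot at or before `when`. (A real engine would
        # interpolate between snapshots instead of picking one.)
        best = None
        for t, p in self.history.get(player_id, []):
            if t <= when:
                best = p
        return best

    def hitscan(self, target_id, now, shooter_latency, aim_position, radius=0.5):
        # Rewind the target to where the shooter saw it on their screen.
        rewound = self.position_at(target_id, now - shooter_latency)
        return rewound is not None and abs(rewound - aim_position) <= radius

server = LagCompensatedServer()
server.record("target", timestamp=0.0, position=10.0)  # where the shooter saw them
server.record("target", timestamp=0.1, position=12.0)  # where they are "now"
# A shot aimed at 10.0 arrives at t=0.1 from a client with 100ms of latency:
print(server.hitscan("target", now=0.1, shooter_latency=0.1, aim_position=10.0))
# True -- without the rewind, the target would already have moved to 12.0
```

This also shows why lag compensation is harder for projectiles: a rocket exists in the world for many ticks, so there is no single "shot time" to rewind to.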

~~~
timeimp
Any idea why they would not implement lag comp for projectile-based weapons?
Too much processing power required?

~~~
forrestthewoods
Projectiles are simulated on the server

~~~
Nition
All weapons are simulated on both the server and the client in the case we're
talking about here. It's just that raycast weapons are rewound on the server
to compensate for the client's latency, and projectiles aren't. The server has
the authority on actual hits and damage done for all weapons.

------
robocat
Complete aside about Carmack and reducing VR latency from
[https://www.gamasutra.com/view/news/226112/How_John_Carmack_...](https://www.gamasutra.com/view/news/226112/How_John_Carmack_is_bending_Samsungs_VR_strategy.php)
:

"He gave the example of increasing the refresh rate on the Gear VR during
development. He was working, at that time, with its Galaxy S III phone.
Android triple-buffers graphics, inducing a 48 millisecond delay into the
system -- making VR impossible.

Carmack pulled apart Android to hack that out. Though he'd written several
emails to Samsung in attempts to convince them to give back that buffer, "It's
easy to argue against an email, but it's much harder to make the argument when
you can have the two things and stick them on your face and, 'Tell me this
isn't better.'"
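As a back-of-envelope check (my own arithmetic, not from the article): three buffered frames at a 60 Hz refresh comes to about 50 ms, in the same ballpark as the 48 ms figure quoted.

```python
# Back-of-envelope check on the triple-buffering figure: at a 60 Hz refresh,
# each buffered frame adds one frame time of delay.
refresh_hz = 60
frame_ms = 1000 / refresh_hz              # ~16.7 ms per frame
buffered_frames = 3                       # triple buffering
print(round(buffered_frames * frame_ms))  # ~50 ms, close to the quoted 48 ms
```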

~~~
keerthiko
Carmack is probably the most self-consistent opinionated creator out there,
given that so many years on he still sticks to one of his original quotes:

> "Focused, hard work is the real key to success. Keep your eyes on the goal,
> and just keep taking the next step towards completing it. If you aren't sure
> which way to do something, do it both ways and see which works better."

He'll even do Samsung's work for them to fulfill the purpose of that quote.

~~~
AdmiralAsshat
Carmack's brain is so much like a computer that it even does speculative
execution branch prediction!

------
diarmuidc
I fondly remember the old Quake 1 days. It was pretty much the last game I
played seriously. Reading Carmack .plans were a regular routine and while many
of the details went over my head, it made us feel part of the whole id
experience.

Finally networked Quake was an amazing experience when it came out. I had
started my first job in an ASIC design company. All the computers were Solaris
based Sun workstations, yet we were able to get network quake up and running
on them! I just can't imagine such a thing happening today but maybe that's
more to do with me being 20 years older than anything else. For sure no one is
porting game engines to Solaris these days ;)

~~~
newaccoutnas
Same story, but not as glamorous, on a Windows NT4 rollout (1997). I also
recall us downloading South Park at a (then) stellar speed: it only took half
an hour to download a highly compressed half-hour episode, a thing of sheer
magic at the time :)

------
keyle
Pretty impressive when you consider this was in 1996!

I really like his ability to take out code and 'shoot it'. Back to the drawing
board. It's a quality of a great engineer, to be able to reflect on what's
been done and admit it's not good enough.

~~~
ethbro
> "While I can remember and justify all of my decisions about networking from
> DOOM through Quake, the bottom line is that I was working with the wrong
> basic assumptions for doing a good internet game."

I'm not one for hero worship, but I think that's as close to engineering zen
as one can get.

Aka 'I'm brilliant. I thought I was doing the smart thing. Turns out reality
was otherwise. I'm changing my approach.'

~~~
staunch
What's kind of sad and funny is that these assumptions changed again and no
one seemed to have noticed. We're not on PPP or SLIP connections anymore and
yet game devs are still writing netcode as though we are. Client side
prediction should have been a temporary hack until low latency connections
were mainstream, not a permanent aspect of all gaming netcode, and yet it is.

Maybe Carmack will wake everyone up again and tell them to stop copying his
ancient hack two decades later.

~~~
boomlinde
You must be living under a very pleasant rock if you think that low latency,
to the degree that client side prediction is useless, is mainstream. Not
accounting for routing, drops and mux/demuxing, and assuming that the signal
travels at the speed of light as the crow flies, from west coast USA to where
I'm at, the latency is 29ms or roughly two frames.

This would be debilitating in an online game without CSP, where even the
latency introduced by screen buffering and vertical sync may make or break your
game, and in reality the latency is of course much higher, usually around
100ms or higher. Check your assumptions:
[https://wondernetwork.com/pings](https://wondernetwork.com/pings)
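The arithmetic behind that figure (the exact endpoints aren't stated, so the distance here is an assumption; great-circle path and speed of light in vacuum assumed, which is the best case, real fiber is slower):

```python
# Back-of-envelope check on the "29 ms or roughly two frames" figure:
# one-way travel time at the speed of light over an assumed great-circle path.
SPEED_OF_LIGHT_KM_S = 299_792

distance_km = 8_700                      # assumed: US west coast to Scandinavia
one_way_ms = distance_km / SPEED_OF_LIGHT_KM_S * 1000
print(round(one_way_ms))                 # ~29 ms

frame_ms = 1000 / 60                     # one frame at 60 fps
print(round(one_way_ms / frame_ms, 1))   # ~1.7 -- "roughly two frames"
```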

~~~
pedrocr
By your numbers then the OP is right. Carmack says:

 _the bottom line is that I was working with the wrong basic assumptions for
doing a good internet game. My original design was targeted at <200ms
connection latencies. People that have a digital connection to the internet
through a good provider get a pretty good game experience. Unfortunately, 99%
of the world gets on with a slip or ppp connection over a modem, often through
a crappy overcrowded ISP. This gives 300+ ms latencies, minimum._

So he designed for 200ms and then the real world was 300+ so it didn't work.
Your 100ms would then work fine in that context. Maybe games have gotten more
complex or our expectations higher so prediction is still worth it even on
<50ms connections. But OPs point that the assumptions underlying Carmack's
redesign have once again shifted seems correct, at least for that specific
design.

~~~
boomlinde
_> By your numbers then the OP is right._

By my numbers (though as you can clearly see from the link I posted, >200ms
latencies are not uncommon), and Carmack's assumptions in 1996 about what
latencies would be acceptable.

 _> So he designed for 200ms and then the real world was 300+ so it didn't
work. Your 100ms would then work fine in that context._

He designed for _< 200ms_ which by the frame of reference he cites (T1
connection) probably was a lot lower than 200ms on average.

 _> Maybe games have gotten more complex or our expectations higher so
prediction is still worth it even on <50ms connections. _

Quake was not a very complex world, but more important (to latency) is that
it's a fast-paced game by today's standards (IME). It seems more likely to me
that the perception of what an acceptable hand-to-eye latency is has changed,
just like the perception of acceptable frame rates. To me, joining an online
shooter with >100ms latency (even with prevalent client side prediction) feels
debilitating, especially if other players have lower latencies.

 _> But OPs point that the assumptions underlying Carmack's redesign have once
again shifted seems correct, at least for that specific design._

Yes, I agree that PPP and SLIP are not common any more. High latency
unfortunately is, and client side prediction is still an effective mitigation
strategy that makes a huge difference to online play. The specific design used
in Quake has of course been superseded.

~~~
pedrocr
_> 200ms latencies are not uncommon_

Those numbers match my experience a long time ago with Quake3 based games
where you'd pick servers that are local to you and get 10-40ms latencies.

 _> He designed for <200ms which by the frame of reference he cites (T1
connection) probably was a lot lower than 200ms on average._

I didn't forget the < sign. Designing for <200ms means you can handle the
200ms worst case. But maybe he meant 200ms worst case but much lower average
like you are suggesting.

 _> It seems more likely to me that perception on what an acceptable hand to
eye latency is has changed, just like the perception on acceptable frame
rates._

Right, makes sense that the expectation today is much higher.

 _> High latency unfortunately is, and client side prediction is still an
effective mitigation strategy that makes a huge difference to online play._

Just for curiosity, how low does it need to go for prediction to no longer be
useful? I guess if you want to be able to have players from all over the world
play each other then you have to deal with 300+ ms of latency. But if you are
willing to do servers on each geography it seems 10-30 ms would be feasible.
Would that be enough?

~~~
reifiedgent
NQ (NetQuake, as the original Quake is sometimes known) 'feels off' on just
10ms. Movement becomes just slightly out of sync with your input.

10-30ms is a bit optimistic, depending on how large you define a geography.
Many ADSL/VDSL connections, best case, start with 5ms of latency, and often
higher (say 20ms) due to interleaving. Cable tends to be around 10ms IIRC but
can suffer from significant jitter which makes things worse. So for a lot of
players the servers would need to be in the same city to achieve that target.

~~~
staunch
NetQuake feels excellent at < 30ms. Even higher is easy to get used to with
some practice because it's extremely predictable due to a lack of CSP. And
yes, the solution is to have servers within ~500 miles of players so ~20ms avg
ping is the norm. Most gamers have cable not DSL connections.

------
SmirkingRevenge
What a great summation of the internet in the 90's: "Client. User's modem.
ISP's modem. Server. ISP's modem. User's modem. Client. God, that sucks."

Then followed by Carmack's 90's internet experience: "Ok, I made a bad call. I
have a T1 to my house, so I just wasn't familliar with PPP life. I'm adressing
it now."

:p

------
pault
> If it looks feasable, I would like to see internet focused gaming become a
> justifiable biz direction for us. Its definately cool, but it is uncertain
> if people can actually make money at it.

Classic! Reminds me of the time I told my dad I wanted to make websites for a
living and he replied "hey, that's pretty cool, but I don't think there's
enough work available to make it a full time career". (This was circa 1996)

------
staunch
I always preferred regular NetQuake to QuakeWorld and I played a lot of both
for many years. The weird jello-jiggly behavior of client-side prediction with
400-600ms difference between players was always horrible.

With NetQuake you could learn to predict everything in your head over time,
which is the primary skill competitive players use. No one has a super fast
reaction time (even competitive gamers are ~200ms), the best players simply
know what's going to happen next better than other players.

I did play thousands of hours of QuakeWorld (Team Fortress mostly) but it was
never as fun or competitive feeling as NetQuake.

And once I got an ISDN connection it was even worse to use QuakeWorld because
with a perfectly reliable (low jitter) ~50ms ping on NetQuake everything was
incredibly smooth. Watching players with 150ms+ ping ("HPBs") warp around the
map while you tried to shoot them was no fun.

Today far too many games rely on bad client side prediction. The last game I
played seriously was PUBG and they put all of their servers in Ohio. Being on
the west coast with 80ms+ ping it was just a terrible experience all the time,
but they apparently (and stupidly) assumed it would work well enough due to
client side prediction. PUBG could have been so much more fun with ~20ms
California servers and a < 60ms ping restriction, which is how most Quake Live
servers operated when I used to play that game.

The way to make online competitive games as good as possible is to embrace low
latency connections now that most people have them and focus on placing
servers in as many cities as possible. The speed of light can't be overcome no
matter what we do, so the solution is for people to play with other people
that are within ~500 miles.

I hold out hope that eventually netcode authors will realize this fact and
finally re-create the incredible long-lost solidity of NetQuake. All they
really have to do is stop being so damn clever!

------
kzahel
The only thing QW really added over vanilla "netquake" was client side
movement prediction. Hitscan weapons (infinite velocity bullets) only let you
know if you had hit or missed based on the response from the server. You had
to lead your shots a certain amount based on your current latency in order to
hit your target.

It was great though. It meant you could bunny hop around corners at extremely
high speeds without running into walls.
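The amount of lead needed scales directly with latency and target speed; a rough illustration (the numbers below are assumptions for the example, not exact Quake values):

```python
# How far you had to "lead" a target with server-authoritative hitscan and no
# lag compensation: aim where the target will be after your round-trip latency.
target_speed_ups = 320    # roughly Quake's run speed, in units per second
latency_ms = 150          # round-trip ping

lead_units = target_speed_ups * latency_ms / 1000
print(lead_units)         # 48.0 units ahead of a strafing target
```

Double the ping and the required lead doubles too, which is why consistent aim was so hard for high-ping players.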

~~~
SmirkingRevenge
You say "only", but for those of us whose ping on a good day was between
250-300ms, it was miraculous. It is interesting though, how lag compensation
on one particular aspect of the game can make such a big difference, even
though the lag is still there in full force in other (just as critical)
aspects of the game.

~~~
hulahoof
Even to this day playing from Australia it's not uncommon to have a 250-300ms
ping on US/EU servers (depending on the game may be the closest English
servers).

------
nemo1618
What were .plan files like? I've only ever heard of Carmack using them -- did
other prominent devs also publish their .plans?

I wonder if there'd be any interest in reviving this. I think it would be cool
to have something like an RSS feed of the .plan files from various developers
I respect/follow. These days you have to settle for reading their
Twitter+GitHub issues.

~~~
bitwize
Well, Twitter, Facebook, etc. are the modern replacements for the "status
updates" you used to put in your .plan file. .plan was just a simple text file
you left in your home directory; it would be displayed to any user who
attempted to query you with the 'finger' command. These days the 'finger'
protocol is not really used anymore due to security vulnerabilities. But back
then, Unix really was a social-media OS, and the internet was a
decentralized, federated social network.

(There were actually two such files; .project was used to give a high-level
summary of what you were working on, while .plan was used to talk in detail
about what you were currently doing at the moment.)

------
shayolden
I'm a developer on FortressOne - a still actively developed continuation of the
original QuakeWorld Team Fortress. We have an extremely active community with
games taking place nightly. It's a testament to Carmack that his code from
1996 is still widely used in 2019.

[https://www.fortressone.org](https://www.fortressone.org)

------
jspdown
It seems to be the genesis of client-side prediction. I submitted a link on
HN yesterday that may interest some of you guys:
[https://news.ycombinator.com/item?id=19908875](https://news.ycombinator.com/item?id=19908875)

------
speedplane
Why is this article hosted on github? Obviously it wasn't created there as it
was written in 1996. Are all the message boards and forums where this type of
material was likely posted really defunct or unusable now? Do we now need to
rely on enterprising internet historians to find gems on older platforms just
to copy them to the newer platforms? Will Github be around in 20 years? Will
the next historian have to unearth it from github just to copy it to the next
platform?

~~~
CaliforniaKarl
> Are all the message boards and forums where this type of material was likely
> posted really defunct or unusable now?

The original source of this material (and the other articles) was John
Carmack's .plan file. See
[https://en.wikipedia.org/wiki/Finger_protocol](https://en.wikipedia.org/wiki/Finger_protocol)

The .plan file is by its nature ephemeral, as the user can change it whenever.
But the content was archived.

The GitHub is just the latest mirror.

> Will the next historian have to unearth it from github just to copy it to
> the next platform?

Maybe!

------
arendtio
> In a clean sheet of paper redesign, I would try to correct more of the
> discrepencies

Sounds like that is what he did with Quake 3 (don't know about Quake 2). At
least in Quake 3, the whole game logic is running in a so-called 'Quake
Virtual Machine' file (.qvm) that is exactly the same on server and client
(even cross-platform compatible). That way he doesn't just predict the client
movements but the whole game world.

------
Insanity
It seems like almost every time something about Carmack is posted, people
point to the book "Masters of Doom", about the development of id games
(Wolf3d, Doom, Commander Keen). It's a pretty interesting read imo:
[https://www.goodreads.com/book/show/222146.Masters_of_Doom](https://www.goodreads.com/book/show/222146.Masters_of_Doom)

~~~
apezdedookie
I recently read the book and it blows my mind how Carmack was able to program
so relentlessly. Always moving on to the next thing and doing the impossible.
He's definitely on a whole other level.

------
ttul
“I have a T1 to my house...”

~~~
usefulcat
In 1996, one would have been lucky to be able to have ISDN, so having a T1 was
more like 'money is no object' territory.

~~~
jandrese
In the late 90s a T1 cost about US$1,500/month. It came with a bunch of
service guarantees and the like because it was aimed squarely at businesses. A
T1 is only 1.5 Mbps too, fast for the time but in absolute terms not that
great. The only other options were 56k modem (48kbps on a good day) and frame
relay or ISDN, which was a solid 64kbps and lower latency than a modem but
also far more expensive than it should have been (US$200-300/month).

~~~
dspillett
ISDN ("It Still Does Nothing", to give its facetious full name from the time)
lines were often bonded, usually to give 128kbps, though faster configurations
were possible at an increased cost.

------
eswat
Link to the origin repo if you’d like to read the entire .plan archive:
[https://github.com/ESWAT/john-carmack-plan-archive](https://github.com/ESWAT/john-carmack-plan-archive)

------
ungzd
Maybe FRP (functional reactive programming) can make it easier to reason about
designing networked shooters and internal simulation in games. Just like
React revolutionized GUIs (it's not an FRP library, but has ideas from it).
With FRP, it's even possible to build "tickless" game logic. However, for now
it has limited practicality and I've never seen anything complete that was
built with FRP.

------
k__
Is he happy with his Oculus job after the FB deal or will he leave after the
Quest comes out?

~~~
bob1029
I always wondered about how he felt about that role. I feel like Carmack could
still run one of the best software shops on earth, even if he had to start
from scratch. The man invented the concept of "it will be ready when it's
done", and has an incredibly positive attitude towards handling difficult
(challenging) problems.

If the stars aligned and Carmack decided to get back into making new software-
based gaming experiences (especially around the DTX/ATX area), I would be
incredibly tempted to investigate opportunities.

------
tarruda
Good times. If I remember correctly, it was possible to drop everyone from a
quake server (especially team fortress, when your team was about to lose) by
spamming chat messages in the console.

------
sshine
> I would like to see internet focused gaming become a justifiable biz
> direction for us.

> Its definately cool, but it is uncertain if people can actually make money
> at it.

------
system2
It is interesting to see so many grammar and spelling errors in this short
article. Almost sounds like he didn't write this himself.

------
john4534243
Was the dotplan written for himself or for his teammates?

------
ashika
>There are all sorts of other cool stats that we could mine out of the data:
greatest frags/minute, longest uninterrupted quake game, cruelest to newbies,
etc, etc.

I agree with his use of a double etc there.

------
chr1xzy
pushlatency -120

------
womd
play'd all the doom/quake... was a pleasure reading this...

------
hellofunk
I see everyone in this thread reference CSP [0] and I keep thinking they’re
all talking about the foundations of core.async or Golang.

[0] [https://www.reaktor.com/blog/why-csp-matters-i-keeping-things-in-sync/](https://www.reaktor.com/blog/why-csp-matters-i-keeping-things-in-sync/)

------
delhanty
Presumably by HN readers, saved to archive already 7 times today!

[https://web.archive.org/web/*/https://raw.githubusercontent.com/ESWAT/john-carmack-plan-archive/master/by_day/johnc_plan_19960802.txt](https://web.archive.org/web/*/https://raw.githubusercontent.com/ESWAT/john-carmack-plan-archive/master/by_day/johnc_plan_19960802.txt)

~~~
soulofmischief
Is there a good browser extension (for Firefox) for archiving pages in one click?

~~~
Crinus
I do not know about submitting to archive.org, but i use Save Page WE for
Firefox to make full local copies of sites (it essentially takes the current
DOM state and inlines all external files/dependencies to a single HTML file
and then prompts you to save that file which can be opened later):

[https://addons.mozilla.org/en-US/firefox/addon/save-page-we/](https://addons.mozilla.org/en-US/firefox/addon/save-page-we/)

~~~
soulofmischief
Thanks for the suggestion. You might be interested in Zotero [0] for a more
general solution. It supports saving to HTML, saving to PDF, etc. and allows
you to organize everything with tags.

[0] [https://www.zotero.org/](https://www.zotero.org/)

