
Pretty impressive when you consider this was in 1996!

I really like his ability to take code out back and 'shoot it'. Back to the drawing board. It's a quality of a great engineer to be able to reflect on what's been done and admit it's not good enough.


> "While I can remember and justify all of my decisions about networking from DOOM through Quake, the bottom line is that I was working with the wrong basic assumptions for doing a good internet game."

I'm not one for hero worship, but I think that's as close to engineering zen as one can get.

Aka 'I'm brilliant. I thought I was doing the smart thing. Turns out reality was otherwise. I'm changing my approach.'


What's kind of sad and funny is that these assumptions have changed again and no one seems to have noticed. We're not on PPP or SLIP connections anymore, and yet game devs are still writing netcode as though we are. Client-side prediction should have been a temporary hack until low-latency connections were mainstream, not a permanent aspect of all gaming netcode, and yet it is.

Maybe Carmack will wake everyone up again and tell them to stop copying his ancient hack two decades later.


I think gaming netcode engineers would love to ditch CSP, but the problem is that even though low-latency connections are mainstream, stable-latency connections are not. It's always possible to have lag spikes or temporary slowdowns in your routing or connection.

Second, it's also possible, through matchmaking lobbies, to end up on a server halfway around the world from one or more of the players in a given match. At some level the limitations of physics still result in noticeable latency, and CSP is still mandatory for any fast-paced real-time online multiplayer game.


Within a US city or state it's very rare to see much packet loss at all. It's very easy to spot using the built-in netgraph in Quake Live and other games.

Players that are separated by thousands of miles should not be playing competitive FPS games together. It's physically impossible to make it a consistently good experience.


You must be living under a very pleasant rock if you think that low latency, to the degree that client-side prediction is useless, is mainstream. Not accounting for routing, drops, and mux/demuxing, and assuming the signal travels at the speed of light as the crow flies, from the west coast of the USA to where I'm at, the latency is 29ms, or roughly two frames.

This would be debilitating in an online game without CSP, where even the latency introduced by screen buffering and vertical sync can make or break your game, and in reality the latency is of course much higher, usually around 100ms or more. Check your assumptions: https://wondernetwork.com/pings
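
For reference, here's a back-of-the-envelope version of that calculation. The ~8,700 km great-circle distance and the 60 fps frame time are my own assumptions, back-derived from the 29ms / two-frame figures above, so treat it as a sketch of the physical lower bound rather than a measurement:

    # Rough one-way latency floor at the speed of light in vacuum,
    # ignoring routing, queuing, and last-mile delays entirely.
    SPEED_OF_LIGHT_KM_S = 299_792   # km/s in vacuum
    DISTANCE_KM = 8_700             # assumed great-circle distance (illustrative)
    FPS = 60                        # assumed frame rate

    one_way_ms = DISTANCE_KM / SPEED_OF_LIGHT_KM_S * 1000
    frames = one_way_ms / (1000 / FPS)
    print(f"one-way: {one_way_ms:.0f} ms ~= {frames:.1f} frames at {FPS} fps")
    # -> one-way: 29 ms ~= 1.7 frames at 60 fps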


By your numbers then the OP is right. Carmack says:

> "the bottom line is that I was working with the wrong basic assumptions for doing a good internet game. My original design was targeted at <200ms connection latencies. People that have a digital connection to the internet through a good provider get a pretty good game experience. Unfortunately, 99% of the world gets on with a slip or ppp connection over a modem, often through a crappy overcrowded ISP. This gives 300+ ms latencies, minimum."

So he designed for 200ms, and then the real world was 300+, so it didn't work. Your 100ms would then work fine in that context. Maybe games have gotten more complex, or our expectations higher, so prediction is still worth it even on <50ms connections. But the OP's point that the assumptions underlying Carmack's redesign have once again shifted seems correct, at least for that specific design.


> By your numbers then the OP is right.

By my numbers (though as you can clearly see from the link I posted, >200ms latencies are not uncommon), and Carmack's assumptions in 1996 about what latencies would be acceptable.

> So he designed for 200ms and then the real world was 300+ so it didn't work. Your 100ms would then work fine in that context.

He designed for <200ms, which, given the frame of reference he cites (a T1 connection), probably meant a lot lower than 200ms on average.

> Maybe games have gotten more complex or our expectations higher so prediction is still worth it even on <50ms connections.

Quake was not a very complex world, but more important (to latency) is that it's a fast-paced game by today's standards (IME). It seems more likely to me that the perception of what an acceptable hand-to-eye latency is has changed, just like the perception of acceptable frame rates. To me, joining an online shooter with >100ms latency (even with prevalent client-side prediction) feels debilitating, especially if other players have lower latencies.

> But OPs point that the assumptions underlying Carmack's redesign have once again shifted seems correct, at least for that specific design.

Yes, I agree that PPP and SLIP are not common any more. High latency unfortunately is, and client side prediction is still an effective mitigation strategy that makes a huge difference to online play. The specific design used in Quake has of course been superseded.


> >200ms latencies are not uncommon

Those numbers match my experience from a long time ago with Quake 3-based games, where you'd pick servers that are local to you and get 10-40ms latencies.

> He designed for <200ms which by the frame of reference he cites (T1 connection) probably was a lot lower than 200ms on average.

I didn't forget the < sign. Designing for <200ms means you can handle the 200ms worst case. But maybe he meant 200ms worst case but much lower average like you are suggesting.

> It seems more likely to me that perception on what an acceptable hand to eye latency is has changed, just like the perception on acceptable frame rates.

Right, makes sense that the expectation today is much higher.

> High latency unfortunately is, and client side prediction is still an effective mitigation strategy that makes a huge difference to online play.

Just out of curiosity, how low does latency need to go for prediction to no longer be useful? I guess if you want players from all over the world to be able to play each other, then you have to deal with 300+ ms of latency. But if you are willing to run servers in each geography, it seems 10-30 ms would be feasible. Would that be enough?


NQ (NetQuake, as the original Quake is sometimes known) 'feels off' at just 10ms. Movement becomes just slightly out of sync with your input.

10-30ms is a bit optimistic, depending on how broadly you define a geography. Many ADSL/VDSL connections, best case, start with 5ms of latency, and often higher (say 20ms) due to interleaving. Cable tends to be around 10ms IIRC, but can suffer from significant jitter, which makes things worse. So for a lot of players the servers would need to be in the same city to achieve that target.


NetQuake feels excellent at <30ms. Even higher is easy to get used to with some practice, because it's extremely predictable due to the lack of CSP. And yes, the solution is to have servers within ~500 miles of players so that a ~20ms average ping is the norm. Most gamers have cable connections, not DSL.

> 10-30ms is a bit optimistic, depending on how large you define a geography.

Just checked my fiber connection and it has 2ms of latency. google.com is 18ms away; I'm in Portugal and seem to be routed to a Google server 400+ km away in Spain. So my memory of playing at 10-20ms to local servers in Portugal is probably accurate. Covering the US or the EU with enough servers that everyone is never more than 30ms away seems feasible: it's within easy reach of a startup with just ~50 VMs across different cloud provider locations, and for popular games it's easy for users to set up regional servers themselves.
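
A rough sanity check on those distances (the ~200,000 km/s figure for light in fiber and the 1.5x path-stretch factor are my assumptions, not numbers from this thread; real last-mile and routing delays come on top):

    # Round-trip propagation estimate to a regional server, assuming
    # light in fiber travels at roughly 2/3 c and paths are ~1.5x longer
    # than the straight line. Purely illustrative numbers.
    FIBER_KM_S = 200_000   # approximate speed of light in fiber
    PATH_STRETCH = 1.5     # assumed routing/path-stretch factor

    def rtt_ms(distance_km: float) -> float:
        one_way_s = distance_km * PATH_STRETCH / FIBER_KM_S
        return 2 * one_way_s * 1000

    for km in (400, 800, 1500):
        print(f"{km:>5} km -> ~{rtt_ms(km):.0f} ms RTT (propagation only)")
    #   400 km -> ~6 ms RTT (propagation only)
    #   800 km -> ~12 ms RTT (propagation only)
    #  1500 km -> ~22 ms RTT (propagation only)

That's at least in the same ballpark as the 18ms to a server 400+ km away, once last-mile latency, peering, and server processing are added on top.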


I get 8ms RTT to servers in my city on a normal cable modem today. People in major cities have nearly LAN-quality latency within ~500 miles.

Packet loss is extremely low, which is easy to tell using a built-in netgraph.

Screen buffering should be reduced as much as possible and vsync is not something competitive players would ever want. Pros all disabled it in Quake Live for sure.

Players on modern connections that are playing on servers in their city do not need CSP.

In practice it's not that bad to have CSP if it's not really doing much (due to all players being low latency), but I know from playing NetQuake vs Quake Live at low latency that it's nicer not to have it at all.


Go listen to actual competitive FPS players, and hear them complain about having 60 ms latency. Listen to them complain about living on one U.S. coast and playing on servers on the opposite one and having 80-100 ms latency. Listen to players in EU who play on NA servers because they don't like players' behavior on EU servers, and they're willing to suffer 150 ms latency. Then visit Australia and listen to players who suffer 300 ms latency to play with people who aren't in the relatively small Australian population for whatever game they're playing.

Then take out CSP from their games and watch what happens.

CSP will always be necessary for any Internet-based game. Only LANs have low enough latency to not need it.


This is why you ping-restrict servers. There's no way to make a game competitive when one player has a 10ms ping and another has a 100ms (10x higher!) ping. It's just bad for everyone.

Which is exactly what competitive Quake Live players do, and I suspect CS players do too. Every player has a 15-40ms ping and it's amazing.


I don't get it: many people want to play a game and there just aren't enough players in their city, and games want as many players as possible to be playing them.

Your requirement that games ping-restrict servers is an absolute non-starter unless the game is already guaranteed to have hundreds of players active and available near any given player at any given time of day, plus evenly distributed, redundant, always-up servers that scale with demand near every player cluster across the planet. I think CSP is a far more realistic way to give a good player experience than magicking up that kind of player-base activity and distribution.


For major games like PUBG and Fortnite this shouldn't be a problem at all. It might mean that it takes 60 seconds of matchmaking to join a new game instead of 6 seconds.

What you get in exchange is a stable and predictable competitive game. I, and many others, experienced endless frustration in PUBG due to players warping around corners and shots that were clear hits missing due to CSP. And I'm not one to complain about this kind of thing unnecessarily. It really sucked.

The PUBG people didn't even geolock their servers at all last time I played, so you could be playing with people with 300ms+ ping which is just miserable.

For less successful games maybe you have to increase the maximum ping from 40-60ms to 80-100ms or something but it should be kept as low as possible. The more players per city, the lower you can make the maximum ping.

I realize that this might make it more difficult for people in Australia (or wherever) to find a game, but it seems even more wrong to make the experience bad for everyone all the time. And at least when they do find a game, it will be a great experience instead of a terrible one.


From what I hear, PUBG just has poor code, so let's leave it out of the discussion and compare well-made games.

Of course, there is an upper limit on how much latency CSP and lag compensation can compensate for, and this varies by game. One game might work well up to 100ms, while another might work well up to 50ms.

> I realize that this might make it more difficult for people in Australia (or wherever) to find a game, but it seems even more wrong to make the experience bad for everyone all the time. And at least when they do find a game, it will be a great experience instead of a terrible one.

Here you're failing to account for an even more important factor: ability. Locking matches to such low relative pings would greatly restrict the number of players who were eligible to play together. That would greatly increase the skill gap between players, leading to a much worse experience.

Consider, e.g. Overwatch, which has all the Overwatch League pros on the west coast, leading to a skill gap between top players on the east and west coasts (which are usually matchmade within their own region). The more you compartmentalize players, the less likely that good players are going to be able to play with other good players, and vice versa, leading to a frustrating experience for everyone. (Not that Overwatch has good matchmaking in general. It's pretty bad, IME.)

Issues like getting hit after you've gotten behind cover are frustrating, but they're going to happen sometimes--it's the nature of online games. But nothing is more frustrating than having teammates who aren't on the same level, or who don't have the same goals. It really makes the game feel like a complete waste of time. Matchmaking is definitely a harder and more serious problem than network latency now.


> It might mean that it takes 60 seconds of matchmaking to join a new game instead of 6 seconds.

Probably more like 600; even semi-popular games using server locking (like Blizzard's Heroes of the Storm) struggle to complete matchmaking in under 10 minutes for 10 players. Matchmaking is a complex and challenging problem. Constraining MM to servers <80ms away and clients that are all within 80ms of said servers, barring players with slower connections from playing the game at all, is a great way to ensure your game will never have PUBG/Fortnite levels of popularity.
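
To make that concrete, here's a toy filter showing why requiring every player in a lobby to be under the ping cap shrinks the usable server set so quickly (all names and numbers are made up for illustration):

    # Toy matchmaking filter: a lobby can only use a server if *every*
    # player's estimated RTT to it is under the cap, so each added player
    # or tightened cap shrinks the set of usable servers.
    PING_CAP_MS = 80

    def usable_servers(lobby_pings: dict[str, dict[str, float]]) -> list[str]:
        """lobby_pings maps server name -> {player name: RTT in ms}."""
        return [
            server
            for server, pings in lobby_pings.items()
            if all(rtt < PING_CAP_MS for rtt in pings.values())
        ]

    # Hypothetical lobby of three players and three regional servers.
    pings = {
        "us-east": {"alice": 25, "bob": 70, "carol": 140},
        "us-west": {"alice": 75, "bob": 20, "carol": 190},
        "eu-west": {"alice": 95, "bob": 130, "carol": 30},
    }
    print(usable_servers(pings))  # -> []  (no server satisfies everyone)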

As a game developer, I'd rather people be able to play the game at all and have it feel playable than have only The Optimal Competitive Experience of Getting Headshots 100% of the Time, after 10-minute queue times, or nothing at all. I would also like people who live anywhere they can afford a computer, an internet connection, and my game to be able to play it, not just people in the continental US and EU metropolitan areas. Vietnam, Thailand, China, and India are huge markets for games even with questionable internet quality.

Thankfully, most game developers and I concur on this, so we all have more games to play. Maybe a compromise would be to allow players to check an option to only match with high-connection-quality servers/players when queuing, but so few players would check this option that anyone who does will have an astronomical queue time compared to the majority who won't.

edit: for the record, I believe most such games running an actual tournament will use ping restrictions (or just run a LAN). Many also allow dedicated hosts to post servers with their own restrictions and constraints on who can join the server. I think that's a fine compromise. CSP shouldn't harm the experience if the latency of all the players is low enough that the prediction is almost instantly overwritten with server-side truth.
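
For anyone unfamiliar with what "overwritten with server-side truth" looks like in code, here's a bare-bones sketch of client-side prediction with reconciliation. It's a generic 1-D movement model with made-up names and constants, not any particular engine's implementation:

    # Minimal client-side prediction with server reconciliation:
    # apply inputs locally right away, remember them, and when an
    # authoritative snapshot arrives, rewind to the server's state and
    # replay the inputs the server hasn't acknowledged yet.
    from dataclasses import dataclass, field

    SPEED = 5.0  # units moved per input (arbitrary)

    @dataclass
    class PredictedClient:
        position: float = 0.0
        next_seq: int = 0
        pending: list[tuple[int, float]] = field(default_factory=list)

        def apply_input(self, direction: float) -> None:
            # Predict locally instead of waiting a full round trip.
            self.position += direction * SPEED
            self.pending.append((self.next_seq, direction))
            self.next_seq += 1

        def on_server_snapshot(self, server_pos: float, last_acked_seq: int) -> None:
            # Snap to the authoritative state, then replay unacknowledged inputs.
            self.position = server_pos
            self.pending = [(s, d) for s, d in self.pending if s > last_acked_seq]
            for _, direction in self.pending:
                self.position += direction * SPEED

    client = PredictedClient()
    client.apply_input(+1)   # seq 0, predicted position 5.0
    client.apply_input(+1)   # seq 1, predicted position 10.0
    client.on_server_snapshot(5.0, last_acked_seq=0)
    print(client.position)   # -> 10.0, prediction matches the server

At low latency the authoritative snapshot arrives almost immediately, so the replayed correction is tiny and invisible; at high latency the corrections grow, which is the warping and shot-behind-cover effect people complain about.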


Maybe you could elaborate on this?

CSP is still relevant as far as I can tell. Yes, connections in a lot of countries are now lower latency on average, and yes, on average CSP might not be required.

However, it's not uncommon to have a couple of high-latency connections on a game server (a player in another country, a roommate running BitTorrent, or just a poor connection). You can't always guarantee low latency, and sometimes connections are interrupted temporarily and packets get lost (assuming UDP).

Short of doing a full sync more often, CSP helps make these unpredictabilities a little more tolerable and transparent for the player.


Sure, this is why you have a <60ms ping restriction or whatever. You simply don't let players play on a server with high latency, because then it's a bad experience for everyone else. This is how competitive Quake Live servers worked.

That's not really a great solution, particularly for less popular games where there may not be many local servers, or servers that are not full, or servers that have at least some other players.

Latency is better than it was so prediction times are shorter, but there's still latency so prediction is still important.

> "We're not on PPP or SLIP connections anymore and yet game devs are still writing netcode as though we are."

Your average multiplayer game today is significantly more complex than Quake. More data needs to pass through, at higher rates. The network is still very much a bottleneck.


Fair point, but bandwidth has also increased from 56 kbit/s to 100,000 kbit/s. CPUs have had similarly massive increases in speed.

A lot of the extra bandwidth usage is probably also due in large part to sloppiness. John Carmack did some impressively smart things in Quake Live to reduce bandwidth usage. I doubt most modern games like PUBG are anywhere near as optimized.
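
As one concrete example of the kind of optimization meant here, Quake-era netcode sent delta-compressed snapshots, i.e. only the fields that changed against a baseline the client acknowledged. A generic sketch of the idea (not id's actual wire format):

    # Generic delta-snapshot idea: send only the fields that changed
    # since the last baseline the client acknowledged.
    def delta(baseline: dict, current: dict) -> dict:
        return {k: v for k, v in current.items() if baseline.get(k) != v}

    baseline = {"x": 100, "y": 50, "health": 75, "weapon": 2}
    current  = {"x": 104, "y": 50, "health": 75, "weapon": 2}
    print(delta(baseline, current))  # -> {'x': 104}, one field instead of four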


It helps that he was his own manager and didn't have anyone else to convince that this was a good idea :-P


