"6. We did not plan for a patch. The version 1.0a patch, even though it was a success, was problematic in that as a company we had not planned for it. THE GENERAL ARGUMENT IS THAT IF YOU KNOW YOU ARE GOING TO NEED TO RELEASE A PATCH, THEN YOU SHOULDN'T BE SHIPPING THE GAME IN THE FIRST PLACE."
Those were the days!
My thinking is to expect patches when making a multiplayer game. There's just no practical way around it.
I gladly accept the occasional 10GB update in return for a game that doesn't crash multiple times a day or week.
Anyway, I don't get how updates are so big these days. The games themselves as well, but I guess that comes down to some lazy programming... but the updates? I really wonder how hard it can be to reliably patch a binary without replacing the whole thing, since apparently the difficulty outweighs the additional bandwidth cost for pretty much every company that does any sort of software patching.
And in some cases this is not only easier, but can also provide major performance benefits. For instance, imagine an update changes something about a local database that requires some expensive process like reindexing. And then another update carries out further changes, and so on. Going straight from 1 to n instead of 1 to 2 to ... to n can be vastly more efficient from the user's perspective.
But if you're seeing huge updates then more often than not it's probably data, not code, though the same story applies there.
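For what it's worth, the core idea of a delta update is simple. Here's a toy sketch in Python of a naive fixed-size-block diff (helper names made up for illustration; real tools like bsdiff do far better by also handling insertions and shifted offsets):

```python
# Naive block-level binary patching: ship only the blocks that changed,
# plus their offsets, instead of the whole new file.
BLOCK = 4096

def make_patch(old: bytes, new: bytes, block: int = BLOCK):
    """Return (new_length, [(offset, data)]) covering blocks that differ."""
    changes = []
    for off in range(0, len(new), block):
        new_blk = new[off:off + block]
        if old[off:off + block] != new_blk:
            changes.append((off, new_blk))
    return len(new), changes

def apply_patch(old: bytes, patch):
    """Rebuild the new file from the old file plus the changed blocks."""
    new_len, changes = patch
    out = bytearray(old[:new_len].ljust(new_len, b"\x00"))
    for off, data in changes:
        out[off:off + len(data)] = data
    return bytes(out)
```

If a 10GB game only touched a few hundred MB of blocks, the patch is only those blocks. The catch, and likely part of why companies skip it, is that a one-byte insertion shifts every block after it, which is exactly what the smarter diff algorithms exist to handle.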
If you own the HD version, there is a patch to freely convert it to Voobly.
It turns out the game is synchronized by each player having the commands for each 200ms "turn" a couple of turns in advance, and then playing back the actions so that the same thing happens on all players' machines. That includes sending random seeds around. And then there's a load of provisions for lost packets, slow machines and so forth.
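The deterministic-lockstep scheduling described above can be sketched in a few lines (illustrative Python, class and method names hypothetical): commands issued on turn t are scheduled to execute on turn t + 2 on every machine, so the network has two turns' worth of time to deliver them before anyone needs them.

```python
from collections import defaultdict

TURN_MS = 200      # length of one simulation "turn"
COMMAND_DELAY = 2  # commands execute a couple of turns after being issued

class LockstepClient:
    """Toy lockstep scheduler: every client queues the same commands for
    the same future turn, so identical simulations stay in sync."""
    def __init__(self):
        self.turn = 0
        self.scheduled = defaultdict(list)  # turn number -> [commands]

    def issue(self, command):
        # In a real game this is also broadcast to every other player;
        # the delay of COMMAND_DELAY turns is what masks network latency.
        self.scheduled[self.turn + COMMAND_DELAY].append(command)

    def receive(self, turn, command):
        # A command arriving from another player, tagged with its turn.
        self.scheduled[turn].append(command)

    def end_turn(self):
        # Execute everything scheduled for this turn, identically everywhere.
        executed = self.scheduled.pop(self.turn, [])
        self.turn += 1
        return executed
```

As long as every client executes the same commands on the same turn in the same order, the simulations never need to exchange actual game state, only the tiny command stream, which is how AoE fit "1500 archers" onto a 28.8 modem.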
Is this how most games do it? I would think something like WoW couldn't do this, and indeed sometimes I'd see glitches where a character would blink (like the spell) to somewhere new.
Yes, you have to be more-or-less real time, so you must compensate for latency, unreliable/slow connections, jitter, etc. If you're used to the web and the request/response model, you have to throw all of that out the window. The 200ms delay "hack" is pretty much standard practice; the window differs from game to game (smaller in FPSes), but it's usually there.
Most games use UDP, since transmission of any single packet doesn't have to be reliable, and in case some packets are lost, it's cheaper to re-calculate the state diff and resend one slightly larger packet instead of two (or more) standard-size packets. Sometimes this can result in a "blink".
With sending around seeds and other "secret" data, you have to make a trade-off, since sending too much allows for cheating (map hacks, wall hacks, etc), but sending too little will create unpleasant surprises (enemy "teleporting" from around the corner).
Also, it's often cheaper to run most of the calculations on the client (even critical stuff like hit tests, damage calculation, etc.), and only occasionally verify the results on the server, especially in MMOs. Clients that look suspicious get verified more often, and eventually get penalized / kicked.
Source: never actually wrote a networked game, but love reading about this stuff.
Got any favourite sources where I can learn more? It sounds pretty interesting!
Some interesting case studies are anything by Id Software (Quake etc), and Lineage (that's mostly tales of a friend who is a hardcore player and a developer; he'd have the relevant source code open in a separate window while playing).
Unreal Networking Architecture: http://unreal.epicgames.com/Network.htm
Creating a simple multiplayer example: https://unity3d.com/es/learn/tutorials/s/multiplayer-network...
They had another interesting wrinkle to make it handle huge populations. Having enough players in the same region would trigger “time dilation” and slow down the simulation. In a big fight (thousands of players), it could take 10 real minutes for one minute of game to pass. It made big battles a slog, but at least they were possible.
If you're curious, this is an actively maintained implementation of a WoW server.
In our implementation, the player runs ahead on the client (client-autonomous) but is server-verified (actions are replayed on the server). The new authoritative server position for the player is sent back to the client. When the response to a movement comes back, the client resynchronizes to the point in time of that movement against the server's objects, then transparently replays whatever movements the player has made since; both the client and the server maintain a short queue of movement history for each moving entity. Thus, if you ran into something on the server but it didn't obstruct your movement much, you would tend to blip much less. The physics framerate was very low compared to the graphics framerate, and updates received on the client would degrade with distance, throttled by the server based on area-of-interest management. Position updates represented most of the game's bandwidth. Everything is UDP-based, with different forms of reliability options layered on top.
NPCs were "server authoritative" and their actions were replayed on the client. Interpenetrations are resolved via rigid-body physics resolution on the client if something blips, but the server is ultimately the source of truth (nothing can interpenetrate on the server), so if a rigid-body resolution on the client doesn't resolve some condition, the eventual resynchronization of the player position from the server would fix it at some point.
It worked out pretty well most of the time; certainly you can construct many scenarios where it goes awfully bad from the perspective of a client (on the server everything is always fine), but we preferred the illusion of immediate feedback/low latency versus this queuing up everything to take place N milliseconds in the future, and we didn't need exact reproducibility between clients, just eventual (and hopefully pretty quick) consistency.
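The replay-the-pending-inputs part of this scheme can be sketched like so (toy Python with hypothetical names, movement reduced to a single number): the client applies inputs immediately, keeps the ones the server hasn't acknowledged, and replays them on top of every authoritative correction.

```python
class PredictingClient:
    """Toy client-side prediction: apply inputs immediately, keep the
    unacknowledged ones, and replay them on each server correction."""
    def __init__(self):
        self.position = 0
        self.seq = 0
        self.pending = []  # inputs the server hasn't confirmed yet

    def move(self, dx):
        """Predict locally right away; the input would also go to the server."""
        self.seq += 1
        self.pending.append((self.seq, dx))
        self.position += dx          # no waiting for the round trip
        return (self.seq, dx)

    def on_server_state(self, last_acked_seq, server_position):
        """Authoritative update: rewind to the server's position, then
        replay everything the server hasn't seen yet."""
        self.pending = [(s, dx) for s, dx in self.pending if s > last_acked_seq]
        self.position = server_position
        for _, dx in self.pending:
            self.position += dx
```

When the server agrees with the prediction, the replay lands you exactly where you already were and nothing visibly changes; when it disagrees (you hit something server-side), you snap to the corrected position with your recent inputs still applied, which is the "blip" being described.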
Many other games, such as driving or shooting games and anything with physics simulation, require a higher update frequency.
They instead use prediction and reconciliation, and aim for the highest update frequency they can.
Counter-Strike has servers at 128 Hz, iirc.
Here’s some pro/cons of the approach http://www.gabrielgambetta.com/client-side-prediction-server...
edit: try the live demo too http://www.gabrielgambetta.com/client-side-prediction-live-d...
Similar approach, with some extra fanciness (you always see the input at a fixed delay, and roll back if necessary to handle mispredictions).
Can anyone grok this? I can't see why this (each player's simulation making the same number of calls to random) would ever not be the case if all players are running the same patch of the game and are executing the same commands.
For example, maybe in a FPS, part of the non-gameplay-critical graphics use particle generators for a cool effect that not all players see (because it's behind a building for some of them and thus doesn't even need to be rendered); if these generators used a synchronized RNG, then all players would have to do computations for every particle effect happening anywhere, just so that the combat and more game-important RNG values would be in synch when they really need to be.
If you're using a global random pool, you've desynced here unless all players have the same thing on screen.
Once it comes time to simulate the next turn, if you have something different than other clients because of a missed update or graphics lag, then even if object positions and the random seed are going to be "fixed" by another turn update, all future interactions with any objects that over- or under-sampled random will be wrong and could create further sync problems.
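The usual fix is to keep separate RNG streams: one seeded identically on every machine and touched only by simulation code, and one purely local for cosmetics that each client is free to sample a different number of times. A sketch using Python's `random.Random` (function names hypothetical):

```python
import random

# Gameplay RNG: seeded identically on every machine at game start (the
# seed is part of the synchronized setup), consumed ONLY by simulation
# code, in the same order everywhere.
game_rng = random.Random(42)

# Cosmetic RNG: seeded locally, free to be sampled a different number of
# times on each machine (particles, screen shake, UI flourishes).
fx_rng = random.Random()

def roll_damage(low, high):
    # Every client calls this in the same order -> identical results.
    return game_rng.randint(low, high)

def particle_jitter():
    # Only clients that actually render the effect call this; since it
    # never touches game_rng, there is no desync risk.
    return fx_rng.uniform(-1.0, 1.0)
```

The discipline that has to be enforced is that nothing outside the simulation ever touches `game_rng`; one stray cosmetic call on one machine and every subsequent gameplay roll diverges.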
If you apply the compatibility patch, you can load your Steam copy onto Voobly for free and play with next to no lag. The difference is night and day, plus Voobly has better features.
I wonder if the rise of competitive RTS has changed this guideline. In SC2 people will start to complain if their ping to the server is more than about 130ms, and anything above 150ms starts to become painfully noticeable.
In Brood War, being able to play on "LAN Latency" was always a huge deal, to the point that unauthorized third-party ladder systems enabled players to play at "LAN Latency" even when official battle.net didn't support it.
This is an important realization. Our brain can perceive reasonably fast actions, but our reaction is much slower. Under good conditions we can easily tell a 60fps animation from a 30fps animation from a 10fps slideshow, but the fastest reaction time we can manage is around 100ms (the time one frame is visible at 10fps).
We are reasonably tolerant of latency because our brain has quite high latency itself, and all our actions have to account for that (for example, the point in time where you decide to release a ball you are throwing is very different from the point where it is actually released). On top of that, many real-world interactions behave similarly to latency (e.g. springs). What throws us off is inconsistent latency, because then we are suddenly unable to predict when to perform an action in order to have the effect at the desired point in time.
The 250ms pure read-and-react time deals with arbitrary events, but when we can chunk reactions into a practiced technique our precision goes way up, to nearly the individual millisecond: thus musicians can play rapid passages with unusual rhythms in time if they have time to plan and prepare, but they lose this ability when dealing with unusually high latency (extreme reverb, an amp across the stage, digital audio with huge buffer sizes). The technique, after all, is based on fast confirming feedback that your execution is correct.
And like you say, "bouncy" latency is even more disruptive. We can adjust to a small and consistent lag, but inconsistency will degrade any level of skill.
> And like you say, "bouncy" latency is even more disruptive
The technical term for this is "jitter". Networking and telecoms people pay a lot of attention to this metric, both for the reasons you cite and because jitter is much more noticeable than high-but-constant latency in voice or video communications.
I guess I could have specified networked over the Internet...
The word "networked" is instead used to distinguish from "non-networked" gaming, which involves playing on a single PC. This could be either a single-player game like the Starcraft campaign, a turn-based hotseat game like Civilization hotseat, or a simultaneous shared-keyboard game like Achtung, die Kurve!
A) Build a navy!
B) Stop building a navy!
A) Build a navy!
B) Stop building a navy!
Here's a decent primer on it. In general, chips implement floating-point math differently, and you could google terms like "floating point register" and "floating point stack" to get started. Most importantly, floating-point math is not generally associative due to precision/rounding.
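To see it concretely (Python, IEEE 754 doubles): addition is still commutative, but it isn't associative, so two clients accumulating the same numbers in a different order can disagree in the last bits, and in a lockstep game that's a desync.

```python
# IEEE 754 addition is commutative (a + b == b + a) but NOT associative:
# the order in which you accumulate values changes the rounding, and
# therefore the result.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False

# Two lockstep clients summing the same forces in a different order will
# silently diverge; accumulated over thousands of ticks, that one-ulp
# difference turns into visibly different game states.
```

This is one reason lockstep RTS engines historically used fixed-point math for the simulation, or pinned the FPU to one rounding mode and a strict evaluation order on every platform.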
This game is amazing and has aged so well. The only downside is the network pauses and out-of-sync errors that inevitably happen 40 minutes into a two-player game.
I can't remember the last time I didn't set the population limit to 75 in an effort to alleviate network issues.