Never trust the client (gafferongames.com)
360 points by netinstructions on Apr 27, 2016 | 152 comments

"Never trust the client" is good advice for every kind of application. Clients are filthy liars.

Clients have bugs. Old clients don't upgrade. Packets arrive out of order. ACKs never make it to clients, causing clients to repeat what the client believes to be a failed operation. All of this is before even considering an attacker actively trying to subvert you.

The server should always be the source of truth. It should always enforce all the consistency rules. Database schemas can be a critical tool here as well.

Don't blindly slap whatever the client sends into a no-SQL database, then vomit it back out for queries.

> The server should always be the source of truth. It should always enforce all the consistency rules. Database schemas can be a critical tool here as well.

Amen. And I'm a client developer. Clients aren't for data validation, etc. That's the server's job because the server has to store it, use it, ship it off to other things.

Clients are for displaying all that data in a way a human can understand. It has people skills, damn it!

The server is the model and the controller. The client is the view, and marshals input to the controller. Doing it any other way is bananas.

I saw a talk where a guy was showing how to cheat at some games. A lot of games will detect and crash you out if they detect a debugger attached to them.

Things like Punk Buster et al. are more like anti-virus software: they try to find the signatures of known cheat clients.

So it's best to write your own... and rename your debugging process to "Google Chrome." He got pretty far with his auto-walker in some MMOs. He had limiters so his runs wouldn't be too fast and would go up/downhill correctly... or so he thought. He forgot to multiply by -1 somewhere and the server banned him for attempting to fly off into space.

More modern anti-cheat software seems to be trending toward kernel space[1], blocking user-mode access. So it's not just signature checking, but some generic attack prevention as well. (Cheats also move to kernel space, of course.)

Wonder how soon there will be a market of PCI-e cheat rigs for DMA attacks...


[1] Yep, gamers are literally accepting rootkits that allow arbitrary remote code execution on their machines.

> Yep, gamers are literally accepting rootkits that allow arbitrary remote code execution on their machines.

That's certainly risky and rather horrible, but what's the alternative? Multiplayer games are a huge business and a huge culture, and all that goes down the drain if people cheat.

Game companies have millions of people around the world competing for prestige and money. They're running game software on their own personal computers--machines that are also full of their financial records, naked pictures and personal correspondence. If your game is full of cheaters, you're done. If your game gets players hacked, you're really done.

I have no idea what the solution is, and I'm really glad it's not my job to figure it out.

The solution is simple: do everything server side. Look at World of Tanks for a great example of this. They pretty much let people run free with mods in multiplayer and even the worst of those mods can barely provide an advantage in online play. They do this by managing absolutely everything server side. Ammo, direction of aim, position, enemy players appearing and disappearing etc. Basically your server should be built to send only what's absolutely necessary and your client should be built to use absolutely everything so cheats can't provide much advantage rather than vice versa.
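As a rough illustration of that server-authoritative split, here's a minimal sketch (all names and limits are made up, not taken from any actual game): the client sends only its movement intent, and the server decides the outcome.

```python
import math

MAX_SPEED = 6.0  # metres per second, enforced server-side (example value)

class PlayerState:
    def __init__(self, x=0.0, y=0.0):
        self.x = x
        self.y = y

def apply_input(state, dx, dy, dt):
    """Clamp the client's requested movement to the server's rules.

    The client sends an *intent* (dx, dy); the server computes the
    resulting position and remains the only source of truth.
    """
    dist = math.hypot(dx, dy)
    max_dist = MAX_SPEED * dt
    if dist > max_dist:              # client asked for too much: clamp it
        scale = max_dist / dist
        dx, dy = dx * scale, dy * scale
    state.x += dx
    state.y += dy
    return state
```

A hacked client can request any movement it likes; the server only ever grants what the rules allow, so the cheat buys nothing.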

An even better solution: let the competitive advantages be given only by doing things the computer is bad at and human beings enjoy doing.

If you get a competitive advantage from doing something like grinding, which computers can do better and more tirelessly than humans, and humans tend to dislike, then inevitably you will have cheating. (e.g. WoW)

If on the other hand the competitive advantage is from working out a strategy of in what order to play your cards to maximise your chances of building up your strength while attacking the opponent and yet keeping something in store for surprises the opponent may spring on you - a really complex strategic task - then that's something people enjoy doing and the computer sucks at. (Hearthstone)

People may be addicted to the first kind of games, but is that something that should be protected? It's kind of similar to people addicted to slot machines... sad, a reality we have to accept, but not something we should expend vast amounts of intelligence trying to sustain.

Of course, there are people who will do it for the money, and I'm not saying stop them, but realise that they are basically just a few steps short of people refining formulae for meth to make it more addictive - the fact there's a business case that exploits a human weakness doesn't mean it's a productive use of intelligent people's time.

There are some things computers are really good at that human beings also enjoy doing. Computers are insanely fast, yet plenty of reflex-based games exist. A lot of high-level competitive games have a dexterity dimension. FPS games, for example, are a mix of strategy and dexterity, with a big emphasis on dexterity. RTS games like Starcraft put a bigger emphasis on strategy but still require insane finger dexterity at the highest levels. The dexterity part of the game could be done far better by a simple AI. Aim bots in FPS or inhuman micro-management in RTS have been demonstrated, yet those games are still played, and interesting.

One of the examples you gave, Hearthstone, actually had many bots running on the ladder for a time. Their goal was grinding, but even so, a number of them were better than a significant portion of the human playerbase. With the recent advances in artificial intelligence, computers are not that bad at this kind of task.

Overall, I don't disagree with your main point though. I have no qualms automating boring and repetitive tasks in non-competitive games. And if a competitive game has those parts, I'd probably avoid playing it. I'd rather spend my time coding than wasting it grinding because it's part of some game I otherwise enjoy. I feel it is indeed a more productive use of my time. I wouldn't go as far as judging other people who like the grind though.

Doing everything server-side is not enough to prevent cheating. There are plenty of client-side tricks that can give the player an advantage, starting from simple visual cheats.

E.g. turn the opponent's textures bright purple so you can see them better. Make the walls transparent. Aimbots that spoof mouse input.

The server must be authoritative or all bets are off, but you still do need client side anti-cheat countermeasures (stuff like verifying screenshots).

This stuff can't be made 100% foolproof, so competitive games where big bucks are at stake must still take place with the organizers providing the hardware and the players located in the same room.

You're right - you can never prevent that. Which means your client needs to offer it to keep it from becoming an advantage. If the players are already bright, recognizable colours you're basically already there. Transparent walls don't help when the server doesn't send you the enemies hiding behind them until the very instant they pop out. And if you're only spoofing mouse movement, well, that is getting pretty high risk; you may have a slight advantage, but human players will still be able to kill you without too much effort. And I think that's the main thing anticheat should be preventing.

Even all that still doesn't stop a very old type of cheat: aim bots. Even if the server only sends you information about enemies the instant they pop out, a computer will be faster and more precise at aiming and shooting at this enemy than any human player could be. Add in some randomization and delays so the server can't easily differentiate the movements from a human player, and you're a mechanically world-class player. Doing everything server-side is not enough to prevent cheating.

Sure, but aimbots aren't as much of a problem if they have to aim like humans - which you can enforce. You can buy some advantage like this, yes, but not much if it's implemented correctly. Additionally if you can't detect a player behind you for example, you're still dead, doesn't matter if you aimbot or not. "World class player" perhaps, but only if your bot can really play as well as a human within the confines of the game - knowing which corners to check, knowing when to look behind you, where to be aimed at a given time, etc. Your bot needs to be a hell of a lot smarter than a bot that's just given all the data and has to do basic trigonometry to aim at their head - your bot has to think a lot more like a human and may even underperform many humans.

Ultimately no matter what you do, some small percentage of your players will cheat. Every major game on the market has paid cheats which fully bypass the protections available for it. It's better if those cheats can only buy them a little than buy them a lot. It's not worth investing a lot of effort in more than basic cheat protection, it is worth investing it in handling as much as you can server side and exposing as much as you can client side as that is something cheaters can't game.

An aim bot doesn't have to play all the game for you, it doesn't need to think. I don't know how it is currently as I haven't played FPS games in a while, but back in the day, there were aim bots that would just activate when a target is in sight and take over, aim and shoot.

Yeah, modern aimbots are a lot more advanced than that. Mostly in that they'll know when someone's anywhere visible to you and they'll aim straight at their head.

This sort of behavior becomes impossible. If they have to aim in a fairly human manner, they can still be killed by anyone walking up behind them for example. It may provide some bit of unfair advantage. But not nearly as much as is possible in many games today.

World of Tanks is far less latency-sensitive than an FPS, right?

Honestly, I immediately smell naivete BS when someone says "the solution is simple"...

It might be naive, but then again I don't think games development has ever been easy. I'm constantly amazed by the skills of games devs, they just seem to surpass anything done for business applications.

If a company wants to attract competitive gamers, then they have to develop games you can't cheat in during competitions. Right now, the most reliable way of doing so is server side, with all the challenges that entails. Otherwise, they have to install what essentially amounts to rootkits - which isn't exactly what most gamers want to encourage.

Yeah and modern cheat software is doing the same. A friend of mine recently used a Battlefield 4 cheat which required a boot loader entry so it could bypass kernel patch protection and get hooks in which the anticheat couldn't detect.

How does the server know you are running the anti-cheat software, and not your own fake?

(Alternatively, run game in VM? I guess performance would suffer too much with that approach?)

Running the game in a VM is actually completely possible these days by using VT-d to pass the GPU through.

I'm really interested in game hacking. Do you have a link for the video/talk?

I used to visit Game Deception, the best game hacking technical forum. Unfortunately it closed.

I lost interest in game RE a long time ago, but http://www.unknowncheats.me/ seems to be its successor, though from a cursory glance I see there are a lot more cheats and less technical information.

GD we miss you.

I used to frequent http://aimbots.net/ when I was younger. It's a forum dedicated to mostly open source cheats, so there is a whole discussion thread with sources and RE strategies.

Don't forget the incident where Punk Buster banned anyone with a certain string in memory...

I remember it unfortunately coincided with an exploit string that caused one of the AV packages to drop connections.

Made gaming IRC channels utter chaos for weeks.

I would love to play around with "botting" in some games, but I just don't even know where to begin. I tried to create a super-naive autoplayer years ago in C# by just sending keyboard keys into games, but most games didn't even receive the inputs.

If you're ready to throw a bit of money at it (and you're playing on Windows) I'd recommend having a look at Innerspace from Lavishsoftware.

I used that to bot on WoW back in the day; the tool was fabulous and can be adapted to probably any game. Knowledge of C++ is required to create an extension; after that you can code your bot in C# or LavishScript (their own scripting language, quite powerful).

Yeah, I'm not really willing to put up with monthly costs for this, but thanks for the recommendation.

> "Never trust the client" is good advice for every kind of application. Clients are filthy liars.

It's true that you can't trust clients not to lie. However, in some situations, it's the correct decision to accept a constrained number of lies if that brings you benefits.

Like so many other absolute statements in software, in reality it's a trade off, and you need to consider the business context to make the right decision.

Not every.

Right now I'm working on an app where we trust the client. We know that a hacker can compromise a client and send any kind of bogus data, we're just completely OK with it from business perspective.

Can you give any more insight into this product? Curious what kind of product would have that be ok from a business perspective...

Eye training app with the server creating parameters of the next training session (based on the whole training history and some data analysis magic). If you want to get a training session that doesn't fit your real results — go ahead, we don't care.

What I'm trying to figure out isn't the business option where it's ok, but the business where you actively are avoiding doing it.

I mean why not just do it the right way?

You can handle orders of magnitude more users with the same hardware if you don't have to run the game simulation on the server.

Closed network with locked down terminals?

With caveats.

What you're also implying is that clients should do no processing - because clients can't be trusted to do anything. If you apply this logic to web-browsing, you're asking the server to send the client a bitmap image of what the rendered page should look like.

I think that is misinterpreting what the person you are replying to is saying.

They aren't saying "don't let the clients do anything", they are saying "don't trust the values the clients send back"

If the client messes up the rendering, the only person affected is the client. There is no 'trust' required, because you are not relying on anything the client has done.

The idea of trust only comes into play if there is a consequence to that trust being broken.

Exactly. The nuance of every discussion is critical.

When somebody says "Never trust the client" inevitably somebody will find a use case that sounds like trust even though it's not what they're saying here and use that as justification.

The lack of nuance is how things like "never use stored procedures" gain traction when they shouldn't.

> If the client messes up the rendering, the only person affected is the client.

No, just google "wallhack".

Arguably the client was trusted with information that was not needed.

Alas, performance trade-offs might make this mandatory.

Well, a bank website would want to use this to make sure nobody MITMs them, or that the client does not have malware, etc - which would affect the user.

MITM isn't really relevant for the purposes of this discussion. We're talking about a different application with different threat models.

nah, a better example involving a bank would be: you have two accounts. the client can do transactions.

the server sends to the client: account A has $100, account B has $200, make your transaction.

1. trusted client

    client says: i've done my transaction, now account A has $50 and account B has $250.

    the server updates the database accordingly.
2. never trust the client

    client says: i want to do a transaction from account A to account B with $50.

    the server decreases account A by $50 and increases account B by $50 and updates the database accordingly.
3. the difference is, with the trusted client model, the client could be hacked and respond with:

    client says: i've done my transaction, now account A has $52 and account B has $252.

    the server updates the database accordingly.

congratulations, you - as the bank operator - have now lost $4 to the hacker. and this is practically what happens in The Division (except that the other players have to pay for it instead of the bank/Ubisoft).
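The untrusted-client model in option 2 can be sketched in a few lines (account names and amounts are just the ones from the example above; this is illustrative, not banking code):

```python
# The client names a transfer; the server computes the resulting
# balances itself and never accepts client-asserted totals.
accounts = {"A": 100, "B": 200}

def transfer(src, dst, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if accounts[src] < amount:
        raise ValueError("insufficient funds")
    accounts[src] -= amount
    accounts[dst] += amount

transfer("A", "B", 50)
# accounts is now {"A": 50, "B": 250} -- the client never got to
# assert final balances, so it cannot claim A=$52, B=$252.
```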

they try to do server checks to prevent this, but due to network timing problems they can't check exact values, only estimate whether they're approximately right.

so a player's maximum running speed is, say, 6 m/s. they could do periodical checks (every second) whether the player moved more than 6 m since the last check. but what if the player falls down a building or is thrown further than that by an explosion? it'd register as cheating, even though it's completely legal to jump off a building. so they have to multiply the distance-per-second threshold by 3 to allow those situations (i.e. 18 m/s). a cheater with a modified client can now continually run three times as fast as any other player without getting flagged as a cheater.
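The check described above might look something like this sketch (the 6 m/s speed and 3x tolerance are the commenter's example numbers; the function name is made up):

```python
MAX_SPEED = 6.0       # legitimate top running speed, m/s (example value)
TOLERANCE = 3.0       # headroom for falls and explosion knockback
CHECK_INTERVAL = 1.0  # seconds between server checks

def looks_like_speed_hack(distance_moved):
    # Only movement beyond 18 m/s is flagged; a hacked client running
    # at a constant 17 m/s sails straight through this check.
    return distance_moved / CHECK_INTERVAL > MAX_SPEED * TOLERANCE
```

The gap between the legitimate limit and the flagging threshold is exactly the free advantage the cheater gets.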

why do they do this? because it's

a) easier to implement (the client has to do all the computations anyway - doing them a second time on the server and then sending back to the client isn't an easy problem - you have to interpolate) and

b) cheaper (less work = less man hours, less complicated work = cheaper programmers* and less server power needed)

* i'm completely sure the devs would have been 100% able to do it right, but it would have taken them longer, thus costing more.

so, how to fix it? there's only one way: rewrite the model. but this is expensive and the titles have already been sold, so why bother investing another couple hundred thousand bucks? even the disgruntled gamers will buy the next Ubisoft title if it's shiny enough. i mean, few people will buy the game now that multiplayer is infested with cheaters, but they already exploited enough of the market, and long-time players are probably less likely to buy the next title if they're still playing an old one. maybe.

Not at all. It's fine for the client to handle processing when the only thing they could hurt is the user of the client. If the client renders a web page incorrectly, only the user of the client gets hurt by that. But the server API must not allow a malicious client to do damage that affects other clients.

Your example is about the client misreporting server state to the end user.

The big deal is instead about a badly behaved client being able to make "illegal" changes to the server state.

Since this is a multi-player game this is obviously a big problem but I've always wondered how you'd handle this in a single player game with a leaderboard.

Let's say you have a Bejeweled clone/match-3 game and a global high score board. What can you do to prevent fake scores? You can obviously obfuscate things, sign requests, etc but at some point the client needs to sign the score it's sending up so the client has the key.

You could use this model of recording inputs and playing them back on the server but that seems like it would be a ton of work if your game is popular and an extraordinary cost for keeping a simple score board clean.

Do you try and rely on subtle math tricks that mean that certain score numbers are de-facto invalid because no combination of scores could end up with that as a final total?

Is there any way to handle the issue other than deleting scores that seem likely fake (large round numbers, for example, or those orders and orders of magnitude higher than other scores)?

If you have a deterministic game model (RNG seeded properly, pure functions, etc.) you can actually just send the user inputs (which are very small compared to the whole game state) and replay the whole game to verify.
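A toy sketch of that replay-verification idea, assuming a deterministic game whose only randomness comes from a seeded RNG (the scoring rule here is invented for illustration, not Bejeweled's):

```python
import random

def play(seed, moves):
    """Replay a game deterministically from a seed and a move list."""
    rng = random.Random(seed)     # seeded RNG => identical board everywhere
    score = 0
    for move in moves:
        gem = rng.randint(1, 5)   # stand-in for the next spawned gem
        if move == gem:           # toy scoring rule
            score += 10
    return score

# The client uploads (seed, moves, claimed_score); the server replays
# the same inputs and accepts only if the scores match.
seed, moves = 42, [3, 1, 4, 1, 5]
claimed_score = play(seed, moves)          # an honest client's claim
assert claimed_score == play(seed, moves)  # replay matches => accepted
```

A tampered score fails the replay, because the server recomputes it from the inputs rather than trusting the number.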

This is the common model in RTS games called Lock Step since all clients are advancing a number of frames based on all inputs from other clients in "lock step".

It's also very common way to implement replay and debugging for reproducing rare network bugs.

Using the Bejeweled example of a single-player game, and assuming it's properly deterministic, could a malicious actor not just falsify the steps taken as well as the score? It would be more difficult, but not that much more difficult considering the challenge is the time limit and lack of undo. Both would be removed if you were simulating steps.

This is entirely possible, and it's why things like Punk Buster, etc. exist. For a lot of single-player games it'd be hard to verify that an actual human physically executed the plays vs. a bot. I'm OK with accepting a high score from someone who went through that level of effort to falsify replays to get on a leaderboard. If a puzzle game is truly interesting, that kind of cheating isn't necessarily optimal anyway.

They could, but they can't falsify the game seed - the server-determined order for which gems spawn when. This way I can send actions (whether real or hacked) that are executed on the server against the "real game". If the client's move did not result in a score then no score is recorded, even if the client thinks it should.

But it would mean a bot could get the highest possible score for that seed by going back in time and trying again.

If you don't care about overkill, you could employ machine learning and anomaly detection: send the input sequences, timing and other difficult-to-fake data (e.g. gyroscope state) to the server and learn the "typical" distribution of those values. A perfect game played by a robot will likely be an outlier in a lot of those values.

(Note that some legit games will also look suspicious, so you'd want to use this to find out which players you should keep an eye on, not as a tool for auto-banning.)
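A toy version of that outlier idea, using timing jitter as the hard-to-fake signal (the numbers, the z-score threshold, and the choice of signal are all illustrative assumptions):

```python
import statistics

def is_suspicious(sample_jitter, population_jitters, z_threshold=3.0):
    """Flag a game whose input-timing jitter is far outside the norm."""
    mean = statistics.mean(population_jitters)
    stdev = statistics.stdev(population_jitters)
    z = abs(sample_jitter - mean) / stdev
    return z > z_threshold  # flag for review, not for auto-banning

# Typical human inter-input jitter in milliseconds (invented data):
human_jitters = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 44.0]
is_suspicious(1.0, human_jitters)   # a bot's near-zero jitter stands out
```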

If I was a game developer and a person developed a bot to play my game perfectly, I'd let them have the high score.

But that doesn't happen, that person distributes that bot and suddenly you have a huge issue where all the high scores are bots which frustrates real users.

Yeah, that's assuming you know what the RNG algo is.

There's a sibling comment below that mentions this. Most RTS games checksum the game state and raise an out-of-sync error when the checksums don't match. If you generated different random values you'd definitely hit this.

Always sucked when it happens about 1 hour into a game of Homeworld.

This works great for proper RTS games because the steps are usually pretty lengthy (in game terms, like around 200ms) and the way the game genre typically works means relatively little prediction or complicated simulation design are needed to make even huge steps feel responsive.

Recording and playing back input events, at least, shouldn't be that expensive unless you're emulating the entire stack up to the application. If the logic is written in some portable manner (Xamarin [1], etc.) you could just re-execute on the server with the same logic as the client. It requires a fair amount of discipline to keep everything deterministic, too. The real difficulty would be identifying the AI players from the humans; you'd basically have to recreate reCAPTCHA's humanity checkbox, and that's still not a guarantee.

Or, identify that your business need is just to provide 'some' sort of ranking and compare the user against their Facebook/G+/Twitter friends, who are probably less likely to cheat.

[1]: https://www.xamarin.com/

Your second point is the important one for most games, I think. Global leaderboards on a popular game are meaningless for the vast majority of players (do you care that you are in position 324,675?) and almost all of the implementations I've seen have been hacked. As this thread has discussed, it's hard to prevent false data from getting into the leaderboard.

Limiting leaderboards to friends both gives relevant context for fun competition and mostly eliminates the cheating problem by making it a social issue. I know my friends wouldn't be happy if we were all competing on a game and I cheated.

Ah shoot, you beat me to it by 5 minutes :). Yeah, that's how you do it in practice.

Gah, sorry about that :/ fwiw, it looks like your explanation has more upvotes, so it worked out alright! :)

Nah, not an issue at all :).

FWIW it's a fun domain space, partially these cool technical approaches (lock-step, dead reckoning) and partially smart user interactions (how you decide to give users immediate feedback when you won't actually know until 100-800ms later).

This is actually impossible without hardware encryption. Apple should have no problem with this since they have control over their hardware, but they don't do anything to prevent fake scores from showing up in their leaderboards. Xbox is a great example of being able to trust the client. The console tells the server that it has unlocked an achievement by using a signed request which was signed by the secure chip. The same chip verifies that the games you are playing are legitimate, etc. It's a very important piece of hardware and it would likely be the Xbox's demise if that secure chip suddenly became insecure.

Running the game on the server is the only option as far as I know for securing the integrity of the data. You could add a private key to your app, but then you're shipping the private key to the client.

> Apple should have no problem with this since they have control over their hardware, but they don't do anything to prevent fake scores from showing up in their leaderboards.

Apple has leaderboards? For what?

Apple's "Game Center" (https://en.wikipedia.org/wiki/Game_Center) provides integrated cross-game leaderboards and achievements/badges.

Not even such a secure chip is secure. People can take apart the hardware security modules in credit card scanners and scam the credit card networks out of millions (a how-to was shown at 32C3).

Your Xbox hardware is even less secure.

We had to deal with this issue with a website that hosted 3rd party games, so we were restricted in our options. Since we didn't make the games ourselves things like server side playback wouldn't work.

For us, there was no foolproof method. So our main goal was to ensure that the leaderboard wasn't full of obvious cheating.

First, to reduce the volume of cheating, we obfuscated things:

1) We encrypted all communication.

2) We required a single use submit token, generated by the server, with each game play.

We also manually reviewed scores. In particular, the top 10 scores of each game.

Beyond that, since we required a single use submit token, it means that we knew how long the player took to obtain their score. For most games, the higher your score, the longer you played. So we flagged any scores with an out of whack score/play time ratio for further review before showing the score to anyone else.
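The single-use-token plus score/time heuristic might be sketched like this (function names, the in-memory token store, and the 100-points-per-second limit are all illustrative; a real system would persist tokens server-side):

```python
import secrets
import time

tokens = {}  # token -> issue timestamp (in-memory stand-in for a DB)

def issue_token():
    """Generate a single-use submit token when a game play starts."""
    token = secrets.token_hex(16)
    tokens[token] = time.time()
    return token

def submit_score(token, score, max_score_per_second=100):
    issued_at = tokens.pop(token, None)   # pop => token is single use
    if issued_at is None:
        return "rejected"                 # unknown or already-used token
    play_time = max(time.time() - issued_at, 1e-6)
    if score / play_time > max_score_per_second:
        return "flagged"                  # out-of-whack ratio: manual review
    return "accepted"
```

Because the server issued the token, it knows exactly how long the play took, and the ratio check catches scores that arrive implausibly fast.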

IIRC in Puzzle & Dragons (one of the highest grossing F2P mobile games ever) the matches are purely client side, with a simple "I won" or "I lost" sent to the server after the match. I don't think it has leaderboards as a focus, which I bet is not an isolated decision. In general, the design of online games of any kind needs to take into account all the potential issues & hacks and not be designed in a vacuum.

There's no universal answer to your question, but analysing behaviour of players and flagging various kinds of suspicious activity for more detailed inspection is likely the most common way. Designing the way to respond to such incidents is also non trivial, i.e. banning the player vs deleting the score, doing it quietly vs doing a large batch and prominently talking about it with the community, etc.

There are basically two low-cost solutions that work well. First, a rolling window, so your score is shown if it's in the top X% of the last 1,000 scores. People may still cheat, but doing so gets boring in the long term, so most scores are real. Second, validating some chunk of the game. For example, with chess you validate that every move was legal, then pick a few random moves and check that they're what your engine would pick.

Note, both of these fail if the high score has any real value.

PS: Another option is to more heavily validate higher scores, keep the top 0.5% honest and cheating to get a lower high score has less value.
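The rolling-window idea could be sketched like this (the window size and top-fraction are the example's values; `qualifies` is a made-up name):

```python
from collections import deque

WINDOW = 1000
recent = deque(maxlen=WINDOW)    # old scores fall off the back automatically

def qualifies(score, top_fraction=0.05):
    """Record the score; report whether it makes the top X% of the window."""
    recent.append(score)
    cutoff_rank = max(1, int(len(recent) * top_fraction))
    threshold = sorted(recent, reverse=True)[cutoff_rank - 1]
    return score >= threshold

recent.extend(range(100))        # seed the window with ordinary scores
```

Even if a cheater lands a fake score, it ages out of the window as honest scores keep arriving, so the damage is temporary.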

> Another option is to more heavily validate higher scores, keep the top 0.5% honest and cheating to get a lower high score has less value.

That's one kind of thing I've thought about in the past. I'm thinking of things like Game Center's global scoreboard, so if you only ever check scores that are supposed to break the top 1000 then you don't have to bother checking everyone's games. If you fake a score of 270 points, well, there isn't a huge benefit in stopping you.

> You could use this model of recording inputs and playing them back on the server but that seems like it would be a ton of work if your game is popular and an extraordinary cost for keeping a simple score board clean.

You only need to validate scores that would actually make it onto the leaderboard, which would be a tiny fraction of all plays. To avoid being overwhelmed by bad submissions, you flag users who send them as "don't even bother checking, just log and reject".

The real problem is RNG manipulation and TAS-ing in general. How do you ensure a malicious user can't automatically brute-force the whole tree of possible plays?

> You could use this model of recording inputs and playing them back on the server but that seems like it would be a ton of work if your game is popular

The main costs would be setting up servers / network serialization code, and isolating the game logic enough that you can reuse the same code on the client and server.

The latter is potentially useful for replay systems as well (so you can see how other top players are playing the game.)

> and an extraordinary cost for keeping a simple score board clean.

Some ways to keep costs down:

- Offload validation onto other clients (this is better suited for matchmaking based multiplayer games where you're communicating anyways, where it can be as simple as having both clients upload the score or result for a given match id - flag both clients when they disagree, whoever racks up flags consistently is cheating / has bad ram / ???)

- Only validate N% of the games from the event stream at random

- Only validate the top N% of scores from the event stream (who cares if people are cheating to give themselves terrible scores?)

> You could use this model of recording inputs and playing them back on the server

Before King was known for its mobile games they were a cash tournaments website (now royalgames.com). This is what they did as well as recording a user's mouse movements.

Perhaps you can solve this as follows. The client is going to go through a series of states, guided by some inputs. You can compute a hash from these states. Now you let the server go through the same states (using the same inputs), and compute the hash, and compare the hash codes from server and client.

This sounds prohibitively complicated (and requires work from the server). But note that the state can be a partial state (with obviously weaker protection against hacking), but perhaps still good enough.
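A minimal sketch of the state-hash comparison, assuming both sides encode the (partial) state identically (the fields hashed here are invented for illustration):

```python
import hashlib

def state_hash(tick, positions, scores):
    """Hash a deterministic encoding of (partial) game state at one tick."""
    h = hashlib.sha256()
    h.update(str(tick).encode())
    for p in positions:                 # integer grid positions
        h.update(("%d,%d" % p).encode())
    for s in scores:
        h.update(str(s).encode())
    return h.hexdigest()

a = state_hash(17, [(3, 4), (9, 2)], [100, 250])
b = state_hash(17, [(3, 4), (9, 2)], [100, 250])
assert a == b                           # same state => same hash
```

The client sends its hash each tick (or each checkpoint); any divergence from the server's replayed hash reveals tampering or a desync.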

From my recollections of forever ago, RTS games like the Spring engine use checksums on state to prevent manipulation: the game client goes "out of sync" if a client's checksum doesn't match. The state checksum is computed at every tick of the simulation, which means you don't have to run the simulation on the server. Debugging is very hard though, and odd things like FPU precision differences between processors can create all kinds of difficult bugs.

Those FPU precision issues are a nightmare. The most common way to get around them is to do integer math instead.

One thing that makes this possible is that many of those games run at much slower update rates (typical games are 30/60 Hz, many RTS games run at 10 Hz) and they run in lockstep. If different clients have slightly different views of the world, it doesn't work at all.
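A sketch of the integer-math workaround, using a hypothetical 16.16 fixed-point format. Every client gets bit-identical results because integer multiply and shift have no platform-dependent rounding:

```python
FP_SHIFT = 16          # 16.16 fixed point: 16 integer bits, 16 fractional
FP_ONE = 1 << FP_SHIFT

def to_fp(x):
    """Convert a float to fixed point (done once, at the edges)."""
    return int(round(x * FP_ONE))

def fp_mul(a, b):
    """Fixed-point multiply: plain integer ops, deterministic everywhere."""
    return (a * b) >> FP_SHIFT

def from_fp(a):
    """Convert back to float for display only, never for simulation."""
    return a / FP_ONE

# velocity * dt computed identically on every machine in the lockstep sim.
vel = to_fp(2.5)
dt = to_fp(0.1)        # one tick of a 10 Hz lockstep simulation
delta = fp_mul(vel, dt)
```

The key discipline is that all simulation state stays in the integer domain; floats appear only at input and rendering boundaries.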

I thought that IEEE floating point operations were well-defined, and guaranteed to always behave in the same way (but there are a few CPU flags that can influence e.g. rounding).

You could send the score on a regular basis, and check to make sure it isn't increasing at an impossible rate?
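As a sketch, a rate check like that is cheap enough to run on every score update. The cap here is a made-up number standing in for whatever maximum the game rules actually allow:

```python
MAX_POINTS_PER_SECOND = 500  # hypothetical cap derived from game rules

def plausible(prev_score, new_score, elapsed_seconds):
    """Reject score updates that grow faster than the game allows."""
    if new_score < prev_score:
        return False  # assume scores only ever increase in this game
    return (new_score - prev_score) <= MAX_POINTS_PER_SECOND * elapsed_seconds
```

This doesn't catch a patient cheater who stays under the cap, but it filters out the crude "set score to 999999999" class of hack for almost no server cost.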

You can send the replay of client actions taken, including the random seed to be sure the board and subsequent random results are accurate. A bejeweled game probably represents a few KB of data in this format. Here's the seed, a list of which two gems were switched, and when special abilities were activated. Done.
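A toy version of that replay check. The board generation and match rule here are stand-ins, but the shape (seed in, recorded moves in, recomputed score out) is the point:

```python
import random

BOARD_SIZE = 64  # stand-in for an 8x8 gem board

def replay_score(seed, moves):
    """Server-side replay: regenerate the board from the seed and
    re-apply the client's recorded moves, recomputing the score."""
    rng = random.Random(seed)
    board = [rng.randrange(5) for _ in range(BOARD_SIZE)]
    score = 0
    for a, b in moves:  # each move swaps two gem positions
        if not (0 <= a < BOARD_SIZE and 0 <= b < BOARD_SIZE):
            raise ValueError("illegal move in replay")
        board[a], board[b] = board[b], board[a]
        if board[a] == board[b]:  # stand-in for real match detection
            score += 10
    return score
```

The claimed score is accepted only if it matches the replayed score; an out-of-range move or a mismatched total flags the submission.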

Chess has a standard notation, and there's even tools to detect when human players are cheating by having chess engines play their moves by replaying parts of the game to those engines and checking for too many move similarities.

Build a community that's honestly interested in skill and frowns on cheating. Then trust that community to detect and flag the (hopefully very few) cheaters. An example would be Dustforce.

Of course this only works with the right community.

Essentially a social instead of a technical solution.

You could send events to the server (connected these gems, hit this target), and let the server calculate the score. Of course, that's subject to abuse as well. Short of hosting the game remotely, I'm not sure there is a solution.

They missed one key bit of CS's network model (which is what made it so great): it would actually rewind the game state to do hit checks, so you could have a latency of 200ms+ and still have a reasonable experience.
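The rewind idea can be sketched like this (a 1-D toy, not any engine's actual API): the server keeps a short history of positions and, when a shot arrives, checks the hit against where targets were at the tick the shooter actually saw.

```python
from collections import deque

HISTORY_TICKS = 32  # hypothetical history depth (~0.5 s at 64 ticks/s)

class LagCompensator:
    """Keep a short history of player positions; when a shot arrives,
    rewind to where targets were at the shooter's perceived time."""

    def __init__(self):
        self.history = deque(maxlen=HISTORY_TICKS)  # (tick, {player: pos})

    def record(self, tick, positions):
        self.history.append((tick, dict(positions)))

    def hit_check(self, shot_tick, target, shot_pos, radius=1.0):
        # Find the snapshot closest to when the shooter saw the world.
        best = min(self.history, key=lambda h: abs(h[0] - shot_tick))
        past_pos = best[1].get(target)
        if past_pos is None:
            return False
        return abs(past_pos - shot_pos) <= radius
```

Bounding the history depth also bounds how far back a high-ping (or cheating) client can claim to be shooting.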

Also, the common term for resolving these kinds of divergence on the client is "dead reckoning". You've always got diverging states from latency, so you're continuously trying to reconcile this on the client.

Simple Newtonian-physics-based games (SubSpace, etc.) are famous for being latency tolerant, since the simulation is incredibly deterministic and play is based on extrapolating where things will be in X time rather than on twitch responses (which is also why fighting games are so hard without a solution like Counter-Strike's). You could play SubSpace on a 500ms dial-up connection and still be competitive without needing to lead for lag.

The article does mention it, though without going into detail - it's called lag compensation.

A quick note, I think dead reckoning is not about correcting for latency. It is a conscious decision on the server to send less precise (cheaper), incremental data to the client most of the time. The client extrapolates from this imprecise data to still provide a fluid and continuous motion. The server performs the same extrapolation, and precisely tracks the error accumulated by the client to decide when to send a more expensive and precise packet that corrects the divergence.
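That scheme might look roughly like this (a 1-D toy with a hypothetical threshold): the server mirrors the client's extrapolation and only pays for a precise update when the tracked error grows too large.

```python
ERROR_THRESHOLD = 0.5  # hypothetical: send a correction past this error

def extrapolate(pos, vel, dt):
    """Both sides extrapolate identically from the last sent state."""
    return pos + vel * dt

def server_tick(true_pos, last_sent_pos, last_sent_vel, dt):
    """Server mirrors the client's dead-reckoned guess and only sends
    a precise (more expensive) update when the accumulated error is
    too big."""
    predicted = extrapolate(last_sent_pos, last_sent_vel, dt)
    if abs(true_pos - predicted) > ERROR_THRESHOLD:
        return ("correction", true_pos)  # expensive precise packet
    return ("skip", predicted)           # client's guess is close enough
```

The bandwidth saving comes from the "skip" branch: for mostly-ballistic motion, the client's guess stays within tolerance and the server can stay quiet.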

That's the traditional term, but it has a specific meaning in game network programming. For instance, we never send the "error accumulated" back to the server. Instead the server usually sends a PVS (potentially visible set) that the client handles displaying in a meaningful way (reconciling any discrepancies). This PVS is usually absolute values, but at a discrete snapshot in time (of which the client can never be certain, due to latency and the lack of a single global clock). The client is basically guessing where the simulation is going and then smoothly handling (normally by interpolating over your usual time slice) differences from the server snapshot.

That PVS is why you'll see map hacks where people can see through walls and whatnot. The server is doing less expensive checks (not doing a ray cast per client-to-client pair, which is n^2) and the cheat takes advantage of that data sitting in client memory.

There's something I always wondered about the gamestate rewind thing:

Is it correct that the gamestate rewind only works for hitscan weapons (instant-hit bullets)? What about slow-moving projectiles or physically accurate bullets? Can the server spawn the projectile 200ms in the past, at the time you pressed the button? But then other clients would receive the information about 400ms after the initial keypress, which does not seem right...

The projectile case is actually much better.

Since as a player you're trying to shoot the projectile to "predict" where it will intersect, you're actually playing to the strengths of latency and dead reckoning (that's the SubSpace case above).

You want to segregate your actions into twitch (reactionary: hitscan, parry, etc.) and predictive (slow projectile, > 250ms TTL, etc.) and only rewind/replay for the twitch case.

There's also the case that when you rewind and do confirm a gamestate change you need to gracefully handle resolving it on all clients. That's why you'd see people warp back around corners when shot in Counter-Strike. Their player movement speed slowed after being hit and the server reconciles it and then the clients interpolate the result.

In Source Engine games, there is no lag compensation for projectiles. The lower your ping, the faster the projectile is spawned on the server. You can see this very clearly when playing on high ping: you click mouse1, you hear the rocket fire, but the rocket doesn't appear for a good fraction of a second.

Since TF2 is very heavily based around projectiles (Soldier's rockets, Demo's pills and stickies, etc) many competitive players play with their cl_interp (determines how many ticks worth of game state are delayed before being displayed, interpolated, to the player) set lower than default (which is 2 ticks at 66 ticks per second by default). This results in jerky movement if packets are lost, but it ensures that they see threats as soon as possible so that they can fire their projectiles with minimal (real time) delay. I think it might also have some effect on client side input buffering; not sure, but that would be an even bigger reason for projectile users to want low interp.

another recent example of "never trust the client" is the amazon kindle unlimited debacle.

in case you missed it: the client syncs the last page read. so you can buy a book, jump to the last page, and amazon marks it as read, without checking for intermediate states.

the problem is: amazon pays authors by "pages read". so people publish fake books with 3000 pages, let sweat shops try the book for free and just jump from the first to the last page, sync and it registers with "this customer read all 3k pages".

now create 20 fake author accounts and 30 fake reader accounts, publish 20 fake 3k-page books (one per fake author account, practically filled with 3k pages of randomly scraped web content) and you've got 20 × 30 × 3,000 = 1,800,000 pages read.

if you are an aspiring author and publish a novella with, say, 50 pages, and 10,000 readers devour every single page of it, you've got 500,000 pages read.

the faker can do this in a week and takes about 4 times more than you, even though it took you 2 months to write your novella. payouts come out of a pot shared between all authors. thus authors lose in the short run (less money now), cheaters win big (a lot of money now), and amazon doesn't lose money (now) because the payout is the same.

in the long run it's still problematic because small authors are unhappy once they realize what's afoot (and earn 1k instead of 10k). big-name authors don't care as much whether they earn 1m or 1.1m. customers aren't likely to buy the fake books anyway (cheaters take them offline before the free trial period ends), so they're not really affected.

That could reasonably be considered fraud. If I were Amazon, I'd pursue them, publicly.

The Division is an interesting game from a QA perspective. While client-side architecture is one thing, blocking bugs are present in the game and have been a strong deterrent to community stability.

Case in point, it was recently discovered by the community that gear with the "Protection from Elites" bonus actually increased damage taken from Elites: https://reddit.com/r/thedivision/comments/4g6lnk/tested_conf...

And that's not even getting into the loot quality issues and the complete lack of a carrot-on-a-stick that has been codified in many, many games. (Massive actually made the game grindier because people were hitting end-game content too fast. Which only punished people who did not hit the content yet.)

coincidentally i re-read the post about the riot games league of legends automated testing infrastructure yesterday. it's fascinating (and honestly, i'm even a bit envious!).

in case you haven't read it yet: https://engineering.riotgames.com/news/automated-testing-lea...

guess it's down to the value of competitive multiplayer. e-sports are the focus of games like LOL (or, i guess, CS), instead of throw-away gaming.

the division won't have international competitions in 3 years, and that's ok with ubisoft, because they'll have a couple of new games out by then and players will have bought those instead of still playing the division.

I'm still constantly shocked at how many games do this. In the MMOG world it is really easy to do everything right, UO came out in 1997 and the devs outright told everyone working on similar games "never trust the client" and "the client is in the hands of the enemy". Yet games like FFXI and WOW that came out many years later have the client setting the position of the player and the server just blindly accepting it, allowing for common and popular speed/pos hacks.

UO had rubberbanding rampant - that is, you'd move on the client, then warp back to an old position after the server said you actually didn't move there.

It was terrible.

Almost all MMOs with WASD movement trust the client to set the player's position, and it mostly works. Cheating through warping or speed is easily detectable through other means of post-verification, and banning players that fail that verification. It makes the game feel more fluid to the player while still not really allowing cheating.

Everything else is generally handled serverside which is fine because it mostly doesn't need as high a reaction time as movement to feel natural.

If speed/warp hacks are a problem, it's not because the developers that use client-set-movement aren't able to fix it, it's because they don't care.

>UO had rubberbanding rampant

Only if you had high latency and tried to move somewhere you couldn't actually move.

>If speed/warp hacks are a problem, it's not because the developers that use client-set-movement aren't able to fix it, it's because they don't care.

That's precisely what I said. Obviously they can fix it, since it was solved before they even started development. They did it wrong and don't care.

Yep, I was playing it from Australia on dialup, had about a 200+ms ping. What ended up happening is my character would be running in the wilderness, then after a few seconds it would snap back because I hit the side of a house/castle.

Movement in UO worked ok most of the time, and that was with clients on 56k modems and servers powered by hamster wheels.

> it's because they don't care

That'd be nearly every company running an MMO then. You're quite right in principle: speed hacks and teleportation should be extremely easy to find by doing basic sanity checks every now and then. But in practice these things go on for months and months.

Yes, it's true that they don't care. I know this from experience as one of 'they'. Though saying we don't care is an oversimplification: as a developer, I would not be able to justify allotting time to fixing hacks that don't hurt the game much, until they do.

Because it's very expensive to run almost everything server-side. It's not incompetence on the devs' part, it's just about costs.

No it is not, as I said many games do it correctly. It is a very simple distance check, you can perform millions per second on cheap commodity hardware.

So why do you think most games use p2p?

Most games use P2P because it's a hell of a lot cheaper to run 1 or 2 servers for matchmaking and offload all of the simulation and network cost to the players, rather than hundreds or thousands of dedicated servers, and because their developers were focused on consoles where cheating is much more difficult. If you're already using dedicated servers for whatever reason, though, the computational cost of strictness with player input is not so great in the grand scheme of things.

One I have experience with is DayZ, which has terrible hacking/scripting problems because it is based on the ARMA military simulator whose audience wasn't inclined towards cheating; the server thus trusts many client actions. They're rewriting the whole engine partly because of this.

WoW has some mild checks against this stuff. In WotLK they triggered all the time if you jumped or turned on a corner of several polygons, and you'd get disconnected :(

I doubt that the developer's didn't know not to trust the client - they likely made a business decision.

I play (or used to, a lot more) Elite: Dangerous, a space sim with multiplayer. They made the decision to use (mostly) p2p/client networking to save money, since they wouldn't need nearly as many servers. This has caused other issues in addition to cheating, for example a low limit on the number of players in the same "instance" of the universe.

But it was a business decision - imo the wrong one, but what do I matter?

Yeah, and I'll go out on a limb and say it was the correct one. The shelf life of a popular game like this is probably shorter than most people think, so it makes sense to offer an amazing experience that you couldn't necessarily achieve with an authoritative server model; by the time hackers have defeated the game, players have probably moved on to the next one.

Guaranteeing your multiplayer blockbuster game designed to have "infinite gameplay" will be dead in the water the instant hackers discover its highly flawed networking model doesn't sound like a "correct business decision".

It was marketed to have infinite gameplay. What game do you know that honestly has achieved that?

Initial Release Date: March 8, 2016

How do we reconcile sentiments like "never trust the client" alongside this community's hope for decentralization, á la "The Internet Has Been Stolen From You. Take it Back Nonviolently"?

In decentralization you are always the server and everyone else is the client. You have ways to validate what they send to you. It works out because they all think of themselves as the server and everyone else as the client too.

This is of course a vast simplification but the trust part is the same. No one trusts anyone but themselves and validates everything everyone sends them.

Obviously, this applies to all networked applications—not just games.

It's embarrassing how often I see web apps which use something like Firebase with absolutely no validation or access control. Often the developers behind them don't even realize they have a problem. Their excuse is that "most users wouldn't have any reason to hack it, so why does it matter?"

Developers need to realize their clients are inherently in the hands of hackers. Any security needs to be done on the server if it is going to succeed at all.

Can you recommend a go-to book for this kind of stuff?

Ubisoft appears to have some institutional problems. Their other recent Tom Clancy title, Rainbow Six Siege, is hands-down one of the most rewarding & intense shooters I've ever played, however it too is yet to realise its full potential due to amateur-hour quality issues and ongoing troubles with hackers.

The Division also uses TCP instead of UDP so they're clearly totally incompetent when it comes to networking.

Does anyone have a mirror for the "this is super bad" video?

I couldn't find it, but I did find a reddit thread with other video examples:


Do you refer to the netcode analysis? https://www.youtube.com/watch?v=oc51gxwvyGc Or the clientside thrust problem?

I would think that in 2016 this should be something everybody knows....

Yeah, isn't this basically the first thing you learn in game networking?

Or in any sort of networking work where you don't control both ends of a connection.

Even if you do, you still should never trust the client. Who knows when you might lose control of one end?

Trusting the client, in essence, turns a game into one from your childhood: shouting matches of "I hit you!" "No you didn't!" "I totally did!"

I think there's a false dichotomy here. You can't simply ignore the client, so no matter what, you have to have a set of server-side checks that ensure the client's inputs are valid.

Part of what this article offers is "you can't fix it by adding server-side checks", guessing that if they haven't done it already, it's impossible to do. I think it's entirely possible that their model is reasonable (takes movement, fire events, etc. instead of position, inventory, etc.), and they simply haven't guarded against malicious inputs. Adding server-side checks is _exactly_ what you need to do to make movement/firing inputs safe!

The existence of a certain class of bug doesn't mean the whole architecture is broken. It may simply mean they shipped the game without trying to detect cheaters. That may be a bad idea, but if the design is right, it's certainly fixable.

I think you might be misunderstanding the scope of an "input". It's something like "the player pressed the key/button that maps to move forward", not "move to position X". Teleport hacks wouldn't be possible in the former case because the player has to actually move their avatar to the desired position by traversing the game world through a series of inputs. The server would then process the movement inputs, and the avatar would, say, bump into a wall, preventing its movement. There would be no input for "my position is now X".

Is an input always valid? "Firing once" is an input: you have to validate that it's been long enough since last firing that you should act on it (limit the rate). "Moving left" is an input, but by how much? Can I send "Left, 9999999" and replicate a teleport hack by pre-calculating these inputs?

You could bound the inputs to a simple stream of enums (FIRE, LEFT, RIGHT, etc.), but it wouldn't be crazy to batch it up a little bit and send some magnitude data as well (FIRE 2, LEFT 50, etc.). It's a little harder to validate, but depending on the specifics of implementation, that could be a win.

"Moving left" is NOT an input. An input is "The controller is pushed to the left" (with perhaps a numeric value of how far it is pushed to the left). "Firing once" is similarly NOT an input. "The fire button is down" and "The fire button is up" are inputs. That is how well implemented first person shooters have worked since the Quake days (I think Quake 2 actually).

The point of the article is actually that the server doesn't validate anything. There's nothing to validate. The only data it gets from the client is the state of the controls. Then it runs the real simulation. The client only ran a prediction of what it thought the server would do. Sometimes, it's wrong and corrects itself when it gets new values from the server.

No, it's more like "move left" and you move left at your velocity (95, let's say) for that one frame. Next frame you can send any move, but it will only move you 95 spaces in that direction, no more.

It's the same as just holding down a button on your keyboard - the game engine moves you according to the rules. Now you're just transmitting the fact that the player is facing NW and sent a move left command. You're just running the game on the server like you would on the user's machine.

Conceptually it's actually simpler - you're just running the same thing on the server and sometimes backtrack in the client when the connection is laggy.

Another way to look at it: the game simulation runs in a certain number of frames per second. Each frame you send to the server the state of your input devices (holding left arrow key, left mouse button is clicked, etc) and this is translated to moving, firing weapons, etc according to the rules of the game.

You can't teleport hack by sending "left 9999" because your inputs only serve to tell the server what direction you want to move towards. Your maximum movement speed depends on the game rules and should not be under your control.
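In other words, the server derives displacement from the game rules, not from the client. A toy version (the speed constant is a stand-in for whatever the rules define):

```python
MAX_SPEED = 95  # units per tick, fixed by the game rules, not the client

def apply_input(pos, controls):
    """Server-side movement: the client only reports controller state;
    displacement per tick is determined by the rules."""
    dx = 0
    if controls.get("left"):
        dx -= MAX_SPEED
    if controls.get("right"):
        dx += MAX_SPEED
    return pos + dx  # "left 9999999" is simply not expressible
```

Because the protocol carries only button state, the worst a hacked client can do is press buttons very precisely, which is still bounded by what a legal player could do.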

The server-side network model he describes here is the same architecture Meteor is designed with (see this page: https://www.meteor.com/why-meteor/features). With Meteor, you get client side prediction and latency compensation (they call it "optimistic UI" now) for free. I've always been impressed they decided to build that, because I sure never would have myself. In fact the Meteor team has always said you need this kind of architecture in order to build true real-time applications. But I haven't seen other web-oriented platforms take a similar approach. Did the Meteor team just know something no one else has picked up on (despite it apparently being common practice in the games industry)? What gives?

Even though I don't use Meteor anymore, I still like DDP[1] a lot. It's simple (it can be implemented in 50-100 lines), but it is the core of Meteor. There's a specification for it in their Github repo. Meteor's "optimistic UI" is basically an abstraction on top of DDP, IMO.

[1] https://www.meteor.com/ddp

One can easily observe this "server is the actual game" property in Minecraft when running on an underpowered server.

At home, Minecraft runs on my home server (with a small Athlon CPU) and it fails to keep up with the required speed of the game, so the server log will show messages like "server clock running behind, skipping 58 ticks" every so often.

An observable effect of this is that you have to hold the right mouse button just a little bit longer than the actual animation when eating stuff. And when you look at the sun, you will see it moving forward continuously (as calculated by the client), but skipping back a tiny bit every few seconds (when the client adjusts for the server's time drift).

I'll bet there was at least one developer who saw they were doing it wrong, and told them, and was overruled. I'd love to hear his/her account.

That type of story grows old after the 100th time you hear it.

Also, get off my lawn.

Console port gone horribly wrong? I mean, how else do you violate the most simple of game rules - never trust the client?

The irony is that the console versions are the least-impacted, as the client-side hacks are only loadable on the PC version.

Trusting the client works on consoles since they're locked down.

Porting that same idea to PCs gets you into trouble almost instantly.

Hence my question - is this a console port gone horribly wrong?

Walton's phrasing is kind of ambiguous, Console port could mean port from console or port to console.

It could have been developed for console first.

"Console port" in the gaming world unambiguously means "a game that was originally developed for a console, then ported to the PC." The opposite would be "PC port."

Generally these days it isn't a port per-se it's simply one codebase compiled for different platforms. So I'm sure they just take the stance of 'this always worked on the console (where it's much harder to run modified code)'.

Actually, after extraction of the data files, it appears to be a PC game ported to console rather than the usual console-to-PC port.

If you don't know this you should not be working on software. It's a latent risk to customers.

It's a video game, not a life critical system.

A game can include a payment system, or processing of personally identifiable information. Security is important.

I thought it was common sense to treat the client as merely the data-representation layer and nothing else; sad to see multi-million-dollar companies make rookie mistakes.

I guess in the eyes of a corporate, it's only a mistake if it harms the bottom line.

Sounds like this would though, if players can't enjoy a fair experience.

I'm surprised that "never trust the client" isn't among the basic concepts that everyone should know, at least on HN. It's such a basic premise.

If you ignore the context, "Never trust the client/patient/enduser/etc..." is probably equally true.

now that i think about it - doesn't GTA5 suffer from the exact same problem? cheaters spawning tanks above people practically must be a client game state problem.

Yes and that is why I think it's a new business model to make games not last long on the PC. At least the multiplayer part.

They promote the shit out of games before release and expect them to expire a few weeks after. Reducing operational costs. They also don't have to develop and maintain an anti-cheat, a better server-side structure, and they can close down most servers.

I think it was an unexpected consequence of a bad console port to PC that proved to be quite lucrative. And now they know people will keep buying no matter what, hence the new disposable PC game business model.

Basically, if you play on a PC you're screwed. Another reason not to buy a console or support games like the division on the pc.

I was not interested in this game, but now knowing that it can be hacked in cool ways makes me interested. I only care about playing with friends, and this would allow for some super cool custom game modes. It's not a bug it's a feature!

Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact