Clients have bugs. Old clients don't upgrade. Packets arrive out of order. ACKs never make it to clients, causing clients to repeat what the client believes to be a failed operation. All of this is before even considering an attacker actively trying to subvert you.
The server should always be the source of truth. It should always enforce all the consistency rules. Database schemas can be a critical tool here as well.
Don't blindly slap whatever the client sends into a NoSQL database, then vomit it back out for queries.
Amen. And I'm a client developer. Clients aren't for data validation, etc. That's the server's job because the server has to store it, use it, ship it off to other things.
Clients are for displaying all that data in a way a human can understand. It has people skills, damn it!
Things like PunkBuster et al. are more like anti-virus software: they try to find the signatures of known cheat clients.
So it's best to write your own... and rename your debugging process to "Google Chrome." He got pretty far with his auto-walker in some MMOs. He had limiters so his runs wouldn't be too fast and would go up/downhill correctly... or so he thought. He forgot to multiply by -1 somewhere and the server banned him for attempting to fly off into space.
Wonder how soon there will be a market of PCI-e cheat rigs for DMA attacks...
 Yep, gamers are literally accepting rootkits that allow arbitrary remote code execution on their machines.
That's certainly risky and rather horrible, but what's the alternative? Multiplayer games are a huge business and a huge culture, and all that goes down the drain if people cheat.
Game companies have millions of people around the world competing for prestige and money. They're running game software on their own personal computers--machines that are also full of their financial records, naked pictures and personal correspondence. If your game is full of cheaters, you're done. If your game gets players hacked, you're really done.
I have no idea what the solution is, and I'm really glad it's not my job to figure it out.
If you get a competitive advantage from doing something like grinding, which computers can do better and more tirelessly than humans, and which humans tend to dislike, then inevitably you will have cheating (e.g. WoW).
If, on the other hand, the competitive advantage comes from working out a strategy - in what order to play your cards to maximise your chances of building up your strength while attacking the opponent, yet keeping something in store for surprises the opponent may spring on you, a really complex strategic task - then that's something people enjoy doing and computers suck at (Hearthstone).
People may be addicted to the first kind of games, but is that something that should be protected? It's kind of similar to people addicted to slot machines... sad, a reality we have to accept, but not something we should expend vast amounts of intelligence trying to sustain.
Of course, there are people who will do it for the money, and I'm not saying stop them, but realise that they are basically just a few steps short of people refining meth formulae to make the drug more addictive: the fact that there's a business case exploiting a human weakness doesn't mean it's a productive use of intelligent people's time.
One of the examples you gave, Hearthstone, actually had many bots running on the ladder for a time. Their goal was grinding, but even so, a number of them were better than a significant portion of the human playerbase. With the recent advances in artificial intelligence, computers are not that bad at this kind of task.
Overall, I don't disagree with your main point though. I have no qualms automating boring and repetitive tasks in non-competitive games. And if a competitive game has those parts, I'd probably avoid playing it. I'd rather spend my time coding than wasting it grinding because it's part of some game I otherwise enjoy. I feel it is indeed a more productive use of my time. I wouldn't go as far as judging other people who like the grind though.
E.g. turn the opponent's textures bright purple so you can see them better. Make the walls transparent. Aimbots that spoof mouse input.
The server must be authoritative or all bets are off, but you still do need client side anti-cheat countermeasures (stuff like verifying screenshots).
This stuff can't be made 100% foolproof, so competitive games where big bucks are at stake must still take place with the organizers providing the hardware and the players located in the same room.
Ultimately, no matter what you do, some small percentage of your players will cheat. Every major game on the market has paid cheats which fully bypass the protections available for it. It's better if those cheats can only buy them a little than a lot. It's not worth investing a lot of effort in more than basic cheat protection; it is worth investing in handling as much as you can server side and exposing as little as you can to the client, since cheaters can't game what the server keeps to itself.
This sort of behavior becomes impossible. If they have to aim in a fairly human manner, they can still be killed by anyone walking up behind them, for example. It may provide some unfair advantage, but not nearly as much as is possible in many games today.
Honestly, I immediately smell naivete BS when someone says "the solution is simple"...
If a company wants to attract competitive gamers, then they have to develop games you can't cheat in during competitions. Right now, the most reliable way of doing so is server side, with all the challenges that entails. Otherwise, they have to install what essentially amounts to rootkits - which isn't exactly what most gamers want to encourage.
(Alternatively, run game in VM? I guess performance would suffer too much with that approach?)
I lost interest in game RE a long time ago, but http://www.unknowncheats.me/ seems to be its successor, though from a cursory glance I see there are a lot more cheats and less technical information.
GD we miss you.
Made gaming IRC channels utter chaos for weeks.
I used that to bot on WoW back in the day; this tool was fabulous and can probably be adapted to any game. Knowledge of C++ is required to create an extension, after which you can code your bot in C# or LavishScript (their own scripting language, quite powerful).
It's true that you can't trust clients not to lie. However, in some situations, it's the correct decision to accept a constrained number of lies if that brings you benefits.
Like so many other absolute statements in software, in reality it's a trade off, and you need to consider the business context to make the right decision.
Right now I'm working on an app where we trust the client. We know that a hacker can compromise a client and send any kind of bogus data; we're just completely OK with it from a business perspective.
I mean why not just do it the right way?
What you're also implying is that clients should do no processing - because clients can't be trusted to do anything. If you apply this logic to web-browsing, you're asking the server to send the client a bitmap image of what the rendered page should look like.
They aren't saying "don't let the clients do anything", they are saying "don't trust the values the clients send back"
If the client messes up the rendering, the only person affected is the client. There is no 'trust' required, because you are not relying on anything the client has done.
The idea of trust only comes into play if there is a consequence to that trust being broken.
When somebody says "Never trust the client" inevitably somebody will find a use case that sounds like trust even though it's not what they're saying here and use that as justification.
The lack of nuance is how things like "never use stored procedures" gain traction when they shouldn't.
No, just google "wallhack".
the server sends to the client: account A has $100, account B has $200, make your transaction.

1. trusted client

client says: i've done my transaction, now account A has $50 and account B has $250.

the server updates the database accordingly.

2. untrusted client

client says: i want to do a transaction of $50 from account A to account B.

the server decreases account A by $50, increases account B by $50, and updates the database accordingly.

now a cheating client under model 1 says: i've done my transaction, now account A has $52 and account B has $252.

the server updates the database accordingly.
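In code, the difference between the two models comes down to who does the arithmetic. A minimal sketch of the untrusted-client version, where the server validates and applies the transfer itself (account names and amounts are illustrative):

```python
# Server-side transfer: the client only *requests* a transaction;
# the server does the math and enforces the rules.

def apply_transfer(balances, src, dst, amount):
    """Validate and apply a transfer; reject anything the client
    should not be able to do."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if balances.get(src, 0) < amount:
        raise ValueError("insufficient funds")
    balances[src] -= amount
    balances[dst] = balances.get(dst, 0) + amount
    return balances

balances = {"A": 100, "B": 200}
apply_transfer(balances, "A", "B", 50)   # legal: A=50, B=250
```

Under this model there is no way for the client to claim "A has $52" at all; the worst a hacked client can do is request transfers the server will refuse.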
they try to do server checks to prevent this, but due to network timing problems they can't check the exact values; they can only calculate whether the result is approximately right.
say a player's maximum running speed is 6m/s. they could do periodic checks (every second) to see whether the player moved more than 6m since the last check. but what if the player falls off a building or is thrown further than that by an explosion? it would register as cheating, even though it's completely legal to jump off a building. so they have to triple the allowed distance per second to cover those situations (i.e. 18m/s). a cheater with a modified client can now continually run three times as fast as any other player without getting flagged as a cheater.
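A sketch of that periodic speed check, showing why the slack is exactly what the cheater exploits (the speed and tolerance values are the illustrative ones from above):

```python
# Periodic server-side speed check with tolerance for falls/knockback.
MAX_SPEED = 6.0      # m/s, normal maximum running speed
TOLERANCE = 3.0      # slack needed so falls and explosions aren't flagged

def looks_like_speed_hack(dist_moved, dt):
    """Flag movement faster than the tolerated maximum over interval dt."""
    return dist_moved / dt > MAX_SPEED * TOLERANCE

looks_like_speed_hack(17.9, 1.0)  # False: 3x running speed slips under the check
looks_like_speed_hack(20.0, 1.0)  # True: finally flagged
```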
why do they do this? because it's
a) easier to implement (the client has to do all the computations anyway - doing them a second time on the server and then sending back to the client isn't an easy problem - you have to interpolate) and
b) cheaper (less work = less man hours, less complicated work = cheaper programmers* and less server power needed)
* i'm completely sure the devs would have been 100% able to do it right, but it would have taken them longer, thus costing more.
so, how to fix it? there's only one way: rewrite the model. but this is expensive and the titles have already been sold, so why bother investing another couple hundred thousand bucks? even the disgruntled gamers will buy the next ubisoft title if it's shiny enough. i mean, few people will buy the game now that multiplayer is infested with cheaters, but they already exploited enough of the market, and long-time players are probably less likely to buy the next title if they're still playing an old one. maybe.
The big deal is instead about a badly behaved client being able to make "illegal" changes to the server state.
Let's say you have a Bejeweled clone/match-3 game and a global high score board. What can you do to prevent fake scores? You can obviously obfuscate things, sign requests, etc but at some point the client needs to sign the score it's sending up so the client has the key.
You could use this model of recording inputs and playing them back on the server but that seems like it would be a ton of work if your game is popular and an extraordinary cost for keeping a simple score board clean.
Do you try and rely on subtle math tricks that mean that certain score numbers are de-facto invalid because no combination of scores could end up with that as a final total?
Is there any way to handle the issue other than deleting scores that seem likely fake (large round numbers, for example, or those orders and orders of magnitude higher than other scores)?
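One hedged sketch of the "de-facto invalid score" idea mentioned above: if every scoring event in the game awards a multiple of some base value, any total that isn't such a multiple is provably fake. The point values here are made up for illustration.

```python
# Hypothetical "impossible score" filter: assume every match-3 event
# awards a multiple of 10 points, so any legal total must also be one.

def score_is_possible(total):
    """Necessary (not sufficient) condition for a legitimate score."""
    return total >= 0 and total % 10 == 0

score_is_possible(1337)   # False: no combination of events reaches this
score_is_possible(540)    # True: passes, though it could still be faked
```

This only filters lazy cheaters (anyone who reads the scoring rules can submit a divisible number), but it's nearly free to run.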
This is the common model in RTS games, called lockstep, since all clients advance frame by frame based on the inputs from all other clients, in lock step.
It's also a very common way to implement replays and to reproduce rare network bugs while debugging.
(Note that some legit games will also look suspicious, so you'd want to use this to find out which players you should keep an eye on, not as a tool for auto-banning.)
There's a sibling comment below that mentions this. Most RTS games checksum the game state and send an out of sync when the checksums fail. If you generated different random values you'd definitely hit this.
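The checksum idea can be sketched very simply: each client hashes its full game state every N frames and exchanges the digest; a mismatch means some client diverged (different RNG draws, a cheat, bad RAM). The state fields here are illustrative.

```python
# Lockstep desync detection: hash the game state deterministically and
# compare digests between clients.
import hashlib

def state_checksum(state):
    # serialization must be deterministic and identical on every client
    blob = repr(sorted(state.items())).encode()
    return hashlib.sha256(blob).hexdigest()

a = {"frame": 600, "units": 42, "gold": 1200}
b = {"frame": 600, "units": 42, "gold": 1205}   # diverged somewhere

state_checksum(a) == state_checksum(a)   # True: identical state, identical digest
state_checksum(a) == state_checksum(b)   # False: out of sync
```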
Always sucked when it happens about 1 hour into a game of Homeworld.
Or, identify that your business need is just to provide 'some' sort of ranking and compare the user against their Facebook/G+/Twitter friends, who are probably less likely to cheat.
Limiting leaderboards to friends both gives relevant context for fun competition and mostly eliminates the cheating problem by making it a social issue. I know my friends wouldn't be happy if we were all competing on a game and I cheated.
FWIW it's a fun domain space: partly these cool technical approaches (lockstep, dead reckoning) and partly smart user interactions (how you decide to give users immediate feedback when you won't actually know the outcome until 100-800ms later).
Running the game on the server is the only option as far as I know for securing the integrity of the data. You could add a private key to your app, but then you're shipping the private key to the client.
Apple has leaderboards? For what?
Your Xbox hardware is even less secure.
For us, there was no foolproof method. So our main goal was to ensure that the leaderboard wasn't full of obvious cheating.
First, to reduce the volume of cheating, we obfuscated things:
1) We encrypted all communication.
2) We required a single use submit token, generated by the server, with each game play.
We also manually reviewed scores. In particular, the top 10 scores of each game.
Beyond that, since we required a single use submit token, it means that we knew how long the player took to obtain their score. For most games, the higher your score, the longer you played. So we flagged any scores with an out of whack score/play time ratio for further review before showing the score to anyone else.
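The score/play-time heuristic described above might look like this; the threshold is illustrative, and in practice you would derive it per game from real score distributions:

```python
# Flag scores whose points-per-second ratio is implausible for honest play.
MAX_POINTS_PER_SECOND = 50   # assumed ceiling, tuned per game in practice

def needs_review(score, seconds_played):
    if seconds_played <= 0:
        return True              # submit token used instantly: very suspicious
    return score / seconds_played > MAX_POINTS_PER_SECOND

needs_review(90000, 30)    # True: 3000 pts/s is out of whack
needs_review(90000, 3600)  # False: plausible for an hour of play
```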
There's no universal answer to your question, but analysing behaviour of players and flagging various kinds of suspicious activity for more detailed inspection is likely the most common way. Designing the way to respond to such incidents is also non trivial, i.e. banning the player vs deleting the score, doing it quietly vs doing a large batch and prominently talking about it with the community, etc.
Note, both of these fail if the high score has any real value.
PS: Another option is to more heavily validate higher scores; keep the top 0.5% honest, and cheating your way to a lower high score has less value.
That's one kind of thing I've thought about in the past. I'm thinking of things like Game Center's global scoreboard: if you only ever check scores that are supposed to break the top 1000, then you don't have to bother checking everyone's games. If you fake a score of 270 points, well, there isn't a huge benefit in stopping you.
You only need to validate scores that would actually make it onto the leaderboard, which would be a tiny fraction of all plays. To avoid being overwhelmed by bad submissions, you flag users who send them as "don't even bother checking, just log and reject".
The real problem is RNG manipulation and TAS-ing in general. How do you ensure a malicious user can't automatically brute-force the whole tree of possible plays?
The main costs would be setting up servers / network serialization code, and isolating the game logic enough that you can reuse the same code on the client and server.
The latter is potentially useful for replay systems as well (so you can see how other top players are playing the game.)
> and an extraordinary cost for keeping a simple score board clean.
Some ways to keep costs down:
- Offload validation onto other clients (this is better suited for matchmaking-based multiplayer games where you're communicating anyway; it can be as simple as having both clients upload the score or result for a given match id - flag both clients when they disagree, and whoever racks up flags consistently is cheating / has bad RAM / ???)
- Only validate N% of the games from the event stream at random
- Only validate the top N% of scores from the event stream (who cares if people are cheating to give themselves terrible scores?)
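Combining the last two ideas, a sketch: always validate scores near the leaderboard cutoff, and spot-check a small random sample of the rest. The thresholds are illustrative.

```python
# Sampled validation: full checks for top scores, random spot checks below.
import random

def should_validate(score, top_cutoff, sample_rate=0.01, rng=random.random):
    if score >= top_cutoff:      # anything that could hit the leaderboard
        return True
    return rng() < sample_rate   # spot-check the long tail

should_validate(9500, top_cutoff=9000)                   # always True
should_validate(100, top_cutoff=9000, rng=lambda: 0.5)   # False this time
```

Passing `rng` explicitly makes the sampling testable; in production you'd use the default.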
Before King was known for its mobile games, they were a cash-tournaments website (now royalgames.com). This is what they did, as well as recording users' mouse movements.
This sounds prohibitively complicated (and requires work from the server). But note that the state can be a partial state (with obviously weaker protection against hacking), but perhaps still good enough.
One thing that makes this possible is that many of those games run at much slower update rates (typical games are 30/60Hz; many RTS games run at 10Hz) and they run in lockstep. If you have different clients with slightly different views of the world, it doesn't work at all.
Chess has a standard notation, and there's even tools to detect when human players are cheating by having chess engines play their moves by replaying parts of the game to those engines and checking for too many move similarities.
Of course this only works with the right community.
Essentially a social instead of a technical solution.
Also, the common term for resolving these types of things on the client is "dead reckoning". You've always got diverging states due to latency, so you're continuously trying to reconcile this on the client.
Simple Newtonian-physics games (SubSpace, etc.) are famous for being latency tolerant, since the simulation is incredibly deterministic and play is based on extrapolating where things will be in X time rather than on twitch responses (which is also why fighting games are so hard without a lag-compensation solution like Counter-Strike's). You could play SubSpace on a 500ms dial-up connection and still be competitive without needing to lead for lag.
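Dead reckoning in its simplest Newtonian form can be sketched as: extrapolate a remote entity's position from its last known position and velocity, then ease toward the authoritative update when it arrives instead of snapping (the positions and blend factor are illustrative):

```python
# Dead reckoning: predict, then reconcile with the server's value.

def extrapolate(pos, vel, dt):
    """Predict where the entity will be dt seconds after the last update."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def reconcile(predicted, authoritative, blend=0.2):
    """Blend toward the server's position to avoid visible warps."""
    return tuple(p + (a - p) * blend for p, a in zip(predicted, authoritative))

guess = extrapolate((10.0, 5.0), (2.0, 0.0), 0.5)   # predicts (11.0, 5.0)
reconcile(guess, (11.4, 5.0))                       # nudged toward the truth
```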
That PVS (potentially visible set) is why you'll see map hacks where people can see through walls and whatnot. The server does less expensive checks (not doing a ray cast per client-to-client pair, which is n^2), and the cheat takes advantage of that data sitting in memory.
Is it correct that the gamestate rewind only works for hitscan weapons (instant-hit bullets)? What about slow-moving projectiles or physically accurate bullets? Can the server spawn the projectile 200ms in the past, at the time you pressed the button? But then other clients would receive the information about 400ms after the initial keypress, which does not seem right...
Since as a player you're trying to shoot the projectile to "predict" where it will intersect you're actually playing to the strength of latency and dead reckoning(that's the SubSpace case above).
You want to segregate your action into twitch(reactionary: hitscan, parry, etc) and predictive(slow projectile, > 250ms ttl, etc) and only rewind/replay for the twitch case.
There's also the case that when you rewind and do confirm a gamestate change you need to gracefully handle resolving it on all clients. That's why you'd see people warp back around corners when shot in Counter-Strike. Their player movement speed slowed after being hit and the server reconciles it and then the clients interpolate the result.
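The rewind mechanic described above can be sketched as a short position history on the server: test the shot against where the target was when the shooter fired, not where it is now. The tick rate and history length are illustrative.

```python
# Server-side lag compensation for hitscan: keep ~1s of position history
# and answer "where was this player at tick T?" queries.
HISTORY_TICKS = 66   # roughly one second at 66 ticks/s

class PositionHistory:
    def __init__(self):
        self.snapshots = {}          # tick -> position

    def record(self, tick, pos):
        self.snapshots[tick] = pos
        self.snapshots.pop(tick - HISTORY_TICKS, None)  # drop stale entries

    def position_at(self, tick):
        # fall back to the closest recorded tick at or before the request
        while tick not in self.snapshots and tick > 0:
            tick -= 1
        return self.snapshots.get(tick)

hist = PositionHistory()
hist.record(100, (0.0, 0.0))
hist.record(110, (5.0, 0.0))
hist.position_at(105)   # (0.0, 0.0): where the target *was*, not where it is
```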
Since TF2 is very heavily based around projectiles (Soldier's rockets, Demo's pills and stickies, etc) many competitive players play with their cl_interp (determines how many ticks worth of game state are delayed before being displayed, interpolated, to the player) set lower than default (which is 2 ticks at 66 ticks per second by default). This results in jerky movement if packets are lost, but it ensures that they see threats as soon as possible so that they can fire their projectiles with minimal (real time) delay. I think it might also have some effect on client side input buffering; not sure, but that would be an even bigger reason for projectile users to want low interp.
in case you missed it: the client syncs the last page read. so you can buy a book, jump to the last page, and amazon marks it as read, without checking for intermediate states.
the problem is: amazon pays authors by "pages read". so people publish fake books with 3000 pages, let sweat shops try the book for free and just jump from the first to the last page; it syncs and registers as "this customer read all 3k pages".
now create 20 fake author accounts and 30 fake reader accounts, publish 20 fake 3k-page books (one per fake author account, practically filled with 3k pages of randomly scraped web content) and you've got 20 x 30 x 3000 = 1,800,000 pages read.
if you are an aspiring author and publish a novella with, say, 50 pages, and 10,000 readers devour every single page of it, you've got 500,000 pages read.
the faker can do this in a week and take about 4 times more than you, even though it took you 2 months to write your novella. payouts come out of a pot shared between all authors. thus authors lose in the short run (less money now), cheaters win big time (a lot of money now), and amazon doesn't lose money (now) because the total payout is the same.
in the long run it's still problematic because small authors are unhappy once they realize what's afoot (and earn 1k instead of 10k). big name authors don't care as much if they earn 1m or 1.1m. customers aren't likely to buy the fake books anyway (cheaters take them offline before the free trial period ends) so they're not really affected.
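Working through the arithmetic from the comment above (all figures are the commenter's hypotheticals):

```python
# 20 fake books x 30 fake readers x 3000 pages, versus one honest
# 50-page novella read fully by 10,000 real readers.
fake_pages   = 20 * 30 * 3000     # "pages read" farmed in about a week
honest_pages = 50 * 10_000        # pages read earned over two months

fake_pages / honest_pages          # the faker's share is ~3.6x the author's
```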
Case in point, it was recently discovered by the community that gear with the "Protection from Elites" bonus actually increased damage taken from Elites: https://reddit.com/r/thedivision/comments/4g6lnk/tested_conf...
And that's not even getting into the loot quality issues and the complete lack of a carrot-on-a-stick that has been codified in many, many games. (Massive actually made the game grindier because people were hitting end-game content too fast. Which only punished people who did not hit the content yet.)
in case you haven't read it yet: https://engineering.riotgames.com/news/automated-testing-lea...
guess it comes down to the value of competitive multiplayer. e-sports are the focus of games like LoL (or, i guess, CS), instead of throw-away gaming.
the division won't have international competitions in 3 years - and that's ok with ubisoft, because they'll have a couple of new games out by then and players will have bought those instead of still playing the division.
It was terrible.
Most MMOs with WASD movement trust the client to set the player's position, and it mostly works. Cheating through warping or speed hacks is easily detectable through other means of post-verification, and players who fail that verification get banned. It makes the game feel more fluid to the player while still not really allowing cheating.
Everything else is generally handled serverside which is fine because it mostly doesn't need as high a reaction time as movement to feel natural.
If speed/warp hacks are a problem, it's not because the developers that use client-set-movement aren't able to fix it, it's because they don't care.
Only if you had high latency and tried to move somewhere you couldn't actually move.
>If speed/warp hacks are a problem, it's not because the developers that use client-set-movement aren't able to fix it, it's because they don't care.
That's precisely what I said. Obviously they can fix it; it was a solved problem before they even started development. They did it wrong and don't care.
> it's because they don't care
That'd be nearly every company running an MMO then. You're quite right in principle: speed hacks and teleportation should be extremely easy to find by doing basic sanity checks every now and then. But in practice these things go on for months and months.
I play (well, used to play a lot more) Elite: Dangerous, a space sim with multiplayer. They made the decision to use (mostly) p2p/client networking to save money - they wouldn't need nearly as many servers. This has caused other issues in addition to cheating - a low limit on the number of players in the same "instance" of the universe, for example.
But it was a business decision - imo the wrong one, but what do I matter?
This is of course a vast simplification but the trust part is the same. No one trusts anyone but themselves and validates everything everyone sends them.
It's embarrassing how often I see web apps which use something like Firebase with absolutely no validation or access control. Often the developers behind them don't even realize they have a problem. Their excuse is that "most users wouldn't have any reason to hack it, so why does it matter?"
Developers need to realize their clients are inherently in the hands of hackers. Any security needs to be done on the server if it is going to succeed at all.
Part of what this article offers is "you can't fix it by adding server-side checks", guessing that if they haven't done it already, it's impossible to do. I think it's entirely possible that their model is reasonable (takes movement, fire events, etc. instead of position, inventory, etc.), and they simply haven't guarded against malicious inputs. Adding server-side checks is _exactly_ what you need to do to make movement/firing inputs safe!
The existence of a certain class of bug doesn't mean the whole architecture is broken. It may simply mean they shipped the game without trying to detect cheaters. That may be a bad idea, but if the design is right, it's certainly fixable.
You could bound the inputs to a simple stream of enums (FIRE, LEFT, RIGHT, etc.), but it wouldn't be crazy to batch it up a little bit and send some magnitude data as well (FIRE 2, LEFT 50, etc.). It's a little harder to validate, but depending on the specifics of implementation, that could be a win.
The point of the article is actually that the server doesn't validate anything. There's nothing to validate. The only data it gets from the client is the state of the controls. Then it runs the real simulation. The client only ran a prediction of what it thought the server would do. Sometimes, it's wrong and corrects itself when it gets new values from the server.
It's the same as just holding down a button on your keyboard - the game engine moves you according to the rules. Now you're just transmitting the fact that the player is facing NW and sent a move left command. You're just running the game on the server like you would on the user's machine.
Conceptually it's actually simpler - you're just running the same thing on the server and sometimes backtrack in the client when the connection is laggy.
You can't teleport hack by sending "left 9999" because your inputs only serve to tell the server what direction you want to move towards. Your maximum movement speed depends on the game rules and should not be under your control.
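The input-driven model described above can be sketched as: the client transmits only control state, and the server integrates movement under its own rules, so a command like "LEFT 9999" simply has no meaning. The speed and tick values are illustrative.

```python
# Server-side simulation step: only recognized control states have any
# effect, and the magnitude of movement is fixed by the game rules.
MAX_SPEED = 6.0          # m/s, defined by the server, not the client
TICK = 1.0 / 60.0        # simulation step length in seconds

ALLOWED = {"LEFT": (-1, 0), "RIGHT": (1, 0), "UP": (0, 1), "DOWN": (0, -1)}

def step(pos, held_keys):
    x, y = pos
    for key in held_keys:
        dx, dy = ALLOWED.get(key, (0, 0))   # unknown input: ignored
        x += dx * MAX_SPEED * TICK
        y += dy * MAX_SPEED * TICK
    return (x, y)

step((0.0, 0.0), ["LEFT"])          # moves 0.1m left this tick, no more
step((0.0, 0.0), ["LEFT 9999"])     # unrecognized command: position unchanged
```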
At home, Minecraft runs on my home server (with a small Athlon CPU) and it fails to keep up with the required speed of the game, so the server log will show messages like "server clock running behind, skipping 58 ticks" every so often.
An observable effect of this is that you have to hold the right mouse button just a little bit longer than the actual animation when eating stuff. And when you look at the sun, you will see it moving forward continuously (as calculated by the client), but skipping back a tiny bit every few seconds (when the client adjusts for the server's time drift).
Porting that same idea to PCs gets you into trouble almost instantly.
Hence my question - is this a console port gone horribly wrong?
It could have been developed for console first.
Sounds like this would though, if players can't enjoy a fair experience.
They promote the shit out of games before release and expect them to expire a few weeks after. Reducing operational costs. They also don't have to develop and maintain an anti-cheat, a better server-side structure, and they can close down most servers.
I think it was an unexpected consequence of a bad console-to-PC port that proved to be quite lucrative. And now they know people will keep buying no matter what, hence the new disposable PC game business model.
Basically, if you play on a PC you're screwed. Another reason not to buy a console or support games like the division on the pc.