It's impossible to prevent cheating from the server side alone. Something like an aimbot can operate purely on information you need to have as a client (to render the other players on the screen), and still confer a huge advantage because it can respond faster than any human can.
I think server-side statistical analysis can go a long way toward detecting stuff like that. Obviously it's always a cat-and-mouse game between devs and cheaters, and there are always workarounds, but there's a lot more the devs could be doing without relying on invasive client-side detection.
You can tune the aimbot to be as good as the server allows, maybe with a bit of variation to throw it off.
And realistically, some real non-cheating players will by chance just have similar statistics to bots, especially since the bots will start doing their best to mimic real players.
Also, many players don't need to cheat all the time; just in that critical moment when it really matters. Didn't Magnus Carlsen say he only needs a single move from a chess computer at the right moment to be virtually guaranteed a win? Something like that probably applies to many people and fields. This is even harder to detect with just statistics.
This also reminds me of the "you can't react in less than 100 ms, and if you start your sprint faster than that after the starting pistol then you're disqualified"-type rule they have in the Olympics – some people can consistently react faster, and there are a bunch of false positives. Not great.
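The false-positive argument is easy to quantify. A minimal sketch, assuming (purely for illustration, these are not measured numbers) that human reaction times are roughly normal with mean 250 ms and standard deviation 40 ms:

```python
# Sketch: false-positive rate of a fixed reaction-time cutoff.
# The mean/std below are illustrative assumptions, not measured data.
from statistics import NormalDist

mean_ms, std_ms = 250.0, 40.0
cutoff_ms = 100.0  # "faster than this must be a false start / cheat"

# Fraction of legitimate reactions that would still be flagged:
false_positive_rate = NormalDist(mean_ms, std_ms).cdf(cutoff_ms)
print(f"{false_positive_rate:.2e}")
```

Even a tiny tail probability turns into a steady stream of false flags once you evaluate millions of starts (or shots).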
> Also, many players don't need to cheat all the time; just in that critical moment when it really matters. Didn't Magnus Carlsen say he only needs a single move from a chess computer at the right moment to be virtually guaranteed a win? Something like that probably applies to many people and fields. This is even harder to detect with just statistics.
The difference is that IRL chess and a typical FPS game have very different availability of datasets. IRL chess has both fewer moves per game, and fewer games played in short succession than typical FPS games. Also, with FPS games there is a single metric to evaluate -- the shot landed or missed -- compared with chess where moves are ranked on a scale.
So I'd argue that it would be much easier to do a statistical model to predict a cheating aimbot than it would a cheating IRL chess player. I don't believe Magnus's proposition holds for prolific online chess players when they do dozens or more blitz/bullet games in a single day.
> Didn't Magnus Carlsen say he only needs a single move from a chess computer in the right moment to be virtually guaranteed win?
That's because he's an elite chess player. Him cheating once per game could make the difference between being number 1 or number 10 but either way he's up there.
But for you or me, cheating once per game wouldn't make a difference. We'd still be ranked as nobody plebs. To get ranked high enough for people to know our names we would have to cheat dozens of times a game, and experienced players would easily peg us as cheaters.
Try cheating on chess.com, if you cheat enough to make a meaningful difference their servers will automatically nail you with statistics.
I've always wondered about this too. It should be pretty easy to recognize statistical outliers. I'm sure cheaters would start to adapt, but that adaptation might start to look more in line with normal skill levels, so at least the game wouldn't be utterly ruined.
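The simplest version of this is just an outlier test on a per-player metric. A toy sketch with made-up headshot rates, using a leave-one-out z-score so the outlier doesn't inflate its own baseline (a real system would use many metrics, far more players, and more robust statistics):

```python
# Sketch: leave-one-out z-score outlier detection on a single metric.
# All names and rates are hypothetical.
from statistics import mean, stdev

headshot_rates = {
    "alice": 0.18, "bob": 0.22, "carol": 0.25,
    "dave": 0.19, "eve": 0.71,  # eve is suspiciously accurate
}

flagged = []
for player, rate in headshot_rates.items():
    # Baseline is everyone *except* the player being tested.
    others = [r for p, r in headshot_rates.items() if p != player]
    mu, sigma = mean(others), stdev(others)
    if (rate - mu) / sigma > 3.0:
        flagged.append(player)

print(flagged)  # ['eve']
```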
Valve has been doing this kind of thing in Counter-Strike for almost a decade.
They put likely statistical outliers into their own matchmaking pool, so cheaters end up playing against each other. Of course, genuinely good players can still end up there, and there are (or at least used to be) real humans reviewing those games to see whether someone is actually cheating. It is not a simple task, since you can cheat just enough to be slightly better than everyone else, and that is enough.
The problem is that most cheaters don't just go full aimbot and track people through walls. That is a surefire way to make sure your account gets reported, reviewed, and banned regardless of what anti-cheat is in place.
Serial cheaters cheat just enough to give themselves an edge without making it obvious to the people watching them. Just by looking at their stats, it can become very difficult (though not impossible) to differentiate a cheater from a pro player. This difficulty increases the odds of a false positive, necessitating a higher detection threshold to avoid banning innocent players.
This post is so interesting because it highlights the people that don't know anything about the requirements or state of cheats/anticheat. What you're describing is 10 years out of date. Every modern cheat has a toggle, and (almost) every modern cheater masks augmented behavior with misses/native behavior.
This thread is full of armchair developers who see a problem and immediately think, "Oh, it's easy, just do this simple thing I just thought of," as if there haven't been billions of dollars and decades of research spent on this problem.
According to the latest study [1] estimating how much money cheat developers make annually, the upper limit is ~$75M. A very liberal estimate of anti-cheat spending might be $100M annually, and that covers not just research but the actual cost of tackling cheats (extra compute, human reviewers, etc.). Even so, to reach billions ($2-3B) you would have to assume that gaming companies have been spending an average of $100M per year, on research alone, since the beginning of the personal-computer era. That is hard to believe even under the most liberal interpretation.
So I think it is fair to say that there haven't been billions of dollars of research spent on this problem.
That's only looking at western audiences. In 2020, Tencent said that the cheating market in China is worth $293M annually [1]. In China there are many individual games making billions in annual revenue. PUBG bans over a hundred thousand cheaters every week. I don't think adding up to billions is too farfetched, if you count globally over the decades, although it'd be close.
There is also the opportunity cost of everything cheating prevents from happening: development would be much faster, and more types of games could be made.
I think the problem is that that kind of work requires a good deal of developer resources for a long time. What company wants to pay upkeep on a shipped product? You could save hundreds of thousands of dollars a year by shipping a rootkit to players and not worrying about server security.
It only needs to be good enough that people keep buying Prime when their old account gets banned. There is a good reason it exists, also from a cheating perspective.
Client <-> Server architecture can still take you a long way. Culling what you send to the client and relying less on client-side "hiding" of state, server authoritative actions with client-side prediction, etc.
At the end of the day someone could be using hardware "cheats" but you can get down to a pretty good spot to stop or disincentivize cheaters without running rootkits on their devices.
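To illustrate the culling idea: the server can simply refuse to replicate entities a client couldn't plausibly see, so a wallhack has nothing to read. A minimal sketch with a hypothetical distance-only visibility check (a real server would also do occlusion/line-of-sight tests and add hysteresis so entities don't pop in and out at the boundary):

```python
# Sketch of server-side interest culling. Names, coordinates, and the
# view distance are all hypothetical.
from dataclasses import dataclass
import math

@dataclass
class Entity:
    name: str
    x: float
    y: float

VIEW_DISTANCE = 50.0

def visible_entities(viewer: Entity, world: list[Entity]) -> list[Entity]:
    """Return only the entities close enough to replicate to this client."""
    out = []
    for e in world:
        if e is viewer:
            continue
        if math.hypot(e.x - viewer.x, e.y - viewer.y) <= VIEW_DISTANCE:
            out.append(e)
    return out

world = [Entity("p1", 0, 0), Entity("p2", 10, 10), Entity("p3", 500, 500)]
print([e.name for e in visible_entities(world[0], world)])  # ['p2']
```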
You don't need a "hardware cheat"; just a program that reads the game state out of memory. This is nothing new, it's already how many cheating tools work, and it's exactly what all these anti-cheat systems are designed to prevent.
Even having CE installed on your machine - or software like IDA or Olly back in the day - is enough for some games to immediately permaban you. In some cases, having virtualization enabled and having VM software on disk (VMware, wsl2, etc.) can trip up some anticheats.
The average player isn't a developer and doesn't need such tools, so some game devs err on the side of caution. The false positives are a minuscule and acceptable fraction of players.
Latency significantly reduces the effectiveness of culling via the server. There will always be a place for client side anti-cheat if games are running on players' computers.
Funnily enough, using GeForce Now, for example, prevents almost all kinds of cheats. Maybe the future of competitive gaming is that you only use a remote client for a remote server hosted by the game company.
There are hardware cheating setups that utilize an external camera+device with basic image recognition and spoof a normal mouse over USB. From software's perspective, all you see on the client side is normal mouse inputs from a Logitech or razer mouse.
At the very least, this is less capable than wallhacks from reading memory.
Yes, it is true that you can still use optics, but everything memory-related is always much more effective, and it can be completely prevented. No need to use rootkits.
I would say it will take a few more years before people have good enough hardware for AI to be used in real time. You can use OpenCV and train against a specific game to get decent performance with image recognition, but it is not that reliable.
Yes, but even some cheats are possible through streaming. Basic things like scripted no-recoil all the way to aimbots based on image recognition. People are even using AI to recognize and highlight players on your screen - and even some built into monitors!
On the other hand, an aimbot can operate purely on information you /need/ to send in and out of the physical machine (input peripherals and the screen), so there's that...
It makes it way easier to detect. If a player can use a wallhack to pre-move their aim near the point where the aimbot would take it, they can hide the action much more effectively. If they're constantly doing 180 no-scopes, you've got a pretty good indication something is wrong.
Also if your guns aren't _perfectly_ accurate then the aimbot can't actually predict much of anything.
They even claim to be able to fingerprint players according to their playstyle, thwarting all methods of ban evasion. Skepticism should be abundant here, but this is one of the oldest tricks in ML: categorization/clustering. I'm cautiously hopeful.
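Stripped of the hype, playstyle fingerprinting amounts to comparing a new account's behavioral feature vector against those of known banned accounts. A toy nearest-neighbor sketch, where the features, values, and distance threshold are all hypothetical (a real system would use many more dimensions, normalize the feature scales, and do proper clustering):

```python
# Sketch: nearest-neighbor match of a playstyle vector against
# fingerprints of previously banned players. All data is made up.
import math

# (avg reaction ms, headshot rate, avg flick angle deg) per banned player
banned_fingerprints = {
    "banned_1": (180.0, 0.65, 45.0),
    "banned_2": (240.0, 0.30, 20.0),
}

def closest_match(features, fingerprints, max_distance=10.0):
    """Return the banned identity whose fingerprint is nearest, if close enough."""
    best, best_d = None, math.inf
    for name, fp in fingerprints.items():
        d = math.dist(features, fp)
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= max_distance else None

new_account = (182.0, 0.66, 44.0)  # suspiciously close to banned_1
print(closest_match(new_account, banned_fingerprints))  # banned_1
```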
It should be. If a server firehose streams all players' network data to an analysis service, it should be able to detect patterns of impossible accuracy and response time, even though there is some margin for error due to e.g. lag and packet loss. (IIRC, intentional lag/packet loss is one strategy cheaters use to obfuscate aimbots: generating movements that shoot someone in the head, but holding them back for a second or so, so that in theory a competent player could have made the required motions within a second instead of a hundredth of one.)
Without kernel level anti cheat you can detect (some) other usermode cheats, but not kernel level cheats. With kernel level anticheat, you can detect the vast majority of other kernel level cheats. Vanguard is effective enough that most successful cheaters are using external devices and DMA to bypass the kernel altogether (or they just use Macs because Apple doesn't allow Vanguard). And despite Riot's insistence to the contrary, they have not "detected" DMA cheats.
Advanced DMA/IOMMU attacks are hardware-, software-, and firmware-specific.
In order to detect them, you would have to do a ton of very expensive work, all while risking destroying the customer's software, firmware, and hardware.
Good luck explaining to the judge what you did.
If you have a large enough player base to sample, you can determine who is cheating with math. EA Fairplay is pretty good, Steam's VAC is good, and neither is some kernel-level nonsense.
VAC is so not-good that there are not one but two popular third-party matchmaking services for Valve's games whose main selling point is much stronger (read: more invasive) anti-cheat than VAC, and one of them even charges a subscription to play, which highly skilled players gladly pay to get away from the cheaters in high-rank VAC servers.
To some degree, yes. But there are actually many cheaters that intentionally don't play perfectly to avoid detection. That way they appear higher skilled but still within human range.