> (Rockstar really should’ve made this a separate launch option like other games do)
This is what happens when a producer's/product manager's cherished KPIs come before UX.
In their mind, adding a toggle in the launcher would lead to lower engagement and player acquisition.
We, the players, fail to recognize how our gaming experience can be enhanced by using social features like leaderboards, guilds, or in-game chat. We are not enlightened.
Think about all the fun and exciting connections you'd miss out on if all the social crap was off by default or in an easily accessible place.
I'm honestly surprised it's a command line option. My guess is that the requirement originated externally.
Not sure whether, at this point, it costs more for SpaceX to get to Mars or for Rockstar Games to develop GTA 6. AAA budgets are insane (with IMO meagre results - I don't like most of them).
Yeah, it's just me being an idealist and projecting.
I acknowledge what the analytics show, but always allude to the hypothetical casual loner segment who we lack data on because we pushed them away or we don't measure things relevant to them.
I'm a boomer millennial, or whatever we're called now, and never took to online gaming, so I'm part of this segment.
Casual loners are irrelevant to the monetization and in-game economy people, resulting in relegation to second-class status.
Until someone figures out a way to milk this segment for that juicy recurring revenue, consumable$, $kins, etc., we must accept our fate as largely an afterthought.
Call of Duty 6 launches the single-player campaign from the main launcher, and I noticed they advertise the anti-cheat stuff being enabled (I forget what it's called). For a single-player game. Smh.
This is the exact reason why I started with streaming services for games (GeForce Now/Boosteroid/Game Pass).
Besides the anti-cheat thing, I also spend less time waiting for updates. And this way I can still play these games with my buddies.
Yes. It's kind of an odd situation, because it's one where it's a benefit to me if other people are running anti-cheat. A limited sort of remote attestation that the people you're playing with aren't running certain kinds of software that peeks into or alters the memory image of the game or its graphics drivers.
I would bet 90% of people here have at least one other laptop if they have a gaming PC. If you're concerned about being compromised by rootkits, just do your taxes on that.
What's even worse is with this update they completely cut off Linux users. It had been performing better than on W*ndows but they had to ruin the game.
Surely this is foreshadowing the future of GTA VI, which will have the same problem of being unplayable.
With the way that computer vision and AI continue to improve, I imagine that we will soon have completely external and undetectable cheating peripherals that simply capture the screen directly from the display output and pass inputs by mimicking a USB human input device.
This won't provide all the same capabilities as cheats that hook into the game process, such as wallhacks, but it would be possible to build a superhuman aimbot with such an approach.
We already have external "radar" cheats that use the game's stereo sound to give the direction that a certain sound (such as footsteps) came from.
> With the way that computer vision and AI continue to improve, I imagine that we will soon have completely external and undetectable cheating peripherals that simply capture the screen directly from the display output and pass inputs by mimicking a USB human input device.
This already exists. You can stream your screen to another machine running image recognition and pass your mouse input through a controller that injects auxiliary input (there are off-the-shelf products like kmbox, you can make your own as well).
However, it is very important to understand that only a tiny percentage of cheaters in games end up being determined enough to go through hoops to purchase hardware for it (it's much more expensive and not as simple as getting instant gratification by double-clicking an executable). It's basically considered a win to push people into needing to go to such lengths to cheat without getting banned.
For hacks that don't access program memory directly, would external hardware make things less detectable? I don't know how anti-cheat programs work but I'd be surprised if they banned every skilled player that happens to be running AHK and OBS. More likely they work with heuristics that try to detect super-human mouse movements, precision/speed-wise.
And all of that for something that is worse - software cheats usually get access to more information than just the pixels rendered to the screen. Seeing through solid walls and stuff like that.
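Purely as an illustration of the kind of heuristic mentioned above, here is a rough sketch of a server-side check that flags view rotations faster than a plausible human flick. Every name and threshold below is hypothetical, not taken from any real anti-cheat:

```python
# Hypothetical server-side heuristic: flag view rotations faster than a human flick.
# Thresholds and field names are invented for illustration.
MAX_HUMAN_FLICK_DEG_PER_S = 3000.0   # generous upper bound for a human flick
SUSPICIOUS_TICKS_THRESHOLD = 5       # how many violations before we flag a player

def angular_delta(a: float, b: float) -> float:
    """Smallest absolute difference between two yaw angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def flag_superhuman_flicks(yaw_samples: list[float], tick_rate_hz: float) -> bool:
    """yaw_samples holds one yaw reading per server tick for a single player."""
    dt = 1.0 / tick_rate_hz
    violations = 0
    for prev, cur in zip(yaw_samples, yaw_samples[1:]):
        speed_deg_per_s = angular_delta(prev, cur) / dt
        if speed_deg_per_s > MAX_HUMAN_FLICK_DEG_PER_S:
            violations += 1
    return violations >= SUSPICIOUS_TICKS_THRESHOLD
```

A cheat that caps its own rotation speed just under whatever threshold you pick sails straight through a check like this, which is why heuristics of this kind only catch the blatant stuff.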
It is not acceptable on Linux. Apple will also not accept that shit any further; that said, Apple lost relevance in gaming with Metal and the M-series processors (both mean a lot of incompatibility). And Microsoft is regretting every choice in this regard.
But that is a usual pattern. Microsoft makes bad decisions and everyone suffers. Even Linux. There is a reason why closed-source kernel modules mark Linux as tainted: the system is not trustworthy.
It is the duty of game developers to secure their games themselves, not to manipulate user devices. Forcefully doing stupid and dangerous things because you cannot achieve your task in a safe way is not acceptable.
It's hilarious that people actually think the publishers will ever have two hoots to give about notions like that. FWIW I agree we'd all be better off without those things, but the entitlement to believe private businesses should run on your personal whims and that developers have a "duty" to make things only as you prefer is gobsmacking. I am always left wondering what commenters like this think of themselves.
The reason games companies reach for kernel-level anti-cheat (KLA) is not because they're dumb and can't be bothered to secure their network protocols on the server side; it's because they don't want to have to hire an inordinate number of human reviewers to make unreliable decisions on whether someone is cheating or not in their game.
While KLA is fundamentally flawed (DMA and even CV based cheats are becoming more popular as a result of KLA and they still give cheaters a significant (but now even harder to detect) advantage) it solves the problem of obvious and even most kinds of subtle cheating.
Attempting to detect cheating once inputs are being sent to your server (which is within your domain of control and on which you can implement non-intrusive anti-cheat) is very difficult to do reliably. An inexperienced player will make slow, delayed inputs. A highly experienced player will have reactions which are an order of magnitude faster (and in many cases faster than the speed of thought because of muscle memory). If you want to make a working but no longer detectable cheat, all you need to do is spend a bit of time and effort programming human limits of reaction time into your cheat and making sure all inputs look realistic (again, more limits).
At the end of the day, you can make a cheat which gives you a significant advantage without it being actually detectable by any statistical methods on the server.
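To make this limitation concrete, here is a minimal sketch of a server-side reaction-time screen, assuming the server can reconstruct from its own authoritative state how long after a target became visible the player fired. The numbers and names are invented for illustration:

```python
from statistics import median

# Hypothetical reaction-time screen: for each engagement, the server reconstructs
# the time (ms) between "target became visible to this player" and "player fired".
# Thresholds are illustrative only.
HUMAN_REACTION_FLOOR_MS = 120   # rough floor for visual reaction time
MIN_SAMPLES = 30                # don't judge on a handful of engagements

def reaction_time_suspicious(reaction_times_ms: list[float]) -> bool:
    if len(reaction_times_ms) < MIN_SAMPLES:
        return False
    below_floor = sum(1 for t in reaction_times_ms if t < HUMAN_REACTION_FLOOR_MS)
    # A real human dips under the floor occasionally (pre-aiming, luck);
    # a naive aimbot does it on nearly every engagement.
    return (below_floor / len(reaction_times_ms) > 0.5
            or median(reaction_times_ms) < HUMAN_REACTION_FLOOR_MS)
```

A cheat that simply adds a randomized 150-250 ms delay before firing produces a perfectly human-looking distribution here, which is exactly the evasion described above.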
At this point you might attempt to reach for AI but undoubtedly that will require human oversight or you will get false positives.
So, in summary, even if you were to design your game around server-side rendering and server-side input processing, forcing your players to effectively play over a remote desktop connection (which is impractical for any fast paced competitive multiplayer game due to latency issues but let's pretend those don't exist for a minute), you will still get cheaters with snap-to-head or recoil compensation or auto-fire making a significant impact on games. Heck, there's even the idea of using sounds (which need to be pretty accurate so human players can utilise them to determine where enemy players are) to implement a rudimentary wallhack.
This is just the nature of FPS games and why games companies end up implementing KLA for these games. The way to make an FPS game un-cheatable is to make a different game where cheating is more difficult or impossible just by nature of the format.
Want FPS without cheaters? Encourage people to do DIY matchmaking again, DIY server hosting and DIY administration. Except that "this doesn't scale". Neither does human review. Neither does server side rendering. The core reasons why game companies do KLA is that players will pay for games with KLA but won't pay for games without it. As much as I think Microsoft is one of the worst companies in existence, in this case I don't think they or KLA developers are to blame. KLA developers are simply doing what players want them to do and Microsoft is only allowing what their end-users want them to allow. If Microsoft removes KLA, it will be by replacing it with userspace code with hardware attestation support, it won't be by killing the concept of intrusive anti-cheat. All Microsoft is doing is trying to re-design the tools to cover their own ass.
Fundamentally, KLA has pushed cheating further into DMA and CV territory. This means that more obvious and annoying forms of cheating, undetectable by KLA, are probably going to become more common soon. At this point the options are to have these games be console-only with blessed hardware and hardware attestation. And even that has flaws (as described). Eventually it will just be impossible to play a game on a public server without cheating. Maybe this will force people away from these types of games, or towards private lobbies. I don't know what the future holds here.
UEFI is here, browser DRM for video is here, browser DRM for text+ads on ordinary web pages is just around the corner, government won't ask anyone - the tools will be added at the ISP level, if they're not already installed and operational.
I built a separate Arch Linux box just for Steam gaming. I will never log into any of my sensitive accounts -- email, banking, etc. -- on that machine. It's a Framework laptop so I can physically keep the camera and microphone disconnected. I basically treat it like a public terminal.
Do you truly expect any Steam games to have anything like a rootkit that'd exfiltrate your credentials?
I feel like if this were the case, literally anything I install on my PC would be suspect. Installing ssh would be a much scarier thing than a random Steam game.
"Downfall, a fan expansion for the popular Slay the Spire indie strategy game, was breached on Christmas Day to push Epsilon information stealer malware using the Steam update system.
Once installed on a compromised computer, the malware will collect cookies and saved passwords and credit cards from web browsers (Google Chrome, Yandex, Microsoft Edge, Mozilla Firefox, Brave, Vivaldi), as well as Steam and Discord info.
It will also look for documents containing 'password' in the filenames and for more credentials, including the local Windows login and Telegram."
That's not a steam game, that's a user mod (read: random binary downloaded from the internet and executed). Also, it doesn't need kernel level access to do any of that stuff, it can get by just fine with normal application level permissions.
This is no different to downloading a random binary off the internet and being surprised it's malicious.
Unity has some kind of data collection that can be used for analytics and advertising, so you might need to opt out of that in a Unity game. I think that came up in KSP as well.
client inputs have to be trusted, and there is no provenance. the kernel has no visibility of inputs.
i’m shipping a 100 player matchmaking game now. clients tick at 360hz, server ticks at 120hz. fair up to 60 ping, which covers entire continents. servers are metal, not vms. epyc 4244p with 2Gbps egress, 1 server per 15 minute game. mitigations=off and nosmt on all clients and the server.
i love steam, but won’t be releasing this there.
it’s reboot-to-play, a modified archlinux iso that boots directly into the game from a usb drive.
i control not only the kernel, but the os, and every running program. you don’t get cortana. you don’t get discord. you don’t get spotify. you get the game. for the duration of play, your pc becomes an arcade machine.
still, this is not enough.
to play ranked, you're going to have to get a handcam over your left shoulder. it will see head orientation, both hands, full mousepad, and screen. you're also going to use fixed mouse speed, mousepad size, and monitor size. reviewing any player's inputs will look familiar, since everyone is playing with identical settings and setup.
kernel anticheat is not enough. we need a reproducible full os setup, down to running programs and network connections.
even that is not enough. we need provenance of user inputs hooked right up to the game replay system, so you or anyone can review engagements from any party's perspective.
obviously this should all be opt in. not everyone wants to play ranked, and whole-os anticheat should help even without input provenance.
have you ever wondered if you died to a cheater or a god? do you wish you could never wonder again? i do. soon, i won’t.
I wouldn't be so sure. Life has taught me that people will accept damn near anything in order to get the entertainment they want. If you worked your way up to the point OP describes slowly, over time, I wouldn't be at all surprised if people shrugged and said "it's just what you have to do if you want to play those games".
to be honest, all of this would be worth it just to never have to listen to fortnite streamers whine about cheating/bugs/ping/serverlag/stormsurge/etc ever again. i understand why they whine, but i just don't want to hear it anymore.
i sympathize with their pain, and i have the solution.
What an incredible testament to the lengths men can be driven by spite.
I would like to try your game, sir. But my problem is the people trying to take over my computer. I am not going to solve that by letting you take over my computer.
Let's talk system requirements: could I get away with running it on an old junker laptop?
possibly, depending on the gpu. wickedengine requires a modernish gpu.
spite was definitely a part of it. at a certain point while playing and watching fortnite solo build, i wondered why there are so many bugs and if i could fix them. i wanted to understand why ping advantage and storm surge have to exist. this game was my journey to find out. it has been a privilege and a joy every single day.
as far as security, you're asking the right questions. the typical gamer showing up in my discord doesn't, so i guide them to those questions.
if you boot my game, you've booted an archlinux iso. what could i do? i could read/wipe your disks, so you should make sure they are encrypted/backed up. i could probe your network. i could maybe even install bios-level persistent malware. i could do anything a userspace app with admin can do.
none of this is different from a windows app as soon as you click yes on the admin popup thingy which every multiplayer game needs.
reboot-to-play is better, because assuming you use bitlocker, i can't read every file on your c drive, unlike every game you've installed from steam. i also can't mess up your windows registry, or any other os config.
the reboot-to-play build process is not yet open, but soon will be. even then, the game binary it will download and run is not open.
this and more is explored in the faq on the game's site, let me know if you want more answers up there!
the purpose of reboot-to-play is not to corrupt your disks. its purpose is to get all players into an identical state, for fairness, and to avoid finicky windows tweaking for performance. everyone’s pc is a special snowflake. what we want for multiplayer is identical arcade machines. every time you boot, you're in the correct state.
running software is ultimately about trust. you can trust epic, or riot. you can trust steam or apple to vet user-provided apps. you can trust me.
do you want to trust me? that's up to you. i would say, wait for launch, watch some streams and videos, and see if it looks fun!
launching soon. working through final matchmaking issues now.
All of this, when you can just play on console.
I know cheaters theoretically exist there, but in low enough numbers on my PS5 games that they don't impact my user experience.
Kudos to your insane game plan. Gonna be hard to get any marketing from Twitch streamers though.
> ps5 can’t play 360hz and can’t use performance mode graphics.
This [0] is your game. Without running it (because I'm not installing your OS on my machine, no offence) there's no reason that wouldn't run at 360Hz on a PS5. A PS5 is an 8 core machine with a dedicated GPU; it's going to be vastly more powerful (and has the advantage of being standardised hardware) than the random beater laptops your players are going to run. If you're talking about rendering at 360Hz - How many people realistically have that sort of monitor? And if they need to splash out £250 for it, we're getting close to the price of a console _anyway_ where you can play other games too.
> consoles are great, but esports will always be on pc.
Except for fighting games, sports games, and most importantly CoD.
> input cheats are common on pc and console, there isn’t a difference anymore.
Theoretically, yes. Practically speaking, input cheats are widespread on PC, and non-existent on console. (That's not to say XIM and co don't exist, but they're nowhere near the adoption level that's seen on PC.)
i mean, i haven't built an os. it's just standard archlinux with minor conf. my binaries will run, but they would regardless of the platform.
i can't tell users to go into bios and turn off smt. i can tell them to boot an iso, and have it preset to do that via kernel cmdline. owning the os means i can tune the network stack, the os, the kernel, anything that helps performance. the game itself has minimal config, only keybinds.
i'm really targeting one player base: fortnite ranked/competitive players.
fortnite competitive does exist on console, but barely. it's an afterthought, and the game can barely run, even on ps5. most importantly, it doesn't let you set graphics to the settings every pro uses. on console you have default high graphics. all serious players migrate to pc.
the game will run on beater laptops, but it's not a great experience. the players i'm targeting all have desktop pcs and 1080p240 monitors already. if i can get 1% of 1% of fortnite ranked players to try, that would be glorious. if it's just me hanging out with 99 aws servers, that's fine too.
i want there to be an on ramp for new players, but optimizing for low end spec is not a goal. especially with 100 players all standing next to each other, low end hardware falls over. will probably have to limit to 50 or even fewer players depending on how much low end hardware shows up to play.
input cheats are common on console for fortnite, i'm not sure about other games. they read video from hdmi and mitm the controller outputs. there have been multiple generations of that, some of it works on pc too, and then pc has a whole new host of similar tech.
input provenance is the only real solution that i can see. the rest is shadows on a cave wall.
How does that scale? Handcam anticheat works well for exams and Olympiads where there's a limited number of people and plenty of time to review footage after it's over, but I can't imagine it would work well on Fortnite or Counter-Strike unless you staff entire offices reviewing footage full-time. Though I'm also doubtful there'd be many people willing to run an untrusted OS on their computer just for a random game, so maybe you wouldn't have that much footage to begin with.
when you lose a fight, you drop into replays without leaving the game. scrub back like in a video editor, and watch the engagement from the other player's perspective. the handcam footage is available here. cheating should be obvious. report or no, then return to the game, where you can spectate the rest of the match like a ghost.
fortnite replays are commonly used by pros, but less so by more casual players. their main issue is that you have to leave the game to get to replays. they also only kind of work, and don't show full server fidelity. our server ticks at 120hz, and replays are full fidelity.
lots to figure out still, but that's the idea. ranked won't launch for a while, but i wanted to design around anti-cheat from day 1.
failure is a distinct possibility with this game, that's ok. success would be interesting too.
i'm not quitting the day job to build this game. if it never makes a dime that's ok.
i started this to study fortnite and understand its tradeoffs. turns out, it's mostly just tech debt and accidents of history. i wanted to see if i could do better, and i could.
I hate to say this, but a large percentage (in fact, I believe a majority) of gamers simply do not care about invasive anti-cheats. Right now Counter-Strike players are mostly begging Valve for kernel-level anti-cheat since their current solution isn't working at all. If anything, this warning will actually make many players more impressed with the game. That said, more consumer information is almost always better in any case, especially here, considering that this is not a requirement of law but of a private company.
As a counter strike player, I definitely shy away from the invasive anti cheat stuff… but I’d let valve inject it into my veins if it meant I could actually play and not suspect everyone of cheating all the time. Mostly because Valve has earned my trust. I won’t install games from other companies using similarly invasive techniques though.
Valve wouldn't purposefully backdoor you for nefarious purposes. But any such code is not nearly reviewed enough to be sure it is free of unintentional backdoors that could be exploited by third parties.
While I trust Valve, I'm not willing to mess with my workstation to play.
Also, there are hardware cheats, so what I need is not a rootkit on my machine but a server-side thing that properly weeds bad players out through reports/trust and automated bans.
> a server-side thing that properly weeds bad players out through reports/trust and automated bans.
No. No no no.
Automated bans via the report system is very well-known to be abused.
Even if you implement a "trust" system where initially all your reports are manually verified by game staff, and only once it's determined your reports are reliable do they start being acted on automatically, all it takes is a player to just be "good" until their trust is high enough, then start reporting people who don't actually deserve it.
And I'm not convinced that server-side anti-cheat can be effective. You have to rely entirely on heuristics. Sure, a simple aim-bot that instantly snaps someone's aim right on someone's head might be detectable, but one that simply lets you see through walls certainly won't be if the player doesn't make it stupidly obvious by pre-aiming around every corner.
Yeah, I generally trust Valve but gaming is definitely not important enough to me to give them kernel access to my system. I’m sure many gamers disagree with me though.
Yep. I would call myself a privacy-focused person, but given that Windows is the primary platform for PC gaming, and I trust Microsoft about as far as I can throw their corporate headquarters, the platform is already compromised. Treat it accordingly, play your games. Maybe watch your adult films and write your memoirs on a different system than your gaming rig.
I get the argument, but if that is more than a strawman argument to you, I am bewildered. Making a network connection is infinitely less problematic than having root-level access to the kernel (translate to Windows language for NT).
The secondary effect is that business will stop using processes and chemicals which require them to carry this warning. You've effectively created a new market segment.
Are the labels annoying to the point of comedy? Sure, but it's not /your/ behavior we were trying to modify.
There’s a lot of other stuff in the video but if you skip the robot building parts at the beginning he talks about an anti cheat system he developed with another person.
Behavioral analysis (the thing he's talking about) doesn't work that well and has a hard precision limit due to the nature of online gaming. What the player sees, what the server sees, and what other players see are entirely different things. I'm not even talking about plausibly deniable things like visual sound location.
Nobody's using complicated stuff like this in practice though, as there are easier methods. But of course this path can be taken, and it's not possible to block easily.
It's not always easy to tell if it's just a player playing weirdly or a mistuned AI though. Maybe the player just has too low a mouse sensitivity so their aim always lags a little, or maybe it's actually AI. There is no easy way to tell, and it requires manual judgement in a lot of cases.
Do you have some examples of good anti-cheats that are not kernel-level? Do you have any that are as good as Riot's Vanguard? I'd prefer examples of FPS games since these are the most mechanically skill based compared to other genres that have more strategy, but would like to hear any examples you are thinking of. Lastly, if you say server-side, that may work, but many companies don't seem interested in it due to the cost, at least IIUC.
As someone that plays CS2 and Valorant regularly...
Vanguard hasn't been effective for a while now. The cheating situation is a lot worse than CS in my experience, but every discussion gets shut down because... well... it's Vanguard.
With CS2 I have talked to many players about this and everyone says the same thing: "There's a very noticeable decline in cheaters above 10k Elo."... personally I have pushed beyond 15k and briefly above 20k Elo and the number of cheaters has steadily declined (although less obvious cheats, eg. wallhack, are probably more common at that level) - for Valorant it has pretty much stayed at a constant amount of "cheatiness" across the ranks.
CS actually has a rich history of features, functions, services?... that aren't strictly anti-cheat...
Overwatch gave players the option to "police" other players' replays - this wasn't only against cheating, but also griefing.
Prime? Is it still even a thing? It was great when CSGO went F2P... all the cheaters just annoyed the non-prime players (F2P).
The ominous Trust factor which is probably the single most effective piece in making my personal experience great. But there's no real way to tell?
Also, VacNet - which is running? is AI based? banning players? lowering their trust factor?... with Valve there's no real way to tell most of the time, but it's probably existent in some shape, way or form.
Not to say that CS2 has solved cheating, it's far from it - but neither has Valorant.
I have a very hard time believing that the rate of cheaters goes down in high elo. IIRC the new CS2 leaderboard still regularly features cheat companies on it (eg. "config by [cheat dev]" as the leaderboard name). I myself do not have any data to back up that claim, but yours completely goes against what I have experienced.
I think the point about wallhack being more common in higher elo is more likely. I would add that some forms of trigger botting and recoil control cheats are actually more difficult to tell than wallhacking. Spinbotters don't get very high elo because they get mass reported because of how blatant they are, likely not due to VAC. I would need some real evidence to believe that claim (although as I said I similarly have no evidence myself to convince you to accept my claim).
One thing I can say is that I do frequently meet cheaters in CS these days, and the issue has gotten so bad in my experience that many cheaters even announce at the start of the game that they eg. have wallhack. Or one team member will turn on cheats if a game is getting close towards the end. Also, the main reason FACEIT exists is for its anti-cheat, and on FACEIT there are almost no reports of cheating, and it's a big deal when it happens. If VAC were really working now, we would see more people leaving FACEIT. I must ask when you started playing CS? Because the only way your post makes sense to me is if you started playing around the time when CS2 came out, which indeed did have more cheaters than it does now, but that was truly an exceptional level of cheating and I don't think that is a fair point of comparison, especially as a comparison to Vanguard.
I admit to taking claims about Vanguard at face value and I've never played Valorant (in part due to Vanguard, as I don't want to install a rootkit). But what you say about Vanguard also completely goes against what I have heard about it.
But yeah, I can agree: my friends say CS2 is full of cheaters, yet I've played at 7-12k rating and ran into only a few cheaters throughout this whole year of CS2.
and they say they keep playing Valorant because there's way less cheaters than CS2.
My question would be can't the netcode be improved to prevent this in the first place? The fact that all players receive full game state enables this. In the early 2000s this made sense. Does it still today?
That will only protect you against wall hacks. This is a strategy known as fog of war. The server will not send the positions of players far from you. However, you still need to send the positions of players near you, but still behind walls, otherwise lag compensation won't work properly.
This doesn't protect you against trigger bots (shoot automatically when you put your mouse on a target), aim bots (snap to targets, ranging from obvious hacks to very minute adjustments), and others.
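A minimal sketch of the fog-of-war idea just described: the server only replicates enemies close enough that they could plausibly become relevant before the next update arrives, with a margin for lag compensation. The distance test, margins, and names are simplifications invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Player:
    x: float
    y: float
    speed: float  # max movement speed, units per second

# Illustrative knobs, not from any real engine.
MAX_LATENCY_S = 0.150        # worst round-trip we want to compensate for
RELEVANCE_RADIUS = 50.0      # distance at which an enemy could start to matter

def dist(a: Player, b: Player) -> float:
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def entities_to_replicate(viewer: Player, others: list[Player]) -> list[Player]:
    """Send only enemies near enough that they could become visible/audible
    before the next update reaches the client (lag-compensation margin)."""
    sent = []
    for other in others:
        margin = (viewer.speed + other.speed) * MAX_LATENCY_S
        if dist(viewer, other) <= RELEVANCE_RADIUS + margin:
            sent.append(other)
    return sent
```

The margin is the catch: anything inside it is still readable by a wallhack; culling only shrinks the window.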
That's already done entirely server side. It could only be. If you mean predictive positioning from the client side, you can do that with far less state than gets transmitted today, and you could factor in the other player's momentum on the server side to see if prediction would even be necessary in a given frame or not.
The server could also send lots of phantom updates so the player client has no idea which objects are real and which aren't. The hacks could work around this but it would take a lot of power to do so. There's room for asymmetric counterhacks here.
As for the other types of bots, those are far less useful and more detectable by the naked eye without wallhacks. Ironically, because lag compensation is server-side, these hacks do not have a deterministic outcome when used.
When you look at a video of what a wallhack enables and how much state data gets transmitted that shouldn't be, I would be embarrassed to have such unworthy netcode in the 2020s. They've had 20 years and have done next to nothing.
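On the phantom-update idea mentioned above, a rough sketch of what mixing decoys into the replication stream could look like (nothing here reflects how any real game does it; the decoys would also need plausible movement and sound, which is where the real cost lies):

```python
import random
from dataclasses import dataclass

@dataclass
class Entity:
    x: float
    y: float
    is_decoy: bool = False   # server-side bookkeeping; stripped before serialization

def with_phantoms(real_enemies: list[Entity], decoys_per_real: int = 2,
                  jitter: float = 30.0) -> list[Entity]:
    """Return the replication list with plausible decoys mixed in.
    Honest clients render nothing they can't actually see, so legitimate players
    are unaffected; a memory-reading ESP overlay fills up with fakes."""
    out = list(real_enemies)
    for e in real_enemies:
        for _ in range(decoys_per_real):
            out.append(Entity(
                x=e.x + random.uniform(-jitter, jitter),
                y=e.y + random.uniform(-jitter, jitter),
                is_decoy=True,
            ))
    random.shuffle(out)
    return out
```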
Blizzard’s Warden + their legal team. While not strictly an FPS directed solution, I can play Heroes of the Storm in more places without breaking my system like Vanguard does for League of Legends.
It's impossible to prevent cheating from the server-side only. Something like an aimbot can operate purely on information you need to have as a client (to render the other players on the screen), and still be a huge advantage because it can respond faster than any human can.
I think server-side statistical analysis can go a long way to detect stuff like that. Obviously it's always a cat-and-mouse game between devs and cheaters, and there are always workarounds, but there's a lot more the devs could be doing without relying on invasive client-side detection.
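As a toy example of the kind of statistical screen being suggested, here is a sketch that flags players whose headshot rate sits far outside the population distribution. Every number and name is made up:

```python
from statistics import mean, stdev

def zscore_outliers(headshot_rates: dict[str, float], threshold: float = 4.0) -> list[str]:
    """headshot_rates maps player id -> fraction of kills that were headshots.
    Flags players sitting `threshold` standard deviations above the population mean.
    All numbers are placeholders."""
    values = list(headshot_rates.values())
    if len(values) < 100:       # need a real population before this means anything
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [pid for pid, r in headshot_rates.items() if (r - mu) / sigma > threshold]
```

The replies below point at the two obvious failure modes: genuinely elite players live in the same statistical tail, and tuned cheats deliberately sit just inside it.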
You can tune the aimbot to be as good as the server allows, maybe with a bit of variation to throw it off.
And realistically, some real non-cheating players will by chance just have similar statistics to bots, especially since the bots will start doing their best to mimic real players.
Also many players don't need to cheat all the time; just in that critical moment when it really matters. Didn't Magnus Carlsen say he only needs a single move from a chess computer in the right moment to be virtually guaranteed a win? Something like that probably applies to many people and fields. This is even harder to detect with just statistics.
Also also reminds me of the "you can't respond in less than 100ms, and if you start the sprint faster than that after the starting pistol then you're disqualified"-type stuff they have in the Olympics – some people can consistently respond faster and there's a bunch of false positives. Not great.
> Also many players don't need to cheat all the time; just in that critical moment when it really matters. Didn't Magnus Carlsen say he only needs a single move from a chess computer in the right moment to be virtually guaranteed a win? Something like that probably applies to many people and fields. This is even harder to detect with just statistics.
The difference is that IRL chess and a typical FPS game have very different availability of datasets. IRL chess has both fewer moves per game, and fewer games played in short succession than typical FPS games. Also, with FPS games there is a single metric to evaluate -- the shot landed or missed -- compared with chess where moves are ranked on a scale.
So I'd argue that it would be much easier to do a statistical model to predict a cheating aimbot than it would a cheating IRL chess player. I don't believe Magnus's proposition holds for prolific online chess players when they do dozens or more blitz/bullet games in a single day.
> Didn't Magnus Carlsen say he only needs a single move from a chess computer in the right moment to be virtually guaranteed a win?
That's because he's an elite chess player. Him cheating once per game could make the difference between being number 1 or number 10 but either way he's up there.
But for you or me, cheating once per game wouldn't make a difference. We'd still be ranked as nobody plebs. To get ranked high enough for people to know our names we would have to cheat dozens of times a game, and experienced players would easily peg us as cheaters.
Try cheating on chess.com, if you cheat enough to make a meaningful difference their servers will automatically nail you with statistics.
I've always wondered about this too. It should be pretty easy to recognize statistical outliers. I'm sure cheaters would start to adapt but that adaptation might start to look more in-line with normal skill levels so at least the game wouldn't be utterly ruined
Valve has been doing this kind of thing in Counter-Strike for almost a decade.
They try to put possible statistical outliers into their own matchmaking pool so cheaters end up playing against each other. Of course, really good players can still get there, and there are (or at least used to be) real humans reviewing those games to see if someone is actually a cheater. It is not a simple task, since you can cheat to be just slightly better than others and that is enough.
The problem is that most cheaters don't just go full aimbot and track people through walls. That is a surefire way to make sure your account gets reported, reviewed, and banned regardless of what anti-cheat is in place.
Serial cheaters cheat just enough to give themselves an edge without making it obvious to the people watching them. By just looking at their stats, it can become very difficult (though not impossible) to differentiate a cheater from a pro player. This difficulty increases the odds of getting a false positive, necessitating a higher detection threshold to avoid banning innocent players.
This post is so interesting because it highlights the people that don't know anything about the requirements or state of cheats/anticheat. What you're describing is 10 years out of date. Every modern cheat has a toggle, and (almost) every modern cheater masks augmented behavior with misses/native behavior.
This thread is full of armchair developers who see a problem and immediately think, "Oh, it's easy, just do this simple thing I just thought of," as if there haven't been billions of dollars and decades of research spent on this problem.
According to the latest study [1] estimating how much money cheat developers make annually, the upper limit is ~$75M. I would say that a very liberal estimate of anti-cheating efforts is maybe $100M annually, and that includes not only research but the actual cost of tackling cheaters (extra compute, reviewers, etc.). Even that is unrealistic, but even so, to reach the point of billions (2-3 billion) you would have to assume that gaming companies have been spending on average $100M annually on research alone since the beginning of the personal computer era ($100M a year for 20-30 years is how you get to $2-3B). That is a stretch even with the most liberal interpretation.
So I think it is fair to say that there haven't been billions of dollars of research spent on this problem.
That's only looking at western audiences. In 2020, Tencent said that the cheating market in China is worth $293M annually [1]. In China there are many individual games making billions in annual revenue. PUBG bans over a hundred thousand cheaters every week. I don't think adding up to billions is too farfetched, if you count globally over the decades, although it'd be close.
There are also the costs of the opportunities that cheating prevents from happening. Development would be much faster and more types of games could be made.
I think the problem is that that kind of work requires a good deal of developer resources for a long time. What company wants to pay upkeep on a shipped product? You could save hundreds of thousands of dollars a year by shipping a rootkit to players and not worrying about server security.
It only needs to be good enough that people keep buying (or not) Prime when their old account gets banned. There is good reason that it exists, also from a cheating perspective.
Client <-> Server architecture can still take you a long way. Culling what you send to the client and relying less on client-side "hiding" of state, server authoritative actions with client-side prediction, etc.
At the end of the day someone could be using hardware "cheats" but you can get down to a pretty good spot to stop or disincentivize cheaters without running rootkits on their devices.
You don't need a "hardware cheat"; just a program that reads the memory representation of stuff. This is nothing new and already how many cheating tools work, and is exactly what all these anti-cheating things are designed to prevent.
Latency significantly reduces the effectiveness of culling via the server. There will always be a place for client side anti-cheat if games are running on players' computers.
Funnily enough, using GeForce Now, for example, prevents almost all kinds of cheats. Maybe the future of competitive gaming is that you only use a remote client for a remote server hosted by the game company.
Yes, but even some cheats are possible through streaming. Basic things like scripted no-recoil all the way to aimbots based on image recognition. People are even using AI to recognize and highlight players on your screen - and even some built into monitors!
On the other hand, an aimbot can operate purely on information you /need/ to send in and out of the physical machine (input peripherals and the screen), so there's that...
It makes it way easier to detect. If a player can use a wallhack to pre-move their aim near the point where the aimbot would take it, they can hide the action much more effectively. If they're constantly doing 180 no-scopes, you've got a pretty good indication something is wrong.
Also if your guns aren't _perfectly_ accurate then the aimbot can't actually predict much of anything.
They even claim to be able to fingerprint players according to their playstyle, thwarting all methods of ban evasion. Skepticism should be abundant here, but this is one of the oldest tricks in ML: categorization/clustering. I'm cautiously hopeful.
It should be - if a server firehose streams all players' network data to an analysis thing, it should be able to detect patterns of impossible accuracy and response time, even though there is some margin for error due to e.g. lag and packet loss (iirc intentional lag / packet loss are some strategies cheaters use to obfuscate things like aimbots, e.g. generating movements that shoot someone in the head but holding them back for a second or so so that in theory a competent player could have done the required motions within a second instead of 1/100th thereof)
Without kernel level anti cheat you can detect (some) other usermode cheats, but not kernel level cheats. With kernel level anticheat, you can detect the vast majority of other kernel level cheats. Vanguard is effective enough that most successful cheaters are using external devices and DMA to bypass the kernel altogether (or they just use Macs because Apple doesn't allow Vanguard). And despite Riot's insistence to the contrary, they have not "detected" DMA cheats.
Advanced DMA/IOMMU attacks are hardware-, software-, and firmware-specific.
In order to detect them, you'll have to do a ton of very expensive work, all the while risking destroying the customer's software, firmware, and hardware.
Good luck explaining to the judge what you did.
If you have a large enough player base to sample, you can determine who is cheating with math. EA Fairplay is pretty good. Steam's VAC is good, and not some kernel-level nonsense.
VAC is so not-good that there are not one but two popular third-party matchmaking services for Valve's games whose main selling point is much stronger (read: more invasive) anti-cheat than VAC, and one of them even charges a subscription to play, which highly skilled players gladly pay to get away from the cheaters in high-rank VAC servers.
To some degree, yes. But there are actually many cheaters that intentionally don't play perfectly to avoid detection. That way they appear higher skilled but still within human range.
I think most of these companies do do the server side properly. There are plenty of hacks that just make a client play ungodly well. Like macros, aimbots, cooldown tracking, auto-hex
GTA V is an exception because it's so easy to cheat in. I believe it's peer-to-peer with no verification among peers that what happened should actually have happened. It's basically impossible to secure that.
I suppose that was an intentional choice; I can imagine running the number of worlds that GTA has (iirc you only have up to 32 or so players in a world? something like that) doesn't scale well cost-wise. IDK if AWS and co were up to the task back when. But since you earn in-game currency, not having a central authority check these things is... an interesting choice.
I suspect GTA VI may improve on these things and have centralised/dedicated/anti-cheat-guaranteed servers. Then again, it never impacted their profit margins so idk.
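As a purely hypothetical sketch of what "a central authority checking these things" could mean for an in-game economy, the server would own balances and only apply deltas it can attribute to events it simulated itself (none of this reflects how Rockstar actually implements anything; payouts and event names are invented):

```python
# Hypothetical server-authoritative wallet: the client never reports "I now have X";
# it can only trigger events, and the server decides what those events pay out.
PAYOUTS = {                 # event name -> payout; values invented for illustration
    "mission_complete": 15_000,
    "sell_car": 9_000,
}

class Wallet:
    def __init__(self, balance: int = 0):
        self.balance = balance

    def apply_event(self, event: str) -> None:
        if event not in PAYOUTS:
            raise ValueError(f"unknown or client-invented event: {event}")
        self.balance += PAYOUTS[event]

    def spend(self, amount: int) -> None:
        if amount < 0 or amount > self.balance:
            raise ValueError("rejected: client asked to spend money it doesn't have")
        self.balance -= amount
```

In a peer-to-peer lobby there is no single machine trusted to run logic like this, which is the structural problem described above.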
What? The current PC gaming model where things run on a machine controlled by the user is fundamentally against solving the issue of cheats. You can't prevent everything server-side.
The problem of cheating in games does not weigh more than the user's ultimate ownership of and control over their own property.
No one has a right to a business model.
They can do plenty enough server-side. It's not a blocking problem at all; it's just easier to take over all control of the user's PC for your own convenience.
Everything, including all valid goals, is easier if you could just have the power to control whatever you want instead of having to cooperate and respect others and respect boundaries. It's no more valid than saying "Everything would be so much better if everyone would just do what I say.". Using that argument is invalid even if supposedly applied in service to some otherwise valid goal.
I basically agree with this. Which is why I run a Linux box for gaming, and why I don't play games that have this problem.
I used to play Quake-likes, and there are people who are just that good out there, but it assuages the ego so much more to call them cheaters. I saw this all the time on CS - as soon as someone even halfway good joined, everyone called them a cheater and the game dissolved. I eventually realised that this is not an anti-cheating problem, but a community/personality problem with the people that like playing these games. So I stopped.
What a bizarre take. If people consent to installing these invasive anti-cheat systems, then it doesn't matter if anyone has a "right" to a business model or not; in that case their business model is working.
> They can do plenty enough server-side.
No, they can't. The amount of responses in these threads by people who have no idea what they're talking about is... well, probably not surprising, unfortunately.
This is the same (correct) argument against the effectiveness of DRM: if you put things in the hands of a user and client you don't control, then it is a cat-and-mouse game to try to maintain control of those things.
Sure, a naive cheat program of 20 years ago will today obviously look like a cheater. But if you have a cheat that statistically makes you look like a skilled non-cheating player (these things exist today!), the server isn't going to be able to catch you.
I'm not saying that justifies letting another party install what is effectively a rootkit on your hardware. I personally won't do it; I just live without games that require it, and that's fine. Maybe there is some middle ground where some form of client-side anti-cheat can reliably run without kernel-level permissions. But it's a lazy, ignorant argument to just say that game companies haven't come up with it yet because it's "easier" to write a kernel-level system.
The bizarre take is granting a shred of validity to anyone who says "I need the keys to your house and bank account and a webcam in your bathroom to protect the marketable value of my game service so that other players will rent server access from me."
> The problem of cheating in games does not weigh more than the user's ultimate ownership of and control over their own property.
What the users want to use their ultimate ownership and control over their own property for is preventing cheating.
It's not like Riot is forcing this on people against their will, people just don't like playing against cheaters.
The only places I ever hear complaints about kernel anti-cheats are from people complaining because they want to use Linux and it isn't supported, or forums like Hacker News, where people paradoxically care so much about people's computing rights that they are perfectly happy to limit what gamers are allowed to do with their computers.
It is if you want to be allowed to play with other people because...
> The problem of cheating in games does not weigh more than the user's ultimate ownership of and control over their own property
...when you play a multiplayer game what happens on your property affects what happens on the property of the other players and often also on the property of the game company. If you want to be allowed to do that you might have to agree to do some things on your property because...
> No one has a right to a business model
...no one has a right to play any particular multiplayer game.
I'm not advocating for taking away users rights, just pointing out that the current model doesn't really jive with the desire to stop cheaters. This is going to be a never ending cat & mouse game.
I'm saying no such thing. I'm saying that that wrong is no excuse for the other wrong.
There are infinite ways to attack any problem, and it's not a requirement but a choice to pursue only certain ideas vs others.
For instance, these approaches are based on removing agency from all users for the supposed goal of dealing with the bad users.
But there is no law of physics that says that is the only way to do that.
You could go the opposite direction and empower all users to deal with bad actors themselves, just like in real life where an asshole simply gets avoided or punched in the nose, which works by the simple math that the bad actors are outnumbered by everyone else. They still always exist, but they are relegated to operating in the corners and shadows.
But their low-level presence is a fact of life no matter what. Oppressive regimes don't get rid of them either. The sales pitch is "we'll protect you," but in fact they don't do so any better than you could have yourself.
A company that has an easier option and no other value meter than money, divorced from any consideration of how it is attained, simply has no incentive to bother doing anything but the easiest thing. That's the only reason they want the keys to your house: because you stupidly hand them over, not because they need them or have the tiniest right to demand them to protect their entertainment business.
I'm not sure what point you're trying to make but in this context there is no difference. If you know someone is cheating, you prevent further cheating by banning them.
Now I'll ask: how do you detect someone wall hacking automatically? No human review and no false flags. Go!
> If you know someone is cheating, you prevent further cheating by banning them.
If you think it's statistically likely that someone might be cheating, but you're not sure, you can matchmake them with other people who might be cheating.
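A sketch of what that could look like, loosely in the spirit of trust-factor systems; the scoring inputs and cutoff are invented for illustration:

```python
def bucket_by_trust(trust_scores: dict[str, float], low_cutoff: float = 0.3):
    """trust_scores maps player id -> score in [0, 1], built from signals like
    account age, report history, and statistical flags (all hypothetical inputs).
    Low-trust players get matchmade with each other instead of being banned outright."""
    trusted, suspect = [], []
    for pid, trust in trust_scores.items():
        (trusted if trust >= low_cutoff else suspect).append(pid)
    return trusted, suspect
```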
That seems flawed as you would punish people who are playing well. Statistics are great, but you'd inevitably match legitimate players with misfits, ruining their experience.
A prevention model would be like the xbox where technical measures are used to prevent user code. A detection model is server side and detects anomalies for bans.
This doesn't work well in real time games. The client needs to know another player is on the other side of that wall so it can
* Play sounds from their actions
* Actually be able to render them when either player comes around the corner without them obviously materializing out of thin air.
I've never seen a game request root privileges, and I would think installation of anything kernel-level would need that. None of the steam binaries have setuid nor capabilities set.
Has anyone seen games that request root privileges?
EDIT: I'm gathering from this[1] and the fact that no wine-related package have kernel modules included and no executable from any of those packages have setuid nor capabilities set, that this isn't really a problem in Linux, just in Windows.
The kernel level anticheats are almost always written for Windows. They are relevant to gaming on Linux because those games won't work on Linux even if wine/proton run the user space portions fine
From my understanding, if you play an EAC game on Linux with Proton, you're not really running the same EAC as Windows players. You're running a lite version that runs as a regular user and it tries to provide at least some level of protection like verifying game files or detecting anything clearly out of place that it can detect, but obviously it doesn't have the permissions to see everything running on your system or install a kernel module. This does mean cheaters could probably just cheat on Linux to bypass it more easily, so anticheats like EAC will put Linux support as an opt-in toggle which some developers choose not to enable.
It's worth noting that when you first install it, steam asks to install a service to assist with its duties, presumably for most install tasks. Steam has been around long enough, and that service has been trouble-free for so long, that it has become part of the furniture most people ignore as part of the background. That's aside from how users may be trained to hit 'yes' on any permission box that comes up, to swat it away and play the game.
> It's worth noting that when you first install it, steam asks to install a service to assist with its duties, presumably for most install tasks.
They do this because Steam was originally designed in the XP era when you could write whatever you want to Program Files without escalating to admin, and instead of refactoring where they put their files when Vista made the permissions more strict they started installing that backdoor service which lets them keep putting everything in Program Files without triggering UAC prompts all the time. It's a pretty gross and unnecessary hack, but I doubt they're ever going to fix it at this point.
Although I'm not fully Linux-knowledgeable, I think they put everything under the user profile in ~/.local/share/Steam for similar reasons, so they can do software installs with no elevations. They're not the only ones taking that approach though; it's become common across OSes to offer an easy/quick installer that dumps itself in your user profile because that seems to matter most to getting users up and running.
I don't think this is why -- Steam actually sets permissions on its subdirectory so that any user can write to it. (This means that while installing mods, for example, I can write to that directory without having to deal with UAC/sudo.)
The anti-cheat problem is long-running and complicated. If you choose not to run anti-cheat because you understand that these are opaque rootkits, good for you! That's a totally, 100% valid choice. But please keep in mind:
- you are a tiny minority and not the target customer
- online multiplayer games are an absurdly big business (i.e. there are huge incentives here)
- no, you can't completely solve this server side
- elite players are insanely good - they are by definition outliers, so looking for statistical outliers is not in itself a solution
- game companies are highly incentivized to work with (or at least not antagonize) the elite players (so just throwing them in matches with cheaters is not a solution)
- the stakes are high both for the devs and their users, so "pretty good" anti-cheat is usually insufficient
You can sum things up by saying that kernel-level anti-cheat DRM is the worst solution, except for all of the other solutions.
I hope to see more discussion on possible solutions and tradeoffs - this is a challenging technical problem whose solution (if there is one) is fairly valuable.
While all of what you're saying is true, I think it is worth noting that historically a large chunk of this problem was solved by communities hosting servers. I agree that in the matchmaking era, remote attestation via kernel-level anticheat is the inevitable solution that you converge to after a few iterations.
And yes, servers would often kick out people who were too outside of the general skill level, even if they weren't cheating. As (say) a p80 player, playing against a p99 player feels roughly as bad as playing against a cheater. (But of course the p99 player is doing so honestly.)
> historically a large chunk of this problem was solved by communities hosting servers
Yes and no.
I lived through that era too, and there are serious scaling problems: at some point, trying to banhammer griefers with rotating IPs becomes a full time job, and then the public servers turn into a dumpster fire.
Yeah, having written that I was thinking about this as well. There's a lot of unpaid labor involved in that model. Maybe, between rootkits and that kind of exploitation of humans, rootkits are the less unjust option.
Not at all correct! Nothing of what was said is true. The actual reality is:
* Microsoft makes piles of money from Gaming
* Microsoft got involved with Gaming to damage Linux adoption and corporate support (Sony/Linux/Playstation)
* Microsoft spends massive amounts of attention on gaming to lock in the general public to Windows
* Microsoft continues to lose to Linux
* Microsoft uses cheating to lie about open source being 'something something' cheaters
The fact of the matter is that Microsoft has absolutely no interest in an open source solution to these problems and are using these issues to lie, mislead and spread FUD in some absurd fantasy world where only some superior microsoft driven closed source solution is the only possible way this can be solved. All of that is a complete lie. Nothing more.
A smart Linux and free software lawyer would be wise to file a class action lawsuit for discovery of documents inside Microsoft, where one would undoubtedly find piles of emails between executives hell-bent on doing everything to damage Linux adoption, who have stupidly wielded this unidentified axe which is actually a -4 cursed boat anchor.
Anyone that tells you that computer security or trust can only be done with proprietary software is lying to you for their own benefit.
> game companies are highly incentivized to work with (or at least not antagonize) the elite players
Actually, this is generally untrue. Companies BELIEVE this, but oftentimes these players are a vocal minority put on a pedestal, and they often end up making the game worse for the general player base.
Sorry for not being more clear, I was referring to the advertising or promotion that comes via the elite players. Take Valorant, for example. Riot Games leveraged their League of Legends user base and gave early access to high-end players and that apparently played a big part in helping its popularity take off. Now it has a robust presence in eSports, again helped by the high-end players.
It's not uncommon now for popular professional streamers to get early access to new features/modes because the game companies know that those players can help build or retain the player base.
> I'd love to see more curiosity from the HN community on this.
These kinds of sweeping comments are as frequent as they are tiring. There are other comments like yours in this thread and yours is currently at the top. It has nothing to do with a lack of curiosity, you’re simply seeing the contrarian dynamic at play.
Rejoinder:
Blizzard's Warden. No bootkit, no invasive system configuration required; it even plays nice with "niche/enthusiast" platforms like Linux, and doesn't care if your keyboard is a bit niche too.
Thought:
If they expect a console level of lockdown, why do they bother writing for the PC? If I wanted a $game_console, I’d buy the console.
Hmm... isn't Blizzard's main FPS title Overwatch though? Cheating seems pretty common in that game (and there are tons of forum threads where people are complaining about it).
Forum threads aren't a great measure of cheating though, given the toxicity and inability of the average gamer to admit "the other player was better than me."
Just use local servers and player validation signatures. Faceless matchmaking is bullshit. Local communities win. Don't mix e-sports with casual game-play. Just like you don't need a security detail for the average person, you don't need invasive anti-cheat for the average gamer.
Why isn't server-side anticheat a possible solution? Cheats can generate inputs purely from the visual output as well, meaning the client can never be fully trusted anyway.
We're mostly talking about FPS here. You've got 2 main cheat categories: aimbots and ESPs (visibility hacks).
ESPs are purely client side: they read actors from the game's memory and draw a client-side overlay. It's impossible to protect against these on the server. Even if you had perfect culling from the server (e.g. not sending players who are behind walls), you'd still have semitransparent surfaces like foliage and smoke. There are people making good money in PUBG just making enemy textures that are easier to see. You need kernel anticheat to prevent the cheat from reading the memory. You also want to take screenshots periodically and detect overlays.
Aimbots in the olden days could be detected on the server because their movements were instant, precise, unnatural snaps. But these days cheat developers have wised up. Again, the best protection is to prevent the cheat from reading the game's memory in the first place; some anticheats go as far as trying to prevent input from any artificial device (so the cheat can't create mouse movement).
There are also movement hacks, but I don't think these are really common these days. You can detect and protect against these on the server side.
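For anyone curious what the server-side culling mentioned above could look like, here's a minimal sketch in Python; the names, the crude grid-based line-of-sight test, and the peek radius are all made up for illustration (real engines use PVS and occlusion queries). As noted, it's a mitigation, not a fix: information still leaks through smoke, foliage, and audio.

    import math

    def line_of_sight(a, b, blocked_cells, step=0.25):
        # Sample points along the segment a -> b on a unit grid; the view is blocked if any
        # sample lands in a cell marked as a wall.
        dx, dy = b[0] - a[0], b[1] - a[1]
        samples = max(1, int(math.hypot(dx, dy) / step))
        for i in range(samples + 1):
            t = i / samples
            cell = (int(a[0] + dx * t), int(a[1] + dy * t))
            if cell in blocked_cells:
                return False
        return True

    def enemies_to_replicate(viewer, enemies, blocked_cells, peek_radius=2.0):
        # Replicate an enemy's position only if the viewer has line of sight, or the enemy is
        # close enough that they could peek around a corner before the next update.
        out = []
        for e in enemies:
            near = math.hypot(e[0] - viewer[0], e[1] - viewer[1]) <= peek_radius
            if near or line_of_sight(viewer, e, blocked_cells):
                out.append(e)
        return out

    # A wall cell at (2, 0) hides the enemy at (4, 0); the enemy at (1, 1) is close enough to send anyway.
    print(enemies_to_replicate((0.0, 0.0), [(4.0, 0.0), (1.0, 1.0)], blocked_cells={(2, 0)}))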
Not an expert, but I've done a little reading, and basically the combination of real-time actions and a network makes it intractable: you end up just having to trust the client on some things (or having to make trade-offs like a client potentially not having the information needed to display the game state to the player, or choppy/unresponsive gameplay as a function of latency).
Any specific examples? I hear this said all the time and it's almost never true.
Movement, for example: many decide to just let clients be fully authoritative over their positions and then act shocked when teleport hacks drop. Just keep track of the player's max move speed server-side, continually validate, and flag if they consistently move faster than is possible according to the server. No one is ever saying you have to validate inputs server-side in lock step with zero client-side prediction whatsoever and enforce 200ms of input lag for all players.
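As a rough illustration of that server-side check (all names and thresholds here are assumed, not taken from any particular engine):

    import math
    import time

    MAX_SPEED = 7.5        # maximum legitimate speed in game units/second (assumed)
    TOLERANCE = 1.25       # 25% slack for lag compensation, knockback, and jitter
    STRIKES_TO_FLAG = 10   # require repeated violations before flagging for review

    class MovementValidator:
        def __init__(self):
            self.last = {}      # player_id -> (x, y, timestamp)
            self.strikes = {}   # player_id -> consecutive violation count

        def update(self, player_id, x, y, now=None):
            now = time.monotonic() if now is None else now
            prev = self.last.get(player_id)
            self.last[player_id] = (x, y, now)
            if prev is None:
                return False
            px, py, pt = prev
            dt = max(now - pt, 1e-3)                    # guard against division by zero
            speed = math.hypot(x - px, y - py) / dt
            if speed > MAX_SPEED * TOLERANCE:
                self.strikes[player_id] = self.strikes.get(player_id, 0) + 1
            else:
                self.strikes[player_id] = 0             # honest jitter resets the counter
            return self.strikes[player_id] >= STRIKES_TO_FLAG   # True: hand off to review/ban logic

A single lag spike resets the counter; only sustained impossible movement gets flagged, which is exactly the "consistently move faster than is possible" test described above.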
It's not teleporting that's hard to deal with, it's aimbots and wall hacks. You have to trust the client with enemy position information that it shouldn't be able to see yet, and trust their shot position inputs.
Also, constantly flying around and teleporting is easy to catch, but using it in small bursts is very powerful and harder to catch.
>You have to trust the client with enemy position information that it shouldn't be able to see yet
That seems like something that would be solvable with location-style differential privacy. Report a number of plausible locations to the client small enough that it can efficiently anticipate them all, but large enough to prevent being able to auto-aim or wall hack. Run some bots or actual player movements recorded from other matches, originating from roughly the same point where you last saw the real opponent.
>constantly flying around and teleporting is easy to catch, but using it in small bursts is very powerful and harder to catch
Even small violations of continuity seem like they'd be observable server-side, no? I've not studied this, but presumably clients must be constantly phoning home with their position.
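A toy version of that decoy idea might look like the following (purely illustrative, with invented names; the replies below explain why making the decoys convincing is the hard part):

    import random

    def decoy_positions(last_seen, n=4, steps=5, max_step=3.0):
        # Random-walk each decoy away from where the client last legitimately saw the enemy,
        # so the fakes start from a believable location.
        decoys = []
        for _ in range(n):
            x, y = last_seen
            for _ in range(steps):
                x += random.uniform(-max_step, max_step)
                y += random.uniform(-max_step, max_step)
            decoys.append((x, y))
        return decoys

    def candidate_positions(real_pos, last_seen):
        # What the server would replicate: the real position hidden among the decoys.
        candidates = decoy_positions(last_seen) + [real_pos]
        random.shuffle(candidates)
        return candidates

    print(candidate_positions(real_pos=(10.0, 4.0), last_seen=(8.0, 2.0)))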
> Even small violations of continuity seem like they'd be observable server-side, no? I've not studied this, but presumably clients must be constantly phoning home with their position.
Jumps in position are not always illegal: network issues, quirks from physics-based forces, and glitches in the game are all very common and can all cause unexpected positions. Differentiating them from bannable offenses is not easy. Yes, there are always heuristics you can use to narrow down possible issues, but you have a limited CPU budget: you need to be running multiple instances per machine, each updating 60 times a second, serving dozens of players, sending and receiving constant updates to and from all players 30-60 times a second, while simulating physics, large worlds, and complex player states, and synchronizing the states of thousands of objects. It's tricky to get everything right and performant. And people will get extremely mad if you produce a false positive.
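To put rough numbers on that CPU budget (all figures are assumed, purely for illustration):

    TICK_RATE = 60               # simulation updates per second (assumed)
    INSTANCES_PER_MACHINE = 8    # match instances co-hosted on one server box (assumed)
    budget_ms = 1000 / (TICK_RATE * INSTANCES_PER_MACHINE)
    # roughly 2 ms per instance per tick for physics, replication, AND any anti-cheat heuristics
    print(f"{budget_ms:.2f} ms per instance per tick")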
> That seems like something that would be solvable with location-style differential privacy. Report a number of plausible locations to the client small enough that it can efficiently anticipate them all, but large enough to prevent being able to auto-aim or wall hack. Run some bots or actual player movements recorded from other matches, originating from roughly the same point where you last saw the real opponent.
But what is the client supposed to do when it actually sees the real position? At some point the waveform needs to collapse and reveal the real location. The only way to make the fake locations indistinguishable from the real ones is to make them look like a real enemy player from the client's point of view. But then you stumble across all these fake enemies that don't do anything? You could place them in unreachable positions so normal players wouldn't ever find them, but then the heuristics for checking whether a client "knows" about a position are still quite fuzzy. Also, visuals aren't the only giveaway of an enemy location: audio is location-based too, and playing fake audio would be detrimental to normal players' experiences.
Having said that, the unreachable-fake-player technique is not bad; it can cut out some low-hanging fruit. But it's only part of the equation of a robust anti-cheat solution: it's complex to implement and only catches some cheaters.
> That seems like something that would be solvable with location-style differential privacy. Report a number of plausible locations to the client small enough that it can efficiently anticipate them all, but large enough to prevent being able to auto-aim or wall hack. Run some bots or actual player movements recorded from other matches, originating from roughly the same point where you last saw the real opponent.
This has already been done in COD: Warzone, with varying levels of success; cheat developers end up heuristically filtering out the fake players.
> Even small violations of continuity seem like they'd be observable server-side, no? I've not studied this, but presumably clients must be constantly phoning home with their position.
This issue usually is game/game-engine dependent and is achieved either by exploiting bugs or manipulating lag compensation. Not exactly a very common thing.
Do you think it will escalate to the point that client-side checks will be worthless? Say in 5 years I can let an AI watch the screen and control the mouse and keyboard. From the rootkitted computer's perspective, there's no way to tell that I used an external AI to control the USB keyboard and mouse.
This is what every dev who can't be bothered to implement relevancy filters says when their server broadcasts the locations of every hidden player to every other player every tick and wallhacks drop a week later
Exactly what can't be fixed server side? Are you just talking about aimbots and other situations where script kiddies can trivially author bots that generate optimal inputs? Because at a certain point that's more a problem with shitty, boring game design that got stale 20 years ago; if the top of your game's execution ceiling is "can the player click on heads perfectly" you have bigger problems
Relevancy filtering is more for network traffic optimization; it doesn't really help with cheating in most cases. In an FPS, for example, the actors the cheater most wants to know about are almost always also network-relevant.
But taking a step back, for fast games (like an FPS), the latency requirements drive you to send semi-secret info to the client (like the positions of other players), and so that's where things start to break down. But the traffic in the other direction is a problem too, as you have all of the scenarios in which the messages to the server (e.g. aim info, timing of weapon firing) can be spoofed or engineered.
The motivation for the client-side anti-cheat systems is to extend as far as possible the envelope of what is considered trustworthy - i.e. if they can't solve the latency problem, then they try to make the client more trusted.
It's impossible to completely solve the problem, so it's about finding a solution that solves as much of the problem as possible. Unfortunately the main thing going for kernel anti-cheat is that most users don't care that they have to let someone root their machines to play a game, though the tide would likely turn if there were a high publicity exploit.
"All cheats can be trivially solved server side, as long as I exclude all games I don't like, which are also the games where the problem is hardest to solve and most relevant to the discussion."
> I'd love to see more curiosity from the HN community on this.
I'd love to see more curiosity from developers - the disappointment is mutual. Instead of attempting to systematically stop all forms of cheating through innovative or competitive methods, it would appear the industry is converging on dangerous half-measures and excusing it with evidence from a clearly failing system.
What should we, the users, expect? Perfect, cheat-free software that surveils us endlessly, or "good enough" security that lets users decide for themselves which servers are suitable? Let me cast my vote, and I know which ideal I consider realistic and attainable.
Developers have spent millions on anti-cheat. It's why entire products like EasyAntiCheat and BattlEye exist.
Valve spent a LOT of time and effort on VACNet, a server-side machine-learning anti-cheat trained primarily on CS:GO verdicts, and it was still awful.
Developers know the common methods used by cheaters. That includes exploiting known vulnerable kernel drivers to run code in the kernel. The only way to monitor for this is to utilize a kernel module loaded before that of the cheater. That's why the current state of Anti-Cheat is the way it is.
The developers of various anti-cheats like Vanguard have been very transparent about this.[1]
There are two trends in the broader multiplayer game ecosystem which I think are worth highlighting:
1. More games are trying to cut costs with ad-hoc P2P servers, meaning that sometimes important logic is occurring on a not-so-trusted machine.
2. More games are using a revenue model which may be threatened by consumer-side tinkering.
For example, imagine a cooperative game that uses a P2P server, where the host has done something to make it much easier for the squad to get a drop of the Super Special Loot (#1), while the rarity of that loot through normal gameplay drives many players to purchase it through an in-game store (#2).
Critical logic happening on the client machine is how FPS games work at all. It's way too late to judge every hit on the server due to latency: a 40 ms latency is already a 2-3 frame lag even on a 60 fps monitor, and it can be a lot worse in a lot of cases. The server may detect a hit that is too far off and impossible, but it has to trust what the client says as long as it is within some reasonable range, or the game won't even work.
And that reasonable range isn't that small. It's enough to make every bullet that was supposed to hit thin air land on an enemy's head instead.
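A minimal sketch of that "reasonable range" check, assuming the usual lag-compensation setup where the server keeps a short position history per player (names and thresholds are invented for illustration):

    MAX_REWIND_S = 0.200    # never rewind further than 200 ms, however laggy the client claims to be
    HIT_TOLERANCE = 1.0     # slack, in game units, around the rewound position

    def rewound_position(history, t):
        # history: list of (timestamp, x, y) samples kept by the server; pick the sample closest to t.
        ts, x, y = min(history, key=lambda sample: abs(sample[0] - t))
        return x, y

    def accept_hit(target_history, claimed_hit, claimed_time, server_time):
        if server_time - claimed_time > MAX_REWIND_S:
            return False        # too stale: extreme lag or a doctored timestamp
        x, y = rewound_position(target_history, claimed_time)
        hx, hy = claimed_hit
        return ((hx - x) ** 2 + (hy - y) ** 2) ** 0.5 <= HIT_TOLERANCE

    history = [(0.00, 5.0, 5.0), (0.05, 5.5, 5.0), (0.10, 6.0, 5.0)]
    print(accept_hit(history, claimed_hit=(5.6, 5.1), claimed_time=0.05, server_time=0.12))   # True

The tension described above is that HIT_TOLERANCE and MAX_REWIND_S can't be tightened much without breaking the game for honest players on ordinary connections, and that slack is exactly what a subtle aimbot exploits.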
Did you mean to post that to a different subthread?
I'm familiar with FPS networking, however I'm talking about a trend where a customer-machine is designated to act as a game-server, so that the company can avoid paying to host one in a dedicated but more-secure fashion.
If that machine happens to be the attacker's, then their scope for chicanery is so much greater than just wallhacks or aimbots.
For example, they might temporarily or permanently grant everyone equipment that is otherwise locked behind some grind-wall, where the company hopes to make money selling a "level boost". While not totally malicious, it's definitely a "hack" the company will oppose.
Are these anti-cheats kernel modules? Asking because I only play two games on Linux and they do not use rootkits. If so, one could at least prevent the installation using a couple of sysctl variables [1]. I do not recommend putting these in /etc/sysctl.conf or in the .d directory, as that can break OS updates among other things; I would instead put them in a startup script so that they can easily be disabled and the node rebooted. This covers the case where the game installer wants elevated privileges and silently tries to install the modules. Obviously it will break the game, but maybe that will happen soon enough that one can request a refund for games that did not disclose the rootkit. Once these are set to 1 on a running system they become immutable, and the only way to get back to 0 is to disable that startup script and reboot. Your OS update tools should also be wrapped to check if this is enabled, warn you, and politely abort until it is unset. The failure conditions are not strictly binary: things may work, or appear to work, until the machine is bricked.
Related to this it may be worth installing something that does checksum snapshots of the filesystems to see if a game has tampered with system files. OSSEC, chkrootkit or even a cron job that just does this manually and runs diffs. While some package managers have this functionality they will usually ignore files outside of the package manifest that may get picked up by the system. Immutable off-system backups are of course good too.
    # Do not put these in /etc/sysctl.conf; run them from a startup script, or just before starting Steam.
    sysctl -w kernel.modules_disabled=1      # blocks loading of any further kernel modules until reboot
    sysctl -w kernel.kexec_load_disabled=1   # blocks kexec, which could otherwise boot a different kernel
I built a dedicated gaming PC a couple of years ago. Too much cowboy coding in the industry for me to feel safe running this code on my main computer. Even games for which I pay have supposedly* been scanning/uploading personal data presumably for some adtech purposes.
Why should I ever trust a gaming company to take security seriously? There was a story a few years ago about how one guy at home debugged GTA5’s atrocious loading times without any resources. Loading times which were notoriously bad and surely had a negative impact on revenue, yet nobody in the company could be bothered.
*Never verified it, but I recall the new owners of Kerbal Space Program were accused of reporting personal data files to the cloud.
They wrote their own json parser which used strlen() all over the place, which is O(n), resulting in O(n^2) complexity for json parsing. The guy shimmed the function to return a cached response if it was called with the same string consecutively, which it was for parsing the JSON. The JSON contained the items in the real-money store btw.
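To make the complexity point concrete, here's a toy reconstruction in Python (not Rockstar's actual code): if the parser re-scans the buffer for its length before every token, an O(n) parse becomes O(n^2), and caching the answer for a repeated buffer is enough to fix it.

    def scan_length(buf: bytes) -> int:
        # Walks the buffer until the NUL terminator, like C's strlen(): O(n) on every call.
        n = 0
        while n < len(buf) and buf[n] != 0:
            n += 1
        return n

    _cache = {"key": None, "length": 0}

    def scan_length_cached(buf: bytes) -> int:
        # The shim idea: if asked about the same buffer again, return the cached answer.
        if _cache["key"] is not buf:
            _cache["key"] = buf
            _cache["length"] = scan_length(buf)
        return _cache["length"]

    def count_tokens(buf: bytes, length_fn) -> int:
        # Toy "parser": one comma-separated token at a time, asking for the buffer length
        # before each step (the pathological pattern described above).
        pos, tokens = 0, 0
        while pos < length_fn(buf):
            nxt = buf.find(b",", pos)
            pos = length_fn(buf) if nxt == -1 else nxt + 1
            tokens += 1
        return tokens

    data = b"item1,item2,item3\x00"
    assert count_tokens(data, scan_length) == count_tokens(data, scan_length_cached) == 3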
Does anyone know whether disclosure of Denuvo and similarly controversial "add-ons" actually affects sales negatively? Maybe I am cynical, but I have come to the conclusion that whether it is always-online DRM, rootkit-level anti-cheat, or the need to have an account for offline play, community anger is often only maintained when a game had other things going against it from the get-go. Not against disclosing this of course, that is a great move for those who actually are willing to walk the walk; just asking whether we should perhaps temper our expectations about the impact of such a measure.
I can't figure out what that article is trying to prove. "When DRM remains uncracked, we can't detect any losses due to piracy." Well, duh. Does it otherwise affect sales? Do any small games use it, or just large studios?
> I could pirate every game I have on my Steam account.
According to the CrackWatch subreddit, there were 29 games released with Denuvo in 2024. Of those, only one has been cracked and it was done via a demo bypass [1].
You can pirate many games but not, for example, Final Fantasy XVI.
I could pirate every game in my collection but one, EA FC 24, which uses Denuvo.
It also runs very badly, brings my CPU to its knees, and can't keep 60 FPS with a $500 GPU. Maybe because of Denuvo, maybe not, but I will think twice the next time I buy a game with Denuvo.
Maybe I should clarify: I personally can see the value to corporations of having protections in place during the initial sales period, when these measures have been shown to make an impact. My comment was more pointed at the fact that of the people I personally know who are very much opposed to the use of Denuvo specifically, very few wouldn't buy a game they want because of its use, yet they still very consistently complain about the presence of Denuvo. Essentially, my point was that from where I am sitting, a large contingent of gamers complain about things without adjusting their behavior accordingly. I also feel (again, purely subjective) that the less someone complains about pre-ordering, the less likely they are to actually engage in pre-ordering.
That being said, beyond the first few months, I remain convinced that overly aggressive DRM does negatively impact game preservation, which is why I like the compromise some studios have started engaging in of removing certain DRM measures a few months post-release. I recently bought two racing games from my childhood on eBay as new-old-stock physical media. One of the two's aggressive and no-longer-maintained DRM schemes made my Windows VM unbootable, and I cannot access the game without relying on the work of pirates in circumventing it.
Also, I will point out that defending DRM as something that protects artists as you did doesn't fully track considering one of the uncracked Denuvo games in the list you linked is Hi-Fi Rush, an exceptional game and financial success that was critically acclaimed and made by talented creatives who are now out of a job [0], not because of piracy, but because of corporate mismanagement.
Whether and by how much DRM protects profits is something we could discuss for days, but I have yet to see evidence that it ever directly benefits the creatives you mention. Outside of corporate game studios, where one's job security doesn't appear linked to game quality or sales, few if any in the indie scene can afford solutions like Denuvo, so in the one place where developers could benefit directly from it, they can't.
Circling back to preservation, artists generally want to be able to learn from each other, and games outside the current generation can have immense value for that. Even, and sometimes especially, those games that are unlikely to ever receive a re-release (and re-releases often make changes from the original experience anyway). So I very much feel it isn't optimal if future generations of artists have a hard time accessing past media due to overly aggressive DRM measures that only protect corporate profits within the first few months past release.
Gabe Newell's opinion has zero value in that discussion: Steam is the de facto monopoly on PC, and when you make billions by not doing anything and taking 30% on every game, it's less of an issue.
The other funny thing is that Half Life 2 came out with full blown DRM that only decrypted when the game released.
DRM is not going away, because the extra power it provides can be monetized. Shareholders and investors want money at all costs. Ask anyone in any creative field: very few are rights holders. They have food on their table despite DRM and despite their rights being coerced from them.
I think a lot of the anti-DRM crowd (who aren't just into it for piracy) believe:
1. DRM works (or more precisely, it has gotten somewhat better at working over time).
2. It will proliferate to everything that can possibly have electricity in it.
3. In the long run this will lead to an authoritarian dystopia which will make modern China look nice by comparison.
By 2124, you will own nothing and you will be happy, or the Neuralink Assistant chip you were given as a kid will restructure your brain to "correct" this deficiency of happiness with your situation.
This is only half satire, I do truly fear this is the direction that improved information technology will move the political economy equilibrium.
It only works on Linux if the developer allows it, because it's not nearly as effective on Linux. Rust (the game not the language) uses EAC but doesn't run on Linux by choice for example. Neither does Fortnite. Apex Legends uses EAC and does run on Linux, and now nearly every public cheat for that game targets the Linux version because it's such a soft target.
I don't really like the status quo of installing random kernel-mode crap either, but nobody has a compelling answer for how to not make cheating absolutely trivial without it. Usermode anticheat barely does anything, serverside anticheat can only do so much, and the only other alternative is switching to console platforms which prevent cheating by giving the user zero freedom.
Still wondering what kind of special sauce Blizzard is using in Overwatch. In my literal thousands of hours of playtime I encountered so few blatant cheaters it's probably still in the double digits. Are there probably a good number of cheaters I didn't realize were cheating? Probably. But does it really matter if you don't realize they are cheating?
PirateSoftware on Twitch/YouTube talks about his time at Blizzard working on catching cheaters in WoW. Their methods are usually about figuring out how people are cheating and what behaviors cheaters follow.
Before Overwatch they had years of experience catching cheaters in WoW.
> game targets the Linux version because it's such a soft target.
I was going to say games on Linux should require secure boot so cheat kernels and modules can't run, but then the kernel could just lie about it being enabled.
Most Linux cheats don't even bother with kernel modules, a process running as root can read and write arbitrary memory in the game process without an unprivileged usermode anticheat having any way to know it's happening. It's embarrassingly easy compared to the hoops you have to jump through to maybe avoid detection on Windows.
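To illustrate how little an unprivileged usermode check can see on Linux: about the most it can do is ask the kernel whether anything is ptrace-attached to the game, and a root process using process_vm_readv() never shows up there at all. A sketch:

    def tracer_pid(pid: int) -> int:
        # Returns the PID of any ptrace tracer attached to `pid`, or 0 if none,
        # straight from the kernel's own bookkeeping in /proc/<pid>/status.
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("TracerPid:"):
                    return int(line.split()[1])
        return 0

    if __name__ == "__main__":
        import os
        print(tracer_pid(os.getpid()))   # 0 unless a debugger is attached; a root process
                                         # reading memory via process_vm_readv() leaves no trace here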
I suspect the only way that might balance everyone's interests would be to set up a separate OS installation for competitive games. This could be done via bare-metal dual boot, via a hypervisor, or just by having a completely different computer for playing games on (what I have). At least in that world you still have a lot more freedom than you do on console, such as the ability to mod games that don't need anti-cheat (which is almost all of them).
The problem is since Valve and Proton made windows games viable for Linux and the Steam Deck, most of that anti-cheat vermin does NOT work under Linux. Even if it did, if you run Linux, you likely take some objection to someone wanting to add kernel modules of unknown and/or ill repute to your pretty open-source kernel.
Valve knows this, kernel-level anti-cheat is simply not practical for use with Linux as a consideration. Most game companies care zero for Linux in the first place, which means for us, we just end up inadvertently boycotting those games and bad-mouthing them regardless, but hey, it's only 1%.
I think the end goal of Valve is to support anticheats on Linux. But they want the kernel to provide an API for it, so you don't need to run the anticheat as a driver.
But will a canned, defined API ever be good enough? As soon as someone paints a border, someone will step over it. It's the reason security products in windoze, as well as anti-cheats, require kernel-level access, and why outages like the CrowdStrike one a few months ago occur and why Microsoft lets it happen (for now).
It's an arms race, and no API will ever be good enough to keep a miscreant from working around logical choices. If I have to play a game in which I have to assume someone is cheating, I really don't want to play that game, or at least not with others of dubious reputation. This is why I run my own server for games I like to play with others I trust.
If someone wants to play competitively publicly with anti-cheats, they should opt-in to do so, but I'd like the option to not, and simply play local or private instances with my own general TOS. If diplomacy fails, a ban option for the server.
I think the population of game developers and their knowledge of multiplayer networking is fundamentally getting worse over time, because I see things that should not be architecturally possible in a lot of newer multiplayer games.
This whole anti-cheat thing is just a separate problem entirely, but it's so painfully exacerbated by the first.
The anti-cheat also goes hand in hand with the predatory business models of "always online" and micro transactions. Those things sell because of advantage over other players or just social factors in the case of cosmetics. Wouldn't be as relevant in an offline game. But now, since the game is online (for business, not technical, reasons), we need some way to keep everyone honest.
I'm just hoping this entire business model dies, along with the anti-cheat and everything else with it.
Strange take. These things are being put forward by major companies who hire very good engineers. Riot Games makes the most popular game in the entire world (League of Legends) and they use kernel-level anti-cheating. I interviewed with them and found their test to be one of the more difficult ones I’ve taken. I’m not under the impression they lack the necessary knowledge.
I definitely think it's just a business decision being made in some cases.
Your developers have just built and demonstrated a functioning multiplayer prototype. They want to spend 3 months to rewrite some of it for better security, and 3 months to implement the missing features and make it fully functioning. You just got off a call from a sales person for an anticheat vendor who gave you a strong pitch, so you say no to the first 3 months because it's cheaper to just add anticheat than to pay 3 months of salary on this.
Do kernel-level anti-cheat measures even work if I'm running Steam as a Flatpak + Using the game under Proton? I (naively, perhaps) assumed the security sandboxing model of flatpak would restrict that level of access.
If you're running under proton, it can't work. Proton/wine are not virtualizing a windows kernel, they are intercepting syscalls/library calls and running the equivalent linux code.
Some anti-cheat has clients for Linux (the ones that don't generally just disallow playing on Proton). I don't think the Linux ones are kernel level but don't quote me on that.
This is the war. It's always been the war. It will always be the war. Digital changed the medium but war, war never changes.
The war is unwinnable in any real sense of the word "win". However, security does not need to be impenetrable; security only needs to dissuade the attacker.
Kernel level, blah-blah-blah, doesn't dissuade cheaters. Those things dissuade legitimate users. It's never the ideology that dissuades those users though as they don't know or care. What dissuades these users are the difficulties that these systems present to the uninformed user.
The typical end user doesn't know how to 'fix-it' when things go wrong. PC vendors won't support the issue. The game publisher won't support the issue. The game developer rarely supports the issue. Kernel level blah-blah-blah causes a blah-blah-blah. Nobody wants to hear it. Nobody wants to fix it.
And, to top off this defecation-confection, the user is left with software that they paid for and cannot use or access. No refunds. Sorry. And, and, and!!! There are still cheaters on the platform. Every platform. There's your f'n cherry.
This is Microsoft continuing to demonize free software and Linux. If they actually cared, they would support an open source solution to the problem. SOMETHING THEY ABSOLUTELY OPPOSE. That is the core issue. Say it over and over, Microsoft _DOES NOT WANT A SOLUTION TO THESE PROBLEMS_.
Good. I absolutely refuse to compromise my system by using these things. Games should be required to let people know what they are signing up for.
And if that means more companies choose to avoid kernel anti-cheat, so much the better. I'm still mad that I can't play Helldivers 2 - a freaking co-op game where cheaters can't pose a problem - because of this nonsense.
I still hope someday the European Union forces Steam to allow transferring of games "owned", even if it's time-restricted (e.g. can't transfer the same game twice in a month)
I know, that's why I added quotes around "owned", so in other words what I meant is that the EU should force Steam to create the option to transfer that license among its own users.
Yeah, but I can just assume that this would also apply to e.g. Microsoft Windows licenses, and that Microsoft lobbies strongly against such a law (as does every other vendor who locks software licenses to a particular end user or licensee).
Note that while I would very much welcome such a law, I wouldn't bet on it happening any time soon.
After the CrowdStrike disaster, third-party kernel drivers need to be shunned for non-critical applications.
Game publishers have been bad actors in this space for a long time now. The Genshin Impact anticheat was used in a malware campaign. Rockstar was very misleading trying to imply their kernel driver not being compatible with the Steam Deck was Valve's fault.
Getting a Steam Deck has done wonders for my peace of mind. I don't need to worry whether the games I'm installing are malicious, because the machine is airgapped from anything critical.
Ultimately, this is why we have consoles. We can have rootkits, or we can have cheating. Nobody has solved cheat prevention without rootkits; if you could, you'd make millions, if not billions. It's not like the game creators want to have software on your machine that has the potential to brick your system.
The real solution is games designed for playing with friends that treat all non-friend players as potentially malicious.
Early first-person shooter games had this figured out (small servers with 20-30 regular players, the server admin could choose to ban you), RTS games have this figured out, many MMOs have this figured out (interact with non-friends sometimes, but they have to 'join your party', etc.)
Playing with random strangers on the internet who may want to grief/destroy your game, be incredibly toxic, or cheat against you in general.. that's the cost of playing with random people in a completely public forum.
But people largely want matchmaking. They don't want to deal with having to find a server of like-minded players, they want to hop in a lobby with maybe a few friends, pick a map pool, and go.
Nah. Consoles were a decade late to the online gaming party, and online gaming on consoles (counting Xbox Live as the first concerted attempt) has only been around half as long as consoles as a product segment have existed.
Running games in a VM appliance or an immutable container type of environment could be neat. Or some kind of hardware device. Like a console on an expansion card that could enable a secure environment while still letting you use your hardware.
This is a false dichotomy. Genshin is single player. Some people play multiplayer only with friends. The only legit use for anti-cheat is competitive multiplayer with strangers.
Not sure if you're referencing it but there was a recent scandal where it was suspected someone playing against Magnus might have had a wireless butt plug to enable some cheating...
The sibling comment makes a point about anonymity, I find these discussions interesting in comparison with the only online competitive game I play these days. It's Tekken, and neither the current rendition nor the previous one had any real form of anti-cheat. For the current Tekken 8, supposedly some players have been banned after manual review from the company of replay data, which of course doesn't scale. But at the same time it doesn't really matter. Cheaters don't seem to be that prevalent, their ability to spoil the experience of a match is limited by the fact that matches are short, and people can spoil the experience in non-cheating ways like plugging, lag switching, using a weak computer, and for some sensitive players they'll get unreasonably upset by ki charging/teabagging/taunting/continuing an attack after KO. The status of the highest rank is also not that much -- the most status comes from performing well at the big in-person tournaments, where it's going to be harder to cheat and players are somewhat de-anonymized. If the positive incentives to cheat are minimized in the first place, you don't need so many negative incentives like rootkits.
(It always amazes me how custom controllers and even keyboards are allowed in fighting game tournaments, officially certain macros are banned and at least for Street Fighter certain modes of leverless controllers got banned, but it'd be hard to perfectly enforce. And it's been hilarious to see the increasing use of fake buttons or controller-hiding covers/jackets because it was assumed some players were able to see inputs out of their peripheral vision before they were registered in-game and adjust.)
Hmm, here’s a thought I’ve never had (but might be obvious to others).
Could I run windows as a VM guest under Linux and play Fortnite in that (with good GPU performance)? I don’t mind their rootkit running on some dedicated VM - I’ll just consider it my Fortnite unikernel.
(I’m also ok with the host OS being Windows or MacOS).
Running the game in a VM gives the host the ability to read/write arbitrary guest memory without [even rootkit] anticheat being able to detect it, which can facilitate cheating, and therefore can earn you bans. The whole point of the rootkit is that the game can confirm that you don't have any way to read/write arbitrary memory.
Isn't Windows running under a hyper-v hypervisor these days anyway?
In practice, I'd settle for a peer Windows OS, like the WSL2 kernel, with the rootkit separate from my main work one. Can I run two copies of Windows simultaneously as peers?
If you've already put a piece of hardware into your computer made by nvidia, installing a kernel driver also made by nvidia does not increase your risk at all.
Installing some random anti-cheat kernel driver is not the same thing, at all.
But you are not installing a random anti-cheat kernel driver, you're installing anti-cheat kernel driver provided by a game you've already put on your computer. It's very much the same thing.
User space applications can't access hardware or physical memory. They can't bypass permissions enforced by the OS. None of that applies to hardware or kernel drivers.
> This isn’t giving us any surveillance capability we didn’t already have. If we cared about grandma’s secret recipe for the perfect Christmas casserole, we’d find no issue in obtaining it strictly from user-mode and then selling it to The Food Network. The purpose of this upgrade is to monitor system state for integrity (so we can trust our data) and to make it harder for cheaters to tamper with our games (so you can’t blame aimbots for personal failure).
Where did I say they are the same? We have a kernel-space thing (anti-cheat or GPU driver) and a user-space thing (a game, which actually talks to both) that talks to a kernel-space thing.
I understood that you were making an analogy between installing a piece of hardware and its associated kernel driver with installing a game and its associated kernel anticheat.
When you install a hardware device you are trusting the manufacturer with full access to your machine, so installing a driver does not give them any more powers. You have already "unlocked the door".
When you install a game that runs on user space you are not trusting the vendor nearly as much as you are trusting a hardware manufacturer. Installing a kernel anti cheat is granting them a level of trust and access to your machine that they didn't have before.
> Most people do install Nvidia’s out‐of‐tree graphics driver
Most people that use Nvidia. I specifically don't buy Nvidia graphics cards or laptops that use them in my Linux computers because they're not in-tree.
- This is an abnormal case. Most hardware will work with in-tree drivers. Indeed, few vendors provide out-of-tree drivers for Linux.
- Nvidia is an established and reputable source. We aren't talking about some small hardware developer who doesn't have the resources to create secure drivers.
- Most Nvidia cards have in-tree drivers. There is a loss in performance, but the option usually exists.
It's a risk, but a very minor additional one - if you trust their hardware with direct access to your PCIe bus, you have already given them the metaphorical keys to the vault.
Can't wait to find out what China hid in Riot's Vanguard rootkit for all their games. It's 100% a conspiracy theory, but nobody can convince me it's perfectly clean, or if it is, that there isn't an easy way to add some power to it quietly.
China's national security assistance law came up in the TikTok hearings. There's no reason to believe that the CCP doesn't have the legal authority to compel Riot to push an update with a backdoor to a few select high value targets.
If it is written in C you can always introduce a buffer overflow or something similar by just adding a little bit of line noise here or there and nobody can prove it was deliberate.
The vanguard drivers are signed by Microsoft, the procedure for which includes a safety audit by Microsoft.
The driver is just what the developers say it is (as with all other anti-cheat). It provides a tamper-resistant interface for the userland anti-cheat to use to get info from the kernel, because modern cheats tend to alter the output of kernel syscalls by running in the kernel themselves.
I really don't see why anyone needs to think it's anything more than that.
If Tencent needed to spy on you so badly there's no reason kernel anti-cheats need anything to do with it...
It says something about Microsoft when they OK a known harmful bootkit that expects your computer to act like an Xbox with a fancy keyboard (but not too fancy), requests invasive changes to UEFI that have broken systems, and has an overall opacity that rivals an Arthur C. Clarke Monolith.
MS usually doesn't bother with a real driver audit... they mostly rely on the EV certificate to check that the driver dev is a proper legal entity.
If they audited properly, they would not have let the Asus AuraSync driver get certified in the first place (it basically exposes port I/O instructions to every userland app, unrestricted).
EAC and other kernel-level anticheat software will dynamically load and execute signed payloads at runtime. Does Vanguard do this? If so, does Microsoft check these payloads?
If I wanted to deploy a trojan horse then the last place I would try to hide it is in an anti-cheat driver that will without any doubt be exhaustively analysed by people attempting to bypass it.
But there are also parties with a big interest in circumventing these protections, and they have done so for decades. The new release of RDR for PC (shamefully asking $50 for a 14-year-old game) was cracked within days of its release, if not earlier.
There's a ton of gamers that like to figure out how the game itself works. There's a ton of them trying to figure out how anti cheats work, sometimes to cheat, but more often because they're curious, resourceful teenagers taking it as a challenge.
Oh, I know. That's how my career was started. I made invitational in CS: Source (CAL) and then sold cheats to pay for college. My first Real Job was through a teammate.
Far more people would have accepted a RAT and been relieved of their money than ever expressed genuine interest. Some did... not many. Most wanted the acclaim without the effort.
How much shit, though? And how does it compare to the risk profile of, say, not wearing a five-point seat belt and a motorcycle helmet while driving, or a bulletproof vest when going to school, or an N95 mask literally everywhere?
Security theorists are always ready to tell us about the horrifying risks of installing kernel-level code from a vendor, but can they actually quantify the likelihood times damage those billions of installations have inflicted on Joe Random's life?
And contrast them to other risks that we regularly take in the name of comfort and convenience?
Funny that you initially used "Joe Ransom" as your example name (before your edit), as that describes one of the possible situations our friend Joe can end up in: malware that encrypts all his data and asks for a ransom to get it back.
I'm not really that interested in chasing this, but a point I do want to make: it isn't just risk.
If you want to participate in a lot of these multiplayer games that rate cheating prevention far too highly, you can't use a hypervisor. You must have a gaming device and a computing device. They cannot be the same.
That's fine for most, but I consider it shit. VFIO makes it possible for a big computer to host a smaller gaming one. Ask me how I know.
My greater point is I don't care if I get cheated out of a finals match. I can actually speak from experience. I prefer autonomy over my devices. I kind of want to eat poop with them. A little.
What do you mean? They burned several high value 0days on a high value target. Why wouldn't China burn a high value backdoor on a target they deem valuable enough.
I mean, they're not rootkits. Rootkits are either to gain root access (thus the name) or to hide something from a user. Anticheats don't do either of these.
They expose a kernel API to allow games to verify the state of the system, and they're knowingly installed by the user.
I'm already counting down the days for eBPF to blow up in our face.
But admittedly, it's the cheapest way of gaining more capabilities and privileges than you need, thus it's here to stay.
That's not really possible as long as the kernel allows the loading of arbitrary user-provided modules, because the cheater will certainly run a cheat that requires kernel mode, and if that runs in kernel mode, the API call can be intercepted.
How does the anticheat work then? Core Wars: it's a cat-and-mouse game between the cheat provider and the game developer.
One would need a secure base layer, where the MS anti-cheat also lives, and all drivers could only run in a layer between that base layer and userland. I think that's already done for most of the graphics stack.
On the other hand, I am not convinced I want a system where I cannot load arbitrary kernel mode code if I choose to do so.
Riot Games uses theirs (Vanguard) to improve detection of cheating software. Basically, the idea is that by being on from the moment the computer boots, it can validate the environment better.
Here's a recent blog post by Riot detailing their deployment of the system for League of Legends, the biggest online multiplayer game in the world.
> The Genshin Impact anticheat was used in a malware campaign. Rockstar was very misleading trying to imply their kernel driver not being compatible with the Steam Deck was Valve's fault.
I mean, nothing of this is new. ESEA, one of the most influential esports leagues, was caught using its anticheat to mine Bitcoin in 2013. [1] This is long out of control, probably since the days BattlEye switched to ring0 in 2012 due to chronic cheating in the DayZ mod, or maybe earlier. Modern anticheats are full-fledged rootkits with extremely complex and targeted payloads siphoning customer data and hijacking all sorts of stuff, and that's not a theory, they actively abuse players' trust and indifference.
If you care about your data and the control of your devices, you should probably avoid them entirely, or at least use them on dedicated gaming PCs on a clean identity, and keep them separate from your LAN and your non-gaming digital life.
I think it's fair to say that a lot of users have no idea they're doing so, which is why changes like the one in TFA are necessary to encourage transparency around these practices.
I've run the installer for Valorant. I don't remember it telling me it was going to run code in ring 0, and I would likely have ended the install there had they listed any of the downsides.
For most gamers you'd have to invoke CrowdStrike to explain what's happening. They play games, they don't study CS.
I agree, we need something that emphasizes that it executes undesired functions. "Trojan horse" would fit better, but that's associated with computer viruses now. I think I would call it something like "traitor software": it generally performs the functions you installed it for and pretends to be normal software, but when you aren't looking it betrays you.
It's quite literally not. Root is technically a user with extra rights (including modifying the kernel, but there is still an API the root user has to go through). This is running as part of the kernel. It's not running in userland "as root".
A rootkit is something that gives other users the power of root.
Crowdstrike isn't even the worst case. The SolarWinds disaster is the worst case scenario.
You have a closed-source rootkit designed for finding data in raw memory (like passwords from an unlocked password manager), loaded onto the machines of many gamers, many of whom are software engineers. Some anti-cheats explicitly support arbitrary remote code execution by design. Many people mix their personal password vaults with their company's, which means that if you successfully hack an anticheat company, and you can read the raw memory of an opened password manager with a program that is already designed to scan all processes' memory, you now potentially have extremely valuable credentials. A small portion will even do things like add their 2FA keys to their vaults.
Of course the other problem is the 23andMe problem and enshittification. Even if the data uploaded by anti-cheat isn't used right now, the storage of data alone creates an incentive for abuse.
Something slightly related happened recently: a bit of malware distributed as a mod for BeamNG was installed by a high-up Disney employee who was also logged in to some internal work stuff. The hackers were able to leak huge amounts of company data.
If they worked to any acceptable level of efficacy then they could be tolerated. They're only tolerated by people who think they work as well as they claim to (security theater); anyone who knows about the performance impacts and/or is tech-savvy enough to understand it is a rootkit and a potential exploit (one that would fully pwn your device) hates them.
Some cheats are getting rather sophisticated now. There's an ever-increasing number of Pi-devices where the cheating is done externally.
They're also chosen by users when the game is filled with cheaters. Counter-Strike 2 is an example of this, with players moving to FaceIT and ESEA (with kernel anti-cheat) as the higher ranks of official competitive matchmaking fill up with cheaters.
Proven by whom, and with what proof? Denuvo is the only one outspoken about how it doesn't impact performance despite all evidence to the contrary, and they provide no evidence of their own beyond claiming it doesn't, then saying they'll prove it doesn't, and then backing out of proving it.
Look at what Apple has done in recent years: kexts (kernel-level drivers) are basically all but unsupported today, with DriverKit moving what used to be IOKit kexts fully into userland.
Is it fun to be a non-cheater, and join a multi-player game where there are other players using software cheats that let them easily beat you every single time?
I'm pretty sure I would quickly stop playing that game, and demand the publisher refund my money. That's just not fun.
And that's just as a casual gamer. For people who compete and win prizes, endorsements, etc., the stakes are a bit higher.
I'm not saying kernel-level rootkits installed on everyone's machine is the answer, but letting people cheat isn't going to work either.
Community-run and moderated servers easily fixed this issue decades ago. Maybe video games should be fun centers of community again, instead of maximally isolating and atomizing Skinner boxes designed to make children addicted to endlessly practicing and competing at worthless skills so that the sunk cost keeps them buying loot boxes.
Well, the problem is eventual consistency, and these games have a hell of a time consolidating it properly.
One user is on a connection with 10ms latency, the other user is on 50 ms latency. Now, if first user does something, and second user can either do something to evade or can do something that actually prevents the first user from acting, how do you consolidate that?
The actual timestamp of when exactly what happened helps immensely, but you have to trust the timestamp. And how can you know that is not manipulated?
But... that's just the surface. Consider: one client uses rendering settings that take 25 ms longer to show a frame, while another client doesn't render textures/shadows etc. That second client is faster, and even if it sends "official" response times, it still has an advantage.
So, I am not sure this can be solved serverside. But... I don't play these games anymore and would never opt for a rootkit to be installed just so I can play. I can imagine plenty of people, though, who would.
Remember that you don't need perfection: you need people to believe that they're likely enough to get caught that they don't want to use a pre-canned cheat, and you need just enough cheat detection mechanisms to make it hard for people to make new cheats. Not all of that has to be technological: you can spread rumours that your cheater ban waves are bigger than they actually are, for example, and that'll keep more people from even trying in the first place.
You don't have to trust the timestamp - and you shouldn't. You can use a bunch of methods to go from untrusted to grudgingly accepted: requiring monotonicity means cheating clients have to be permanently slower rather than selectively slower. Having tolerances for out of order packet rates or accepted deltas before discarding player actions will have some false positives for players on terrible networks, but will also reduce the impact of any possible timestamp-related cheats.
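A small sketch of that policy with assumed thresholds: clamp any timestamp that runs backwards (so a cheating client can only ever make itself look slower), and only treat the client as suspect if out-of-order packets exceed a tolerated rate over a window.

    class TimestampGuard:
        def __init__(self, max_ooo_ratio=0.02, window=500):
            self.last_ts = None
            self.seen = 0
            self.out_of_order = 0
            self.max_ooo_ratio = max_ooo_ratio   # tolerated fraction of out-of-order packets
            self.window = window                 # packets per evaluation window

        def check(self, ts):
            # Returns (accepted_ts, suspicious). Backward timestamps are clamped, enforcing
            # monotonicity; honest players on bad networks just lose a few stale packets.
            self.seen += 1
            if self.last_ts is not None and ts < self.last_ts:
                self.out_of_order += 1
                ts = self.last_ts
            self.last_ts = ts
            suspicious = False
            if self.seen >= self.window:
                suspicious = self.out_of_order / self.seen > self.max_ooo_ratio
                self.seen = self.out_of_order = 0
            return ts, suspicious

    guard = TimestampGuard()
    print(guard.check(10.00))   # (10.0, False)
    print(guard.check(9.95))    # clamped to (10.0, False): the client can't rewind time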
It can't be fully solved server side, not without sacrificing acceptable performance. I reckon it can probably be dealt with enough on server side to keep cheating to a tolerably low level. It's probably cheaper to just license a windows rootkit though.
You might be able to matchmake between clients with similar latency and then "enforce" that latency server-side by delaying things that "happen faster" than the previously measured latency.
No, this implies that actions are in response to something. This is not true. I can shoot my gun at any time, and even randomly. It does not depend on an opponent starting to move.
> (I’d still lean towards expecting game houses to find another way, kernel drivers are still client side trust mechanisms).
Well, this problem simply can't be solved server-side only. The client side can't be validated without a rootkit (and even then it's not enough, though it is enough to deter the majority of cheaters).
Keeping cheating to a low enough level that players don't quit in frustration (or never start playing due to bad press) is critical. Eliminating it entirely is not.
Valve added vote kicks to CS to help keep cheating (and other antisocial behaviours) under control - it seems pretty important to them.
I think the point is that competitive multiplayer games are not critical. Scripting in e.g. League of Legends probably doesn't register on 99% of humanity's "top 100 most critical things in my life" radar.
Back then you could just quit the server/match if somebody was obviously cheating (or they got banned).
With competitive matchmaking cheaters can hold players hostage until the end of the match, as leaving incurs penalties and cooldowns that temporarily ban you from playing.
There are also cheaters on old games (Modern Warfare II (2009)) that will inject code into your client to disable the quit menu, so you have to dashboard. I can't imagine what psyche someone must have to not only cheat, but force people to play against them.
Because those were community servers, often built around an actual community. There weren't a lot of them either.
If admins allowed cheating, people who wanted to play would leave the server.
If you live in a non-metro area, you probably have only a handful of servers your latency allows you to play on, so getting banned would be a big deal.
Now you just click "play game" and get matched with strangers you might never play with again. Financially, those privately hosted servers no longer make economic sense for game publishers.
Because games were less common. If you look at community-hosted servers now, they commonly have more anti-cheat, not less: Counter-Strike with FaceIT and ESEA. Even FiveM for GTA V rolled out a custom anti-cheat before it was added to the official game.
Personally I find both unacceptable: I won't play a game that requires me to install a rootkit, and I won't play a game where cheaters and bots run rampant, ruining the fun for everyone.
So hopefully there's a solution to this that doesn't require a rootkit.
They have problems because they're cheap and don't want to pay to host servers. They don't want to let people host their own authoritative server either because of the $billions in fake money.
Yeah life sucks when everything and everyone has to be untrusted (applies not just video games).
The solution is to build trusted spaces again IMO.
For video games, assume that each user is trusted by default. As soon as they violate that trust by cheating, they are banned permanently for that copy of the game. If they want to be trusted again they have to buy another copy of the game to get another license. Make it hard to become a member of a trusted community and easy to be kicked out of a trusted community for violating trust. This would eliminate the vast majority of cheating and bots, because most gamers are kids and having to buy a fresh copy will hit hard. If they abuse it enough, make them jump through more hoops like IP bans and computer-fingerprint bans.
This is a naive take. Of course these developers already permaban cheaters. Firstly, many of these games are free to play, so "getting another license" is a non-issue. They're doing hardware bans nowadays, which are harder to avoid but not impossible.
Half the battle is detection though. If you don't detect cheaters quick enough they ruin enough games that genuine players start getting frustrated and leave. Anti cheats help with this detection.
Probably every anti-cheat idea you can think of, in terms of detection, prevention, and punishment, has already been tried by a large online multiplayer game. It is an extremely difficult problem to solve, a constant arms race.
It's not possible to completely solve this problem with technology.
High level chess players (GMs) can win with just a few bits of information transmitted to them by a cheating accomplice (a cough if it's a critical position to spend extra time on, etc). Similarly, high level gamers only need the slightest of edges to win, and therefore only need the slightest of cheating.
That's why I think trusted user bases are the way to go. My initial ideas were naive, but I think the core idea is solid. If you had to pay $1000 to enter a "trusted club" which uses your hardware fingerprint, and all of your online interactions in a game were guaranteed to be with other people who paid $1000 to be in the club, would that not be a large deterrent to cheating?
It's going on a tangent, but one naive take which continues to amuse me when it comes up is community/third-party servers as the answer to policing cheating, as though delegating that responsibility is the goal, or as though it would scale to the size of modern playerbases given the ratio of admins to players needed to monitor and respond to (alleged) cheaters.
But as gaming has grown and become more mainstream, the ratio of enthusiasts who are willing to admin to casual players who don't has changed. Server sizes have changed over time with smaller games like 5v5 becoming way more common.
False positives would very much hurt in that model. But returning to a small multiplayer experience with chosen friends would work: the in/out decision is local and personal.
Talking just about games, this really doesn't work with free games. Even if there is a lengthy 'lockout' period from the real game, many games have rampant markets of cheap accounts for sale, and that makes the game experience worse.
They had the balls to add a mandatory kernel extension to a game that I bought 10 years ago and that I wish to play in single player only.
I find it utterly ridiculous. As usual, piracy would have been the superior experience.