The exact same thing happened to me with League of Legends. I was inexplicably banned for cheating, despite never having done any such thing (and despite regularly playing on three accounts, which is fully permitted; the other two were not banned!). Their support people repeatedly said "we reviewed your case and the ban is correct", etc., all the while giving zero information about what I supposedly did so I could correct it. I have a couple of the rarest skins in the game and have played thousands of hours since 2009. I only play ARAM, so the suggestion that I was risking an account of great sentimental value by cheating at the most casual mode in the game is beyond ridiculous. Anyway, nothing in gaming has ever stressed me out more. I got unbanned solely because a contact in the industry had it looked into, and the ban was inexplicably lifted. I still play, but I think about the false ban almost every time, and League will probably be the last competitive multiplayer game I ever put any time towards. Part of me doesn't want to play it anymore because I dread that happening again. :(
I got a false permanent ban as well. Despite the fact that cheating is damn near impossible on consoles, and the fact that I worked way too long to get to an absolutely mediocre rank (gold 1) on ranked play, and the fact that I had never even had a warning or complaint for any behavior whatsoever, they permanently banned me with no explanation.
Unlike the blogpost, I just decided I would never spend any money on an Activision product again. It's what everybody should do.
>>Despite the fact that cheating is damn near impossible on consoles
Unfortunately, aim assist devices for consoles are very widespread now and a big problem for competitive gaming.
>>I had never even had a warning or complaint for any behavior whatsoever
That's the gold standard in the industry though: you don't warn (suspected) cheaters, so as not to give them an opportunity to adjust their tactics. Sorry you got caught by this unfairly.
> That's the gold standard in the industry though: you don't warn (suspected) cheaters, so as not to give them an opportunity to adjust their tactics.
Is this supposed to do any good? The actual cheater is still getting a signal that they've been detected, because they get banned. Then they figure out how, make a new account and go back to cheating.
Meanwhile the normal user is both confused and significantly more inconvenienced, because their rank etc. on the account you falsely banned was earned legitimately through hard work instead of low-effort cheating.
>>The actual cheater is still getting a signal that they've been detected, because they get banned.
So... yes. But there are mitigating tactics around this; I really recommend looking into it, because it's a fascinating topic. The simplest one: you don't ban cheaters the moment they are detected, so as not to give away how you detected them. That's why Activision bans people in waves, all at once, even though they know some people are cheating and still active. Unfortunately, a lot of people are paying for cheats nowadays, and the cheat makers usually have some kind of refund policy where you get your money back if you get detected. Game companies want to inconvenience those buyers as much as possible, so you can't claim your refund straight away, because hey, the game worked for a good while even while you were cheating, must have been something else :P
>>Meanwhile the normal user is both confused and significantly more inconvenienced
Yes, which is why the aim is to have 0 legitimate players getting caught by this, obviously.
The intent is usually to gather data then ban in waves. If a new tool comes out and you ban a couple of players the tool authors might figure out why and update it. Let it sit a while and you can get hundreds/thousands of players who get a message to rethink their choice to cheat.
An additional benefit is that this can include multiple cheat programs and versions in one ban wave, so it may be harder to narrow down exactly what the flaw was. That's the reason for no warnings (or explanations); false positives, and recourse if mistakenly flagged, are another matter entirely.
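A toy sketch of that wave logic (every name here is hypothetical, not any real anti-cheat's API):

    import time

    class BanWaveQueue:
        def __init__(self, wave_interval_days=14):
            self.pending = []  # (account_id, cheat_tool) detections
            self.wave_interval = wave_interval_days * 86400
            self.last_wave = time.time()

        def record_detection(self, account_id, cheat_tool):
            # No immediate ban: acting instantly would tell the cheat's
            # authors exactly which build/version got detected.
            self.pending.append((account_id, cheat_tool))

        def flush_if_due(self, ban_fn):
            if self.pending and time.time() - self.last_wave >= self.wave_interval:
                # One wave mixes detections of many tools and versions,
                # making it harder to narrow down which flaw was found.
                for account_id, _tool in self.pending:
                    ban_fn(account_id)
                self.pending.clear()
                self.last_wave = time.time()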
> This ban also ruined other games for me. If I ever did well in a game, someone would look at my profile to see how many hours I have and instantly see the red marker that shows “I am a cheater”.
I wonder if that label could be considered libel. Probably harder in the US, but from what I understand, in the UK (or just England?) the defendant must prove that it's true.
Holy ….. what a fight you had to put up. So glad I hardly play any multiplayer shooter games. I'd hate to have my insane Steam library stripped away from me.
Interesting stuff! Though I don’t get why b00lin would have to prove that they weren’t cheating. This is not a criminal case, but still. Activision was denying access to a service that was paid for.
Honestly I'd prefer it if games could permaban based on just heuristics and the EULA simply stated "tough luck, buy the game again". I'd happily pay for that, knowing my money is at least not going to some 2 year legal fight.
I get that I might be the one accused of cheating next time. But if that risk is tiny and the cost when it happens is $50 or $100 it sounds a lot more attractive than the alternative.
Also (obviously) I don't care about the account itself. I wouldn't play a game where I aggregate long term stats/items/status/whatever.
In a perfect world you just have private servers where you can have 90% effective anticheat and have humans sort out the rest.
I think stat-based bans are the ultimate solution to all the client-side bullshit.
If you use statistics, you will sometimes get it wrong, but in the other cases the cheaters are completely out of luck. You could offer the source code to your game willingly and it wouldn't help them very much.
If the cost of a false positive is $50 for the gamer and the chance of it happening is rare, I think many would quickly understand the value proposition from a game experience perspective.
Assuming your false negative rate is low (i.e., you have high classification margins), you can make it extremely undesirable for players to engage in unfair play. Even soft cheating like aiding teammates with streaming and Discord side channels could get picked up by these techniques.
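A toy sketch of the statistical approach (grossly simplified: a real system would combine many features, and all names and numbers here are made up):

    from statistics import mean, stdev

    def flag_suspects(headshot_rates, k=6.0):
        """Flag players whose headshot rate sits k standard deviations
        above the population mean. A large k keeps false positives rare,
        at the cost of missing subtle cheaters."""
        mu = mean(headshot_rates.values())
        sigma = stdev(headshot_rates.values())
        return [pid for pid, rate in headshot_rates.items()
                if rate > mu + k * sigma]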
Nah, that won't do it. Even if the false positive rate were rare overall, it would be significantly higher for players whose profiles resemble the ones that trigger bans.
It would be even worse than the bans some developers hand out now, because their inherent randomness would be essentially just that: random. Not acceptable for any form of service.
There needs to be a law against taking away product functionality after the sale, even when it's contractual/EULA. A ban should never take the game away from the owner, and in cases where it does, the owner needs to be refunded (treble damages on top of license, lawyer, and court fees if it takes a judgment to induce the refund). Getting banned on Steam, say, in the sense that all of one's purchases are invalidated, should be legally impossible.
In cases where an account is prevented from logging in, items and inventory must still be accessible for trade, as those represent real time and effort put in by a paying customer.
Want to enforce your code of ethics in a multiplayer game? Then you can't charge for the game, or users legally have rights against bans; bans must follow a proportionality continuum, and you must have a human-attended, cost-capped (at license cost, and only on loss) appeals tribunal system with a record.
Cheating will not get you banned on Steam though; at worst your account is publicly shamed if it's a VAC game.
People play multiplayer games to have fun and interact with others. If you behave badly, be it cheating or otherwise, you should be banned from using the multiplayer service because your behavior impacts other people.
Cheating is ultimately a human problem. You can have some safeguards and heuristics like the ones the article describes, to weed out the 90% most blatant cheaters, so I think anti-cheats like these are fundamentally a good thing. But the anti-cheat can and should err on the safe side, because ultimately it should be the players and admins themselves who sort this out.
Online multiplayer games must (yes must) take place on servers with human admins. Admins should be present for a majority of the time any players are playing.
Ideally with admins the players recognize. Bonus points if players themselves can perform some moderation when no admin is present (votekick, voteban etc). There is no difference between kicking cheaters and kicking people who are abusing chat etc. Obviously this means that "private" or "community" servers are the only viable types of server for online multiplayer games.
This process of policing cheaters and other abuse can not be something that is done via a reporting system and handled asynchronously. Kicking/banning must be done by the admins of the game, and it must be handled quickly.
If you are considering buying/playing an online multiplayer game and it doesn't have this functionality (e.g. the only way to play online is via matchmaking on servers set up by the publisher, and the only way cheaters and chat abusers are policed is via some web form) then please, avoid that game. Vote with your wallet.
I'm very curious about the jump obfuscation. Maybe somebody who's done more reverse-engineering can answer this for me:
a) Are unconditional jumps common enough that they couldn't be filtered out with some set of pre-conditions?
b) It seems like finding the end of a function would be easy, because there's a return. Is there some way to analyze the stack so that you know where a function is returning to, then look for a call immediately preceding the return address?
Apologies if I'm wrong about how this works, I haven't done much x86 assembly programming.
There are some other cool tricks you can do, where you symbolically execute using angr or another emulator such as https://github.com/cea-sec/miasm to perform control-flow-graph unflattening (a minimal angr sketch below). You can also use Intel's PIN framework to do some interesting analysis. Some helpful articles here:
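A minimal sketch of the angr side, assuming a hypothetical binary and address; this only shows stepping past an obfuscated jump to recover its real successors, not a full unflattener:

    import angr

    # Path and address are made up for illustration.
    proj = angr.Project("target.bin", auto_load_libs=False)

    # Start a blank state at the obfuscated jump, single-step it, and let
    # the symbolic executor compute the real successor addresses.
    state = proj.factory.blank_state(addr=0x401000)
    simgr = proj.factory.simulation_manager(state)
    simgr.step()

    # The real control-flow targets, regardless of how the jump was encoded.
    for succ in simgr.active:
        print(hex(succ.addr))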
1. Some jumps will be fake.
2. Some jumps will land inside an instruction. Disassemblers and decompilers can't represent two instructions at the same location: with something like jmp 0x1234, where 0x1234 points into the middle of an already-decoded instruction, you have to skip the bytes as first decoded and assume 0x1234 is the start of a valid instruction (see the sketch after this list).
3. The stack will be fucked up in a branch, but that's intentional, to cause an exception. So you can nop out an instruction like lea RAX, [rsp + 0x99999999999] to fix decompilation, but then you may miss an intentional exception.
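To make the overlapping-instruction trick in (2) concrete, here's a tiny capstone sketch over hand-made bytes (a contrived example, not from any real binary). A linear sweep decodes a bogus call, while following the jmp reveals the xor eax, eax / ret that actually executes:

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    md = Cs(CS_ARCH_X86, CS_MODE_64)
    # eb 01: jmp to offset 0x3, i.e. into the middle of the next "instruction"
    # e8 ..: a bogus call opcode that swallows the real bytes in a linear sweep
    code = bytes.fromhex("eb01e831c0c390")

    print("linear sweep from 0x0 (what a naive disassembler shows):")
    for insn in md.disasm(code, 0x0):
        print(f"  {insn.address:#x}: {insn.mnemonic} {insn.op_str}")

    print("following the jmp to 0x3 (what actually executes):")
    for insn in md.disasm(code[3:], 0x3):
        print(f"  {insn.address:#x}: {insn.mnemonic} {insn.op_str}")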
IDA doesn't handle stuff like this well, so I have a Binary Ninja license, and you can easily make a script that inlines functions for its decompiler. IDA can't really handle it, since a thunk (a chunk of code between jmps) can only belong to one function, and the jmps will reuse chunks of code between each other. I think most people don't use it because there was a bug with Binary Ninja in Blizzard games, but they fixed it in a bug report a year or so ago.
This video[1] on reverse-engineering parts of Guitar Hero 3 covers a few similar techniques that were used to heavily obfuscate the game code that you might find interesting.
A function with an unlikely slowpath can easily end up arranged as
        top part
        jxx slow
        fast middle part
    end:
        bottom part
        ret
    slow:
        slow middle part
        jmp end
There may be more than one slow part, the slow parts might actually be exiled from inside a loop and not a simple linear code path and can themselves contain loops, etc. Play with __builtin_expect and objdump --visualize-jumps a bit and you’ll encounter many variations.
In addition to what others said, I'd simply point out that all 'ret' does on x86 is pop an address off the top of the stack and jump to it. It's more of a "helper" than a special instruction, and its use is never required as long as you ensure the stack is kept correct (such as in a tail-call situation).
the call is still in tail position whether or not it reuses the stack frame. there are also more involved ways to do tail call optimization than a direct single-jump compilation when you leave ret behind entirely, such as in forth-style threaded interpreters
i only meant that "optimized/eliminated tail call" is more useful terminology than an uneliminated tail call not counting as "a tail call". i find this distinction useful when discussing clojure, for instance, where you have to explicitly trampoline recursive tail calls and there is a difference between an eliminated tail call and a call in tail position which is eligible for TCO
i'm not sure how commonly tail calls are eliminated in other forthlikes at the ~runtime level since you can just do it at call time when you really need it by dropping from the return stack, but i find it nice to be able to not just pop the stack doing things naively. basically since exit is itself a threaded word you can simply¹ check if the current instruction precedes a call to exit and drop a return address
in case it's helpful this is the relevant bit from mine (which started off as a toy 64-bit port of jonesforth):
    .macro STEP
        lodsq
        jmp *(%rax)
    .endm

    INTERPRET:
        mov (%rsi), %rcx
        mov $EXIT, %rdx
        lea 8(%rbp), %rbx
        cmp %rcx, %rdx     # tail call?
        cmovz (%rbp), %rsi # if so, we
        cmovz %rbx, %rbp   # can reuse
        RPUSH %rsi         # ret stack
        add $8, %rax
        mov %rax, %rsi
        STEP
¹ provided you're willing to point the footguns over at the return stack manipulation side of things instead
My gut says (it's been a while since I've been that low-level) various forms of inlining and/or flow continuation (which is kinda inlining, except when we talk about obfuscation/protection schemes, where you might inline but then do fun stuff to the inlined version).
If compilation uses jmp2ret mitigation, a trailing ret instruction will be replaced by a jmp to a return thunk. It is up to the return thunk to do as it pleases with program state.
Yeah, should be easy enough to filter these particular jumps out. It's an obfuscation designed to annoy people using common off-the-shelf tools (especially IDA pro)
Most obfuscations are only trying to annoy people just enough that they move on to other projects.
Not much has changed, except there are more entrants: Binary Ninja, Ghidra, radare (the last two being open source). For debugging there's x64dbg; some use windbg and gdb (on non-Windows OSes). It's still mostly IDA as king, though the others are catching up.
I evaluated entering the space by building something AI-native; however, the business case just didn't make sense.
I tried Ghidra recently and the decompilation seemed decent enough. The UI seemed a bit less complete than IDA's though (I couldn't see a couple of things that IDA does/has though they might just be hidden away in menus).
I learned a lot of this stuff ~15 years ago from reading a book called Reversing: Secrets of Reverse Engineering by Eldad Eilam. The book is old but amazing. It takes you through a whole bunch of techniques and practical exercises. State of the art tooling has changed a bit since then, but the x86 ISA & assembly more generally hasn't changed much at all.
One of my biggest takeaways was learning about "crackmes" - which are small challenge binaries designed to be reverse engineered in order to learn the craft. They're kinda like practice locks in the lockpicking community. The book comes with a bunch on a CD-ROM from memory - but there's plenty more online if you go looking. Actually doing exercises like this is the way to learn.
You don't start trying to reverse engineer COD. You build up to it.
UnknownCheats. I'm active there, and it has some of the best resources on this kind of stuff. I'm more interested in how Linux userspace anti-cheats work, notably VAC.
My recipe: "Windows 95 System Programming Secrets" by Matt Pietrek and "Unauthorized Windows 95" by Andrew Schulman, years of fooling around with NuMega SoftICE, lots of IRC, lost youth, yet lots of fun.
I used to frequent cs.rin.ru for all things non-steam back when I operated non-steam CSS servers.
UnknownCheats is also absolutely amazing for cheat development. Back when I was writing undetected kernel cheats for my own experimentation purposes, I learned so much there.
I have been doing a bit of reverse engineering on a popular Horde/Alliance-based MMO game, and it follows almost exactly the same steps (including the FNV32 export hashes) and employs very similar tricks. I wonder if it's packed using the same protection?
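For context, export hashing usually means hashing each export name and comparing against a precomputed constant, so the binary never has to contain the API name as a string. A sketch of 32-bit FNV-1a (the exact variant is my assumption; the comment above only says FNV32):

    def fnv1a_32(name: bytes) -> int:
        # FNV-1a, 32-bit: offset basis 0x811C9DC5, prime 0x01000193
        h = 0x811C9DC5
        for b in name:
            h ^= b
            h = (h * 0x01000193) & 0xFFFFFFFF
        return h

    # Resolve an export by hash instead of by name (names here illustrative).
    TARGET = fnv1a_32(b"CreateFileW")

    def find_export(exports):
        # exports: {name: address}, e.g. parsed from a PE export table
        for name, addr in exports.items():
            if fnv1a_32(name.encode()) == TARGET:
                return addr
        return None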
I don’t play this game, but my partner does. I sometimes see him “spectating” a player that is below the ground. Regardless of whether the client is hacked/cheating, aren’t there some server-side checks that the player state is valid?
Not really relevant, but this triggered a memory of being around 14 years old and getting scammed on RuneScape, which drove an evil character arc where I somehow found out how to DDoS players in the duel arena and make absolute bank. I still feel a little guilty about my actions to this day. At the same time, I'm surprised that at 14 I was able to find and pay for a denial-of-service provider and figure out players' IP addresses to intentionally disconnect them.
It's like the most addictive part of reverse engineering to me: building signature lists, and then writing bindings to scripting languages to call those function pointers.
It's also the foundation of how many third-party mod platforms work, because you need to build a meaningful API to modders that isn't exposed by the first-party.
Signature scanning is just scanning for unique bytes from a compiled function that will remain consistent across builds. You search memory for those bytes and when you find them, you find the function you're interested in.
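As a toy illustration of the scanning side (the example pattern is made up, and real scanners also restrict the search to the right sections):

    def sig_scan(data: bytes, sig: str) -> int:
        """Return the offset of the first match of an IDA-style signature
        like "48 8B 05 ?? ?? ?? ?? 48 85 C0", where ?? is a wildcard byte,
        or -1 if not found."""
        pat = [None if tok == "??" else int(tok, 16) for tok in sig.split()]
        for i in range(len(data) - len(pat) + 1):
            if all(p is None or data[i + j] == p for j, p in enumerate(pat)):
                return i
        return -1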
Thanks for explaining. How do you identify such byte patterns that are likely stable across builds? Is it experimental - i.e., look at a few versions of the binary and check if it has changed?
You can actually usually get a pretty good starting point from just a single build, and only refine it once you find a build it breaks on. It's essentially just finding a unique substring. In my experience this almost always involves some wildcard sections, so the signature in the parent got lucky not to need them. I like to think about it as more of matching the shape of the original instructions than matching them verbatim.
To manually construct a signature, you basically just take what the existing instructions encode to, and wildcard out the bits which are likely to change between builds. Then you'll see if it's still a unique match, and if not add a few more instructions on. This will be things like absolute addresses, larger pointer offsets, the length of relative jumps, and sometimes even what registers the instructions operate on. Here's an example of mine that needed all of those:
Now since making a signature is essentially just finding a unique substring, with a handful of extra rules for wildcards, you can also automate it. Here's a ghidra script (not my own) which I've found quite handy.
From my limited experience, it refers to the act of reverse engineering the function signatures contained in the code of a binary.
A binary, like the underlying code, has commonly used code split into functions that may get called from multiple places. These calls can be analyzed either by static analyzers or by a human, who may use the context of the callsite to guess what each arg is supposed to do/be.
For modding, e.g. in a single-player game, one might want to find out where the engine adjusts the health points of a player or updates progress.
Cheating in multiplayer games has become such a huge problem that it has destroyed trust across every major FPS.
I am a long time CS player, but I did briefly play one of the new CoD games, before they went crazy with Nicki Minaj skins and bong-guns.
A person was so convinced I was cheating that they started doing OSINT on me while still in a match, and they found my old UnKnOwNcHeAtS account as some kind of proof that I was cheating (that account was 12 years old by that point).
I abhor cheating, and I have a lot of interest in computer science, so of course I wanted to see how all of it works and did my research during my youth, taking care never to compromise the competitive integrity of the games I played. But if you look around, there is not a single game that I can recommend to people anymore.
Games like Escape From Tarkov are so busted, cheaters are stealing the barrels off people's guns and crashing their game/PC on command.
My beloved counter-strike's premier competitive game mode has a global leaderboard that acts as a cheat advertisement section within the game.
Games like Valorant are a cut above the rest on account of their massively invasive anti-cheat, but they are nowhere near as clean as most fans claim. I mean, you could write a cheat for the game using nothing but AHK and reading the color of a pixel.
There is a whole industry of private matchmaking for counter-strike, built solely on the back of their anti-cheat and promises of pro-level play to the top players.
EDIT: I found the screenshot, it was MPGH not UnknownCheats, but yeah, they also had a game ban on their account.
We’re seeing a clear divide where both competitive gamers and hackers are retreating into their own ecosystems, away from public matchmaking. Public matchmaking has simply become too optimized/lucrative to sustain trust or meaningful competition.
Private matchmaking and closed communities are thriving, raising the average skill ceiling in competitive play. Similarly, hacking communities are evolving with easier forms of payment and distribution. The monetary aspects are huge. But most importantly, both cultures push each other away. The persona of someone who plays with integrity while straddling the competitive and hacker mentalities is pretty much gone.
Escape From Tarkov was so busted because, first, they effectively supported cheaters: a cheater with a bought cheat costing a few dollars could make around $2k+ monthly boosting players and the like, and when the Tarkov devs banned them, they would just rebuy a new account. Easy money for both parties, a win-win scenario.
Second, their networking code was complete BS: they didn't even sanity-check player movement/location server-side, among many other things. Ridiculous.
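A server-side movement sanity check can be as simple as rejecting position updates that imply impossible speed. A minimal sketch with made-up numbers and names:

    import math

    MAX_SPEED = 7.5   # hypothetical max player speed, meters/second
    TOLERANCE = 1.1   # slack for latency jitter and rounding

    def plausible_move(prev_pos, new_pos, dt_seconds):
        """Reject a position update that implies moving faster than the
        game allows. prev_pos/new_pos are (x, y, z) tuples."""
        if dt_seconds <= 0:
            return False
        speed = math.dist(prev_pos, new_pos) / dt_seconds
        return speed <= MAX_SPEED * TOLERANCE

    # e.g. a teleport hack: 50 m covered in a single 50 ms tick
    assert not plausible_move((0, 0, 0), (50, 0, 0), 0.05)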
The game I probably have the most hours in is Overwatch. In all that time I've encountered so few cheaters (at least ones noticeable enough) that I can't say they are even remotely a problem. I don't know what they are doing, but they don't use a kernel-mode anti-cheat (to my knowledge).
You simply don't notice since overwatch cheats tend to be very advanced. They also have a really strict system around reports and players actually use it.
EFT also uses a kernel-level anti-cheat, “Easy Anti-Cheat” (as invasive as what Valorant uses (Vanguard)). Don't know why EFT's implementation sucks.
I've been on CS since 1.3, and I think their system is pretty good. Sure, you get cheaters sometimes, but it's not that bad; maybe I've been pretty lucky.
EFT uses BattlEye. Most commercial anti-cheats have had a kernel component for many years because cheaters moved there; anti-cheats just followed them out of necessity. Valve's VAC is one of the few exceptions, but it's practically useless as an anti-cheat. Vanguard is better because they designed the game with anti-cheating in mind, rather than just slapping it on at the end as an afterthought. And it protects against certain cheats loaded at boot, which other kernel-based anti-cheats don't protect against.
Unless you use multiple users on Windows, a userspace anti-cheat (or anything you run) can already read all your files and even the memory of other processes (Windows provides an API for this); putting it in the kernel adds the ability to do so for the other users. Invasiveness isn't really that good of an argument when normal software can already do so much.
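The API in question is OpenProcess/ReadProcessMemory. A sketch of how little a same-user cross-process read needs on Windows (no kernel driver; error handling kept minimal):

    import ctypes

    PROCESS_VM_READ = 0x0010
    k32 = ctypes.windll.kernel32

    def read_process_memory(pid, address, size):
        # Any process running as the same user can typically get
        # PROCESS_VM_READ on another of that user's processes.
        handle = k32.OpenProcess(PROCESS_VM_READ, False, pid)
        if not handle:
            raise ctypes.WinError()
        try:
            buf = ctypes.create_string_buffer(size)
            n_read = ctypes.c_size_t()
            ok = k32.ReadProcessMemory(
                handle, ctypes.c_void_p(address), buf, size,
                ctypes.byref(n_read))
            if not ok:
                raise ctypes.WinError()
            return buf.raw[:n_read.value]
        finally:
            k32.CloseHandle(handle)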
One difference between EAC and Vanguard is that the latter needs to be loaded on boot, so you need to reboot every time you want to play if you don't want to have it running all the time (which is a common use-case since it has a history of interfering with legitimate programs).
fwiw, cheating in CS(GO) taught me x86 RE and low-level programming way younger than is usual. sophomore year of high school.
I still recommend writing an HvH cheat to anyone that wants to get into proggin' -- you get a taste of both static and dynamic RE, memory-level programming, UI development, bare dxsdk (usually), a skid-saturated environment, sysadmin (if you try to set yourself up an uber1337 cheat page), and a bunch of other little things, all in an environment where you're quite directly competing with others in the same situation.
it wasn't a brag or anything, i just don't know by what means i would've been introduced to that stuff other than game cheats. 15-year-old-me definitely did not care about crackmes or malware reversing.
i did start writing code in middle school, though. php, mostly :)
Cheating is such a bummer in CS, even in casual matches. Luckily it's usually pretty obvious, and you can either kick the cheater or find a better lobby. Having friends on there has made finding good lobbies much easier in general.
Around the year 2000, a friend of mine from school got banned from many large Half-Life servers because they claimed he was cheating. He was not; he was just that good. Even if you watched him playing, you would have sworn he used an aimbot: the crosshair was almost permanently stuck to other players' heads. But that's just how good he was. Shame that e-sports wasn't a thing back then; he could have earned a fortune.
I disagree that cheating "has become" a huge problem, it was always a huge problem.
I can't remember a single multiplayer game that didn't have cheaters of some form or another. None. Zilch. Zero. It's kind of why I never grew beyond playing MMORPGs, and even that passion ultimately died out.
Back in the old days, before even the Xbox, online play was almost exclusively on computers on privately hosted servers, so you had mods actively banning anyone who gave any hint of cheating.
That doesn't refute my point, though; probably supports it, even. Private server owners went scorched earth in ye olde days because cheating was (and still is) a huge problem.
As a player it was just less annoying back in the dedicated server days, since cheaters were dealt with immediately. Nowadays you have to report them in most of the competitive games and then it can take anywhere from several hours to weeks before anything happens. It just feels like the protections have become more and more invasive, yet are still far behind the original community managed servers from back in the day.
Sure, and that's why there's more and more "trusted" hardware to try and get computers to a place where their users cannot read and write to or from their own memory.
Those kinds of things tend to be their own undoing.
You added a security processor to your hardware at ring -2, but hardware vendors are notoriously bad at software so it has an exploit that the device owner can use to get code running at ring -2. Congrats, your ring 0 anti-cheat kernel module has just been defeated by the attacker's code running on your "trusted" hardware.
But in the meantime you've now exposed the normal user who isn't trying to cheat to the possibility of ring -2 malware, which is why all of that nonsense needs to be destroyed with fire.
This is true, but what counts as "reading and writing to memory" here? The article outlines dozens of ways of doing that, with various hooks etc., and how they try to prevent it.
If I put a hardware connection to the memory (basically WIRES to my memory bus) then yes, it's very hard to detect. But that's also very hard and expensive to do...