Hacker News

Unfortunately, due to the OSS nature of the game, it's super easy to cheat :/

The patches to wallhack and autoaim are literally a two-minute Google search away, and the compile process is well documented and easy.

I wonder what kind of measures can be put into place without utilizing what essentially boils down to DRM?




I am one of the developers of the game. Improving the anti-cheat capabilities is certainly one of our long-term goals. The game server currently lacks a complete simulation of the game, so most of the important logic is handled only in the clients.

The first big step would be isolating the simulation logic into its own library. Then both the client and server can share that library. We would be able to move some of the decisions to the server, or at the very least verify that the client's decisions make sense.


The server shouldn't trust the clients. The clients should just send inputs or actions, and the server will do validation and decide what the result will be.
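A minimal sketch of that server-authoritative idea: the server accepts a claimed move and checks it against physical limits instead of trusting the client's reported position. All names and the speed limit here are invented for illustration, not taken from the BZFlag source.

```python
# Toy server-side move validation: reject moves that are physically
# impossible within the elapsed time, instead of trusting the client.
from dataclasses import dataclass

MAX_SPEED = 25.0  # world units per second (assumed limit)

@dataclass
class Player:
    x: float
    y: float

def validate_move(player: Player, new_x: float, new_y: float, dt: float) -> bool:
    """Accept a move only if it is reachable within dt seconds."""
    dist = ((new_x - player.x) ** 2 + (new_y - player.y) ** 2) ** 0.5
    return dist <= MAX_SPEED * dt

def apply_move(player: Player, new_x: float, new_y: float, dt: float) -> bool:
    """Apply the move if valid; otherwise reject it and keep the old state."""
    if not validate_move(player, new_x, new_y, dt):
        return False  # client claimed an impossible move
    player.x, player.y = new_x, new_y
    return True
```

The same pattern extends to firing rates, reload times, and so on: anything the server can recompute, it should, rather than accepting the client's word.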


One of the patches involved is called 'autoaim'. How will the server know that the human user didn't actually aim that precisely?


Autoaim cannot be solved absolutely by technical solutions. Even if the executable is fully locked down and verified against a checksum, aimbots can just read the video buffer from a different process. In practice, you can statistically analyze player hit metrics and then monitor people who appear to be too good. However, this requires basic customer service, which is less likely to come from a free server for a noncommercial game.
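The statistical approach could be as simple as flagging accuracy outliers for human review. A sketch, with an invented threshold (in practice you'd want far more robust statistics and much more data per player):

```python
# Flag players whose hit rate is an extreme outlier relative to the
# population, as candidates for human review -- not for automatic bans.
from statistics import mean, stdev

def flag_outliers(hit_rates: dict, z_threshold: float = 2.5) -> list:
    """Return players whose accuracy is more than z_threshold standard
    deviations above the population mean."""
    rates = list(hit_rates.values())
    mu, sigma = mean(rates), stdev(rates)
    if sigma == 0:
        return []
    return [name for name, r in hit_rates.items()
            if (r - mu) / sigma > z_threshold]
```

Note that a single extreme cheater inflates the standard deviation, so with small populations a fixed z-score can be surprisingly forgiving; robust estimators (median/MAD) would behave better.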


I don't expect it would be that hard to make an aimbot look human by throwing in some random variation. It would probably then be difficult (even for a human auditor) to reliably differentiate genuinely good players from "too good" players.


Additionally, some humans are far more skilled at aiming in a video game than you might think possible.


The client could send the position the player is aiming at at regular intervals; if the player is far too accurate and moves linearly to the target, you can ban them. Some allowance for false positives should be considered too.


That's a good first step, but it doesn't really stop hackers. Specifically, the two cheats the parent mentions don't require the server to trust the client. There are mitigations that can be done to prevent wallhacks, but aimbots can be really difficult to detect without sophisticated tools. Even then, fundamentally, the cheat can operate as an intermediary between the mouse input and the game receiving that mouse input, which makes it impossible to detect directly without memory inspection. Essentially you're left with heuristic analysis of mouse movements, or more likely an "Overwatch" (CS:GO) type system where moderators/janitors are tasked with evaluating players to determine whether they're cheating or not.


Cube 2, another oldie-but-goodie FOSS game, relies heavily on admins too. They are supported with stats (a server plugin, IIRC) and an observer mode which is actually available to all spectators (the game is more focused on fun than competition).


Cube 1 solved the problem by keeping the communication stack closed source, so you couldn't build a server-compatible client. The OSS version of Cube 1 used a different communication stack.


We don't really have the technology to do that completely. Because it's really hard and expensive to stream pre-rendered video at a low enough latency, most action games still transmit player positions and trust the client to accurately render the corresponding video client-side.

Although, even in a scenario with a completely untrusted client receiving only final video output, there's no guarantee that the actions being input are human.


> even in a scenario with a completely untrusted client receiving only final video output, there's no guarantee that the actions being input are human.

I imagine it would be a fairly trivial application of computer vision to make an aimbot for first person shooters that works most of the time.


Back when I was a youngster playing Netrek, the same thing happened. Basically, those smart enough to hack the code could create their own cheats. The Wikipedia page says that game mechanics were changed so that borgs no longer had a specific advantage, but I don't know what those game mechanics were. Though it was pretty obvious whether you were playing against a borg or not, at least in the early days. In-game chat also alerted people to those who were playing borgs.

https://en.wikipedia.org/wiki/Netrek


Back in the day, you would shoot at a borg, and if your shell was aimed perfectly, it would side-step one tank width while the shell was in flight. You couldn't kill one unless you were right on top of it (or had guided missile or laser).


Do people still play Netrek these days? I read somewhere that the game has been around since the '80s.


The Wikipedia article says they still do.


Maybe just play it with friends, on a LAN?


This is a great LAN party game.

Although my daughter used to always pick up that damn laser and then shut out the rest of us.


For wallhacks: it would be impossible if the server did not send locations of opponents behind walls.

For aimbots: let the server autoaim slightly, but require more shots and slower rotation, so it's a bit more about strategy than about aiming speed?


> it would be impossible if the server did not send locations of opponents behind walls

Wouldn't that require the server to do some amount of 3D logic every netcode tick for every player? And then you'd get into real latency problems, since either you do rollbacks to account for when people are allowed to see each other or you let the fastest connection win.


Depending on the complexity of the geometry this is within the realm of reason. Fairness might not be a huge issue as long as you make sure visibility is a two way street. People will probably complain about high latency players popping into existence after they're already most of the way around a corner though.


The server already has to do a full 3D simulation for each client. The server just doesn't send updates for players that are not visible to another player. So yes, the server has to do a bit more work; on the other hand, it reduces server bandwidth.
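A toy version of that server-side culling on a 2D grid map, assuming invented names throughout (real engines trace rays against the actual level geometry): only include an opponent in a player's update packet if the straight line between them crosses no wall cell.

```python
# Server-side visibility culling sketch: don't send positions the
# client couldn't legitimately see. Walls are integer grid cells.

def line_of_sight(walls: set, a: tuple, b: tuple, steps: int = 100) -> bool:
    """Sample points along segment a->b; blocked if any lies in a wall cell."""
    (ax, ay), (bx, by) = a, b
    for i in range(steps + 1):
        t = i / steps
        cell = (int(ax + (bx - ax) * t), int(ay + (by - ay) * t))
        if cell in walls:
            return False
    return True

def visible_opponents(walls: set, me: tuple, others: list) -> list:
    """Positions the server would include in this player's update packet."""
    return [p for p in others if line_of_sight(walls, me, p)]
```

The latency caveat from upthread still applies: players on slow connections will see opponents pop into existence slightly after rounding a corner.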


Darkplaces Quake does this; see sv_cullentities_trace.

https://www.youtube.com/watch?v=hf94PbE-b9I


Maybe that's why no matter how good my son got at the game, he always found himself with negative points in every session.


Hashing the binary?


You can't stop cheating even on closed-source software with an anti-cheat running with kernel privileges. The same idea applies to open source, especially as any anti-cheat feature is open-sourced too. No matter how powerful your anti-cheat protection is, it can be bypassed (there were ROP-based cheats on HN recently, if you want to take a look: https://news.ycombinator.com/item?id=21355058).

At the very least, you won't be able to protect info like the positions of entities and players (the client needs them even before the players are visible, to avoid pop-in, meaning anyone can create a wallhack or an aimbot).

If one could create a trusted environment, that would be a different story. I do think this is possible with cloud gaming. If you can't interact with the game memory, the game files or the underlying operating system, the best you can do should be macro or AI based on what's displayed on the screen.


You can't necessarily detect the cheat software, but you can detect the behaviour of cheat software. Writing an algorithm that can play a first-person shooter at a superhuman level is pretty trivial, but writing an algorithm that plays like a really skilled human is vastly more difficult. This behavioural approach to anti-cheat is now highly practical thanks to deep learning.

https://www.youtube.com/watch?v=ObhK8lUfIlc


As long as the network and its weights are open source, you can create adversarial examples automatically. So the idea is to run the auto-aim bot, but then run the network on your inputs backwards to morph your inputs to look more "human".

Raw autoaim bot -> neural net (since it's open source) -> tweak inputs until they're considered human -> send the tweaked autoaimed command.
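A toy illustration of that attack, using a one-layer logistic "detector" as a stand-in for the real network (all names and numbers here are invented): if the weights are public, gradient-ascend on the bot's input features until the detector scores them as human.

```python
# Adversarial-evasion sketch against a public detector: nudge the bot's
# feature vector along the gradient until P(human) crosses a target.
import numpy as np

def human_score(w: np.ndarray, b: float, x: np.ndarray) -> float:
    """Logistic detector: probability the inputs came from a human."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def evade(w: np.ndarray, b: float, x: np.ndarray,
          target: float = 0.9, lr: float = 0.5, max_steps: int = 1000) -> np.ndarray:
    """Gradient-ascend on x until the detector says 'human'."""
    x = x.astype(float).copy()
    for _ in range(max_steps):
        p = human_score(w, b, x)
        if p >= target:
            break
        # d(sigmoid)/dx = p * (1 - p) * w for this one-layer model
        x += lr * p * (1.0 - p) * w
    return x
```

Against a deep network the same loop works with autodiff in place of the hand-written gradient, which is exactly why the parent suggests keeping the weights secret.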


The weights don't really have to be open-source though. After they are trained, they should be subjected to the same policies as database passwords to counter your attack, or to at least force the cheater to train the counter-network via the game.


> After they are trained

Are you suggesting that BZFlag sysadmins are going to custom-train their own neural nets?

Based on my experience with LeelaZero, it's far more likely that people share weights with each other and participate in community training. Not everyone has the skill to spin up TensorFlow, carefully partition off training data, and monitor neural nets through training.

But almost anybody can download the "latest weight file" via a cron job and update their BZFlag server every day or week.


To restrict this, the server could refuse to share the positions of units unless they are visible.


1) A modified binary can send in the hash of an unmodified binary. The server has no way to know.

2) Linux distributions compile BZFlag themselves, they don't just distribute upstream's binaries.


I always keep a legitimate binary of BZFlag around and make sure my hacked version hashes that one instead of itself.


The server would need to provide the binary, or at least a large precompiled core library. Even then, with all of it open source it seems like one could hack it.



