In my game, it would check whether the copyright notice was intact. If it wasn't, it would, at random times, overwrite a random byte in memory with a random value. The idea was for the "rebranded" game to appear uselessly buggy.
I have no idea if it was effective or not. Probably not.
At which point it isn't so much copy protection as copy ... karmic punishment?
> We assume that the attacker has access to a compiled (binary) version of the program but not its source code. Although this is a severe limitation (it means that our technique cannot be used to protect open source software), it is a necessary one. Developers are unlikely to be willing to work with source code that has had extra bugs added to it, and more importantly future changes to the code may cause previously non-exploitable bugs to become exploitable. Hence we see our system as useful primarily as an extra stage in the build process, adding non-exploitable bugs.
I wonder if running software through this additional build step would also cause headaches for developers debugging and triaging real bugs in the software.
While clever in that it tries to use fundamental limits of reasoning against the adversary, I'd need more experimental analysis before I'm convinced that these chaff bugs are actually indistinguishable to an attacker, unless the attacker truly collects profiling data exponential in the number of program paths.
I think that a few runs would suffice to distinguish the chaff bugs from real bugs.
Just like it was in the 2000s: the fight between the people who were capable of unpacking executables from their respective defensive mechanisms and the people who implemented various mechanisms for obfuscating the executable.
For example: on-the-fly en/decryption of the code, or the later stage of this concept, implemented first in Assassin's Creed, where the servers would store various connecting components between stages of the game. So you would need to reach point A under constraints C1 and C2 before the server would tell you what follows A.
My point is, you can't take defensive mechanisms implemented in software to their extreme. Simply because of Turing-completeness. The moment we get tools that can, under a certain set of constraints, prove that you can never execute any code that has not been compiled into the executable is the moment we no longer need to develop any new software to defend ourselves.
Most of the time when we build software, the only source of truth that defines how a given piece of software must behave is its own source code.
Thus even when we can prove for sure that the source code being executed is the source code we just wrote, we might be missing the cases where we ourselves introduced behaviour we did not expect.
I think the only way to fix that is to introduce even more constraints on the already built system.
We use exactly the same mechanics in our legal systems: while there's a single source of truth for most of the laws, there's always a court that checks, every single day, whether these laws still hold true for a given moment in time, or for a given specific scenario.
I believe a set of closed-source tests provided by different vendors for open-source software, for example, could work really well. E.g. every time you push a new commit to the nginx repository, organisations A, B and C would report their independent findings on the possible exploitability of the new code.
Those findings don't need to aim at the same bugs. One provider might be more people-oriented, while another would be incredibly technical, reaching for bugs that happen due to compiler design.
With strong enough constraints, such tools exist.
This "optimization" is analogous to that technique.
Control Flow Integrity should be built into modern processors.
This would probably be a nightmare to debug.
If the game knew it was pirated, it would still quietly run...
But it would make the enemies so difficult and powerful that you could barely get out of the first town or even enjoy the game.
Oh and as a bonus it would delete your whole game right before the final boss.
So if you're running a multi-tenant service where one of the tenants could be a potential adversary, having a crash bug is enough for a serious denial-of-service attack.
You could say they had no obligation to you since you didn't pay for the software; certainly they have no obligation to run, for example, or not to crash or lock the user out. But it seems like once you start destroying user data, you are violating the user's rights to their own property.
Literally the user has no rights in that case.
What property rights do attach to is clearly people's physical property, with precedent going back thousands of years before we dreamed of copyright. By extension, the files on said property ought to be secure against vandalism, courtesy of more recent rules.
Believing that a running application on another person's computer transforms their physical property into your property requires magical thinking. The IP right is an orthogonal issue that has no bearing on their right to their own machine and their own files.
It's as if you stole my pen and went on to write a manuscript, and I broke into your house and burned it.
Both acts are orthogonal, and the ethics don't intersect in the wholly magical way you imagine. You would owe me one pen or the money to buy one, and I would owe you for your manuscript, plus owe the state for the crime of breaking and entering, or in this case inappropriate access to a computer.