Chaff Bugs: Deterring Attackers by Making Software Buggier (arxiv.org)
100 points by fniephaus 6 months ago | 30 comments



Back in the olden daze, people would take branded software, overwrite the copyright notice, and sell the result as their own "compatible" version.

My game would check to see that the copyright notice was intact. If it wasn't, it would, at random times, overwrite a random byte in memory with a random value. The idea was for the "rebranded" game to appear to be uselessly buggy.

I have no idea if it was effective or not. Probably not.


It reminds me of the old "protection" in Settlers: with an illegal copy everything would work as normal, except the iron smith would produce pigs instead of iron bars. That would surely be missed by early crackers, but definitely not by users.


Earthbound had a particularly punitive bit of hard-to-notice copy protection - a pirated copy would play exactly like the real thing up until right before the final boss, and then immediately crash and delete your save.

At which point it isn't so much copy protection, but copy ... karmic punishment?


Probably the users just thought the game was buggy normally and were relieved they didn't buy it.


funny at least


The trick was to corrupt memory at a slow enough rate that the crackers wouldn't notice it (and so wouldn't defeat it), but their users would.


The idea is quite fascinating and 'out there'. However, the authors themselves cover several clear drawbacks on the second page:

> We assume that the attacker has access to a compiled (binary) version of the program but not its source code. Although this is a severe limitation (it means that our technique cannot be used to protect open source software), it is a necessary one. Developers are unlikely to be willing to work with source code that has had extra bugs added to it, and more importantly future changes to the code may cause previously non-exploitable bugs to become exploitable. Hence we see our system as useful primarily as an extra stage in the build process, adding non-exploitable bugs.

I wonder if running software through this additional build step would also cause headaches for developers debugging and triaging real bugs in the software.


This could be a viable direction, but I have some concerns. I feel it will simply push attackers to be selective, and it will be difficult for these techniques to avoid standing out. For example, consider their technique of making things non-exploitable by overwriting unused variables. What's to stop adversaries from prioritizing which bug to investigate based on profile data counting uses of variables? Now these chaff bugs need to somehow match the dynamic use counts of real variables. What about over-constraining the values? Well, the authors point out that attackers can use SMT solvers to determine that a bug is safely constrained. They propose spreading out the constraining to force the attacker to reason about all paths. Again, nothing stops the adversary from profiling the memory regions to learn what constraints are being applied, then prioritizing the least constrained regions.

While it is clever in that it tries to use fundamental limits of reasoning against the adversary, I need more experimental analysis before I am convinced that these chaff bugs are actually indistinguishable to an attacker, unless the attacker truly has to collect profiling data that is exponential in the number of program paths.

I think that a few runs would suffice to distinguish the chaff bugs and real bugs.
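
For concreteness, here is a minimal sketch (in C, with an invented function, invented names, and a made-up trigger value) of what a chaff bug of the "overwrite an unused variable" kind could look like. Note that the paper's injector works at build time and controls where the overflowed bytes land, which plain declaration order in this sketch does not actually guarantee:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical illustration only: function, names and trigger are invented. */
    void parse_header(const uint8_t *input, size_t len) {
        char     name[16];
        uint32_t dead_local = 0;   /* written but never read: corrupting it is harmless */
        uint32_t trigger;

        if (len < 20)
            return;

        memcpy(&trigger, input, sizeof trigger);
        if (trigger == 0x6c617661u) {
            /* Injected chaff bug: 20 bytes copied into a 16-byte buffer.
             * The 4 overflowed bytes are meant to land in dead_local, so
             * triggering the bug corrupts only dead storage. */
            memcpy(name, input, 20);
        }

        (void)name;
        (void)dead_local;   /* stand-in for the program's real logic */
    }

A sanitizer or symbolic-execution tool flags the out-of-bounds write as a promising crash site, but turning it into anything useful is impossible by construction - which is exactly why profiling-based triage like the above seems like the natural counter.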


I think you're missing the point here: exploitable bugs will, by their nature, always exist. You either give up Turing-completeness, or you allow the possibility of them existing.

Just like it was in the 2000s: the fight between the people who were capable of unpacking executables from their respective defensive mechanisms and the people who implemented various mechanisms for obfuscating the executable.

For example: online encryption/decryption of the code, or the later stage of this concept, first implemented in Assassin's Creed, where the servers store various connecting components between stages of the game. So you would need to reach point A under constraints C1 and C2 before the server would tell you what follows A.

My point is, you can't take defensive mechanisms implemented in software to their extreme, simply because of Turing-completeness. The moment we have tools that can, under a certain set of constraints, prove that no code which was not compiled into the executable can ever be executed is the moment we no longer need to develop any new software to defend ourselves.


There's another addendum to this: another issue that is incredibly relevant to software per se.

Most of the time when we build software, the only source of truth that defines how a given piece of software must behave is its own source code.

Thus even when we can prove for certain that the code being executed is the source code we just wrote, we might still miss the cases where we ourselves introduced behaviour we did not expect.

I think the only way to fix that is to introduce even more constraints on the already built system.

We use exactly the same mechanics in our legal systems: while there's a single source of truth for most laws, there's always a court that checks, every single day, whether those laws still hold at a given moment in time or in a given specific scenario.

I believe a set of closed-source tests provided by different vendors for open-source software, for example, could work really well. E.g. every time you push a new commit to the nginx repository, organisations A, B and C each report their independent findings on possible exploitation of the new code.

Those findings do not need to aim at the same bugs. One of them might be more people-oriented, and another might be incredibly technical, reaching for the bugs that arise from compiler design.


> The moment we've got the tools that could, under a certain amount of constraints,

With strong enough constraints such tools exist.


Yep, as I mention in this thread (https://twitter.com/moyix/status/1025393125567737857) indistinguishability is the hardest missing piece! You're right that you could try to do some kind of profiling and come up with heuristics, but I would note that real security bugs are most often not on a "hot" path anyway (more frequently executed code is actually less likely to contain security bugs, as we found in a paper from last year: https://www.usenix.org/system/files/conference/atc17/atc17-l...).


"Half the electronic engineers in the galaxy are constantly trying to find fresh ways of jamming the signals generated by the Thumb, while the other half are constantly trying to find fresh ways of jamming the jamming signals."


One could think of this as a whitelist of allowed control flow paths.

This "optimization" is analogous to the same technique.

https://fgiesen.wordpress.com/2012/04/08/metaprogramming-for...

https://news.ycombinator.com/item?id=7739599

Control Flow Integrity should be built into modern processors.


Imagine having a bug in your software that introduces bugs. Instead of introducing new non-exploitable bugs, it introduces exploitable ones.

This would probably be a nightmare to debug.


The game Earthbound for SNES had code that detected if it was pirated or not.

If the game knew it was pirated, it would still quietly run... But it would make the enemies so difficult and powerful that you could barely get out of the first town or even enjoy the game.

Oh, and as a bonus, it would delete your entire save right before the final boss.


Vaguely reminiscent of Ron Rivest's chaffing & winnowing scheme for secure communication from twenty years ago: http://people.csail.mit.edu/rivest/chaffing-980701.txt


Is this just a honeypot, or a variant of one where you lead the attacker to believe they've succeeded (via an actual bug that leads nowhere) instead of actually letting them succeed?


It sounds like you just overwhelm them with dead-ends so they don't have the resources to exhaustively search for the things that aren't dead ends.


Alternate subtitle: "Complicated method of shutting down your bug bounty program"


Interesting idea, but the article greatly downplays the DoS angle. Most applications consume a fair amount of resources to start up - there are config files to parse, caches to warm, and so on. So there is an imbalance of cost between the attacker, who only has to submit another input, and the target, who has to restart the process. Hence a great DoS angle. Pooling only helps if you can replenish the pool faster than the attacker can drain it.

So if you're running a multi-tenant service, where one of the tenants could be a potential adversary, having a crash bug is enough for a serious denial-of-service attack.


Yep, there are plenty of applications where crashing is a big deal! But, we think, also plenty where that's not an issue. Note also that the "unused" strategy creates bugs that don't cause crashes (but they could be detected with something like ASAN).


Chaff bugs, like chaff in WW2, are a valid obfuscation technique, provided it doesn't eat into the time and money of the key revenue-making activity. While equates, libraries and OOP can reduce the incidence of bugs, perhaps the whole method of coding is flawed from the off? If you can abstract the code enough, you can reduce the incidence of bugs by a large factor, but there is only one IDE/language I know of which seriously reduces the incidence of bugs, and that's Clarion by SoftVelocity. It's old-school (DOS), DB-orientated, but the templates reduce the incidence of bugs; in my experience, though, coders can't get their heads around the abstract nature of the templates.

The templates not only give you standardised coding (in any language, if someone wants to invest in developing templates for other languages); they also make it easy to write programs which can rehash some of the exported app & dct code. In turn, every customer/end user can run an EXE on their own systems that is identical in function but unique in code (and thus in hash), making the exploitation of bugs even harder. I think templates put most programmers off, but they're the crown jewels for taking most business systems to the next level.


I still like what 3D Studio Max did in the '90s. It gradually degraded the polygons of a model over the course of months when a pirated copy was detected. The other funny one was with Adobe and technical support: "sorry, but there was no Photoshop 4.5.1, and now we know you are pirating our software".


I don't think vandalizing users' work is the same as making a game buggy. To take it to the extreme: what if Windows, when it decided your key was illegitimate, decided to corrupt your files?

You could say they had no obligation to you since you didn't pay for the software - certainly they have no obligation to run, for example, or not to crash or lock the user out - but it seems like once you start destroying user data you are violating the user's rights to their own property.


I disagree, you had no actual right to create the property in the first place. It's like trespassing and the actual owner grabbing a dumpster and cleaning out your stuff.

Literally the user has no rights in that case.


Property rights, including intellectual property rights, don't attach to the tools used to create the property. This is why your drawings aren't copyright Bic, and Bram Moolenaar isn't a billionaire.

What property rights do attach to, clearly, is people's physical property, with precedent going back thousands of years before we dreamed of copyright. By extension, the files on said property ought to be secure against vandalism, courtesy of more recent rules.

Believing that a running application on another person's computer transforms their physical property into your property requires magical thinking. The IP right is an orthogonal issue that has no bearing on their right to their own machine and their own files.

It's as if you stole my pen and went on to write a manuscript, and I broke into your house and burned it.

The two acts are orthogonal, and the ethics don't intersect in the wholly magical way you imagine. You would owe me one pen, or the money to buy one, and I would owe you for your manuscript, plus owe the state for the crime of breaking and entering - or in this case, unauthorized access to a computer.


If you can provably show that the bug is non-exploitable, then can't the attacker?


This is why the bugs are made non-exploitable by construction rather than by doing a bunch of work to show they're non-exploitable. There's an information asymmetry at work: the injector can craft all bugs in such a way as to ensure their non-exploitability, but the attacker has to laboriously prove the non-exploitability of each one.
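
For intuition, here is a minimal sketch (invented names and sizes, not taken from the actual injector) of the other flavour, where the overwritten data is live but the value that can reach it is constrained to a harmless range:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical illustration only; the real injector controls the layout
     * so the stray byte lands where intended, which this C sketch cannot
     * guarantee on its own. */
    int handle_packet(const uint8_t *pkt, size_t len) {
        char    buf[8];
        uint8_t skip_count = 0;   /* live data, actually used below */
        uint8_t tag;

        if (len < 10)
            return -1;

        tag = pkt[9];
        /* Injected constraint: any value reaching the overflow below is
         * forced to 0 or 1, so the clobbered skip_count stays harmless. */
        if (tag > 1)
            tag = 0;

        memcpy(buf, pkt, 8);
        /* Chaff "overflow": one byte written past buf, aimed at skip_count.
         * The write is a genuine memory-safety bug, but by construction the
         * attacker cannot choose a dangerous value for it. */
        buf[8] = (char)tag;

        return (int)skip_count;
    }

The injector knows the constraint holds because it put it there; an attacker looking only at the binary has to reason over every path that reaches the write before concluding the same thing.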


I feel so bad for security researchers trying to actually make software better while stuff like this is being worked on.



