Hacker News
What if we had mutex revocation lists? (josephkirwin.com)
33 points by iou on Sept 5, 2016 | 22 comments



This is classic cat and mouse. Once most AV vendors and OSs support this, malware authors will find a different way. In other words, increased complexity for temporary relief.

It really is about time OS vendors started looking at this problem differently. There is a part of the system that is for the system only, and a part that is for the user. Keep the two separate, 100%. Each and every program should run in its own little sandbox, so any compromise is confined to that sole program.

Sandboxing this way will not make the problem go away, but it will significantly reduce the attack surface that needs to be hardened to prevent a random exploit from being used to jump to a system-wide exploit.


For most programs to be useful, they need access to data/resources outside of that sandbox. The challenge is making that work without being super annoying to the user.


That is perfectly fine. Finding that balance of what to make available and what to keep locked away is what the sandbox is all about. My point still stands, however: the attack surface is significantly reduced.


That's exactly what Apple is doing and it's not getting them universal approval from techies, to say the least.

"They're locking down my computer", "They shouldn't tell me which program to run", etc.

But IMHO it's the best solution we've found so far.


The problem with these "trusted" and "secure" systems is not that they restrict what the computer can do. The problem is when only the manufacturer gets to choose what is allowed and what is not.

For example, a secure boot system where the root keys can be set by the user would be perfectly fine, even with the FSF. (Yeah, that opens a whole can of worms, but original Chromebooks showed that you can have a physical "dev mode" switch and still keep unknowledgeable users safe by default.)


I'm a developer (web / frontend) and I have yet to come across a situation where Apple's sandboxing feature restricted me or any of the applications I use. The users who are negatively impacted by sandboxing are a minority. A vocal minority, but a minority nonetheless.


For web/frontend you are already living in the browser's sandbox. It's app developers that are restricted.


We use other applications besides the web browser! Those which run in the terminal (vim, emacs, nodejs, ruby, ...) and also full desktop applications (Photoshop, Sketch and whatnot).


How many of those come from the app store?


They'll just do what the very first piece of malware did - add a small random chance of re-infecting the machine, even if the malware seems to be present already.

https://en.wikipedia.org/wiki/Morris_worm#The_mistake


Nice example!

As I was saying in the blog post, doing so would potentially increase the chance of discovery, and hence be sub-optimal from the malware author's perspective.

In the case of ransomware, you'd assume you'd want the encryption to be idempotent. Otherwise how would you be able to offer decryption and thereby profit?


>In the case of ransomware, you'd assume you'd want the encryption to be idempotent.

Ransomware could just add a magic string to encrypted files, so any subsequent infection can check whether the files are already encrypted. Of course that increases the chance of discovery, but it doesn't matter: if the computer was already infected, discovery doesn't matter anymore, as the ransomware has likely revealed itself already; if the computer isn't infected yet, then you have to read those files anyway to encrypt them.


Ah, the immunisation Matryoshka-doll possibilities are endless :D

So in this scenario where they add a magic string: 1) Would this magic string be constant? 2) Would it come from some generative algorithm? 3) Could it be random?

See the similarities between the original scenario and this one?

I guess if they were sincerely hardcore they could use a MAC on the exposed "magic string", though to do that correctly they'd need to embed it within the actual encrypted payload. Otherwise they'd be subject to the same immunisation attack of just prepending a valid, MAC'd magic string to files as they are requested through a file-handler API. Or they could use something like AES-GCM or another authenticated encryption mode, though they'd have to run the decryption algorithm on every file to obtain the AAD.
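A minimal sketch of the MAC'd marker idea (key and identifiers are made up; binding the tag to a per-file value is what stops a marker from simply being copied onto clean files, per the immunisation attack above):

```python
import hashlib
import hmac

KEY = b"attacker-secret-key"  # assumed key embedded in the malware

def make_marker(file_id: bytes) -> bytes:
    """Build marker = file_id || HMAC(key, file_id)."""
    tag = hmac.new(KEY, file_id, hashlib.sha256).digest()
    return file_id + tag

def check_marker(marker: bytes) -> bool:
    """Verify the trailing 32-byte tag against the leading file_id."""
    file_id, tag = marker[:-32], marker[-32:]
    expected = hmac.new(KEY, file_id, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Of course, anyone who extracts KEY from the binary can forge markers, which is why the comment suggests the marker really belongs inside the encrypted payload.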


Make it extremely infrequent, so the expected number of re-infections is about one per week. For instance, have a 1-in-168 (7×24) chance, and re-run every hour. This way you don't rely on local state, so there's no local state for AV to corrupt.

When a victim pays, provide them all of the keys - a paying victim probably won't wait multiple weeks, so in the common case it should just be one or two keys. Storing these keys shouldn't be too much load on your servers, and ransomware doesn't really need to care about integrity protection, so there's no inflation of file size from repeated encryptions.
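A quick sanity check on that arithmetic (my sketch, not from the comment): with a 1-in-168 chance per hourly run, the expected number of re-infections per week is exactly 1, though the probability of at least one re-infection in any given week is only about 63%.

```python
p_hourly = 1 / (7 * 24)        # 1-in-168 chance on each hourly run
runs_per_week = 7 * 24

# Expected number of re-infections in a week of hourly runs.
expected_per_week = p_hourly * runs_per_week  # = 1.0

# Probability of at least one re-infection in a given week.
p_week = 1 - (1 - p_hourly) ** runs_per_week  # ~0.63
```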


I think there are many, many ways for a program to detect whether it has already been loaded / infected a machine that would not require the use of a system-wide mutex -- many would not increase the chance of discovery. So I think adding a mutex revocation list would result in a false sense of security.

While it may combat existing malware it would only be a temporary inconvenience.

Not to be too harsh, but I feel like solutions like this tend to forget that these are computer programs we are talking about. The entire concept of them is to be able to do things within the constraints of the hardware. Since a mutex is just a construct coded by a programmer, one would expect to be able to make one's own mutex without the help of the underlying framework/library.


The malware would use some other mechanism, or would use some semi-common mutex name. Something out of Photoshop or AutoCAD or whatever, to really annoy enterprise licensees of AV tools that went for that mechanism. (I assume most of those who pay ransomware are individuals, as enterprises would have automated backups and machine imaging for recovery.)


If this technique became common, malware would just switch to another approach to determine if it had already infected a machine. It's a cat and mouse game where the cost of defense is much higher than the cost of attack.


Lots of ways around this, for example:

* Use an expensive KDF to derive the mutex name from some hardware attribute like the MAC address or processor serial number (e.g. it takes 5s to compute, and must be done on each machine) - as soon as there are thousands of such malware strains using this approach, checking for them would become commercially unreasonable for the AV. The existing infection could even include the time in the KDF input and rotate mutexes every hour.

* If an infection is detected by the mutex, the new instance tries to use IPC to connect to the existing instance and follow a protocol to verify its validity.

* Some other resource is used to ensure mutual exclusion of infections. Binding to a port on localhost, locking a file, writing a temporary file, creating a named pipe, creating a process with an obscure name in the process list, creating shared memory, etc...

* A statistically detectable behaviour is used to signal presence. For example, any Windows process can get the free system memory - an existing infection could regularly monitor free memory and then allocate or free memory to make sure the free memory stays as a multiple of some number. A new infection could then monitor free memory, and if it stabilises to the magical multiple, allocate memory, check if the free memory changes to the multiple, and repeat until it has confidence of an infection. A similar approach could be to look for an encoded signal in the pattern of CPU usage over time (it might only work when the system is mostly idle, but that might be good enough). Potentially this could be done using sleeps and accessing the system clock, so would be very hard to block.
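The first bullet might look roughly like this (a sketch only: PBKDF2 standing in for "an expensive KDF", the MAC address obtained via `uuid.getnode()`, and the iteration count, salt, and `Global\` prefix all illustrative):

```python
import hashlib
import time
import uuid

def mutex_name(iterations: int = 200_000) -> str:
    """Derive a per-machine, per-hour mutex name via an expensive KDF."""
    mac = uuid.getnode().to_bytes(6, "big")             # hardware attribute
    hour = int(time.time() // 3600).to_bytes(8, "big")  # rotates hourly
    digest = hashlib.pbkdf2_hmac("sha256", mac + hour, b"salt", iterations)
    return "Global\\" + digest.hex()
```

An AV vendor would have to pay the same KDF cost per candidate strain per machine per hour, which is what makes blacklisting such names impractical at scale.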

All in all, I agree with some of the other comments that this is a cat and mouse game, and it is stacked against the cat (AV vendor).


A bit late here so perhaps commenting into the void, but here's my £0.02 anyway ... :)

Creating a system mutex ought to be a capability, e.g. in Capsicum: http://www.cl.cam.ac.uk/research/security/capsicum/

Applications ought to have to whitelist the named mutexes they can create.


It's not uncommon for apps to need dynamically named mutexes.


I always use mutexes based on machine-unique characteristics. Obviously one can still reverse engineer the algorithm and ship AV updates to handle those mutexes, but that's more work than shipping in.txt and out.txt full of strings to your antivirus clients. In the end, I think this ultimately does not mitigate the discrepancy between attacker and defender costs.


Pointless idea. You can coordinate in many ways; a mutex is just one of them. Also, you can hash the MAC address with a stable ID to get a different mutex on every machine. You can also change mutexes; new versions may check multiple names. Sorry.




