The argument that, because vendors make no effort to prevent disassembly, they must not care about piracy is flawed; the cost of additional protection may simply not be worth it.
Adobe, for example, used to go to great lengths to prevent people from cracking their software. In the end, a crack still came out within a few weeks. Was it really worth months of highly-paid developer time to slow down piracy by maybe a month at best? (Evidently not, since the copy protection in CS5 and above can be disabled by adding a line to your HOSTS file.)
Even as an individual who's sold software in the past, I find it's usually not worth it. I don't have the time to build protection myself, and commercial software that does this costs hundreds to thousands of dollars. I may not be happy if people pirate my software (honestly, I'm fine with it), but it's not worth the cost to prevent it.
That said, I found the article extremely interesting!
Couldn't agree more. You also lose valuable debugging information: crash reports aren't nearly as useful once you've applied obfuscation techniques. You may very well wind up with a buggier product, and thus a worse experience for your actual customers.
Exactly. Buggy, slow software is a much larger problem than piracy, because it directly affects your paying customers.
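To make the crash-report point concrete, here's a minimal sketch (the `_0x3f2a` name and the order dict are hypothetical) of what a traceback from a name-mangled build looks like:

```python
import traceback

# Hypothetical obfuscated build: the function below was originally
# named average_item_price(), but the obfuscator mangled the symbol.
def _0x3f2a(order):
    return order["total"] / order["item_count"]

try:
    _0x3f2a({"total": 99.0, "item_count": 0})  # triggers ZeroDivisionError
except ZeroDivisionError:
    report = traceback.format_exc()

# The crash report your customer sends back names the frame "_0x3f2a",
# which tells you nothing about what the code was actually doing.
print(report)
```

Without a mapping file from the obfuscation step, every frame in that report is opaque to the developer triaging it.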
IMO good software has the same property as good hardware: the internals are accessible, so you can diagnose and solve problems.
Sure, people may "steal" your code, but then again, if your business model relies on keeping code secret instead of being helpful to customers, you have already lost. You're not some kind of priest protecting your superior, holy code from lesser minds.
Edit: Also, code obfuscation says "I don't trust you, customer!" if nothing else.
For my time spent, I'd rather improve debuggability over most any other aspect of software quality.
I dream of never needing to get a bug reproduced, because when something goes wrong there's real, useful data and an execution trace, whether the bug manifested as a crash or as incorrect behavior. IMO the compilation/execution strategy of CHICKEN Scheme is a big step toward enabling this.
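As a rough sketch of the kind of always-on execution trace described above (a toy using Python's `sys.settrace`, assuming you can afford the overhead; real trace-based tooling is far more sophisticated):

```python
import sys

trace_log = []  # (function name, line number) pairs, recorded as the program runs

def tracer(frame, event, arg):
    if event == "line":
        trace_log.append((frame.f_code.co_name, frame.f_lineno))
    return tracer  # returning the tracer keeps per-line tracing enabled

def buggy(xs):
    total = 0
    for x in xs:
        total += x
    return total / len(xs)  # crashes on an empty list

sys.settrace(tracer)
try:
    buggy([])
except ZeroDivisionError:
    pass
finally:
    sys.settrace(None)

# Even without reproducing the bug, the trace shows every line that
# executed in buggy() leading up to the failure.
print([line for name, line in trace_log if name == "buggy"])
```

With a log like this attached to a crash report, you can often see what went wrong without ever reproducing the bug locally.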
Coming from an open-source background, a perspective that considers being able to analyse how your program works as a "vulnerability" seems quite alien.
(And implying that software vendors who don't obfuscate their code shouldn't complain about piracy seems analogous to the old chestnut of blaming rape victims for dressing immodestly.)
Not to knock the author's article, but I've never seen a code-encryption technique or assembler-level obfuscation that wasn't trivial to remove. If you're looking for protection technologies that hold up in the real world, look into source-level obfuscation, virtual machines, and anti-debug tricks. If people are interested, I can post links.
Also interesting to see an article like this on the front page of HN...
> Not to knock on the author's article, but I've never seen a code encryption technique or assembler-level obfuscation that wasn't trivial to remove. If you're looking for protection technologies that perform well in the real world, look into Source Level obfuscation, Virtual Machines, and Antidebug tricks.
They're all "trivial" to remove for a given value of "trivial." What you can do is detect whether exactly the intended code is running, and you can only get away with this for a while if you separate the detection from any visible consequence by a significant span of time.
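A minimal sketch of that idea (the function names, the SHA-256 checksum, and the 14-day delay are all made up for illustration): checksum the code you shipped, but defer any visible reaction so a cracker can't correlate their patch with the failure:

```python
import hashlib
import time

def critical_feature():
    return "result"

# Checksum of the code as shipped; a real scheme would compute this over
# the executable's text segment at build time, not at startup.
EXPECTED = hashlib.sha256(critical_feature.__code__.co_code).hexdigest()

tamper_detected_at = None  # set when the check fails, acted on much later

def integrity_check():
    global tamper_detected_at
    actual = hashlib.sha256(critical_feature.__code__.co_code).hexdigest()
    if actual != EXPECTED and tamper_detected_at is None:
        tamper_detected_at = time.time()  # note it, but change nothing yet

def maybe_degrade():
    # Only misbehave long after detection, so cause and effect are hard
    # to connect in a debugger session.
    if tamper_detected_at and time.time() - tamper_detected_at > 14 * 86400:
        raise RuntimeError("license error")  # vague, delayed consequence

integrity_check()
maybe_degrade()
print(tamper_detected_at)  # None here, since this code is unmodified
```

The delay is the whole trick: a crack that appears to work for two weeks and then fails intermittently is much harder to debug than one that fails at the patched instruction.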
> Also interesting to see an article like this on the front page of HN...
We've seen things like this time and again: Technology X fails at task Z. The conventional wisdom is that technology X is somehow hopeless in the entire problem domain. Everyone is scared of using technology X. Someone realizes that technology X *can* be used for task Y, but everyone just laughs at him and spouts the conventional wisdom.
It's easy to recognize things like this in hindsight. What's hard is to recognize them ahead of time and to profit from the information asymmetry. For this reason, I think this is appropriate for the front page of HN.
For antidebugging tricks, Peter Ferrie has pretty much written the bible on these when it comes to Windows:
http://pferrie.host22.com/papers/antidebug.pdf
There are a couple of these papers; make sure you read all of them. For platforms other than Windows, you'll have to come up with your own tricks, or Google well :)
For obfuscation, the best tricks are the ones you see in the real world. Check out the excellent Unpackme archives and tutorials of tuts4you:
http://tuts4you.com/download.php
And finally, if you're a newbie interested in getting involved in reverse engineering, check out Lena's excellent tutorial series:
http://tuts4you.com/download.php?view.2876
If you want more info on anything specific, say so...
The code still has to run. Schemes like this are trivial to invert using a virtual machine that records what executed, along with any modifications to the code. You also suffer from a structural weakness: the decryptor for your code ships in the very binary you hand out.
Now, if you could avoid that (by applying some transform that lets the program function without having to carry its inverse), then...
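That weakness is easy to see in a toy packer (XOR stands in for any cipher here; the key and payload are made up): the key and the unpacking routine necessarily travel with the payload, so anyone holding the binary holds everything needed to invert it.

```python
KEY = 0x5A  # shipped inside the binary, right next to the payload

def pack(code: bytes) -> bytes:
    return bytes(b ^ KEY for b in code)

# What the vendor distributes: the "encrypted" program text.
PAYLOAD = pack(b"print('hello from the protected code')")

# The "protection" at runtime: decrypt and execute. But this decryptor
# (and KEY) sit in the same binary we hand out, so a reverser just reuses
# them, or dumps the plaintext from memory after this line runs.
plaintext = bytes(b ^ KEY for b in PAYLOAD)
exec(plaintext)
```

The hypothetical in the comment above (a transform with no shipped inverse) is essentially what white-box cryptography and homomorphic-style schemes aim at, and it remains a hard open problem in general.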