I Can Crack Your App With Just A Shell (And How To Stop Me) (kswizz.com)
250 points by SeoxyS 2437 days ago | 89 comments



The stealthiest cracking countermeasure I ever witnessed was an application that XORed some of its UI messages with a hash of its own binary, so if you edited the binary directly the crack seemed to work just fine ... but then the application would gradually go insane. The cracker who finally posted a working crack was impressed with how simple and devious the countermeasure was.
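The trick is easy to sketch: derive a keystream from a digest of the shipped binary and XOR the UI strings with it, so a patched binary silently decodes them to garbage. A toy illustration in Python (the byte strings standing in for the binary are invented; no real app's scheme is shown here):

```python
import hashlib
from itertools import cycle

def keystream_decode(ciphertext: bytes, binary_bytes: bytes) -> bytes:
    """Decode a UI string with a keystream derived from the binary's digest.

    If the binary has been patched, the digest (and thus the keystream)
    changes, and the message decodes to gibberish -- no explicit check needed.
    """
    digest = hashlib.sha256(binary_bytes).digest()
    return bytes(c ^ k for c, k in zip(ciphertext, cycle(digest)))

def keystream_encode(plaintext: bytes, binary_bytes: bytes) -> bytes:
    # XOR is symmetric, so encoding and decoding are the same operation.
    return keystream_decode(plaintext, binary_bytes)

original = b"\x90\x90\x74\x05\xe8\x00"  # stand-in for the shipped binary
patched  = b"\x90\x90\x75\x05\xe8\x00"  # one byte flipped by a cracker

msg = keystream_encode(b"Save complete.", original)
print(keystream_decode(msg, original))  # readable only against the intact binary
```

The point of the design is that the failure mode is delayed and diffuse: nothing in the code path says "piracy check," the strings just rot.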


Brilliant, but be careful about this. There was an article here years ago about an indie game developer who put tons of different piracy checks all throughout his game. He was pretty clever, and made it so that several didn't activate until partway through the levels — that way the crackers might miss them. Also, he didn't show any "pirated copy detected!" messages, which would have made the checks trivial to find. Instead, the application would simply crash with a cryptic error message.

It worked perfectly — the crackers missed the later checks. Anyone who torrented the game found that it crashed reliably as soon as you completed the first level. Game over.

Unfortunately, this did not generate the kind of PR he was hoping for. In fact, all this did was give the impression to everyone who pirated the game that it was a buggy piece of shit. Since there was no obvious reason for the odd behavior, they assumed it was the fault of the application. They stormed the gaming review forums and discussion boards, complaining about how the game was "shitty" and "unplayable". Nobody was keen to mention that they had pirated it, so there wasn't an obvious trend. At the time, the ratio of pirated video games to legal ones was about 10:1, so the bad feedback overpowered the good feedback by about 10:1. He was ruined.

Be careful about anti-piracy. You just might succeed.


I mentioned this game in another comment on this page, but Spyro: Year of the Dragon used this technique and it worked very well for them. It could be that this is because it was on a console and not on the PC, but who knows. The goal for them was simply to keep the crackers at bay for as long as possible, to keep sales high during the game's initial release. The developers stated that once a game is cracked, sales drop dramatically, so the longer they could keep the game uncracked, the more money they made.

Great article from the developers about this: http://www.gamasutra.com/view/feature/3030/keeping_the_pirat...


I remember playing Spyro, good memories. I think there's a new one out, maybe I'll check it out for old times' sake. Thanks for the positive nostalgia.


A similar (but much simpler) anti-piracy feature was built into Command & Conquer: Red Alert 2. The game would appear to operate normally, and let you start playing; however, after 30 seconds, all of your buildings would explode and you'd instantly lose.


That one was a bitch - sometimes it'd happen to my legally bought and paid for copy of the game.


DRM = functionally indistinguishable from broken


If lots of games used similar methodologies then people would slowly learn that "pirated game" == "buggy game". Wouldn't work unless lots of them did the same thing of course.


No, they don't have anything to compare it with. They'd never figure out that it's because stuff is pirated.


Exactly. It's even worse if you add the crippling behavior later on, i.e. in the upgrade from v1.2 to v1.3. All pirates notice is that the new version is really unstable.


I am almost certain Ableton Live for the Mac does something like this. It is a piece of music production/performance software. It will appear to function normally, but the audio engine will gradually begin to fail in increasingly ugly ways, especially when you use plugins.

It usually starts doing this after a few weeks or months of regular uninterrupted use. Considering this app is used by professional musicians to perform in front of audiences of thousands of fans, having the possibility of the app crashing hard at a random time hanging over you is a pretty powerful disincentive against piracy.

Over the years, many cracking groups have tried and failed to overcome this. The guys at ableton are extraordinary programmers and they've obviously done a number on this one.

There were even rumours that one of the top audio software cracking group members was actually an ableton developer, and that they leaked these devilishly broken builds to the warez scene themselves.


I think that Ableton does this too. Every few weeks or so it would reliably crash.


Why bother with stealth? My favourite approach is the Microsoft approach. It pops up, says "I'm cracked, click here", which takes the user to a web page that shows them all the benefits they'll reap if they get a legit version, just type your credit card number in this box and all is forgiven. It's hilariously easy to make your installation legal, which is the point... it's easier than pirating it, AND you get benefits.


Just wanted to add that this is the exact opposite to what some large game companies do. I bought Settlers 7 (an Ubisoft title) and got kicked out of the single-player campaign every time my internet connection blipped. In contrast, had I pirated the game, I would have had a paradoxically better experience. No incentive to buy, whatsoever.


Forgive my ignorance, but how does this work? How do you know the right signature to verify against? It seems (to my not-much-of-a-programmer mind) that you've got a chicken-and-egg scenario here.

But that's obviously not the case, so can you explain briefly how it works? Or just paste a link.

Thanks!


You move the UI messages into a separate resource file, as you would for language translation. The executable signature is unaffected by the changed UI messages.


Alternatively, you could use a broken hash and modify an unneeded string so as to produce a collision with the key you decided ahead of time.


Brilliant!


The fact that most simple copy protection can be broken by someone that knows a bit of assembly shouldn't surprise anyone writing applications, and this post is just self-congratulatory silliness that doesn't actually help someone that wants to protect their software.

It wouldn't be any more responsible/ethical/useful of me to post a "I Can Crack Your Non-Mac App With Just A Copy Of IDA Pro and HexRays" tutorial. I could show you how I can press 'F5' and decompile your code back to surprisingly readable pseudo-C, but that's not going to help you secure your application, it's just patting myself on the back and showing you how cool I am.

On top of that, the author is still flogging the PT_DENY_ATTACH horse, despite the fact that it's been documented over, and over, and over again as trivial to bypass. PT_DENY_ATTACH was added to meet the minimal contractual requirements Apple had with movie studios and record companies, by preventing users from attaching a debugger to DVD Player and iTunes. It's not a real security solution. There's a simple open source kext (first implemented for Mac OS X 10.3!) that simply disables it across the board:

https://github.com/dwalters/pt_deny_attach


  The fact that most simple copy protection can be broken by
  someone that knows a bit of assembly shouldn't surprise
  anyone writing applications, 
But it does, which is the point of the author.

  this post is just self-congratulatory silliness
You know, some people just like to write up something they did that they think is pretty interesting. People will keep reinventing the wheel over and over again and still be proud of their wheel. These kinds of derogatory remarks are uncalled for. You have no idea about the thoughts or feelings of the author and have no reason to think ugly things.

If I were to respond to your comment in the same way, I would say that you were just displaying your superiority complex over someone learning the ropes. Or perhaps bitterness and jealousy over the attention this article gets, while your more advanced knowledge does not get the attention it deserves. Or ... whatever. I can come up with a number of epithets to attribute to you based on that little bit of text, all equally uncalled for.


The fact that PT_DENY_ATTACH can be easily patched by modifying the xnu source code made me wonder about the politics of open source inside Apple.


Hey SeoxyS,

Another fan of your writings. I like the occasional quote you throw in there. However:

I don't agree with the way you phrased your headings. Verging on linkbait, even.

RCE is a hobby of mine and I crack all sorts of shit; it's fun and challenging. I know quite a few people who do.

This is the first time I have read such a blunt "I can crack your..."/"How to stop me" approach. It sounded very arrogant at first. No one else that I know bothers with this direct attitude. I am sure Mac devs are more than aware (Anticipating an article on this as a followup to your post).

"[...] but implementing a bare minimum of security will weed out 99% of amateurs. [...]"

I am not sure where you pulled that number from but it's false. RCE is not as difficult as you make it out to be, and amateurs can overcome the usual barriers quickly. Communities thrive on teaching amateurs the art, and they pick up these skills very quickly. I taught a few.


The great danger in the fight against piracy is that it's so damn interesting. You can spend months playing cat and mouse with the people trying to crack your schemes, ratcheting up the complexity to insane levels, and every time you come up with a new scheme and get it working you'll feel like a million bucks because you Won(tm).

But the people on the other side feel the same way, there are more of them, and in reality, they're not actually hurting your business as badly as your delusions tell you they are - none of them were ever going to buy your shit anyway.

Add features, improve your design, fix bugs, or tweak your shitty description and screenshots in the app store (which, in my experience, will affect sales for most apps more than the first three factors put together). Literally any time that you devote to copy protection is wasted; unless you're Angry Birds (and even then I'm not sure), you're not reaching anywhere near a high enough percentage of the people that would happily pay for your product to worry about the ones that would rather just take it.


I agree. The arms race of building 'better' copy protections instead of continuous improvement of your product won't do you any good. I think the main key is deciding on the investment. The amount of time put into those things can be expressed in money. So this poses two questions:

1. Would I be willing to pay the given amount to someone else to do it? If not, and I still want to do it, I should at least admit that this is for personal ambition and not for the product. That's OK; everybody likes a challenge.

2. Will it improve my sales? Again, the money. What stands to be gained from this, and how much effort is OK? Perhaps a simple checksum in addition to the common cmp/jne check is enough to win a few sales, but that's about it for products in a market where uncracked time is not king. Look somewhere above for the gamasutra article about game releases and the value of time. And I think the time constraint doesn't apply to many products.

The idea for one of my projects was to say "if you are able to crack it you can keep it". If someone spent the time and has the ability to do it, it's fine with me. Surely this is no viable solution for most products. But I'm curious how it will work out...


Agreed, playing hide and seek with crackers is a waste of time. I decided to make the anti-piracy protection trivial in the latest version of my software: I just write an installation date into an .ini file.

I want customers that love the software and are happy to pay the price I ask. Software piracy can be solved by social means, not technical means.


Yea, artificial scarcity is fundamentally flawed. I even wrote a series about the various attempts on my blog.


The App Store doesn't need high levels of security on your apps. No matter how much you obfuscate, it only takes one smart person to crack it and then your app is on all the bit torrent sites.

People will buy from the App Store because they want the protection it provides and the convenience. They know when they download your app from the app store that it's not a virus, the install will be one click simple, and Apple has hand reviewed and approved the app.

I think the Mac App Store protection is designed to be just enough to stop Average Joe from copying it onto a usb stick and giving it to his friend. In the end that's really what you want.


Another fairly easy way to do this kind of thing is to use the DYLD_INSERT_LIBRARIES variable. You can reverse engineer the class names with class-dump, subclass a class, and override a suitable method, e.g. IsLicenseValid(), to just return true. You can then start the program and insert your new subclassed class into it like this:

  $ DYLD_INSERT_LIBRARIES=/path/to/your/Subclassedlibrary.dylib arch -i386 /Applications/OriginalApp.app/Contents/MacOS/OriginalApp &

And on a sidenote, I thought it was funny to see him refer to something as 'badly spelt' - I thought that 'rye' remark was a bit 'corny' (rimshot :-)


I didn't know about that, that's really neat! Will need to do some research into that!

PS: I'm poking fun at myself—since I wrote the original app, misspelling included. Also, I use American English, but I do prefer the British forms 'spelt' and 'burnt.'


A quick correction/clarification on the above technique for anybody wondering how this works from my poor explanation.

- I should not have written 'sub-classing'; this technique works by actually extending an existing class (Objective C allows you to extend a class)

- AFAIK this technique only works with Objective-C based apps.


Meh, decoding compiled C code is about just as easy for me. I wouldn't worry about it until it becomes a serious problem. The people who crack many apps in the scene are pretty decent at it and this will not slow them down.

Edit: Actually, they're not very good at it, but this still won't slow them down much.


Some people also view the windows as an invitation to throw the stones in, claiming they are too fragile to be of any protection anyway.

Next time you buy a DRM-ed book from Amazon.com, or watch a film you cannot make a copy of, you can contemplate that the protection there is much better than in some Mac app.

Would that make you happier as a user?

The way to solve this problem is to spend more time adding features to frequently released newer versions of the software. Cracking the same basic reg code over and over would get boring for a few-dollar app.


The challenge then is to write a script that automates the cracking.


But does that actually matter? As far as I can see, it’s already easy to pirate any app you want. All you need is Google.

I’m suspicious that super secure DRM really stops people from pirating, especially when considering something non-essential with many (maybe worse but often free or more easily piratable) substitutes. Super secure DRM might be more effective with something really unique you really cannot get any other way like games (but those will be cracked anyway, won’t they?) but some app? I doubt it.


I would actually lean the other way. If your system is far more secure than another identical system, people are less likely to bother targeting your application when they break open the other one much more easily.

It's only when you bring something unique or "better" to the table that you make it worth spending significant amounts of effort on breaking.


Well, that's more than just a shell.


I Can Crack My Own App With More Than A Shell (And Link To Another Of My Articles)


How so?


vim, gdb, hex editor...


disingenuous response: 'startx' also runs within shell


All of which run inside a shell.


I think you misunderstand what a shell is. All these apps run within a _terminal_, the shell merely invokes them.

The article's headline made me a little excited to see some cool bash hackery (there's a lot of functionality packed into bash, see its colossal manpage), but when I saw it was just the usual debugger/patch/etc routine I was disappointed.

By the standards of this post's title, I could say that I have written huge pieces of software with "just a shell". In fact, the foundations of modern computing could be said to have been built with "just a shell". (ie, before they had GUIs) See how silly it sounds?


There was a time I did everything in a shell...


I would've been more impressed if you did this ONLY using GUIs. Clickies, checkboxes, buttons - shiny stuff. Command line tools are the best and most efficient for reverse engineering, IMHO. So if you were looking for an "I built the Statue of Liberty with matchsticks" type of effect, it's kind of a fail, at least on me.

Still, there's no such thing as too many reminders about security, so thanks for bringing it up.


A much more interesting dive into exploring a binary's internals, at DEFCON CTF difficulty -- http://hackerschool.org/DefconCTF/17/B300.html



The only "real" copy protection would be trusted-computing right down to the hardware. Signed binaries, with the signature database ultimately in hardware and controlled by a single party.... and even then, we'll have jailbreaks and keys leaked.

But seriously - this was interesting in and of itself, for those who don't know the tools. The whole concept of copy protection and registration is a war that can't be won. Denying unregistered people proper updates seems to me, from experience, to be the most effective deterrent - I don't like to apply updates if I'm not sure whether they will cripple my app because I used a weird serial #, and nobody likes to run a "keygen" these days because who knows what it does.

In the end - all software is piratable, and usually by those who won't pay for it anyway.

With the declining price of software and mass-markets like the app-store, more people will pay. (I like a certain piece of SSH terminal software for windows - but I don't use it, because I'll be damned if I'm going to pay over a hundred bucks a seat for it - it's not THAT much better than the free alternatives. If they brought that price down to something reasonable, I'd use it all over)


Near the end of the article, the author mentions that storing a digest of the binary is an effective means of protection. I've heard this before, but I've never understood how it works. There are two ways I can think of:

One is that you just build the binary, run it through SHA-1 (or whatever), and store that digest somewhere in the installation directory. But what's stopping attackers from just changing the digest? They have access to the application, so they can know exactly how to generate the digest; all they have to do is run the bundled digest function in gdb, copy the output, and then search for it in the installation. Even if the author tried some sort of obfuscation (xor, deflate, reverse, etc.), such attempts would show up in the binary and could be trivially duplicated.

A second is that the digest is somehow pre-computed for a binary before it's built, then included in the binary itself. But I don't see how this is possible with secure digests. And if the method is simple enough that it's worth using for typical iOS applications, what prevents an attacker from pre-computing a digest for the cracked version?
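For what it's worth, the first scheme really is as weak as it sounds. A toy sketch of why (plain SHA-1 with an externally stored digest and no obfuscation; the byte strings and function names are invented for illustration):

```python
import hashlib

def build_release(binary: bytes) -> tuple[bytes, str]:
    # The vendor ships the binary plus its SHA-1 digest in the install dir.
    return binary, hashlib.sha1(binary).hexdigest()

def runtime_check(binary: bytes, stored_digest: str) -> bool:
    # The self-check the app runs at startup.
    return hashlib.sha1(binary).hexdigest() == stored_digest

binary, digest = build_release(b"\x74\x05 check_license")

# The attacker patches the binary, then simply recomputes and replaces
# the stored digest -- and the self-check passes again.
cracked = binary.replace(b"\x74", b"\x75")
new_digest = hashlib.sha1(cracked).hexdigest()
```

Anything that makes this harder has to hide *where and how* the digest is checked, which is exactly the obfuscation arms race the surrounding comments describe.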


  They have access to the application, so they can know
  exactly how to generate the digest

Traditionally, the way of doing this is by making that 'can know' step very difficult. Techniques to accomplish that include refusing to run under a debugger, multiple layers of protection, self-modifying code, loading the digest code from disk block checksums, from between tracks on a floppy disk or from blocks marked bad (back in the day when there weren't that many layers between application code and hardware), etc.

A lock does not have to be unbreakable; it just has to make breaking it costly enough to discourage even attempting breaking it.

With hackers, though, that does not quite work. They see even attempting to break the lock as enough of a reward in itself.


When Mac OS X updates the signature on a binary (for instance, when you configure a firewall rule for a previously unsigned binary), the actual Mach-O file will be changed -- and your digest will be incorrect.

Skype (which has notoriously complex obfuscation) had this problem for a short time when Mac OS X 10.5 was released:

http://securosis.com/blog/leopard-firewall-code-signing-brea...

You can work around this by validating only the important subset of the Mach-O contents, but it's probably not worth it. Cracked applications (rather than, say, reverse engineered serial number generators) are an annoying thing to use -- you'll have to refrain from applying updates until you get a new crack, trust the person distributing the crack, etc.

It's not something I (or, afaik, most other small Mac developers) really worry about.


If you have access to enough computing power maybe you can store a digest of the binary while including the digest in that same binary. That would make it a LOT harder to just change, but would also slow down things like security patches.


If you have enough computing power to do it, so do the crackers (and usually even more so).


I was under the impression that hashes were there to protect against download/disk errors, or generic viruses/trojans etc.


No. Effective means of protection are when you "roach motel" the data.

The first rule of shareware engineering is that you never leave the full functionality in the trial build. Trial version shouldn't save? Rip the save code out. Shouldn't print? Remove printing. Chop it out wholesale.

If you do an unlocking scheme, then make it subtle. Take a hint from Unix development: don't tell the user whether the code worked or not. Just accept the code, whatever it is. Then, tomorrow, tell the user if it was a bad code.

And if it's on the blacklist, don't tell the user at all; instead, start introducing subtle errors everywhere. "What, you saved it yesterday and now it doesn't open? Whoops (snicker)." Or misalign printing, so output looks fine for a draft but not for 'professional' use. Or you could go the obvious route of slapping a banner on it, but that is usually easily removable.

The idea here is to be subtly annoying, up to the point of just doing nasty shit to the data your program works with. And of course, give error codes in the form of an md5sum that tells the company whether you're a pirate or not.
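The "take the code, tell them tomorrow" idea can be sketched in a few lines. A toy model (the code lists, delay, and in-memory state are all invented for illustration; a real app would persist this in a prefs file):

```python
BLACKLIST = {"PIRATE-0001"}   # codes known to circulate in keygens
VALID_CODES = {"ABCD-1234"}   # legitimately issued codes
DELAY = 24 * 3600             # don't reveal a bad code for a day

state = {}  # would live on disk in a real app

def enter_code(code: str, now: float) -> str:
    # Accept everything silently; never confirm success or failure.
    state["code"], state["entered_at"] = code, now
    return "Thank you."

def launch_check(now: float) -> str:
    code = state.get("code")
    if code is None or now - state["entered_at"] < DELAY:
        return "ok"              # too early to tip our hand
    if code in BLACKLIST:
        return "degrade"         # quietly introduce glitches instead
    if code not in VALID_CODES:
        return "invalid code"    # only now reveal the rejection
    return "ok"

enter_code("PIRATE-0001", now=0.0)
assert launch_check(now=100.0) == "ok"          # day one: no hint given
assert launch_check(now=100000.0) == "degrade"  # next day: quiet sabotage
```

The delay is the whole point: by the time the cracker sees a failure, the patch that caused it is a day behind them.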

But as I said earlier, put the time you would spend protecting the program into making it do its task better and easier instead. Crackers find these protections to be challenges. They crack just to keep their chops up.


"And if it's in the blacklist, don't tell the user at all, and instead start introducing subtle errors everywhere. "What, you saved it yesterday and now it doesn't open? Whoops (snicker)." Or, misalign printing so anything looks good for a draft but not 'professional' use. Or you could go the obvious route of slapping a banner on it, but that is usually easily removable"

"Gee, I'm sure glad I decided to pirate [program], it's buggy as hell. Better warn my friends..."


Spyro: Year of the Dragon used a similar technique to great effect. Don't be so quick to dismiss this method of deterring crackers; it just needs to be done right. There are arguments going both ways on this issue, so it's simply not black and white. Perhaps it works better for games than applications, but again, you'll find arguments going both ways for games and apps alike.


There was a rumour going around the EE department that if you tried to crack Eagle or use a keygen'ed license, the software would slowly start corrupting the circuits you were working on. After a month it would tell you that you had pirated the software, but not before you'd created a fair number of non-working PCBs. Devious!

I'm not sure if this is true or not, but it kept us all on the straight and narrow. :)


From what I remember, the copy protection on Eagle was conceptually simple and seemingly effective. Basically it put the license ID into all saved files. Updates to the software would include an updated blacklist which prevented loading of files from old pirated versions. Crackers wouldn't bother with fixing the load functionality as everything would work fine for the current iteration.

There were of course ways to work around it (load then save with the same version under the free license, or export the entire design as text using a ULP and then import in the new version). But on the whole, it struck me as frustrating the process just enough to encourage users who would possibly pay, to pay. (I wonder if Cadsoft made even more from unlocking design files. "You are having trouble opening that design because your fly-by-night consultant used a pirated version of Eagle. We'll happily unlock it for the cost of a deluxe license.")
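As described, the Eagle scheme is simple enough to sketch. A toy version (the file format, license IDs, and blacklist are invented; Cadsoft's actual format is not shown here):

```python
BLACKLIST = {"LIC-PIRATED-42"}  # hypothetical; shipped with each update

def save_design(design: str, license_id: str) -> str:
    # Every saved file silently carries the license ID that produced it.
    return f"license={license_id}\n{design}"

def load_design(file_contents: str) -> str:
    header, _, body = file_contents.partition("\n")
    license_id = header.removeprefix("license=")
    if license_id in BLACKLIST:
        # Only a later update, with an updated blacklist, refuses the file.
        raise PermissionError("file created with a blacklisted license")
    return body

def loads_ok(file_contents: str) -> bool:
    try:
        load_design(file_contents)
        return True
    except PermissionError:
        return False

# Saving always "works"; the trap springs on load, one version later.
assert load_design(save_design("R1 10k", "LIC-LEGIT-7")) == "R1 10k"
```

Note how this sidesteps the PR problem in the top comment: the pirated copy itself never misbehaves, it's the user's accumulated work that becomes hostage to an upgrade.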


And then all the people who pirated your app deride it as "buggy" and you lose your paying customers, because who would buy such a buggy piece of crap?


I personally like the idea of an application "unlocking" itself every time based on a hash of its binary. You would have to find all the places these hashes are computed -- if you missed even one place, you wouldn't be able to unlock the app.

Of course, such an app could still be cracked -- as could any app... because all you have to do is

1) purchase a legitimate copy and enter a fake name

2) take a snapshot of a working, unlocked app

3) remove all the code that cripples that state

The only way to really prevent cracking of apps that run locally is either challenge-response dongles or requiring people to provide a strongly verified identity in order to unlock the app (that way the cracker can't distribute the app without compromising the identity of the original buyer). And that is just too inconvenient for the actual buyers. Once again, security at the price of convenience.
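The identity-binding idea above amounts to making the license key contain the buyer's verified identity plus a MAC over it, so a leaked key exposes whoever leaked it. A toy sketch (a real scheme would use a public-key signature so the verifier shipped in the app can't mint licenses; the secret, names, and key format here are all invented):

```python
import hashlib
import hmac

VENDOR_SECRET = b"kept on the vendor's server"  # hypothetical

def issue_license(identity: str) -> str:
    # The key is bound to a verified identity: sharing the key shares the name.
    mac = hmac.new(VENDOR_SECRET, identity.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{identity}:{mac}"

def check_license(license_key: str) -> bool:
    identity, _, mac = license_key.rpartition(":")
    return hmac.compare_digest(issue_license(identity), f"{identity}:{mac}")

key = issue_license("Jane Buyer <jane@example.com>")
```

Which is exactly the convenience problem the comment names: legitimate buyers must hand over a strongly verified identity just to run the app.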


> The only way to really prevent cracking of apps that run locally is either challenge-response dongles ...

This doesn't really work. If you have all of the functionality running on your machine but the dongle is there to authenticate, it can be cracked by ripping out the code that does the challenges. The proper way to secure an app using a dongle is to move some key piece of functionality out to hardware instead.
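A quick sketch of why authentication-only challenge-response fails: if the application's challenges ever repeat, a crack can record real dongle responses while a genuine dongle is present and replay them from a table, without ever learning the secret. (Toy code; the HMAC construction and names are illustrative, not any real dongle's protocol. And even with unpredictable challenges, the cracker can simply patch out the code that asks.)

```python
import hashlib
import hmac

DONGLE_SECRET = b"burned into the dongle at the factory"  # hypothetical

def dongle_respond(challenge: bytes) -> bytes:
    # What the hardware token computes; the secret never leaves the dongle.
    return hmac.new(DONGLE_SECRET, challenge, hashlib.sha256).digest()

# The crack: record challenge/response pairs while a real dongle is
# attached, then answer from the table forever after.
replay_table = {}
for ch in [b"chal-1", b"chal-2"]:
    replay_table[ch] = dongle_respond(ch)

def cracked_respond(challenge: bytes) -> bytes:
    # Works iff the app's challenges repeat; raises KeyError otherwise.
    return replay_table[challenge]
```

Moving real functionality (e.g. the crypto in the hotel-lock example below this thread) into the hardware changes the game: there is then no response table to memoize, because the output depends on data the crack has never seen.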


Good point! You need some functionality, though, where the response can't just be memoized by the crack. What could it be?


Depends on the application, really. I can give you an example of where I've personally considered using this (ended up going with an alternative, however): my startup's product is a hotel front desk system and we have to encrypt room keys to work in the locks; pushing the crypto off onto an external device would make it considerably more difficult to pirate the software, as you'd have to reverse the algorithm and reimplement it in software. In the end, it didn't make sense for us, but it would've been pretty solid, as the odds of you having two identical cards is monumentally slim (and would only even be possible every couple of years).


People crack and hack our apps. We don't think it's worth fighting.

And moreover, we have a link on our home page that says if you email us, we will give you our apps for free. Some people take advantage of this offer, but the vast majority of users do not.


I think we are exploring the wrong issue here. We shouldn't be looking for a non-crackable scheme, we should be striving to find a scheme to recognize customers who are willing to pay and reward them.


I wrote labrea for similarly playing with apps: http://dustin.github.com/2010/12/03/labrea.html

Specifically, it should be possible to deny the PT_DENY_ATTACH call itself with labrea (though in practice I've run into runtime linker problems with that exact call that I haven't quite figured out; I haven't put much work into it).


Price your app exactly at a point where people with money will gladly pay for it instead of suffering the hassle of downloading crapware-infested copies. And let people without money copy it freely without barriers; see it as a marketing tool, so everybody uses your app, not your competitor's.

That price usually is between $0.99 and $9.99 and thank Apple for showing us that lesson.


I recall the IntelliJ IDEA IDE had some pretty decent protection. They used an "encrypting" class loader (of the xor variety), and also encrypted all their resource files. I have a lot of respect for the lengths they went to, though I'm not sure how much it really benefited them.


Linking statically helps too (not sure that's doable with Apple), then stripping and packing.


If you're a developer for Apple, presumably you are selling your apps through the App Store. Apple gets a cut of the revenue for apps. So if it were worthwhile to have a more complicated DRM scheme, wouldn't Apple provide it?


Crackers aren't your customers. Don't waste time/money on them.


For the definition of "Shell" used by the author, isn't the title basically the same as saying "I Can Crack Your App With Just A Computer"?


Thanks for writing a detailed step by step tutorial on this interesting topic. I have always wanted to learn more about this stuff.


Excellent article, especially with code


Gah, this is your standard two-byte change-JE-to-JNE patch.

Perhaps to people who program in higher-level languages this is not evident, but old assembly programmers know this stuff well. Even for the newer ASM programmers, we had Fravia+ (may he rest in peace) to teach us the ropes on reverse engineering and unprotecting 'nasty' code.

And those students of Fravia+ know one thing well: if it is viewable, executable, or listenable on a device you own, you can do anything to it. He recommended taking what you would have put into protections and making your program better by that much. Or prepare to protect the hell out of it (and release every day, munging the exe).
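For anyone who hasn't seen the patch being referred to: on x86, the short-form JE and JNE conditional jumps differ only in their opcode byte (0x74 vs. 0x75), so flipping the sense of a license check is a one-byte edit at the right offset. A toy illustration (the instruction stub is made up, not from any real program):

```python
# x86 short conditional jumps: JE (jump if equal) = 0x74, JNE = 0x75.
JE, JNE = 0x74, 0x75

def patch_je_to_jne(code: bytes, offset: int) -> bytes:
    """Flip a short JE to JNE at a known offset -- the classic crack."""
    if code[offset] != JE:
        raise ValueError("no JE at that offset")
    patched = bytearray(code)
    patched[offset] = JNE
    return bytes(patched)

# Hypothetical stub: 39 C8 = cmp eax, ecx; 74 05 = je +5 (skip the nag)
stub = bytes([0x39, 0xC8, 0x74, 0x05])
```

The hard part of a crack is never the edit itself; it's finding the offset, which is what the debugger/disassembler session in the article is for.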


Oh no, I did not know that Fravia was dead. Spent a lot of time reading his stuff years ago. RIP Francesco.


Sadly, he passed away May 3, 2009.

Here was HN's article and responses: http://news.ycombinator.com/item?id=600523


Fravia materials are great, but they are outdated. Techniques are still valid, but the users have changed. Perhaps Mac users are still naive and ignorant when it comes to running random stuff on their machines, but on the Windows side anti-virus companies were fairly successful in educating people on this matter. Scaremongering works :)

So: a signed executable + a few simple validity checks + a couple of well-hidden timebombs that activate when the .exe is messed with - then add withholding support and automatic updates, and this combo works as an effective piracy deterrent. Specifically, it provides enough incentive for those actually using the app to use the official version.

Fravia's great, but there are social anti-reverse engineering hacks to consider.


Subtly crippling the app may not be the wisest move, unless you make it obvious that it happens because the crack has been detected.

Otherwise, the pirates will get the impression that your program is buggy and will look at alternatives rather than opening their wallet.

And still... Even if you announce that the crack has been discovered before crippling, you may upset them and send them to your rivals anyway.

==> Potential good word of mouth replaced by bad one.

You may improve your "conversion" rate by being implicitly nice rather than hostile to pirates. Don't shout it from the rooftops, of course; there's no need to incite your paying customers to pirate.

Wait... Do shout about the lack of DRM! Protected programs are often more cumbersome to use than their pirated counterparts. This is especially true for music, movies and video games, where drastic copy protection measures are taken. Punishing your customers for paying you is not a good idea (unless you hold a monopoly, but that is probably not sustainable).


As someone who has been down this path I can offer a couple of comments.

1. Crashing or crippling the program indeed has an obvious negative PR side-effect. However it can be mitigated by inducing a very exotic crash, something like "Division by zero" or better yet - "Illegal Instruction", which would clearly point at mangled code being at fault. Also stick a thread titled "Illegal Instruction" in the Support forums explaining why it happens, and it will be the first Google hit for the respective search.

2. While the trialware model is the way, it does not automatically mean it has to be an annoying nagware. What worked for one of my projects was to allow multiple consecutive trials. First was one month, next was two weeks, third and all subsequent trials were a few days long. These periods were configured on the licensing server, and the program did real-time license retrieval. So for me to be able to experiment with this model and get meaningful statistics, I had to ensure that the program at the other end of the licensing sessions is authentic. From that followed a need to safeguard parts of its code from modification and I ended up doing pretty much what eps described.

--

In other words ensuring integrity of the program is needed for more than just fighting pirates. Pirates are not a big deal, let 'em steal and crack. It's the legit customers that this protection ultimately benefits.


"Division by zero" and "illegal instruction" don't sound exactly exotic. And even with a proper error message, a crashy app is perceived as defective. Assuming that users will google the message is a long shot. I'd assume that most would simply show your binary the trash can / recycle bin / dev/null and move on.

Your second strategy sounds very interesting, especially because you can A/B test the licensing period and the text that prompts users to register, even with people who have been using the trial version for a long time.

However, I don't understand how it helps paying customers.


> I'd assume that most would simply show your binary the trash can / recycle bin / dev/null and move on.

And this is totally fine. These are the users who consciously decided to run a hacked version instead of the original. Why they would do that is beyond me, but I am damn sure I will not ever see one of them as my customer.

The only drawback is them making a fuss because of the crashes, and this is easily mitigated as per above. You just have to keep in mind that checking Referrers in website logs and following up on any product-related discussions out there should also be routine. So for anyone complaining about the crashes - post a link to the support article explaining why and when it does that.

> However, I don't understand how it helps paying customers.

Primarily by not needing to spend any time on support/PR issues stemming from the use of hacked versions.


> What worked for one of my projects was to allow multiple consecutive trials.

Awesome. I have always been annoyed at shareware that refused to run after a given time. I sometimes installed it just out of curiosity, then forgot about it, then came back to it when I had a real need for it and a chance to really think about whether to buy it or not, and just then... it refused to work.


You bring up some valid points. As long as we don't go back to the nag screens of shareware, I'm excited to see conversion attempts made in novel ways.

If a product is of value to me I will indeed pay for it, and if I can connect with the developers, I will pay even more. That's where choose-your-own-price really gets me; I often pay more than typical.

However it isn't true across the board.

Regardless, I have read that many companies which experience vast piracy of their products, e.g. Adobe, make much of their revenue from other businesses. Is this true?


    #ifdef DEBUG
        // do nothing
    #else
        ptrace(PT_DENY_ATTACH, 0, 0, 0);
    #endif

I really have a hard time taking advice on copy protection from someone who doesn't know about #ifndef.

Furthermore, PT_DENY_ATTACH won't help because any cracker worth his salt will just open the binary with a hex editor and remove the call to ptrace(). The other two tips to prevent cracking are, at best, as useless as this one.

And just in case you're wondering, those three methods are equally useless on iOS.


To avoid having to modify the binary (and thus deal with checksums and the like), you can alternatively break on ptrace where the first arg == PT_DENY_ATTACH, then have it immediately return. Easy to do in GDB.
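One possible session along those lines (assumptions: x86-64, where the first argument arrives in %rdi, and PT_DENY_ATTACH is 31 on macOS - check both against your target):

```
(gdb) break ptrace if $rdi == 31
(gdb) run
Breakpoint 1, ptrace ()
(gdb) return 0
(gdb) continue
```

The `return 0` pops the frame before the PT_DENY_ATTACH request ever reaches the kernel, so the binary on disk stays byte-for-byte intact and any checksum-based integrity checks still pass.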


This is useless: task_for_pid() still works, and there's many public kernel extensions out there that just remove this feature entirely.


Amateur crack for an amateur protection.


Read this title as a Mario Kart reference. Oops.



