Solid writeup. From someone who does/did a lot of this professionally:
1. Android typically is easier for this kind of work (you don't even need a rooted/jailbroken device, and it's all Java/smali),
2. That said, instead of installing an entire framework like Xposed that hooks the process to bypass certificate pinning, you can usually just decompile the APK and nop out all the function calls in the smali related to checking if the certificate is correct, then recompile/resign it for your device (again, easier on Android than iOS),
3. Request signing is increasingly implemented on APIs with any sort of business value, but you can almost always bypass it within an hour by searching through the application for functions related to things like "HMAC", figuring out exactly which request inputs are put into the algorithm in which order, and seeing where/how the secret key is stored (or loaded, as it were); a rough sketch of what such a signing routine tends to look like follows this list,
4. There is no true way to protect an API on a mobile app; you can only make it more or less difficult to abuse. The best you can do is a frequently rotated secret key stored in shared libraries, with weird parameters attached to the signing algorithm. To make up for this, savvy companies typically reduce the cover time required (i.e. rotate the secret key very frequently by updating the app weekly or biweekly) or use a secret key with several parts generated from components in .so files, which are significantly more tedious to reverse.
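To make point 3 concrete, here is a minimal sketch of what client-side request signing often looks like once you've located it. Everything specific here is hypothetical (the canonical-string layout, the key split into two parts, the header name); the general shape - an HMAC over a canonicalized request plus a timestamp, keyed by a secret assembled at runtime - is the typical pattern.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;

    // Hypothetical sketch of the kind of signing routine you'd find and reimplement.
    // The input order, key assembly, and header name are invented for illustration.
    public final class RequestSigner {

        // In real apps the key is rarely a single constant; it is often split across
        // Java constants, resources, and native (.so) code and concatenated at runtime.
        private static final String KEY_PART_A = "hypothetical-part-from-java";
        private static final String KEY_PART_B = "hypothetical-part-from-jni";

        public static String sign(String method, String path, String body, long timestamp) throws Exception {
            // The "canonical string" - which request fields go in, and in what order -
            // is exactly what you work out from the decompiled code.
            String canonical = method + "\n" + path + "\n" + timestamp + "\n" + body;

            byte[] key = (KEY_PART_A + KEY_PART_B).getBytes(StandardCharsets.UTF_8);
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            byte[] digest = mac.doFinal(canonical.getBytes(StandardCharsets.UTF_8));

            // Apps usually hex- or base64-encode this and send it in a header, e.g. X-Signature.
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }
    }

Once you can reproduce that value for an arbitrary request yourself, the signing no longer gates anything.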
I also do a lot of reverse engineering work for fun and profit. I agree with most of this, except the part about Android.
When reversing Android apps, they are usually obfuscated (I think it's part of the standard build process), whereas once you become familiar with Objective-C (or Swift) binaries, they're really straightforward: with a nice disassembler and a bit of scripting, they're much faster to work through in my experience.
Request signing, certificate pinning, etc. aren't really targeted at reverse engineers; they're aimed more at snoopers, replay attackers, and so on.
I also agree with 4 completely: there's no point, you should do your security on the server side, not the client side. (Except for games, I guess, where you can ban players etc.; making it a cat-and-mouse game can actually work pretty well there.)
Much harder targets, like obfuscated C++ or VMs (bytecode interpreters etc.), usually help with #4, but even then it just slows things down. (That slowdown, combined with regular changes to the protection, is basically aimed at making reverse engineers give up.)
On iOS you have to go through the extra step of getting the binary onto a jailbroken phone to strip the DRM in order to re-sign it. Once you've got a way to tamper with the binary and run it, most platforms are pretty straightforward to work with - it's really just down to which bytecode or assembler variants you're most comfortable working with. Obfuscated code isn't a barrier. Obfuscated control flow is more irritating, but I don't think the standard Android tools offer that?
Yeah, but this step is usually done for you; you can always google ‘appname’ ipa and find the latest version’s IPA.
Obfuscated control flow is hell, especially in C++ (e.g. Pokémon etc.), but as I said, I'm comparing average Android vs. iOS. Once you have a rooted platform you have a much wider toolset, of course.
> That said, instead of installing an entire framework like Xposed that hooks the process to bypass certificate pinning, you can usually just decompile the APK
I remember reading something years back about Java decompiling, and I believe it said that all Java code is decompilable except for inner classes and nested try-catch. Assuming my memory and the source are correct (which might not be the case), why hasn't it become standard practice for developers who don't want their app reverse-engineered to just put every class inside a wrapper class? I can imagine there could even be tools for doing this at compile time so that you wouldn't need to manually deal with the indirection when writing the code.
The standard way of frustrating decompilers for Android applications is with heavy obfuscation using ProGuard [1] or DexGuard. [2] IMO, DexGuard is a real pain in the ass to reverse around. If you aren't dealing with heavy obfuscation, decompiling APKs is trivial using jadx. [3]
In general I have found that most smaller app developers don't obfuscate at all, and often you can find hardcoded keys/secrets in these smaller applications.
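As a made-up illustration of the sort of thing jadx output reveals in an unobfuscated app (class name and values invented here), the credentials really are one grep away:

    // Hypothetical rendering of jadx output from an unobfuscated app.
    public class ApiClient {
        public static final String BASE_URL = "https://api.example.com/v2/";
        public static final String API_KEY = "hypothetical-hardcoded-key";
        public static final String HMAC_SECRET = "hypothetical-shared-secret";
        // Under ProGuard/DexGuard the same class typically decompiles to single-letter
        // names, and DexGuard additionally encrypts the strings so they only exist
        // decrypted at runtime - which is what makes it so painful to work through.
    }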
Modern Java decompilers (fernflower, cfr, procyon) are very flexible in what they can handle. They're best used in combination, though, as they all tend to choke on different things.
One can easily reverse inner classes. They have a unique naming pattern that tells you exactly where they live: OuterClass$InnerClass. Anonymous inner classes just get numbers instead of names after the $, so you can even tell normal inner classes from anonymous ones and decompile them appropriately. (A small example of the pattern follows below.)
As for nested try-catch... I don't see how that would be an issue unless the compiler somehow merges them into a single block. Which, as long as it's semantically equivalent, doesn't matter at all.
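To illustrate the naming pattern (class names hypothetical):

    // Compiling this one source file produces three class files:
    //   Outer.class, Outer$Inner.class, and Outer$1.class
    // so a decompiler can map each class file back to where it lives in the source.
    public class Outer {
        class Inner { }                           // named inner class -> Outer$Inner.class

        Runnable anonymous = new Runnable() {     // anonymous class -> Outer$1.class
            @Override
            public void run() { }
        };
    }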
I mean, Java compiles down to bytecode, which can be decompiled. If that was a limitation of decompilers at one point, it likely isn't now. There's no way to "hide" instructions.
You can hide them in the sense that it's very difficult to find the 100 instructions and correct state machine out of the millions of instructions and possible states.
I tried doing #2 recently: I found the file with CertificatePinning in the name, found the part returning 1 or 0 based on whether the check passed, and patched it to return 1 in both cases. It still didn't work, and after a few hours I gave up, grepped through the decompiled files for the website name, got an endpoint which turned out to work perfectly in a web browser, and which led me to the ultimate endpoint I needed.
I could probably have gone deeper and found more functions to patch, but I don't know smali, so everything was guesswork and matching things up against the decompiled Java source (and the original source, which seems to have been Moxie's?). A sketch of the kind of check involved follows below.
(>$100 billion company making the app with >1 million users.)
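For reference, the kind of check described above is, at the Java level, usually something like the sketch below (names hypothetical). One catch is that apps often carry more than one such check, sometimes also in native code, so flipping a single return value isn't always enough.

    // Hypothetical Java-level view of the check being patched. In smali the failing
    // path typically reads:
    //     const/4 v0, 0x0
    //     return v0
    // and the usual patch is to change 0x0 to 0x1, or to make the method return
    // true unconditionally.
    public class PinningValidator {
        public boolean isValidPin(java.security.cert.X509Certificate[] chain) {
            return matchesPinnedFingerprints(chain);
        }

        private boolean matchesPinnedFingerprints(java.security.cert.X509Certificate[] chain) {
            // ... compares SHA-256 hashes of the chain against the embedded pins ...
            return false; // placeholder
        }
    }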
In my experience, for certificate pinning the fastest option is to search for the existing certificate/fingerprint and try to replace it with Charles's/mitmproxy's (a sketch of what that pin typically looks like follows below).
Edit: another trick that usually works is to switch the transport from HTTPS to HTTP: change the endpoint to something you control and make it http (a little hex editing of the endpoint string), then reverse-proxy to the real target with mitmproxy/Charles. That can speed the process up a lot.
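On the first trick: if the app happens to use OkHttp's CertificatePinner (just one common implementation; the host and pin below are invented), the pin is a literal "sha256/..." base64 string sitting in the code or a config file, which is exactly the string you search for and swap for your proxy CA's fingerprint:

    import okhttp3.CertificatePinner;
    import okhttp3.OkHttpClient;

    // Hypothetical host and pin. The "sha256/..." literal is what you grep for in the
    // decompiled app and replace with the fingerprint of Charles's/mitmproxy's CA.
    public class PinnedClient {
        static OkHttpClient build() {
            CertificatePinner pinner = new CertificatePinner.Builder()
                    .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
                    .build();
            return new OkHttpClient.Builder()
                    .certificatePinner(pinner)
                    .build();
        }
    }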
I don't know certificate formats and I imagine it would require conversion between hex and something - maybe you can write a blog post with an example?
Ooh, I like the second idea - so since it's plain HTTP, the cert pinning code won't be triggered at all, and all you need to do is grep for https to find the endpoints and change them?
I do this occasionally, and recently found that our local real estate app, which typically hides sale prices behind a range, actually returns the exact value through its mobile API. For #2, you don't even need to recompile it: by the time you've decompiled the APK, you can usually just read the endpoints and parameters/data and reimplement them in your language of choice.
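To show what "reimplement in your language of choice" amounts to: once the endpoint and parameters are visible in the decompiled code (the URL, path, and header below are invented), the replay is just an ordinary HTTP request:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Hypothetical endpoint and parameters lifted from decompiled code.
    public class ListingLookup {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example-realestate.com/v1/listings/12345?fields=price"))
                    .header("User-Agent", "ExampleApp/4.2 (Android)") // mimic the app if the API checks it
                    .GET()
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // the exact price, not the public range
        }
    }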
It's the same with app keys: you have to gate them behind an intermediary/signing server and implement any impersonation/abuse detection there.
https://casper.io/ is an example of doing this (for SnapChat) - they used to take registrations for their own API. Not sure how it's working out for them these days.
Any chances you would be willing to share some of the online resources you use for this? I have recently been getting into mobile security development professionally.
I don't use online resources. I mostly used a combination of dex2jar and apktool. Some custom coding to automate the process. When I'm doing this on iOS I use Hopper.
Could you expand a bit on four? Is it because of the identity problem, or is there something about a mobile app that is fundamentally less secure than, say, a web browser? I'm genuinely ignorant; it seems like it would be good to know.
The "protect" which 4 is referring to is fundamentally like any other DRM: you're trying to give someone access to the content, but also deny it at the same time. In the case of an API, you've given someone an app which knows how to use it, which they can execute on their own computer and control the inputs of, and inspect the output.
If you don't feel like RE'ing the API, you can always just supply the inputs to the app yourself from somewhere else. ("All problems in computer science can be solved by another level of indirection", as the saying goes.)
Actually it's the opposite: the mobile platform is more secure.
But...
Security depends on a ‘sense of security’: when people think a platform is more secure, they tend to ignore or skip a lot of the security work. Developers tend to skip edge cases (such as the pizza API in this thread), and when they're developing on secure platforms, they skip even more.
For example, if you're developing for a platform that isn't jailbroken, you trust the platform DRM (mostly consoles), skip a lot of the work, add certificate pinning, and call it a day. When the platform does get broken, you're totally exposed.
But when you're developing for the web, you're exposed from the beginning; you don't have that sense of security anymore, so you try to cover all the edge cases.
Sure. The only way to sign requests is with something both parties can verify. The client you're using must have access to the shared secret key used in (e.g.) the HMAC process. While you can obfuscate the secret key to an extent that would make a reverse engineer's life miserable (for a case study in that, see the Facebook app), you fundamentally cannot prevent the request signing process from being reversed with enough effort.
It's a very simple principle: the relevant data must necessarily be exposed, even if only in memory, at some point. Like any other DRM, it's imperfect.
Yeah, no worries, I understand what you mean. I reversed the Starbucks app myself a few months ago to see if I could find interesting data. There's less low-hanging fruit when reversing large companies' mobile apps these days.