
We Reverse Engineered 16K Apps - r721
https://hackernoon.com/we-reverse-engineered-16k-apps-heres-what-we-found-51bdf3b456bb
======
djaychela
Forgive my ignorance (I'm on HN reading a lot in an attempt to educate myself)
- I can see why that would be a bad idea, but what is the correct alternative
to hardcoding the secret/key in the app?

~~~
weq
The only way I can think of would be to return the API keys with, say, a login
request and store them in memory until the app closes.

~~~
kranner
You don't have to return the API keys to the user's phone. You could set up a
proxy API that will call the third-party service on behalf of an authenticated
user.
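A minimal sketch of such a proxy, using only the JDK's built-in `HttpServer` and `HttpClient`. The `/report` route, the `X-Session` header, the `thirdparty.example` URL, and the stubbed session check are all illustrative assumptions, not a production design:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiProxy {
    // The third-party key lives only here, on the server; it is never
    // shipped inside the app binary.
    private static final String API_KEY = System.getenv("THIRD_PARTY_KEY");

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        HttpClient client = HttpClient.newHttpClient();

        server.createContext("/report", exchange -> {
            // Authenticate the *user*, not the app: check a session token
            // issued at login (real lookup stubbed out below).
            String session = exchange.getRequestHeaders().getFirst("X-Session");
            if (session == null || !isValidSession(session)) {
                exchange.sendResponseHeaders(401, -1);
                return;
            }
            // Forward the request to the third party with the server-held key.
            HttpRequest upstream = HttpRequest.newBuilder()
                    .uri(URI.create("https://thirdparty.example/api/report"))
                    .header("Authorization", "Bearer " + API_KEY)
                    .POST(HttpRequest.BodyPublishers.ofInputStream(exchange::getRequestBody))
                    .build();
            try {
                HttpResponse<byte[]> resp =
                        client.send(upstream, HttpResponse.BodyHandlers.ofByteArray());
                exchange.sendResponseHeaders(resp.statusCode(), resp.body().length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(resp.body());
                }
            } catch (InterruptedException e) {
                exchange.sendResponseHeaders(502, -1);
            }
        });
        server.start();
    }

    // Placeholder: a real server would look the token up in a session store.
    static boolean isValidSession(String token) {
        return !token.isEmpty();
    }
}
```

The app only ever holds a per-user session token, which can be revoked individually without rotating the third-party key.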

~~~
izacus
Isn't that what OAuth keys are explicitly there to PREVENT? Letting
application developers have unlimited access to your accounts via their
backend servers.

~~~
kranner
I didn't mean this for scenarios where the user has an account with private
content at the third party (e.g. Twitter), but rather for a shared backend
resource, e.g. sending an error report to the developers for user-facing errors.

------
shade23
This is a problem I've regularly faced with Android apps. Possible
solutions, since many people seem to be asking:

\- Fetch credentials and keys on login, then save them in an encrypted
manner in:

\- an encrypted SQLite DB

\- Android services like KeyStore[1]

\- AccountManager, which lets you save metadata[2]

But the approach I mostly end up using, which provides protection from a lot of
reverse-engineering tools (JADX, dex2jar, smali), is:

Use enums instead of String constants. Using

```
public enum Key {
    Key1("keyValue");

    private final String keyVal;

    Key(String keyVal) {
        this.keyVal = keyVal;
    }
}
```

obfuscates `keyVal` from many class decompilers.

I'd love better ways to do this. But when you do Android development (like
when you do front-end web dev), it's easier to just assume that everything you
wish to keep private will be visible, and avoid keeping/saving critical
information in the client app.

[1]:[https://developer.android.com/training/articles/keystore.htm...](https://developer.android.com/training/articles/keystore.html)

[2]:[https://developer.android.com/reference/android/accounts/Acc...](https://developer.android.com/reference/android/accounts/AccountManager.html#getUserData\(android.accounts.Account,%20java.lang.String\))

~~~
skybrian
How does using an enum help? You still have a string literal in your code.
Shouldn't that be easy to find?

------
niftich
Hardcoding your AWS credentials is nasty, because any bored script kiddie can
extract them and make spurious requests, and at the end of the month you receive
a larger-than-usual bill.

But Twitter API keys? These just allow access to the Twitter API. If these are
leaked, the biggest risk is a third party exhausting your rate limits (a denial
of service), which Twitter may catch, revoking the credential. The developer
will then have to create a new credential and update the app for it to keep
working.

In this day and age, when mainstream apps update often anyway, I doubt this is
seen as a serious risk for the long tail of apps. If this approach obviates
maintaining a Backend-as-a-Service proxy that the developer pays for out of
their own pocket, it can be more cost-effective.

~~~
ec109685
Somebody can abuse the backend as a service as well, so proxying access to
something like Twitter doesn't help that much.

~~~
BoorishBears
Is it that hard to set up a BaaS lambda function that just increments a DB
field and checks it before making the Twitter call, to rate-limit users? Even
outside abuse, I'd want to rate-limit my API actions to the realm of what
humans can do, to keep out bots.
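The increment-and-check idea can be sketched as a fixed-window counter. The in-memory map below stands in for the DB field mentioned above, and the limit and window values are arbitrary:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RateLimiter {
    // userId -> {windowStartMillis, count}; a real deployment would keep
    // this in the BaaS datastore instead of process memory.
    private final Map<String, long[]> counters = new ConcurrentHashMap<>();
    private final int limit;
    private final long windowMillis;

    public RateLimiter(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    // Increment-and-check: returns true if the upstream call may proceed.
    public synchronized boolean allow(String userId, long nowMillis) {
        long[] entry = counters.computeIfAbsent(userId, k -> new long[]{nowMillis, 0});
        if (nowMillis - entry[0] >= windowMillis) {  // window expired: reset
            entry[0] = nowMillis;
            entry[1] = 0;
        }
        if (entry[1] >= limit) {
            return false;  // over quota: skip the Twitter call
        }
        entry[1]++;
        return true;
    }
}
```

With, say, a limit of 2 calls per second per user, anything a human could plausibly do fits, while a scripted abuser hits the ceiling immediately.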

------
pfooti
What this looks like to me is the same adage: never trust the client software.

When doing web stuff, I can't trust the client with my api secrets, that's why
I use oauth. My secrets are (theoretically) safe on the server side, and the
client generates a short-lived token that lets me access services on their
behalf if need be. Google's Drive file picker is a pretty good example of how
that works. It's awfully complicated (just logging in requires me to open an
extra tab and listen for events on it for when the login flow completes), but
it does work as a way to keep me from having to push out secrets to the
client.

Doing this right basically requires two things. First, you must make your own
API server that can run code on behalf of the client. You can trust the client
to ask for stuff off of the API if it's properly authenticated, but you can't
trust the client to run its own authorization code - anything that requires
authorization rather than authentication has to run on the backend somewhere.

Second, your users have to trust you a little bit, so minimize that trust
surface. With stuff like Google APIs, only ask for what you really need. I
know I'm one of the few people who really reads approval screens, but I have
rejected, and will continue to avoid, services that ask for too many
permissions. Find the scopes that just fit what you need (such as drive.file
instead of drive), and don't ask for more. Don't ask for offline access unless
you _really_ mean it. I'd rather have to go through the flow again when my
30-minute token expires than let you keep a refresh token on your server (but
again, the danger of the refresh token is mitigated by asking for the
minimum).

------
willyyr
What's missing for me is advice on how to do it instead. How do you do it "the
right way" / better?

~~~
junkm
Take Twitter keys for example (the most shared secrets, according to the
article). Instead of hard-coding the secret in the app, implement the
OAuth authentication as it should be done: store the secret key on your
server and issue the request-token call from your server, then redirect the
user to Twitter's authorization page, instead of issuing the request to Twitter
from the app itself. You might say: why should I do that if an attacker could
just issue requests to my server and still act under my app credentials? Here
are some reasons:

1\. You now have control over accepting the requests or not (ban IPs, etc.). If
your app is pretty popular, spammers may try to use your app credentials to
interact with Twitter, as you will have a better reputation than newly
generated credentials.

2\. If your key is disabled by Twitter for abuse, you can replace it with a
new one without having to update the app itself.

This applies to the other services too.
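The server-side signing step described above can be sketched as follows. OAuth 1.0a signs each request with HMAC-SHA1 keyed by the consumer secret (plus token secret), which is exactly why the secret can stay on the server. This is a simplified illustration (OAuth's percent-encoding rules differ slightly from `URLEncoder`'s, so a real service should use an OAuth library), and the class name is made up:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class OAuthSigner {
    // The consumer secret stays on the server; the app never sees it.
    private final String consumerSecret;

    public OAuthSigner(String consumerSecret) {
        this.consumerSecret = consumerSecret;
    }

    // OAuth 1.0a HMAC-SHA1 signature. The signing key is
    // "<consumerSecret>&<tokenSecret>"; the token secret is empty for
    // the initial request-token call.
    public String sign(String signatureBaseString, String tokenSecret) {
        try {
            String key = URLEncoder.encode(consumerSecret, StandardCharsets.UTF_8)
                    + "&" + URLEncoder.encode(tokenSecret, StandardCharsets.UTF_8);
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
            return Base64.getEncoder().encodeToString(
                    mac.doFinal(signatureBaseString.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because only the server can produce a valid signature, a leaked APK contains nothing Twitter will accept on its own.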

------
michaelvoz
Disclaimer: I work for Uber. These questions/opinions are entirely my own as a
curious engineer.

> These secrets belonged to a lot of different 3rd party services, for example
> Uber’s secret which can be used to send in-app notification via the uber
> app.

In every Apple application - aren't these keys a one off, created by the
client?

"The device token included in each request represents the identity of the
device receiving the notification. APNs uses device tokens to identify each
unique app and device combination. It also uses them to authenticate the
routing of remote notifications sent to a device. Each time your app runs on a
device, it fetches this token from APNs and forwards it to your provider. Your
provider stores the token and uses it when sending notifications to that
particular app and device. The token itself is opaque and persistent, changing
only when a device’s data and settings are erased. Only APNs can decode and
read a device token."

Source:
[https://developer.apple.com/library/content/documentation/Ne...](https://developer.apple.com/library/content/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/APNSOverview.html#//apple_ref/doc/uid/TP40008194-CH8-SW1)

If it is only your own token/secret you are seeing, that does not seem so bad,
right?

In addition - let's say these apps DO leak secrets, what is the alternative
solution here?

~~~
mkagenius
> In every Apple application - aren't these keys a one off, created by the
> client?

You just need the server token and a phone number to send the notification.
(The GCM token already in Uber's possession can be used to send
notifications to anyone at will.)

Here is the uber documentation regarding reminders:
[https://developer.uber.com/docs/riders/references/api/v1.2/r...](https://developer.uber.com/docs/riders/references/api/v1.2/reminders-post)

------
egman_ekki
A related question from an ignorant reader: how do you do this correctly in a
JavaScript library? E.g. I was working on a tiny JS library that uses the
Flickr API to fetch images from a specific album, and having a separate server
just to serve the Flickr API key or to talk to the Flickr API seems like
overkill.

~~~
rahkiin
There is no other way. You can't do any IP blocking as it is a client side web
app, and as it is a client side web app, all data and strings will be
available to the users. I imagine however that a small wrapper won't be too
hard to write. Make sure you add a good limit on it though (based on user, ip,
(or a combo), whatever fits so yours is not used to freely access Flickr)

------
yeukhon
What about hosting images on S3? How would one authorize access based on the
user? Use Cognito to map AWS IAM roles to S3 resources on a per-user basis?

~~~
matharmin
You can generate pre-signed URLs for S3 on your server, allowing temporary
upload or download access to specific files. Generating an IAM role per app
user is not practical.
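Conceptually, a temporary grant like this is just the object key plus an expiry, signed with a secret only the server holds. The sketch below is a generic HMAC illustration of that idea, not AWS's actual Signature Version 4 scheme; for real S3 access you'd use the AWS SDK's pre-signed URL support. The URL layout and names here are hypothetical:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SignedUrls {
    private final byte[] secret;  // stays on the server

    public SignedUrls(String secret) {
        this.secret = secret.getBytes(StandardCharsets.UTF_8);
    }

    // Issue a URL granting access to one object until `expiresAt`
    // (epoch seconds).
    public String presign(String objectKey, long expiresAt) {
        String payload = objectKey + "\n" + expiresAt;
        return "/files/" + objectKey + "?expires=" + expiresAt + "&sig=" + hmac(payload);
    }

    // Server-side check before serving the object: the link must be
    // unexpired and untampered.
    public boolean verify(String objectKey, long expiresAt, String sig, long nowSeconds) {
        if (nowSeconds > expiresAt) return false;              // link expired
        return hmac(objectKey + "\n" + expiresAt).equals(sig); // signature matches
    }

    private String hmac(String payload) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            return Base64.getUrlEncoder().withoutPadding()
                    .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

A client can be handed such a URL freely: it only unlocks one object, for a limited time, and forging one for a different object requires the server's secret.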

------
single2016
If you must hardcode some sensitive information, you can use the tool: Bg+
Anti Decompiler (JAVA)
[[https://play.google.com/store/apps/details?id=com.bgplus.Ant...](https://play.google.com/store/apps/details?id=com.bgplus.Anti.JavaDecompiler)]
It's the best solution for you

------
compsciphd
So while it's good that this is getting press again, it's not a new thing;
see this SIGMETRICS paper from 2014 (i.e. about 2.5 years ago)

(which was ignored on Hacker News back then):

[https://news.ycombinator.com/item?id=7918649](https://news.ycombinator.com/item?id=7918649)

Mostly, it's good to give credit where credit is due.

------
rsp1984
Did this only investigate secrets stored in Java code or also those stored in
native (C) code?

------
ergot
I like to intercept iOS apps' traffic with Burp Suite[1] or Fiddler[2]. The
trick is to have two adapters running on the same OS: one for the public
Internet, and the other acting as an ad-hoc hotspot. It's simply a case of
letting Burp Suite sniff the traffic on the ad-hoc network and seeing what
'goodies' you find, like API keys.

[1]: [https://portswigger.net/BURP/](https://portswigger.net/BURP/)

[2]: [http://www.telerik.com/fiddler](http://www.telerik.com/fiddler)

------
jmiserez
Ignoring the results here, isn't the act of disassembling random Android APKs
illegal? Theoretically, could one of the affected companies sue them?

Also, if services only give you a single secret key per developer, what is the
alternative? Proxying all API requests through your servers might work, but
only at low volume, and only if the service doesn't have per-IP rate limits.

Most other schemes would require cooperation from the API providers, right?

~~~
izacus
Why would it be illegal to look at code on your own device and see what kind
of personal data your device sends to companies?

~~~
jmiserez
I was thinking along the lines of something like this:
[https://en.wikipedia.org/wiki/Anti-circumvention](https://en.wikipedia.org/wiki/Anti-circumvention)

If the app uses DRM (e.g. Netflix), I would imagine that some of these laws
could apply depending on the country, right?

