> Starbucks could have chosen not to store the password on the phone, but users would then be forced to key in their username and password every time they wanted to use the app to make a purchase.
These aren't the only two options. Storing a token would let users remain logged in without having the same security implications as storing the password.
Some advantages of a token vs a password (a rough sketch follows the list):
1. Lots of users use the same password on multiple sites
2. You could allow common usages via a token but still request a password re-entry for more potentially dangerous actions like changing the email address on the account
3. Tokens can be invalidated, so if the phone is lost the user can disable the app on it without needing to change their password
4. Tokens can be selectively invalidated, so if you have multiple devices you could log some of them out without logging them all out
5. Tokens can be set to expire so you can request password re-entry every so often just to ensure a bad actor would get locked out eventually
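To make that concrete, here's a minimal sketch (in Swift, with entirely hypothetical names; nothing here is Starbucks' actual schema) of what a per-device token record and the checks behind points 2-5 might look like on the server side:

    import Foundation

    // Hypothetical per-device token record; names and fields are illustrative only.
    struct DeviceToken {
        let value: String          // random opaque string, unrelated to the password
        let deviceName: String     // e.g. "Alice's iPhone"
        let issuedAt: Date
        let expiresAt: Date        // point 5: force re-authentication eventually
        var revoked = false        // points 3 & 4: invalidate one device at a time
    }

    struct Account {
        var passwordHash: String   // the password itself never leaves the server
        var email: String
        var tokens: [DeviceToken] = []

        // Point 2: a valid token is enough for low-risk actions like purchases...
        func authorizePurchase(token: String, now: Date = Date()) -> Bool {
            tokens.contains { $0.value == token && !$0.revoked && $0.expiresAt > now }
        }

        // ...but sensitive changes still require the password itself.
        mutating func changeEmail(to newEmail: String, passwordHash supplied: String) -> Bool {
            guard supplied == passwordHash else { return false }
            email = newEmail
            return true
        }

        // Points 3 & 4: lose one phone, revoke just that device's token.
        mutating func revoke(tokenValue: String) {
            for i in tokens.indices where tokens[i].value == tokenValue {
                tokens[i].revoked = true
            }
        }
    }

The point is that the token is a random value with no relation to the password, so revoking or expiring it never forces a password change.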
It's crazy to think that a developer would publish an application that stores the username/password in clear text. iOS has a keychain API that would take maybe 3 hours of work to implement and test. I'm sure Android has something similar.
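For anyone curious, a minimal sketch of what that keychain work looks like (in Swift for brevity; the service/account strings are placeholders and error handling is pared down):

    import Foundation
    import Security

    // Store a secret in the iOS keychain instead of a plaintext file.
    // "com.example.coffeeapp" is a placeholder service identifier.
    func storeSecret(_ secret: String, account: String) -> Bool {
        let base: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: "com.example.coffeeapp",
            kSecAttrAccount as String: account
        ]
        SecItemDelete(base as CFDictionary)   // replace any existing item

        var attributes = base
        attributes[kSecValueData as String] = Data(secret.utf8)
        // Readable only after first unlock, and never migrated to another device.
        attributes[kSecAttrAccessible as String] = kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly
        return SecItemAdd(attributes as CFDictionary, nil) == errSecSuccess
    }

    func readSecret(account: String) -> String? {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: "com.example.coffeeapp",
            kSecAttrAccount as String: account,
            kSecReturnData as String: true,
            kSecMatchLimit as String: kSecMatchLimitOne
        ]
        var item: AnyObject?
        guard SecItemCopyMatching(query as CFDictionary, &item) == errSecSuccess,
              let data = item as? Data else { return nil }
        return String(data: data, encoding: .utf8)
    }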
There are tons of "developers" out there who don't know much about security, or don't really care about it. This is especially true in the mobile space, where anyone who has published a Hello World app to the app store can get a job working as a contractor.
This makes me think, though, that there could be some good money to be made by opening every single app like this to check its security, then offering services to said company to secure their application. Of course, the way the world works today, instead of taking me up on the help they would probably just contact the police.
" Of course the way the world works today; instead of taking me up on the help they would probably just contact the police."
OK, so try it this way.
Contact the company [1] [2] and tell them you believe there might be security holes in their app (make reference to cases like Starbucks, pointing out that even if they're fine on that front there could be other issues) and offer to share the results of your testing (a written report) in exchange for your fee.
You haven't said you found anything in particular (and you haven't), and you haven't specifically said you've intruded, so you can get paid for a security review which, if written and done correctly, will give the executive hiring you cover.
[1] I'd suggest a postal letter rather than email. You could start by email, but I think it will be ignored, and it's worth the stamp to get more attention.
[2] I've done similar things (not with security) and it's worked pretty well.
And it's done quite frequently by home security companies whenever there is a burglary in a particular area: "Your neighbor just had a burglary, and you might be vulnerable as well!"
And the wording can be altered to suit one's taste or level of comfort.
Of course you can blaze a large "this is a solicitation" across it, but I would suggest that if you aren't willing to push the envelope with marketing you aren't going to make out as well. This is based on my many years of experience doing similar things. Business involves taking and assessing risks and rewards. (Everyone's level of comfort or ability to do this differs, of course.)
And it's not the same (nor was I suggesting) as saying to someone "hey, I found a hole in your app, and if you don't pay me I will publish the results of the security hole". Details matter.
By the way, saying to a homeowner "I saw you have a few windows at your house that appear to be broken (which would allow entry!) and I'll tell you which ones if you pay me $50" is not extortion. Any more than saying "You have an outdated HVAC, and for $100 I will give you a proposal on the best system to replace it with".
Yeah, you're right, it's not extortion. That's why I said it's almost extortion. It's certainly pretty tasteless. It's a tactic used by extortionists, and I wouldn't consider doing business with someone who applied that tactic.
I would not agree that this is extortion. I would consider it more a form of self-protection. If you have no intention of misusing the security flaw or releasing it publicly, you are offering a no-harm approach while providing a valuable service. The unfortunate situation is that the business in question does not value the service, even though it should.
Most competent programmers do not have time to just go around and fix every security flaw pro bono either.
I understand it's not extortion. You'll note I never said it was extortion.
I also understand how it protects the reporter.
I'm not asking the reporter to take responsibility or do anything that would harm them, I'm asking them not to essentially make the sales pitch of "look at how your neighbor had something bad happen to them, you wouldn't want something similar to happen to you now would you?"
While it's debatable (it depends on execution) whether this is a tasteless way to make money, you sometimes have to get over that.
"I wouldn't consider doing business with someone who applied that tactic."
The person doing the sending isn't looking to close 100% of the people he mails to. Nor does he care what the recipient thinks. If you worry about that you will potentially miss a business opportunity.
Look, think of it like using a cheesy line in a bar. Something I've never done, but I recognize that it works for some people and gets them dates. In the end, approaching 100 women with a line will work better than staying home and doing nothing (assuming you can take the rejection, of course).
Well said. I think the deceiving part is the compiled/sandboxed environment phones have. New developers don't really know what's happening when they hit 'Build & Run', so they can't understand that the key values they are storing are in plain text somewhere on the phone, easily extractable.
This is the direction I steer all my customers these days. Then in the user's account screen I let the user see a list of their device authorizations and they can individually delete those... I really think tokens are the way to go in nearly every case.
> 1. Lots of users use the same password on multiple sites
I'd never say this at a job interview but I'll be Devil's advocate: As a business, this isn't my problem, it's yours. If you want the convenience of the same password for multiple sites, in the real world there are going to be weak points on some of those sites and someone who can abuse any point of the chain on any site can obtain your password for all.
For the rest, expecting to change my passwords if my phone is stolen is not an unreasonable thing at all. I should do this anyway, even if businesses assure me that I don't have to.
And 99% of users who aren't IT or security professionals would just prefer to be done with entering their password after the first time, period.
> I'd never say this at a job interview but I'll be Devil's advocate: As a business, this isn't my problem, it's yours.
This argument is the same as saying "It's not my fault you're being spied on because you're not using OTR in your IM; it's yours." You're technically right that the user could theoretically avoid this problem, but you're wrong in practice since you're setting impossible expectations that even security-conscious people often don't meet.
It's a service provider's responsibility to not let user credentials be easily accessible because those credentials are used in many places. The latter is a more fundamental issue, yes, but you deserve the flak you get if you just say "not my problem."
Morally yes you are correct (imo). But which product are users going to go for - the one that goes the extra mile to keep them safe and help them out, or the one that doesn't give a damn about them? We have to deal with reality, not ideals, and the reality is that most people do not practice good password management.
> 99% of users who aren't IT or security professionals would just prefer to be done with entering their password after the first time, period.
Which is just as possible with tokens as it is with passwords. There is no reason not to use tokens.
The file in question is a log generated by the application. They are NSLog'ing stuff to the console, and Crashlytics must be picking it up and putting it in this file. Along with debug tracing messages like those below, they're logging server interactions and JSON responses that contain personal information (I see my home address, telephone number, etc. coming back from the server and being logged).
2539 $ -[CardDetailViewController refreshCardDetails:] line 798 $
2548 $ -[CardDetailImageViewController viewDidLoad] line 28 $ view did load
2551 $ -[CardDetailImageViewController viewWillAppear:] line 48 $ view will appear
2551 $ -[CardDetailImageViewController viewDidAppear:] line 53 $ view did appear
2588 $ -[CardDetailViewController refreshCardDetails:] line 798 $
2607 $ -[CardDetailViewController doPageChange:] line 684 $ :
2652 $ -[CardDetailViewController viewDidAppear:] line 301 $ I APPEARED!!
2652 $ -[StarbucksAppDelegate trackView:] line 1084 $ 2014-01-14 18:50:37 +0000 /Card/MyCard
2791 $ -[CardDetailViewController viewDidAppear:] line 301 $ I APPEARED!!
So the situation here isn't one of tokens vs passwords vs encryption or otherwise how they're being stored for interacting with the server. The user is going to have to enter a password at some point in the workflow, regardless of whether it's encrypted at rest or exchanged for an OAuth token or whatever. You shouldn't be logging that password back to the console when the user enters it!
edit: This also means that all that personal information of mine is presumably on some Crashlytics host as a side effect of all this having been logged and sucked up by Crashlytics.
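A cheap mitigation, assuming you still want some request/response tracing in debug builds, is to scrub known-sensitive fields before anything reaches NSLog (and therefore anything a crash reporter might sweep up). A hypothetical sketch in Swift (the key names are just examples):

    import Foundation

    // Keys we never want to see in a log file; purely an example list.
    let sensitiveKeys: Set<String> = ["username", "password", "address", "phone", "cardNumber"]

    // Return a copy of a JSON dictionary with sensitive values replaced.
    func redacted(_ json: [String: Any]) -> [String: Any] {
        var copy: [String: Any] = [:]
        for (key, value) in json {
            if sensitiveKeys.contains(key) {
                copy[key] = "<redacted>"
            } else if let nested = value as? [String: Any] {
                copy[key] = redacted(nested)      // recurse into nested objects
            } else {
                copy[key] = value
            }
        }
        return copy
    }

    func logResponse(_ json: [String: Any]) {
        #if DEBUG
        NSLog("server response: %@", redacted(json) as NSDictionary)
        #endif
        // In release builds, log nothing at all.
    }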
"If you grab someone's phone, you can effectively go through this log and see effectively where this person has been," Wood said. "It's a bad thing for user privacy"
Compared to what? The implicit assumption is that Starbucks gathering and storing geolocation data is not a potential invasion of privacy or a meaningful risk. The person who steals my iPhone is very unlikely to do so for the data it contains. Their goal is to flip it for cash and a datum ain't worth much to anyone other than the PI my wife hired to find out if I'm sleeping around or an attractive lab technician in CSI: Miami.
No, the real risk to privacy is when Starbucks' servers are compromised. Today's Willie Suttons are still bank robbers, not pickpockets. Spreading the data out spreads the risk and reduces or eliminates the probability of a catastrophic breach.
Of course it will be popular sport to pillory Starbucks for not following the conventional wisdom because it allows us to ignore the fact that passwords are broken. There's no technical fix for poor password hygiene among iPhone owners, and an encrypted password will barely slow down a determined attacker with a couple of GPUs and physical possession of the phone.
"... a datum ain't worth much to anyone other than the PI my wife hired to find out if I'm sleeping around or an attractive lab technician in CSI: Miami."
This is completely off-topic, but am I the only one who wondered why his wife would hire a PI to find out if he is an attractive lab technician?
Not as a critique, I just find grammatically-correct ambiguous cases like this one interesting.
So are passwords broken or not? (Hint: they are.) You can't use the inherent drawbacks of passwords as an excuse for introducing even more vulns.
ISTM the location data is stored client-side not because of some wise decision to spread data out (and then what, include it as query data in future reqs to the server?) but rather because their error handling module wasn't configured.
The greater vulnerability may be exposing all passwords to a single attack. The non-centralized redundant nature of the internet is a loose analogy. My server may go down, and it will suck for me, but it doesn't necessarily increase the risk to your server.
Pretty off topic, but I submitted this story last night [1] with the exact same URL and title. I thought whenever duplicate stories were submitted, it just upvoted the original submission without posting a duplicate story (that's what's happened to me in the past).
I originally saw this, and thought the URL was just different (even if just slightly), so I wasn't even going to say anything.
But since the URLs are identical, was just curious how the HN logic works when submitting duplicate URLs like that. Is it that too much time had passed, considering them "different" submissions?
It's not the same URL. Yours was the mobile version (it has an m.) and his was the normal URL.
I think this one also got more traction just because of the timing of the post: being submitted in the morning, it got enough votes to make it to the front page.
Ah, that's it, thanks. I made the mistake of clicking the link (I'm on a desktop now), and copying the redirected URL to compare, which was the same as this one.
That's what I get for posting submissions from my phone.
I LOL'ed so big just reading the headline. It is absolutely terrible, but I have seen this before with big companies that should know better. About 6 years ago I was working on a project for The Wall Street Journal (yeah; that WSJ) in which all customer data was being stored in the DB in plain text then exported nightly to an Excel report and emailed unencrypted to client managers so they could review daily sales.
On numerous occasions I told them that was extremely risky and that we were violating PCI compliance and opening the company to huge potential fines, in addition to just putting customers' information at risk.
Every time I brought this up I was told there wasn't time to fix the application and that the client managers thought it was too difficult to deal with encrypted files so just leave everything the way it was.
Eventually they got busted on a PCI compliance audit and started using PGP to encrypt the files sent via email, but by the time I left they still were not encrypting the backend data or actually maintaining PCI compliance. Extremely sad, but this happens all the time.
Every single app on your phone that remembers a username and password combination, or any other credential, is likely vulnerable.
IMHO it's about time that Google required the presence of an HSM in Android devices for key storage. An HSM that locked me out after ~10 x 6-digit PIN guesses (with software locking me out at a lower number) strikes me as a good thing. If someone wants to destroy my $500 phone, prise out a chip, grind it down, and look at it under an electron microscope to extract my passwords, then good luck to them. Why isn't this happening?
An HSM that locked me out after ~10 x 6-digit PIN guesses...
And now you've got a DoS enforced by hardware. Hopefully if I bring it back to the store I can get it reset? The existence of this reset sort of negates the point of an HSM.
However, I could see the point of an Android module that, rather than locking the user out, would simply delete keys whose password had been entered incorrectly a configurable number of times. For an app like this, the user would simply have to enter a password and CC again.
Yes, a reset/unlock is a bad idea. Keys should be erased, and it should be up to apps/services to determine an appropriate means to re-authenticate users.
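Pending real hardware support, the same policy can be approximated in software, with the obvious caveat that app-level code is far weaker than an HSM (anyone who can image the flash bypasses it). A rough sketch in Swift, with hypothetical hash/wipe hooks:

    import Foundation

    // Software stand-in for "wipe keys after N bad PINs". The hash and wipe
    // functions are injected because their real implementations (salted PIN
    // hashing, SecItemDelete over the app's keychain items) are app-specific.
    final class PinGate {
        private let maxAttempts = 10
        private var failedAttempts = 0
        private let storedPinHash: Data

        init(storedPinHash: Data) {
            self.storedPinHash = storedPinHash
        }

        func unlock(pin: String, hash: (String) -> Data, wipeKeys: () -> Void) -> Bool {
            guard failedAttempts < maxAttempts else { return false }
            if hash(pin) == storedPinHash {
                failedAttempts = 0
                return true
            }
            failedAttempts += 1
            if failedAttempts == maxAttempts {
                wipeKeys()   // erase the keys; the user re-enrolls with password + CC
            }
            return false
        }
    }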
Storing a token isn't really much more secure than storing the password itself. (If I steal your token, I'm buying coffee with your account even though I don't know your password. A token that authenticates account access is a password.)
But it seems much more secure (which, it turns out, matters) and it does somewhat protect people who reuse the same password everywhere.
You can invalidate tokens for a compromised phone. Well you can also change your password, but I'd rather revoke the access from my phone than have my password stolen. Some kind of validation mechanism before you order could be nice (like a screen lock), something you can do with one hand.
Many services grant fewer capabilities to a session authenticated only by token. In the case of Starbucks, you can buy a coffee or even reload the card from the user’s bank account via the app, but you need the password and the web site to change the user's mailing address. Therefore, this vulnerability might enable an exploit such as draining your bank account into gift cards that I send to my address, whereas access to your phone and token would not.
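In other words, the token carries a narrower set of capabilities than the password. A hypothetical sketch of how a server might express that split (the specific capabilities mirror the Starbucks example above):

    // Hypothetical capability model: tokens are scoped more narrowly than a password session.
    enum Capability {
        case purchase        // spend the stored-value balance
        case reloadCard      // pull money from the linked funding source
        case changeAddress   // account-level change
    }

    enum Credential {
        case deviceToken     // what the phone holds
        case password        // only ever entered interactively
    }

    func allowed(_ action: Capability, with credential: Credential) -> Bool {
        switch (credential, action) {
        case (.password, _):
            return true                      // a password session can do anything
        case (.deviceToken, .purchase),
             (.deviceToken, .reloadCard):
            return true                      // allowed via the app, per the comment above
        case (.deviceToken, .changeAddress):
            return false                     // requires the password on the web site
        }
    }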
I sympathize with the developers, because I face this maddening argument between convenience and security every day. But storing passwords in plaintext on the device? Geez.
Make a token on the server after initial login and store that! Not much more secure, but then this story wouldn't be news.
Seriously. Bigger problem is password reuse. If someone gets access to a user's password they then most likely have access to their email account, etc.
We use Crashlytics as well, but only an idiot would store the username and password in the clear in their log. Geez, just store a randomized database ID if you have to. It's funny how people justify stupidity when they get caught.
"Adding money" means applying a credit card to purchase Starbucks credit, usually by a sequence like "Reload card -> $25 -> Confirm" on a pre-stored credit card number. Buying from the app has a small cash pool to draw from; reloading from a CC can get one a whole lot more money.
I'm not a Starbucks regular so I may have missed a detail. Transferring money from a stored bank account to the app seems to be a server action, so the server should be doing the auth. What then is the point of storing the password on the client? If it's just to confirm possession of the phone, a token would be better for usability, as well as in all the other ways a token is superior to a password. TFA says the password is also used to activate the app, but a token signed with a timestamp and emailed to the user would be better for that.
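For the activation case, the "token signed with a timestamp" idea could be as simple as an HMAC over the account and an issue time, along these lines (hypothetical Swift sketch using CryptoKit; the field layout is made up):

    import Foundation
    import CryptoKit

    // Hypothetical emailed activation token: "account|unix-timestamp|base64(HMAC)".
    // The server keeps `serverSecret`; nothing sensitive is stored on the phone.
    let serverSecret = SymmetricKey(size: .bits256)

    func makeActivationToken(account: String, issuedAt: Date = Date()) -> String {
        let payload = "\(account)|\(Int(issuedAt.timeIntervalSince1970))"
        let mac = HMAC<SHA256>.authenticationCode(for: Data(payload.utf8), using: serverSecret)
        return payload + "|" + Data(mac).base64EncodedString()
    }

    func verifyActivationToken(_ token: String, maxAge: TimeInterval = 3600) -> Bool {
        let parts = token.split(separator: "|").map(String.init)
        guard parts.count == 3,
              let issued = TimeInterval(parts[1]),
              let sig = Data(base64Encoded: parts[2]) else { return false }
        let payload = parts[0] + "|" + parts[1]
        let expected = HMAC<SHA256>.authenticationCode(for: Data(payload.utf8), using: serverSecret)
        return Data(expected) == sig && Date().timeIntervalSince1970 - issued < maxAge
    }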
Why? If they steal my $20 balance I'm out $20. If I've got a card linked to my account and they load it up with $1k and go on a shopping spree, I'm significantly more screwed.
The assertion that users can make unlimited purchases after entering credentials just once is false: "Customers need only enter their password once when activating the payment portion of the app and then use the app to make unlimited purchases without having to key in the password or username again."
That is, unless you have automatic reloading on, which is a crucial point. This doesn't excuse the practice of storing passwords in clear text, but it's an important detail.
1. Starbucks' server has the private key, the iPhone app has the public key.
2. The app locks the plaintext up in AES with the public key, local to the phone, keeps the locked data, and sends a copy to the server. The server has the private key, and can unlock the data locked up with the public key anytime, even though the app (in possession of the public key only) cannot unlock the data by itself.
3. The app needs network access to operate properly, because honestly, why is Starbucks attempting to transact without a network connection? So if there's no network access and the protected data can't be accessed, oh well. Oh, and by the way, if the app really needs the plaintext, why not just ask the user? Oh right, thinking is hard. Don't ask a lazy user to do anything.
4. Each time the app needs to unlock the protected data and use it locally, it sends a GET request to the server via HTTPS. Maybe it sends XML, maybe it sends JSON. Who cares, as long as it's not keeping and using the plaintext.
5. Based on the nature of the request, the server decides whether it needs to send the plaintext back over HTTPS, or whether the app is just asking the server to do something server-side involving sensitive data. If the app *REALLY* needs the locked data sent back in plaintext, the server sends it back for one-time use via HTTPS (still protected from interception, even though it's being sent over the network), to be nulled out after the process or function returns complete.
6. The server is a fortress, and has the private key (...somewhere). It does not store the sensitive data in plain text. It too only stores the locked data, but is capable of unlocking the data on the fly, per request, each request, every time. The server should actively garbage collect the plaintext data, and not leave stale copies lying around.
7. The server *NEVER* gives an app a copy of the private key. NEVER, EVER. The iPhone app can rot in hell if it can't get the data unlocked. If it has to wait, it waits. Find something else to do. Mine bitcoins, unfold some proteins, whatever.
Yes. This demands a server infrastructure with high performance and high availability, according to the popularity of the app (many millions of concurrent users). It will be expensive and complicated to execute something like this. One would not JUST AES it.
But hey, lazy users can't be bothered to type passwords and such. Gee whiz! Isn't this Starbucks app easy to use? How did they do that?
I beg your pardon, but this part is nonsense, because AES is a symmetric cipher.
And even if you use public-key crypto, the scheme is no better than plain old OAuth 2 tokens (or the like). Actually, it's worse, because of the unnecessary complexity and because, as opposed to an encrypted password, an OAuth token has no relation to the password at all.
Leak the encrypted password and, until you change the password or revoke the phone's key (so the encrypted copy becomes unusable), you're not secure. Same with a token, except that you don't have to change the password, just revoke the token.
Whoops! Looks like you're right... AES doesn't use public key exchange or public/private key pair generation at all! I was entirely confusing it with other, completely different things.
I didn't really follow that. Doesn't that still leave a token (the encrypted password that is sent with each request) that I can steal from your phone and use to log in to your account and impersonate you?
Sounds like a complicated way of saying "don't store anything on the phone and remember a token instead of the password"
EDIT: As drdaeman pointed out, none of what I'm describing is AES. I am describing something completely different. I am describing public key cryptography, which is completely different from AES.
...but if you still want to know about asymmetric public-key cryptography, read on, but ignore any references to AES, because it isn't AES at all.
...
Well, this is just one layer in a hypothetical system comprised of many layers.
This one layer just handles storing secure data in a way that the app itself cannot understand.
The iPhone app needs to be authorized to make privileged requests to the server. This would be another multi-step process of performing a handshake with the server.
The server must not unlock anything for any client request unless the client-side user agent proves that it's really a member of the secure system. This is the part where "the server is a fortress" comes in.
So this exchange of sensitive data would need to occur within the scope of a handshake process, such that the server challenges the app to prove its authenticity. Something like this might also involve a third party verification from Apple & perhaps even the service provider that the app is not running on a jailbroken iPhone, a reportedly stolen iPhone, or some nefarious evil-doer's laptop.
THEORETICALLY (emphasis mine) once you encrypt data with AES, it is not merely a "token" that could be substituted for something else. It is an otherwise impenetrable object, and you can't just use the "token" itself. The token needs to be transformed into the usable data before it may be used.
For example, the stored data is:
0x01B926A340F0CCC67238DD00
That data is not a password. But. A valid app that is permitted to interact with the Starbucks server can send that string over to the server, and the server will send the transformed data back in a secure manner. The server will send to the app:
"H3ll0_this_1$_a_p@ssw0rd"
The app is responsible for the plaintext version of the data and MUST destroy the data after use, and never save it.
The app has a PUBLIC KEY ONLY. That thing cannot unlock the data. The PUBLIC KEY cannot transform that hexadecimal string data back to the password. The app can create the impenetrable object but not unmake it.
The server, on the other hand, HAS THE SPECIAL PRIVATE KEY. The private key is The Spice. He who controls The Spice controls the universe. The private key is the only thing with the power to transform that hexadecimal string back to the real token. If something has the power to ask the server to perform the transformation, then it gets the information. The server has the responsibility of never letting the private key fall into the wrong hands, and never transforming data without properly challenging the client's authority to dare ask for sensitive data.
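For what it's worth, the public-key version of that flow on iOS looks roughly like this (a sketch only; key generation, distribution and error handling are all glossed over, and as noted elsewhere in the thread this doesn't buy much over a revocable token):

    import Foundation
    import Security

    // The app holds only the public key: it can create the "impenetrable object"
    // but can never recover the plaintext from it.
    func lockWithPublicKey(_ plaintext: String, publicKey: SecKey) -> Data? {
        var error: Unmanaged<CFError>?
        guard let ciphertext = SecKeyCreateEncryptedData(
            publicKey,
            .rsaEncryptionOAEPSHA256,        // one common padding choice
            Data(plaintext.utf8) as CFData,
            &error
        ) else { return nil }
        return ciphertext as Data
    }

    // Only the server, holding the private key, can reverse it.
    func unlockWithPrivateKey(_ ciphertext: Data, privateKey: SecKey) -> String? {
        var error: Unmanaged<CFError>?
        guard let plain = SecKeyCreateDecryptedData(
            privateKey,
            .rsaEncryptionOAEPSHA256,
            ciphertext as CFData,
            &error
        ) else { return nil }
        return String(data: plain as Data, encoding: .utf8)
    }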
You appear to be punting on the process where the app proves to the server that it is authorized to make a request. That's.... kinda the whole point.
> Something like this might also involve a third party verification from Apple & perhaps even the service provider that the app is not running on a jailbroken iPhone...