Let me address the things mentioned in the article:
No data is ever sent to or received from our servers in plain text. Due to a bug in our third-party networking library, certificates were not being verified, so a self-signed certificate could be used to intercept and decrypt traffic. This has been fixed in an update awaiting review at Apple. Users' passwords are hashed before we store them in our databases (PBKDF2 with a per-user salt and multiple iterations).
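The hashing scheme described (PBKDF2, salt, multiple iterations) can be sketched with Python's standard library. This is a minimal illustration of the general technique, not QuizUp's actual code; the iteration count is an assumed example value:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000):
    """Derive a PBKDF2-HMAC-SHA256 hash with a fresh random per-user salt."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int, expected: bytes) -> bool:
    """Re-derive and compare in constant time to avoid timing side channels."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)
```

Storing the salt and iteration count alongside the digest lets the server raise the work factor later without invalidating old records.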
Our users' address books are not stored on our servers; they are only used temporarily to help find your friends. It was a mistake not to hash the address book contents before sending them to our servers, and we are updating the client application to hash them before upload.
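Client-side hashing of contacts before upload can be sketched as follows (a minimal illustration, not QuizUp's actual client code; the normalization rules are assumptions):

```python
import hashlib

def hash_contacts(emails):
    """Normalize then hash each address so the server can cross-reference
    its own table of hashed user emails without ever seeing raw contacts."""
    return [
        hashlib.sha256(email.strip().lower().encode()).hexdigest()
        for email in emails
    ]
```

Worth noting: because email addresses have low entropy, plain unsalted hashes are still brute-forceable by a determined server operator; this scheme mainly protects against casual logging and interception, not a malicious server.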
Sensitive user data was exposed in certain endpoints (although only to authenticated users). We have already addressed this issue in a server deployment, and the hotfix is live now.
We are currently wading through inboxes looking for Kyle’s outreach. It looks like it may not have reached the core server developers. Please contact me personally at firstname.lastname@example.org if you have questions.
Finally I want to thank Kyle Richter for working out our security holes, small and large. We’re currently reviewing our endpoints and codebase to further harden security and ensure the privacy of our users.
From the home page "Play against friends in real time": this is false advertising at best. Also, is it written anywhere that people can play against bots?
"We're sorry" would have been a better start
I think most users with half of a brain can figure out that not all matches are real time. If you challenge a friend, it clearly tells you that you can play the match without them and they can play against "you" when they get around to it.
A genuinely acknowledging response was given. Would those two words really make that much of a difference?
In QuizUp you are playing a human in real time in almost every game. On the off chance we cannot find an opponent (which is becoming very rare due to our popularity), you may be pitted against a bot as a fallback strategy. Matchmaking is a hard technical problem, and we have chosen to maximize gameplay experience and consistency. I'm happy to share that the ratio of ghost games to real ones is getting very small! Hopefully we will be able to phase them out completely in the future.
There is no cost for a faceless company to be 'sorry', and it only encourages further unethical actions by other companies. I would rather see them pay a fine for the privacy breach.
Moreover, this all comes down to apps requiring ALL permissions to run. Why is that acceptable? Why is QuizUp allowed to see users' locations in the first place?
To me, it feels like this makes a stalker's life easier than ever. Make an app displaying cats, set it to require full permissions, put it on the App Store.
You should NOT be sending such sensitive information about other users, encrypted or not. Unless of course you want to continue this trend of violating your users' privacy.
The CA can also be provided in a .mobileconfig profile, installable through email.
It also validates as a legitimate certificate then, unless the app pins a particular certificate, which I think is rare.
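The pinning mentioned here can be sketched in a few lines: instead of trusting any CA the device accepts, the client compares the SHA-256 fingerprint of the presented certificate against a value shipped with the app. A minimal illustration in Python (the `connect_pinned` helper and pin value are hypothetical, not any app's real code):

```python
import hashlib
import socket
import ssl

def cert_matches_pin(der_cert: bytes, pinned_hex: str) -> bool:
    """Compare the SHA-256 fingerprint of a DER-encoded certificate
    against a pinned hex fingerprint shipped with the client."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_hex.lower()

def connect_pinned(host: str, port: int, pinned_hex: str):
    """Open a TLS connection, then refuse it unless the server
    certificate matches the pin, even if a CA vouches for it."""
    ctx = ssl.create_default_context()
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    if not cert_matches_pin(der, pinned_hex):
        sock.close()
        raise ssl.SSLError("certificate fingerprint does not match pin")
    return sock
```

With pinning in place, even a user-installed CA profile (or a compromised public CA) cannot silently MITM the connection, at the cost of coordinating pin updates with certificate rotation.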
So just for the record, are these all of the actual issues?
- no SSL verification means it's trivial to MITM
- exposure of other players' emails/bios/birthdays/locations/EXIF data in pics
- address book data is sent unhashed to the server
- signup emails expose the cleartext password (is this right?)
But the way I understand it, there's no reason or way to protect the client from the users themselves: custom CA installs, decompilation, etc. are all ways for users to get at their own data, or their own communication with the server.
So I'm a bit at a loss why the TC article is hammering on the "… and the local file which contained user information did not require any decryption to read."
The OP also mentions the FB tokens being exposed and such - I'm assuming these are only sent over SSL, and other people won't have access to it (with the caveat of the SSL fix), right?
- We haven’t stripped EXIF data from uploaded pictures, although this is on the roadmap. Sensitive fields from user profiles have been stripped from all endpoints. This was done before the news hit TechCrunch.
- We were never saving contact lists, just using them to cross-reference our user database. In the next update we will compare hashes, not plain-text emails.
- No passwords are ever stored in plain text, but they are transmitted over SSL during signup and login. We are considering ways to further obfuscate this, but strengthening SSL goes a long way. Please contact me at email@example.com if you have comments or questions about our password policy.
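The EXIF stripping mentioned as roadmap work above can be done without any imaging library by removing APP1 segments (where EXIF, including GPS coordinates, lives) from the JPEG byte stream. A minimal sketch, assuming well-formed input; real images can carry metadata in other segments too:

```python
def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG with all APP1 (EXIF) segments removed."""
    # Every JPEG starts with the SOI marker FF D8.
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            # Unexpected byte: copy the remainder verbatim and stop parsing.
            out += jpeg_bytes[i:]
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:
            # SOS: entropy-coded image data follows until EOF; keep it all.
            out += jpeg_bytes[i:]
            break
        # Segment length field (big-endian) includes its own two bytes.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i:i + 2 + length]
        if marker != 0xE1:  # drop APP1 (EXIF); keep every other segment
            out += segment
        i += 2 + length
    return bytes(out)
```

Running this server-side on upload would remove embedded GPS coordinates before the picture is ever served to other users.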
You are right about the Facebook access tokens. The tokens are sent over SSL and we are not breaking any usage guidelines from Facebook. Access tokens can of course be invalidated by the user, or by Facebook. We are open to further enhancing the security of our OAuth flow, but we are not aware of any weaknesses in it currently.
Finally, now the TC article makes sense. Someone told me the guy who found the exploit works for one of your competitors. ;)
Recording single-player games and then sending them to other users to serve as fake real-time multiplayer games seems like a very clever move and is probably the reason this game is doing so well. Not that I had heard of it before this post, though. It's a good hack that capitalizes on the way a quiz game works and has no real differences from true real-time multiplayer except for the likely lack of real-time messaging. The same could be done for any game in which people compete yet do not directly influence each other.
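The replay trick described above can be sketched in a few lines: record each answer with its elapsed time, then reveal the recorded answers to a live opponent at the original pace. A hypothetical `GhostReplay` class, not QuizUp's actual implementation:

```python
class GhostReplay:
    """Replays a recorded single-player game so a live opponent
    sees the ghost's answers appear at the originally recorded pace."""

    def __init__(self, recording):
        # recording: list of (elapsed_seconds, answer) tuples from a past game
        self.recording = sorted(recording)

    def answers_until(self, elapsed: float):
        """All ghost answers that should be visible `elapsed` seconds
        into the live match, in chronological order."""
        return [answer for t, answer in self.recording if t <= elapsed]
```

The client polls `answers_until` against the match clock, so from the live player's perspective the ghost "answers" in real time even though the game was played hours or days earlier.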
The benefits are very clear: reduced matchmaking times, eliminates latency issues, eliminates signal loss issues. All of these are major hurdles to multiplayer cellphone gaming, so I don't doubt that this game would be pretty successful because of it.
Sending users' data to other users without permission like that feels like it should definitely be a punishable offense, but the legal system doesn't work on logic, so who knows.
Tetris Friends does this for their multiplayer games. When you "play against people", what you're really doing is playing against their replays. It's quite clever, and it had me fooled for a while back when I was still in college.
SongPop does the same thing.
What is perhaps most shocking is that QuizUp is backed by several venture capital firms, including some very large and well-known ones. The question I have is: did they not do their due diligence when vetting this software, or did they not care? I am not sure which one is more alarming to me, and it doesn't really matter either way. Is it a sign of a bubble when a company can raise millions of dollars with so little care put into its technology or development?
This, sadly, should surprise no one.
So yeah, "fuck it, ship it" seems to be more or less the standard.
However, if you are VC funded with 10 engineers on the team, this is inexcusable.
The only kind of salt...
If you are using scrypt with a reasonable difficulty and a per-user salt, there is no reason to put entropy restrictions, weak-password restrictions, etc. on your end users. It is painful to interact with sites that enforce ridiculous password requirements.
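The setup being described, scrypt with a per-user salt, is available directly in Python's standard library (CPython 3.6+ built against OpenSSL 1.1+). A minimal sketch with commonly cited interactive-login parameters; the exact cost values are an assumption and should be tuned for your hardware:

```python
import hashlib
import os

def scrypt_hash(password: str, salt: bytes) -> bytes:
    """Derive a 32-byte scrypt key. n=2**14, r=8, p=1 are common
    interactive-login parameters; raise n as hardware improves."""
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

def new_user_record(password: str):
    """Create (salt, hash) for storage; the per-user salt defeats
    rainbow tables and precomputed dictionary attacks."""
    salt = os.urandom(16)
    return salt, scrypt_hash(password, salt)
```

Because scrypt is deliberately memory-hard, even short-ish passwords become expensive to crack offline, which is the premise of the argument above.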
You can get away with a 4-character password on Netflix. There is a reason for that. Security is much more subtle than password complexity.
No, I really am not. But as I didn't describe my reasons, you don't have the context to understand them.
Frankly, if Netflix has 4-character passwords, I would expect it to be relatively easy to compromise their accounts live with a carefully put together campaign. If Netflix gets their username/pw database dumped, I expect we'll see their policy change as the passwords are trivially cracked.
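"Trivially cracked" is easy to quantify. A back-of-the-envelope sketch, assuming the full 95-character printable-ASCII alphabet and a single GPU testing roughly 1e9 fast unsalted hashes (e.g. MD5/SHA-1) per second; both figures are illustrative assumptions:

```python
# Exhausting every possible 4-character password offline.
ALPHABET_SIZE = 95          # printable ASCII
LENGTH = 4
GUESSES_PER_SECOND = 1e9    # assumed single-GPU rate for a fast unsalted hash

keyspace = ALPHABET_SIZE ** LENGTH          # 81,450,625 candidates
seconds_to_exhaust = keyspace / GUESSES_PER_SECOND
print(keyspace, seconds_to_exhaust)         # exhausted in well under a second
```

A slow, salted KDF changes the picture by orders of magnitude, but with 4 characters the keyspace is small enough that even a heavily tuned KDF only buys hours, not years.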
Not only that, putting together a safe and sane password-retry system isn't the easiest thing ever, and doing careful fraud detection based on geolocation/IP etc. isn't the easiest thing ever either. Particularly when I don't have someone working full-time on security.
Further, what you also didn't know is that the password strength functions as written have knobs I can adjust if things are too onerous.
So having harder passwords goes a long way towards 'better security' on the account side for little effort.
I would advise you to be more cautious about making unsubstantiated statements based on ignorance in the future.
But I agree, there's a huge difference between just not being able to implement security and not considering it relevant. To me, this is clearly a sign of the latter.
We don't really know what it means to write good code. We can't measure it. We can barely talk about it meaningfully. It's rarely taught except via osmosis: you pair with someone more experienced and you read cargo cult blog posts.
btw, combination of both is probably better
If the only diploma you hold is from some backwoods high school, you've got a bit more to prove.
Doesn't detract from your excellent piece or put Path in a better light, but that's the context you're referring to there.
Shoot First, Address Questions Later.
I bet this kind of decision is a consequence of the MBA/Excel mindset. Developing software properly takes time and money, and that isn't... lean (lol) and doesn't drive billion-dollar valuations.