Love this author's writing, esp this other post of his on Playing to Win - http://robertheaton.com/2014/11/03/why-you-should-read-playi...
A hacker at heart.
Just began reading the actual book Playing to Win (it's available free for online reading: http://www.sirlin.net/ptw ). It has already struck me as very intelligently written and insightful when you view its lessons as applying to life (at least the competitive aspects of it) rather than just some video game.
This, of course, would not completely stop the issue. But, it would make the author's job that much harder, since he'd have to emulate the Tinder protocol without the assistance of the Tinder app -- or would have to hack the Tinder app (and run it on a jailbroken device) to disable the cert pinning.
Cert pinning doesn't help when someone installs their own certificate authority. It stops other CAs that came bundled with the browser from working, but if it stopped self-installed certificates from working it never would have gotten off the ground because many organizations demand the ability to use their own certificates for signing things.
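The in-app check itself is simple enough to sketch: compare a hash of the certificate the server actually presented against a value shipped with the app. A minimal sketch in Python (the pin value and cert bytes are placeholders, nothing here is Tinder's actual implementation):

```python
import hashlib

def cert_matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    """Return True if the presented certificate (raw DER bytes)
    hashes to the pin the app shipped with."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex
```

A direct pin like this rejects any substituted certificate regardless of which CA signed it; the browser behavior described above comes from browsers deliberately skipping pin checks for user-installed trust anchors. And since the check lives in the app, a jailbroken device can patch it out, as the parent notes.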
As far as I know, it's not supported by any current browser (I welcome feedback to the contrary) but is included in SChannel.
Given that we've only recently (arguably) gotten away from SSLv3, I don't have high hopes that it will be viable to require channel binding in the very near term.
HTML5 can't access TPM directly, but in ChromeOS, you can create or import a client certificate as a 'hardware-backed' certificate, which is then wrapped by the device's TPM.
At this point (if properly configured) an attacker can't exfiltrate client certificates from the device even with root-level access to the machine. Plus, in theory, extracting key material from the TPM should be made difficult by the manufacturer's various physical protections.
Obviously this is a very niche edge-case, but it is possible :)
Client certificates are generated by the user's machine and signed using your server's private key. The user's client presents them to your server to prove that the client is who they said they were when you signed their certificate. These certificates don't cost anything, besides some CPU cycles on both sides of the process.
The kind of server SSL certificates that cost money are generated by you and signed by a CA that most users' browsers will trust. Your server presents them to the client to state to the client that the server belongs to the domain it says it belongs to.
Most CAs will charge you money for the service of signing those certificates, but that process has nothing to do with the lack of adoption of client SSL certificates.
The parent article does a good job describing why client certificates aren't used more often: the UX doesn't make sense to users and there's not a user-friendly way to protect them with a second factor (the way you can encrypt your SSH keys using a passphrase or authentication device).
This is incidentally part of why the Chromebook design makes it hard to persistently change the machine; a reboot starts from a clean signed image and then mounts a home directory. It's still possible to stick a persistent exploit somewhere in the home directory, but it's not as simple as just dropping a file in /etc/init.
This probably wouldn't work, given that I'm assuming it's the external IP, and a user agent is pretty easy to copy/clone.
Seems like there should be another value mixed in that might be hard to figure out for a third party behind the same NAT.
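One way to mix in such a value, sketched in Python (the secret and the attribute names are made up for illustration): HMAC the session id together with whatever request attributes you want the token bound to, and reject the token when recomputing over the presented attributes doesn't match.

```python
import hashlib
import hmac

# Assumption: a secret that never leaves the server.
SERVER_SECRET = b"hypothetical-server-side-secret"

def bind_token(session_id: str, user_agent: str, extra: str) -> str:
    """Derive a token tied to request attributes, so a copied token
    only validates when the same attributes are presented."""
    msg = "|".join([session_id, user_agent, extra]).encode()
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()

def token_valid(token: str, session_id: str, user_agent: str, extra: str) -> bool:
    # Constant-time comparison to avoid leaking the token via timing.
    return hmac.compare_digest(token, bind_token(session_id, user_agent, extra))
```

The catch is the same one raised above: anything the client sends (user agent, even a TLS fingerprint) can in principle be cloned by a third party behind the same NAT, so this raises the bar rather than eliminating the attack.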
They do seem to be going to an awful lot of effort to top themselves.
Considering that Tinder was launched around August 2012 and assuming Monica and Steve are modern and responsible people (ie. are careful not to rush into the big responsibility of parenthood), I'd say it's highly unlikely.
Install a RAT and do whatever you want later. You can have a lot more fun with a RAT than just grabbing FB cookies.
I previously carried out a similar attack: I temporarily borrowed the hard drive from a housemate's laptop while they had left it unlocked in their room, and gained access to many of their frequently used accounts (after they had made a point of saying I wouldn't be able to get in). I was able to maintain access for several months, changing subtle things, before someone else notified them and they cleared their sessions.
The cookies file is generally small enough to easily upload in the background if you're passing around casual programming apps with friends. I don't condone this, but it's a very hard attack to mitigate without services breaking UX. Shopping websites handle it by asking you to re-enter your password before changing account details or making a purchase; I'm not sure whether such a UX change would hurt social media.
Note: this was all probably around 6+ months ago; Chrome may have mitigated this exact attack since by encrypting the file with something Google-account-specific.
(Made as a desktop app so it has permissions to intercept the auth token, could have also been done as a chrome extension).
The guy had physical access to a running Chrome capable of sending those cookies, and the ability to install an extension. This basically means no software policy was going to stop him.
Very well written I should say.
But this article has all the technical details, and just-enough-but-not-too-much humor and background story to make this entertaining. Highly recommended, even if you're a "the details, all the details and nothing but the details" technical reader like me.