For anyone who has SSH access to a server (but no VPN) and needs some security in a pinch, here is a quick fix...
Open an ssh connection to a server you have access to using something like the following:
ssh -ND 8887 -p 22 firstname.lastname@example.org
where 8887 is the local port on your laptop that you'll tunnel through, -N means don't run a remote command (just forward traffic), -D sets up the dynamic SOCKS proxy on that port, -p 22 is the port the SSH server listens on (22 is the default, but I run mine on a different port, so I'm used to specifying it), and the rest is your username and the server's address.
Set your network to point to the proxy. On a Mac that would be…
... Open Network Preferences…
... Click Advanced…
... Click Proxies…
... Check the SOCKS Proxy box then in the SOCKS Proxy Server field enter localhost and the port you used (8887)
Also, Firefox doesn't send DNS lookups through a SOCKS proxy by default, which has security implications (your DNS queries still leak onto the local network) and keeps you from resolving internal-only names. In about:config, set network.proxy.socks_remote_dns to true.
SSH proxies are not end-to-end encryption; they only protect part of the path. Not sure why this is being downvoted. It's true. The tunnel is only between the client and the SSH server; the HTTP websites you visit beyond the SSH server still see your cleartext packets.
In case this isn't clear: if you ssh to someone else's machine and use it as a proxy or VPN, the owner of said machine can of course still steal your HTTP cookies. (not trying to say anything about the trustworthiness of the above 2 posters, just a general statement)
This is a stupid question, but what about a guy like me who has no access to a server?
I'm going traveling for all of next month. The only sites I'll be logged into are my Hotmail account, and maybe my bank (Chase) - both use https, so I suppose I'm in the clear then? (Also, when I click "log out" on these sites it logs me out, but if my session has been hijacked, will it log the hijacker out of the hijacked session as well?)
A cheap Linux VPS is a couple of bucks a month. Mine is three. If you aren't looking for a deal, there are many, many options at the five-dollar price point. If five bucks is worth a month of peace of mind, then that's your answer. This also has the benefit of getting around any filters operating on whatever WiFi network you're on.
Generally, watch out for older sites, or sites built by people who haven't learned much in this area, which may store some kind of account ID in the cookie in place of a key generated fresh on each login. In that case, even though the website invalidated/deleted your cookie when you logged out, the hijacked copy would still be good.
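If it helps to see why logout kills a hijacked cookie on a well-built site (and doesn't on a badly built one), here's a toy sketch, not any real site's code, of the key-per-login scheme: the cookie holds a random token, and logout deletes that token server-side, so every copy of the cookie dies with it.

```python
# Toy model of per-login session tokens (illustrative names throughout).
import secrets

sessions = {}  # token -> account id (server-side store)

def login(account_id):
    token = secrets.token_hex(16)  # fresh random token on every login
    sessions[token] = account_id
    return token                   # this value goes in the cookie

def logout(token):
    sessions.pop(token, None)      # invalidates ALL copies of the cookie

def lookup(token):
    return sessions.get(token)     # None once the session is gone

victim_cookie = login("alice")
stolen_copy = victim_cookie        # the hijacker sniffed the same value
logout(victim_cookie)              # victim clicks "log out"
assert lookup(stolen_copy) is None # the stolen copy is now useless too
```

A site that instead stores a stable account ID in the cookie has nothing to delete at logout, which is exactly the failure mode described above.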
I'd like to buy such a server at low purchase and maintenance cost.
The Pandaboard looks like a good fit, but the instructions for installing a Linux distro are a bit scary. I guess I could do it from my Mac, but I'm a bit afraid of messing things up with the low-level disk utilities.
Does someone sell SD cards with a distro pre-installed? Or an equivalent device with an easier setup?
Instead of a dedicated server, use a router that can run DD-WRT or Tomato. I use a cheap ($35) refurbished wireless router from Linksys, and I'm sure there are other models. These are more energy efficient and easier to maintain than a dedicated server. Additionally, of course, you can use them as routers for your home network.
There are lots of dirt-cheap Atom-based Mini-ITX systems out there. A basic motherboard with CPU will cost you about €60, a bit more for a dual-core. You can probably scavenge some DDR2 RAM from an upgraded laptop and install the OS on a USB stick. Mini-ITX cases/PSUs tend to be cheap too. If this is going to sit in your office or home, you might want to watch out for noise/heat: pay a bit more for a fanless motherboard and PSU, and get a case with a large, slow-rotating fan. All in all you can probably come in under €200, plus a multiple of that for your time spent on research, assembly, and installation. Or just rent a VPS.
Exactly my question. Is there a cheap and reliable (very important in this case) VPS service I can use for this? Using SSH through the internet as a proxy seems like the best approach. Unfortunately I can't set up my own SSH server to do this, as both power and internet connectivity are unreliable where I live.
He has plans starting at $5/mo, but you'll want to take notice of the monthly transfer limit. The $5/mo plan includes 10GB of transfer a month (which effectively becomes 5GB in/5GB out if you're using it as a proxy), so you won't want to tunnel video or downloads through it. If you go for the $8/mo plan, though, you get 40GB of data transfer.
I've never had an account with him so I'm not sure if there's a way to check how much of your data allowance you've used for the month. Someone else might be able to chime in about a program you could run on the server to notify you when you've reached a transfer threshold.
There are probably going to be a lot of people negatively affected by this for quite some time to come. One thing to point out is that there are grades of things. There is "public", and then there is "top hit on Google". Similarly, there is "insecure" and then there is "simple doubleclick tool to facilitate identity theft".
How many millions of dollars and man hours is it going to take to lock down every access point? How many new servers are going to be needed now that https is used for everything and requests can't be cached?
America was a better place when people could keep their doors unlocked, and when someone's first response to a break-in was to blame the criminal. By contrast, it's fashionable among a certain set (no doubt including the author of this mess, Mr. Butler himself) to hold that the real culprits are the door manufacturers. What said facile analysis excludes, of course, is that there is always a greater level of security possible. The level we currently employ reflects our tradeoffs between the available threats and the cost/convenience loss of bolting our doors and putting finials on our gates.
Butler has simply raised the threat level for everyone. He did not invent a new lock or close a hole. He's now forcing lots of people to live up to his level of security. Congratulations to the new Jason Fortuny.
Butler has not raised the threat level on anything. This has been a widely known issue since forever. A friend of mine wrote a sniffer that could do this back in college, and he was one of the last to the party. Want something else to kvetch about? His tool could impersonate the router and act as a proxy, including serving up ssl-encrypted pages to users who didn't realize they shouldn't accept certs from unknown signers - again, that was years ago, and even then it was nothing new or unique at all.
When a tool like this rises to even a minimum level of public consciousness, you're better off thinking "people have probably been doing this for close to a decade" than "this asshole just ruined the internet by pointing out an obvious flaw that someone will now be able to exploit".
And yes, at some point, a door manufacturer that knows how easily their doors will open and how frequently people will just walk through does take on some responsibility to add a lock (and the homeowner to use it). It's going to cost more in servers? Okay, so what? It costs more to install seatbelts, are you upset at Ralph Nader, too?
Anyone who wanted to hijack http sessions was five minutes of Googling and installing away from being able to do so before "Eric Butler's little gift" anyway. Are you claiming that the marginal impact of packaging it up into a Firefox extension is so great as to make it a threat of a wholly different kind?
That is exactly what I'm claiming. That's also why this article has 200+ comments and was on the top of Hacker News all day!
You vastly underestimate the barrier that "five minutes of Googling" presents. I assure you, the overwhelming majority of aspiring script kiddies would never be able to figure it out. It took an expert to package an exploit in a nice GUI (and write cookie parsing code for every major social site under the sun).
How about you recognize that there are a lot of innocent people who will be hurt by this stunt? There are hundreds of thousands of companies and millions of people who are targets for this, and most don't have a spare million lying around.
Hospitals, nonprofit groups, anyone running a website has to drop everything to lock it all down now. The effect is a lot like loosing a new virus (and might ultimately be treated that way).
> As long as only the highly motivated can exploit it, it's not really a problem, gotcha.
^ This modified statement is correct. All I'm saying that making something easy to use and publicizing it widely is going to result in a lot more people using it.
[Edits - hey jfager, I don't know you from adam and don't particularly enjoy flamewars. I agree that in the long run this should be fixed, ideally in such a way that 99.99% of people can blissfully go about their day. I just wish that the energy to secure stuff had taken the form of (say) a post on "here's how Google converted Gmail to https" rather than Firesheep. Hope we can find some common ground and you can see my POV.]
The intersection of 'evil enough to do something truly malicious', 'read a tech blog in the right 24-hour period', 'didn't already know the problem existed', and 'in enough cafes to pair with enough potential victims' is too low to cause "millions" more to be impacted by this, I promise.
Your implicit definition of 'highly motivated' (someone willing to put in 5 minutes of Googling) makes me sad.
I'm agitated because you're trying to hang someone for doing A Good Thing: putting real pressure on the bigs to finally actually fix a well-known, longstanding problem.
[Response to your edit: Facebook, Twitter, and other big sites know about the problem. How would explaining to them how Google secured Gmail change anything? They know how Google secured Gmail, and they know how to secure their own services. They just simply aren't, because it saves them money and their customers aren't demanding it. But the only reason their customers aren't demanding it is because the vast majority of their customers don't know the threat exists. This tool makes the threat clear as day to the most unsophisticated layperson, which makes it real, effective pressure, far more than yet another blog post asking nicely for SSL by default].
It might make you sad, but it's spot on. People were sharing MP3 files on usenet pretty easily, back in the day. It would have taken 5 minutes or less to work out how -- even easier than grabbing cookies.
It wasn't until Napster made that 0 minutes of googling that MP3 filesharing really took off.
For something like this to end up on millions of desktops, you have to be able to explain it to a half-stoned frat at a party. "Five minutes of googling and then some nerdery"? No chance. "Install this, go to the quad and you can sign into the facebook of any other person there?" Yup, that's going to spread like wildfire.
The responsibility is with every admin that setup an insecure access point, not with every security researcher to stay quiet about widely known and widely exploited vulnerabilities.
This isn't new. Point and click tools for doing this existed 10 years ago. Making a firefox plugin just pushed it back to the top of the headlines. This is actually a good thing because if word spreads more people will be aware of the already existing risk and will be more security conscious.
Does this mean everyone should stop logging into their personal accounts over unsecured wifi at school or Starbucks? ABSOLUTELY.
Hopefully this new attention on an old hole will motivate more admins to fix their networks and more users to realize how vulnerable they are.
Obvious, easy security exploits should be as publicly exposed as possible, and repeatedly so.
This kind of exploit is so many years old that it's a matter of basic public education and computer literacy. While this might be a "forcing function" on the web development community - it is not unfair. There is so much new tech every year, it's unfortunate that security isn't more in the consciousness of tech.
There may be more graceful ways to lead "sheep" to more secure use of the internet deserving of praise, but it's fair game to release an exploit, and I'd rather see FireSheep than censorship of it.
Re: "Hospitals, nonprofit groups, anyone running a website has to drop everything to lock it all down now."
For a public wifi user, how do those 150k downloads actually affect the probability that someone else on the network is using a session-hijacking tool? Given that it was already high enough that people should have already been taking preventative measures, any increase you can attribute to this would still fail to justify the witch-burning you're looking for.
There is zero difference between what someone using public wifi should be doing today and what they should have been doing last week. Now at least more people are aware of the problem.
People have been doing this for years already with tools like Wireshark. The only thing the app he's released does is draw a massive amount of attention to the already existing problem. I say superb. Brilliant effort. Well done. Hopefully more people will stop stupidly sending session cookies over unsecured channels now.
America was a better place when people could keep their doors unlocked
I hate this mythical "good old days" B.S. I know people who live in the country who don't lock their doors because they live in the country. The idea that people who lived in urban areas ever could leave their doors unlocked is absurd.
> I... haven't locked my front door during the day
This isn't what people mean when they say they don't lock their doors. I grew up in the country in northeast Ohio. I knew many people who simply never locked their doors, including overnight or even when they weren't home.
I lived in middle-of-nowhere Texas for several years, and I think the only time I ever locked the door to my home was when I left for two weeks at Christmas. If my car didn't automatically lock itself after you get out, I would have left it unlocked as well, with the key lying in the center console.
I live in Brooklyn now. Things are a little different here. My door has a $350 deadbolt lock that -- when it broke and locked me in -- took a locksmith, a serious drill and a couple hardened bits to defeat.
Whether you should lock something and how secure you make it isn't a binary decision - it depends on the value of the thing you're protecting and the likelihood of an attack.
The chance of getting shot in the face while trying to enter a middle-of-nowhere Texas house uninvited may serve as a deterrent equal to a $350 lock. I know a few Texans who would totally agree with that statement.
There are probably going to be a lot of people negatively affected by this for quite some time to come.
Yes, but it's better than the alternative, where there would be an increasing number of people negatively affected for even longer. At least the problem is out in the open now and there will be public pressure to fix it.
America was a better place when people could keep their doors unlocked, and when someone's first response to a break-in was to blame the criminal.
You are correct. But those days are long gone, and they're not coming back. Unless you want to throw out a good chunk of technology, kill half the people on the planet and go back living in communities where you knew personally everybody you interacted with during your entire lifetime.
Butler has simply raised the threat level for everyone.
Yes, he has. But he has also raised the defense-level for everyone, and by a greater margin. Before his post, there was a much larger divide between the people who knew about this exploit and those who didn't (the fox and the sheep, if you will). It's true that now more people can exploit those who don't know, but it's also true that even more people can defend against it.
He did not invent a new lock or close a hole.
Making other people aware of the hole is the first step in getting it closed, if you are unable to do it yourself. Shame on the rest of us for not doing this earlier.
I agree. Releasing a point-and-click exploit is standard practice among white-hat, well-intentioned hackers. Decades of this kind of tough love is why Microsoft finally has an OS that is reasonably secure.
And without raising awareness of the issue, everybody might always be somewhat vulnerable forever, whereas now that "we know", after being highly vulnerable for a short time everybody's vulnerability to this should drop to zero very quickly.
If you assume a limited number of evildoers and a limited ability to exploit this at will (e.g. you have to catch your victim in close proximity on public wi-fi that you're sharing with him), releasing a tool like Firesheep may produce significantly less total damage.
Bull crap. Hamster and Ferret was only slightly harder to use than Firesheep: you had to run it, then point your proxy at localhost:1234. Aside from that, it does exactly the same thing. And before it was around, we were using cookie-editing plugins in Firefox to import stuff we grabbed from Wireshark. And before that, we were manually editing our browser's cookie stores to bring in cookies we caught with tcpdump. And before that...
This isn't a new threat. Just a new shiny piece of ware that lowers the bar a little further.
How many new servers are going to be needed now that https is used for everything and requests can't be cached?
FWIW, non-JS is still a vector. For example, img tags can stomp on cookies. Yes, serving static media over SSL is diminishing returns, and it would suck for mid-stream proxies (and every ISP will hurt from it). But don't argue that it doesn't matter from a security perspective.
When done correctly, it should at least avoid this sort of cookie-stealing scenario: HTTPS-only cookies won't leak into HTTP requests for non-active content. Malicious Set-Cookie: injection into the HTTP responses can obviously still log the user out, etc., and may even reveal unwanted info if the HTTPS server doesn't handle the situation gracefully.
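For what it's worth, the "HTTPS-only cookie" mentioned above is the Secure attribute on Set-Cookie, often paired with HttpOnly. A tiny illustration using Python's stdlib cookie module:

```python
# Illustrating the Secure and HttpOnly cookie attributes discussed above.
# A Secure cookie is only ever sent over HTTPS, so it can't leak into the
# plain-HTTP requests that a wireless sniffer sees; HttpOnly additionally
# hides it from page JavaScript.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["secure"] = True    # browser sends it over HTTPS only
cookie["session"]["httponly"] = True  # and JS can't read it

header = cookie["session"].OutputString()
assert "Secure" in header and "HttpOnly" in header
```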
America was a better place when people could keep their doors unlocked, and when someone's first response to a break-in was to blame the criminal.
The analogy is not complete because in our situation there's a third party involved beside the victim and the criminal: the website. What if your bank leaves the vault unlocked so anyone can take your money? Isn't the bank at least partly to blame?
When you log into most banks and websites that perform financial transactions, you are routed to an SSL logon; it's the default choice, and in fact most of the transactions are done under SSL. Why should social networks be different?!
By now it is clear that unauthorized access to social networks can cause much distress, and even worse, to a great many people who use them.
Banks minimize their liability when they use SSL; Facebook should do this too. At this point it should be clear that the effect on a person's social life can be severe: career-destroying, financially damaging, what have you. We are witnessing stories along these lines at increasing rates.
The release of this extension is a blessing in my view. It forces the issue that companies like Facebook or Twitter would prefer to ignore, or cover in obscure terminology, and it simply demonstrates how trivial this is.
When Ingersoll Rand released the Kryptonite lock, they named it after the mythical element that would bring Superman to his knees. Too bad the lock was revealed to have a design flaw that enabled cracking it open with a Bic pen. Was it shameful to expose that defective design?
Facebook etc... talk about privacy all the time. This forces them to walk the walk, not just talk.
Butler isn't doing anything earth-shattering, he is just reminding everyone AGAIN that the current system is messed up.
There will always be this debate about disclosure, but you can't ignore that it works. Sure, innocents suffer (and they would [they are!] anyway), but at least it's one more reason for websites to switch to https.
The problem with vulnerabilities like this is that they're too easy for people to rationalize as "hard" and it's too easy to pretend that they don't happen. People seem to think that as long as the problem can remain invisible (to them), nothing bad is happening.
What actually happened back in the day before people started forcing the issue with full disclosure was that the bad guys operated with impunity because the good guys couldn't work together because people got upset when folks let the "secret" vulnerability knowledge out.
I don't want to go back to those days. Things have improved so much since then.
Multiply that by thousands and you'll begin to have some idea of the discussions going on at every web based company with a clue today.
For those who make their living in computer security, like Mr. Butler, of course it's a good day (and month, and year). Pretty good business when you can start fires and then get paid well to put them out. Serves them right, of course, because they shouldn't have built that house out of wood in the first place.
While we're on the topic, I don't understand how a lot of people fail to realize that spending on computer security is a lot like spending on national security -- you can always spend more money on it, thereby taking away resources from other priorities.
"Every gun that is made, every warship launched, every rocket fired signifies in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed. This world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children. This is not a way of life at all in any true sense. Under the clouds of war, it is humanity hanging on a cross of iron."
It is nice to see a common security issue taken seriously, but to me the even bigger gaping hole is that most people use a single password for their email account and all their other accounts. (Writing on a phone, sorry if unclear.)
This is kind of a big deal. Not a whole lot of people are aware of this vulnerability and among those who are it's likely only a small subset that knew how to exploit it until now. I suspect all of the coffee shops in the college town where I live will have people using this starting tomorrow.
I've personally been working from cafes and tunneling everything through SSH for years, but in my experience almost no one else does this.
I've personally been working from cafes and tunneling everything through SSH for years
To where? I suspect it's to a server, VPS, or similar, and the connection is unencrypted from there to its endpoint. This being the case, could someone with a server on the same subnet be running a browser remotely (or even just tcpdump) and doing a similar thing with your logins?
(This is just some thinking out loud and I may be totally wrong - correct me ;-))
Virtually no modern wired networks use hubs anymore; they're for the most part switched. Unlike wireless networks, where packets are broadcast freely into the air, a switch checks the destination address and sends packets only to the right endpoint. There are attacks like ARP spoofing and MAC flooding that can defeat this, but they don't work well against modern enterprise-grade switches like you'd find in a data center.
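If the hub/switch distinction isn't clear, here's a toy model (purely illustrative, nothing to do with real switch firmware) of why a passive sniffer sees everything on a hub but nothing on a switch:

```python
# Toy model: a hub repeats every frame to every port, so any attached
# sniffer sees it; a switch learns which port owns each MAC address and
# forwards frames only to that port.
class Hub:
    def __init__(self):
        self.ports = {}             # port id -> list of frames received

    def attach(self, port):
        self.ports[port] = []

    def send(self, dst_mac, frame):
        for received in self.ports.values():
            received.append((dst_mac, frame))  # everyone sees everything

class Switch(Hub):
    def __init__(self):
        super().__init__()
        self.mac_table = {}         # MAC address -> port id

    def learn(self, mac, port):
        self.mac_table[mac] = port

    def send(self, dst_mac, frame):
        port = self.mac_table[dst_mac]
        self.ports[port].append((dst_mac, frame))  # only the owner sees it

hub = Hub()
hub.attach("victim"); hub.attach("sniffer")
hub.send("aa:bb", "secret")
assert ("aa:bb", "secret") in hub.ports["sniffer"]  # hub leaks to sniffer

sw = Switch()
sw.attach("victim"); sw.attach("sniffer")
sw.learn("aa:bb", "victim")
sw.send("aa:bb", "secret")
assert sw.ports["sniffer"] == []                    # switch does not
```

The ARP spoofing and MAC flooding attacks mentioned above work by poisoning or overflowing that MAC table so the switch degrades back to hub-like behavior.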
It depends on how secure the remote network is. If it's just another coffee shop, you're screwed. If it's your own Linode in one of those well managed datacenters, it would be pretty difficult for anyone to snoop that traffic.
This is one of many reasons Loopt has used SSL for all traffic from the very beginning. At least WiFi has fairly limited range. Cell networks (and satellite internet) can be sniffed miles away.
In addition to making session hijacking harder, using SSL keeps crappy proxies from caching private data. Remember when some AT&T users were getting logged in as other users on Facebook's mobile site? The cause was a mis-configured caching proxy.
Raising awareness of issues like this gets them fixed. Until a service's users demand SSL, it won't be offered. Unless the service is Loopt :) It's not a noticeable computational burden, but it does increase latency and cost money (for certs).
1. Not images
2. Older GSM crypto can be hacked in real time with rainbow tables now
3. Usually not encrypted at all
Indeed, Loopt appears to be one of the few high-profile sites to have done this right. SSL for everything, and cookies that are relevant to login sessions are marked secure. This is what we need everywhere!
Nice. A solid demonstration to show next time your webmaster doesn't want to set up SSL everywhere.
That said, the current cartel-like setup of certificate authorities (protection money and everything!) makes SSL annoying and expensive if you want the browser to not have a fit. Especially for small-scale projects. But there's really no excuse for larger sites.
HTTPS also effectively needs distinct IP addresses for distinct hostnames, because the server must present its certificate before it sees the Host: header, which only arrives inside the encrypted channel. (TLS Server Name Indication addresses this, but not all clients support it yet.) So no more having multiple websites on one IP address.
SSL is bad for the environment because it requires far more server-side hardware... Well, I'm only partially serious about the environment thing. The question is: how can internet companies make it commercially viable to use SSL for everything? The added hardware and power costs make each user considerably more expensive, possibly to the point where they're no longer worth it.
An alternative is to bind the user's session to their IP address, but that isn't foolproof either, because of NAT, DHCP, and certain big ISPs that change IPs on the fly.
Regarding IPs, there's a bigger issue here. People are used to being able to shut their laptop at home and open it back up at work without having to re-authenticate all their browser tabs. If you filter by IP this breaks. SSL requires no changes to user behavior.
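To make that breakage concrete, here's a toy sketch (tokens and addresses are made up) of IP-bound session validation, and how a laptop changing networks invalidates an otherwise good cookie:

```python
# Toy IP-bound session check: the session is only valid when presented
# from the IP it was created on, so moving from home to office WiFi
# forces a re-login even though the cookie itself is unchanged.
sessions = {"tok1": {"user": "alice", "ip": "203.0.113.5"}}

def valid(token, client_ip):
    s = sessions.get(token)
    return s is not None and s["ip"] == client_ip

assert valid("tok1", "203.0.113.5")       # same network: fine
assert not valid("tok1", "198.51.100.9")  # new network: forced re-login
```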
People bring up the Google stat, but you have to remember they have incredible engineering resources, so they can probably absorb changes like this every day without adding machines. That doesn't mean every dude with a LAMP stack can turn on SSL and expect the same performance, just that it's possible, with mongo manpower and talent, to make it work. (Google doesn't even release the details of their web stack, so comparing their stat is apples and oranges.)
How many web servers do you know of that are CPU-bound, rather than bound by massive code stupidity or by I/O in some form (waiting on SQL, disk access, bandwidth)? Encryption can run while other threads are waiting for a response.
In general, it's a negligible cost; it adds a very minor delay compared to latency / transfer time, and uses CPU otherwise highly unlikely to be pegged. If you're pushing threading limits / CPU usage limits, you're probably inches from needing new hardware anyway, and SSL should be considered part of the cost of running a web server.
It's not completely negligible. Even if CPU usage is negligible, SSL negotiation adds latency that may be undesirable for some apps. The testing burden is a lot higher, because one wrong http:// link in CSS, HTML, or AJAX will trigger big scary warnings. And there's the IP address problem: you can't use name-based vhosts the way most people with Apache like to.
That may have been true 10 years ago, but the overhead of SSL for CPU is almost nothing, and only a few ms of latency. Web applications are mostly IO and memory bound, anyways. We should be using SSL all the time, by default. There's no reason not to, at this point, aside from certificate authorities.
You can get SSL certificates for free for one domain, and they work with all browsers (except Opera, IIRC). Also, you can use Perspectives for Firefox, which I think is much better than the current system.
The only downside to wildcard certs through StartSSL is that getting one requires high-resolution proof of personal identity, to be kept on file outside local jurisdiction (the company's based in Israel) until the cert's final renewal or revocation, plus seven years.
I admire their model of only charging for operations which require human intervention, like identity validation, but handing over that degree of documentation for that amount of time requires a lot of trust, not just of the company as it currently exists, but as it will exist in the far future.
If there was a way to validate organizations which wasn't layered on top of an earlier validation of an individual, or if their decentralized web-of-trust was usable for class 2/wildcard certs, I'd be a big fan.
As it is, there's no reason not to use Start for class 1, single-domain certs, for which the validation is automated and reasonable.
Even the dumbest script kiddies have been doing this for years anyway. There are plenty of existing tools. This one just lowers the bar so your mum can perform the attack too.
It almost makes me angry that websites like Facebook and Twitter don't force all traffic over https. They've got the money and the expertise. They just don't care if your account gets sniffed and taken over at a web cafe.
Exactly. I'm not a blackhat and my only "hacking" consists of forcing myself into my own systems which I've stupidly locked myself out of, yet I've managed to do much that this plugin can do.
The most unethical thing I've done was to take one of the OLPC XO laptops and convert it into a MITM machine, rebroadcasting the SSID it connects to while routing and logging all traffic generated by anyone who connects to it. It took a weekend to set up using pre-existing tools and scripts, and it can be deployed anywhere I want within 2 minutes and run for up to 6 hours hidden in the bottom of my backpack. It was a fun experiment, and it certainly made me more aware of just how vulnerable I was outside my home network.
Another point of interest, this weekend I hacked on a Minecraft bot for the Alpha version. In order to understand and dissect the connection protocol I needed to recreate, I used wireshark to dump and parse how the client authenticates and connects to the server. Even that transmits your username and password in plaintext.
This vulnerability (it hurts to even call it such at this point) has been around for years, and the attack has always been easy for a determined attacker to carry out.
How else are we going to convince people to secure their sites and protect their users? People have been presenting on this issue for years (Ferret & Hamster, Blackhat 2007) and companies haven't responded/cared. It's possible to solve this problem (Gmail is all HTTPS, and done correctly, Amazon has a tiered authentication system that properly uses SSL for important things, Wordpress does SSL right for accessing their admin interface) - companies need to step up and address the issue.
Definitely, I guess as a uni student, I'm worried about the majority of non-technical students who are going to have their sessions hacked and have no clue what hit them and cannot setup proxies/tunnels.
I'm not saying this isn't the site's fault. They definitely need a wake-up call.
This was already happening on a massive scale before this new app was released... I honestly don't think it will increase the number of attacks by all that much. It's brilliant as a tool for spreading the word though.
It was happening on a massive scale, but now a huge number of really lazy people who didn't bother to do this before are doing it: the extension had 3,000 downloads within 2 hours of release. The thing is, most universities have protection set up. It seems Cisco NAC is actually good for something; I never thought I'd say that. The extension certainly doesn't work on my campus.
This is essentially the same argument that comes up with full disclosure. Yes, it's not pretty. Yes, it causes a lot of collateral damage. But it also makes the big players patch things up faster, while letting the knowledge out to the public, which of course consists of not only the script kiddies, but also the unsuspecting legitimate users.
Thanks for posting this. It convinced me to upgrade SSL support from "something that would be nice to implement if I was bored someday" (BCC is not exactly security critical -- except, on reflection, the admin pages) to "drop everything and get it done."
I had a SSL certificate for a while, but actually using it throughout the site without showing users Big Scary Error Messages is not quite trivial. The activation energy for digging through several hours of edge cases was lacking... until today. ("Whoops, while you don't know you're doing it, you pull an unnecessary CSS file into the cached CSS for the registration page which references a background image on an absolute http:// URL. Your registration page now throws an error on IE. You lose." "You have approximately 150 images on the site linked as handcoded img tags rather than through Rails' image_tag helper, because when you were a Rails newbie you did not know that existed. You now get to rewrite all of them so that they can use SSL asset caching magic." etc, etc)
I've seen some sites which figure out a way to force the user in and out of SSL for certain URLs. You might be able to implement a fix which forces SSL for the admin section and non-SSL for everything else.
I thought the title of this submission was slightly misleading. This is not a security vulnerability from within Firefox, it's a Firefox plugin to reveal security vulnerabilities in a wide range of websites.
Mind you, for any of these extensions to work, the website you're visiting needs to already be accessible via SSL. If the site does not have encryption, these plugins can't force it to automagically start using encryption it never had.
However, as far as I know, they don't share the same session cookie across different services (the cookies are negotiated over a TLS-protected link).
Likewise they have also made several other services TLS only (e.g. calendar, docs)
The explanation I've always heard for not using HTTPS 100% of the time is that it puts a substantial load on the server, and for many sites it's overkill. Setting aside the subjective topic of "overkill"... how much more CPU-intensive is it to serve pages over HTTPS compared to HTTP?
OK, this is most likely too late to contribute to the posted article, but there was a talk by Michael Klishsin about a year ago. Here are his slides: http://bit.ly/90qORL
(ssl, performance, certificates, lots of stuff)
The CPU intensive part of a HTTPS connection is the initial key negotiation/session setup (using asymmetric encryption methods). The symmetric encryption of the actual traffic is pretty trivial.
You can amortise the session setup cost by ensuring the HTTPS session caching is enabled on your server (in Apache, the directive is SSLSessionCache). This will let subsequent connections from the same client re-use the same SSL session.
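A minimal sketch of what enabling that cache looks like in an Apache config (the shared-memory path, cache size, and timeout here are illustrative assumptions, not required values):

```apache
# Cache negotiated SSL sessions in shared memory so returning clients
# can skip the expensive asymmetric handshake on reconnect.
SSLSessionCache        shmcb:/var/run/apache2/ssl_scache(512000)
SSLSessionCacheTimeout 300
```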
The CPU load can be mitigated with frontend HTTPS accelerators or proxies (think nginx as a load balancer terminating HTTPS). The real problem is the first connection. Browsers don't fall back to HTTPS; if nothing answers on HTTP they'll give an error. If the first connection is over HTTP, then a man-in-the-middle attack can succeed.
Someone correct me if I'm wrong, but that's the exact vector for a man-in-the-middle attack. The first request over HTTP gets hijacked and redirected to a "secure" server, then you (the user) see the lock and go to town, secure in the knowledge that your communications with this server are protected because they're encrypted.
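To make that limitation concrete, here is a hypothetical Python WSGI middleware sketch of the usual HTTP-to-HTTPS redirect (the function names are my own, not from any particular framework). The weakness described above is that the 301 itself travels in the clear, and that is exactly the response an active attacker can rewrite or suppress:

```python
# Hypothetical sketch: redirect plaintext requests to HTTPS.
# Note the redirect response is itself served over HTTP, so a man in
# the middle sitting on the first connection can intercept it.
def force_https(app):
    def middleware(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            host = environ.get("HTTP_HOST", "example.org")
            path = environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently",
                           [("Location", "https://%s%s" % (host, path))])
            return [b""]
        return app(environ, start_response)
    return middleware
```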
Isn't that exactly why HTTPS sites are supposed to have expensive certificates issued by big companies? Otherwise the browser will display a big red warning message. If you ignore that warning, you deserve to be hacked.
If the request gets redirected to a HTTPS proxy site that the attacker has set up, that's a different story. But again, you should be checking what's in your address bar. No security system can rescue you if you can't tell the difference between "mail.google.com" and "mail.google.haxxor.com". But for those of us who actually read what's in the address bar, HTTPS is pretty good security.
Isn't that exactly why HTTPS sites are supposed to have expensive certificates issued by big companies?
There is virtually no cost associated with issuing a certificate. The fact that they are nevertheless prohibitively expensive for most private domains is partly responsible for the failure to adopt SSL on a broad scale.
Otherwise the browser will display a big red warning message.
While it is true that this warning is intended to protect users against man-in-the-middle attacks and other methods that redirect traffic away from the original source, it also prevents people with absolutely valid (but free) certificates from offering perfectly good encryption on their sites. This warning is misguided and does more to prevent the secure use of the web than anything else.
The remedies would be simple, but I guess commercial reasons prevent them from being adopted at the expense of everyday users.
> it also prevents people with absolutely valid (but free) certificates from offering perfectly good encryption on their sites. This warning is misguided and does more to prevent the secure use of the web than anything else.
Accepting self-signed certificates would not be a good solution. Anybody can sign a certificate with "www.facebook.com" in the Common Name field. Your communication would be encrypted, but you'd be communicating with the wrong guy. SSL was designed to provide both encryption and identification. Self-signed certificates provide only encryption.
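This is why TLS clients check identity as well as encryption. As a small illustration (assuming a modern Python `ssl` module), the default client context requires both a CA-signed chain and a matching hostname, so a self-signed certificate claiming `www.facebook.com` fails before any encrypted data is exchanged:

```python
import ssl

# A default client context verifies two separate things:
#   1. the certificate chains to a trusted CA (identification),
#   2. the certificate's name matches the host you asked for.
# A self-signed cert with CN=www.facebook.com fails check 1 outright,
# no matter how strong the encryption it would provide.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # chain verification is on
print(ctx.check_hostname)                    # hostname verification is on
```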
Besides, certificates are already quite cheap if you know where to buy them. RapidSSL costs as low as $12/yr, which is just slightly more than the cost of a domain name, and it's recognized by all browsers including IE6. I wouldn't consider it "prohibitively expensive" if it costs less than two lunches.
There are, of course, many other ways in which the existing infrastructure is inadequate. Take a look at the list of CAs that your browser automatically trusts. It's a mess. But it's not easy to implement an alternative that can provide both encryption and identification with the degree of reliability that the current infrastructure has. Some kind of social trust mechanism might work, but we're a long way from standardizing on anything of the sort.
This specific attack should work just fine with WEP but will not work against WPA2 (disclaimer: I haven't tried the combinations personally). With WEP, every single client uses the same encryption key, so all the packets are visible to everyone. With WPA2-PSK, each client has its own key which is derived from the pre-shared key. Because of this, your adapter won't decode someone else's packets, but it's technically feasible to do so with other types of attacks outside the scope of this Firefox plugin. I'm not that familiar with the encryption within WPA-Enterprise, but I don't believe you can derive other people's keys and sniff their data when using it.
Mind providing more information on this? eg, what about WPA2+TKIP?
I'm trying to wrap my head around how WPA2 could still provide protection with a shared key... I'm sure I'm not the only geek who feels like their knowledge of WiFi protocols goes stale every six months or so.
TKIP is much more vulnerable. In theory it requires "work" to crack WPA+TKIP, but it's comparatively trivial with modern hardware. With WPA2+AES, the client and the router run a key handshake that derives unique per-client session keys from the PSK, so one client's traffic isn't any more sniffable by another, in principle, than HTTPS traffic. However, depending on configuration it can be vulnerable to man-in-the-middle attacks and the like.
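The key derivation being discussed can be sketched with the standard WPA2-PSK formula (PBKDF2-HMAC-SHA1 over the passphrase and SSID, per IEEE 802.11i); the per-client transient keys are then derived from this master key plus per-session nonces in the 4-way handshake:

```python
import hashlib

# WPA2-PSK pairwise master key:
#   PMK = PBKDF2-HMAC-SHA1(passphrase, salt=SSID, 4096 rounds, 32 bytes)
# Each client then derives its own transient keys from the PMK plus
# per-session nonces, which is why peers can't trivially read each
# other's frames the way WEP clients sharing one key can.
def wpa2_pmk(passphrase, ssid):
    return hashlib.pbkdf2_hmac(
        "sha1", passphrase.encode(), ssid.encode(), 4096, 32)
```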
It's fairly easy to do if you are logged in on the network already. For example, for the iPhone you can use something like pirni to spoof the mac address of the router. That way you'll receive all data on the network, and can send it on the router yourself. In the meantime, you can dump all cookies that are passed on. I think the tool even allows you to list all twitter and google cookies, and set them in Safari.
I think this should be a call to arms to network, web and system admins everywhere. This is a problem that everyone knows about but nobody wants to do anything about since it requires additional setup. Usually the barrier is a technical issue that the end user can't figure out. However since submitting forms via SSL is something the developer can do without impacting the end user at all, this is a simple fix for just about any website. You need a static IP and an SSL certificate, and they are both cheap.
Running out of IPv4 space is an issue in this regard, but hopefully with more people wanting SSL it will push providers to IPv6 quicker. Nicely done EricButler!
It seems fine to just enable SSL everywhere. But indulge me for a second in thinking of alternate solutions.
Logistically, I suppose you would run into some trouble setting a new cookie for each request depending on how the page is loaded. For instance, if the user pastes a url into a new tab manually, then this system wouldn't have a chance to set the new cookie first.
A downside is obviously that the content itself is still not safe, but at least the account would be. Any thoughts?
I think all cookies for a domain are sent with every request to it, so cookies can't be used to (securely) pass data to the next page. It'd work just fine on the login page, but every page after that would have to renegotiate to generate a new cookie, meaning you basically just created SSL everywhere.
Local storage, however, could probably be used to do just such a thing, as it exists only locally. In which case you could just have the login page generate an RSA key pair, receive the server's public key in the response, and use that for any kind of secure communication on each page load. The server would have to remember sessions => encryption keys, but that's not too hard.
Haven't thought too hard about passive attacks, but you're not secure against an active MITM like airpwn (http://airpwn.sourceforge.net/Airpwn.html), because the MITM can inject JS into the unencrypted content that steals your JS security scheme's secrets. Effectively, an active MITM allows XSS on plain ol' HTTP sites.
It doesn't currently do anything with passwords, it's only pulling out cookies from HTTP Response headers. But it would be trivial to also get passwords in non-HTTPS requests for logins with the same method.
Passwords are not a part of this... It's the session cookie, which is an entirely different matter. It's unique to the login process, so one compromised account isn't able to lead to compromising other websites. It's also time sensitive (generally) and so that hijacked cookie will expire. If he were collecting all this information, he wouldn't be able to do much with it.
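A minimal sketch of the kind of server-side session store being described (the names and the TTL are illustrative assumptions): the token is random and expires on the server, so a hijacked copy only works until the server forgets it.

```python
import secrets
import time

SESSION_TTL = 3600  # seconds; an illustrative choice

_sessions = {}  # token -> (user, expiry); stands in for a real store

def create_session(user):
    # Unguessable token -- but over plain HTTP it is still sniffable.
    token = secrets.token_urlsafe(32)
    _sessions[token] = (user, time.time() + SESSION_TTL)
    return token

def lookup(token):
    entry = _sessions.get(token)
    if entry is None or entry[1] < time.time():
        return None  # unknown or expired: hijacked copies die here too
    return entry[0]
```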
This exploit is for insecure Wi-Fi networks, so only using encrypted Wi-Fi or Ethernet would seem to remove this attack vector. Is there a real risk that someone (besides the government) can see your cookie?
Is there a real risk that someone (besides the government) can see your cookie?
Yes, if you login and your cookie is sniffed and spoofed then basically you just allowed the attacker to login as you at the same time.
Minimizing it is a little bit different: you can use a secure proxy/tunnel, you can limit your unencrypted wireless activity, you can make sure that sites that should be SSL encrypted are (stripping SSL is common when password sniffing) and you can avoid these services while on open wifi networks.
That assumes the session is killed on logout. I know from first-hand experience that at least one version of Merb didn't do that. I hacked a pretty popular geo-social site by grabbing the session cookie and playing around, then logging out. I was still able to check in after I had logged out, and for good measure verified that my session was still valid after changing the password. I assume they were using the default session setup, so I guess it eventually expired. I didn't keep it around to see how long it stayed valid.
On reporting it, the response was essentially, "oh, you didn't have to go to that much trouble, you could have just used your user/pass from curl…" Completely oblivious to the fact that their app/site was completely vulnerable to session hijacking.
That's one of the problems of app frameworks: if you don't know what they're doing (and more importantly, not doing), you can get yourself in a heap of trouble before you even realize there's an issue. But boy, you sure can make it to market fast. shakes head
I would have expected each wireless client, on an encrypted network, to negotiate its own key with the access point -- so you'd only see neighbors' traffic if the access point chose to rebroadcast it to you.
Are you sure that neither WEP nor WPA/WPA2 do it this way?
Your terminology "the network" or "your network" is still unclear; encryption to the AP could be unique per wireless network client, or not. If it is unique per client -- and it is my belief that recent standards, like WPA2 at least, provide this -- then casual passive eavesdropping by other wireless clients (as with the FireSheep tool) is thwarted. (And that's what most people are most concerned about.)
Are you suggesting that no generation of WEP or WPA protects against other authorized wireless users of the same AP, because they're "on your network"?
WPA enterprise allows a separate (changing) key for each user, typically what you get from an RSA token. Once it gets to the AP, it's then clear text (assuming HTTP) over the rest of the internet until it hits your (HTTP) service provider.
If you have control over the internet between the AP and your server, then you're safe. If you don't, then how safe you are depends on how much you can trust the owner of each router along the way. In general, you should be okay, except that every now and then you might end up on an untrusted router, and it's then game over.
and it still asks me to run it with "--fix-permissions". I guess it's time to go digging around in the source to try and find out what it wants me to do.
After a bit of digging, I found out that running it with --fix-permissions really just chowns the binary to root then setuid's it. I don't see anything wrong with it on the surface, but I'll keep digging.
Plus, your source IP can change from request to request when your ISP transparently pushes you through one of many proxy servers. AOL does (or did) this, as do some large European ISPs whose names escape me.
It states that it works for "open networks". What does that mean? All networks that you have access to? Including those in cafes where they give you a key to log in? Or just networks that are completely open? And why does it work at all? I thought the WLAN access point would encrypt the communication between itself and the computer. It would be interesting to know which protocols are vulnerable to this and which are not.
I guess the logging of raw WLAN packets is a one-liner under Linux? Does anybody know it?
Newer servers can serve more than one HTTPS domain using the same IP... to users who are not using IE/Chrome/Safari under Windows XP. If you depend on SNI, you're leaving out something like a third of your user base.
Same setup. Sidebar shows for me after selecting it from the View -> Sidebar menu, however it pops up with a message that says "Run --fix-permissions first." Not sure where I'm supposed to run this flag.
It's an interesting assortment of sites that are "supported" out of the box. Some of them are pretty harmless (bit.ly, Flickr), some could cause some pretty serious hassles (Google, Amazon), and some could be absolutely devastating (Deleting someone's Slicehost account? Ouch...).
On my Macbook Pro (purchased 1 year ago) it doesn't seem to be able to capture traffic on my wifi. It can see sessions originating from another browser on the same Mac, but not other macs on the wifi network.
Digest authentication is safe against passive sniffing (it doesn't exchange any password/token in the clear and uses nonces), but it doesn't protect against active attacker who could modify server headers and replace "Digest" with "Basic" to reveal password.
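For reference, the RFC 2617 Digest response (the simplified, no-`qop` variant) is built from MD5 hashes, so the password itself never crosses the wire; only the nonce-bound digest does, which is what makes passive sniffing and replay useless against a fresh nonce:

```python
import hashlib

def _md5(s):
    return hashlib.md5(s.encode()).hexdigest()

# Simplified RFC 2617 Digest (no qop): the server's nonce binds the
# response to a single challenge. A passive sniffer sees no password,
# and the captured digest can't be replayed against a new nonce.
def digest_response(user, realm, password, method, uri, nonce):
    ha1 = _md5("%s:%s:%s" % (user, realm, password))
    ha2 = _md5("%s:%s" % (method, uri))
    return _md5("%s:%s:%s" % (ha1, nonce, ha2))
```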
This was recently discussed in the HTML5 WG, but the conclusion was that Digest and the countless JS tricks proposed in its place are only partial solutions; cookies have unstoppable momentum, so it's better if everyone just switches to SSL.
I love SSH tunnels, but in regards to this particular problem, it really just pushes the problem off to wherever your SSH tunnel terminates. Do you trust your server operator? Your ISP? This is addressed in our presentation, here (VPNs are essentially doing the same thing): http://codebutler.github.com/firesheep/tc12/#20
I'm going to be traveling for a while pretty soon and using a lot of internet cafes and other free wi-fi spots so I should probably get this set up - I'm worried someone will be able to grab my password while logging in to check mail.
On the other hand, stealing somebody's real life identity is not that hard either. But it does not happen too often, in part because it's illegal.
Stealing somebody's cookie on the Internet is a crime just as is stealing somebody's driver's license.
Although technical solution to this security hole is desirable, it's not the only solution available.
Linux installation needs work. The README is empty, and INSTALL says to use ./configure, which doesn't exist. ./autogen.sh complains about needing a xulrunner-sdk path, which isn't something normal for Linux.
Edit: Oops! Linux support is "on the way." I guess I assumed it was already supported, since Linux is the easiest platform on which to get your driver into monitor mode.
What would happen if you use iPhone tethering? Could it tap into the traffic of everyone on that network? (I have absolutely no idea.) If this is the case, the internet will have a panic attack in 2 days max.
No. Cabled tethering to a cell phone only gives you access to your own packets. It's like a switched network where only packets addressed to you are sent to you.
On 802.11 wireless networks your wireless network card is capable of capturing traffic addressed to other computers. When encryption isn't used or is compromised, you can steal their credentials.
Doing something similar against cellular networks would require a much more sophisticated attack with specialized hardware that's largely illegal in the United States. I would also hope that cellular communications are encrypted these days.
Interesting. I was going to do something similar but keep it limited to Facebook chat. That way you could eavesdrop on conversations in the room and impersonate people, etc. This is actually probably easy to program and more versatile at that.