Hacker News
Firesheep: Easy HTTP session hijacking from within Firefox (codebutler.com)
714 points by cdine on Oct 25, 2010 | hide | past | favorite | 341 comments



For anyone who has SSH access to a server (but not VPN) and is wondering what to do when you need some security in a pinch, here is a quick fix...

Open an ssh connection to a server you have access to using something like the following:

ssh -ND 8887 -p 22 rufus@12.120.186.8

where 8887 is the port on your laptop that you will tunnel through, -p 22 is the port the ssh server listens on (22 is the default, but I use a different port, so I'm used to specifying it), and the rest is your username and the address of the server.

Set your network to point to the proxy. On a Mac that would be…

... Open Network Preferences…

... Click Advanced…

... Click Proxies…

... Check the SOCKS Proxy box then in the SOCKS Proxy Server field enter localhost and the port you used (8887)

... OK and Apply and you are done!

Now you can surf safely.
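The whole flow, condensed into a sketch (the username, host, and ports are just the placeholders from above):

```shell
# Start a SOCKS proxy on local port 8887, tunneled through the remote server.
#   -N  don't run a remote command (tunnel only)
#   -D  dynamic port forwarding: ssh acts as a SOCKS server on the local port
#   -p  port the remote sshd listens on
ssh -ND 8887 -p 22 rufus@12.120.186.8

# In another terminal, point a client at the proxy explicitly; curl honors
# --socks5-hostname regardless of the system proxy settings:
curl --socks5-hostname localhost:8887 http://example.com/
```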


Also, remember that some programs don't respect the system's proxy settings and instead use their own. Firefox is one of those; you can find its proxy settings in "Advanced -> Network -> Settings".


Also, Firefox doesn't put DNS through a SOCKS proxy by default, which has some security implications and doesn't let you reach internal-only names. In about:config, set network.proxy.socks_remote_dns to true.
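For what it's worth, the same pref can be baked into the profile's user.js file so it persists. A sketch, assuming a typical Linux profile path (the actual profile directory name varies, so check yours first):

```shell
# Append the pref to user.js in the active Firefox profile.
# "PROFILE.default" is a placeholder; list ~/.mozilla/firefox/ to find yours.
echo 'user_pref("network.proxy.socks_remote_dns", true);' \
  >> ~/.mozilla/firefox/PROFILE.default/user.js
```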


After Firefox 3.6.4, the default proxy selection policy is to use the system default, instead of no proxy.


http://lifehacker.com/237227/geek-to-live--encrypt-your-web-...

That link has screenshots to help you configure Firefox to use the ssh proxy.


Unfortunately Opera doesn't support SOCKS proxies. http://www.opera.com/support/kb/view/194/


Also unfortunately I think Flash and Silverlight media streaming don't respect proxies, leaving me unable to stream Hulu and Netflix when I'm in the UK.


They _should_, and they typically do for me when using Chrome.


alanstorm, of Stack Overflow-answers-that-deal-with-Magento fame (well, "fame" being a relative term, but famous to me, anyway)?


Yeah, I'm that Alan Storm. NOT the WCW scrub wrestler.


  ssh -ND 8887 -p 22 rufus@12.120.186.8
just hangs and doesn't look like it's doing anything ... if you want to see stuff happening, so you know it's working, use verbose mode:

  ssh -vND 8887 -p 22 rufus@12.120.186.8
and you'll see delightful ssh debug information scroll by every time you hit a page in your browser.


I wouldn't say it "hangs," since everything is actually working fine and "hang" implies the process is stuck. There's just no output to provide feedback that it's working as expected.


You can remove the N option and you'll get a shell.


You, sir, made my day! I just set this up with my home Linksys router which is reachable from the internet and it works like a charm.

I am using the Tomato firmware (http://www.polarcloud.com/tomato) which has an SSH daemon.


Wow, great idea! I was thinking of using my root server, but worry about wasting traffic. That would not be an issue with my DSL router at home.


Awesome! Happy to help.


And make sure you already have the server's key in your known_hosts; otherwise you could be subject to a MITM attack :)
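One way to do that safely: grab the host key fingerprint out-of-band (on the server's console, or over a network you trust) and compare it to what ssh shows on first connect. A sketch, assuming the usual OpenSSH host key path:

```shell
# Run this ON the server (trusted channel) to print its host key fingerprint:
ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub

# On first connect from the laptop, ssh prompts with a fingerprint;
# accept it only if it matches the one printed above.
ssh -ND 8887 -p 22 rufus@12.120.186.8
```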


You proxy through the NSA? How brave!


You win a cookie! I was wondering how long it would take for someone to comment on that. :)


I thought the whole point of this mechanism was to avoid giving away cookies?


Insecurely transmitted cookies. I think it's safe to assume these two will negotiate a means of exchanging keys to their respective cookie jars.


On a similar note, I have this aliased in my shell (.profile) so I don't have to think before getting a proxy up:

alias socks="ssh -ND 8887 -p 22 rufus@12.120.186.8"

That way I can just open a shell and type "socks" and be good to go (well and then do the system preferences deal, but I have an AppleScript that does that automatically).
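Instead of AppleScript, macOS also has a built-in CLI for the same preference pane, so the toggle can live in the shell too. A sketch; the network service name varies by machine ("AirPort", "Wi-Fi", etc.), so list them first:

```shell
# Find the right network service name:
networksetup -listallnetworkservices

# Enable the system SOCKS proxy for that service:
networksetup -setsocksfirewallproxy "AirPort" localhost 8887
networksetup -setsocksfirewallproxystate "AirPort" on

# ...and turn it off again when you're done:
networksetup -setsocksfirewallproxystate "AirPort" off
```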


Also, if you host your ssh server on 443 rather than 22, you can tunnel through most corporate firewalls.
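Server side, that's a one-line change (assuming OpenSSH, and that nothing else, like a web server, is already bound to 443 on that machine):

```shell
# /etc/ssh/sshd_config -- sshd can listen on several ports at once:
#   Port 22
#   Port 443
# After restarting sshd, tunnel through the firewall-friendly port:
ssh -ND 8887 -p 443 rufus@12.120.186.8
```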


And if the firewall does protocol blocking, solutions like http://dag.wieers.com/howto/ssh-http-tunneling/ might be able to get through.


Client -> SSH_Server == Encrypted

SSH_Server -> FaceBook == Unencrypted

SSH proxies are not end-to-end encryption; they only protect part of the path. Not sure why this is being downvoted. It's true. The tunnel only runs between the client and the SSH server. The HTTP websites that you visit beyond the SSH server still see your cleartext packets.


This is true but not relevant to the discussion, because the attack in question depends on sniffing clear-text wireless traffic at the local access point.

Tunneling over SSH protects your traffic for that portion of the network (and out past your ISP as far as the remote end of the SSH tunnel).

An attacker would need different tools and resources to intercept your traffic between remote hosts.


So? The problem is insecure WiFi and local networks.

The network from the SSH_Server to Facebook is much larger and more secure.


http://silenceisdefeat.com/

Silence Is Defeat provides SSH accounts for a small donation.

(I am not affiliated with them)


I provide ssh accounts on 2 VPSs (and growing), free of charge. http://nipl.net/ http://ai.ki/


In case this isn't clear: if you ssh to someone else's machine and use it as a proxy or VPN, the owner of said machine can of course still steal your HTTP cookies. (not trying to say anything about the trustworthiness of the above 2 posters, just a general statement)


This is a stupid question, but what about a guy like me who has no access to a server?

I'm going traveling for all of next month. The only sites I'll be checking where I'll be logged in are my hotmail account and maybe my bank account (Chase). Both use https, so I suppose I'm in the clear then? (Also, when I click "log out" on these sites, it logs me out; but if my session has been hijacked, will it log the hijacker out of the session he's hijacked as well?)


A cheap Linux VPS is a couple bucks a month. Mine is three. If you aren't looking for a deal, there are many, many options at the 5-dollar price point. If five bucks is worth peace of mind for the next month, then that's your answer. This will also have the benefit of getting around filters operating on the WiFi network you are on.


Wow, three? I thought prgmr.com's $5 system was the best deal I'd be able to find.


It was a special deal featured on http://www.lowendbox.com/. If you can pay by the year, there are even cheaper deals.


BuyVM.net has some good-for-my-purposes VPSes for cheap, as long as you can get them "in stock" (whatever that means...). The lowest one is 15 bucks a year(!)


An Amazon EC2 micro instance is free for a year.


So I could go get a free VPS for a year through Amazon with this?


Generally websites will delete the login token on their side, leaving hijackers with an invalid token and a 'log in again' page.


Generally, yes; but watch out for older sites, or sites made by people who haven't learned much in this area, which may store some kind of account id in place of a key generated on each login. In that case, even though the website invalidated/deleted your cookie, the hijacked cookie would still be good.


I would consider paying for VPN access. I travel a bit and use Witopia for this. It's not very expensive and is quite convenient.


Would the free Amazon EC2 deal for a year be good for this?


I'd like to buy such a server at low purchase and maintenance cost.

The Pandaboard[1] looks like a good fit, but the instructions to install a Linux distro are a bit scary [2]. I guess I could do it, from my Mac, but I'm a bit afraid to mess things up with the low-level disk utilities.

Does someone sell SD cards with a distro pre-installed? Or an equivalent device with an easier setup?

If not, there's probably a market for that...

[1] http://pandaboard.org/

[2] http://omappedia.org/wiki/OMAP_Pandroid_Main#Getting_Started


Instead of a dedicated server, use a router that can run DD-WRT or Tomato. I use a cheap ($35) refurbished wireless router from Linksys[1], and I am sure there are other models. These are more energy-efficient and easier to maintain than a dedicated server. And of course you can use them as routers for your home network.

[1] http://www.amazon.com/Cisco-Linksys-WRT160N-RM-Refurbished-W...


There are lots of dirt-cheap Atom-based Mini-ITX systems out there. A basic motherboard with CPU will cost you about €60, a bit more for a dual-core. You can probably scavenge some DDR2 RAM from an upgraded laptop and install the OS on a USB stick. Mini-ITX cases/PSUs tend to be cheap too. If this is going to sit in your office or home, you might want to watch out for noise/heat, pay a bit more for a fanless motherboard & PSU, and get a case with a large, slow-rotating fan. All in all you can probably come in under €200, plus a multiple of that in time for research, assembly and installation. Or just rent a VPS.


Exactly my question. Is there any cheap and reliable (very important in this case) VPS service I can use to do this? Using ssh through the internet as a proxy seems to be the best approach. Unfortunately I cannot set up my own ssh server to do this, as both power and internet connectivity are unreliable where I live.


You might want to look at http://prgmr.com/xen/ It's run by a Hacker News member, lsc.

He has plans starting at $5/mo, but you'll want to take notice of the monthly transfer limit. The $5/mo plan is 10GB transfer a month (which comes to 5GB in/5GB out if you're using it as a proxy), so you won't want to tunnel video or downloads through it. If you go for the $8/mo plan, though, you get 40GB of data transfer.

I've never had an account with him so I'm not sure if there's a way to check how much of your data allowance you've used for the month. Someone else might be able to chime in about a program you could run on the server to notify you when you've reached a transfer threshold.


You can also just put DD-WRT or Tomato on your router and use that.


Are there public SSH servers that are safe?


Not that I know of, but decent VPSes are relatively cheap - for instance: $5/mo - http://prgmr.com/

If you can't afford that, you can always run a SSH server from your residence and use that.


Surely you jest, sir!


There are probably going to be a lot of people negatively affected by this for quite some time to come. One thing to point out is that there are grades of things. There is "public", and then there is "top hit on Google". Similarly, there is "insecure" and then there is "simple doubleclick tool to facilitate identity theft".

How many millions of dollars and man hours is it going to take to lock down every access point? How many new servers are going to be needed now that https is used for everything and requests can't be cached?

America was a better place when people could keep their doors unlocked, and when someone's first response to a break-in was to blame the criminal. By contrast, it's fashionable among a certain set (no doubt including the author of this mess, Mr. Butler himself) to hold that the real culprits are the door manufacturers. What said facile analysis excludes, of course, is that there is always a greater level of security possible. The level we currently employ reflects our tradeoffs between the available threats and the cost/convenience loss of bolting our doors and putting finials on our gates.

Butler has simply raised the threat level for everyone. He did not invent a new lock or close a hole. He's now forcing lots of people to live up to his level of security. Congratulations to the new Jason Fortuny.


Butler has not raised the threat level on anything. This has been a widely known issue since forever. A friend of mine wrote a sniffer that could do this back in college, and he was one of the last to the party. Want something else to kvetch about? His tool could impersonate the router and act as a proxy, including serving up ssl-encrypted pages to users who didn't realize they shouldn't accept certs from unknown signers - again, that was years ago, and even then it was nothing new or unique at all.

When a tool like this rises to even a minimum level of public consciousness, you're better off thinking "people have probably been doing this for close to a decade" than "this asshole just ruined the internet by pointing out an obvious flaw that someone will now be able to exploit".

And yes, at some point, a door manufacturer that knows how easily their doors will open and how frequently people will just walk through does take on some responsibility to add a lock (and the homeowner to use it). It's going to cost more in servers? Okay, so what? It costs more to install seatbelts, are you upset at Ralph Nader, too?

[Edited to bring it down a notch]


> Butler has not raised the threat level on anything.

Flat out false. Ever heard the term "crime of opportunity"?

What's your over/under on the number of identity thefts facilitated by Eric Butler's little gift? Let's make this empirical.


Anyone who wanted to hijack http sessions was five minutes of Googling and installing away from being able to do so before "Eric Butler's little gift" anyways. Are you claiming that the marginal impact of packaging it up into a firefox extension is so great as to make it a threat of a wholly different kind?


That is exactly what I'm claiming. That's also why this article has 200+ comments and was on the top of Hacker News all day!

You vastly underestimate the barrier that "five minutes of Googling" presents. I assure you, the overwhelming majority of aspiring script kiddies would never be able to figure it out. It took an expert to package an exploit in a nice GUI (and write cookie parsing code for every major social site under the sun).


As long as only the minimally motivated can exploit it, it's not really a problem, gotcha.

How about instead of shooting the messenger, you take some of that righteous anger and point it at the companies with millions/billions to spend who have simply ignored a longstanding known issue?


How about you recognize that there are a lot of innocent people who will be hurt by this stunt? There are hundreds of thousands of companies and millions of people who are targets for this, and most don't have a spare million lying around.

Hospitals, nonprofit groups, anyone running a website has to drop everything to lock it all down now. The effect is a lot like loosing a new virus (and might ultimately be treated that way).

> As long as only the highly motivated can exploit it, it's not really a problem, gotcha.

^ This modified statement is correct. All I'm saying is that making something easy to use and publicizing it widely is going to result in a lot more people using it.

[Edits - hey jfager, I don't know you from Adam and don't particularly enjoy flamewars. I agree that in the long run this should be fixed, ideally in such a way that 99.99% of people can blissfully go about their day. I just wish that the energy to secure stuff had taken the form of (say) a post on "here's how Google converted Gmail to https" rather than Firesheep. Hope we can find some common ground and you can see my POV.]


The intersection of 'evil enough to do something truly malicious', 'read a tech blog in the right 24-hour period', 'didn't already know the problem existed', and 'in enough cafes to pair with enough potential victims' is too low to cause "millions" more to be impacted by this, I promise.

Your implicit definition of 'highly motivated' (someone willing to put in 5 minutes of Googling) makes me sad.

I'm agitated because you're trying to hang someone for doing A Good Thing: putting real pressure on the bigs to finally actually fix a well-known, longstanding problem.

[Response to your edit: Facebook, Twitter, and other big sites know about the problem. How would explaining to them how Google secured Gmail change anything? They know how Google secured Gmail, and they know how to secure their own services. They just simply aren't, because it saves them money and their customers aren't demanding it. But the only reason their customers aren't demanding it is because the vast majority of their customers don't know the threat exists. This tool makes the threat clear as day to the most unsophisticated layperson, which makes it real, effective pressure, far more than yet another blog post asking nicely for SSL by default].


It might make you sad, but it's spot on. People were sharing MP3 files on usenet pretty easily, back in the day. It would have taken 5 minutes or less to work out how -- even easier than grabbing cookies.

It wasn't until Napster made that 0 minutes of googling that MP3 filesharing really took off.

For something like this to end up on millions of desktops, you have to be able to explain it to a half-stoned frat at a party. "Five minutes of googling and then some nerdery"? No chance. "Install this, go to the quad and you can sign into the facebook of any other person there?" Yup, that's going to spread like wildfire.


The responsibility is with every admin that setup an insecure access point, not with every security researcher to stay quiet about widely known and widely exploited vulnerabilities.

This isn't new. Point and click tools for doing this existed 10 years ago. Making a firefox plugin just pushed it back to the top of the headlines. This is actually a good thing because if word spreads more people will be aware of the already existing risk and will be more security conscious.

Does this mean everyone should stop logging into their personal accounts over unsecure wifi at school or starbucks? ABSOLUTELY.

Hopefully this new attention on an old hole will motivate more admins to fix their networks and more users to realize how vulnerable they are.


> It wasn't until Napster made that 0 minutes of googling that MP3 filesharing really took off.

(a) network effects (b) autosharing, spurring more (a)

Neither of these apply here.


Obvious, easy security exploits should be as publicly exposed as possible, and repeatedly so.

This kind of exploit is so many years old that it's a matter of basic public education and computer literacy. While this might be a "forcing function" on the web development community - it is not unfair. There is so much new tech every year, it's unfortunate that security isn't more in the consciousness of tech.

There may be more graceful ways to lead "sheep" to more secure use of the internet deserving of praise, but it's fair game to release an exploit, and I'd rather see FireSheep than censorship of it.


Your core argument still seems to be for security through obscurity. I'd rather have a problem be widely known, and addressed, rather than not widely known and ignored.


Re: "Hospitals, nonprofit groups, anyone running a website has to drop everything to lock it all down now." That simply isn't true. Unless a site uses cookies AND firesheep can understand those cookies, the site doesn't have a worse problem today than it did last month. It would be very nice if every site, of every group, implemented SSL for anything remotely personal. But from what I've read I doubt firesheep poses an additional threat to any such not mega-popular site.


24 hours later, more than 150,000 downloads. I believe it is safe to say the threat level has indeed been raised.

http://github.com/codebutler/firesheep/downloads


For a public wifi user, how do those 150k downloads actually affect the probability that someone else on the network is using a session-hijacking tool? Given that it was already high enough that people should have already been taking preventative measures, any increase you can attribute to this would still fail to justify the witch-burning you're looking for.

There is zero difference between what someone using public wifi should be doing today and what they should have been doing last week. Now at least more people are aware of the problem.


People have been doing this for years already with tools like Wireshark. The only thing the app he has released does is draw a massive amount of attention to the already existing problem. I say superb. Brilliant effort. Well done. Hopefully more people will stop stupidly sending session cookies over unsecured channels now.


Also. If you want to use Facebook completely over https. Install the "HTTPS Everywhere" Firefox addon. It forces a number of sites to make all of their requests over an SSL secured channel.


America was a better place when people could keep their doors unlocked

I hate this mythical "good old days" B.S. I know people who live in the country who don't lock their doors because they live in the country. The idea that people who lived in urban areas ever could leave their doors unlocked is absurd.


I live in a suburb of Atlanta and haven't locked my front door during the day in about a decade (since moving from an apartment to a house). The world isn't really as scary as the news makes it seem.


> I... haven't locked my front door during the day

This isn't what people mean when they say they don't lock their doors. I grew up in the country in northeast Ohio. I knew many people who simply never locked their doors, including overnight or even when they weren't home.


I lived in middle-of-nowhere Texas for several years, and I think the only time I ever locked the door to my home was when I left for two weeks at Christmas. If my car didn't automatically lock itself after you get out, I would have left it unlocked as well, with the key lying in the center console.

I live in Brooklyn now. Things are a little different here. My door has a $350 deadbolt lock that -- when it broke and locked me in -- took a locksmith, a serious drill and a couple hardened bits to defeat.

Whether you should lock something and how secure you make it isn't a binary decision - it depends on the value of the thing you're protecting and the likelihood of an attack.


The chance of someone trying to get into a house in the middle of nowhere in Texas uninvited and getting shot in the face may serve as a deterrent equal to a $350 lock. I know a few Texans who would totally agree with that statement.


Question is, would it have been that hard to defeat if you were someone who cared absolutely nothing for minimizing damage to the door and door frame? Usually, the answer is not at all.


For most of the time I've been here, I didn't lock them overnight either. My girlfriend takes comfort in that illusion of security though, so I play along.


While I completely agree, I wouldn't consider personal experience a valid data point for generalizing the whole world.


which... suburb of Atlanta?


Roswell.


North Atlanta is the calmest part.

I bet you would feel differently if you moved into the city or south of town.


I live in Oakland. My wife and I left on a 10-day trip last summer and left the garage door open (the clicker didn't work or something).

Our garage leads to my office, which in turn leads to the rest of the house.

We came home shocked to see it open, and even more shocked that not a single thing was missing.


There are probably going to be a lot of people negatively affected by this for quite some time to come.

Yes, but it's better than the alternative, where there would be an increasing number of people negatively affected for even longer. At least the problem is out in the open now and there will be public pressure to fix it.

America was a better place when people could keep their doors unlocked, and when someone's first response to a break-in was to blame the criminal.

You are correct. But those days are long gone, and they're not coming back. Unless you want to throw out a good chunk of technology, kill half the people on the planet and go back living in communities where you knew personally everybody you interacted with during your entire lifetime.

Butler has simply raised the threat level for everyone.

Yes, he has. But he has also raised the defense-level for everyone, and by a greater margin. Before his post, there was a much larger divide between the people who knew about this exploit and those who didn't (the fox and the sheep, if you will). It's true that now more people can exploit those who don't know, but it's also true that even more people can defend against it.

He did not invent a new lock or close a hole.

Making other people aware of the hole is the first step in getting it closed, if you are unable to do it yourself. Shame on the rest of us for not doing this earlier.


This is IMO a completely wrong approach to security. Butler has not raised the threat level, he has merely illuminated the existing threat level.


I agree. Releasing a point and click exploit is standard practice among white hat, well intentioned hackers. Decades of this kind of tough love is why microsoft finally has an OS that is reasonably secure.


Illumination is one thing. Enabling a ten year old to do malicious stuff with a few clicks and poorly considered actions is entirely another.


If it can be that easily scripted, 10 year olds were already doing it. Suppressing knowledge, especially knowledge of a flawed system, doesn't make the system safer.

In terms of severity, computing has overcome worse exploits; this is a problem awaiting an answer, which sounds like opportunity to me.


> Suppressing knowledge

Again, degrees matter. Abstract knowledge is one thing. A simple tool to facilitate griefing people is quite another.

Mobile web browsing existed before the Iphone. Search existed before Google. Telecommunication preceded the internet. You could share mp3s before Napster and mp4s before Youtube.

And you used to have to delve into Wireshark to pull this off, but now you can snag grandma's credentials from any Starbucks in the country with a mouse. Degrees do matter.


And without raising awareness of the issue, everybody might always be somewhat vulnerable forever, whereas now that "we know", after being highly vulnerable for a short time everybody's vulnerability to this should drop to zero very quickly.

If you assume a limited number of evildoers and a limited ability to exploit this at will (e.g. you have to catch your victim in close proximity on public wi-fi that you're sharing with him), releasing a tool like Firesheep may produce significantly less total damage.


I know 10-year-olds who do this already.


Bull crap. Hamster and Ferret were only slightly harder to use than Firesheep. You had to run them, then adjust your proxy to localhost:1234. Aside from that, they do exactly the same thing. And before they were around, we were using cookie-editing plugins in Firefox to import stuff we grabbed from Wireshark. And before that, we were manually editing our browser's cookie stores to bring in cookies we caught with tcpdump. And before that...

This isn't a new threat. Just a new shiny piece of ware that lowers the bar a little further.


How many new servers are going to be needed now that https is used for everything and requests can't be cached?

The main thing holding us back there are browsers that go apeshit if you load images via HTTP on an HTTPS page. Requiring JavaScript or other active content to be loaded from the same HTTPS server would be a good thing in many cases. I think currently ANY https server is allowed, which doesn't actually defend against any kind of XSS, so it's pretty meh. Or is there some kind of meta tag etc. that enforces same-origin? (If not, that would be a cool addition. Maybe a list of allowed domains?)


FWIW, non-JS is still a vector. For example, img tags can stomp on cookies. Yes, serving static media over SSL is diminishing returns, and it would suck for mid-stream proxies (and every ISP will hurt from it). But don't argue that it doesn't matter from a security perspective.


When done correctly, it should at least avoid this sort of cookie stealing scenario, though - HTTPS-only cookies won't leak into HTTP requests for non-active content, though malicious Set-Cookie: injection into the HTTP responses can obviously log the user out, etc. and may even reveal unwanted info if the HTTPS server doesn't handle the situation gracefully.
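Concretely, "HTTPS-only" here means the Secure attribute on the session cookie. A sketch of the response header, with made-up cookie names:

```shell
# Hypothetical Set-Cookie header for a session issued over HTTPS.
#   Secure   -> the browser sends the cookie on HTTPS requests only,
#               so it never rides along with plain-HTTP image fetches
#   HttpOnly -> the cookie is invisible to JavaScript, limiting XSS theft
#
#   Set-Cookie: session=abc123; Secure; HttpOnly
```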



That will still load images via https, not http. They'll still be cached client-side, of course, but they can't be cached by proxies, and you'll need an SSL certificate for your static content server.


America was a better place when people could keep their doors unlocked, and when someone's first response to a break-in was to blame the criminal.

The analogy is not complete, because in our situation there's a third party involved besides the victim and the criminal: the website. What if your bank leaves the vault unlocked so anyone can take your money? Isn't the bank at least partly to blame?


In parent's analogy, the third party is the door manufacturer.


Security through obscurity. Information is dangerous. Two sentiments that you're espousing that I consider bogus.


When you log into most banks and websites that perform financial transactions, you are routed to an SSL logon; it's the default choice, and in fact most of the transactions are done under SSL. Why should social networks be different?!

By now it is clear that unauthorized access to social networks can cause much distress, and even worse, to a great many people who use them.

Banks minimize their liability when they use SSL; Facebook should do this too. At this point it should be clear that the effect on a person's social life can be severe: career-destroying, financially damaging, what have you. We are witnessing stories along these lines at increasing rates.

The release of this extension is a blessing in my view. It forces the issue that companies like Facebook or Twitter would prefer to ignore, or cover in obscure terminology, and it simply demonstrates how trivial the attack is.

When Ingersoll Rand released the Kryptonite lock, they named it after the mythical element that brings Superman to his knees. Too bad the lock was revealed to have a design flaw that allowed opening it with a BIC pen. Was it shameful to expose that defective design?

Facebook etc... talk about privacy all the time. This forces them to walk the walk, not just talk.


I will blame lock manufacturers if any damn key on the planet can open it up.


As said before, these tools [edit: the stupidly easy point & click ones, btw] have already been available for about 3 years. (e.g. http://www.google.com/search?sourceid=chrome&ie=UTF-8...)

Butler isn't doing anything earth-shattering, he is just reminding everyone AGAIN that the current system is messed up.

There will always be this debate about disclosure, but you can't ignore that it works. Sure, innocents suffer (and they would[they are!] anyway), but at least it's one more reason why websites should change to https.


The problem with vulnerabilities like this is that they're too easy for people to rationalize as "hard" and it's too easy to pretend that they don't happen. People seem to think that as long as the problem can remain invisible (to them), nothing bad is happening.

What actually happened back in the day before people started forcing the issue with full disclosure was that the bad guys operated with impunity because the good guys couldn't work together because people got upset when folks let the "secret" vulnerability knowledge out.

I don't want to go back to those days. Things have improved so much since then.


Please reread the name of this site. I'm surprised that so many members of a site named "Hacker News" agree with you that what is clearly a very clever hack is inherently a bad thing.


This "clever hack" is costing a lot of people a lot of money today.

Concrete example: are you a location based startup? Well, you might need to shell out $10,000 for a Google Maps API Premier key in order to get HTTPS.

"Access to the API via a secure HTTPS connection" http://code.google.com/apis/maps/documentation/premier/guide...

"Google Maps API Premier is extremely cost-effective, starting at just $10,000 per year." http://www.google.com/enterprise/earthmaps/maps_features.htm...

Multiply that by thousands and you'll begin to have some idea of the discussions going on at every web based company with a clue today.

For those who make their living in computer security, like Mr. Butler, of course it's a good day (and month, and year). Pretty good business when you can start fires and then get paid well to put them out. Serves them right, of course, because they shouldn't have built that house out of wood in the first place.

While we're on the topic, I don't understand how a lot of people fail to realize that spending on computer security is a lot like spending on national security -- you can always spend more money on it, thereby taking away resources from other priorities.

http://www.goodreads.com/author/quotes/23920.Dwight_D_Eisenh...

"Every gun that is made, every warship launched, every rocket fired signifies in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed. This world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children. This is not a way of life at all in any true sense. Under the clouds of war, it is humanity hanging on a cross of iron."

-- Eisenhower


Not necessarily a clever hack (nothing new here), just a user-friendly interface.


> How many new servers are going to be needed now that https is used for everything and requests can't be cached?

Wrong. You don't need to use https for everything -- you can specify a domain and a path in the cookie. For things like images, videos and css, you still don't need SSL.
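As a sketch of what that scoping looks like, using Python's stdlib cookie API (the cookie name, value, domain, and path here are all illustrative):

```python
# Sketch: scope a session cookie so it is only sent for the part of the
# site that actually needs authentication, and only over HTTPS. Static
# assets on other hosts/paths never see it.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["domain"] = "secure.example.com"  # not the asset host
cookie["session_id"]["path"] = "/account"
cookie["session_id"]["secure"] = True  # browser withholds it on plain HTTP

# The resulting Set-Cookie header value:
header = cookie["session_id"].OutputString()
```

With the Secure flag set, Firesheep-style sniffing on the plain-HTTP image/CSS requests never sees the session token.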


Many browsers give warnings when mixing secure and insecure content. Know of a good cross-browser example that mixes http and https requests?


That's a good point -- I hadn't thought about this.


It is nice to see a common security issue taken seriously, but for me the even worse gigantic hole is that most people use only one password for their email account and all other accounts. (writing on phone, sorry if unclear)


This is kind of a big deal. Not a whole lot of people are aware of this vulnerability and among those who are it's likely only a small subset that knew how to exploit it until now. I suspect all of the coffee shops in the college town where I live will have people using this starting tomorrow.

I've personally been working from cafes and tunneling everything through SSH for years, but in my experience almost no one else does this.


Exactly. That's why the net effect of this is going to be exactly what the author wants. All major potential targets will update this really fast.

I can't think of a more effective way for him to convince them all to update now.


I've personally been working from cafes and tunneling everything through SSH for years

To where? I suspect it's to a server, VPS, or similar, and the connection is unencrypted from there to its endpoint. This being the case, could someone with a server on the same subnet be running a browser remotely (or even just tcpdump) and doing a similar thing with your logins?

(This is just some thinking out loud and I may be totally wrong - correct me ;-))


Virtually no modern wired networks use hubs anymore, they're for the most part switched. Unlike wireless networks where packets are broadcast freely in to the air, the switch checks the destination address and sends the packets only to the endpoint. There are some attacks like arp-spoofing and flooding which can defeat this, but they don't work well against modern enterprise-grade switches like you would find in a data center.


Have a bazillion karma points. I didn't realize that switching resolved that whole problem. This is why I continue to bring up stupid hypothetical situations on HN from time to time ;-)


Switching doesn't resolve the problem completely. There are a range of complicated attacks that could be done, but can be detected in various ways in a well run NOC.


But we're talking a lot more complicated and deliberate than running tcpdump or this Firefox plugin, right?


I guess if you really wanted to you could run a GUI tool like Cain (http://oxid.it/), but most people doing this type of thing would use something like Scapy or at worst, Yersinia.

So I'd agree: more complex, definitely; significantly so, perhaps not (it depends on the type of attack and tool). As for deliberateness, I'd say it's about the same as the Firefox plugin.

If you do run tcpdump you pick up broadcasts and such; one of our VPS instances actually sees a load of DNS traffic for our subnet, which we think comes from the other VPS instances.


It depends on how secure the remote network is. If it's just another coffee shop, you're screwed. If it's your own Linode in one of those well managed datacenters, it would be pretty difficult for anyone to snoop that traffic.


If you control the remote network, it's a lot safer than having all your traffic unencrypted on the Starbucks Wifi.


This is one of many reasons Loopt has used SSL for all[1] traffic from the very beginning. At least WiFi has fairly limited range. Cell networks[2] (and satellite internet[3]) can be sniffed miles away.

In addition to making session hijacking harder, using SSL keeps crappy proxies from caching private data. Remember when some AT&T users were getting logged in as other users on Facebook's mobile site? The cause was a mis-configured caching proxy.

Raising awareness of issues like this gets them fixed. Until a service's users demand SSL, it won't be offered. Unless the service is Loopt :) It's not a noticeable computational burden, but it does increase latency and cost money (for certs).

  1. Not images
  2. Older GSM crypto can be hacked in real time with rainbow tables now
  3. Usually not encrypted at all


Indeed, Loopt appears to be one of the few high-profile sites to have done this right. SSL for everything, and cookies that are relevant to login sessions are marked secure. This is what we need everywhere!


I'm proud of http://ourdoings.com/ having done this since 2004.


There are antennas[1] that let you sniff wifi from ~4 miles away. Some routers can be configured to drop clients more than N meters away, though.

[1] http://www.radiolabs.com/products/antennas/2.4gig/long-range...


Nice. A solid demonstration to show next time your webmaster doesn't want to set up SSL everywhere.

That said, the current cartel-like setup of certificate authorities (protection money and everything!) makes SSL annoying and expensive if you want the browser to not have a fit. Especially for small-scale projects. But there's really no excuse for larger sites.


HTTPS also needs distinct IP addresses for distinct hostnames: the Host: header only arrives inside the already-encrypted channel, after the server has had to choose which certificate to present. No more having multiple websites on one IP address.


There's the TLS SNI extension, which adds vhosts.

Works, of course, everywhere except IE6/7 on Windows XP.


That's an incredibly drastic change and I seriously doubt it can even be done with IP4.


SSL is bad for the environment because it requires far more server side hardware... Well, I'm only partially serious about the environment thing, the question is, how can internet companies make it commercially viable to use SSL for everything? The added hardware and power costs make each user way more expensive, possibly to the point where they may not actually be worth it.

An alternative is to bind the user's session to their IP address, but that isn't foolproof either, because of NAT, DHCP and certain big ISPs that change IPs on the fly.

What cost-effective solution would you suggest?


When Gmail switched on SSL for everyone earlier this year, they did it with "no additional machines" (http://unblog.pidster.com/imperialviolet-overclocking-ssl).

Regarding IPs, there's a bigger issue here. People are used to being able to shut their laptop at home and open it back up at work without having to re-authenticate all their browser tabs. If you filter by IP this breaks. SSL requires no changes to user behavior.


What about pairing the auth token with a browser fingerprint? [1]

It would make it harder to troll an open network for random victims, and wouldn't annoy the user.

[1] Perhaps a hash based on something like this https://panopticlick.eff.org/
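A minimal sketch of that pairing idea (hypothetical helper; it assumes the server stores the hash next to the auth token at login and compares on each request):

```python
# Sketch: derive a cheap fingerprint from request headers and pair it
# with the auth token. A stolen cookie replayed from a different
# browser/OS combination then fails the comparison.
# The choice of headers here is illustrative, not from Panopticlick.
import hashlib

def fingerprint(headers: dict) -> str:
    parts = [headers.get(k, "") for k in
             ("User-Agent", "Accept-Language", "Accept-Encoding")]
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()

victim = fingerprint({"User-Agent": "Firefox/3.6", "Accept-Language": "en-US"})
attacker = fingerprint({"User-Agent": "OtherBrowser", "Accept-Language": "en-GB"})
```

Note that these headers travel in the clear on an open network too, so a sniffer who captures them along with the cookie can replay both; this raises the bar without closing the hole.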


yep then you extend it to something as simple as having a second ID in localStorage or a flash cookie

then the next version of this plugin just spoofs all of those parameters as well

the only solution[1] is SSL and client certificates

[1] in the case of being on the same network


People bring up the Google stat, but you have to remember they have incredible engineering resources, so they can probably optimize many features like this without adding machines. That doesn't mean every dude with a LAMP stack out there can turn on SSL and expect the same performance, just that it's possible with mongo manpower and talent to make it work. (Google doesn't even release the details of their web stack, so comparing their stat is apples and oranges.)


How many web servers do you know of that are CPU-bound (and not through massive code stupidity), rather than bound on I/O in some manner: waiting on SQL, disk access, bandwidth? Encryption can run while other threads are waiting for a response.

In general, it's a negligible cost; it adds a very minor delay compared to latency / transfer time, and uses CPU otherwise highly unlikely to be pegged. If you're pushing threading limits / CPU usage limits, you're probably inches from needing new hardware anyway, and SSL should be considered part of the cost of running a web server.


It's not completely negligible. Even if the CPU usage is, SSL negotiation adds latency that may be undesirable for some apps. The testing burden is a lot higher, because one wrong http:// link in CSS, HTML, or AJAX will trigger big scary warnings. And there's the IP address problem: you can't use name-based vhosts the way most people with Apache do.


interesting info in that url, thanks


That may have been true 10 years ago, but the overhead of SSL for CPU is almost nothing, and only a few ms of latency. Web applications are mostly IO and memory bound, anyways. We should be using SSL all the time, by default. There's no reason not to, at this point, aside from certificate authorities.


Require SSL on any request whose response sends a Set-Cookie HTTP header. Leave it out for the non-sensitive requests/responses.


You'd still be able to get the cookie when the client sends it back to the server on subsequent, non-SSL requests.

It's gotta be SSL all the time.


You can get SSL certificates for free for one domain, and they work with all browsers (except Opera, IIRC). Also, you can use Perspectives for Firefox, which I think is much better than the current system.


Off topic, but re: Perspectives. It allows your browser to compare notes with other nodes on the Internet ('network notaries') to ensure that everyone is seeing the same cert for a given website.

Looks like a great idea, but how do they prevent the man-in-the-middle from impersonating a network notary?


Notaries sign their responses, if I remember correctly.


Perspectives doesn't work in the latest version of Firefox. Its home page says a new version is coming, though:

http://www.cs.cmu.edu/~perspectives/firefox.html


I've had a bit of a look on Google, but I'm not 100% sure which provider you mean? Where can you get free SSL certificates that don't upset browsers?


Ah, I can't remember the name now... Rapidssl? That's probably it. Check historio.us, the ssl cert there is a free one (which is, sadly, why subdomains don't validate).

EDIT: I searched and it's actually http://cert.startcom.org/.


Check historio.us, the ssl cert there is a free one (which is, sadly, why subdomains don't validate).

AFAIK this is common to all certs (free or otherwise). You need a separate one for each subdomain (including www).


No, there are also wildcard certificates that match all subdomains, but are rather more expensive.


Wildcard certificates are available for USD $49.90 from StartSSL (http://www.startssl.com/?app=40), which is rather more expensive than free, but shouldn’t be a hardship.


The only downside to wildcard certs through StartSSL is that getting one requires high-resolution proof of personal identity, to be kept on file outside local jurisdiction (the company's based in Israel) until the cert's final renewal or revocation, plus seven years.

I admire their model of only charging for operations which require human intervention, like identity validation, but handing over that degree of documentation for that amount of time requires a lot of trust, not just of the company as it currently exists, but as it will exist in the far future.

If there was a way to validate organizations which wasn't layered on top of an earlier validation of an individual, or if their decentralized web-of-trust was usable for class 2/wildcard certs, I'd be a big fan.

As it is, there's no reason not to use Start for class 1, single-domain certs, for which the validation is automated and reasonable.


Wildcard certs don't match the underlying domain, though. See, for example, dropbox.com instead of www.dropbox.com; they've got a wildcard cert and it's not valid for dropbox.com.


Didn't know that, thanks.


Namecheap provides free ssl certificates for each domain you get through them.


The monkeysphere is also a good alternative if you use debian and gpg.

http://web.monkeysphere.info/


Right, and even the paid ones can be had for well under $30/yr nowadays, which is pretty trivial.


There are levels of pay for functionality, like subdomains, and being able to issue your own certificates.

For instance, from Verisign: a 1 year Microsoft code signing certificate starts at $499 [1]. A top-of-the-line (from their main pages) web certificate for a single server for one year: $1499 [2]

[1]: https://securitycenter.verisign.com/celp/enroll/selectOption... [2]: https://ssl-certificate-center.verisign.com/process/retail/p...

edit: it would figure the links don't work. Just go to www.verisign.com and those are a couple clicks from the front page.


Startcom offers free no-BS certificate signing, and their CA is on most modern browsers, I believe. I think they could be suitable for small scale projects.


"Double-click on someone, and you're instantly logged in as them."

Ouch. I think it's time to set up that VPN I've been putting off...


Am I the only one who thinks this is spoon feeding the script kiddies to cause mayhem?


Even the dumbest script kiddies have been doing this for years anyway. There are plenty of existing tools. This one just lowers the bar so your mum can perform the attack too.

It almost makes me angry that websites like Facebook and Twitter don't force all traffic over https. They've got the money and the expertise. They just don't care if your account gets sniffed and taken over at a web cafe.


Exactly. I'm not a blackhat and my only "hacking" consists of forcing myself into my own systems which I've stupidly locked myself out of, yet I've managed to do much that this plugin can do.

The most un-ethical thing I have done was to take one of the OLPC XO laptops and convert it into a MITM machine, rebroadcasting the SSID it connects to while routing and logging all traffic anyone who connects to it generates. It took a weekend to setup using pre-existing tools and scripts and can be deployed anywhere I want within 2 minutes and run for up to 6 hours hidden in the bottom of my backpack. It was a fun experiment, and surely made me more aware of just how vulnerable I was outside of my home network.

Another point of interest, this weekend I hacked on a Minecraft bot for the Alpha version. In order to understand and dissect the connection protocol I needed to recreate, I used wireshark to dump and parse how the client authenticates and connects to the server. Even that transmits your username and password in plaintext.


re: the OLPC, what were you running on it? I have one in my closet and I've been meaning to put something that isn't the stock software on there for a long time.


Well, hopefully it will then convince companies to properly secure their websites and actually protect users.


Agreed, but I still think giving someone else full control is a bit too much. It's not the user's fault (most don't even know this is happening) and they're likely to be the victims here.


This vulnerability (it hurts to even call it such at this point) has been around for years, and the attack has always been easy for a determined attacker to carry out.

How else are we going to convince people to secure their sites and protect their users? People have been presenting on this issue for years (Ferret & Hamster, Blackhat 2007) and companies haven't responded/cared. It's possible to solve this problem (Gmail is all HTTPS, and done correctly, Amazon has a tiered authentication system that properly uses SSL for important things, Wordpress does SSL right for accessing their admin interface) - companies need to step up and address the issue.


Definitely, I guess as a uni student, I'm worried about the majority of non-technical students who are going to have their sessions hacked and have no clue what hit them and cannot setup proxies/tunnels.

I'm not saying this isn't the site's fault. They definitely need a wake-up call.


This was already happening on a massive scale before this new app was released... I honestly don't think it will increase the number of attacks by all that much. It's brilliant as a tool for spreading the word though.


It was happening on a massive scale, but now a huge number of really lazy people who didn't bother before are doing it too. It had 3,000 downloads within 2 hours of release. The thing is, most universities have protection set up. It seems Cisco NAC is actually good for something; I never thought I'd say that. The extension certainly doesn't work on my campus.


The problem goes beyond client-website interaction. Improper wifi configuration also plays a big part in what Firesheep can achieve. ;)


It should be noted that Wordpress implements SSL for wordpress.com correctly, but any self-hosted blogs from wordpress.org need to be individually configured.


This is essentially the same argument that comes up with full disclosure. Yes, it's not pretty. Yes, it causes a lot of collateral damage. But it also makes the big players patch things up faster, while letting the knowledge out to the public, which of course consists of not only the script kiddies, but also the unsuspecting legitimate users.


The script kiddies already have their scripts and already do this. Firesheep will hopefully allow users to see the problem in a way they can clearly understand.


Thanks for posting this. It convinced me to upgrade SSL support from "something that would be nice to implement if I was bored someday" (BCC is not exactly security critical -- except, on reflection, the admin pages) to "drop everything and get it done."


You're saying that the BCC server doesn't have even a self-signed SSL cert installed? Or something else?


I had a SSL certificate for a while, but actually using it throughout the site without showing users Big Scary Error Messages is not quite trivial. The activation energy for digging through several hours of edge cases was lacking... until today. ("Whoops, while you don't know you're doing it, you pull an unnecessary CSS file into the cached CSS for the registration page which references a background image on an absolute http:// URL. Your registration page now throws an error on IE. You lose." "You have approximately 150 images on the site linked as handcoded img tags rather than through Rails' image_tag helper, because when you were a Rails newbie you did not know that existed. You now get to rewrite all of them so that they can use SSL asset caching magic." etc, etc)


I've seen some sites which figure out a way to force the user in and out of SSL for certain URLs. You might be able to implement a fix which forces SSL for the admin section and non-SSL for everything else.


That doesn't help, because my all-powerful admin session is as secure as the least secure page I access (or can be made to access) while on a compromised network.


Doh. Of course. It's all on the same domain. Do you think that, if designing a new application, it would make sense to make a separate admin sub-domain (assuming no wildcard cookies)?

Does the solution entail purchasing legit ssl certs for your static content domains?


Er, can’t you just specify that the session cookie is only sent over HTTPS?


I thought the title of this submission was slightly misleading. This is not a security vulnerability from within Firefox, it's a Firefox plugin to reveal security vulnerabilities in a wide range of websites.


To be fair that is exactly what I got out of the title and not that it was using Firefox vulnerabilities.


Sorry if it was misleading somehow, this is definitely not a vulnerability in Firefox. It's a Firefox extension that makes it easy to execute HTTP session hijacking attacks.


For what it's worth, I got that it was an extension or other Firefox tool. Your interpretation didn't occur to me.


Sites that are tracked:

amazon basecamp bitly cisco cnet dropbox enom evernote facebook flickr foursquare github google gowalla hackernews harvest live nytimes pivotal sandiego_toorcon slicemanager tumblr twitter wordpress yahoo yelp


Thanks to the EFF and the Tor Project, we need not worry as much: their HTTPS Everywhere project is a plugin for Firefox: http://www.eff.org/https-everywhere/

Any questions:

http://www.eff.org/https-everywhere/faq


Logging into insecure sites over Tor is probably not a good idea. It's always good to assume that people running exit nodes are not the most trustworthy.

HTTPS Everywhere is good but only works on known sites (and known domains for those sites).


I highly dislike the title of it. It is not HTTPS everywhere, it is "HTTPS on sites we know it is possible on".


HTTPS Everywhere only works on a select few sites. You're up a creek for anything it doesn't cover.

And Tor, there's lots of cases where operators did bad things. Don't trust it for sensitive information. http://blog.ironkey.com/?p=201


I realize I should have put more emphasis on "as much", since, yes, this only works on a few popular websites as defined by the plugin.

Thanks to reading Techcrunch this morning, I read about this plugin which allows you to manually define which sites you want to force an HTTPS connection on:

Force-TLS https://addons.mozilla.org/en-US/firefox/addon/12714/

Mind you for any of these extensions to work the website you're visiting needs to be already accessible via ssl. If the site does not have encryption, these plugins can't force the sites to automagically start using the encryption it never had.


Be careful when trying this out. You could be breaking a law or two...


Also don't web-mail your friends to tell them about the new accounts you just broke into :) At least not on that open wireless connection.


Good thing GMail has SSL enabled by default ;)


Yup, they're one of our examples of a "good" setup. However, Google leaks iGoogle and some other things (Latitude, address book, reader, ...)


However, they don't share the same session cookie across different services as far as I know (and those cookies are negotiated over a TLS-protected link). Likewise, they have made several other services TLS-only (e.g. Calendar, Docs).


The explanation I've always heard for not using HTTPS 100% of the time is that it puts a substantial load on the server, and for many sites it's overkill. Setting aside the subjective topic of "overkill"... how much more CPU-intensive is it to serve pages over HTTPS compared to HTTP?


There was a great write-up of a talk on SSL/TLS performance at Google linked here a few months back (http://unblog.pidster.com/imperialviolet-overclocking-ssl, HN discussion at http://news.ycombinator.com/item?id=1485425)

Quoting from that, "On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead."


OK, this is most likely too late to contribute to the discussion, but there was a talk by Michael Klishin about a year ago; here are his slides: http://bit.ly/90qORL (ssl, performance, certificates, lots of stuff)


The CPU intensive part of a HTTPS connection is the initial key negotiation/session setup (using asymmetric encryption methods). The symmetric encryption of the actual traffic is pretty trivial.

You can amortise the session setup cost by ensuring the HTTPS session caching is enabled on your server (in Apache, the directive is SSLSessionCache). This will let subsequent connections from the same client re-use the same SSL session.
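For example, in Apache with mod_ssl (the cache path and size below are illustrative, not a recommendation):

```apache
# Cache negotiated SSL sessions in shared memory so that repeat
# connections from the same client can skip the expensive asymmetric
# handshake and resume with the cheaper symmetric ciphers.
SSLSessionCache        shmcb:/var/run/ssl_scache(512000)
SSLSessionCacheTimeout 300
```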


ooh, SSL DDoS


That possibility is why it would be nice if client puzzles were part of the SSL protocol.


The CPU load can be mitigated with frontend HTTPS accelerators or proxies (think nginx as a load balancer doing the HTTPS). The real problem is the first connection. Browsers don't try https on their own; if nothing answers on http they'll just give an error. If the first connection is over http, a man-in-the-middle attack can succeed.


> If the first connection is over http then a man in the middle attack can succeed.

There are ways to work around this, if the non-https site immediately redirects to the https version and a "secure cookie" (https-only) is exchanged afterwards.
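A minimal WSGI-style sketch of that pattern (a hypothetical app; the cookie name and value are placeholders):

```python
# Sketch: bounce any plain-HTTP request to HTTPS, and mark the session
# cookie Secure so the browser never sends it back in the clear.
def app(environ, start_response):
    if environ.get("wsgi.url_scheme") != "https":
        location = ("https://" + environ["HTTP_HOST"]
                    + environ.get("PATH_INFO", "/"))
        start_response("301 Moved Permanently", [("Location", location)])
        return [b""]
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        # Secure: sent over HTTPS only; HttpOnly: hidden from page scripts.
        ("Set-Cookie", "session_id=abc123; Secure; HttpOnly"),
    ])
    return [b"hello over TLS"]
```

The initial plain-http hop is still exposed to an active attacker, which is the caveat discussed in this subthread.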


Someone correct me if I'm wrong, but that's the exact vector for a man-in-the-middle attack. The first request over HTTP gets hijacked and redirected to a "secure" server, then you (the user) see the lock and go to town, secure in the knowledge that your communications with this server are protected because they're encrypted.


Isn't that exactly why HTTPS sites are supposed to have expensive certificates issued by big companies? Otherwise the browser will display a big red warning message. If you ignore that warning, you deserve to be hacked.

If the request gets redirected to a HTTPS proxy site that the attacker has set up, that's a different story. But again, you should be checking what's in your address bar. No security system can rescue you if you can't tell the difference between "mail.google.com" and "mail.google.haxxor.com". But for those of us who actually read what's in the address bar, HTTPS is pretty good security.


Isn't that exactly why HTTPS sites are supposed to have expensive certificates issued by big companies?

There is virtually no cost associated with issuing a certificate. The fact that they are nevertheless prohibitively expensive for most private domains is partly responsible for the failure to adopt SSL on a broad scale.

Otherwise the browser will display a big red warning message.

While it is true that this warning is intended to protect users against man-in-the-middle attacks and other methods that redirect traffic away from the original source, it also prevents people with absolutely valid (but free) certificates from offering perfectly good encryption on their sites. This warning is misguided and does more to prevent the secure use of the web than anything else.

The remedies would be simple, but I guess commercial reasons prevent them from being adopted at the expense of everyday users.


> it also prevents people with absolutely valid (but free) certificates from offering perfectly good encryption on their sites. This warning is misguided and does more to prevent the secure use of the web than anything else.

Accepting self-signed certificates would not be a good solution. Anybody can sign a certificate with "www.facebook.com" in the Common Name field. Your communication would be encrypted, but you'd be communicating with the wrong guy. SSL was designed to provide both encryption and identification. Self-signed certificates provide only encryption.

Besides, certificates are already quite cheap if you know where to buy them. RapidSSL costs as low as $12/yr, which is just slightly more than the cost of a domain name, and it's recognized by all browsers including IE6. I wouldn't consider it "prohibitively expensive" if it costs less than two lunches.

There are, of course, many other ways in which the existing infrastructure is inadequate. Take a look at the list of CAs that your browser automatically trusts. It's a mess. But it's not easy to implement an alternative that can provide both encryption and identification with the degree of reliability that the current infrastructure has. Some kind of social trust mechanism might work, but we're a long way from standardizing on anything of the sort.


Does this kind of wi-fi sniffing work with WEP or WPA encrypted networks? What about 802.1x?


This specific attack should work just fine with WEP but will not work against WPA2 (disclaimer: I haven't tried the combinations personally). With WEP, every single client uses the same encryption key, so all the packets are visible to everyone. With WPA2-PSK, each client has its own key, which is derived from the pre-shared key. Because of this, your adapter won't decode someone else's packets, though it's technically feasible to do so with other types of attacks outside the scope of this Firefox plugin. I'm not that familiar with the encryption within WPA-Enterprise, but I don't believe you can derive other people's keys and sniff their data when using it.
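The per-client key derivation can be sketched very loosely like this. The PMK step is the real 802.11i PBKDF2 derivation; the pairwise step below is a stand-in for the spec's PRF over nonces and MAC addresses, shown only to illustrate why two clients on the same passphrase end up with different keys:

```python
# Loose sketch: with WPA2-PSK, the passphrase yields one shared PMK, but
# each client's pairwise key also mixes in per-session nonces and MAC
# addresses, so clients cannot directly decode each other's traffic.
import hashlib
import hmac

def pmk(passphrase: str, ssid: str) -> bytes:
    # 802.11i: PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iters, 32 bytes)
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)

def pairwise_key(pmk_bytes: bytes, session_data: bytes) -> bytes:
    # Stand-in for the 802.11i PRF, NOT the real construction.
    return hmac.new(pmk_bytes, session_data, hashlib.sha1).digest()

shared = pmk("cafe-password", "CoffeeShop")
key_a = pairwise_key(shared, b"nonceA|macA|ap")
key_b = pairwise_key(shared, b"nonceB|macB|ap")
```

(An attacker who knows the passphrase and captures a client's 4-way handshake can still derive that client's key, which is one of the "other types of attacks" mentioned above.)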


Yes, assuming you know the password to connect to the network. Otherwise no.


This is incorrect. Traffic on an access point using WPA2 + AES is not sniffable without significant cryptanalysis or use of exploits.


Mind providing more information on this? eg, what about WPA2+TKIP?

I'm trying to wrap my head around how WPA2 could still provide protection with a shared key... I'm sure I'm not the only geek who feels like their knowledge of WiFi protocols goes stale every six months or so.


TKIP is much more vulnerable. In theory it requires "work" to crack WPA2+TKIP but it's comparatively trivial with modern hardware. With WPA2+AES you basically open a public/private key encrypted connection to the router (similar to SSL) and exchange the PSK in order to authorize the client. This traffic isn't any more sniffable, in principle, than https traffic. However, depending on configuration it can be vulnerable to man in the middle attacks and such-like.


Actually this is false. I have read the 802.11i spec, and I think the key exchange is done the same way regardless of whether TKIP or CCMP is used.


It's fairly easy to do if you are already logged in on the network. For example, on the iPhone you can use something like pirni to spoof the MAC address of the router. That way you'll receive all data on the network and can pass it on to the router yourself, dumping all cookies that go by in the meantime. I think the tool even lets you list all Twitter and Google cookies and set them in Safari.


Pirni uses a vulnerability of a common WPA2 configuration to execute a MITM attack using ARP spoofing. There are ways to prevent this exploit as well.


Yep, if you are wondering what Hole196 was about, it was about this kind of attack.


Are you sure? It's not working on my WPA2 network.


no, I'm not familiar with WPA2.


Is there another application besides the FF extension to dump the packets and process them? How does this work?

EDIT: Sorry, I was asking specifically how this FF extension works.


Wireshark is a very popular open-source tool. http://www.wireshark.org/



(atomical: You seem not to have realized what this answer was saying. The extension uses libpcap, as evidenced by the linked source code.)


Ah right, sorry.


Makes a strong case for everyone to start tunneling their traffic back to a trusted network.

I've been trying out sshuttle <http://github.com/apenwarr/sshuttle>. It only tunnels TCP traffic, so you still have DNS and UDP traffic on the local network.


Always use encryption while on open wifi. I use OpenVPN ( http://openvpn.net/ ) for this.


I think this should be a call to arms to network, web and system admins everywhere. This is a problem that everyone knows about but nobody wants to do anything about since it requires additional setup. Usually the barrier is a technical issue that the end user can't figure out. However since submitting forms via SSL is something the developer can do without impacting the end user at all, this is a simple fix for just about any website. You need a static IP and an SSL certificate, and they are both cheap.

Running out of IPv4 space is an issue in this regard, but hopefully, with more people wanting SSL, it will push providers to IPv6 quicker. Nicely done, EricButler!


Title is a bit misleading. This is a front-end to libpcap, and can be used for hijacking any token-based-auth, not just HTTP.

It just happens that they released w/ support for social networks as a demonstration.


It seems fine to just enable SSL everywhere. But indulge me for a second in thinking of alternate solutions.

Instead of sending a cookie, send a piece of javascript code (as part of the SSL-cloaked login handshake) that generates a new cookie for each request, and consider each new cookie in this sequence a "one time use" token. You can turn off SSL for subsequent requests and just use one of these new cookies each time to verify identity because an attacker won't have your cookie generator.

This javascript is really just an encryption key and algorithm, and if you implement it correctly, it should take quite some time for snoopers to reverse engineer the encryption key based on a sequence of one-time-use cookies.

Logistically, I suppose you would run into some trouble setting a new cookie for each request depending on how the page is loaded. For instance, if the user pastes a url into a new tab manually, then this system wouldn't have a chance to set the new cookie first.

However, I think you could architect a system that solves this. For instance, put the javascript token generator source in local storage. If a new page loads with an invalid key, that new page can just get the cookie generator code out of local storage and manually refresh the page's content by making a request with a valid token. This should be quick enough for most users not to notice, in the rare case that they circumvent the site's usual navigation.

A downside is obviously that the content itself is still not safe, but at least the account would be. Any thoughts?
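The "cookie generator" idea above can be approximated with an HMAC over a per-session secret and a request counter. This is only an illustrative sketch of the proposal, not anything a real site uses; all names and values are invented:

```python
import hashlib
import hmac

# Per-session secret, delivered once over the SSL login handshake
# and never sent over the wire again (invented value).
SECRET = b"delivered-once-over-ssl"

def make_token(counter: int) -> str:
    """Token for the Nth request. A sniffer who captures tokens
    0..N still can't forge token N+1 without knowing SECRET."""
    return hmac.new(SECRET, str(counter).encode(), hashlib.sha256).hexdigest()

def verify(counter: int, token: str) -> bool:
    """Server side: accept each counter value exactly once."""
    return hmac.compare_digest(make_token(counter), token)

assert verify(0, make_token(0))      # fresh token accepted
assert not verify(1, make_token(0))  # replayed token rejected
```

Note that even with such a scheme, an active attacker who can inject script into any unencrypted page can simply steal SECRET from the client.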


Haven't thought too hard about passive attacks, but you're not secure against an active MITM like airpwn (http://airpwn.sourceforge.net/Airpwn.html), because the MITM can inject JS into the unencrypted content that steals your JS security scheme's secrets. Effectively, an active MITM allows XSS on plain ol' HTTP sites.


I think all cookies are sent with every request, so cookies can't be used to (securely) pass data to the next page. It'd work just fine on the login page, but every page after that would have to renegotiate to generate a new cookie, meaning you basically just created SSL everywhere.

Local storage, however, could probably be used to do just such a thing, as it exists only locally. In which case you could just have the login page generate an RSA key pair, receive the server's public key in the response, and use that for any kind of secure communication on each page load. The server would have to remember sessions => encryption keys, but that's not too hard.


Wow, good work. And pretty scary- imagine what one could do with this on any college campus.


A guy I know used to do this in airports (just for fun, nothing malicious) by grabbing webmail logins: run Wireshark with some simple filters and watch the cookies roll in.


Wow, I've been wanting to do this for a while to raise awareness. Great implementation by plugging it into Firefox - well done.


You can slightly reduce the dangers stated here by logging out immediately after you are done doing whatever it is you are doing. This will make the captured session useless.

The best solution is of course to get a VPN acct and use it when you are at free/open wifi spots. I use WiTopia (www.witopia.net)


Or just get a Mac mini server that will run a VPN 24/7.


Has anyone checked the source code to check that the passwords aren't sent to the author's website? :)


It's 100% open source! Please feel free to review it.

http://github.com/codebutler/firesheep

It doesn't currently do anything with passwords, it's only pulling out cookies from HTTP Response headers. But it would be trivial to also get passwords in non-HTTPS requests for logins with the same method.
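For the curious, pulling a session cookie out of captured plaintext traffic is about this simple. This is a hedged sketch with an invented response; the real extension does its capturing in native code on top of libpcap with per-site handler scripts:

```python
# Invented example of a captured plaintext HTTP response.
raw_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Set-Cookie: session_id=abc123; Path=/\r\n"
    "Set-Cookie: auth_token=deadbeef; Path=/; HttpOnly\r\n"
    "\r\n"
    "<html>...</html>"
)

def extract_cookies(response: str) -> dict:
    """Collect name=value pairs from every Set-Cookie header."""
    cookies = {}
    headers, _, _body = response.partition("\r\n\r\n")
    for line in headers.split("\r\n"):
        if line.lower().startswith("set-cookie:"):
            pair = line.split(":", 1)[1].split(";", 1)[0].strip()
            name, _, value = pair.partition("=")
            cookies[name] = value
    return cookies

print(extract_cookies(raw_response))
# → {'session_id': 'abc123', 'auth_token': 'deadbeef'}
```

Note that the HttpOnly flag doesn't help here: it only hides the cookie from page JavaScript, not from anyone watching the network.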


Again, not assuming you're evil, but it's possible that the compiled binary (.xpi) was not created from the source posted on the github account :)


Indeed. Sorry if I implied that you were doing evil things.

People should also be aware of the security implications of installing various software on their system. :)


Passwords are not a part of this... It's the session cookie, which is an entirely different matter. It's unique to the login process, so one compromised account isn't able to lead to compromising other websites. It's also time sensitive (generally) and so that hijacked cookie will expire. If he were collecting all this information, he wouldn't be able to do much with it.


Just because the user interface only exposes cookies, doesn't mean that passwords aren't captured and sent somewhere.

It's very possible, given that the extension seemingly captures HTTP requests/responses. If passwords are sent or received in plaintext, then they can be captured.


Not sure why you're being downvoted, seems like a legitimate question to me! I was a little leery of checking it out at first, too, but curiosity got the better of me...

Maybe someone who has developed a FF extension can lay my worries to rest -- could this have, say, a built in key logger which sends that data to the author?

I downloaded the source code from github and glanced through it, enough to comfort me somewhat, but I wasn't super thorough.

All that said, EricButler seems to be an HN member in good standing, and someone like that wouldn't do anything malicious, right? :-)


What can an end user do to minimize this?

This exploit is for insecure wifi networks, so only using encrypted wifi or Ethernet would seem to remove this attack vector. Is there a real risk that someone (besides the government) can see your cookie?


Is there a real risk that someone (besides the government) can see your cookie?

Yes, if you login and your cookie is sniffed and spoofed then basically you just allowed the attacker to login as you at the same time.

Minimizing it is a little bit different: you can use a secure proxy/tunnel, you can limit your unencrypted wireless activity, you can make sure that sites that should be SSL encrypted are (stripping SSL is common when password sniffing) and you can avoid these services while on open wifi networks.


Logging out will cause the captured sessions to be useless.

So remember to logout.

VPN is really the best overall option.


That assumes the session is killed on logout. I know from first-hand experience that at least one version of Merb didn't do that. I hacked a pretty popular geo-social site by grabbing the session cookie and playing around, then logging out. I was still able to check in after I had logged out, and for good measure verified that my session was still valid after changing the password. I assume they were using the default session setup, so I guess it eventually expired. Didn't keep it around to see how long it stayed valid.

On reporting it, the response was essentially, "oh you didn't have to go to that much trouble, you could have just used your user/pass from curl…" Completely oblivious to the fact that their app/site was completely vulnerable to session hijacking.

One of the problems of app frameworks, if you don't know what they're doing (and more importantly, not doing) you can get yourself in a heap of trouble before you even realize there's an issue. But boy, you sure can make it to market fast. shakes head


Most sites don't properly invalidate sessions when you log out, you can't protect yourself as well as you think. See our slide on this topic:

http://codebutler.github.com/firesheep/tc12/#18


Excellent points on the slideshow. The general lack of care on this topic among web companies is worrisome.


Just tried it with iGoogle

Logging out doesn't kill the session.


vpn/ssh tunnel/encrypted wifi


Encrypted WiFi won't stop clients on the network from sniffing your packets.

It will, however, stop unauthorised computers from sniffing any network data.


I would have expected each wireless client, on an encrypted network, to negotiate its own key with the access point -- so you'd only see neighbors' traffic if the access point chose to rebroadcast it to you.

Are you sure that neither WEP nor WPA/WPA2 do it this way?


The encryption is between your client and the AP. Usually everything after that is standard IP.


That's what I thought -- enough to protect against fellow wireless sharers, but not the hosting establishment or path through their ISP to a website.


No, you misunderstood. It's enough to protect you against random people sniffing wireless packets. Not other people that are on your network.


Your terminology "the network" or "your network" is still unclear; encryption to the AP could be unique per wireless network client, or not. If it is unique per client -- and it is my belief that recent standards, like WPA2 at least, provide this -- then casual passive eavesdropping by other wireless clients (as with the FireSheep tool) is thwarted. (And that's what most people are most concerned about.)

Are you suggesting that no generation of WEP or WPA protects against other authorized wireless users of the same AP, because they're "on your network"?

[rewritten completely to seek clarification]


WPA enterprise allows a separate (changing) key for each user, typically what you get from an RSA token. Once it gets to the AP, it's then clear text (assuming HTTP) over the rest of the internet until it hits your (HTTP) service provider.

If you have control over the internet between the AP and your server, then you're safe. If you don't, then how safe you are depends on how much you can trust the owner of each router along the way. In general, you should be okay, except that every now and then you might end up on an untrusted router, and it's then game over.


Sorry, did not follow this part - "Not other people that are on your network.". Care to elaborate?


This looks really cool. I can't wait to try this out. Very nice work, Eric.


Thanks! If you or anyone has any problems, email me (eric@codebutler.com) with the details.


I can't wait for this to become available for linux, good job!


On Mac OS X, it gives an error saying: Run --fix-permissions first.

Run with which command? and how?


I found the binary "firesheep-backend" in:

    ~/Library/Application Support/Firefox/Profiles/<profile>.default/extensions/firesheep@codebutler.com/platform/Darwin_x86-gcc3
I ran both:

    ./firesheep-backend --fix-permissions
and

    sudo ./firesheep-backend --fix-permissions
and it still asks me to run it with "--fix-permissions". I guess it's time to go digging around in the source to try and find out what it wants me to do.

EDIT:

After a bit of digging, I found out that running it with --fix-permissions really just chowns the binary to root then setuid's it. I don't see anything wrong with it on the surface, but I'll keep digging.


FYI if you're still looking into this, this comment helped me http://codebutler.com/firesheep#comment_5843350

I have filevault turned on. Moving the binary out of my home folder (and adding a symbolic link) solved the problem.


There is another firesheep-backend at /firesheep-backend.dSYM/Contents/Resources/DWARF inside the Darwin folder. However, this one won't run using ./

Any ideas?


I believe that it is a file containing debug info, not an actual program, so you can't run it.


I have the same problem, nothing I do works, it just tells me to fix the permissions...


I got it working by running Firefox as root; I tried the --fix-permissions thing until I just gave up.


Just so you know, that is a horrific idea security wise.


Yes, I know, but it's the only way I could test it... and it does work ;)


This just completely freezes my firefox. Can't even get firefox to start at all.. Any ideas?


The main problem will be with SaaS apps that allow custom domains names (i.e. mywebsite.com instead of mywebsite.mysaasprovider.com).

I made an early decision to enable SSL everywhere in Trafficspaces with the obvious downside being that I need to allocate a dedicated IP address each time someone requests a custom domain name.

I used to get worried that perhaps it would have been better to only provide SSL in specific stages (such as sign-in and payment) and only through a generic domain name. Not any more.

Firesheep clearly vindicates that decision.


Wouldn't it be easier to get a wildcard SSL certificate for *.mysaasprovider.com? That way you can serve all subdomains off a single IP address, since the name will always match.


That's what we are currently doing now.

I was referring to cases where the account holder wants to use an custom domain name e.g. ads.mywebsite.com, instead of the generic mywebsite.mysaasprovider.com.

In that case, we'll need to host their certificate within our Pound load balancer and get it to listen on a dedicated IP.


Why don't Facebook and other major sites check the user agent and IP address of client as well, instead of just relying on a cookie? That would solve this problem in 99% of the cases, right?


If you're on the same wireless network as someone, you have the same external IP address.


And of course, if you can see the traffic, you can spoof the same User-Agent as well.


Plus, your source IP can change from request to request when your ISP transparently pushes you through one of many proxy servers. AOL does (or did) this, as do some large European ISPs whose names escape me.


Yet another reason NAT sucks...


I realize that but at least my neighbors won't be able to hijack my session from home. Logging in over a public network always seems risky.


Are your neighbours on your private network? If not, you don't need to worry about them capturing your network data, because they're not on the same network.


It states that it works for "open networks". What does that mean? All networks that you have access to? Including those in cafes where they give you a key to log in? Or just networks that are completely open? And why does it work at all? I thought the wlan access point would encrypt the communication between itself and the computer. It would be interesting which protocols are vulnerable to this and which are not.

I guess the logging of raw wlan packets is a one-liner under linux? Does anybody know it?


So wait... this works regardless of wireless card? I've tried to use BackTrack on my mac before and it failed due to the card not being able to run in passive mode.


Yes, I believe it should work on any wireless card because you're not doing packet injection.


It doesn't work on my late 2009 MBP (sniffs sessions from other browsers on my laptop but not other laptops on our wifi).


are you sure you aren't on a WPA encrypted network? My understanding is that it doesn't work over WPA. WEP apparently does work though.


I'm on an open network (no security) and I too am only seeing traffic from the computer I'm running it on. I have two Macs on the same wifi network, but no luck so far =/


What a shame. There are going to be so many kids whose Facebook accounts get broken into and abused this week as a result of this.


Doesn't work in 3.6.4, even if you override install it or change the minVersion (which is 3.6.10)

Once I upgraded to 3.6.10 worked awesome.


SSL requires a unique IP per hostname, correct? Maybe this will be what actually ends up getting IPv6 going... :)


It used to be so, but newer servers can now serve more than one HTTPS domain using the same IP. For more details, check out http://serverfault.com/questions/109800/multiple-ssl-domains...


Newer servers can serve more than one HTTPS domain using the same IP... to users who are not using IE/Chrome/Safari under Windows XP. If you depend on SNI, you're leaving out something like a third of your user base.


Thanks. There really isn't anything more dangerous than just a bit of knowledge.


Anyone going to get HN on HTTPS? I'm very partial to my karma points and don't want anyone to log in as me!


It's an interesting assortment of sites that are "supported" out of the box. Some of them are pretty harmless (bit.ly, Flickr), some could cause some pretty serious hassles (Google, Amazon), and some could be absolutely devastating (Deleting someone's Slicehost account? Ouch...).


The sidebar is not showing up for me after installing and restarting.

Firefox 3.6.11 OS X 10.6 firesheep-0.1-1.xpi


Same setup. Sidebar shows for me after selecting it from the View -> Sidebar menu, however it pops up with a message that says "Run --fix-permissions first." Not sure where I'm supposed to run this flag.


There are so many hoops I have to jump through to make this work on OS X.

    $ mv firesheep-backend firesheep-backend.binary
    $ cat > firesheep-backend
    #!/bin/sh
    sudo /path/to/firesheep-backend.binary $@
    ^D
    $ sudo chmod +x firesheep-backend

Then restart Firefox and start capture. You'll need to re-enter your sudo password every so often.


It worked instantly for me on OS X. I installed, restarted the browser, and it opened the side panel.


I keep getting a "Failed to fix permissions" error. Any insight into that?


FYI if you're still looking into this, this comment helped me http://codebutler.com/firesheep#comment_5843350

I have filevault turned on. Moving the binary out of my home folder (and adding a symbolic link) solved the problem.


Same error for me. No idea where to run though.


Ditto


View -> Sidebar -> Firesheep


Thanks!


What does this mean for HTTP basic authentication? How about digest access authentication?


Basic is useless: it sends the password in the clear.

Digest authentication is safe against passive sniffing (it doesn't exchange any password/token in the clear and uses nonces), but it doesn't protect against active attacker who could modify server headers and replace "Digest" with "Basic" to reveal password.
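To illustrate the passive-sniffing point, here is roughly how a Digest client computes its response per RFC 2617 (the simple case, without qop), with invented credentials. The sniffer sees the nonce and the final hash, but never the password itself:

```python
import hashlib

def md5(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# Invented values for illustration.
user, realm, password = "alice", "example", "s3cret"
nonce, method, uri = "dcd98b7102dd2f0e", "GET", "/private"

ha1 = md5(f"{user}:{realm}:{password}")  # computed locally, never sent
ha2 = md5(f"{method}:{uri}")
response = md5(f"{ha1}:{nonce}:{ha2}")   # this is what crosses the wire

print(response)  # a 32-char hex digest bound to this server nonce
```

A captured response is still open to offline dictionary attacks on weak passwords, and as noted above, an active attacker can downgrade the exchange to Basic.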


Ok, so digest authentication is safe against this new firefox extension?

If so, why don't facebook et al. switch to digest based authentication?

Surely its better than unencrypted cookie based logins. Is it just that its ugly (the browser login popup)?


Yes, Digest is safe in this case, but one could write a more advanced (packet-injecting) tool/Firefox extension that breaks Digest too.

Terribly bad UI and lack of standard way to log out are dealbreakers for HTTP auth.

There's also no reliable way to customize UI to offer help, password reminders, branding or anything like that.

There was a proposal to improve this back in 1999:

http://www.w3.org/TR/NOTE-authentform

and it was recently discussed in the HTML5 WG, but the conclusion was that Digest and the countless JS tricks proposed in its place are only partial solutions; cookies have unstoppable momentum, so it's better if everyone just switches to SSL.


On my Macbook Pro (purchased 1 year ago) it doesn't seem to be able to capture traffic on my wifi. It can see sessions originating from another browser on the same Mac, but not other macs on the wifi network.

Is there a way of debugging what's going on?


Which sites are you using this on? It only works on a few select sites (and you can add more with some more JavaScript code). It worked for me on my MBP on the main sites: Twitter and some iGoogle.


I tried it on Facebook.

I have a WPA2 protected Wifi network. Two laptops (a MB and a MBP) on it. I run it on the MBP, on the MB I refresh a logged in Facebook page, and nothing appears as captured on the MBP.

If on the MBP I refresh Facebook in another browser it appears.


Try on an open wireless network.


Why? It should work on a WPA encrypted network as long as I have the network key - from the perspective of my network interface nothing is encrypted.

This indicates that the card has not properly been put in to listening mode, which means the plugin is not operating my card correctly.


I'm eagerly waiting trying this out once a Linux version becomes available.. looks very nice! Unfortunately I don't have a Windows or OS X installation available to me at the moment.



Here is a simple tutorial on how to set up an SSH Tunnel for Mac OS X http://bit.ly/cffjOY

This way all your communication is encrypted


I love SSH tunnels, but in regards to this particular problem, it really just pushes the problem off to wherever your SSH tunnel terminates. Do you trust your server operator? ISP? This is addressed in our presentation, here (VPNs are essentially doing the same thing): http://codebutler.github.com/firesheep/tc12/#20


I trust my server operator much more than anyone squatting on attwifi, yes.


Totally agree!

But right now I'm more worried about a co-worker or stranger in a Starbucks taking over my personal Facebook or Gmail account than my server operator trying to spy on me.


To be clear, that tutorial was made in 2007, so is a bit dated. Also, it shows how to set up FF to use the proxy, but the idea of a tunnel is not FF-specific, nor is this vulnerability.

One big issue with an SSH tunnel as a solution is that anything not set up to use the proxy still works; it's just quietly vulnerable.

Any suggestions on making TCP traffic which doesn't go through the proxy totally fail?


Thanks for the link!

I'm going to be traveling for a while pretty soon and using a lot of internet cafes and other free wi-fi spots so I should probably get this set up - I'm worried someone will be able to grab my password while logging in to check mail.


It would help a bit if there was a way to automatically encrypt sessions on an open wifi access point without requiring a password to connect.


On a positive note, at least a lot of people will be updating to the latest secure version of Firefox to run it.


On the other hand, stealing somebody's real-life identity is not that hard either. But it does not happen too often, in part because it's illegal. Stealing somebody's cookie on the Internet is a crime, just as stealing somebody's driver's license is. Although a technical solution to this security hole is desirable, it's not the only solution available.


Linux installation needs work. The README is empty, and the INSTALL says to use ./configure, which doesn't exist. ./autogen.sh complains about needing a xulrunner-sdk path, which isn't something normal for Linux.

Edit: Oops! Linux support is "on the way." I guess I assumed it was already supported, since Linux is the easiest platform for getting your driver into monitor mode.


I just tried this here in a coffee shop. This is fucking evil.


I couldn't install it on FF 3.6.9 on Windows XP.


You need WinPCap installed. Just FYI.


What was the error?


"Firesheep 0.1 could not be installed because it is not compatible with Firefox 3.6.9."

And yes, WinPcap is installed. I don't think it should matter, but I'm running Windows XP on a VirtualBox.


Oh, you just need to update to the latest version of Firefox (3.6.11). Your version is out of date and not secure. http://www.mozilla.org/security/known-vulnerabilities/firefo...


What happens if you use iPhone tethering? Could it tap into the traffic of other people on that network? (I have absolutely no idea.) If this is the case, the internet will have a panic attack in 2 days max.


No. Cabled tethering to a cell phone only gives you access to your own packets. It's like a switched network where only packets addressed to you are sent to you.

On 802.11 wireless networks your wireless network card is capable of capturing traffic addressed to other computers. When encryption isn't used or is compromised, you can steal their credentials.

Doing something similar against cellular networks would require a much more sophisticated attack with specialized hardware that's largely illegal in the United States. I would also hope that cellular communications are encrypted these days.


Thanks. It works now with FF 3.6.11. Your extension is amazing.


Isn't this extension great? =D


what version of firefox do you need to have to run it? i can't get it to work.


please don't tell 4chan


/b/ is going to have a field day with this. A long unbroken string of field days. For a long time.


Because they already know? And have their own tools?


Interesting. I was going to do something similar but keep it limited to Facebook chat. That way you could eavesdrop on conversations in the room and impersonate people, etc. This is actually probably easy to program and more versatile at that.


Works for me.


Well, whatever... encrypt all you like, $5 will still crack your session: http://xkcd.com/538/


wow, -1. HN readers sure left their senses of humour at home today.

I shall call this experiment a success.



