Open an ssh connection to a server you have access to using something like the following:
ssh -ND 8887 -p 22 firstname.lastname@example.org
where 8887 is the port on your laptop that you will tunnel through, -p 22 is the port the ssh server listens on (22 is the default, but I use a different port, so I'm used to specifying it), and the rest is your username and the address of the server.
Set your network to point to the proxy. On a Mac that would be…
... Open Network Preferences…
... Click Advanced…
... Click Proxies…
... Check the SOCKS Proxy box then in the SOCKS Proxy Server field enter localhost and the port you used (8887)
... OK and Apply and you are done!
Now you can surf safely.
That link has screenshots to help you configure Firefox to use the ssh proxy.
ssh -ND 8887 -p 22 email@example.com
ssh -vND 8887 -p 22 firstname.lastname@example.org
I am using the Tomato firmware (http://www.polarcloud.com/tomato) which has an SSH daemon.
alias socks="ssh -ND 8887 -p 22 email@example.com"
That way I can just open a shell and type "socks" and be good to go (well and then do the system preferences deal, but I have an AppleScript that does that automatically).
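The System Preferences step can be scripted too. A minimal sketch, assuming macOS's `networksetup` tool and a network service named "Wi-Fi" (the service name varies per machine, so check `networksetup -listallnetworkservices` first):

```python
import subprocess

def socks_proxy_cmd(service="Wi-Fi", host="localhost", port=8887, enable=True):
    # Build the networksetup invocation that mirrors the
    # Network Preferences -> Advanced -> Proxies steps above.
    if enable:
        return ["networksetup", "-setsocksfirewallproxy", service, host, str(port)]
    return ["networksetup", "-setsocksfirewallproxystate", service, "off"]

# On a Mac, actually apply it:
# subprocess.run(socks_proxy_cmd(), check=True)
```

Paired with the `socks` alias above, this toggles the proxy without ever opening System Preferences.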
SSH_Server -> FaceBook == Unencrypted
SSH proxies are not end-to-end encryption; they only protect part of the path. Not sure why this is being downvoted. It's true. The tunnel only runs between the client and the SSH server; the HTTP websites you visit beyond the SSH server see your cleartext packets.
Tunneling over SSH protects your traffic for that portion of the network (and out past your ISP as far as the remote end of the SSH tunnel).
An attacker would need different tools and resources to intercept your traffic between remote hosts.
The network from the SSH_Server to Facebook is much larger and more secure.
Silence Is Defeat provides SSH accounts for a small donation.
(I am not affiliated with them)
I'm going traveling for all of next month. The only sites I'll be checking where I'll be logged in are my hotmail account and maybe my bank account (Chase). Both use https, so I suppose I'm in the clear then? (Also, when I click "log out" on these sites it logs me out; but if my session has been hijacked, will it log the hijacker out of my session as well?)
The Pandaboard looks like a good fit, but the instructions to install a Linux distro are a bit scary. I guess I could do it from my Mac, but I'm a bit afraid of messing things up with the low-level disk utilities.
Does someone sell SD cards with a distro pre-installed? Or an equivalent device with an easier setup?
If not, there's probably a market for that...
He has plans starting at $5/mo, but you'll want to take notice of the monthly transfer limit. The $5/mo plan is 10GB of transfer a month (which comes to 5GB in/5GB out if you're using it as a proxy), so you won't want to tunnel video or downloads through it. If you go for the $8/mo plan, though, you get 40GB of data transfer.
I've never had an account with him so I'm not sure if there's a way to check how much of your data allowance you've used for the month. Someone else might be able to chime in about a program you could run on the server to notify you when you've reached a transfer threshold.
If you can't afford that, you can always run a SSH server from your residence and use that.
How many millions of dollars and man hours is it going to take to lock down every access point? How many new servers are going to be needed now that https is used for everything and requests can't be cached?
America was a better place when people could keep their doors unlocked, and when someone's first response to a break-in was to blame the criminal. By contrast, it's fashionable among a certain set (no doubt including the author of this mess, Mr. Butler himself) to hold that the real culprits are the door manufacturers. What said facile analysis excludes, of course, is that there is always a greater level of security possible. The level we currently employ reflects our tradeoffs between the available threats and the cost/convenience loss of bolting our doors and putting finials on our gates.
Butler has simply raised the threat level for everyone. He did not invent a new lock or close a hole. He's now forcing lots of people to live up to his level of security. Congratulations to the new Jason Fortuny.
When a tool like this rises to even a minimum level of public consciousness, you're better off thinking "people have probably been doing this for close to a decade" than "this asshole just ruined the internet by pointing out an obvious flaw that someone will now be able to exploit".
And yes, at some point, a door manufacturer that knows how easily their doors will open and how frequently people will just walk through does take on some responsibility to add a lock (and the homeowner to use it). It's going to cost more in servers? Okay, so what? It costs more to install seatbelts, are you upset at Ralph Nader, too?
[Edited to bring it down a notch]
Flat out false. Ever heard the term "crime of opportunity"?
What's your over/under on the number of identity thefts facilitated by Eric Butler's little gift? Let's make this empirical.
You vastly underestimate the barrier that "five minutes of Googling" presents. I assure you, the overwhelming majority of aspiring script kiddies would never be able to figure it out. It took an expert to package an exploit in a nice GUI (and write cookie parsing code for every major social site under the sun).
How about instead of shooting the messenger, you take some of that righteous anger and point it at the companies with millions/billions to spend who have simply ignored a longstanding known issue?
Hospitals, nonprofit groups, anyone running a website has to drop everything to lock it all down now. The effect is a lot like loosing a new virus (and might ultimately be treated that way).
> As long as only the highly motivated can exploit it, it's not really a problem, gotcha.
^ This modified statement is correct. All I'm saying is that making something easy to use and publicizing it widely is going to result in a lot more people using it.
[Edits - hey jfager, I don't know you from Adam and don't particularly enjoy flamewars. I agree that in the long run this should be fixed, ideally in such a way that 99.99% of people can blissfully go about their day. I just wish that the energy to secure stuff had taken the form of (say) a post on "here's how Google converted Gmail to https" rather than Firesheep. Hope we can find some common ground and you can see my POV.]
Your implicit definition of 'highly motivated' (someone willing to put in 5 minutes of Googling) makes me sad.
I'm agitated because you're trying to hang someone for doing A Good Thing: putting real pressure on the bigs to finally actually fix a well-known, longstanding problem.
[Response to your edit: Facebook, Twitter, and other big sites know about the problem. How would explaining to them how Google secured Gmail change anything? They know how Google secured Gmail, and they know how to secure their own services. They just simply aren't, because it saves them money and their customers aren't demanding it. But the only reason their customers aren't demanding it is because the vast majority of their customers don't know the threat exists. This tool makes the threat clear as day to the most unsophisticated layperson, which makes it real, effective pressure, far more than yet another blog post asking nicely for SSL by default].
It wasn't until Napster made that 0 minutes of googling that MP3 filesharing really took off.
For something like this to end up on millions of desktops, you have to be able to explain it to a half-stoned frat at a party. "Five minutes of googling and then some nerdery"? No chance. "Install this, go to the quad and you can sign into the facebook of any other person there?" Yup, that's going to spread like wildfire.
This isn't new. Point and click tools for doing this existed 10 years ago. Making a firefox plugin just pushed it back to the top of the headlines. This is actually a good thing because if word spreads more people will be aware of the already existing risk and will be more security conscious.
Does this mean everyone should stop logging into their personal accounts over unsecure wifi at school or starbucks? ABSOLUTELY.
Hopefully this new attention on an old hole will motivate more admins to fix their networks and more users to realize how vulnerable they are.
(a) network effects
(b) autosharing, spurring more (a)
Neither of these apply here.
This kind of exploit is so many years old that it's a matter of basic public education and computer literacy. While this might be a "forcing function" on the web development community - it is not unfair. There is so much new tech every year, it's unfortunate that security isn't more in the consciousness of tech.
There may be more graceful ways to lead "sheep" to more secure use of the internet deserving of praise, but it's fair game to release an exploit, and I'd rather see FireSheep than censorship of it.
There is zero difference between what someone using public wifi should be doing today and what they should have been doing last week. Now at least more people are aware of the problem.
I hate this mythical "good old days" B.S. I know people who live in the country who don't lock their doors because they live in the country. The idea that people who lived in urban areas ever could leave their doors unlocked is absurd.
This isn't what people mean when they say they don't lock their doors. I grew up in the country in northeast Ohio. I knew many people who simply never locked their doors, including overnight or even when they weren't home.
I live in Brooklyn now. Things are a little different here. My door has a $350 deadbolt lock that -- when it broke and locked me in -- took a locksmith, a serious drill and a couple hardened bits to defeat.
Whether you should lock something and how secure you make it isn't a binary decision - it depends on the value of the thing you're protecting and the likelihood of an attack.
I bet you would feel differently if you moved into the city or south of town.
Our garage leads to my office, which in turn leads to the rest of the house.
We came home shocked to see it open, and even more shocked that not a single thing was missing.
Yes, but it's better than the alternative, where there would be an increasing number of people negatively affected for even longer. At least the problem is out in the open now and there will be public pressure to fix it.
America was a better place when people could keep their doors unlocked, and when someone's first response to a break-in was to blame the criminal.
You are correct. But those days are long gone, and they're not coming back. Unless you want to throw out a good chunk of technology, kill half the people on the planet and go back living in communities where you knew personally everybody you interacted with during your entire lifetime.
Butler has simply raised the threat level for everyone.
Yes, he has. But he has also raised the defense-level for everyone, and by a greater margin. Before his post, there was a much larger divide between the people who knew about this exploit and those who didn't (the fox and the sheep, if you will). It's true that now more people can exploit those who don't know, but it's also true that even more people can defend against it.
He did not invent a new lock or close a hole.
Making other people aware of the hole is the first step in getting it closed, if you are unable to do it yourself. Shame on the rest of us for not doing this earlier.
In terms of severity, computing has overcome worse exploits; this is a problem awaiting an answer, which sounds like opportunity to me.
Again, degrees matter. Abstract knowledge is one thing. A simple tool to facilitate griefing people is quite another.
Mobile web browsing existed before the iPhone. Search existed before Google. Telecommunication preceded the internet. You could share mp3s before Napster and mp4s before YouTube.
And you used to have to delve into Wireshark to pull this off, but now you can snag grandma's credentials from any Starbucks in the country with a mouse. Degrees do matter.
If you assume a limited number of evildoers and a limited ability to exploit this at will (e.g. you have to catch your victim in close proximity on public wi-fi that you're sharing with him), releasing a tool like Firesheep may produce significantly less total damage.
This isn't a new threat. Just a new shiny piece of ware that lowers the bar a little further.
The analogy is not complete because in our situation there's a third party involved beside the victim and the criminal: the website. What if your bank leaves the vault unlocked so anyone can take your money? Isn't the bank at least partly to blame?
By now it is clear that unauthorized access to social networks can cause much distress, and even worse, to a great many people who use them.
Banks minimize their liability when they use SSL; Facebook should do this too. At this point it should be clear that the effect on a person's social life can be severe: career-destroying, financially damaging, what have you. We are witnessing stories along these lines at increasing rates.
The release of this extension is a blessing in my view. It forces the issue that companies like Facebook or Twitter would prefer to ignore, or cover in obscure terminology; this simply demonstrates how trivial it is.
When Ingersoll Rand released the Kryptonite lock, they named it after the mythical element that would bring Superman to his knees. Too bad the lock was revealed to have a design flaw that enabled cracking it with a BIC pen. Was it shameful to demonstrate that defective design?
Facebook etc... talk about privacy all the time. This forces them to walk the walk, not just talk.
Butler isn't doing anything earth-shattering, he is just reminding everyone AGAIN that the current system is messed up.
There will always be this debate about disclosure, but you can't ignore that it works. Sure, innocents suffer (and they would [they are!] anyway), but at least it's one more reason why websites should change to https.
What actually happened back in the day before people started forcing the issue with full disclosure was that the bad guys operated with impunity because the good guys couldn't work together because people got upset when folks let the "secret" vulnerability knowledge out.
I don't want to go back to those days. Things have improved so much since then.
Concrete example: are you a location based startup? Well, you might need to shell out $10,000 for a Google Maps API Premier key in order to get HTTPS.
"Access to the API via a secure HTTPS connection"
"Google Maps API Premier is extremely cost-effective, starting at just $10,000 per year."
Multiply that by thousands and you'll begin to have some idea of the discussions going on at every web based company with a clue today.
For those who make their living in computer security, like Mr. Butler, of course it's a good day (and month, and year). Pretty good business when you can start fires and then get paid well to put them out. Serves them right, of course, because they shouldn't have built that house out of wood in the first place.
While we're on the topic, I don't understand how a lot of people fail to realize that spending on computer security is a lot like spending on national security -- you can always spend more money on it, thereby taking away resources from other priorities.
"Every gun that is made, every warship launched, every rocket fired signifies in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed. This world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children. This is not a way of life at all in any true sense. Under the clouds of war, it is humanity hanging on a cross of iron."
Wrong. You don't need to use https for everything -- you can specify a domain and a path in the cookie. For things like images, videos and css, you still don't need SSL.
I've personally been working from cafes and tunneling everything through SSH for years, but in my experience almost no one else does this.
I can't think of a more effective way for him to convince them all to update now.
To where? I suspect it's to a server, VPS, or similar, and the connection is unencrypted from there to its endpoint. This being the case, could someone with a server on the same subnet be running a browser remotely (or even just tcpdump) and doing a similar thing with your logins?
(This is just some thinking out loud and I may be totally wrong - correct me ;-))
So I'd agree: definitely more complex; perhaps not significantly so (it depends on the type of attack and tool). As for deliberation, I'd say about the same as the Firefox plugin.
If you run tcpdump you do pick up broadcasts and such; one of our VPS instances actually sees a load of DNS traffic for our subnet, which we think comes from the other VPS instances.
In addition to making session hijacking harder, using SSL keeps crappy proxies from caching private data. Remember when some AT&T users were getting logged in as other users on Facebook's mobile site? The cause was a mis-configured caching proxy.
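The usual guard against that failure mode is marking per-user responses uncacheable. A sketch of the headers involved (the header names are standard HTTP; the exact values here are just one reasonable choice):

```python
def private_page_headers():
    # Tell shared caches (like that mis-configured AT&T proxy) never
    # to store this response or serve it to another client.
    return {
        "Cache-Control": "private, no-store",
        "Vary": "Cookie",  # responses differ per cookie, i.e. per user
    }
```

Of course, a broken proxy can ignore these headers entirely, which is exactly why SSL is the stronger fix: an end-to-end encrypted response can't be cached in the middle at all.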
Raising awareness of issues like this gets them fixed. Until a service's users demand SSL, it won't be offered. Unless the service is Loopt :) It's not a noticeable computational burden, but it does increase latency and cost money (for certs).
1. Not images
2. Older GSM crypto can be hacked in real time with rainbow tables now
3. Usually not encrypted at all
That said, the current cartel-like setup of certificate authorities (protection money and everything!) makes SSL annoying and expensive if you want the browser to not have a fit. Especially for small-scale projects. But there's really no excuse for larger sites.
Works — of course — everywhere except IE6/7 on XP.
An alternative is to bind the user's session to their IP address, but that isn't foolproof either, because of NAT, DHCP, and certain big ISPs that change IPs on the fly.
What cost-effective solution would you suggest?
Regarding IPs, there's a bigger issue here. People are used to being able to shut their laptop at home and open it back up at work without having to re-authenticate all their browser tabs. If you filter by IP this breaks. SSL requires no changes to user behavior.
It would make it harder to troll an open network for random victims, and wouldn't annoy the user.
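For illustration, binding a session token to the client IP is only a few lines server-side. A hedged sketch (stdlib only; the function and variable names are invented for this example):

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # server-side key, kept private

def issue_token(session_id: str, client_ip: str) -> str:
    # Bind the session to the client's IP with an HMAC so a copied
    # cookie is useless from any other address.
    mac = hmac.new(SECRET, f"{session_id}|{client_ip}".encode(), hashlib.sha256)
    return f"{session_id}|{mac.hexdigest()}"

def validate(token: str, client_ip: str) -> bool:
    session_id, mac = token.rsplit("|", 1)
    expected = hmac.new(SECRET, f"{session_id}|{client_ip}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)
```

Note that this also demonstrates the drawback raised above: the moment the user's IP changes (new DHCP lease, laptop moved from home to work), `validate` fails and they're logged out.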
Perhaps a hash based on something like this: https://panopticlick.eff.org/
then the next version of this plugin just spoofs all of those parameters as well
the only solution is SSL and client certificates
 in the case of being on the same network
In general, it's a negligible cost; it adds a very minor delay compared to latency / transfer time, and uses CPU otherwise highly unlikely to be pegged. If you're pushing threading limits / CPU usage limits, you're probably inches from needing new hardware anyway, and SSL should be considered part of the cost of running a web server.
It's gotta be SSL all the time.
Looks like a great idea, but how do they prevent the man-in-the-middle from impersonating a network notary?
EDIT: I searched and it's actually http://cert.startcom.org/.
AFAIK this is common to all certs (free or otherwise). You need a separate one for each subdomain (including www).
I admire their model of only charging for operations which require human intervention, like identity validation, but handing over that degree of documentation for that amount of time requires a lot of trust, not just of the company as it currently exists, but as it will exist in the far future.
If there was a way to validate organizations which wasn't layered on top of an earlier validation of an individual, or if their decentralized web-of-trust was usable for class 2/wildcard certs, I'd be a big fan.
As it is, there's no reason not to use Start for class 1, single-domain certs, for which the validation is automated and reasonable.
For instance, from Verisign: a 1-year Microsoft code signing certificate starts at $499. A top-of-the-line (from their main pages) web certificate for a single server for one year: $1499.
edit: it would figure the links don't work. Just go to www.verisign.com and those are a couple clicks from the front page.
Ouch. I think it's time to set up that VPN I've been putting off...
It almost makes me angry that websites like Facebook and Twitter don't force all traffic over https. They've got the money and the expertise. They just don't care if your account gets sniffed and taken over at a web cafe.
The most un-ethical thing I have done was to take one of the OLPC XO laptops and convert it into a MITM machine, rebroadcasting the SSID it connects to while routing and logging all traffic anyone who connects to it generates. It took a weekend to set up using pre-existing tools and scripts, and it can be deployed anywhere I want within 2 minutes and run for up to 6 hours hidden in the bottom of my backpack. It was a fun experiment, and it surely made me more aware of just how vulnerable I was outside of my home network.
Another point of interest, this weekend I hacked on a Minecraft bot for the Alpha version. In order to understand and dissect the connection protocol I needed to recreate, I used wireshark to dump and parse how the client authenticates and connects to the server. Even that transmits your username and password in plaintext.
How else are we going to convince people to secure their sites and protect their users? People have been presenting on this issue for years (Ferret & Hamster, Blackhat 2007) and companies haven't responded/cared. It's possible to solve this problem (Gmail is all HTTPS, and done correctly, Amazon has a tiered authentication system that properly uses SSL for important things, Wordpress does SSL right for accessing their admin interface) - companies need to step up and address the issue.
I'm not saying this isn't the site's fault. They definitely need a wake-up call.
Does the solution entail purchasing legit ssl certs for your static content domains?
amazon basecamp bitly cisco cnet dropbox enom evernote facebook flickr foursquare github google gowalla hackernews harvest live nytimes pivotal sandiego_toorcon slicemanager tumblr twitter wordpress yahoo yelp
HTTPS Everywhere is good but only works on known sites (and known domains for those sites).
And with Tor, there are lots of cases where operators did bad things. Don't trust it for sensitive information. http://blog.ironkey.com/?p=201
Thanks to reading Techcrunch this morning, I read about this plugin which allows you to manually define which sites you want to force an HTTPS connection on:
Mind you, for any of these extensions to work, the website you're visiting needs to already be accessible via SSL. If the site has no encryption, these plugins can't force it to automagically start using encryption it never had.
Quoting from that, "On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead."
You can amortise the session setup cost by ensuring the HTTPS session caching is enabled on your server (in Apache, the directive is SSLSessionCache). This will let subsequent connections from the same client re-use the same SSL session.
There are ways to work around this, if the non-https site immediately redirects to the https version and a "secure cookie" (https-only) is exchanged afterwards.
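The "secure cookie" part is just a flag on the Set-Cookie header; a quick sketch using Python's stdlib:

```python
from http import cookies

c = cookies.SimpleCookie()
c["session"] = "abc123"
c["session"]["secure"] = True    # browser only sends it over HTTPS
c["session"]["httponly"] = True  # page JavaScript can't read it
header = c.output()  # a "Set-Cookie: ..." header line
```

With the Secure flag set, even if the user later hits the plain-http version of the site, the browser withholds the cookie, so a sniffer on open wifi never sees it.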
If the request gets redirected to a HTTPS proxy site that the attacker has set up, that's a different story. But again, you should be checking what's in your address bar. No security system can rescue you if you can't tell the difference between "mail.google.com" and "mail.google.haxxor.com". But for those of us who actually read what's in the address bar, HTTPS is pretty good security.
There is virtually no cost associated with issuing a certificate. The fact that they are nevertheless prohibitively expensive for most private domains is partly responsible for the failure to adopt SSL on a broad scale.
Otherwise the browser will display a big red warning message.
While it is true that this warning is intended to protect users against man-in-the-middle attacks and other methods that redirect traffic away from the original source, it also prevents people with absolutely valid (but free) certificates from offering perfectly good encryption on their sites. This warning is misguided and does more to prevent the secure use of the web than anything else.
The remedies would be simple, but I guess commercial reasons prevent them from being adopted at the expense of everyday users.
Accepting self-signed certificates would not be a good solution. Anybody can sign a certificate with "www.facebook.com" in the Common Name field. Your communication would be encrypted, but you'd be communicating with the wrong guy. SSL was designed to provide both encryption and identification. Self-signed certificates provide only encryption.
Besides, certificates are already quite cheap if you know where to buy them. RapidSSL costs as low as $12/yr, which is just slightly more than the cost of a domain name, and it's recognized by all browsers including IE6. I wouldn't consider it "prohibitively expensive" if it costs less than two lunches.
There are, of course, many other ways in which the existing infrastructure is inadequate. Take a look at the list of CAs that your browser automatically trusts. It's a mess. But it's not easy to implement an alternative that can provide both encryption and identification with the degree of reliability that the current infrastructure has. Some kind of social trust mechanism might work, but we're a long way from standardizing on anything of the sort.
I'm trying to wrap my head around how WPA2 could still provide protection with a shared key... I'm sure I'm not the only geek who feels like their knowledge of WiFi protocols goes stale every six months or so.
EDIT: Sorry, I was asking specifically how this FF extension works.
I've been trying out sshuttle <http://github.com/apenwarr/sshuttle>. It only tunnels TCP traffic, so you still have DNS and UDP traffic on the local network.
Running out of IPv4 space is an issue in this regard, but hopefully with more people wanting SSL it will push providers to IPv6 quicker. Nicely done EricButler!
It just happens that they released w/ support for social networks as a demonstration.
Logistically, I suppose you would run into some trouble setting a new cookie for each request depending on how the page is loaded. For instance, if the user pastes a url into a new tab manually, then this system wouldn't have a chance to set the new cookie first.
A downside is obviously that the content itself is still not safe, but at least the account would be. Any thoughts?
Local storage, however, could probably be used to do just such a thing, as it exists only locally. In which case you could just have the login page generate an RSA key pair, receive the server's public key in the response, and use that for any kind of secure communication on each page load. The server would have to remember sessions => encryption keys, but that's not too hard.
The best solution is of course to get a VPN acct and use it when you are at free/open wifi spots. I use WiTopia (www.witopia.net)
It doesn't currently do anything with passwords, it's only pulling out cookies from HTTP Response headers. But it would be trivial to also get passwords in non-HTTPS requests for logins with the same method.
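To see why it's trivial: a plaintext login POST is just URL-encoded form data, so pulling credentials out of a captured payload is a few lines. A sketch (the field names checked here are guesses; every site names its form fields differently):

```python
import urllib.parse

def extract_credentials(http_payload: bytes) -> dict:
    # Parse a captured plaintext HTTP POST body and keep only the
    # fields that look like credentials.
    fields = urllib.parse.parse_qs(http_payload.decode(errors="replace"))
    wanted = {"user", "username", "email", "pass", "password"}
    return {k: v[0] for k, v in fields.items() if k.lower() in wanted}
```

This is purely illustrative of the exposure; the point is that anything not wrapped in SSL is readable by every machine on the same open network.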
People should also be aware of the security implications of installing various software on their system. :)
It's very possible, given that the extension seemingly captures HTTP requests/responses. If passwords are sent or received in plaintext, then they can be captured.
Maybe someone who has developed a FF extension can lay my worries to rest -- could this have, say, a built in key logger which sends that data to the author?
I downloaded the source code from github and glanced through it, enough to comfort me somewhat, but I wasn't super thorough.
All that said, EricButler seems to be an HN member in good standing, and someone like that wouldn't do anything malicious, right? :-)
This exploit is for insecure Wi-Fi networks, so using only encrypted Wi-Fi or Ethernet would seem to remove this attack vector. Is there a real risk that someone (besides the government) can see your cookie?
Yes, if you login and your cookie is sniffed and spoofed then basically you just allowed the attacker to login as you at the same time.
Minimizing it is a little bit different: you can use a secure proxy/tunnel, you can limit your unencrypted wireless activity, you can make sure that sites that should be SSL encrypted are (stripping SSL is common when password sniffing) and you can avoid these services while on open wifi networks.
So remember to logout.
VPN is really the best overall option.
On reporting it, the response was essentially, "oh, you didn't have to go to that much trouble, you could have just used your user/pass from curl…" Completely oblivious to the fact that their app/site was completely vulnerable to session hijacking.
One of the problems with app frameworks: if you don't know what they're doing (and, more importantly, what they're not doing), you can get yourself into a heap of trouble before you even realize there's an issue. But boy, you sure can make it to market fast. *shakes head*
Logging out doesn't kill the session.
It will, however, stop unauthorised computers from sniffing any network data.
Are you sure that neither WEP nor WPA/WPA2 do it this way?
Are you suggesting that no generation of WEP or WPA protects against other authorized wireless users of the same AP, because they're "on your network"?
[rewritten completely to seek clarification]
If you have control over the internet between the AP and your server, then you're safe. If you don't, then how safe you are depends on how much you can trust the owner of each router along the way. In general, you should be okay, except that every now and then you might end up on an untrusted router, and it's then game over.
Run with which command? and how?
sudo ./firesheep-backend --fix-permissions
After a bit of digging, I found out that running it with --fix-permissions really just chowns the binary to root then setuid's it. I don't see anything wrong with it on the surface, but I'll keep digging.
I have filevault turned on. Moving the binary out of my home folder (and adding a symbolic link) solved the problem.
I made an early decision to enable SSL everywhere in Trafficspaces with the obvious downside being that I need to allocate a dedicated IP address each time someone requests a custom domain name.
I used to get worried that perhaps it would have been better to only provide SSL in specific stages (such as sign-in and payment) and only through a generic domain name. Not any more.
Firesheep clearly vindicates that decision.
I was referring to cases where the account holder wants to use a custom domain name, e.g. ads.mywebsite.com, instead of the generic mywebsite.mysaasprovider.com.
In that case, we'll need to host their certificate within our Pound load balancer and get it to listen on a dedicated IP.
I guess the logging of raw wlan packets is a one-liner under linux? Does anybody know it?
Once I upgraded to 3.6.10, it worked awesome.
OS X 10.6
$ mv firesheep-backend firesheep-backend.binary
$ cat > firesheep-backend <<'EOF'
#!/bin/sh
sudo /path/to/firesheep-backend.binary "$@"
EOF
$ chmod +x firesheep-backend
Then restart Firefox and start the capture. You'll need to re-enter your sudo password every so often.
Digest authentication is safe against passive sniffing (it doesn't exchange any password/token in the clear and uses nonces), but it doesn't protect against an active attacker who could modify server headers and replace "Digest" with "Basic" to reveal the password.
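For reference, the core of Digest (the original no-qop form from RFC 2617) is just nested MD5 hashes of the credentials, the server's nonce, and the request; a sketch:

```python
import hashlib

def _md5(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    # HA1 covers the credentials, HA2 covers the request; the server's
    # nonce ties the response to this one challenge, so a passive
    # sniffer sees only a hash that can't be replayed later.
    ha1 = _md5(f"{user}:{realm}:{password}")
    ha2 = _md5(f"{method}:{uri}")
    return _md5(f"{ha1}:{nonce}:{ha2}")
```

This is exactly why the downgrade attack above matters: the scheme name in the server's challenge header is the only thing telling the browser to hash rather than send the password verbatim.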
If so, why don't Facebook et al. switch to digest-based authentication?
Surely it's better than unencrypted cookie-based logins. Is it just that it's ugly (the browser login popup)?
Terribly bad UI and lack of standard way to log out are dealbreakers for HTTP auth.
There's also no reliable way to customize UI to offer help, password reminders, branding or anything like that.
There was a proposal to improve this in 1999:
It was recently discussed in the HTML5 WG as well, but the conclusion was that Digest and the countless JS tricks proposed in its place are only partial solutions; cookies have unstoppable momentum, so it's better if everyone just switches to SSL.
Is there a way of debugging what's going on?
I have a WPA2 protected Wifi network. Two laptops (a MB and a MBP) on it. I run it on the MBP, on the MB I refresh a logged in Facebook page, and nothing appears as captured on the MBP.
If on the MBP I refresh Facebook in another browser it appears.
This indicates that the card has not properly been put into listening mode, which means the plugin is not operating my card correctly.
This way all your communication is encrypted
But right now I'm more worried about a co-worker or stranger in a Starbucks taking over my personal Facebook or Gmail account than my server operator trying to spy on me.
One big issue with SSH-tunnel as a solution is that anything not set up to use the proxy still works, it's just quietly vulnerable.
Any suggestions on making TCP traffic which doesn't go through the proxy totally fail?
I'm going to be traveling for a while pretty soon and using a lot of internet cafes and other free wi-fi spots so I should probably get this set up - I'm worried someone will be able to grab my password while logging in to check mail.
Edit: Oops! Linux support is "on the way." I guess I just assumed, since Linux is the easiest platform on which to get your driver into monitor mode.
And yes, WinPcap is installed. I don't think it should matter, but I'm running Windows XP on a VirtualBox.
On 802.11 wireless networks your wireless network card is capable of capturing traffic addressed to other computers. When encryption isn't used or is compromised, you can steal their credentials.
Doing something similar against cellular networks would require a much more sophisticated attack with specialized hardware that's largely illegal in the United States. I would also hope that cellular communications are encrypted these days.
I shall call this experiment a success.