Anyway, "we are aware of this issue and are working on it".
Click http://www.realestate.com.au/ then "Register".
Then stand in utter amazement at their solution.
Why do we need your email address?
*We send your password via email.*
*Your email address is your log on.*
*If you forget your password, we'll send you a new one.*
This is hilarious. I can only assume they took offence at you choosing a "strong version" password, so they asked themselves: how can we fix this? I know, let's just pick the password for them.
So the fix they told you about was to ensure that you can't pick a password at all, while they still email you their "super strong version" password...
> Thank you for registering. Your password has been sent to username[at]gmail.com. It should arrive shortly.
12 seconds later.
Your password is: DTCNE
(In case people aren't aware, realestate.com.au is owned by HomeAway)
(Search for "We send your password via email")
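To put that in perspective, here's a quick sketch of the keyspace, assuming (as "DTCNE" suggests) that the generated passwords are five uppercase letters:

```python
import math

# Assumption: passwords are drawn only from the 26 uppercase letters,
# as "DTCNE" appears to be.
alphabet_size = 26
length = 5

keyspace = alphabet_size ** length              # total possible passwords
entropy_bits = length * math.log2(alphabet_size)

print(keyspace)                 # 11881376 combinations
print(round(entropy_bits, 1))   # ~23.5 bits
```

Under 12 million combinations is trivial to exhaust offline, and not much harder online without rate limiting.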
You can't really do much with a realestate.com.au account unless you are an Agent (which is a separate account). There's no payment processing, or any way to add content to the site. The accounts there are basically just a way to save common realestate searches as far as I can tell.
All private user information is equally private. To arbitrarily suggest that certain data is less important is a dangerous road to walk down. We should be holding everyone to the same standards when it comes to security.
This is especially true with the high amount of password reuse that goes on.
Does it really make sense to hold my bank to the same standard as a real estate website? Sure, they should all meet some minimum requirement (salted and hashed passwords), but I expect my bank to have far higher standards (e.g. two-factor auth) than a random site.
I guess the issue is that laymen cannot really tell how secure a solution is, and so are unlikely to make well-reasoned decisions about the information they release. As such, a far greater level of responsibility needs to be placed on the people who hold the keys, so to speak. Once again, this is especially true since people reuse (and use overly simple) passwords at a scary rate. A crappy real estate website that fails to protect their information is potentially leaving their bank open to abuse.
I feel instances like these just show dangerous levels of incompetence and a blatant disregard for users' information. Good solutions generally require less work anyway, so there's no excuse.
Amusingly enough, the banks who impose PCI compliance on merchants aren't themselves required to be PCI compliant, and some of them will happily e-mail you extremely sensitive customer data (no matter how many times you ask them not to), even though doing so yourself would violate PCI compliance.
> This is especially true with the high amount of password reuse that goes on.
I do agree that it is a bit off (read: probably illegal) that they allow users to change the password and then store the user's password in plaintext. The system would be considerably better if users could only use a system-generated password.
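A system-generated password is easy to do properly; a minimal sketch using Python's secrets module (the function name is illustrative, and of course only a salted hash of the result should ever be stored, never the plaintext):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a cryptographically random password.

    Illustrative sketch only: a real system would immediately
    salt-and-hash this value and discard the plaintext.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

`secrets` draws from the OS CSPRNG, unlike `random`, which is not suitable for credentials.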
In other news it's not just REA. I checked and my ISP (TPG) does the same :-(
There have been many calls in past exploit threads for a name and shame policy, but that won't do anything. Name and shame only works when people keep up with the list, and people won't. They're too busy with their lives to focus on a list, especially given the number of insecure websites around the world.
We need everyone to have a list of easy to remember rules about web security from a consumer perspective. This list of rules needs to reach everyone. Putting them in the browsers may lead to the exposure needed, but I don't see that happening.
This primitive level of education needs to start breaking through as it's only going to get worse as computing and security advance further. We haven't even finished explaining to people that plain text passwords almost always indicate impending disaster, yet we already need a way to explain MD5 is never enough and SHA256 isn't enough without a salt...
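To illustrate the salt point: hashing the same password under two different salts gives unrelated digests, which is what defeats precomputed rainbow tables. A minimal sketch (a real system should use a deliberately slow KDF like bcrypt or scrypt, not a single fast SHA-256 round):

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> str:
    # Illustration only: prepend a per-user random salt before hashing,
    # so identical passwords never share a digest across accounts.
    return hashlib.sha256(salt + password.encode()).hexdigest()

salt_a, salt_b = os.urandom(16), os.urandom(16)

# Same password, different salts -> different digests, so a
# precomputed table of unsalted hashes is useless.
print(hash_password("hunter2", salt_a) != hash_password("hunter2", salt_b))
```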
There is an attempt at naming and shaming here: http://plaintextoffenders.com/
Unfortunately, Google would likely open themselves to lawsuits if they warned users away from or penalised websites due to poor security.
Um, no, it's Google that would worry. Myself, I don't care if they get a lawsuit. I'd applaud G for trying (and who knows, the publicity that such a lawsuit would (hopefully) generate might do some additional good).
I can imagine Google not pursuing such a strategy without a really good reason, though.
Really? This password storage isn't great, but using tesco.com is hardly the same as visiting a malware or phishing site.
Unless/until Tesco have their databases hacked or stolen there is no risk at all.
This is not the case. The most glaring reason why was pointed out in the posted article. It very clearly showed that Tesco failed to communicate logged-in state information (stored in a cookie) between the client and server over an encrypted line. This means your account is vulnerable to attack without the entire db being leaked.
Not all security exploits require a database to be hacked either. Even when a database is hacked, half the time we find out about it from third-party sources well after the fact, rather than from the companies themselves via press releases.
These things happening silently is horrific. Malware or phishing sites are relatively easy to spot and defend against -- but what about a compromised but legitimate website? If I find a security hole and pick a small but high quality selection of targets, how long will it take authorities (if ever) to piece together that they all were members of CornerStore Online?
We still have no idea when Twitter lost their 6.5 million password hashes -- they probably don't either... http://news.ycombinator.com/item?id=4074510
Twitter or LinkedIn?
Unfortunately I can no longer edit, hopefully the link itself is self explanatory.
A conditional statement saying there is no risk is utter nonsense. The reason for this is very simple - there is always a risk the conditional has already been fulfilled.
While training your staff won't solve all your security problems, I still think it'll help mitigate a lot of them.
Edit: And publicly blog about it to shame them into action.
You report that the way a particular type of SSL cert is implemented leaves open a MITM attack, and they come back with a dissertation on why MITM is not a concern of ours. (Oh? Then why the fuck are we encrypting the connection?!)
You tell them that they have unpatched, years-old remote root vulnerabilities in their servers, and they give you a long list of reasons why we not only don't need to patch it, but patching it would be bad.
You tell them how storing a password unhashed will lead to a PR catastrophe when an attacker gets your PW DB. They tell you that implementing scrypt isn't feasible, bcrypt is weaker than scrypt, SHA1 hashes are easily crackable, and that if somebody has our PW DB we have bigger problems, so we shouldn't even worry about the passwords. And since we shouldn't worry, we might as well e-mail them.
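For what it's worth, "scrypt isn't feasible" hasn't held up since Python 3.6 put it in the standard library. A minimal sketch of hashing and verifying (function names and parameters are illustrative, not a tuning recommendation):

```python
import hashlib
import hmac
import os

def hash_pw(password: str, salt: bytes) -> bytes:
    # scrypt is deliberately memory-hard; tune n/r/p for your hardware.
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

def verify_pw(password: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(hash_pw(password, salt), stored)

salt = os.urandom(16)
stored = hash_pw("correct horse", salt)
print(verify_pw("correct horse", salt, stored))  # True
print(verify_pw("wrong guess", salt, stored))    # False
```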
My guess is they think it will be extra work and they're trying to avoid it. The alternative that I hope isn't true is that their egos are so big they don't want to believe they did something insecurely, so they craft a story to tell themselves and others that actually what they did was smart. Either way, the users lose out in the end, and there's nothing we can do about it.
Otherwise, all of their other "crimes" (cookies are sent unencrypted, etc) are bad but not really unexpected from a large chain like this. I'm never really surprised when large organisations get these things so wrong, given the way many either contract this work out and/or [mis]handle it in-house.
Hopefully this new attention will have them change the policy.
You can do case-insensitive passwords with hashing/salting. It's just a matter of lower-casing the password before hashing it. (Edit: I'm not saying this is a good idea, of course!!)
I remember reading once that Facebook actually hashes multiple versions of your password (e.g. with the first letter upper-cased to handle the case where a phone auto-corrects it, and with all character cases toggled to handle the case where you left caps lock on). I wonder if there are any statistics on how often this kind of thing actually helps?
Of course, it seems pretty clear in this particular case that Troy is right and they're just storing your password in a case-insensitive database column.
For example, they might find that people who forget their password become less likely to use the site because when they get their new (hard to remember) password emailed to them they can't figure out how to change the password back to what it used to be. This means they end up resetting their password every week to do their shopping.
I don't doubt, though, that some sort of similar short-sighted thinking led to this decision. Is it really possible that such a large organization simply doesn't understand password policy? Not to mention, an organization that's on the board of PCI-DSS?
At the point where you are worrying about hashed passwords, your system has already been owned.
It also depends on the public reaction to the incident; people won't necessarily blame Tesco, and may instead blame the "1337 chinese super h4x0rs".
Yes, it's possible they two-way encrypt their passwords, but that's still not as secure as salted hashes, not to mention all the other security blunders.
Of course, you shouldn't do it, unless you have a good reason. E.g. There was talk a few months ago about a new law being proposed in France requiring companies to provide the police with user passwords.
Tesco are not advising that everyone go back to IE 3; they are simply stating this as a lowest common denominator, since I'm assuming that was the first browser to support whatever version of TLS they were using.
Also, is running an old version of ASP.NET and IIS really a problem? Does he advocate going through the expense of rewriting/retesting the entire website every time MS drops a new version? If they are pulling down security patches, this should be a non-issue.
This is not necessarily a high-intensity exercise, once every few years you simply make sure you haven’t fallen too far behind the eight ball. Certainly you don’t let key software components get 9 years old and nearly 5 versions out of date.
This is quite a bit less intensive than you describe, and I think it's reasonable to expect that a website taking payments not be more than a couple of years out of date.
This means that if a security vulnerability is discovered in it, Microsoft will provide a patch; therefore, from a security point of view, it isn't "out of date".
The number of years and versions is fairly irrelevant; there will be plenty of very secure systems in use by banks and the military that no doubt pre-date much of what Tesco is using by several decades.
> Dear ****,
> Thank you for contacting us on Lenovo Outlet.
> The password you requested is: ******
> Please note: This e-mail message was sent from a
> notification-only address that cannot accept
> incoming e-mail. Please do not reply to this message.
> Customer Service
It's amazing: a website will make me choose an 8-character (but not more than 14!) password with a number, a symbol, and at least one capital and one lowercase letter, but then it will let anyone who knows my birthday and favorite color change that password to whatever they like.
(Obviously, I can opt out of this feature by typing gibberish into those answer fields, but do you think your grandfather will think to do that?)
One variation that I know of that you can't opt out of is on some banks' two-factor authentication, where they have you log in by answering a security question first, and then entering your password once you get the question right. The great thing about this is that it makes the bank an easy testing ground for guessing your security questions to use on other sites.
The most common implementation is to require a password reset via a link sent by email (and only then do you answer security questions), which solves the particular problem you describe.
The reason for the security question is to prevent people who don't know you from being able to lock you out of your account. Of course, this doesn't work so well if the person knows whose account it is and can find the answers to the security questions online.
Just commenting quickly over breakfast, but if I find the time later I'll try to look up the cases. (I remember something about a kid ending up in court for using relative paths to explore a web server (/content/../../ etc.).)
Thank God there's still Waitrose.
Mailman warns users that passwords will be mailed plaintext, but why mail passwords to begin with?
As mailman has a fairly technical audience and reminds users that passwords are stored/sent in plaintext, I see it as a feature, not a bug.
Fortunately, someone on the main article did respond that having trace.axd enabled could result in 500 errors dumping a stack trace. That's a much clearer argument for why having tracing enabled is a bad thing.
This does not excuse the lapse, but it may help us understand why, if a computer system seems to work fine, they have little motivation to replace, upgrade or fix it, even if it is running on an old version of the platform.
They have the money to pay for expertise to do better.
It is a shame that "it hasn't failed yet" is seen by them as an excuse to keep a broken system in place.
That is changing - it is getting to the point where their business angle is as much "renting shelf space to brands" as selling what we want to buy (that is a general industry thing, not a Tesco-specific comment).
Even ignoring that cynicism, their business is far more than food and I suspect the food sales are dwarfed when you add everything else together. Food is just the product that gets us through the door to see the other stuff: meat, veg, bread and milk are the things that the rest of the occupants of large shopping centres tend to lack.
> not [in the business] of making highly-secure websites.
On the contrary: they are taking in and storing personal details and in some cases banking details (they sell banking services as well as physical products), and getting access to your account may give someone the ability to buy things on your credit card. It is my understanding that both by law and by the agreements they have in place with their chosen credit card processing partners they are required to live up to certain security expectations, and if they do not live up to those expectations they should be investigated, fined, and stopped from trading in those ways until they are up to scratch.
The claim that their password storage is "up to industry standards" is both unduly vague (though as the conversation was by tweet the length limitation there may be partly to blame for that) and simply wrong. They are not compliant with PCIDSS (http://en.wikipedia.org/wiki/Payment_Card_Industry_Data_Secu...) which I believe is considered the industry standard for handling credit card data or accounts that are associated with it.
However I am not finding anything about "you may not store passwords in plain text or using reversible encryption" in that PCIDSS wikipedia link. If it is part of the standard (as it should be) then it deserves to be in the wikipedia article.
Even if it isn't in PCIDSS, not storing passwords securely is certainly not industry practice for any company that operates as a bank in any of its parts.
You'd think so, but apparently that's not how it's working out.
Imagine if they let criminals run riot through their stores, on the theory that they're in the business of selling meat and potatoes to everyone in the UK, not of policing stores. Well, it's true, but putting their customers at risk is wrong and bad for business.
No, this web site nonsense hasn't been bad for business... so far. I wouldn't be surprised if a major breach would change that in a hurry, though.
Sadly, 1Password is probably the best solution there currently is. But this only shows how abysmal the current state of affairs is for security.
Has anyone seen this? :-)
I considered writing / emailing but didn't think it would do much good.
However your bank will have so much information stored about you that if your bank gets owned you're basically fucked anyway even if they don't get your password.
I imagine the servers that actually store this data however are secure to a ridiculous degree.
Not sure if that happens in practice...
The difference is tiny. If your images are on the same domain as the HTML, you'll have no extra overhead from the SSL handshake (thanks to HTTP keepalive), and the symmetric encryption used on an existing SSL connection will have a negligible impact on performance.
I can think of two reasons you want https, even for just images:
1) Even though modifying an image in flight will probably not have major security implications, you can't be sure. Perhaps a carefully edited and resized image could alter the layout of a form, tricking the careless user into publishing information they didn't mean to.
2) Perhaps your adversary is just a teenager bothering people trying to be productive at a coffee shop. Including a 10000x10000 image that makes the page unusable, or replacing your logo with porn isn't exactly something you want, even if it doesn't compromise anybody's bank account.
As a practical matter, if you embed HTTP images in your HTTPS page I believe you will get mixed content warnings in some browsers. See e.g.: http://stackoverflow.com/questions/3278341/help-with-ssl-vul...
Another strike against non HTTPs images is that if you don't have the 'secure' flag set on your cookies, these may get sent with requests, compromising your users' sessions. I guess you could consider not setting this cookie flag as a separate issue. Note that this is a passive threat, exploitable via sniffing.
Thirdly, an active MITM can return any MIME type in response to your <img src> link; you now have to be 100% sure that no browsers will try to process a malicious response of any type when it gets one instead of an image. Probably OK, but are you absolutely 100% sure? What about indefinitely, as browsers implement new features?
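On the cookie point above, the 'secure' flag is a one-line change; a minimal sketch using Python's standard library (the cookie name and value are illustrative):

```python
from http.cookies import SimpleCookie

# Sketch: mark a session cookie so browsers only send it over HTTPS.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"          # illustrative value
cookie["session_id"]["secure"] = True    # never sent over plain HTTP
cookie["session_id"]["httponly"] = True  # not readable from JavaScript

print(cookie.output())
```

Without `Secure`, the cookie rides along on any plain-HTTP request to the domain (including embedded images), where a passive sniffer can grab it.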
I wouldn't necessarily follow Facebook's lead on security practices.
Modern thinking is that if you care about SSL at all, you should force SSL for everything. For example if your "secure banking site" has an HTTP landing page, an active MITM will just sslstrip the customers and get them to enter their password in the wrong box.
This is why we have the HSTS (HTTP Strict Transport Security) header, and why sites that care deeply about security (Gmail, PayPal, LastPass, etc.) use it. (See http://www.chromium.org/sts).
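The header itself is trivial to emit; a minimal sketch (the helper function is illustrative):

```python
# Sketch of the HSTS response header a security-conscious site sends.
# max-age is in seconds (one year here); includeSubDomains is optional
# but common.
HSTS_HEADER = ("Strict-Transport-Security",
               "max-age=31536000; includeSubDomains")

def add_hsts(headers: list) -> list:
    """Append HSTS to a list of (name, value) response headers."""
    return headers + [HSTS_HEADER]

headers = add_hsts([("Content-Type", "text/html")])
print(dict(headers)["Strict-Transport-Security"])
```

Once a browser has seen the header over HTTPS, it refuses plain-HTTP connections to the site for max-age seconds, which is what defeats sslstrip-style downgrades on return visits.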
When you embed HTTP elements you can no longer trust their authenticity. If, for example, you load JS into your banking app over HTTP it would be possible for a man in the middle attack to substitute it with a script which could re-write the page or siphon off sensitive data.
HTTPS says "We can verify the site you're connecting to and all data transited between it and yourself is encrypted". Embedded HTTP content invalidates that premise.
Any idea why so many sites allow http after logging in? I realize a facebook account isn't a high value target, but it seems crazy that a site that size works over http. I imagine they must have some reason to support http (not implying it has to be a good reason).
The challenge comes back to the fact that "Secure" in an HTTPS context is an absolute; either everything is loaded over HTTPS and you get a shiny padlock or it's not and you get a red cross (depending on the browser, of course). The browser itself obviously cannot discern what the developer feels should be loaded over a secure channel and what should not nor is there anything in the HTML/HTTP spec to support this (other than HSTS to force HTTPS).
The simple reason not to serve up HTTP content on an HTTPS page is that rightly or wrongly, the browser will tell your users that your site can't be trusted. I understand your point, but that's the implementation you'll find in the browsers of today.
Facebook is definitely a high value target, just ask a Tunisian who was using it early last year: http://www.thetechherald.com/articles/Tunisian-government-ha...
Not having HTTPS everywhere by default (although at least it's now a configurable option) is extremely serious for a site like Facebook. The fallout from governments monitoring political dissidents is just one example, the potential harvesting of personal information (including connections) is another that's closer to home. Remember Firesheep? http://en.wikipedia.org/wiki/Firesheep
Why doesn't Facebook force it everywhere? Perception of processing overhead (although debunked by Google), integration impact with non-HTTPS content (impact on ads has long been claimed as a barrier), re-engineering of one of the world's largest sites, etc. But it's heading in the right direction; Twitter, Facebook and Hotmail, for example, have all made positive steps forward, and I'm sure we'll see a much greater prevalence of HTTPS as time progresses.
http://www.preprod.tescoentertainment.com/Store/Browse/Home/