A possibly workable solution would be to allow the creation of new keys only for the first-party origin. What I mean is that whatever.example.com has full access if that's what the user is currently viewing directly in their browser.
<wildcard>.example.com embedded via iframes could get either read-only access, or read-write access to existing keys. Also maybe limited to, let's say, 4K.
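For concreteness, a rough sketch of that policy in TypeScript (the names and the 4K cutoff are just illustrative, not any real browser API):

    type Access = "full" | "readOnlyPlusExisting";

    const EMBEDDED_WRITE_LIMIT = 4 * 1024; // the 4K cap suggested above

    function storageAccessFor(frameOrigin: string, topOrigin: string): Access {
      // Full access only for the origin the user is actually viewing.
      return frameOrigin === topOrigin ? "full" : "readOnlyPlusExisting";
    }

    function mayWrite(access: Access, keyExists: boolean, bytes: number): boolean {
      if (access === "full") return true;
      // Embedded frames may only update keys that already exist, within the cap.
      return keyExists && bytes <= EMBEDDED_WRITE_LIMIT;
    }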
This sounds like a really complicated solution though. Any better ideas?
If the web app really needs permanent storage then that permanence of storage needs to be granted explicitly by the user.
> An application can request temporary or persistent storage space. Temporary storage may be easier to get, at the UA's discretion [looser quota restrictions, available without prompting the user], but the data stored there may be deleted at the UA's convenience, e.g. to deal with a shortage of disk space.
> Conversely, once persistent storage has been granted, data stored there by the application should not be deleted by the UA without user intervention. The application may of course delete it at will. The UA should require permission from the user before granting persistent storage space to the application.
Check out CLOCK-Pro and LIRS.
Doesn't the website answer that? Just follow the spec!
Firefox isn't vulnerable…
It would really suck for GitHub (for example) not to be able to use local storage in their UI because pilif.github.com used up all available storage for the whole domain.
Perhaps for storage purposes, this is not a meaningful assumption.
After either of those limits is hit, prompt the user to grant another increment of allowed storage, or possibly a customized amount (and disclose how much each domain and subdomain is using under a detailed view option). In the end, this puts the power back in the hands of the user and prevents malicious usage, while not allowing one subdomain to effectively deny storage to the others.
Beyond that, just provide a good (simple) UI for deleting stuff, one which could suggest candidates for deletion based on heuristics like you suggest. E.g. iframes shouldn't need so much, and hopefully less-visited sites would be suggested for deletion too.
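A sketch of what that increment scheme could look like (the 5 MB step and promptUser are made up):

    const INCREMENT = 5 * 1024 * 1024; // grant storage in 5 MB steps (arbitrary)

    interface Grant {
      domain: string;
      grantedBytes: number;
      usedBytes: number;
    }

    // Stand-in for whatever UI the browser would actually show.
    declare function promptUser(message: string): boolean;

    function tryWrite(grant: Grant, bytes: number): boolean {
      while (grant.usedBytes + bytes > grant.grantedBytes) {
        const mb = (grant.usedBytes / 1048576).toFixed(1);
        const ok = promptUser(
          grant.domain + " wants more storage (currently using " + mb +
          " MB). Grant another 5 MB?");
        if (!ok) return false; // user said stop
        grant.grantedBytes += INCREMENT;
      }
      grant.usedBytes += bytes;
      return true;
    }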
I wouldn't mind Google Maps filling some 100 GiB with map data so that I can have detailed maps while I am offline in some remote African town.
Granted that is the raw data, but you can extract the regions you need easily and then render them on demand.
But good to know anyway!
This isn't perfect in that your localStorage could still be filled up slowly if you leave a page open in the background, but I think this solution is robust against many different techniques.
Maybe the dev thinks they'll need 1 GB, but I'm not ready to give them that. The same way an app asks for certain privileges when you install it on, say, Android.
I wonder why anybody thought it was a good idea to let any web page store some random stuff on my computer. Cookies were bad enough already.
The developer lacks knowledge of the user's requirements; that is why they can't answer the question. For "power users", the user is far better placed than the developer to answer the question of how much local storage space should be used.
For a naive user the question comes down to "this website wants to put stuff on your computer; do you think your interaction with the website warrants them doing this?". That's more a question of the website's value to the user than it is a technical question.
"What's a website? I just double-clicked on my e-mail google and now Foxfire wants to fill up my disks. Is this going to put a virus on my Microsoft? Why don't they put it up in the clouds?"
However, I'd say the problem is that prompts may pop up whatever you're currently doing and ask for things most users cannot make an informed decision about. Eric Lippert once nicely summarised these problems. And while browsers' confirmation dialogs are usually no longer modal, the problem persists. In the vast majority of cases the wanted result is »increase storage limits«. That this might pose a denial-of-service risk is something users are often not aware of. And if you try telling them up-front, they either won't read it or are needlessly scared. It's a hard problem, actually, especially given user habits concerning message boxes, confirmations and the like.
In the end, the problem is that one page can refer to other domains/subdomains in its document, and those can execute scripts and use localStorage. They have to, though, so that you can embed an HTML5 game from some other site you liked in your blog. It comes with the territory.
Sadly, it seems like the best answer is the horribly-UX'd prompt: "do you want to allow x.y.z to store local content on your computer?", the same way you have to verify downloads and know exactly what you are running locally.
Thankfully, in this case, domain registrations are expensive. Filling a 16 GB iPad with this technique would cost around $10,000 in registrar fees: at roughly 10 MB per registered domain, that's about 1,600 domains at $6 or so each. A 128 GB SSD could be filled for under $100,000.
...So I wanted to come in here and say "cost prohibitive!" but... maybe not, given that most devices will be at least partially filled already.
Once they hit that point, show a prompt below the toolbar that displays how much data is being used by the whole domain, in real time, and allow it to keep filling up with data until the user says stop or always allow.
Then again, prompting is really annoying, and most people just click "okay" without comprehending.
Then, whenever space is used up, ask the user if they're willing to authorize extra space for the specific subdomain being added to. If they say yes, then it's authorized for that single subdomain (but not other ones).
I don't think there's any "automatic" way to do this, without asking the user. And I think most users would prefer to be asked.
You'd have to store the other domains your page has written to in its own local storage area, but it doesn't seem to me like the bookkeeping would be that complicated.
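A minimal sketch of that bookkeeping (the key name is made up):

    // Keep a list of the subdomains this page has written to under a single
    // key in its own localStorage area.
    const REGISTRY_KEY = "written-domains"; // hypothetical name

    function recordWrite(domain: string): void {
      const raw = localStorage.getItem(REGISTRY_KEY) ?? "[]";
      const list: string[] = JSON.parse(raw);
      if (!list.includes(domain)) {
        list.push(domain);
        localStorage.setItem(REGISTRY_KEY, JSON.stringify(list));
      }
    }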
You could use a coarse rule where all data in a.mydomain.com counts against mydomain.com, with a larger quota of n * the per-domain limit.
You could visit as many legit-site.tumblr.com addresses as you want with this rule.
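Something like this (numbers illustrative):

    // All hosts under one registered domain draw from a single shared pool of
    // n * the per-origin limit, so no subdomain trick yields unbounded space.
    const PER_ORIGIN_LIMIT = 5 * 1024 * 1024; // the typical 5 MB cap
    const N = 4;                              // pool multiplier (arbitrary)
    const SHARED_POOL = N * PER_ORIGIN_LIMIT; // for all of *.mydomain.com

    function canWrite(usageByHost: Map<string, number>, bytes: number): boolean {
      let total = 0;
      for (const used of usageByHost.values()) total += used;
      return total + bytes <= SHARED_POOL;
    }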
There's no category for Windows 8 on MS Connect, so when we found a bug in Windows 8 RTM, I looked up the name of an MS employee working on the feature in question on the MSDN forums, then through Google found his LinkedIn profile, where he had luckily published his email address.
Microsoft should be ashamed.
It seems like the best way to file a bug these days is to create a blog post and publish it to HN, Reddit or the like...
This has impacted my system multiple times since upgrading to Windows 8. Fortunately I know a workaround (eject the disc in the optical drive), but it's still annoying that I cannot even report it.
The bug we found is affecting a lot of Swiss customers (I admit Switzerland isn't so big) and it took a month until I got a useful reply.
Now we have a bug number and were told that the issue should be fixed "early this year" and changes would be checked in in March. Whatever that means...
At least their employees were kind enough to reply to my email. But this company should really improve their error reporting.
But hey, who cares about by-design black swan latency characteristics for real world use cases when the published benchmarks look so great.
Either way, it was a good call. Automatically playing music and filling my hard drive with no warning is a terrible idea.
EDIT: The link has been changed to the blog post describing the phenomenon. Good riddance!
Is there some generic way to know when a domain should be treated as a subdomain, or do they basically hardcode the exceptions?
Example: do domain1.co.uk and domain2.co.uk share the same limit in Firefox? Probably not, but how does it know to treat them as separate?
I imagine these lists will become a real headache when the recent TLD auction is over. Is there any work being done on a more dynamic system (DNS TXT records?)
If you are, say, the North Korean government, or have a close relationship with some small island registrar, you can register any number of domains you like for peanuts.
Or, you could buy one regular domain and then ask to be put on the public suffix list. I'm guessing that would have the same effect for less money.
http://publicsuffix.org/submit/ (and the rest of the site, obviously)
It's nice that this exploit is presented openly as a proof of concept, and includes a button to undo the damage. Many people, upon finding this, would try to use it for shadier ends.
Though I can't quite imagine why anyone would want to do this to some random stranger. Unless you knew the visitor or had some means of personally identifying him/her, there are more devastating ways of filling up a remote HD with just an IP and hostname (nmap and friends come to mind).
You must be new to the internet?
> Unless you knew the visitor or had some means of personally identifying him/her, there are more devastating ways of filling up a remote HD with just an IP and hostname (nmap and friends come to mind).
There's a few things:
This works by sending someone a link, so you can target people without knowing their IP. It's also so easy a kid could do it. Therefore, kids will do it, just to "fuck with each other's shit". Not to mention, they'll do it to the school and the library, etc. etc. There are also enough people doing things "for the lulz": spam this link to a thousand people, crash a few PCs, hur hur hur. Again, the fact that it's browser-based and not IP-based allows for different types of attacks. They can spam specific communities and fora they don't like or are at odds with.
By the way, when I ran that site in Opera, it asked me whether I wanted to grant the site extra storage space, which I declined. I didn't feel like testing it in Chrome and crashing my things right now, but am I correct in assuming Chrome would not ask for this extra storage space, but simply take it, without any kind of upper limit?
But yes, indeed, if the machine's already vulnerable to something else, then that is possibly much worse.
Getting someone to visit a web site is relatively easy.
The approach used for runaway scripts would work quite well here.
This page is filling up your hard drive; do you want to a) crash, b) clear all data from this domain?
I.e. nobody. Why the hell is WebKit not following the standard here? They even implemented a permission dialog so you can allow an app to go over quota.
I'm going to guess you've never been Goatse'd.
They'll have to fix this bug, but I won't be surprised if they try to remove localStorage entirely soon.
1. JS had language-level support for asynchrony.
2. The implementation of retrieval was performant enough, or allowed for some way to control the granularity of reads from code.
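For illustration, if the language had async/await, a replacement API could stay almost as simple (this wraps localStorage purely for demonstration; a real implementation would sit on an actually asynchronous store):

    const asyncStorage = {
      async getItem(key: string): Promise<string | null> {
        return localStorage.getItem(key);
      },
      async setItem(key: string, value: string): Promise<void> {
        localStorage.setItem(key, value);
      },
    };

    // Usage reads almost like the synchronous API:
    async function demo(): Promise<void> {
      await asyncStorage.setItem("greeting", "hello");
      console.log(await asyncStorage.getItem("greeting"));
    }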
I really dislike the idea that the only simple API for local storage will be gutted for reasons quite tangential to what it does.
Both of these things are true.
1. The main page contains an iframe which serves this script:
2. This script writes a 2,500,000-character string to local storage, which occupies at least 2.5 MB, and 5 MB if stored as UTF-16 like JS strings are. This matches the maximum storage per sub-domain.
3. This script then reloads the iframe on a different subdomain, with the same script. GOTO 2.
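A sketch of what that iframe script amounts to (the key name and exact redirect logic are my reconstruction, not the site's actual code):

    // Step 2: fill this subdomain's quota. JS strings are UTF-16, so
    // 2,500,000 code units come to roughly 5 MB.
    const payload = new Array(2500001).join("x");
    try {
      localStorage.setItem("data", payload); // key name is a guess
    } catch (e) {
      // Quota exceeded for this subdomain; move on anyway.
    }

    // Step 3: hop to the same script on the next subdomain (GOTO 2).
    const index = parseInt(location.hostname, 10) || 0; // "7.filldisk.com" -> 7
    location.href = "http://" + (index + 1) + ".filldisk.com" + location.pathname;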
Session state is released as soon as the last window to reference that data is closed. However, users can clear storage areas at any time by selecting Delete Browsing History from the Tools menu in Internet Explorer, selecting the Cookies check box, and clicking OK. This clears session and local storage areas for all domains that are not in the Favorites folder and resets the storage quotas in the registry. Clear the Preserve Favorite Site Data check box to delete all storage areas, regardless of source.
[Edit: perf is very back and forth. Slow, then fast, then slow again. It does work though.]
Bad, non-conforming implementations do.
Is it not possible for Opera to keep their own implementation of LocalStorage (and other things)?
Am I wrong in assuming RenderEngine != Browser?
"Allow example.com to track your location?" [Yes] [No]
"Allow a1.example.com to store x MB of data locally?" [Yes] [No]
> The HTML5 Web Storage standard was developed to allow sites to store larger amounts of data (like 5-10 MB) than was previously allowed by cookies (like 4KB).
Main difference is that cookies are uploaded to the server with each request, while localStorage is not.
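In code the contrast is just this (values illustrative):

    document.cookie = "theme=dark; path=/"; // rides along in the Cookie header
                                            // of every request to this domain
    localStorage.setItem("theme", "dark");  // never leaves the machine unless
                                            // a script explicitly sends it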
It would be better to have sane and safe defaults in the browser, rather than pester the user. Would cookies have worked if the browser asked for permission on every website?
I'm actually not sure how much that'll change Opera, or how it'll affect their way of innovating and choosing new features to include.
(It's not like keeping state in the URL is hard - using cookies just looks marginally better.)
A root domain like www.example.com could use up to 10 MB of storage, with sub-domains counting towards that limit. Any domain trying to use more would automatically result in a user prompt. An exemption could be made for domains/subdomains that present a valid SSL certificate; the whole idea is to prevent abuse.
That said, if you're one of the few that has IPv6 access, this could turn into an issue pretty quickly.
IIRC Apple were selling MacBook Airs with no TRIM support if the user didn't pay to upgrade OS X.
If a malicious user felt so inclined, they could, with just a few domains, create a write load that would quickly fragment the SSD and hurt its performance.
Isn't there a limit of 5 MB(?) per domain defined in the HTML5 spec for localStorage?
He explains here that most browsers (except Firefox) don't follow the standard closely enough, and ignore the exception for subdomains, i.e. 1.filldisk.com, 2.filldisk.com, etc.
It's the one about cookies, and .co.uk sites (i.e. every commercial site in the UK) all sharing the same cookies, because they all look like subdomains. Or was it all .friendly-hosting-company.com sites?
The fundamental problem is, there's no easy way to distinguish domains and subdomains.
http://publicsuffix.org/ has the list that you use to distinguish them.
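Roughly, you walk up the hostname until the parent is on the list; the first label below a public suffix is the "registrable" domain that should own the quota. A sketch with a tiny hard-coded excerpt of the list (the real one has thousands of entries plus wildcard rules):

    const PUBLIC_SUFFIXES = new Set(["com", "co.uk", "github.com", "tumblr.com"]);

    function registrableDomain(host: string): string | null {
      const labels = host.split(".");
      for (let i = 0; i < labels.length - 1; i++) {
        const parent = labels.slice(i + 1).join(".");
        if (PUBLIC_SUFFIXES.has(parent)) return labels.slice(i).join(".");
      }
      return null; // host is itself a public suffix (or an unknown TLD)
    }

    // domain1.co.uk and domain2.co.uk come out as distinct registrable domains:
    console.log(registrableDomain("domain1.co.uk"));    // "domain1.co.uk"
    console.log(registrableDomain("pilif.github.com")); // "pilif.github.com"
    console.log(registrableDomain("1.filldisk.com"));   // "filldisk.com"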
I'm sure any web developer will tell you it's been a problem from the moment there was more than one browser. This is just a particularly hilarious example.
Should I include a part about the possible abuses?
Oddly enough, Google Chrome prompts you to grant file system access, but doesn't explicitly tell you how much space is being asked for.
The thing I still find disturbing is that, unlike cookies, there's no easy or direct way to view the contents of localStorage (other than its metadata).
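Enumerating it from the console is easy enough, though; it's the browsers that don't surface a nice viewer:

    // Dump every key/value pair in this origin's localStorage.
    for (let i = 0; i < localStorage.length; i++) {
      const key = localStorage.key(i)!;
      console.log(key, "=>", localStorage.getItem(key));
    }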
The author of Flashblock even specifically mentions YouTube on the extension page:
Youtube videos not blocked: This is because they are now increasingly HTML5 videos. I plan to add HTML5 blocking in the next version. Meanwhile you can try out an experimental version at:
This indicates there is a demand for the autoplay blocking feature alone, regardless of the medium.
Not that I care about unexpected noises in a situation like this, but I hope we all agree that unexpected and unwanted noises are genuinely annoying to many people.