I think it was a sham either way (without legislation, there's really no reason for anyone to adhere to the requests, and as users we have little to no way of knowing if our requests are being fulfilled), but MS screwed the pooch here by completely violating the intent and implementation of the spec.
Sending "DNT: 1" in your request header means "please do not track me". Sending "DNT: 0" means "I don't care if you track me". Sending no DNT header (every request on a browser where the header is unsupported or a browser has not received a preference from a user) means "I have not expressed a tracking preference or do not have the ability to do so".
By having a default sent either way as MS did, it violates the semantic meaning of the header, meaning that the receiving server can't distinguish between "no preference" and "don't track"/"go ahead". It's like a pair of radio buttons: selecting neither is valid, but once you've made a selection you can't get it back to an unselected state.
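To make the tri-state concrete, here's a minimal sketch of how a server might read the header (the dict-of-headers shape is illustrative):

```python
# Sketch: the three DNT states a server can see. A forced default
# collapses "unset" into one of the other two, which is the problem.

def tracking_preference(headers):
    """Return 'no-track', 'allow', or 'unset' per the DNT semantics above."""
    value = headers.get("DNT")
    if value == "1":
        return "no-track"   # "please do not track me"
    if value == "0":
        return "allow"      # "I don't care if you track me"
    return "unset"          # header absent: no preference expressed

print(tracking_preference({"DNT": "1"}))  # no-track
print(tracking_preference({}))            # unset
```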
I've seen this repeated a couple of times and I'd like to know the reasoning behind this argument. Is making DNT opt-out bad? It seems like MS would catch more flak had they made it opt-in, though I may just be misunderstanding the situation.
The spec does not say DNT should be turned on by default. The spec says it has to give the users full knowledge of what turning on DNT means which IE 10 does on first run.
How is that not blackmail? You might as well say if everybody started reading the list of ingredients of food they buy and take them seriously, corporations would just start to lie and put whatever they want in there. Sure, maybe some would, but that's what jail is for.
People intentionally and knowingly reading the ingredients / turning on DNT is the correct method.
I'm struggling to find a valid food-ingredient-analogy but it would be very different from your scenario. It would have to involve Microsoft auto-scanning the ingredients without the user ever turning this feature on. Then Microsoft would have to do something with the results that hurts/disparages certain food producers. And then the food producers would stop listing ingredients in a form that Microsoft can OCR, not lie about them.
I'm not sure, but let's assume they are. Doesn't respecting Do Not Track reduce their value even further? I'm not arguing that ad companies shouldn't respect Do Not Track, just considering the incentives and disincentives in front of them.
Honestly, I don't understand this reluctance to name wrongdoers, especially for something like this where verifying the wrong is trivial (e.g. load up a client site and find the offending code in source).
It seems to me that the harm is greater in not naming names - reputation is important, and if you take steps to invade users' privacy then your reputation can and should suffer for it.
"Witch hunt" generally refers to persecution of someone without any regard to whether they're innocent or guilty, so I presume the comment was intended to admonish against guessing which ad companies may be using this technique.
Asking who these ad companies are addresses the wrong topic. The VP for Internet Explorer effectively said[1] that the vulnerability in Internet Explorer would not be an issue if nobody exploited it.
This could be used to leak the entropy gathered for encryption key generation (like TrueCrypt, the new MEGA site, or any site that uses mouse/key movements for entropy).
While this seems like something that Microsoft should fix as a matter of urgency, I don't believe the problem is as severe as is being portrayed.
In order to get any meaningful information from this attack, you would need to know what application/website the user is currently using (or send them to it), where it's positioned on the screen and the exact layout of the subject. The interface would also have to be either mouse- or meta-key driven, which isn't a common facet for sensitive inputs (passwords, bank transfers, and private messages off the top of my head).
As they mention in the article, if the user is using an onscreen keyboard, then the trace essentially amounts to a keylog. And since on screen keyboard usage would have very distinctive patterns, if you had a large enough dataset, you should be able to extract those logs relatively easily.
I guess it shouldn't be too hard to create an algorithm that maps the movements to potential numbers on a visual type pad. Once you have the numbers, you just need to match them to patterns which could be cc numbers, phone numbers, bank accounts and so on. You just need to collect enough to find some useful data.
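A rough sketch of that mapping, assuming the attacker knows the keypad's layout and position on screen (all coordinates and sizes here are made up):

```python
# Sketch: map leaked click coordinates back to digits on a known
# on-screen keypad. Layout, origin and key size are hypothetical.

KEY_SIZE = 40            # assumed width/height of each key, in pixels
PAD_ORIGIN = (100, 200)  # assumed top-left corner of the pad on screen
LAYOUT = [["1", "2", "3"],
          ["4", "5", "6"],
          ["7", "8", "9"],
          ["",  "0", ""]]

def click_to_digit(x, y):
    col = (x - PAD_ORIGIN[0]) // KEY_SIZE
    row = (y - PAD_ORIGIN[1]) // KEY_SIZE
    if 0 <= row < len(LAYOUT) and 0 <= col < 3:
        return LAYOUT[row][col] or None
    return None  # click landed outside the pad

clicks = [(110, 210), (190, 250), (150, 330)]
print("".join(d for d in (click_to_digit(x, y) for x, y in clicks) if d))  # 160
```

Collect enough of these sequences and, as the comment says, you can pattern-match them against card numbers, phone numbers and so on.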
Whether it's common or not isn't the issue, it's whether it's done at _all_ by banks and suchlike.
My bank on their online site asks for my account number, a memorable piece of data and a 6 digit passnumber that they generate (and I can't change). The passnumber is entered using pull-down menus for each digit, always ordered 0-9.
So, no, an attacker wouldn't have all the information they need, but if they're able to take advantage of this, they'd certainly have more than they should.
And it's not just for general users: some sites often offer additional functionality in this area for users with accessibility requirements (large on-screen number pads, etc).
So, yes, I'm sure the % of affected sites is low, but just 1 bank whose online system is compromised by this is 1 bank too many.
Even if mouse position tracking is permitted, it should clearly be limited to the current tab. Cross-tab, and certainly cross-application, tracking is just clearly wrong.
Agreed, which is why Microsoft should be held to account for not prioritising a fix. However, I felt that the portrayal of this particular hole in the linked article made it out to be more than it is.
Ignoring the keypresses (to prevent inadvertent credential sharing), and just doing mouseclick heatmaps while anonymizing the IPs involved and sites visited would be interesting (you'd want to keep screen size/browser.version data, for an understanding of what the heatmaps represent).
It would provide lots of info without compromising many details.
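One way such anonymized collection might look, sketched under the assumption that a salted hash of the IP is acceptable anonymization and a coarse grid is enough for a heatmap:

```python
# Sketch: bucket clicks into a coarse grid keyed by screen size, and
# hash the IP so repeat visitors can be counted without storing identities.
import hashlib
from collections import Counter

GRID = 50  # bucket size in pixels

def anonymize_ip(ip, salt="per-deployment-secret"):  # salt value is illustrative
    return hashlib.sha256((salt + ip).encode()).hexdigest()[:12]

heatmap = Counter()
seen = set()

def record_click(ip, screen, x, y):
    seen.add(anonymize_ip(ip))                    # count visitors, not identities
    heatmap[(screen, x // GRID, y // GRID)] += 1  # coarse bucket; raw coords discarded

record_click("203.0.113.5", "1920x1080", 960, 540)
record_click("203.0.113.7", "1920x1080", 970, 530)
print(heatmap.most_common(1))  # both clicks land in the same bucket
```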
How would it provide lots of info? Without any data on which program they were interacting with at the time, I'd have thought data on the mouse movements would be fairly useless?
Interesting that this has been around for so long. What are the ramifications of leaking mouse/ctrl/alt/shift if they don't have any context about what you are clicking on?
Off the top of my head I know INGDirect had a virtual pinpad. Combine this with an XSS vulnerability and I could easily send you a link to log in to your bank website. The link would then collect this type of mouse-tracking data.
The INGDirect virtual pinpad changes the arrangement of the numbers every time it loads and hides them when you click. That does provide some protection.
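A minimal sketch of that defence: shuffle the layout on every load and have the page report only which slot was clicked, never the digit (details are illustrative, not INGDirect's actual implementation):

```python
# Sketch: a per-session shuffled pinpad. A leaked click position alone
# doesn't reveal the digit, because the layout differs every load.
import random

def new_pinpad_layout():
    digits = list("0123456789")
    random.shuffle(digits)
    return digits  # index i is the on-screen slot, value is the digit shown there

layout = new_pinpad_layout()
# The server remembers this session's layout; the page only reports
# which slot was clicked.
clicked_slot = 3
print("digit entered:", layout[clicked_slot])
```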
I keep seeing websites use those things, and it drives me utterly insane. Not only is it an onscreen keyboard, but nothing stays still when I'm using the damn thing. I hope more websites don't think it's a good idea.
I was under the impression this was actually a pretty good defence against usb keyloggers that are trivial to install on a public computer. Is that not the case? (Folks just not that concerned about that vector anymore?)
If someone has enough access to a computer to install a keylogger, they probably have more than enough access to just read whatever is being "typed" using the on screen keyboards. Inject javascript, read it out of the browsers memory, whatever.
Of course you could be using such a system to defend against a hardware keylogger, in which case I'd be thinking long and hard, trying to decide who I pissed off.
Edit: Just realised you /were/ referring to a hardware keylogger. My apologies.
Yes, if someone had access to install arbitrary software on your computer they could attempt to get behind any on-screen keyboards... but given the wide variety of them, and how hard it would be to detect one based on its code alone, I doubt anyone would bother.
Software keyloggers log which keys you type (obviously) but some also take a screenshot whenever you click to defeat on-screen keyboards. It sounds like INGDirect's keypad is designed to defeat this attack.
Yup, I assume that's the idea. I can't imagine many consumer banking accounts are hacked via hardware keylogger though. Presumably if you have physical access to a computer, you can usually install software on it anyway. A well positioned webcam could probably see what you're clicking on with the onscreen password prompt as well.
What kind of security-by-obscurity banks are people using, where they have to enter numbers with the mouse to avoid keyloggers and answer silly questions like my mother's last occupation?!
Any reputable bank will give you a small external card reader with a keypad, where you have to insert your smartcard, enter your PIN and punch in the challenge code from the website. 2-factor authentication is a solved problem, plus there's no risk of keyloggers since the device is disconnected from the computer. (Most come with the option of connecting to the computer via USB to save you from manually entering the challenge-response, but your PIN is always entered on the external keypad.)
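The shape of that flow, sketched with a plain HMAC standing in for the EMV CAP scheme real readers use (the key, challenge format and code length are all illustrative):

```python
# Sketch of challenge-response 2FA: the bank issues a challenge, the
# disconnected reader computes a short code from a card-held secret,
# and the bank recomputes it to verify. NOT the real CAP algorithm.
import hmac, hashlib, secrets

CARD_SECRET = b"key-held-only-by-the-smartcard"  # illustrative

def reader_response(challenge, pin_ok=True):
    """What the offline reader computes after a correct PIN."""
    if not pin_ok:
        return None
    digest = hmac.new(CARD_SECRET, challenge.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % 10**8  # 8-digit code to type in

# Bank side: issue a challenge, later verify the typed-in code.
challenge = secrets.token_hex(4)
code = reader_response(challenge)
assert code == reader_response(challenge)  # bank recomputes and compares
print(f"challenge {challenge} -> response {code:08d}")
```

Since the secret never leaves the card and the code is typed in by hand, a keylogger only ever sees a one-time value.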
Ugh, what a hassle. My bank only requires the card reader for potentially harmful things, like transferring money to someone I've never sent money to before. If it asked me to use it every time I logged in, I would switch banks.
Looking at the holes and crocks of shit we see every damn day related to HTTP, HTML, JavaScript and the whole programming model that surrounds them, it's about time someone just shot it all and started again putting security and privacy first rather than playing whack-a-mole all the time.
Unfortunately I fear this is not possible based on the sheer momentum that this ball of sticky tape and string has.
I think the sheer number of articles about browser and protocol vulnerabilities, leaks and problems that plaster HN all the time backs up my assertion.
EDIT: just to add, my frustrations are based on having to spend 5 hours porting some JS code so it works properly on all browsers.
If 20 years ago some clairvoyant genius had foreseen what web applications would become, and decided to create a sane and secure alternative, and by some miracle had managed to pull it off without falling into numerous tarpits owing to an unsinkable combination of intelligence, persistence and vision, it still never would have taken off because it would be too complicated to gain traction compared to the simplicity of HTML.
The key thing you need to wrap your head around is that software ecosystems are not designed; they accrete and evolve organically, and no one has any power to change that.
No. The desktop is not cross-platform, cross-media and installation free. Your flippant dismissal of the real advances that the web brought to common computing renders your opinion moot.
As a consultant, part of my job is to build web applications when the customers demand it, and web development is anything but cross-platform.
The amount of hacks one has to write to have CSS, JavaScript and HTML working flawlessly across all desired operating systems, browser versions and handsets renders the cross-platform argument moot.
As for installation free, the same is possible with the desktop applications as well.
There's never been anything in the history of computing that is 10% as cross-platform and cross-media as the web. Your head is firmly embedded in the sand my friend.
You mean you could make a statement about my ignorance in isolation, but you obviously can't make a statement that supports that opinion. If you want to make outrageous claims like desktop programming is cross-platform and cross-media then you need evidence, you can't just proclaim things like "desktop programming can be installation-free". You need evidence. Especially since you were responding to my post and going off on a bizarre unrelated tangent.
It's an Internet Explorer vulnerability. Shell level IE exploits are one of the reasons Firefox and Chrome have done so well, because they're more secure. Don't paint every browser with the same brush.
An interesting statement, which would be more interesting if backed up by facts. All browsers are basically virtual machines running remote code; how could they be secure?
Yes I know but the sheer number of articles aggregated across all browsers point to the architecture of the web being completely flawed.
Consequently, they're all as bad as each other.
"More secure" is subjective i.e. it's more secure to us public but who the hell knows there aren't 100 zero day's out there in the wild changing hands for thousands of dollars.
I think you're being needlessly dismissive of how hard a problem it is. There are legitimate use cases for capturing mouse position. You could certainly make a secure browser, but you're also going to strip it of much of the functionality that we enjoy today.
The problem doesn't exist because people just aren't paying attention to security, or because the entire architecture of the web is flawed. The problem exists because it's a damn hard problem to deliver arbitrary executable code to clients on demand and let them run it and do useful things with it without compromising security and privacy. The browser vendors have really stepped it up in the last few years, and it takes a very narrow view of the web to see otherwise.
Not really. It's not a hard problem to solve if you start at the right end of it rather than retrospectively apply it.
Capturing the mouse position is perhaps legitimate for an "application" but not necessarily a "document". The web has conveniently turned from an information medium into a catch-all for pretty much every hack imaginable. That's where it's all fallen over. "Documents" are now "applications". This has led to all of the crocks of shit out there. Office VBA and programmable documents are in a similar state.
I firmly believe we need to make the distinction between a document and an application and have appropriate sandboxes and/or virtualization for each.
> I firmly believe we need to make the distinction between a document and an application and have appropriate sandboxes and/or virtualization for each.
You can go back to 1993 and turn your web application platform (a.k.a. browser) into a simple document reader by disabling JavaScript (+ plugins, though who keeps those enabled anyway). Good luck with that.
Oddly enough IE is the browser that seems to keep the option of disabling Javascript buried the deepest within their context menus. In Firefox it's just Preferences -> Content -> uncheck "Enable Javascript" (I do this to avoid NYTimes' paywall, lol) but in IE you have to scroll through an exceedingly long list of checkboxes that's a couple levels deep into their menus to find "Disable active scripting" because they still refuse to call it Javascript. I always forget where it is and have to hunt for it every time. Obnoxious.
You are supposed to set the security level of the Internet zone to "High" (the default on Windows Server), or add the sites needed to the "Restricted Sites" zone.
Apparently there is a need to deliver documents with interaction. A browser is an application that delivers information, which delivers interaction... It's a mess, real world is never clean. And it's always changing...
Back in the old days they wrote software which wrote documents (in fact the company I work for actually does this) and wrote software which parsed documents. If you need to interact with a document, you write an application which processes it and creates another document!
That neatly assumes that documents are data and not code.
And as the complexity of the documents increased, the likelihood that some bug was written into the interpreter increased geometrically.
The thing is we had all the things you say are great, and in spite of this we have created the browser as an application environment. Evidently people don't want a document web.
Adobe PDF and Word are evidence that document readers attempt to become web browsers with time anyway.
They don't need to be in order to have weaknesses; just look at all the security problems coming from Adobe's PDF reader, which afaik can't execute code, but where the input (PDF documents), if cleverly crafted, can create buffer overflows allowing for arbitrary code execution.
PDF is a very small subset of PostScript as a programming language, centered around objects that are usually in a compressed stream. Pages, text, images, drawings, etc. are all objects and appropriately linked¹. The language itself cannot really do more than creating objects and dictionaries; no programming is left.
Embedded JavaScript is another matter, but it's not needed to be executed for parsing the document.
_____________
¹ This gives rise to interesting applications, e.g. you can remove pages or images by just removing a link in the PDF. Yet the object would then still be there. There are some PDFs out there where sensitive information is buried in unlinked objects that still exist within the file. But that's obviously besides the point.
They are not needed to parse the file. PDFs can also contain Flash content or movies or 3D models or any number of other objects. It's just a blob in the file, same as OLE. Few applications beyond Adobe Reader care about implementing those parts, though.
How many downvoters really think that making bug-free software is effectively impossible? (I suppose we must leave some meta-uncertainty about the truth of math.) Sure, many programmers would be out of work if they no longer had a large bug database to work on, but there are numerous areas in hardware and software design that have incredibly low (or 0) bugs. NASA's work is usually brought up in these conversations, for instance. Does everything need to be designed and constructed so carefully? Probably not. But I would love it if a standard web browser was, given how important the browser is.
It's not so much effectively impossible as it is not possible to win with it in the market.
NASA has been able to produce high-quality code, but even their stuff is not 100% bug free. Even if you consider it to be close enough, their cost is incredibly high for the amount of functionality, perhaps 10-100x the usual. So while you're slowly building a nearly-bug-free system NASA-style, you get beaten to market by another guy with a buggier system that gains popularity and becomes entrenched before you even ship.
Yes, NASA uses formal methods to exhaustively test every possible state the system could enter during execution. This type of testing can cost several hundred dollars per line of code. And it still doesn't prove that the code is 100% bug free because the absence of bugs is not empirically provable.
But on the other hand, as others have pointed out, browsers are now an important enough application platform that they probably should be tested to a similar standard as an OS kernel is.
Personally, having been warned time and time again over the years that IE is one of the least secure browsers available, I just won't use it anymore (except for work-related purposes in a corporate environment where I'm forced to use IE). IE's reputation is terrible for a reason, and I think we're seeing that the buggier, more popular/entrenched system that burns its users over and over again will eventually fall out of favor.
Certainly there's something in between the extremes of Microsoft and NASA in terms of testing and debugging standards.
I would be amazed if there was anything out there with 0 bugs.
All you can really do is test the hell out of something until your chance of encountering a bug during actual use becomes vanishingly small.
You might be able to engineer a browser in this way, but it would be so ludicrously far behind all of the buggy, insecure browsers in terms of functionality that its security benefit would be close to zero because nobody would actually use it.
That's a really important observation that bears repeating: the browser used to be a novelty application among peers, but it's increasingly become the platform upon which those applications are built. It's become a mission critical process, much like the kernel code.
While I respect Schneier for his views, this is not one I share.
I've worked in the defence industry. The cost of mistakes is very high. In my case I designed communication systems. I have one in the field which was verified mathematically and no defect, vulnerability or bug has been found in 18 years despite counter attacks. This covers the hardware and software portions of the design.
As for my OS or document viewer, 5-8 years is enough time.
Are you suggesting that there is something uniquely vulnerable about web technologies? Problems occur at every level of the stack from the OS on the client up to the server software. The big advantage of that programming model is how different elements can be loosely coupled. You can fix a problem by swapping individual parts of the stack without changing the experience.
Actually I do, to the point that I have been appointed to a couple of security positions in the defence industry in the past.
Code is executable.
Data is not. There should be no level of Turing completeness.
Taking C as an example, loading a char* with data that contains code and jumping to it or letting it overwrite the code segment is precisely where it breaks down. The same is true when your json payload contains a function or your css contains an expression.
The main problem at the moment is that technologies freely interchange the two concepts. Code should be entirely immutable once compiled and data should not be executable.
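The JSON example above can be made concrete: a strict parser keeps data as data, whereas eval() lets a payload smuggle in code.

```python
# Sketch: strict parsing treats input purely as data; a "function in
# the payload" is simply rejected rather than executed.
import json

payload = '{"user": "alice", "admin": false}'
data = json.loads(payload)   # parsed purely as data
assert data["admin"] is False

evil = '__import__("os").getcwd()'   # "data" that is really code
try:
    json.loads(evil)                 # a strict parser rejects it outright
except json.JSONDecodeError:
    print("rejected: not valid data")

# eval(evil), by contrast, would execute it -- exactly the code/data
# confusion being described.
```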
But the pragmatic approach is to assume that data may become executable due to vulnerabilities and design around that fact. Assuming that you can control everything is fanciful and dangerous. Assuming that you control nothing, and everything can and will break is a much safer approach.
The pragmatic approach is to use a toolchain which prevents data being executable. Most high level languages without pointers and direct memory access (excluding dynamic languages) perform this feat quite well.
I discount the use of hacks like non-executable segments in certain CPU architectures (x86 LDT/GDT access-control bits) here, as they are an afterthought.
So you just have to force every piece of code on your computer to use that toolchain. "Dear Mr Zuckerberg, please make Facebook available in a form that allows me access without needing to rely on dynamic languages and uses a strictly defined toolchain (of my choosing)". Good luck with that.
Why? They make shit that works with huge investment when the rest of the world makes do with shit that doesn't work properly for very little investment.
I did state in another reply that office VBA suffers the same problems. PostScript as well as you state. All these have been notoriously problematic formats with respect to security.
I see that two other responses have taken the sarcastic route, and I was strongly tempted to join them. Instead, I'm going to control myself and respond to this as if it were serious.
The simple fact is that there is a high demand for interactive applications. One of the best ways to distribute these applications is the web using JavaScript. If, for some reason, this distribution channel were removed (let's say it was removed by law) the demand would still be there, and the 'older' channel still remains - native apps. If the average person uses, say, 20 webapps heavily and a few hundred glancingly, and let's say that 10% of these survive the transition (probably a high figure) that's still a good handful of new native apps, each of them with their own security issues.
It was serious apart from the fact that I know it would never happen and will see little support here due to this website's infatuation with JS.
Every website that I see using JS these days would work much better with some GET or POST requests. Some examples:
* Using JS to load the next page, for instance. That should be a simple request to the server for /page2/.
* Using JS to change sort orders. Again a simple request for a new page should be made to the server.
* Pages not loading a full list in one go but making several requests for more entries via JS. The server should just return one long page.
* Searches not being sent to the server, but done in the browser, with a new search for every character.
* The same searches above hijacking the back button so that I can go back one key from round -> roun -> rou -> ro ->r -> no search -> at last, back one page!
* Images being hidden behind a javascript link when it should be a simple hyperlink to the image.
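Most of the examples above reduce to ordinary URLs; a minimal sketch (the paths and parameter names are made up):

```python
# Sketch: pagination and sorting as plain GET requests. Every state is
# a URL, so the back button, bookmarks and middle-click work for free.
from urllib.parse import urlencode

def listing_url(page=1, sort="date"):
    return "/items?" + urlencode({"page": page, "sort": sort})

print(listing_url())              # /items?page=1&sort=date
print(listing_url(page=2))        # next page is an ordinary link
print(listing_url(sort="price"))  # re-sorting is a full request too
```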
As for every website writing their own native software, at least some that I can think of could be substituted for old ones. Think ftp (or better yet sftp) for uploading files.
This is a tempting option in so many situations, not just software. But it never works because you can never wipe the slate truly clean. The state of the world is dependent on the previous state of the world.
The problem is not the technology, it's human nature. There are a certain number of scumbags out there, who will lie, cheat, spy, eavesdrop, backstab, whatever it takes to get something for themselves. This has been going on since we've existed as a species and I see no reason to expect it to ever stop.
Anything you replace it with will have tons of vulnerabilities as well. Formal verification might help, but that's entirely orthogonal to the programming model.
Sure, it sucks to develop for. But it's not fundamentally impossible to make it secure and private.
I think only the Plan 9 people have got the idea so far.
The problem that has shot us as a race is that in the 1990s, technology became suddenly ubiquitous and whatever was lying around was glued together to fill a niche which took off before people had a chance to think about it and engineer something sound. An analogy perhaps:
As a biologist, I've got to point out that our entire existence is based on evolution gluing together whatever was around to fill a niche, without thinking about proper engineering. Which is why, to give just one example, the nerves in our eyes run in front of the light sensitive cells, necessitating a blind spot where the nerve leaves the eye.
More seriously, I think we as humans benefit much more from getting technological developments quickly, than we would by waiting years or decades for them to be soundly engineered first. I doubt we can even foresee all the possible problems until we start using things at a large scale.
Good point. A valid counterexample would be the well-thought-out textbook examples of Minix (and GNU Hurd) vs Linux.
Sometimes paralysis by analysis is a bigger problem than bugs and bad architecture. And often the alternative to "usable" isn't "perfect" but "never shipped".
And you're complaining about crappy browser security and you're then supporting IE6? Why support such an outdated browser when you know it has numerous well documented and unpatched vulnerabilities?
Yes, yes, I know, the powers that be need this support for some inexplicable reason even though worldwide IE6 usage is <1% etc etc. Whatever, it's just not worth the hassle. Show them a message and tell them to upgrade or use an alternate browser. I refuse to take on any work (I was freelance until recently) which requires IE6 support, and it's a specific question I ask at interview time.
Those statistics are meaningless when a big chunk of the world is a darknet. Our client still has 3,000 Windows 2000/IE6 workstations. None of these ever see the world wide web.
The numbers are entirely irrelevant. 28% of our client base is on ie6.
My point is that it shouldn't be an issue to start with.
If you redesigned a whole new ecosystem from scratch, you will still have IE6 users who don't support it so you'll have to provide a "fallback mode" anyway.
Just to point out how ridiculous this is: you can get mouse position information from any event (fired programmatically using fireEvent or otherwise). You can even get it from the "onbounce" event on <marquee> elements, for goodness’ sake.
Kind of ironic considering Windows 8 visual/swipe password feature. Which, in general, is quite novel and interesting, albeit not very secure for various other reasons.
It cannot. The log-on gesture is made on a completely different desktop under a completely different user account. If that worked then Microsoft would have a much more severe problem to fix.
This is so low risk, why even bother posting it?
Zero days come out every month or two with far better attack vectors. Criminals are not going to waste their time with this rubbish.
Who are these companies?