The Line of Death (textslashplain.com)
566 points by bpierre on Jan 14, 2017 | 105 comments



I think the real issue is that everybody cares about usability but nobody actually cares about the users. Browsers, web apps, etc. try hard to make it easy to browse the web, but they don't try very hard to make it clear exactly what you're doing and what the risks are - in fact, everyone tries rather hard to downplay the risks and to hide how things actually work. How many users understand "the line of death", or the basic fact that different pixels are drawn by different programs, not to mention URL structure, or (gasp) Unicode and how it fits there, or how rnicrosoft.com isn't what they're looking for? What makes them understand this? Nothing. Software vendors are very happy with uninformed users (in fact these are the best users because they don't realize which of your programs and services can be replaced with an alternative and how), and users are very happy to stay uninformed, too.

Not saying I know how to fix this, just that in my experience non-tech people are so completely unaware of what's going on that this "line of death" thing is not even a thing for them.


To fix this the answer is to educate the users, and also oppose this style of UI that makes things opaque and hard to comprehend. (Maybe when users are better educated they will automatically find the problems with such UI and thus further oppose it.) Incidentally, if users customised their environments more, they would be far less likely to be fooled by fake dialogs and such, because they would look very obviously different. With the setup I have, it's almost hilarious to see all the adverts with fake dialogs and buttons that look nothing like the real ones on my system; the font, the colours, the controls, everything stands out as being different.


>To fix this the answer is to educate the users

That's like saying the fix for road deaths is to educate drivers. We already proved that doesn't work nearly as well as designing better cars, roads, and signs.

Educating 7 billion human beings is a lot of work. A mind-bogglingly insane amount of work. Security is even worse because one lapse in judgement, one sick or busy day, can completely erase a lifetime of following security best-practices.

The power of software is the power to make machines do the work. Why would we want to make 7 billion people take responsibility for a problem that could be solved by a few thousand programmers?


> Why would we want to make 7 billion people take responsibility for a problem that could be solved by a few thousand programmers?

Precisely so that those 7 billion people will not be at the mercy of the few thousand programmers (and those whose interests they represent). Creating more walled gardens --- which is effectively what you seem to be proposing --- only gives those in control even more power.


What's the alternative?

Building software is a collective process that often depends on experts. Nowhere is this more clear than in cryptography, which everything else depends on. With hardware it's even worse.

We need to get everything reviewed and decided on in the open, so that independent experts can verify it. But getting rid of experts - forget it, that's a cabin in the woods scenario that doesn't scale.

Of course we can always do more to educate more people, but that's a way of increasing the number of local experts. You're still going to be surrounded by lots of non-programmers who can easily make naive mistakes.


>>To fix this the answer is to educate the users

I used to believe this too. Over the past several years though, I've worked with many, many users from all walks of life, and I've come to realize the hard truth: the users simply don't give a shit. They already use a billion other apps in their day-to-day work and don't have time to spend watching training videos or reading documentation. So if they have to be educated to be able to use your app properly, you've lost.

This isn't to say that all users are like that. There are definitely ones who are tech-savvy and competent and curious. But the overwhelming majority want nothing other than to click a few buttons and get the results they want from the app. Anything more complex will cause them to give up and move on.


Another way to think about the same thing.

The user is the boss of the software. The user is not a peripheral that I get to reprogram. I must accept the user as is, and adapt as best as possible to their actual capabilities.


Counteranalogy: If you have a shitty boss, it makes sense to interpret their directions in the way that is most convenient.

If users wish to abdicate their positions as bosses, then they clearly are not to be considered as having valid authority.


Back when it was still possible (Windows XP? Maybe Windows 7?) I would always turn off all the modern Windows chrome and animation and make it look like Windows 2000. Made for better responsiveness and as you say, you could spot fake dialog boxes in an instant.

Not a Windows user these days but I understand it's not possible to get the old Windows 2000 look anymore, though I'm sure you can still change color themes and appearance to some extent.


You can still do this as late as Win7. I think this was eliminated in Win8, but I've not used it myself.

If you do use Aero, you can change the window chrome colour to something custom, which should catch out sites trying to fake Windows. I don't think browsers provide any way to get the window chrome colour, though come to think of it, I'm fairly sure IE does/used to expose system colour names in CSS, so it might not be impossible... (If you recall the Win2000-era colour customization window, which IIRC you can still access in current versions of Windows, just hidden away, you can set things like the default window background colour, which traditionally in the Win2000 era was grey. So I guess the idea is that websites could use these system colours, so that if you changed them the pages would stay consistent with the OS. Of course nobody really does this.)


Windows 10 lets you pick a "custom accent color," and it allows you to make a few other custom tweaks as well (should the titlebar be white or colored?).

Since this is part of the initial setup wizard, I think it would be pretty hard to fake a Windows 10 dialog from inside a web browser.


That problem is easily solved. In your malware, simply use the default settings for all of those things, and you will catch the 97% of users who never customize any of it.

You can afford to lose the remaining 3%.


It's also a bit self-selecting - there's a lower chance that those who change the defaults will fall for these attacks anyway.


The title bar thing, yes.

The accent color? If you buy a new Windows 10 machine off the shelf, after you enter your name, it asks what your favorite color is. I think blue is highlighted when that screen comes up, but you can't miss the opportunity, so 3% is a bit low for an estimate of how many people will change it.


Favourite colours don't follow a uniform distribution. IIRC with blue and green you already have two thirds of the population.


A browser ad that imitates a windows dialog can only guess one color per impression. Forcing Mallory to settle for 1/3 of the otherwise-vulnerable population is a definite improvement.


CSS System Colors support[1] in Windows/IE makes this harder to stop than you may realize. I don't see it very much anymore, but when it first came out, I saw quite a bit of this.

[1]: http://www.w3.org/TR/css3-color/#css2-system
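
A rough sketch of the kind of thing being described (plain DOM scripting; the dialog text is made up, and modern browsers mostly resolve these keywords to fixed values rather than your actual theme):

    // Hypothetical sketch: a fake "dialog" styled with CSS2 system color keywords
    // (ButtonFace, ButtonText, ...), which IE resolved to the user's actual theme,
    // so a spoofed dialog could match even a customized colour scheme.
    const fake = document.createElement("div");
    fake.textContent = "Windows has detected a problem with your computer.";
    fake.style.backgroundColor = "ButtonFace";   // background of real buttons/dialogs
    fake.style.color = "ButtonText";             // text colour of real controls
    fake.style.border = "2px outset ButtonFace"; // classic raised 3D border
    fake.style.font = "menu";                    // CSS2 system font shorthand
    document.body.appendChild(fake);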


I'd say Windows 10 almost brings things back to the Windows 2000 era in this respect. Changing the wallpaper changes the accent colour (or you can pick one separately), and you can choose whether titlebars have that accent.

A fake dialog won't know that my titlebars and buttons should be teal.


> To fix this the answer is to educate the users.

Impossible.

The way to improve is to restrict the amount of damage that can be done by spoofed websites and have multiple safe-guards in place.

People cannot be expected to learn these technicalities. If they get a phone call, on the other hand, and are asked whether they really intended to make a large purchase that raised a flag somewhere, they will know what to say.


Here's a test: find a six-year-old and get them to look for stuff on the web. See how confused they are when the search box jumps to the URL bar. See how they react to the clickbait and ads that get in the way of every interaction they have. See how they struggle with dozens of people's ideas about page layouts and responsiveness.

Now, you will say: kids are good at using tech, look, they are on x, y & z. They know how to do things I don't! But the thing is that they are using services learned one at a time - not the access to the Library of Babel that we all hoped for!

I agree with your post up to the end. The "very happy to stay uninformed, too" part is like "the Soviet people love communism". Users don't know what is out there that they could get. Especially, users don't know what could be made and put out there. This is because tech people don't actually give a toss about HCI and information retrieval, just selling ads and raising rounds of funding.

Don't be evil.


I believe that "kids these days are good with technology" is a phrase uttered by older people who are actually seeing the kids' willingness to experiment without fear, and assume that it must mean proficiency.

Not quite the same thing in the end, though.


> Not quite the same thing in the end, though.

For all intents and purposes, it is. I do not know a single person who I would term "proficient with technology" who is afraid of experimenting or overly concerned about making mistakes when using something new.

In my experience, people who are "good with technology" are always curious hackers and tinkerers who are unafraid of breaking things, and are never satisfied with simply using the tool: they must look under the hood and they must understand how it works.

I would say the same principle applies far beyond computer technology: math, engineering, physics, music, etc...


My sister gave me my favorite definition of "computer literate": Someone that isn't afraid of the computer.

The worst thing that's ever happened to me because of my lack of fear was that I fried the motherboard on a friend's Mac back in the '90s by plugging a parallel printer into a SCSI port. Oops! Fortunately it was under warranty.


> For all intents and purposes, it is.

The vast majority of them max out their skillset with being a consumer (e.g. social media apps) and install a bit of malware along the way.


One leads to the other; if said older people were less afraid to experiment, they'd learn things just as quickly and they too would become proficient.


While this is basically true, there is also the difficulty of teaching the less tech-savvy where it's safe to experiment and where it isn't. (And this is where things like "The Line of Death" come into play.)

Anecdotal example: on one hand, I sometimes get almost mad when I try to help my parents with computer problems (especially over the phone) because they haven't even bothered to randomly try somewhat problem-related buttons or options.[0] They just get paralyzed (by the fear that they could break something) because something happens that isn't routine, even if the solution could be found within minutes even by non-tech people.

On the other hand, I'm glad about the same behaviour when they call me to ask whether some mail or website is legitimate. About half of the calls are false-positive scam suspicions, but I'm happy to look into them, as long as it helps them avoid getting scammed.

[0] https://xkcd.com/627/


The problem is that 'security' often comes at a direct cost to 'usability'. Now of course I know that something can't be all that usable if the users are being hacked all the time, but on the simplest level it remains true.

Take MFA for example - an absolute 'usability' nightmare! It makes your product many, many times harder to use and fails horribly under fairly common use cases (like that time I had to mail a certified copy of my ID overseas to AWS to regain access to our root account).


"I think the real issue here is everybody cares about usability but nobody actually cares about users..."

Could this also apply to the WhatsApp "backdoor" thread from yesterday?


All these browsers have that darn Modal dialog alert that lets the code below the line of death trap you on their site. Definitely a disconnect with the plight of their users.


Safari on both iOS and Mac has changed the modal dialog alert that sites can trigger to be visually distinct from OS dialogs, which is a small but welcome change. http://imgur.com/a/wRElN

Also worth noting that in Safari the modals don't block the rest of the browser (I believe at least Firefox does this as well?) and are fully contained within their own tab.


The latest variation on this I've seen is sites which repeatedly force Chrome to switch to full-screen view, thus preventing you from navigating away from them. Apparently they've found a way to override Chrome's anti-abuse mechanisms.


After the first pop-up Chrome and Firefox let you say "block all the dialogs from this site from now on."


I have not seen that consistently, maybe there is a bug. Also the trick works for basic auth and I don't think they have addressed that yet?


I believe the commonest circumvention is to redirect to another URL and back, thereby making this a "new" page visit and resetting that checkbox.


> in fact, everyone tries rather hard to downplay the risks and to hide how things actually work.

And why is that?

It's because doing the opposite gets in the way of maximizing your ROI (return on investment).

Put simply, it reduces the company's (investor's/owner's) money. It's always about the money.


One of the best UIs I've seen crossing over this line of death was the HTTP Basic Auth popdown in Opera 12. I've always wondered why that UI concept was never taken up by other browsers.

... will try and find a screenshot

Edit: Couldn't find one so just installed it myself: https://pageshot.net/images/4af15a26-6eb8-45a2-b4d5-ed6ea19a...

Edit2: dom0 beat me to it below also

Edit3: reword


Both Chrome and Firefox are under the line of death too. Although for Firefox it is a bit trickier to replicate as it uses native components whereas Chrome uses its internal UI kit.


Sorry, perhaps my comment was unclear. I meant "breaking" in a positive way (hence "best") - the UI crossed the line in a very significant and impossible-not-to-notice way.

Still searching for a screenshot unfortunately... may just re-install it and take one myself.



I still disagree with both lucideer's original and improved wording, but I agree with their message, which praises Opera's basic auth UI as making it clear with the borders and 3D foreground overlay effect that it's a part of the browser-produced "trusted zone", and not the pool of untrusted content behind.

Moreover, these kinds of UIs are still possible with the 'flat' look that's in vogue today, so there's little excuse why others choose not to do it. Perhaps one reason is that basic auth lost out early on to site-supplied login forms, so people got used to entering usernames and passwords into the page content anyway, instead of the browser UI.

For the most part, basic auth only tends to affect uses like intranet sites, router login pages, web services, remote management pages -- settings where phishing can still cause (serious) damage, so a harder-to-fake UI would be beneficial nonetheless.


On my wording, apologies (edited). I was thinking of "breaking" in terms of "breaking/crossing a line/barrier one does not typically cross". Probably not the best wording in retrospect.

On the comments about basic auth losing out, you might be right that that's a reason, but HTML5 APIs requiring some kind of UI confirmation from the user (like HTTP basic auth does) are far more prevalent than they once were (see http://permission.site/ for many examples), so I don't think that excuse is really good enough for browser vendors.

Incidentally, it's worth noting that the UI for this kind of thing (user confirmation prompts) is pretty much a solved problem on mobile: these prompts tend to use the OS notification API, so always appear outside the browser chrome entirely.


Desktop platforms like OS X, most Linux desktop environments, and newer versions of Windows have similarly allowed applications to hook into the OS' own notification mechanism for a while, but for some reason this model never caught on. One can argue that it's much clunkier on these platforms than on mobile, but the capability is now there.

Unfortunately, with HTML5 notifications, the ship has probably sailed on this, and it went from being an intriguing idea to a bad one, as it's now mainstream for individual websites to generate OS-level notifications. This fills a previously privileged pool of messages with untrusted content.


> one reason is that basic auth lost out early on to site-supplied login forms, so people got used to entering usernames and passwords into the page content anyway, instead of the browser UI

To be fair, basic auth is not particularly user-friendly. If you want to add anything else to the login form, such as a "Remember Me" checkbox or a captcha, you can't put that in the browser chrome; you need to add an additional step to the login flow. If you want a "Did you forget your password? Click here to reset it." error message, the user has to cancel out of the auth dialog in frustration before they can see it (or you have to redirect them to an error page after failed auth, with another link/button to Try Logging In Again).

And even if you solve those problems, the user still needs to enter their username/password when creating their account, and I have never seen Basic Auth used for this scenario.


Basic auth is appropriate mostly for non-Internet or single-user applications. It is, e.g., commonly used as a simple, no-markup-required input where actual authentication is delegated to LDAP/AD. In these instances the application can neither change the password nor create users anyway.


dom0's comment showed what you meant. That's a much better auth dialog.


It's actually subtly above the line of death in Chrome: https://i.imgur.com/dEootju.png

And Firefox greys out some browser chrome: https://i.imgur.com/JfJ57qA.gif

But these things are probably not going to be noticed by your average user...


It's a nice design. But in this particular case, does it matter? What's the threat, that the credentials you enter into a fake basic auth box could be sent to the server? The credentials you enter into a real basic auth box are just sent to the server -- basic auth doesn't do any password hashing clientside. Why would a phishing site (that's pretending to be another site that uses basic auth) fake a basic auth box when they could just send the right header and get a real one?

What am I missing?
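
(For reference, "just send the right header" is roughly this much work -- a minimal sketch using Node's http module in TypeScript, with a made-up realm string. One 401 response with a WWW-Authenticate header pops the browser's real dialog, and the credentials come back merely base64-encoded:)

    import * as http from "http";

    http.createServer((req, res) => {
      const auth = req.headers.authorization;
      if (!auth) {
        // This one header is all it takes to trigger the browser's real basic auth box.
        res.writeHead(401, { "WWW-Authenticate": 'Basic realm="Example Router"' });
        res.end("Authentication required");
        return;
      }
      // "Basic dXNlcjpwYXNz" -> "user:pass" -- no hashing, just base64.
      const decoded = Buffer.from(auth.split(" ")[1], "base64").toString();
      res.end(`Hello ${decoded.split(":")[0]}`); // the server now holds the plaintext password
    }).listen(8080);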


Does the real auth box include the domain that's asking?


Yes


Speaking of zones of death, I was recently the (unsuccessful) target of a credit card gathering scam—on a Twitter ad, pretending to be Twitter.

https://twitter.com/bcjordan/status/819894043870105602

Multiple users actually entered their CC #s; two canceled them after they saw my reply to the tweet warning users.

Incredibly, Twitter has still not notified the scammed users about it despite removing the ad after my report and multiple tweets to support requesting they notify the affected users.

The fact that they allowed that ad to get through (essentially profiting from users' identities and financial information being leaked?!) is just unbelievable, separate from their failure to protect/notify the users affected by the scam.


I don't know why you'd need to scam people on Twitter. There are plenty of people who just post photos of their cards:

https://twitter.com/needadebitcard


I was pleasantly surprised when my grandmother showed me her new chip credit card and it didn't have the numbers stamped on the front. They're instead printed on the back. I suspect the bank was getting sick of issuing new numbers after people inadvertently posted their own card numbers online.


That is excellent. My colleagues in ecommerce will have kittens over that. Sometimes we wonder if such and such country or such and such bank uses chip and pin. We can just test our online checkout to find out!!!


That's just depressing


That's actually quite well done


that is bonkers.


From a few weeks ago:

https://twitter.com/tomscott/status/812265182646927361

This is a neat blog post that goes to show the extent of faking that can be done in the browser. More discussion of this will hopefully lead to better "security UI", as the author puts it.


I have my browser default to non-standard zoom (150%). Assuming the fake attachments use a jpg instead of an SVG, they would look different. Not to mention miss the on-hover CSS. I wonder if I would notice it or not; something would probably feel off but not enough to fully register.


That was one of the most clever phishing attacks I've ever seen. I'm quite sure that if I wasn't aware of such attacks, I'd fall for it myself.


The comment made about domain names not being trustworthy is why I like EV-certificates.

Some people (notably Google) argue that EV-certificates add very little value because the user can just as easily check the domain name. Thing is, I could probably get some paypal- or google-like domain. Even if I can't, I could use a data URI as in [1] to put https://google.com in the address bar.

Compare that to EV. To get a browser to display google (or googel, or paypal), I'd need to convince a CA to issue such an EV cert. Whilst that might not be impossible, it takes something close to a state-level actor. A lot of phishers operate below that level.

What EV gives over the domain name is a fully CA controlled part of the UI. Whilst the address bar is the 'zone of death by phisher' the EV bar is the 'zone of death by CIA/KGB'.

[1] https://www.wordfence.com/blog/2017/01/gmail-phishing-data-u...


An entirely different but similar issue is logs. If you aggregate logs in a simple, unstructured text file, then it becomes pretty easy to add faked log lines, or, if they're viewed plainly in a terminal, to embed VT control characters in log lines that can hide other log lines. And with creative use of Unicode one can also often confuse readers.


I saw an example of that with shell scripts you're supposed to run with 'curl http://example.com/script.sh | sh' or something, where if you pipe it to cat instead, it looks harmless enough, because it contains control characters that erase the dangerous parts. So you have to download the script and load it up in an editor before you can see what it actually does.

...which of course is so much work that no one does it.


Opening it in a text editor is not sufficient. With clever use of 'sleep' you can even have the server return a malicious payload only if it thinks it's getting immediately piped to sh[0].

If you're opening it in a browser to check, you've also got to worry that the server may be looking at curl's user agent to decide whether to serve up a malicious payload[1].

[0] https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b... [1] https://jordaneldredge.com/blog/one-way-curl-pipe-sh-install...
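
The user-agent variant needs almost nothing server-side; a rough sketch in Node/TypeScript of the kind of switch described in [1] (payload strings made up):

    import * as http from "http";

    // Serve a harmless script to anything that looks like a browser,
    // and a different one to curl/wget.
    http.createServer((req, res) => {
      const ua = req.headers["user-agent"] ?? "";
      res.writeHead(200, { "Content-Type": "text/plain" });
      if (/curl|wget/i.test(ua)) {
        res.end('echo "installing..."\n# ...payload only curl ever sees...\n');
      } else {
        res.end('echo "installing..."\n');
      }
    }).listen(8080);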


Or open it in an actual text editor, like I do anyway to get syntax highlighting.


You can't embed control characters in URLs, though. Of course some web servers and web applications may blithely accept illegal unescaped control characters in URLs. If so, they're buggy.


    # attacker-controlled `input` is concatenated into the log line unescaped
    if invalid(input):
      log.warning('UberForm field wrong: ' + input)
(And many subtle variations thereof)
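
One way to close that hole is to escape control characters before the attacker-controlled value ever reaches the log -- a minimal sketch (TypeScript here, with a contrived field value):

    // Replace control characters with visible escapes so attacker input can't
    // forge extra log records or smuggle terminal escape sequences.
    function sanitizeForLog(value: string): string {
      return value.replace(/[\u0000-\u001f\u007f]/g, (c) =>
        "\\x" + c.charCodeAt(0).toString(16).padStart(2, "0")
      );
    }

    // The embedded "\n" would otherwise start a convincing fake log line,
    // and "\x1b[2K" could erase lines when the log is viewed in a terminal.
    const field = "bob\n2017-01-14 12:00:00 INFO admin login ok\u001b[2K";
    console.warn("UberForm field wrong: " + sanitizeForLog(field));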


In my experience the ordinary user is no longer able to distinguish between a native binary running on their OS (example: the Windows Update feature of the Win10 control panel) and data presented inside a browser window. Witness the number of people who have fallen for the BSOD scam sites and given away their CC info.

https://www.google.com/search?q=bsod+scam+site&num=100&prmd=...

Even when people see a scam where the UI obviously doesn't match the OS, ignorant non-technical people fall for it. People get those scam sites in Safari on OS X and still click and pay.


I've never heard the term "line of death" used to describe this before, but this concept is exactly why I've sadly convinced myself that fully chromeless browsers are a bad idea. Unless there were some sort of spoof-proof hardware indicator that a given UI element was being displayed by the browser, I suppose... but that sort of defeats the purpose.


How about, in the same way that some banks allow you to upload an image that they then show you to prove you're talking to the same party you've talked to previously, a chromeless browser could ask you to "draw" or supply an image for trusted actions. This "badge of trust" could then be shown inside any dialog that asks the user to perform an action that you would not want a website to fake. It would of course be important that untrusted website content is never able to read these pixels.

This is essentially inverting the relationship, by inserting a "zone of life" inside the "zone of death".


What's a chromeless browser?



This is an interesting case where security and design are in direct collision. From a security perspective you'd want the demarcation between the application itself and the untrusted content area to be as clear and obvious as possible, which would mean drawing big borders between them so thick nobody could possibly miss them. But contemporary design is all about being "clean," part of which involves making borders razor thin and so lightly colored you can barely see them.

The example given of Chrome's line-of-death-crossing chevron is a good illustration of this tension. Say what you want to about it security-wise, but you can't say it's not clean!


I don't think I really agree with this. Making good visual design, or let's say in this case "clean" visual design, work is up to the designer.

Take, for example, the Opera 12 UI I mentioned in comments above[0]: while I don't think Opera's overall UI design in 2011 was particularly visually pleasing, and it was certainly not as clean as Chrome's today, if you consider the UI pattern in isolation there's nothing preventing it from being done cleanly. Facebook uses the same UI pattern today for the active state of its status input.

That's just one obvious example - I'm not suggesting it's the only one. Google's Material Design guidelines advocate a lot of context-crossing - the canonical example being the "Floating action button" attaching to sheets[1]

[0] https://news.ycombinator.com/item?id=13400645

[1] https://material.io/guidelines/components/buttons-floating-a...


Occurs to me that counteracting this problem might be one of the strengths of 3D UIs, such as we might have to look forward to in AR systems, or with display advances. Untrusted content can literally be loaded up and restricted to exist only 'inside' a chrome box, making its provenance clear.

Of course, chances are that we'll get carried away with the possibilities of allowing AR web browsers to create arbitrary objects in the AR space long before we realize what a terrible idea that is...


Something else to add: a data URI can do this: https://twitter.com/tomscott/status/812268998742118400


I have found a much worse case in mobile apps. I do not have the Facebook app installed, so with some apps, when I try to auth through Facebook in-app, I get a prompt to log into Facebook, which could be totally fake.


This is a truly dark pattern. https://textplain.files.wordpress.com/2017/01/image38.png?w=... Faking browser popups is evil, and unexpected for a high-profile site such as Tom's Hardware.


What exactly does it accomplish, though, in this instance? Since it's not real it can't actually authorize anything the site can't already do. It's a dubious approach, but it seems more misguided than actively malicious.


If they say no to the real thing, the site can't ask again. But of course they can present the fake one on every page as many times as they want until the user acquiesces. So once they click "yes" on the mini dialog, the page opens the real permissions dialog, because you can be much more certain that they'll allow it.

Good TC article on the practice: https://techcrunch.com/2014/04/04/the-right-way-to-ask-users...
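
A sketch of that flow, with the Notification API as the example permission (showSoftPrompt stands in for a hypothetical site-supplied in-page dialog):

    // Burn the browser's one-shot real prompt only after the user has already
    // said yes to a site-controlled, endlessly repeatable soft prompt.
    declare function showSoftPrompt(message: string): Promise<boolean>;

    async function askForNotifications(): Promise<void> {
      if (Notification.permission !== "default") return; // already decided for real

      // Step 1: the in-page ask. A "no" here costs the site nothing;
      // it can simply ask again on the next page view.
      const softYes = await showSoftPrompt("Turn on notifications for breaking news?");
      if (!softYes) return;

      // Step 2: only now show the real permission dialog, when "Allow" is likely.
      await Notification.requestPermission();
    }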


On iOS I think it is correct to first ask in-app and then trigger the real thing if the user agrees. In an app there is context making it clear that the app usually needs the permission, so it makes sense, and I feel it's nicer to ask me first inside the app (and usually explain why they need it at the same time) than to take me straight to the real thing without warning, because that puts me outside of the app. On websites it might have been OK if they asked in a proper way, but presenting something that is pretending to be part of the browser UI when it isn't -- that is, as the parent commenter said, a dark pattern.


Phishing, mainly.


Put the tab strip below the URL bar. This used to be the case in the past.

That'll leave plenty of room to have things drop down from the URL bar without much ambiguity.


I was playing with picture-in-picture attacks on Chrome some time ago and even proposed a way for mitigation, but it was dismissed.

Here's the PoC I did: https://www.youtube.com/watch?v=0oega6C5SF0

And the mitigation I proposed was from this: http://i.imgur.com/8m6UdiC.png

To this: http://i.imgur.com/turRAdc.png


> Unfortunately, on windowed operating systems, this is worse than it sounds, because it creates the possibility of picture-in-picture attacks, where an entire browser window, including its trusted pixels, can be faked [...]

They retorted “Well, we passed this screenshot around our entire information security department, and nobody could tell it’s a picture-in-picture attack. Can you?”

Maybe I'm naive, but shouldn't you be able to detect picture-in-picture attacks rather easily because you never opened that window in the first place?

Additionally, the "chrome" of the picture-in-picture would behave significantly differently from the real chrome.

I feel both of those points can't be assessed by showing people a screenshot, because people have significantly different expectations when looking at a screenshot of a website than when browsing a website by themselves.


A few websites I know will pop up a separate window for entering credit card credentials.


I can't thank this article enough for raising a stupidly simple yet very generic problem.


I thought Microsoft or somebody implemented a prototype for secure pop-ups where the windows had animated borders containing personal user information as a marquee? The idea was that this would be extremely difficult to fake (even if it looked a bit weird).


How about we put a distinctive icon in the trusted zones, which the renderer won't allow under any circumstances in the untrusted area.

Also with a buffer so nothing too similar is allowed, or perhaps a warning comes up if something is close.

I vote for something like the chrome dinosaur.


> How about we put a distinctive icon in the trusted zones, which the renderer won't allow under any circumstances in the untrusted area.

So in the blogpost announcing this distinctive icon, what does the screenshot look like?


Isn't this essentially the lock icon, today?

What is "trusted"? We get the lock icon if a valid TLS connection is formed; if you want a more secure connection, you can get EV certificates. We could do away with the lock icon and only show a broken lock if not on TLS, and only show something that looks secure on EV certs (which seems to be where browsers are headed).

A simple valid TLS connection getting the lock icon is problematic when people are using DNS names that are close-but-not-quite to things like paypal.com. And we want TLS certs to be issued automagically à la Let's Encrypt and such, so it's easy, unfortunately, to get a cert for paypal-not-quite.com. Such is the difference between "a secure connection" and "a secure connection to a party you trust."


Pretty much - except the rendering engine won't allow that block of pixels to hit the framebuffer.


Can someone explain the risk of something like Mac OS Mail asking for your gmail password? There's no address bar so I've wondered if I can really trust that I'm not handing my password to a MITM.


Google usually suggests that users create app-specific passwords for anything that requires you to enter your Google credentials inside another app. If we follow this religiously, then the risk will be quite low.


Not an expert, so perhaps this isn't the only way. But I could envisage a malicious DNS server (such as in a coffee shop or airport, perhaps even an AP run by a malicious user inside Starbucks called "Starbucks RLY SRSLY") serving up mail.gmail.com A records that point to their own server, which listens on IMAP ports and logs usernames and passwords. For this to work, the application in question (such as Mail.app) would have to do a poor job of certificate trust verification. I am not familiar with how that works on OS X, but I'm assuming that it'd be somewhat hard (although not impossible) to obtain an SSL certificate for mail.gmail.com that a usual OS X installation would accept. Personally I tend to prefer the TOFU (trust on first use) way of doing things, after having connected on a (relatively) trusted internet connection, e.g., at home. If someone is more knowledgeable, feel free to weigh in.


Why can't browsers do image differencing to detect when the page contains something pretending to be the browser or OS chrome, and plaster warnings overtop?


Surely it would have an unacceptable performance impact. Probably you'd need to run the matching in the GPU to get anything remotely useful, and you would literally kill battery life.


It would be helpful if this post included mentions or links to any best practices to help mitigate this. Does anyone have any they would like to share?


Here's one: don't use the default window manager theme. This is much easier on Linux and *BSD than in Windows or MacOS.

I've seen lots of picture-in-picture attacks. They usually simulate Windows title bars and controls. Hah. I once saw one on a Mac which adapted to the OS and tried to show a Mac window frame, but it was an outdated version.

That brings me to another point: send an incorrect User-Agent. Same browser on a different OS, perhaps.


It used to be pretty easy to customise appearance on Windows, but the latest versions seem to have mostly castrated that functionality.


Would you say that castrating window decorations was a misguided attempt by Microsoft to make Windows more like eunuchs?

(I'll let myself out)


With HTML5 pushState navigation, can we really trust the address bar? Or is the domain part protected from that?


The domain part is protected from that, though as shown in grenoire's link there are other reasons to distrust the address bar.
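
A quick illustration of that protection, as a sketch you could run from a page's console:

    // pushState can rewrite the path without any navigation...
    history.pushState({}, "", "/account/login?step=2");

    // ...but not the origin: a cross-origin URL throws a SecurityError,
    // so the domain part of the address bar stays honest.
    try {
      history.pushState({}, "", "https://attacker.example/");
    } catch (e) {
      console.log("blocked:", e); // DOMException: SecurityError
    }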


Why is there a line of death to begin with?

The picture-in-picture attacks seem serious enough to warrant a new kind of browser.


How could we eliminate the line of death? There has to be a part of the page where we hand control to the site, otherwise there wouldn't be any content, and the line of death is just the boundary of that.


Oh my grandma what big teeth you have!





