Not saying I know how to fix this, just that in my experience non-tech people are so completely unaware of what's going on that this "line of death" thing is not even a thing for them.
That's like saying the fix for road deaths is to educate drivers. We already proved that doesn't work nearly as well as designing better cars, roads, and signs.
Educating 7 billion human beings is a lot of work. A mind-bogglingly insane amount of work. Security is even worse because one lapse in judgement, one sick or busy day, can completely erase a lifetime of following security best-practices.
The power of software is the power to make machines do the work. Why would we want to make 7 billion people take responsibility for a problem that could be solved by a few thousand programmers?
Precisely so that those 7 billion people will not be at the mercy of the few thousand programmers (and those whose interests they represent). Creating more walled gardens --- which is effectively what you seem to be proposing --- only gives those in control even more power.
Building software is a collective process that often depends on experts. Nowhere is this more clear than in cryptography, which everything else depends on. With hardware it's even worse.
We need to get everything reviewed and decided on in the open, so that independent experts can verify it. But getting rid of experts - forget it, that's a cabin in the woods scenario that doesn't scale.
Of course we can always do more to educate more people, but that's a way of increasing the number of local experts. You're still going to be surrounded by lots of non-programmers who can easily make naive mistakes.
I used to believe this too. Over the past several years though, I've worked with many, many users from all walks of life, and I've come to realize the hard truth: the users simply don't give a shit. They already use a billion other apps in their day-to-day work and don't have time to spend watching training videos or reading documentation. So if they have to be educated to be able to use your app properly, you've lost.
This isn't to say that all users are like that. There are definitely ones who are tech-savvy and competent and curious. But the overwhelming majority want nothing other than to click a few buttons and get the results they want from the app. Anything more complex will cause them to give up and move on.
The user is the boss of the software. The user is not a peripheral that I get to reprogram. I must accept the user as is, and adapt as best as possible to their actual capabilities.
If users wish to abdicate their positions as bosses, then they clearly cannot be considered to hold valid authority.
Not a Windows user these days but I understand it's not possible to get the old Windows 2000 look anymore, though I'm sure you can still change color themes and appearance to some extent.
If you do use Aero, you can change the window chrome colour to something custom, which should catch out sites trying to fake Windows. I don't think browsers provide any way to read the window chrome colour, though come to think of it, I'm fairly sure IE does (or used to) expose system colour names in CSS, so it might not be impossible... (If you recall the Win2000-era colour customization window, which IIRC is still accessible in current versions of Windows, just hidden, you can set things like the default window background colour, which in the Win2000 era was traditionally grey. So I guess the idea was that websites could use these system colours if you changed them, staying consistent with the OS. Of course nobody really does this.)
Since this is part of the initial setup wizard, I think it would be pretty hard to fake a Windows 10 dialog from inside a web browser.
You can afford to lose the remaining 3%.
The accent color? If you buy a new Windows 10 machine off the shelf, after you enter your name, it asks what your favorite color is. I think blue is highlighted when that screen comes up, but you can't miss the prompt, so 3% seems a bit low as an estimate of how many people will change it.
A fake dialog won't know that my titlebars and buttons should be teal.
The way to improve is to restrict the amount of damage that can be done by spoofed websites and have multiple safe-guards in place.
People cannot be expected to learn these technicalities.
If they get a phone call, on the other hand, and are asked whether they really intended to make a large purchase that raised a flag somewhere, they will know what to say.
Now, you will say: kids are good at using tech - look, they're on X, Y & Z. They know how to do things I don't! But the thing is, they're using services learned one at a time - not the access to the Library of Babel that we all hoped for!
I agree with your post up to the end. The "very happy to stay uninformed too" bit is like saying "the Soviet people love communism". Users don't know what's out there that they could get. Especially, users don't know what could be made and put out there. This is because tech people don't actually give a toss about HCI and information retrieval, just selling ads and raising rounds of funding.
Don't be evil.
Not quite the same thing in the end, though.
For all intents and purposes, it is. I do not know a single person who I would term "proficient with technology" who is afraid of experimenting or overly concerned about making mistakes when using something new.
In my experience, people who are "good with technology" are always curious hackers and tinkerers who are unafraid of breaking things, and are never satisfied with simply using the tool: they must look under the hood and they must understand how it works.
I would say the same principle applies far beyond computer technology: math, engineering, physics, music, etc...
The worst thing that's ever happened to me because of my lack of fear was frying the motherboard on a friend's Mac back in the '90s by plugging a parallel printer into a SCSI port. Oops! Fortunately it was under warranty.
The vast majority of them max out their skillset with being a consumer (e.g. social media apps) and install a bit of malware along the way.
Anecdotal example: on the one hand, I sometimes get almost mad when I try to help my parents with computer problems (especially via phone) because they didn't even bother randomly trying somewhat problem-related buttons or options. They just get paralyzed (by the fear that they could break something) when something happens that isn't routine, even if the solution could be found within minutes even by non-tech people.
On the other hand, I'm glad about the same behaviour when they call me to ask whether some mail or website is legitimate. About half of the calls are false-positive scam suspicions, but I'm happy to look into them, as long as it helps them avoid getting scammed.
Take MFA for example - an absolute 'usability' nightmare! It makes your product many times harder to use and fails horribly under fairly common use cases (like the time I had to mail a certified copy of my ID overseas to AWS to regain access to our root account).
Could this also apply to the WhatsApp "backdoor" thread from yesterday?
Also worth noting that in Safari the modals don't block the rest of the browser (I believe at least Firefox does this as well?) and are fully contained to their own tab.
And why is that?
It's because doing the opposite is in the way of maximizing your ROI (return on investment).
Put simply, it reduces the company's (investor's/owner's) money. It's always about the money.
... will try and find a screenshot
Edit: Couldn't find one so just installed it myself: https://pageshot.net/images/4af15a26-6eb8-45a2-b4d5-ed6ea19a...
Edit2: dom0 beat me to it below also
Still searching for a screenshot unfortunately... may just re-install it and take one myself.
Moreover, these kinds of UIs are still possible with the 'flat' look that's in vogue today, so there's little excuse why others choose not to do it. Perhaps one reason is that basic auth lost out early on to site-supplied login forms, so people got used to entering usernames and passwords into the page content anyway, instead of the browser UI.
For the most part, basic auth only tends to affect uses like intranet sites, router login pages, web services, remote management pages -- settings where phishing can still cause (serious) damage, so a harder-to-fake UI would be beneficial nonetheless.
On the comment about basic auth losing out, you might be right that that's a reason, but HTML5 APIs requiring some kind of UI confirmation from the user (like HTTP basic auth does) are far more widespread than they once were (see http://permission.site/ for many examples), so I don't think that excuse is really good enough for browser vendors.
Incidentally, it's worth noting that the UI for this kind of thing (user confirmation prompts) is pretty much a solved problem on mobile: these prompts tend to use the OS notification API, so always appear outside the browser chrome entirely.
Unfortunately, with HTML5 notifications, the ship has probably sailed on this, and it went from being an intriguing idea to a bad one: it's now mainstream for individual websites to generate OS-level notifications, which filled a previously privileged pool of messages with untrusted content.
To be fair, basic auth is not particularly user-friendly. If you want to add anything else to the login form, such as a "Remember Me" checkbox or a captcha, you can't put that in the browser chrome; you need to add an additional step in the login flow. If you want a "Did you forget your password? Click here to reset it." error message, the user has to cancel out of the auth dialog in frustration before they can see it (or you have to redirect them to an error page after failed auth, with another link/button to try logging in again).
And even if you solve those problems, the user still needs to enter their username/password when creating their account, and I have never seen Basic Auth used for this scenario.
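For readers unfamiliar with what the basic auth dialog actually sends, a minimal sketch (function names here are my own, for illustration): the browser just base64-encodes `user:password` into an `Authorization: Basic ...` request header, which is also why there's no room in the flow for extra form fields.

```python
import base64


def basic_auth_header(username: str, password: str) -> str:
    """Build the Authorization header a browser sends after the Basic auth dialog."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"


def parse_basic_auth(header: str) -> tuple:
    """Server side: recover the credentials from the header value."""
    scheme, _, token = header.partition(" ")
    if scheme != "Basic":
        raise ValueError("not a Basic auth header")
    user, _, pwd = base64.b64decode(token).decode("utf-8").partition(":")
    return user, pwd


print(basic_auth_header("alice", "s3cret"))  # Basic YWxpY2U6czNjcmV0
```

Note that the encoding is reversible, not a hash, so Basic auth is only as safe as the TLS connection carrying it.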
And Firefox greys out some browser chrome: https://i.imgur.com/JfJ57qA.gif
But these things are probably not going to be noticed by your average user...
What am I missing?
Multiple users actually entered their CC #s, two canceled them after they saw my reply to the tweet warning users.
Incredibly, Twitter has still not notified the scammed users about it despite removing the ad after my report and multiple tweets to support requesting they notify the affected users.
The fact that they allowed that ad to get through (essentially profiting from users identities and financial information being leaked?!) is just unbelievable, separate from their failure to protect/notify the users affected by the scam.
This is a neat blog post that goes to show the extent of faking that can be done in the browser. More talks about this will hopefully lead to better "security UI", as the author puts it.
Some people (notably Google) argue that EV certificates add very little value because the user can just as easily check the domain name. Thing is, I could probably get some PayPal- or Google-like domain. Even if I can't, I could use a data URI to put https://google.com in the address bar.
Compare that to EV. To get a browser to display google (or googel, or paypal), I'd need to convince a CA to issue such an EV cert. Whilst that might not be impossible, it takes something close to a state-level actor. A lot of phishers operate below that level.
What EV gives over the domain name is a fully CA controlled part of the UI. Whilst the address bar is the 'zone of death by phisher' the EV bar is the 'zone of death by CIA/KGB'.
...which of course is so much work that no one does it.
If you're opening it in a browser to check, you've also got to worry that the server may be looking at curl's user agent to decide whether to serve up a malicious payload.
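To make that cloaking concern concrete, here's a toy sketch of the server-side logic being described - all names and strings are illustrative, not taken from any real malware:

```python
def choose_payload(user_agent: str) -> str:
    """Toy model of user-agent cloaking: serve the benign page to tools
    like curl/wget (which people use to check suspicious links) and the
    malicious page to what looks like a real browser."""
    ua = user_agent.lower()
    scanners = ("curl", "wget", "python-requests")
    if any(tool in ua for tool in scanners):
        return "benign-page"
    return "malicious-page"


print(choose_payload("curl/7.68.0"))                        # benign-page
print(choose_payload("Mozilla/5.0 (Windows NT 10.0)"))      # malicious-page
```

Which is exactly why, if you do check with curl, it's worth overriding the user agent (e.g. `curl -A "Mozilla/5.0 ..."`) so you see what a browser would see.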
log.warning('UberForm field wrong: ' + input)
Even when people see a scam where the UI obviously doesn't match the OS, non-technical people fall for it. People get those scam sites in Safari on OS X and still click and pay.
This is essentially inverting the relationship, by inserting a "zone of life" inside the "zone of death".
Discussed at https://news.ycombinator.com/item?id=8670503
The example given of Chrome's line-of-death-crossing chevron is a good illustration of this tension. Say what you want about it security-wise, but you can't say it's not clean!
Take the Opera 12 example I mentioned in comments above: while I don't think Opera's overall UI design in 2011 was particularly visually pleasing, and certainly not as clean as Chrome's today, if you consider the UI pattern in isolation there's nothing preventing it from being done cleanly. Facebook uses the same UI pattern for the active state of its status input today.
That's just one obvious example - I'm not suggesting it's the only one. Google's Material Design guidelines advocate a lot of context-crossing - the canonical example being the "Floating action button" attaching to sheets
Of course, chances are that we'll get carried away with the possibilities of allowing AR web browsers to create arbitrary objects in the AR space long before we realize what a terrible idea that is...
Good TC article on the practice: https://techcrunch.com/2014/04/04/the-right-way-to-ask-users...
That'll leave plenty of room to have things drop down from the URL bar without much ambiguity.
Here's the PoC I did:
And the mitigation I proposed was from this:
They retorted “Well, we passed this screenshot around our entire information security department, and nobody could tell it’s a picture-in-picture attack. Can you?”
Maybe I'm naive, but shouldn't you be able to detect picture-in-picture attacks rather easily because you never opened that window in the first place?
Additionally, the "chrome" of the picture-in-picture would behave significantly than a real chrome.
I feel both of those points can't be assessed by showing people a screenshot, because people have significantly different expectations when looking at a screenshot of a website than when browsing a website by themselves.
Also with a buffer so nothing too similar is allowed, or perhaps a warning comes up if something is close.
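A hedged sketch of what such a "buffer" check could look like, using the similarity ratio from Python's stdlib difflib (the function names and thresholds are invented for illustration):

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """0.0..1.0 similarity between two names, case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def check_name(candidate: str, protected: list, block: float = 0.8, warn: float = 0.6) -> str:
    """Reject candidate names too close to a protected name; warn on near misses."""
    for name in protected:
        r = similarity(candidate, name)
        if r >= block:
            return "blocked"
        if r >= warn:
            return "warn"
    return "ok"


print(check_name("googel", ["google", "paypal"]))  # blocked
```

A real registry or CA would need something stronger (homoglyph normalization, edit distance on punycode, etc.), but the idea of a tolerance band around protected names is the same.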
I vote for something like the chrome dinosaur.
So in the blogpost announcing this distinctive icon, what does the screenshot look like?
What is "trusted"? We get the lock icon if a valid TLS connection is formed; if you want a more secure connection, you can get EV certificates. We could do away with the lock icon and only show a broken lock if not on TLS, and only show something that looks secure on EV certs, (which seems to be where browsers are headed.)
A simple valid TLS connection getting the lock icon is problematic when people are using DNS names that are close-but-not-quite to things like paypal.com. And we want TLS certs to be issued automagically ala Let's Encrypt and such, so it's easy, unfortunately, to get a cert for paypal-not-quite.com. Such is the difference in "secure connection" and "a secure connection to a party you trust."
I've seen lots of picture-in-picture attacks. They usually simulate Windows title bars and controls. Hah. I once saw one on a Mac which adapted to the OS and tried to show a Mac window frame, but it was an outdated version.
That brings me to another point: send an incorrect User-Agent. Same browser on a different OS, perhaps.
(I'll let myself out)
The picture-in-picture attacks seem serious enough to warrant a new kind of browser.