Bypassing Browser Security Warnings with Pseudo Password Fields (troyhunt.com)
175 points by pgl 7 months ago | 120 comments



If only we had these kinds of strong warnings in the VOIP industry. Nearly every provider barebacks the internet, throwing unencrypted signaling data (phone number dialed, keys pressed during the call, codec to use) and call media over the internet raw, just hoping that no one eavesdrops or alters their data.

HIPAA compliance? Nah bruh, unencrypted UDP is just fine! PCI-DSS says we can't take credit cards over this wholly insecure connection? Who cares! Just don't let the auditor near our PBX.

Sadly, the HTTPS and IPv6 anti-vaxxer crowd is strong in the VOIP community; if it isn't severely painful to the VOIP companies themselves, they aren't going to secure it. Not that they'd secure it properly anyway, even if their livelihoods depended on it...


I've seen PBX people find religion once they wake up Monday morning to a weekend of hacked calls to Africa. Though sometimes that religion is in the form of yelling at upstream providers to please give them refunds.

And because of these insecure systems, the more serious issues (all this SIP software is tons of C, and I've found exploits in just 1000 line utilities, let alone protocol level hackery and other fun) get ignored. I found a simple redirect bug in a VoIP platform. Ignored. Later some guy used it to the tune of $90K.

I don't want to become a criminal outright but I've gone from a "friendly disclosure offline and let you lie about fixing issues" to "Sit on 'em and sell 'em one day" model because everyone's so obtuse.


SIP... doesn't seem like the greatest protocol. Maybe I'm tainted because I've only really dealt with it in the context of the Microsoft IM/telephony platforms (renamed constantly, but always essentially the same), but it appears to be incredibly brittle and complicated. On anything less than a local LAN, messages get dropped or mangled or timed out all the time, and that trashes connections or puts sessions into unrecoverable states.


I guess you can argue about whether it's the greatest protocol, but the problems you describe most certainly are not protocol problems, but rather broken implementations.


The protocol problems are due to the text-based format and a bizarre desire to make said text format "human friendly". It inherits the stupidity from HTTP (line folding, comments in headers) and adds its own fun. The authors actually suggest that the "intent" of the message be followed in case of errors. It's terrible. The packet issues are due to the strange use of UDP with mandatory TCP switching. Needless complexity.

Oh, and these aren't theoretical. Two big, widely deployed implementations cannot even agree on how headers end, and will read the same message differently. This can be exploited when a network does header processing. Imagine adding an "x-accountid" header and removing any existing ones - if interpretations differ on what constitutes a header, an attacker can slip fake headers in. Not entirely dissimilar to browser exploits that let a script include a newline in a header value, maliciously setting headers it shouldn't be allowed to.
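
Purely as an illustration of the class of bug being described (the header names and the exact disagreement are made up for the example): a line-folding quirk where one parser treats a whitespace-prefixed line as a continuation of the previous header, while another treats it as a brand-new header.

    INVITE sip:bob@example.com SIP/2.0
    Via: SIP/2.0/UDP client.example.com
    From: <sip:alice@example.com>
    To: <sip:bob@example.com>
    X-Comment: harmless
     x-accountid: attacker-chosen-id   <-- leading space: parser A folds this into
                                           X-Comment's value, parser B sees a new
                                           x-accountid header that the network
                                           element never stripped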


Well, yeah, I certainly didn't mean to say that SIP is a great protocol, it is indeed quite terrible--but still, in practice, if things don't work, that's usually a result of at least one party not following the spec (which is not really surprising, given all the pointless complexity ... but then that's still not really an excuse to not implement SIP properly when you are claiming that you do).


Practically, it is not possible to implement the spec in an interoperable way. Existing implementations are mutually incompatible. Hence the need for config flags per "trunk". What one system requires, another will reject. Sad but true.


Well, yes. But still, more often than not, when I encounter interoperability problems, if both sides actually did what the spec says, there wouldn't be a problem. SIP itself is quite sad, but in my experience many implementations are even worse than the spec.


NAT punching with SIP has gotten worlds better in the past few years. The protocol itself is sufficient; it's just the actual implementations that are piss poor at this point, missing even basic stuff like IPv6.


Yeah, "security" people in the VOIP industry can't get their head out of their own ass. Seen it too many times myself!


In my experience, the entire world runs on insecure systems (and will continue to do so until companies start getting sued into oblivion for leaking data). Secure systems are the exception--not the norm. It's just not a priority because companies only prioritize things that "add value". So until we attach a real cost to lack-of-security, it won't be valued.

I've literally seen a college have admin credentials hosted on a publicly addressable plaintext document just so that their new machines can netboot. And that's just one, quick, story out of dozens upon dozens I have.


Admins are lazy and busy, which is why UW, for example, runs a totally insecure PBX - surprising, considering Avaya is usually one of the better vendors.


Admins lack management support for security issues.


What are VOIP systems used for these days? Private enthusiasts or some company call centers? I've no idea, as I haven't seen one for at least half a decade now -- even all conference calls are always over Skype or Hangouts.


The largest VOIP companies in America today are the cable companies, ACN (a pyramid scheme with a few million VOIP customers), Vonage (a first mover), and then a few dozen minor players that are near the half-million-customer mark. IMO it's a big (but unsurprising) marketing failure.

VOIP as a product never really made it into the home, despite all its features and enhancements. Instead, it is limited to the realm of businesses, where it is in most hospitals, medical facilities, chain stores, call centers, and so on.


All landline phone connections sold by Telekom in Germany are VOIP; it's just mostly hidden from the subscriber. The technology is doing perfectly fine.


VOIP and specifically SIP in its least secure form became the replacement for old tandem circuits. Sadly, rather than pushing SIP hardware out to the endpoints, most landline carriers chose to just provide twisted pair service.

VoLTE is as close as most consumers will get to proper VOIP service.


Giving everyone SIP phones is expensive, and forcing your subscribers to purchase SIP phones will ensure a number of them flee to the competition. Putting the VOIP hardware in the modem makes much more sense, because you have to provide that to your subscribers anyway.


I actually have my own Cisco Phone adapter to connect my old analog phone/answering machine. I'd assume that a true SIP phone would work as well. What's questionable is whether I could authenticate from a different network than my home network.

I know, however, that both Vodafone and Telekom in Germany offer products that allow connecting VOIP phones from anywhere to a virtual phone appliance.


That's interesting. In the Netherlands, all the telcos (with the notable exception of XS4ALL) lock down the modems and don't give you the settings and logins to use your own equipment (although the Consumer and Market Authority is still researching whether this is actually legal).


Locking the connection to a provided modem has not been legal in Germany since 2016-08-01 by law (FTEG, replaced by FuAG in 2017). The provider is obliged to hand you the credentials required to connect your own hardware. Getting the actual required configuration may take some puzzle work, but most providers give config examples for at least the most common hardware.

My setup, for example, consists of a DrayTek modem with UniFi network hardware and the above-mentioned Cisco phone adapter. I don't even have a piece of Telekom-provided hardware here. It took a bit of googling to get IPTV working, but other than that it's a perfectly stable and capable setup. Many people use Fritz boxes, which are a bit less capable but easier to set up.


I'm assuming you're talking about consumer VoIP; in the corporate world, Cisco (and presumably others) do a crapload of VoIP office phones.


I also haven't seen an office phone in ages, it's just all mobiles around here.


I've worked for pretty much every type of company except VC-funded Silicon Valley style start-ups since about 2008. Every desk I've ever sat at, including the one I'm at now, has had a VoIP phone sitting on it.


In the early days at Nest, one of my co-workers had trouble with his loan application because Nest, his employer, didn’t have a phone number. He showed the bank the company’s web site, and they were satisfied. Only a real company would have an actual web site.

For added fun, the site looked like this at the time https://web.archive.org/web/20110207225932/http://nestlabs.c...


Is your entire daily experience only between your home and office and car? You don't even have to go to an office to see non-mobile phones.

You've never noticed the phones at every checkout stand in the grocery store? Or at every customer service desk in every retail store in the nation? Or hotel front desks?

When that new coffee shop opens down the street and isn't on Yelp yet (or is, but the hours are wrong anyway), how do you find out its hours? Do you ask the barista for his Facebook ID? How do you find out information about a place that doesn't have a web site, or that has an obviously outdated web site? Have you never worked in a place with a receptionist?

The only place I've ever worked that didn't have telephones turned out to be a scam operation. I don't think I would trust a company that didn't have phones.


It's because you don't do bulk calls. Sales, support, orders, etc. - they all need office phones at a certain scale. Not that they couldn't do it with mobiles, but the infrastructure for those kinds of systems assumes a landline.


Decent desk phones are significantly more comfortable than cellphones, never mind the better audio hardware.


You could make special apps with a UI designed to help with the specific tasks, and with a good Bluetooth headset that would make for a nice mobile experience IMO. But yeah, currently solid IRL keypads + phone handsets are better than mobiles'.


You are almost certainly an exception. The next time you are out in public, just look around. VoIP phones are everywhere.

Hell, I have a Cisco VoIP phone on my desk in my home office, tied into $work's phone system.

(I actually have three VoIP phones here at home, but I'm a network engineer for an ISP/CLEC.)


I've seen a lot of tech companies use appear.in or talky.io for their VoIP conferencing as well.


"the HTTPS and IPv6 anti-vaxxer crowd "

What does this mean?


Pretty sure that means people who refuse to deploy TLS & IPv6, even when their hardware & software stack fully supports it.


There are situations where you need IPv4 due to rate-limiting. So while my stack technically supports IPv6 I have no reason to migrate to it.


You've been downvoted because this shouldn't be true, but since no-one else has stepped up, I'll bite :).

It's perfectly possible to do IP-based rate-limiting in the IPv6 world, you just need to do it based on different prefixes, rather than full IPs.

As a specific example, my ISP -- as is quite usual -- hands out /48s. So in the same way that you can rate limit my entire NAT'd IPv4 connection with a single entry, you can rate limit my entire IPv6 connection with a single entry, by storing the prefix.
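
A minimal sketch of that idea (JavaScript; the names are made up, and a real limiter would also expire its counters):

    // Rate-limit by /48 prefix instead of the full IPv6 address.
    function prefix48(ipv6) {
      // Expand "::" so we always have 8 groups, then keep the first 3 (48 bits).
      const [head, tail = ''] = ipv6.split('::');
      const headGroups = head ? head.split(':') : [];
      const tailGroups = tail ? tail.split(':') : [];
      const missing = 8 - headGroups.length - tailGroups.length;
      const groups = [...headGroups, ...Array(missing).fill('0'), ...tailGroups];
      return groups.slice(0, 3).join(':'); // e.g. "2001:db8:1234"
    }

    const hits = new Map();
    function allow(ipv6, limit = 100) {
      const key = prefix48(ipv6);
      const count = (hits.get(key) || 0) + 1;
      hits.set(key, count);
      return count <= limit; // a real limiter would reset counters on a timer
    }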


That makes more sense than trying to force a completely unrelated opinion into a conversation.

Also, the notion that broad use of IPv6 = security in VOIP, IoT or any area is a postulation at best.

I've personally always found this to be a good overview of security issues involved in both protocols in VOIP:

http://ieeexplore.ieee.org/abstract/document/6714161/

(Sci-Hub approved)

There are certain parts of the industry hesitant to transition because of the possibility of security misconfigurations and human error, but the picture of a VOIP industry that as a whole holds no interest in security is false. It could definitely improve, but that doesn't just apply to VOIP.


It's not an opinion or unrelated, but an analogy to another case where there's very strong evidence of a massive benefit with very little downside, which is being objected to based on conspiracy theories and a lack of concern for the damage to anyone foolish enough to believe them.


As a side note, the debate is not that simple. Vaccines work, but many vaccines are manufactured without what many would consider proper testing, and using toxic substances banned in different countries.

The debate is more about sloppy implementations than about the idea of vaccines as a whole. It's like someone is forcing the issue down to choosing poorly-regulated vaccines - or no vaccines at all. A false dichotomy.


The guidelines ask us to eschew flamebait, which means to avoid it altogether—both making it and taking it. This is what happens.

https://news.ycombinator.com/newsguidelines.html


> The debate is more about sloppy implementations than about the idea of vaccines as a whole. It's like someone is forcing the issue down to choosing poorly-regulated vaccines - or no vaccines at all. A false dichotomy

This actually goes quite strongly against what I've observed though. Literally all of the anti-vaxxers I've met are 100% against vaccines. They are not vaccinated, nor are their children.

They're not doing research and choosing to use some vaccines but not others. They're completely ignoring all vaccines.


> As a side note, the debate is not that simple. Vaccines work, but many vaccines are manufactured without what many would consider proper testing, and using toxic substances banned in different countries.

Those are big claims without any supporting evidence. From the sounds of it, you're repeating the anti-vax claims about mercury.


Ethyl- vs. methylmercury is one of the most popular debates (because mercury is scary!). Most vaccines switched to Thiomersal, which contains ethylmercury. However, even ethylmercury crosses the blood-brain barrier. The other two big ingredients are formaldehyde and aluminum. From there you have to get more specific about which vaccine you are talking about.


You still haven’t specified your claims. Which vaccines aren’t well tested? Which substances are included at levels known or suspected to be toxic, and by whom?


My claim is about the debate itself, not about any certain vaccine or danger.


So far you're following the anti-vax script perfectly: lots of FUD, no claims specified in enough detail to even evaluate them much less counteract the overwhelming evidence that vaccines are a public good with no reputable downside.


I've been supplying information and you've been complaining. I fail to see how I'm at fault for trying to answer questions.


You didn't just talk about the debate. You stated as a fact that many vaccines have improper testing or substances. If you state something as a fact, you should be prepared to back it up.


> You stated as a fact

You're putting words in my mouth. What I said was "...without what many would consider..."

I'm still talking about the debate itself and you're trying to make this a binary debate about vaccination.


You stated toxicity as completely objective. (And you had better mean "toxic at the doses given", because anything is toxic in large-enough quantity.)

> What I said was "...without what many would consider..."

That's still calling them correct about the vaccines not meeting those standards. There are a lot of claims about vaccine testing that are objectively false. It's not that their standards are higher, it's that they falsely believe vaccines undergo less testing than they actually do.

I don't want this to be a binary debate. I want you to quantify 'many' and provide actual evidence of anything.


You're still not supporting your claims. If, as you said, the debate is not that simple the onus is on you to justify the claim that there is in fact any significant debate over vaccination in the scientific community.


There's more mercury in the fish you eat, and it's the methyl kind, which sticks around for weeks. If you eat fruit, there's methanol naturally present which the body metabolizes into formaldehyde. I am willing to bet your kid gets a higher dose of formaldehyde from the juices they drink than from a vaccine.


> That makes more sense than trying to force a completely unrelated opinion into a conversation.

I'm curious, at what point does that opinion become fact? Isn't the evidence overwhelming?

I mainly ask because it feels like if the evidence for vaccination were not sufficient to warrant it as more than simply an opinion, wouldn't many other things become merely opinion too?


^^ Keys pressed... Jesus Christ, that alone has fueled entire generations of criminals. :))


Could we ask you to please comment more substantively, like the guidelines ask?

https://news.ycombinator.com/newsguidelines.html


"I’ve been speaking with the owner about SSL before I invest in becoming a member, but she’s been told by the dev of the platform (it’s a franchise system called ShopCity.com) that SSL is more about Google’s monopolizing visibility of content, and less to do with security"

This is an interesting observation of how Google's technical crusades often align with its profit interests.

The main threat that HTTPS everywhere protects against is your ISP analyzing your traffic in order to build and sell an advertising profile on you.

Now, obviously that is something I don't want, so I am all for HTTPS everywhere, but Google already has that profile, so for them HTTPS everywhere is eliminating the competition.


I don't want my ISP to inject JavaScript into random pages or analyze my traffic. That should be downright illegal. They should be like a water supply company: provide me damn clean water and get out of my way.

Somehow the sewage company doesn't analyze my urine (I hope) to figure out if I prefer spicy or sour food and get an extra buck from third parties, and somehow they're still in the business.


> I don't want my ISP to inject JavaScript into random pages or analyze my traffic. That should be downright illegal. They should be like a water supply company: provide me damn clean water and get out of my way.

I don't want my search engine to do that either. They should provide me damn accurate results and get out of my way.

Unfortunately, whereas I have a choice of several good ISPs here (UK), I have a choice of precisely one good search engine - Google - and it analyses my traffic to high heaven. (And I've tried DuckDuckGo, on several occasions for several weeks at a time, and I'm afraid it still sucks.)


I find DDG useful for probably ~75% of searches, so I keep it as my default, and switch over to Google if needed.

What aspect of DDG "sucks" for you?


I have been using DDG for several years now, and it works just fine. You of course are free to like Google better - to each their own - but please do not claim there's no choice. There is choice; you just happen to like Google better - fine, so live with Google then :)


> I don't want my ISP to inject JavaScript into random pages or analyze my traffic.

I haven't seen a single reputable ISP do this anywhere. It would be illegal.

Is the US really such a third world nation that not even basic regulation like this exist?


If you run a JavaScript error collector or use CSP on a public site you’ll find this all over the world.

Mobile ISPs inject horrible 90s JavaScript which recompresses images (see e.g. https://calendar.perfplanet.com/2013/mobile-isp-image-recomp...).

Many ISPs - both mobile and wired - inject code to send messages about your account.

ISPs like Comcast have tried injecting ads:

https://www.infoworld.com/article/2925839/net-neutrality/cod...

https://arstechnica.com/tech-policy/2013/04/how-a-banner-ad-...

After leaving Mozilla, Andreas Gal described ISPs reselling search engine queries and results to Google competitors:

https://andreasgal.com/2015/03/30/data-is-at-the-heart-of-se...
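
A minimal sketch of the CSP-based detection mentioned above (Node; the /csp-report endpoint and port are made up for the example): a report-only policy reports anything injected into the page without blocking it.

    const http = require('http');

    http.createServer((req, res) => {
      if (req.method === 'POST' && req.url === '/csp-report') {
        let body = '';
        req.on('data', (chunk) => { body += chunk; });
        req.on('end', () => {
          console.log('CSP violation report:', body); // e.g. a script injected by an ISP
          res.writeHead(204);
          res.end();
        });
        return;
      }
      res.writeHead(200, {
        'Content-Type': 'text/html',
        // Report-Only: violations are reported rather than blocked, so nothing breaks.
        'Content-Security-Policy-Report-Only':
          "default-src 'self'; report-uri /csp-report",
      });
      res.end('<h1>hello</h1>');
    }).listen(8080);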


I used to have internet from Cox Communications. One day when I was working from home, I was surprised to see a page in my local dev environment load with an overlay telling me my ISP thinks I have a Windows virus (I didn't own any Windows machines at the time).

Turns out our dev environment didn't force CDN assets to load over HTTPS, so my ISP injected some JS into a library we were loading.

I tweeted at them, and they not only verified that it came from them, but also that they consider hijacking and modifying my traffic to be a service.


"With respect to your complaint that Cox intercepts and injects our own data in order to display alerts, etc, we note that browser alerts are a method Cox utilizes to bring customers’ attention to important information that may affect their Internet experience."

They'll also appear if you're nearing your data allowance or if their email service (that you probably don't use) is undergoing maintenance. When I last asked, there is no possible way to opt-out or disable these.


US Cable companies in the past have injected alerts into unsecured sites users browsed, to tell them they're running out of data. If they could do that, they could inject ads too.

US Mobile companies inject identification headers in unsecured HTTP calls, for advertiser tracking; and in other cases allow servers to ping the user's IP back to the ISP to get full details of the user (including addresses).

The regulatory agencies responsible for regulating the cable and mobile companies in the US are right now hellbent on removing net neutrality and are fighting against the consumer. Fat chance of those "basic regulations" existing or surviving.



Siblings have pointed out how very much it does happen, but if I may take issue with:

> Is the US really such a third world nation that not even basic regulation like this exist?

First, the US is by definition the First World (USSR et al. being second world, third being "everyone not allied with first two"). Second, we have somewhat different ideas about freedom that, often, lead to an extreme lack of regulation; the hope is that this gives more freedom and we'll work around abusive actors (yes, I know monopolies are an obvious weak point in the system).


It has nothing to do with regulation. Users sign contracts that explicitly state the company will do this. You can't fault the company for doing what it told you it would do.


> The main threat that HTTPS everywhere protects against is your ISP analyzing your traffic in order to build and sell an advertising profile on you.

That is not true. The main threat it protects against is MitM (man in the middle attacks) that allow someone to redirect all traffic to a website through their machine and thus see all the data including your password.

HTTPS when combined with root certificate trust is very effective at preventing these kind of attacks. Without it, using any shared internet at all (such as a company, school, or coffee shop) to log into any website or enter your credit card would be trivially easy to hack.

Seriously, I can boot up Wireshark, go to my coffee shop and easily see every non-HTTPS communication going over the network. IM messages, emails, and in cases like this post suggests... passwords too.

Edit: As a side note... I do this all the time to reverse engineer the wireless protocol for IoT devices, since most of them do not use HTTPS yet. I use it for personal use, but it could be used for harm as well. For instance, if the security cameras are IP cameras over HTTP, I could probably intercept the password and use it to remotely turn off the cameras.


MitM is blown out of proportion. AFAIK, only DNS poisoning attacks will result in MitM as effective as phishing or botnets, and poisoning can be mostly solved with better resolvers. No criminal anywhere cares about your password going over the wire in a coffee shop.


Which is true exactly up to the point where some criminal decides it's an attack worth automating, as with every other attack.

The one thing that reduces the likelihood of that happening is to minimize the amount of credentials you could get your hands on using that attack.


If you have a mass target sure, there are better attacks.

But if your target is only a single network, packet sniffing is pretty effective and is stopped by HTTPS.

And if your target is a single person or a small group of enumerable machines, ARP poisoning still works on many (most?) networks.

Personally I am more scared of the damage that can be caused by being a direct target than I am having my info in one of those massive dark-web data dumps.

Edit: Also, the OP's post is actually an example of a MitM (where the ISP is the one in the middle). I just expanded it to the superset.


Perhaps, but it's fairly easy to un-Google your life nowadays.


Except it shouldn't take work!.. Like with cell phones.

Until recently, I had no idea that manufacturers actually PAY Google to have the services on Android... Talk about idiocy.


It doesn't seem like idiocy to gain access to the largest application ecosystem available for their product.

I mean, you _could_ go the Amazon route, but how's that working out for them? Their mobile platform is not exactly flourishing. An Android device without Google Apps simply isn't going to sell in the millions.


> The main threat that HTTPS everywhere protects against is your ISP analyzing your traffic in order to build and sell an advertising profile on you.

As someone living in Europe, this literally happens nowhere. Because it's illegal.

SSL in browser has nothing to do with our ISPs. Stop being US-centric.


Yeah, no way that would happen in Europe.

Oh wait.

http://www.telegraph.co.uk/technology/news/8438461/BT-and-Ph...


Nice trick. Can it be used to stop password managers? Our system works over HTTPS, and saving the password for login is okay, of course - password managers are great and they must obey the user, not the page. But on some pages of the system one has to enter credentials for other systems using password fields. Chrome always wants to remember these and prefills them with the login data for our system. Can this be stopped? Maybe by using this font?


Have you tried adding:

autocomplete="new-password"

to the field? This is supposed to stop Chrome autofilling the value.
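
For context, a minimal sketch of the markup (the field name is hypothetical):

    <!-- credentials for some other system, not the user's own login -->
    <input type="password" name="other-system-password" autocomplete="new-password">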


No need for the JS, just change the font if the input isn't empty, right?


Yeah, but this is 2017, when we can't even display images or text on a webpage without JavaScript. Why do in a few characters of CSS what can be done in kilobytes of remotely-executed JavaScript?


Alternatively, couldn't you make "password" an eight-character ligature that looks like the word rather than eight discs?


Yeah, the :empty pseudo-class should do the trick.


:empty is for elements with no children, not empty value.


You could potentially use the validation rules, and set a min length of 1.


You'd need [required] instead of [minlength] for :valid to work. The other option is :placeholder-shown, to style no-input normally, but it's newer and not as supported.
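
A rough sketch of both options (the class name and font name are hypothetical; the second selector only matches if the input carries a placeholder attribute):

    /* Option 1: with the required attribute an empty field is :invalid,
       so :valid only matches once something has been typed. */
    input.pseudo-password:required:valid {
      font-family: "password-mask"; /* hypothetical disc-glyph font */
    }

    /* Option 2: newer, no validation attributes needed,
       but the input must have a placeholder attribute. */
    input.pseudo-password:not(:placeholder-shown) {
      font-family: "password-mask";
    }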


I wonder if the good old input[value=""] would work. Probably not if you have jQuery.


No, because that matches on the HTML attribute (aka the defaultValue property in JS), not on the current value of the input.
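
One workaround sometimes used (not from the thread; the selector is just illustrative) is to mirror the live value back into the attribute so an attribute selector stays accurate:

    // Keep the value attribute in sync with what the user typed,
    // so CSS like input:not([value=""]) reflects the current value.
    const field = document.querySelector('input.pseudo-password');
    field.addEventListener('input', () => {
      field.setAttribute('value', field.value);
    });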


It's kind of clever, but also bloody stupid at the same time.


I'm kind of happy that developers responsible for this type of stuff get exposed - if they can't even get HTTPS set up, then in my mind there's a high chance that other parts of their setup are following weak security practices. Now I know where NOT to sign up!


This hack is exactly what I needed 2 days ago while working on a browser-based terminal app. My site will be secured with SSL/TLS, but I needed a way to make a content-editable span mask input like a password field. I already implemented it with a password input, but it doesn't wrap inline like a span does. It will be much cleaner to just add a class that masks the font.


A password field does more than just mask input. At least on macOS, it's also a secure input field where the OS ensures no other applications can see what is entered. Simply masking the font will imply to the user that their input is secure, when in fact it is not.


That is a good reason to keep using a password input. I should probably intercept the keypress events and show nothing so that the password never even reaches the DOM. Then I wouldn't have to worry about the input growing/wrapping.
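
A rough sketch of that idea, assuming a contenteditable span (the element id and the bullet masking are made up; paste and IME input would need extra handling):

    // The real password only ever lives in this JS variable.
    let secret = '';
    const span = document.querySelector('#terminal-password'); // hypothetical element

    span.addEventListener('keydown', (e) => {
      if (e.key === 'Backspace') {
        secret = secret.slice(0, -1);
      } else if (e.key.length === 1 && !e.ctrlKey && !e.metaKey) {
        secret += e.key; // printable character
      } else {
        return; // let Enter, arrows, etc. behave normally
      }
      e.preventDefault(); // the character itself never reaches the DOM
      span.textContent = '\u2022'.repeat(secret.length); // render bullets only
    });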


In most terminal apps I use, when you type in a password field nothing shows in the terminal at all (not even a mask). Though admittedly, that is not the best user experience for providing feedback.


The terminal has a feature called "local echo".

When you type something, it goes to the application (almost always its stdin file descriptor, but it can open /dev/tty and read that too).

When local echo is enabled, the terminal also prints what you type.

Applications that prompt for passwords simply (temporarily) disable local echo.
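
For illustration, a sketch of the same idea in Node (edge cases and error handling omitted): setRawMode(true) stops the terminal driver from echoing keystrokes, roughly what getpass()-style prompts do by clearing the ECHO flag.

    function promptPassword(prompt) {
      return new Promise((resolve) => {
        process.stdout.write(prompt);
        let buf = '';
        const onData = (chunk) => {
          for (const ch of chunk.toString('utf8')) {
            if (ch === '\r' || ch === '\n') {                 // Enter: done
              process.stdin.setRawMode(false);
              process.stdin.pause();
              process.stdin.off('data', onData);
              process.stdout.write('\n');
              return resolve(buf);
            }
            if (ch === '\u0003') process.exit(130);           // Ctrl-C
            else if (ch === '\u007f') buf = buf.slice(0, -1); // Backspace
            else buf += ch;                                   // stored, never echoed
          }
        };
        process.stdin.setRawMode(true); // no local echo from here on
        process.stdin.resume();
        process.stdin.on('data', onData);
      });
    }

    promptPassword('Password: ').then((pw) => console.log(`got ${pw.length} chars`));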


I wonder if terminal apps hook into the same security input APIs from the OS that browsers are using for password inputs.


Edit: Ignore this. It seems most of the posts I was talking about were either deleted or edited. Keeping my original message here for completeness' sake.

---

The number of people saying that this is a clever workaround and agreeing with the people putting in the bug reports is very disheartening to see on HN. If a highly technical crowd such as HN can't get why HTTPS is important, then what hope does everyone else have?

Troy Hunt is a security researcher. The examples of the bug reports and quotes were meant to terrify, and it worked on me. If I was a customer of any of these examples I would be pissed. If you're not upset and/or frightened, please, for everyone's sake, take an infosec course or read up on the subject.


> The number of people saying that this is a clever workaround and agreeing with the people putting in the bug reports is very disheartening to see on HN

Huh? I've been through this entire thread and I haven't seen anyone suggest that this is acceptable behavior; even in the heavily-downvoted comments. It's mostly just people laughing at the lengths this site went to to shoot itself in the foot.


Wow... looking at it now, I don't either. I don't know if they got flagged or deleted or edited or what. One of the comments in particular I can see was clearly edited to make it more clear that they thought it was a bad idea.


Surely it would be easier to just get a cert.

What's preventing these types from doing so?


These kinds of websites rely on having a huge number of domains (for branding purposes) all doing roughly the same thing - in this case basically 'Shop*.ca'. Of course HTTPS can be automated and facilitated with LetsEncrypt, but it would require switching a working system from one way of doing things to another. It's an investment they are not willing to make, because their customers (shops willing to pay for an extended listing on those websites) are shopkeepers, not tech-savvy users asking for improved security.


What's easier for a dev: inserting a few lines of code, or getting access to the production server, setting up letsencrypt (or getting a budget approval for the $10/year certificate)?


It's rarely just "inserting a few lines of code", though. They presumably had to think about how to work around the problem, look for an appropriate font, etc. All for the purpose of hiding a symptom of an underlying problem.


Yes, exactly - there's a framework in place, which allows for the bullchit. :/


I believe it's either (a) a lack of understanding of _why_ one should use SSL or (b) a mistaken sense of principle, of standing up to the perceived bullying of Google, which, come to think of it, is basically an application of (a).


> (b) a mistaken sense of principle, of standing up to the perceived bullying of Google, which, come to think of it, is basically an application of (a)

But Google has been bullying around with their behaviour. I don't think that's even debatable.


They have been bullying around, but their enforcement of HTTPS for forms with password inputs shouldn't count as one of their instances of bullying. It's something browser vendors should have implemented long ago, even before LetsEncrypt came along, because it is highly insecure and users should know about it.


> It's something browser vendors should have implemented long ago

Netscape Navigator did this (almost) 20 years ago.

EDIT: Link. http://www.kentlaw.edu/faculty/rwarner/classes/legalaspects/...


Fortunately Google is consistent about enforcing encryption anywhere passwords could be intercepted.

Oh, wait.

http://blog.elliottkember.com/chromes-insane-password-securi...

And if you disagree with them, you're "a novice".

https://news.ycombinator.com/item?id=6166886


That article is from 2013. You can set a master password on Chrome now. It then requests that password whenever you wish to view a password in the manager.

If you don't set a master password, then your passwords are (presumably) encrypted with your google account. So anyone using Chrome that's logged into your google account will be able to view the passwords via settings. So just don't let malicious users use your Chrome?

Edit: And there's also a guest mode for Chrome, but they can just exit out of the window and run a regular instance of Chrome to use it under your profile.


Can someone give a precise idea about this entire subject?


Huh? I don't understand your question.


I just realized: This site was never over HTTPS. It was an HTTP site, and the browser had a regression that broke the site's user functionality.

Troy Hunt is attempting to claim this is a feature and not a bug, and that their workaround is "being deceptive", when they never claimed it was secure to begin with.

The browser is literally pushing an idealistic philosophy down websites' throats and basically doing damage to businesses and brands without an attempt to help them, and any attempts to simply keep old functionality are being vilified as "anti-vaxers". This is not an honest narrative.

Yes, Oil and Gas International had an insecure site, and yes, their reaction and demand to the browser vendor was inappropriate. But the point of it is still valid: as a vendor, you don't embarrass and damage business reputations in order to force them to comply with the way you would like them to run their sites.

Troy writes in the article that browser vendors are trying to use a "lever" to "force organizations to go secure". I don't care who you are, it's wrong to force anyone to do anything they don't want to do, and on your timeline instead of theirs, and with absolutely no help given to them before this deadline.

Imagine if Microsoft changed their OS to flag every single application as "insecure" if it doesn't implement a new primitive, and they pushed this out today. All of a sudden, you receive a barrage of calls from upset users. You didn't know they were going to push that out (certainly Microsoft never sent you an e-mail), and you now have to hit the ground running trying to figure out how to add those primitives to your code, test them, and release them, none of which could possibly happen immediately, and may take weeks of development. Meanwhile, your reputation with your users is damaged, and users themselves go through emotional stress and fear. And Microsoft's response? "Too bad. You should have been secure already."

This is fucked up. And if Google does this knowing it's going to damage businesses, they could face a class-action lawsuit.

The only way they get away with it is because they have the biggest market share. If Chrome had a smaller user base, businesses would simply shut off access to Chrome browsers and tell them their browsers were faulty and to switch to IE. This is impressively tyrannical behavior for a software vendor, and Google is indeed being a bully.


The key thing to realize here is that the browser is the "user agent": it is supposed to represent the interests of the user.

Now users have pretty diverse interests, so browsers don't always get this entirely right, which is one reason it's important to have a variety of browsers so users can pick one that does represent their interests.

What's happening in this case is that the site is doing something that pretty much everyone who understands the issue agrees is harmful to users: having them type their password into an insecure page. Browsers and security professionals spent 10+ years trying to convince web sites to stop doing that. Then browsers spent a few years telling websites that they will start warning users about this behavior and giving specific timelines for when this would happen. Then they started showing those warnings they promised they would show.

To go back to your analogy, it's as if Microsoft had told developers for a long time that some specific API is deprecated due to being "insecure". Then they gave a timeline for the API being removed. Then they removed it. Can there still be applications who didn't move away from that API? Sure. Is it entirely Microsoft's fault that they are now getting lots of support calls? That's a hard case to make. Note that this sort of deprecation is something that Microsoft and Apple have in fact done.

> The only way they get away with it is because they have the biggest market share.

Firefox is showing the same warnings, no?

> businesses would simply shut off access to Chrome browsers and tell them their browsers were faulty

Sure, just like in the Microsoft case businesses tell their users to not install the OS security update, etc. You're right that if Chrome and Firefox had smaller marketshare businesses _could_ threaten to do this or actually do this. But at that point it's not entirely clear who the real "bully" is... In either case there's an exercise of market power to get your way against the (possibly reasonable) objections of others.

Disclaimer: I work for Mozilla, on Firefox.


> the browser is the "user agent": it is supposed to represent the interests of the user

> it's important to have a variety of browsers so users can pick one that does represent their interests

First of all, I'm now terrified of Mozilla/Firefox, because this comment reflects the idea that browsers should be developed as independent ethical entities that represent different groups, in the way special interest groups lobby on behalf of specific people, ignoring the concerns of everyone else.

Second, it's dangerous to put the onus of security on everyone but the user. I'm sure you've seen the wall of sheep: it's more than just http passwords. Users are stupid, and they get security wrong, and they need to be helped to get it right. But one thing that won't help them is absolving them of any thought whatsoever into investing in their own security.

Where this will end is a marketing campaign that sounds a lot like "Mozilla Firefox: The Secure Browser". All they need to do is download your program and just assume everything is fine. Which will of course be a lie, but one that everyone will accept, because they want it to be true.

The browser should not become a political toy. It should be simply a tool, and it should be up to those who wield that tool to decide how it is used. If I make an axe, I don't come to your farm and tell you how to swing it.

This could have been trivially handled by simply asking users how much concern they want to have over their security, or providing some mechanism for organizations to easily transition into technology changes at a pace that works for them. Instead it seems like browser makers are too fond of themselves as white knights to provide reasonable compromises.

Browsers are completely at fault for handling security so poorly in the first place. They continue to have the most asinine user experiences in the world when it comes to understanding what is actually going on when a user browses the web. They continue to support standards which can be easily subverted. They continue to build hack after hack into something that was supposed to just navigate documents and is now an entire fucking application platform. Browsers are a mess, and it's their designers that are at fault for that mess. Now it's clear that a mentality of moral superiority and special interests is the cause.

And while I'm ranting, what is wrong with browsers that they can't simply build a working secure authentication framework into the protocol and back it with a halfway usable UI? How is it that a 20-year-old tool used to access backend servers has a more effective authentication and authorization system than the most commonly used program in the entire world? It's not like this stuff was some mystery that the poor lowly browser devs couldn't understand. We don't need to be relying on shitty web forms to send plaintext passwords - we didn't need to be doing that in the year 2000!!!! How the hell is it that this piece of software, which is somehow more complex than my entire operating system, can't seem to perform the basic functions I've been doing with other programs for half my life? And yet they have the balls to claim they're working in service to the user?

You know what would have been great for the users? A secure protocol which didn't degrade its own security. A URI convention that refuses to communicate with insecure sites. A button that rejects all connections not destined for the domain in the address bar, and functions that control the browser or access to its data without the user expressly allowing it. Simple things that could have actually completely ensured users' safety, without ridiculous complicated kludges that only do half of what they're supposed to do. And these should not be considered controversial - it's not like I'm suggesting they implement security policies before they add buggy features to brand new releases.

You are right, though. Browsers did take 10+ years to enforce a policy that is as unnecessary as it is sudden. I'm sure users will thank the browser vendors now for how much safer they are from black hat hackers in coffee shops. Oh, wait - they are still insecure. It's just now they know it and are unhappy about it, and other organizations can now capitalize on this.


> because this comment reflects the idea that browsers should be developed as independent ethical entities that represent different groups

I'm not sure where "ethical" came into that.

Some users want to have features that allow them to read websites in their preferred fonts. Other users don't care about fonts, but _really_ care about the colors and want high contrast. Still others want to have strong privacy safeguards (think Tor), while a fourth set care about privacy a bit less than that, and a fifth set don't care about privacy at all. These diverse needs might best be served by multiple different browsers that focus on different aspects of the user experience.

I see no reason why a browser that explicitly tries to make the web more usable for people who are red/green colorblind, say, should be a problem, though it seems to me that you do....

> it's dangerous to put the onus of security on everyone but the user

No one is suggesting that. However the reality is that there are maybe at most double-digit numbers of different browsers, maybe hundreds of millions of websites, if you're very generous, and billions of users. You ideally want to enforce security at chokepoints, which is why the browsers do most of the lifting here, then websites, then users.

There have been tons of user education campaigns in the history of the internet. To some extent they've even worked.

> The browser should not become a political toy. It should be simply a tool

Sure, and no one suggested it should be a "political toy". But maybe one user wants a flathead screwdriver and another wants a phillips head. And a third one wants a hammer, or hex wrench.

> This could have been trivially handled by simply asking users how much concern they want to have over their security

Been done, via surveys. The answer is "a lot". And yes, we could just say that if they care then they should be constantly vigilant. But constant vigilance is something people are really bad at (on a hardware level!), compared to computers. So any time we can design systems that don't require constant vigilance from people we probably should. I would go so far as to claim that requiring constant vigilance from people when we don't have to, and then blaming or punishing them when they cannot comply, is simply unethical.

> or providing some mechanism for organizations to easily transition into technology changes at a pace that works for them

This is why browsers have been cooperating at creating things like Lets Encrypt, precisely to provide such a mechanism. The question of timeframes is a complicated one, of course.

> Browsers are completely at fault for handling security so poorly in the first place.

No argument there. This is something browsers have been trying to do better.

> Now it's clear that a mentality of moral superiority and special interests is the cause.

I think you're reading things into what I said that were simply not there.


This isn't about pages that are just serving good old fashioned plain #content, it's about pages POSTing passwords. Hopefully there is no need to explain why that is bad.

I agree with most of your post, but it's coming from an incorrect assumption.


It's not coming from an incorrect assumption.


LOL, I love the guys filing that bug report with Mozilla... Had to have cost some overtime for their network admins. xD

The way I see it, this - along with most things - comes down to the fact that when you put the mechanisms there, people will use them and abuse them.

20 years ago, a browser was 3 MB; today they're 30 to 60 MB large, and that's with much better compression of the installer.

Why do we need this-and-that service integration within the browser, to "follow trends" of the likes of Adobe, Microsoft?..

I don't think so. Cut it all out. Someone wants to watch a video: install a codec. Their service uses different coding? Tough luck, get with the (popular, useful) standard(s), or gtfo.

What the hell do I care about your corporate policy of "creating new jobs" and "advancing development", all you're doing -anyway- is peddling your products. In my browser. On my hardware, which I paid for, meh. Introducing 1000&1 vulnerabilities, where there should be none.

Right, wrong? Know what I mean?


>Someone wants to watch a video: install a codec. [...] Introducing 1000&1 vulnerabilities, where there should be none

I'd rather use a codec maintained and updated by Mozilla every 6 weeks than some "community maintained" codec that I installed years ago and has to be manually updated.


YMMV, but I'd prefer the browser vendor to work on a browser, rather than try to become an operating system with its own libraries and update channels.

A web browser shouldn't normally bundle image decoders, audio or video codecs, on-screen keyboards or printer and video card drivers.

Obvious exceptions apply, of course - e.g. if OS doesn't have built-in image decoder for a specific format it totally makes sense to bundle one.

But this is really getting off-topic.



