HIPAA compliance? Nah bruh, unencrypted UDP is just fine! PCI-DSS says we can't take credit cards over this wholly insecure connection? Who cares! Just don't let the auditor near our PBX.
Sadly, the HTTPS and IPv6 anti-vaxxer crowd is strong in the VOIP community: if it isn't severely painful to the VOIP company itself, they aren't going to secure it. Not that they'd secure it properly anyway, even if their livelihoods depended on it...
And because of these insecure systems, the more serious issues (all this SIP software is tons of C, and I've found exploits in just 1000 line utilities, let alone protocol level hackery and other fun) get ignored. I found a simple redirect bug in a VoIP platform. Ignored. Later some guy used it to the tune of $90K.
I don't want to become a criminal outright but I've gone from a "friendly disclosure offline and let you lie about fixing issues" to "Sit on 'em and sell 'em one day" model because everyone's so obtuse.
Oh, and these aren't theoretical. Two big, widely deployed implementations cannot even agree on how headers end, and will read the same message differently. This can be exploited when a network does header processing. Imagine adding an "x-accountid" header and removing any existing ones - if interpretations differ on what constitutes a header, an attacker can slip fake headers in. Not entirely dissimilar to browser exploits that let a script include a header with a newline in the value, letting scripts maliciously set headers they shouldn't be allowed to.
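A toy sketch of that kind of disagreement (the two parsers, the message, and the header name are all made up for illustration - this is not the actual bug in any named product). One parser treats only CRLF as a line break; the other also accepts a bare LF. A filter built on the strict parser won't see - and so can't strip - a header the lenient downstream device will happily honor:

```python
# A SIP-ish message where the attacker hid an x-accountid header behind
# a bare LF inside what the strict parser thinks is the Subject value.
raw = (b"INVITE sip:bob@example.com SIP/2.0\r\n"
       b"Via: SIP/2.0/UDP host\r\n"
       b"Subject: hi\nx-accountid: 999\r\n\r\n")

def parse_crlf_only(msg):
    # Strict: only CRLF terminates a header line.
    head = msg.split(b"\r\n\r\n", 1)[0]
    return head.split(b"\r\n")

def parse_any_newline(msg):
    # Lenient: a bare LF also terminates a header line.
    head = msg.split(b"\r\n\r\n", 1)[0]
    return head.replace(b"\r\n", b"\n").split(b"\n")

# The strict parser folds the fake header into "Subject", so a filter
# using it won't remove x-accountid...
assert not any(l.lower().startswith(b"x-accountid")
               for l in parse_crlf_only(raw))
# ...but the lenient parser sees it as a header of its own.
assert any(l.lower().startswith(b"x-accountid")
           for l in parse_any_newline(raw))
```

Same shape as classic HTTP request smuggling: any two hops that frame the message differently give an attacker a gap to hide things in.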
I've literally seen a college keep admin credentials in a publicly addressable plaintext document just so that their new machines could netboot. And that's just one quick story out of the dozens upon dozens I have.
VOIP as a product never really made it into the home, despite all its features and enhancements. Instead, it is limited to the realm of businesses, where it is in most hospitals, medical facilities, chain stores, call centers, and so on.
VoLTE is as close as most consumers will get to proper VOIP service.
I know, however, that both Vodafone and Telekom in Germany offer products that allow connecting VOIP phones from anywhere to a virtual phone appliance.
For added fun, the site looked like this at the time https://web.archive.org/web/20110207225932/http://nestlabs.c...
You've never noticed the phones at every checkout stand in the grocery store? Or at every customer service desk in every retail store in the nation? Or hotel front desks?
When that new coffee shop opens down the street and isn't on Yelp yet (or is, but the hours are wrong anyway), how do you find out its hours? Do you ask the barista for his Facebook ID? How do you find out information about a place that doesn't have a web site, or that has an obviously outdated web site? Have you never worked in a place with a receptionist?
The only place I've ever worked that didn't have telephones turned out to be a scam operation. I don't think I would trust a company that didn't have phones.
Hell, I have a Cisco VoIP phone on my desk in my home office, tied into $work's phone system.
(I actually have three VoIP phones here at home, but I'm a network engineer for an ISP/CLEC.)
What does this mean?
It's perfectly possible to do IP-based rate-limiting in the IPv6 world, you just need to do it based on different prefixes, rather than full IPs.
As a specific example, my ISP -- as is quite usual -- hands out /48s. So in the same way that you can rate limit my entire NAT'd IPv4 connection with a single entry, you can rate limit my entire IPv6 connection with a single entry, by storing the prefix.
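A minimal sketch of that idea - deriving the rate-limit key by truncating an IPv6 address to the customer prefix (a /48 is assumed here, matching the parent comment; real deployments would pick the prefix length to match the ISP's allocation policy):

```python
from ipaddress import ip_address, ip_network

def limit_key(addr: str) -> str:
    """Return the string to key a rate limiter on.

    For IPv6, collapse the address to its (assumed) /48 customer
    prefix so a subscriber can't dodge limits by rotating through
    the 2^80 addresses in their allocation. For IPv4, the full
    address is the key, as usual.
    """
    ip = ip_address(addr)
    if ip.version == 6:
        return str(ip_network(f"{addr}/48", strict=False))
    return str(ip)

# Two addresses from the same /48 map to one rate-limit entry...
assert limit_key("2001:db8:1234::1") == limit_key("2001:db8:1234:ffff::42")
# ...while a different customer prefix gets its own entry.
assert limit_key("2001:db8:1234::1") != limit_key("2001:db8:9999::1")
```

The limiter itself (token bucket, sliding window, whatever) is unchanged; only the key derivation differs between the two address families.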
Also, the notion that broad use of IPv6 = security in VOIP, IoT or any area is a postulation at best.
I've personally always found this to be a good overview of security issues involved in both protocols in VOIP:
While certain parts of the industry are hesitant to transition because of the possibility of security misconfigurations and human error, the picture that the VOIP industry as a whole has no interest in security is false. It could definitely improve, but that doesn't just apply to VOIP.
The debate is more about sloppy implementations than about the idea of vaccines as a whole. It's like someone is forcing the issue down to choosing poorly-regulated vaccines - or no vaccines at all. A false dichotomy.
This actually goes quite strongly against what I've observed though. Literally all of the anti-vaxxers I've met are 100% against vaccines. They are not vaccinated, nor are their children.
They're not doing research and choosing to use some vaccines but not others. They're completely ignoring all vaccines.
Those are big claims without any supporting evidence. From the sounds of it, you're repeating the anti-vax claims about mercury.
You're putting words in my mouth. What I said was "...without what many would consider..."
I'm still talking about the debate itself and you're trying to make this a binary debate about vaccination.
> What I said was "...without what many would consider..."
That's still calling them correct about the vaccines not meeting those standards. There are a lot of claims about vaccine testing that are objectively false. It's not that their standards are higher, it's that they falsely believe vaccines undergo less testing than they actually do.
I don't want this to be a binary debate. I want you to quantify 'many' and provide actual evidence of anything.
I'm curious, at what point does that opinion become fact? Isn't the evidence overwhelming?
I mainly ask because it feels like if the evidence for vaccination were not sufficient to warrant it as more than simply an opinion, wouldn't many other things become merely opinion too?
This is an interesting observation of how Google's technical crusades often align with its profit interests.
The main threat that HTTPS everywhere protects against is your ISP analyzing your traffic in order to build and sell an advertising profile on you.
Now, obviously that is something I don't want, so I am all for HTTPS everywhere, but Google already has that profile, so for them HTTPS everywhere is eliminating the competition.
Somehow the sewage company doesn't analyze my urine (I hope) to figure out if I prefer spicy or sour food and get an extra buck from third parties, and somehow they're still in the business.
I don't want my search engine to do that either. They should provide me damn accurate results and get out of my way.
Unfortunately, whereas I have a choice of several good ISPs here (UK), I have a choice of precisely one good search engine - Google - and it analyses my traffic to high heaven. (And I've tried DuckDuckGo, on several occasions for several weeks at a time, and I'm afraid it still sucks.)
What aspect of DDG "sucks" for you?
I haven't seen a single reputable ISP do this anywhere. It would be illegal.
Is the US really such a third world nation that not even basic regulation like this exist?
Many ISPs - both mobile and wired - inject code to send messages about your account.
ISPs like Comcast have tried injecting ads:
After leaving Mozilla, Andreas Gal described ISPs reselling search engine queries and results to Google competitors:
Turns out our dev environment didn't force CDN assets to load over HTTPS, so my ISP injected some JS into a library we were loading.
I tweeted at them, and they not only verified that it came from them, but also that they consider hijacking and modifying my traffic to be a service.
They'll also appear if you're nearing your data allowance, or if their email service (that you probably don't use) is undergoing maintenance. When I last asked, there was no way to opt out of or disable these.
US mobile carriers inject identification headers into unsecured HTTP requests for advertiser tracking; and in other cases they allow servers to send the user's IP back to the carrier to get the user's full details (including addresses).
The regulatory agencies responsible for the cable and mobile companies in the US are right now hellbent on removing net neutrality and are fighting against the consumer. Fat chance of those "basic regulation"s existing or surviving.
> Is the US really such a third world nation that not even basic regulation like this exist?
First, the US is by definition the First World (USSR et al. being second world, third being "everyone not allied with first two"). Second, we have somewhat different ideas about freedom that, often, lead to an extreme lack of regulation; the hope is that this gives more freedom and we'll work around abusive actors (yes, I know monopolies are an obvious weak point in the system).
That is not true. The main threat it protects against is MitM (man in the middle attacks) that allow someone to redirect all traffic to a website through their machine and thus see all the data including your password.
HTTPS when combined with root certificate trust is very effective at preventing these kind of attacks. Without it, using any shared internet at all (such as a company, school, or coffee shop) to log into any website or enter your credit card would be trivially easy to hack.
Seriously, I can boot up Wireshark, go to my coffee shop and easily see every non-HTTPS communication going over the network. IM messages, emails, and in cases like this post suggests... passwords too.
Edit: As a side note... I do this all the time to reverse engineer the wireless protocols of IoT devices, since most of them don't use HTTPS yet. I use it for personal purposes, but it could be used for harm as well. For instance, if the security cameras are IP cameras over HTTP, I could probably intercept the password and use it to remotely turn off the cameras.
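To make the sniffing point concrete, here's roughly what a plain-HTTP login puts on the wire (host, path, and credentials are made up). Every byte below is visible to anyone on the same network segment; with HTTPS, an observer would only see an encrypted TLS record:

```python
# Construct the raw bytes of a plain-HTTP form login, exactly as a
# passive sniffer like Wireshark would capture them off the wire.
body = b"username=alice&password=hunter2"
request = (
    b"POST /login HTTP/1.1\r\n"
    b"Host: shop.example\r\n"
    b"Content-Type: application/x-www-form-urlencoded\r\n"
    b"Content-Length: " + str(len(body)).encode() + b"\r\n"
    b"\r\n" + body
)

# The password sits in the clear, ready for any passive observer.
assert b"password=hunter2" in request
```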
The one thing that reduces the likelihood of that happening is to minimize the amount of credentials you could get your hands on using that attack.
But if your target is only a single network, packet sniffing is pretty effective and is stopped by HTTPS.
And if your target is a single person or a small group of enumerable machines, ARP poisoning still works on many (most?) networks.
Personally I am more scared of the damage that can be caused by being a direct target than I am having my info in one of those massive dark-web data dumps.
Edit: Also, the OP's post is actually an example of a MitM (where the ISP is the one in the middle). I just expanded it to the superset.
Until recently, I had no idea that manufacturers actually PAY Google to have the services on Android... Talk about idiocy.
I mean, you _could_ go the amazon route, but how's that working out for them? Their mobile platform is not exactly flourishing. An Android device without Google Apps simply isn't going to sell in the millions.
As someone living in Europe, this literally happens nowhere. Because it's illegal.
SSL in browser has nothing to do with our ISPs. Stop being US-centric.
to the field? This is supposed to stop Chrome autofilling the value
When you type something, it goes to the application (almost always its stdin file descriptor, but it can open /dev/tty and read that too).
When local echo is enabled, the terminal also prints what you type.
Applications that prompt for passwords simply (temporarily) disable local echo.
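That echo toggle is exactly what password prompts (and Python's getpass) do under the hood. A minimal POSIX-only sketch, run against a pseudo-terminal so it works without a real console:

```python
import os
import pty
import termios

def set_echo(fd: int, enabled: bool) -> None:
    """Enable or disable local echo on a terminal, the way
    password prompts temporarily do."""
    attrs = termios.tcgetattr(fd)
    if enabled:
        attrs[3] |= termios.ECHO      # local flags live at index 3
    else:
        attrs[3] &= ~termios.ECHO
    termios.tcsetattr(fd, termios.TCSADRAIN, attrs)

# Demonstrate on a pseudo-terminal pair.
master, slave = pty.openpty()
set_echo(slave, False)
assert not (termios.tcgetattr(slave)[3] & termios.ECHO)
set_echo(slave, True)                 # restore echo after the prompt
assert termios.tcgetattr(slave)[3] & termios.ECHO
os.close(master)
os.close(slave)
```

A real prompt would wrap the read in try/finally so echo is restored even if the user hits Ctrl-C mid-entry.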
The number of people saying that this is a clever workaround and agreeing with the people putting in the bug reports is very disheartening to see on HN. If a highly technical crowd such as HN can't get why HTTPS is important, then what hope does everyone else have?
Troy Hunt is a security researcher. The examples of the bug reports and quotes were meant to terrify, and it worked on me. If I were a customer of any of these examples I would be pissed. If you're not upset and/or frightened, please, for everyone's sake, take an infosec course or read up on the subject.
Huh? I've been through this entire thread and I haven't seen anyone suggest that this is acceptable behavior; even in the heavily-downvoted comments. It's mostly just people laughing at the lengths this site went to to shoot itself in the foot.
What's preventing these types from doing so?
But Google has been bullying around with their behaviour. I don't think that's even debatable.
Netscape Navigator did this (almost) 20 years ago.
EDIT: Link. http://www.kentlaw.edu/faculty/rwarner/classes/legalaspects/...
And if you disagree with them, you're "a novice".
If you don't set a master password, then your passwords are (presumably) encrypted with your google account. So anyone using Chrome that's logged into your google account will be able to view the passwords via settings. So just don't let malicious users use your Chrome?
Edit: And there's also a guest mode for Chrome, but they can just exit out of the window and run a regular instance of Chrome to use it under your profile.
Troy Hunt is attempting to claim this is a feature and not a bug, and that their workaround is "being deceptive", when they never claimed it was secure to begin with.
The browser is literally pushing an idealistic philosophy down websites' throats and basically doing damage to businesses and brands without an attempt to help them, and any attempts to simply keep old functionality are being vilified as "anti-vaxers". This is not an honest narrative.
Yes, Oil and Gas International had an insecure site, and yes, their reaction and demand to the browser vendor was inappropriate. But the point of it is still valid: as a vendor, you don't embarrass and damage business reputations in order to force them to comply with the way you would like them to run their sites.
Troy writes in the article that browser vendors are trying to use a "lever" to "force organizations to go secure". I don't care who you are, it's wrong to force anyone to do anything they don't want to do, and on your timeline instead of theirs, and with absolutely no help given to them before this deadline.
Imagine if Microsoft changed their OS to flag every single application as "insecure" if it doesn't implement a new primitive, and they pushed this out today. All of a sudden, you receive a barrage of calls from upset users. You didn't know they were going to push that out (certainly Microsoft never sent you an e-mail), and you now have to hit the ground running trying to figure out how to add those primitives to your code, test them, and release them, none of which could possibly happen immediately, and may take weeks of development. Meanwhile, your reputation with your users is damaged, and users themselves go through emotional stress and fear. And Microsoft's response? "Too bad. You should have been secure already."
This is fucked up. And if Google does this knowing it's going to damage businesses, they could face a class-action lawsuit.
The only way they get away with it is because they have the biggest market share. If Chrome had a smaller user base, businesses would simply shut off access to Chrome browsers and tell them their browsers were faulty and to switch to IE. This is impressively tyrannical behavior for a software vendor, and Google is indeed being a bully.
Now users have pretty diverse interests, so browsers don't always get this entirely right, which is one reason it's important to have a variety of browsers so users can pick one that does represent their interests.
What's happening in this case is that the site is doing something that pretty much everyone who understands the issue agrees is harmful to users: having them type their password into an insecure page. Browsers and security professionals spent 10+ years trying to convince web sites to stop doing that. Then browsers spent a few years telling websites that they will start warning users about this behavior and giving specific timelines for when this would happen. Then they started showing those warnings they promised they would show.
To go back to your analogy, it's as if Microsoft had told developers for a long time that some specific API is deprecated due to being "insecure". Then they gave a timeline for the API being removed. Then they removed it. Can there still be applications who didn't move away from that API? Sure. Is it entirely Microsoft's fault that they are now getting lots of support calls? That's a hard case to make. Note that this sort of deprecation is something that Microsoft and Apple have in fact done.
> The only way they get away with it is because they have the biggest market share.
Firefox is showing the same warnings, no?
> businesses would simply shut off access to Chrome browsers and tell them their browsers were faulty
Sure, just like in the Microsoft case businesses tell their users to not install the OS security update, etc. You're right that if Chrome and Firefox had smaller marketshare businesses _could_ threaten to do this or actually do this. But at that point it's not entirely clear who the real "bully" is... In either case there's an exercise of market power to get your way against the (possibly reasonable) objections of others.
Disclaimer: I work for Mozilla, on Firefox.
> it's important to have a variety of browsers so users can pick one that does represent their interests
First of all, I'm now terrified of Mozilla/Firefox, because this comment reflects the idea that browsers should be developed as independent ethical entities that represent different groups, in the way special interest groups lobby on behalf of specific people, ignoring the concerns of everyone else.
Second, it's dangerous to put the onus of security on everyone but the user. I'm sure you've seen the wall of sheep: it's more than just http passwords. Users are stupid, and they get security wrong, and they need to be helped to get it right. But one thing that won't help them is absolving them of any thought whatsoever into investing in their own security.
Where this will end is a marketing campaign that sounds a lot like "Mozilla Firefox: The Secure Browser". All they need to do is download your program and just assume everything is fine. Which will of course be a lie, but one that everyone will accept, because they want it to be true.
The browser should not become a political toy. It should be simply a tool, and it should be up to those who wield that tool to decide how it is used. If I make an axe, I don't come to your farm and tell you how to swing it.
This could have been trivially handled by simply asking users how much concern they want to have over their security, or providing some mechanism for organizations to easily transition into technology changes at a pace that works for them. Instead it seems like browser makers are too fond of themselves as white knights to provide reasonable compromises.
Browsers are completely at fault for handling security so poorly in the first place. They continue to have the most asinine user experiences in the world when it comes to understanding what is actually going on when a user browses the web. They continue to support standards which can be easily subverted. They continue to build hack after hack into something that was supposed to just navigate documents and is now an entire fucking application platform. Browsers are a mess, and it's their designers that are at fault for that mess. Now it's clear that a mentality of moral superiority and special interests is the cause.
And while I'm ranting, what is wrong with browsers that they can't simply build a working secure authentication framework into the protocol and back it with a halfway usable UI? How is it a 20 year old tool used to access backend servers has a more effective authentication and authorization system than the most commonly used program in the entire world? It's not like this stuff was some mystery that the poor lowly browser devs couldn't understand. We don't need to be relying on shitty web forms to send plaintext passwords - we didn't need to be doing that in the year 2000!!!! How the hell is it that this piece of software, which is somehow more complex than my entire operating system, can't seem to perform the basic functions i've been doing with other programs for half my life? And yet have the balls to claim they're working in service to the user?
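For what it's worth, HTTP has had protocol-level challenge-response auth since the late 90s - it just never got a usable browser UI. A minimal sketch of the RFC 2617 Digest response computation (the simple variant without qop/nonce-count; realm, nonce, and credentials below are made-up illustrative values):

```python
import hashlib

def digest_response(user: str, realm: str, password: str,
                    method: str, uri: str, nonce: str) -> str:
    """RFC 2617 Digest auth response (basic variant, no qop).

    The password never travels over the wire; only this hash does,
    bound to the server-chosen nonce.
    """
    ha1 = hashlib.md5(f"{user}:{realm}:{password}".encode()).hexdigest()
    ha2 = hashlib.md5(f"{method}:{uri}".encode()).hexdigest()
    return hashlib.md5(f"{ha1}:{nonce}:{ha2}".encode()).hexdigest()

resp = digest_response("alice", "example", "s3cret", "GET", "/", "abc123")
assert len(resp) == 32                      # 128-bit MD5, hex-encoded
assert all(c in "0123456789abcdef" for c in resp)
```

MD5 and Digest auth have their own well-known weaknesses today (RFC 7616 moved to SHA-256), but the point stands: a form POSTing a plaintext password was never the only option the protocol offered.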
You know what would have been great for the users? A secure protocol which didn't degrade its own security. A URI convention that refuses to communicate with insecure sites. A button that rejects all connections not destined for the domain in the address bar, and functions that control the browser or access to its data without the user expressly allowing it. Simple things that could have actually completely ensured users' safety, without ridiculous complicated kludges that only do half of what they're supposed to do. And these should not be considered controversial - it's not like I'm suggesting they implement security policies before they add buggy features to brand new releases.
You are right, though. Browsers did take 10+ years to enforce a policy that is as unnecessary as it is sudden. I'm sure users will thank the browser vendors now for how much safer they are from black hat hackers in coffee shops. Oh, wait - they are still insecure. It's just now they know it and are unhappy about it, and other organizations can now capitalize on this.
I'm not sure where "ethical" came into that.
Some users want to have features that allow them to read websites in their preferred fonts. Other users don't care about fonts, but _really_ care about the colors and want high contrast. Still others want to have strong privacy safeguards (think Tor), while a fourth set care about privacy a bit less than that, and a fifth set don't care about privacy at all. These diverse needs might best be served by multiple different browsers that focus on different aspects of the user experience.
I see no reason why a browser that explicitly tries to make the web more usable for people who are red/green colorblind, say, should be a problem, though it seems to me that you do....
> it's dangerous to put the onus of security on everyone but the user
No one is suggesting that. However the reality is that there are maybe at most double-digit numbers of different browsers, maybe hundreds of millions of websites, if you're very generous, and billions of users. You ideally want to enforce security at chokepoints, which is why the browsers do most of the lifting here, then websites, then users.
There have been tons of user education campaigns in the history of the internet. To some extent they've even worked.
> The browser should not become a political toy. It should be simply a tool
Sure, and no one suggested it should be a "political toy". But maybe one user wants a flathead screwdriver and another wants a phillips head. And a third one wants a hammer, or hex wrench.
> This could have been trivially handled by simply asking users how much concern they want to have over their security
Been done, via surveys. The answer is "a lot". And yes, we could just say that if they care then they should be constantly vigilant. But constant vigilance is something people are really bad at (on a hardware level!), compared to computers. So any time we can design systems that don't require constant vigilance from people we probably should. I would go so far as to claim that requiring constant vigilance from people when we don't have to, and then blaming or punishing them when they cannot comply, is simply unethical.
> or providing some mechanism for organizations to easily transition into technology changes at a pace that works for them
This is why browsers have been cooperating at creating things like Lets Encrypt, precisely to provide such a mechanism. The question of timeframes is a complicated one, of course.
> Browsers are completely at fault for handling security so poorly in the first place.
No argument there. This is something browsers have been trying to do better.
> Now it's clear that a mentality of moral superiority and special interests is the cause.
I think you're reading things into what I said that were simply not there.
I agree with most of your post, but it's coming from an incorrect assumption.
The way I see it, this - like most things - comes down to: when you put the mechanisms there, people will use them and abuse them.
20 years ago a browser installer was 3 MB; today they're 30 to 60 MB, even with much better installer compression.
Why do we need this-and-that service integration within the browser, to "follow trends" of the likes of Adobe, Microsoft?..
I don't think so. Cut it all out. Someone wants to watch a video: install a codec. Their service uses different coding? Tough luck, get with the (popular, useful) standard(s), or gtfo.
What the hell do I care about your corporate policy of "creating new jobs" and "advancing development", all you're doing -anyway- is peddling your products. In my browser. On my hardware, which I paid for, meh. Introducing 1000&1 vulnerabilities, where there should be none.
Right, wrong? Know what I mean?
I'd rather use a codec maintained and updated by Mozilla every 6 weeks than some "community maintained" codec that I installed years ago and has to be manually updated.
A web browser shouldn't normally bundle image decoders, audio or video codecs, on-screen keyboards or printer and video card drivers.
Obvious exceptions apply, of course - e.g. if OS doesn't have built-in image decoder for a specific format it totally makes sense to bundle one.
But this is really getting off-topic.