Advice to avoid public Wi-Fi is mostly out of date (eff.org)
206 points by DiabloD3 on Jan 30, 2020 | 114 comments

Err, no, it is not safe unless you trust the app you are running to validate the certificate chain. Not so long ago, I found out my bank's app didn't validate the cert, and I could happily set up a proxy and intercept all its calls.

If your app does that then it's not particularly safe over an encrypted wifi either.

The solution is to fix that app, not to rely on a very weak defense that may help you in a fraction of the possible attack scenarios.

Good luck fixing your bank app. For a consumer it’s a lot easier to avoid situations where your communication is easily intercepted than it is to change the code of their banking apps.

In that case the best option might be to use the bank’s mobile website if it exists. In any case, the point is that this is a problem with the application, not the network, and a more trustworthy network doesn’t resolve the issue.

That’s an interesting point. As an app developer, I’d assumed that would be handled automatically by the OS.

What’s the best way to test for certificate validity? (In my case I’m interested in iOS, but the same concern must exist on all platforms).

In my experience, the OS _does_ handle that automatically. If the app isn't verifying it, it's because they went out of their way to disable certificate validation.

Which is alarming.

What are the odds that the corporate network the developers are on does MITM HTTPS interception, and the only way they could get their app to work was to remove certificate validation?

Been there done that. Corporate IT often doesn't want to acknowledge that devs exist in the company because it's so much easier to just lock down the admin and marketing use cases.

It's fucking scary how far they're willing to compromise security, internally and externally, to avoid extra work and maintain control.

It's amazing how well such companies can repulse developers:

- Everyone works on an 8GB Windows machine with a sticky keyboard

- Remote desktop

- "You wanna install your IDE? Yeah contact IT, gonna take few days"

- Can't install any CLI tools

- Atlassian suite

- Spend half a day in meetings

- Scrum

Then they complain about how hard it is to get good developers...

am I the only developer(ish) who likes JIRA?

Probably. I’ve never seen one that’s usable. Also, BitBucket is quite terrible compared to Github.

It has some friction in spots, but I like it a lot. A little configuration made it fit how we work. It does the job.

However, I hear other companies like the configurability too...and their crappy processes were so easily configured that it became a crappy tool for their poor developers.

Not at all. In my opinion, it sucks, but it's the best tool available for the job.

The pros and cons of Jira all depend on how you use it

I think we found a yeti!

MitM is pretty difficult if your app is validating certificates, which is why a lot of corps will install their in-house CA on company-issued devices so they can do this. If your software uses certificate pinning, they can't even do that.

Very slim, as you can still verify that the certificate chains up to a trusted root certificate, and it’s trivial (and generally part of the enrollment process) to load the company's root CA on your device.

We MITM and certificate validation works correctly.

Except that there should be validations at even the Root CA level and most corporate MITM CAs don't pass those verifications either:

- Is your root self-signed only? (Root CAs haven't been allowed to be self-signed only since roughly 2007 according to the principles of most browser root CA policies for public Roots. All public roots today are cross-signed among each other.)

- Does your root certificate have a valid revocation chain? Can you query up-to-date revocation information on it? (Modern Roots all have to have working revocation information, and Root CAs have been revoked in internet history, you cannot blindly trust your device's Root CA store over time without up to date revocation lists.)

Those are just two warnings I see most often from my dev tools on the MITM infrastructure I'm forced to deal with. I know that this is compromising my security stance as a developer, and I know that turning off/ignoring those trade-offs is a risk I directly pass on to users of anything I build. I've felt it a responsibility of professional ethics to pass on this concern to others in my company. I have debated many times whether, if the right Root CA CVE or self-signed certificate CVE comes across my dash, I will have to attempt to exercise the company's "Stop Work Authority" and refuse to continue development while being MITMed in a way that the company's security/safety infrastructure will not understand how to handle. It remains on my radar because I'm a professional, and worrying about such things is my job.

Running a Root CA is a huge responsibility, and still has a ton of risks for the "real" Root CAs. (Just look at the recent battle between browser security teams and Symantec, for instance, over generating bad certificates.) Running a corporate MITM carries all the same responsibility, with an even worse risk if you get it wrong (your entire company's device footprint has a single point of failure). It's such an incredible vulnerability/risk that whatever tiny gain it gives companies in surveillance over SNI sniffing and endpoint/device-deployed auditing tools is never worth subjecting so many developers to badly MITMed development environments, especially the developers most at risk (bank software, health software, etc.) of passing on the software equivalent of a bad MITM plague if the worst happens. I cannot imagine the blasé attitude with which Corporate America has MITMed itself can be seen as anything but an incredible folly, if not today then certainly tomorrow (hopefully not after the worst happens).

As far as I understand, this is no longer possible on modern iOS versions at least, except if the app developers explicitly disable that validation.

You can pin your certificate in your app bundle such that your app only allows certificates you specify or ones signed by CA's you specify. That clearly isn't the case here. Normal iOS operation will verify the certificate chains up to a trusted root certificate and it is indeed possible to load your own trusted root CAs on to a device for purposes of MITM. Again, some apps may pin their own certificates, but that clearly wasn't happening in this example.

I deal with this virtually every day.

It's not the same as pinning though. The device trusts that a cert was signed by _any_ CA on your phone, not necessarily the one that really issued the one you expect.

So, if my company installed a CA on my phone that they issued in-house, and MiTM my traffic, they can spoof certs and most software will accept it.

To be really safe, you should pin the certificate to ensure that your code only trusts a specific certificate or specific authority, so, for e.g. it was signed by Let's Encrypt X with fingerprint Y and not, say, Digicert Z with fingerprint A.

If you're writing the backend and frontend, you could go one step further and embed your own CA in the app, and follow secure practice for managing the private key and issuing certificates to your infrastructure.
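As a rough illustration of pinning by fingerprint, here is a minimal Python sketch; the hostname, port, and the pinned hash are hypothetical, and a real app would ship the pin with the binary:

```python
import hashlib
import hmac
import socket
import ssl

def fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def cert_matches_pin(der_bytes: bytes, pinned_hex: str) -> bool:
    """Constant-time comparison of a leaf certificate against a shipped pin."""
    return hmac.compare_digest(fingerprint(der_bytes), pinned_hex)

def fetch_leaf_cert(host: str, port: int = 443) -> bytes:
    """Complete a normal, chain-validated TLS handshake and return the leaf cert."""
    ctx = ssl.create_default_context()  # chain + hostname checks still apply
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)
```

Usage would be to compare `fetch_leaf_cert("api.example.com")` against the baked-in pin and refuse to talk on mismatch. Pinning the whole leaf cert means the pin breaks on every renewal; pinning the SPKI or the issuing CA instead is a common compromise.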

Pinning is a serious step, there's a lot of opportunity for a foot gun. You absolutely need to decide up front what the intended behaviour for your app is when the pin is invalid. Don't say "That will never happen" because it will happen. Maybe the client is happy that their app simply does not work if the pin condition isn't satisfied. A bank might feel that way for example. But if it's a surprise I guarantee they aren't going to be happy and that means you did a bad job.

Building your own PKI is always potentially the safest option, and in practice it will usually be the least safe and most unreliable. The main attraction of your own PKI should not be the safety/ security you likely won't actually achieve in practice but other conveniences. For example your PKI can issue a 20 year cert. Maybe it shouldn't, but it can and that might work better for you than certificates which expire and introduce exciting last minute changes.

Yes, I agree with all of the above.

We specialise in this sort of thing at my work, I'm not suggesting anyone does this without first understanding the risks you mention above as well as the long term commitment required.

But done right, it is the most secure approach.

Not necessarily, there are plenty of applications that use their own trusted root CAs and thus technically the app is validating it. Most popular example being Firefox.

Often libraries have options to specify trusted root CAs as well as options to disable validation per host and/or globally. I've never come across any library that would have any of these options enabled by default.

With that said, banking apps and the like should still not put their trust in the OS or anything else. Certificate pinning is good and should be utilised, especially for sensitive systems such as banks' apps.

Historically it was very common to default disable or entirely omit essential checks. CWE-297 https://cwe.mitre.org/data/definitions/297.html is about this common mistake. The happy path is invariably well tested and doesn't show this, unhappy paths often use garbage self-signed certs which fail non-host based checks and so those behave as expected too. Testing usually misses the host mismatch check.

OpenSSL for years only provided some fairly hairy code if you actually wanted to do dnsName matching, which you absolutely should do. What that means is, lots of software was written (say, 10+ years ago) in which OpenSSL is checking that your peer has a "real" certificate but it doesn't care which one. A certificate for we-are.literally-thieves.example ? Cool, that's issued by a trusted CA and so it's fine. Oh you thought you were connecting to my-real-bank.example? You didn't ask me to check the name on the certificate and I don't bother providing a sensible API to do so anyway.

Here's their actual documentation:

> Versions prior to 1.0.2 did not perform hostname validation. Version 1.0.2 and up contain support for hostname validation, but they still require the user to call a few functions to set it up.

Modern (1.1 onward) releases of OpenSSL provide a sane API which checks names you give it, so if you tell OpenSSL to connect to my-real-bank.example it realises you don't think certificates for other names are OK. But the old ones didn't do that and the ones 10+ years ago expected you to grok PKIX (the Internet's agreed way of coercing the X.509 standard intended for the X.500 series Directory into a way to certify things on the Internet) or else give up.
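Python's ssl module, which wraps OpenSSL, makes the two switches visible. A quick sketch of what a sane default context enables versus what disabling validation looks like (`_create_unverified_context` is a deliberately private API, discouraged by its own name):

```python
import ssl

# A default context turns on both checks that broken apps disable:
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # chain must validate to a trusted root
print(ctx.check_hostname)                    # cert must match the name you asked for

# The moral equivalent of verify=False:
insecure = ssl._create_unverified_context()
print(insecure.verify_mode == ssl.CERT_NONE)  # any cert -- or none -- is accepted
```

The second context is exactly the "real certificate, but for we-are.literally-thieves.example" failure mode: with `check_hostname` off, nothing ties the cert to the host you meant to reach.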

Unless the libraries you are using are fubar, you normally have to explicitly tell them to ignore certificate chain errors, i.e. requests.get(..., verify=False)

With apps there are two levels of validation that you can do, and only one is done by the OS.

The most common, and automatic, is the verification of the chain of trust. On iOS this happens automatically if you use the standard network APIs against an HTTPS URL.

You can take it a step further and avoid MITM attacks where the middle party is able to mint trusted certs by doing something called certificate pinning. This is a manual verification that the certificate used by the server you’re connecting to has certain properties that you know match your API server’s.

It's handled by your http client which may or may not be part of the OS or even something in between (speaking of Android, it's probably far more likely to be on the OS side on iOS but if the OS allows raw TCP then it can't really keep an app from running its own http(s) on top of it).

A frequent problem on Android that might lead some to throw validation under the bus is that certificate validation is much less robust than in browsers, particularly on old devices. A typical scenario: the certificate of one of your backends approaches EOL, ops dutifully obtain a new one, and it checks out nicely in all browsers. But a whole bunch of older Androids, which might make up a quarter of your user base if you are unlucky, have never heard of the root certificates involved, so the app becomes unusable. A similar situation can arise if you have clients that check revocation (good) but don't check for alternative signature chains like a modern browser would (not so good).

The correct way to solve these situations is extending the server configuration with another certificate chain that is valid on the devices in question, no doubt about that. But when the app is not a core use of the backend and the server is not run by the same organization, breaching the wall of "but it works in all browsers, clearly the error must be on your side" can be quite hard. Nontechnical leadership will be extremely tempted to do the wrong thing.

On iOS certificates will be validated by CFNetwork provided you haven't disabled ATS.

People who think app store approval is a reliable quality gate, please take note.

Was that Chase? I read a while ago that there was a flaw in the Java API that made it ignore cert warnings by default.

I feel like this needs to be an OS-level requirement. All network comms should be encrypted, and any unencrypted traffic should require an explicit user opt-in.

Passive attacks from open wi-fi allowing everyone in 100m range to read your traffic and active MITM are entirely different classes of attacks, with entirely different barriers to entry.

A problem I see quite regularly is that self-CA support is an afterthought: many don't write their software with self-CAs in mind, so when people have issues, the top answer is "turn off verification".

Between that, and the number of "important" pieces of software that don't certificate pin, it really rubs me the wrong way.

Many people have no idea that any one of the CAs installed in your browser or device can sign a certificate for any domain and most software won't care.
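You can get a feel for the size of that attack surface from Python; a small sketch (the counts vary per machine, and may be zero on a stripped-down container):

```python
import ssl

# Load whatever trust store the OS/OpenSSL build exposes and count it.
ctx = ssl.create_default_context()
ctx.load_default_certs()
stats = ctx.cert_store_stats()  # e.g. {'x509': 140, 'crl': 0, 'x509_ca': 140}
print(f"{stats['x509_ca']} trusted roots, each able to vouch for any domain")
```

Every one of those roots (and any intermediate they sign) can issue a certificate for any hostname, which is why a single compromised or coerced CA undermines everyone who hasn't pinned.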

While you’re probably not going to be instantly attacked, I still wouldn’t do online banking on a public network.

Honest question... why not?

Modern banks use HTTPS throughout. The banks I use all have HSTS and use preloading so no hijacking to a non-HTTPS site. I use a password manager so if somehow I do get hijacked and get sent to a phishing site, and even if that phishing site is using a Lets Encrypt cert to prevent the “Not Secure” banner in a modern browser, my password manager isn’t going to recognize the domain so it would not let me attempt to log in even if I wanted to.

This comments thread literally starts with someone who discovered their bank's app wasn't validating the certificate, though.

In that case it wouldn't be safe to use the app on a private network either.

True, but doing so on a private network would still be a whole lot safer than on an open AP.

Every commercial network is a public network in a sense

In a boolean sense, sure, but the world is more complex than true or false, and some are more public than others.

I suspect the average public network is more secure than the average bank network

Even if HTTPS is deployed, and even if the client actually verifies the certificate, untrusted networks are still risky.

There are many attacks against HTTPS itself (e.g. DROWN [0]), bugs (like the Windows 10 crypto bug [1] from just a couple of weeks ago), irresponsible CAs (e.g. Symantec [2]), and hacked CAs (e.g. DigiNotar [3]).

Are things better than they were before Let's Encrypt? Sure. But is the advice against public Wi-Fi out of date? I don't think so.

[0]: https://en.wikipedia.org/wiki/DROWN_attack [1]: https://techcrunch.com/2020/01/14/microsoft-critical-certifi... [2]: https://wiki.mozilla.org/CA:Symantec_Issues [3]: https://en.wikipedia.org/wiki/DigiNotar

> So when you visit HTTPS sites, anyone along the communication path... can see their domain names (e.g. wikipedia.org) and when you visit them. But these parties can’t see the pages you visit on those sites (e.g. wikipedia.org/controversial-topic), your login name, or messages you send.

I believe this is the reason Turkey blocked the entirety of Wikipedia[0], which was recently lifted[1]. They wanted to block specific pages that revealed negative information (and I believe they did at some point), but when Wikipedia went https only[2] the only avenue was to block the entire domain.

0: https://en.wikipedia.org/wiki/Block_of_Wikipedia_in_Turkey

1: https://wikimediafoundation.org/news/2020/01/15/access-to-wi...


Am Turkish, and not really. There is no evidence of Turkey caring about what individual citizens visit (except in the case of a crime investigation, etc.). Bans in Turkey work like this: Turkey sees something they don't like on the Internet, reaches the company/individuals behind it (they can be anywhere in the world), and tells them "take it down or we will block your access to Turkish citizens and you'll lose revenue/traffic". If the site owners comply, nothing happens. If the site owners refuse for any reason, they block the site so it is not accessible from Turkey. In the older days, they used to do it through DNS, but that was easy to circumvent. Now they use other methods, so changing your DNS isn't enough, but a VPN works just fine.

This was also the case back when https wasn't as common, BTW. Turkey either didn't have the technical capability to block individual pages (even back then) or they were seeking to punish the site by blocking access as a whole.

A site like wikipedia values integrity more so they don't take pages down without good reason. But companies seeing Turkish citizens as a revenue source generally comply. If you browse Twitter in Turkey, it is common to see tweets where it just says something like "this tweet is blocked in your country" - Turkey reaches twitter to mark the tweet invisible and that individual tweet goes away. IIRC it also applies to entire profiles - I'm not a frequent twitter user but I remember seeing entire profiles blocked by country.

I feel like asking the site to take something down ... is effectively caring about who sees it. Otherwise you wouldn't do that.

Sorry I was not clearer. What I meant was an operation like in China, where you have to be careful about what you look for on the Internet. I saw a video from China where the police were interrogating someone, supposedly because of his remarks on WeChat about police confiscating motorcycles. They were actively mass monitoring, found someone they didn't like, and took him in for intimidation and questioning, perhaps more. So in effect, they are monitoring what individuals are doing on the Internet to catch and punish them en masse. My point was that this type of "watching citizens and punishing those that look at things they are not supposed to see" thing is not (yet) a thing here. You can go ahead and "read" anything, they don't care (by that I mean nothing will happen to you), but they have a problem with the site being able to serve a Turkish audience.

The parent post didn't mention individual citizens.

> But these parties can’t see the pages you visit on those sites (e.g. wikipedia.org/controversial-topic)

This is... not entirely wrong, but overly simplified. My impression is that TLS fingerprinting is sometimes-to-often good enough to figure out which exact page of a static website that a user is visiting[0].

That's not to say TLS is useless, or that the fingerprinting isn't hard enough that some adversaries won't just give up, it's just that the protections are more complicated, and it's not quite as simple as just saying, "I have TLS, I'm fine."

[0]: http://rabexc.org/posts/guessing-tls-pages

Unfortunately, while HTTPS is very common, this isn't really the case with HSTS Preload, so active MitM attacks are still a threat.

Even just HSTS is not widely deployed. Here in France for instance:

* top 1 banking, top 18 FR https://www.ssllabs.com/ssltest/analyze.html?d=labanqueposta... no HSTS

* top 2 banking, top 20 FR https://www.ssllabs.com/ssltest/analyze.html?d=credit-agrico... no HSTS

* top 1 taxes, top 30 FR https://www.ssllabs.com/ssltest/analyze.html?d=www.impots.go... with CAA, HSTS (not preload), OCSP Must-Staple! (that's a surprise)

* top 3 banking, top 45 FR https://www.ssllabs.com/ssltest/analyze.html?d=caisse%2depar... no HSTS

Hopefully, by 2030 banks will have caught up to the 2018 standard for "secure".

I wrote a wifi hotspot app for Android that could MitM HTTPS requests as you suggest about 8 years ago. Even back then it didn't work very well because even without HSTS preload, most people aren't visiting new websites for the first time on your free public Wifi.

Yep. Not only is HTTPS stripping still a problem, but active attackers can do all sorts of other nasty things as well, like force the user's browser to initiate plaintext HTTP requests to non-HSTS sites and then use that to steal cookies from third party domains, or achieve persistent XSS by poisoning the user's browser cache. See https://samy.pl/poisontap/ for a great example of that attack in action.

Specifically preload or HSTS?

Because HSTS gets you most of the protection you need while being able to recover if something goes horribly wrong.

Set your HSTS timeout to greater than the gap between user visits and it does prevent active MitM.

You're only unprotected for first time visits being actively intercepted or particularly long gaps where HSTS can expire (both of which are hard targets).
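The "set your timeout longer than the visit gap" point comes down to the max-age directive in the Strict-Transport-Security header (RFC 6797). A minimal parser sketch, with a made-up but typical one-year header value:

```python
def parse_hsts(header_value: str):
    """Parse a Strict-Transport-Security value into (max_age_seconds, include_subdomains)."""
    max_age = None
    include_subdomains = False
    for directive in header_value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1].strip('"'))
        elif directive == "includesubdomains":
            include_subdomains = True
    return max_age, include_subdomains

# A one-year max-age outlasts most gaps between visits:
print(parse_hsts("max-age=31536000; includeSubDomains"))  # (31536000, True)
```

Since the browser refreshes the expiry on every visit, the pin only lapses if the user stays away for the full max-age; max-age=0, by contrast, is how a site deliberately clears its pin.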

Refresh my understanding: if I manually type "https" into the address bar, then I can't be MitM'ed through lack of HSTS, right?

Correct, assuming you don't let yourself get tricked into trying without https.

Which, if my experience pentesting is any indication, most people will.

This. The question of public WiFi often isn’t “can you keep your comms secure if you try”, but “will my average user, who just wants stuff to work while traveling, be better off on their own mobile hotspot or connecting to dodgy free WiFi?” Unquestionably, they’ll be better off avoiding public WiFi.

Applications like Outlook will warn you about cert problems but still let you bypass them. This could be better on the app side, but it’s a reality end users deal with. And when/if IT knows about it, it’s because the user complains that their laptop/Outlook is broken. The avg business user doesn't think about cert chains.

You could be.

If you are, you’ll get a message that the cert isn’t valid.

Unless there’s an attack on cert providers or someone adds a cert to your device.

The cert approach can be seen in some corporate environments.

> Unless there’s an attack on cert providers or someone adds a cert to your device.

How does HSTS help with that?

HSTS has a certificate pinning extension, but base HSTS wouldn't.

>HSTS has a certificate pinning extension

You mean HPKP? AFAIK it isn't an extension, but rather another feature. Also, it's deprecated at this point.

Isn't certificate pinning deprecated for regular web traffic?

> But these parties can’t see the pages you visit on those sites (e.g. wikipedia.org/controversial-topic), your login name, or messages you send. They can see the sizes of pages you visit and the sizes of files you download or upload.

Given the pattern of sizes of data you request, one can do seemingly-amazing things such as figure out what area of Google Maps someone is looking at based on the visible map tiles or figure out what movie someone is watching on Netflix based on the MPEG fragments or guess what article someone is reading on Wikipedia based on the pattern of requested media files. Note that these are each practical attacks that people have implemented; I have also seen a strong argument for a type ahead search attack based on the sequence of search response sets but I don't know if it has been implemented and it feels harder to pull off reliably.
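The core of those attacks is just matching observed transfer sizes against a pre-built catalog. A toy Python sketch, with an entirely hypothetical catalog of per-page sizes:

```python
# Hypothetical catalog: total encrypted bytes observed when loading each page.
PAGE_SIZES = {
    "/front-page": 48_213,
    "/controversial-topic": 91_870,
    "/small-stub": 5_102,
}

def guess_pages(observed_size: int, catalog=PAGE_SIZES, tolerance=0.02):
    """Return the pages whose known transfer size is within `tolerance` of what
    a passive observer measured -- no decryption required."""
    return [
        page for page, size in catalog.items()
        if abs(size - observed_size) <= tolerance * size
    ]

print(guess_pages(91_900))  # ['/controversial-topic']
```

Real attacks are more elaborate (sequences of resource sizes, timing, padding-aware models), but this is the basic idea: TLS hides content, not volume.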

Sure, and if you're under a repressive regime or have reason to think that someone is targeting you, you should probably still avoid public WiFi (and take a bunch of other countermeasures as well). But for the majority of people, who just care that their banking info isn't compromised (etc.), public WiFi is fine.

And if that is what the article said, I wouldn't be annoyed; but it went out of its way to claim that people wouldn't know what pages you were visiting, and that's a naive misinterpretation of what encryption is buying you.

Hm... is there some kind of transit quantization plugin for a VPN like Wireguard? Like maybe data is sent in blocks of small, medium, large, or none.

I haven't integrated support for this yet, as I am still working out how I want to best handle the return path, but I definitely am going to be getting this feature into Orchid in the near future.

It's only a little more data than a passive attacker would have with encrypted wifi.

That's an interesting point, but because browsers decided not to use HTTP/1.1 pipelining and encrypted SNI is still only an optional thing in TLS 1.3, the clean signal you get from the separate connections, each of which is tied to a hostname, makes these attacks extremely cheap and "practical" in a way that isn't quite as true when you are dealing with commingled data from all of the user's simultaneous connections, many of which might be to the same host; like, seeing "the user requested 32 map tiles with these individual sizes" is very different from "the user requested a bunch of map tiles--maybe 32ish--and they total in size to this amount... that or they are watching a YouTube video". This article might be a lot more valid once the web has upgraded to HTTP/2, particularly if it manages to adopt encrypted SNI.

What is up with these b.s. posts about open or public wifi being safe this week? A few days ago there was a twitter thread by a security person at a hotel claiming their open wifi is safe.

I won't detail all the many harms you can suffer (or the threats that will readily cause you harm), but let me state just one argument related to the EFF's silly (and dangerously harmful) statement here:

1) When you type a domain into your navigation bar, your browser attempts to connect over unencrypted HTTP (port 80).

2) If (big if!) the site supports HTTPS, it will do an HTTP 301 redirect to the HTTPS version of the site.

3) An attacker needs to intercept just one such redirect to have an opportunity for credential theft or content injection (downloads, exploits, etc.)

4) Your browser does indeed remember these redirects going forward, which is great.

5) Except if you configured your browser to forget all history. Or if you happen to remember a site you visited a while ago (perhaps on a different device) and just type it in to navigate. Or if you type something to search but your browser navigates to it instead, or many other opportunities for pwnage!

6) You don't care about that? Well, attackers are happy to set up a malicious captive portal (captive portal checks are plain HTTP in all browsers I know of) and use that directly, or to social-engineer installation of an app you "need" to connect (oh, mitmproxy has a nifty captive-portal-like page you can customize to install a CA cert on the device for TLS interception).

I won't even begin to talk about at least half a dozen additional classes of MITM attacks that can be used, even with WPA3 and client isolation! What you have to understand is that vulns that would normally be low severity are amplified on this sort of network, due to the sheer magnitude of threat exposure.

I can't complain about most people being ignorant of good infosec practices (we have to understand + educate), but man, this stings! The EFF makes one of my favorite extensions, HTTPS Everywhere; how can they post this? It takes a long time to educate people about good security practices.

I think this jumps the gun a little.

When sharing a network, there are other attack vectors into people's unhardened laptops besides browser MITM. Do you have any unprotected shared folders? Can someone brute-force your login via RDP? Can you account for all the listening ports running on your device?

A NAT provides strong protection by simply firewalling you from the outside world. It's so common that the focus (rightfully) zoomed in on MITM, as that is the only thing "left"; but on a shared network, the adversary may reside on the inside, nullifying that protection. Most users have not taken precautions against this.

Oh, and shoulder surfing.

> Do you have any unprotected shared folders?

It's surprising how many people have unprotected shared folders. And for some reason they very often are full of music.

Spend a few days on a hotel's wifi and you can slurp up thousands of other people's MP3's.

>Can someone brute force your login via RDP?

Probably a non-issue, since it isn't enabled by default.

> Can you account for all the listening ports running on your device?

That's what firewalls (which are typically default-deny for incoming) are for.

Yeah, this article is only covering a specific attack vector; claiming that public WiFi is nearly risk-free because of HTTPS is a very dangerous statement to make. The risk of public wifi was never just having your traffic spied on.

Passive interception is less of an issue because so many sites are using tls, but in the case of a mitm attack isn't https stripping still a problem unless the site is using hsts?

You would have to trust a root certificate from your mitm attacker, so it is not a problem.

I know someone who got caught out by this. Bank's front page was http, so the attackers mitm'ed that. Ebanking link was swapped out for an https page they controlled, allowing the credentials to be harvested before redirecting to the bank.

Block outgoing connections on port 443. MITM anything on port 80 which forwards to the server on port 443.

Your browser then loads www.whatever.com as http, even if the server doesn't allow http.

HSTS means if you've been to www.whatever.com before you'll be blocked. If you've never been before that doesn't help though.

In that fashion, typing www.mybank.com could redirect you to http://www.mybank.com (mitm) then to https://www.mybank.com-login.com/, where you get a green padlock.

When you first access a site, unless the site is using HSTS you are going to go to an insecure version so a mitm can proxy the request and remove tls or redirect you to another site. This is what is known as "https stripping."
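A stripped session is visible in the redirect chain the client actually followed. A minimal sketch of the check (the bank.example URLs are placeholders):

```python
def has_downgrade(redirect_chain):
    """True if any hop in a redirect chain drops from https back to http --
    the window an on-path attacker exploits when stripping TLS."""
    schemes = [url.split("://", 1)[0].lower() for url in redirect_chain]
    return any(prev == "https" and cur == "http"
               for prev, cur in zip(schemes, schemes[1:]))

# The benign first-visit upgrade is fine; a stripped chain is not:
print(has_downgrade(["http://bank.example", "https://bank.example"]))  # False
print(has_downgrade(["https://bank.example", "http://bank.example"]))  # True
```

In a real stripping attack the victim's browser never sees the https hop at all (the proxy keeps it server-side), which is exactly why the fix has to be remembering the upgrade in advance, i.e. HSTS.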

You are talking about "HSTS Preload", HSTS doesn't do anything on first access.

HSTS helps unless you are always on compromised networks or the site uses short TTLs. Even without preloading most people are probably not accessing their bank for the first time ever on a malicious network.

Not really, because most sites you visit you've already visited before. HSTS preload only helps for the first visit to a site. After that it makes no difference.

They're talking exclusively about web browsing, though. There's more to net access than the web.

Personally, I just always use a VPN (and a firewall to ensure that no traffic flows except through the VPN). Then I don't have to worry as much.

HTTPS is not limited to web browsing. It's also how the vast majority of desktop apps communicate with web servers.

I remember in 2005 when you could just start up Ethereal, run it for a minute, and get many people's email passwords, email, everything...

I think Wi-Fi security is going to be a major FUD talking point for the telecoms as they try to justify high prices, 5G, and the rest of their trip.

5G as it exists now does little to compete with WiFi because (in the millimeter wave form) it doesn't pass through walls. The overwhelming majority of data consumption happens indoors, so it can't make for a revolution in the market unless you get a huge number of antennas and/or cells installed indoors.

That's immensely problematic because building managers aren't going to want to have Verizon, AT&T, T-Mobile and maybe someday Dish Network stomp through their buildings, drill holes, do damage, etc.

There has been talk of wholesale access networks, which would be a great idea (e.g. a neutral vendor installs indoor infrastructure that gets rented by the carriers...) but the carriers are dead set against it.

But 5G is not meant for in-house consumption. It is meant for crowded spaces like train stations and supermarkets, to offload the other bands; micro cells get installed indoors in those venues. (By "supermarket" I mean malls and shopping centers of that kind.)

It is also meant for all kinds of smart sensors in an area: not home sensors, but utilities. We have that with LoRa, but LoRa only handles really small amounts of data; 5G would serve sensors that need more bandwidth, like smart traffic lights. Then you wouldn't have to connect your old traffic lights to some cable network, and they would still get good connectivity.

Right now 4G is not available everywhere; in less dense areas you only get 3G. Of course providers will install 5G base stations where it is economically viable, so places that only have 3G now are not getting 5G anytime soon. It is also understandable that 900 MHz carries farther than 1800 MHz.

5G below 6GHz really requires dynamic spectrum sharing with 4G. Once that is standardized, carriers can gradually switch out 4G for 5G with modest but real benefits.

I was part of a group that installed some radio gear on the roof of the local mall and I can say that the building manager of the mall was a tough customer. It really helped that he liked our (union) electrician and was certain we wouldn't contribute any leaks to the roof.

Carriers used to use the IBEW and CWA and had some standards for the quality of work done. Today carriers tend to use non-union contractors -- some of those people are excellent to OK but some are real idiots that any property owner would want to keep far away.

As far as serious IoT goes, I think coverage problems will still dog it. With fiber you can get 100% coverage; it costs money, but there is no site that can't be served.

While cell phone carriers will tell you they cover 98% of POPs, when you investigate it might be more like 89%. With wireless systems you make a rather large capital investment that rapidly erodes in value to get to that 89% coverage, and then the cost explodes from there.

Don't most public Wi-Fi networks (airports especially) have their own CA to MITM SSL connections, just like many companies do to inspect HTTPS traffic?

This only works if the MITM CA is preinstalled on the client device.

No, because that will simply fail.

I did see it once on a train in the UK, but that's the only time I've seen https MITM.

I've never seen this. They usually just block all HTTPS connections and rely on automatic captive portal detection in modern OSes. Occasionally I've had wifi that had a captive HTTP portal but would allow HTTPS through anyway.

> airports especially

Indeed, let's not forget what the Snowden leaks said about airports:


I haven’t seen that in about 15 years: the experience is horrible and will get tons of complaints.

As other people have mentioned, HTTPS without HSTS still makes MitM a problem.

And there are still other attacks possible on public wi-fi networks which don't involve MitM-ing HTTP(s) traffic. MitM DNS traffic and you can do nasty things: https://github.com/infobyte/evilgrade

"So when you visit HTTPS sites, anyone along the communication path - from your ISP to the Internet backbone provider to the site's hosting provider - can see their domain names (e.g. wikipedia.org) and when you visit them."

Except wikipedia.org does not require SNI.

Most HTTPS sites do not require SNI.

Not every client sends SNI by default. OpenSSL's s_client does not. There are others.

    printf "GET /wiki/MediaWiki HTTP/1.1\r\nHost: en.wikipedia.org\r\nConnection: close\r\n\r\n"|openssl s_client -showcerts -connect mediawiki.org:443 -ign_eof
Some sites "require" SNI, but then do not check it against the Host header. A client can send any SNI.

    The server certificate says *.wikipedia.org.  
But there are numerous websites sharing the IP addresses for wikipedia.org, not all of them serving Wikipedia content. One example is mediawiki.org.
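The SNI leak is easy to demonstrate without touching the network: Python's ssl module can write a ClientHello into a memory BIO, and the hostname shows up in it as plaintext. A sketch (standard library only):

```python
import ssl

# Sketch: capture a raw TLS ClientHello in a memory BIO and show the SNI
# hostname sitting in it unencrypted. No network access needed.
ctx = ssl.create_default_context()
incoming = ssl.MemoryBIO()   # bytes "from the server" (stays empty here)
outgoing = ssl.MemoryBIO()   # bytes the client wants to send
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="en.wikipedia.org")

try:
    tls.do_handshake()
except ssl.SSLWantReadError:
    pass  # expected: the handshake stalls waiting for a server reply

client_hello = outgoing.read()
# Even under TLS 1.3 the ClientHello, including SNI, is sent in cleartext:
print(b"en.wikipedia.org" in client_hello)  # True
```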

What if a website padded all its pages to be the same size?

This seems like weird advice to give out.

Sure, it's not as dangerous as it was... Or is it? All the tools are mature now. Documentation is plentiful. A seven-year-old with a Raspberry Pi can set up a hotspot and spoof your DNS. This thread contains a litany of security papercuts across the whole stack: DNS, shitty apps, crappy server configs, and sites that just don't care.

So yeah, it's not 1999. You're probably not getting your FTP password sniffed off a public network these days, but there are still plenty of reasons for me to use 4G or WireGuard instead of untrusted networks.

Public Wi-Fi will remain firmly on my "list of things to worry about" until I can audit all traffic from my devices.

Even if a WPA2 Wi-Fi access point has a password, isn't the encryption key shared among all connections? I.e., if an attacker has the Wi-Fi password, doesn't that nullify the Wi-Fi encryption? I recall that fixing this was one of WPA3's selling points.

Not exactly, you need more information than just the WiFi password in order to decrypt the traffic: https://superuser.com/questions/156869/can-other-people-on-a...
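Roughly what that extra information is: the per-session pairwise key depends not just on the password-derived PMK but also on both MAC addresses and the nonces exchanged in the 4-way handshake. A sketch of the 802.11i derivation (all input values below are made up for illustration):

```python
import hashlib
import hmac

# Sketch of WPA2 pairwise key derivation (IEEE 802.11i PRF). An eavesdropper
# who knows only the passphrase still has to capture the handshake nonces
# and MAC addresses to reconstruct the session key.

def prf(key: bytes, label: bytes, data: bytes, length: int) -> bytes:
    out = b""
    for i in range((length + 19) // 20):  # HMAC-SHA1 yields 20 bytes per round
        out += hmac.new(key, label + b"\x00" + data + bytes([i]),
                        hashlib.sha1).digest()
    return out[:length]

def derive_ptk(pmk: bytes, aa: bytes, spa: bytes,
               anonce: bytes, snonce: bytes) -> bytes:
    # Min/Max ordering of addresses and nonces, per the standard
    data = (min(aa, spa) + max(aa, spa) +
            min(anonce, snonce) + max(anonce, snonce))
    return prf(pmk, b"Pairwise key expansion", data, 48)  # 48 bytes for CCMP

# PMK from the passphrase and SSID (illustrative values)
pmk = hashlib.pbkdf2_hmac("sha1", b"wifi-password", b"MySSID", 4096, 32)
ptk = derive_ptk(pmk, b"\xaa" * 6, b"\xbb" * 6, b"\x01" * 32, b"\x02" * 32)
print(len(ptk))  # 48-byte PTK, sliced into KCK | KEK | temporal key
```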

VPN companies have been using the "omg public wifi scary" meme to market their products. I agree with EFF here.

Isn't domain information still in the clear? (What specific domain you're connecting to.)

The exact hostname is delivered in SNI (Server Name Indication) during the connection, yes. Figuring out a safe, reliable way to encrypt this data has been an ongoing work item for the TLS Working Group since TLS 1.3 wrapped up. If you have a recent Firefox (possibly only in Nightly builds) you can see one possible approach working with Cloudflare sites that opted in. You will also need encrypted DNS (DPRIVE, e.g. DNS over HTTPS), or it's largely pointless.

I still see http-only links on Hacker News regularly.


  "You would be safe in active war-zone (eg. syria) because you are civillian and do not carry any weapons with you"
No, you are not safe at all. Let's assume the public Wi-Fi requires acceptance of terms and conditions:

- Redirects all pages (from your MAC/IP address) to their server

- There is a checkbox on the page for ToS/ToC approval and a 'Continue' button

- Behind the scenes, clickjacking/framebusting happens [0]

- You get pwned or monetized

- You are also fingerprinted by the company (e.g. amiunique.org)

Given these conditions, they can inject ads/cookies to track you even after you leave (e.g. at home).

[0]: https://blog.innerht.ml/google-yolo/

The EFF is awesome with Let's Encrypt! It was really a dreadful task to buy and renew certificates, especially as our infrastructure back then wasn't that automated.

I think this article is a response to all those ads from VPN companies. They do try to scare people about public WiFi's.

One thing I would love to see in the future is the addition of LetsEncrypt support for major web servers like Nginx and Apache. I think this could go a long way. In the case of Apache it would be one of those "mod" type of packages. Someone feel free to let me know if this is already the case though, I would love to make note of it.


Looks like Apache has one called 'md':


Your move Nginx? :)

> One thing I would love to see in the future is the addition of LetsEncrypt support for major web servers like Nginx and Apache. I think this could go a long way.

This is not as useful as you think. In nginx you only need a couple of extra lines of configuration to let an external program issue and renew certificates independently from nginx, without reloads, etc. Definitely not worth developing a C nginx module that starts a helper process that does that just so that a few people who run nginx on a single server could get their certificates issued with only one line of configuration.

You still need to reload nginx for it to start using the new certificates. But you're right about issuing/renewing certificates. I have a small snippet like this in all my server blocks:

  location ^~ /.well-known/acme-challenge/ {
    allow all;
    default_type "text/plain";
    root /var/www/letsencrypt;
  }
And to issue a cert (and automatically renew in the future) all I need is:

  acme.sh --issue -w /var/www/letsencrypt/ -d example.com --reloadcmd "service nginx reload"
Although recently I've been using the Cloudflare DNS option also offered by acme.sh instead of webroot mode. It doesn't make any difference in my issue workflow because the domains are already on CF DNS anyways, but it's required for wildcard certs.
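For context, what webroot mode is doing under the hood is the ACME HTTP-01 challenge from RFC 8555: write a "key authorization" file which the CA then fetches over plain HTTP at /.well-known/acme-challenge/<token>. A rough sketch (the token and JWK below are made-up illustrative values):

```python
import base64
import hashlib
import json
import os
import tempfile

def b64url(data: bytes) -> str:
    # Base64url without padding, as ACME requires
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def key_authorization(token: str, account_jwk: dict) -> str:
    # JWK thumbprint (RFC 7638): SHA-256 over the canonical JSON of the key
    canonical = json.dumps(account_jwk, separators=(",", ":"), sort_keys=True)
    return token + "." + b64url(hashlib.sha256(canonical.encode()).digest())

webroot = tempfile.mkdtemp()  # stands in for /var/www/letsencrypt
token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
jwk = {"e": "AQAB", "kty": "RSA", "n": "fake-modulus-for-illustration"}

with open(os.path.join(webroot, token), "w") as f:
    f.write(key_authorization(token, jwk))
# The nginx `location ^~ /.well-known/acme-challenge/` block serves this file.
```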

I definitely agree in not seeing the added value of an nginx module over my current solution.

Since version 1.16, certificates can be dynamic, so there is no need for a reload.

Oh, I didn't know this. I know the configs for Nginx are rather powerful, but didn't realize they were this good. Maybe alternatively somebody could make a web UI to make managing this sort of thing for Nginx simple. Most neckbeards will rage about that, but they don't have to use it.

Really what we need is what Caddy ended up being. Best practices rolled in as defaults.

That’s why I use caddy just about everywhere that isn’t a load balancer.

What you're asking for already exists. The certbot package already takes care of that [1]. No need to develop anything extra for nginx. [1] https://certbot.eff.org/lets-encrypt/ubuntuxenial-nginx

The Certbot team would like to see an official nginx integration at some point because it would be easier and more reliable. Certbot's integration relies on parsing nginx configuration files but the nginx configuration file grammar isn't formally specified and there are surely divergences between nginx's interpretation and Certbot's interpretation. (The last one I worked on, which I don't think we resolved, is that nginx allows you to use arbitrary character encodings in configuration files, e.g. many Russian users may have comments in KOI8-R rather than UTF-8. I believe this is because nginx doesn't make a consistent attempt to explicitly interpret multibyte characters in all contexts. Certbot, as a Python application, generally does nowadays.)

The most sustainable and reliable long-term approach would be to have Certbot's integrations gradually superseded by supported official Let's Encrypt integrations in applications that terminate TLS.

P.S. Thanks for your enthusiasm for Certbot!

A more neutral stance might be for software to explicitly offer ACME integration rather than Let's Encrypt specifically, after all part of the rationale for Let's Encrypt is to be a huge practical demonstration that ACME can be a success for the public Internet.

e.g. a config setting get-certs-from: ACME-ENDPOINT-URL rather than a binary "Use Let's Encrypt" feature.

Thanks for your work, which is much more important than our enthusiasm.

If you're running nginx on Nix, you can do it with just a couple of config settings.
