Chrome may start restricting requests to private networks (utoronto.ca)



The backstory on this, if you don't follow web security at all, is that building public web pages to trick browsers into talking to private networks (or your loopback interface) is a surprisingly pernicious attack vector. The modern countermeasure for this is to require devices parked on private networks to opt in to the ability to mix public Internet requests with private network accesses, using CORS, which is what this is about.
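Roughly, the opt-in is an extra CORS preflight: before the actual request goes out, the browser asks the private-network device for permission and only proceeds if the device says yes. A minimal sketch of a device answering that preflight, assuming the header names from the Private Network Access proposal (Access-Control-Request-Private-Network / Access-Control-Allow-Private-Network) and a made-up public origin:

  // Sketch only: a device on a private network opting in to requests from one public origin.
  // The origin below is hypothetical; the PNA header names are the proposed ones.
  import * as http from "http";

  http.createServer((req, res) => {
    if (req.method === "OPTIONS" &&
        req.headers["access-control-request-private-network"] === "true") {
      res.writeHead(204, {
        "Access-Control-Allow-Origin": "https://app.example.com", // hypothetical public origin
        "Access-Control-Allow-Private-Network": "true",           // the explicit opt-in
      });
      res.end();
      return;
    }
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("hello from the private network\n");
  }).listen(8080);

A device that never answers the preflight simply fails closed, which is rather the point.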

You should probably be mostly thrilled about this development.


Finally, CSRF attack scenarios on outdated router firmware can be prevented.

CORS was basically ineffective in a lot of ways because it only works - by design - with newer servers that send those headers and with newer browsers that actually respect the headers rather than simply ignoring them. It was also ineffective for older servers from the pre-CORS era. The never break the web mentality didn't work out for this scenario.

Looking forward to this! Finally the LOIC and routersploit web implementations are ineffective. Now drop support for ftp in the web browser and we are good to go.

(̶F̶i̶r̶e̶f̶o̶x̶ ̶c̶a̶n̶ ̶s̶t̶i̶l̶l̶ ̶b̶e̶ ̶a̶b̶u̶s̶e̶d̶ ̶t̶o̶ ̶D̶D̶o̶S̶ ̶v̶i̶a̶ ̶f̶t̶p̶ ̶u̶r̶l̶s̶)̶

edit: correction: I was wrong: Firefox finally phased out ftp support this year in July.


> Firefox can still be abused to DDoS via ftp urls

How? They dropped FTP support a few versions ago (https://blog.mozilla.org/addons/2021/04/15/built-in-ftp-impl...).


> Firefox can still be abused to DDoS via ftp urls

Firefox deprecated FTP in 2015, required it to be manually re-enabled since April last year (FF88), and completely removed it in July (FF90) this year.


Thanks for the hint!

I was assuming that they would drop support by the end of this year; I didn't follow it recently, so I missed the update in July about it.


The web moves at a brutal pace. It's easy to miss one announcement among all the many, many others.


> The web moves at a brutal pace.

This is so true. Especially since the pandemic, the sheer number of CVEs is hard to keep up with and evaluate for whether they're relevant to your own tech stack or not.


The sad part is, I now have clients refusing to upgrade Firefox.

Great. And while I blame Mozilla for much, this isn't their fault.

I almost feel as if end users should need a network license, and if they get too many tickets, no license for them!

And yes, it is not realistic.


> The sad part is, I now have clients refusing to upgrade Firefox.

If these clients are on Windows... tell them to use Windows Explorer... it has FULL ftp capabilities.

You don't need a browser for ftp, or even an FTP specific program.


See answer to the other person in this subthread.


Windows Explorer comes installed on every windows computer.

You cannot even uninstall or disable it, as doing so disables the entire desktop.

Note that I am not talking about Internet Explorer (which is related, but not the same thing)


They can use a third party ftp client and it will register as a protocol handler. You can just view it as "ftp support with a different program"

It's pretty seamless. There are lots of FTP-supporting clients. Give it a go.


Naturally I suggested such things.

You can insert a dozen strange answers here.

For example, one answer?

1)

A client of mine has an FTP site, and their customers access it. Those customers have an IT policy which does not allow them to install other software, for security reasons.

Thus, keeping an old version of Firefox around is what their customers do.

(Yes, this is insane and bizarre beyond belief. The security policy is working against security, and the fact that the security policy doesn't care about an old browser is insane. Yet there it is.)

2)

I have a client with employees around the world. They are usually very secure. However, these employees seem to be the complete opposite of computer literate. Every step they take, every task assigned, is accompanied with PDF files and wiki walkthroughs of "here is menu item X, click this, then menu item Y", along with screenshots, and enlargements of menu items.

All their training is rote. They don't know how to use software, only how to click this, then that, as per pictures and docs, then enter the report in the form that pops up.

If anything deviates -- tech support.

I honestly don't know how it is possible to find people capable of doing a job with diligence, competence, and intelligence, but require this level of hand holding, yet I see it myself, through this client, constantly.

Like I said ... strange and bizarre.

While I am sure this client will eventually manage to upgrade its staff, they have been researching clients, testing them, re-working all documentation, and even rolling out 'test upgrades' for employees!

And of course this takes time, naturally they are short staffed, and it requires management buy in at every step.

And getting people to modify about:config? That's way, waaay too complex. So they're stuck on an old browser, which they aren't supposed to use for anything but FTP, yet these employees are the sort that call a browser "google", and don't know the difference between firefox and chrome.

So you can be sure they're using an old version.

--

Again, I don't blame Mozilla for this.

This is the sort of stuff which makes me think 'maybe people need a license, like a driver's license, to be on the internet, they're too dangerous otherwise'.

But of course, as I said initially... not easy or realistic to roll out.

Now that I think of it, though, maybe it should be "businesses need a license to be on the internet". The important part here being, if you have constant breaches, and your infra gets used to launch endless attacks, you get fined until you go out of business.


Correct me if I'm wrong, but the browser still sends the request. The browser checks the response's CORS headers to see if the response can be used, but there's potentially an attack vector even when the response is ignored.
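Right now, with plain CORS, a page can fire the request even though it can never read the response. A tiny illustration (the private address and path are hypothetical; imagine it run from a plain-http public page):

  // The request below still reaches the device; CORS only makes the response opaque.
  fetch("http://192.168.1.1/cm?POWER%20ON", { mode: "no-cors" })
    .then((res) => console.log(res.type)) // logs "opaque" - the body is unreadable
    .catch((err) => console.error(err));

As I understand the proposal, that is exactly the gap the new preflight is meant to close: the request itself isn't sent unless the private-network device answers the preflight.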


> The never break the web mentality

Would you pass on that memo to the Chrome developers, please?


I'd be a lot more likely to be thrilled by it if the browsers had a persistent per-page or per-site setting to disable these kinds of things.

It gets annoying when a page I write, loaded onto my tablet on my LAN's WiFi, cannot talk to devices I own on my LAN just because I loaded the page from my server that is on the public Internet.

PS:

> The modern countermeasure for this is to require devices parked on private networks to opt in to the ability to mix public Internet requests with private network accesses, using CORS, which is what this is about

I predict that most IoT devices won't have a way to configure this.

If the manufacturer intended it to be controllable from some web app from their site, they will opt-in to control from everyone. If the manufacturer only wants it controlled from their mobile app, they will explicitly opt-out of web control if that is possible.


Just to be clear: the people working on PNA thought about how likely it was for IoT devices to support this, and the default behavior for most devices is expected to be fail-closed.


This is why one of my security mantras has become: if it is not secure enough to connect to the open Internet, it is not secure at all.

That doesn't necessarily mean you connect it to the open Internet, but it means you don't leave everything inside wide open because "oh there's a firewall." It also means buggy vulnerable IoT (Internet of Targets) stuff has to be dealt with.

Firewalls are almost security theater. They're just a basic precautionary thing. Same goes for virtual and zero trust networks. Systems must be secure, period.


I think the concern is even for clicking a link, which to my knowledge typically hasn't fallen under implicit CORS concerns. The browser will assume that unless a private origin server (i.e. one within a private address space) explicitly allows requests from a public source, the navigation should be prevented.

It's always a challenge in browser design, but basically this is just another case of killing valid use cases because some servers don't follow the spec (i.e. GETs with side effects).


> killing valid use cases because some servers don't follow the spec

There are probably 25 IoT devices in my home, and more than half of them have a magic GET request with side effects. For example, just by clicking this link, my lights turn on http://lighting.londons_explorer/cm?POWER%20ON

A malicious web page could redirect me to that URL and force my lights on with no input from me. I bet some of the devices allow firmware updates with the same method.


  > just by clicking this link, my lights turn on
Wasn't working for me, so I added an entry to my hosts file directing it to 127.0.0.1.

Now I don't know what's happening with _your_ lights, but when I click that link my own lights come on.


No idea why this obvious joke was downvoted tbh :P


A dress-up localhome joke? Hey, at least it was one I hadn’t heard.

(Cracked a localhome joke in my first IT job interview. Manager laughed, Engineering Manager rolled his eyes.)


And because you use crap hardware, everyone else has to change his totally legitimate network setup?


I'd wager the ratio of people who have at least one item of IoT junk to the number of people who have a legit setup that requires allowing cross site access to a local resource from a remote one is probably ten million to one. Who are these "everyone else" who are controlling their LAN devices from a public website who are going to suffer an evening setting up CORS?

edit: what I can see breaking is stuff like Synology QuickConnect https://global.download.synology.com/download/Document/Softw...


Also services using split-horizon DNS: i.e. it is a publicly available service, but only when used from outside. When used from inside, it is not routed through the public interface, but resolved directly to its internal IP.


I'm pretty sure I'm lacking the big-network administration experience needed to spot the (obvious?) problem, but for smaller setups I've never understood why you would do this anyway and not treat it as "external first" by putting it in a sort of DMZ and treating your internal users as "quasi-external" ones by routing them into the DMZ via another means...

Of course this has a downside if you have actual 'internal only' stuff, but that could be separated from the split stuff... Just too much work with years (decades) old setups?


At bigger, established corporations, I've seen internal web apps that need to be externally accessible set up this way, but when accessed via the public IP, a second auth factor is required. IIRC, MS Exchange's web interface was the smoothest, where the only difference to the end user was an additional field for the MFA code.

If an organization is using the "BeyondCorp" approach, it doesn't seem relevant, but that's tough to bolt onto large, complex existing environments IMO.

Edit: just to clarify, the advantage is similar to what "BeyondCorp" gets you - end users just need to remember the one URL, regardless of where they're connecting from.


good point, I hadn't thought of that because I personally never had that. yeah


In a DMZ, it still has an IP from a private range, ergo it's still treated as internal -- it doesn't matter whether it is in the same subnet as your users, only whether it falls into the range that the browser considers private.

So it doesn't solve anything here.


Maybe our understanding of DMZ is different; for me it's not necessarily with an internal IP - so to clarify, what I meant was hosting it on-premise, but only accessible via its public IP (as the service is public anyway), which shouldn't be slower if your physical route isn't any longer.


I use split horizon networking to, eg, provide public DNS entries to a public canary server. Then on the private network a different DNS server provides correct (LAN-only) addresses. This way I can see if/when private hostnames are leaked to some entity.

I have about two dozen devices on my private LAN so I wouldn't consider myself to be "big" or "enterprise".

The setup is fairly unusual though because most users (and unfortunately many developers) lack the technical know-how for it.


> edit: what I can see breaking is stuff like Synology QuickConnect https://global.download.synology.com/download/Document/Softw...

Goooooood. I hate this thing with a passion since I had to set it up for a computer illiterate friend.


I don't think it's because of this one person, but rather the masses that have crap hardware and software installed everywhere. It's a global problem; not everyone is capable of securing their own network.


Although I agree with this browser change, I highly doubt it will fix all crap hardware. There's a limit to how much you can apply band-aid solutions to something that seems to be trying its hardest to be insecure, and non-idempotent GET requests are a bit beyond a minor oversight.


Note that GET requests should be side-effect free. Not just idempotent. That means a GET request that turns your lights on, or opens your garage door (which is idempotent) is still wrong.


> Note that GET requests should be side-effect free.

Not quite. If a GET request is side-effect free then it won't be logged (since that is a side effect).

GET requests aren't supposed to modify state. Logging is a side-effect but it isn't usually considered to be stateful. Changing the state (on/off) of a lightbulb is definitely against the standard requirements for a GET request.


Logging is stateful, though the state usually isn't observable via HTTP requests. Of course the same is true of the light bulb, unless another API is provided to read its status, and a similar API could be provided for the log files. Or a script could toggle a light bulb in response to the logging.

Perhaps a better way to express this is that user agents are permitted to turn one GET request into N GET requests (N >= 1), and can also issue GET requests without user interaction (e.g. for preloading). When this happens the system should still meet its (customer / end-user) requirements. The requirements related to logging are that every request is logged, so it makes sense to record each GET request separately. The requirements for the light bulb are that one user interaction (not one GET request) equals one state change, so updating the state in response to each GET request doesn't meet the requirements. Even if the API were explicitly "turn on" or "turn off" rather than "toggle" you still wouldn't want the state to be affected by preloading, and you could get odd results if opposing requests were repeated (interleaved).


Isn't that what idempotent means? You can't exactly open the garage if it's already open.

https://developer.mozilla.org/en-US/docs/Glossary/Idempotent


From your example:

> DELETE /idX/delete HTTP/1.1 is idempotent, even if the returned status code may change between requests:

So requesting to open the garage door multiple times which results in an open garage door in the end is an idempotent request, even though after the second request the response is "I am already open!"

Now, a request to toggle the state of the garage door would not be idempotent. The state of the system is different if you call it an odd or an even number of times.


In general, an idempotent method is a method such that calling it one time is the same as calling it n times in a row for any n > 1. However, idempotency doesn't require that calling the method one time is the same as calling it zero times, although such a method isn't excluded from the definition either. So a stateless method is necessarily idempotent, but an idempotent method isn't necessarily stateless.
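A tiny sketch to make the distinction concrete (hypothetical light-bulb handlers, not anything from the devices discussed above):

  let lightOn = false;

  const turnOn = () => { lightOn = true; };     // idempotent, but not side-effect free
  const toggle = () => { lightOn = !lightOn; }; // stateful and not even idempotent
  const status = () => lightOn;                 // side-effect free ("safe"), fine behind GET

  turnOn(); turnOn(); // end state is the same as calling it once
  toggle(); toggle(); // end state depends on how many times it ran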


Nothing will fix all crap hardware, so we have to live with it and solutions like this prevent crap hardware from being accessed and abused even if it remains crap forever (since it will remain crap).


Just like not everyone is capable of handling explosives or building road-safe cars or most other stuff by themselves. We have rules for this, laws, required checks and so on. Start stopping the import of crap, like one does with illegal firecrackers or, in the past, with electronic stuff that interferes too much.


Ok, so which safety measures do you mandate?

Oh, I know! How about a browser that does not allow direct navigation from external to internal addresses? Ah, wait.

It still does not quite work for services that have a public IP address. So where do you go from there, a new protocol that has capability handling and where external access is disabled by default?

Forcing companies to issue recalls for buggy hardware and firmware could probably do the trick. Note the expense, and the fact that found issues are insufficiently disseminated, combined with the lag to fix them.


Are you suggesting regulating IoT devices for better security? Who the heck could you trust to do that? The FCC? The Chinese government? Consumer rights in the USA are barely a thing, and there is no competent bureaucracy for this sort of auditing. Creating some standards body and coming up with evolving criteria and hoping manufacturers will buy in seems like a multi-year, if not multi-decade, process...

In the meantime Google can single-handedly monkey-patch this situation within a few months and force manufacturers to catch up with the next product cycle. While a less ideal route, this seems far likelier to produce actual results within our lifetimes.


I find it really interesting how the US government can constantly solve very tricky real-world problems that impact our lives, yet people have close to zero trust in it. Which isn't to say you're wrong here.

I think it's because when stuff works it looks simple. There isn't an obvious difference between a mound of dirt piled up in days and one that's carefully compacted as it's constructed. At least until you want to build something on top that actually lasts for 50 years. Building stuff to last takes time and nobody is around to see you succeed.

Back to your point, actually fixing the underlying issues with IoT security is worth it long term even if it hypothetically takes 20 years to get it correct. At the same time, moving quickly and patching one of 100,000 problems can still be useful.


> how the US government can constantly solve very tricky real-world problems that impact our lives

What are some recent examples? Especially dealing with technology?

I can think of a bunch of counter-examples where the government did not do a good job of regulating:

* Rural broadband failed

* Net neutrality failed

* Healthcare.gov was a fiasco

* Wireless spectrum auction never got us municipal/rural long-range wireless

* NASA's duties have largely been outsourced to private actors

* State DMVs are a shitshow

* Election integrity failed

* They can't figure out what to do about online disinformation

* Warrantless wiretapping: both unconstitutional yet ineffective, as in 9/11 and the lack of data sharing between agencies

* Foreign military misadventures, our traditional forte: Afghanistan, Iraq, both abysmal failures

* Healthcare: a joke compared to every other developed country

* Education: pathetic and getting worse

* Social programs: Welfare, what welfare? inequality and homelessness getting worse every year

* Infrastructure: crumbling

* Clean water: only for rich white people

* Immigration: heh

* Covid: lol

* Renewables: haha

* Nuclear: let's pretend it's not there

On every major policy front, the US government has been a disaster for decades. I'm no libertarian by any stretch, but our government is a complete shitshow compared to any other developed democracy. We have neither the leadership competence (decisionmakers and legislators) nor the engineering talent (career civil servants) to tackle something as diverse and nuanced as IoT security, or arguably, digital security in general. Give them 20 years and they might be able to catch up to 1990s netsec, and by then the manufacturers will be two decades ahead and foreign intelligence services even further beyond that.

Our government is doomed, and taking us down with it.


Almost everything the government does is extremely complicated. How about the modern coast guard search and rescue process, weather forecasting, air traffic control, making counterfeit resistant money, etc etc.

Of course it's all stuff the government does directly, like GPS, that generally works even if there were issues with version 1. Go to Healthcare.gov today and it works fine, but wow, 8 years ago there were issues. People still get mileage talking about that launch presumably because it's that unusual.

Bringing up the FCC, there's keeping the wireless spectrum clean. You can blame the government for not solving all sorts of issues, but people complain while at the same time they largely don't want state or federal government internet. Healthcare is the same issue; we apparently don't want even a public option, yet somehow the government is still on the hook.

People are always going to talk up government boondoggles because that's what's memorable. Clean water in all 155,693 public water systems in the United States isn't easy; it's a monumentally difficult task that works 99.9% of the time across a huge range of public and private organizations, managed by a huge range of different locations from tiny towns to vast cities. Of course if people actually trusted their water then bottled water would be less popular…


Almost anything any government does is extremely complicated, and yet most of the developed ones succeed in ways ours fails at. Why are we unique? Countries poorer than us, less dense than us, smaller than us, still manage to provide many basic services, rights, protections, and guarantees that ours does not. This is American exceptionalism gone awry.

Those things you mentioned aren't recent developments. Yes, there was a time when our government was capable of producing good output. What happened? Why are we still judging today's government by its successes of decades past...? Most of what you mentioned is literally last-century tech. The world has moved on; our government has not.

> You can blame the Government for not solving all shorts of issues, but people complain while at the same time they largely don’t want state or federal government internet.

Maybe there's one class of issues that government can't deliver on because the public mandate isn't quite there yet, like single-payer healthcare. But there's another class of issues that the public DOES want, the government already wrote the laws and allocated the budget for, and then did absolutely nothing about (like rural broadband grants basically going to corrupt telcos, with zero real enforcement). That has nothing to do with the lack of public will, just sheer incompetence and corruption.

Then there's the outright unconstitutional things, like warrantless wiretapping or drone assassinations of US citizens... to say nothing of recent developments, like Roe v Wade.

It's not that our boondoggles are more visible, it's that we fail at providing basic services for a huge portion of the population -- things that most other developed democracies can provide without much issue or controversy. By that measure, we fall far short.


It's generally possible to find individual governments that do just about any specific thing better than the US gov, but that's not to say each of those countries is better in every way. The best healthcare isn't in the country with the lowest crime or least corruption, down a list where the 2nd country is then second on every criterion. Further, only looking at US failures ignores how other countries fail in different ways. The UK's land situation is seriously messed up and yet we rarely consider that when comparing governments. I am not going to say the US is #1, but it's a long way from abject failure.

Also when you exclude, say, Nigeria and in fact most countries, then every remaining country is going to seem worse simply because you just arbitrarily raised the standards. It's not US exceptionalism to simply say few countries or groups of countries have landed anything on Mars, which is freaking difficult. Sure, providing great healthcare is more important, but it's also something very few countries have done well.

Detracting from success by looking at unrelated failures misses my argument; at best it speaks to the likelihood of success, not the possibility.


(Sorry for the late reply, didn't see your response until now.)

So what do you think is a fairer way to measure governments? Ratio of important successes to important failures? A matrix of weighted policies and implementation scores?

You'd probably end up with something similar to the UN's human development index (http://hdr.undp.org/en/composite/HDI), in which the US ranks #17, behind Norway, Ireland, Switzerland, Hong Kong, Iceland, Germany, Sweden, Australia, Netherlands, Denmark, Finland, Singapore, the UK, Belgium, New Zealand, and Canada. All of those are perfectly livable countries. My only criterion was "developed democracies", and Hong Kong isn't even much of a democracy anymore. I don't think that's an unreasonably high bar.

Our government is just on the low end of mediocre compared to other developed democracies, at least by the metrics I can think of.

If you can think of a better metric, I'm all ears.


Couldn't a malicious page use an IMG URL to issue the GET request, as in <img src="http://lighting.londons_explorer/cm?POWER%20ON">?


Yes
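It doesn't even need to appear in the page's markup; one script-created image does the same thing with no user interaction:

  // The GET fires as soon as src is assigned; the page never needs to read the response.
  new Image().src = "http://lighting.londons_explorer/cm?POWER%20ON";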


Wee! I think I'll add that to my repo so you'll know when you're looking at my code. :)


Been following this for a while; I'm really surprised it's taken so long for browsers to begin to address this problem. It's kinda crazy that random websites can just send any HTTP request they want to your private network or local PC with no restrictions; that's a pretty huge attack surface to expose to the public internet.

Even this measure by Chrome is extremely limited, as it sounds like they're only blocking insecure (HTTP) sites from making requests to your private network. HTTPS sites are unaffected (for now).


This is infuriating. Google is once again severely hampering openness and accessibility in the name of security.

The web is more than just the big CDNs. It is not all about your business model. If I cannot use your browser to open a socket to a routable host and render the data, then it is not a web browser anymore. But I don't think Chrome has been one for a while.

Google's logical conclusion is to only allow you to connect to a whitelist of sites that they have control over. For that matter, domain names themselves are obsolete, we should just have AOL-style keywords like 'amazon' and 'facebook'. Only a poweruser weirdo would want to have their own website.

Users could also be tricked into adding their own self signed certificates, or having their own private DNS. It would not at all surprise me if that is banned soon as well.

I can't count the number of things that this would break, especially VPN-accessible intranets.

Having users register with some outside service for getting a private network SSL certificate is a no go. That is like saying that a platform is open because submitting to their app store is free. If I have to rely on an outside authority then it is no longer federated, and it is no longer the web. Which is fundamentally what they want.

> It's kinda crazy that random websites can just send any HTTP request they want to your private network or local PC with no restrictions; that's a pretty huge attack surface to expose to the public internet.

I don't think there will be any restrictions on what advertising and tracking CDNs a webpage can make requests to, so long as they have their certificates in order and are not on a private network. I'm sure that when I go to log into my bank it will still happily connect to 12+ different analytics services.

I do think it would be nice to expose more control to the user for namespaces of what is allowed to be fetched from what context, but that might hamper advertising and content consumption.


So would this be like creating a public webpage called “help troubleshoot my router”, and having that webpage serve up malicious JavaScript which attempts to attack my router's local web server?


Hmm question:

Will this race to secure the internet, mostly by applying ever more complex band-aids, end up completely discouraging small entities from running their own public resources?

It's already next to impossible to run a public home server. Is it likely to become completely impossible?


It will keep getting harder to do things independently online. The creative, individual empowerment phase of the internet is winding down. It is being gobbled up and calcified as an apparatus for rent extraction, spying and manipulation.


From a historical perspective, the complete opposite is true. It has never been easier to do things independently on the web than it is today, and by huge orders of magnitude compared to even 20 years ago. The panoply of mature open source software and tooling at the disposal of a would-be webmaster is so enormous that it now actually pisses people off when programmers open source their own library or framework code.

As far as the rent extraction apparatus, it is enumerated almost entirely by a population that simply did not exist online back in the imagined halcyon days of "the independent internet". The masses didn't come online to tap through homespun webrings, they don't care about that stuff, these shiny hyper-optimized manipulation machines are what keep the masses online in the first place.


I have friends who set up their own web pages in the late 90s armed with Notepad, free hosting supplied by their ISP, and some online help.

No chance at all of most people being able to do that today.

Why do you Open Source Evangelists not understand that most humans cannot do this stuff?

It's great that you know what a build system and github and dependencies are, but most people don't.

And IMO they shouldn't have to.


>I have friends who set up their own web pages in the late 90s armed with Notepad, free hosting supplied by their ISP, and some online help.

>No chance at all of most people being able to do that today.

I used to save my notepad files on a floppy drive. No chance at all of most people being able to do that today. Just because things are different doesn't mean they're worse. The exact skills and methods used 20 years ago are not a good metric for something as nebulous as “do things independently online”. The “things” you can do are going to change and evolve.


20 years ago you saved notepad files on removable media. Today you still do the same. The underlying technology may have changed, but the interface remains the same.


> I have friends who set up their own web pages in the late 90s armed with Notepad, free hosting supplied by their ISP, and some online help.

> No chance at all of most people being able to do that today.

What are you talking about?

You can still do all that. There's still free hosting available, just not through your ISP.

You can still hand-edit HTML with Notepad and publish it to a free web host.

> It's great that you know what a build system and github and dependencies are, but most people don't.

All the added complexity of web deployments these days is not required for a simple personal web page. You don't need JavaScript, a build system, dependency management, etc. The plain HTML written in 1998 still works today. Even the old HTML frames still work, according to a quick Google, even though they haven't been used by any site in well over 15 years.

> And IMO they shouldn't have to.

Good, because they don't.


> You can still hand-edit HTML with Notepad and publish it to a free web host.

Exactly. The last website I made for a festival early last year I wrote by hand with Notepad++. It ended up being 14 HTML files (7 files and 2 languages) and a couple CSS files and a lot of reading about current CSS standards. Initially I started with WordPress but couldn't find a decent theme to do the layout we wanted, so I scrapped it after a couple days of trying to bend several themes to my will.

Not much different than how I did it in the 90s... except back then I couldn't just DuckDuckGo to find thousands of pages with HTML/CSS help.


You can still do all that with any competent shared hosting provider. (I guess "free, supplied by ISP" is rare nowadays, but you'll find other "free at some subdomain" offerings)

Just because all these other options exist doesn't mean you need to use them. Plenty people I know still handwrite their HTML.

Somehow people got to the point of thinking that the only way to host a website is renting a VPS and setting up everything themselves, and that's just not true. (and even if you do that, there's a range of how complex you need to make it)


> No chance at all of most people being able to do that today.

https://neocities.org seems fine for this purpose?


But the maintenance bar has been raised. It is not the case anymore that you could set up a web server as a hobby and leave it running untouched for years. Now you won't connect at all in 2 months if you don't set up letsencrypt correctly. And their API and root certificate also changes regularly so you must keep software up to date. If you use some cross-domain stuff to interact with outside services from the browser, that now breaks every year or two. Things like that add up.


> And their API and root certificate also changes regularly so you must keep software up to date.

Their API deprecated one method with a security risk once and their root certificate is none of your concern if you run a webserver (and it also only changed once and not "regularly"). Their certificate chain is an issue that may concern you, but if your software is working correctly then it should just serve the chain that you get with a new cert.


That's a lot of ifs and buts, just to keep up with the last decade's implementations. For a simple blog that maybe ten people a month read. Good luck keeping up on developments of the next one...

Whether it's lets encrypt or Google or Apple or Facebook, the internet has largely moved away from a culture of small time hackers operating on barebones standards to super complex implementations gatekept by a few huge companies with infinite resources and conflicting values. They want to curate the web and monetize it, not lower the barrier to entry. You are free to use their ecosystems to produce things they can share revenue from, but everything else will only keep getting harder... what even is the web anymore but marketing landing pages redirecting to walled gardens.


That dichotomy is false, lots of levels between "self-host everything" (and deal with the pain of maintenance) and "walled gardens". For "just a blog", good old shared hosting works just as well as it did in the 90s/00s.


It's not really true, but essentially it is.

It used to be a web server was something you could almost auto-deploy. Then it became a series of increasingly complex steps as various 'security' measures were employed. You can do these things yourself, and they aren't that hard, but they were never made easy in a way that didn't imply a lot of specific technical know-how. I kept up with it for a while, but eventually everyone has to deal with the real world and its time constraints, and the 'security' of today provides undeniable barriers compared to the yesteryears of the web.

I'm not convinced this browser change is a good thing - I think the issue is the aforementioned crap on personal networks, not the ability for a browser to go there. If your security is endangered by your shitty dishwasher, either don't connect it, or since you are doing the connecting, put it on an isolated private network. This move is encouraging bad security practices while at the same time just throwing another roadblock in the way of legitimate uses of 'home' software.


You do realize that the managed website hosting of the late 90s/early 00s still exists today, right?

You don't have to stand up your own servers in your favorite cloud provider and become a Cloud DevOps expert. You don't have to manage deployments, dependencies, etc. You can still pay $3/month to get shared hosting on DreamHost, upload your HTML file, and it gets served. No fiddling with nginx, no operating system patching, etc.

Even if you don't want to pay $3/month, I'm sure there are still hosts that will give you a few megabytes of storage and a couple gigabytes of traffic for free.


> Then it became a series of increasingly complex steps

honestly, what series of increasingly complex steps? The main thing today is an expectation of HTTPS, and that is added complexity, but also something you can auto-deploy today and lots of tutorials available. E.g. I'm fairly sure I've spent more time of my life on .htaccess and nginx redirect syntax than on HTTPS, despite starting early with Let's Encrypt and not choosing the most-automated solutions - and in other setups "add HTTPS to a domain" is literally a line of config file, with the webserver doing the rest. But that's beside the point I made:

This is assuming that you actually are deploying something to a server, instead of making use of the myriad of ways of having that be someone else's problem. How are those "essentially" not true options?

"We can trust users and random developers to do the right thing" is understandably not the security position browsers take, so this needs some solution eventually. What the right tradeoff is is a good question. (i.e. IMHO there should be clear ways for devices to opt-in to being accessed)


(FWIW, their servers use their own certificates, so in fact I had to spend some time today updating the root certificates on a web server so it could fix certificate renewal.)


And try running a mail server from home. Receiving is OK, sending is a non-starter. It's hard even from a server.


> Now you won't connect at all in 2 months if you don't set up letsencrypt correctly

So set it up correctly, or just buy a cert like in the good ole days, or just don't use any encryption like in the good ole days.

All the options from the good ole days are still available to you.


If you don't enable encryption, many browsers reduce functionality available to you.


Functionality that wasn't available in the good old days.


Downloading files? Firefox gives a warning every time you download from http.


What does the warning say? "You might not be downloading the file you think you are"? That just seems like useful, accurate information that you probably want to be aware of.


I have half a dozen websites that have been running without maintenance for the last 5+ years.


You're lucky. Or skilled. I have half a dozen websites that broke, with maintenance, in the same period. Often because of SSL issues configured by someone else. Plus my own screw-ups. It's not impossible to do right, but it's definitely not trivial. Even if you configured everything right, something up the chain will probably break, in time...


Why should a webserver need maintenance?

To feed the SSL ponzi pyramid?


How can something that literally costs $0 be a ponzi pyramid?

And why should a web server need maintenance? I mean, just search Google for your favorite web server software and "CVE" and you'll find plenty of reasons.


I use a CDN (namely the one on Amazon AWS) to provide HTTPS for my website. That knocks out two things at once: fast distribution across the globe, and security. Do you wish to abstain from using a CDN?


> Do you wish to abstain from using a CDN?

Yes. You are missing the point entirely.


Let's Encrypt is easily automated with certbot, I've been running my home webserver for over 10 years with Debian and NixOS, without touching it apart from stable OS version upgrades.


Let's Encrypt needs internet access, something I prefer not to have for various (rather dated) systems on my network. Worse, several things that ran on file:// in the past have been blocked by most browsers, so even having to set up a server that then needs a valid cert is a painful complication over just clicking index.html and letting the browser handle every request locally.


For local-only access, you could run your own CA. I found gnoMint to be quite easy to use to generate and manage certificates. It does everything in an SQLite database. I do this for OpenVPN, but you could do it for web services just the same.


> As far as the rent extraction apparatus, it is enumerated almost entirely by a population that simply did not exist online back in the imagined halcyon days of "the independent internet". The masses didn't come online to tap through homespun webrings, they don't care about that stuff, these shiny hyper-optimized manipulation machines are what keep the masses online in the first place.

I agree, but I think people arguing over that would have expected to maintain the same ratio as the internet population grew. Frankly utopian IMO but one should dream, no?


The old internet is already compromised. Decentralization is the only way forward.


I agree with you, with a couple exceptions. I think it's an error (and a frequently made one) to treat "the masses" as being distinct from the curious, virtuous hackers trying to build with the tools they were given. The difference in many cases is just how much they were encouraged to create, either by the example of others or by the tech itself. If we always treat consumers as strictly consumers instead of fellow humans in a process of mutual improvement... that's what we'll tend to get. Why do we on the one hand claim to have a need for more STEM workers or more competency in this or that, then on the other hand sell devices that coddle users and waste their time? Productivity tools are often behind big paywalls. I won't enumerate all the ways mainstream computing sucks, but our manipulation machines actively discourage people from engaging more positively with tech, not to mention each other. It's not very encouraging to me that Open Source is growing in absolute size when it is struggling so much in terms of mindshare.

The other thing to address is about being "independent online". Many of the things that make it so easy to create a website, for example, are made easy at a cost, i.e. vendor lock-in and rent for continued service. Or github will host your code but also use it for their own purposes, training your AI replacement. Those are ultimately good things to have around but do follow the trend of being cages-with-benefits --they increase dependence on central infrastructure.


I don't see how people come to this conclusion.

TLS certificates used to cost a lot of money, now they're free. Pretty much all relevant web frameworks and technology stacks are published under FOSS licenses.

Nothing stops you from running your own web server with either whatever is the current state of the art web tech or whatever you prefer to build yourself.


Well, the web wasn't as encrypted before. And then when it was, the cost of the certificate wasn't a big deal in comparison to the maintenance effort of constantly rotating certificates, as you'd just buy a certificate that lasted for years and years, which isn't even allowed anymore. SSL with its massive set of "authorities" that are all granted way too broad and yet interchangeable powers with keys that both have to be rotated with pre-determined expiration dates and yet also must be shared with the client is just barely a solution to the original problem :(.

But of course, the truth is that the web was never easy: it was just naive. Most (NOT ALL: some categorically only are protecting the interests of advertisers or rights holders) of these security bandaids and limitations are fixing actual problems with the web that were always there... developers just didn't know about them or didn't realize the ramifications. It would be better to have solved some of these things with solutions that are more elegant, and the lack of a definitive guide for "what all you should and should not do" sucks, but mostly the web is just banning stuff that never should have existed in the first place :(.


It's not always possible to get certificates from Let's Encrypt for local network only services. In a big corporate environment jumping through the hoops necessary to deploy keys or get things into a DMZ can be near impossible. Even if you don't have those issues it is still one more thing to learn and setup. All these things pile up. Try setting up a basic e-mail server on the modern Internet and compare that to 20 years ago.


> It's not always possible to get certificates from Let's Encrypt for local network only services. In a big corporate environment jumping through the hoops necessary to deploy keys or get things into a DMZ can be near impossible.

You're not wrong, but if you can go through the paperwork to add a CNAME to external DNS, your team can use DNS validation to verify host record ownership for LE/ACME:

* https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mo...

* https://www.eff.org/deeplinks/2018/02/technical-deep-dive-se...

* https://dan.langille.org/2019/02/01/acme-domain-alias-mode/

Seems not many people know about using dns-01 for internal-only hosts.


> In a big corporate environment jumping through the hoops necessary to deploy keys or get things into a DMZ can be near impossible.

TBH that sounds like you decided to make things painful and then complain that they are painful.


If your company has network admins smart enough to deploy segmentation rules, they are also probably smart enough to set up an internal CA and deploy the certs to everyone's root store. If not, that stinks.

Smallstep makes a basic CA for free that is ACME compliant, meaning you just need to change the URL for Let’s Encrypt on your server and restart. Microsoft also has a CA included with Windows Server if you’re using that which works fine (although it uses a different API to get certs).


Only if you allow it to become so; there's no rent-seeking on "my" part of the net. Just avoid any and all devices, programs and "services" which exhibit such behaviour, self-host where you can or use decentralised services if you cannot, don't use any devices through "cloud" interfaces - and block those devices from the open internet altogether if they'll contact the mothership - and enjoy the 'net the way you want it.


The Web3 community is pretty actively innovating to create a decentralized and creative web, if you ignore the scams going on out there.


Only if you live in the technological present. They won't really have us until they can chase us out of the past and/or future.


Doesn't seem related though, since if you ran a public server from home, that server would appear through your public facing "Internet" IP rather than its intranet address.

It might make some homelab setups slightly more annoying, though.

FWIW, I do the same as the article for my home network: I hijack all DNS requests from intranet devices and respond with corresponding intranet IPs. Externally on the internet trying to resolve those same (sub-)domains would lead to the public facing firewall.

This makes it so I can manage stuff like NASes and IoT stuff fairly easily regardless of where I'm connecting from.

Luckily none of my stuff really depends on making cross-boundary requests between intranet and internet services (it's always completely internal or external) so I should still be OK.


> next to impossible to run a public home server

How so? I run a public HTTP server and a VPN server from a Raspberry Pi in my living room. It was pretty easy to set it up. Regarding the HTTP server, the only thing that was different from the last time I did this (around 2004) were SSL certificates.


What is the easy way to get SSL certs for my private network, which are recognized by all my browsers? My private host has no public IP or hostname, thus can't verify automatically via letsencrypt.


You can do two other things, besides using a wildcard domain as mentioned in a sibling comment.

1) Use public DNS to validate instead of HTTP. I do this for internal-only webservers. TXT records are updated during renewal using Hurricane Electric's DNS service at dns.he.net.

2) Run your own CA. This used to be a huge pain, until I found gnoMint. I use this to generate certificates for OpenVPN. If necessary, installing a root certificate is not difficult on most systems. You can set it to expire in, say, 10 years, so you won't need to update it so often.


Not sure about “Easy”, but you want to be an intermediate CA signed off by another CA already recognised by common browsers. LE doesn't provide that apparently, but it does seem to be available for a price…

[0] https://community.letsencrypt.org/t/does-lets-encrypt-offer-...


I get a *.domain.tld cert for my public server and then copy that to all my internal hosts, which are only reachable internally, but use the same domain.



> no public IP or hostname

You can get a domain name for free from many non-profits (eg. eu.org). And chances are you have a public IP address, it's just dynamic not static, in which case dynamic DNS setup is fairly easy.

The only case where you're screwed is a 4G/5G setup where you actually don't have a public IP at all, but only half/quarter of an IP (just a dedicated port range on a shared IP).


If your host has no public IP or hostname, letsencrypt has no business issuing that host a certificate.

If you wanted to, having a public-facing IP that serves the challenge files, and just reverse proxying that specific URL range to the private host, might work.

But really, if you want SSL for a private network, self-signed certs or your own trusted CA cert is the way to go. That does mean changing your browser to accept those certs.

Alternatively, drop the SSL requirement, since everything is apparently on private networks.


For whatever it’s worth, my startup kubesail.com exists to solve exactly these problems :)


I’m curious. Are there any new major difficulties when setting up home servers?

Last time I tried it was a little annoying with dynamic IPs getting in the way, but possible with just a port forward or UPnP. Are ISPs making changes to prevent this?


I've never run into trouble running stuff from home. SMTP from home is always going to be problematic, though.

The lack of IPv4 address space has started a trend of CGNAT which makes hosting from home nearly impossible. Luckily, IPv6 continues to be rolled out, but in many situations that would lead to your services only being reachable through IPv6. There are still a great many outdated networks out there that can't reach IPv6, so you might run into trouble there.

If you can get a public IPv4 IP address, I see no reason why you'd run into issues. That's more of an ISP problem than a hosting problem, in my opinion.


I -- along with many other mail server operators -- block dynamic IP space. Which is almost the same as blocking home users, but not quite -- I have a static /29 on my residential package and ran my mail server from the slow side of an ADSL connection for several years.


> SMTP from home is always going to be problematic, though

If your IP is not dynamic, and you can configure the reverse DNS, there's not going to be problems :)

Except with Gmail and Outlook of course, but well these are the problems, not us.


Most, if not all, residential IP blocks are in the various blacklists most mail servers query. Merely having a reverse PTR won't get your email delivered. Even SPF and DKIM with DMARC probably won't be enough to get over the blacklist rating.

Some business IP blocks aren't blocked, though, so in rare cases you might get away with running a mail server from a business internet subscription.


> Even SPF and DKIM with DMARC probably won't be enough to get over the blacklist rating.

I can confirm this. I recently tried to set up being able to send emails from an smtp server in my homelab to my gmail address. Even with all the good stuff - a domain, tls, spf, dkim, dmarc, gmail just straight up refuses to receive mail from residential IPs. I ended up proxying it through my VPS, which works better but still requires me setting up gmail rules to NEVER send messages from my special domain to spam. Which it would otherwise do for no apparent reason sometimes.


In general using IPv6 reverse proxied by something like CF is the optimal solution to most problems.


If you're going the Cloudflare route, then just get a cheap VPS somewhere and run off that instead. Gets you both IPv4 and IPv6 connectivity, and you can always proxy stuff yourself if you need to access local hardware.

It's a solution, but it's hardly the home server project you could (and should be able to) run from your home internet.


If you're going to rely on cloudflare, who control access to a huge number of sites and explicitly choose to cut off some of them for political reasons, why even bother hosting something yourself? At that point you might as well be posting on facebook.


Most ISPs in Australia offer a static IPv4 for a monthly fee ($5 to $10 a month).


That's an ISP problem, though. For that price it's better to just rent a VPS for hosting/proxying traffic. A wireguard tunnel to your actual home server and an nginx proxy at a $4 VPS provide more options than just a guaranteed static IP. Hell, if you can nab some capacity, you can use free VPS solutions like the ones AWS and Google provide for a year, or the one Oracle provides forever if you can live with the 50mbps uplink (you get 2 servers so maybe you could tunnel that into 100mbps?) and the annoying web UI for setting them up.

Luckily, you don't need a static IP address in most use cases if you set up dyndns, as long as the IP is exclusively used by you and doesn't change too often (e.g. every week or month or so).


> For that price it's better to just rent a VPS for hosting/proxying traffic.

Why would it be better? It would be more complex technically, and certainly less resource efficient. Am i missing something?


It would be more complex, but you'd have a lot more flexibility. Your ISP might not be able to deflect DDoS attacks as efficiently as a remote proxy, you can set up your own caching with a high-speed connection, and you can secure your home network a bit better by only allowing the remote proxy access to your home server.

You'd also save costs moving some of the hosting to the cloud while you're at it, because you don't need to pay a separate electricity bill for a cloud VPS. Plus, VPS storage is usually more reliable than a custom RAID config, as is the power grid around data centers and the internet connection itself.

If you're going for efficiency or simplicity then you're totally right, but if you're trying to get value for money I think a cheap VPS would be better.


If you're going to pay $10 a month, you might as well pay for a VPS instead and connect up to it with wireguard, nebula or tailscale. Especially if paying means dynamic IPv4 that isn't behind CGNAT rather than an actual static IPv4 address.


Paying gives you a static IPv4 that remains the same for as long as you're subscribed.

It's not a bad option if you're already paying for gigabit, sadly nearly impossible to get symmetrical gigabit here, but still for an extra $5 or $10 a month it's ok.


That's about it. In addition, some ISPs make sure you cannot change the resolver advertised on DHCP, and that your own IP is routed via the Internet and back (so local requests are slow as hell), both of which can be hacked around by running your own router behind the ISP's modem/router.

Of course, the real solution to the problem is to find a decent ISP, like a non-profit from FFDN.org federation. Then you have "real" internet and no worries for selfhosting.


And how much time/expertise do you need to not trigger all kinds of warnings in modern browsers?


Close to none. Just a letsencrypt cert and not completely outdated ciphers is enough.


:) "Just a letsencrypt cert". That's right for me and most of the HN audience. Is it right for the general public?


Which audience exactly is cool with the register a domain name, figure out your ip, setup an A record to point there (maybe dynamically!), enable port forwarding, install a web server, etc, but gets stuck on letsencrypt?

Letsencrypt is almost certainly the easiest part of the entire process of self-hosting a website.


Yes, because if your web server is publicly reachable, letsencrypt can be automated easily. And there is plenty of letsencrypt software that fully automates this.

A public web server is the easy part if you want to do letsencrypt.


Letsencrypt is easy if you are running a generic Linux distribution and can easily install whatever you want.

But you might run a device that already comes with software, and letsencrypt support is either limited (example: Synology; their implementation allows only http-01 challenge so if you need dns-01, tough luck. Even wildcards are a new feature) or non-existent (example: Ubiquiti, and their cloud keys (administration UI, guest portal) or routers (Radius/WPA Enterprise needs TLS cert too)).


dns-01 is possible on Synology, but you need to SSH in:

* https://github.com/acmesh-official/acme.sh/wiki/Synology-NAS...

* https://lippertmarkus.com/2020/03/14/synology-le-dns-auto-re...

* http://www.thedreaming.org/2020/11/18/synology-lets-encrypt/

It's possible as long as your ACME client has hook scripts, and your DNS provider has an API:

* https://github.com/AnalogJ/lexicon


It's unofficial, but https://github.com/kchristensen/udm-le lets you provision LetsEncrypt certificates for Ubiquiti kit.


> Is it right for the general public?

My Asus router has a checkbox for dynamic DNS and for getting Let's Encrypt certs. See Method 2:

* https://www.asus.com/us/support/FAQ/1034294/#a2


Is the general public that much into running private web services on their home networks?


Yes. Even the most complex technical setup can be accomplished by a non-technical person who can follow directions, assuming that someone took the time to write clear and concise directions and included common caveats and troubles that one may run into and where to check for them. I have proven this many times over by having managers and directors who were non-technical follow my instructions. In the rare moments this breaks down, forums and chat rooms can be a very handy gap-filler and provide an indirect feedback loop to further improve documentation.


The general public's alternative is to buy a cert every year for $$. They'll figure it out. (or their shared hosting provider will)


That is the lowest bar for setting up your certificates. It should be right for the general public.


Fairly little? I think you might be thinking of email, where running your own email server is tough.


Receiving mail is also not hard. You may have to deal with the bonus mail (spam). But that's it.


I don't see how this will make it any harder. It's not like your home server is likely to mix private and public IPs.


>It's already next to impossible to run a public home server. Is it likely to become completely impossible?

Why? I try to do this, it's an exercise that I wanted to do.


In case it wasn’t clear: This won’t stop you from going to 192.168.1.1 or otherwise accessing private network resources from Chrome. This is about closing specific public/private boundary security vulnerabilities that you don’t want 99% of the time. The author of this article happens to have a 1% corner case, but the average user generally doesn’t want this.


I think split-horizon DNS is a bit more than a 1% corner case: it's almost universal in corporate and education contexts.


When I read that all I could think of was routing things through a privacy-invading proxy. Is that what it is?


There often is such a proxy in these environments but it isn't really related. The scenario here is that you have servers with both internal and external IP addresses, and for whatever reason if someone is connecting from inside the network you want them using the internal IPs and not the external ones. (Simpler routing, different features available to internal clients, etc.) So you set up the DNS servers for your network to serve internal IP addresses for your domains, but anyone outside the network sees the public IPs for those same domains. (That's the "split horizon" part.)

Now someone in the network could follow a link from a page served from a public IP to a domain with a private IP address—which this change would disallow unless the first page was served from a "secure context" (with TLS) and the internal server responds to a "preflight" OPTIONS request with the required CORS headers to allow following links from public networks.


This is extremely common in universities, where you share 2-4 public IPs and can usually ask for ports to be forwarded to an internal IP, and often there are resources (like, say, servers with GPUs) available on internally reachable IPs using an easy-to-use hostname served by the internal DNS server.

Of course this change won't be that big of an issue; things would just need to change a little. Using split DNS was already a pain when students wanted to, say, use DNS-over-HTTPS and didn't want the university DNS servers to know every site they visited.


They can apply group policy to make it work then.


Use IPv6 and stop with the NAT. It makes all the split-horizon DNS pain go away.


Lack of IPv6 support is unfortunately quite common among devices, as are buggy/bad/underbaked implementations.


+1 Especially for the devices this is targeting, like IoT stuff, printers, fridges, cameras etc. In fact, all the IoT stuff I have supports IPv6 poorly (or not at all), and many of the devices don't even support 5 GHz ac WiFi.


I'd just return it if it says IPv6 on the package but is implemented shoddily. Manufacturers should do better in 2021.


> Use IPv6 and stop with the NAT. It makes all the split-horizon DNS pain go away.

\s

Sure. Just give me a week or two (or several months) to shut down the whole network and reconfigure all the servers, devices, and services.

Also all our business partners and vendors who integrate with our services, will be glad to switch to our new setup, exactly when we need them to.

\s

If you are building a new site/network, IPv6 is the way to go. Migrating existing ones is next to impossible due to all of the dependencies out of your control.


It's about shifting a security boundary from one place to another, because the original location of it has been ignored for generations and is now about as good at keeping invaders out as the Great Wall of China.

