You should probably be mostly thrilled about this development.
CORS was basically ineffective in a lot of ways because it only works - by design - with newer servers that send those headers and with newer browsers that actually respect them rather than simply ignoring them. It was also ineffective for older servers from the pre-CORS era. The "never break the web" mentality didn't work out for this scenario.
Looking forward to this! Finally the LOIC and routersploit web implementations are ineffective. Now drop support for ftp in the web browser and we are good to go.
(̶F̶i̶r̶e̶f̶o̶x̶ ̶c̶a̶n̶ ̶s̶t̶i̶l̶l̶ ̶b̶e̶ ̶a̶b̶u̶s̶e̶d̶ ̶t̶o̶ ̶D̶D̶o̶S̶ ̶v̶i̶a̶ ̶f̶t̶p̶ ̶u̶r̶l̶s̶)̶
edit: correction, I was wrong: Firefox finally phased out FTP support in July this year.
How? They dropped FTP support a few versions ago (https://blog.mozilla.org/addons/2021/04/15/built-in-ftp-impl...).
Firefox deprecated FTP in 2015, required it to be manually re-enabled as of April this year (FF88), and completely removed it this July (FF90).
I was assuming they would drop support by the end of this year; I hadn't followed it recently, so I missed the July update about it.
This is so true. Especially since the pandemic, the sheer number of CVEs is hard to keep up with and evaluate for relevance to your own tech stack.
Great. And while I blame Mozilla for much, this isn't their fault.
I almost feel as if end users should need a network license, and if they get too many tickets, no license for them!
And yes, it is not realistic.
If these clients are on Windows... tell them to use Windows Explorer... it has FULL FTP capabilities.
You don't need a browser for FTP, or even an FTP-specific program.
You can't even uninstall or disable it, since doing so would disable the entire desktop.
Note that I am not talking about Internet Explorer (which is related, but not the same thing).
It's pretty seamless. There are lots of FTP-supporting clients. Give it a go.
You can insert a dozen strange answers here.
For example, one answer?
A client of mine has an FTP site, and their customers access it. Those customers have an IT policy which does not allow them to install other software, for security reasons.
Thus, keeping an old version of Firefox around is what their customers do.
(Yes, this is insane and bizarre beyond belief. The security policy is working against security, and the fact that the security policy doesn't care about an old browser is insane. Yet there it is.)
I have a client with employees around the world. They are usually very secure. However, these employees seem to be the complete opposite of computer literate. Every step they take, every task assigned, is accompanied by PDF files and wiki walkthroughs of "here is menu item X, click this, then menu item Y", along with screenshots and enlargements of menu items.
All their training is rote. They don't know how to use software, only how to click this, then that, as per the pictures and docs, then enter the report in the form that pops up.
If anything deviates -- tech support.
I honestly don't know how it is possible to find people capable of doing a job with diligence, competence, and intelligence who still require this level of hand-holding, yet I see it myself, through this client, constantly.
Like I said ... strange and bizarre.
While I am sure this client will eventually manage to upgrade its staff, they have been researching clients, testing them, re-working all documentation, and even rolling out 'test upgrades' for employees!
And of course this takes time, naturally they are short staffed, and it requires management buy in at every step.
And getting people to modify about:config? That's way, waaay too complex. So they're stuck on an old browser, which they aren't supposed to use for anything but FTP, yet these employees are the sort that call a browser "google", and don't know the difference between firefox and chrome.
So you can be sure they're using an old version.
Again, I don't blame Mozilla for this.
This is the sort of stuff which makes me think 'maybe people need a license, like a driver's license, to be on the internet, they're too dangerous otherwise'.
But of course, as I said initially... not easy or realistic to roll out.
Now that I think of it, though, maybe it should be "businesses need a license to be on the internet". The important part here being, if you have constant breaches, and your infra gets used to launch endless attacks, you get fined until you go out of business.
Would you pass on that memo to the Chrome developers, please?
It gets annoying when a page I write, loaded onto my tablet over my LAN's WiFi, cannot talk to devices I own on my LAN just because I loaded the page from my server that is on the public Internet.
> The modern countermeasure for this is to require devices parked on private networks to opt in to the ability to mix public Internet requests with private network accesses, using CORS, which is what this is about
I predict that most IoT devices won't have a way to configure this.
If the manufacturer intended it to be controllable from some web app from their site, they will opt-in to control from everyone. If the manufacturer only wants it controlled from their mobile app, they will explicitly opt-out of web control if that is possible.
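For devices that do want to opt in, the mechanism boils down to answering an extra CORS preflight header. Here's a minimal sketch of a LAN device doing that with Python's standard library; the header names follow the current Private Network Access proposal, and the allowed origin is a hypothetical vendor web app:

```python
# Minimal sketch of a LAN device opting in to Chrome's Private Network Access
# preflight. Header names follow the current PNA proposal; the allowed origin
# is a hypothetical vendor web app, not a real endpoint.
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "https://app.example-vendor.com"  # hypothetical

class DeviceHandler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        self.send_response(204)
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
        # Chrome adds Access-Control-Request-Private-Network: true to the
        # preflight when a public page targets a private address; the device
        # has to explicitly echo consent or the request is blocked.
        if self.headers.get("Access-Control-Request-Private-Network") == "true":
            self.send_header("Access-Control-Allow-Private-Network", "true")
        self.end_headers()

    def do_GET(self):
        body = b'{"power": "on"}'
        self.send_response(200)
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DeviceHandler).serve_forever()
```

A device that never sends Access-Control-Allow-Private-Network: true effectively opts out of being reachable from public pages.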
That doesn't necessarily mean you connect it to the open Internet, but it means you don't leave everything inside wide open because "oh there's a firewall." It also means buggy vulnerable IoT (Internet of Targets) stuff has to be dealt with.
Firewalls are almost security theater. They're just a basic precautionary thing. Same goes for virtual and zero trust networks. Systems must be secure, period.
It's always a challenge in browser design, but basically this is just another case of killing valid use cases because some servers don't follow the spec (i.e. GETs with side effects).
There are probably 25 IoT devices in my home, and more than half of them have a magic GET request with side effects. For example, just by clicking this link, my lights turn on http://lighting.londons_explorer/cm?POWER%20ON
A malicious web page could redirect me to that URL and force my lights on with no input from me. I bet some of the devices allow firmware updates with the same method.
> just by clicking this link, my lights turn on
Now I don't know what's happening with _your_ lights, but when I click that link my own lights come on.
(Cracked a localhome joke in my first IT job interview. Manager laughed, Engineering Manager rolled his eyes.)
edit: what I can see breaking is stuff like Synology QuickConnect https://global.download.synology.com/download/Document/Softw...
Of course this has the downside if you have actual 'internal only' stuff, but those could be separated from the split stuff... Just too much work with years (decades) old setups?
If an organization is using the "BeyondCorp" approach, it doesn't seem relevant, but that's tough to bolt onto large, complex existing environments IMO.
Edit: just to clarify, the advantage is similar to what "BeyondCorp" gets you - end users just need to remember the one URL, regardless of where they're connecting from.
So it doesn't solve anything here.
I have about two dozen devices on my private LAN so I wouldn't consider myself to be "big" or "enterprise".
The setup is fairly unusual though because most users (and unfortunately many developers) lack the technical know-how for it.
Goooooood. I hate this thing with a passion since I had to set it up for a computer illiterate friend.
Not quite. If a GET request is side-effect free then it won't be logged (since that is a side effect).
GET requests aren't supposed to modify state. Logging is a side-effect but it isn't usually considered to be stateful. Changing the state (on/off) of a lightbulb is definitely against the standard requirements for a GET request.
Perhaps a better way to express this is that user agents are permitted to turn one GET request into N GET requests (N >= 1), and can also issue GET requests without user interaction (e.g. for preloading). When this happens the system should still meet its (customer / end-user) requirements. The requirements related to logging are that every request is logged, so it makes sense to record each GET request separately. The requirements for the light bulb are that one user interaction (not one GET request) equals one state change, so updating the state in response to each GET request doesn't meet the requirements. Even if the API were explicitly "turn on" or "turn off" rather than "toggle" you still wouldn't want the state to be affected by preloading, and you could get odd results if opposing requests were repeated (interleaved).
> DELETE /idX/delete HTTP/1.1 is idempotent, even if the returned status code may change between requests:
So requesting to open the garage door multiple times which results in an open garage door in the end is an idempotent request, even though after the second request the response is "I am already open!"
Now, a request to toggle the state of the garage door would not be idempotent: the state of the system is different depending on whether you call it an odd or even number of times.
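A tiny sketch of that distinction, with a dict standing in for device state (purely illustrative):

```python
# Idempotent "set" vs. non-idempotent "toggle", using the garage door example.
def set_door(door: dict, state: str) -> None:
    # Idempotent: repeating the request leaves the door in the same final state.
    door["open"] = (state == "open")

def toggle_door(door: dict) -> None:
    # Not idempotent: the final state depends on how many requests arrive.
    door["open"] = not door["open"]

door = {"open": False}
for _ in range(3):
    set_door(door, "open")
print(door)   # {'open': True} -- three "open" requests, still one open door

for _ in range(3):
    toggle_door(door)
print(door)   # {'open': False} -- the parity of retries now decides the outcome
```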
Oh, I know! How about a browser that does not allow direct navigation from external to internal addresses? Ah, wait.
It still does not quite work for services that have a public IP address. So where do you go from there, a new protocol that has capability handling and external access is disabled by default?
Forcing companies to issue recalls for buggy hardware and firmware could probably do the trick. Note the expense involved, and the fact that discovered issues are poorly disseminated and fixed only after a long lag.
In the meantime, Google can single-handedly monkey-patch this situation within a few months and force manufacturers to catch up with the next product cycle. While a less ideal route, this seems far likelier to produce actual results within our lifetimes.
I think it's because when stuff works it looks simple. There isn't an obvious difference between a mound of dirt piled up in days and one that's carefully compacted as it's constructed, at least until you want to build something on top that actually lasts for 50 years. Building stuff to last takes time, and nobody is around to see you succeed.
Back to your point: actually fixing the underlying issues with IoT security is worth it long term, even if it hypothetically takes 20 years to get right. At the same time, moving quickly and patching one of 100,000 problems can still be useful.
What are some recent examples? Especially dealing with technology?
I can think of a bunch of counter-examples where the government did not do a good job of regulating:
* Rural broadband failed
* Net neutrality failed
* Healthcare.gov was a fiasco
* Wireless spectrum auction never got us municipal/rural long-range wireless
* NASA's duties have largely been outsourced to private actors
* State DMVs are a shitshow
* Election integrity failed
* They can't figure out what to do about online disinformation
* Warrantless wiretapping: unconstitutional yet ineffective, as shown by 9/11 and the lack of data sharing between agencies
* Foreign military misadventures, our traditional forte: Afghanistan, Iraq, both abysmal failures
* Healthcare: a joke compared to every other developed country
* Education: pathetic and getting worse
* Social programs: Welfare, what welfare? inequality and homelessness getting worse every year
* Infrastructure: crumbling
* Clean water: only for rich white people
* Immigration: heh
* Covid: lol
* Renewables: haha
* Nuclear: let's pretend it's not there
On every major policy front, the US government has been a disaster for decades. I'm no libertarian by any stretch, but our government is a complete shitshow compared to any other developed democracy. We have neither the leadership competence (decisionmakers and legislators) nor the engineering talent (career civil servants) to tackle something as diverse and nuanced as IoT security, or arguably, digital security in general. Give them 20 years and they might be able to catch up to 1990s netsec, and by then the manufacturers will be two decades ahead and foreign intelligence services even further beyond that.
Our government is doomed, and taking us down with it.
Of course, the stuff the government does directly, like GPS, generally works, even if there were issues with version 1. Go to Healthcare.gov today and it works fine, but wow, 8 years ago there were issues. People still get mileage talking about that launch, presumably because it's that unusual.
Bringing up the FCC, there's keeping the wireless spectrum clean. You can blame the government for not solving all sorts of issues, but people complain while at the same time they largely don't want state or federal government internet. Healthcare is the same issue: we apparently don't want even a public option, yet somehow the government is still on the hook.
People are always going to talk up government boondoggles because that's what's memorable. Clean water in all 155,693 public water systems in the United States isn't easy; it's a monumentally difficult task that works 99.9% of the time across a huge range of public and private organizations, managed in a huge range of different locations from tiny towns to vast cities. Of course, if people actually trusted their water then bottled water would be less popular…
Those things you mentioned aren't recent developments. Yes, there was a time when our government was capable of producing good output. What happened? Why are we still judging today's government by its successes of decades past...? Most of what you mentioned is literally last-century tech. The world has moved on; our government has not.
> You can blame the government for not solving all sorts of issues, but people complain while at the same time they largely don't want state or federal government internet.
Maybe there's one class of issues that government can't deliver on because the public mandate isn't quite there yet, like single-payer healthcare. But there's another class of issues that the public DOES want, the government already wrote the laws and allocated the budget for, and then did absolutely nothing about (like rural broadband grants basically going to corrupt telcos, with zero real enforcement). That has nothing to do with the lack of public will, just sheer incompetence and corruption.
Then there's the outright unconstitutional things, like warrantless wiretapping or drone assassinations of US citizens... to say nothing of recent developments, like Roe v Wade.
It's not that our boondoggles are more visible, it's that we fail at providing basic services for a huge portion of the population -- things that most other developed democracies can provide without much issue or controversy. By that measure, we fall far short.
Also, when you exclude, say, Nigeria and in fact most countries, then compared with every remaining country the US is going to seem worse simply because you just arbitrarily raised the standards. It's not US exceptionalism to simply say few countries or groups of countries have landed anything on Mars, which is freaking difficult. Sure, providing great healthcare is more important, but it's also something very few countries have done well.
Detracting from success by pointing at unrelated failures misses my argument; at best it speaks to the likelihood of success, not the possibility.
So what do you think is a fairer way to measure governments? Ratio of important successes to important failures? A matrix of weighted policies and implementation scores?
You'd probably end up with something similar to the UN's human development index (http://hdr.undp.org/en/composite/HDI), in which the US ranks #17, behind Norway, Ireland, Switzerland, Hong Kong, Iceland, Germany, Sweden, Australia, Netherlands, Denmark, Finland, Singapore, the UK, Belgium, New Zealand, and Canada. All of those are perfectly livable countries. My only criterion was "developed democracies", and Hong Kong isn't even much of a democracy anymore. I don't think that's an unreasonably high bar.
Our government is just on the low end of mediocre compared to other developed democracies, at least by the metrics I can think of.
If you can think of a better metric, I'm all ears.
Even this measure by Chrome is extremely limited, as it sounds like they're only blocking insecure (HTTP) sites from making requests to your private network. HTTPS sites are unaffected (for now).
The web is more than just the big CDNs. It is not all about your business model. If I cannot use your browser to open a socket to a routable host and render the data, then it is not a web browser anymore. But I don't think Chrome has been one for a while.
Google's logical conclusion is to only allow you to connect to a whitelist of sites that they have control over. For that matter, domain names themselves are obsolete, we should just have AOL-style keywords like 'amazon' and 'facebook'. Only a poweruser weirdo would want to have their own website.
Users could also be tricked into adding their own self signed certificates, or having their own private DNS. It would not at all surprise me if that is banned soon as well.
I can't count the number of things this would break, especially VPN-accessible intranets.
Having users register with some outside service for getting a private network SSL certificate is a no go. That is like saying that a platform is open because submitting to their app store is free. If I have to rely on an outside authority then it is no longer federated, and it is no longer the web. Which is fundamentally what they want.
> It's kinda crazy that random websites can just send any HTTP request they want to your private network or local PC with no restrictions; that's a pretty huge attack surface to expose to the public internet.
I don't think there will be any restrictions on what advertising and tracking CDNs a webpage can make requests to, so long as they have their certificates in order and are not on a private network. I'm sure that when I go to log into my bank it will still happily connect to 12+ different analytics services.
I do think it would be nice to expose more control to the user for namespaces of what is allowed to be fetched from what context, but that might hamper advertising and content consumption.
Will this race to secure the internet, mostly by applying ever more complex band-aids, end up completely discouraging small entities from running their own public resources?
It's already next to impossible to run a public home server. Is it likely to become completely impossible?
As far as the rent-extraction apparatus goes, it is sustained almost entirely by a population that simply did not exist online back in the imagined halcyon days of "the independent internet". The masses didn't come online to tap through homespun webrings; they don't care about that stuff. These shiny, hyper-optimized manipulation machines are what keep the masses online in the first place.
No chance at all of most people being able to do that today.
Why do you Open Source Evangelists not understand that most humans cannot do this stuff?
It's great that you know what a build system and github and dependencies are, but most people don't.
And IMO they shouldn't have to.
>No chance at all of most people being able to do that today.
I used to save my notepad files on a floppy drive. No chance at all of most people being able to do that today. Just because things are different doesn't mean they're worse. The exact skills and methods used 20 years ago are not a good metric for something as nebulous as "do things independently online". The "things" you can do are going to change and evolve.
> No chance at all of most people being able to do that today.
What are you talking about?
You can still do all that. There's still free hosting available, just not through your ISP.
You can still hand-edit HTML with Notepad and publish it to a free web host.
> It's great that you know what a build system and github and dependencies are, but most people don't.
> And IMO they shouldn't have to.
Good, because they don't.
Exactly. The last website I made for a festival early last year I wrote by hand with Notepad++. It ended up being 14 HTML files (7 files and 2 languages) and a couple CSS files and a lot of reading about current CSS standards. Initially I started with WordPress but couldn't find a decent theme to do the layout we wanted, so I scrapped it after a couple days of trying to bend several themes to my will.
Not much different than how I did it in the 90s... except back then I couldn't just DuckDuckGo to find thousands of pages with HTML/CSS help.
Just because all these other options exist doesn't mean you need to use them. Plenty of people I know still handwrite their HTML.
Somehow people got to the point of thinking that the only way to host a website is renting a VPS and setting up everything themselves, and that's just not true. (and even if you do that, there's a range of how complex you need to make it)
https://neocities.org seems fine for this purpose?
Their API deprecated one method with a security risk once and their root certificate is none of your concern if you run a webserver (and it also only changed once and not "regularly"). Their certificate chain is an issue that may concern you, but if your software is working correctly then it should just serve the chain that you get with a new cert.
Whether it's Let's Encrypt or Google or Apple or Facebook, the internet has largely moved away from a culture of small-time hackers operating on barebones standards to super-complex implementations gatekept by a few huge companies with infinite resources and conflicting values. They want to curate the web and monetize it, not lower the barrier to entry. You are free to use their ecosystems to produce things they can share revenue from, but everything else will only keep getting harder... what even is the web anymore but marketing landing pages redirecting to walled gardens.
It used to be that a web server was something you could almost auto-deploy. Then it became a series of increasingly complex steps as various 'security' measures were employed. You can do these things yourself, and they aren't that hard, but they were never made easy in a way that didn't imply a lot of specific technical know-how. I kept up with it for a while, but eventually everyone has to deal with the real world and its time constraints, and the 'security' of today provides undeniable barriers compared to the yesteryears of the web.
I'm not convinced this browser change is a good thing - I think the issue is the aforementioned crap on personal networks, not the ability for a browser to go there. If your security is endangered by your shitty dishwasher, either don't connect it, or, since you are doing the connecting, put it on an isolated private network. This move is encouraging bad security practices while at the same time just throwing another roadblock in the way of legitimate uses of 'home' software.
You don't have to stand up your own servers in your favorite cloud provider and become a Cloud DevOps expert. You don't have to manage deployments, dependencies, etc. You can still pay $3/month to get shared hosting on DreamHost, upload your HTML file, and it gets served. No fiddling with nginx, no operating system patching, etc.
Even if you don't want to pay $3/month, I'm sure there are still hosts that will give you a few megabytes of storage and a couple gigabytes of traffic for free.
honestly, what series of increasingly complex steps? The main thing today is an expectation of HTTPS, and that is added complexity, but also something you can auto-deploy today and lots of tutorials available. E.g. I'm fairly sure I've spent more time of my life on .htaccess and nginx redirect syntax than on HTTPS, despite starting early with Let's Encrypt and not choosing the most-automated solutions - and in other setups "add HTTPS to a domain" is literally a line of config file, with the webserver doing the rest. But that's beside the point I made:
This is assuming that you actually are deploying something to a server, instead of making use of the myriad of ways of having that be someone else's problem. How are those "essentially" not true options?
"We can trust users and random developers to do the right thing" is understandably not the security position browsers take, so this needs some solution eventually. What the right tradeoff is is a good question. (i.e. IMHO there should be clear ways for devices to opt-in to being accessed)
So set it up correctly, or just buy a cert like in the good ole days, or just don't use any encryption like in the good ole days.
All the options from the good ole days are still available to you.
To feed the SSL ponzi pyramid?
And why should a web server need maintenance? I mean, just search Google for your favorite web server software and "CVE" and you'll find plenty of reasons.
Yes. You are missing the point entirely.
I agree, but I think people arguing over that would have expected to maintain the same ratio as the internet population grew. Frankly utopian IMO but one should dream, no?
The other thing to address is about being "independent online". Many of the things that make it so easy to create a website, for example, are made easy at a cost, i.e. vendor lock-in and rent for continued service. Or github will host your code but also use it for their own purposes, training your AI replacement. Those are ultimately good things to have around but do follow the trend of being cages-with-benefits --they increase dependence on central infrastructure.
TLS certificates used to cost a lot of money, now they're free. Pretty much all relevant web frameworks and technology stacks are published under FOSS licenses.
Nothing stops you from running your own web server with either whatever is the current state of the art web tech or whatever you prefer to build yourself.
But of course, the truth is that the web was never easy: it was just naive. Most (NOT ALL: some categorically only are protecting the interests of advertisers or rights holders) of these security bandaids and limitations are fixing actual problems with the web that were always there... developers just didn't know about them or didn't realize the ramifications. It would be better to have solved some of these things with solutions that are more elegant, and the lack of a definitive guide for "what all you should and should not do" sucks, but mostly the web is just banning stuff that never should have existed in the first place :(.
You're not wrong, but if you can go through the paperwork to add a CNAME to external DNS, your team can use DNS validation to verify host record ownership for LE/ACME:
Seems not many people know about using dns-01 for internal-only hosts.
TBH that sounds like you decided to make things painful and then complain that they are painful.
Smallstep makes a basic CA for free that is ACME-compliant, meaning you just need to swap the Let's Encrypt URL on your server for your own CA's and restart. Microsoft also includes a CA with Windows Server, if you're using that, which works fine (although it uses a different API to get certs).
It might make some homelab setups slightly more annoying, though.
FWIW, I do the same as the article for my home network: I hijack all DNS requests from intranet devices and respond with corresponding intranet IPs. Externally on the internet trying to resolve those same (sub-)domains would lead to the public facing firewall.
This makes it so I can manage stuff like NASes and IoT stuff fairly easily regardless of where I'm connecting from.
Luckily none of my stuff really depends on making cross-boundary requests between intranet and internet services (it's always completely internal or external) so I should still be OK.
How so? I run a public HTTP server and a VPN server from a Raspberry Pi in my living room. It was pretty easy to set it up. Regarding the HTTP server, the only thing that was different from the last time I did this (around 2004) were SSL certificates.
1) Use public DNS to validate instead of HTTP. I do this for internal-only webservers. TXT records are updated during renewal using Hurricane Electric's DNS service at dns.he.net.
2) Run your own CA. This used to be a huge pain, until I found gnoMint. I use this to generate certificates for OpenVPN. If necessary, installing a root certificate is not difficult on most systems. You can set it to expire in, say, 10 years, so you won't need to update it so often.
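If you'd rather script the CA instead of using a GUI tool like gnoMint, here's a hedged sketch using the Python cryptography package (an alternative approach, not what the parent uses) that generates a 10-year self-signed root certificate; the names and filenames are placeholders:

```python
# Hedged sketch: generate a 10-year root CA with the "cryptography" package.
# The common name and output filenames are placeholders for a home lab setup.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Home Lab Root CA")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed: subject == issuer
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)

with open("homelab-ca.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("homelab-ca-key.pem", "wb") as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption(),
    ))
```

You'd then use that key/cert pair to sign per-host certificates and install homelab-ca.pem as a trusted root on your devices.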
You can get a domain name for free from many non-profits (eg. eu.org). And chances are you have a public IP address, it's just dynamic not static, in which case dynamic DNS setup is fairly easy.
The only case you're screwed is in 4G/5G setup where you actually don't have a public IP at all, but only half/quarter IP (just a dedicated port range on a shared IP).
If you wanted to, having a public-facing IP that serves the challenge files, and just reverse-proxying that specific URL range to the private host, might work.
But really, if you want SSL for a private network, self-signed certs or your own trusted CA cert is the way to go. That does mean changing your browser to accept those certs.
Alternatively, drop the SSL requirement, since everything is apparently on private networks.
Last time I tried it was a little annoying with dynamic IPs getting in the way, but possible with just a port forward or UPnP. Are ISPs making changes to prevent this?
The lack of IPv4 address space has started a trend of CGNAT, which makes hosting from home nearly impossible. Luckily, IPv6 continues to be rolled out, but in many situations that would lead to your services only being reachable through IPv6. There are still a great many outdated networks out there that can't reach IPv6, so you might run into trouble there.
If you can get a public IPv4 IP address, I see no reason why you'd run into issues. That's more of an ISP problem than a hosting problem, in my opinion.
If your IP is not dynamic, and you can configure the reverse DNS, there's not going to be problems :)
Except with Gmail and Outlook of course, but well these are the problems, not us.
Some business IP blocks aren't blacklisted, though, so in rare cases you might get away with running a mail server from a business internet subscription.
I can confirm this. I recently tried to set up being able to send emails from an smtp server in my homelab to my gmail address. Even with all the good stuff - a domain, tls, spf, dkim, dmarc, gmail just straight up refuses to receive mail from residential IPs. I ended up proxying it through my VPS, which works better but still requires me setting up gmail rules to NEVER send messages from my special domain to spam. Which it would otherwise do for no apparent reason sometimes.
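For what it's worth, the homelab side of that relay hop can be as simple as the sketch below, assuming the VPS exposes an authenticated submission port; the hostname and credentials are placeholders:

```python
# Hedged sketch: send mail from the homelab by relaying through a VPS smarthost.
# vps.example.org, the username and the password are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alerts@home.example.org"
msg["To"] = "me@gmail.com"
msg["Subject"] = "Backup finished"
msg.set_content("Nightly backup completed without errors.")

with smtplib.SMTP("vps.example.org", 587) as relay:
    relay.starttls()                         # encrypt the hop to the VPS
    relay.login("homelab", "app-password")   # authenticate against the relay
    relay.send_message(msg)                  # the VPS forwards it from its clean IP
```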
It's a solution, but it's hardly the home server project you could (and should be able to) run from your home internet.
Luckily, you don't need a static IP address in most use cases if you set up dyndns, as long as the IP is exclusively used by you and doesn't change too often (e.g. every week or month or so).
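A hedged sketch of such a dyndns updater, run from cron: it uses api.ipify.org to discover the public IP, and the provider update URL, hostname and token are hypothetical placeholders (most dyndns providers expose something similar):

```python
# Update a dynamic DNS record only when the public IP actually changes.
# The update URL, hostname and token below are hypothetical placeholders.
import pathlib
import urllib.request

STATE = pathlib.Path("/var/tmp/last_public_ip")

current_ip = urllib.request.urlopen("https://api.ipify.org").read().decode().strip()
last_ip = STATE.read_text().strip() if STATE.exists() else ""

if current_ip != last_ip:
    update_url = (
        "https://dyndns.example-provider.net/update"
        f"?hostname=home.example.org&ip={current_ip}&token=YOUR-TOKEN"
    )
    urllib.request.urlopen(update_url)   # tell the provider about the new address
    STATE.write_text(current_ip)
    print(f"updated DNS to {current_ip}")
else:
    print("IP unchanged, nothing to do")
```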
Why would it be better? It would be more complex technically, and certainly less resource-efficient. Am I missing something?
You'd also save costs moving some of the hosting to the cloud while you're at it, because you don't need to pay a separate electricity bill for a cloud VPS. Plus, VPS storage is usually more reliable than a custom RAID config, as is the power grid around data centers and the internet connection itself.
If you're going for efficiency or simplicity then you're totally right, but if you're trying to get value for money I think a cheap VPS would be better.
It's not a bad option if you're already paying for gigabit, sadly nearly impossible to get symmetrical gigabit here, but still for an extra $5 or $10 a month it's ok.
Of course, the real solution to the problem is to find a decent ISP, like a non-profit from FFDN.org federation. Then you have "real" internet and no worries for selfhosting.
Letsencrypt is almost certainly the easiest part of the entire process of self-hosting a website.
A public web server is the easy part if you want to do letsencrypt.
But you might run a device that already comes with software, and letsencrypt support is either limited (example: Synology; their implementation allows only http-01 challenge so if you need dns-01, tough luck. Even wildcards are a new feature) or non-existent (example: Ubiquiti, and their cloud keys (administration UI, guest portal) or routers (Radius/WPA Enterprise needs TLS cert too)).
It's possible as long as your ACME client has hook scripts, and your DNS provider has an API:
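For example, a rough sketch of a certbot --manual-auth-hook for dns-01: certbot exports CERTBOT_DOMAIN and CERTBOT_VALIDATION to the hook, and the script publishes the corresponding TXT record; the provider endpoint and token here are hypothetical placeholders:

```python
#!/usr/bin/env python3
# Hedged sketch of a certbot --manual-auth-hook for dns-01 validation.
# CERTBOT_DOMAIN / CERTBOT_VALIDATION are set by certbot; the update URL and
# token below are hypothetical stand-ins for your DNS provider's API.
import os
import urllib.parse
import urllib.request

domain = os.environ["CERTBOT_DOMAIN"]          # e.g. nas.internal.example.org
validation = os.environ["CERTBOT_VALIDATION"]  # token to publish as a TXT record

payload = urllib.parse.urlencode({
    "name": f"_acme-challenge.{domain}",
    "type": "TXT",
    "content": validation,
    "token": "YOUR-API-TOKEN",                 # hypothetical credential
}).encode()

# Hypothetical endpoint -- replace with your provider's record-update API.
urllib.request.urlopen("https://dns.example-provider.net/api/records", data=payload)
```

A matching cleanup hook would delete the TXT record once validation succeeds.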
My Asus router has a checkbox for dynamic DNS and for getting Let's Encrypt certs. See Method 2:
Why? I'm trying to do this; it's an exercise that I wanted to do.
Now someone in the network could follow a link from a page served from a public IP to a domain with a private IP address—which this change would disallow unless the first page was served from a "secure context" (with TLS) and the internal server responds to a "preflight" OPTIONS request with the required CORS headers to allow following links from public networks.
Ofc this change won't be that big of an issue; things would just need to change a little. Using split-DNS was already a pain when students wanted to, say, use DNS-over-HTTPS and didn't want the university DNS servers to know every site they visited.
Sure. Just give me a week or two (or several months) to shut down the whole network and reconfigure all the servers, devices, etc.
Also all our business partners and vendors who integrate with our services, will be glad to switch to our new setup, exactly when we need them to.
If you are building a new site/network, IPv6 is the way to go. Migrating existing ones is next to impossible due to all of the dependencies out of your control.