I love RDP! It really is an impressive technology. I work in-office somewhere, and when I'm on campus, RDPing into the laptop at my desk from a conference room client feels like native performance, audio and all.
Hurd didn't fail because of its microkernel design; in 1994 multiple companies were quite successfully shipping systems based on the Mach kernel.
According to some people I've met who claim to have witnessed it (old AI Lab peeps), the failure started with the initial project management, and once Linux offered an alternative GPLed kernel to use, that was enough to bring the effort even closer to a halt.
Most famously these days, macOS (formerly known as Mac OS X, to distinguish it from all of the earlier ones) is built on top of Darwin/XNU, which descends from Mach.
Is the article really right, though? I imagine far more machines run some form of Linux than run Intel processors. Even if the claim was true in the past, it has likely shifted even further in Linux's favor.
Intel has profited by tens to hundreds of millions of dollars from Minix 3. Minix replaced ThreadX (also used as the Raspberry Pi firmware) running on ARC RISC cores, and Intel previously had to pay licensing for both of those.
If Intel reinvested 0.01% of what it saved by taking Minix for free, Minix 3 would be a well-funded community project that could be making real progress.
It already runs much of the NetBSD userland. It needs stable, working SMP and multithreading to compete with NetBSD itself (setting portability aside).
But Intel doesn't need that. And it doesn't need to pay. So it doesn't.
People often forget that the best way to win a tech debate is to actually do the thing. Once, multiple developers criticized a small program of mine as slow due to misuse of language features. So I said: fine, give me a faster implementation. No one replied.
Ultimately, you have to store your backup codes somewhere. So the only solution besides using your password manager is using a second password manager. Or not using a password manager for your backup codes at all, which has its own disadvantages.
There's lots of cases where 2FA reduces to 1FA. E.g. logging into a website on your mobile phone, and getting your TOTP or SMS code on that same phone. In fact, that case is so common I wonder if we should just get more used to the idea of 1FA, with smartphone passkeys/biometrics/SSO being the auth factor. As it stands, if you compromise someone's smartphone (and have their smartphone PIN), the odds are good you can autofill any password you like on their phone and pull up any needed 2FA tokens as well.
Caddy is the greatest thing since sliced bread. It is such a good reverse proxy and a paradigm-shifter for its auto-certificates and HTTP/3 support. It's a great example of how high quality Go software can be. (Thank you Matt Holt)
> I still have nightmares about trying to set up SSL with nginx and my own self-managed certificates.
For anyone who needs to run their own CA (which I'm now doing for my homelab), I've found that GUI software like KeyStore Explorer is a sufficiently easy and lazy way of doing it, and it actually works well, both for securing regular sites and for doing mTLS: https://keystore-explorer.org/
For what it's worth, using OpenSSL directly and automating it for more frequently rotated certificates wouldn't be quite as pleasant, but it's doable.
> Shoutout to Let’s Encrypt as well for making this so much easier!
For ACME stuff, Caddy is excellent and honestly probably the best option out there right now!
Nginx (and certbot) or Apache (and mod_md or certbot) will get you most of the way there as well, though the route will be a bit longer.
Caddy is amazingly simple to set up. Automatic HTTPS is a killer feature.
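For reference, the entire Caddyfile for a basic setup like that can be this short (example.com and the backend port are placeholders for your own domain and app):

    example.com {
        # Caddy obtains and renews the certificate for this hostname automatically
        reverse_proxy localhost:8080
    }

No separate certbot run or renewal cron job; the certificate management is built in.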
I have to use Envoy at work for gRPC services and I want to quit the industry every time I have to edit their YAML/protobuf monstrosity of a config system.
Envoy's config surely is complex, but it's also the most flexible and robust way of managing config at large scale that I have come across.
The way Envoy lets you create clusters of Envoys and then set up their config to come from a centralized config source over a gRPC connection is honestly the sanest way of managing thousands of proxies at scale that I have found. Trying to push nginx (or any other config-as-a-file proxy) updates at scale is a nightmare of its own.
We manage a large number of Envoy clusters, where the state of how proxying should happen is all contained within a SQL database whose rules and records change dozens or hundreds of times a minute. One service is responsible for monitoring the DB, translating it to Envoy configs, and pushing them out to thousands of Envoy processes. It has been extremely reliable and consistent: for a given input, it always produces the same output. It's very easy to unit test, validate, and verify, then push the update.
Nginx, and I'd imagine Caddy, are great for set-it-and-forget-it configs or use cases. But Envoy is a programmable proxy where you can have dozens of clusters with different configs that get updated dozens of times a minute. I don't know of any other proxy that offers something like that.
Caddy does (some of) that too, actually. It has a live config API and support for clusters, synchronized configs, and TLS cert management. It can also get the proxy upstreams dynamically at request time through various modules. Some of the biggest deployments program/automate Caddy configs using APIs and multi-region infrastructure.
Envoy is definitely a powerful & useful tool (we use its external auth to centralize our authentication); I just dislike editing large YAML documents with 10 levels of indentation.
My websites run on HTTPS because of how easy Caddy makes it. Caddy made it possible for me. I cannot thank Matt Holt enough for creating Caddy and making it available to all of us.
I haven't used Caddy and I'm sure it's great, but you could have used nginx or anything else as well. Offering HTTPS is honestly pretty easy these days.
I've been using nginx for years and switched to Caddy just because I was so fed up with configuring nginx to automatically renew TLS certs issued by Let's Encrypt. This is so much easier and more reliable with Caddy.
I know about certbot and have considered it, but our customers can use their own custom domain names, which means we need to be able to select the certificate depending on the SNI hostname, which is a bit tricky in nginx. It's possible to use the $ssl_server_name variable in the ssl_certificate and ssl_certificate_key directives, but then the certificate will be loaded on every TLS handshake. You also need to be careful with race conditions when refreshing the certificate, to ensure that the certificate and the key match. Overall, it's doable, and people do it, but it's not as straightforward as just using Caddy.
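For what it's worth, the feature Caddy brings to that exact customer-domain scenario is on-demand TLS, where a certificate is obtained lazily at the first TLS handshake for a hostname. A rough Caddyfile sketch (the ask URL and the upstream port are placeholders; the ask endpoint is something you'd implement yourself to approve or reject hostnames before issuance):

    {
        on_demand_tls {
            # Caddy queries this endpoint before obtaining a cert for an unknown hostname
            ask http://localhost:5555/allowed
        }
    }

    https:// {
        tls {
            on_demand
        }
        reverse_proxy localhost:8080
    }

Since Caddy owns the certificate storage and reloading, the per-handshake loading and cert/key race concerns go away.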
It's really opinionated about it, though. I still don't know how to stop it from trying to get certificates for specific hostnames. It seems to be everything automatic, or nothing at all.
That's its value really. It has the defaults you usually want with minimal boilerplate. If you need/want something more complex it's not necessarily the right tool any more.
I say this not as any kind of dig against Caddy but I feel like the entire value proposition is that its default configuration covers the 90% case so well. Sometimes being easy to use with good defaults goes a really long way.
Define the site as http://hostname in the config instead of just hostname and it will serve only HTTP for that site block. You can have a separate HTTPS site block with a different config as well.
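For example (hostnames and ports made up):

    # served over plain HTTP; no certificate is requested for this hostname
    http://internal.example.com {
        reverse_proxy localhost:8080
    }

    # still gets automatic HTTPS
    app.example.com {
        reverse_proxy localhost:9090
    }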
In simple terms, you can think of a reverse proxy as an HTTP server that acts as a middle man. Here's a simple case of why you might use one: SSL/TLS can be a pain to set up in your web application code. So what you can do is write your web application and not worry about SSL/TLS certs, then place a reverse proxy in front of it and configure the reverse proxy for SSL/TLS. This way you're not dealing with that complexity in your code; something else is managing it. From there, the reverse proxy takes requests and reroutes them to your web application. My reverse proxy is exposed to the internet on port 443; when a packet hits it, it knows to reroute the traffic to a server running on my machine at localhost port 8080.

You can also use a reverse proxy as one single ingress point for many web applications. The reverse proxy will know that requests for http://MyCoolWebApp.com go to localhost:8080 and requests for http://MyOtherCoolWebApp.com go to localhost:8081.
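To make that concrete, the last setup is roughly this as a Caddyfile (same made-up hostnames and ports; Caddy is just one of several proxies you could use here):

    mycoolwebapp.com {
        # TLS is terminated here; the app on 8080 only ever sees plain HTTP
        reverse_proxy localhost:8080
    }

    myothercoolwebapp.com {
        reverse_proxy localhost:8081
    }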
Reverse proxying is just one task that a web server can perform. Caddy also has a file server (directly serving files from disk or from some virtual filesystem), can write static responses, can directly execute code via plugins like FrankenPHP, can manipulate/rewrite/filter/route the request and/or response, etc. Just look at this list: https://caddyserver.com/features
What is the best remote desktop server for Linux?