I think the French legislation is aimed at most major resolvers. You might get away with more niche ones for now, but the only stable way is to self-host a recursive resolver (like Unbound) that walks the DNS tree itself.
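For reference, a fully recursive Unbound setup is just the absence of any forward-zone. A minimal sketch (listen address, access range, and the trust-anchor path are assumptions; adjust for your system):

```
# /etc/unbound/unbound.conf -- minimal recursive resolver, no forwarding
server:
    interface: 127.0.0.1                  # listen only on loopback
    access-control: 127.0.0.0/8 allow     # who may query
    # DNSSEC validation via the root trust anchor
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    # Note: no forward-zone stanza -- Unbound walks the tree from the
    # root servers itself instead of asking a third-party resolver.
```

With no upstream forwarder configured, the blocked public resolvers never see your queries at all.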
Hosting is never hard. It's about maintainability. How do you handle HA? How will you expose the service? What about backups? How efficiently are you running it? That's just the tip of the iceberg. For the average Joe, this is not something they wanna deal with.
You are doing a bare-minimum job, which is of course not what I intended. Your workloads don't seem to be that sensitive. If you can afford a few minutes of downtime, sure. I can't afford downtime, because lots of critical services will fail, which will require manual intervention.
The idea is that you do not install packages, at all. Instead, you would use Flatpak and the like, plus mutable containers (Distrobox or whatever) if needed.
Flatpak wants to download a gazillion dependencies before the actual package. And even the original package size is huge compared to apk, deb, rpm, etc. Stop with the bloat. Native package managers are always going to be fast, easy to use, and minimal in size.
> Native package managers are always going to be fast, easy to use and have a minimal size
They become necessary when you are dealing with Windows-style "applications" that drag in enormous amounts of dependency. I had LibreOffice fall over on Void Linux because of some JVM issue. You think I want to debug a JVM issue in 2020-something to type some text? No thanks - Flatpak.
Standard Debian packages share their dependencies. Every Flatpak package is a special snowflake with its own dependencies. That's the point of it. But it makes download and install sizes enormous.
Flatpaks also share dependencies. Usually not quite as much as distro-specific packages, but it's wrong to suggest that each Flatpak ships all dependencies separately.
I know this was sarcasm, but my experience is that I download much less with Debian's APT than I did with Flatpak.
I can think of two explanations: (1) Debian packages have more shared parts, and (2) they have optional dependencies ("recommends" and "suggests") which I disable by default. Because of (1), there will be many library packages to download, but the overall volume is reduced.
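For point (2), disabling recommends and suggests by default is a two-line APT config drop-in (the filename below is arbitrary; any file under that directory works):

```
# /etc/apt/apt.conf.d/99no-recommends
# Don't pull in optional dependencies unless explicitly requested.
APT::Install-Recommends "false";
APT::Install-Suggests "false";
```

The same effect is available per-invocation with `apt install --no-install-recommends <pkg>`.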
This. Installing packages via rpm-ostree is something one can do, but in most cases it should not be necessary. It's more of an escape hatch than an everyday tool. Most additional software will be installed in the user's home directory.
It’s not really about the manifest. It’s about the APIs available to extension programmers. Chrome has made the "webRequestBlocking" API unavailable and that’s what’s affecting adblockers. Chrome will eventually remove the code supporting this API, and it is not feasible for downstream to make it available anyway.
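As a sketch of what's going away: with webRequestBlocking, a Manifest V2 listener returns a verdict synchronously, which is what lets an adblocker cancel a request before it leaves the browser. The blocklist hosts below are made up, and the `chrome.*` registration is left as a comment so the decision logic itself runs anywhere:

```javascript
// Hypothetical blocklist -- in a real adblocker these come from filter lists.
const blocklist = ["ads.example.com", "tracker.example.net"];

// The listener body an MV2 adblocker would run: returning {cancel: true}
// blocks the request synchronously.
function onBeforeRequest(details) {
  const host = new URL(details.url).hostname;
  const blocked = blocklist.some(
    (b) => host === b || host.endsWith("." + b)
  );
  return { cancel: blocked };
}

// In a real extension this would be registered with the "blocking" option:
// chrome.webRequest.onBeforeRequest.addListener(
//   onBeforeRequest, { urls: ["<all_urls>"] }, ["blocking"]);

console.log(onBeforeRequest({ url: "https://ads.example.com/banner.js" }).cancel); // true
```

Manifest V3's declarativeNetRequest replaces this with static rules the browser evaluates itself, so the extension never gets to run arbitrary per-request logic.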
They could, theoretically. But just imagine what that actually means. Unless you cease merging upstream/the project you've forked, you'll have to resolve all conflicts caused by this divergence.
And that's a lot of work for a multi million LOC project, unless the architecture is specifically made to support such extensions... which isn't the case here.
And freezing your merges indefinitely isn't really viable either for a browser
A quick look at the code gives me the impression that webRequestBlocking is a fairly trivial modification to webRequest, and they seem to be keeping the latter. This leads me to two conclusions: it wouldn't be terribly hard for a fork maintainer to keep webRequestBlocking, and Google's technical excuses for removing it are disingenuous.
That may be true now, but will it still be true when Google next refactors their request code under the assumption that no requirements for a webRequestBlocking API exist?
So go make an LLM manage the fork or something. Everyone keeps telling me they are amazing at code these days. Surely it can do a task like that if that's all it's doing all day.
The codebase is huge, sure, but the particular feature is relatively small and (as I assume and as verified by another poster) quite easy to implement: https://news.ycombinator.com/item?id=43204603
I agree that voice control is great, but I feel we’re at an “uncanny valley” moment. You can talk to a machine fluently in natural language, until you suddenly can’t and it makes the dumbest misunderstanding, either from recognition or from parsing.
You still get the best results by talking like a robot.
Is that really the case though? I very much doubt it.
I get a /56. Dynamic configuration mechanisms exist. I literally do not have to do anything except flip a switch. My router even supports Prefix Delegation, so a downstream router/access point can do its thing.
> Is that really the case though? I very much doubt it.
It's what AT&T fiber does. Well, they give a /60 to their shitbox, but if you want your own router with a public IP then you're stuck with a single /64 for it, at least when taking the "easy" path.
You can get some routers to request multiple IPv6 blocks, and then you get the freedom of a whopping 7 subnets, but you've also left the "ridiculously easy" way waaaay in the rear-view mirror at that point anyway.
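The subnet arithmetic behind those prefix sizes is just powers of two: each bit between the delegated prefix length and /64 doubles the number of usable /64 networks. A quick sketch:

```javascript
// Number of /64 subnets a delegated IPv6 prefix can be carved into.
// /56 and /60 match the delegations mentioned in this thread.
for (const prefixLen of [56, 60, 64]) {
  const subnets = 2 ** (64 - prefixLen);
  console.log(`/${prefixLen} -> ${subnets} /64 subnet(s)`);
}
// /56 -> 256, /60 -> 16, /64 -> 1
```

So a /56 delegation leaves plenty of room for downstream prefix delegation, while a single /64 leaves none.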
Windows Filtering Platform does it. Windows Firewall barely taps WFP's potential and definitely does not do the whole "ZoneAlarm" style allow/deny thing.
It's an en dash, which is supposed to denote ranges (e.g. 2013–2024), basically meaning "to", so it's still incorrect here even if it weren't extremely similar to the hyphen and the minus sign and easy to confuse.
Typographically the latter character is not a minus sign, it’s an ASCII hyphen-minus, which is usually designed to look more like a hyphen than a minus sign. An actual minus sign typically looks more like a dash than like a hyphen-minus.
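The three look-alikes in question really are distinct code points, which a quick check makes visible:

```javascript
// Hyphen-minus, en dash, and minus sign are three different characters.
for (const ch of ["-", "–", "−"]) {
  const hex = ch.codePointAt(0).toString(16).toUpperCase().padStart(4, "0");
  console.log(`'${ch}' is U+${hex}`);
}
// U+002D HYPHEN-MINUS, U+2013 EN DASH, U+2212 MINUS SIGN
```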
Interesting stuff! Though I don’t get why b00lin would have to prove that they weren’t cheating. This is not a criminal case, but still. Activision was denying access to a service that was paid for.
I hate GitHub Actions, and I hate Azure Pipelines, which are basically the same. I especially hate that GitHub Actions has the worst documentation.
However, I’ve come full circle on this topic. My current position is that you must go all-in with a given CI platform, or forego the benefits it offers. So all my pipelines use all the features, to offer a great experience for the devs relying on them: fast, reproducible, easy-to-reason-about steps; useful parameters for runs; ...
An appliance wouldn't be able to use that, of course.