SVG feels much like HTML to me, especially where animations are involved: at first sight it is quite nice and simple, does its job well, can be handled by fairly basic viewers (as well as converters and editors), and is easy to generate. Then there are even more features with CSS and JS, which also look neat, but with them the simplicity goes away, and along with it the wide support for the full functionality, and the compatibility (due to partial support and unexpected behavior in different contexts). It still looks like a fine option when animations are needed, but I would rather avoid those in SVG when they are not necessary.
I think an article like that would benefit from focusing more on protocols rather than on particular APIs for working with them: referencing the specifications and providing examples of messages. I am pretty sure the article is about chunked transfer encoding [1], but that was not mentioned anywhere. Though possibly it tries to cover newer HTTP versions as well, abstracting from the exact mechanisms; in that case, "JS API" in the title would clarify it.
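The protocol-level mechanism is simple enough to sketch in a few lines. A minimal encoder following the chunk framing from RFC 9112 (size in hex, CRLF, data, CRLF, terminated by a zero-size chunk; chunk extensions and trailers omitted):

```python
# Sketch: encoding a body with HTTP/1.1 chunked transfer encoding.
# Each chunk is its size in hex, CRLF, the data, CRLF;
# a zero-size chunk marks the end of the stream.
def chunk_body(parts):
    out = b""
    for part in parts:
        out += f"{len(part):x}\r\n".encode() + part + b"\r\n"
    out += b"0\r\n\r\n"  # terminating chunk, no trailer fields
    return out

print(chunk_body([b"Hello, ", b"world!"]))
```

This is what lets a server start sending a response before knowing its total length, which is the part the streaming JS APIs ultimately sit on top of.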
As for the tendency described, this seems to be an instance of the law of the instrument [2], combined with some instruments being more trendy than others. Which comes up all the time, but raising awareness of more tools should indeed be useful.
> It's such a wild idea that I've never heard it in the public discourse.
IIRC there is a Dilbert comic strip about that.
> Clickbait [...] would become worthless overnight
Advertisement is not the only incentive. E.g., the host of the Veritasium YouTube channel explicitly switched to clickbait, explaining it as a way to reach a wider audience. Another example is clickbait submission titles on HN, not all of which are for the sake of advertisement (unless you count HN submissions in general as advertisements themselves, of course).
> When I say advertising, I also mean propaganda. Propaganda is advertising for the state
Not necessarily for the state, the usual definition includes furthering of ideas in general. In more oppressive regimes, propaganda of certain ideas is actually banned, as it used to be in even more places ("heresy" and suchlike). Combined with selective enforcement, it is as good as banning all propaganda. It may be a particularly bad example of such a ban, but still an illustration of the dangers around it.
I think a better path towards a world without (or almost without) commercial advertising is not via coercion, but, as kaponkotrok mentioned in another comment, via education and public discussion (which may also be called "propaganda"), shifting social norms to make such advertisement less acceptable. People can make advertisements unprofitable if they choose to: not just by ignoring them (including setting up ad blockers), but also by intentionally preferring products not connected to unpleasant and shady tactics, including those beyond advertisement: slave labor and other human rights violations, unsustainable energy sources, global warming, animal cruelty, monopolies, proprietary or bloated software and hardware are some of the common examples. Social norms and that kind of enforcement seem to be less brittle than laws, and harder to turn into an oppression mechanism.
Although, since I brought up monopolies and other issues, perhaps state agencies may also usefully assist with restriction of advertisement, as they do with those. Social norms and laws are not mutually exclusive, after all.
I like WHOIS with its extreme simplicity [0]. RDAP, on the other hand, works on top of a large and changing HTTP [1], and uses a JS-derived serialization format [2]. RDAP has advantages, such as optionally benefiting from TLS, the data being better structured and defined, but the cost in added complexity seems high.
As far as I can see, an RDAP request is a simple HTTP request, looking like http://example.com/rdap/ip/192.0.2.0. Web servers still support HTTP/1.1 (or probably even HTTP/1.0 and HTTP/0.9). This is trivial for clients to implement; a simple HTTP request like that is about the simplest thing to do. You'll have to use curl or wget instead of netcat if you want to do it manually. No big deal.
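For illustration, here is the raw HTTP/1.1 message behind such a lookup, built by hand (using the example URL above; the `application/rdap+json` media type is the one registered for RDAP in RFC 7480):

```python
# Sketch: the plain-text HTTP/1.1 request behind an RDAP IP lookup.
# One could literally pipe this into a TCP connection to port 80.
host, path = "example.com", "/rdap/ip/192.0.2.0"
request = (
    f"GET {path} HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Accept: application/rdap+json\r\n"
    "Connection: close\r\n"
    "\r\n"
)
print(request)
```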
"A JS-derived serialization format" ... You mean JSON, which is about the lowest common denominator in Internet data exchange these days (and has been ever since we found out that XML was overly complex and JSON was much easier to use). You'll have to use something like jq instead of grep to extract information from the data manually. Or rather, you'll be able to use the powers of jq. Again, I don't really see the problem here.
I did not mean that there is a problem with it, only that I appreciate the simplicity of WHOIS. While HTTP-with-JSON is perhaps the most practical solution these days.
To clarify my point of view, an ad hoc HTTP client for this indeed should not be hard to write from scratch, demonstrating that there is not much complexity in that. The server part would be a little more tricky; still doable, but not as easily as for WHOIS, and in most cases a more sensible approach would be to use libraries (or a program like curl, in case of shell scripting or manual usage) for that, as you said. Likewise with JSON: though one can deal with it as with text, some added tools (a library or jq, depending on context) would be sensible to use. But then added dependencies lead to all kinds of issues in non-ideal conditions (e.g., when it is problematic to install those). But again, I am not saying that this should stop adoption of RDAP.
On top of that, a complete and proper HTTP/1.1 implementation, server or client, would be quite large. And JSON, while indeed common and not particularly complicated, still has bits I find awkward (no sum types or a standard way to encode them, though it has "objects" and arbitrary-looking primitive types; no single standard for streaming, either), so working around those is not exactly pleasant. These add up to the difference between a trivial protocol and, well, a non-trivial one. I appreciate such trivial yet working and useful solutions, though the other kind is commonly useful as well.
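An illustration of the sum-type point: one common ad hoc convention is a tag field plus a payload, but nothing in JSON itself enforces it, so every producer and consumer has to agree on it separately. The tag names below are made up:

```python
import json

# A hand-rolled tagged-union convention on top of JSON objects.
# The "type"/"value" field names are arbitrary; JSON standardizes
# no such encoding, which is the awkwardness described above.
def encode(value):
    if isinstance(value, int):
        return {"type": "number", "value": value}
    return {"type": "text", "value": str(value)}

def decode(obj):
    if obj["type"] == "number":
        return int(obj["value"])
    return obj["value"]

wire = json.dumps(encode(42))
print(decode(json.loads(wire)))
```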
It's a bit unreasonable, IMO, to criticize the fact that RDAP communicates using a JSON API -- while JSON is inextricably tied to JavaScript (and it's not without its issues), it's ubiquitous on the modern web for serializing data, in any even vaguely REST-shaped API.
You could argue that a more compact, binary, wire format is more appropriate (though I wouldn't, in this case, since for small, simple payloads, I think simplicity and human readability trumps sheer wire efficiency). You could argue that JSON's a poor serialization language in general (which is debatable, contextual, and in this case, I don't think there's a widely-accepted better option).
But let's not act like "a JS-derived serialization format" is some kind of mark of the beast here.
Indeed, that one would be more descriptive. At first I thought the project was a joke about pretend-discovering that you can create a web page with no tools other than a text editor, and without embedding external resources, similar to vanilla-js.com. I wonder whether "note" is readily understood as "a document edited in a WYSIWYG editor" by some, possibly because it is called that in some software.
User experience with it, and opinions on it, seem to vary. I, for one, rather like both postfix and dovecot: both are well-documented, maintained, lightweight, reliable, yet configurable and feature-rich software, with few dependencies and good track records.
It is more than SMTP hosting, but I use a VPS for multiple online services, including SMTP. SMTP in particular is handled by postfix, OpenDKIM, and policyd-spf. Its incoming mail part is additionally supported by uacme with knot, nftables, and fail2ban, but it sounds like you are not in need of that. 1 GB of main memory, 10 GB of disk space, and a bit of CPU make a comfortable (overkill, even) system to run a variety of services with a few users and without particularly bloated software, and I think it normally costs less than $5/month (or even under $1/month with occasional promotional plans), with plenty of hosting providers to choose from.
Edit: my current setup is documented [0], and the older (slightly different) one is linked from it.
Same, with Firefox on a Linux desktop, even after enabling JS.
Edit: apparently it uses Oklab/oklch() [1], which is supposed to work in Firefox since version 113, but does not seem to work in 115. Or possibly it also uses something else that breaks it.
Same here (not certbot and containers, but the part about reusing certificates for multiple services): it feels wrong to couple certificate acquisition with a web server. Apparently it is convenient when a web server is the only TLS-using service, or at least when it is at the center of the setup and HTTP-based certificate acquisition is used, which seems to be a common enough case to justify this, but it is still an odd coupling in general.
As a counterpoint, I often find custom formats nicer than reused generic ones: atomic values (individual lexemes, primitive types: strings, numbers, etc.) tend to have custom formats or semantic restrictions anyway (say, dates, length-limited strings, range-limited numbers; though in some cases numbers are also just restricted strings), so that part is mostly handled from scratch either way. And then there is composition, with JSON -- apparently the most popular format now -- not having a standard way to encode sum types, so you basically have to hammer those on top. At that point it feels like building on top of a JSON structure instead of building on top of a sequence of characters, but not in a very different way: sometimes it is more helpful in some parts, sometimes it adds more clutter than needed. Likewise with XML: it is not uncommon to see something like <my-custom-format>a string in a custom format here, or even a base64-encoded blob</my-custom-format>.
In practice it is easier to use JSON for interfaces, since for someone unfamiliar with parsing it is less work to sort out on the other end, and then you have at least a partially specified structure, the rest can be easier to specify (with JSON schema, for instance), there are readily available libraries and tools to work with it, to do at least some of the parsing without writing a parser. But I think cleaner interfaces can be achieved with custom formats.
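To sketch what I mean by atomic values being handled from scratch either way, here is a hypothetical one-record-per-line custom format ("name port"), where the restrictions on each value have to be checked by hand whether or not the outer container is JSON:

```python
# A tiny hypothetical custom format: one record per line, "<name> <port>",
# with a short alphanumeric name and a range-limited port number.
# The per-value validation is the same work a JSON schema would delegate
# back to custom rules anyway.
def parse_record(line):
    name, port_str = line.split()
    if not (name.isalnum() and len(name) <= 16):
        raise ValueError("bad name")
    port = int(port_str)
    if not 0 < port < 65536:
        raise ValueError("port out of range")
    return name, port

print(parse_record("smtp 25"))
```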