Sunset header fields will be served as soon as the sunset date is less than some given period of time. (From RFC 8594, "The Sunset HTTP Header Field", Wilde, May 2019.)
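To make that concrete, here is a minimal sketch of a client-side check for the Sunset header defined in RFC 8594. The header value, the 30-day threshold, and the `sunset_within` helper name are all invented for illustration; the header's value is an ordinary HTTP-date, so the standard library can parse it:

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def sunset_within(headers, period=timedelta(days=30), now=None):
    """Return True if the response carries a Sunset header whose
    HTTP-date falls within the given period from now."""
    value = headers.get("Sunset")
    if value is None:
        return False
    when = parsedate_to_datetime(value)  # parses an RFC 5322 / HTTP-date
    now = now or datetime.now(timezone.utc)
    return when - now <= period

# Example: a resource announcing retirement on a fixed (made-up) date.
headers = {"Sunset": "Sat, 31 Dec 2022 23:59:59 GMT"}
check_time = datetime(2022, 12, 15, tzinfo=timezone.utc)
print(sunset_within(headers, now=check_time))  # within 30 days -> True
```

A client could use such a check to warn, log, or start migrating before the resource disappears.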
RFC 7990 -- RFC Format Framework:
There are various tools to automatically format things as necessary, just like any other kind of text wrapping.
As for the overall "philosophy" behind keeping it this way, the honest answer is that the IETF is simply a group particularly unlikely to change things without a clear need, and by this point there are likely all sorts of tools, small and large, that expect RFCs to follow these conventions.
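For a sense of how mechanical that wrapping is: re-flowing a paragraph to the traditional 72-column RFC width is a few lines with Python's standard textwrap module (the sample paragraph is just placeholder text):

```python
import textwrap

paragraph = (
    "Sunset header fields will be served as soon as the sunset date "
    "is less than some given period of time."
)

# RFC plain text traditionally stays within 72 columns.
for line in textwrap.wrap(paragraph, width=72):
    print(line)
```

Real toolchains do more (pagination, headers, indentation of lists), but the line-wrapping itself is a solved problem.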
Here's the "source" XML that is authored: https://openid.net/specs/openid-connect-core-1_0.xml
That can be compiled in to this HTML: https://openid.net/specs/openid-connect-core-1_0.html
Or to this RFC-like plaintext: view-source:https://openid.net/specs/openid-connect-core-1_0.txt
Most new RFCs are authored this way.
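For a sense of the authoring side, here is a rough, hand-written skeleton in the xml2rfc version 3 vocabulary; the title, author, and body text are invented for illustration, and a real draft would carry more front-matter:

```xml
<?xml version="1.0" encoding="utf-8"?>
<rfc version="3" category="info" docName="draft-example-sunset-00">
  <front>
    <title abbrev="Example">An Example Document</title>
    <author fullname="A. Author"/>
  </front>
  <middle>
    <section>
      <name>Introduction</name>
      <t>Body text goes here; the toolchain handles wrapping and
         pagination when rendering to text or HTML.</t>
    </section>
  </middle>
</rfc>
```

The same source compiles to the plaintext, HTML, and PDF renderings linked above.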
> In order to improve the readability of RFCs while supporting their archivability, the canonical format of the RFC Series will be transitioning from plain-text ASCII to XML using the xml2rfc version 3 vocabulary;
Is it readable? Yeah
Is it archivable? Yeah; XML is (AFAIK) one of the most closely followed standards I can think of.
- everyone can do it with any software (or even a typewriter)
- consistency with legacy documents. The format doesn't just change on you as you're reading through legal history
- it works fine, why change it?
I'd also add a guess:
- There's no room for implementation detail to affect formatting. Last thing you want is a whole bunch of formats that are similar but not identical, just because someone's software is a bit different.
- could you imagine trying to get everyone to change? We should be so lucky that everyone's already this consistent
Yes, this is exactly what I do for a living, consolidating policy and legal documents and their related business workflows into modern applications. My requirements for how the text editors work are far more meticulous than your average app exactly for the reasons you stated. Concerns with formatting that most products would blow off as trivial are deal-breakers in this industry.
Because, in such cases, it wouldn’t really matter if the editor renders the source to text incorrectly, as long as the proofer renders it correctly. Just like with WYSIWYG desktop-publishing software.
Also, I guarantee there are any number of downstream consumers of RFCs which take this sort of format as a given, and which will break on even a minor change. And why break those downstream systems if you don't have to?
Basically, any changes will break something. So the benefits of the changes need to be bigger than the costs of the changes. Not to mention the cost in wasted time of all the humans bikeshedding how to change it to make it "better".
Dealing with the ongoing cost of humans having to read across artificial page breaks is a pretty minor concern compared to the costs of all that.
Not saying that proprietary formats aren’t still a bad idea for other reasons, but predictions of unreadability don’t seem to have panned out for any common file formats.
Even with modern Microsoft Word, the formatting of old documents is often mangled.
To this day, up-to-date PowerPoint can’t reliably display presentations made with up-to-date PowerPoint on a different machine, let alone a different OS!
Only someone who has not tried could possibly say that.
The numerous doc file formats are a constant headache for anyone doing document processing. Not even Word itself can read its own older formats reliably. Sometimes you have better luck with LibreOffice, sometimes not.
And that's the most widely used document file format. Anything else from the same era is completely dead in the water. Manually viewing such files can be done in emulators with a bit of work, but any automatic processing is a huge undertaking.
Could be worse. My dad used a video tape format even more obscure than Betamax.
At the surface layer, this era of Excel ("BIFF" documents) isn't too bad: getting, say, a table of small integers representing people's annual salaries out of an XLS file is very doable, and many programs today will get that right.
As you start to dig down it gets nastier pretty quickly. Formulae require implementations that match not just what Microsoft's published documents (I have loads of these on a shelf I rarely look at now) say, but what Excel actually did, bug for bug, back in the 1990s. Maybe the document says this implements a US Federal tax rule, but alas Excel got the year 1988 wrong, so actually it's "US Federal tax rule except in 1988".
You also run into show stoppers that prevent the oft-imagined "Just transform it to some neutral format" because Excel isn't a typed system. What is 4? Did you think it's the number 4? Because the sheet you're trying to parse assumes it's actually the fourth day of the Apple Macintosh epoch in one place, but in another place uses it to index into an array. Smile!
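The epoch ambiguity is easy to demonstrate: the same serial number means different calendar dates under Excel's 1900 (Windows) and 1904 (classic Mac) date systems. A sketch, with a made-up `serial_to_date` helper, that ignores Excel's deliberate Lotus 1-2-3 compatibility bug treating 1900 as a leap year:

```python
from datetime import date, timedelta

def serial_to_date(serial, date_system="1900"):
    """Convert an Excel date serial to a calendar date.

    Sketch only: ignores the intentional leap-year-1900 bug,
    which throws early-1900 serials off by one in real Excel.
    """
    if date_system == "1900":
        return date(1899, 12, 31) + timedelta(days=serial)  # serial 1 = 1900-01-01
    elif date_system == "1904":
        return date(1904, 1, 1) + timedelta(days=serial)    # serial 0 = 1904-01-01
    raise ValueError(date_system)

# The raw cell value 4 is just a number; only workbook metadata and
# the cell's format say whether it's a count, an index, or a date.
print(serial_to_date(4, "1900"))  # 1900-01-04
print(serial_to_date(4, "1904"))  # 1904-01-05
```

Nothing in the cell itself tells you which interpretation the sheet's author intended.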
Finally in complicated sheets (often "business critical") there's a full-blown Turing complete programming language, complete with machine layer access to the OS. Good luck "translating" that into anything except an apologetic error message.
I'm going to have to steal that line. :)
They are formatted in plain text with fixed page sizes because that's what they've always done, it works fine, and there's no compelling reason to change.
> also, how do people write this?
The thing about keeping the same format for a few decades rather than changing it with each shift in popular fashion is that there is plenty of supporting tooling.
Maybe having such a field will help treat those URLs differently from "normal" ones, so that the secret is better protected.
edit: I failed to read the question correctly.
Looking at the JSON, the structure is pretty basic. You could see it rendering in any format/style pretty easily.
Need I say more?
lynx -dump $URL > $FILE