Hacker News
The CSS at w3.org is gone (w3.org)
111 points by internetter on Jan 25, 2023 | 65 comments



The main CSS file they're using is https://www.w3.org/2008/site/css/minimum. If you download this with `curl 'https://www.w3.org/2008/site/css/minimum' -H 'Accept-Encoding: gzip, deflate, br'`, you see that it starts with 03 D7 86 1F 8B 08 08 and ends with 25 10 91 4D 7B 30 00 00 03. This is a brotli metablock header (03 D7 86: length = 3503 bytes, uncompressed), followed by a gzip header (1F 8B 08 08: signature, compression=deflate, filename present), and ends with a gzip trailer (25 10 91 4D 7B 30 00 00: crc32, size of file) and finally an empty brotli metablock header (03).
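The gzip parts of that byte dump can be sanity-checked offline with a short Python sketch (run against locally generated gzip data, not the live w3.org file):

```python
import gzip
import struct
import zlib

data = b"body { color: #036; }\n" * 100
gz = gzip.compress(data)

# gzip member header starts 1F 8B 08: two magic bytes, then CM=8 (deflate).
# (The live file also had FLG=08, "filename present"; gzip.compress sets FLG=0.)
assert gz[:3] == b"\x1f\x8b\x08"

# gzip trailer: CRC32 of the uncompressed data, then its size mod 2**32,
# both little-endian -- the "crc32, size of file" bytes in the dump above.
crc, isize = struct.unpack("<II", gz[-8:])
assert crc == zlib.crc32(data)
assert isize == len(data) % (1 << 32)
print("gzip header and trailer match")
```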

So, what's happening is that they're serving gzip files from their server (which is hinted by the "content-location: minimum.css.gz" response header), which are being compressed again using brotli somewhere else (e.g. at a reverse proxy).
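The failure mode can be reproduced offline. In this sketch, gzip stands in for the proxy's brotli layer (brotli isn't in the Python stdlib); the layering problem is identical either way: the client undoes only the declared encoding and is left holding compressed bytes.

```python
import gzip

css = b"body { margin: 0 }"

# Origin serves a pre-gzipped file but never declares Content-Encoding: gzip.
on_disk = gzip.compress(css)

# A proxy sees "no encoding", compresses again, and declares only its own
# layer (brotli in the real incident; gzip here as a stand-in).
on_wire = gzip.compress(on_disk)
declared = ["gzip"]

# The browser undoes exactly the declared encodings...
body = on_wire
for _ in declared:
    body = gzip.decompress(body)

# ...and hands the CSS parser gzip bytes instead of a stylesheet.
assert body != css
assert body[:2] == b"\x1f\x8b"  # still starts with the gzip magic number
```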


My guess is they've been serving their css file direct from disk with no content encoding header, but browsers rolled their eyes and decoded it anyway (do they do that?). But now it's changed to serve behind a reverse proxy which isn't so forgiving and recompresses it with brotli (does it even reduce the size?) and that's too much for the browser to infer implicitly. Fin.


Worth noting: if you're not seeing the Brotli header, it's probably because something (e.g. your browser) is transparently decoding the declared Content-Encoding (`br`, for Brotli). That leaves you with the raw gzip data underneath: the user agent has already undone the one encoding that was declared, and it won't decode a second, undeclared layer.


The problem is that they are sending gzipped css without specifying Content-Encoding in the response headers.
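If the origin is Apache negotiating onto a pre-compressed file (which the `content-location: minimum.css.gz` header suggests), the usual fix is to register `.gz` as an encoding rather than a type, so the negotiated response carries `Content-Encoding: gzip`. A hypothetical httpd.conf fragment, not W3C's actual config:

```apache
# mod_mime: treat .gz as a content *encoding* layered on the negotiated
# type (text/css), so responses get Content-Encoding: gzip.
AddEncoding gzip .gz

# Without this, a stray "AddType application/gzip .gz" (or no mapping at
# all) leaves the gzip bytes undeclared, which is the bug seen here.
```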


Tested it just now (Firefox on Linux), and the response (for me) has 'Content-Encoding: br'.

So, it seems more like it's indicating Brotli compression, but the actual file (https://www.w3.org/2008/site/js/main) is gz encoded.


HTML:

    <link rel="stylesheet" href="/2008/site/css/minimum" type="text/css" media="all" />
    <style type="text/css" media="print, screen and (min-width: 481px)">
    /*<![CDATA[*/
    @import url("/2008/site/css/advanced");
    /*]]>*/
    </style>
    <link href="/2008/site/css/minimum" rel="stylesheet" type="text/css" media="only screen and (max-width: 480px)" />
    <meta name="viewport" content="width=device-width" />
    <link rel="stylesheet" href="/2008/site/css/print" type="text/css" media="print" />
/2008/site/css/minimum headers:

    Content-Type: text/css;charset=utf-8
    Transfer-Encoding: chunked
    Connection: keep-alive
    content-location: minimum.css.gz
    vary: negotiate,Accept-Encoding
Downloading this file, I can see it's valid "gzip compressed data".

Seems to be missing the Content-Encoding header?!


https://www.w3.org/2008/site/css/advanced.css is even still there unzipped.


Interestingly, at least with Firefox on Linux it seems to be decompressing the .css files ok... but borking on the javascript file that has the same incorrect content encoding header.


And the size difference appears to be only 0.2 kB? 6.1 kB for the zipped version and 6.3 kB for the unzipped one.


I see advanced.css's uncompressed size is 25K, and compressed (by zopfli) it's 5.4K.
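For a rough sense of why the ratio is that large: CSS is repetitive, so deflate does well on it. A stdlib-gzip sketch on synthetic CSS-like text (zopfli, which produced the 5.4K figure, emits the same deflate format but typically shaves a few more percent off):

```python
import gzip

# Synthetic, highly repetitive CSS-like text (the real advanced.css went
# from ~25K to ~5.4K with zopfli, per the comment above).
css = ("#main .nav li a { color: #036; text-decoration: none }\n" * 400).encode()
packed = gzip.compress(css, compresslevel=9)
print(f"{len(css)} -> {len(packed)} bytes ({len(packed) / len(css):.1%})")
assert len(packed) < len(css) // 4  # easily better than 4:1 on this input
```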


This no-js-no-css trend is gaining traction


What's not to like! 0K of CSS, responsive, works on desktop, mobile, and tablets, not to mention blazing-fast load speed.


Responsive? I have to scroll past the left column ("Site Navigation") to read anything. That's the opposite of responsive (e.g. 2 columns on wider screens, 1 in mobile).


It looks exactly as it does on mobile!


If 2kb of CSS makes the difference between "blazing fast" and not "blazing fast", I think that's on you :)


NoJS I was aware of (and support where it makes sense, which is most of the time), but NoCSS? When did that start, and why?


There's ... something ... to be said for a site with minimal markup which works in text-mode browsers.

Though direct usage of those is probably a sliver of a fraction of all online usage, this also means that automated tools can work with a website. Scraping has both dark and light patterns and use-cases.


I'd assume it's a joke.


But also...

I'd hardly call it a trend, because it's been a thing for as long as I've used the web; some 'classic'-style programmers simply opt out of CSS for their personal sites. One such example is Dan Luu [0], who wrote in a post [1] that styling decreased his viewership.

Additionally, going without CSS can emphasize good structure that styling had concealed.

[0] https://danluu.com/

[1] https://danluu.com/look-stupid/


Anyone who uses the abominable table border design from HTML1 needs some special kind of punishment.


Now I can read it in Emacs


Eww :) Pun intended!

By the way, has anyone used the eaf-browser? I use the other eaf applications like PDF viewer, but for some reason, the eaf-browser seems to eat up a lot of resources.


That's because eaf-browser just uses chrome in a docker container, does screencaps -> ocr, then runs spellcheck, plus chatgpt on top for sanity checks, then copies it all back from an aws instance.


Send one HTML file with blue links like in the old days!


Whenever styling comes up, there are always a few people on HN who claim to prefer "no CSS" because they "always" use Lynx, or deeply customize all content on their own from scratch, or are using a 1G flip phone, or care about milliseconds of loading time, or are RMS (oh wait, stallman.css exists!).

Sure, some idiosyncratic blogs can get away with that css-free "look" (eg https://danluu.com/). The W3C? No. But now, I guess so!


> some idiosyncratic blogs can get away with that css-free "look" (eg https://danluu.com/)

Not even. The articles there are incredibly compact blocks of text that are way too wide to be comfortable for reading.

Minimal CSS is an absolute requirement if you want a website that is pleasant to read.


But with a couple of keystrokes that are in my muscle memory I can narrow my browser window to any reading width I want. So there’s no problem, and the site is easier to read than almost all modern websites.


Modern web browsers also come with a readability mode built-in.


Though in fairness, that won't work for the w3.org homepage (at least not on Firefox).


The width is only one of the many problems harming readability. You can't increase line spacing with a shortcut for example.


Dan does have a bit of CSS, though:

    .pd{width:4em;flex-shrink:0;padding-bottom:.9em}.par div{display:flex}


As someone with a visual impairment, when I open that website I see someone that wants to put weird internet idealism above not giving me eye strain. This attitude is a bit of oldschool internet culture that I’m glad to see die off.


Originally it was the job of browsers to provide good readable non-eye straining, user controllable default styles. In the browser wars it was decided few websites "wanted" good defaults, they'd rather "no defaults" so that every website could be styled as its own special snowflake (to meet brand goals or design team whims) and so default styles became a lowest common denominator trapped in the 90s.

Eventually browsers rediscovered that need to provide good defaults and now call it "Reading Mode" and force users to opt in to that behavior on a page by page basis.

Something lately I've been wondering is if there should be a way to signal to a browser "this is an intentionally minimal HTML page, please style it by default as if the user chose Reading Mode".


It's like CSS Naked Day [0] in the dead of January

[0] https://css-naked-day.github.io/


The site actually looks decent. Welcome back to 1995.


It's a testament to good structure that the site is legible without its styling.

... legible, not good. If I had to read documentation that looked like that all day I'd consider a career change (or, perhaps, building an infrastructure to improve the legibility of web pages...).


Yeah, props to them for having some structure!


Good luck if you're using one of those over-engineered hipster SPA frameworks.


A screenshot for those who read this news after it has been fixed:

https://imgur.com/a/AauqekH


Thanks. I also put it into archive.org before posting

https://web.archive.org/web/20230125115413/https://www.w3.or...


You can also inspect the source and delete the link to the stylesheet if you want to experience it yourself.


Firefox permits toggling site CSS:

<https://news.ycombinator.com/item?id=34521875>


It surprisingly reads better without the css. It somehow feels like my room when I throw out things I don't need.


That's a WCAG paddlin'.

    * Focus indicators
    * Text block width
    * Target Sizes
Apart from that, even if the CSS were there, there's some trouble with the markup:

    * <abbr> tags missing
    * No landmark elements
    * bloody tabindex
    * repeated same div
    * search input has no visible label
    * no context-sensitive help or whatnot


I see they finally come out in favor of the perfect website: https://motherfuckingwebsite.com/


Best viewed in Lynx.


Someone got very tired of adjusting margins and just said: "screw it, I'm done. It's even better without CSS."


If only the CSS WG had gone away ...

Would've been the last thing WWW Inc. has influence over on the web.

And even that has been a joke (of the consider-your-career-choices variety) from the beginning, while today it's just an annoyance held onto for job security.

At least their complaining about being financially dependent on Google, and about being part of the web-standards circus that prevents real independent standardization, makes for an entertaining read lately [1], if you had any doubt that Google is the one calling the shots.

[1]: https://mastodon.social/@robin/109524929231432913


When will people learn to use web standards correctly?


Am I the only one who likes it better this way?


I personally hate it and want the CSS back.


I like it "better this way" conceptually, in that I like browsing with w3m/lynx/links/eww/etc but the content structure is bad "this way" (ex: scrolling past site nav links too much before I reach actual content; most of the vertical nav should and would be horizontal if this weren't a bug; etc).


A comment you can only see on HN. I really like the pointless hatred towards CSS and JavaScript on this site lol


No, I think most real old-school developers prefer that, and it reads a lot better in a Lynx/Links2 type of browser. Browser != Chrome || Firefox.


The bug's been fixed.

Firefox users (desktop) can see the unstyled site by selecting View -> Page Style -> Unstyled.

And yes, it is pleasantly readable even without CSS. Not ideal, but good.


To view the missing CSS:

    wget -qO - https://www.w3.org/2008/site/css/minimum | gunzip -



Disabling Brotli in Cloudflare would stop this error until they fix the Content-Encoding.


Hm, the CSS seems to be there, but looks very obfuscated? Could it be encrypted, or attacked?


It is gzipped


Honestly even if this is a bug, it's really refreshing.


I see this as an absolute win


less is more


Slow news day?


Feel free to downvote if you don't like it :)



