Many people who maintain their own sites in vanilla web technologies create reusable functions to handle this for them: the functions generate headers and the like dynamically, so you don't have to change every single page by hand. Though that does kill the "no JavaScript required" aspect a lot of people like.
Of course you could simply add a build step to your pure HTML site instead!
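The build step can be tiny, too. A rough sketch, assuming a hypothetical layout (sources in pages/, shared fragments in includes/, output in public/) and an include-marker comment convention:

    #!/usr/bin/env python3
    # Tiny "build step" for a plain-HTML site: expand markers like
    # <!-- include: header.html --> before publishing.
    # Hypothetical layout: sources in pages/, fragments in includes/, output in public/.
    import re
    from pathlib import Path

    INCLUDE = re.compile(r"<!--\s*include:\s*(\S+)\s*-->")

    def expand(text):
        # Replace each marker with the contents of the named fragment.
        return INCLUDE.sub(lambda m: Path("includes", m.group(1)).read_text(), text)

    Path("public").mkdir(exist_ok=True)
    for page in Path("pages").glob("*.html"):
        (Path("public") / page.name).write_text(expand(page.read_text()))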
A few extra <object> tags in a blog post are a worthwhile tradeoff if you're literally using raw HTML.
- HTTP/1.1 (1997) already reuses connections, so the extra requests will not double your latency. The DNS lookup and the TCP connection are a high fixed cost paid on the first .html request.
- HTTP/2 (2015) further reduces the cost of subsequent requests with techniques like multiplexing and HPACK header compression.
- You will likely still be 10x faster than a typical "modern" page with JavaScript, which has to load the JS first and then execute it. The tradeoff has flipped: execution latency for JS and DOM reflows can now be higher than network latency. So using raw HTML means you are already far ahead of the pack.
So say the initial .html request takes 50 ms. Adding some <object> includes might bring you to 55 ms, 60 ms, 80 ms, maybe 100 ms.
But you would have to do something pretty bad to get to 300 ms or 1500 ms, which you can easily see on the modern web.
So yes, go ahead and add those <object> tags if it means you can get by with no toolchain. Personally, I use Markdown and some custom Python scripts to generate the header and footer.
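The general shape is something like this (a simplified sketch rather than the exact scripts; paths are illustrative and it uses the third-party markdown package):

    #!/usr/bin/env python3
    # Simplified sketch: convert Markdown posts to HTML and wrap them in a
    # shared header and footer. Paths (posts/, includes/, public/) are illustrative.
    from pathlib import Path
    import markdown  # pip install markdown

    header = Path("includes/header.html").read_text()
    footer = Path("includes/footer.html").read_text()

    Path("public").mkdir(exist_ok=True)
    for post in Path("posts").glob("*.md"):
        body = markdown.markdown(post.read_text())
        (Path("public") / (post.stem + ".html")).write_text(header + body + footer)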
Yes, I’d add that not merely “raw HTML” but a file on disk can be served directly by Linux without copying it through userspace (I forget the syscall), and transferred faster than a page could be generated.
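(It might be sendfile(2).) Purely to illustrate the idea, a rough Python sketch of serving one file that way; real servers like nginx already take this path for static files:

    import os
    import socket

    # Rough illustration only: answer every request with index.html and push the
    # body with sendfile(2), so the kernel copies straight from the page cache to
    # the socket without a userspace buffer. Real servers (nginx, etc.) do this for you.
    srv = socket.create_server(("", 8080))
    while True:
        conn, _ = srv.accept()
        with conn, open("index.html", "rb") as f:
            conn.recv(65536)  # read (and ignore) the request
            size = os.fstat(f.fileno()).st_size
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n"
                         b"Content-Length: " + str(size).encode() + b"\r\n\r\n")
            os.sendfile(conn.fileno(), f.fileno(), 0, size)  # zero-copy body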
Sounds like premature optimization for a simple page. If the objects are given explicit sizes, their regions can be filled in afterward without any resizing, and the included files can be cached for subsequent access.
Yes, and a Makefile was an option as well. But an include tag was a no-brainer not long after HTML was invented, especially after img, link, applet, frame, etc. were implemented.
I've adopted the idea that a blog post is archived when it's published; I don't want to tinker with it again. Old pages may have an old style, but that's OK, it's an archive. Copy/paste works great for this.
The only reason I use a blog engine now (Hugo) is for RSS. I kept messing up or forgetting manual RSS edits.
[EDIT: Dammit, my blog doesn't use that webcomponent anymore! Here's an actual production usage of it: https://demo.skillful-training.com/project/webroot/ (use usernames (one..ten)@example.com and password '1' if you want to see more usage of it)]
Yeah, clearly there are a lot of ways to solve this issue if JavaScript is enabled. But there's a big overlap between the folks who wanna use vanilla web technologies and the folks who want their site to run without JavaScript.
Not remotely! Unless you meant Preact. React ships an entire rendering engine to the front-end. Most sites that use React won't render anything if JavaScript isn't enabled.
Yes, it is. Unfortunately HN has a crazy bias against JavaScript (the least crazy part of the web stack) and in favour of HTML and CSS, even though the latter are worse in every meaningful way.
It isn't crazy, judging by the number of times I've seen posts here and on other blogs talking about a 100 kB web page ballooning to 8 MB because of all the JavaScript needed to "collect page analytics" or do user tracking when ads are included. Granted, that may not be needed for personal websites, but for almost anything that has to be monetized you're going to get stuck with JS cancer because some sphincter in a suit needs "number to go up".
> I've seen posts here and on other blogs talking about a 100 kB web page ballooning to 8 MB because of all the JavaScript needed to "collect page analytics" or do user tracking when ads are included
Perfect example. HN will see a page with 6 MB of images/video, 1 MB of CSS, and 200 kB of JavaScript and say "look at how much the JavaScript is bloating that page".
I don't even know where to begin with the pretence that you can compare HTML with JS and somehow conclude that one is 'better' than the other. They are totally different things. JS is for functionality, and if you're using it to serve static content, you're not using it as designed.
I don't particularly care about "designed for". If you've got to serve something to make the browser display the static content you want it to, the least unpleasant way to do so is with JS.
Least unpleasant to the developer. Most unpleasant to the user. It breaks all kinds of useful browser features (which frontend devs then recreate from scratch in JS, poorly; that's probably the most widespread variant of Greenspun's tenth rule in practice).
> It breaks all kinds of useful browser features (which frontend devs then recreate from scratch in JS, poorly; that's probably the most widespread variant of Greenspun's tenth rule in practice).
Nah, it's the opposite. JS tends to perform better and be more usable for the same level of feature complexity (people who want more complex sites, for good reasons or bad, tend to use JS, but compare like with like and JS comes out ahead). HN just likes to use them as a stick to reinforce its prejudices. (E.g., if you actually test with a screen reader, ARIA labels work better than "semantic" HTML tags.)
> E.g., if you actually test with a screen reader, ARIA labels work better than "semantic" HTML tags
Interesting how this is the opposite of the recommendations from MDN, such as:
Warning: Many of these widgets are fully supported in modern browsers. Developers should prefer using the correct semantic HTML element over using ARIA, if such an element exists.
The first rule of ARIA use is "If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so." -- which also refers to: https://www.w3.org/TR/using-aria/#rule1
Though I can believe that real life may play out differently than the recommendations.
Also, as I understand it, ARIA is orthogonal to JS, and it doesn't alter behavior for browser users.