Hacker News

> Yes, you must copy and paste content

Many people who maintain their own sites in vanilla web technologies create reusable functions to handle this for them. It can generate headers and the like dynamically so you don't have to change it on every single page, though that does kill the "no JavaScript required" aspect a lot of people like.

Of course you could simply add a build step to your pure HTML site instead!



I recently learned the object tag can do what I wished for in the 90s... work as an include tag:

    <object data="footer.html"></object>
Turn your back for twenty-five years, and be amazed at what they've come up with! ;-)

Should cut a lot of boilerplate that would otherwise get out of sync on my next project, without the need for templating.
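For anyone curious, a full page then carries only the post body plus placeholders (header.html and footer.html are whatever fragment names you pick):

    <!doctype html>
    <body>
      <object data="header.html"></object>
      <main>...the post itself, written by hand...</main>
      <object data="footer.html"></object>
    </body>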


Unfortunately that will require the client to make additional web requests to load the page, effectively doubling latency at a minimum.


A few extra <object> tags in a blog post are a worthwhile tradeoff, if you're literally using raw HTML.

- HTTP/1.1 (1997) already reuses connections, so it will not double latency. The DNS lookup and the TCP connection are a high fixed cost paid once, on the first .html request.

- HTTP/2 (2015) further reduces the cost of subsequent requests, with techniques like multiplexing and HPACK header compression.

- You will likely still be 10x faster than a typical "modern" page with JavaScript, which has to load the JS first and then execute it. The tradeoff has flipped: execution latency for JS and DOM reflows can now be higher than network latency. So using raw HTML means you are already far ahead of the pack.

So say you have a 50 ms time for the initial .html request. Then adding some <object> might bring you to 55 ms, 60 ms, 80 ms, 100 ms.

But you would have to do something pretty bad to get to 300 ms or 1500 ms, which you can easily see on the modern web.

So yes go ahead and add those <object> tags, if it means you can get by with no toolchain. Personally I use Markdown and some custom Python scripts to generate the header and footer.
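The stitching part of such a script can be tiny. A sketch (the fragment strings and the `build_page` helper are made up for illustration; the Markdown-to-HTML step is elided):

```python
from pathlib import Path

# Shared chrome, inlined into every page at build time (hypothetical fragments).
HEADER = "<header><a href='/'>home</a></header>"
FOOTER = "<footer>generated, not copy-pasted</footer>"

def build_page(title: str, body_html: str) -> str:
    """Wrap already-rendered body HTML with the shared header and footer."""
    return (
        "<!doctype html>\n"
        f"<html><head><title>{title}</title></head>\n"
        f"<body>{HEADER}\n{body_html}\n{FOOTER}</body></html>\n"
    )

def build_site(src_dir: Path, out_dir: Path) -> None:
    """Render every body fragment under src_dir into a full page in out_dir."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for frag in sorted(src_dir.glob("*.html")):
        (out_dir / frag.name).write_text(build_page(frag.stem, frag.read_text()))
```

No server-side or client-side work at all: the boilerplate is baked in once, at build time.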


Yes, I’d add that not merely “raw html” but a file on disk can be served by Linux without copying the data through user space (I forget the syscall), and transferred faster than it could be generated.


sendfile? splice? io_uring?


Yes, most likely sendfile.
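If you want to poke at it, Python exposes the same syscall as os.sendfile, so a toy sketch of the idea (hand-rolled, not how a real server structures it) looks like:

```python
import os

def serve_file(out_fd: int, path: str) -> int:
    """Copy a whole file to a socket via sendfile(2).

    The kernel moves the bytes from the page cache straight into the
    socket buffer, with no read()/write() copies through user space.
    """
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:
            sent = os.sendfile(out_fd, f.fileno(), offset, size - offset)
            if sent == 0:  # peer went away
                break
            offset += sent
    return offset
```

os.sendfile is available on Linux and the BSDs; on old Linux kernels the destination had to be a socket.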


Sounds like premature optimization for a simple page. If the objects are given explicit sizes, their regions can be filled in afterward without a reflow, and they'll be cached for subsequent access.


The other solutions are even easier and don’t double latency.

> be cached for subsequent access.

So now you need to set up cache control?


Nope and nope.


Good explanation. I’ll stick with cat.


Have a look at the rest of the thread. Chubot explains at length, and I added a few points.


Hey, I need to try this out. So it's like an iframe, except without the frame part and all its issues?


I didn't know you could use object tags in that way! Thanks. That seems like a great solution if you're cool with an extra request.


Couldn't you sort of do that using server-side includes back in the '90s? Assuming that your web server supported it.
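For reference, the directive looked something like this — a comment the server expands before sending the page (needs mod_include or nginx's ssi module enabled, and traditionally an .shtml extension):

    <!--#include virtual="/header.html" -->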


Yes, and a Makefile was an option as well. But an include tag was a no-brainer not long after HTML was invented, especially after img, link, applet, frame, etc. were implemented.


I've adopted the idea that a blog post is archived when it's published; I don't want to tinker with it again. Old pages may have an old style, but that's OK, it's an archive. Copy/paste works great for this.

The only reason I use a blog engine now (Hugo) is for RSS. I kept messing up or forgetting manual RSS edits.


I really love this! I've seen it in action a couple times in the wild, and it's super cool seeing how the site's design has evolved over time.

It also has the benefit of forcing you to keep your URIs stable. Cool URIs don't change: https://www.w3.org/Provider/Style/URI.html


Or, let me be cheeky: you could add some `<?php include('header.html'); ?>` in your HTML.


> It can generate headers and the like dynamically so you don't have to change it on every single page

Yeah, I noped out of that and use a client-side include (a web component) so that my HTML can have `<include-remote remote-src='....'>` instead.

Sure, it requires JS to be enabled for the webcomponent to work, but I'm fine with that.

See https://www.lelanthran.com for an example.

[EDIT: Dammit, my blog doesn't use that webcomponent anymore! Here's an actual production usage of it: https://demo.skillful-training.com/project/webroot/ (use usernames (one..ten)@example.com and password '1' if you want to see more usage of it)]


Yeah, clearly there are a lot of ways to solve this issue if JavaScript is enabled. But there's a big overlap between the folks who wanna use vanilla web technologies and the folks who want their site to run without JavaScript.


Isn't using React with a static site generator framework basically the same thing but better?


Not remotely! Unless you mean Preact. React ships an entire rendering engine to the front end, and most sites that use React won't render anything if JavaScript isn't enabled.


Then you'd have to learn React, and for many of us the point is that we really don't want to learn React, or other frontend frameworks.


Yes, if you want to throw up in your mouth.


In theory yes; in practice, good luck maintaining that if you are just a solo blogger.

I doubt your blog would last a single month without a breaking change in one of the packages.


You mean npm packages? Why would you need to update those anyhow?


Because at some point it will cease to work? It needs upgrades like any other project.

Every upgrade in the JS world is very painful.


Why would they stop working eventually? Assuming they're all self-contained and you don't upgrade even Node.js for that project.

Edit: Oh right, OS upgrades could do it. Or network keys changing etc...


Yeah I guess React + SSG isn't the best choice. Nano JSX might be better

https://nanojsx.io/


Yes, it is. Unfortunately HN has a crazy bias against JavaScript (the least crazy part of the web stack) and in favour of HTML and CSS, even though the latter are worse in every meaningful way.


It isn't crazy, judging by the number of times I've seen posts here and on other blogs talking about a 100k web page ballooning to 8Mb because of all the Javascript needed to "collect page analytics" or do user tracking when ads are included. Granted, that may not be needed for personal websites, but for almost anything that has to be monetized you're going to get stuck with JS cancer because some sphincter in a suit needs "number to go up".


> I've seen posts here and on other blogs talking about a 100k web page ballooning to 8Mb because of all the Javascript needed to "collect page analytics" or do user tracking when ads are included

Perfect example. HN will see a page with 6Mb of images/video, 1Mb of CSS and 200Kb of JavaScript and say "look at how much the JavaScript is bloating that page".


I don't even know where to begin with the pretence that you can compare HTML with JS and somehow conclude that one is 'better' than the other. They are totally different things. JS is for functionality, and if you're using it to serve static content, you're not using it as designed.


I don't particularly care about "designed for". If you've got to serve something to make the browser display the static content you want it to, the least unpleasant way to do so is with JS.


Least unpleasant to the developer. Most unpleasant to the user. It breaks all kinds of useful browser features (which frontend devs then recreate from scratch in JS, poorly; that's probably the most widespread variant of Greenspun's tenth rule in practice).


> It breaks all kinds of useful browser features (which frontend devs then recreate from scratch in JS, poorly; that's probably the most widespread variant of Greenspun's tenth rule in practice).

Nah, it's the opposite. For the same level of feature complexity, JS tends to perform better and be more usable (people who want more complex sites, for good reasons or bad, tend to use JS, so you have to compare like with like); HN just likes to use these pages as a stick to reinforce its prejudices. (E.g. if you actually test with a screenreader, aria labels work better than "semantic" HTML tags)


> E.g. if you actually test with a screenreader, aria labels work better than "semantic" HTML tags

Interesting how this is opposite to the recommendations from MDN, such as:

Warning: Many of these widgets are fully supported in modern browsers. Developers should prefer using the correct semantic HTML element over using ARIA, if such an element exists.

The first rule of ARIA use is "If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so." -- which also refers to: https://www.w3.org/TR/using-aria/#rule1

Though I can believe that real life may play out differently from the recommendations.

Also, as I understand it, ARIA is orthogonal to JS, and it doesn't alter behavior for browser users.



