JavaScript hydration is a workaround, not a solution (thenewstack.io)
172 points by fagnerbrack on Sept 1, 2022 | hide | past | favorite | 231 comments



> ...get a PageSpeed score of 100/100

I'm slowly coming around to the idea that PageSpeed (or Lighthouse, or Core Web Vitals, or whatever Google has invented this week) is what drives a lot of the complexity in web app dev. People refuse to throw out what they've learned, so every time there's a new metric to chase (lest you lose SERP ranking for being slow!), devs heap another layer of complexity onto the Webpack bonfire.

Hydration is an example of this. People chased 'first contentful paint' and 'cumulative layout shift' timings because that's what Google told everyone they needed to optimize for. That meant reducing the amount of upfront work done in JS, pushing some barebones HTML and CSS to the client for those sweet, sweet metrics, and then running a massive bundle of deferred JS to make it do anything. Google is pulling that rug with Time to Interactive, First Input Delay and (coming soon) Interaction to Next Paint, so now devs are trying to write the same website but have the server strip out the code they wrote (eg Remix.run).

Everyone wants a fast website. No one wants a heap of fragile complex build tooling. The answer to the first problem is to stop trying to solve the second problem with MORE TECH. Go back to fundamentals. Just make something that works with HTML and CSS alone, and enhance it with JS. You don't need to be clever about it, especially if the level of interactivity on your website amounts to basically a form.
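To make that concrete, here's a minimal sketch of the enhance-it-with-JS idea (the URL and field names are illustrative, not from any real site):

```html
<!-- A form that works with HTML alone (a normal POST and a full page
     response), then gets upgraded when JS is available. -->
<form id="signup" action="/subscribe" method="post">
  <input name="email" type="email" required>
  <button>Subscribe</button>
</form>
<script>
  // Enhancement layer: submit in the background if fetch exists;
  // otherwise the browser falls back to the plain form POST.
  const form = document.getElementById("signup");
  if (window.fetch) {
    form.addEventListener("submit", async (event) => {
      event.preventDefault();
      await fetch(form.action, { method: "post", body: new FormData(form) });
      form.replaceWith(document.createTextNode("Thanks for subscribing!"));
    });
  }
</script>
```

If the script fails to load, nothing breaks: the form still posts the old-fashioned way.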


One story that comes to mind was when I was still relatively early in my career and the COO of the company came to me asking for some help getting some jquery ui tab widget thingie to work.

And as if the words "COO" and "jquery ui" in the same sentence weren't enough to raise eyebrows, what came next was probably one of the most mind-bending software engineering episodes I'd ever witness. It turned out he was trying to put together a web UI for displaying a number of reports from some computationally intensive data warehouse query crunching. We're talking visualization of tables and tables full of data. Conventional wisdom would dictate that implementing this the traditional way would involve query optimizations, various layers of caching, separation of concerns, a framework to manage all the complexity, etc, etc, etc. His approach? Just do the naive queries (slow as they were), print directly to HTML old-school PHP style, then save the HTML file to disk.

He was a bubbly happy type of guy and you could see the child-like joy on his face when he showed me how lightning fast the thing would load when navigating from one page to another, because it was effectively a static website once the pages had been generated once. I distinctly recall a confused mix of feelings at seeing the guy break every rule in the "programming best practices" book and yet get undeniably good performance from the sheer dumb simplicity of the solution.


If the data doesn't change often and there are no further changes to the data after it's created, that seems perfectly valid. Optimizing queries takes time; in some cases it means building new tables to amortize parts of the computation. And often you don't know what's worth optimizing until later, so it's easy to spend a lot of time optimizing the wrong thing.

I mean he basically cached the results for viewing later. As cringy as it seems, it was probably the optimal solution.

Personally, I would probably have built out a materialized view and added memcache to generate JSON, making it easier to add output formats over time. But that's just me, and it still wouldn't have loaded as fast as sending a pre-generated HTML file.


COO invents caching. Relatively harmless in the grand scheme of things.


Rewritten as a horror story:

> COO invents cache invalidation


CIO - Cache Invalidation Officer


The purpose of the more complicated solutions is to build something that will be maintained for a long time where complexity will grow, and usually, require multiple contributors.

I can bang out a basic landing page site in a few hours that will be extremely performant while pulling in zero libraries. But I would never want to maintain that over any extended period of time or added complexity.


The purpose of more complicated solutions is to prevent devs from becoming bored and leaving.

In my experience, KISS results in the easiest solution to maintain. Then I get a litany of complaints from devs that "this isn't programming" or "this is a dead end to my career" and then they leave. Which paradoxically makes it more difficult to maintain, as I have no maintainers lol.


What are you working on? Without knowing your use-case or what your codebase looks like, I have no idea if the problem is with other devs, or you are just blind to the insanity that is your codebase.

Not saying the latter is the case here, but I've run into devs maintaining some monstrosity and then complaining about others pushing for off-the-shelf solutions, not realizing that nobody wants to spend time learning some bespoke system they created that's not actually as great as they think it is.


You just need to hire older people. I’ve been doing web development for nearly 25 years. I’m at the point where I would love to just babysit some sites while spending more time on my hobbies and more time with my family.


Another poster beat me to it: hire older devs! I'm 42 and my ideal work is maintaining several systems that move rather slowly. It's chill, still responsible, and allows for creativity and inventive solutions when the need calls for it.

Not going to bash younger devs in general, but the fact is that getting the job done without thinking of their future career... is usually not high on their list of priorities.


It shouldn't be hard to make a static site generator for a small pile of queries that's very easy to maintain.

You can only anticipate so much up front. You're always going to have to rewrite things after enough growth, so make it easy to add complexity later.


Yep, my personal webpage probably has like a 10/10 score on any performance tracking thing you could find. Because I literally just wrote some html and css, and dropped it into github pages.


I mean, if it really took you a few hours, maybe you actually could throw it away and start again later? Meanwhile, if you have to keep rewriting your setup anyway to account for the new shiny toy everyone insists you use every year, have you really made something that requires less effort to maintain?


Hey, he basically invented pre-Cloudflare Cloudflare!

(one of their caching solutions -- which works very well, actually -- is to just cache the slow PHP output of many sites and serve the HTML directly)


Speaking as the TL of Lighthouse and PageSpeed, I can comfortably say that adding complexity is antithetical to our goal. Quality UX and a reliably performant web is what we want for all users.

Ideally, folks would use a thinner stack with _less_ JS, as that's rewarded across all the metrics. But in recent years, many teams build a "site" by building a SPA. As they're entrenched in that dubious decision, the only path to making it performant is adding SSR complexity. ¯\_(ツ)_/¯ Perhaps it's the apparent conflict between UX and DX that leads to tooling complexity?

And to nitpick: Core Web Vitals debuted with 3 metrics, one of them being FID. The focus on runtime/JS performance has been there from the start. Regardless, I hard agree on your last paragraph.


> Perhaps it's the apparent conflict between UX and DX that leads to tooling complexity?

+ PMs wanting to run dozens of AB tests at once in every single page


> Perhaps it's the apparent conflict between UX and DX that leads to tooling complexity?

In theory improved DX leads to improved UX. In practice, subpar/lazy tooling needs to be worked around to get better DX.

I’ve added a lot of JS weight to migrate legacy apps to React, always improving time to interactive and responsive error handling/messaging. Vanilla JS and even jQuery just don’t have the DX that allows them to scale in a corporate context (specifically because of under-experienced engineers and poor “due dates”).

IMO it’s a lack of initiative from browser developers to move the web forward faster (obvious given the funding comes almost exclusively from the developers of iOS and Android). I understand the inertia of legacy browsers, but it should not take decades to eliminate the need for these DX polyfills (e.g. how long jQuery took to become redundant, and how long it’s taking React/Vue/Svelte-style components).


It's ironic of Google to try to drive web performance when websites are full of Google Analytics scripts. Like the fox guarding the henhouse ;-)


The whole idea of a "site" full of "pages" doesn't really suit a lot of modern use cases. What do you do when your "app" isn't really a "site" to begin with? Like most of the apps in Google Workspace, or Maps, Earth, etc. It's not their URL structure that gives them value, but the buttload of realtime clientside interactivity enabled by JS.

How are you supposed to "less JS" your way out of that?


The vast majority of SPAs are not 'Maps, Earth, etc.'.

I thought the 'if the level of interactivity basically amounts to a form' of top-level comment sums it up nicely. Many are SPAs for the sake of it, similarly to packaging a website as an 'app' but not doing anything native, offline, or that really warrants it being its own app at all other than getting homescreen space on iOS.

(Totally off-topic: I can't believe Apple still doesn't let you organise that however you want. It's sort of a small thing, but also so in your face, by far the most immediate turn-off to me of iOS.)


Yeah, you're totally right that complex web "apps" are different from a simple informational page. But do the metrics take that into account? It's just a raw performance score, isn't it? Not performance-per-complexity or performance-per-feature.

How do you compare "fast" between "my journal entry today" and "Photoshop on the web" using one set of metrics...?


You probably don’t care about the SEO ranking of the app just the landing/marketing pages.


If something isn't expressible as a site full of pages, why are you worrying about Google's metrics for sites and pages?


Agreed. If you must log in, for example, then that part of the site is not indexed, so Google's metrics are not relevant.

Indeed, I’ve seen higher NPS scores when it’s slower… the user feels like they are paying for more.


Applications like you’re describing should consider less DOM and more Canvas API. The apps aren’t documents. They’re more like video games. DOM manipulation and rendering usually degrades performance more than the JavaScript.

One essay on this topic… https://medium.com/young-coder/the-future-web-will-canvas-re...


They can and should work side by side, along with SVG, WebGL, maybe WASM and workers and such. They are all web technologies. But Javascript is still the thing that controls all of them, their state, URL routing, interaction events, etc.

I spent much of the last few years working on such hybrid apps (DOM mixed with graphics techs, with clientside UI and state combined with server-side state and persistence, etc.).

The DOM has never meaningfully slowed down for me, even with CPU throttling or testing on old laptops. Canvas has, especially in some builds of Firefox. Certainly for rendering graphics canvas or WebGL are better choices, but it's not a performance magic bullet for all apps.

More often than not, the performance bottlenecks are just some bug of ours (a poorly implemented search algorithm) or an unoptimized query somewhere.

No modern web app needs to be just one thing or another. Even if you're operating almost entirely in canvas (like a map) there's still a ton of Javascript (open layers, leaflet, mapbox-gl-js, etc.) that you need, along with being able to use the DOM where it makes sense (reusing buttons, alerts, etc.)

Canvas is also really hard to make responsive. CSS evolved alongside the devices and has first class units for all that. Canvas doesn't have those easy abstractions and having to virtualize your UI without CSS-like helpers (much less things like flexbox or Bootstrap) makes it hard to make standard UIs too. HTML still has its uses for now, though arguably React and other abstractions make it much less important.


And sorry, one more thought: Canvas work can itself be abstracted with frameworks like OpenLayers or PixiJS or Unity. It's a testament to how powerful browsers have become that you can run full MMORPGs and other games in the browser. That all depends on being able to download a huge Javascript bundle at load time, but after that the rest of the experience is usually pretty smooth.

Games and maps are just an extreme version of that, but other examples where clientside DOM apps with heavy JS can still be faster (compared to old school server-side HTML) and easier to work with (compared to canvas) are email, e-commerce, ebooks, dashboards, chat, forums, video tubes, search and filtering, galleries, documentation, project management, etc.


I can see email and chat, sure, but ebooks, forums, video tubes, and documentation? All four of those seem like they shouldn't require much JS, and I would be unsurprised to find good examples of each that work even with JavaScript disabled.


Sorry, the threads got a little confusing here. I think this particular one was discussing those examples in the context of using Canvas vs HTML, with or without Javascript.


> Just make something that works with HTML and CSS alone, and enhance it with JS. You don't need to be clever about it, especially if the level of interactivity on your website amounts to basically a form.

A big problem with this is that html is extraordinarily limited as a widget framework. You only have a handful of widgets available, and many of them offer very little (if any) customizability from CSS.

For example, say you wanted a checkbox, but you wanted to use a custom shape (Edit: shape - color is more easily controlled today) instead of the browser default. Well, tough luck. From HTML you'll have to put the checkbox inside a label, add an empty span next to it, and from CSS make the checkbox invisible and then get creative with drawing some boxes that look like a checkbox [0] (or, more realistically, load some custom graphics). So, if you're doing all of this effort anyway, why go through the effort of using the HTML input checkbox at all?

I always find it fascinating just how barebones and bad HTML+CSS is as a GUI framework. I really don't understand why people insist it's in any way superior to using a JS toolkit for anything requiring more than the simplest interaction (say, more than a newspaper article).

[0] https://www.w3schools.com/howto/howto_css_custom_checkbox.as...
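For reference, the workaround described above in miniature (the class names are arbitrary):

```html
<!-- Hide the real checkbox but keep it for semantics and keyboard focus;
     draw a stand-in box next to it and style that from CSS. -->
<label class="check">
  <input type="checkbox">
  <span class="box"></span>
  Subscribe
</label>
<style>
  .check input { position: absolute; opacity: 0; }       /* visually hidden */
  .check .box  { display: inline-block; width: 1em; height: 1em;
                 border: 2px solid currentColor; }        /* custom shape */
  .check input:checked + .box { background: currentColor; } /* checked state */
</style>
```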


> For example, say you wanted a checkbox, but you wanted to use a custom color instead of the browser default. Well, tough luck. From HTML you'll have to put the checkbox inside a label, add an empty span next to it, and from CSS make the checkbox invisible and then get creative with drawing some boxes that look like a checkbox [0] (or, more realistically, load some custom graphics). So, if you're doing all of this effort anyway, why go through the effort of using the HTML input checkbox at all?

What the fuck? Just use the actual goddamn checkbox. You can set colors with CSS, but also, it will work in exactly the same way as all the other checkboxes on the user's system. It will respond to keyboard and other input shortcuts the same way as all the other checkboxes they use. It will work with any accessibility devices the user has in the same way as all the other checkboxes they use.

Don't reinvent standard controls just because they're not "customizable" enough. Function is more important than form, and the standard control has a lot more function than you're probably aware of. Also, the user already knows how the one on their system works, because they've already used one hundreds of times.

And the same goes even more so for controls which are more complex than the checkbox. Which is, uh... checks notes... all of them.

Just stop.


The idea that all apps should use some kind of system style that dictates shape for every single control is... Odd. I don't know of any system that operates like this - certainly not Firefox on the Android phone I'm typing this from, nor the Windows box IT wants me to use for work, nor the Linux VM running inside that.

Especially for web apps, writing them so that they look good on every browser with default styling for every element is... Difficult.

Not to mention, users often use the same app from different browsers (laptop, phone, tablet), and are much more easily confused by the different styles of each device and browser than if the app itself is consistent, but different from other apps.

Edit to add: I'm very curious if you can find some major sites that do actually use the default checkbox controls - Google doesn't, YouTube doesn't, Apple doesn't, Wikipedia doesn't, Mozilla doesn't (though I did find a default radio button on their donate page), Microsoft doesn't. Even HN uses plenty of default controls, but still you'll see some custom controls, such as the upvote/downvote buttons.


> The idea that all apps should use some kind of system style that dictates shape for every single control is... Odd. I don't know of any system that operates like this

We came damn close to every OS operating like this in the late 90s. Sadly the future arrived.


I don't think that is actually a good thing. I much rather have people less tied to one particular ecosystem, and preventing barriers to moving from one OS to another.


Instead they get to learn new ways of doing things with every single application they use regardless of platform and still have to deal with platform issues. Yay for progress?


However, they also benefit from innovation in how those widgets work, rather than being locked to historic buttons because the committee can only agree on one pixel change a year.


Innovation like not having a tabstop.


By moving from one OS to the other you really mean Windows/OSX/Linux, right? Because thanks to modern web browsers you can all but forget about using another one, regardless of someone's checkbox design.


"Just compromise your design", surprisingly, isn't a one size fits all solution. Especially when you can implement things yourself. Yes, it's harder than you think, but sometimes it needs to be done.


One of the things that the words "just compromise your design" make clear is that the focus is on the designer, not the users. Which in my view is often the problem. It's especially hollow here given that much of Karellen's point is about what's best for the users.


It is for things like checkboxes. If your design requires checkboxes to be special and different from everyone else's checkboxes, your design is wrong.


Everyone else’s checkboxes aren’t all the same though.


They used to be, and it was better that way.


No, your designer sucks.

Don't reinvent well known controls. Stop doing it. You can give it a border color and an accent color (check out the new accent-color CSS property) and for the rest you leave it alone.
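For anyone who hasn't seen it, this is all it takes to tint a native checkbox while keeping its built-in keyboard, focus, and accessibility behavior:

```css
/* Recolor the native control without replacing it. */
input[type="checkbox"] {
  accent-color: #b00;   /* check/fill color */
  width: 1.2em;
  height: 1.2em;
}
```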


This is completely silly, and no one does this. If I'm going for a square design, I'll want my checkboxes to be square, not rounded squares like Firefox offers for example.

Try to extend this idea to a game menu and see how much sense it really makes to you: would anyone really want default browser checkboxes in an otherwise medieval styled game UI, for example?

Also, if you truly believe that all checkboxes in all apps should look exactly the same (per browser/OS, and except for color), then why not extend this to other aspects of design? Why even allow different sites to use different fonts? That is much harder to adjust to than a different roundness on checkbox corners.


> And the same goes even more so for controls which are more complex than the checkbox.

HTML has a lot of virtually useless controls due to their limited styling. And yeah, sometimes you want custom-looking checkboxes, because you just fucking do and design matters. Also, on some browsers, checkboxes don't even have the lauded accessibility you're thinking about -- I recall a time where zooming in in Chrome wouldn't change the size of checkboxes!

Those aren't even the most egregious examples, to be honest.


> Also, on some browsers, checkboxes don't even have the lauded accessibility you're thinking about

In which case, people with accessibility needs won't be using that browser. And that's the whole point. The user can pick whichever browser works best for them, and implements the standard controls in the way they want. (Or, in the way that sucks the least, out of the browsers available to them.) And if they decide to get a better browser (different engine, or even just a major upgrade to the current one) suddenly all their websites work better. More importantly, they all work better in exactly the same way, so they only have to get used to any differences once, rather than for each and every website.

If you go and write your own version of a standard control which implements it that way you want, you take that away from the user.

HTML allows you to specify the semantics of the controls you want, and have the user-agent bother with the details about how it works. That's going to be more lightweight, more performant, and more user-friendly for more people than any alternative you bodge together yourself.


The best thing about using standard controls is that as things get better, you get the improvements for free. The zoom bug gets fixed.

I have a bunch of legacy custom controls I deal with regularly because things like color pickers were not cool enough in 2009. The down side is that in 2022 they regularly break and require dev time to fix so some report that gets run once a year works correctly.


> The best thing about using standard controls is that as things get better, you get the improvements for free. The zoom bug gets fixed.

Hopefully, in someone else's timeline. But the user/customer is complaining to you, not to Google


I agree. I know eventually the datalist won't lag on scroll with even just 100 items, and that they'll name their dropdown picker something other than "-webkit-calendar-picker-indicator", since it's misleading: https://jsfiddle.net/klesun/mfgteptf/ If I sound passive, it's because I assumed a native control/component would be performant and switched out a lot of controls for it, only to have to revert back to a custom solution. But ultimately they will be fixed, and it'll happen to us transparently, which is nice.


> So, if you're doing all of this effort anyway, why go through the effort of using the HTML input checkbox at all?

Because HTML is the only way to do a checkbox?... Even if you use JavaScript, at the end of the day you'll have to generate HTML for it to render... so then why go through the effort of using JavaScript for a checkbox at all?


I meant, why use <input type="checkbox" >, not why use any HTML tag at all. Of course you need some kind of div or span to style.


And what do you do for more complex, stateful apps? I don't think it's fair to dismiss this problem as "just use HTML and CSS and a sprinkling of vanilla JS". What happens when you need to build anything more complex than a basic form? A dashboard, or web map, or Figma, or Slack, or Gmail, or Gsheets... everything from state to AJAX (and other async) to persistence to URL routing, etc. becomes insanely complex.

I feel like "don't use tooling/frameworks" is the web dev version of "you don't need Java or .NET or WPF or WinForms, just use assembly and gfx drivers and draw pixels on the screen". HTML and ECMAScript evolve very slowly relative to the needs of software businesses, and it would take 30x (if not 300x) as long to write complex apps in those alone. Having a "build and bundle" step is just the webdev version of compiling developer-friendly code (React and frameworks) into the "lowest common denominator" code of browser-parseable HTML + JS.

It's not about "cleverness", but being able to do your job in a reasonable amount of time and effort, vs going back to a 90s-style internet where every page is basically a dumb terminal and all the actual interactivity has to happen on the backend.

I think a better option is to know your target audience, their expected device and connection speed (metro desktop? rural mobile?), and not to worry about over-optimization. Nobody cares if your website loads in a second if it's so hard to use that it takes THEM more time to do what they need. Better to take 5 seconds to load, and then be actually usable as an app.

FWIW I don't think web apps are getting slower because developers are getting worse, but because the newer frameworks are enabling new use cases that weren't before possible. Photo editing? You used to have to download and install Photoshop, which can take minutes or days depending on your network and connection. Now you can run Photopea in a browser window in a few seconds and do most of the same things.

As for Core Web Vitals? It's not a bad thing to use a performance arms race to get websites to be faster. It forces frameworks to evolve their delivery, caching, loading, and hydration mechanisms. But that doesn't have to mean "give up on frameworks and complex apps". It can also just mean keeping up with their latest performance improvements, and/or hoping that ECMAScript evolves faster and that things like Web Components or PWAs become more commonplace.


The examples you give are a 100% fit for the SPA model. Nobody would build Google Docs or a game from server-side HTML.

That's not the discussion. The discussion is everything except that. The typical example is a CRUD app: a bunch of cards, filters on the left, search box on top, logged-in user. It usually isn't much more than that, which means it's low-state and only lightly interactive.


The problem is that there are a few cases in every CRUD app where it is undeniable that the best user experience would come from client-side rendering. Stuff like password criteria validation, filtering or sorting a column on a table, upvoting or downvoting a post, and other application specific behaviors shouldn’t require a page reload.

What we tried in the jQuery-era was augmenting mostly-server-side apps with components that handled these behaviors. But it wasn’t a good dev experience and it wasn’t a good user experience.

So a lot of teams choose to build entirely in JS nowadays since it leads to a simpler experience to have everything in a React SPA than some things in <dynamic form language of choice> and some things in jQuery.

I’m not saying this is perfect, I think we still have a long ways to go. But I’m optimistic about the current wave of JS tools (Hotwire, Astro, Fresh), and I think “just go back to server side frameworks” isn’t solving any problems.


But not everything is a black or white good/bad fit for an SPA, and you don't always know upfront...

Building a list of blog entries? Sure, it doesn't need to be an SPA. Add a tag filter? Still doesn't. Then you eventually need more complex filters (by author, dates, whatever), plus a text filter, plus thumbnails, plus maybe a gallery view of the photos, and maybe you want all of that with realtime clientside filtering... hmm. You can rip out that part of the static HTML and make it a drop-in widget.

But what if you need to tie that into the logged-in state of the user to determine what they can see? Or you want to integrate comments and facilitate real-time discussions?

Having a split backend/frontend rendering system like that makes it pretty hard to reason about (it's how a lot of PHP sites were built, for example, with serverside HTML and a sprinkling of clientside JS, but coordinating the two gets tough with ugly hacks like using `window` or `data` props to pass variables to the client).

Ecommerce? You can build it entirely backend, entirely frontend, or some hybrid of the two. Look at a random Shopify example: https://nomz.com/ (suggested by their website, not sure if real) and a Next.js example (https://demo.vercel.store/product/quarter-zip). Try browsing around and looking at different products. The Next version is pretty much instantaneous while the Shopify version is slow and requires a full page load for any navigation.

Another example from a decade or so ago was the "real" desktop Gmail vs the light/mobile (and maybe WAP?) versions of Gmail (maybe still available at m.gmail.com). They both did the same basic things, but the desktop version was a lot faster, featureful, and usable once you let it load for 5-10 seconds at first. The light version shows you the inbox really quickly but then subsequent actions (navigating to a message, replying, etc.) are actually slower than the SPA version.

Even for just a moderately complex site, an SPA model can be slower upfront as it downloads a bunch of JS but then faster after that, speeding up all the subsequent interactions (with JSON content loads instead of full HTML pages, auto-detected image sizes, smart prefetches based on state, etc.).

AND it doesn't require a rewrite once your site gets "complex" enough. You can start with a simple SPA that may be overkill at first, but keep growing it organically and adding complexity without dramatically increasing the bundle size (with proper tree shaking, etc.) since the bulk of it was the framework.

With traditional serverside HTML sent to the client, yes a simple individual page may be smaller, but you quickly lose that benefit as you keep sending the header, footer, etc. on every page load, while still needing some sort of JS management system for the interactive parts.

For all but the very simplest site, that means you sacrifice a lot of the developer experience for a minor increase in performance. The user also loses out if you start thinking in terms of "how do I add interactivity to my dumb HTML" instead of "what's the best UI for this".

Safer (and usually better) to just start from an SPA unless you absolutely know upfront your site is going to stay really simple. It's a lot easier to optimize an SPA for better performance than to rewrite a hybrid backend/frontend rendering stack and migrate architectures, etc. (i.e., Jamstack over LEMP/Drupal/Wordpress/Rails)


> And what do you do for more complex, stateful apps?

To quote Albert Einstein, "Make everything as simple as possible, but not simpler."

This applies as much to your JavaScript bundler configuration as it does to the origins of the universe. It's not surprising really, because if I say "an instantaneous explosion that causes the entire universe to be filled with matter" you don't know if I'm talking about the Big Bang or Webpack.


> Having a "build and bundle" step is just the webdev version of compiling developer-friendly code (React and frameworks) into the "lowest common denominator" code of browser-parseable HTML + JS.

One problem is that for all the building and bundling webdevs do, only a few very recent frameworks (Svelte and SolidJS) manage to compile into something that even approximates "plain vanilla", progressively-enhanced JS on the client. And use of these is still the exception rather than the rule.


Maybe it's because I'm from Gen X and from that era, but I still don't get why JS frameworks jump through all these hoops just to avoid server-side programming. Why do all this round-trip stuff when you could just render it server-side to begin with?


In large part because HTML was not designed as a "streamable" format. The main benefit of AJAX is that it lets you send information back and forth without needing a whole page refresh.

You don't have to load anything to send a request to the server. But when the server sends a response back, HTML by itself can't do anything with it; interaction isn't really a part of the language. So JavaScript has to act as the client, taking the response and applying some sort of DOM transformation to the HTML so that the user can actually see it and interact with it further.
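A minimal sketch of that pattern (the `/api/items` endpoint and its response shape are made up for illustration): JS receives the response and patches the DOM in place of a full page refresh.

```javascript
// Pure helper: turn a JSON response into an HTML fragment.
// (Real code should escape any user-supplied strings.)
function itemsToHtml(items) {
  return items.map((item) => `<li>${item.name}</li>`).join("");
}

// Browser-side glue: HTML alone can't do anything with the response,
// so JS fetches it and patches the DOM instead of reloading the page.
// The /api/items endpoint is hypothetical.
async function refreshList(listEl) {
  const res = await fetch("/api/items");
  const items = await res.json();
  listEl.innerHTML = itemsToHtml(items);
}
```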

Javascript is what allows HTML to be interactive in real time at the component level. There's a lot more of that these days than in the past.

The frameworks still do the same thing, at a higher level of abstraction, with better debugging and less verbosity and manual labor. They make it much quicker to create and maintain complex compositions of stateful components. If that sounds meaningless, your page probably doesn't need a framework, but a lot of complex sites today are exactly that: compositions of stateful components.

It's possible that may change in the future, like with the Web Components spec. But for now, once Flash was deprecated, JS is the only tool really available to web devs for this sort of usage. It's not that it's the best choice for an interactive web, it's just what history left us with... flawed, slow, clunky, but popular enough as to be worth the costs.


> Time to Interactive, First Input Delay and (coming soon) Interaction to Next Paint

Having worked on one of the most bloated SPAs at Google and doing an SRE rotation to optimize exactly these metrics, I couldn’t agree more on the hypothesis that this is where complexity, jank and fragility arises.

This is why our startup set out to build the best-in-class frontend for cloud computing and making a bet on SSR. I’m from a JavaScript background and personally think hate for the language is undue. V8 is also a marvel of engineering. We elected Go for our application language. It’s faster for our purposes but with SSR the domain language isn’t as important. We could use Haskell if it was fit for the problem domain. We empathize with how ideas such as “user-perceived latency” came about but SSR is a much simpler model and our philosophy is that “simpler is faster is faster”.


What is SSR exactly in the context that you're describing?

I thought if you're using server side rendering you'd be serving HTML directly from your application.


The difference is more heuristic than technical.

Render the entire page in one go, rather than having the server render small chunks at a time and the client patch the DOM. That means not having to think about below the fold, or adding event handlers to pre-empt and preload. In practice you'll need to sprinkle in JavaScript for interactivity, but the point is to keep it to a bare minimum. See Craigslist or Hacker News.
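The model can be sketched in a few lines (in Node rather than the commenter's Go; `renderPage` and its data shape are invented): one template function builds the whole page from data in a single pass, and the server sends it as one response, with no hydration step afterwards.

```javascript
// Escape untrusted values before interpolating into HTML.
function escapeHtml(value) {
  return String(value).replace(/[&<>"]/g, (c) =>
    ({ "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;" }[c])
  );
}

// Render the entire page in one go: no client-side patching, no
// deferred JS bundle. Interactivity would be sprinkled on later.
function renderPage({ title, items }) {
  const rows = items.map((i) => `<li>${escapeHtml(i)}</li>`).join("");
  return [
    "<!doctype html>",
    `<html><head><title>${escapeHtml(title)}</title></head>`,
    `<body><h1>${escapeHtml(title)}</h1><ul>${rows}</ul></body></html>`,
  ].join("\n");
}
```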


> Having worked on one of the most bloated SPAs at Google [...]

Let me guess. Google Ads?


I'd say probably the Google Cloud Console.


I would say gmail is the most bloated SPA pretty much in existence much less at Google.


Simplicity is almost always best, but the hardest part is pushing back on management and product that often only care whether feature requests are possible and aren't as concerned with what goes on under the hood.


Speed and uptime are also features. If customers like speed/uptime (they do) we need to make sure management prioritizes them and understands how certain features will work against speed/uptime.


been a juggling act for me for some time, and always comes down to - they'd rather take a performance hit vs. harm the branding/interactivity. tho we're in a very niche market that doesn't seem to mind (i do my best to make it as efficient as can be)


This.

We have a very slow eshop, worse than any competitor, yet the priority is always new features that will make the life of users even worse.


Cool-looking things appeal to customers, and also to developers.

It's a bit like your car. It takes you from place to place and could drive much faster than speed-limits or traffic-jams allow. But getting from place to place is not all you want. You want to pay extra for the coolest looking fastest wheels you can afford.

When you work with an app or app-development tool that looks cool your self-image improves. You are a cool dude because you work with a cool-looking application. Seriously, it does have an effect on me.


Frontend frameworks often have a better developer experience. I want to use one because it's less annoying to do things like templating HTML with editor completions than using plain HTML.

Another theoretical advantage is that you can get better colocation of content, style and functionality.

Basic JS is pretty annoying to write. It reminds me of using goto for control flow. It’s a lot cleaner to be able to put functionality alongside the document structure.

Of course you can use HTML + CSS + JS directly, just as you could write all code in assembly. But frontend frameworks are not the end solution here either.

I think it’s time browsers give us a better abstraction to HTML + CSS + JS. Webassembly and friends are nice, but I think the 3 fundamentally need to be replaced together in a performant way such that the simple case of no client side code is well optimised. Frontend frameworks are just a symptom of the problem which is that Web devs have to use the only abstraction creation mechanism available to get nicer development properties.


> Frontend frameworks often have better developer experience. I want to use one because it’s less annoying to do thing like template HTML with editor completions, than using HTML.

Agreed. As I've been developing a service (https://prose.sh) with no javascript and only go templates, the biggest downside is autocomplete.


If you try and tell inexperienced employees this in an interview for a job when you have well over a decade or more of experience—you’re not getting the job.

I agree with you, but there’s some sort of complexity agreement in corporate environments, because people aren’t willing to accept that you don’t need a large build process, or any at all these days, maybe outside of a build pass for JSX.


> If you try and tell inexperienced employees this in an interview for a job when you have well over a decade or more of experience—you’re not getting the job.

Good. An interview is as much you assessing your potential employer as your potential employer assessing you.


True! Just a sort of unfortunate aspect of our field.


> Just make something that works with HTML and CSS alone, and enhance it with JS

If you do this and have a sufficiently complex application... you will end up where the modern frameworks are. They exist for a reason.


I've had clients and self-professed SEO gurus carve out useful content in order to chase that 100 pagespeed score. I wish Google would just come out and state how content vs speed is weighted.



> Make something that works with HTML and CSS alone, and enhance it with JS.

Hydration is... an automated system to do this? Am I missing something?


Meanwhile I'm having a ton of fun (and better metrics) by just rendering html from the server and using Unpoly for interactivity and "old style ajax" dynamic updates. And still building the entire application in JavaScript, end to end.



Go back to fundamentals … if the level of interactivity on your website amounts to basically a form

What if it doesn’t? I mean by requirements from various departments/analysts and user’s common sense.


In 99% of cases your website will still be basically a form.


People forget that a form submit is a function call with parameters. The backbone of programming: calling functions. Also something something state transfer.
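Taken literally (the handler and field names here are made up): the form's field names are the parameter names, and the query string is the argument list.

```javascript
// Server-side handler: the "function" a form submit calls.
function handleSearch({ q, page }) {
  return `results for "${q}", page ${page}`;
}

// <form action="/search" method="get"> with inputs named q and page
// submits GET /search?q=hydration&page=2. Decoding the query string
// recovers the named arguments:
const params = new URLSearchParams("q=hydration&page=2");
const result = handleSearch({
  q: params.get("q"),
  page: Number(params.get("page")),
});
// result: 'results for "hydration", page 2'
```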


The recent evolution of JS frameworks has been really nice. Performance is basically getting identical to desktop.

The three recent developments I've noticed:

- "Islands" in Deno https://fresh.deno.dev/ and https://remix.run/ where only small isolated parts get hydrated, instead of the whole page

- Using http://linear.app style data flows à la Replicache (https://replicache.dev/), where JSON data is preloaded for all sub-links and stored offline-first like a desktop app, so clicking on anything loads immediately without waiting for network requests to finish

- Now with 'resumability' where the server-side framework was built with client hydration in mind and delivers the bare minimum event/DOM data necessary to make the page interactive (instead of just being a glorified HTML cache load before the usual JS activates)

For people not following JS these might all seem like constantly reinventing past lessons, but there is a logical evolution happening here towards desktop-style performance and interactivity on the web. Or pure server-side performance but with full JS interactivity.

The next set of frameworks is going to be as big an evolution as the Angular/Backbone -> React/Vue shift was ~8 years ago. But it's going to require new backend server frameworks, not just a new client framework. There's probably a big opportunity for the project that can combine this stuff properly.


> The recent evolution of JS frameworks has been really nice. Performance is basically getting identical to desktop.

It's getting much much better but performance is only "identical to desktop" if you ignore anything about its resource usage or speed increases in processors over the past decades.

> For people not following JS these might all seem like constantly reinventing past lessons, but there is a logical evolution happening here towards desktop-style performance and interactivity on the web. Or pure server-side performance but with full JS interactivity.

For people following JS these are examples of constantly relearning past lessons. I'm not sure how anyone could reliably expect 100+ms round-trip times (on a good connection) to offer the same experience as something local, but I think what it actually means is that the people writing JS-based software haven't used a native desktop app in years and have done mostly web-based things.

You could be forgiven for it, since HTML/JS as a user-interface design language appears to have taken over completely, to the point where even the most popular code editors are now web-browser-based.

Seriously though, go load up any natively compiled app on your OS of choice and compare the speed of it doing any given task to what you get out of web-based versions, electron versions, etc. There isn't a comparison.

My griping aside, I recognize JS as a language is here to stay and it's important to stay on top of its developments and improvements.


> Seriously though, go load up any natively compiled app on your OS of choice and compare the speed of it doing any given task to what you get out of web-based versions, electron versions, etc. There isn't a comparison.

My experience is a bit different...

Google Docs loads faster than Numbers

Figma loads faster than Illustrator

VS code loads faster than Xcode (not a fair comparison)

Quicknote.io loads faster than SimpleNote (which is blazing fast!)

Google Meet loads faster than Zoom

For VS code, it creates a whole new chromium process, but for websites like quicknote.io, do you count the browser base usage and loading time? Or just the incremental time to load up in a new tab?


"Loads" is an interesting take to consider when looking at application performance.

Simply starting the application is largely irrelevant. What's the experience using it? What features do you have access to? Those are the things that matter.

If we wanted to take that point though, even if we assume the browser-based stuff gets the startup time for free (since most people have a browser open most of the time I'd wager), what's the process of loading a local file on my system like? Docs or Sheets might "load" faster than LibreOffice but what's the time-to-use look like? Do I have to sync files somewhere Docs/Sheets can have access to it? What's the "[double-]click to load local file" part of the equation here?

What about the actual use of the application? Is there the instant responsiveness to open color selection dialogs in the case of Illustrator? How useable is it from a plugin perspective? Are we actually comparing comparable applications? There's lots to ask there.

This is why I don't think startup time actually matters. What matters is what the experience of using something like Figma is over Illustrator, Sheets vs Numbers, XCode vs VSCode, etc.

The example I gave of two local apps is a relevant and more directly comparable one. Eclipse does everything just faster than VSCode. VSCode is my typical "general editor of choice" for a variety of things but the slowness of reading a few large Java packages led me to get a current version of Eclipse and give it a go. Seriously, go try it out on any substantially large Java project. There's measurable lag in syntax highlighting upon opening a file, switching between files, performing some basic "find references" type tasks, and so on. In Eclipse it's just fast.

Maybe everything will eventually be JS-based and it will become a meaningless comparison. I don't buy that we're there yet.


> Figma loads faster than Illustrator

Since CS6, all Adobe apps have been getting worse and worse in terms of start speed, UI performance, etc.

Some apps like Acrobat Pro are an absolute disgrace in terms of performance. It's like it's trying to do PDF editing in an AngularJS app from 10 years ago.


VS Code loads slower than Sublime or Kate (though they're a bit lighter on features than Code; not sure it's slower than Qt Creator). And Google Docs takes many seconds (worse on older hardware) to load a 100-150 page document (4 seconds for a warm load in Firefox on a Ryzen 5600X, top-of-the-line for single-threaded work), versus LibreOffice Writer (just over 1 second on warm startup, 0.6 seconds if Writer is already running).


The initial claim wasn't about load-time, it was about "doing any given task," and I frequently encounter delays when actually using Google Doc, delays I never experience with compiled text editors or spreadsheets.

All that and TextEdit still starts up much more quickly than Google Docs for me.

Your mileage may vary, of course.


Slight nitpick: the equivalent of Google Docs would be Pages, and the equivalent of Numbers would be Google Sheets.


> It's getting much much better but performance is only "identical to desktop" if you ignore anything about its resource usage or speed increases in processors over the past decades.

The high power use is what kills me. That and input lag. Fix those and I'd give way fewer shits that an Electron app eats 10x the memory that's remotely justifiable by what it's doing, and more like 20-100x what a well-made desktop program would for the same purpose.

[EDIT] Yeah, I know, high power use and input lag are in part because Webtech disrespects things like system memory, so in practice I'm not going to see one fixed without the other.


> Seriously though, go load up any natively compiled app on your OS of choice and compare the speed of it doing any given task to what you get out of web-based versions, electron versions, etc. There isn't a comparison.

I highly recommend trying out Linear.app, it's as fast as any desktop app I use.

Replicache made a clone demo at https://repliear.herokuapp.com/ and the production Linear app is even faster.

But if you're comparing the current era of React/electron apps (or even most Next.js apps) of course you're not going to see Desktop-type speeds yet... these new developments are closing the gap but it's only just starting to be adopted.


That demo literally shows a brief "Loading..." screen for the first open issue, which was pretty much my point.

I'm not sure what other desktop apps you use but for something as simple as viewing a single database record I've not seen loading screens since about 2000 on any native apps.


> That demo literally shows a brief "Loading..." screen for the first open issue, which was pretty much my point.

Yes, when you open a desktop app that has to load data from the internet, it has to load first... You can't get around that unless you have zero remote data, which defeats the purpose of a collaborative B2B app.

After that first load, strictly the first time you open the app, you never see another loading screen.

Saying desktop apps don't do the same thing is disingenuous.

And as I mentioned, that's a simple demo for a new framework in beta hosted on a free Heroku instance. Linear's production app is even faster. Especially when you download the desktop version.


> Yes when you open a desktop app that has to load the data from the internet it has to load first... You can't get around that fact unless you have zero data remote? Which defeats the purpose of a collaborative B2B app.

This could be true for data being loaded from the Internet, yes, though it assumes the premise that JS based applications will still do this faster. However, collaborative B2B apps existed before the JS craze and worked perfectly well (and faster) without it. They also don't strictly need to be querying data from the broader 'net but can be doing things like talking to a local database server, asynchronously retrieved local cache, etc.

> After that first load, strictly the first time you open the app, you never see another loading screen.

That's actually only half true. The "Loading" screen indeed doesn't pop up. Instead, you get a broken, empty UI until the record eventually loads. What I observed: scrolling to the end and selecting any of the final few records, there was a substantial, noticeable second-plus delay loading the record data into the view. Steps: load page -> scroll to the very end using the scrollbar -> pick the last record.

For fun, I profiled it. According to the profiler it takes 3.8s for it to successfully load and process the record, of which 2.1s is just in one promise.

It's also a demo site and could easily cheat this (but to their credit appears as though they are not quite doing so at the expense of heavier page loads). It's not a compelling example of what you're talking about.


It seems like you’re missing the point of the demo. It is purposely not trying to hide the initial load. There’s more (a lot more) to application performance than page load speed. This obsession with page load speed is weird. How many times a day do you open Excel? How many times a day do you click on things in it?


> I've not seen loading screens since about 2000 on any native apps.

Have you used any real apps? Macromedia Dreamweaver, Adobe Photoshop, etc. had literal minute-long splash loading screens, with zooming text telling you what they were doing, since at LEAST 2000...


My comment was clearly in the context of loading an individual database record within the application once running. Sure, splash screens exist. Are those part of using the application?

Arguably, though, if that were really the comparison to draw, should we then compare it to closing down a browser entirely and booting up the demo via shellopen or the equivalent for the URL in question?


The web apps that take 100+ms round trip are ones that have to download the entire page. Downloading a new desktop app takes even longer. If you use a local-first webapp that is cached on your desktop, that's a fairer comparison


This makes no sense. Downloading a web browser takes even longer. It's an insane comparison to draw in the context of the experience of actually using an application.

Nobody cares about startup time when it's a task you do once a day compared to using the application which might be constant throughout the day. How often are people booting up their IDEs?


It's equally unfair to compare the round-trip load of an application that people expect to download on-demand, versus an application that is already installed on the client


I find myself hating Visual Studio's lag but am okay with Visual Studio Code.

Maybe Visual Studio isn't getting attention, but it's painful to use.


Agreed. Visual Studio is awful.

I actually just updated a bunch of old Java code in Eclipse. It is simply faster than VSCode at everything -- syntax highlighting (noticeable delay in VSCode on a 1000+ line Java class), switching between files, loading files, etc. I only updated and used Eclipse because VSCode was being noticeably slow.

That was in a bare J2EE Eclipse instance, and didn't have any of dozens of plugins running that I typically would have back in my enterprise-y Java dev days. Visual Studio seems to have gone the "kitchen sink" route. JetBrains' IDEs typically wind up crammed with plugins from what I've seen. I wonder how much that screws with people's perceptions.


The things you have listed minimize the impact of network latency; they don't affect rendering performance, which is still a big deal. Apps that need to render large amounts of data still kind of suck, so you'll see many apps "virtualize" things: rather than having 10,000 elements, you have however many fit in your viewport + N, and as you scroll they get reused. The tearing hurts my soul.

Compare the scrolling in Excel 97 + Windows NT to Google Sheets/Office 365. It's night and day. The webapps that render everything with WebGL do perform better, but then you have non-native widgets.

I hope this problem gets solved one day.


In my experience, time-slicing and deferring renders for large lists (via APIs like useDeferredValue and/or requestIdleCallback, etc), combined with memoization can be a great alternative to virtualization.

For a lot of use cases where people jump straight to virtualization, it's not actually the number of elements that exist in the DOM at once that's the problem (React and browsers these days can handle a lot more than what's intuitive to most people). The problem is usually the cost of rendering all those elements at once in the initial mount, which often can cause visible frame drops and even noticeable freezes of the page.

Deferred rendering and time slicing can amortize this cost over a longer period while providing essentially the same UX (by eagerly rendering the first X items in a list), while memoization keeps transitions to new states fast by reusing the same large list of nodes when it hasn't changed. All of these techniques combined are still orders of magnitude less complex and require fewer UX compromises (tearing, loss of browser search, etc.) than virtualization, which IMO should be reserved for cases where there is no other workable solution.
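The time-slicing idea can be sketched framework-free (a React version would lean on `useDeferredValue`/`startTransition`; the `chunkSize` here is arbitrary): instead of rendering 10,000 rows in one synchronous pass, render in chunks and yield to the event loop between them.

```javascript
// Amortize an expensive render over multiple turns of the event loop
// so input handling stays responsive, instead of virtualizing.
async function renderInSlices(items, renderChunk, chunkSize = 500) {
  for (let i = 0; i < items.length; i += chunkSize) {
    renderChunk(items.slice(i, i + chunkSize));
    // Yield between chunks; in a browser you might prefer
    // requestIdleCallback or the Scheduler API over setTimeout.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```

Every element still ends up in the DOM (so find-in-page keeps working), but the mount cost is spread across frames rather than paid all at once.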


Native OS widgets virtualize large lists as well. Some do it better than others.


> The recent evolution of JS frameworks has been really nice. Performance is basically getting identical to desktop.

Web browsers in general are not able to match applications on the desktop. Additionally, typical JS frameworks come with at least a 2x performance penalty compared to hand-optimized vanilla JS (Not a commonly done thing).

Being excited about getting reasonable performance with a great development environment is fine, but deluding yourself into thinking that its great performance is not.


> but deluding yourself into thinking that its great performance is not.

What's the actual issue though? Sure on HN we care a lot about performance. But outside these walls performance has to be really bad for someone to actively avoid it. Even then, if the product has a stronghold on its userbase, you have to really degrade performance for engagement to falter.


The problem is not that a particular library, framework, or app is slow. The problem is the mindset "This is the best performance possible". Having that mindset makes you accept extremely poor performance even when it is not economically advantageous to stop optimizing.

Even though I am personally frustrated at how willing users are to accept slow software; I do recognize that the velocity and comfort of development outweighs the performance considerations in many cases. However, a mindset that makes you never check to see if the performance considerations are worth it to the users WILL make you make the wrong decisions.


i'll bet there is a possibility that resource heavy JS software "wastes" more energy than cryptocurrency


> For people not following JS these might all seem like constantly reinventing past lessons, but there is a logical evolution happening here towards desktop-style performance

Funny, my desktop itself is already written in JS, and supports/integrates with apps written that way, too, (and also that aren't), and that's been the case for a while now. And the same has been true of apps from the Mozilla platform lineage for even longer; Firefox has been an Electron-style app for 100+% of its lifetime, for example. Talk to any self-styled JS programmer for any length of time, though, and it's like these things don't even exist—like the latter was actually invented by the Electron folks, and the only thing that made JS development viable generally as a serious applications programming language is the trio of NPM+Webpack+React/React-alikes.

It's overall not worth taking their opinions at face value. They tend to be the ones who are "not following JS". They're worshipping some weird toolchain cult and calling it JS. Indeed, judging by the compulsion to try to work around ostensible deficiencies in the language and deal in cargo cult advice (esp. e.g. concerning things like `==`/`===` and `this`, and insisting on NodeJS-isms like its non-standard `requires`) it's evident that they actually hate JS, despite what they're likely to say to the contrary.


> Firefox has been an Electron-style app for 100+% of its lifetime

That's technically correct, which is the best kind of correct, but horribly misleading.

Firefox has always been mostly written in JavaScript, but not HTML [1]. A bunch of features that are being standardized now, like Web Components [2], are pretty similar to stuff that Firefox has used in non-standard form for decades.

[1]: https://en.wikipedia.org/wiki/XUL

[2]: https://briangrinstead.com/blog/firefox-webcomponents/


1. Mozilla deprecated XUL a long time ago; it's been a long time since it began the transition to favoring HTML over XUL within Firefox.

2. Even if Firefox were still 100% XUL today, it wouldn't matter. The context here is the use of JS as a general applications programming language and a signpost addressing the uninitiated who haven't been "following JS". Whether it's touching DOM nodes that are in the XUL namespace or the (X)HTML one (or whether it involves DOM nodes at all) is orthogonal. Bringing this up is the kind of change of subject that constitutes misdirection.

3. I don't know what you think the role of Web Components being like XBL plays in this conversation, but it strengthens the underlying point; it doesn't weaken it...

Overall, this is a very odd response, to be generous. More accurately, it's horribly misleading to label my comment "technically correct[...] but horribly misleading".


The point is that "JavaScript Applications" aren't just written with JavaScript, and so-called "JS frameworks" really aren't doing much with JavaScript (except, I suppose, JSX).

Mostly, they're DOM enhancers, and Firefox has always been coded against its own internally-maintained, enhanced version of the DOM.


"The point"?

So, again, my comments were not misleading. And these remarks don't provide any information that contradicts the claims I made. (You are, though, calling attention to what is actually misleading here—and tacitly supporting what was my argument all along, without seeming to realize it.)


Ruby, Go, Rust, Python, PHP, and Elixir folks are rolling their eyes. The next web framework will not be limited to only one language and this coupling paradigm.


You did a good job summarizing the frontend framework evolution, but I'm curious where you think the evolution in backend frameworks is going. I was thinking LiveView, but I also think WASM could come into play as well.


> The recent evolution of JS frameworks has been really nice. Performance is basically getting identical to desktop.

They're not even within orders of magnitude. What makes you say that the performance is nearly identical?


While I think the "resumability" that Builder have developed for Qwik is very clever, I increasingly prefer the approach taken by HTMX and Alpine.js. Move back from JSON apis and render your html fragments on the server (you need to anyway!). It removes so much duplication of logic, simplifies your tool stack and reduces your risk of vulnerabilities.

I would even be tempted to say, both "Hydration" and "Resumability" are workarounds.

Attaching your event handlers to the DOM via HTML attributes rendered on the server à la HTMX, Alpine.js or god forbid even 'on{event}=""' attributes removes so much of this complexity.

There will however always be places for client side html rendering though, the closer your webpage becomes to being an "App" the more likely you will need to do client side rendering, especially if you want to work offline.


I just had to do a site which involved a fair amount of interactivity. A sidebar with different types of filters, a search bar at the top, and a complex boolean filter builder inside a modal (think like infinitely-nestable AND/OR filters, where each filter involved 5 dropdowns interacting with each other). Then some area to collect the results and do some operations on them. I was feeling brave and decided to skip Vue which we otherwise use for complex UIs and do the whole page with HTMX and nothing else. And it worked.

It took me longer, that's for sure, but also because I had to get a feel for how to approach many things I would otherwise have routinely plumbed together with Vue. And the feeling at the end was really satisfying: a fast-loading, complex interactive page, powered by ~10kb of JavaScript, with all filtering logic, validation, etc. completely server-side (we use ASP.NET). And I have confidence it will work exactly as it is years from now. With our JavaScript tooling, I expect things to break by that time.

That said, I certainly pushed the library in parts, and the jury is still out on how easy it will be to understand the flow of interaction once some time has passed.


Just curious, if it's pure htmx, doesn't this mean that most of UI interactive actions do a roundtrip to the server to update the app state? Or have you used hyperscript for some parts?


Yes, no hyperscript, and yes, all interactive actions do a roundtrip.

I tried not to get too detailed with the updates, so e.g. when you are in the filter builder dialog and change the value of a select (and the other selects now have to change their data accordingly), the whole dialog is fetched again from the server instead of just the other selects. While more selectively updating the UI would be more performant (htmx has to swap out fewer elements and the response payload is smaller), I didn't want to clutter up my server with lots of small endpoints that return bits of HTML. All in all it was about 10 new endpoints returning HTML for that site, which was okay in my book. It could probably have been fewer if I had been more clever.

What I learned was to pay more attention to making my HTML "leaner" than normal, because I noticed that if I don't, payloads can get big quite a bit faster than when using JSON.


I haven't dug into Alpine.js, but I've been messing with HTMX on some personal projects for a few months now and am really pleased with it. It's honestly made it a pleasure for me to return to front-end work.


This is a very confusing, and then disappointing, article.

It starts by describing what is, essentially, plain javascript or how you'd use jQuery in the old days. Attaching events to the DOM at startup is not hydration.

Then it starts describing what seems to be React's particular flavour of hydration, where it re-renders the entire app before applying its diff/reconciliation algorithm. But in a very roundabout way, it's not clear what they are even talking about. Not every framework works like that. And, it's definitely not the heaviest part, diffing the DOM can be extremely fast. The slowness comes from, erm, everything else and just the sheer amount of code these frameworks run.

By the third section, I had the feeling this would be an advertisement for Qwik, because of the 'closure around app state' concept, and how they appear to believe that lazy loading everything is the best idea since sliced bread. Bingo :)


Guys, multiple things can be true at the same time.

1. Webapps are largely overloaded and don't need to be as huge/complicated as they are.

2. Webapps are an objective computing miracle that brings full app functionality to tech-illiterate people, that is platform-agnostic, everywhere on the planet, and thus most of their complexity is justified.

3. Performance for performance's sake is never a hill to die on, as it always leads to increasing complexity.

4. Performance when performance matters, is invaluable, and you should be prepared to make dramatic concessions for it.

5. Chasing metrics whose value is not something you easily understand is usually not worth it.

6. Chasing metrics whose value is obvious (for example, PageSpeed) is worth it.

When websites should be static, they should be static. When webapps should be webapps, they should be webapps.

Software is a tool. A means to an end. Getting hung up on things like this isn't worth it. Just focus on delivering value to your users and making it a good experience. Sometimes that's best achieved through a static website, sometimes through a webapp.


and when websites are kind of apps, but not really, so you need the SEO benefits of regular websites yet the interaction of apps... then you get into the fun bits


I started in web dev 17 years ago, but didn’t get my first professional dev job until 3 years ago.

I had never had to learn angular, react, separated backend and frontend when writing code for my own stuff and my own company.

I would regularly process as much as I could on the server and ship the HTML and a JSON object to the browser and then just use native JS or more SSR from there.

My sites and apps were complex, just as complex as the stuff we were making in my first two dev jobs but the dev ex was so much nicer.

Shipping a heap of JS to the browser and getting that to do the heavy lifting of making the HTML etc. just felt like an anti-pattern, but I went with it because "that's the way we do it now".

Seeing Remix, Laravel, RoR still going, Astro, etc. is starting to convince me that perhaps a separated FE and API isn't the one true way, and the old way might have been better.


The overhead is arbitrary if done properly. I did this in Joystick [1] and was shocked at how overcomplicated folks make it.

You're literally just saying "render to static HTML on the server, and on the client, have a way to render a root component to screen and attach event handlers." Without any serious thoughts about optimization (practically none yet), a no-cache refresh/mount takes 227ms to DOMContentLoaded and 696ms to a full load.

Here's the SSR I do:

https://github.com/cheatcode/joystick/blob/development/node/...

Here's the mount ("hydration"):

https://github.com/cheatcode/joystick/blob/development/ui/sr...

The only "magic" is that I embed the hydration logic into the built JS for the current page and it fires automatically on load (no need to add manual hydration code).

[1] https://github.com/cheatcode/joystick


Are you saying it takes 469ms to attach event handlers? That's well over a billion clock cycles. That doesn't sound efficient.


No. A quick rough profile: https://imgur.com/a/hVLD5Db

Event handlers are attached by queueing them up as I render the component tree and then after the tree is mounted to the DOM, I just run the queue to attach the listeners.
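A hedged sketch of that queue-then-attach pattern (all names here are invented for illustration; this is not Joystick's actual API):

```javascript
// While rendering, each component pushes its listeners onto a queue;
// after the tree is in the DOM, the queue is drained and the listeners
// are attached to the now-existing elements.
const listenerQueue = [];

function renderComponent(component) {
  // Record the component's listeners for later attachment...
  for (const [selector, event, handler] of component.events) {
    listenerQueue.push({ selector, event, handler });
  }
  // ...and return its HTML so the tree can be serialized in one pass.
  return component.html;
}

function mount(tree, rootEl) {
  rootEl.innerHTML = tree.map(renderComponent).join("");
  // The tree is mounted now, so the queued listeners can find their elements.
  for (const { selector, event, handler } of listenerQueue) {
    for (const el of rootEl.querySelectorAll(selector)) {
      el.addEventListener(event, handler);
    }
  }
  listenerQueue.length = 0;
}
```

The point of the queue is ordering: listeners are collected during the cheap string-rendering pass but only attached once `querySelectorAll` can actually find the elements.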


Incoming rant: I’ve had to do more hands-on hydration work as I explore static site generators and I’m just deeply unhappy with the state of front end tooling. I now have a taste for

* directory based routing and opinionated defaults that give you the basics to string together html pages with reusable partials and be production ready in minutes (think rails)

* postcss (css tooling)

* reusable and composable components

* hmr-style dx

* serverside generated pages, as inlined as possible — fastest experience for end user, avoids js if not needed

* not having to think how and where css or images get compiled

Vitejs looked headed that way, but I recently dove into it and… it ain't it yet. Getting something to render on a server and then "turn on" with React in the browser is not straightforward.

Remember how much sense $(document).ready() made? Hydration should be that easy and it is not.


Take a look at remix.run. It has most of what you list and the rest can be easily added on.


Judging from the landing page, it's not any way close to `$(document).ready()` as parent said.


> In web development, Hydration is a technique to add interactivity to server-rendered HTML. It’s a technique in which client-side JavaScript converts a static HTML web page into a dynamic web page by attaching event handlers to the HTML elements.

This is how I use JS. I may be misunderstanding something, but this is a nice way to use JS to add targeted interactivity to a page while keeping load times and interaction-latency low. Modern JS is good enough for this purpose without the webpack/VDOM/bundling/dependencies that have made many websites sluggish.
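That lightweight flavour of "hydration" can be as small as a single function that walks the server-rendered page and wires up handlers. A minimal sketch (the `data-toggle` convention is made up for illustration):

```javascript
// Attach click handlers to every server-rendered element marked with
// data-toggle, whose value is a selector for the panel it shows/hides.
function hydrate(root) {
  let attached = 0;
  for (const el of root.querySelectorAll("[data-toggle]")) {
    el.addEventListener("click", () => {
      const panel = root.querySelector(el.dataset.toggle);
      if (panel) panel.hidden = !panel.hidden;
    });
    attached += 1;
  }
  return attached; // number of handlers wired up
}

// In the browser you would run it once the static HTML has parsed:
// document.addEventListener("DOMContentLoaded", () => hydrate(document));
```

No bundler, virtual DOM, or re-render is involved; the HTML the server sent is the UI, and JS only adds behavior on top.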


That's how everybody used Javascript when it first came out, coming up on 30 years ago. This is how it was designed to be used in the first place.

This seems to happen with everything: somebody solves a problem; somebody else gains a very partial understanding of the solution and adds unnecessary hacks on top of it that they thought they needed, because they didn't spend any time understanding how the solution actually worked; and then somebody else comes along and adds back, on top of the hacks, the things that were already in the original solution, so that it works the way it always worked, but much slower and in a way that will break in surprising ways when you least expect it.


There is also the situation where someone gets a really partial understanding of a certain solution and as a result wonders why the author didn't implement a simpler solution.


Their definition is wrong. Hydration takes static HTML and replaces it with client-rendered HTML.


Yep, some duplicated templates where they need to be are dead simple and cheap. I've been thinking about this for many years and still think that is by far the best approach.


I thought that was called "progressive enhancement". I like that too


Clicking on the builder.io link redirects me to:

https://uc.appengine.google.com/_ah/conflogin?state=%7E...

which says:

    An application is requesting permission to access your Google Account.
    Please select an account that you would like to use.

EDIT: you can see it if you go to http://builder.io (https works fine)


I'm not sure if this is reflective of a normal Qwik project, but in the TODO app, javascript fragments are not even downloaded until the event fires, which adds a very noticeable latency.

e.g.: on first click (to check off a TODO item), my browser fetched q-6642ef59.js, which has the contents below and in turn caused the fetch of two other JavaScript fragments, which in turn caused a fetch of two more.

    import{d as o}from"./q-dd8cb722.js";import{t}from"./q-3d9b01e7.js";const s=o((({item:o,todos:s})=>t(s,o)));export{s as Item_onRender_on_click};
Looking at the network tab, this took a full quarter-second to finish fetching on my MacBook on fast office internet. They cite "50 ms to ready for interaction" for this demo app, but it's really 300ms until the interaction begins.

Perhaps judicious use of preloading javascript and/or CSS animations could hide this from a user, but it seems icky, especially if you're targeting mobile with much higher latencies.
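The preload idea could be as simple as emitting hints for the fragments an interaction is likely to need (filenames taken from the example above; whether the framework can reliably predict which fragments to hint is a separate question):

```html
<!-- Fetch likely-needed event-handler fragments before the first click -->
<link rel="modulepreload" href="/q-6642ef59.js">
<link rel="modulepreload" href="/q-dd8cb722.js">
<link rel="modulepreload" href="/q-3d9b01e7.js">
```

With `modulepreload`, the browser downloads and parses the modules ahead of time, so the first click only pays for execution rather than a chain of network round-trips.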


While the points against hydration are valid, the proposed solution sounds to me like it would have other (opposite) performance problems:

i.e. hydration has an inherent startup overhead, and possibly a continuing memory overhead (though the latter doesn't seem inherent), whereas resumable sounds like it may start fast, run slow (and I can't see where they solve the mentioned memory overhead problem).

I haven't tested this myself, so since they have some Qwik demos linked, I gave them a try. Anecdotally, startup seems very slow in them. I thought that was the very problem being solved. Maybe the component download is fast but the framework download/parse is the slow part?

Either way, none of the examples are complex or interactive enough to properly test ui latency so I don't know.


No! That's not how it is. The way it is, is

- every app more complex than how I like to do things, is an overengineered mess

- every app less complex than how I like to do things, is an offensive relic from the distant past


I've been trying to stay away from doing public facing web apps for a decade, so, from my point of view, hydration doesn't make any sense.

Authorization is in the browser, meaning I get to serve the login page "real fast".

That said, I get the reasons people do it, but frankly, it doesn't sit well with me. It's guaranteed that the whole process is unreasonably complicated as opposed to serving a few static files that can be cached and everything.

In theory, it could benefit people browsing without JS, but I am not sure things work out that way. Does Nuxt/Next require JS on the client to display whatever was "rendered" on the UI server?


Authorization is in the browser...

I'm probably missing some details, but this seems like a Bad Idea.


realistically speaking, if done correctly, SSR would allow the site to work without JS. Now, in practice, that comes down to what is required on the client side for interactivity - which is a case-by-case basis - and if your site doesn't require JS on the FE at all, why would you use it for the backend?


Yeah, but at the same time, who the hell would use nuxt/next for the type of site that is perfectly usable without js.


i ask myself the same question, but apparently they're out there


You can still cache SSR'd websites and render them only once.


A symptom of everything-JS: the desire to write templates once, in one language, ignoring that the roles of frontend and backend are different, as the business logic usually is too. At the end of the day, some duplicated code is the simpler and better solution.


Of all the terms in tech "hydration" makes me cringe the most.


Any idea what the origin of the term is? And why "progressive hydration" was needed over the pre-existing term "progressive enhancement"? I can't figure out what's different about "progressive hydration" that the old term didn't cover.


Have you not heard of the new programming language that's gaining traction? Moist


Anyone tried Phoenix Liveview recently? Seems like a solution to opt out of this kind of situation entirely.. I'll be exploring the framework


liveview still uses JS, it's just rather out of the way in what it does

edit: livewire in laravel land takes a similar approach - though not as fast currently; it leverages alpine.js under the hood


While we use JS and websockets, even with js disabled or curling the page will send all the HTML. So the initial render is strictly a regular html page with all your expected content.


hate replying to something i already did - but i also never said liveview required JS, just... it still uses it if it's there. Liveview is a really slick answer to this whole thing, and I recommend readers of the thread check it out. And now I have nothing.


right, i get that - and liveview is a sick piece of kit, so congrats. just saying that for what most would use it for - it is still using some JS


i suppose i should have prefaced with liveview uses JS...if you intend to use JS or dynamically update anything

edit: much like the SSR being described


Miso has hydration with a global event handler.

https://haskell-miso.org/


Misko is right of course, but it remains to be seen if Qwik will be everything it promises to be.


Burn it all down. Just stop using JavaScript.


Good luck using even a barebones site like HN without JavaScript.


HN works fine without JS!

That being said, the third-party search service it uses does not.


Maybe I'm just getting old, but Javascript jumped the shark at some point. Hydration, lazy loading, managing flashes of unstyled content, a lot of this is built to address things that wouldn't be problems if we treated the browser as the dojo that it is and not be so dang wasteful.

I'm sure someone with shinier boots than mine will pop in and tell me how I'm wrong, and perhaps you're right, but the web was a much better place without all the toolchains and shenanigans. We must return to fundamentals.


I agree with you from a technical standpoint — something created for displaying and navigating between documents is being misused for distributing apps.

At the same time, though, no other model affords the same distribution: write once, run everywhere. No other distribution platform can compete. And users have become accustomed to the web being apps as well as documents. In fact, many have become accustomed to more or less treating their web browser as an OS (or, more realistically, not giving the distinction any thought).

Maybe an interesting way forward could be a web model specifically for apps, which of course many would say would be WebAssembly. The HTML/CSS/JS combo has been remarkably successful and resilient though, so I'm not convinced it will be dethroned easily.


It existed in the platform. But Mozilla has lost the first _and_ the second Browser War and has been sentenced to shepherd some "web developer" Wiki and tell everyone how fun that all is.

Yes, it can make your day sad, but I'm sure there's a crowd out there that is just amazed by all of it. And they're told the pay was not that bad either, so we truly only have winners.


My boots are pretty dusty too, and a web browser isn't usually how I want to engage with a tool or toy.

But the systems architecture model of infinitely beefy backends with simply-adequate thin clients predates both of us. Browsers, the cloud, and all this obnoxious javascript tooling are the contemporary implementation of that and not without reason. They do it pretty well!

Like with any new tool, people get carried away and start using it for things that really don’t need it. I’m with you that most websites don’t need all this stuff and get caught up in it anyway.

But that’ll burn off, and in the meantime, we’ll end up with a rich, mature thin client system for the solutions that need it and that system will be around for decades. No sharks jumped.


Browsers of today are begging so much for new clothes, they ain't browsers anymore. Give the monkey its sugar.


I remember when HTML5 and CSS3 were just in the RFC phase and many web developers thought they would make their jobs simpler. Because the things they were being asked to do, for which they were currently creating tortured workarounds, could be done directly with the new standards.

But of course the job did not become simpler. The fixed value turned out not to be what clients would demand, but the quantity of tortured workarounds devs could be enticed to endure.


I'll be an anecdote: I literally only came back because of CSS3 and jQuery. Managing layout with tables and all hover effects with javascript did not sound like a fun career choice when I first started. And now I've fully embraced flexbox and grid and styled components - I rarely have to think about classes anymore. New tools have been making my job a lot more fun over the years.


Almost got back into webdev but the technical interview was 100% JavaScript.


"Am I out of touch? No, it is the children who are wrong."

This is an old, tired take that boils down to "we should be ashamed for wanting nice things." If only everyone would just accept simple websites like HackerNews! This is backwards, it is user-blaming. It turns out that the web is an incredible platform that has revolutionized the world, and thousands of people have worked hard to build tools that make it easier to develop on. "Web fundamentals" can't give you Google Docs.


> "Web fundamentals" can't give you Google Docs.

There's real irony in this statement, since you can't get much more fundamental than that. WorldWideWeb.app (Nexus) was created as a read-write client for both navigating and authoring content. Not only are modern Web apps not a necessary precondition for that, but neither JS nor any form of mobile code are necessary, either.

(The thing that Docs does wrong where "fundamentals" are concerned is its de-emphasis on the importance/role of the URL and making every Docs doc a sort of second-class publication that exists in this "other" kind of space—i.e., Google Drive and the Docs editor, which you always get a sense of being "inside", instead of the content just being out there on the Web.)


People want to be able to author and share docs, directly in the browser and share with no effort. You cannot do this without JS, and the level of complexity is sufficient that you need robust tooling in order to manage it.


You can do this directly in the browser* without JS—in exactly the way shown by the example I just gave: by using a browser that is capable of editing and sharing docs (directly).

* If someone really wanted to be a stickler, they could point out that that you've set up Google Docs to fail your own rubric, since it involves indirect editing. The browser itself has no direct role in the editing process. It only manages to do so by fetching and executing the minified bundle on the Docs site.


So your solution is to write a browser so you don't have to write JS?

Supposing I just want to create an internal tool that allows me to collaborate in realtime with my colleagues on a document (multiple cursors, everyone editing at once) and I have to do it in an existing browser because I don't write C++ and only have a few weeks to deliver - how would I do this without JS? Also, it needs to work on Android, iOS, Windows, and macOS in FF, Chrome, and Safari.


> So your solution is to write a browser

First, you're moving the goal posts...

> so you don't have to write JS

... and attacking a strawman. (No, I'm not worried that I might "have" to write JS. I've written a lot of JS. And I put a lot of effort making sure there was high quality documentation about the language in the early days of developer.mozilla.org—so that other people could have a nice time when they write JS.)

Secondly, you are aware—I'm certain of it—that the company behind Google Docs actually does have a browser.


> You can do this directly in the browser without JS—in exactly the way shown by the example I just gave: by using a browser that is capable of editing and sharing docs (directly).

So your users will need a specific browser to use your Web app?

This goes against a primary advantage of Web apps... They work on many different browsers/devices.


> So your users will need a specific browser to use your Web app?

No. For the use case mentioned (editing and sharing docs in Google Docs), no one would be using a Web app, because there isn't one anymore. With the browser itself able to do directly what Google Docs is being used for, there's nothing left.


<contenteditable


Sure. And now you need thousands of lines of JS to make it cooperative in real time.


That's not a requirement for most people; versioning and locking are fine.

There are certainly refinements that can be added with JS, but JS is not a requirement, and certainly is not for rendering an editable page.


We're talking about programming apps such as Google Docs. It is a hard requirement. Your reply is completely useless from a technical perspective. Sorry-not-sorry to be so blunt, but I have no sympathy for "yeah, you can do it without any JS if you remove all features". Let's just go a little further and throw away our computers - we can send carrier pigeons to each other, right? And what's this paper stuff? Useless! Stone tablets 4ever!

And as a user, it is a hard requirement for me too. I never want to go back to locking and versions; that's absolutely terrible and completely kills the workflow I have with my colleagues. Collaborative docs editing is the single most awesome development of the modern web, and users are choosing the otherwise not-so-good Google Docs solely on the basis of this feature. Perhaps not every user needs it - but there are countless users that do. If I wanted to edit without collaboration I'd use Word - much better UX and document-editing capabilities... Sadly, no truly working collaboration - it's too much like locking/versions, so it's not an option. I'd rather use a collaborative raw-text editor than the best of the best locking/versioned WYSIWYGs.


I presume it didn't have realtime collaborative editing or comments?


You can have simple websites without them being ugly


Users don't want this shit, startups do.


This is an incredibly broad statement based on practically nothing. You don't think it's weird that most of the largest, most successful websites have mountains of tooling and frameworks to help them remain performant and reliable? All these tools either help the user directly, or solve real problems for companies to help them build more quickly.

I also think you grossly overestimate the average user's abilities. They literally cannot tell what is possible and what is not. They don't want simple, they want something to solve everything, as easily as possible.


You don't think it's weird that most of the largest, most successful companies have mountains of office politics?

Clearly this means that office politics is good for your company

Largest economies in Europe have the oldest and most drafty, poorly insulated housing stock. Therefore shitty housing stock must be good for the economy.

Largest economies in the world have the most pollution. Therefore pollution must be good for the economy.

Backwards reasoning.


You're arguing that correlation does not imply causation. Fair. So tell me: how would you manage to build a website like Facebook without robust JS tooling?

The answer tends to be "wellll I hate most of these features anyways, let's get rid of them and then it can be as simple as HackerNews!" But obviously, millions of people use those features every day and like them. The tooling solves a real problem.


There are industries that use real robust tooling to run banks, build databases, and control surgical robots. They don't choose JavaScript; it has all the robustness of a wet noodle.

Desktop and mobile applications are reasonably robust, have very complex software, and could do everything Facebook does trivially. All this 'robust tooling' has evolved because we are trying to shove an application into a browser, and despite decades of effort, it still kinda sucks.

If Apple, Microsoft and various distributions of Linux pulled the finger out of their collective asses and agreed on a half-decent, cross-platform GUI software package in 2005, none of this JS madness would exist.


> cross-platform GUI software package

And cross-platform APIs, cross-platform libc, cross-platform networking... and then the same shuffle on mobile, making sure everything is compatible? Supporting x86 and ARM?

The web does suck, there's no getting around that. HTML and the DOM are garbage even at their original purpose, let alone at writing applications, and everything on top of that is a kludge on a kludge on a kludge.

But let's not pretend dealing with OS APIs from the 90s is much better. We've been trying to get away from that as early as we could with e.g. Java.


Lots of "robust" industries choose the JS ecosystem.

I work in Fintech. There's a lot of tooling to make JS-as-a-language more robust, including TypeScript and some of the best linters, debuggers and introspective libraries available in the entire tech industry.

The only reason JS isn't used more than it is, is because its robustness is young, not non-existent.


Ah can’t wait to finally build a cross platform app without complex JS tooling once I take my time machine back to 2005 and change the course of cross platform GUI development forever.


> If Apple, Microsoft and various distributions of Linux pulled the finger out of their collective asses and agreed on a half-decent, cross-platform GUI software package in 2005, none of this JS madness would exist.

This is what browsers are. You are describing browsers. They run on any device, on any architecture, and nowadays even on the edge. This isn't just about GUIs. I can't stand this lazy mentality of "pfft, I know better than these billion dollar companies, have they tried just making it good???"


Strawman

He argues tooling is a necessary but not sufficient* condition to performant web apps that solve meaningful user-facing problems

You argue he claimed correlation is causation

*Edit: had necessary and sufficient backwards


> You don't think it's weird that most of the largest, most successful websites have mountains of tooling and frameworks to help them remain performant and reliable?

I don't fully agree with the person you're responding too, but it feels like you're making their point for them with this. Users are engaging with a tool or some content for themselves and today. Startups are the ones dreaming of standing among the "largest, most successful websites" someday.

Users don't care how efficient or inefficient your development process is, or how much technical debt you carry, or how many concurrent users or requests you can run, or how scalable you are if things go well. If your service stands up and works for them today, none of those things are relevant to them at all.

That's not to say that the startup's concerns aren't critical to making sure that the service is still capable tomorrow and the day after, that the company doesn't collapse under bad process and inefficiencies, and that the service maintains utility rather than becoming dated and stale. They're valid concerns.

But the users are so far removed from those concerns that it can be legitimately frustrating when pursuit of those long-term concerns degrades their immediate experience.


> long-term concerns

And that's it. If you want to ever reach those long-term goals, you need robust tooling.


Free money solutionism's finally dying, dude. Better learn to solve actual problems instead of problematizing what you can barely do.


Do you have any proposed solutions for the problems that exist? Going back to fundamentals doesn't eradicate the problems, it will only recreate them in a slightly altered state.

One of the most commonly proposed solutions is that we keep the web to only static content. Well, now we've just shifted all the media-heavy interactive content onto a dedicated app instead of the browser, shifting all the same exact problems onto a new platform.


> Well now we've just shifted all the media-heavy interactive content onto a dedicated app instead of the browser

And then they complain because "This could have just been a PWA, why do I have to install an app for every website I use!?"


You're not wrong, the websites I enjoy the most are basically all static.

I think the problem is that the "web fundamentals" aren't that good to begin with. A web application is very often the least bad solution, but you're not going to have "rich" web applications without tons of JS. Show the average user HN and they won't like the interface.


> Hydration, lazy loading, managing flashes of unstyled content, a lot of this is built to address things that wouldn't be problems if we treated the browser as the dojo that it is and not be so dang wasteful.

What's your point here, exactly? That if random wordpress blogs and recipe websites were less wasteful, the problems these solutions are addressing would not exist?

I can make you an extremely non-wasteful webapp which still needs to display a hundred images on a page (because reasons), so lazy-loading the images is still important.

I can find you a very well-optimized website that is only a few kilobytes, but still loads slow as shit because their network is bad and I'm on a terrible 3G link. FOUC would still be an issue.


I think the point is people should use the correct tool for the job. Instead, sites that should (or can) be static sites are unnecessarily using some JS framework.


Newer JS frameworks like Svelte and SolidJS can be almost transparent to the user, though - with no need for bootstrapping a complex codebase in the client before they can interact with the site. I'm guessing that OP describes a further development along these lines.


i think you're conflating static sites with sites that lean heavily into JS frameworks. plenty of sites are not static, cause they need that whole session thing, and don't use JS frameworks


HTML has native support for lazy-loading images these days. No need for added JS.
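For reference, that native mechanism is a single attribute:

```html
<!-- The browser defers the fetch until the image approaches the viewport -->
<img src="photo.jpg" loading="lazy" width="800" height="600" alt="A photo">
```

(The explicit `width`/`height` also prevent layout shift when the image eventually loads.)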


You can lazy-load a lot more than images. In my example, yes I wouldn't use js for it, but it's irrelevant to the point I was making. My read of GGP is that "lazy loading is useless because you shouldn't have so much content that it's needed, ever", which is a bad take.


it does, but when design and top brass say it needs to fade in - back to some JS you go


> FOUC would still be an issue.

hell, just using webfonts can cause FOUC - no JS needed.


> if we treated the browser as the dojo that it is

I have no idea what this means. Isn't a dojo a place for learning or meditation? I don't understand how that fits the browser, the internet, or web development.


My take was something along the lines of 'the browser is a place to strive for simplicity and mastery' instead of 'move fast and break things'.


I thoroughly agree, society was much better before computers came along.


Whoa whoa whoa, let's not get carried away. The peak of technology obviously occurred in $YEAR_AUTHOR_GRADUATED_UNIVERSITY and it's all been downhill from there.


I would add a -50 to that.

We want to live in the golden age


Can't tell if this is a joke or not lol


There is some bias in this comment, perhaps Ordnung is what compels it, but simpler was better, and though collectively we were less informed, we were also less burdened.

There is value in a plain lifestyle and community. We've lost a sense of community because of this pervasive false-familiarity that the internet and phones have created. You no longer need to see your relatives and community members because you can 'see' them on a website or app or call them.

Whether it gets acknowledged or not, for all of the positives this connectivity brought, there were an equal amount of problems. Our advances in telecommunications led to an uncomfortable truth that not everyone's innermost feelings should be foisted upon the greater community. Everyone knows someone who's consumed with worry/negative-excitement over the struggles currently taking place.

We will never again have another private thought, another uninterrupted conversation, eye contact, or the feeling of someone's hands as you tell them how much their being means to you. All of these are incontrovertible consequences of the Internet, our prodigal problem child.


This sort of generalization really doesn't help any discourse.

While it's indeed easy to lose the things you mentioned we're still not forced to do so. Many are already stepping back and reconsidering their choices, and rediscovering various real-life elements. I'm seeing it around me.

And this:

> We've lost a sense of community

...is extremely one-sided. I started my life in a small town where, if a bunch of prejudiced people didn't like you, then God help you, because they could even make sure grocery stores would not sell you food. I wish you luck even existing on a basic level there.

For people who don't have the approval of their small communities, the internet (or physically moving away) has been a blessing.

So while your point is valid, you went way too far without regard for nuance. The fact that a lot of people waste their precious time on bullshit on the net does not in any way, shape, or form mean the internet hasn't been extremely helpful and enabling for many people.


> We will never again have another private thought, another uninterrupted conversation, eye contact, feeling someone's hands as you tell them how much their being means to you.

I disagree. It is just that these now need a conscious decision from us. My thoughts are still private until I choose to publish them. My conversations are only interrupted if I allow them to be. We make eye contact when we decide to. The intimate moments with my partner are there when we create them.

True, it’s easy to lose that, and growing up today you need someone to show you that it can be done. Which is why I agree that we stand to lose it, but we are not there yet.


Poe's law strikes again


(it is)


agreed, wouldn't be able to read this comment


We must return to fundamentals.

These are not fundamentals. The Overton window has shifted completely out of its initial position, but browsers ignored that for two decades and offloaded it onto webapp developers. Web 2.0 is not a browser; it is what became possible with everything people have built outside of it, on top of a “take it or leave it” attitude. Web 1 is an archaic network of winword-level documents that is a huge step backwards in UI, UX, and common sense.

We must not return to anything; browsers must get off their asses and run towards what other people achieved through hard work despite all the obstacles.


The core issue is that browsers were not made to provide the sophisticated features we require these days. Because of this fundamental problem, you can't build something like Gmail or Spotify without increasing the complexity of development exponentially.

The "SSR + sprinkled JS" paradigm is still totally valid though for many use cases.
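For anyone unfamiliar, a minimal sketch of the "SSR + sprinkled JS" idea: the server renders working HTML (here a hypothetical search form posting to a made-up `/search` endpoint), and a few lines of inline script progressively enhance it if JavaScript happens to be available. The endpoint and `#results` element are illustrative assumptions, not anyone's real API.

```html
<!-- Server-rendered HTML: the form works with JavaScript disabled -->
<form action="/search" method="get">
  <input name="q" type="search" required>
  <button>Search</button>
</form>
<div id="results"></div>

<!-- The "sprinkle": if JS is available, swap in results without a full reload -->
<script>
  const form = document.querySelector('form');
  form.addEventListener('submit', async (event) => {
    event.preventDefault();
    // Reuse the form's own fields and action, so the enhancement
    // degrades to the plain GET submission when this script fails.
    const query = new URLSearchParams(new FormData(form));
    const res = await fetch(form.action + '?' + query);
    document.querySelector('#results').innerHTML = await res.text();
  });
</script>
```

No bundler, no hydration step: the server-rendered markup is the source of truth, and the script is an optional layer on top.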


Developers, just coding their websites, not a toolchain in sight.


> but Javascript jumped the shark at some point

Contemporary frontend, in-browser app development, you mean. JS is a programming language. By conflating a language with a particular culture of software development, you implicitly transfer more power to that culture, the people in it, and their practices, even though your message expresses a clear desire for the opposite.


SSR is a hack, not an architecture.



