
You Really Don’t Need All That JavaScript [video] - tambourine_man
https://youtube.com/watch?v=e1L2WgXu2JY
======
uses
More than anything this reminds me of the fact that HTML has failed, on a basic
level, to keep up with the ways that users want to experience content and that
designers want to present it. The browser has no real concept of "page
navigation" or "site navigation" with any kind of state, no notion of the
"current" navigation item, and no concept of hierarchy, breadcrumbs, menus, or
other core navigational ideas involving site and page structure.

With a few built-in "site structure" features in the HTML standard, the
browser could do a large part of what developers end up reimplementing over
and over again - both with and without JS. A large part of "knowing about
usability" is just knowing how to execute these basic navigation features
without goofing up on the many pitfalls available to you.

For example, imagine if there were some kind of Site Structure or Page
Structure standard. The browser could then generate standardized navigation
elements with perfect UX on every platform - and sure, you could style them,
but the basic functionality would just work everywhere, without having to
build elaborate JavaScript/HTML/CSS menus that take into account
responsiveness, touch interfaces, and accessibility.
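
Something like this hypothetical markup (every element and attribute name
here is invented for illustration - nothing like this exists in the standard):

    <sitemap>
      <nav-item href="/docs" label="Documentation" current>
        <nav-item href="/docs/install" label="Installation"></nav-item>
        <nav-item href="/docs/api" label="API Reference"></nav-item>
      </nav-item>
      <nav-item href="/blog" label="Blog"></nav-item>
    </sitemap>

The browser could render that as a menu bar, a drawer, or breadcrumbs as
appropriate for the device, keep track of the "current" item across
navigations, and expose the whole thing to assistive technology for free.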

~~~
feoren
No thanks. Good languages start small and stay small. HTML is already bloated
with tons of elements of questionable usefulness. It's a document presentation
format, and that's all it should ever be. Does Markdown need knowledge of
state and navigation and breadcrumbs and menus?

~~~
satyrnein
At this point, the choice is not whether the web should have apps, but whether
they should use multiple frameworks that invent the missing pieces on top, or
whether some of that should be standardized. For me personally, knowing HTML
is small will be cold comfort if we are all using react-router or whatever in
the future.

~~~
el_dev_hell
> For me personally, knowing HTML is small will be cold comfort if we are all
> using react-router or whatever in the future.

Serious question, what's the issue with react-router?

~~~
mercer
I can't speak for the OP, but my issue with react-router is that over the
course of a single project I had to significantly rework things, because each
major version did routing in a completely new way. Once I gave up and used
page.js (or some other non-react-specific solution), not only was the
resulting code shorter and cleaner, it needed much less upkeep.

I'm sure there are situations where using react-router makes sense, but for me
it was one of the few packages 'blessed' by the react ecosystem that gave me
more trouble than it was worth.
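
For reference, routing with page.js is just a couple of function calls; a
minimal sketch (the handler names are mine):

    // page.js: a tiny client-side router with no framework coupling
    import page from 'page';

    page('/', showHome);                            // register routes
    page('/users/:id', (ctx) => showUser(ctx.params.id));
    page('*', showNotFound);                        // catch-all
    page();                                         // start listening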

------
ebiester
I saw this earlier today. (Edit: I meant to watch just the first 10 minutes,
but ended up watching the whole thing.) After going into the problems of
client-side rendering, he does get into the benefits (9:00), but then argues
that these are after-the-fact rationalizations. He then argues that the real
problem designers and devs are trying to solve is loss of control - the
browser controls loading, and the blank screen is "bad."

He then says that the solution is an iframe-like element (called a portal),
which looks to be a proposed API: https://web.dev/hands-on-portals/ - it is
available as an origin trial in Chrome 85.
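
The basic shape, per that article, is an embedded preview of the next page
that can be "activated" to become the top-level document - roughly:

    <!-- Preload the next page, embedded like an iframe -->
    <portal src="https://example.com/next-page"></portal>

    <script>
      // On click, promote the preloaded page to be the top-level document
      const portal = document.querySelector('portal');
      portal.addEventListener('click', () => portal.activate());
    </script>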

So, he's saying that SPAs are bad and we should invest in a Chrome-only
feature or future feature that might make it through standards? This makes no
sense to me.

There are multiple uses for SPAs. One is building a genuine web application,
such as Gmail or a SaaS product. But SPAs are also used in places where they
shouldn't be, such as news sites, because of the page-load problem mentioned
above. Portals would be useful there.

But that gets at the real problem: over the past decade, far more thought and
effort has gone into tooling, tutorials, and patterns for client-side
rendering than for server-side rendering, especially mixed mode. (Mixed mode:
adding JavaScript on top of server-side rendering.) The problem is that
server-side rendering and mixed mode are not a good development experience.
The back button was never a fully solved problem, and POST-redirect-GET is
essentially an insufficient hack.
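
(For anyone unfamiliar, POST-redirect-GET in, say, Express looks roughly like
this - the routes and persistence helpers here are made up:

    const express = require('express');
    const app = express();
    app.use(express.urlencoded({ extended: true })); // parse form POSTs

    // Respond to the POST with a 303 redirect, so refreshing the resulting
    // page re-runs a safe GET instead of re-submitting the form.
    app.post('/comments', async (req, res) => {
      await saveComment(req.body);      // hypothetical persistence helper
      res.redirect(303, '/comments');
    });

    app.get('/comments', async (req, res) => {
      res.render('comments', { comments: await loadComments() });
    });

    app.listen(3000);

It works, but it burns a full round trip and page load just to keep the back
button and refresh sane.)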

If someone wants to make server-side rendering for web applications a viable
experience, we are likely going to need changes to the standards, and we will
need to build up a great set of tooling to solve the problem. (Oh, and since
it's server-side rather than JavaScript, each framework or language will have
to replicate it, just as React, Angular, Vue, and others have done.)

~~~
Aldo_MX
> So, he's saying that SPAs are bad and we should invest in a Chrome-only
> feature or future feature that might make it through standards? This makes
> no sense to me.

That was my own conclusion. At first I thought "whoa, <portal>? Never heard
of it, but it looks cool." Then I searched for `HTMLPortalElement`, got to a
W3C page saying "Draft", and thought "what? A yet-to-be-standardized feature?
Thanks, but no thanks - I might check back on it in 2 years."

~~~
OzzyB
This presentation is essentially Dev Advocacy: he's describing where we want
to go, not necessarily where we are now.

I hear you though; I thought this was gonna be another "you don't need React,
here's the Vanilla JS that does the same thing" kind of presentation, but I
was actually intrigued to learn about this <portal> tag. It does seem to
address the (main?!) bugbear we have with interactive webpages and the fight
between the server and the client.

Although I am surprised there's little to no support for it _right now_ - I
thought at least Chrome would have it.

[https://caniuse.com/?search=portal](https://caniuse.com/?search=portal)

Edit: Ofc it _can_ be tried out in Chrome with the experimental flag enabled,
but it's not supported for regular users at this time.

------
PaulDavisThe1st
His example of using portals to solve "a real problem" is really interesting
to me, because it seems to point to a much more fundamental problem with UI
development via HTML and browsers.

The example involves a common page design: a left-hand table of contents, and
a right-hand pane (typically most of the width) with the current contents
(chosen via the ToC). Jump to a new section, and the scroll position in the
ToC is lost.

He describes how to use a portal to fix this.

Compare with desktop GUI toolkits. The ToC would be its own widget
(GtkListView or QtWhateverList or NSThisNThat). The contents would be their
own widget (GtkTextView or QtCanvas or whatever). Each is drawn independently
of the other. You can call for an invalidation of part of the window. You do
not need to treat them as part of a single window rectangle that is always
redrawn in toto for a page load.

Now, my (weak) understanding of why we've ended up with things like React is
that, in part, they are an attempt to solve this by effectively creating the
equivalent of a widget hierarchy inside the browser. When a React
implementation of his example gets new content, it doesn't invalidate or even
necessarily "redraw" the ToC; it tells the browser what to do by invalidating
only certain DOM elements (or even just one).

I think he misses this as a key reason for the movement towards frameworks.

But ... notice that the browser will probably (I'm not 100% sure about this,
and would appreciate comment from someone who knows) re-render the entire
rectangle that contains the page.

Desktop applications don't work this way. If you turn on Quartz debugging on
macOS you can actually see the rects on screen that are re-rendered to reflect
invalidation by the application (e.g. after new content needs to be
displayed).

Maybe I have this wrong, and React and similar frameworks have leveraged a
subtle capability in the browser to invalidate a DOM element as a way to only
re-render part of the on-screen rect.

But assuming I have this roughly correct, this model, in which the "atomic,
indivisible rendering unit" for browser apps is the page rect, seems
fundamentally disadvantaged and mis-designed compared to desktop apps.

~~~
acemarke
I don't have technical references to point to atm, but as far as I know,
browsers _do_ go to great lengths to only actually repaint portions of the
page that have changed. There's lots of layer compositing on GPUs, checks to
see what DOM nodes were mutated, etc. Nothing framework-specific here at all -
this is how all modern browsers are implemented.

React in turn works to let you, the developer, divide your UI code into
discrete components that control the layout of some subset of the page, and
lets you declare what the UI should look like at any given point in time based
on your data. React then tries to minimize the number of DOM updates needed to
make the previous DOM structure look like the structure you said you want now.
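
As a rough sketch of what that looks like for the ToC example upthread
(assuming a React setup; all component and prop names here are mine):

    // When `current` changes, React re-renders the tree but only mutates
    // the DOM nodes that actually differ (e.g. the highlighted ToC entry),
    // so the ToC's scroll position survives a content change.
    function Toc({ sections, current, onSelect }) {
      return (
        <ul className="toc">
          {sections.map((s) => (
            <li key={s.id} className={s.id === current ? 'active' : ''}>
              <a onClick={() => onSelect(s.id)}>{s.title}</a>
            </li>
          ))}
        </ul>
      );
    }

    function Docs({ sections }) {
      const [current, setCurrent] = React.useState(sections[0].id);
      const section = sections.find((s) => s.id === current);
      return (
        <div className="docs">
          <Toc sections={sections} current={current} onSelect={setCurrent} />
          <article>{section.body}</article>
        </div>
      );
    }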

~~~
PaulDavisThe1st
Assuming that you're right about the browser avoiding invalidating window
rects unnecessarily, that still leaves the fundamental problem that DOM
elements are not widgets in the way that (say) GtkListView and GtkTextView
are. In a desktop GUI toolkit, you can alter one of those entirely
independently of the other (and because of the typically non-overlapping
assumption of most widget-centric toolkits, only resizing will ever create a
dependency between them for the parent window).

More technically: if you change the content of a desktop text view widget,
then there is more or less a straightforward and simple code path from the
widget invalidating its own rect to the native window/graphics system causing
a re-render of just that rect. The toolkit really has almost no work to do to
make this happen, and neither does the native window/graphics system.

But in the browser model, the DOM is an abstract model that may or may not map
easily onto window rects. So to decide what will be invalidated in the
(native) window, the browser has to essentially recompute the entire window
and then decide what to invalidate at the native window/graphics level.

Are you sure about layer compositing on GPUs within the browser? My guess is
that this is done mostly by the (desktop/native) toolkit used by the browser,
and is an optional optimization, since browsers function normally without a
GPU.

At some level though, I think you're right and that the video is wrong in some
deep way: the way I was first told about React made it sound precisely like
"it is designed to work more like a desktop toolkit", and given this, the
video misses one of the more fundamental reasons for contemporary web
frameworks.

~~~
acemarke
A few seconds of googling turns up these relevant articles on how browsers
paint via GPU (which date back many years):

- https://www.chromium.org/developers/design-documents/gpu-accelerated-compositing-in-chrome

- https://www.urbaninsight.com/article/improving-html5-app-performance-gpu-accelerated-css-transitions

- https://hacks.mozilla.org/2017/10/the-whole-web-at-maximum-fps-how-webrender-gets-rid-of-jank/

and also how they handle painting and redrawing in general:

- https://www.phpied.com/rendering-repaint-reflowrelayout-restyle/

- https://www.html5rocks.com/en/tutorials/internals/howbrowserswork/

- https://james-priest.github.io/udacity-nanodegree-mws/course-notes/browser-rendering-optimization.html

So no, browsers don't "recompute the entire window". They know what DOM nodes
have been mutated, and are able to fairly efficiently redraw just those pieces
of the UI that have changed.

I haven't watched the video yet, but React's approach to defining UI structure
and updates is considerably different from your typical desktop UI toolkit
(i.e. Swing, Windows Forms, MFC, Qt, etc.). With those classical frameworks,
you generally have a bunch of object-oriented class inheritance hierarchies
and manually instantiated child widgets with specific parameters
(`button.setWidth(80)`, `button.setPosition(20, 40)`, etc.), and then add a
lot of event listeners to manually modify both the data and the UI.

As I previously mentioned, React expects to work with a "retained mode"
environment, such as the DOM, where previously-set UI layouts persist until
such time as they are modified. React is explicitly based on determining what
the current UI _ought_ to look like based on your current state. The problem
that frequently plagued jQuery-type UIs is figuring out how to transition the
UI contents from where they were previously to where they need to be now,
taking into account all the possible permutations of what could be shown on
screen. With React, you tell it what UI output you want at any point in time
based on your current state, and it takes care of figuring out the relatively
minimal set of changes that actually need to be applied.
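
The difference shows up even in a trivial list, sketched here (both snippets
assume an `items` array; the rest is mine):

    // jQuery-style: you write the transition yourself, for every
    // combination of previous state and next state
    function showItems(items) {
      const $list = $('#list').empty();
      items.forEach((item) => $list.append($('<li>').text(item.name)));
    }

    // React-style: you describe only the target state; React diffs the
    // previous render against it and applies the minimal DOM changes
    function List({ items }) {
      return (
        <ul>
          {items.map((item) => <li key={item.id}>{item.name}</li>)}
        </ul>
      );
    }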

~~~
PaulDavisThe1st
Thanks for the links.

I'll have to hunt around a bit harder, because in general, the people who talk
in the most detail about browser rendering don't seem to understand the
rendering process in non-browser apps, and vice versa.

For example, the discussions of GPU usage that you linked to all seem to skip
over the fact that (most) native desktop apps do the same thing implicitly,
c/o the toolkit they're using. If your ListView widget is ultimately drawn
using Vulkan or Metal etc., then the exact same set of abstraction processes
and delivery to the GPU is going on in a native toolkit app as in the browser
(which is not surprising, really).

AFAICT, React has no access to the browser render tree, and it can only
control the DOM. As such, it is really positioned at a very different level in
the "drawing stack" than, say, Qt. At the same time, as you point out, it also
isn't particularly close to the widget layout models of "classical frameworks"
(even constraint-based versions as have become more common in recent years),
because it deliberately tries to prevent you from having too much direct
control over the current DOM - that's supposed to be driven by the state.

I wish there were a more fruitful meeting place where the experience of 30+
years of "classical frameworks" could shape, and in turn be shaped by, the
decade-plus of "web frameworks". I think the former has a lot to offer the
latter, and the latter has many ideas and insights that could probably improve
the former.

------
hardwaresofton
I am definitely one who loves a good "get off my lawn" moment, but it seems
like we rehash this point every few weeks (and I don't even read frontend
twitter). There is a reason we are where we are in the front end world --
almost every useful tool/wave of engineering came to be for a reason. A (very
spotty) timeline[0] was even submitted to HN in the not too distant past, and
I won't go into it here, but from jquery to mootools to snowpack/vite, people
built tools to solve problems (some of which don't exist anymore, or won't in
the near future). Get a strong grasp of the fundamentals and of _why_ a thing
exists, and you will feel a lot less like you're being thrown around by the
waves of the frontend development zeitgeist.

All that said -- SSR is the best of both worlds. To the untrained eye, it
looks like a step backwards, but there's more to it than that -- it gives
separation (if you want it) for your frontend code that usually moves at a
different pace than the backend code, server-side rendering for performance
(obviously), and complex client side functionality for building rich
experiences. If you can withstand the increase in complexity, you can mostly
have your cake and eat it too -- server-side rendered robust client
applications.

Even for the most stalwart monolith enthusiast, isn't it nice to make it
relatively difficult for your View code to call some Model/Controller/util
method/function that it shouldn't be calling? If you go through the trouble of
committing to separating your "API" and your "frontend server" (there's a
trendy name for this which I have forgotten -- "companion" server? BFF?[1]),
you get better focused and hopefully more manageable domains.

All _that_ said, portals are a pretty cool new feature, but I'm not going to
hold my breath for them to be ready to use any time soon.

[0]: [https://bestofjs.org/timeline](https://bestofjs.org/timeline)

[1]: https://docs.microsoft.com/en-us/azure/architecture/patterns/backends-for-frontends

~~~
austincheney
> There is a reason we are where we are in the front end world

Insecurity resulting from a lack of proper training.

~~~
hardwaresofton
Well yeah, but that's everywhere. More realistically, I think it's because
browsers are the biggest software delivery platform ever created (and likely
that ever will be created, until we get some sort of neural/in-body
interface), and they change faster than regular desktop software ever could.

The browser is so popular as a software distribution method that it now
competes with GTK/Qt/Cocoa/etc for (large amounts of) "RAMshare" on your
neighborhood desktop.

~~~
PaulDavisThe1st
Yes to everything here except ... browsers do not change faster than the
equivalent part of desktop software: the toolkits (GTK/Qt/Cocoa/etc). The
latter change at least as fast as, and often much faster than, browsers ever
have.

------
whoisjuan
Let's say you're updating your UI through Ajax requests, with something like
Laravel Livewire - for example, filtering a list. Even though the web request
itself will be fast, the round trip plus the re-rendering could take up to
300ms.

That's very noticeable when you're browsing. It makes the website feel slow.
It's very hard to beat that latency if you're going to update everything from
the server.

So you really do need JavaScript to create that snappiness. If you're
filtering in the browser with the data that you already received, it's going
to feel instant.

I purposefully chose the filtering example because it's something that you can
do server-side or client-side. But the server side is always going to feel
slower if you're sending all the data in each request and simply changing the
rendering.

So where do you draw the line? I think you should only let the server re-
render the UI if there's new data being queried and sent. I'm of the opinion
that JavaScript should only manage the interactivity of what it receives, and
should fetch very little data.

Of course this only makes sense if you're using something like Laravel or
Rails. But even if you are building a SPA and working against a series of
endpoints, I think you should make an effort to write those endpoints in such
a way that you're sending as much data as you can in each request, letting the
client handle it without requesting again just to re-paint the UI.
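
Concretely, the snappy version fetches once and filters locally; a sketch
(the `/api/items` endpoint and the `render` helper are made up):

    // Fetch the full list once, then filter in the browser per keystroke.
    // No network round trip, so the list updates effectively instantly.
    // (Top-level await: run this as an ES module.)
    const items = await (await fetch('/api/items')).json();

    document.querySelector('#filter').addEventListener('input', (e) => {
      const q = e.target.value.toLowerCase();
      render(items.filter((item) => item.name.toLowerCase().includes(q)));
    });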

~~~
vbezhenar
I've yet to encounter client-side filtering. Usually there are many pages of
items, and the server returns the selected page according to the filter
criteria. The client just does not have all the items, so it would only be
able to filter the already-visible items, which is not useful.

------
fetbaffe
"Blank page" problem is usually only a problem if your site is really slow,
but if it is fast (because of server side rendering) it is less of a problem.

As the presenter in the video said, browsers can render HTML documents really
fast, which leads to some interesting conclusions.

Assume that you need to render a modal popup for each item in a grid.

The frontend solution would be to reuse the same modal for every item and just
swap in content rendered on the fly for that item - or maybe even create &
render the entire modal on the fly when needed.

The backend solution would be to render one modal per item. This is actually
easier: just loop & use unique ids (you can even reuse the same loop that you
build the grid with), with no need for dynamic loading, rendering, etc. So
even if you send more HTML down the line, the page acts smoother.
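
A sketch of that loop with a JS template literal (any server-side template
language works the same way; `items` is assumed to be the grid data):

    // One <dialog> per item, rendered server-side in the same loop as the
    // grid cells. Unique ids tie each cell to its pre-rendered modal.
    const html = items.map((item) => `
      <button class="cell"
        onclick="document.getElementById('modal-${item.id}').showModal()">
        ${item.name}
      </button>
      <dialog id="modal-${item.id}">
        <h2>${item.name}</h2>
        <p>${item.description}</p>
      </dialog>
    `).join('');

No client-side rendering is needed - the native `showModal()` call is the only
script involved.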

~~~
ebiester
Let's say you have a query that takes 3 seconds, plus about half a second to
massage the result into something that makes sense and start sending the data
over. With the single-page solution, you can show a spinner that lets the user
know something is happening. What does it look like if you click that link?

~~~
fanf2
You know browsers have built-in loading spinners?

[https://media.giphy.com/media/anjRJ4nv9WJzO/source.gif](https://media.giphy.com/media/anjRJ4nv9WJzO/source.gif)

In my experience, any time a web site reimplements basic browser functionality
in JavaScript (e.g. loading spinners, scrolling, navigation) it is slow,
buggy, and annoying.

------
holoduke
A properly built, fully client-side rendered website gives a much better user
experience. Yes, you need a strong client-side framework able to render
components, maintain state, history, etc. But once the application is loaded,
the only data communication between server and client is JSON or some other
format - much more efficient in bandwidth and speed. Another plus is that you
can actually have multi-platform versions of the application with the look and
feel of the target platform. Good luck getting that feel from a server-side
rendered page with an 800ms delay.

The main reason why you shouldn't do it is SEO. Dynamic sites are considered
weak for SEO; it is very challenging to get them properly indexed by Google
etc. You've also still got a lot of engineers who somehow favour server-side
rendering. For in-house internal apps or B2B apps, I would always prefer a
client-side app.

~~~
rcxdude
I agree in theory, but in practice SPAs chug on initial loading and they chug
on loading new content (and they often even chug when scrolling that
content!). The experience of using many websites improves drastically with
JavaScript disabled. Something is clearly wrong with current SPA practice if
this is the consistent result that users see.

------
karmakaze
Until convinced otherwise, I'll continue using Vue and if necessary server-
side rendering of it. This post didn't change my opinion.

