
Browsers are pretty good at loading pages - csande17
https://carter.sande.duodecima.technology/javascript-page-navigation/
======
jakobegger
I don't get client side navigation. It's a worse experience in every way. It's
slow, often doesn't support things like command-click, it usually breaks the
back button, and even if it doesn't it breaks the restoration of the scroll
position.

The only thing worse is a custom scroll UI.

Why do people try to reinvent the most basic features of a web browser? And if
they do, why do they always do only a half-assed job of it?

It's infuriating.

~~~
tiborsaas
Because it's faster. If you don't have to download all the content again and
force the browser to re-render everything, then by design you get the new
content from the server faster, if you have to download anything at all. The
idea has existed since the introduction of AJAX.

Furthermore, you don't lose state, which makes things much simpler.

Imagine a simple image gallery. You just update the <img> tag, update the URL
with the history API, and everybody is happy. If you were to navigate via
links, you'd get the same behavior.
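A minimal sketch of that gallery pattern (the image element and history object are passed in as parameters purely so the logic is easy to exercise outside a browser; the names are illustrative):

```javascript
// Swap the displayed image and record the change with the History API,
// so the address bar and back/forward buttons stay in sync with the gallery.
function showImage(img, history, urls, index) {
  img.src = urls[index];
  history.pushState({ index }, "", `?image=${index}`);
}

// Handler for the browser's popstate event: restore the image the saved
// history entry points at, instead of reloading the whole page.
function restoreImage(img, urls, event) {
  const index = event.state ? event.state.index : 0;
  img.src = urls[index];
  return index;
}

// In a real page you would wire it up like:
//   window.addEventListener("popstate", (e) => restoreImage(imgEl, urls, e));
```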

Of course shitty implementations exist, and you only notice the bad ones. If
done right, you don't notice that it's happening at all.

Lastly, and most importantly, context matters! It's not a silver bullet, but it
can be really useful.

~~~
TeMPOraL
> _Because it's faster._

The whole point of the article is that this isn't true. It's not the only
article that disproves it, and honestly, it's not difficult to notice. Just
go to any blog running off a static site generator; loading times of pure HTML
webpages on a good connection are so fast that the whole thing outruns
client-side page switches even if the page is already in memory.

~~~
StavrosK
Case in point, navigation on my site is pretty fast (to me, at least) and
doesn't use much JS at all: [https://www.stavros.io](https://www.stavros.io)

(I promise I'll reply to your email soon)

~~~
greenhatman
It's not as fast as [https://dev.to/](https://dev.to/), which is an SPA, i.e.
client-side routing.

~~~
detaro
And it took me 2 minutes clicking around to break its idea of the page state.
I am partially scrolled down the home page, and it just decided to deactivate
scrollbars and the ability to scroll.

A great example of how it's quite difficult to reimplement stuff that works
perfectly well on traditional pages.

(At least they seem to have gotten rid of some of the dark patterns they had
in the past, that's nice to see)

EDIT: and within a minute more found another state bug :D

Yes, you can make perfect SPAs, but many people fail, and it's a good question
whether the effort required to do it properly is worth it.

~~~
greenhatman
I've never found a bug on there, and I've been on it many times.

I'd love it if you could show me how to reproduce this bug.

I just don't have this experience with SPAs breaking. I actually have no idea
where it's coming from.

~~~
detaro
It doesn't happen 100% of the time, but right now going to the homepage,
clicking one of the listings in the "newest listings" box, and then returning
to the homepage through the browser back button triggered it.

~~~
nkurz
I am able to reproduce this bug with Safari on a Macbook. I clicked on an item
in "newest", then quickly pressed Cmd-Left to return to the previous "page".
The front page reappears, but I'm unable to scroll with arrow keys or
trackpad. An additional press of Esc returns the expected functionality.

It seems to be a fast and responsive site when it works, though.

~~~
nathan_long
> It seems to be a fast and responsive site when it works, though.

LOL.

Y'all, I love speed as much as anyone, but your development priorities should
be 1) it works and 2) it's fast.

Using HTML links where 1) is never in doubt and all focus can be placed on 2)
seems like good engineering to me.

Reinventing browser navigation is like building a rocket. You should be
really, really sure that you need to do it before you try.

------
nojvek
This may get downvoted to oblivion due to the HN bias against js.

The correct answer is it all depends. Certain things are faster to do in JS.
Certain things are faster as a page load. One has to profile and see what
makes sense.

There is a reason Atlassian is dog slow and Trello runs circles around it in
terms of UI performance. The immediate <100ms navigation between views is
probably why Atlassian ended up buying Trello. It would have eaten their
lunch.

Once you have a substantial amount of JS, and yours is an app-like site, as a
lot of sites are, it's usually faster to not make the browser parse and compile
again. Just stay in JS land as a single-page application and communicate with
the server purely in REST.

~~~
klez
> This may get downvoted to oblivion due to the HN bias against js.

Now, come on. Very few people here would say that JS has no place on the
web. A lot of people are against JS when alternatives exist. Trello is an
application, so JS makes sense there. A blog article that doesn't even display
when JS fails to load, that's where people have a problem.

~~~
baroffoos
It seems a large chunk of people would rather Trello were built entirely out of
HTML forms, so you press a <button> to make the card move to the left and then
the page refreshes with the card moved.

~~~
acdha
This is an uncharitable interpretation: many people would like it if things
which were billed as being faster were in fact consistently faster and
degraded well. That doesn't mean that Trello should trigger navigation every
time you move a card but it does mean that anyone taking over a core browser
function is taking on a higher level of responsibility to do performance,
accessibility, and compatibility testing.

A similar issue comes up with Google AMP: it's billed as a web performance
move but it's regularly slower and the failure mode is that you don't see
anything at all. That doesn't mean that the problem is completely intractable,
but the act of taking over a core function puts the onus on them to do a less
shoddy job.

------
compumike
(Disclosure: I'm Carter's desk-neighbor at Triplebyte.)

I think there's actually a middle ground where you can use some of the more
modern techniques to do _better_ than the pure static pages approach, while
still using normal browser-based page navigation.

As a personal challenge, I wanted to see what could be done about performance
for a recently-launched side project: the Ultimate Electronics Book [1], which
is a free online electronics textbook that has interactive circuit simulations
built in. Here's what I ended up with in the spirit of progressive
enhancement:

1) Static site generated by Jekyll (with a few custom plugins) and hosted on
S3+CloudFront with appropriate caching headers

2) No JS blocking initial page load

3) No custom CSS fonts to download

4) Prefetch and dns-prefetch headers

5) Tell browser to preload next page on link mouseover (instant.page)

6) Lazy-loading of schematic images (lozad / IntersectionObserver) when
they're nearly within scroll range

7) Client-side instant search index (awesomplete + custom code)

8) Tooltips with section descriptions on internal navigation links
(balloon.css)
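The mouseover preload in item 5 works roughly like this (a simplified sketch of the idea behind instant.page, not its actual code; `doc` is a parameter only so the logic can be exercised outside a real browser):

```javascript
// On link mouseover, inject a <link rel="prefetch"> so the browser starts
// fetching the next page before the click happens. By the time the user
// actually clicks (typically a few hundred ms later), the page is cached.
function preloadOnHover(doc, anchor) {
  // Skip if we already queued a prefetch for this URL.
  if (doc.querySelector(`link[rel="prefetch"][href="${anchor.href}"]`)) {
    return null;
  }
  const link = doc.createElement("link");
  link.rel = "prefetch";
  link.href = anchor.href;
  doc.head.appendChild(link);
  return link;
}

// In a real page: attach to every internal link.
//   for (const a of document.querySelectorAll("a[href^='/']")) {
//     a.addEventListener("mouseover", () => preloadOnHover(document, a));
//   }
```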

There are heavier components to this too:

1) Equation rendering (MathJax) is heavy and slow but starts rendering the
equations after initial page load, and prioritizes equations within the first
few thousand vertical pixels first. (Sadly I need the full power of MathJax;
the faster KaTeX engine can't handle all of my equations.)

2) The schematic editor / client-side circuit simulation engine is an entirely
separate SPA. (But by modern standards it's probably a smaller payload than
many sites today use for serving totally static content with a few forms.)

The result is a site that loads pretty darn fast -- maybe faster than a static
page due to preloading and lazy-loading -- but packs in a lot of functionality
appropriate to the problem. Any techniques I'm missing?

[1]
[https://ultimateelectronicsbook.com/](https://ultimateelectronicsbook.com/)

~~~
cousin_it
Loads about as fast as I'd expect from a static site.

Hover is a bit messy. When I move my mouse down the table of contents, the
tooltip obscures the next chapter title, and sometimes grabs the click as
well.

Back button handling seems to be buggy. If I click from the table of contents
to a chapter, then hit back, then click to another chapter, then hit back
again and do that a few times, the history becomes filled with many instances
of table of contents.

Also something weird happens when I click a chapter link. Like, the whole
table of contents scrolls for an instant and then I see the chapter.

Edit: if I disable JS, all these problems go away and the site feels just as
fast. So good job on that :-)

~~~
compumike
Thanks for the feedback. Tooltip hover on the TOC is annoying; I need to think
about that... The scrolling on TOC click was intentional: I wanted to highlight
the section you clicked and center it if you go back to the TOC, to "remember
your place" in the book, but maybe I overdid it here.

------
dustingetz
He says the reason people code client-side navigation is to load pages more
quickly. That's not the reason. Client-side navigation evolved from
single-page apps, where we use JavaScript to create dynamic content on the
fly, for example a chat application. Once you have a stateful single-page
app, you need to rebuild page navigation if it has more than one "page". I
understand that projects like Gatsby have used this technique for
performance, but I see no reason that today's performance optimizations will
be valid tomorrow. Page load performance is not the need driving this pattern
generally.

~~~
albedoa
> He says the reason people code client-side navigations is to load pages more
> quickly. That's not the reason.

That was the reason given by MDN.

------
buboard
Try visiting indiehackers.com, then come back to HN for a comparison. I would
love to love that site, but I value my time and sanity more. Waiting 10 seconds
for every click is just torture.

Ajaxy navigation has its place, I guess, but your website has to be fast for
that too. The worst offenders I have seen here must be all the advertising
managers, like Google AdWords and Facebook Ads. Each request takes a horrible
amount of time to load, which makes the interface completely unintuitive. Is
the browser loading a page or not? Is this popup from the first time I clicked
the button or from the second time? Where did this other popup come from, and
why now? It's like reading HTML sent through UDP.

~~~
soheilpro
I stopped using Indie Hackers because of its terrible UX.

------
MattyRad
This is definitely in the same vein as
[http://boringtechnology.club/](http://boringtechnology.club/) After several
years of drinking the client-side kool-aid, I've come to realize that it
almost never has a positive ROI. On top of the false premise that it "makes
pages faster", it doubles the cost of the entire pipeline/stack, it doubles
the amount of documentation that needs to be read/written, and it doubles the
statefulness of the app, amongst other things. Maybe under the best conditions
by the most knowledgeable devs, client side apps could be impressive, but your
typical company doesn't have those resources.

~~~
cottsak
There's such an "ego load" associated with this investment too. I find that
even when a developer comes to realise that the SPA mess is largely
unjustified, they're rarely willing to admit or change anything.

~~~
MattyRad
Agreed. It's very much a sunk cost/escalation of commitment fallacy.
[https://en.m.wikipedia.org/wiki/Escalation_of_commitment](https://en.m.wikipedia.org/wiki/Escalation_of_commitment)

Also bad is that junior engineers see shoddy client side code and internalize
it. As an example, one junior engineer asked me how he was supposed to iterate
over an array server side when I told him he didn't need to use Vue on a page.
Server-side rendering fixtures had been completely forgotten.

I accept my responsibility in causing this mess, though. I naively pushed the
SPA on myself, my coworkers, and our end users years ago, and have to live
with that sin daily.

~~~
cottsak
That must be humbling. Props for identifying it and adjusting course tho.

------
russellbeattie
HTML needs to be split into two: A spec focused on dynamic applications using
JS and components, and a spec specifically focused on documents, CSS and
hypertext.

The fact that in 2019, we still don't have standard browser features that let
us create rich text documents in a WYSIWYG interface is ridiculous.

We've ended up with a dozen editors all producing slightly different markup,
workarounds like Markdown and AMP, and browser engines that are so complicated
even a massive company like Microsoft finally gave up creating its own.

This isn't about technology at this point, it's about standardization. We need
to have a simple tag which marks the beginning and end of textual information,
with massive restrictions on what tags are used and the JS and CSS in that
section, so that a simple, standard and ubiquitous editor can both read and
write hypertext 100% the same while still including all the basic
functionality of a common word processor, like fonts, colors and sizing.

This isn't a technical challenge on the level of WebGL, web sockets, HTTP/2,
media extensions, etc. It's just a matter of specifying a subset of
functionality for a specific use-case, standardized for the sake of simplicity
and interoperability.

------
snek
I installed NoScript to test this, and yes, the beta rewrite, whose selling
point is React, _is faster when React isn't allowed to run_.

------
peteforde
The current generation of less-experienced developers tends to default to
building every project in React, even if there's no tangible benefit to
accepting this complexity. It's unpopular to express, but the truth is that
many junior devs don't know how to do it any other way. I don't blame them for
this, because they literally haven't been doing it long enough to have
mastered multiple techniques.

Managers go with it because it's still hip and easy to hire for. If things go
wrong, well, it was good enough for Facebook.

~~~
azimuth11
We can’t blame junior devs for using React and other frameworks to try to build
more responsive applications. That’s where the demand is.

I wouldn’t say it’s easier or harder to build in these SPA frameworks.
Attention to detail is something people recognize or learn over time.

~~~
peteforde
I actually blame the old guard for not putting in enough time and energy to
mentor juniors. It didn't have to end up like this.

As for whether an SPA is harder or easier, that's not really the relevant
dimension. You should not be making technology decisions for your company
based on what your new junior devs are comfortable with. They will actually
level up faster if they are forced to take what they've learned and apply it
to something they weren't working on in their nine-week intensive.

Meanwhile, there's nothing "easy" about React and co when you factor in the
layers of abstraction and bikeshedding involved in a typical full-stack
deployment today. Compared to when I learned, you suddenly have to also be
confident with bash, git, docker, AWS, postgres, webpack, and the whole
concept of a virtual DOM before you even start modelling your data or thinking
about state transformations. Now go compare that to the original Rails "blog
in 15" video and you'll have a hard time claiming that anything is easier. The
drop in developer ergonomics over this golden era of JS tooling is stunning in
its unnecessary masochism.

~~~
t0astbread
> bash, git, docker, AWS, postgres, webpack, and the whole concept of a
> virtual DOM

That's very far-fetched. A React app can just be the default template from CRA
+ some place to host the generated files (like Netlify if you want something
simple). You don't need to know any of the above.

~~~
peteforde
Sure, you can get an example of a React component working on a webpage. I'm
talking about the daily lived experience of a working junior developer trying
to build something real using typical tools for 2019.

------
Endy
It's highly amusing to find out about this, as someone who disables ECMAScript
on every browser I use. Why is it that certain groups of web authors seem to
be under the impression that more "tech" = better? Why can't they be satisfied
with what works in a simple and easily-accessed manner?

For that matter, why does anyone engage in this constant race to the bottom?
If more web authors and developers were to make a stand for decency and refuse
to implement these unnecessary hacks, the Web might be a better place.

~~~
jakub_g
It's a self-fueling phenomenon, and the reasons are plenty. My random thoughts:

- Everyone who thinks of themselves as doing "frontend" needs to do the hot
tech of the day, because that's what everyone talks about on the internets
(FOMO). If you leave the race, you can't fill your CV with buzzwords and won't
find a job in a few years (would you hire a Java developer who still writes
Java 4?).

- New people come in, and that's the reality they start with, so it feels
normal. They never wrote a clean piece of HTML+CSS by hand.

- When you're in a company where everyone has their mind set on a given tech
for the reasons above, it's impossible to suggest an alternative to the
currently fashionable one. You can't tell dozens of people to commit career
suicide. Anyway, it's typically a very small group (or a single dev) who
chooses the tech stack and architecture for new projects, and they typically
have very limited time to do so. Going with the flow is the "safe" option.

- Hiring: you don't want to use unpopular, unsexy tech, because you won't find
candidates; they will be afraid of getting stuck in a non-future-proof tech
stack (COBOL job, anyone?).

- Sometimes you just have to do something complex which requires a ton of JS
anyway (any highly interactive widgets/apps), and then having a single
well-understood framework is better than gluing together many things in a
random way.

- Finally, doing complex stuff in a huge team is tricky. To keep things from
collapsing, developer experience is generally favored over user experience.

------
nepeckman
I'm going to push back against the idea that proliferation of client
navigation is caused by inept developers. I think that component architecture
is a fundamental improvement for web development. Breaking the interface into
smaller, well defined chunks promotes code reuse and makes the relationship
between parts of the interface more clear. Additionally, almost any web
application is going to include some functionality that is best handled with
client side rendering. I posit that this puts developers in a position where
client side rendering eats more and more of their application, creating SPAs
where they are not needed.

------
visarga
You can also make a mess of server-side nav. Just take a look at the PyTorch
documentation: 2.5MB of shit loaded in 26s. At least 10 fonts, 300KB of CSS,
and 455KB of JS. What I hate most about it is that the in-page search function
doesn't work for many seconds after the page starts showing, and it's the only
way to navigate such a long-ass page. Even with such a huge page, most
functions have no example of usage; it was better with the Turbo Pascal 7 help
in the MS-DOS days.

[https://pytorch.org/docs/stable/nn.html](https://pytorch.org/docs/stable/nn.html)

Apparently I am not the only one who noticed this bad behaviour:

[https://github.com/pytorch/pytorch/issues/20984](https://github.com/pytorch/pytorch/issues/20984)

~~~
_nhynes
And the rustdoc for Iterator [0] (everything else is great, though).

[0] [https://doc.rust-lang.org/std/iter/trait.Iterator.html](https://doc.rust-lang.org/std/iter/trait.Iterator.html)

~~~
steveklabnik
As of the next release, it will be a lot better!

------
Jonnax
It's the "app-like" experience that everyone is chasing.

Loading a new page feels like browsing the internet.

Single Page apps feel like a native app on your phone.

At least that's what I think the reasoning behind the decision to go client
side rendering is.

~~~
TeMPOraL
Nah, it doesn't "feel like" a native app. It feels like an incredibly fragile,
app-like porcelain on top of a webpage. A native app, a webview packaged as an
app, and a website styled like an app are all easy to tell apart if you've
used all three categories in the past and pay at least a little attention to
detail.

But yes, I can buy this is part of the reasoning. I think most of it is "we
need to make SaaS; cool kids use React, so we'll use React too; now we have an
SPA, so the easiest path is client-side navigation".

------
thrwaway48295
What I can't get over is Chrome and Gmail. They're made by the same company,
yet the abstraction between these two groups is such that Gmail takes several
seconds to load and display a list of text fields on a mid-range PC. The HTML
version is instant.

~~~
moksly
It takes a little longer to load, but it also lets you continue working while
your train goes through a no-internet zone.

~~~
jancsika
textarea manages client-side state just fine in the HTML version.

~~~
moksly
Can I open and reply to 25 different emails, and have the replies sit in my
outbox until I reconnect? Because that’s a lot more useful to me than how fast
it loads.

~~~
noisem4ker
It sounds like a traditional email client would fit your needs perfectly.
Check out Thunderbird:
[https://www.thunderbird.net/](https://www.thunderbird.net/)

------
tmd83
The modern web is all about bloat and ever more complicated ways of doing
meaningless things, making a web page far more expensive and slower than
applications that did ten times as much ten years ago. The web has good things
going for it, the biggest being nearly universal access across devices and
platforms, but being bloat-free is not one of them. It might be fast, but I
feel like it's never going to be efficient in my lifetime. All the browser
advancements, if they ever make things faster, will do so at ever-increasing
use of resources.

~~~
zzo38computer
Yes, it is true. It is too complicated and messy, and also stupid. There are
other file formats (e.g. plain text) and protocols for other purposes anyways,
which is sometimes useful.

------
cozzyd
I remember once I had to access the benefits page (Workday) at my employer but
I was in a remote location (the middle of the Greenland ice sheet) with a
slow/high-latency connection and the required page just would not load
properly (presumably the client-side navigation wasn't robust to a slow
connection...). I had to VNC (slowly) to a computer in the US in order to fill
out some shitty HR form.

------
sword_smith
The biggest advantage of using modern SPAs is that they force the developer to
build the backend as an API with which you can interact programmatically.

~~~
hedora
The API between the SPA and the server is an internal implementation detail
that doesn’t need to support interoperability or backwards compatibility.

I see no reason to believe interacting with it directly is easier than
scraping an HTML page. (In fact, I'd expect it to be much harder and
more fragile than scraping HTML in practice.)

~~~
chc
In my experience scraping both, APIs have tended to be more stable than the
structure of HTML pages. I'd guess this is because there's an incentive to
keep your API stable (a breaking change for my scraper is also a breaking
change for the front-end), whereas there's basically no reason not to rewrite
giant chunks of the HTML to accommodate a design change.

------
gpvos
The only good reason I've seen to have client side navigation is when there's
a permanent sound player embedded in the page, so you can keep listening while
you browse around the site (possibly for other things to listen to).

Although really, for longer-term listening, playing the stream in a separate
media player is a better idea. But sites tend to hide their stream addresses,
and this is also usable.

------
SilasX
> Most web browsers also store the pages you previously visited so you can
> quickly go back to them.

Not anymore, they don’t. Most of the time I get a huge lag when I hit the back
button as all kinds of stuff reloads.

I’d prefer if the back button worked like going to a previous tab, where
nothing has to reload, but they don’t work like that.

~~~
lucb1e
> Not anymore, they don't.

Hell yes they do, it's the websites that make it impossible most of the time.
Try it on a standard website that doesn't use many megabytes of resources
(which it would evict for memory reasons). I notice it sometimes when using
such websites, it's amazing how fast it feels compared to even so much as a
304 Not Modified round-trip.

~~~
SilasX
No, they don't make it impossible, because browsers could just implement the
same behavior as if I had opened the page in a new tab, and indeed, many
people do (and are advised to do) this as a workaround.

------
hrktb
Very concise and well written article.

I remember when smartphones became popular, it was an occasion to reset the
trends and do basic pages without much JS, no endless hovering menus, no 5
column and 3 popup layouts, no flash intro, etc.

Then it started to creep back as power increased.

I wish the next big thing resets the field again, one can dream.

------
rocky1138
If you're into this and you still use Gmail, switch to HTML mode. I've done it
for all my accounts and it's so much better than the JS-laden garbage they've
switched to recently.

------
inian
Same thing happening on Github -
[https://www.youtube.com/watch?v=4zG0AZRZD6Q](https://www.youtube.com/watch?v=4zG0AZRZD6Q)
[https://jakearchibald.com/2016/fun-hacks-faster-content/](https://jakearchibald.com/2016/fun-hacks-faster-content/)

------
Animats
Why is management paying for all those overdesigned web pages?

~~~
grishka
Because they don't know a thing about underlying technologies.

------
hjitegi6ehi
Just as a side note: SPA-style design often accepts a slower initial load for
the benefit of faster successive loads. But note that Firefox's reload
overrides client-side caches by default.

Relying on warm caches for a good user experience is pretty bad, imho.

------
lolc
For many sites, transferring HTML works fine. I'm surprised that MDN would be
implemented with client-side rendering.

On the other hand, I've had the pleasure of working with both Meteor and Elm
the last few years. It was the first time I had fun implementing client-side
code. The resulting apps cannot be reproduced with HTML generated on the
server. Often page switches in those apps don't take a round-trip to the
server, because the data is already loaded and can be rendered right away.
Meteor also does incremental rendering by design, so the parts of the data you
already have client-side are displayed right away while the rest is being
loaded.

It is more work to get it right, because you have to reproduce functionality
the browser would do for you. But often you just don't want to do a full page
load, and fetching and patching the right HTML snippets to update parts of
the site gets messy real quick.

------
buro9
I browse using Brave, with JavaScript disabled by default and full "shields
up" on every site.

Anecdotally I would say that 10% of the web is unusable, another 20% is barely
usable... but to my utter surprise the other 70% is functional to some basic
level.

The one I am most impressed with is Amazon. I am not an Amazon fan at all, and
try to avoid shopping there or using their services, but when I do happen to
need something in a small quantity and fast, Amazon is very good. So when I
visited recently, it took me a while to comprehend that the site looked right,
felt right, hadn't degraded the experience to any shocking degree, and was
fully functional even without JavaScript, third-party cookies, etc.

There is still a lot to be said for the very simple approach of make it all in
HTML on the server first, and only use JavaScript to add small enrichments
that can only be done in the client.

------
syllable_studio
Page load speed is not the only reason why client-side rendering is important.
When traditional server-side rendered pages need to update state on the page,
they have two choices: use DOM manipulation, or reload the entire page from
the backend just to update one small piece of it. The more state that needs
updating, the more complex it becomes to manage rendering on both the server
and the client. Code redundancies creep in as HTML templates are duplicated on
both sides.
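That duplication can be sketched like this (a contrived example; the badge and the function names are made up):

```javascript
// Server side (imagine this inside an Express route handler): the badge is
// rendered as part of the full HTML page.
function renderBadgeOnServer(unreadCount) {
  return `<span class="badge">${unreadCount} unread</span>`;
}

// Client side: to update the badge without a full reload, the same template
// ends up re-implemented in browser JS. Any change to the markup now has to
// be made in two places, and the copies inevitably drift apart.
function renderBadgeOnClient(unreadCount) {
  return `<span class="badge">${unreadCount} unread</span>`;
}
```

Universal rendering avoids this by running a single template on both the server and the client.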

In my experience, as soon as a website has any app-like qualities at all, a
tool like react just makes sense. It's a great tool that solves a very real
problem. I always use universal rendering so that you get the best of both
worlds and the page still renders fine for the no-js folks out there
(respect).

------
zokier
Client-side rendering/navigation makes sense when you have a high ratio of
markup to content, or of fixed content to changing content. In the olden days,
people used frames for such fixed navigation, but I think we can agree that it
was not all that great of a solution.

------
CriticalCathed
> It’s not worth it to try and go behind their backs—premature optimizations
> like client-side navigation are hard to build, don’t work very well, will
> probably be obsolete in a couple years, and make life worse for a decent
> portion of your users.

I don't know what is wrong with me -- when I read this sentence all my brain
can see is "job security."

______________________________________________________

My take is that client-side navigation comes from the desire to cater to
smartphone users instead of desktop users. Every few days, it seems, a popular
website does a redesign centered around smartphones. They're increasingly
bloated and wasteful. Why, Twitter, why???

------
jbverschoor
It's funny, actually frustrating, that people create all these frameworks to
mimic "native" looks and behavior. But there are soooooooooo many elements
that are overlooked, and usually only tested on a few platforms. Don't you
just love sites that capture your scrolling? Or buttons that have different
hotspot areas or don't support canceling by dragging out of them?

------
smileysteve
We've also spent 10 years getting to the current state.

Backbone was notoriously difficult to manage state in. AngularJS was quick to
prototype in (though often slower to develop in than plain HTML) but had
massive memory leaks. Even modern Angular and React were new iterations that
didn't solve the state problem well until NgRx/Redux became mainstream.

------
barbarbar
I have just read this article again. The author has a very pleasant style of
writing: very good explanations and good reasoning in the conclusion. What
could also be mentioned is the side effect of having n different proprietary
"Routers" from different framework vendors, as well as "code-splitting" and so
on.

------
robbrown451
If you are thinking of things as "pages", this is of course true.

I think a good example of a site that makes good use of this sort of thing is
Wikipedia (the non-mobile version). I love that I can mouse over links and get
more information on something, without having to click to its actual page. It
pulls down only the information needed, and leaves everything else in place.
This makes browsing far more efficient, at least for me. Even on a fast
computer with a fast connection, going to a whole new page is jarring,
comparatively. Obviously if you need more information than the summary
provides, great, go to the new page. But if not, this works great.

Maybe what Wikipedia does isn't what the article is complaining about. But at
least I think the article should mention that "page" is not the only
meaningful unit of information on the web, or at least it doesn't have to be.

~~~
ryanbrunner
This feels more like the old "progressive enhancement" model than the full
blown SPA style that's popular today.

Most navigation on Wikipedia is plain old browsers loading HTML pages, but if
you happen to support JavaScript, it's used to augment the page to make it a
little easier to browse.

(I don't actually know for sure whether Wikipedia pages are primarily server
rendered, but what you describe is perfectly possible using progressive
enhancement).
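
As a rough illustration of that progressive-enhancement style (not Wikipedia's actual code; the `data-preview-url` attribute and endpoint here are made up), the links work on their own as plain navigation, and a script layers previews on top only if it happens to run:

```javascript
// buildPreview is a pure helper so the enhancement logic is easy to test.
// summary: { title, extract } from some hypothetical JSON summary endpoint.
function buildPreview(summary) {
  return summary.title + ": " + summary.extract;
}

// Only wire up DOM events when we're actually in a browser; without
// JavaScript the links still navigate normally.
if (typeof document !== "undefined") {
  document.querySelectorAll("a[data-preview-url]").forEach(function (link) {
    link.addEventListener("mouseover", async function () {
      const res = await fetch(link.dataset.previewUrl); // hypothetical endpoint
      const summary = await res.json();
      link.title = buildPreview(summary); // plain tooltip; no navigation needed
    });
  });
}
```

The point is that the enhanced behavior is strictly additive: if the script never loads, nothing breaks.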

~~~
progval
> I don't actually know for sure whether Wikipedia pages are primarily server
> rendered

Yes, Wikipedia works perfectly fine without JavaScript.

------
madsbuch
In Google Maps, Gmail, and other places (where data shared between pages is
generally loaded once) I am very content with client-side navigation.

As I see it, it really depends on whether it is an app or a site that is being
developed.

------
fnordsensei
I've been working on search UI. This search UI is substantially more complex
than the average one you might come across.

There are potentially multiple search bars (added on command), to which many
different variants of filters and search terms can be added. They provide an
aggregate search expression which is not reducible to boolean logic. The
search results update interactively when the search terms are altered.

It's a specialized tool, and meant to be used by people of a certain vocation
rather than the general public.

The reason that the search results update interactively is to provide a short
feedback loop on the effect of the search terms. Building a mental model ahead
of time for the interactions between the query and the hundreds of millions of
potential results is a tall order. It therefore seems sensible to provide
affordances which facilitate exploration and (in some sense) experimentation.
To this end, search results also expand inline to provide a way to inspect
results quickly, with the purpose of validating whether the search terms were
effective or need alteration.
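
One common pattern behind that kind of live-updating search (an assumption about the implementation, not necessarily this tool's actual code) is a staleness guard: every edit fires a request, but only the newest response is allowed to update the results.

```javascript
// Wrap an async search function so that responses arriving out of
// order can never overwrite results from a newer query.
function makeSearcher(searchFn, onResults) {
  let latest = 0;
  return async function run(query) {
    const id = ++latest;           // tag this request
    const results = await searchFn(query);
    if (id === latest) {           // only the newest request may win
      onResults(results);
    }                              // otherwise: stale response, dropped
  };
}
```

Without this guard, a slow response for an old query can clobber the results of the query the user is actually looking at.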

Now, where do I draw the line between what counts as a "new page" or the "same
page" with this? Do I reload the page on every alteration of the search terms?
Do I scrap automatic reloading between alterations of search terms and only do
so explicitly when the user clicks a button? Should it navigate to a new page
when interacting with search results rather than previewing them inline?

This is all rhetorical, of course. All of those changes would make for a much
poorer experience for this particular tool and the particular target user
group.

Today the web is used to build a lot of stuff where "page navigation" is a
poor model to work with as a basis. I can see there being a case for it where
a "page" really is a self-contained unit of information. But there are also a
lot of cases where declaring some particular state in a long user journey of
interconnected actions as a "different page" is going to be arbitrary.

------
yazboo
> A big one we’re seeing here is called progressive rendering: browsers
> download the top part of the page first, then show it on the screen while
> the rest of the page finishes downloading

I question your reasoning here -- I don't think this is how progressive
rendering works in browsers. The browser has to download the entire document
to construct the DOM, and CSS in the head tag all has to be downloaded and
parsed into the CSSOM before the browser can make smart decisions about what
to render.

For a simple page, I also doubt that progressive rendering could account for a
>1s difference in rendering time. I would suspect the disparity in loading
time is related to network IO.

~~~
csande17
> The browser has to download the entire document to construct the DOM

Actually, the browser can construct the DOM for the first part of the page
while it's waiting for the rest of it to download. Like, if the browser starts
downloading a page and it sees this:

    <html>
      <head>...</head>
        <body>
          <h1>My Cool Website</h1>
          <p>Hello there
    

...it can add the <h1> element to the DOM, since no matter what comes after
this in the HTML code, the <h1> will always be first on the page. (It might
actually be able to add the <p> tag, too. I'm not 100% on the details.)

A cool demo of this is
[https://harmless.herokuapp.com/main](https://harmless.herokuapp.com/main) , a
JavaScript-less chat app that works by holding the connection open
(essentially never "finishing the page load") and sending new chat messages as
they arrive.

> CSS in the head tag all has to be downloaded and parsed into the CSSOM
> before the browser can make smart decisions about what to render.

That's true, but in this case, the CSS is already present in the browser
cache, so it doesn't need to be re-downloaded.

~~~
progval
Do you know how to contact the author? There's an issue with the CSS: it sets
the background of the textarea to #fff, which is the same color as the text
when the browser/OS uses a dark theme.

------
saagarjha
Strongly agree with the premise, but

> it just isn’t possible for a highly dynamic language like JavaScript to run
> as fast as the C++ code in browsers

With modern tracing JITs, this isn’t always true ;)

~~~
kevingadd
As far as I know, no production browser uses tracing anymore. The only modern
tracing implementation I'm aware of is LuaJIT.

~~~
zucker42
Do you know why browsers moved away from tracing JIT?

~~~
saagarjha
Well, one reason is probably that they don’t perform well on programs that
don’t trace well.

------
nokya
Let's just rethink the discussion without the iOS/macOS crowd and we wouldn't
even have to talk about it...

------
darkhorn
Client side navigation is for single page applications like applications
written with ExtJS.

------
LoSboccacc
This is the first time I've seen mentions of React being difficult to
optimize, and I'd like to know more. Is there any in-depth, quality article
about it that anybody knows of, beyond the usual light blogs a search finds?

~~~
iddan
No there isn’t because it’s BS. React is super easy to optimise.
[https://reactjs.org/docs/optimizing-performance.html](https://reactjs.org/docs/optimizing-performance.html)

~~~
csande17
People _say_ React is easy to optimize, but many of the React apps I use in
practice have crummy 50-100 millisecond response times on basic operations
like "click button" or "press key in text field". And a significant portion of
the performance difference between beta MDN and old MDN was the ~half second
the React code took to run in the beta.

I'm pretty sure front-end developers aren't writing slow code on purpose. The
most reasonable explanation I can come up with is that React makes it easier
to write reliable UI code but harder to reason about performance -- features
like "by default, all of a component's descendants are recreated and diffed
with the DOM on any state change" point strongly in this direction.
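
A toy model of that default (not React's actual implementation): every state change re-invokes the whole subtree unless a component opts into skipping when its props are unchanged, which is roughly what `React.memo` / `shouldComponentUpdate` buy you.

```javascript
// Toy render model: a "component" is a function of props, and we
// count how many times the child actually re-renders.
let renders = 0;

function Child(props) {
  renders++;
  return "<li>" + props.label + "</li>";
}

// memo: skip re-rendering when props are shallow-equal to last time.
function memo(component) {
  let lastProps = null;
  let lastOutput = null;
  return function (props) {
    const same =
      lastProps !== null &&
      Object.keys(props).every((k) => props[k] === lastProps[k]);
    if (!same) {
      lastOutput = component(props);
      lastProps = props;
    }
    return lastOutput;
  };
}

const MemoChild = memo(Child);

// The parent re-renders on every state change and re-invokes its
// children by default.
function Parent(state) {
  return "<ul>" + MemoChild({ label: "static" }) + "</ul>" +
         "<p>count: " + state.count + "</p>";
}

Parent({ count: 0 });
Parent({ count: 1 }); // Child is NOT re-rendered: its props didn't change
```

The performance trap is that nothing forces you to add the memoization, so by default every state change pays for the whole subtree.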

------
nikanj
File under: "Read, then ignore". Crappy JS navigation (and scrolling!) is here
to stay, unfortunately :(

------
everydaypanos
He lost me when he described Canada using a dictionary-like wording but also
adding some pretty personal perspective. Describing an entire country as
Socialist seems super opinionated to me.

Client-side navigation is quite tricky to get right, since there is no
definition of right. Browser back buttons and scroll positions between back
and forth page loads are not standards-based things, and the only way to study
them is by just using the browser.

The one fashion that I have come to hate in client-side navigation is endless
scrolling on product pages. In my view it is completely pointless, and hitting
the back button NEVER returns you to the exact point you were at. On product
pages I think page-based pagination is the only way to go. Amazon does that. I
think feeds are the only use case where infinite scrolling makes sense.

P.S. I wonder if client-side ajax calls are GZIP compressed🧐

~~~
bscphil
> He lost me when he described Canada using a dictionary-like wording but also
> adding some pretty personal perspective. Describing an entire country as
> Socialist seems super opinionated to me.

I think this was pretty clearly supposed to be a joke. Author is probably
American - the joke is that healthcare and free (very slow) internet are
considered socialist by Americans.

~~~
saagarjha
> Author is probably American

He goes to college here in the US (but I think he might have lived in Canada
in the past?)

