In spite of an increase in Internet speed, webpage speeds have not improved (nngroup.com)
719 points by kaonwarb 54 days ago | 769 comments



It doesn't have to be this way. I am not sure when there was a new rule passed in software engineering that said that you shall never use server rendering again and that the client is the only device permitted to render any final views.

With server-side (or just static HTML if possible), there is so much potential to amaze your users with performance. I would argue you could even do something as big as Netflix with pure server-side if you were very careful and methodical about it. Just throwing your hands up and proclaiming "but it won't scale!" is how you wind up in a miasma of client rendering, distributed state, et al., which is ultimately 10x worse than the original scaling problem you were faced with.

There is a certain performance envelope you will never be able to enter if you have made the unfortunate decision to lean on client resources for storage or compute. Distributed anything is almost always a bad idea if you can avoid it, especially when you involve your users in that picture.


This type of anti-big-js comment does great on Hacker News and sounds good, but my personal experience has always been very different. Every large server-rendered app I've worked on ends up devolving to a mess of quickly thrown together js/jquery animations, validations, XHR requests, etc. that is a big pain to work on. You're often doing things like adding the same functionality twice, once on the server view and once for the manipulated resulting page in js. Every bit of interactivity/ reactivity that product wants to add to the page feels like a weird hack that doesn't quite belong there, polluting the simple, declarative model that your views started off as. None of your JS is unit tested, sometimes not even linted properly because it's mixed into the templates all over the place. The performance still isn't a given either, your rendering times can still get out of hand and you end up having to do things like caching partially rendered page fragments.

The more modern style of heavier client-side js apps lets you use software development best practices to structure, reuse, and test your code in ways that are more readable and intuitive. You're still of course free to mangle it into confusing spaghetti code, but the basic structure often just feels like a better fit for the domain if you have even a moderate amount of interactivity on the page(s). As the team and codebase grows the structure starts to pay off even more in the extensibility it gives you.

There can be more overhead as a trade-off, but for the majority of users these pages can still be quite usable even if they are burning more cycles on the users' CPUs, so the trade-offs are often deemed to be worth it. But over time the overhead is also lessening as e.g. the default behavior of bundlers is getting smarter and tooling is improving generally. You can even write your app as js components and then server-side render it if needed, so there's no need to go back to rails or php even if a blazing fast time to render the page is a priority.
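To make that last point concrete, here is a minimal sketch of rendering the same component code on the server, assuming React with react-dom/server and Express; the component, route and port are purely illustrative:

```tsx
// Minimal server-side rendering sketch. The same component can also be
// bundled for the browser and hydrated there if interactivity is needed.
import express from "express";
import React from "react";
import { renderToString } from "react-dom/server";

function Greeting({ name }: { name: string }) {
  return <p>Hello, {name}</p>;
}

const app = express();

app.get("/", (_req, res) => {
  // Render the component tree to plain HTML on the server.
  const html = renderToString(<Greeting name="world" />);
  res.send(`<!doctype html><html><body><div id="root">${html}</div></body></html>`);
});

app.listen(3000);
```

The browser gets meaningful HTML on the first response, and any client bundle only has to attach behaviour to markup that is already there.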


> The more modern style of heavier client-side js apps lets you use software development best practices to structure, reuse, and test your code in ways that are more readable and intuitive.

Sadly, this is probably where the core of the problem lies. "It makes code more readable and intuitive" is NOT the end goal. Making your job easier or more convenient is not the end goal. Making a good product for the user is! Software has got to be the only engineering discipline where people think it's acceptable to compromise the user experience for the sake of their convenience! I don't want to think too closely about data structures, I'll just use a list for everything: the users will eat the slowdown, because it makes my program easier to maintain. I want to program a server in a scripting language, it's easier for me: the users will eat the slowdown and the company budget will eat the inefficiency. And so on.


> Making your job easier or more convenient is not the end goal. Making a good product for the user is!

This is a very limited perspective. Let me give you an enterprise software perspective:

1) Software maintenance costs. Better maintainable software allows more services to be delivered with a smaller budget.

2) Software is never finished. Ability to respond to new or changing user requirements matters to users and is perceived as part of the quality of service.

3) Software is going to be maintained by someone else. If they cannot maintain it, then users have to go through another iteration done by another team (best case: fewer features, but at least the implemented features have more bugs).


> 1) Software maintenance costs. Better maintainable software allows more services to be delivered with a smaller budget.

Better is not the same as easier. This is such an equivocation fallacy since both of those terms are highly subjective. Maintenance costs are measured in numbers and not some imaginary developer ideal of job security.


Code that is easier to read is easier to maintain and easier to debug. Code that is easier to read and more intuitive will, more often than not, result in a better product and better experience for the users.


A: "Your restaurant's food tastes terrible."

B: "Nutritious, healthy food will ensure the longevity of my customers, thereby maximizing lifetime customer revenue."


I think a better analogy might be telling a restaurant to cut down their oversized menu so that they can actually make at least one good meal


That's an awful analogy


I just logged into nest for the first time on a new laptop. It took 15 seconds to load and get to the screen to change the temperature on the thermostat.

Then I refreshed the page and it still took 10 seconds to reload.

I'm sure their code is also relatively bug free.


It's actually a pretty good analogy, when you're searching for an analogy of something being technically accurate while missing literally the entire point.

Making your code "more intuitive" does not result in a better product. Making a better product does. The argument is that software development is one of the few jobs where the employees' experience seems to be equally or more important than the client experience. Sacrificing a restaurant goer's experience (taste) because it makes the restaurateur's experience (customer LTV) better is a decent analogy.


A: "I enjoy your restaurant's food but my chef friend said your kitchen is inefficient".

B: "As long as you enjoy it, and we can keep making it for you at the same quality you enjoy, we don't care how inefficient our kitchen is".


A: "Okay, but the new place next door serves identical food with a more efficient kitchen, at lower prices you can't match without making significant changes. If you don't improve efficiency somehow you'll start to lose customers."


I disagree with the premise. "Readability" is an excuse people use for writing slow code. It's not an inevitable tradeoff.

Like, most of these people are not saying, "we could do this thing which would speed up the app by an order of magnitude, but we won't because it will decrease readability." They have no idea why their code is slow. Many don't even realise it is slow.

My favourite talking point is to remind people that GTA V can simulate & render an entire game world 60 times per second, 144 times on the right monitor. Is that a more complex render than Twitter?

Computers are really fast, it doesn't take garbage code to exploit that.


IMHO it’s in part because of what I assume are different business models between a game like GTA and many / most businesses where a website / web app is core to their product.

Different business models result in different environments in which to conduct software engineering; different constraints and requirements.

IMHO constant and unpredictable change (which I assume happens less for games like GTA) is one of the big differences, as is the relationship between application performance and profit.

But I like what you’re saying and would love to see that world.


How long does it take to start up GTA V on your computer?


More than a website that preloads its structure into the cache and then transfers blocks of 280 characters, a name & a small avatar, rather than gigabytes worth of compressed textures.

Is the difference because GTA has more "readable" code?

I have other games that do load up quicker than Twitter, which I do think is damning, but it's not really the point I'm trying to get across here.


Well, the "less readable code"—ie, the goddamn mess that a lot of game code is, slapped together barely under deadline by staffs working 80 or more hours a week—is part of why AAA games like GTA have so many massive bugs requiring patches immediately after release.

But then, you brought up GTA and games, which aren't even apples and oranges with a website. Websites—even the Twitter website—don't require GPUs or dedicated memory, they don't have the advantage of pulling everything from the local hard drive, and yet they actually work as designed, not merely in a low-resolution, low-effects mode on computers more than a couple years old.

And while I wouldn't point out the Twitter home page as remotely fast for a web site, have you actually even looked at it recently? It shows a lot more than just a few tweets and avatars. It's got images, embedded video, etc.


This is a dumb argument. My point is that readable doesn't imply slow, and "readability" is not actually the reason slow things are slow, most of the time. I don't think you even disagree with me.

There's definitely another discussion to be had about why web tech is so disastrously slow given what computers are capable of, but it's not worth having here. We're never going to settle that one, and regardless if you are a web guy, you're stuck with JS.

> It's got images, embedded video, etc.

Bad excuse IMO. Lazy load them.
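As a rough sketch of what that lazy loading can look like (the data-src markup and margins here are illustrative; modern browsers also support a native loading="lazy" attribute on images and iframes):

```ts
// Defer offscreen images until they scroll near the viewport.
// Markup assumption: <img data-src="real.jpg"> placeholders.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src ?? ""; // swap in the real source
    obs.unobserve(img);              // load once, then stop watching
  }
}, { rootMargin: "200px" });         // start loading slightly before it's visible

document.querySelectorAll<HTMLImageElement>("img[data-src]").forEach((img) => {
  observer.observe(img);
});
```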


I don't have 8GB RAM to dedicate just to Twitter.


GTA V has graphics card requirements that, I suspect, may be quite large compared to Twitter. Although it is worth noting that GPU acceleration in the browser is increasingly a thing: https://support.mozilla.org/en-US/kb/upgrade-graphics-driver...

That being said, I don't think it addresses the problems with client side applications, although it may allow more complex ones.


Websites like Twitter are not render bottlenecked. Even a software rendered game from decades ago like doom is doing more on the screen than Twitter.


Then please explain why the increasing trend in developer convenience is correlated with the decline in user experience.


It’s a false correlation.

Because the universal rule is that 90% of everything is terrible, including software. Corollary rule is that work expands to fill all available time, and software expands to fill all available resources.

If you go back to before the ascent of age of front-end frameworks, you would find that there were still tons of sites that were slow and poorly performing, despite running entirely in server-side technologies.

Something that made Google incredibly appealing when it first came out was its instantly-loading front page with a single search box and a single button. This was in drastic, shocking contrast to the age when every other search engine portal had a ton of content on it, including news, stock tickers, and the kitchen sink.

In the end, unless the developers of the sites make performance a priority, it makes absolutely no difference what the tech stack is. The problem is that companies don’t prioritize it.


I wholeheartedly agree.

Maintainable code can quickly and easily be extended into new features for the customer.

Unmaintainable code usually results in a ton of support tickets and late nights hunting and fixing bugs that originated from deploying into production that day. This leads to heartache and frustration for the customer.

The customer comes first, yes. Good, maintainable code is a way to achieve this goal.


The article this thread is about clearly shows that your second statement is not true.


Page loading speed is not the only concern the users have.


Tell this to Reddit developers


One can make code more readable and intuitive and make a good product for the user.

I have done both: heavy server-side rendering using templates, and SPA rendering on the client side. It all comes down to your user base, the devices/browsers they are using, and whether they have an aversion toward running JS in the browser.

By using JS on the server side, you can maintain quite a bit of logic on both the server and client side. If you are doing web development, why not make it the same language? Yes, one should not trust client-side validation, but many people find JS validation to be more user friendly than a form submit.
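A sketch of what sharing that validation can look like when the server is also JS: one module imported by both sides, with the server keeping the authoritative check (the rule and names are illustrative):

```ts
// shared/validate.ts - imported by both the browser bundle and the Node server.
export function validateEmail(value: string): string | null {
  if (value.trim() === "") return "Email is required";
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value)) return "Email looks invalid";
  return null; // null means valid
}

// Client: instant, friendly feedback as the user types.
//   const error = validateEmail(input.value);
//
// Server (e.g. an Express handler): the check you actually trust.
//   const error = validateEmail(req.body.email);
//   if (error) return res.status(422).json({ error });
```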


I find this comment a bit odd. It seems clear to me that code that is easier to write, reuse, test and maintain saves time and money that can then be spent on building a better product. Do you disagree?


How does that help sell more pizzas per day?

Blunt example, however that is what matters to a customer that just commissioned a web site for pizza delivery.


Then again, my decision to order a pizza doesn't hinge on whether I have to wait an extra 5 seconds for the initial payload.

But it does hinge on how good the delivery website is though. If you haven't been to a pizza website in the past 10 years, let me point out they are complex with interactive drag-and-drop build-your-own-pizza wizards. Better client-side tech helps build those features to sell more pizzas.

How does your envisioned alternative help sell more pizzas than the heavy-client approach that pizza corporations have decided on?


> Then again, my decision to order a pizza doesn't hinge on whether I have to wait an extra 5 seconds for the initial payload.

Really? Because personally I find I tend to do more business with places that have interfaces that don't make me want to beat the developer with a wrench.


By making use of APIs that deliver ready made HTML snippets via WebSockets, with on device caching.
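A browser-side sketch of that approach: server-rendered fragments pushed over a WebSocket and kept in a small in-memory cache so repeat views are instant (the endpoint and message shape are assumptions):

```ts
const cache = new Map<string, string>();
const socket = new WebSocket("wss://example.com/fragments");

socket.addEventListener("message", (event) => {
  // The server sends ready-made HTML; the client only swaps it in.
  const { target, html } = JSON.parse(event.data) as { target: string; html: string };
  cache.set(target, html);
  const el = document.getElementById(target);
  if (el) el.innerHTML = html;
});

// On navigation, show whatever is cached immediately, then request a fresh copy.
function show(target: string): void {
  const el = document.getElementById(target);
  const cached = cache.get(target);
  if (el && cached) el.innerHTML = cached;
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify({ get: target }));
  }
}
```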


It’s even more important in that case because writing a pizza delivery website isn’t a complex or new problem, so it doesn’t need a complex solution.

But to answer your question, if you write clean code then when that company expands its operations it is easier for you (or whoever next gets commissioned) to later expand on that site to add more restaurants, thus allowing your customer to sell more pizzas.


That is definitely not a feature that requires JavaScript, if you now want to pick on my example.


> That is definitely not a feature that requires JavaScript

You’re right it doesn’t but I thought the context had drifted from that topic and onto code quality.

If we’re talking strictly about JS heavy sites then I’m definitely in favour of the less is more approach. There are times when it makes sense to have JavaScript trigger RESTful APIs rather than have the entire site rendered on the server side. The problem is JavaScript often gets overused these days.

I could write an essay on where I think modern tech has gone off the rails though.

> if you now want to pick on my example.

That’s a strange comment to make considering you presented the example for discussion. Of course people will then “pick on” the example, in fact you’d be the first to moan about a straw man argument if people cited a different example.


Which engineering discipline's primary concern is user experience?


One of the truly differentiating factors for good software engineers is being able to recognize when your habits are in harmony with the objectives of the project you're working on. And on the meta-level, developing the sense for how to keep them in harmony with the trajectory of a project which will likely prioritize different things at different points over its lifespan.

Propensity to change is one of the most common features I've found in software projects I've worked on in my career and most software engineering "best practices", as conceived by the authors of opinions about these things, are usually strategies for managing rapid change. i.e. structuring code so it's amenable to change, understandable to the maintainer who inherits your code, has guardrails around important invariants and guarantees via assertions and tests, etc.

The details of how (and to what degree) these things should be done are highly contextually sensitive, and that is where the dogma of "best practices" can start to interfere with creating a good user experience. But I find it a little eye-rolling when people talk about hygienic software development practices and user experience as though they are in opposition. Tests, legible code, flexible structure, etc. are enablers of good user experience, because they're what allow us to change products to fit the needs of our users. They're what allow us to ship things that people can use without them exploding.

The tendency toward asset bloat on the web and just the general use of cheap-in-development-costly-for-the-user solutions (scripting languages, inefficient data structures, verbose serialization formats, piles of dependencies) is definitely an industry problem, but I think it's naive to attribute these decisions to lazy devs or devs trying to make their jobs more convenient. In my experience there are two common causes for this state of things:

1. In all seriousness, the nature of capitalism. In reality, most businesses don't actually care about the majority of prospective users. They care about a couple narrow segments of users, and if those users happen to be equipped with hardware to handle this kind of inefficiency (i.e. if they're first world clients on desktop or high end mobile devices with 4G access), the business largely doesn't care. Responsiveness, low resources consumption, low energy impact, etc. are fungible engineering goals if they don't negatively affect your sales objectives.

2. The org hasn't figured out how to incentivize responsiveness, low resource utilization, etc. as objectives. Software developers get requests to focus their efforts on all manner of different criteria, and without designing incentivization schemes and feedback loops that orient toward these objectives, there is no particular reason why they'll be inclined towards them.


> I'll just use a list for everything: the users will eat the slowdown

"Premature optimization is the root of all evil." You often should use a list until you've identified a specific performance issue. The list isn't the problem. Not actually optimizing is.


Choosing the right data structures and algorithms is not premature optimization.


I agree if you know, e.g., that the list is going to contain thousands of items or is going to be iterated over in your inner loop. That's not premature optimization, that's just common sense. Hence the qualifier "premature".


> You often should use a list until you've identified a specific performance issue.

Identifying a JS performance issue is something that almost no company ever does unless revenue is obviously threatened (and even then many fail to act). So IMO it pays off to do a little bit of premature optimization in JS land.
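The cheap kind of up-front choice being argued about here is often just picking a structure with the right lookup cost. A small illustration (the data below is synthetic):

```ts
// Membership tests on an Array are O(n) per lookup; on a Set they are O(1).
const blockedIds: string[] = Array.from({ length: 50_000 }, (_, i) => `user-${i}`);
const items = Array.from({ length: 10_000 }, (_, i) => ({ id: `user-${i * 7}` }));

// Fine for a handful of entries, painful in a hot path:
const slow = items.filter((item) => !blockedIds.includes(item.id));

// Same result after a one-time conversion, far fewer comparisons:
const blocked = new Set(blockedIds);
const fast = items.filter((item) => !blocked.has(item.id));

console.log(slow.length === fast.length); // true - same output, very different cost
```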


> "It makes code more readable and intuitive" is NOT the end goal. Making your job easier or more convenient is not the end goal. Making a good product for the user is!

I think the end goal is more about balancing the value you can create with the budget you have. It's an optimization problem.

If I can deliver 80% of the value for 20% of the price, I will do that.


Until somebody comes around and sells the 100% product for a 50% markup and enough people value completeness and UX more than their money. See Apple's MacBook Pro and iPhone.

Another example: I absolutely hate Amazon Prime Video for its UX and bugs they didn't fix for years (like out-of-sync audio). So even though Prime is significantly cheaper in Germany, I'd rather pay for the more expensive Netflix instead.


Yes, but low end notebooks would still exist. Different customer segments.


I agree with you that it makes sense from a dev point of view but what about the users?

Imagine all restaurants and cooks in the city decide that from tomorrow they want to make their job easier, so they will drop ingredient, hygiene and processing quality by 80% so they can work less and make more profit. The ones that won't follow the new way will be put out of business because they will have more expenses, and the lazy ones can put some of the new profits into evangelizing the new ways, making them look cool.


People's decisions actually do hinge on those restaurant examples, though. Gross food? Hair in your meal? Horrible service? Restaurants can't get away with that due to stark business and reputation penalties. People will simply never come back.

Quite a different scenario than all websites taking an extra few seconds to load.

In fact there's very little a website can even do to turn off customers like a restaurant can. Imagine if HN took 10 seconds to load. Who cares? There's no "HN across the street" that I can go to that hinges on a 10 second wait time.


You’ve just described fast food!

Except of course it’s not the workers but the business decision-makers plus economic environment/pressures that make the decision.


No, they will serve different customer segments and the market will split into the various strategic positions it can support.

It’s just a different business model. Less value, less expensive, higher volume. More value, more expensive, lower volume.

In your example, one of the restaurants becomes McDonald's and another becomes a gourmet restaurant. They compete differently.


Right, so a fast native application would have a niche of users that care, while the apps made with Electron will have a larger user base because they will be cheaper for the developers to create, but users will pay with electricity and frustration.

My issue is that you can market cheap food like "our food is cheap but good enough, come here to save money", whereas with software it is "our software is slow, buggy, and eats your battery - use it because we are lazy and we want to use the latest coolest language to put on our CV".

Sure, when I do a proof-of-concept toy project I will be lazy and use whatever I like, and if I share it, it will be free. My problem is with big projects, say a news site that has millions of users, where your laziness (or using the latest cool stuff) affects such a giant number of people.


There’s no moral obligation to make incredibly efficient and streamlined software. Solving the problem and proving sufficient value is usually enough.

Sure, they might get displaced by a competitor in the future - but by then they’ve probably got a large user base and a warchest to compete with. Slack is a great example of this.

If it’s software that’s life critical, then maybe, but that’s a small minority.

Markets and buyer preferences are always changing - I think it’s better to be agile (ie high developer velocity with talented product managers) to be able to detect and capture these shifts.


Slack has a budget of multiple millions, RipCord has a budget of zero, and one programmer who works on it in his spare time. Which is faster?

Polishing RipCord so it looked indistinguishable from the Slack client wouldn't be that expensive. I would argue it's much more "they don't know how" than "they are making a business decision not to."


That should only remind you that people care about a lot more than just performance.


Why would I assume this particular failure is a choice?


I'd add that the OP was advocating for static HTML which is, in this day, unsellable to a lot of clients. I love producing static HTML server-side and in house a lot of our internal tools get written like that and operate fine.

Additionally I feel like there was an implication that in such a setup the client would be doing hard processing - i.e. aggregating a raw data set on the fly - I've seen this done and it's terrible in most circumstances and it certainly isn't the norm. The server can do the heavy lifting and then hand things off to the client to do all the minor display adjustments like localization, adjusting for timezones - display stuff.

A well written front-end can provide a far more responsive page by using above-the-fold-only rendering that the server is mostly ignorant of. Both sides of the product should be independent systems set up to treat the other as a foreign I/O pipe where data is just being requested and returned.


> The more modern style of heavier client-side js apps lets you use software development best practices to structure, reuse, and test your code in ways that are more readable and intuitive.

I have seen phrases like that being used to (over)sell something so many times that I feel suspicious and doubtful whenever I see them --- Enterprise Java™ is sold using similar verbiage, and yet I'd never want to work with it again. Some of the worst codebases I've worked with --- ridiculously indirect and abstracted --- were created and described with such terms.

I'll take spaghetti code over whatever dogmatically following "best" practices produces. The former "flows", while the latter "jumps".


> Every large server-rendered app I've worked on ends up devolving to a mess ... that is a big pain to work on.

I can honestly say the same about every large SPA style app I've worked on and I've worked on several. Rewrites are the norm in the JavaScript arena because code devolves into a complicated mess then people look at it and say "It's just a JS app–there's no reason it should be this complicated!" Then they rewrite it with the latest framework magic, rinse, and repeat.

EDIT: I have to go further and (respectfully) say this comment I'm replying to is a load of BS. It's responding to a big pile of data saying that pageload speeds are not faster today with 'well that's not my experience.'

The other aspect of this comment that sets me off is what I'll call the "you don't have to use JSX to use React" type rebuttal. With this type of rebuttal, you respond to complaints about the way 99% of people use JavaScript by claiming that it's not strictly necessary to do it that way, despite the fact that "that way" is how everyone does use JS as well as the way thought-leaders & framework authors suggest you use it. It's responding to real-world conditions in workplaces with a hypothetical-world where JS is used differently than it is today.

This type of argument always shifts the blame on to individual developers for "using JS wrong." When 90+% of people are "using the tool wrong" there is a problem with the tool and it's not reasonable to shift the blame to every user who's trying to follow the latest "best practices."

I wish we could acknowledge the facts of the JS ecosystem (like those presented in this article) rather than deflect with "not all JavaScript apps..." when mostly what you're talking about is demos & contrived speedtests, not real-world applications.

If 'ifs' and 'ands' were pots and pans, we'd have no need for a tinker.


We side-step large parts of this argument for our more complex business UIs by leveraging server-side technologies like Blazor.

We also recognize that by constraining some aspects of what we are willing to support on the UI that we get profound productivity gains.

Hypothetically, if we wanted some animations in our server-side blazor apps, we could add some new methods to our JS interop shim for explicitly requesting animation of specific element ids. We could also conditionally adjust the CSS stylesheets or classes on elements for animations on round trips of client-triggered events. Putting an @if(...){} around a CSS rule is a perfectly reasonable way to get the client to do what you want when you want. In this model the client has absolutely zero business state. When you go all-in with server-side it means you always have a perfect picture of client state from the server's perspective, so you can reliably engage in this kind of logic.

There are compromises that have to be made. The above is not a perfect solution for all use cases. But, it does demonstrate that with some clever engineering and new ways of thinking about problems that we can get really close to meeting in the middle of several ideal philosophies all at the same time.
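The browser half of such an interop shim can stay tiny. A hedged sketch in TypeScript (the function and property names are made up; the server side would trigger it through Blazor's JS interop, e.g. an IJSRuntime.InvokeVoidAsync call):

```ts
// The server decides *when* to animate; the client only knows *how*.
const appShim = {
  animate(elementId: string, animationClass: string): void {
    const el = document.getElementById(elementId);
    if (!el) return;
    el.classList.remove(animationClass); // allow the animation to be re-triggered
    void el.offsetWidth;                 // force a reflow so re-adding the class restarts it
    el.classList.add(animationClass);    // the actual keyframes live in a CSS stylesheet
  },
};

// Expose it globally so the interop call can find it.
(window as unknown as { appShim: typeof appShim }).appShim = appShim;
```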


> The more modern style of heavier client-side js apps lets you use software development best practices to structure, reuse, and test your code in ways that are more readable and intuitive.

This reads like vague marketing speak. In reality SPA/front end JS frameworks do the exact opposite - they violate all kinds of software best practices, like duplicating logic across server/client, creating brittle tests, conflating concerns, etc. SPAs/front end JS frameworks are an anti-pattern, imo.


SPAs treat the browser just like any other client. If anything, it's more design-/architecture-consistent.

- iOS speaks to your JSON server.

- Android speaks to your JSON server.

- CLI speaks to your JSON server.

- Desktop GUI speaks to your JSON server.

- Other machines speak to your JSON server.

Meanwhile...

- Browsers use browser-specific html endpoints to utilize a historical quirk where they render UI markup sent over the wire that the server has to generate, and now you're dealing with UI concerns on both the server and webclient instead of just dealing with biz-logic and data and the server.

I find it very hard to see how this is somehow what avoids duplicating logic on server/client and conflating concerns.
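A toy sketch of that shape, assuming Express; the endpoint and payload are illustrative:

```ts
import express from "express";

const app = express();

// iOS, Android, CLI, desktop and browser clients all hit the same endpoint.
app.get("/api/posts", (_req, res) => {
  res.json([{ id: 1, body: "hello" }]);
});

app.listen(3000);

// In the browser, the SPA consumes it like any other client would:
//   const posts = await fetch("/api/posts").then((r) => r.json());
```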


I think you’re confusing the view layer with adding an entire SPA/JS framework where you duplicate the data model and offload business logic to the client vs a standard req/resp and rendering html/json or even JS from the server. It’s much much cleaner to do the latter. This is speaking from hard earned experience building web apps over the last ten years.


On the contrary, a traditional server-side/jQuery hybrid is what results in duplicated templating and logic. A SPA at least has everything in one place.


Sounds like you mostly had experience with old style server side development (like php with no frameworks maybe?)

Have you tried building an app with a more modern framework, like Phoenix or Blazor? You won't even need to write a single line of javascript. All business logic is on the server (no duplication of models) and can be easily tested.

In my experience, SPAs tend to get overly complex, and they mix (and duplicate) business logic with UI logic.


> This type of anti-big-js comment does great on Hacker News and sounds good, but my personal experience has always been very different.

And yet, here we are, with a complete article devoted to the opposite effect.


I've worked on (non-SPA) webapps that are old enough to vote and I just have to disagree with this assessment.

Beyond that, most websites are not webapps, most are serving up static or near-static content so most websites shouldn't be designed and engineered like they are complex webapps that are doing heavy data processing.


This might be the root cause of the "goopy" feeling I get working on web apps. I couldn't get away from it so I just gave up on web apps entirely.

So much of web tooling seems to aim to perpetuate the conceptual model of the web instead of daring to improve on it. Declarative views, for instance, are a breath of fresh air: instead of trying to put a JS/Python/Ruby coat of paint on the same ideas of what a web app should be, they aimed at trying to reduce a view to its essential complexity.

In a sense, being too in love with the web keeps you at a local maximum because you think web programming should be HTML/CSS/JS only.


IMHO the solution is a smart combination of server- and client-side rendering. Client-side rendering in my experience tends to increase the number of requests, especially if you are using a microfrontend approach. More requests mean more latency, unless all requests are running in parallel, which is not always feasible. Note that, with regard to latency, the speed of the web did not change much and won't change much anytime soon (at least not on the desktop). With Vue and some other frameworks, the complexity of this approach has become acceptable in the meantime.


> Every bit of interactivity/ reactivity that product wants to add to the page feels like a weird hack that doesn't quite belong there, polluting the simple, declarative model that your views started off as.

This doesn’t need to be the case though. There have always been server-side frameworks which generate the client-side parts for you automatically, so you can mostly avoid writing any javascript even for the interactive parts. Check out rails/turbolinks/stimulusjs (used to build basecamp and hey.com) or the TALL stack (increasingly popular in the php community) for modern day examples.


It is unfair to compare the careless jQuery spaghetti that was pretty common a while ago with a client side framework such as React regarding maintainability. If you put some effort (which I'm convinced is a lot less than the effort we put into building SPAs) into doing the "sprinkles" right, it can be as maintainable as the SPA or even more, with the added benefit of not sending so much crap to the end user. There are modern approaches to this (Turbolinks, Stimulus, Unpoly, Intercooler, etc.), no need to do manual jQuery manipulation anymore. Pair it with Node, and you still have a pure JavaScript solution without wasting the user's CPU, memory and patience.
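For a flavour of what those "sprinkles" look like with a modern helper like Stimulus (controller and attribute names here are illustrative): the server renders all of the HTML, and the controller only adds behaviour.

```ts
import { Controller } from "@hotwired/stimulus";

// <div data-controller="toggle">
//   <button data-action="toggle#flip">Show details</button>
//   <div data-toggle-target="panel" hidden>...server-rendered details...</div>
// </div>
export default class ToggleController extends Controller {
  static targets = ["panel"];

  declare readonly panelTarget: HTMLElement;

  flip(): void {
    this.panelTarget.hidden = !this.panelTarget.hidden;
  }
}
```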


They don't understand the original tradeoffs that led to client-side only applications.

Let them experience the pain of rebuilding that old wheel :)


The original tradeoff was mega-cap FAANG companies trying to offload processing power to the client. There never was an organic open source push for SPAs or front end JS frameworks. They add a ton of tech debt and degrade the UX. Premature optimization and anti-pattern for everyone but a handful of companies, imo.


The old world was having a complex web stack that included strange templating languages hacked onto languages that were sometimes invented before HTML was even a thing (see: Python) that spat out a mix of HTML and JavaScript.

Then there was the fact that state lived on both the client and the server and could (would...) easily get out of sync leading to a crappy user experience, or even lost data.

Oh and web apps of the era were slow. Like, dog slow. However bloated and crappy the reddit app is, the old Slashdot site was slower, even on broadband.

> They add a ton of tech debt and degrade the UX.

They remove a huge portion of the tech stack, no longer do you have a backend managing data, a back end generating HTML+JS, and a front end that is JS.

Does no one remember that JQuery was used IN ADDITION TO server side rendering?

And for what its worth, modern frameworks like React are not that large. A fully featured complex SPA with fancy effects, animations, and live DB connections with real time state updates can weigh in at under a megabyte.

Time to first paint is another concern, but that is a much more complicated issue.

If people want to complain about anything I'd say complain about ads. The SINGLE 500KB bundle being streamed down from the main page isn't taking 5 seconds to load. (And good sites will split the bundle up into parts and prioritize delivering the code that is needed for initial first use, so the real cost is however long 100KB takes to transfer nowadays.)
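A sketch of that splitting with a dynamic import (the module path is an assumption); bundlers such as webpack, Rollup and esbuild emit a separate chunk for it automatically:

```ts
const exportButton = document.getElementById("export-csv");

exportButton?.addEventListener("click", async () => {
  // This chunk is only fetched the first time someone actually exports,
  // so it never weighs down the initial page load.
  const { exportToCsv } = await import("./features/export-csv");
  exportToCsv();
});
```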


> Oh and web apps of the era were slow. Like, dog slow. However bloated and crappy the reddit app is, the old Slashdot site was slower, even on broadband.

Just those that attempted to realize every minuscule client side UI change by performing full page server side rendering. Which admittedly were quite a few, but by far not all of them.

The better ones were those that struck a good balance between doing stuff on the server and on the client, and those were blazingly fast. This very site, HN, would probably qualify as one of those, albeit a functionally simple example.

SPAs are just a capitulation in the face of the task to strike this balance. That doesn't mean that it is necessarily the wrong path - if the ideal balance for a particular use case would be very client side heavy (think a web image editor application) then the availability of robust SPA frameworks is a godsend.

However, that does not mean it would be a good idea to apply the SPA approach to other cases in which the ideal balance would be to do much more on the server side - which in my opinion applies to most of the "classic" types of websites that we are used to since the early days, like bulletin boards, for example.


> Oh and web apps of the era were slow. Like, dog slow. However bloated and crappy the reddit app is, the old Slashdot site was slower, even on broadband.

Which reddit app are you talking about, the redesign or old.reddit.com? I ask because the old version of reddit itself certainly wasn't slow on the user side, iirc reddit moved to the new SPA because their code on the server side was nigh unmaintainable and slow because of bad practices of the time.

> Time to first paint is another concern, but that is a much more complicated issue.

That's the thing though, with static sites where JQuery is used only on updates to your data, the initial rendering is fast. Browsers are really good at rendering static content, whereas predicting what JS is going to do is really hard..


The new reddit site on desktop is actually really nice. Once I understood that it is optimized around content consumption I realized how it is an improvement for certain use cases. Previously opening comments opened either a new tab, or navigated away from the current page, which meant when hitting back the user lost their place in the flow of the front page. The new UI fixes that.

Mobile sucks, I use RIF instead, or old.reddit.com if I am roaming internationally and want to read some text only subreddits.

> That's the thing though, with static sites where JQuery is used only on updates to your data, the initial rendering is fast. Browsers are really good at rendering static content, whereas predicting what JS is going to do is really hard..

Depends how static the content is. For a blog post? Sure, the content should be delivered statically and the comments loaded dynamically. Let's ignore how many implementations of that are absolutely horrible (disqus) and presume someone at least tries to do it correctly.

But we're all forgetting how slow server side rendering was. 10 years ago, before SPAs, sites took forever to load not because of slow connections (I had a 20mbps connection back in 1999, by 2010 I was up to maybe 40, not much has changed in the last 10 years) but because server side was slow.

If anything more content (ads, trackers..) is being delivered now in the same amount of time.


New reddit makes it easier to push ads; any other motivation for its implementation is an afterthought. There's plenty of valid criticism that can be levied against the claim that the redesign is "superior" by default. And I think often we confuse amount of information with quality of information exchange. Due (mostly) to the ever increasing amounts of new users that it desires, you could easily make the point that the quality of content on reddit has nosedived. Optimizing for time on site is not the same thing as optimizing for time well spent.

Reddit as a company obviously wants more users; a design that lets people scroll on through images ad nauseam is certainly better than a design that is more information dense, so if that's something you'd cite as an example of "better in certain use cases" then I agree, otherwise there's plenty of reasons to use old.reddit.com from an end user's perspective.

The concept of Eternal September applies.


Even if everything you said was true (it's definitely not!) that doesn't explain why the web is bogged down with entirely static content being delivered with beefy JavaScript frameworks.


10 years ago it was static content being delivered by ASPX, JSP and PHP, with a bunch of hacked together JS being delivered to the client for attempts at a rich user experience.

It still sucked. It just sucked differently. I'll admit it was better for the user's battery life, but even the article shows that it was not any faster.


The original trade-off was having a non-sucky web email client.

GMail, the first real "client-side" app, was so far beyond Hotmail / Yahoo Mail in usability that it's hard to even fathom ever going back.


I don't know where this misconception came from - XMLHttpRequest was invented by Microsoft for use in Outlook Web Access, Gmail was essentially a copy of that.


True! I remember using OWA in my university around 2003, before the first invites for Gmail went out.

You could switch to the horrendous non-Ajax interface.

I never realized that before, always thought Gmail was the first Ajax webapp.


Ah, maybe. I didn't use OWA back then... And even now it's pretty shit (compared to GMail), so ...


The first web versions of Outlook were plenty fast and usable on my then workstation (PIII 667 MHz w/ 256 meg). In fact, a lot of the web applications made 15 years ago were fast enough for comfortable use on Pentium 4 and G4 CPUs, because most used combinations of server-side rendering and vanilla JS. It was painful to develop, sure, but the tradeoff in place now is severely detrimental to end users.


What were the tradeoffs? Genuinely curious.


> I am not sure when there was a new rule passed in software engineering that said that you shall never use server rendering again and that the client is the only device permitted to render any final views.

Maybe it's coming from the schools.

I worked with a pair of fresh-outta-U devs who argued vehemently that all computation and bandwidth should be offloaded onto the client whenever possible, because it's the only way to scale.

When I asked about people on mobile with older devices, they preached that anyone who isn't on the latest model, or the one just preceding it, isn't worth targeting.

The ferocity of their views on this was concerning. They acted like I was trying to get people to telnet into our product, and simply couldn't wrap their brains around the idea of performance.

I left, and the company went out of business a couple of months later. Good.


This narrative has been going hard for the last 6~7 years. For me, it's difficult to pinpoint all the causes of this.

I feel many experienced developers can agree that what you describe ultimately amounts to the death of high quality software engineering, and that we need to seriously start looking at this like a plague that will consume our craft and burn down our most amazing accomplishments.

I think the solution to this problem is two-fold. First, we try to identify who the specific actors are who are pushing this narrative and try to convince them to stop ruining undergrads. Second, we try to develop learning materials or otherwise put a nice shiny coat of paint onto the old idea of servers doing all the hard work.

If accessibility and career potential were built up around the javascript/everything-on-the-client ecosystem, we could probably paint a similar target around everything-on-the-server as well. I think I could make a better argument for it, at least.


I think it's not as much narrative as the explosion of the tech field in general.

When I started out tinkering with tech in general in the early 2000s, people drawn to the internet were still mostly a smaller core of passionate forerunners, many of whom subscribed to artistic values like simplicity, beauty and the idea of zen. This began to change with web 2.0 circa 2008.

Both my Mac and my PC from 2005 are way snappier than today's OSX or Win10. Not as fast, but way less latency.

Today tech education is aimed at highly paid and fancy careers for lots of kids with little passion for engineering or designing who never learned to use a desktop because they just had an iPad - they hardly know that you can copy and paste and almost don't know what a website is outside of walled gardens, i kid you not.

This year has had the most students ever start in tech-related fields, and I know from teaching briefly at university that about 95% have not gone into the field because they love to tinker, but because it's highly paid or a "cool career".

I know there are still the oldschool "designers and experimenters" out there but it's all about signal to noise. Of course size, complexity, high level modularized abstraction and dependency hell is also an issue, but this probably won't get resolved as 99% of tech people today don't care and don't remember using a computer that didn't have 500ms of latency when closing a window.


I was with you until

> Both my Mac and my PC from 2005 are way snappier than today's OSX or Win10. Not as fast, but way less latency.

The single most dramatic performance increase I have ever experienced on a computer was with the advent of the SSD.

Load times < 2009 when SSDs became mainstream were atrocious.


SSDs improve programs' loading times; they do nothing for the latency of those programs' UIs.

When Win95 arrived and brought the desktop as we know it to the masses, it brought with it some latency that has not been reduced since. Subjectively it has increased, though it might just be my patience that has grown shorter, so YMMV.

Yes SSD did bring back some of the lost time, and some more, but the programs don't feel as responsive as in "the old times."


I'm not sure what you mean by mainstream, but SSDs were still pretty niche in 2009 and traditional hard drives have remained to be pretty popular until recently due to the lower cost.


My first SSD was in 2008, was 64gb SLC for $1000. Still going strong. But yeah, was pretty niche.


I bought my first 128gb SSD for like 300 I think in 2008.


It's cargo-culting. Everyone just follows the herd. Once something gets the scarlet letter of being "old" compared to something "new" (both of which are purely perception), it's really hard to ever get back to using the "old" thing (even if its performance is better or it has some other tangible benefit).


I remember back around 1999 reading about a guy who had hand-coded his own http server. It was extremely barebones, no server side processing, just fed out html pages. And it was fast, faster than anything you could pay money for. And secure; since its functionality was so limited compared to all the other web servers, the attack surface was super small. Fast, safe, reliable.

And of course it was used to power porn sites.


I've noticed that this incessant trendchasing is largely confined to web development, although it has spread a lot from there (Electron...)

To be blunt, I do not care how "new" something is. Newer is not always better, and change is not always good. Churn is not progress. Perhaps those should be the core values that developers need to be "indoctrinated" with, for lack of a better term.

Then again, I'm also probably much older than the average web developer by at least a decade and a half, and saw lots of silly fads come and go.


> the death of high quality software engineering

I think this is the crux of it. It isn't client side or server side rendering, it's simply bad code. Both approaches can be bad if they're poorly engineered. With modern trends we see front end based rendering more prominently, and with that we see a lot of truly terrible implementations. A lot of this is the ease at which you can include a library that does some task for you, but often at the expense of bloating the payload. Good software engineering is hard. It requires effort. It requires more time. It's the antithesis of agile development; moving slowly but being robust. Most companies or side projects aren't willing to choose to build features more robustly if it means cutting the dev speed in half.


> This narrative has been going hard for the last 6~7 years. For me, it's difficult to pinpoint all the causes of this.

I always thought it came with the serverless meme. All those services used to do that cost money, so the decision was made in a lot of places to put the work on the client. New people to the industry maybe haven't connected the dots and think it's for scale when it's really for costs.


I think that’s certainly influenced this trend. Make the client side do all the hard work and server costs go down. I’m not sure how much that cost is, but I’m guessing it could be substantial.


Here's the thing.

An advanced website (or web app) is likely going to have a lot of JS on it already. Should most blogs? Probably not, unless they want interactive demonstrations of concepts, or a commenting system more advanced than what HN has. But beyond that, JS is everywhere.

So at some point, there is JS running on the browser, code that has to be architected and maintained.

So then someone proposes adding another language on the server into the mix, one that will generate HTML and deliver the JS.

The dev experience is, likely, not as good. It is more complicated in a myriad of ways, debugging is harder, and the tech stack is more complicated and more fragile.

And the thing is, good SPAs are really good. But the opposite strategy, having every request round-trip the server, is going to suck in certain cases no matter what the developer does. Something as simple as the site being hosted far away from the user, or there being an above average amount of network latency, is enough to slow down every interaction.

Now all this said, when I'm overseas and stuck on 256kbit roaming, HN is about the only site I can use.


It probably wasn’t the first, but the first notable SPA I can think of is Gmail. Do you think that architecture was just decided on by some noobs right out of school? Since it was novel at the time, probably not. That leads me to believe (along with my own anecdotes and observations) that there are real benefits to SPAs. Are there trade offs? Of course. But there are benefits too.

The idea of an SPA is only strange in the web dev community. An SPA is equivalent to a native app client - they are long running, stateful processes that communicate with a backend. That architecture has worked for decades, and there are things that you can do as a result of having that stateful process that you simply can’t do with server rendering, like caching response data to share with totally unrelated view components later on in the application. You also get to use a real programming language to design your front end instead of living within a template language, which no one acknowledges as the biggest hack of all time. Template languages exist precisely because HTML is not a sufficient UI tool in all cases.


Gmail is probably the example that I hold up as an SPA gone awry — on a new machine & T1 broadband you can still get _very_ slow loading times to open the site at all.

& what it's serving you in the end doesn't feel like it's heavy lifting in this day & age — a subject line & preview of your first N emails — I think the site could be much more responsive if they had a JS enhanced page (e.g. using it for their predictive text) rather than an SPA.


GMail has gone downhill massively since it first started. I can read the entire contents of a mail and go back to the inbox, and it still shows the email as unread. Closing the Gmail tab usually comes up with a prompt saying it's busy and asking if I am sure I want to close the tab. I would consider it an example of how not to write an SPA these days.


If we're comparing things to how they were just a few years ago, Maps has been far worse to me than Gmail lately IMO.


I made a comment elsewhere about page load time not being the only dimension of performance. Page load time’s importance gets amortized across the length of the whole session. You should always try and minimize it, but in an application like Gmail you’ll be using it for minutes at a time. Each subsequent interaction is now quicker because there doesn’t need to be a page refresh.

Page load time is one dimension. We get really hung up on it.


I agree with your reasoning but not how it applies to Gmail :)

I'm always hazy on the term amortized, but my understanding of it was the opposite — in that, you should try to chunk up as much as possible. If you are using Gmail for a longer period, having to wait for functionality like advanced search or predictive typing is less annoying as the extra time you need to wait is a smaller portion of your session overall. Plus, these things may have lazy loaded by the time you come to use them.

On the other hand, if you just wanted to reference an email quickly — e.g. I often need to dip into email quickly, for example, if I'm travelling and have booking confirmations / QR Codes / PNRs etc. in starred messages in the inbox — you'll notice any extra load time so much more.

So to me, it feels preferable to have as small an initial load time as possible, then extra functionality progressively loaded in the background.

I do agree for something like Netflix, for example, that a 'big load up front' is probably preferable. I'm 'in for the long haul' as I'm going to be watching something that's at least 30 minutes up to a feature length film, so a few seconds extra load time is negligible.


When I said “amortized” in this case, I just mean that the cost of the initial page load is spread out throughout the length of the session. Like you said, if you just want to quickly open an email then close the app, the session time will be short and the page load cost will be a large percentage of the session time.

I’m having trouble comprehending some of your sentences (there’s some running on going on). So I don’t understand what you mean by “waiting for advanced search” - is that better or worse in the multi page world? I think it’s worse. With an SPA, all of the UI navigation can happen on the client, which is as quick as possible (no server round trip). The actual searching in an SPA would be via an API and you get a nice spinner in the meantime, again with no page load. All smooth UI transitions which indicate to the user what exactly is happening. No blank page while a new page is computed without knowing what exactly is going on.


sorry, I was writing while eating lunch earlier, so it no doubt could have been phrased better :)

What I was trying to get at was that there are lots of "optional extras" that seem to get loaded up front at the moment that would be better if they were progressively loaded.

I think you see that a lot with SPAs in a way that you didn't when JS was used to 'progressively enhance' sites rather than for all interactions.

I don't think it's a problem of SPAs by definition — you could engineer an SPA to progressively load what's needed, and make the first load very slimline & then only load additional features while idle or on request, which is why I think GMail is a bad example. For example, I've just tried logging into Gmail in a fresh Firefox window (no cache) that I set to throttle to a "regular 3G" connection speed in the browser. It took literally 30s to load (with one email in the inbox).


Gotcha. React Suspense is trying to be the best of both worlds. It allows you to load code for components on the fly, when they are needed, and not all up front.
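A minimal sketch of that, using React.lazy with Suspense (the Settings module is illustrative and assumed to have a default export):

```tsx
import React, { Suspense, lazy } from "react";

// The code for <Settings /> lives in its own chunk and is only fetched
// the first time the component actually renders.
const Settings = lazy(() => import("./Settings"));

export function App({ showSettings }: { showSettings: boolean }) {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      {showSettings ? <Settings /> : <p>Inbox</p>}
    </Suspense>
  );
}
```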


Gmail has a "basic view" that works without JavaScript, just good old fashioned HTML forms and hyperlinks and server-side state.

Though it's more or less static feature wise, if it weren't for the updated logo you'd never be able to tell it was written within the last decade.


> there are real benefits to SPAs.

Yes, but a lot of the benefits of SPAs and the libraries needed to work on them accrue to the developers, not the users.


It's funny to think of developers and users as two distinct constituencies with distinct interests. I had thought any good developer, by definition, would do what's best for the user. Therefore the interests of both groups are the same.

But I suppose you're right that the average developer is their own damn person with their own interests which can only ever partly align with their customer's.

I suppose there are actually three parties, all distinctly interested: the programmer, the employer, and the end user. The programmer wants to get paid and do "good work". The employer is actually a multipart entity: management and investors, each of which have distinct interests. The end user is not a single such person but a mass of individuals, each with a different set of interests.

This is getting messy!


The individual end user can also be multiple distinct groups, some simultaneously:

Parents / caregivers / guardians and their charges, who can be further split in to children, teens etc etc.


Some benefits are for developers. And some are for the users. For example, many users, particularly ones who care about the look and feel of things, prefer when there is a smooth transition between screens in response to their actions.

You simply can’t do that with server rendering (barring hacks like Turbolinks, which are just approximating SPAs).


I agree, and I'm more critical of heavy front end Javascript frameworks that are part and parcel of SPAs. Things like the Turbolinks hack you mention, IMO, provide the best user experience, especially fast initial page load. But that can be messy to develop, so we end up with Angular or React which smooth the development issues, but make pages fat and slow for users.


Sure, SPA is the correct technology for certain applications, but not for every damned website.


So why are we conflating web apps vs web sites exactly?


Most people who aren’t involved directly with web development just call everything a ‘web page’, or ‘the web’.


Because that’s what everyone else does when saying that the performance characteristics of the two should be evaluated in the same way, e.g. page load time.


> Gmail. Do you think that architecture was just decided on by some noobs right out of school? Since it was novel at the time

This is Google we’re talking about.

Does fit the caricature.


SPAs are a good fit for webmail. They are a bad fit for many other projects.


And 99% of the time all of that is pure YAGNI.


Was Gmail always an SPA? My recollection is that it was a regular web site when it launched, and went SPA when everyone else did.


Gmail has been a SPA since launch. There were some pre-launch versions that were not, and it has always had a plain HTML fallback mode.

I believe there may have been some major changes to the way it did the client-side rendering a few years after it launched. The original version was more ad hoc, since it was pretty much the first time anyone (at Google, at least) had built a real SPA. Later they built tools and libraries, and developed a more systematic way of doing things. (I was at Google during that time period, but never worked on Gmail.)


While Outlook Web Access (2000) was earlier, Gmail (2004) and Google Maps (2005) were definitely instrumental in establishing AJAX and the SPA pattern (though the term SPA came much later I think).


As a current CompSci student who's done an (introductory) web dev class, may I say I have not been taught this way _at all_. In fact, this notion of offloading stuff to the client never came up in the class. That's just my experience.

I think these ferocious views must be coming from the individual - but I do realise not all courses are the same and this student may have actually been (wrongly) taught this way.


Compsci education is more focused on theory. Your prof is more likely to come from old-school corporate dev or 90s startups and probably has jaded views towards much of this new stuff like a lot of the passionate "designers and tinkerers" do nowadays

I'd bet this doctrine is probably being promulgated by the non-academics at trade schools and boot camps


In my class we learnt server and client side JS. I feel my prof's approach to webdev was quite modern. It was a very practical class and none of the theory we learnt was actually tested.


It is most likely a result of an echo chamber of students with no or little industry experience. It doesn't help that the younger people generally own client devices with more up-to-date specs.


Also, young people like to cling to new industry trends. It makes sense for them - if they tried to become an expert in something that is decades old, they would have to compete with people with decades of experience, while new tech creates a level playing field for both the graduates and the veterans. This might also be one of the reasons why we see such a diarrhea of technologies in our industry - coding is mostly done by young people, many of them not more than a couple of years into their careers. Pushing for new tech pays off for them.


Some, not all but some, new grads hear something from a prof or maybe departmental philosophy, and they will defend it to the death until they see real life for a while.


Unfortunately, the latest models are where you find the users who habitually spend money on new stuff and are likely to pay you for anything.

If you're optimizing for money, rather than good use of system resources, it makes perfect sense.

The underlying platform purveyors unfortunately have the same view, which means that anything older than the model before the current one is not supported any more. It probably has an outdated version of the OS, and the current OS won't fit. The APIs are changing, so supporting the old device requires maintaining a separate, backported stream of the code. Someone has to test it on the old device and OS. And for what? Someone who won't pay.


> When I asked about people on mobile with older devices, they preached that anyone who isn't on the latest model, or the one just preceding it, isn't worth targeting.

Looking at twitter links on my aging iPad is a painful experience these days.


Twitter, where displaying short pieces of text and occasionally images and very occasionally video can require extravagant amounts of processing power that can only be found in the latest hardware and software...

Years ago, it was a common conspiracy theory that the hardware manufacturers were forcing obsolescence --- and to a certain extent they still are --- but now it seems software developers have outdone the hardware manufacturers without any help from the latter...


Wirth's Law, 1995 - "software is getting slower more rapidly than hardware is becoming faster", attributed back to 1987 or so.

- https://en.wikipedia.org/wiki/Wirth's_law


Simply compare with nitter.net to see how the exact same content can easily be presented in a simple, lean, fast way.


https://m.twitter.com or https://nitter.com

Use Privacy Redirect in Firefox/Chrome for iOS if you can.


mobile.twitter.com is also terrible


At 22, you bristle at the thought of someone in their 30s or 40s calling you a child, and why does the entire car rental industry agree with these tired old farts? It's not fair.

By 30, you start to allow that they might have a point, but you abstain from saying anything because you remember how it feels. You smirk (privately) at Sarah in Labyrinth instead of identifying with her. "It's not fair. It's not fair." Brat. By 35 you've lost track of how many times you've resisted the urge to condescend, and you start to lose the war by degrees.

    The best lack all conviction, while the worst
    Are full of passionate intensity.

I am coming to appreciate Jim Highsmith's position (we aren't solving problems, we are resolving paradoxes, but refuse to see).

It's not that they're wrong, or they're right. It's that everybody is wrong (and always will be). That excitement at finding a new strategy (which the old cynics point out is merely new to you) is the hope of escape. It's also the hope of changing the narrative so that everybody is on an even footing. You aren't competing with people who have 10 years experience in this (the only people who do are 50 and a mix of short memories and ageism prevents them from taking over).

If 'progress' looks like taking our foot out of one bucket and putting it into another over and over, we're just going to spiral toward the future forever, which is going to be boring and slow. Probably we need more specializations based on problem domains instead of techniques. My exceedingly vague understanding of the history of medicine is that they didn't make much progress either until they did that, and that a lot of people died until they started getting serious about issues. We are 75 years old as an industry. It's time to talk about kicking out the snake oil and cure-all vendors.


More focus on Performance and not just Eye Candy would help.

Growing up with the constraints of weaker computer hardware helped the Old Guard appreciate performant solutions.


It's coming from the mega cap FANG companies. There never was an organic open source evolution of SPA's or front end JS frameworks because offloading compute to the client is an optimization that only a handful of companies on the planet need.


It very well may be. When I was at CMU (MISM), part of our curriculum discussed distributed systems. Some took it as another tool in the kit, but many came away thinking it was taught to us because it should be how we do things in the field.


> When I asked about people on mobile with older devices

How about people further than 50ms away from your server? Server-side rendering gets old quite quickly there.


You kidding!? SPAs have a significantly worse user experience with high latency; the vast majority of SPAs will just display a blank screen if the latency is too great.


IME latency is much more a problem with shitty (ie most) SPAs that only start loading the data after the scripts are loaded.


Amen! A person near and dear got let go on some really snarky nonsense etc etc. Really stressful. But guess what? 90 days later they canned 2/3rds of everybody else including a whole bunch of the jerks. Their old office is black; not a soul around. They've got big customer and cash flow probs. When we heard this ... straight to the bar for a couple of celebration rounds. Hee hee hee. Meanwhile the person in question got a way better job.


I mean the logic is pretty straightforward: your servers will never have the computing power of all client devices combined, and the slowest part of any webpage is going to be communication between server & client. If you are writing a web app with JS so intense that people can’t run it, that is an indictment of that app, not of all client-side rendering.


The slowest part of any modern webpage is the part where the client (usually a mobile phone) has to download 1MB+ of JavaScript, parse it, execute it, use it to fetch more code and then display it to the user.


But servers are typically fast at serving up html, which the server isn't rendering, only generating. Very fast these days, particularly with proxies and server caching.


The flaw in that logic is assuming that the total amount of work being done is the same for both client-side and server-side rendering.


Even more so, they're resources you don't pay for.


While SPAs are somewhat inefficient, I'm convinced that the unnecessary bloat is mostly related to

A) advertising and/or tracking

B) improperly compressed or sized assets

C) unnecessary embedded widgets like tweets or videos

D) insane things like embedding files into CSS using a data URI and therefore blocking rendering

E) nobody using prefetching properly

These are very loose figures for my computer and internet connection, but a small server-side rendered site is in the 10s of KB, and loads within 2 or 3 seconds. A small client-side rendered site is in the 100s of KB, maybe a little more or a little less, and takes 4 or 5 seconds. The sites that I really hate are in the 5 MB+ range and don't load for anywhere up to 10 or even 20 seconds, which goes above and beyond the bloat caused by client-side rendering.


If you have any reasonably sized app you very quickly run up to 100s of KBs. Hell, just including moment.js to deal with timezones gets you there.


Yep, but that only takes you into the range of 100s of KBs. To get above 1 MB is possible, but at some point before you hit a 5 MB initial page load it becomes the fault of things other than the technology you're using.

EDIT: For reference, https://www.tmz.com/ with no ad blocker is 19.22 MB (7.51 compressed) and takes 19.99 seconds to load, and they only use jQuery. I don't think the underlying technology is the problem for them.


F) dynamically generating layout on the client when it could just be generated once on the server and cached for 99.999% of all websites


On the other hand, round-tripping to the server on every interaction isn't the paragon of efficiency either.

There are only trade-offs, not a faction war. People who don't realize that are usually just part of a cargo cult.


I don't think the problem is client rendering, especially when you consider latency. Sending views over a network, especially a high-latency, low-reliability one like a cell network, isn't going to beat the performance of doing UI rendering on-device.

Same goes with storage. What's faster: a readdir(3) call that hits an SSD to see how many mp3s you have downloaded, or traversing a morass of cell towers and backbone links in order to iterate over a list you fetch from a distributed data store running in AWS? It's the readdir(3) call.

Giant bundles of unnecessary JS are also bad for performance, but there's a reason why when we had more limited computing resources, we didn't try to make every screen an HTML document that needed a roundtrip to some distant server to do anything. Computing happened on your computer. That's also why native apps on smartphones exist: Apple tried to make everything websites with the first iPhone, it was unbearably slow, and so they pivoted to native apps.

Plain old documents are best as HTML and CSS. Highly-interactive UI isn't.


You know what? I don’t think the trend for client-side rendering is the problem. It seems logical. The problem is the hijacking of client-side development by frameworks like React that produce a 1 Gig bundle of JS soup and dependencies rolled into a memory-hogging ball, when all you need is 50 kilobytes of basic vanilla JavaScript that would download to the client before you can say “Chrome has run out of memory”.


Is that really the central problem though? Or is it that there is so much cruft?

Most every time I try to load a web page my sole aim is accessing a little plain text and perhaps a picture or two if clearly relevant and illustrative. But (if it weren't for ublock or the like), for the few KB of the content I actually want I have to wade through irrelevant stock photos, autoplay videos, innumerable placements serving promotions/ads/clickbait, overlays, demands for entering my email address or logging in, social media icons and banners - and that's to say nothing of the stuff I don't see, the trackers and the scripts. Surfing the web like this is frankly a strain, one that we've accepted as normal because everyone does it.

If we serve cruft faster, certainly that will improve speeds, but those gains might simply motivate the powers that be to add more cruft, so - just as is the case with network speeds - we'll end up where we started. We need to be radical and tear web pages down rather than merely focus on serving them faster through technical means.


It seems like a lot of websites could replace react/angular/framework with some simple jquery and html but that's not cool and not good on your resume. So now we have ui frameworks, custom css engines with a server side build pipeline deploying to docker just for a photo gallery.


Why would you want to replace something like Vue with jQuery?

Just because jQuery isn't a brand-new frontend framework doesn't mean it is good or something.

In my experience, jQuery leads to difficult-to-maintain codebases relatively fast compared to e.g. Vue.


Difficult for newskool kiddos fresh out of bootcamp who can't wrap their heads around complexity, maybe.

In my experience it's the frontend frameworks that make for worse code. Callback soup is downright simple and easy-to-follow compared to some of the atrocities that React et al have wrought. Worse, they repackage old tech as "new" while completely reworking the vocabulary and paradigms, so old hats have to relearn shit they already know because some snot-nosed Facebook engineer needs another resume badge for his inevitable 18-month departure


Because to some people, programming in strings with Vue is unpleasant.


> with some simple jquery

Plain javascript, if you want to go down that route. Jquery is around 30kb of javascript, if I remember correctly.

> but that's not cool and not good on your resume

On the contrary, writing vanilla js is pretty cool and impressive on the resume, when every other developer puts react there; but it's pretty miserable too, compared to using frameworks.


If I see the words "vanilla JS" or "SQLite" on a resume, I will automatically place it into my maybe pile.


Is that a good or bad thing?


Yes, it's basically a shortcut when I am doing a quick first scan through resumes. The moment I see one of those keywords, I will flag it for second-pass review and move on.


I'll bring some demos of my snowflake animations and cartoon character gifs that follow the cursor around to the interview

I actually quite like writing vanilla js in my spare time, but I can never tell if I'm doing everything horribly wrong or not


As long as it makes sense when you read it after a few days (i.e. its well organized) and it also works as expected, then you are probably doing well.

Javascript is actually incredibly fast on most devices, especially if you are constraining yourself to the vanilla API and not something like jQuery. Remember, every web browser's JS engine has been hyper-optimized to support decades of half-assed website implementations.

It's really hard to screw up perf on document.getElementById(). There's honestly not a whole lot of ways to hang yourself with the vanilla methods unless you are trying to build something ridiculous like a raytracer or physics engine.


I think the biggest thing for your average app is just causing a lot of layout repaints. As long as you spam requestAnimationFrame everywhere and group style changes and polling, you'll be good. I was really surprised how good I was able to get things on even an ancient iPad 2 I had laying around.
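
A rough sketch of what that batching looks like in plain JS (nothing framework-specific; the CSS variable name is made up):

    // Batch DOM writes into one animation frame instead of touching
    // styles on every scroll event, which would force repeated layouts.
    let pending = false;
    let latestScrollY = 0;

    window.addEventListener('scroll', () => {
      latestScrollY = window.scrollY;          // cheap read
      if (!pending) {
        pending = true;
        requestAnimationFrame(() => {          // one write per frame
          document.body.style.setProperty('--scroll', latestScrollY + 'px');
          pending = false;
        });
      }
    });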


> document.getElementById()

Can we at least do document.querySelector? Pretty please? :-)


Just use the 1 line jQuery:

const $ = document.querySelector.bind(document)
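
Usage is then the same as a basic jQuery lookup, minus the chaining and collection helpers (the selectors below are just placeholders):

    const $ = document.querySelector.bind(document);

    $('#login-form');   // same as document.querySelector('#login-form')
    $('.nav a');        // first matching element, or null if none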


What does it do?


Vanilla JS is pretty cool. Eleventy is even cooler: it gets you most of the things a framework gets you, but it gets compiled to fast, vanilla JS, and that's what's served to the browser.

Svelte/Sapper also has a lot of potential, as it only ships the parts of the framework that are absolutely needed instead of the whole thing.

But in reality, you can make plenty of very fast React sites and plenty of slow vanilla JS sites.


Any modern JS framework gets compiled to vanilla JS. That's exactly the problem: because browsers don't implement ES6+ syntax natively, it has to get compiled to complicated, long-winded ES5 or shimmed with heavy polyfills. If browsers were all spec-compliant, code bundles would be way smaller.


> Because Browsers don‘t implement ES6+ Syntax natively

Erm what? What browsers are you talking about now?


This almost certainly means Internet Explorer, if they’re only referring to ES6. But honestly I love all the feature since then. It’s actually difficult for me to imagine not using TypeScript these days, let alone not having access to asynchronous/await or async iterables, plus all the tiny little syntax improvements it’s easy to take for granted like Array.includes()


Old Android browsers, UC Browser.


Feel free to replace jQuery with regular JavaScript in my post.


It's trendy to take a shot at new things. You need to build a solution to the problem - sometimes HTML can get this done and sometimes it can't. I agree that we've gone off the deep end of "everything must be react" but it's silly to say this is simply to drive resumes. It's mostly under-experienced folks using react as their hammer to deal with any nail, screw or bolt they come across - but that hammer is useful and when you've got a nail you should use it.


I think this is coming from the same direction as desktop computing.

Try to install Windows 7 on a brand new machine (hopefully you will get the drivers). Regardless of all the new "improvements" in Windows 10, it will fly.

What we did with the HW performance increase is just staggering. Instead of having software that works much faster, we ate the performance for the sake of cheaper development - filling software with lasagnas of huge libraries that in most cases are not needed, employing incompetent developers who know how to code but are clueless about the computer/OS/browser they are running their code on, not optimizing anything, ...

Suboptimal technologies, suboptimal languages (to make the developers less expensive for the companies), lack of knowledge. It stacks up: today's webpage is easily a few megabytes for a KB of text, due to a huge list of third-party dependencies that are not really needed, but are there for minor details on the web page. It is just crazy and far worse than when server-side rendering was "THE thing".

The result is here, not only on web.


What specific part of windows 10 actually makes it run like ass? I really don't understand what the situation is. Are users being punished for the sake of being able to hook everything with ad revenue?

I have noticed that on Windows Server 2019 when I remove Windows Defender, explorer.exe seems to get 10x snappier (start menu appears more quickly, etc) but it still feels like something isn't quite right.


Static things should be generated server-side (even if those static things are dynamically generated), and things that change on the page after load, interactively or by timers, should be rendered browser (client) side.

Client side rendering has become popular because it reduces server load... but unfortunately increases processing time in the browser, which can slow things down for users.

There are solutions (like Gatsby, which is its own layer of complexity) and cheats and workarounds, but the standard should be that if a page will contain the same information for a certain state on an initial load, that content should be generated server-side. Anything that can't be, or that is dependent on browser specs or user interaction, should be client-side.

I just don't believe in making the user process a bunch of repetitive static stuff that can be cached on the browser from the server or compressed before sending. There's gotta be more consideration of user experience over server minimization.


You're not the only one who thinks this way: https://twitter.com/ID_AA_Carmack/status/1210997702152069120


This sounds a lot like the old argument of developers not being careful about how much memory/cpu they use. Engineers have been complaining about this since the 70s!

As hardware improves, developers realize that computing time is way cheaper than developer time.

Users have a certain latency that they accept. As long as the developer doesn't exceed that threshold, optimizing for dev time usually pays off.


Server-side is no panacea. I started paying attention recently, and WordPress-based sites frequently take well over a second to return the HTML for pages that are essentially static—and that's considered acceptably fast by many people running WP-based sites. Slow WP sites are even worse.


Wordpress is not static at all. It supports commenting and loads comments by default, it shows related articles dynamically depending on categories and views, it displays different content to different visitors, resizes and compresses pictures on the fly, etc... and a thousand more things if you are a logged-in user. It really is dynamic.

It's actually pretty good considering what it does (if you don't setup a ton of plugins and no ads). There can be 50 requests per page but that's because of all the pictures and thumbnails. The page can render and be interactive almost immediately, pictures load later.


When I consider the "server-side" argument, I think of it in apples-to-apples comparison: custom code that is either server-side or client-side rendered. Wordpress on the other hand is a packaged application, typically used with other packaged plugins and themes. Moreover, many Wordpress sites are run on anemic shared hosting. Custom applications can as well, but I feel that's far less likely.


"I think of it in apples-to-apples comparison"

That's a restriction in your mind that has nothing to do with the topic.

"Moreover, many Wordpress sites are run on anemic shared hosting."

The same hosting that would be perfectly fine for a static site.


Everything can be done poorly. The problem with wordpress is it allows a plethora of plugins and hooks that allow for lego-style webapp construction. This is not going to result in cohesive, performant experiences.

If you purpose build a server-side application to replace the functionality of any specific wordpress site in something like C#/Go/Rust, you will probably find that it performs substantially better in every way.

This is more of a testament to the value of custom software vs low/no-code software than it is to the deficits or benefits of any specific architectural ideology.


"If you purpose build a server-side application to replace the functionality of any specific wordpress site in something like C#/Go/Rust, you will probably find that it performs substantially better in every way."

You'd find the exact same thing for a Python or Node site, too.


I mean I just opened a Twitter profile and counted 10 full seconds before the actual tweets loaded. 1-2 seconds is pretty blazing by comparison. How much faster do you need to be than the most popular social media site?


Attitudes like this are why websites load slowly.


If you use a plugin like WP2Static to render actual static pages, you get far better performance. If you have a well-designed theme that, e.g., doesn't have render-blocking JS, you should have seemingly instant load times.


Your first point may be true, but it completely ignores the reality of a vast number of WordPress sites that don't use a plugin to generate a separate, static, site.

As to your second point, what do you imagine that "well-designed theme[s]" have to do with sites taking well over a second to start returning HTML?


> Your first point may be true, but it completely ignores the reality of a vast number of WordPress sites that don't use a plugin to generate a separate, static, site.

For the record, I would never run WordPress as non-static unless I had no other option. I'm not defending WordPress in any way. I just didn't see the need to mention its flaws because the parent comment already had.

I would personally prefer not to use WordPress at all, but it is the de facto standard for marketing websites, and marketers don't know or care about the performance and security nightmare that is standard WordPress. Since that is the reality, I felt that it was helpful to let people know how to deal with it constructively instead of just deploying insecure, slow websites.

> As to your second point, what do you imagine that "well-designed theme[s]" have to do with sites taking well over a second to start returning HTML?

That was stated in the context of static websites. If you're running non-static WP, you're just fucked.


Laravel LiveWire + Alpine.js can do 80% of SPA needs from your PHP backend code.

https://laravel-livewire.com/


I've been enjoying using both of these a lot.

Alpine has replaced situations where I would previously use vanilla JS or jQuery (i.e. simple UI interactivity, but Vue would be overkill), but is far nicer to use.

LiveWire is perfect for things like data tables—it's not really interactive per se, but a full page refresh to change filtering or sorting sucks, and implementing it as a purely JavaScript component makes it harder to use all the cool Laravel stuff I have on the backend. With LiveWire I can just pass in the path to a Blade partial to use as the table row template, and use all the back-end stuff I like.

That just leaves the complex, high interactivity stuff, which I continue to use Vue for.

LiveWire is missing a couple of features that are stopping me from using it in production (namely the ability to apply different middleware to different components), but V2 is out soon, so hopefully that will include it. If not, I'll probably look at contributing it myself.


This is neat. I have been meaning to give PHP another look just to see how they've gotten along over the last half decade.


From Dan Abramov from the React team yesterday https://twitter.com/dan_abramov/status/1290289129255624706

> We’ve reached peak complexity with SPA. The pendulum will swing a bit back to things on the server. But it will be a new take — a hybrid approach with different tradeoffs than before. Obviously I’m thinking React will be a part of that wave.

Combine that with Next.js's new features for server-side rendering, and I think we are going back to that. My React site is server-side rendered.
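
For anyone unfamiliar with the Next.js flavor of this, it looks roughly like the sketch below: getServerSideProps runs on the server per request, the HTML arrives pre-rendered, and React then hydrates it on the client. The endpoint and prop names here are made up.

    // pages/events.js - rough sketch of Next.js server-side rendering
    export async function getServerSideProps() {
      // hypothetical API endpoint
      const res = await fetch('https://api.example.com/events');
      const events = await res.json();
      return { props: { events } };
    }

    export default function EventsPage({ events }) {
      return (
        <ul>
          {events.map((e) => (
            <li key={e.id}>{e.title}</li>
          ))}
        </ul>
      );
    }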


Users say they want this, but then people get upset when list filters don't automatically update the listing when you click them, or when you have a set of cascaded drill-down dropdowns and the first doesn't automatically filter the second, or when widgets that should be hidden when inapplicable are still visible.


Okay, but why does that require a supercomputer to do with acceptable latency?

We had UIs of that complexity on DOS, and they were far more responsive. Modern eye candy has its cost, yes, but that doesn't explain most of the difference.


Do you have any data to prove that server-side rendering will lead to "faster" websites? The article definitely doesn't provide any.

Server-side rendering means the web page will be blank until the server responds. If a majority of the heavy lifting is done on the server, you increase the opportunity of slower server response times. That's a worse UX than a web page gradually loading on the client-side.

Things like CSS cannot be rendered on the server, yet CSS is often a bottleneck to rendering. Same goes for images and fonts. Where's the data showing "client storage" and "client compute" are the culprits of slow websites?


The server has more knowledge than the client. Once economies of scale kick in, you can start to do things like speculative rendering of pages for users based upon prior access patterns, time of day, region, preferences, etc.

For instance, you could have a pre-render rule that says to trip if there's an 80% chance the user is just going to proceed to checkout and not back to the store based upon the type of product in the cart. This would mean that while the user is reviewing their shopping cart & options, the server could be generating the next view. Once the client hits "Proceed to Payment", the server (or CDN) can instantly provide the cached response from memory. This basically takes UX latency down to RTT between client & server if you have a very predictable application or are willing to speculate on a large number of possibilities at once.
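
A very rough sketch of that kind of rule (the function names and the 80% threshold are purely illustrative, not any particular system's API):

    // Speculative pre-rendering on the server: render the likely next
    // view while the user is still looking at the current one.
    // estimateCheckoutProbability() and renderCheckoutPage() are hypothetical.
    const prerenderCache = new Map();

    async function onCartViewed(session) {
      const p = estimateCheckoutProbability(session); // product type, history, etc.
      if (p >= 0.8) {
        const html = await renderCheckoutPage(session);
        prerenderCache.set(session.id, html);         // serve from memory later
      }
    }

    async function handleProceedToPayment(session) {
      const cached = prerenderCache.get(session.id);
      return cached !== undefined ? cached : renderCheckoutPage(session);
    }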


> Server-side rendering means the web page will be blank until the server responds.

With technologies such as Turbolinks and Stimulus.js it doesn't have to be that way, that's what Basecamp uses.


Turbolinks and the like just move the problem elsewhere. Great, now your site is no longer blank on subsequent loads but your first input delay has jumped by a magnitude since the server still takes forever to respond. There's still a delay regardless.

Also, Turbolinks only becomes useful after the page has loaded. So every fresh visitor is still going to see that horrible flash of blankness, wondering if the site is broken.

That isn't to say it's worse than just default server-side rendering: I think it provides a better UX. But who knows how much, and who knows if it's better than an SPA. Nobody is citing any real data here, just talking out of their ass.


There’s no arguing against server rendering being simpler - it objectively consists of fewer components. But to say that it is more performant by nature? You can’t actually argue that. There are plenty of performance downsides to rendering a full page in response to every user action. Don’t forget, an SPA can fetch a very small amount of data and re-render a very small part of the page in response to a user action. There is a noticeable performance benefit to doing that.

Performance doesn’t only boil down to the first page load. Hopefully your application sessions are long, and the longer page load time gets amortized across the session if interactions are performant after that.

Note, I primarily work on enterprise apps where the sessions are long, and the workflows are complicated. Of course page load time matters much more for a static site or a blog / content site.

But to claim that SPAs are all cost with no benefit is just disingenuous. Of course they have their own set of trade offs. But there is a reason people use them, and it’s not some conspiracy fueled by uneducated people. Server rendering isn’t some objective moral higher ground.


> Performance doesn’t only boil down to the first page load.

For 99% of websites I open it really does boil down to just that.


> It doesn't have to be this way.

You could hire competent developers who know how these technologies actually work. Server-side rendering is better but still not ideal, because the incompetence is merely shifted from the load event to later user interactions. The performance penalty associated with JavaScript could be removed almost entirely by supplying more performant JavaScript, regardless of where the page is rendered.


To me, client-side rendering feels like an end-run around incompetent full-stack devs who don't know how to make server-side rendering fast. So why not throw a big blob of JS at the user (where their Core i7 machine and 16GB of RAM will munch through it), and on the backend, the requests go straight to the API team's tier (who know how to make APIs fast).


There are other advantages to server-side other than the specific professionals involved in the implementation.

Server rendered web applications are arguably easier to understand and debug as well. With something on the more extreme side of the house like Blazor, virtually 100% of the stack traces your users generate are directly actionable without having to dig through any javascript libraries or separate buckets of client state. You can directly breakpoint all client interactions and review relevant in-scope state to determine what is happening at all levels.

One could argue that this type of development experience would make it a lot easier to hire any arbitrary developer and make them productive on the product in a short amount of time. If you have to spend 2 weeks just explaining how your particular interpretation of React works relative to the rest of your contraption, I suspect you won't see the same kind of productivity gains.


This is completely subjective, but if you want reduced maintenance expenses then don’t rely on any third party library to do your job for you regardless of which side of the HTTP call it occurs. Most developers don’t use this nonsense to save time or reduce expenses. They use it because they cannot deliver without it regardless of the expenses. The “win” in that case is that developers are more easily interchangeable pieces with less reliance upon people who can directly read the code.


I am not aware of any such rule, given that I keep coding server rendering using Java and .NET stacks since ever.

The rule to pay attention to is not to follow the fashion industry of people wanting to sell books, conference talks, and trainings, while adopting a wait-and-see attitude.

If you wait long enough then you are back at the beginning of the circle, e.g CORBA/DCOM => gRPC.


"With server-side (or just static HTML if possible), there is so much potential to amaze your users with performance."

Actually, I am amazing my users with C++ data servers and all rendering done by JS in the browser. What I do not do is hook up those monstrous frameworks. My client side is pure JS. It is small and the response feels instant.


And it does not scale to a business application that needs to be deployed independently of target systems across the world.


Maybe? Only if the business is actually deploying the server-side to folks. Using C++ to run data-servers is a choice you can make - one that I'd be a bit wary of since if C++ has a glaring weakness it's everything having to do with strings and I/O[1] which is going to be a big component of what you're writing.

1. C++ can do these things, and can do them quite performantly - but it takes an amount of effort far exceeding doing the same thing in say, Go or Java.


>"C++ can do these things, and can do them quite performantly - but it takes an amount of effort far exceeding doing the same thing in say, Go or Java."

Not my observation, writing business servers in modern C++ using some libraries is a piece of cake. I do not have any problems with I/O and strings either.


What string weakness?


The lack of native marshalling and unmarshalling approaches outside of the style perpetuated by sprintf, and no support for any on-the-go string variable injection or templating without pulling in libraries.

This is a weakness that can be overcome, but it's a weakness.


> without pulling in libraries

I don't see having these things as separate libraries to be a weakness - this way they can evolve independently from the language and can be much more specialized.


The irony for me is that I often see very small applications that use a long list of technologies... and end up taking much longer to build (and load on a client browser) than a server-side rendered application would have.

To be sure, if you can accurately roadmap an application such that you can see how it's going to grow and expand across teams, then you can see where it makes sense to use frameworks to build areas, navigation, components, etc. and then be able to distribute work across teams.

But often very small applications with very small teams are built in a way that is unnecessarily complex, and the expected (later) payoff never arrives.


Well some applications must be able to run on IE 4.0. Granted I do not cover such cases. But I do not really care. So far my clients (from across the world mind you) do not have such requirements, hence it is not my problem. What I do have instead is stellar scalability and performance.


If the speeds have increased, are we (users) paying an increase in price? If yes, what are we subsidising?


“Why should I care about future generations – what have they ever done for me?” – Groucho Marx

Sure, off-loading the work onto the client doesn't help speed.

But Groucho would say now, "Why should I care about the client? What has the client ever done for me?"

Sure, the web pages don't load any faster 'cause they're now running a cr*p load of javascript. And that javascript is running more and more annoying ads. And that's because ads support most websites and there's a finite ad budget in the world and that budget is naturally attracted to the most invasive ads available.

I often consult d20pfsrd.com, a site that hosts the open gaming license rules to the Pathfinder rule system (D&D fork/spin-off). The information itself is just static text and once was, apparently, supported by text ads. But now, naturally, it serves awful video ads as well. I would strongly suspect the site isn't getting more money for this, it's just that now that advertisers can run this stream of garbage, advertisers must run this stream of garbage.


There is no rule, just tons and tons of mindless cargo-culting.


I think that when the UI (in general) was starting to go server side rendered (again) people started to find ways to make it client rendered for speed (again). In fact I can imagine the guys at google building the first gmail said "It doesn't have to be this way."


admin.google.com is a great example of unnecessary, over-engineered and almost comically bad client side rendering.

First off, it's painfully slow. Then you go to manage users. There's a list of users; so far so good. Then you try to add a user. First there's a loading(?!) indicator. Then the add user dialog shows up. You fill in the form and add a user. The dialog closes and the list of users does not refresh. You don't see the user you just created. It shows up only after you reload the page. How does something like that even happen?


This is coming from someone who built an entire server-side rendering framework with PHP and then added Node.js for sockets and other realtime stuff...

Client first apps are the future.

Look, this is what I was able to get with a mix of clientside and serverside... does it load fast?

https://yang2020.app/events

The document is loaded from the server and then the client comes and fills in the rest. That first request can preload a bunch of data, to be sure. But then it can’t be cached.

Please read THIS as to the myriad reasons why client-first is better:

https://qbix.com/blog/2020/01/02/the-case-for-building-clien...


> Look, this is what I was able to get with a mix of clientside and serverside... does it load fast?

That's a resounding 'no' from me: [0]

It takes over a full minute to finish loading the page. As to when the titles for the calendar events, the interesting part, first appear, that's about the 30sec mark.

For comparison, this page I'm writing from loaded in 216ms.

[0] https://sixteenmm.org/personal/yang.png


Under 3 seconds here on an old $999-when-new Windows 10 laptop.

I have no idea what you are running on that makes it 30 seconds. The site was quite fast. Scrolling is a bit abrupt, it should probably pre-load more aggressively, but other than that the site works really well.


Cleared cache, loaded everything in 3 seconds. Strange:

https://streamable.com/883nsf


FWIW the page loaded relatively fast for me but the images then loaded in very slowly (both the backgrounds and avatar icons).

I was curious (not picking on you, and I'm hardly an expert) so threw it at gtmetrix and you can see the same (click on Waterfall; the suggestions on the main PageSpeed tab seem pointless).

https://gtmetrix.com/reports/yang2020.app/2vndjPO4


Well, they seem to give an A on almost all metrics except small images; they claim we could save 50 KB overall, LOL. And the reason they are wrong is that retina displays have 2x density per logical pixel.

The suggestion to use a CDN and set up HTTP Caching is a good one. As well as minimizing Javascript. My point was specifically to illustrate how fast an image-heavy page can be without it. It lazyloads images on demand, batches requests and does many other things to speed up rendering.
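
(For readers who haven't built this before: the lazy-loading part is commonly done with an IntersectionObserver along the lines of the sketch below. This is a generic pattern, not necessarily what this particular site does.)

    // Generic lazy-loading sketch: real image URLs live in data-src and
    // are only assigned once the image scrolls near the viewport.
    const observer = new IntersectionObserver((entries, obs) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          const img = entry.target;
          img.src = img.dataset.src;   // trigger the actual download
          obs.unobserve(img);
        }
      }
    }, { rootMargin: '200px' });       // start a little before it's visible

    document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));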


> Look, this is what I was able to get with a mix of clientside and serverside... does it load fast?

6.60s for me, on desktop.


As others have pointed out, this is an example of front-end going wrong. It just feels so sluggish.

Ryzen 2700X, 32GB RAM, 300/300 Mbps internet (hard-wired.)


I just rewrote my personal website ( https://anonyfox.com ) to become statically generated (zola, runs via GitHub Actions), so the result is just plain and speedy HTML. I even used a minimal classless „css framework“ and on top I am hosting everything via Cloudflare Workers Sites, so visitors should get served right from CDN edge locations. No JS or tracking included.

As snappy as I could imagine, and I hope that this will make a perceived difference for visitors.

While average internet speed might increase, I still saw plenty of people browsing websites primarily on their phone, with bad cellular connections indoor or via a shared WiFi spot, and it was painful to watch. Hence, my rewrite (still ongoing).

Do fellow HNers also feel the „need for speed“ nowadays?


That's fantastic - as near to instantaneous as you need, and it's actually slightly odd having a page load as quickly as yours does; we've become programmed to wait, despite all the progress that's happened in hardware and connectivity. The only slightly slow thing was the screenshots on the portfolio page as the images aren't the native resolution they're being displayed at.

Does the minification of the css make a big difference? I just took a look at it using an unminifier, and it was a nice change to see CSS that I feel I actually understand straight away, rather than thousands of lines of impenetrable sub-sub-subclasses.


I just settled on https://oxal.org/projects/sakura/ and added a handful of lines for my grid view widget, that's all.

Maybe it's me, but I originally learned that the concern of CSS is to make a document look pretty. Not magic CSS classes or inline styles (or both, this bugs me on tailwind), so the recent "shift" towards "classless css" is very appealing.

Sidenote: Yes, the screenshots could be way smaller, but originally I had them full-width instead of the current thumbnail, and still thinking about how to present this as lean as possible. Thanks for the feedback, though!


I use the picture tag with a bunch of media queries to deliver optimized images for each resolution in the websites that I build; resizing a 1080p image to only 200px width does wonders for mobile performance while keeping it perfect for full HD monitors.
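
Something along these lines (file names are placeholders):

    <!-- Sketch: serve a small image to narrow screens, the full-size one elsewhere -->
    <picture>
      <source media="(max-width: 600px)" srcset="photo-200w.jpg">
      <source media="(max-width: 1200px)" srcset="photo-800w.jpg">
      <img src="photo-1920w.jpg" alt="Project screenshot" width="1920" height="1080">
    </picture>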


Since Zola has an image resizing feature and shortcode snippets, this could be a nice way to automate things away (I'd hate to slice pictures for X sizes by hand).

Will have a look, thanks!


Your pages are excellent compared to most offerings of similar information density.

But there's always room for experimentation.

How about preserving a copy of your portfolio page now (and the PNG files it's now using) and giving it an address like /portfolioOLD?

Then, using an image editor, ruthlessly resize/resample-at-lower-bit-depth one of your PNGs so its actual rectangular pixel dimensions are about the same size that it appears at on a full-size monitor now.

Then ruthlessly compress it until it looks just a little less high-quality than it does now. Just a little bit, you want to be able to tell the difference but you don't want other people to notice. These are just thumbnails anyway.

Use these editor settings on the rest of the PNGs, renaming them accordingly as you go.

Deploy the new portfolio page linking to the resized renamed thumbnails instead.

Just guessing, but I expect it can bring the load time down to about 10 percent of the old portfolio.

And it would be really easy for anyone to A/B test and get representative numbers.

This is how we used to party like it's 1999.


Thank you for using sakura.css, really appreciate it and glad you enjoyed using it! ^_^

On the other hand, I really enjoy working with tailwind. Having html and css "together" works really well with my mental model, and I can iterate very fast with it.

Though setting up tailwind is a bit of a pain, and I still use sakura + good old css everywhere I possibly can.


Very impressive. One cool thing you can do to further improve perceived speed, potentially at the expense of some bandwidth, is to begin preloading pages when a link is hovered. There are a couple of libraries that will do this for you.

It can shave 100 - 200 ms off the perceived load time, and since your site is already near or below that threshold it might end up feeling like you showed the page before anyone even asked for it.
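
Those libraries mostly boil down to something like this sketch (instant.page is one example of the approach, as far as I know; the details below are a generic hand-rolled version, not its actual code):

    // Hover-prefetching: when the user hovers an internal link, add a
    // <link rel="prefetch"> so the next HTML is likely already cached
    // by the time they click.
    document.addEventListener('mouseover', (e) => {
      const a = e.target.closest('a[href^="/"]');
      if (!a || a.dataset.prefetched) return;
      const link = document.createElement('link');
      link.rel = 'prefetch';
      link.href = a.href;
      document.head.appendChild(link);
      a.dataset.prefetched = 'true';
    });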


I have done the same with Hugo on my blog[0], but actually had to fork an existing theme to remove what I would call bloat.[1]

The interesting thing for me is, while I personally certainly feel the "need for speed" and appreciate pages like yours (nothing blocked, only ~300kb), most people do not. Long loading times, invasive trackers, jumping pages (lazily loading scripts and images), loading fonts from spyware-CDNs - these are all things only "nerds" like us care about.

The nicest comment on my design I heard was "Well, looks like a developer came up with that" :)

[0] https://chollinger.com/blog/ [1] https://github.com/chollinger93/ink-free


That's perfect! Most pages should load instantaneous, at least those serving text for the most part.

I did the same for my website [1], and I hope this becomes more of a standard for "boring old" personal pages and blogs.

[1] https://marvinblum.de/


Even for most businesses it should be the norm. When you think about it, most businesses have almost no actual dynamic content on their website - other than any login/interactivity features, they might change at most a few times a day...


The businesses with no dynamic content also tend to be the ones who rent a wordpress dev who just finds a bunch of premade plugins and drop 50 script tags in the header for analytics and other random crap.


Interesting. Your site triggered our corporate filter as "Adult/Mature Content". I wonder what tripped it up.


Oh, wow. I have no idea, there is not much content yet, and zero external dependencies... maybe its the "anon" in the name? I mean, I even bought a Dotcom domain to look ok-ish despite my nickname :/


Try looking up 'blum' on urbandictionary :)


Wow, that's unfortunate. Well, I can't do anything about that :)


Major manufacturer of cabinet hardware - seems fine.


And means 'flower' in German.


That's very cool. Nice little project to speed up the site. One data point: a cold load takes about 2.2 seconds; subsequent loads take about 500ms, from a cafe in the Bay Area using shared wifi.

The cold loading stats:

     Load Time 2.20 s
     Domain Lookup 2 ms
     Connect 1.13 s
     Wait for Response 68 ms
     DOM Processing 743 ms
     Parse 493 ms
     DOMContentLoaded Event 11 ms
     Wait for Sub Resources 239 ms
     Load Event 1 ms
Edit: BTW, the speed is very good. I've tried similar simple websites and got similar result. Facebook login page takes 13.5 seconds.


I do not really understand why it is _that_ slow...

DOM Processing 743 ms Parse 493 ms

... I mean, it is just some quite light HTML and minimal CSS, right? what could possibly make your browser so slow at handling this?


My guess? It's doing streaming parsing/processing, so it's network bound.

It started downloading html, once it got the first byte it started processing it, but then it had to wait for the rest of the bytes (not to mention the css file to download).

The parent comment is clearly using some really slow wifi, so I think it's likely that's what happened.


FWIW, I re-run the test at home. Cold load is about 400ms; repeated loads are about 240ms.

Cold load stats:

    Load Time 409 ms
    Domain Lookup 37 ms
    Connect 135 ms
    Wait for Response 40 ms
    DOM Processing 165 ms
    Parse 123 ms
    DOMContentLoaded Event 8 ms
    Wait for Sub Resources 34 ms


Might be pretty good depending on the specs


The page is 1.03KB of HTML and ~1.5KB of CSS. The HTML has about a dozen lines of Javascript in the footer that, at a glance, seemed only to execute onclick to do something with the menu. I'm pretty sure a 166MHz (with an M) Pentium could process 1.03KB of HTML and render the page in under 700ish ms, so I agree that that seems oddly slow for any modern device, unless they're browsing on a mid-range Arduino.


The HN effect?


Since this runs solely on Cloudflare Workers Sites itself (no server behind it), it would be quite funny if HN hugging the site had any measurable effect :D


I have a similar setup for my personal site, although it's still a work in progress. I've really been interested in JAMstack methods lately. I build the static site with Eleventy, and have a script to pull in blog posts from my Ghost site. Too bad I haven't really written any blog posts though, maybe one day :) Anyhow, I really like Cloudflare Workers, would recommend!


There is no support for comments in the blog and no pictures at all. No images, no thumbnails, no banner, no logo, no favicon.

Also, no share button. No top/recommended articles. No view counter.

Once you start adding media it will be quite a bit slower. Once you start implementing basic features expected by users (comments and related articles for a blog), it's gonna be slower yet again.

I remember when my first article went viral out of the blue; I think I have to thank the (useless) share buttons for that. Then it did 1TB of network traffic over the next few days, largely due to a pair of GIFs. That's how bad pictures can be.


> no banner, no logo, no favicon...Also, no share button. No top/recommended articles. No view counter.

All of which I can live without.

Still the best way of sharing content on the web is via a url, which is handily provided, so most of these aren't even needed. As for recommended and view counts, these don't inherently add a lot of value to users. If anything, it's a nice change to have a page that doesn't try and infer my desires for once.


Should have said stats instead of counter. As the webmaster, you want to know how many visitors there are on which pages?

A simple "last 5 articles" in the corner do add value. Users frequently read more than one article.


You can get that from your logs though?


Usually not, because the hosting doesn't provide access to request logs (consider github pages, heroku, wordpress, LAMP providers).


I agree that the comparison is poor - there are businesses where those media components are required. But an issue with the modern web is that everything has all those components - nobody[1] cried over the lack of a "Share to Facebook" button on CNN. So, while it's inaccurate to say that stripping out all those components would solve the problem, since some of them are part of the business requirements - chances are a lot of them aren't. Maybe you don't still need that "Share to Digg" button, or maybe, as a news site, you don't need a comments section - I think it's a mix of both. Websites are being written unreasonably burdened with unnecessary features, and those features are usually implemented with out-of-the-box, poorly performing JS.

(As an aside - nobody has ever derived value from a page counter except the owner of the site - who could just look it up in the logs. This isn't really an argument against anything you mentioned, but I found it amusing that it was one of the things you brought up.)

1. Mostly nobody - sure there were some folks, but then again I'd wager a significant portion of those folks were just loud voices echoing from the marketing department.


>No view counter.

Myspace era wants their featureset back.

More seriously I think for personal sites a JAMstack site is perfectly sufficient


I disabled comments on my websites and it made me a happier person.

The other things you name are present on my other website (which I linked above). The site is still blazing fast.

https://nicolasbouliane.com/blog/no-comments-on-website


Follow up: I just added some social sharing buttons, but without impacting page performance. The snippet is here:

https://anonyfox.com/spells/frontend-social-buttons-without-...

Very basic but does the job I'd say. :)


My own blog is statically generated too. I don’t have most of these either, because as a user I barely care about any of them or even actively dislike them.


Seems mostly good to me after cloudflare caches it, but you have made one annoying mistake: you forgot to set the height of the image, so it results in content shift. Other than that, it's great! :)


Hey brother, I made an account just to reply to your comment, I enjoyed your website and grew my knowledge reading it.

Just wanted to let you know there's a typo @ https://anonyfox.com/tools/savings-calculator/

```Aside from raw luck this ist still the best```


Thanks, didn't see it even after you posted it. German autocomplete, probably :(


If you would specify the width/height of the image, you could avoid the page reflow that makes the quicklinks jump down.


Alignment of list on https://anonyfox.com/grimoire/elixir/ seems a bit off.

Love the style though. Very crisp, very snappy.


Thanks for the feedback, will have a look!


"Do fellow HNers also feel the need for speed nowadays?"

I stopped using graphical browsers many years ago. I use a text-only browser and a variety of non-browser, open source software as user-agents. Some programs I had to write myself because AFAIK they did not exist.

The only speed variations I can detect with human senses are associated with the server's response, not the browser/user-agent or the contents of the page. Most websites use the same server software and more or less the same "default" configurations so noticeable speed variations are rare in my UX.


Yes, a lot of people are browsing in less-than-ideal conditions. Many apps fall on their face when you try to use them on a German train with spotty reception.


Very interested in how you used Zola. The moment I wanted to customize title bars and side bars, I was basically on my own. Back then I didn't have the desire (or expertise) to reverse-engineer it.

Have you found it easy to customize, or you went with the flow without getting too fancy?


Sometimes a little bit of inline HTML within the markdown will do for me... otherwise it has been a great experience so far.

AFAIK you can set custom variables in the frontmatter of the markdown files, your layout/template html can use them (or use an IF check, or ...).


That's fantastic. All _static_ sites need to have this rendering speed, but unfortunately static content is applicable to only a very narrow niche. Most sites have to provide dynamic content to a certain degree, and this is where it becomes incredibly slow.


Looks good and loads instantly.


Thank you sensei


zola ?


https://www.getzola.org/ Static site builder written in Rust


tks

