Using React, Redux and SSR to accommodate users without JavaScript (klungo.no)
55 points by danielskogly on June 7, 2020 | hide | past | favorite | 51 comments



For relatively simple use cases of server-side rendering with React, I think it’s worth knowing that you can render any component to static HTML with just one function call (and it works in Node environment). Another function call (this time browser-only) can be used to make the markup rendered interactive with no unnecessary DOM mutations.

The React library makes these functions available via ReactDOMServer[0]. I like this API; it opens up various possibilities and is easy to build on top of.

Of course, Daniel goes further to create a universal approach to authoring an app that not only supports SSR but works without JS entirely, which is pretty cool!

[0] https://reactjs.org/docs/react-dom-server.html#rendertostrin...


How to accommodate users without JavaScript without changing the development flow in modern web app development is something I've been thinking about for a while, and I'm happy to finally share this writeup and proof of concept.

If you just want to check out the demo, you can do that here: https://todo-react-redux-noscript.herokuapp.com/


SSR gets you most of the way there... your code just has to respect the URL as the 'global state', turning any user input into link-based data (i.e., click this, provide a URL, process the URL).
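To sketch what that can look like (the state shape here is invented), state round-trips through the query string:

```javascript
// Keeping view state in the URL instead of only in memory, so any
// state can be linked to and re-rendered by the server without JS.
// The fields (filter/page) are made up for illustration.

// Serialize state into a query string that plain links can carry.
function stateToQuery(state) {
  return new URLSearchParams({
    filter: state.filter,
    page: String(state.page),
  }).toString();
}

// Recover the same state from an incoming request's query string.
function queryToState(query) {
  const params = new URLSearchParams(query);
  return {
    filter: params.get('filter') || 'all',
    page: Number(params.get('page') || 1),
  };
}

const q = stateToQuery({ filter: 'active', page: 2 });
// q === 'filter=active&page=2'
```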


> turning any user input into link-based data

I wish all javascript websites would do this too. Unlinkable stuff grinds my gears =)


> URL as the 'global state'

Are there still (browser) limits on how long such a URL can be? Would it e.g. be possible to store a text of 50,000 words in the URL alone?


Well you could always store the state in a database serverside and give the client some token to look up the state...


Ah the good old JSESSIONID…


You can submit a form on click with an HTTP POST method - without any JavaScript. Doesn't make sense to cram so much data in a URL.


But then you don’t keep the state in the URL.


Modern browsers support very long URLs; Chrome, for example, supports URLs up to 2 MB. It probably makes sense to think about the impact on users before using such long URLs, though: I have found that it confuses people, and it can also make sharing things on social media impossible.


You wouldn't cram the entire state into it, but you would cram in enough data to be able to recreate your state (i.e., an API-backed application might store the parameters required to make API calls).


That can definitely work for some applications!

My goal with this post and demo was to outline a general strategy that could be employed in both small and large applications. For some cases, you could implement the server-side dispatching of actions as listening on GET requests, but you would potentially be polluting the browser history more than necessary, and be capped at ~2k characters (this varies between browsers) for actions (which for many cases should suffice).

I imagine that keeping all _state_ in the URL, while it can work for some small applications, might not scale well enough to be a general approach to the problem, nor offer the kind of persistence usually found in e.g. applications with login, where a user might expect to be able to authenticate and continue where they left off.


I tried the app with and without JS. It's so slow in non-JS mode that it feels very frustrating.


For me the response time is ~100ms on average, me being in Norway and the server in the U.S., but YMMV.

I'm sure there are more efficient ways of implementing something like this, but it demonstrates the approach. The main goal here isn't to make something that is super snappy for users with JS disabled, but rather that they get something _at all_, without burdening the developer too much.


A big thank you to those who are starting to support people like me who are js-allergic for varied reasons of security, efficiency, privacy, and general control over our own machines.


What is your internet experience like in general? How often do you encounter road blocks on sites that shouldn't require JavaScript to function?


It would be astonishingly easy to find out... give it a try.

I should clarify: I'm only moaning about JS used where it need not be, in the display of text and pictures - essentially, static web pages. There are guys on this thread who are trying to make apps work without JS, and I'm getting uneasy about that: that level of functionality should, and perhaps must, be done client-side, or the excess bandwidth and server-side load will become unmanageable. I would still not enable JS to allow those, but I can see why others would.

To answer you: try it. Some of it is fine, like the no-JS Gmail pages being snappy and working correctly. I'm sure I'm missing some stuff; I think drag & drop of attachments from the desktop is one of them, so instead I have to click buttons and select the attachment from a popup, and that's fine.

Others are pages that are literally just text and pics but won't show anything, such as https://www.washingtonpost.com/. Microsoft used to be particularly bad at this.

Edit: some are text and image but will fail to render properly, with text flowing over images, images overlapping, general crapness like that. This is uncommon.

Others are absurdly worse - they actually show the text etc. as the page renders, then hide it when it's loaded and tell you you need to enable js to view it. Can't find an example now (but MS was one).

Now the upsides - I don't need any security or anti-virus/malware software, which speeds things up already.

No JS means pages appear much faster (compared to the JS-enabled computers I use at work).

No adverts (except the small self-hosted ones that are starting to appear) - much faster. So much less distraction! No animations to draw your eyes. In one way this is the biggest immediate thing. (You need a blocklist to do this properly - see MVPS (http://winhelp2002.mvps.org/hosts.htm) or similar, or some add-on - but just disabling JS kills most of it anyway.)

No f%^&* popups in your face with 'helpful' agents trying to chat to you (this would be occasionally useful if they weren't so goddamn intrusive and cover stuff you want to see), or trying to get you to buy stuff, or distracting you or covering shit up, or playing sounds, or auto-playing videos.

Honestly, once you get used to it, the odd stillness of it all (it does seem a little odd at first; we're naturally drawn to movement), it's bliss, it really is (speaking IME).

So, give it a try for a day, and see for yourself - would you consider that?


I just realised I hadn't actually answered your question.

I'd say about 80% of the websites remain usable, to varying degrees.

It's hard to say, because I'm used to losing some functionality, so I won't bother going to Shadertoy, for example; my stats are skewed.


I've been wondering about this recently too. I really want something that does almost everything server-side, like the old handlebars-style templates, but allows a modern component based page structure.

I guess React SSR is probably the closest thing to that but (without having used it) I'm guessing it is full of caveats given that React wasn't originally designed to work that way.

I wish there was something that was, and that was also written in a language other than TypeScript.


We're building that.

https://beta.bridgetownrb.com

Use Liquid template components on the server-side that render out to static HTML, and for any interactive bits you can "hydrate" them with something like LitElement-based Web Components. A heavy SPA approach using React is simply not necessary for many types of sites.


The main caveat is that you have to create alternative paths for situations where the thing is rendered on the server versus the client side, since browser APIs are not available in the former case.
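That kind of branching often boils down to a guard like the following (the fallback value here is made up):

```javascript
// Shared code can branch on the environment, since browser globals
// like `window` and `document` don't exist during SSR in Node.
const isBrowser = typeof window !== 'undefined';

function getViewportWidth() {
  if (isBrowser) {
    return window.innerWidth;
  }
  // Server-side fallback; 1024 is an arbitrary default for this sketch.
  return 1024;
}
```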

As for other options: Vue.js offers something like this with Nuxt.js.

There's also this:

https://sapper.svelte.dev

Interesting and pleasant, but still 0.x.x, so caveat emptor.


Everything old is new again.


If we are going to use SSR, why not use ASP.NET Core MVC-like frameworks? It's orders of magnitude easier to reason about and highly productive during development.

I used to be a fan of React, but a month ago I started a project in ASP.NET Core MVC, and I'm hooked. Of course, I have a lot of experience in C#, so that helps, I guess.


> Its orders of magnitude easier to reason about

This may be the case for you, but for me I can hardly think of anything easier to reason about in terms of producing a document tree than a pure function over state which returns that tree.
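Stripped of any framework, that idea is just this (a toy sketch; the state shape and markup are invented):

```javascript
// A pure function over state that returns markup: same state in,
// same tree out, with no hidden mutation to reason about.
function renderTodoList(state) {
  const items = state.todos
    .map((todo) => `<li>${todo.done ? 'done: ' : ''}${todo.text}</li>`)
    .join('');
  return `<ul>${items}</ul>`;
}

const markup = renderTodoList({
  todos: [
    { text: 'write post', done: true },
    { text: 'ship demo', done: false },
  ],
});
// markup === '<ul><li>done: write post</li><li>ship demo</li></ul>'
```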


What kind of things does it do to make you feel like that? What got you hooked?


1. Dependency injection and convention-based routing allow you to get started on projects quite quickly.

2. An integrated approach to data access and view generation is a godsend when you are a lone developer. With the front-end and back-end separated, there is twice the cognitive load to handle, even if the languages are the same on both ends.

3. Reasoning about a web-app, in terms of views (or pages) is much easier to start with. Of course, we can always complicate the thing by adding JS, but ASP.NET Core MVC provides a very well laid out template to begin with, that is suitable for most web apps.

4. C# is an amazing language to work with. It's mind-blowingly awesome, especially after working for a year with JS/TS. To quote Steve Jobs, "It's like giving someone a glass of cold water in hell."

5. Async/Await in C# is easier to work with.

6. Expressing Ideas in C# is very easy and systematic.

7. Performance is fantastic. Just see the TechEmpower benchmarks. Performance is money in today's cloud world.


Thank you for sharing - I've thought about looking into it but never got around to it; this piqued my interest once again :)


If a website that is rendered in the browser using a JS framework adds SSR, does that mean that the client-side JS code will be delayed (not render-blocking) so that the first render can happen? How does that work?


If I understand you correctly, then the answer is no. As strogonoff mentions in his comment[0], most of the frameworks that provide helpers to enable SSR also provide a way to "rehydrate" on the client side. React has two ways of rendering the app to a string, where one of them outputs HTML that can be used as the basis for rehydration[1], while the other one[2] drops the extra DOM attributes that the former adds.

[0] https://news.ycombinator.com/item?id=23446779

[1] https://reactjs.org/docs/react-dom-server.html#rendertostrin...

[2] https://reactjs.org/docs/react-dom-server.html#rendertostati...


The thing that I don’t understand is, how can the browser render the SSR content if the JS bundle is render-blocking? Doesn’t the JS bundle prevent anything from rendering until all its JS has been executed?


It's only render-blocking if put before the content! You can actually see this if you load up the demo[0] and reload the page. For some reason, I've made the mistake of putting the script-tag for the JS bundle _after_ the main content but _before_ the footer. The footer is also rendered on the server, but you'll see - especially if you throttle your connection - that the main app will appear instantly, but the footer ("Try without javascript :)", etc.) won't appear until the JS has finished loading.

This is why today it's common to put script-tags for external JS at the end of the body-tag.

[0] https://todo-react-redux-noscript.herokuapp.com/


Thanks. That explains it. I wasn’t sure if <script> at the end of <body> blocks rendering of the preceding content. Apparently not.


I am curious how SSR impacts security. JS-enabled web clients have many security mechanisms, but are there any relevant ones that are missing from the server-side engine?


I can say, speaking as someone working on a high-volume eCommerce website, that you would not even be allowed to check out without JS enabled.

The fraud is just insane. With JS anti-fraud tech you can eliminate a significant majority of fraud.


You rely on client-side JS to secure checkouts?

Everywhere I've worked doesn't really care about the frontend (it is merely the first line of defense); everything is validated and authorized on the backend.


Anti-fraud is somewhat different from traditional web application security. With anti-fraud, it's about recognizing behaviors and patterns that are likely to result in a fraudulent transaction (stolen credit card, mismatched information, sketchy browser, script/bot-like behavior, etc.).

In large-scale ecommerce, fraudulent transactions in the five figures are so common they're not even notable. So using JS to reduce this even by half saves the company literally millions of dollars.


Not easy to advocate for no-JS support at the JS shops.


The value you receive from forcing JS (anti-fraud, tracking, feature enhancement) is orders of magnitude more than the value you lose from the no-JS users who leave your website.

There are way more active IE 11 users than genuine no-JS users, in my experience.


I agree! What I hoped to showcase in this post, is a way to support users without JS, while not changing how the web app is built. You would still _develop_ the site using JS :)


How does this compare with Microsoft's Blazor?


I mean, similar to using React/Redux to support noscript scenarios, I imagine you SHOULD be able to use any SSR-capable framework to do the same, though it might require more than the usual care to do so.


Exactly! As long as your framework supports SSR and you've got an event-based state container, you're good to go.


I haven't used it, but it appears that there will still be Javascript running in the browser. In "client" mode, it uses WebAssembly, which needs Javascript to accomplish anything in the DOM. In "server" mode, it still needs to communicate back and forth with the browser and merge changes into the DOM. (Looks very similar to Phoenix's Live View or Rails's Stimulus Reflex)


As far as I'm aware, disabling JS kills WebAssembly, so WASM would be a no here. Same with server-side Blazor (requires JS).


What reasons would you disable JS other than to block ads and tracking?

If it’s just those things, why would the developers want to accommodate those people? That would result in lost revenue.


We require all of our code to work without JavaScript (if only in the most basic sense; it's perfectly fine to use JavaScript to enhance an experience), and there are advantages even if you ignore "people who turn off JavaScript in their browser":

- Certain problems, like ensuring that browser navigation works properly, either disappear or are massively reduced.

- We wrestle with bundle sizes a lot less, since a majority of our code ends up being genuinely server-rendered only - there's little to no advantage to doing additional client side rendering of a straightforward form or other mostly static content.

- Integration tests can be faster and more straightforward to write if you don't need to worry about JS. Of course, they're not full end-to-end if you're not testing JS, but for secondary flows they can suffice for the "good enough" category, and often (but not always) failures in your client JS will allow the SSR code to keep running fine as a backup.


Not everything is about revenue. If it was, JS based advertisement revenue isn’t the only source.


Perhaps you’d rather not give websites the ability to execute arbitrary code on your computer?


Security. Add to that that pages load faster without JS or with JS disabled.


Screen readers. They generally work better with JS disabled.



