The React library makes these functions available through ReactDOMServer. I like this API; it opens up various possibilities and is easy to build on top of.
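For concreteness, here's a toy sketch of what a renderToString-style function does: walk a tree of element objects and emit an HTML string. The `h` and `renderToString` names here are made up for illustration, not the real ReactDOMServer API, which additionally handles attribute escaping, components, hydration markers, and much more.

```javascript
// Build a plain element object: { type, props, children }.
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

// Recursively serialize the tree to an HTML string.
// (No escaping here -- a real implementation must escape props and text.)
function renderToString(node) {
  if (typeof node === "string") return node;
  const attrs = Object.entries(node.props)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  const inner = node.children.map(renderToString).join("");
  return `<${node.type}${attrs}>${inner}</${node.type}>`;
}

const html = renderToString(
  h("div", { id: "app" }, h("h1", null, "Hello"), h("p", null, "Server-rendered."))
);
console.log(html);
// <div id="app"><h1>Hello</h1><p>Server-rendered.</p></div>
```

On the server you'd send that string as the response body; on the client, the same tree definition can be reused to hydrate interactivity.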
Of course, Daniel goes further to create a universal approach to authoring an app that not only supports SSR but works without JS entirely, which is pretty cool!
If you just want to check out the demo, you can do that here: https://todo-react-redux-noscript.herokuapp.com/
Are there still (browser) limits on how long such a URL can be? Would it, e.g., be possible to store a text of 50,000 words in the URL alone?
My goal with this post and demo was to outline a general strategy that could be employed in both small and large applications. For some cases, you could implement the server-side dispatching of actions by listening for GET requests, but you would potentially be polluting the browser history more than necessary, and be capped at ~2k characters (it varies between browsers) for actions (which for many cases should suffice).
I imagine that keeping all _state_ in the URL, while it can work for some small applications, might not scale sufficiently to be a general approach to the problem, nor have the kind of persistence usually found in, e.g., applications with login, where a user might expect to be able to authenticate and continue where they left off.
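To make the size constraint concrete, here's a rough sketch (plain Node; `encodeState`/`decodeState` are invented names, not from the demo) of stuffing JSON-serializable state into a query parameter, and why the ~2k-character ceiling matters:

```javascript
// Serialize app state into a URL-safe base64 query parameter.
function encodeState(state) {
  return encodeURIComponent(Buffer.from(JSON.stringify(state)).toString("base64"));
}

// Reverse the encoding back into a state object.
function decodeState(param) {
  return JSON.parse(Buffer.from(decodeURIComponent(param), "base64").toString("utf8"));
}

const state = { todos: [{ text: "write demo", done: false }], filter: "all" };
const url = `https://example.com/?s=${encodeState(state)}`;

// Many browsers and servers reject URLs past roughly 2000 characters,
// so this only works while the encoded state stays small.
console.log(url.length < 2000, decodeState(url.split("=")[1]).filter);
// prints: true all
```

Base64 plus JSON is wasteful, so real implementations would want a more compact encoding, but the hard cap remains regardless.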
I'm sure there are more efficient ways of implementing something like this, but it demonstrates the approach. The main goal here isn't to make something that is super snappy for users with JS disabled, but rather that they get something _at all_, without burdening the developer too much.
I should clarify: I'm only moaning about JS used where it need not be, in the display of text and pictures. Essentially, static web pages. There are guys on this thread who are trying to make apps work without JS, and I'm getting uneasy about that: that level of functionality should, and perhaps must, be done client-side, or the excess bandwidth and server-side load will become unmanageable. I would still not enable JS to allow those, but I can see why others would.
To answer you: try it. Some of it is fine, like the no-JS Gmail pages being snappy and working correctly. I'm sure I'm missing some stuff; I think drag & drop of attachments from the desktop isn't there, so instead I have to click buttons and select the attachment from a popup, and that's fine.
Others are pages that are literally just text and pics but won't show anything, such as https://www.washingtonpost.com/. Microsoft used to be particularly bad at this.
Edit: some are text and image but will fail to render properly, with text flowing over images, images overlapping, general crapness like that. This is uncommon.
Others are absurdly worse - they actually show the text etc. as the page renders, then hide it when it's loaded and tell you you need to enable js to view it. Can't find an example now (but MS was one).
Now the upsides - I don't need any security or anti-virus/malware software, which speeds things up already.
No JS means pages appear much faster (compared to the JS-enabled computers I use at work).
No adverts (except small self-hosted ones starting to appear) - much faster. So much less distraction! No animations to draw your eyes. In one way this is the biggest immediate thing (you need a blocklist to do this properly - see MVPS (http://winhelp2002.mvps.org/hosts.htm) or similar, or some add-on, but just disabling JS kills most of it anyway).
No f%^&* popups in your face with 'helpful' agents trying to chat to you (this would be occasionally useful if they weren't so goddamn intrusive and cover stuff you want to see), or trying to get you to buy stuff, or distracting you or covering shit up, or playing sounds, or auto-playing videos.
Honestly, once you get used to the odd stillness of it all (it does seem a little odd at first; we are naturally drawn to movement), it's bliss, it really is (speaking from experience).
So, give it a try for a day, and see for yourself - would you consider that?
I'd say about 80% of websites remain usable, to varying degrees.
It's hard to say, because I'm used to losing some functionality (I won't bother going to Shadertoy, for example), so my stats are skewed.
I guess React SSR is probably the closest thing to that but (without having used it) I'm guessing it is full of caveats given that React wasn't originally designed to work that way.
I wish there was something that was, and that was also written in a language other than TypeScript.
Use Liquid template components on the server-side that render out to static HTML, and for any interactive bits you can "hydrate" them with something like LitElement-based Web Components. A heavy SPA approach using React is simply not necessary for many types of sites.
As for other options: Vue.js offers something like this with Nuxt.js.
There's also this:
Interesting and pleasant, but still 0.x.x, so caveat emptor.
I used to be a fan of React, but a month ago I started a project in ASP.NET Core MVC, and I am hooked. Of course, I have a lot of experience in C#, so that helps, I guess.
This may be the case for you, but for me I can hardly think of anything easier to reason about in terms of producing a document tree than a pure function over state which returns that tree.
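That idea in miniature, as a hypothetical sketch: the view is just a pure function from state to a tree, which is exactly what makes it equally runnable on the server and the client.

```javascript
// A view as a pure function: same state in, same tree out, no side effects.
function view(state) {
  return {
    type: "ul",
    children: state.items.map((item) => ({ type: "li", children: [item] })),
  };
}

const tree = view({ items: ["a", "b"] });
console.log(tree.children.length); // 2
```

Because `view` never touches the DOM or any global, reasoning about it reduces to reasoning about ordinary data transformation, and it can be unit-tested without a browser.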
2. An integrated approach to data access and view generation is a god send when you are a lone developer. With the front-end and back-end separated, there is twice the cognitive load to handle, no matter if the languages are same on both ends.
3. Reasoning about a web-app, in terms of views (or pages) is much easier to start with. Of course, we can always complicate the thing by adding JS, but ASP.NET Core MVC provides a very well laid out template to begin with, that is suitable for most web apps.
4. C# is an amazing language to work with. It's mind-blowingly awesome, especially after working for a year with JS/TS. To quote Steve Jobs, "It's like giving someone a glass of cold water in hell."
5. Async/Await in C# is easier to work with.
6. Expressing Ideas in C# is very easy and systematic.
7. Performance is fantastic; just see the TechEmpower benchmarks. Performance is money in today's cloud world.
This is why it's common today to put script tags for external JS at the end of the body tag.
The fraud is just insane. With JS anti-fraud tech you can eliminate a significant majority of fraud.
Everywhere I've worked doesn't really care about the frontend (it is merely the first line of defense); everything is validated and authorized on the backend.
In large-scale e-commerce, fraudulent transactions totaling five figures are so common it's not even notable. So using JS to reduce this even by half saves the company literally millions of dollars.
There's way more active IE 11 users than genuine no-JS users, in my experience.
If it’s just those things, why would the developers want to accommodate those people? That would result in lost revenue.
- Certain problems, like ensuring that browser navigation works properly, either disappear or are massively reduced.
- We wrestle with bundle sizes a lot less, since a majority of our code ends up being genuinely server-rendered only - there's little to no advantage to doing additional client side rendering of a straightforward form or other mostly static content.
- Integration tests can be faster and more straightforward to write if you don't need to worry about JS. Of course, they're not full end-to-end if you're not testing JS, but for secondary flows they can suffice for the "good enough" category, and often (but not always) failures in your client JS will allow the SSR code to keep running fine as a backup.