Hacker News
Zero-Bundle-Size React Server Components (reactjs.org)
227 points by danabramov 26 days ago | hide | past | favorite | 135 comments

My biggest concern is separating the concerns via a file naming convention, relying on what are presumably linter plugins to enforce its rules and on modifications to bundlers, rather than on something that can be expressed more directly in a type system and that more directly resembles ordinary tree-shaking.

I don't have a strong counter-proposal off the top of my head, but I do think the RFC is too quick to dismiss async/await and dynamic imports.

async/await makes a lot of sense for differentiating server-side components (a Promise<T> return would be a very different return type from current components), and many things you'd use on the server, like DB access, push you toward Promises today. (That seems a clear problem with the server-side examples as presented in the RFC: they use synchronous node 'fs' instead of FS Promises, and some sort of synchronous 'db' that looks less and less like any modern Node db provider.) There's even a possible intuition pump there: hooks "obviously" won't work in an async function, making it easier to keep the rules straight between the two types of components. And if the components are async/await by "requirement", dynamic imports stop looking out of place and start to look much more correct in the component body.

>That seems a clear problem with the server side examples as presented in the RFC: they use synchronous node 'fs' instead of FS Promises

It isn't synchronous, it only looks that way. It actually uses FS Promises under the hood[1]. We're doing a bit of a trick there (it throws a special value the first time, and then React retries when it's resolved). Then the result gets cached, and the repeated render succeeds synchronously. We will describe this mechanism in detail in a future RFC.

All of the examples in the demo (including fetch, readFile, and the DB query call) are async under the hood.

However, they do leave us the ability to use sync I/O in the cases where it makes sense (depends on whether we have other work to do, etc).

[1]: https://github.com/facebook/react/blob/6cbb9394d1474e3a728b4...

> It isn't synchronous, it only looks that way. It actually uses FS Promises under the hood[1]. We're doing a bit of a trick there (it throws a special value the first time, and then React retries when it's resolved). Then the result gets cached, and the repeated render succeeds synchronously.

I don't mean to offend, but this level of magic behavior sounds terrifying to me. React's original slogan was "it's just JavaScript". That made it easier to understand without special knowledge, easier to pair with whichever libraries you wanted to use, and easier to apply tooling to without special integration. But it seems to be getting further and further away from that goal with each passing year, turning into an opinionated, all-in-one framework like Angular.

Maybe this is just the cost of gaining and maintaining popularity. I won't say it's intrinsically a bad thing, but I personally don't feel good about it.

> I don't mean to offend, but this level of magic behavior sounds terrifying to me. React's original slogan was "it's just JavaScript". That made it easier to understand without special knowledge, easier to pair with whichever libraries you wanted to use, and easier to apply tooling to without special integration. But it seems to be getting further and further away from that goal with each passing year, turning into an opinionated, all-in-one framework like Angular.

I don’t know if this is that kind of magic (I certainly feel like hooks are, though they still don’t make React feel as much like a framework as Angular). When I saw Promise references in JSX, it looked a little confusing at first, but I thought about how it would probably be trivial to write a createElement wrapper that does the same thing on the client, just wrapping in a Suspense or similar until all of the promises resolve. I’m sure it’s more complicated than that, but it does still seem like “it’s just JavaScript”. Promises are widely used in JS already, after all.
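For what it's worth, here's roughly what I mean by that wrapper. This is a hypothetical sketch (real React would pair this with Suspense rather than a plain await, and the element shape here is simplified):

```javascript
// Hypothetical createElement wrapper that awaits any Promise children
// before producing the element. Invented for illustration only; this is
// not React's API.
async function createElementAsync(type, props, ...children) {
  const resolved = await Promise.all(children); // non-promises pass through unchanged
  return { type, props: props || {}, children: resolved };
}
```

Since `Promise.all` passes plain values through, mixed promise/non-promise children work uniformly.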

See, for me, hooks - while a bit obtuse and weird - are not this kind of magic, because they come down to just function calls. You can use almost nothing but your existing knowledge of JavaScript and reverse-engineer what's going on. That isn't the case with a lot of this other stuff.

I’ve written a lot of comments on this the last few days, so please forgive my brevity. But the “magic” of hooks is that they’re not functions (though they look like it), and their use transforms components into not functions, again even though they look like it. Where a functional component is props in -> data structure out, a hooks component is a closure over state and its return value is something like a constructor or factory resembling the data structure you define. The component is no longer referentially transparent, even though it has all of the telltale signs that it should be.

All of that makes not only that component hard to reason about, but makes every function component suspect. And there goes the simplicity of “it’s just JavaScript”.
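To make that concrete without any React at all, here's a toy illustration (invented names; this is NOT how React implements hooks): the component closes over hidden state, so the same call with the same inputs can produce different output.

```javascript
// Toy stand-in for useState: state lives in a slot outside the component.
let slot; // hidden state, not visible in the component's signature

function useToyState(initial) {
  if (slot === undefined) slot = initial;
  return [slot, (next) => { slot = next; }];
}

// Looks like a pure props -> element function, but isn't:
function Counter() {
  const [count, setCount] = useToyState(0);
  return { type: 'button', children: `Count: ${count}`, setCount };
}
```

Call `Counter()` twice around a `setCount`, and the "same" call returns different output, which is exactly the loss of referential transparency described above.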

Yes, and that's a problem: there's no way to know when you call react-fs/react-fetch/etc whether it will be synchronous or asynchronous. async/await would make this code substantially clearer.

(To be clear, right now the code is using a, uh, very surprising pattern to make asynchronous code appear synchronous: if the result value is cached, it returns the value synchronously, but if not, the fetcher throws a Promise. You know, you'd normally throw exception objects, but JS lets you throw any value, so why not throw a Promise amirite?!? When React catches a Promise (a thenable) it awaits the result, caches it and then re-runs the React component; now the component won't throw a Promise and will run to completion normally.)
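To make that pattern concrete, here's a minimal sketch of the throw-and-retry cache (not React's actual implementation; `cache`, `readData`, and `renderWithRetries` are all made-up names):

```javascript
// Suspense-style cache sketch: read synchronously if cached, otherwise
// throw the pending Promise so the caller can wait and retry.
const cache = new Map();

function readData(key, fetcher) {
  const entry = cache.get(key);
  if (entry) {
    if (entry.status === 'resolved') return entry.value; // synchronous cache hit
    if (entry.status === 'rejected') throw entry.error;  // real error: rethrow
    throw entry.promise; // still pending: "suspend" by throwing the Promise
  }
  const promise = fetcher().then(
    (value) => cache.set(key, { status: 'resolved', value }),
    (error) => cache.set(key, { status: 'rejected', error })
  );
  cache.set(key, { status: 'pending', promise });
  throw promise; // first read always suspends
}

// The "catch a thenable and retry" loop, reduced to its essence:
async function renderWithRetries(render) {
  for (;;) {
    try {
      return render();
    } catch (thrown) {
      if (thrown && typeof thrown.then === 'function') {
        await thrown; // wait for the data, then re-render
      } else {
        throw thrown; // ordinary exception: propagate
      }
    }
  }
}
```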

The FAQ says extremely little about why async/await were avoided:

> Why not just use async/await?

> We’d still need a layer on top, for example, to deduplicate fetches between components within a single request. This is why there are wrappers around async APIs. You will be able to write your own. We also want to avoid delays in the case that data is synchronously available -- note that async/await uses Promises and incurs an extra tick in these cases.

A layer to deduplicate fetches sounds great, but that library could use async/await, too.
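For instance, request-scoped fetch deduplication needs nothing beyond plain async/await; a sketch (`dedupedFetch` and `inflight` are hypothetical names):

```javascript
// Deduplication layer built on plain Promises: the first caller for a key
// kicks off the work; every later caller shares the same in-flight Promise.
const inflight = new Map();

function dedupedFetch(key, fetcher) {
  if (!inflight.has(key)) {
    inflight.set(key, fetcher());
  }
  return inflight.get(key);
}
```

Components would `await dedupedFetch(url, () => fetch(url))`, and the underlying request runs at most once per key.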

Using async/await will make this code substantially easier to understand, and the cost of a "tick" is trivial (and certainly worth the price, particularly in server-side code).
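For reference, the "tick" in question is just the microtask delay: even an already-resolved Promise delivers its value a beat later, never synchronously.

```javascript
// Even a pre-resolved Promise defers delivery to a later microtask.
let delivered;
Promise.resolve('ready').then((value) => { delivered = value; });
const seenSynchronously = delivered; // still undefined at this point
```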

EDIT: Thinking about this a bit harder, I know the React team has been extremely resistant to async/await in components for years now. Fine. But that needs its own RFC. Some clear written document spelling out in detail why async/await is the wrong approach for React, and not just a comment on this RFC.

I'd like to ask the React team to write that RFC doc because I think it can't be written: you'll find that the argument falls apart when you try to explain it.

The Server Components FAQ has been updated. It now says:

> Why not use async/await?

> The React IO libraries used in the demo and RFC follow the conventions we've discussed previously for writing Suspense-compatible data-fetching APIs. Suspense-compatible APIs return data synchronously when it is already available, throw if there is an error, or "suspend" to indicate to React that they are unable to return a value. The mechanism for Suspending is to throw a Promise value. React uses resolution of the promise to know when the API may be ready to provide a value (or that it has failed) and to schedule an attempt to render the component again.

> One new consideration in the design of Suspense from this proposal is that we would like to use a consistent API for accessing data across Server Components, Client Components, and Shared Components. Overall, though, the design of Suspense is outside the scope of this RFC. We agree that we should document this design clearly and will prioritize doing so in the new year.

Thanks for incorporating this; let's see how the design doc plays out next year.

We will definitely do the RFC in the future. Challenge accepted. :-)

I know this is old now by a day but to add on,

Is it possible you could document why React doesn't handle plain promises natively? You could conditionally return a promise, thereby not incurring the cost of the sugar of async/await.

Also related: why wouldn’t generators be a good fit for some of these use cases? Especially since iterating over data is a common action. I imagine this would be easier to integrate into the framework for things like this

> why wouldn’t generators be a good fit for some of these use cases? Especially since iterating over data is a common action. I imagine this would be easier to integrate into the framework for things like this

I think this is answered by Seb, in the "Contributing to Fiber" issue in the React Repo: https://github.com/facebook/react/issues/7942#issuecomment-2...

But maybe things have changed since then!

I suspect that "why doesn't react handle plain promises natively" would be addressed in the RFC detailing the whole "let's throw a promise" thing.

> It isn't synchronous, it only looks so.

Ah, all the more dangerous then.

I appreciate that throwing promises is a clever flow-control hack and that y'all have proven it to be a useful micro-optimization, at least in current V8. But my experience tells me that I'd rather have a junior developer write incorrect async/await code than accidentally introduce a synchronous deadlock. Async code that looks synchronous can mask synchronous code that should be async much more easily.

> However, they do leave us the ability to use sync I/O in the cases where it makes sense

async/await doesn't stop you from doing that either. The .NET CLR/BCL has a lot of optimizations in place today for fast-pathing synchronous Task<T> (and now ValueTask<T>) code. As Promises become more and more common in the JS ecosystem, it should be expected that more and more such optimizations will arrive in JS engines "tomorrow".

Even if we can't expect such optimizations soon, server-side code is easier for the application developer to scale and less in need of such micro-optimizations. That adds more weight to my gut instinct that explicitly async (async/await) code could be one good way to draw the distinction between client code and server-only code.

Surprisingly, their implementation of `react-fs` doesn't use synchronous node `fs.readFileSync`… instead, they're doing some trickery (throwing a Promise!) to make `react-fs` appear synchronous when it's really asynchronous! https://github.com/facebook/react/blob/master/packages/react...


However, I agree that using async/await would be better.

There is also another RFC that proposes alternatives: https://github.com/reactjs/rfcs/pull/189

Probably worth leaving this feedback on the RFC itself, then :)

This feedback is me working through the problem out loud to see if I have solutions to offer; I'd happily take it to the RFC itself if I felt I'd made headway toward a solution.

The RFC, which explains what these are without a video: https://github.com/reactjs/rfcs/blob/bf51f8755ddb38d92e23ad4...

Thank you! I have very little patience for information delivered by video, it feels so inefficient compared to text in a lot of cases

Overall I agree with you (I'm the same) but some things are very hard to convey without seeing them in practice. Like I replied in the other comment, for those who don't want to watch the whole video, you'd still get a lot out of watching the demo (starting at 11:56).

Thanks Dan, sorry I missed your reply. Will check out the demo - sounds like a great new feature which I’m sure took a huge effort, so congratulations on the unveiling! Look forward to being able to use it :)

I had a quick look and that seems very convoluted. How do you decide which JSX goes to the server and which to the client? Do you need to maintain JSX in both files? How do you avoid duplicating code?

I haven't worked with server-side React, but my understanding is that you can parameterize a component to go one way or the other in a given context. The main value proposition (vs traditional server-side templating) is code-sharing

Thank you, I need to read more about that. Sounds promising.

Also worth noting: versions of this kind of thing have been around for years (see Next.js for a prominent example). It's unclear to me precisely how this announcement improves on the existing status-quo

I really recommend watching the talk and the demo for those who didn't. It's hard to convey the nuance without seeing it. If you want to skip the talk itself, just focus on the demo (timestamp 11:56).

Server Components are very different from what Next.js does today (traditional SSR). Here's a few differences:

* Server Components code is never sent to the client. By comparison, with traditional SSR all of the component code gets sent to the client anyway in the JS bundle.

* Server Components let you access the backend directly from anywhere in the tree. In Next.js, you can access the backend inside getServerSideProps(), but that only works at the top-level page, which means componentization is very limited. E.g. a random npm component can't do that.

* Server Components can be refetched without losing the Client state inside of their tree. Because the primary transport is richer than HTML, we can refetch a server-rendered part (e.g. a search result list) without blowing away the state inside (e.g. the search input text, focus and selection).

That said, it's not a dig at Next.js -- the whole goal is to enable Next.js and similar frameworks to be much better.

Happy to answer specific questions!

Is there any investigation planned or underway to precompile these to some intermediate form so non-Node servers could hook into this? I'm thinking maybe some WASM interop, off the top of my head. WASM is maturing fast and I've already seen demos where you can make network calls using a WASM interop server-side.

Certainly pre-rendering would be more efficient with these types of components too, so I imagine updates to hydration will be part of this.

What will be the changes in bundling? Will it just be a simple webpack plugin or will more discipline be required?

Can this be run in any other back end other than Node since it only transmits a stream of serialized vdom?

>Will it just be a simple webpack plugin or will more discipline be required?

Broadly speaking, it's a webpack plugin that finds all Client Components and creates an id -> chunk URL map on disk. Then there's a Node.js loader that replaces imports of Client Components with accesses to this map. You will be able to wire it up yourself, but there are other bits (like routing integration), so we're going to make it work in a framework like Next.js first. Then, once there is a quality integration, you can copy how it's done into your custom setup.
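Based on that description, the on-disk map might look something like this (a guess for illustration; the actual manifest format is bundler/React-internal, and these keys and file names are invented):

```javascript
// Hypothetical client-component manifest: module id -> chunk info,
// emitted by the bundler plugin at build time.
const clientManifest = {
  './src/SearchField.client.js': {
    id: './src/SearchField.client.js',
    chunks: ['client5.js'],
    name: 'default',
  },
};

// The server-side loader can then swap an import of a Client Component
// for a lookup into this map.
function resolveClientReference(moduleId) {
  const entry = clientManifest[moduleId];
  if (!entry) throw new Error(`No client chunk registered for ${moduleId}`);
  return entry;
}
```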

>Can this be run in any other back end other than Node since it only transmits a stream of serialized vdom?

React components themselves are JS so we'd expect the backend to be in JS. (I mean, you could reimplement it in Rust or something, but then you can't easily move components to the Client and back.) There is no hard dependency on Node.js itself — as long as your JS environment has some sort of streams, we could make it work.

Next.js does server rendering, but it doesn’t do this. Even with full static site generation, Next.js still sends the whole thing in a JS bundle to render on the client, even fully static components with no interactivity.

FWIW, you can disable this per page by adding `export const config = { unstable_runtimeJS: false}`.

Marked unstable because it was just recently added and is experimental. Per page is about the best Next can do, as it would need React itself to decide how to do this per component, which looks to be a big part of this new RFC.

Thanks. I meant to mention this but forgot. And it’s certainly an improvement over “just schlep everything to the browser”, but it’s not a great solution for something like a long form static blog post with an interactive button in the site header.

Thank you, I clicked around everywhere trying to find something with a written explanation.

Same here - I will invest a great deal of effort in not having to watch a video, which increasingly makes me feel at odds with a large swathe of the engineer community.

Principal Skinner: Am I so out of touch? No, it's the children who are wrong.

On the plus side it's a pleasure to watch Dan Abramov's videos. I always feel I got a bit smarter afterwards.

Thank you for the kind words! I felt very self-conscious about this one because there was no audience, and recording at home was a lot harder than I thought. So it's nice to hear that you liked it.

Agree with the others, you're great at this.

I don't like the video format, but yours are always very enjoyable.

I thought you did a fantastic job, too.

I don't mind watching a video for a deep-dive, but I don't want to scan through a 60 minute video looking for a tl;dr

In case it helps anyone, I went through and timestamped the talk video and extracted some bits I figured were worth emphasizing: https://twitter.com/swyx/status/1341124408060985345?s=20

One thing I haven’t seen addressed is versioning: what happens when you push out an update so your server component now takes in a different set of props (or renders a different set of divs/spans/css than before?)

Previously, the abstraction level between your server and client used to be APIs, with well-known patterns for versioning... this has now shifted to the level of component props, which I suppose is still workable — it’s very similar to graphql — but is certainly not very intuitive.

That's a great question :-)

In general, this isn't a new problem per se, as backend API changes can similarly break different clients that have been cached. Or when you deploy client code and the code-split chunks have changed. It's true that with this approach it likely becomes a more common problem.

One way to solve this is immutable deployments, where the server runs the version of the code that the client is on. Many providers already do immutable deployments, so there's possible integration there. It could also have some kind of fallback where a coarser refetch is triggered when the versions become incompatible.

We expect that hosting providers will be innovating in this area in the future, too.

While built-in support for server-side React makes a lot of sense, as a developer I'm really missing a high-level browser framework in React that gives me routes, auth, and API access.

Maybe I've just been out of the loop. At work every React app I've built had all this custom logic for each of those pieces. But when I start a new project I'm loathe to do all that setup again.

I'd just like an opinionated framework built on React that gets rid of some of this boilerplate of browser routing, auth, api access, etc.

This isn't a knock on React. React is awesome. But at this point it's almost too barebones on its own.

Next¹ offers a more fleshed out framework with far less boilerplate for both client and server code.

Server side, Nest² offers much of the same. Think Angular but for backend dev. Yes, it has some boilerplate to it but for the most part, its CLI eases redundant typing.

[1] https://nextjs.org/ [2] https://nestjs.com/

You can be productive with Nest.js, but I'd hate to promote it as a great solution. It's really a mashup of other JS libs, with Nest bringing a rather convoluted form of dependency injection and HTTP action decorators. You can build your own soup rather easily without the Nest overhead and with better choices. It's not the worst place to start; there's just better shit out there.

I don't know. I use next.js and love it. But I have to implement two static methods for every thing I want to fetch... This also locks me into next.js...

I think more people should take a look at https://blitzjs.com. It makes your lock in problem slightly worse I suppose but only because it argues that you want most apps to be coupled like this in real life. In exchange, it gives you great developer experience around things like data fetching.

Touched this about half a year ago. Feels like Rails for React. Very opinionated, and it's interesting how Prisma is used to set up the data schema.

I’ve shopped around a lot and most projects targeting a similar featureset are generally adopting the Next.js APIs for server-side data fetching. So it’s probably not as bad in terms of lock-in as you’d expect.

That's exactly what Next.js is:


(as well as a few other tools in the ecosystem)

Wait, Next.js would be useful for pure client SPAs? I built a prototype for the SSR and loved it, but it didn’t occur to me it could be used for client apps too...

Does it need to be served from a Node.js environment or can it be used as a bundler and be served from other servers (eg Django or Rails)?

If you're building a dashboard-like SPA it's not really suitable, because every page is its own component, so you can't do nested layouts or routing.

You actually can build nested layouts in Next.js — see https://adamwathan.me/2019/10/17/persistent-layout-patterns-...

Next.js can output static pages now on build time, so you don't necessarily need to use SSR unless you're hydrating data on the server.

Thanks! Somehow I thought next.js was only for SSR.

What are the other tools?

Totally agree. The goal here is to give better building blocks for frameworks to build upon.

AFAIK an explicit goal of React (at least in the beginning) was to not be opinionated at all and to only provide the view/rendering part of an app. Since then it feels like its scope has expanded a bit. Or maybe not? Not sure which batteries are supposed to be included anymore. Would be interesting to hear Dan's thoughts on this.

There's also Kretes¹ that comes with a React.js template. It pre-configures a few popular libraries such as React Query and React Hook Form, plus a built-in REST & GraphQL API, authorization/authentication, etc., so you can (relatively) quickly create a «full-stack» (TypeScript) app.

[1]: https://kretes.dev

Take a look at https://blitzjs.com! It's built on top of Next.js, which everyone else has been recommending to you, but goes a step further that I think you might find enjoyable, reading your comment.

I don't know if I'm 100% correct, but this is rendering the static, serializable parts on the server. Instead of HTML, though, the output is a custom format (I'm guessing something very similar to a virtual DOM representation of a component). These are better than static HTML rendering because the components can maintain state even though parts of the tree are rendered on the server. I'm very excited for this, and I'm beginning to build a POC for Next.js tomorrow. Maybe when this becomes stable we'll be able to use it via Next.js.

This sounds correct! See also my clarification.


Oh my god!! Dan Abramov validated my thoughts!!!

It seems every few years we keep jumping between client side and server side rendering. How is this better than strictly client or server side? It seems like it just has the disadvantages of both with additional complexity to maintain an application now coupled to both your back and front ends.

Presumably, if your React app could be more performant with server-side rendering, it would be better to simply use traditional server-side rendering and sprinkle in JS; and if it's really a "web app", I'm not sure how substantially faster React Server Components will be in practice.


I read the FAQ and some of the answers seem to need clarification:

> Doesn’t always re-fetching the UI make interactions slow?

The answer to this isn't really definitive.

> What are the performance benefits of Server Components?

The answer to this makes it sound like server components make fewer requests than the client would, but in the context of GraphQL, which is mentioned in the answer, the total number of requests should be the same. If anything, wouldn't a regular React app be more responsive?

This lets you choose on a per-component basis.

The way I think about it: in most apps there are a lot of components that only rerender when there is new data from the server. Those components can be run on the server without losing any interactivity. Especially if a component has highly branching logic, this reduces the amount that has to be sent to the client by a large margin.

The programming model is very similar and it’s possible to make one component that works on both server and client simultaneously, which means you’re not locked in if you want to move a server component back to the client later.

One of the problems with server side rendered React is hydration. Even a 100% static website built with Gatsby will download a bunch of JS just to hydrate components that end up doing nothing. There have been some coarse workarounds for this such as gatsby-plugin-no-javascript for Gatsby and `unstable_runtimeJs: false` in Next. But those both operate at the entire page level.

React server side components feels like it's taking those coarse solutions and making them more fine grained and less hacky. But my entire exposure to them is the video they just released, so who really knows at this point.

This looks like something I've been hoping for ever since Dan hinted at it on twitter, but I'm kinda worried when it's going to become generally available.

My worry is mostly due to suspense for data fetching (which I really care about) and concurrent mode (which I care less about but which is still nice) releases, which were announced in late 2018 and promised by mid 2019, but then delayed at least twice since then.

We're trying not to promise dates anymore since, as you rightly noted, we aren't very good at estimates. Server Components rely on Concurrent Mode for streaming, but as I mentioned in the talk, we feel pretty good about our progress there, and the remaining work on CM is mostly to simplify it and make it easier to adopt. So it's not too far ahead.

As for Suspense, Server Components largely are the evolution of Suspense (although it works on the client too). We decided to hold back on releasing Suspense a year ago because we didn't have a solution to waterfalls. Now we do. So it's all coming together. In general, a lot of what we do is research, and research has many dead ends and takes time.

I think this is easier for them to pull off than Suspense and Concurrent mode because of the difference in complexities. I'm expecting this to be mainstream by the end of next year.

It's too soon to have a deep opinion on the rfc, but just looking at the examples and thinking ahead...

If folks start using this approach of leveraging a DB client in their components, I hope they embrace dependency injection (DI) somehow; otherwise their codebases are going to become stupidly complex to test.

For an example of how (React + Server Side only plugins + DI) works today, take a look at how its implemented within fusionjs https://fusionjs.com/docs/references/creating-a-plugin/depen...

I know people like to hate Java because it's too enterprise, but I've found Java with Spring Boot very pleasant for developing a REST API. Nice dependency injection, nice for implementing ports and adapters; same with Kotlin, and probably similar with C#.

More of a mess to do it with NodeJS.

React going full-stack may be the first bit of good news for 2021. I hope they solve routing soon as well.

That's part of the solution and RFC ^1.

Next.js has a good solution already, which is likely part of the reason it'll be the first framework where Server Components get integrated. The core React team has been working closely with the Vercel folks for a while now, and this result is exactly what I've been waiting for.


My initial impression is this is a good fit for Next. Towards the end of the video Dan even says initial adoption will be through frameworks. I wonder how this impacts Remix and Gatsby.

This seems really powerful, but I can't shake my discomfort with just how much magic is going on (and I say this as someone who's fairly intimately familiar with the traditional, client-side React/JSX/Babel/Webpack story):

- Webpack and/or React must be very aware of what the other is doing here in order to drop the server-only JS from the bundle. Is it even possible to use alternate bundlers with this feature?

- The React server has to be deeply concerned with the application logic to know how to transparently pass state back and forth, presumably via some generated JSON endpoints. I guess maybe this was already the case for hybrid client/server apps.

- What's the lifetime of "persisted server-component state"? A component lifecycle? A page reload? A user session?

- The react-fetch package that "works as if it were synchronous so you don't have to wrap it in an effect or anything" is, from the outside, super weird. How did they make an asynchronous call synchronous? JS doesn't normally allow that outside of an async/await context. Did they wrap the JSON response object in some kind of lazy proxy or something?

Of course "magic" is relative, and can do wonders for productivity in the right environment. I just find myself in a place again where I have no idea how these abstractions are actually working, and I don't love that.

>Webpack and/or React must be very aware of what the other is doing here in order to drop the server-only JS from the bundle. Is it even possible to use alternate bundlers with this feature?

The bundler-specific code is here[1]. As you can see, it's not much. We will happily merge PRs and tweak the infrastructure to work with other bundlers. E.g. we're already in conversations with Parcel, who are also interested. Yes, there is a cost in under-the-hood complexity, but we think you get a pretty high benefit (e.g. automatic code splitting), so we think it is worth that integration. Just like many other things bundlers do that we now take for granted.

>The React server has to be deeply concerned with the application logic to know how to transparently pass state back and forth, presumably via some generated JSON endpoints. I guess maybe this was already the case for hybrid client/server apps.

Not quite sure what you mean by this, but you be the judge: [2]. There is no "state" being passed. Server Components are completely stateless. (But they can render Client Components, which are normal React components on the client and have the lifecycle you're familiar with.)

>What's the lifetime of "persisted server-component state"? A component lifecycle? A page reload? A user session?

Server Components are not stateful. They follow a request-response model, just like traditional server pages. The novel part is that we're able to merge the result into the client tree (instead of .innerHTML = newHTML with old school partial templates, which destroys client state).

>How did they make an asynchronous call synchronous? JS doesn't normally allow that outside of an async/await context. Did they wrap the JSON response object in some kind of lazy proxy or something?

We will be posting a separate RFC in the coming weeks/months that dives into details. But the high-level answer is that we want to model it as a cache that you read synchronously. (So if the answer isn't synchronously available, we throw and retry later.) Async/await adds unnecessary overhead when content is already available synchronously, and especially on the server we'll expect many synchronous cache hits because some data has already been accessed from a parent component. Think of DataLoader-like abstractions.

[1]: https://github.com/facebook/react/blob/6cbb9394d1474e3a728b4...

[2]: https://github.com/reactjs/server-components-demo/blob/a8d5c...

> Server Components are not stateful.

I may have misunderstood. Here's what I was referring to:

> Server Components preserve client state when reloaded. This means that client state, focus, and even ongoing animations aren’t disrupted or reset when a Server Component tree is refetched.

As well as the part in the video where non-serializable props are referred to, in the context of implicitly sending data between client and server (these may in fact be two separate topics).

But either way, surely the quoted paragraph requires some highly-magical behavior?

>(these may in fact be two separate topics).


Here's one way to think about it. Imagine a traditional webpage. It renders to HTML. If you refetch it, you get new HTML. You can't simply "merge" two HTMLs on top of each other, so if you were to "update" document.body.innerHTML to the new result, you'd blow away focus, selection, etc.

Now imagine the server sent "virtual DOM" in the response instead. React knows how to "merge" such updates into the tree without destroying it. This is what React has been doing all along, right? So this is why we can refetch the Server tree, and show the result without blowing away the Client state inside.

Now, this virtual DOM tree contains things like divs (with their props) or references to Client Components (e.g. "module number 5 in bundle called chunk3.js") with their props. These are the things that need to be serializable. But they're only passed from Server to Client, not back. Think of them as the same role as HTML attributes.
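A toy illustration of that merge idea (purely hypothetical shapes, not React's actual reconciler): when a refetched tree comes in, nodes of the same type keep their existing client state and only take the new server-provided props.

```javascript
// Toy reconciliation sketch: merge a refetched server tree into the current
// tree, preserving client state wherever the node type is unchanged.
function merge(oldNode, newNode) {
  if (typeof newNode !== 'object' || newNode === null) return newNode; // text etc.
  if (!oldNode || oldNode.type !== newNode.type) {
    return { ...newNode, state: undefined }; // type changed: client state is reset
  }
  const children = (newNode.children || []).map((child, i) =>
    merge((oldNode.children || [])[i], child)
  );
  // Same type: keep the old client state, take the fresh props from the server.
  return { type: newNode.type, props: newNode.props, state: oldNode.state, children };
}
```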

For those who want a walkthru of the demo, I did a 2hr livestream with Dan and Lauren's help going thru all the assignments in the README. https://www.youtube.com/watch?v=La4agIEgoNg

the video is timestamped so you can skip around to the part you want to focus on

So weird to draw attention to the bundle size rather than the fact that it involves server-side rendering. The former would never imply the latter, and bundle size is not really a big deal to begin with. It's also technically not even true, because you obviously still transfer data to the front-end.

The server-side rendering aspect is pretty cool though.

Bundle size is a huge part of the appeal of this. Right now if you use React for SSG in popular frameworks, loads of static content is duplicated in client bundles even though it’s completely useless for the client experience.

They cover some of this in the RFC. But I’ll use a site I’m working on as an example. There’s currently very little interactive content on the site, nearly everything is just Markdown content rendered to HTML and styled with CSS. But Next.js doesn’t know it’s static and sends it all again as React (well, Preact for my site) components. This doubles the weight of my pages on the wire, not to mention the performance impact of hydrating that static content.

I’ve been looking for weeks for a way to do what Server Components is doing, because while I love JSX for dev, that’s a really awful UX to foist on readers. There’s some prior art (search for Preact partial hydration if you’re interested), but it’s all very complex to set up. Having a first party reference solution will raise the bar significantly.

It's true that in the demo video, it's only called out that a third party dependency was removed from the client bundle. And it's true that these types of "infra" dependencies don't make up a large % of your codebase to begin with.

But there's another angle to consider. In thick client apps, product code (everything else) tends to take up the majority of the size of the client bundle. There's a real opportunity here to move some of that into Server Components which would help reduce that footprint significantly. For example consider the case of deeply wrapped components that ultimately render to a single <div>. Server Components could help remove that abstraction tax.

JSON bytes are cheaper than JS bytes, and it means if people want to write expensive components they can spend the processing power of their own servers instead of your phone.

Agreed, it's a strange bit to focus on. But it is true, even technically, bc of the way they go hand-in-hand: you don't need to send a "bundle", when you've done the rendering server-side. NextJS does a better job of making the case.

especially weird since you would use `date-fns` (vs. something like Moment) specifically because it allows you to include only the functions that you require (via tree-shaking).

This is a bit of a convoluted example, but it also illustrates the point — I personally didn't know that! And I'm sure many people don't, either. The nice thing about Server Components is that for many use cases, you don't need to know the "right way" to import a library because it's simply not shipped to the client. And even the "right way" doesn't give you 0 bytes.

So we're back to Custom Tags from ColdFusion :-)

I don't necessarily have a problem with it. If concepts work well, there is no reason not to reuse them in new libraries.

I was envious of Custom Tags at the time - when I was writing PHP3.

I'd like to see a modern reincarnation.

Indeed. I know CF has fallen out of vogue, and I understand many of the reasons. However, there's a number of features there that I feel were not only innovative, but have yet to be implemented in other platforms some 20 years later.

This looks ok, but generally I'd much rather just use the Next.js model of something like `getInitialProps`, which is then passed to the component. This generally avoids the "waterfall" issue, and separates data and render code.

What I’m looking forward to is the integration of Server Components with Next.js SSG, where I can in theory build my whole site in JSX and generate the vast majority of it statically (Next already does this part) but only send/render the small portion that’s interactive to the browser (this has been discussed elsewhere, including in Next’s issue tracker, as “partial hydration” and is currently only possible with quite a bit of hacky effort, and I don’t think any of the existing reference implementations reduce bundle size).

This model still lets you do that, but we've found that composition does become important at some point — where parts of your page conceptually become like individual pages.

I'm not sure if I'm misunderstanding it or not, but if it uses JSON instead of HTML doesn't that mean if you have JS disabled there's absolutely no content? Won't that utterly destroy your pagerank score?

They address this in the RFC. They use JSON as a base transport layer that can be streamed into an HTML renderer. They explicitly target SSR/SSG as use cases for this, and they’re actively working with Next.js for a reference implementation.

I suspect (though they didn’t say so outright) that their choice of a non-HTML layer is because they want to continue to provide core functionality as renderer-agnostic. They do mention React Native as another use case. Presumably the same could be said of renderers targeting e.g. CLI, or smart TV APIs.

tldr: we use a richer format to preserve state on refetch, but this format can be turned into HTML for first render.

Our underlying transport is not exactly JSON but something like "JSON with holes". It's basically JSON with placeholder string values like "$1" and "$2" that get filled in by future rows. This lets us do breadth-first streaming, where we can show some content as early as possible but always have intentional loading states.

The choice of JSON is because we need to be able to

* Pass component props (which are JSON) to Client Components

* Reconcile the tree on refetches so that state inside isn't thrown away

HTML doesn't let us do either of these two things. However, starting with a richer format and then converting it to HTML for the initial render works.

Since we can also do this at build time, you could make a website that does all of this work ahead of time and turns it into HTML. However, as you might expect, it won't be interactive without JS.
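To make the "JSON with holes" idea concrete, here's a purely illustrative sketch (not the actual wire protocol): the first row is the shell with placeholder strings, and later rows fill those placeholders in.

```javascript
// Illustrative only: each row is a standalone piece of JSON, and "$1"-style
// strings are holes that later rows fill in.
const rows = [
  // Row 0: the shell arrives first, with a hole where slow content goes.
  '{"type":"div","children":[{"type":"h1","children":["Inbox"]},"$1"]}',
  // Row 1: fills hole $1 once its data is ready.
  '{"$1":{"type":"ul","children":["Message A","Message B"]}}',
];

// A naive resolver that substitutes holes once all rows have arrived.
function resolveRows(rows) {
  const tree = JSON.parse(rows[0]);
  const fills = Object.assign({}, ...rows.slice(1).map((r) => JSON.parse(r)));
  const walk = (node) => {
    if (typeof node === 'string' && fills[node]) return walk(fills[node]);
    if (Array.isArray(node)) return node.map(walk);
    if (node && typeof node === 'object') {
      return { ...node, children: walk(node.children) };
    }
    return node;
  };
  return walk(tree);
}
```

In the real thing the client can start rendering from row 0 alone, showing a loading state in place of `$1` until its row arrives.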

Thank you, this is a great explanation (and I skimmed some of the RFC on my phone so probably missed some of that). The approach certainly makes sense.

It’s also an interesting approach compared to some of the partial hydration approaches I’ve seen. They all seem to use either wrapper divs/spans (bad for all kinds of reasons) or adjacent script “markers” containing initial hydration data, which seems similar to the slots approach but doesn’t necessarily enable streaming/prioritizing certain content for first render.

Since you’re here, I know the RFC discusses compile time static analysis to identify content which could be server-/static-only, but is there any consideration for supporting that where better static analysis is available (e.g. TypeScript/Flow)? I don’t mind manually marking components static or dynamic if it means a better UX, but doing it automatically in the compiler/bundler could be great for DX, as well as for preventing mistakes.

We did experiment with some pretty aggressive compilation approaches a few years ago (see Prepack). Ironically our conclusion from this (which informed Server Components design) is that you don't want this to be done automatically. Because then you don't have precise control and confidence over what gets shipped to the client and what gets shipped to the server. One compiler bailout, and the difference in the bundle is huge. So you'd want to add a way to enforce things, and now we're back to manual annotations.

(We did have a prototype that does this, though.)

I have looked at Prepack! I was wondering if it would be revisited. I certainly understand that lack of confidence, though I think I'd worried more about the false negative: something should be available to the client, but for whatever reason the compiler couldn't identify it. That said, I think tooling could go a long way to address the false positive. For example, in Next.js dev mode, there's an indicator that says whether a page can be statically rendered. If there were tools that said "we've automatically marked this component for the client bundle" and allowed the dev to manually opt out if they're sure the component shouldn't be sent to the client, it's still nice to have for the non-pathological case.

Anyway, thanks for filling in more detail!

For what it's worth, it is always possible to build an automatic approach on top of a manual one. Maybe somebody motivated will do it. :-)

I’ve seriously considered it! It seems like it would probably be a pretty big endeavor, but it would certainly be cool to have.

Is the streaming you talk about here and in the RFC over websockets, or the event-stream HTTP content type, or something more custom? Will the architecture have the potential to support realtime applications where the server will push updates to the client without the need to poll on an interval?

It's just regular chunked encoding, which we consume via a ReadableStream[1]. The key part is that we're able to consume partial data because we read it row by row. Each row represents a piece of JSON with placeholders for missing values. When we receive a row, we try to "continue" rendering on the client, and if we have a loading state we can show (even before all values arrived), we do that.
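Roughly, consuming such a response row by row looks something like this (assumed newline-delimited rows; the actual format differs in detail):

```javascript
// Sketch: read a chunked response and process each complete line (row) as
// soon as it arrives, without waiting for the whole body.
async function consumeRows(response, onRow) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let newline;
    while ((newline = buffer.indexOf('\n')) !== -1) {
      onRow(JSON.parse(buffer.slice(0, newline))); // a row may complete mid-chunk
      buffer = buffer.slice(newline + 1);
    }
  }
}
```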

>Will the architecture have the potential to support realtime applications where the server will push updates to the client without the need to poll on an interval?

I don't see why that wouldn't be possible.

[1]: https://developer.mozilla.org/en-US/docs/Web/API/Streams_API...

> I’m not sure if I’m misunderstanding it or not, but if it uses JSON instead of HTML doesn’t that mean if you have JS disabled there’s absolutely no content?

Assuming you are using server components but not using SSR for the whole app, sure.

But in that case, what you are doing is building what amounts to an isomorphic React app: a client-side React app leveraging Server Components for an easy-to-integrate backend. So whatever the pagerank impact of using a client-side React front-end is, it's to be expected.

If you're using SSR (or static generation) for the initial pageload, you can continue to do so. The payloads can just be used to update the page when the user performs subsequent interactions.

Additionally, you can always fall back to clickable links if you really want to support subsequent interactions for users with no JS, provided you've built the routing infrastructure for that.

Would be interested to hear if they explored using protobufs or did they default to JSON...?

They defaulted to JSON because it's tied rather directly to props.

The same code can run and render to HTML on the server under Node.js.

First reaction: I started my career with writing Windows & web apps using .NET over a decade ago, and what I just saw, kinda-sorta-maybe reminds me of ASP.NET Webforms.

Does anyone else feel the same way?

A bit. It also reminds me of what MS is doing with Blazor.


See https://github.com/reactjs/rfcs/blob/07dd4bc4807605020351606...:

>We don’t send the whole state of the program to the server. Interactive bits should be Client components.

Does this mean that for any user interaction, some of your code will run on the backend? That would mean you'll need to monitor the behavior and performance of the server. Imagine something breaking in a Server Component in production: your browser breakpoint-debugging skills now need to happen on the server.

Btw there's nothing wrong with this. I dig the idea of owning more of backend and Server Components are a really interesting paradigm to enable that!

This is going to improve so many websites! This is huge, folks.

Just a thought: Does this mean we can just throw in server code in our app, and it is possible to have zero React code downloaded on the client side? That is, we can have a "server-only" React, even though it might not be ideal?

From what I can tell, it would seem like it. Though I don't understand why it wouldn't be ideal.

Sounds interesting. Is this kind of like MS Blazor but with React instead of Razor? And JSON instead of WebAssembly?


I don't think this is exciting without the ability to create server components that are interactive.

Hmm, not sure what the hype is about. This still does not solve a lot of my problems with React. You still need the base ~97.5kb React runtime to render a "server component". And if a component is ~10kb in JS size and its new JSON UI representation is ~1kb, then if you request 20 renders of that component, over the wire it would be 20kb of fragments rather than the 10kb component...?

Seems odd that they haven't worked out how this works for routing yet (https://github.com/reactjs/rfcs/blob/07dd4bc4807605020351606...) and that in the demo the whole app is rendered, not individual components. Just seems like turbolinks but for rendering to React VDOM rather than HTML. And I thought most people used React for CSR through hosting files on static buckets leaving out the server...?

One way to think about it is that we want to change the shape of the curve as the app grows. In our experience, as apps become more complex and take on dependencies, the fixed cost of React is offset by all the product and other third party library code. You might think it's not a problem for small/hobby apps, but those often fail to track their bundle size, so they also have perf problems. So we want to change the shape of the curve instead of just "making React smaller". If React itself is really your bottleneck, use Preact.

>And if a component is a ~10kb in JS size and their new JSON UI representation is ~1kb in size then if you request 20 renders of that component then over the wire it would be 20kb of fragments than the 10kb component...?

Keep in mind that you can shift things between Client and Server very fluidly. That's very much the point of the proposal. If you refetch something twenty times over a small period, it might be a good candidate to move to the Client. On the other hand, on most navigations you have to fetch anyway (to get data), and so you might as well do a bunch of CPU work on the server and avoid sending abstraction bloat down.

One part that's missing from the current demo is granular refetching. That's going to be an important piece (including for routing). So that the refetch happens for partial subtrees and not always from the top. We have a rough idea for how it should work but it's still on the TODO list (as many other things we mentioned as ongoing research).

Cool thanks for clarifying. I appreciate the work being done to utilise the benefits of using the server for rendering. It might be worth mentioning the payload inflection point in the drawbacks on the RFC. Looking forward to where this goes!

CSR-only apps quickly run into issues when it comes to SEO and initial page load speeds. For reasonably static content it can be solved by snapshotting the routes and serving the snapshots from the CDN. If you have more dynamic content you are forced to SSR, and this is where this feature comes in.

Is this similar to Phoenix LiveView?

Not quite.


>Our server isn’t stateful. The tradeoff is we refetch more coarsely.

Does this have any advantage over Svelte + Sapper?

The advantage for me is that it uses JSX. I like everything about Svelte except the weird semi-custom language, module format, and magical state management. Being able to produce similar (though probably still less optimized) products with plain JS functions which simply return a data structure, and which can easily render to other targets besides the browser, is fantastic.

No, except this is in React. So it's a similar thing. Also, Sapper is, as of recently, no longer a thing :).

Can you expand on why Sapper is no longer a thing?

Since g4k already answered, I can only say: isn't it wonderful, the pace at which things are going. And here I thought we were no longer having a weekly new-framework thing.

Is Svelte compile-time? I don't think RSC are compile time, just server-time.

They mention SSG as a goal in the RFC, so they intend to support build time (which would be transpile time if you want to split hairs).

I imagine this will lead to many more Node.js servers, and in turn many more CPU cycles spent per user than with mostly static, cacheable resources.

Will it have a big impact on AWS usage?

We have some ideas about how static caching integrates into this (for component subtrees). But this is definitely much further ahead into the future.

You're right there is a tradeoff here. However, that's also part of the point. When the tradeoff isn't right, you can move much of your logic back to the client -- the point is the fluidity and the ability to choose.

I use .NET for server-side rendering of components. I also use vanilla DOM APIs to manage changes in a component's state.

Why get all complicated with React?
