I don't have a strong counter-proposal off the top of my head, but I do think the RFC is too quick to dismiss async/await and dynamic imports.
async/await makes a lot of sense for differentiating server-side components (a Promise&lt;T&gt; return would be a very different return type from current components), and many things you'd use on the server, like DB access, expect Promises today. (That seems like a clear problem with the server-side examples as presented in the RFC: they use the synchronous Node 'fs' API instead of fs promises, and some sort of synchronous 'db' that looks less and less like any modern Node DB client.) There's even a possible intuition pump there: hooks "obviously" won't work in an async function, making it easier to keep the rules in mind between the two types of components. If components are then async/await by "requirement", dynamic imports stop looking so out of place and start to look much more correct in the component body.
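For concreteness, here's a hedged sketch of the async component style argued for above. The `db` client and the plain-object "element" are stand-ins I made up; the point is just the shape of a Promise-returning component:

```javascript
// Stub "db" standing in for any promise-based client (pg, knex, etc.).
const db = {
  notes: { get: async (id) => ({ id, title: `Note ${id}` }) },
};

// Hypothetical async Server Component: the Promise<Element> return type
// alone would mark this as server-only, and calling hooks inside an async
// function would be an obvious error.
async function Note({ id }) {
  const note = await db.notes.get(id); // ordinary await, no special fetcher
  return { type: 'article', props: { children: note.title } }; // stand-in for JSX
}

Note({ id: 1 }).then((el) => console.log(el.props.children)); // logs "Note 1"
```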
It isn't synchronous, it only looks that way. It actually uses fs promises under the hood. We're doing a bit of a trick there (it throws a special value the first time, and then React retries when it's resolved). The result then gets cached, and the repeated render succeeds synchronously. We'll describe this mechanism in detail in a future RFC.
All of the examples in the demo (including fetch, readFile, and the DB query call) are async under the hood.
However, this does leave us the ability to use sync I/O in the cases where it makes sense (depending on whether we have other work to do, etc.).
Maybe this is just the cost of gaining and maintaining popularity. I won't say it's intrinsically a bad thing, but I personally don't feel good about it.
(To be clear, right now the code is using a, uh, very surprising pattern to make asynchronous code appear synchronous: if the result value is cached, it returns the value synchronously, but if not, the fetcher throws a Promise. You know, you'd normally throw exception objects, but JS lets you throw any value, so why not throw a Promise amirite?!? When React catches a Promise (a thenable) it awaits the result, caches it and then re-runs the React component; now the component won't throw a Promise and will run to completion normally.)
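To make that pattern concrete, here's a minimal sketch of the throw-a-Promise trick as described above. The names (`read`, `renderWithRetry`) are illustrative, not React's real internals:

```javascript
// Cache of entries: pending, resolved, or rejected.
const cache = new Map();

function read(key, fetcher) {
  if (cache.has(key)) {
    const entry = cache.get(key);
    if (entry.status === 'resolved') return entry.value; // sync fast path
    if (entry.status === 'rejected') throw entry.error;  // real error
    throw entry.promise;                                 // still pending
  }
  const entry = { status: 'pending' };
  entry.promise = fetcher().then(
    (value) => { entry.status = 'resolved'; entry.value = value; },
    (error) => { entry.status = 'rejected'; entry.error = error; }
  );
  cache.set(key, entry);
  throw entry.promise; // first call always "suspends"
}

// Toy stand-in for React: catch the thrown thenable, await it, re-render.
async function renderWithRetry(component) {
  for (;;) {
    try {
      return component();
    } catch (thrown) {
      if (thrown && typeof thrown.then === 'function') {
        await thrown;  // a thenable: wait, then re-run the component
      } else {
        throw thrown;  // an actual exception: rethrow
      }
    }
  }
}

// On the second run the value is cached, so the component completes normally.
renderWithRetry(() => read('user', async () => 'Ada'))
  .then((name) => console.log(name)); // logs "Ada"
```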
The FAQ says extremely little about why async/await were avoided:
> Why don't we just use async/await?
> We’d still need a layer on top, for example, to deduplicate fetches between components within a single request. This is why there are wrappers around async APIs. You will be able to write your own. We also want to avoid delays in the case that data is synchronously available -- note that async/await uses Promises and incurs an extra tick in these cases.
A layer to deduplicate fetches sounds great, but that library could use async/await, too.
Using async/await would make this code substantially easier to understand, and the cost of a "tick" is trivial (and certainly worth the price, particularly in server-side code).
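To back up the claim that a dedupe layer doesn't require the throw-a-Promise trick, here's a sketch of per-request fetch deduplication built on plain async/await. The names (`createRequestCache`, `dedupedFetch`) are illustrative:

```javascript
function createRequestCache() {
  const inflight = new Map();
  return async function dedupedFetch(url, fetcher) {
    // First caller starts the fetch; everyone else awaits the same Promise.
    if (!inflight.has(url)) inflight.set(url, fetcher(url));
    return inflight.get(url);
  };
}

// Toy usage: two components requesting the same URL trigger one fetch.
let calls = 0;
const load = createRequestCache();
const fakeFetch = async (url) => { calls += 1; return `body of ${url}`; };
Promise.all([load('/api/note/1', fakeFetch), load('/api/note/1', fakeFetch)])
  .then(([a, b]) => console.log(calls, a === b)); // logs: 1 true
```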
EDIT: Thinking about this a bit harder, I know the React team has been extremely resistant to async/await in components for years now. Fine. But that needs its own RFC. Some clear written document spelling out in detail why async/await is the wrong approach for React, and not just a comment on this RFC.
I'd like to ask the React team to write that RFC doc because I think it can't be written: you'll find that the argument falls apart when you try to explain it.
> Why not use async/await?
> The React IO libraries used in the demo and RFC follow the conventions we've discussed previously for writing Suspense-compatible data-fetching APIs. Suspense-compatible APIs return data synchronously when it is already available, throw if there is an error, or "suspend" to indicate to React that they are unable to return a value. The mechanism for Suspending is to throw a Promise value. React uses resolution of the promise to know when the API may be ready to provide a value (or that it has failed) and to schedule an attempt to render the component again.
> One new consideration in the design of Suspense from this proposal is that we would like to use a consistent API for accessing data across Server Components, Client Components, and Shared Components. Overall, though, the design of Suspense is outside the scope of this RFC. We agree that we should document this design clearly and will prioritize doing so in the new year.
Thanks for incorporating this; let's see how the design doc plays out next year.
Could you document why React doesn't handle plain promises natively, then? You could conditionally return a promise there without incurring the cost of the async/await sugar.
Also related: why wouldn’t generators be a good fit for some of these use cases? Especially since iterating over data is a common action. I imagine this would be easier to integrate into the framework for things like this
I think this is answered by Seb, in the "Contributing to Fiber" issue in the React Repo: https://github.com/facebook/react/issues/7942#issuecomment-2...
But maybe things have changed since then!
I suspect that "why doesn't react handle plain promises natively" would be addressed in the RFC detailing the whole "let's throw a promise" thing.
Ah, all the more dangerous then.
I appreciate that throwing promises is a clever flow-control hack and that y'all have proven it to be a useful micro-optimization, at least in current V8. But my experience tells me that I'd rather a junior developer write incorrect async/await code than accidentally introduce a synchronous deadlock. Async code that looks synchronous can mask synchronous code that should be async much more easily.
> However, they do leave us the ability to use sync I/O in the cases where it makes sense
async/await don't stop you from doing that either. The .NET CLR/BCL has a lot of optimizations in place today for fast-pathing synchronous Task<T> (and now ValueTask<T>) code. As Promises become more and more common in the JS ecosystem it should be expected that more and more such optimizations will arrive in JS engines "tomorrow".
Even if we can't expect such optimizations soon, server-side code is easier for the application developer to scale and less in need of such micro-optimizations. I think that adds more weight to my gut instinct that explicitly async (async/await) code on the server could be one good way to distinguish client code from server-only code.
However, I agree that using async/await would be better.
Server Components are very different from what Next.js does today (traditional SSR). Here are a few differences:
* Server Components code is never sent to the client. By comparison, with traditional SSR all of the component code gets sent to the client anyway in the JS bundle.
* Server Components let you access the backend directly from anywhere in the tree. In Next.js, you can access the backend inside getServerProps(), but that only works at the top-level page, which means componentization is very limited; e.g. a random npm component can't do that.
* Server Components can be refetched without losing the Client state inside of their tree. Because the primary transport is richer than HTML, we can refetch a server-rendered part (e.g. a search result list) without blowing away the state inside (e.g. the search input text, focus and selection).
That said, it's not a dig at Next.js -- the whole goal is to enable Next.js and similar frameworks to be much better.
Happy to answer specific questions!
Certainly pre-rendering would be more efficient with these types of components too, so I imagine updates to hydration will be part of this.
Can this be run in any other back end other than Node since it only transmits a stream of serialized vdom?
Broadly speaking, it's a webpack plugin that finds all Client Components and creates an id -> chunk URL map on disk, plus a Node.js loader that replaces imports of Client Components with accesses to this map. You will be able to wire it up yourself, but there are other bits (like routing integration), so we're going to make it work in a framework like Next.js first. Then, once there is a quality integration, you can copy how it's done into your custom setup.
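As a guess at the shape of that id -> chunk URL map (the actual plugin output isn't shown in this comment, so all names and fields here are hypothetical):

```javascript
// Hypothetical manifest mapping Client Component modules to bundle chunks.
const clientManifest = {
  './src/SearchField.client.js': { id: 5, chunks: ['chunk3.js'] },
  './src/NoteEditor.client.js':  { id: 9, chunks: ['chunk7.js'] },
};

// The Node.js loader would swap an import of a Client Component for a
// lightweight reference into this map instead of the component's code.
function resolveClientReference(path) {
  const entry = clientManifest[path];
  return { type: 'client-reference', id: entry.id, chunks: entry.chunks };
}

console.log(resolveClientReference('./src/SearchField.client.js').chunks[0]);
// → chunk3.js
```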
>Can this be run in any other back end other than Node since it only transmits a stream of serialized vdom?
React components themselves are JS so we'd expect the backend to be in JS. (I mean, you could reimplement it in Rust or something, but then you can't easily move components to the Client and back.) There is no hard dependency on Node.js itself — as long as your JS environment has some sort of streams, we could make it work.
Marked unstable because it was just recently added and is experimental. Per page is about the best Next can do, as it would need React itself to decide how to do this per component, which looks to be a big part of this new RFC.
Principal Skinner: Am I so out of touch? No, it's the children who are wrong.
I don't like the video format, but yours are always very enjoyable.
Previously, the abstraction boundary between your server and client was an API, with well-known patterns for versioning... that has now shifted to the level of component props, which I suppose is still workable — it's very similar to GraphQL — but is certainly not very intuitive.
In general, this isn't a new problem per se, as backend API changes can similarly break different clients that have been cached. Or when you deploy client code and the code-split chunks have changed. It's true that with this approach it likely becomes a more common problem.
One way to solve this is immutable deployments, where the server runs the version of the code that the client is on. Many providers already do immutable deployments, so there's possible integration there. It could also have some kind of fallback where a coarser refetch is triggered when the versions become incompatible.
We expect that hosting providers will be innovating in this area in the future, too.
Maybe I've just been out of the loop. At work every React app I've built had all this custom logic for each of those pieces. But when I start a new project I'm loathe to do all that setup again.
I'd just like an opinionated framework built on React that gets rid of some of this boilerplate of browser routing, auth, api access, etc.
This isn't a knock on React. React is awesome. But at this point it's almost too barebones on its own.
Server side, Nest offers much of the same. Think Angular, but for backend dev. Yes, it has some boilerplate to it, but for the most part its CLI eases the redundant typing.
(as well as a few other tools in the ecosystem)
Does it need to be served from a Node.js environment or can it be used as a bundler and be served from other servers (eg Django or Rails)?
What are the other tools?
Presumably if your react app could be more performant with server-side rendering it would be better to simply use traditional server side rendering and sprinkle JS, and if it's really a "web app" I'm not sure how substantially faster react server components will be in practice.
I read the FAQ and some of the answers seem to need clarification:
> Doesn’t always re-fetching the UI make interactions slow?
The answer to this isn't really definitive.
> What are the performance benefits of Server Components?
The answer to this makes it sound like Server Components make fewer requests than client components, but in the context of GraphQL, which is mentioned in the answer, the total number of requests should be the same. If anything, wouldn't a regular React app be more responsive?
The way I think about it: in most apps there are a lot of components that only rerender when there is new data from the server. Those components can be run on the server without losing any interactivity. Especially if a component has highly branching logic, this reduces the amount that has to be sent to the client by a large margin.
The programming model is very similar and it’s possible to make one component that works on both server and client simultaneously, which means you’re not locked in if you want to move a server component back to the client later.
React Server Components feel like they're taking those coarse solutions and making them more fine-grained and less hacky. But my entire exposure to them is the video they just released, so who really knows at this point.
My worry is mostly due to suspense for data fetching (which I really care about) and concurrent mode (which I care less about but which is still nice) releases, which were announced in late 2018 and promised by mid 2019, but then delayed at least twice since then.
As for Suspense, Server Components largely are the evolution of Suspense (although it works on the client too). We've decided to hold back releasing Suspense a year ago because we didn't have a solution to waterfalls. Now we do. So it's all coming together. In general, a lot of what we do is research, and research has many dead ends and takes time.
If folks start using this approach of leveraging a DB client in their components, I hope they are embracing dependency injection (DI) somehow; otherwise their codebases are going to become stupidly complex to test.
For an example of how (React + Server Side only plugins + DI) works today, take a look at how its implemented within fusionjs https://fusionjs.com/docs/references/creating-a-plugin/depen...
More of a mess to do it with NodeJS.
Next.js has a good solution already -- which is likely part of the reason it'll be the first framework where Server Components get integrated. The core React team has been working closely with the Vercel folks for a while now, and this result is exactly what I've been waiting for.
- Webpack and/or React must be very aware of what the other is doing here in order to drop the server-only JS from the bundle. Is it even possible to use alternate bundlers with this feature?
- The React server has to be deeply concerned with the application logic to know how to transparently pass state back and forth, presumably via some generated JSON endpoints. I guess maybe this was already the case for hybrid client/server apps.
- What's the lifetime of "persisted server-component state"? A component lifecycle? A page reload? A user session?
- The react-fetch package that "works as if it were synchronous so you don't have to wrap it in an effect or anything" is, from the outside, super weird. How did they make an asynchronous call synchronous? JS doesn't normally allow that outside of an async/await context. Did they wrap the JSON response object in some kind of lazy proxy or something?
Of course "magic" is relative, and can do wonders for productivity in the right environment. I just find myself in a place again where I have no idea how these abstractions are actually working, and I don't love that.
The bundler-specific code is here. As you can see, it's not much. We will happily merge PRs and tweak the infrastructure to work with other bundlers; e.g. we're already in conversations with Parcel, who are also interested. Yes, there is a cost in under-the-hood complexity, but we think you get a pretty high benefit (e.g. automatic code splitting), so it's worth that integration, just like many other things bundlers do that we now take for granted.
>The React server has to be deeply concerned with the application logic to know how to transparently pass state back and forth, presumably via some generated JSON endpoints. I guess maybe this was already the case for hybrid client/server apps.
Not quite sure what you mean by this, but you be the judge. There is no "state" being passed. Server Components are completely stateless. (But they can render Client Components, which are normal React components on the client and have the lifecycle you're familiar with.)
>What's the lifetime of "persisted server-component state"? A component lifecycle? A page reload? A user session?
Server Components are not stateful. They follow a request-response model, just like traditional server pages. The novel part is that we're able to merge the result into the client tree (instead of .innerHTML = newHTML with old school partial templates, which destroys client state).
>How did they make an asynchronous call synchronous? JS doesn't normally allow that outside of an async/await context. Did they wrap the JSON response object in some kind of lazy proxy or something?
We will be posting a separate RFC in the coming weeks/months that dives into details. But the high-level answer is that we want to model it as a cache that you read synchronously. (So if the answer isn't synchronously available, we throw and retry later.) Async/await adds unnecessary overhead when content is already available synchronously, and especially on the server we'll expect many synchronous cache hits because some data has already been accessed from a parent component. Think of DataLoader-like abstractions.
I may have misunderstood. Here's what I was referring to:
> Server Components preserve client state when reloaded. This means that client state, focus, and even ongoing animations aren’t disrupted or reset when a Server Component tree is refetched.
As well as the part in the video where non-serializable props are referred to, in the context of implicitly sending data between client and server (these may in fact be two separate topics).
But either way, surely the quoted paragraph requires some highly-magical behavior?
Here's one way to think about it. Imagine a traditional webpage. It renders to HTML. If you refetch it, you get new HTML. You can't simply "merge" two HTMLs on top of each other, so if you were to "update" document.body.innerHTML to the new result, you'd blow away focus, selection, etc.
Now imagine the server sent "virtual DOM" in the response instead. React knows how to "merge" such updates into the tree without destroying it. This is what React has been doing all along, right? So this is why we can refetch the Server tree, and show the result without blowing away the Client state inside.
Now, this virtual DOM tree contains things like divs (with their props) or references to Client Components (e.g. "module number 5 in bundle called chunk3.js") with their props. These are the things that need to be serializable. But they're only passed from Server to Client, not back. Think of them as the same role as HTML attributes.
the video is timestamped so you can skip around to the part you want to focus on
The server-side rendering aspect is pretty cool though.
They cover some of this in the RFC. But I’ll use a site I’m working on as an example. There’s currently very little interactive content on the site, nearly everything is just Markdown content rendered to HTML and styled with CSS. But Next.js doesn’t know it’s static and sends it all again as React (well, Preact for my site) components. This doubles the weight of my pages on the wire, not to mention the performance impact of hydrating that static content.
I’ve been looking for weeks for a way to do what Server Components is doing, because while I love JSX for dev, that’s a really awful UX to foist on readers. There’s some prior art (search for Preact partial hydration if you’re interested), but it’s all very complex to set up. Having a first party reference solution will raise the bar significantly.
But there's another angle to consider. In thick client apps, product code (everything else) tends to take up the majority of the size of the client bundle. There's a real opportunity here to move some of that into Server Components which would help reduce that footprint significantly. For example consider the case of deeply wrapped components that ultimately render to a single <div>. Server Components could help remove that abstraction tax.
I'd like to see a modern reincarnation.
I suspect (though they didn’t say so outright) that their choice of a non-HTML layer is because they want to continue to provide core functionality as renderer-agnostic. They do mention React Native as another use case. Presumably the same could be said of renderers targeting e.g. CLI, or smart TV APIs.
Our underlying transport is not exactly JSON but something like "JSON with holes". It's basically JSON with placeholder string values like "$1" and "$2" that get filled in by future rows. This lets us do breadth-first streaming where we can show some content as early as possible, but always with intentional loading states.
The choice of JSON is because we need to be able to:
* Pass component props (which are JSON) to Client Components
* Reconcile the tree on refetches so that state inside isn't thrown away
HTML doesn't let us have either of these two. However, starting with a richer format, and then converting it to HTML for initial render, works.
Since we can also do this at the build time, you could build a website that does all of this work at the build time, and turns it into HTML. However, as you might expect, it won't be interactive without JS.
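A toy illustration of the "JSON with holes" idea described above: placeholder strings like "$1" in an early row get filled in by later rows. The real wire format isn't specified in this thread, so this is only a sketch of the concept:

```javascript
// Recursively substitute placeholder strings with their late-arriving values.
function fillHoles(tree, holes) {
  if (typeof tree === 'string') return holes[tree] ?? tree;
  if (Array.isArray(tree)) return tree.map((t) => fillHoles(t, holes));
  if (tree && typeof tree === 'object') {
    const out = {};
    for (const k of Object.keys(tree)) out[k] = fillHoles(tree[k], holes);
    return out;
  }
  return tree;
}

// Row 1 arrives first and can render immediately, showing a loading state
// at "$1"; row 2 streams in later and fills the hole.
const row1 = ['section', { children: ['$1'] }];
const row2 = { $1: ['p', { children: 'search results' }] };
console.log(JSON.stringify(fillHoles(row1, row2)));
// → ["section",{"children":[["p",{"children":"search results"}]]}]
```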
It’s also an interesting approach compared to some of the partial hydration approaches I’ve seen. They all seem to use either wrapper divs/spans (bad for all kinds of reasons) or adjacent script “markers” containing initial hydration data, which seems similar to the slots approach but doesn’t necessarily enable streaming/prioritizing certain content for first render.
Since you’re here, I know the RFC discusses compile time static analysis to identify content which could be server-/static-only, but is there any consideration for supporting that where better static analysis is available (e.g. TypeScript/Flow)? I don’t mind manually marking components static or dynamic if it means a better UX, but doing it automatically in the compiler/bundler could be great for DX, as well as for preventing mistakes.
(We did have a prototype that does this, though.)
Anyway, thanks for filling in more detail!
>Will the architecture have the potential to support realtime applications where the server will push updates to the client without the need to poll on an interval?
I don't see why that wouldn't be possible.
Assuming you are using server components but not using SSR for the whole app, sure.
But in that case, what you are doing is building what amounts to an isomorphic React app: a client-side React app leveraging Server Components for an easy-to-integrate backend. So whatever the PageRank impacts of using a client-side React front-end are, they're to be expected.
Additionally, you can always fall back to clickable links if you really want to support subsequent interactions for users with no JS, provided you've built the routing infrastructure for that.
Does anyone else feel the same way?
>We don’t send the whole state of the program to the server. Interactive bits should be Client components.
Btw there's nothing wrong with this. I dig the idea of owning more of backend and Server Components are a really interesting paradigm to enable that!
Seems odd that they haven't worked out how this works for routing yet (https://github.com/reactjs/rfcs/blob/07dd4bc4807605020351606...) and that in the demo the whole app is rendered, not individual components. It just seems like Turbolinks, but rendering to React VDOM rather than HTML. And I thought most people used React for CSR, hosting files on static buckets and leaving out the server...?
>And if a component is ~10kb in JS size and its new JSON UI representation is ~1kb in size, then if you request 20 renders of that component, over the wire it would be 20kb of fragments rather than the 10kb component...?
Keep in mind that you can shift things between Client and Server very fluidly. That's very much the point of the proposal. If you refetch something twenty times over a small period, it might be a good candidate to move to the Client. On the other hand, on most navigations you have to fetch anyway (to get data), and so you might as well do a bunch of CPU work on the server and avoid sending abstraction bloat down.
One part that's missing from the current demo is granular refetching. That's going to be an important piece (including for routing). So that the refetch happens for partial subtrees and not always from the top. We have a rough idea for how it should work but it's still on the TODO list (as many other things we mentioned as ongoing research).
>Our server isn’t stateful. The tradeoff is we refetch more coarsely.
Will it have a big impact on aws usage?
You're right there is a tradeoff here. However, that's also part of the point. When the tradeoff isn't right, you can move much of your logic back to the client -- the point is the fluidity and the ability to choose.
Why get all complicated with React?