Hacker News | acemarke's comments

Yes, here are a few excellent articles that explain what problems build tools solve and why they exist:

- https://sunsetglow.net/posts/frontend-build-systems.html

- https://www.innoq.com/en/articles/2021/12/what-does-a-bundle...

- https://www.swyx.io/jobs-of-js-build-tools

Loosely put, they're the equivalent of all of `gcc` or `rustc`: compile the source code, run type checking, output object files, transform into the final combined executable output format.



Note that this is the full package, and that it's normal to use a bundler that will tree-shake away the things you don't use. For example, react-dom includes server-side rendering functions that are not necessary on the browser side.


Afraid that's not accurate.

The package size I linked is specifically the `react-dom.production.min.js` bundle that is used on the client side. The `react-dom` package does include _separate_ bundles used for server rendering, but that whole React client bundle will get included in your app, and it does _not_ tree-shake at all.

(To be clear I _like_ React, but it's best to be accurate about what happens here.)


So this package is not the actual react-dom package I get when I npm install react-dom? Thanks for the correction!


The `react-dom` NPM package includes multiple different JS bundles that are used for different purposes. See that `unpkg` link I pasted in the parent comment.

There are three different flavors of `react-dom` for use in the client (dev, prod, profiling), and then several variations of `react-dom` for use _on the server_.

The client bundles don't include any of the server functionality.

_None_ of the bundles are tree-shakeable at all, because A) they're shipped as CommonJS modules and not ESM, and B) of the way the React library is written and architected as a whole. All of the reconciler logic is intertwined and unshakeable, so even if React did suddenly switch to shipping ESM modules instead of CJS, it would still end up at the exact same bundle size.


So how small can you get for a hello world page with a single "hello $name" component, spending an hour or less on it?


That bundle size + the size of your component.

Just created a fresh Vite+React app and rendered a "hello world" component, and the resulting output is:

    dist/assets/index-uoOveHrm.js   142.66 kB │ gzip: 45.76 kB


Conceptually, a `<Suspense>` component acts like a `try` boundary around a given component subtree. If _any_ component inside of that subtree suspends, it needs to "bubble up" to the ancestor `<Suspense>`.

But, React implements the core component tree rendering logic via a single `while` loop that iterates downwards. That's flat, logic-wise, whereas the tree is nested.

Meanwhile, React already had similar behavior for its error boundaries, where a thrown error in a component would get caught by the component rendering logic and it would "bubble up" to the nearest error boundary.

So, they opted to implement Suspense's mechanics the same way, except that instead of throwing an error, you throw a `Promise`.
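The "throw a promise, catch it at the boundary, re-render when it resolves" mechanic can be sketched in plain JS. This is an illustrative toy only, not React's implementation; `renderWithBoundary`, `Hello`, and the module-level `cache` are all made-up names for the sketch:

```javascript
// Illustrative sketch (NOT React's source): a boundary that catches a
// thrown Promise, waits for it to resolve, then retries the render.
async function renderWithBoundary(renderFn) {
  try {
    return renderFn();
  } catch (thrown) {
    if (thrown instanceof Promise) {
      await thrown;       // data is ready once this resolves...
      return renderFn();  // ...so the retry can return synchronously
    }
    throw thrown;         // a real error: leave it for an error boundary
  }
}

// A "component" that throws a Promise until its data is cached.
let cache = null;
function Hello() {
  if (cache === null) {
    throw new Promise((resolve) => {
      cache = "world";    // pretend this is an async fetch completing
      resolve();
    });
  }
  return `hello ${cache}`;
}
```

Calling `renderWithBoundary(Hello)` throws on the first render attempt, waits for the promise, and the retry returns `"hello world"` — from the component's perspective, the data was there all along.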

Meanwhile, React components on the client have always been pure synchronous functions. No `async/await`, no generators, and thus no support for returning a promise from a function.

With React Server Components, React now supports `async function` components _on the server only_. They've done some prototyping with support for async components on the client (and I think even briefly accidentally had a couple releases where that technically was turned on), but there's some kind of either technical issue or release planning issue that's kept them from building out that support for client components (possibly support for `AsyncContext` in browsers).


Thanks for your explanation.

What I don't understand is why there's no easy way to conditionally render a subtree depending on promises contained in the props of the top-level component.

Kind of like a guardian HOC triggering a re-render whenever all props promises resolve.

Is it because that would be a one-time thing and not synchronously reactive?

Seems like people reimplement or reuse this kind of thing all the time, but often using useEffect (transforming side-effects into state, losing deterministic rendering).

You're right that this would get ugly really fast with nested suspense boundaries though.


Hmm. Trying to understand what you're suggesting here.

You can pass _any_ JS value as a prop to a child component, and there's nothing special about any of that as far as React is concerned. You can pass a primitive, an object, a Promise, an AbortController, a DOM node, anything. All React cares about is "here's what gets passed into the child".

Suspense has a couple key bits of behavior: it needs to let _deeply_ nested components trigger the _nearest_ Suspense boundary, and it also needs a way for React to know when the async behavior is done (ie, the promise resolves) so that it knows when to re-render. Throwing a promise is certainly unusual conceptually, but makes sense in light of those constraints.

(I'm probably misunderstanding what you're envisioning and didn't manage to answer it properly - feel free to clarify with an example if you'd like!)


Yes, I'm not sure myself if what I said makes sense when thinking it through again.

I was thinking something like

  <WithPromises asyncProp={promise} asyncProp2={promise2}>
    {(resolvedValues) => children(resolvedValues)}
  </WithPromises>
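Such a helper could be sketched without React at all — resolve every promise-valued prop, then call the function-as-children with the results. The name `withPromises` is hypothetical, just matching the sketch above:

```javascript
// Hedged sketch of a "WithPromises"-style helper, minus React: await all
// promise-valued props, then invoke the children function with the values.
async function withPromises(asyncProps, children) {
  const names = Object.keys(asyncProps);
  const values = await Promise.all(names.map((name) => asyncProps[name]));
  const resolved = Object.fromEntries(
    names.map((name, i) => [name, values[i]])
  );
  return children(resolved);
}
```

For example, `withPromises({ a: Promise.resolve(1), b: Promise.resolve(2) }, ({ a, b }) => a + b)` resolves to `3`.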

But that's already possible to do yourself, I guess, though better done in a different way.

E.g. not using a function prop as "children" etc.

Many libraries also provide nice and clean interfaces to provide async state (e.g. react-query) and then of course there's good old useEffect.

I think I see why they use a different approach.

Guess I'd mainly just love a streamlined API for data fetching built into the core.

Right now it's a lot better to just have async code outside of React change prop values.

Love the section on client data fetching in the react docs though.


Note that Redux usage patterns have changed significantly since 2019. Modern Redux with our official Redux Toolkit package is drastically simpler and easier to use than the original legacy hand-written patterns:

- https://redux.js.org/tutorials/essentials/part-2-app-structu...

- https://redux.js.org/introduction/why-rtk-is-redux-today


Hi, I'm a Redux maintainer.

Context is not an "improvement" over Redux, because they are different tools with different purposes. (This is the primary misunderstanding people have when they try to compare Context and Redux.)

Context is a Dependency Injection tool for a single value, used to avoid prop drilling.

Redux is a tool for predictable global state management, with the state stored outside React.

Note that Context itself isn't the "store", or "managing" anything - it's just a conduit for whatever state you are managing, or whatever other value you're passing through it (event emitter, etc).

I wrote an extensive article specifically to answer this frequently asked question, including details about what the differences are between Context and Redux, and when to consider using either of them - I'd recommend reading through this:

- https://blog.isquaredsoftware.com/2021/01/context-redux-diff...


Thanks for the info. I'd say you're speaking from some position of authority about what redux is and isn't. And to some degree, what react context is and isn't. I vaguely recall seeing this blog post before, maybe.

So redux is for "managing and updating" shared state. And context is for "sharing values". All this seems to suggest that react actually isn't that good at shared state, at least when it can change.


I'd both agree and disagree with that.

React is based on the core concepts of encapsulated components, with the ability to manage state on a per-component-instance basis, and for parent components to pass _any_ values they want to their children as props, forming a "one-way data flow" approach.

This is _good_, because it both enables predictable behavior and React's overall rendering model.

It's _limiting_, because it means that React's own state management is inherently tree-shaped. If two widely separated components need to access the same data, you have to hoist the ownership of that state up to the nearest common ancestor, which could easily be the root `<App>` component. To put it another way, not all state is inherently tree-shaped, so there's often a mismatch.

Context is essentially "props at a distance". Put a value in a `<MyContext.Provider>` component somewhere in your tree, then any deeply nested component can read it via `useContext(MyContext)` without having to explicitly pass the value as a prop through however many intervening levels of components.

This simplifies making values accessible to that subtree, but doesn't solve the tree-vs-nontree-shaped state management question.
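The "props at a distance" idea can be shown with a plain-JS toy (illustrative only, nothing like React's actual Context implementation): a provider pushes a value onto a stack, and any nested render call reads the nearest provided value without it being passed down as a prop.

```javascript
// Toy dependency-injection sketch of the Context concept.
const contextStack = [];

function provide(value, renderChildren) {
  contextStack.push(value);
  try {
    return renderChildren();  // anything rendered here can read the value
  } finally {
    contextStack.pop();
  }
}

function readContext(defaultValue) {
  return contextStack.length > 0
    ? contextStack[contextStack.length - 1]
    : defaultValue;
}

// A "deeply nested component" reads the value directly - no prop drilling.
function DeepChild() {
  return `theme: ${readContext("light")}`;
}

function App() {
  return provide("dark", () => DeepChild());
}
```

`App()` returns `"theme: dark"`, while calling `DeepChild()` outside any provider falls back to `"theme: light"` — the same shape as a Context default value.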

You _can_ build an entire app out of nothing but React component state. Plenty of folks have done it, but it's limited in what tools you have available and how you can structure things.

That's a large part of why there have been so many different state management libraries created for React, to provide alternative approaches that don't have the tree-shaped issue.


Hi, I work at Replay.io. There's a very good reason for that "custom browser" requirement.

Session replay tools like LogRocket or Jam do capture a lot of information, but they can only capture what's _in_ the page, and are thus limited by the JS execution environment and permissions. This is still very useful for seeing what the user did in the page, and you do get a good amount of detail (video, DOM, network).

Replay.io works by capturing the browser's calls into the operating system. (This is really complicated! Our fork of Chromium has thousands of lines of custom C++ and JS modifications in order to capture the runtime information and make it queryable.) That enables actually _debugging_ any line of code that ran at any time. That's something that session replay tools _can't_ do.

So, yes, both Replay.io and session replay tools let you _see_ what happened, but only Replay.io lets you _debug_ the code _as it ran originally_. And that's only possible because we do capture the _entire_ browser's execution.

We've got some sections in our docs that dive into this in more detail:

- https://docs.replay.io/time-travel-intro/what-is-time-travel...

- https://blog.replay.io/how-replay-works

- https://docs.replay.io/comparison/session-replay

Not only does Replay.io let you _debug_ recordings of bugs, but we've also got a Test Suites dashboard that lets you record Playwright or Cypress E2E tests _as they ran in CI_. This is possible because we can run your E2E tests with our own browser, and thus record them as they're running.

Finally, a sneak peek: we're currently prototyping some new advanced functionality that would actually _diff_ passing and failing E2E tests to figure out where a failing test went wrong, surface that info to developers, and help them identify common failure reasons in their tests ("27% of your failures in the last month were due to Apollo Client failing to connect"). Still very early, but we've got the core functionality working! Again, this is only possible because we've recorded the _entire_ browser's execution, and can then replay that browser at will and query for every bit of JS execution that happened.


Can you clarify some terms here?

- What do you mean by "React as a library"? Using it as a pure SPA with a bundler like Vite? Using it as a plain `<script>` tag?

- What do you mean by "go back to MVC"? What does "MVC" mean specifically in this case?


- Using it as a plain `<script>` tag?

Yes.

- What do you mean by "go back to MVC"? What does "MVC" mean specifically in this case?

MVC as in Model-View-Controller.

To give more context, I'm prototyping a lot, and having to create endpoints for every single API is quite a chore. Also, I want to avoid the overhead of having to run a separate server just to host the frontend (e.g. NextJS).


I know what "MVC" _stands_ for, but I'm asking what _context_ you mean that in. Are you talking about how to define your server-side data models and endpoints? How you're organizing client-side fetching and caching?

Normally "MVC" as a concept doesn't get used in the React ecosystem (the way it did with Backbone.js).

FWIW it's certainly _possible_ to use React as a script tag, but it's extremely rare. It's normally expected that the frontend _is_ actually bundled and compiled, whether it be using a pure-SPA build tool like Vite, or one of the full server-side frameworks like Next or Remix.

Note that the SPA build output is just a set of static HTML/JS/CSS files, which do not require a separate Node server process for hosting - they can be served by any HTTP server.

My own advice would be to use Vite and build as an SPA.

_If_ you absolutely want to use React as _just_ a `<script>` tag with no build step, I'd recommend also using https://github.com/developit/htm to at least give you JSX-like syntax for writing your components.


Depending on how you slice the numbers, it seems plausible.

If you look at https://npm-stat.com/charts.html?package=react&package=react... :

- The older `react-query` package is 1.6M DL/w, the newer `@tanstack/react-query` is 3M

- React is ~22M

So accounting for hand-wave-y rounding and/or adding old and new versions together, that's a reasonable simplification.

For comparison, React-Redux is at 6.8M DL/w, which is how I've generally been measuring the percentage of React apps that use Redux (which was once around 60%, and is now at about 30%).


I did title it "_Mostly_ Complete Guide" for a reason :)

Everything in that post should still be 100% accurate and relevant.

I specifically did _not_ try to go into further details on Suspense or some of the intricacies of Concurrent Rendering (beyond "React can cancel or reset those render passes"). My overall goal was to explain the core mental model of how React's basic rendering works on the client side.

As far as Suspense goes, that can be summarized as "throw a promise while rendering, `<Suspense>` acts like a `try/catch` at the component level, React will re-render this component when the promise resolves and from its perspective that function call _always_ returned the data synchronously".

Concurrent Rendering is really complicated internally, but loosely: React has an internal notion of priorities for queued renders. `startTransition`, `useDeferredValue`, and Suspense all mark renders as low priority, so those render passes can be "rebased", interrupted, or canceled as needed based on updates that come in later.
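The priority idea can be illustrated with a toy queue (this is emphatically not React's scheduler, just the concept): a newer update for the same component supersedes its pending lower-priority work, mimicking a canceled render pass that gets redone from the newer state. The `HIGH`/`LOW` constants and `schedule`/`flushAll` names are invented for the sketch:

```javascript
// Toy priority queue illustrating "low-priority renders can be canceled".
const HIGH = 0;
const LOW = 1;
let queue = [];

function schedule(key, priority, work) {
  // A newer update discards pending work for the same key unless that
  // pending work has strictly higher priority (a lower number).
  queue = queue.filter((job) => job.key !== key || job.priority < priority);
  queue.push({ key, priority, work });
}

function flushAll() {
  // Run urgent renders before transition-style renders.
  queue.sort((a, b) => a.priority - b.priority);
  const results = queue.map((job) => job.work());
  queue = [];
  return results;
}
```

So after `schedule("list", LOW, ...)` followed by `schedule("list", HIGH, ...)`, flushing runs only the urgent render — the queued low-priority pass for that component was discarded, roughly what "canceled and rebased" means above.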


React 19, a library release, is completely separate from the eventual release of the React Compiler build tool (which is currently implemented as a Babel plugin wrapper around a full compiler implementation core).

That said, React Compiler will _depend_ on React 19, because it needs a new memo/caching hook that will be included in 19.


