
You can get around this identity problem by creating all derived/composed objects via `useMemo`, which ensures their identity only changes when their dependencies' identities do. But this approach comes with some issues:

- Relying on `useMemo` to preserve object identity assumes a semantic guarantee, which the React docs explicitly tell us not to do [1]. Not providing this guarantee seems ridiculous: if their cache is implemented correctly, it should be no problem.

- The alternative is to pull in an external lib that does provide this guarantee [2]. However, it's weird that bringing in an external lib is the more "correct" solution to this incredibly common problem (it's seriously relevant to something like half the components I write).

- Wrapping every bit of derived state in a `useMemo` hook is incredibly verbose and annoying, especially when you take dependency arrays into account. I feel like I'm writing generated code.

1. https://reactjs.org/docs/hooks-reference.html#usememo

2. https://github.com/alexreardon/use-memo-one
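To make the underlying "identity problem" concrete, here's a minimal sketch (the `compose` helper and `user` object are hypothetical, standing in for whatever gets recomputed on each render):

```javascript
// Each "render" composes a fresh object from the same inputs, so its
// identity changes even though its contents do not — which is exactly
// what trips up dependency arrays and React.memo comparisons.
const compose = (user) => ({ name: user.name, id: user.id });

const user = { name: "Ada", id: 1 };
const a = compose(user); // "first render"
const b = compose(user); // "second render"

console.log(a === b);                                  // false: new identity every call
console.log(JSON.stringify(a) === JSON.stringify(b));  // true: same contents
```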




Two thoughts.

First, you don’t rely on a semantic guarantee if you use useMemo for derived state. Avoiding rerendering counts as an optimization as far as the React docs are concerned (your program works if there’s an extra render), and this is in fact exactly what it was intended for. The docs you linked seem to agree: regardless of whether an offscreen component keeps or discards its useMemo cache, the code is correct and there’s at most one extra render.
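The distinction can be sketched with a toy cache that has useMemo-like semantics: a single slot, compared shallowly with `Object.is`, which React is free to throw away at any time. Correctness can't depend on a hit; only identity stability (an optimization) does. This is a sketch of the semantics, not React's actual implementation:

```javascript
// Toy single-slot cache with useMemo-like semantics. React MAY drop the
// slot at any time; your program must stay correct if it does.
function makeMemo() {
  let slot = null; // { deps, value } or null
  return (compute, deps) => {
    if (
      slot &&
      deps.length === slot.deps.length &&
      deps.every((d, i) => Object.is(d, slot.deps[i]))
    ) {
      return slot.value; // cache hit: same identity as last time
    }
    slot = { deps, value: compute() };
    return slot.value;
  };
}

const memo = makeMemo();
const first = memo(() => ({ doubled: 2 * 21 }), [21]);
const second = memo(() => ({ doubled: 2 * 21 }), [21]);
console.log(first === second); // true while the cache holds; false if dropped
```

If the slot were cleared between the two calls, `second` would be a fresh (but structurally identical) object: an extra render at worst, never wrong output.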

Second, while I agree with the verbosity complaint, I personally make a point to use useMemo as coarsely as possible. It’s often completely fine to compute all derived state in a single big lambda that returns one big (immediately destructured) object. It’s only when you have multiple pieces of derived state that update individually and are also all expensive to compute that you actually need fine-grained useMemo calls. And in this case, you can always think about extracting some of that logic into a helper function/hook.
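The coarse pattern looks something like this (names and fields are made up for illustration; in a component, `deriveAll`'s body would be the single useMemo lambda with `[items, query]` as its deps):

```javascript
// One coarse derivation: compute all derived state in a single pass and
// destructure the result, instead of one useMemo per field.
const deriveAll = (items, query) => {
  const visible = items.filter((it) => it.name.includes(query));
  const total = visible.length;
  const names = visible.map((it) => it.name);
  return { visible, total, names };
};

const { visible, total, names } = deriveAll(
  [{ name: "alpha" }, { name: "beta" }],
  "a"
);
console.log(total, names); // 2 [ 'alpha', 'beta' ]
```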

It’s not perfect, but I think it’s possible to avoid a lot of the pain most of the time.


I'm with you on thought #2. Regarding your first thought, however: if you want control over when `useEffect` callbacks fire, identity isn't just an optimization, it's a necessity. For example (used in another comment): if you're not using a smart intermediate layer like `react-query`, you can unintentionally trigger loading states and re-fetches if you're not closely watching dependency array identities.
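A sketch of why that happens, simulating the shallow `Object.is` dependency comparison React performs before re-running an effect (the `{ userId: 1 }` dep and fetch counter are hypothetical):

```javascript
// Simulated "run effect if deps changed" check, using the same shallow
// Object.is comparison React applies to dependency arrays.
let fetchCount = 0;
let lastDeps = null;

function runEffectIfChanged(effect, deps) {
  const changed =
    !lastDeps || deps.some((d, i) => !Object.is(d, lastDeps[i]));
  lastDeps = deps;
  if (changed) effect();
}

// A freshly composed object as a dep re-fires on every render:
runEffectIfChanged(() => fetchCount++, [{ userId: 1 }]); // fires
runEffectIfChanged(() => fetchCount++, [{ userId: 1 }]); // fires again!
// A primitive dep is stable:
runEffectIfChanged(() => fetchCount++, [1]); // fires (deps changed)
runEffectIfChanged(() => fetchCount++, [1]); // skipped
console.log(fetchCount); // 3
```

If the effect does a network fetch, those spurious firings are the unintentional loading states and re-fetches.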


I'm curious if you've used `useDeepCompareEffect`, from the use-deep-compare-effect npm package? I've found it to be pretty reasonable foolproofing for many of these identity questions. I'm well aware of Dan Abramov's objections to deep equality checking [1], but I still find it a bit easier for me and other devs to reason about when doing things like data fetching.

[1] https://twitter.com/dan_abramov/status/1104414469629898754
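The core idea can be sketched as stabilizing a deps array by value, so structurally equal deps keep the same identity across renders. This is a crude illustration only: it uses `JSON.stringify` as the comparison (with all its key-order and non-serializable caveats), whereas the real library uses a proper deep-equality check:

```javascript
// Crude sketch of deps stabilization by value (NOT the library's code):
// structurally equal deps get the previous array back, identity and all.
function makeDeepStable() {
  let last = null;
  return (deps) => {
    const key = JSON.stringify(deps); // stand-in for a real deep-equal
    if (last && last.key === key) return last.deps; // reuse old identity
    last = { key, deps };
    return deps;
  };
}

const stable = makeDeepStable();
const a = stable([{ page: 1 }]);
const b = stable([{ page: 1 }]); // structurally equal → same array back
console.log(a === b); // true
```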


Hadn't heard of that -- thanks for sharing!

Crazy how many choices there are in the mix (use-deep-compare-effect, the `JSON.stringify` approach mentioned by Dan, `useMemo`, and `useMemoOne`). Feels like a "pick your poison" scenario, as each one has a significant issue.

That being said, `useDeepCompareEffect` does seem the most "foolproof", and "foolproof" is probably more important than intuitive or performant in most cases.


Oh, that’s a good example. I’ll argue that you should try to use primitives (strings/numbers) as keys in those cases. But if you can’t, then you’re right that identity is critically important.
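A small sketch of the primitive-key suggestion (the `user` object is hypothetical; in a component, these arrays would be the effect's dependency list):

```javascript
// Depending on a primitive field instead of the whole object: the object
// gets a new identity whenever it's recomposed, but the id compares
// equal by value under Object.is.
const user = { id: 7, name: "Grace" };
const recomposed = { ...user };       // "next render" produces a new object

console.log(Object.is(user, recomposed));       // false → effect re-fires
console.log(Object.is(user.id, recomposed.id)); // true  → effect skipped
```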


Why do you need identity except for increasing performance?


Identity is necessary if you want to predictably trigger `useEffect` callbacks.

Example: if you're not using a smart intermediate layer like `react-query`, you can unintentionally trigger loading states and re-fetches if you're not closely watching dependency array identities.


Though this is also a strong reminder that just because useEffect can entirely replace service layers and state management layers such as react-query/Redux/MobX/Relay/what-have-you doesn't mean it necessarily should. (Ultimately, that's the bottom-line summary of this article.) useEffect is a very "raw", low-level tool; at some point it's a good idea to consider a higher-level one (maybe one built on useEffect under the hood).

Don't forget, too, that trying to do everything in raw useEffect code may be a sign that you're putting too much business logic in your views; abstracting that out can be a good separation of concerns no matter how you decide to structure that service layer (and/or which tools like react-query/Redux/MobX/etc. you choose to make it easier).


I can understand that re-fetches can occur, but that would be a performance rather than a correctness issue.

Nothing stopping you from keeping the prior response while loading the new one to handle loading states.
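That "keep the prior response while loading" idea can be sketched as a tiny state reducer (the `{ data, isFetching }` shape and action names are made up for illustration) that only swaps data when a fetch resolves:

```javascript
// Keep stale data visible while a new fetch is in flight: data is only
// replaced on success, never cleared when a fetch starts.
function reducer(state, action) {
  switch (action.type) {
    case "fetchStart":
      return { ...state, isFetching: true }; // old data stays on screen
    case "fetchSuccess":
      return { data: action.data, isFetching: false };
    default:
      return state;
  }
}

let s = { data: null, isFetching: false };
s = reducer(s, { type: "fetchSuccess", data: "page1" });
s = reducer(s, { type: "fetchStart" }); // a re-fetch begins
console.log(s); // { data: 'page1', isFetching: true } — prior response retained
```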


Disclaimer: I am working on a project which does not use hooks (or React or anything like that), but has a fairly complex set of data processing specifications. The project is > 10 years old. It's a product in the sense that it has direct end users, but a library/framework in the sense that its behavior is also defined by those users (it's not a UI around arbitrary spreadsheets, but that's a pretty good common frame of reference). Most of these questions are informed directly by work I'm actively doing, some by past work on distributed systems and UI/UX.

- What about anything computed from the previously fetched data? Will it be computed the same way?

- What about any user-provided state downstream? Will it be preserved? Will it still be valid?

- What about any user-provided state midstream? Even if preserved, will it evaluate the same way after a refetch?

- If you know mid-/downstream user input might be impacted, can you detect that and ensure each case has a desirable outcome, or does this responsibility spread to all of those cases?

- What about inconsistent network connectivity? Will it fall back to the previous state in case of timeouts? Is it even supposed to? (Is the request idempotent? Do you know? Can you know? If it’s not idempotent, will it recover after a timeout once network availability resumes?)

- What happens if user/event/timer-caused state changes while the request is in progress? How will computations be reconciled?

- What happens if network-provided data is also supplied by user input from other users? Do you have a reconciliation strategy?

- What happens if this first request triggers N requests? What happens if each of those N requests similarly has to answer all of the above questions?

- What happens if any one of these has a pathological case which causes it to cycle? What if it causes a cycle intermittently?

- What if your user is using the cheapest mobile available and has an expensive data plan?

- What if everything is really fast, actually, and your user has motion sensitivity?

I’m just rattling off instinctive thoughts after stumbling on this comment. There are surely more I could come up with if I were actually dealing with a concrete problem where unexpected redundant network requests were being evaluated with the question “is this more than a performance issue?”
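One of the questions above — what happens if state changes while a request is in flight — has a classic correctness failure mode worth sketching: a slow stale response clobbering newer state. A common guard is a request-sequence counter (the queries and manual "resolve" callbacks here are a hypothetical stand-in for real async fetches, so the race is explicit and deterministic):

```javascript
// Request-sequence guard: only the most recently started request is
// allowed to write its result; stale responses are discarded.
let latestSeq = 0;
let state = { result: null };

function startFetch(query) {
  const seq = ++latestSeq;
  return function resolve() {
    if (seq !== latestSeq) return; // stale: a newer request started since
    state = { result: `results for ${query}` };
  };
}

const resolveA = startFetch("a"); // user types "a"
const resolveB = startFetch("b"); // user types "b" before "a" returns
resolveB(); // fast response for "b" lands first
resolveA(); // slow, stale response for "a" arrives last — discarded
console.log(state.result); // "results for b"
```

Without the guard, the last response to arrive would win, and the UI would show results for "a" under a query of "b" — a correctness bug, not a performance one.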



