Preact Signals (github.com/preactjs)
159 points by no_wizard on Sept 6, 2022 | 105 comments



I really wish developers would move away from talking about performance as a benefit of using their libraries. Pretty much all libraries are "fast by default", especially in the trivial examples that library devs tend to publish to show off their work. No matter what library you use, if your app is slow it's probably not the library. When performance is part of the marketing, naive developers move to it expecting their app to magically stop being slow because they used the fast library. 99.99% of the time the libraries they were using before weren't the bottleneck, and moving to a faster library doesn't actually solve the performance problems in the application's logic. If the app gets faster it's usually because they removed the slow bit as part of moving to a different library.

If you build an application that's genuinely slow because of the library you're using to pass data around, rather than the way you're using that library, I will eat my hat, your hat, and many other hats. In fact, I will adopt an entirely hat-based diet.


Agreed. I once worked with an engineer who seemed to have dedicated their career to reducing bundle sizes.

I think it's easy to do things like this in the name of "better" because they're measurable.

I'd advise that if you've not accumulated Gandalf levels of experience yet, or you don't genuinely need to reduce bundle sizes or eke out an extra 10ms of execution time, your energies are best focused on creating better software for users (here's the important bit) _that doesn't hurt the developers who'll need to modify it in future._

Why? Because bad code compounds in cost. When you make a decision and write code, it always results in other tangential decisions that need to be made in future. If your decision makes future decisions harder, they're more likely to be wrong. And then it makes all the tangential decisions after that even harder still. Compounding, like interest, but the bad kind: debt.

So it's far more valuable to invest your time in learning to make inoffensive decisions than to go swapping libraries or trimming bundle sizes in the name of kilobytes.

On that front, it's more noble to choose a framework because it makes the pit of success as wide as possible, helping even the least skilled engineers make changes that are simple to fix or undo when needed, than to save 30ms per user, only to lose it when someone screws up because of some bad code you wrote in one of the features.


You're just flat out wrong. The original point is correct, all modern frameworks have comparable performance, which is why the app will perform similarly, no matter which framework was used.

However, bundle size has a direct impact on loading speed, and there are multiple thresholds where even kilobytes can matter.

The only time you can ignore it is if your users will exclusively access your website from a direct broadband connection.


To some people, loading speed is everything. My experience has taught me that it is not.

I work on web apps and I've learnt from experience that optimising software for simplicity and ease of change over a few milliseconds or kb is far more valuable than a lot of people would have you believe.

Your users will barely notice a few kb or ms. But your colleagues do notice software complexity. Your stakeholders do notice delivery times. Your directors do notice costs. And ultimately you'll notice your share value.

That said, you might benefit greatly from fine-tuning your bundle sizes and library choices. But what I'm saying is that maybe you'd make a different choice if you took the time to consider the broader activity of software development.


You are correct that loading times are less valuable for B2B apps or apps behind a login. But the main issue is that there is no simple or straightforward process for achieving load-time / run-time performance if it isn't followed from the start. You build the same capabilities, they just end up smaller.

I worked for an analytics product that never really bothered about load performance or any of the web vital metrics. When they wanted to strike a deal with a Fortune 20 customer, they were questioned on the performance of the app. It was not good. It took around 5 to 10 seconds to load everything on a MacBook Pro.

Now all of a sudden we needed to fix the performance for all the pages. I helped set up the processes we needed to follow to measure and focus. This was one of the most complex scenarios you can face in an Enterprise product. Touching old code for the sake of speed.

btw, the biggest perf improvement we realized was when we switched from React to Preact. There were some issues to iron out but it was okay considering the perf benefit.


IMO the other reason to not care about bundle size is that the browser is going to cache it for you for free.

All the API calls you make in a SPA are bound to be much much slower than your asset load time.


This. Unless you're serving static content, the impact of bundle size is minimal. Of course there is a difference between 500kb and 5mb. Just try not to be the 5mb guy.


Even if the bundle is always cached locally, a large bundle can hurt startup performance.

A little discipline goes a long way. You probably don’t need to fight against every few kb, but having a bot post the bundle size in every commit can help a lot. If you think you can ignore it, you will, until you can’t ignore it, and then it’s a very annoying problem to solve.
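
Something like this minimal sketch (the file path and budget are made up) is enough for a CI step to flag a regression in the commit that introduced it:

    // Hypothetical size-budget check: fails the build when the bundle
    // grows past an agreed limit, so regressions surface immediately.
    import { statSync } from "node:fs";

    const BUDGET_BYTES = 250 * 1024;               // made-up budget
    const size = statSync("dist/bundle.js").size;  // made-up output path

    console.log(`bundle size: ${(size / 1024).toFixed(1)} kB`);
    if (size > BUDGET_BYTES) {
      console.error(`bundle exceeds budget of ${BUDGET_BYTES / 1024} kB`);
      process.exit(1);
    }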


But some libraries are not built with performance in mind, and libraries like [momentjs](https://momentjs.com/) have proved to be performance bottlenecks on many occasions.


That's why we have day.js!


Game UI in the DOM. When you need to update something 60 times a second all performance matters.


Every website, web app, browser-based game, etc should be nailing 60fps. That's the baseline minimum framerate for web software in my opinion.

However...

window.requestAnimationFrame in browsers (which is what games will be using to manage their main loop in the background) matches the refresh rate of the display. If you're looking at a game on a laptop that's probably 60Hz, but if you're looking at it in a VR display it's probably 90Hz, and on a top-end flagship phone it might be 120Hz. Games, even games written in React, manage to do this quite easily so long as the dev doesn't screw it up. You definitely don't need a 'faster state library' to do it well.
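
For reference, the loop itself is tiny; a bare-bones sketch (update/render are placeholders) where game logic scales with elapsed time instead of assuming 60Hz:

    // requestAnimationFrame fires once per display refresh (60/90/120Hz),
    // so the game advances by the measured delta rather than a fixed step.
    let last = performance.now();

    function frame(now) {
      const dt = (now - last) / 1000; // seconds since the previous frame
      last = now;

      update(dt);  // placeholder: advance game state by dt
      render();    // placeholder: draw the current state

      requestAnimationFrame(frame);
    }

    requestAnimationFrame(frame);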


Hitting 60 fps for a normal web app is usually not about actually updating the DOM 60 times a second. Two different issues here.

Yes, of course you're using RAF for the game loop. Not sure of the relevance of that comment. I just said 60 to simplify things.

> Games, even games written in React, manage to do this quite easily

That's a really broad and generalising comment. All games are different. My way works but only because I'm using a library that is fast enough. Sure, maybe I should batch update calls and do all sorts of optimization techniques on my end, but why would I when I can just do it like this and focus on the core gameplay instead? Are you saying that libraries should not be performant in order to punish lazy developers like me?


> Hitting 60 fps for a normal web app is usually not about actually updating the DOM 60 times a second. Two different issues here.

It is though. We've all used web apps where something doesn't nail 60fps and it's so jarring. I used one yesterday where typing in a textbox was laggy. WTF is the dev who made it doing when a textbox is slow?!

This is the sort of problem that developers face and, if they're not great at their job, they reach for a library that claims to be faster rather than looking at their code to work out why it's slow. The impact of marketing something as "magically better perf by using this super special library!" is that developers stop trying to make things fast enough themselves.

Worse, some people start believing all web apps are slow because the web must be slow by default if you need these clever tricks that only libraries can provide to be fast. It's nonsense. Browsers are pretty damn fast and if you encounter a webapp that's slow it's almost guaranteed to be the fault of the developers who made it and not the libraries it's built on.

There is no library that a bad dev can't make slow software with.


But there are libraries that a bad dev can make fast software with. You can accidentally make a laggy textbox in one framework more often than in another; that's why which framework you use matters. The textbox is just an example, there are myriad ways the DOM can behave badly without the developer intending it, even when the application-layer logic is perfect.

Each framework has its quirks. Do you optimize for correctness or speed? Does every update need to be 100% correct/consistent or can you allow it to drop updates? These are performance details that the framework should provide tooling to deal with, or it may not, and it may have abstracted away all these decisions and made it impossible to fix the textbox lag unless you hack into the DOM manually. Which framework you use matters a lot.


> Game UI in the DOM. When you need to update something 60 times a second all performance matters.

What sort of Game UI has to be updated 60 times per second and would be implemented with something like React?

In-game UI like healthbars would be implemented in canvas/webgl or whatever you use for the game itself. The UI around the game can be implemented as DOM and overlaid, but it rarely has to be updated 60 times/second.


I’ve built web games.

Don’t bother trying to render the DOM 60 fps, it’s unnecessary. Render your actual game with canvas and use regular react for your UI. If for some reason your UI needs to render that often, move it into canvas too.


Pretty strange to call it unnecessary when it's magnitudes easier to build the UI in the DOM than to re-implement it in canvas. Yes, the UI needs to update that often because every game tick there are a lot of numbers being updated (think colony simulator). It works, so I'm not sure what the problem is or why I would change it.


If you're talking about things displaying numbers in real-time, it's not magnitudes easier to do it in the DOM, that's a gross exaggeration. If you're talking complex form controls then maybe, but I wonder what kind of game would have complex form controls that also need to be rendered at 60 fps.

In my experience anything that needs to be animated should be done with canvas, otherwise at some point you're either going to hit performance walls or end up fighting the DOM.


I'm not talking about displaying just numbers. Bars, numbers, animated things with shadows, filters and tweens. All trivial to do in the DOM with CSS while making it from scratch in a canvas (not using any engine) would be extremely cumbersome.

And even if it's just text that's explicitly something you shouldn't do in canvas if you care about performance: https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/...

> Avoid text rendering whenever possible.

So then I'm forced to create rendered font glyphs and use tooling for the texture atlas when drawing characters. Again, it's much easier to just overlay an element and do it in the DOM.


The implementation involves unsafe monkey-patching:

1. For React, a hook is injected via __SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED into every single component (!), regardless of whether it uses signals or not.

2. For React, React.createElement is patched so that it can render signal values as Text nodes.

3. For Preact, they claim it's using a "pluggable renderer", yet they monkey-patch the global shouldComponentUpdate (and they don't call the old one, so they break other libraries that patch it; see the sketch after the links below).

[1] - https://github.com/preactjs/signals/blob/d25e8bac09c94ed3bad...

[2] - https://github.com/preactjs/signals/blob/d25e8bac09c94ed3bad...

[3] - https://github.com/preactjs/signals/blob/d25e8bac09c94ed3bad...
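
On point 3, the usual courtesy when patching a shared hook is to capture and delegate to whatever was installed before you; a generic illustration of that pattern (not the library's actual code):

    // Generic monkey-patching sketch with a stand-in host object: keep a
    // reference to the previous implementation and call it, so earlier
    // patches from other libraries keep working.
    const host = { shouldUpdate: (prev, next) => prev !== next }; // stand-in

    const previous = host.shouldUpdate;
    host.shouldUpdate = function (prev, next) {
      const mine = myExtraCheck(prev, next);
      const theirs = previous ? previous.call(this, prev, next) : true;
      return mine && theirs; // both the new check and the old one get a say
    };

    function myExtraCheck(prev, next) {
      return true; // placeholder for whatever the patch actually decides
    }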


For Preact, there is a TODO pointing to [1] which was merged in 10.10.3, so hopefully that can be fixed soon?

[1] https://github.com/preactjs/preact/pull/3671


Update: it completely breaks with React 18 and strict mode enabled - https://github.com/preactjs/signals/issues/70


Does someone know why the counter for this issue is at "70", but the repo only shows "21" issues in total? Maybe there are deleted issues that are not part of the total counter?

https://github.com/preactjs/signals/issues?q=is%3Aissue


No, that's because GitHub uses the same number for both issues and pull requests. We did about 70 pull requests. You can verify that yourself by going on an issue and incrementing/decrementing the number until you hit a PR at which point you'll be redirected.


Good catch. If this were an internal library, it wouldn't have passed code review.


the best vocabulary to describe the monkey patching react ecosystem


Is the main draw of this performance? I gotta say, I think it's a lot more confusing than React hooks.

I actually like how useEffect has an explicit dependency array because you know when it will be triggered. Signal effects are implicit, and if you don't want it to trigger you have to use .peek(). I think I prefer React's explicit accessing of previous state to peeking when needed.

Why can you destroy effects? That seems like a recipe for disaster. Again, I like how hooks are permanent top-level calls. You know they always run.

When are effects executed? React makes it explicit that they are executed after the render.

What is the purpose of batching other than performance? Are there any possible negative effects of making multiple signal updates without batching?

Writing this has made me a realize I really like the explicitness of React's model.


> Is the main draw of this performance?

I guess performance in some cases, but mainly better developer experience.

> I actually like how useEffect has an explicit dependency array because you know when it will be triggered. Signal effects are implicit, and if you don't want it to trigger you have to use .peek(). I think I prefer React's explicit accessing of previous state to peeking when needed.

Fair, but in real code, or at least in the code that I write, peeking/untracking is not nearly as common as just reading a signal; on average this should be cleaner.

Also, with this execution model you can support conditional dependencies. With the dependency-array model the set of dependencies is fixed, so your effect will potentially re-execute even for things you don't actually read at that point.

> What is the purpose of batching other than performance?

There's no purpose to batching other than performance.

> Are there any possible negative effects of making multiple signal updates without batching?

A signal can cause other effects/memos to execute, and they can do arbitrary computations, so the cost of over-executing can be arbitrarily high.
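
A small sketch of that with the core API (signal/effect/batch from @preact/signals-core):

    import { signal, effect, batch } from "@preact/signals-core";

    const firstName = signal("Ada");
    const lastName = signal("Lovelace");

    // Runs once now, then synchronously after every write to a signal it read.
    effect(() => console.log(`${firstName.value} ${lastName.value}`));

    // Unbatched: the effect runs twice, once with the inconsistent
    // intermediate state ("Grace Lovelace").
    firstName.value = "Grace";
    lastName.value = "Hopper";

    // Batched: both writes are applied, then the effect runs a single time.
    batch(() => {
      firstName.value = "Ada";
      lastName.value = "Lovelace";
    });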


> on average this should be cleaner

I care more about explicitness than cleanliness though, especially if my effect touches 3+ state variables in a relatively large function.

I can definitely believe that the average case is better with signals, but is the average case even important? I don't really care about the average case with something like state management because all the libraries are pretty good for that. It's the edge cases and complex state logic I care more about.

Yeah useState, setState, prevState blah blah blah is a lot of overhead for the average case but I really like the explicitness it provides me in the not-so-average cases.

What do you mean by conditional dependencies?


Each case and each person is different, maybe dependency arrays just work better for you. Potentially huge effects that read a couple of signals shouldn't exist, because you can always read the signals at the top and pass their values to an external function containing that huge amount of code.

> What do you mean by conditional dependencies?

Random example:

    useEffect ( () => {
      if ( !supportsThemes () ) return;
      if ( darkMode () ) {
        doSomething ( darkTheme () );
      } else {
        doSomething ( lightTheme () );
      }
    } );
In a reactive system this is all you have to write. If the first check never passes you only read "supportsThemes", so you will only be listening to that. If it passes in the future then maybe you'll start listening to "supportsThemes", "darkMode" and "darkTheme", but not "lightTheme". If "supportsThemes" becomes false again you'll automatically stop listening for "darkMode" and "darkTheme".

In React this works differently: all those signals become dependencies in the dependency array, the entire array is diffed each time, and whenever any of them changes your effect is re-executed, even if something you don't care about at that moment changed.
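
For comparison, roughly what the same effect looks like with a dependency array (reusing the names from the sketch above, now as plain values in scope rather than signal reads):

    useEffect(() => {
      if (!supportsThemes) return;
      if (darkMode) {
        doSomething(darkTheme);
      } else {
        doSomething(lightTheme);
      }
      // All four values have to be listed up front; the effect re-runs when
      // any of them changes, including lightTheme while dark mode is active.
    }, [supportsThemes, darkMode, darkTheme, lightTheme]);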

Basically the same problem exists for hooks too: all hooks must be called all the time, which as a side effect implies that some hook calls become nonsensical. If I don't have an onChange function, what do I call useDebounce on it for? Should the hook support receiving a nonsensical null argument? Should I call it with a pointless noop function? This problem doesn't exist in a reactive system either, fundamentally because dependencies are dynamic: if you don't want to call a hook you can just not call it. If you want to nest effects you can do that.


> you can always read the signal at the top and pass their values to an external function

Which is basically re-inventing a dependency array... except it's not explicit and now I have to look at another function as well.

I don't understand how you are supposed to reason about complex effects with the signals model.

> Random example

Why would I want to complicate things with conditional effects though? It makes it so much harder to debug and reason about.

> the entire array will be diffed each time, whenever any of them change your effect will be re-executed, even if something you don't care about in that moment changed

An array diff on the order of ~10 elements isn't an expensive operation and I can implement conditional logic in React just like you've demonstrated here. What is different about signals in regards to conditional flows?

> if you don't want to call a hook you can just not call it. If you want to nest effects you can do that

I'm not convinced this is desirable. Complex implicit reactive effects seem fragile and difficult to debug and reason about. This is somewhat mitigated with React because of the Rules of Hooks, which gives you expectations about how your code/system should behave but even then people have complained about them being difficult to understand.

This feels like a step backwards in terms of complexity. Without better tooling/observability (E.g. easily trackable effect history, value history, etc that is integrated into the dev env) implicit reactive dataflow based systems feel like they will be way harder to manage.


You’ve basically presupposed your conclusions re: the reactive approach, so yeah, if I were you I wouldn’t bother.

Incidentally the “rules of hooks” are frustrating precisely because the expectations they impose around hook use are confusing and unintuitive.


I think dataflow mechanisms are really interesting, but I don't see them as being something easy to build a complex system on. I stated this very clearly, saying "complex implicit reactive effects seem fragile and difficult to debug and reason about".

If you're building something that is heavily focused on computing values rather than more event driven effects I could see a reactive system being more useful and appropriate though.

I'd love to hear from developers/teams who have built complex systems with a reactive framework (Solid, Signals, etc) to hear about the upsides and downsides of doing so.


> I stated this very clearly, saying "complex implicit reactive effects seem fragile and difficult to debug and reason about".

It is definitely easier to reason about dataflow in a good incremental library with dependency autotracking ("Self-Adjusting Computation"[1]) than to reason about nondeterministic concurrent rendering in React :)

1. https://www.cs.cmu.edu/~rwh/students/acar.pdf


> but is the average case even important?

I believe it is. Otherwise there wouldn't be such a huge gap between theoretical React performance and real-world websites. React's developers expect users to do all the required optimization themselves. But web developers often don't even care (as long as they can ship without serious complaints).

Developers always tend to do it the easy way instead of the proper way.

If the easiest way is messed up, then so is the result.


> Also with this execution model you can support conditional dependencies

This is an anti-feature.

To elaborate, a “conditional dependency” is still a dependency as much as any regular dependency would be.

The only difference is that the conditional check is now taken outside the dependent code (where the condition is both explicit and colocated with the dependent code) and put somewhere else, becoming less explicit and harder to find.


The conditional check in the example code I provided is pretty much just the if/else that one writes, that's as closely located as you can possibly write it I guess. Like if some signals are read under a branch that's never taken they are just never read, that's it.

Also, maybe there are good reasons for disliking it, but this seems harder to mess up: by default all dependencies are accounted for, automatically, which is not the case with dependency arrays.


> I guess performance in some cases, but mainly better developer experience.

It has better developer experience when you apply it to optimize performance. It is impossible to beat from-scratch recomputation in terms of DX.


It's a matter of opinion at best. For example I hate dependency arrays, and I find the rules of hooks very weird; I understand why they are there, but to my brain they just fundamentally feel like workarounds for the lack of signals.


> for example I hate dependency arrays

It is also an optimization and I agree that it is worse in terms of DX than autotracking dependencies.


Completely agree. I'd always be open to typing a lot more if it meant reading that code back will be obvious later. The true test is whether someone who knows the language but not the framework can read a piece of code and roughly understand what it does. Looks like Preact Signals utterly fails this test and that makes me sad.


Why do people like reactivity à la Solid, Vue, MobX etc? The reason I moved away from these towards React was because reactivity introduced subtle bugs with two way data binding, where it was hard to tell where data was being mutated (v-bind, computed() etc).

In contrast, with React I don't really have to worry about that because it explicitly mandates one way data binding with only a few places that data can be changed, such as in click handlers. I can basically treat the state as a render loop that runs from top to bottom every time. If I want to preserve state, I use useState, and if I want to cause an effect, I use useEffect.


One-way or two-way binding is pretty much completely orthogonal. You can have both approaches in both systems. For instance Solid is pretty close to React in unidirectional flow and read/write segregation.

I think the appeal of a reactivity system is different: for some people it's better performance, for others it may be better memory usage, for others it may be better DX as a bunch of "workarounds" stop being needed (like dependencies arrays, rules of hooks, useRef, useEvent...), and for others it's just simpler to understand and reason about.


Vue also has one-way data flow, but uses some DX to mask that as something that looks like Angular's original two-way binding.

These days with Vue 3, it’s even more explicitly one way.


I think of React as a functional language, not exactly, but as a similar mental model. Once you start thinking of React in such a way, a lot of their decisions make sense. Just as in Haskell we need to use somewhat more convoluted ways to make things work, such as monads, we do the same in React, using useEffect to encapsulate side effects (hence the name). We also keep the tree immutable for much the same reason as in functional languages, because immutability tends towards fewer bugs as we don't mutate internal state, but we optimize such immutability through things like a virtual DOM.


https://twitter.com/ryancarniato/status/1353801411331416070

Both React & Solid are consistent here, but I find Solid to be more intuitive.


I can see the advantage of using React's approach. But why would anyone in their right mind want to not batch DOM updates by default?


I believe Solid does have batching, and continues to work on the problem?


> I can basically treat the state as a render loop that runs from top to bottom every time.

If you use Vue like this, I think you will never use Vue properly. In Vue, state IS the UI. The UI is always a direct mapping of state. You set up state, you describe the mapping between state and DOM, and the rest will be maintained by Vue. Data is the trust anchor of everything. The UI is derived from it and you shouldn't even think about updating it yourself.

And Vue also provides lots of utilities specifically for mapping or handling data changes. (Personally I think the most important one is computed, as it is the bridge between the different forms of the same state.)


I read over the React integration here, and it seems like an unsafe approach for React 18+: it monkey-patches React's internal state transitions and mutates refs during rendering, which can break during concurrent updates.

Some abuse of the internals may be "safe" because React Devtools relies on this stuff, but I would be very nervous betting production correctness on this strategy long-term. Maybe it's worth it? Really hard call.

Here's the details: https://github.com/preactjs/signals/blob/main/packages/react...


I had a similar response when reading "React adapter allows you to access signals directly inside your components and will automatically subscribe to them."

I don't believe this is worth it. The implementation is fairly dangerous and fragile as you point out, all in order to simplify something fairly clean/reacty such as `const value = useSignalValue(signal)` into `signal.value`.
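
For what it's worth, a hook like that can probably be built on public APIs alone; a rough sketch, assuming React 18's useSyncExternalStore and the subscribe method that signals expose (useSignalValue is just the made-up name from above):

    import { useSyncExternalStore } from "react";

    // Hypothetical adapter: let React subscribe to a signal through its own
    // external-store hook instead of patched internals.
    function useSignalValue(signal) {
      return useSyncExternalStore(
        (onStoreChange) => signal.subscribe(onStoreChange), // returns an unsubscribe fn
        () => signal.value
      );
    }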


The difference (as I understand it) is that you don’t need to list your dependencies as you would with a hook, and you can call it anywhere. I find this approach conceptually simpler than hooks.


Sure, but you could at least wrap the whole component in observer(MyComponent) like Mobx and avoid all this skullduggery with the internals.


Invisible dependencies and the loss of referential transparency sound more complex to me than the opposite (upfront dependencies and referential transparency), not simpler.


Yeah, it seems like a lot of effort to do things in a fragile way when regular hooks are capable of getting similar results in a standard way.


Don't get why you would use it over Solid (https://www.solidjs.com). I guess to keep compat with React ecosystem but moving React components to Solid is trivial. Solid signals have better DX & I think are more performant.


I recently ported my side project app from React to Solid and it was surprisingly straightforward. I think this is an advantage of Solid over Svelte. The concepts and syntax are quite similar and it eliminates the manual dependency tracking headaches from React.


I just browsed the Solid website for the first time. It looks somewhat interesting, but one thing I noticed is that rollup seems to be the preferred way to do builds?

I kind of like keeping things simple by using just Preact and esbuild.


I recently started using Solid myself, and I was initially skeptical about that too. I love esbuild as it’s simple and fast, and distrust all these other slow and complex JS tools, especially Webpack.

But it actually uses Rollup and esbuild by default (via Vite): Rollup for the HTML/CSS and esbuild for the JS/JSX. It works great and it’s plenty fast.


I think the preferred build system is Vite. Assuming you're looking at https://www.solidjs.com/guides/getting-started, they mention Vite first, and under that is manual setup for webpack/rollup/etc if you don't use the template. I can vouch for Vite being nice and simple for basic apps at least.


You can use Solid without any build tooling (with some caveats). I’m surprised Rollup is prominent at all on the site. The preferred tooling for Solid is its custom Babel plugin, but you can skip it and use hyperscript or html tagged templates. From what I’ve seen on GitHub they’re generally moving towards recommending Vite and their Vite starter is as simple as you can get (far simpler than ESBuild projects I maintain, there isn’t even a need for config or CLI flags or anything), but there’s an ESBuild plugin available. Ultimately all current solutions just wrap the Babel plugin in some form.


I think if you import it from solid/h then you can use TS's transform. But I guess you would lose compatibility with the ecosystem, as Solid would work slightly differently.


Yeah, IIRC the two main caveats are that you don’t get some compiler optimizations and you have to wrap prop access in functions. The former probably won’t affect many people, but Ryan has said many times that the minor semantic differences have driven people away from Solid entirely.


> I’m surprised Rollup is prominent at all on the site.

Solid and Rollup are by the same author.


I just double checked because this comment made me do a double take, but that isn’t true. Solid was created by Ryan Carniato, Rollup by Rich Harris. Both have R names and I do sincerely get some R names confused in the web tech world. But I’m reasonably certain Rich wasn’t in a band which was the namesake of either technologies, nor has a tattoo from being in that band. ;)


sorry, i was thinking of svelte, not solid


Solid uses a custom Babel transform. Interestingly performance and simple build process aren't mutually exclusive, you can get both, the transform provides (arguably) convenience features mainly.


Probably library support for React. React is treated as the default for most libraries while Solid, Svelte and Vue have a fraction of them. One reason I moved to React from these.


Perhaps a better URL is this blogpost: https://preactjs.com/blog/introducing-signals/


Preact Maintainer here.

Agree, it explains the motivation behind signals much better. I wish HN would've linked to that instead of the README of the git repository.


Maybe I’m thick, but I don’t see what makes this (or solid, or svelte) different from Knockout or RxJS. Does this somehow avoid the tangled dependency chains and complex push-based reactivity issues that I deal with in RxJS/Angular?

To me the brilliance of React is the VDOM, which allows you to treat your components like a render loop in a game.

What’s the piece that I’m missing that separates Signals from RxJS observables?


I think this is very spiritually similar to the intent of Solid, and Solid’s creator often cites Knockout as an inspiration. I haven’t even looked at his Twitter yet since this announcement but I expect he’ll have a lot of nuance to offer on the subject.

At a very high level, the thing that this offers versus (anything) is a very minimal reactivity solution that’s small and designed specifically to be integrated into Preact’s VDOM. Other than that, it’s a whole lot of details at a level that most people won’t care about beyond the minutiae of implementation and the nuance people actively implementing these libraries can provide.


Afaict this allows you to retain the one-way data model that makes react easy to reason about. This is simply one way to handle your data model (vs "useState", for example). You'd have actions that update the signals (they seem more or less to be streams), and then your components are pure functions of the signal data.


I also noticed the similarity to Knockout. The main difference in the API seems to be the use of a value property vs. Knockout's function call to access the value.

The computed function seems almost identical.


Minor aside, I really dig that there is a synchronous `signal.peek()`. Common accepted dogma is that an async value ought only to be accessible asynchronously, but having the capability to do the "you probably shouldn't" thing (getting/seeing current state synchronously) is, imo, the sort of escape hatch that adds potential & makes greatness possible.

There are a lot of reasons we let fear govern language design & direction, but I really believe we also need to give a lot of credit to what we make possible (or not) in these decisions too. For a long time, keeping async & sync separate has been the law of the land. I hope someday we can see values & their changes with less of an everything-neatly-in-one-box-or-another view, & more integratively.


I don't think that's a sync vs async difference in the API. peek() is related to reading the value without creating a subscription on it: https://preactjs.com/guide/v10/signals#reading-signals-witho...

In general signals looks like a synchronous but lazy system. I don't see anything that's actually async in there?


Not sure how you are getting this conclusion? The example code is as follows:

  effectCount.value = effectCount.peek() + 1;
That is definitely, absolutely, clearly, 100% for sure sync code. Adding +1 to the result of a function call (with no await on it) makes it clear that .peek() here is synchronously returning a value.


Correct, .value does exactly the same thing. The only difference is .value subscribes to updates. The framework is all synchronous (but sometimes lazy).
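
A tiny illustration of that difference, using the documented signal/effect API:

    import { signal, effect } from "@preact/signals-core";

    const tracked = signal(0);
    const untracked = signal(0);

    effect(() => {
      // .value subscribes this effect to `tracked`; .peek() reads without subscribing.
      console.log(tracked.value, untracked.peek());
    });

    tracked.value = 1;   // effect re-runs
    untracked.value = 1; // effect does not re-run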


Very similar model to MobX. Main difference seems to be that deep accesses/mutations aren't (?) tracked, which is unfortunate. But it does mean there's a little less magic going on. In any case, I'm glad to see more people doing this style of reactivity


Deep reactivity can be implemented on top of the provided functions, I have a version of that for Preact that doesn't seem super broken here: https://codesandbox.io/s/sparkling-cherry-bn9bu5?file=/src/i...


I guess it's similar to this MobX :)? https://github.com/mobxjs/mobx


I read the intro and it looks similar to Jotai[1].

But the value getter and the computed tracking look nicer. Has anybody tried it with React? Pros/cons?

[1] https://jotai.org/docs/basics/primitives


I see lots of negative comments, but this actually looks great to use! I'm all for having more solutions for the global state problem.


Is computed just a selector?

Is a signal just a useState hook?

I’m sure the answers are no. But I’m struggling to see it in the provided examples.


These are difficult to understand concepts imo. I have written an annotated implementation of a simple reactive system (you may want to run it through Prettier if that's your thing): https://github.com/fabiospampinato/flimsy/blob/master/src/fl...

A signal is basically a function that you have to go through to read and write a value. In the case of Preact the function is split into getter and setter assigned to the "value" property. The interesting thing about signals is that they can tell their parent computation to re-execute, automatically, without any manual dependency array.

A computed is a signal generated from a function rather than a primitive. So like the function that generates the value is re-executed automatically whenever any of the signals read inside it change.
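
In Preact's flavour that looks roughly like this (signal/computed from the core package, with reads and writes going through .value):

    import { signal, computed } from "@preact/signals-core";

    const count = signal(1);                          // read/write via .value
    const doubled = computed(() => count.value * 2);  // tracks count automatically

    console.log(doubled.value); // 2
    count.value = 5;            // no dependency array anywhere
    console.log(doubled.value); // 10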


But your description matches useState exactly...


Sorry, you seem to hint at knowing this already. But, to clarify,

* Hooks - `setState` reruns the full code for your component.

* Signals - only the dependent effects/computed/JSX is run.


If the runtime is capable of this, why doesn't it do it for setState?

What I'm complaining about is that the signals seem to have the exact same semantics and use cases with different implementations and performance characteristics.


If you are posing the question, "why does Preact not make hooks behave like signals?", it very likely is to retain similarity with React where you only have hooks. For better or worse, Preact is known to most folks as an alternate implementation of React with the same API.

SolidJs shows you can get most of the functionality of hooks using signals. Who knows, if someday most Preact users converge to using signals for state management, maybe they will throw out hooks altogether. But in their current context, supporting both hooks and signals feels a reasonable choice to me.


But the API of signals and hooks is the same... What would change if new signal thing was called useState?


See: https://preactjs.com/blog/introducing-signals/#signals-to-th...

EDIT: scroll up to "The global state struggle" first to get more background/comparison to useEffect if it isn't clear.


Really interesting to see Preact adopt this kind of model.

I have been working on a similar programming model for a while, where this kind of state management is the only approach:

https://github.com/sunesimonsen/dependable-view https://github.com/sunesimonsen/dependable-state

The library has other kinds of agendas like being able to run without a build step, being really small and allow multiple versions in the page.

Examples: https://github.com/sunesimonsen/dependable-example-hackernew... https://github.com/sunesimonsen/dependable-example-todomvc


Seems very similar to what I have been using for a while now: https://github.com/jorbuedo/react-reactive-var. I reckon that library is based on the reactive vars in Apollo client, but without the unnecessary GraphQL code. Reactive vars are great to work with, the implementation is only a few lines of code, it is very predictable, and it doesn't require monkey patching your React internals.


This is knockout observables reinvented.

Stay far far away if you want your app to be maintainable beyond 6 months.


The syntax looks very similar to Vue 3's composition api. Nothing wrong with that.


I suggest using jotai for atomic state management. What they're trying to do is nice, but I think it's full of unnecessary complexity.

https://jotai.org/


Can this be reused in Node.js? I'm not gonna waste my time when there are Rx and MobX. They are pure reactive programming libraries instead of trying to be a front-end framework.


Preact maintainer here.

Yes, signals can be used independently of any framework. It runs everywhere JavaScript runs. To use it in node without any framework adapters, import the `@preact/signals-core` package, which just exports the reactivity API.
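
A minimal framework-free sketch of that, assuming a Node project with ESM enabled:

    import { signal, computed, effect } from "@preact/signals-core";

    const count = signal(0);
    const doubled = computed(() => count.value * 2);

    // effect runs immediately, and again whenever a signal it read changes.
    const dispose = effect(() => {
      console.log(`count=${count.value} doubled=${doubled.value}`);
    });

    count.value = 1; // logs again, synchronously
    dispose();       // stop reacting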


Why would anyone use Preact? If I'm a team lead, why would I choose to use it over plain React? Would not my sense of risk be triggered at least a little?

I've talked to many React engineers and none of them have a good reason to use a library like Preact.

Once again, the author is content to reinvent the wheel unnecessarily and publish it as something that can/should be used in production. It's a useful toy, a nice exercise in "what can be done" but not much else. Yes, I'm aware of the countless "useless" things that get posted to HN all the time - none of them purport themselves to being anything more than what they are.

I have never met anyone with any sense using Preact, and I see no compelling reason to use Signals either when there are countless other libraries doing similar things, with developers that actually use them.


Is shipping less JS important for your use case? If so, Preact can be 10x smaller than React. If only you could do that with all of your dependencies, that'd be outstanding. According to js-framework-benchmark it's also faster at runtime (https://krausest.github.io/js-framework-benchmark/current.ht...).

If startup and runtime performance have 0 value there's no reason to use Preact. If startup and runtime performance have greater than 0 value there's some reason to consider using Preact.


FWIW, one specific counter-example would be Etsy choosing to adopt Preact to migrate upwards from React 15:

- https://github.com/mq2thez/blog/blob/main/upgrade-react-etsy...

- https://www.etsy.com/codeascraft/mobius-adopting-jsx-while-p...


My top use case for preact was when I needed to embed code into customer’s sites and wanted it to be as lightweight as possible since it was one of my selling points. Think embedded systems but in a web context (like injecting a web form in the middle of an existing page or a modal, completely independent of the rest of the page)

Another example, you know those UIs that might drop down when you click on a chrome extension icon? No need to use react for a simple extension drop down.

If you're building a 1st party platform or webapp then React makes more sense. But there are clearly use cases for React-lite.

Regardless, preact core is boring and stable (like 5 years old). I built a multi-million dollar company on it previously for what it’s worth.


> I built a multi-million dollar company on it previously for what it’s worth.

Not a particularly useful anecdote, unless you're telling me your choice of Preact was some kind of linchpin. And I'd bet it was most certainly not.


> unless you're telling me your choice of Preact was some kind of linchpin

Oh it definitely wasn’t. The other reasons are valid though.



