And it might be worth putting in your HN profile that you are the creator of SolidJS. :)
And good point thank you about updating my profile.
The other reactive/observable-based frameworks I've seen (e.g. Cycle) very much make the observable streams the centerpiece, and I always felt that was distracting, and that nuances of how the underlying observable stream library worked quickly got in the way.
Solid still puts components firmly at the center, just like React, but replaces React's state concept with fully reactive observable state, called "signals". You use them pretty much like useState in React, but deep inside it's an observable stream of changing data, and you get all the fine-grained update control that comes with that.
Also, I love how noun-heavy it is. Resources, tracking scopes, effects, signals. It's just like how React moved from "do this thing after the component updated" to "ok we have this concept called an effect", but extended to more topics such as dealing with async data loading, when exactly a signal is observable, etc.
I am confused about why this is a positive.
React Hooks are the reason why I want to stop using React. They are confusing and seemingly magical, compared to lifecycle methods that make a ton of sense. While I agree they can make complex things easier, they are also incredibly easy to get wrong, as they are order dependent.
Reading the Solid landing page, what I see is "Solid takes the most confusing part of React, and runs with it."
Some people position it more like, "Solid makes Hooks the way they should have been in React". Personally, understanding how React works, this framing doesn't make sense to me. But I think it might be helpful for people just approaching the framework.
Also how are mid-level invalidations handled? For example if I update a list to remove one element do the other elements get re-rendered or are they cached?
It might be because I am browsing the docs in mobile but I find them hard to navigate.
I like and know where I'm at with React, but bringing a beginner through it recently definitely made me re-appreciate how nuanced it is. Also, you have to do a bit of voodoo to get good performance, and when you do the intention of the code vanishes pretty quickly.
For someone using React on top and mobx stores in the background (say 50k LOC all in), how big of a task would you say it is to move to something like Solid?
This always kills me. Clojure(script) is one of the neatest languages I've ever used, but it is just such a pain to work with. I spent hours getting NPM imports working in a project, when it ought to take seconds. It really makes it hard to recommend, even though the language itself is amazing.
Could he use more appropriate data structures? Could he avoid all the schema stuff that doesn't really improve the readability? Could he have used better data structures, avoided slow functions like update-in, and perhaps migrated the bottlenecks to transducers and transients?
The author just did a rewrite and that is totally OK. He is trying things out and that is also quite alright. He provided some rather high-level benchmarks that would be really time consuming to reproduce and explain in more detail.
We have looked at the cljs code (e.g. https://github.com/asciinema/vt-clj/blob/master/src/asciinem...) with my colleague and it definitely isn't the best possible Clojure(Script) code around, from a readability standpoint nor, it seems, a performance one.
To summarize, good that @sickill got a discussion going but it is best to step back and think about it in more depth. We all should apply more of this "extraordinary claims require extraordinary evidence" https://en.wikipedia.org/wiki/Sagan_standard
Yes, it is his free time and good explanations/ understanding take time. We should treat the blog entry accordingly. It is a good exercise in critical thinking and code review, if you actually take the time to at least run through the code briefly.
Besides embedded systems, really high-performance stuff, parts of the infrastructure that need to be as efficient as possible, run without GC, or talk to very special interfaces or libraries, Clojure and ClojureScript (or related dialects/implementations) are suited for pretty much everything else I can think of.
We should be thinking about how to implement a Clojure-like language in more places, perhaps even without a GC but with AOT compilation plus an interpreter for the REPL, along the lines of Babashka. We should explore how to have a REPL into multiple systems at once and handle them, to some degree, like a single machine. We should be thinking about how to make a running Clojure program easily interruptible (perhaps with an extra setting), like a program in the shell. We should think about adding a Clojure-like language to the browser natively, so that programs don't have to load it like they do now, and so that a browser tab could have some kind of REPL that you could authenticate against and connect to over a socket. That way, you could rewrite the code of the web app at runtime, if allowed by the user.
And we definitely should design more APIs in a much simpler way, working more with data and less with invoking specific functions. E.g. the browser "history" could just be a vector/array of maps/objects or whatever, instead of some finicky getters and setters that obscure the problem and are just another thing you have to learn to do useful work.
If you need to find something out, having to wait or randomly click through a video is extremely frustrating.
At the very least, give us a frigging transcript.
And imagine even a more aggressive "video summarizer" generating articles from videos with interspersed screenshots or brief video segments where they matter for the understanding...
It does have formal docs, but I didn't fully understand the program after reading them. I didn't enjoy sitting through branchless's videos or clicking around for the right point in time.
Edit: well, not (lossily) compressed in the case of a gif but not as nice as text properly rendered for your display.
vlc seems to do this (Android version).
I'll grant that the usual methods of interacting with GIFs don't do this, and am not arguing that they do. But if you really want a specific functionality, you can look for a tool which might provide that.
If you are showing off some beautiful TUI, an interactive prompt or ASCII animations, this is the perfect tool.
As someone who works in film: film also doesn't lend itself to everything. Certain internal observations that work great in literature would be unwatchable on film etc.
Or to take it to the extreme: you don't expect the overview of a database to be experimental performance video art.
If you want to just show something off, a GIF (or real video) is the answer.
The defining feature of asciinema is that you can copy the text out of the animation. Which is actually not that useful most of the time, because you can't search for text in asciinema (AFAIK?) like you can in, well, documentation.
Why so? You can't pause or seek GIFs. GIFs and video files are usually several times larger than asciinema recordings of the same content. And you can't copy text out of them when it is useful.
Similar to how "Ruby and Python interpreters are slow but webapps are IO bound anyway so it doesn't matter" turns into "how can I get this to handle more than x req/sec, can we get a JIT to speed up our dog-slow backend".
asciinema prototyped and worked out their design, and discovered it needed more performance. And then it got it. Sounds like a success story to me.
Turns out that if you write business logic with abandon, you end up with a lot of business logic.
Personally I wish that Python, Ruby and the ilk all get replaced with Lua, but also that Lua gets a proper `null`.
But now that asciinema is no longer in React maybe it will be possible to embed now. See https://github.com/asciinema/asciinema-player/issues/72#issu....
Also, the dependency on Solid.js is unlikely to cause any conflicts, even if you use Solid.js yourself in your app, given that the new package doesn't globally export anything other than the minimal public API for mounting the player in the DOM.
4KB/s would be enough to average around 6FPS on the first pass through the animation, if it were being rendered as it is downloaded.
ooh actually, I got banned from the Ars-Technica forums way back in the day for posting a giant 800x600 animated GIF. It took browsers a few minutes to render! People thought I had crashed their browsers, but they just needed to be patient. ;)
I think I had a Duron 700MHz back then, so a 486 would've been turned into toast.
(Or, possibly, IE's GIF code was just really bad at that time! :) )
Gifs can do silly things, like store a sequence of full frames.
(I see this more as a demo thing to show people what is possible than, say, documentation to crib from, but I can see the value in being able to do that.)
However, most times I have problems with copy-paste, it's not due to GIFs. It's due to some BS framework thing rendering text in some non-standard way.
FYI you're not forced to use asciinema.org for hosting the recordings, it's fully self-hosting friendly: https://github.com/asciinema/asciinema-player/#quick-start
Last time I made an animation like this, over a year ago, I just pasted it into a team Slack channel.
If you're building gifs then you need additional file storage and an async job queue for generating the gif. I try to avoid image processing personally.
For the decrease in size, I expect most of the gain to come from dropping ClojureScript. For the speed increase, though, I expect most of the gain to come from WASM. JS and ClojureScript are within the same margin of error compared to the performance that can be achieved with WASM.
^ From the article, sounds like a plausible cause for the speed difference.
After several benchmarks I realized that the fastest way to update the entire window is to compose a string and assign it to each line/line-element via innerHTML. I usually get 60fps with 5-8k chars in fullscreen (browser text rendering has become really fast).
I’d probably paint it on canvas and then overlay an invisible plaintext node to allow selection.
The terminal lines with (colored) text definitely don't need a view library if you just want to display it, true. Canvas would be way more efficient here, and it's not out of the question for the terminal part in the future. One thing that using DOM (spans here) gives for free is copy-paste. Like you mentioned it could be solved by overlaying a text element on top of canvas (on mousedown, or when paused) or custom implementing copy-paste for canvas with mousedown/mousemove/mouseup, but that's all extra work, and as I mentioned in the blog terminal emulation was the bottleneck, not rendering.
I feel like you may not be seeing the whole picture / problem domain.
I started using asciinema two months ago and I must say that it's excellent! One minor annoyance though: it forces the use of the default shell instead of the shell you launched it in. Other than that I am very excited by this release; more speed is always welcome.
Not to say that we shouldn't have "everything be performant" but drawing a bunch of stuff to screens is _the classic_ performance question. Whereas most "business apps" people here work on to a day-to-day have different performance issues.
Rewriting your CRUD frontend in Rust isn't going to make your DB queries faster.
There are often performance bottlenecks you didn't know about, and had blamed on database (or whatever) interaction overhead. It will never feel worthwhile to dig into each candidate, because any payback seems too unlikely. Not having left scope for such bottlenecks means you can be confident they are not there. Re-implementing once is a lot less work than diving into each possible bottleneck. Improving your actual database operations, after, is more likely to have an effect when some other bottleneck doesn't mask the improvement.
You don't have to do it in Rust. Any optimization you could do in Rust is probably easier in C++, and also easier to find maintainers for.
At least the first part is not necessarily true. E.g., in C++, you might make defensive copies, whereas in Rust the lifetime system will track things for you.
I think this feeling comes from the fact that it takes longer to learn the basics of Rust compared to C++.
However, once one has learned C++ or Rust to a reasonable level, I would argue that Rust is actually easier to use.
This is not the same thing but many people make this claim.
But it is a fair bet that changes to C++ code to implement a point performance optimization will be smaller than the same sort of change would be for Rust code. For the latter, you are likely to need to re-architect that part of the system somewhat to get your optimization and still satisfy the borrow checker. Having a borrow checker that demands satisfaction is a virtue, but there is no denying it adds cost in the small, which is the scale we're talking about here, notwithstanding that such cost may be paid back at the system level.
Does it really? For example I'd think that initialization of objects is a topic that should be in "basics", yet initialization of objects in C++ seems disproportionately complex compared to Rust (at least to me).
So it's perfectly fine for me to leave objects uninitialized because of lack of attention?
That's kind of funny in light of the history that certain optimizations in web layout engines were attempted, unsuccessfully, in C++ multiple times and ultimately they invented Rust to make them easier.
The facts on the ground probably have more to do with improvements to the C++ code being obliged to work as deltas against existing C++ code, whereas the Rust code was a complete re-implementation, and thus not so constrained.
Both C++ and Rust are today different languages from when that project ran.
A typical consumer disk can do 1,000,000 IOPS (enterprise is one generation behind, and slower at the moment), with millisecond read and write latencies.
Are there any managed CRUD frontend languages that are fast enough to keep up with that?
(By “keep up”, I mean “be less than half the hardware I provision at scale”)
> for the high frame-rate, heavy animations this puts a lot of pressure on CPU and memory
...does seem to suggest that the "garbage multiplier" effect of immutability is an ill fit for applications that also create a lot of garbage naturally. Note that this is about as close to an apples-to-apples comparison as we're likely to get - the same application implemented two different ways - so it's not the application's innate object-lifetime characteristics that are the problem. That's an implementation artifact.
The question is: how many applications are likely to hit this same limit? Is this a rare case, or is it common? If it's common, it is indeed an indictment of the "immutable" approach. Otherwise, not so much.
You'll rarely see apps designed like this up-front though. Most of the time, the surgical mutability will come as a performance optimization pass later on.
As for the apples-to-apples part, I for one am unsurprised to see that WASM performs better than ClojureScript, particularly for an application like this.
The Erlang runtime exposes complex mutable resources like ETS tables through opaque handles, where the handle can be freely shared, but the resource backing the handle can never actually be touched by "clients." Instead, the resource backing the handle lives in its own heap, which is owned by a manager object; and accesses to the resources in that heap are done by handing the manager references to data that it then copies into the heap; or querying it by handing it a reference to a key, whereupon the manager will copy data back out and return it.
It's not really the same abstraction as e.g. a Concurrent container-class in the Java stdlib, as it's not implemented through the client ever acquiring (the moral equivalent of) a mutex and then touching the data itself; nor does it involve the client adding object references to an atomic queue, where some async process then weaves those references into the backing object. Neither the client's execution-thread, nor its passed-in data, ever touches the handle's backing data.
Instead, ETS and the like have guarantees almost as if the mutable resources they hold were living in a separate OS process (similar to e.g. data in a nearby Redis instance), where you need to "view" and "update" the resources living in that separate process through commands issued to that server, over a socket, using a wire protocol; where that serialization over the socket guarantees that the data reaching the other end is a copy of your client-owned data, rather than a shared-memory representation of it. The same semantics, but without the actual overhead of serialization or kernel context-switching, because the "other end" is still inside the managed runtime of your OS process.
And, to be clear, Erlang ETS accesses aren't linearized by a "message inbox" queue sitting in-between, the way that regular Erlang inter-actor message-passing is. The ETS table manager can handle multiple concurrent requests to the same table, from different processes, simultaneously, without locking the whole table—just like an RDBMS would. (Instead, it uses row-batch locks for writes, like RDBMSes do.) The concurrency strategy is a concern internal to each particular black-box-with-a-handle, rather than something general to the abstraction. The only thing guaranteed by the black-box-with-a-handle abstraction, is that nobody can mutate the data "inside" the black box without its manager's knowledge, because nobody ever holds a live reference to the data "inside" the black box.
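A toy sketch of this handle-and-manager pattern, in Rust with std only (all names here, like `spawn_table` and `Cmd`, are made up; and unlike real ETS, which serves concurrent requests without a single queue, this toy version linearizes everything through one channel - the point is just that clients never touch the table's memory directly):

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Commands sent to the manager: data is copied/moved in on Insert,
// and copied back out through a reply channel on Lookup.
enum Cmd {
    Insert(String, i64),
    Lookup(String, mpsc::Sender<Option<i64>>),
}

// Spawn a manager thread that exclusively owns the mutable table;
// the returned sender is the opaque "handle" clients share freely.
fn spawn_table() -> mpsc::Sender<Cmd> {
    let (tx, rx) = mpsc::channel::<Cmd>();
    thread::spawn(move || {
        let mut table: HashMap<String, i64> = HashMap::new();
        for cmd in rx {
            match cmd {
                // Value is moved into the manager's heap.
                Cmd::Insert(k, v) => {
                    table.insert(k, v);
                }
                // A copy of the value is sent back; the client never
                // gets a live reference into `table`.
                Cmd::Lookup(k, reply) => {
                    let _ = reply.send(table.get(&k).copied());
                }
            }
        }
    });
    tx
}

fn main() {
    let table = spawn_table();
    table.send(Cmd::Insert("rows".into(), 42)).unwrap();
    let (reply_tx, reply_rx) = mpsc::channel();
    table.send(Cmd::Lookup("rows".into(), reply_tx)).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), Some(42));
}
```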
Also if the data/history are relatively small compared to the available memory it's a fine default that generally leads to "nicer" code.
Video doesn't seem at first glance like such a great fit.
That actually depends on how your GC is implemented. For example, due to laziness+immutability, Haskell produces a lot of garbage and a lot of allocations. This is not a problem with the GHC compiler, as the GC design makes allocation cheap (effectively a bump pointer allocator) and GC cost scales with the amount of non-garbage (this, like all GC design, is a trade-off that can get you into trouble with some workloads).
I have a hard time thinking of an application for which this isn't the case. If my cooking recipe app / website is too slow and/or eats too much battery (and god fucking knows they are) I'll look for a competitor immediately.
That the bundle used to be 570kB isn't an immutability issue. It's that ClojureScript drags in a whole Clojure runtime, a new virtualization layer atop the JS runtime. That, to me, is the most likely suspect.
That said, for sure, short-lived throw-away high-GC allocation patterns are generally not good. At work there's a lot of "functional" patterns, nary a for loop in sight. This endless .map() .filter() usage causes nearly identical issues, with short-lived objects. It seems ultra sad & silly to me. Waste after waste. But I also think we have much more deeply rooted problems.
Immutability done right need not be much worse than mutability.
For example, jq's internal value representation is immutable in appearance:
- mutations yield new values,
- but when the value being "mutated" has only one extant reference then mutation is done in-place,
- while when the value being "mutated" has more than one extant reference then mutation is done by copy-on-write.
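This refcount trick can be sketched in Rust, where `Rc::make_mut` does exactly this: mutate in place when the reference is unique, clone first when it is shared (`cow_push` is a made-up helper name, not jq's actual code):

```rust
use std::rc::Rc;

// "Mutate" a shared vector: in place if this Rc is the sole owner,
// via a copy-on-write clone otherwise.
fn cow_push(v: &mut Rc<Vec<i32>>, x: i32) {
    Rc::make_mut(v).push(x);
}

fn main() {
    let mut a = Rc::new(vec![1, 2, 3]);
    cow_push(&mut a, 4); // unique reference: mutated in place
    let b = Rc::clone(&a);
    let mut c = Rc::clone(&a);
    cow_push(&mut c, 5); // shared: c gets its own copy first
    assert_eq!(*b, vec![1, 2, 3, 4]); // shared value untouched
    assert_eq!(*c, vec![1, 2, 3, 4, 5]);
}
```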
If you manage to always have extra references, then "immutable mutation" gets expensive.
If you manage hold on to old references for a long time, then "immutable mutation" gets even more expensive.
In a run-time with a GC not based on reference counting you do have to GC all the intermediate garbage.
Immutable data structures really lend themselves well to reference counting GCs because you can't have cycles in immutable data.
Isn't it the opposite? If you want to hold on to many referenced states at the same time, an immutable data structure should provide less overhead than a mutable one, due to structural sharing.
But now, if that's just what you must do for some reason, then, yes, immutability makes the task of taking all those snapshots real easy.
GC apologists seek to normalize this behavior. They often succeed, at that. Performing quite well against actually fast things, less often.
Performance isn’t black and white. Optimizing your memory usage isn’t going to do you a whit of good if you’re constrained by your database queries. Optimizing your DB queries isn’t going to do you any good if you’re constrained by a chatty microservice architecture. Optimizing your UI response time isn’t going to do you any good if you’re already below the threshold of perceived speed. Etc.
GC performance on short-lived objects is quite good in enough situations to matter, such that optimizing for it, rather than your application architecture, is likely foolish outside of performance sensitive loops.
Time is always limited. Spending your optimization budget on micro-optimizations is short-sighted.
Performance problems usually appear in places we prefer they would not, often in runtime apparatus we poorly control, such as the GC. It is always preferable to try to ignore and discount those, as they may be arbitrarily hard to fix, so people do.
Yet, actually not depending on such apparatus, where it is the problem, gets you free optimization.
Performance doesn't care where it is found or lost. Micro-optimization is foundational; fail there, and there is often little else you can usefully do. The best optimizations are not doing the thing at all. GC is always strictly worse than no memory management.
Fixing your chatty microservices and your under- or over-indexed DB queries may do you no good if you have built-in bottlenecks of your own.
"Quite good" means nothing except in comparison to something else.
In the time being, while "spiderweb object graphs" are commonplace, perhaps Nim's memory management (https://nim-lang.org/docs/gc.html) can give us "non-GC by default, refcounting or GC when necessary". I want it to succeed. I hope it does.
It’s better to look at cheap short-lived collection as a great way to get the thing you’re making working, but ultimately something that needs to be cleaned up to be production-ready.
And you can of course just use java objects whenever you want.
Roc-lang, a functional systems language in development, uses something called opportunistic in-place mutation to do just that. Here's a video where the creator talks about it: https://youtu.be/vzfy4EKwG_Y?t=1276
In practice a JS implementation that had special optimizations for code using immutability as a convention might for example auto-parallelize code.
Also, it is by no means a bad thing if a compiler turns a piece of easy-to-reason-about functional code "back" into generated code that exploits local mutability behind the scenes in some circumstances; that's exactly how we want it to work. We still get the robustness guarantees of semantics where our objects don't change out from under us in programmer-visible ways.
Would that help with a huge 50x difference? Maybe not, but the point is that evaluating the benefits of immutability on a VM that does not optimize it - JS VMs - is not relevant. (And that 50x might also be caused by other limitations of JS VMs and not immutability at all, like say deopts.)
Also let's not forget that this was already "fast enough" for a long time before the 50x rewrite.
I have absolutely no issues with efforts to improve single threaded performance of programming languages (and indeed the advances made by JS are remarkable) and I don't believe any language needs to be "As Fast As Cee". But There are other perfectly reasonable languages that are within a small integer factor of C and C++. You do not need to pay a large penalty for ergonomics.
I don’t spend thousands on computer hardware so that lazy devs can get lazier.
Functional programming is great. What’s not great is loading the entire closure VM environment into my browser - resulting in the software running (in this case) 50x slower. FP is no excuse for making my computer crawl to a halt.
And there’s no essential reason FP needs to be slow. For example, look at how well llvm optimizes map/filter/fold in rust. In many cases they’re faster than their imperative counterparts! There are other ways to benefit from immutability that don’t involve burying a garbage collector in useless work. For example, React is a lovely example of immutable ideas, and a fast runtime.
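As a generic illustration of that point (not code from the article), an iterator chain like this compiles down to a single tight loop, with no intermediate collections allocated for the filter/map stages:

```rust
// Sums the squares of the even numbers. rustc/LLVM fuses the
// filter/map/sum stages into one loop with no heap allocation.
fn sum_even_squares(xs: &[i64]) -> i64 {
    xs.iter().filter(|&&x| x % 2 == 0).map(|&x| x * x).sum()
}

fn main() {
    assert_eq!(sum_even_squares(&[1, 2, 3, 4, 5]), 20); // 4 + 16
}
```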
I spent a few thousand dollars on a new Mac laptop recently. I wonder what percentage of my clock cycles are going to be wasted due to bad abstractions and inefficient code? Probably most of them. I wish I could take the money I spent on the machine and instead pay people to improve their software. I don’t have billions of cycles per second of actual work to do.
There's no VM in clojurescript, it compiles to JS, it is tree-shaken and heavily optimized and minified through Google's Closure compiler.
> resulting in the software running (in this case) 50x slower
I think this is a common misunderstanding, that's why I'm reacting. Nobody claims immutable data structures to be the silver bullet. Computation-heavy parts need to be done in low level code and with primitive types.
More or less, yes.
The majority of the perf increase here came from two things: 1. going from immutable->mutable, 2. going from CLJS/JS->Rust in the perf critical part. Doing just 1. would likely improve the performance, but not as much as doing both 1. and 2.
Doing just 1. while staying with ClojureScript could potentially be accomplished with transients, at the cost of making a major chunk of the code non-idiomatic Clojure. I actually played with transients here before attempting the rewrite, but didn't get very promising results.
Proper abstractions aid in understanding and can ideally be optimized away. Poor abstractions hinder it and slow things down.
I believe it's possible to understand the code you create, even in the presence of mutation (though you can no longer store old values for free, and need to use cloning or other approaches). You need to restrict which code is responsible for mutating state (using careful program architecture), and restrict the ability to mutate data while other pointers to the data exist (Rust imposes these restrictions). Interestingly, the Relm architecture is a translation of the Elm architecture (Elm is an immutable language) to Rust code (Rust is a mutable language) which restricts which code is responsible for mutating state, and Rust restricts the ability to mutate data while other pointers to the data exist.
Interestingly, Rust unifies immutable and mutable data structures. The im library (https://docs.rs/im) uses the same tree-based immutable data structures as Clojure and such, but exposes an API closer to Rust's stdlib containers (including mutation). However im's performance characteristics are different from Rust's stdlib; clones are shallow and take O(1) time, while IndexMut is slower and copies tree nodes whenever that node's refcount is greater than 1. immer.js (https://github.com/immerjs/immer) has a somewhat similar API, but a different implementation (I think it uses standard JS arrays and copies the whole thing when you mutate one element).
Instead of properly managing mutable state (which can be difficult in situations with growing teams and complex application logic) people are opting to just copy everything or copy a subset of the tree they need so they don't have to think about it.
Immutability is bad for performance unless the purpose is to avoid recalculating a certain state after a certain number of operations (memoization). Many of the best practices that have cropped up in the past few years have been more about helping teams of people deal with growing code bases than about helping programmers deal with limited hardware. In computer science this should be self-explanatory: for optimal runtimes, the act of making copies is avoided. Why people usually seem to ignore the fact that they're wasting cycles for the sake of the holy grail of clean, functional programming and immutability has eluded me.
Typescript, focus on immutability, microservices even. I hate all of them but they have their purposes. They solve people problems. Maximizing hardware performance is not in the list though.
Though I must admit, every time my browser lags when viewing what SHOULD be a static site, I do die a little inside.
We all know that performance takes extra work to ensure, then, and is uncertain even with the extra work.
"Lazy devs"? Work on a team writing C graphics code and watch nothing get done.
It doesn't matter what language they move on to. Rust and C++ are both good.
Rust which was used here also has immutability by default and mutability is an explicit opt-in.
That’s actually not how clojure’s immutable data structures work. They use structural sharing, so only the portion that changes (roughly) needs new memory allocated, and only the parts that changed get garbage collected, so it is a bit more efficient than that.
So you don't necessarily have an allocation for every single character, but you're still able to share memory between buffers
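Structural sharing can be illustrated with a minimal persistent cons list in Rust (a hypothetical toy type, not Clojure's actual HAMT-based implementation):

```rust
use std::rc::Rc;

// A persistent singly linked list: "adding" an element builds a new
// head node that shares its entire tail with the old list, so no
// existing data is copied or invalidated.
enum List {
    Nil,
    Cons(i32, Rc<List>),
}

fn push(head: i32, tail: &Rc<List>) -> Rc<List> {
    Rc::new(List::Cons(head, Rc::clone(tail)))
}

fn len(list: &Rc<List>) -> usize {
    match **list {
        List::Nil => 0,
        List::Cons(_, ref tail) => 1 + len(tail),
    }
}

fn main() {
    let base = Rc::new(List::Nil);
    let a = push(1, &base);
    let b = push(2, &a); // b = [2, 1], shares a's node
    let c = push(3, &a); // c = [3, 1], shares it too
    assert_eq!(len(&b), 2);
    assert_eq!(len(&c), 2);
    // a's node now has three owners: a, b's tail, and c's tail.
    assert_eq!(Rc::strong_count(&a), 3);
}
```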
I've implemented a game engine with immutability (it makes for fast cloning in AI search) where much of the game state is shared between clones. With reference counting, it also means that when a uniquely referenced value is modified, no copy is made. This same trick is used by Python to optimize string concatenation.
if you have the string "hello" and somewhere else "hello world" you can retain only one copy of "hello" because it's guaranteed it won't change.
but the JS VM is not smart enough for that.
my bad, I learned something new.
anyway this proves that immutable data structures are not inherently slow; this is in fact an optimization that makes things faster.
Immutability is alive and well, it's simply a matter of js runtime not supporting it, because the developers thought "one thread ought be enough for anybody".
Having written a ClojureScript app with a Rust/WASM component, for me it's very clear that they have different strengths that can be complementary.
That the author dropped CLJS along with React is fair enough, in my opinion. The CLJS ecosystem is pretty React-centric.
I would expect the system to be maybe 20% to at most 50% slower (as in, Rust being at most 2 times faster), but 50 times faster? That already smells like a problem between the chair and the keyboard.
Although, fair enough, it's a rewrite, and the author has the benefit of experience with the first implementation when writing this one. A second implementation in ClojureScript would likely have been faster than the first one. But 50x faster? Unlikely.
> Yes - about 50 times faster in my experience.
Just as a data point that the author isn't the only one to see this type of performance increase.
I expect that it isn't 50x over JS across the board, but that it would vary wildly, including being slower if you're doing something trivial and end up paying more in serialisation to and from WASM than you gain in speed.
And yet it is.
The biggest factor here is that CLJS compiles to garbage-collected JS built around immutable data structures, while Rust is lower level, compiled ahead of time by the Rust/LLVM toolchain into highly optimized WASM bytecode.
So, when there's a lot of terminal activity (high speed colorful animation), the terminal emulator in the CLJS implementation allocates and GCs millions of data structures every second. In the Rust implementation there's very little allocation, because most code operates on already pre-allocated buffers, and just mutates them. Both approaches are standard for their language. I could have tried all sorts of tricks in CLJS (in fact I tried some) to make this implementation faster, but it would very quickly make the code non-idiomatic, and not fun to work with. But let's say I could make the CLJS impl be on par with a theoretical plain JS impl - this would likely still be many times slower than a basic Rust impl. Modern JS engines are amazing but they just can't beat WASM. Maybe some day :)
Btw, it's not a product, just a side, hobby project of mine.