> For example, my blog search engine, which indexes on the order of 500,000+ words across 1,000+ documents, can return results in < 10ms because the indices can be queried incredibly quickly.
This is odd. Let's do a back-of-the-envelope estimation. DDR4 bandwidth is at least 12 GB/s. 500k words is around 3 MB, so reading that amount takes about 0.25 ms, ignoring CPU caches. It sounds reasonable to assume the matching algorithm runs somewhere between that speed and 40 times slower, which is where the 10 ms figure sits.
This is one example where precomputation is probably not needed: brute force is fine if N is small enough.
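A quick sketch of what "brute force if N is small enough" could look like, in TypeScript (the corpus shape and sizes are made up to roughly match the numbers above, not the blog engine in question):

```ts
// Toy brute-force search: scan every document for the query term.
// ~1,000 documents of ~3 KB each, i.e. roughly the 3 MB estimated above.
function bruteForceSearch(docs: string[], query: string): number[] {
  const needle = query.toLowerCase();
  const hits: number[] = [];
  for (let i = 0; i < docs.length; i++) {
    if (docs[i].toLowerCase().includes(needle)) {
      hits.push(i);
    }
  }
  return hits;
}

// Rough timing harness (browser or recent Node).
const docs = Array.from({ length: 1000 }, () => "lorem ipsum ".repeat(250)); // ~3 KB each
const t0 = performance.now();
bruteForceSearch(docs, "ipsum");
console.log(`${(performance.now() - t0).toFixed(2)} ms`); // on the order of milliseconds
```

No index, no precomputation, and it still lands in the same ballpark as the quoted figure.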
Applications want to receive/provide a stream (X sample rate, Y sample format, Z channels) and have it routed to the right destination, which is probably not configured with the same parameters. Making every application responsible for handling this conversion is not workable, and having the kernel handle it is not a good idea either. The routing decision-making needs to be implemented somewhere as well. And let's not ignore the complexity involved in format negotiation.
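To make the conversion part concrete, a toy sketch (the descriptor and the linear-interpolation resampler are illustrative only, not how any real sound server does it):

```ts
// Hypothetical stream descriptor: the three parameters mentioned above.
interface StreamFormat {
  sampleRate: number; // e.g. 44100 vs 48000
  channels: number;   // e.g. 2 vs 6
  // sample format (f32, s16, ...) omitted to keep the sketch short
}

// Naive linear-interpolation resampler for one channel.
// Real servers use much better filters; this only shows where the work sits.
function resample(input: Float32Array, from: number, to: number): Float32Array {
  const out = new Float32Array(Math.floor((input.length * to) / from));
  for (let i = 0; i < out.length; i++) {
    const pos = (i * from) / to;
    const i0 = Math.floor(pos);
    const i1 = Math.min(i0 + 1, input.length - 1);
    const frac = pos - i0;
    out[i] = input[i0] * (1 - frac) + input[i1] * frac;
  }
  return out;
}
```

Every stream crossing a mismatched boundary needs something like this, plus channel mixing and format conversion, which is exactly why it belongs in a shared layer.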
The scenario of a DAW (pro-audio usage) is too specific to generalise from. That is the only kind of software that really cares about codec configuration, latencies and picking its own routing (or rather letting the user pick the routing from the DAW GUI).
I have trouble understanding how people work for such big companies. Of course, if the scale is 1,000 employees, "the company" cannot care about you; it is an institution, not a bunch of people anymore. There is no upper layer to blame; the structure and scale themselves are the culprit.
I decided against that and work in a small company. Work-life balance is nice, we are friends, and we all care (at least a bit) about the company itself. That is because "the company" is us.
The place I work at is cut-throat. The company says that we are NOT a family, we are a team (like professional baseball). Some people are better suited to other teams (i.e. not a fit for this company, and they get traded).
However, they pay very well (cash and benefits) and I can handle it. The people who cannot should work at places more like USPS or smaller shops.
This is worse than the typical family spiel. I hope you realize it's just another tactic of the same manipulation: trying to provide a sense of superiority to its employees to justify the working conditions. The "if you can't handle it, go work at USPS" bit is especially telling.
What about plain HTML & CSS for all the websites where that approach is sufficient? Then apply HTMX or any other approach for the few websites that actually need to be dynamic.
That is exactly what htmx is and does. Everything is rendered server side, and the sections of the page that need to be dynamic and respond to clicks to fetch more data get some added attributes.
I see two differences: (1) the software stack on the server side and (2) I guess there is JS to be sent to the client side for HTMX support(?). Both those things make a difference.
I'm embedded, so I don't know much about web stuff, but sometimes I create dashboards to monitor services just for our team, so thanks for introducing me to htmx. I do think HTML+CSS should be used for anything that is a document or static for longer than a typical view lasts. arXiv is leaning towards HTML+CSS vs LaTeX in acknowledgement that paper is no longer how "papers" are read. And on the other end, eBay works really well with no JS right up until you get to an item's page, where it breaks. If eBay can work without JS, almost anything that isn't monitoring and visualizing constantly changing data (the last few minutes of a bid, or telemetry from an embedded sensor) can work without JS. I don't understand how amazon.com has gotten so slow and clunky, for instance.
I have been using wasm and WebGPU for visualization, partly to offload any burden from the embedded device being monitored, but that could always be a third machine. Htmx says it supports websockets; is there a good way to have it eat a stream and plot data as telemetry, or is it time for a new tool?
You would have to replace the whole graph every time. That probably works if it updates once per minute, but any faster than that, it might be time to look at some small JS plot library to update the graph.
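If it does need to update faster, here is a minimal no-library sketch of that route (the endpoint, the message shape, and the element IDs are made up): open the WebSocket yourself and redraw a canvas on every sample, instead of swapping HTML fragments.

```ts
// Plain WebSocket + canvas telemetry plot, no framework.
// "/telemetry" and the {t, value} message shape are assumptions for the sketch.
const canvas = document.querySelector<HTMLCanvasElement>("#plot")!;
const ctx = canvas.getContext("2d")!;
const samples: number[] = [];

const ws = new WebSocket("ws://localhost:8080/telemetry");
ws.onmessage = (ev) => {
  const { value } = JSON.parse(ev.data) as { t: number; value: number };
  samples.push(value);
  if (samples.length > canvas.width) samples.shift(); // keep one sample per pixel
  draw();
};

function draw(): void {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.beginPath();
  samples.forEach((v, x) => {
    const y = canvas.height - v * canvas.height; // assumes values normalised to [0, 1]
    if (x === 0) ctx.moveTo(x, y);
    else ctx.lineTo(x, y);
  });
  ctx.stroke();
}
```

A small plot library just packages the same idea with nicer axes and scaling.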
Another way to describe what has been done: implement a pure function and avoid storing additional state. It sounds way less dumb that way. It is not really a pure function, but the spirit is there.
I've done the same during a refactoring of a side project recently. It handles the input/output of a MIDI controller with many buttons, knobs and matching LEDs. Instead of computing which LEDs should change at regular intervals, I am switching to recomputing the whole state each time. No more complex logic, no more mutable data. Only a pure function that outputs the desired LED state based on the software's internal state. Then a diff is computed, and only changes lead to MIDI messages. The code is less efficient (for 100-ish LEDs) but much more straightforward.
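A rough sketch of that recompute-then-diff shape (the names AppState, sendMidi and the track logic are placeholders, not my actual code):

```ts
const NUM_LEDS = 100;

interface AppState {
  activeTrack: number; // whatever internal state drives the LEDs
}

// Derive the complete desired LED state from internal state; nothing stored per LED.
function renderLeds(app: AppState): Uint8Array {
  const leds = new Uint8Array(NUM_LEDS);
  leds[app.activeTrack] = 127; // light only the active track's button
  return leds;
}

function sendMidi(led: number, value: number): void {
  console.log(`note ${led} -> ${value}`); // real code would send a MIDI message here
}

let lastSent = new Uint8Array(NUM_LEDS);

// Recompute everything, then diff, so only changes hit the wire.
function update(app: AppState): void {
  const next = renderLeds(app);
  for (let i = 0; i < NUM_LEDS; i++) {
    if (next[i] !== lastSent[i]) sendMidi(i, next[i]);
  }
  lastSent = next;
}
```

The diff at the end is what keeps the "recompute everything" approach from flooding the controller with redundant messages.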
This is how React works, or at least the illusion it presents to the developer.
Where it goes awry and gets complicated is that web developers want to modify the input state directly within the same functions that produce the output state, and they also want to trigger side effects after the output state has been completed, requiring another pass.
I’ve built a React variant for video compositing. Since it renders at a steady frame rate, there’s no reason to ever trigger re-renders. The useState() and useEffect() hooks are practically useless. To my personal taste it’s a sweet spot for React, and I wonder if some kinds of web apps might benefit from similar simplification to the state approach.
I've also struggled with React's insistence on immutability. What if mutability were the only way to update state? I implemented a JSX-powered React-alike that explored the concept[1]. To my lack of surprise, I found the resulting environment easier to get stuff done in. I'm not subjecting my employer to this, but I would totally use it on a solo project that I had to support.
One problem with this that I've found through similar endeavors is: the act of making the library results in you knowing how it works on a deep level. It's likely you don't know React on such a level. This inflates your perception of how easy to use it is. If you did know React that well, you'd probably be able to use it very effectively!
Anyway I did take a look at Mutraction and it looks great actually. I've just made a lot of abstractions at work, and always been a little surprised by how hard it is for people to get used to them. Of course maybe I'm just bad at it. But ultimately it's made me kind of anti-abstraction all around. If everyone was as good with vanilla HTML, CSS, and JS (actually I'll approve TS) as they are with React, the web would be a better place </opinion>
Before I built this, I spent a couple of weeks reading the react source code. Of course, it's huge, and I didn't touch most lines. Probably never saw most of them. But it was enough to understand dispatchers, work scheduling, and fibers.
But you're right though. I still understand my own library better. However, I've really made an honest try to understand react (at least the client&DOM parts) as much as I practically could.
I think I'm on-board with your anti-abstraction POV too.
To be fair, this (like many applications of reactivity) is conceptually very different from embracing mutability. And at least at a glance, it looks conceptually much closer to React than that.
Where React differs isn't immutability, but where the state-mutation/effect boundary sits (at the component/hooks-rules level, versus something more fine-grained).
In every possible approach, a state change needs some orchestration to produce rendering updates. The approach taken here looks like a subset of a common reactive approach, not dissimilar to say Solid with its createMutable Proxy-based store. That’s much more palatable to me (and I expect it would be to even a lot of React devs) than a less disciplined free-for-all mutability take (which effectively devolves to “build your own state<->render abstraction, or just maintain state in the view itself, probably both”).
Solid's proxy-based approach was indeed one of the major influences. It's also similar to reactive() in VueJS. There's one novel thing in mutraction that's not in either though, which is the undo/redo log. It might not be very useful in practice. I'm not under the impression that I really created anything fundamentally new here. I just scratched my own itch.
Really, I think the main difference is that there's nothing in mutraction like a virtual DOM. Conceptually, it's dead simple. There are only real DOM nodes. This eliminates most of the use case for DOM refs as used in React, as you can just assign a JSX expression straight to a variable.
I've seen the word "orchestration" used before with respect to UI framework architecture. I must confess, I don't understand what it means. By default, in mutraction, most mutations are immediately applied to the corresponding DOM elements. You can wrap blocks in transactions, but probably most of the time, you wouldn't. Is that orchestration?
> There's one novel thing in mutraction that's not in either though, which is the undo/redo log. It might not be very useful in practice.
On the contrary! That alone is cause for me to give it another look. Stuff like state history is sorely lacking in the industry in general, and can enable powerful things like time travel debugging. I’m super curious to look into how it works when I get a chance.
> Really, I think the main difference is that there's nothing in mutraction like a virtual DOM.
Clarification (as I presume you know this, but in case anyone else isn’t familiar): this is also how Solid works.
> I've seen the word "orchestration" used before with respect to UI framework architecture. I must confess, I don't understand what it means. By default, in mutraction, most mutations are immediately applied to the corresponding DOM elements.
That’s exactly what I meant in this context. Without something like reactive Proxy tracking and binding to the produced DOM nodes, you’ll have:
1. Some mutable state, like objects and arrays and reassignable variable bindings.
2. Some view DOM.
3. Some code that manually assigns 1 to 2.
4. Some code that manually handles events in 2 and applies mutations to 1.
5. Recurse.
This can be as “bare metal” as direct DOM interaction, but usually tends to look more like jQuery. As popular as that is in HN comments, it’s really hard to manage in applications beyond a certain level of complexity (interactivity, feature scope, etc).
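To make "reactive Proxy tracking" concrete, here is a toy version of the idea (nothing like Solid's or mutraction's actual internals, and the #count/#inc elements are assumptions): mutate a plain object, and a Proxy trap pushes the change into the DOM.

```ts
// Toy reactive store: mutations go through a Proxy, which re-renders the view.
function reactive<T extends object>(target: T, onChange: () => void): T {
  return new Proxy(target, {
    set(obj, key, value) {
      (obj as any)[key] = value;
      onChange(); // steps 3-5 above collapse into this one hook
      return true;
    },
  });
}

const view = document.querySelector("#count")!;
const state = reactive({ count: 0 }, () => {
  view.textContent = String(state.count); // step 3: assign state to DOM
});

document.querySelector("#inc")!.addEventListener("click", () => {
  state.count++; // step 4: event handler mutates state directly
});
```

The frameworks differ in how finely they track reads and which DOM they touch, but the basic contract is the same.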
IMO react made way more sense when it was used only as the view layer. What react really needed was a separate paradigm for business logic. Having a state machine tied into react is really interesting to me though I haven't seen many try it, and I personally don't use react often enough to give it a go.
React's render loop, and even JSX itself, makes plenty of sense when the data is just fed in and rendered. It falls apart really quickly when data is being changed from inside a component rather than just firing events, leaving us with a decade's worth of duct tape trying to find a solution that works long term.
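One way to sketch the "state machine outside React" idea (not something I've battle-tested; the transitions are a stand-in), using React 18's useSyncExternalStore so React stays a pure view layer:

```tsx
import { useSyncExternalStore } from "react";

// A tiny state machine living entirely outside the component tree.
type State = "idle" | "loading" | "done";
let state: State = "idle";
const listeners = new Set<() => void>();

export function send(event: "FETCH" | "RESOLVE"): void {
  // Transitions are plain logic; no React involved.
  if (state === "idle" && event === "FETCH") state = "loading";
  else if (state === "loading" && event === "RESOLVE") state = "done";
  listeners.forEach((l) => l());
}

function subscribe(listener: () => void): () => void {
  listeners.add(listener);
  return () => listeners.delete(listener);
}

// React only subscribes and renders.
export function Status() {
  const current = useSyncExternalStore(subscribe, () => state);
  return <button onClick={() => send("FETCH")}>{current}</button>;
}
```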
Many people use flux variants like redux, which is basically what you describe.
But they aren’t perfect either. And perhaps worse than the ’default’ way.
The fundamental problem is a lot of state is local and doesn’t need to leak outside of the view (for example, is the mouse hovering on a button or not). Yet it can be hard to tell when that’s the case—imagine if hovering on a button now needs to call some logging code or update some status UI elsewhere on the page.
If we were to store all that globally, it would allow for pure rendering, but it becomes unwieldy and hard to maintain. But if we don't, you get the duct-tape system.
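The hover example, sketched (onHoverChange is a made-up prop standing in for the logging/status requirement): the purely local version is trivial, and the moment anything outside cares, the "local" state starts leaking upward.

```tsx
import { useState } from "react";

function HoverButton({ onHoverChange }: { onHoverChange?: (h: boolean) => void }) {
  // Purely local: nothing outside the component needs to know... yet.
  const [hovered, setHovered] = useState(false);

  const set = (h: boolean) => {
    setHovered(h);
    onHoverChange?.(h); // once logging or status UI elsewhere needs it,
                        // the state has to be plumbed or lifted
  };

  return (
    <button
      onMouseEnter={() => set(true)}
      onMouseLeave={() => set(false)}
      style={{ opacity: hovered ? 1 : 0.8 }}
    >
      Hover me
    </button>
  );
}
```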
Yeah I did actually try a few flux-like libraries early on. I actually used redux for state management in an Angular 2 app. It worked better than I would have expected, but async was always a problem - at the time thunk and sagas were the solution and both were painful IMO.
That's really interesting though. I run into plenty of problems with shared state but don't actually remember having any real issues with local state. I haven't seen too much of a problem with components changing local state as long as nothing else can change it. Even if that local state is passed down to child components, changes would only happen in the one place, and it should only cause a single re-render cascade.
Where state in React has really bitten me is when multiple components all try to read/write the same state, especially when some of it is async. Patterns can be used to hide or try to isolate it, but I've never seen it done in a way that feels cleaner or more foolproof than the idea of a state machine running entirely outside of React's component tree.
That’s what I try to do with every declarative UI i’ve worked on. Hoist all the logic that act on state outside of the view modules. Then you plug it in via functions. A list component may have only one hook (useItems) that take in a filter/search state and another (useSelection) that is dependent on the first. This ensures a clean relationship graph. Having everything in a smart component is where you got the spaghetti nightmare of relationship graph.
In my MIDI controller case, the "compute state and send update" code takes about 8 µs, with spikes up to 15 µs. The worst case of the code in the article is a sort of a 100-ish element array on each frame, which sounds safe to ignore. Profile before optimizing.
Going from 28 to 32 is negligible aero-wise for most people, I don't disagree. At 30 mph (or a slower ground speed into a headwind giving an air speed of 30 mph) it will start to matter a little bit though; math doesn't lie.
But saying wider tires are better as a blanket statement implies that you can run, say, 45c gravel tires and still have the same aero drag, which is far from the case.
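For the "math doesn't lie" part, the relevant relation is just the drag equation (ballpark reasoning only; the actual $C_d A$ difference between tire widths depends on the rim and setup):

$$P_{\text{drag}} = \tfrac{1}{2}\,\rho\,C_d A\,v^3$$

Drag power grows with the cube of air speed, so the extra frontal area from a wider tire that costs next to nothing at 15 mph costs roughly eight times as much power at a 30 mph air speed.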
Most short stories about technology are about its role in society. Take a look at any of the winning or short-listed short stories from any reputable sci-fi award (like the Nebula or Locus) and you'll find them by the dozen.
I like stories by Asimov and Bradbury. If you're looking for something contemporary, take a gander at Ted Chiang.
Yeah, the trick to writing science fiction is that it's almost never about the grandeur of the setting, it's using that setting and its technology as a tool to lay bare the inherent problems with society and humanity. One of the things I've been struggling with is that AI tools are effectively cheap low-quality knowledge labour. How could this go wrong? Many fucking ways it turns out.
I do not agree. Commits in my patch series have no link whatsoever to the chronology of my work. I wouldn't call it "Git history" as long as it is the branch I'm working on. It becomes history once it is merged into a more persistent branch.