Also, it's amazing how fast this has popped up. Wasm was added to the compiler literally a month ago, and already there's a standard library for DOM manipulation and front-end frameworks. 2018 is definitely going to be interesting.
On the other hand, in Rust, we have the compiler on our side. If one grabs a mutable reference to the state (&mut Model in the update function in https://github.com/DenisKolodin/yew/blob/master/examples/cou...), then the Rust compiler ensures that there are no other references to that Model. Thus, when the framework does `nextState = mutate(&currentState)`, one is guaranteed that there are no existing references to `currentState`, and thus it is impossible for a shadow update to occur.
IMO, this is better not primarily for performance reasons, but because it is conceptually easier to mutate a state object than to use reducers. (I haven't looked into this framework much, so I don't presume to speak about its internal implementation; this may be off in some framework-specific details.)
Maybe Haskell goes too far in this regard, but Rust is in line with the same way of thinking, because mutability is surfaced and checked at type level.
Debugging and rewind/replaying application state is also greatly simplified by keeping states immutable.
Making the state mutable would also require some kind of locking between the render view and the updateState method.
That's right, but that's exactly the main strength of Rust: the borrow checker deals with that locking at compile time, with zero runtime overhead.
Not arguing on the other points though.
You may want to brush up on Rust, the language has been designed explicitly to deal with these kinds of situations.
In theory there would be no difference between mutating an existing memory area and allocating new memory to write to. The number of bytes written to RAM is the same.
It's not the same: you need to allocate new memory, and if it's on the heap then (depending on your allocator/OS, etc.) you may be making a syscall, which means a context switch, cache eviction, and so on.
That means a difference of ~300 cycles before you can even start working with the cache line you pull from main memory.
I think frameworks like React should explicitly demand that state and props will be referentially compared to optimize the rendering.
So you mean bloody expensive? Seriously, if everybody thinks like you, it's no surprise the current web crawls on beastly computers.
The only way to address it is with extensive pooling and heuristics.
Or you can just mutate.
Really, it's no wonder the web is so slow if the common conception is that allocating is no big deal. If you really want to do performance right, allocations should be at the top of your list, right next to how cache-friendly your data structures are.
When a GC allocates memory, all it does is check whether there is enough room in the "young generation": if there is, it increments a pointer and returns the previous position in the young generation. If there is not enough room, the GC traverses the GC roots on the stack, and only the "live" memory has to be traversed and copied over to the old generation. In other words, in the majority of cases allocating memory costs almost as much as mutating memory.
GCs for mutable languages like JS's or Java's aren't necessarily built for this, compared to GHC's. And even discounting all that, things like stack memory, cache locality, etc. all make a huge difference in real-world performance. GCs have come a long way, but there is still a performance gap.
"Immutable" data structures are not just objects that never get mutated, they're different manners of organizing the bytes.
However, most JS runtimes have a generational GC, so an allocation isn't remotely as costly as an allocation in C or Rust.
AFAIK you don't have to. It just makes things easier, since state changes can be detected through shallow reference comparisons. There is even the shouldComponentUpdate() hook, which lets the user detect state changes that did not replace the whole state object.
When you use `Object.assign()` then you're doing it, but you don't have to use that, and probably shouldn't.
This kind of performance optimization, especially outside of game engines, seems unnecessary.
I have used a lot of Redux in JS and written a few things in Elm, and the immutable property of application state is central in both, and an important part of being able to reason about the UI, as well as about events happening concurrently.
The problem with shared mutable state might appear to be the “mutable” part, but it's really the combination: shared + mutable.
Ownership removes one (if it’s mutable it’s not shared). Immutability removes the other.
The outcome is the same: you can reason about your code, including concurrent code.
The price of ownership is some extra mental overhead. The price of immutability is some extra copying and cumbersome updates.
In JS/Elm you have no choice but to manage shared mutable state using pure functions and immutable data. In Rust you can use either model.
Not really. Of course you can copy immutable data on every update, but persistent data structures (which share large parts of the state between versions) are hard to write without a GC...
: Mozilla's JS VM
: this is how you write a doubly linked list managed by the JS GC https://github.com/asajeffrey/josephine/blob/master/examples...
The problem is that the banana has a jungle attribute. If you pass a banana to a function, you're also implicitly passing in the entire jungle. The function could mutate the entire jungle without your knowledge. If you remove the jungle attribute from the banana class, you have to explicitly pass the jungle as a function parameter. This makes the code far easier to understand.
The same applies to return values. If you return every changed value from a function and then explicitly handle the change at the call site, it doesn't actually matter whether the data is mutable or immutable, because the side effects are visible and easy to understand.
E.g.: Qt used from Python with PyQt.
It is not only GraphQL itself I had in mind, though. It is also about implementing a client library similar to Apollo GraphQL, where you can provide watch queries wrapping your UI components. Watch queries provide great developer ergonomics for handling the flux data flow: whenever there is a change to your data in the store, if the corresponding elements in the data store are watched by a query, the UI components wrapped by that watch query are automatically re-rendered with the new data.
I think this combination will help desktop development by reducing boilerplate code and by providing a convention for the development team. Our team has already enjoyed this combination in web UI development for the last year.
- build views without being blocked waiting for a new REST endpoint
- get exactly what the UI needs to render
Although, to negate my own first point: inb4 being blocked waiting for new mutations and queries :P
In that regard, Rust is actually behind. Async/await is still coming.
Native applications don’t benefit from smaller responses of just the data they need to render a view?
I am confused.
This enables you to transparently move the data source around.
Little did they know that web-assembly was already scheming their demise.
P.S. Asking because @spicyj is, per her profile, an "eng manager of React @ Facebook".
Remember you can also have a polyfill (at a performance cost when parsing).
But I thought wasm couldn't manipulate the DOM? Is that not the case anymore?
The story with Rust compiling to it is still younger; it works well at the moment, but is very fiddly. The early adopters and enthusiasts are working on making it all smoother. So for most people, it's still a bit young. Just depends on your temperament.
IE is discontinued. IE11, which was released back in 2013, only gets security-related updates. It's on life support.
IE11 is part of Windows 10, which EOLs in 2020 (mainstream) or 2025 (extended).
it does appear to be declining ~0.2% per month, but it's a long way off from dead :(
That is one fancy-as-fuck ecommerce site, to be that reliant on features in modern browsers that lack automated transcompilation/shim-generating tooling.
(If you were talking about IE5 or IE6 I'd understand the argument.)
Thankfully, JS and DOM bugs are either rare or well known and have lightweight polyfills.
7.6% is a huge number. We don't even start those discussions until traffic is < 2% (and it's also a huge pain in the ass to support). IE11 is not actually that terrible.
- they want to be able to switch to newer APIs when the underlying OS adds them (though in Edge's case it's more likely it was written from the ground up using newer APIs)
- they quite possibly want to use "you can get the new browser only if you upgrade" as a carrot for OS upgrades (they explicitly did with IE, I haven't seen anything explicit for Edge but it wouldn't surprise me if they're still taking that approach)
DOM support for wasm really means native DOM support, which removes that layer of indirection, and therefore will make it faster.
It claims it is, but it's no longer in the high-level goals in the spec, and the link is dead... so do you know if it is?
Everyone I talk to suggests that it will be sometime next year.
I understand JS is popular to hate, for a variety of reasons, but those reasons seem to mostly boil down to nerd cred: the "hate what's popular because being contrarian means I'm cool" crowd.
So, and this is an honest question: why should I invest the time to learn an entirely new syntax and framework to build web applications? What can this do that Vue.js or React cannot? (And I don't mean that it does things differently; I'm looking for things it does better, or that a JS framework can't do.)
Learn Rust or another ML-family language, actually learn it to the point where you can write a decent, full-sized, idiomatic program in it. It won't be easy, but it's worth it. I mean, I could talk through all the reasons such languages are better, but it seems like you're already determined to dismiss them, so really the only way is to see for yourself.
Yeah, statements like these without even trying to back them up are pretty ridiculous.
Sometimes someone brings some stupid issue that's never a problem in practice, but most of the time people just admit they simply have no real experience with JS. They just love to claim how superior they are, and anyone who disagrees is "not paying attention".
FYI, you are just showing how awful the communities around "legitimately good" languages are.
Just to let you know Rust's community is pretty cool and not awful at all. For instance the Rust Survey 2017 reports that 98.7% of respondents feel welcome within the Rust Community.
Furthermore according to the Stackoverflow 2017 survey, Rust is the "most loved" language.
Btw that same survey showed how people are quite fine with writing JS, which makes this entire thread seem even more ridiculous.
Could you elaborate?
From my experience main problems seem to be:
1) Poor static analysis of code, leading to bad IntelliSense, etc. Mostly solved by TypeScript.
2) DOM manipulation APIs. I don't see how Rust will help here, especially considering JS frameworks solve this. I guess that's what's new in the OP.
Otherwise, JS serves as a pretty good language for beginners/UI wiring.
But thanks for at least describing some strengths (and weaknesses) of Rust when directly compared with JS.
It has nothing to do with JS, right? :)
It allows you to write your web app in Rust.
> And I don't mean that it does things differently; I'm looking for things it does better, or that a JS framework can't do.
"Better" depends on the use case, so merely by being different something might better fit another use case.