Hacker News
Yew: Rust framework for making React-like client web apps (github.com/deniskolodin)
500 points by syrusakbary on Dec 25, 2017 | 137 comments

When I saw the work being done on adding WebAssembly support to Rust I thought it was cool, but I never really put much thought into how far it could be taken. Seeing projects like this is really great and makes me realize just how much room there is for using Rust for web front ends. Given all the consideration for performance that the web has needed recently, I am excited to see what is possible with wasm and Rust in the future.

Also, it's amazing how fast this has popped up. Wasm support was added to the compiler literally a month ago and already there is a standard library for DOM manipulation and front-end frameworks. 2018 is definitely going to be interesting.

Why does update(...) mutate the model instead of returning the next state? This would make update a function instead of a side effect mutating the model.

Returning a nextState in Redux/JS land is a workaround. Basically, one wants to avoid the situation where the state is mutated but the framework does not know. This happens because there are multiple references to the state, and because, for performance reasons, the framework is comparing states based on referential (===) equality. Thus the view (in whole or in part) and the model can go out of sync. Always returning a new state object is an expensive way of ensuring this doesn't happen, but on the human time scales we're talking about, an insignificant cost.

On the other hand, in Rust, we have the compiler on our side. If one grabs a mutable reference to the state (&mut Model in the update function in https://github.com/DenisKolodin/yew/blob/master/examples/cou...), then the Rust compiler ensures that there are no other references to that Model. Thus, when the framework does `nextState = mutate(&currentState)`, one is guaranteed that there are no existing references to `&currentState`, and thus it is impossible for a shadow update to occur.

IMO, this is better not primarily for performance reasons, but because it is conceptually easier to mutate a state object instead of using reducers. (I haven't looked into this framework much, so I don't presume to speak about its internal implementation, so this may be off in some framework-specific details.)
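Concretely, the pattern looks something like this. (A framework-free sketch: the `Model`/`Msg`/`update` names mirror the shape of the linked counter example, but this is not Yew's actual API.)

```rust
struct Model {
    value: i64,
}

enum Msg {
    Increment,
    Decrement,
}

// `&mut Model` gives `update` exclusive access: while this borrow is
// live, the compiler guarantees no other reference to the model exists,
// so a "shadow update" behind the framework's back is impossible.
fn update(model: &mut Model, msg: Msg) {
    match msg {
        Msg::Increment => model.value += 1,
        Msg::Decrement => model.value -= 1,
    }
}

fn main() {
    let mut model = Model { value: 0 };
    update(&mut model, Msg::Increment);
    update(&mut model, Msg::Increment);
    update(&mut model, Msg::Decrement);
    assert_eq!(model.value, 1);
}
```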

in a nutshell this is the counterargument to functional purity: side effects can be done correctly/safely when you have/build the right tools. i'm inclined to agree with it.

It's not really a counterargument so much as another approach. Immutability and purity are still powerful concepts, but you don't need to use them for every single situation in rust.

In Haskell, mutable APIs are available but this is surfaced in the types. The idea is not so much that you shouldn’t have mutability, as that it shouldn’t look the same (and be treated by the compiler the same) as immutability.

Maybe Haskell goes too far in this regard, but Rust is in line with the same way of thinking, because mutability is surfaced and checked at type level.
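A few lines of plain Rust show what "surfaced and checked at the type level" means (nothing framework-specific here):

```rust
// A function that mutates must ask for `&mut T`, so mutation never
// looks the same as immutable access at a call site.
fn bump(n: &mut i32) {
    *n += 1;
}

fn main() {
    let x = 1;
    // x += 1; // rejected at compile time: `x` is not declared `mut`
    let _ = x;
    let mut y = 1; // mutability is explicit in the binding
    y += 1;
    bump(&mut y); // and explicit again at the call site
    assert_eq!(y, 3);
}
```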

Mutability is still harder to reason about in many/most cases. I thought that was always the main driver behind pure FP.

Referential equality is mostly an optimization, and at the least it makes efficient re-rendering simpler.

Debugging and rewind/replaying application state is also greatly simplified by keeping states immutable.

Having the state mutable would also need some kind of locking between the render view and updateState method

> Having the state mutable would also need some kind of locking between the render view and updateState method

That's right, but that's exactly the main strength of Rust: the borrow checker deals with that locking at compile time with zero runtime overhead.

Not arguing on the other points though.

> Having the state mutable would also need some kind of locking between the render view and updateState method

You may want to brush up on Rust, the language has been designed explicitly to deal with these kinds of situations.

You're certainly right that in the mutable Rust land you don't have rewind/replaying state out of the box.

FWIW, it's more of a "simplicity" choice - it is entirely possible to keep mutations reactive in React with libraries like MobX - it's just extra functionality that React doesn't ship out of the box but it works just fine without a borrow checker.

Probably for performance reasons, allocating a whole new state each frame is pretty expensive.

Incremental updates to large, immutable data structures do not require allocating a whole new state per update (despite maintaining immutability!) Mutation is still frequently more efficient though, at the cost of not being able to reason immutably about your program. But if you're doing something that requires multiple versions of your state (e.g. history, undo/backtracking), the immutable version is going to be more efficient and definitely easier to write efficiently in the first place.
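To make the sharing concrete, here's a minimal persistent list in plain Rust (illustrative, not from any framework): prepending allocates a single node and shares the entire tail, so old versions stay valid for free, which is exactly what history/undo needs.

```rust
use std::rc::Rc;

// A persistent singly linked list: nodes are never mutated, only shared.
enum List {
    Nil,
    Cons(i32, Rc<List>),
}

// One allocation per push, no matter how long `list` already is.
fn push(list: &Rc<List>, value: i32) -> Rc<List> {
    Rc::new(List::Cons(value, Rc::clone(list)))
}

fn sum(list: &List) -> i32 {
    match list {
        List::Nil => 0,
        List::Cons(v, rest) => v + sum(rest),
    }
}

fn main() {
    let v1 = Rc::new(List::Nil);
    let v2 = push(&v1, 1);
    let v3 = push(&v2, 2); // v2 is still a valid snapshot: free undo history
    assert_eq!(sum(&v2), 1);
    assert_eq!(sum(&v3), 3);
    // v3 shares v2's node rather than copying it.
    assert_eq!(Rc::strong_count(&v2), 2);
}
```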

I'll give you that it'll be easier to write, but once you start working with value types (which Rust has proper support for) you don't get the nice efficiencies that come when everything in the world is a reference.

Most efficiency gains of imperative programming languages like C or C++ come from the compact and contiguous memory layout and usage of the stack, which is usually in the L1 cache.

In theory there would be no difference between mutating an existing memory area and allocating new memory to write to. The number of bytes written to RAM is the same.

> In theory there would be no difference between mutating an existing memory area and allocating new memory to write to. The number of bytes written to RAM is the same.

It's not the same: you need to allocate new memory, and if it's on the heap then (depending on your allocator/OS, etc.) you may be making a syscall, which means a context switch, cache eviction, etc.

This is where understanding the difference between theory and practice is important: by its nature, anything freshly allocated on the heap is not going to be in the L1 cache.

That means a difference of ~300 cycles before you can even start working with the cache line you pull from main memory.

But the state is likely a tree structure and there will be lots of sharing if coded right.

I think frameworks like React should explicitly demand that state and props be referentially compared to optimize rendering.

it's not any more expensive than allocating a new object in Javascript. possibly less. if you want immutable models, allocation will most likely happen on mutation.

> it's not any more expensive than allocating a new object in Javascript.

so you mean bloody expensive? seriously, if everybody thinks like you, no surprise the current web crawls on beastly computers.

Idk about JS's memory model, but you can allocate the equivalent of a JS object in Java and Haskell very, very cheaply. I really don't think allocating a single JS object is expensive...updates to large immutable data structures should just require a few allocations (aka a handful of pointer bumps). Sure, it's technically more expensive than an in-place update to an equivalent large mutable data structure. But it's also not a fair comparison given one gives you way stronger guarantees about its behavior.

Except in those languages it can be just as brutally painful to allocate. Start modifying strings in the render loop on Android and see how quickly you get destroyed by constant GC pauses.

The only way to address it is with extensive pooling and heuristics.

Or you can just mutate.

Really it's no wonder that the web is so slow if the common conception is that allocating is no big deal. If you really want to do performance right, allocations should be at the top of your list, right next to how cache-friendly your data structures are.

Because in reality it is no big deal. Modern GCs are incredibly efficient.

When a GC allocates memory, all it does is check if there is enough memory in the "young generation": if yes, it will increment a pointer and return the previous position in the young generation. If there is not enough memory in the young generation, the GC will start traversing the GC root nodes on the stack, and only the "live" memory has to be traversed and copied over to the old generation. In other words, in the majority of cases allocating memory costs almost as much as mutating memory.
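A toy version of that fast path, just to show the shape of it (illustrative only; a real collector also does root scanning, copying, promotion, write barriers, and so on):

```rust
// A toy bump allocator sketching why young-generation allocation is
// cheap: a bounds check plus a pointer increment, nothing more.
struct YoungGen {
    heap: Vec<u8>,
    top: usize,
}

impl YoungGen {
    fn new(size: usize) -> Self {
        YoungGen { heap: vec![0; size], top: 0 }
    }

    // Returns the offset of the new object, or None when the young
    // generation is full and a minor collection would be needed.
    fn alloc(&mut self, size: usize) -> Option<usize> {
        if self.top + size > self.heap.len() {
            return None; // time for a minor GC: copy live objects out
        }
        let offset = self.top;
        self.top += size; // the entire "allocation": one pointer bump
        Some(offset)
    }
}

fn main() {
    let mut young = YoungGen::new(64);
    assert_eq!(young.alloc(16), Some(0));
    assert_eq!(young.alloc(16), Some(16));
    assert_eq!(young.alloc(64), None); // full: collection time
}
```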

There is a huge difference between “algorithmically efficient” (aka big O) and “real life efficient” (aka actual cycle counts). In real life, constant factors are a huge deal. Real developers don’t just work with CS, they work with the actual underlying hardware. Big O has no concept of cache, no SMP, no hyperthreading, no pipeline flushes, no branch prediction, or anything else that actually matters to creating performant libraries and applications in real life.

There is a huge difference, you're also discounting GC. Haskell's GC for example is tuned to the aspects of the language, meaning it's pretty efficient at allocating and cleaning up lots of memory; it has to be, everything is immutable.

Mutable-language GCs like JS's or Java's aren't necessarily built for this compared to GHC's. And even discounting all that, things like stack memory, cache, etc. all make a huge difference in real-world performance. GCs have come a long way, but there is still a gap in performance.

React does not allocate a new object in Javascript. No sane framework does.

"Immutable" data structures are not just objects that never get mutated, they're different manners of organizing the bytes.

React itself does not allocate a new object, but it forces you to do it yourself: to update the application state, you're supposed to call `setState()` with a brand new state (which is a newly allocated object). In the React tutorial[1], you can notice the use of the `Object.assign` pattern, which performs a new allocation.

However, most JS runtimes have a generational GC, so an allocation isn't remotely as costly as an allocation in C or Rust.

[1] https://reactjs.org/tutorial/tutorial.html#why-immutability-...

> React itself does not allocate a new object, but it forces you to do it yourself: to update the application state, you're supposed to call `setState()` with a brand new state (which is a newly allocated object)

Afaik you don't have to. It just makes things easier since state changes can be detected through shallow reference comparisons. There is even the shouldComponentUpdate() hook to allow the user to detect state changes which did not trigger a whole state object change.

Doesn't make sense. You don't provide an entirely new object when you call `setState()`, just the parts of it that have changed.

When you use `Object.assign()` then you're doing it, but you don't have to use that, and probably shouldn't.

I would argue that it is obviously worth it for the simplicity of being explicit about what the next state is.

This kind of performance optimization, especially outside of game engines, seems unnecessary.

This is Rust. Not exactly a language that makes compromises on performance for the sake of simplicity.

Each cycle you waste on mobile is milliwatts you're burning from the battery. Even if the user doesn't perceive it their battery will appreciate it.

You save a single allocation for every state change in your app? Never mind the actual render cycle? And lose a simple reference check to decide if you can short-circuit a part of your app?

mutating in Rust is not like mutating in JS, Python, Java, C#, or many other languages. Rust tracks ownership, so you always know who might mutate the object and when. It is a very unique experience programming Rust.

This might be more sane in Rust, I am not familiar with Rust.

I have used a lot of Redux in JS and written a few things in Elm, and the immutable property of application state is very central in both, and an important part of being able to reason about the UI, as well as about events happening concurrently.

Immutability and ownership aren't unrelated. They solve the same problem.

The problem with shared mutable state might appear to be the “mutable” part, but it’s the combination of shared+mutable.

Ownership removes one (if it’s mutable it’s not shared). Immutability removes the other.

The outcome is the same: you can reason about your code, including concurrent code.

The price of ownership is some extra mental overhead. The price of immutability is some extra copying and cumbersome updates.

In JS/Elm you have no choice but to manage shared mutable state using pure functions and immutable data. In rust you can use either model.
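The ownership half of that ("if it's mutable, it's not shared") can be shown in a few lines of plain Rust (names are just for illustration):

```rust
// Mutation requires a `&mut` borrow, and `&mut` is exclusive: the
// compiler rejects any shared reference that overlaps with it.
fn push_item(state: &mut Vec<i32>, item: i32) {
    state.push(item);
}

fn main() {
    let mut state = vec![1, 2, 3];

    // Any number of shared (&) borrows may coexist, but they are read-only.
    let a = &state;
    let b = &state;
    assert_eq!(a.len() + b.len(), 6);

    // A mutable borrow: if it's mutable, it's not shared.
    push_item(&mut state, 4);
    // assert_eq!(a.len(), 3); // rejected: `a` may not overlap the &mut borrow
    assert_eq!(state.len(), 4);
}
```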

> In rust you can use either model.

Not really. Of course you can copy immutable data upon update, but persistent data structures (with large shared state between data versions)[1] are hard to write without a GC...

[1] https://en.m.wikipedia.org/wiki/Persistent_data_structure

Reference counting [1] and atomic reference counting [2] are supported within Rust, and are sufficient for any persistent data structure without cycles. There's also a prototype crate for a mark and sweep GC [3], which would likely work for anything else. Writing such data structures using these features might be a bit more tedious than writing them in a GC language, but I suspect the increased explicitness provides a worthwhile tradeoff. See the crossbeam library [4] for some examples of lockless concurrent data structures implemented in Rust.

[1] https://doc.rust-lang.org/std/rc/struct.Rc.html

[2] https://doc.rust-lang.org/std/sync/struct.Arc.html

[3] https://manishearth.github.io/blog/2016/08/18/gc-support-in-...

[4] https://github.com/crossbeam-rs/crossbeam

As an alternative GC in Rust, you also have Josephine[1], which uses the SpiderMonkey[2] GC. It's cumbersome to use though[3] and should probably not be used outside of Servo.

[1]: https://github.com/asajeffrey/josephine

[2]: Mozilla's JS VM

[3]: this is how you call a doubly linked list managed by the JS GC https://github.com/asajeffrey/josephine/blob/master/examples...

If you don't have cycles, ref counting can work (it's a kind of GC after all), but it indeed comes with a big performance overhead compared to a modern GC.

I’d say it’s not the immutability that’s so valuable, but rather the restriction on where mutation can be done. If the app state can only be mutated in one place in your app and under strict conditions, formal immutability no longer offers that much.

"You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."

The problem is that the banana has a jungle attribute. If you pass a banana to a function you're also implicitly passing in the entire jungle. The function could mutate the entire jungle without your knowledge. If you remove the jungle attribute from the banana class you have to explicitly pass the jungle as a function parameter. This makes it far easier to understand the code.

The same applies to return values. If you return every changed value from a function and then explicitly handle the change at the callsite it doesn't actually matter if the data is mutable or immutable because the side effects are visible and easy to understand.
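A sketch of that idea in Rust (the banana/jungle names are obviously just for illustration):

```rust
struct Jungle {
    trees: u32,
}

struct Banana {
    ripeness: u32,
}

// The banana has no jungle attribute, so a function that needs the
// jungle must ask for it explicitly...
fn plant_tree(jungle: &mut Jungle) {
    jungle.trees += 1; // ...and this is the only way the jungle changes
}

// Returning the changed value makes the effect visible at the call site.
fn ripen(banana: Banana) -> Banana {
    Banana { ripeness: banana.ripeness + 1 }
}

fn main() {
    let banana = Banana { ripeness: 0 };
    let banana = ripen(banana); // the change is handled explicitly here
    let mut jungle = Jungle { trees: 10 };
    plant_tree(&mut jungle);
    assert_eq!(banana.ripeness, 1);
    assert_eq!(jungle.trees, 11);
}
```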

This is awesome. It is one of the things I was hoping would happen when I first heard about wasm.

Nothing against JavaScript, but it is nice that other languages can now be first-class citizens on the web as well.

I was imagining a React style UI framework but for desktop apps. The ingredients I have in mind are React, GraphQL, and Rust. I think these three solve the problems of their respective areas the best and combining them together would be priceless.

https://github.com/antoyo/relm is almost that, except elm-inspired rather than react-

There's been some work on GraphQL and Rust https://www.gjtorikian.com/posts/2017/08/15/wrapping-rust-in...

I think Rust may be a bit too low-level for that, but at least you will be on the HN frontpage with that combination for sure!

How is rust too low level for a desktop app? All the native ui toolkits are in C/C++ ...

Before I started programming in Rust I had similar concerns when I saw people pushing Rust as a backend language choice. However, having written a not insignificant amount of Rust, it does really well at feeling a lot like Ruby/Python while still achieving high performance and low-level access.

Is there any good/mature/sane library for UI in Rust? I'd love to use Rust for a desktop app I want to write.

This comes up in /r/rust periodically, and IIRC the answer is still pretty much "no". It is getting better, though, slowly. I'm on my phone and too lazy to find a link for you, but search /r/rust for discussions about GUI libraries/frameworks.

Integration with GTK is getting easier and nicer, but there's more work to do.

Does Nuklear fit your definition of a sane UI library?



That looks interesting, but I'm talking about something more like GTK, ie something that looks somewhat native and is easily portable across OSes.

Yeah, but you are much more productive using them from a higher-level language binding.

E.g.: Qt used from Python with PyQt.

How would GraphQL be useful for a desktop app?

A typed query language will definitely help for anything you store either locally or remotely. In this case, it would provide information to the compiler to do most of the type checking for you.

It is not only GraphQL itself I had in mind, though. It is also about implementing a client library similar to Apollo GraphQL, where you can provide watch queries wrapping your UI components. Watch queries provide great developer ergonomics for handling the flux data flow: whenever there is a change to your data in the store, if the corresponding elements are watched by a query, the UI components wrapped by that watch query are re-rendered with the new data, automatically.

I think this combination will help desktop development by reducing boilerplate code and by providing a convention for the development team. We have enjoyed this combination in web UI development on our team for the last year.

I think GraphQL is useful for getting data for UIs on any platform.

- build views without being blocked waiting for a new rest endpoint

- get exactly what the UI needs to render


Although to negate my own first point, inb4 being blocked waiting for new mutations and queries :P

Native applications don’t need that, though. Modern languages embrace async/await and don’t waste cycles or block the UI or even the backend unnecessarily waiting for network/local IO or user interaction.

In that regard, rust is actually behind. Async/await is still coming.

Native applications don’t get blocked waiting for backend engineers to implement a new REST endpoint?

Native applications don’t benefit from smaller responses of just the data they need to render a view?

I am confused.

Also, you can use it to query local data too.

This enables you to transparently move data-source around.

That’s a good point! Never thought about local data sources. I’ve only mapped over SQL, REST, and redis/etc.

As the great Cambrian JS framework explosion fizzled out, only a few lucky survivors were left (Angular/React/Vue.js).

Little did they know that web-assembly was already scheming their demise.

I'd like to think it is more like when oxygen entered the scene. Maybe we can never get rid of Javascript, but like anaerobic bacteria, maybe it can stay put and out of sight under some rock.

Don’t worry, we know.

Wondering if you are doing anything about it?

p.s asking as @spicyj is, from her profile, an "eng manager of React @ Facebook".

So here's my Christmas present, and maybe an excuse to learn Rust. Ho ho ho!

I was literally thinking "Maybe I should try using stdweb with React" last night. Awesome work, can't wait to use it/contribute!

The counter demo is 9.4MB in asm.js and 1.5MB in wasm.

Great concept. This is my desired FutureStack for 2018!

Wow, that's a really clean API. Nice work!

Any update on wasm support for mobile platforms ?

Uh, it works? https://caniuse.com/#search=Webassembly

Remember you can also have a polyfill (at a performance cost when parsing).

Coming next year: PHP apps that compile to WASM, so you can <?php ?> right in your HTML.

Or, I bet, a WASM port of Arc and the default Arc forum.


color me shocked: this is really a rust web framework (compiles to wasm). i was expecting a backend framework that supported react development on the frontend in some non-trivial way. wow. very cool.

but i thought wasm couldn't manipulate the dom? is that not the case anymore?

WASM is just something that can run compiled code in the browser. You can communicate with the Javascript environment. You cannot access the JS libraries directly, but you can with thin bridges.

Using such bridges, you can create anything you can with Javascript in any language that compiles to webassembly.

It can through a thin bridge.

See https://github.com/koute/stdweb

Plus first class DOM support for wasm is coming at some point in the future, so I'd imagine any properly supported library could implement it once it is finalized.

I'm not sure I want this. I use WASM as a compilation target, but I really enjoy the simplicity of the platform and how all connections from and to the Javascript world have to be explicit. Allowing more transparent access to the DOM would diminish this highly defined interface.

The DOM is unrelated to Javascript.

Sure, but it's still part of the outside world, so it's still opening a lot of extra interface surface to the platform.

Wasm is specified in two layers; wasm itself and then the api the host provides. All of the "external surface" stuff is in the host, not in wasm itself, so wasm for another host doesn't get larger.

so how mature is this tech? is this really feasible? writing browser client code in rust?

The biggest holdout is non-Edge IE https://caniuse.com/#search=wasm

The story with Rust compiling to it is still younger; it works well at the moment, but is very fiddly. The early adopters and enthusiasts are working on making it all smoother. So for most people, it's still a bit young. Just depends on your temperament.

> The biggest holdout is non-Edge IE

IE is discontinued. IE11, which was released back in 2013, only gets security-related updates. It's on life support.

IE11 is part of Windows 10, which EOLs in 2020 (mainstream) or 2025 (extended).

we still have 7.6% IE11 ecommerce traffic which converts very well (US, home remodeling).

it does appear to be declining ~0.2% per month, but it's a long way off from dead :(

7.6% isn't considered dead?

Actively doing something that predictably results in 7.6% less traffic to your profit generating website seems quite the questionable business decision.

How much money are you spending on dealing with IE11 bugs? My estimate from my last job is more than that 7% brings in.

Assuming IE11 users are otherwise representative of the customer base, multiply your total web based revenue by 7.6%. (If this assumption is wrong, then just check your reports to get the correct number.) If, for example, the business in question makes $10M yearly revenue, your estimate has 7 entire devs working exclusively on IE11 support fulltime, year round.
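The arithmetic behind that estimate, using the hypothetical figures from the comment above ($10M yearly revenue, ~$100k loaded cost per developer; neither is real data):

```rust
// Revenue attributable to a given traffic share, assuming that share
// converts like the rest of the customer base.
fn revenue_at_risk(yearly_revenue: f64, traffic_share: f64) -> f64 {
    yearly_revenue * traffic_share
}

fn main() {
    // 7.6% of a hypothetical $10M in yearly web revenue...
    let at_risk = revenue_at_risk(10_000_000.0, 0.076);
    assert!((at_risk - 760_000.0).abs() < 1.0); // ~$760k/year
    // ...which at ~$100k per developer is roughly 7-8 dev-years.
    let devs = at_risk / 100_000.0;
    assert!((devs - 7.6).abs() < 0.01);
}
```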

That is one fancy as fuck ecommerce site to be that reliant on features in modern browsers that lack automated transcompilation/shim generating tooling.

Are there actually IE11 bugs at a noticeably higher rate than bugs (or intentional deprecations) in other browsers? I thought we were just talking about a lack of new features that are present in Edge.

(If you were talking about IE5 or IE6 I'd understand the argument.)

most ie11 bugs that are relevant/annoying are around its flexbox impl. specifically equal height boxes via min-height and such. maybe some other fancy css animation or 3d transform stuff, but these are not terribly critical and can degrade without much impact.

thankfully, js and dom bugs are either rare or well-known and have lightweight polyfills.

would you find it acceptable if 1 of every 13 customers was unable to use your ecommerce site after you paid good money to acquire him/her via ads?

7.6% is a huge number. we don't even start discussions until traffic is < 2% (and also a huge pain in the ass to support). IE11 is not actually that terrible.

7.6% is huge, indeed.

7.6% can be absolutely HUGE. I have to support Samsung "The Internet" version 2.1... and that's about 1% of traffic. But the amount that that still brings in in a single month is enough to pay a developer's yearly salary.

Unfortunately not everyone stops using a project as soon as it is discontinued.

The problem is that MS won't port Edge to Win7.

Why is that? The latest versions of Chrome and Firefox work fine on Windows 7, so they have real competition in this area.

Same reasons that IE versions only support certain versions of Windows:

- they want to be able to switch to newer APIs when the underlying OS adds them (though in Edge's case it's more likely it was written from the ground up using newer APIs)

- they quite possibly want to use "you can get the new browser only if you upgrade" as a carrot for OS upgrades (they explicitly did with IE, I haven't seen anything explicit for Edge but it wouldn't surprise me if they're still taking that approach)

Still used by some enterprise customers...

Is it possible to compile to plain javascript for IE11?

If you can write frontend code in typescript or elm, why not rust indeed?

Those transpile to JS though, not Web Assembly, don't they?

Javascript and Web Assembly are expected to run on the same engine, if I'm not mistaken.

I think stewed was announced about 8 months ago.

stewed -> stdweb

Thanks, autocorrect.

Yesterday, I learned that the correct term for that is “autoincorrect” thanks to HN.

Wasm can't directly, but it can call into JavaScript, which can. That's what this does.

DOM support for wasm really means native DOM support, which removes that layer of indirection, and therefore will make it faster.

i was going to ask if DOM support is on the road map, and this comment claims it is, but it's no longer in the high level goals and the spec link is dead... so do you know if it is?

Things have been changing, people thought that the DOM stuff would need the GC stuff, but the new "host bindings" proposal my sibling linked to would give DOM stuff without the GC stuff.

Everyone I talk to suggests that it will be sometime next year.

I don't know where it is on the current roadmap, but there's some activity here: https://github.com/WebAssembly/host-bindings/

weird, i tried to post a thumbs up and hn deleted it. is it because it's a single rune? no, it doesn't appear in this reply either.

Aside from the fact that a single thumbs up doesn’t qualify as a comment by HN standards, select Unicode characters are whitelisted but the rest are sanitized.

Maybe I don't get it, but why would I use this? What problem is it solving that insert JS framework isn't?

I understand JS is popular to hate, for a variety of reasons, but those reasons seem mostly to boil down to nerd cred. The "hate what's popular because being contrarian means I'm cool" crowd.

So, and this is an honest question here, why should I invest the time to learn an entirely new syntax for a framework to build web applications? What can this do that Vue.js or React cannot? (And I don't mean "it does it differently"; I'm looking for "it does it better" or "a JS framework doesn't do it".)

> I understand JS is popular to hate, for a variety of reasons, but those reasons seem mostly to boil down to nerd cred. The "hate what's popular because being contrarian means I'm cool" crowd.

Then you're not paying attention. I'm not sure how you expect anyone to answer your "honest question" when you've already dismissed all the answers. It's not just fashion, programming language design is a real thing that it's possible to do badly or well, Javascript is a legitimately bad language, and Rust is a legitimately good language.

Learn Rust or another ML-family language, actually learn it to the point where you can write a decent, full-sized, idiomatic program in it. It won't be easy, but it's worth it. I mean, I could talk through all the reasons such languages are better, but it seems like you're already determined to dismiss them, so really the only way is to see for yourself.

> Javascript is a legitimately bad language, and Rust is a legitimately good language.

Yeah, statements like these without even trying to back them up are pretty ridiculous.

Sometimes someone brings up some stupid issue that's never a problem in practice, but most of the time people won't admit they simply have no real experience with JS. They just love to claim how superior they are, and anyone who disagrees is "not paying attention".

FYI, you are just showing how awful the communities around "legitimately good" languages are.

I'm willing to back it up. I've written at length on these things before. But there seems little point when the person I was replying to has already pre-emptively dismissed any reason I could give.

> FYI, you are just showing how awful the communities around "legitimately good" languages are.

Just to let you know, Rust's community is pretty cool and not awful at all. For instance, the Rust Survey 2017 reports that 98.7% of respondents feel welcome within the Rust community.

Furthermore according to the Stackoverflow 2017 survey, Rust is the "most loved" language.

Sure, I bet the community is great otherwise, it is just the occasional elitism (which I guess was inherited from the C++ community) that comes off as toxic.

Btw that same survey showed how people are quite fine with writing JS, which makes this entire thread seem even more ridiculous.

> Javascript is a legitimately bad language, and Rust is a legitimately good language.

Could you elaborate?

Sure. Language design has tradeoffs, but Javascript makes a lot of unforced errors: cases where the consensus was already established and Javascript went against it. (In fairness, it was a single-application scripting language written by one guy in ten days, not a language carefully designed for general-purpose use by a panel of experts.)

Non-lexical scoping is awful, everyone knows it's awful. Recent versions added a better-scoped "let", which is progress in a way but now means you have two different kinds of declarations with very different kinds of scoping. "this" in Javascript is just entirely useless, confusing semantics that resemble no other remotely mainstream language. "Prototypal inheritance" is a mistake, again no other mainstream language uses it for good reason, and again newer versions of Javascript have added "class" to correct this, but that leaves the language with two radically different ways of accomplishing the same thing.

"undefined" manages to be an even worse version of the billion-dollar mistake, null; errors become apparent even further from where they occur ("foo is not a property of undefined" failures a long way away from whatever caused the value to be undefined), where what you want is fail-fast.

Extremely loose value coercion at runtime is terrible for large programs in a language without a type system; even at the time Javascript was first created, Python or TCL knew this. (Perl didn't, but avoided the worst of the problems by not having so much operator overloading, e.g. using . for string concatenation.) Indeed, lack of any kind of type system is pretty indefensible in a general-purpose language these days, even Python's getting type hinting, though this was less common knowledge back when Javascript first came out.

The language is far too dynamic for working with mixed security contexts even though it's the most popular language for that very use case, forcing browsers to resort to crude sandboxes and coarse-grained permissioning instead (e.g. you can give a site access to your webcam/microphone, at which point all the ads running on the page have access too). And there are a bunch of small usability niggles (the "wats" you see talked about), none of which is important in isolation, but developer-friendliness does add up.

Rust is a pretty conservative ML-family language design: where it differs from OCaml, it's because of deliberate design decisions. (In fairness, some of Javascript is descended from Smalltalk, which was more respected at the time than it is now.)

Really, most of what Rust does ought to be table stakes for language design (and it kind of is, looking at e.g. Swift or even Kotlin), and should have been since the 1970s when ML came out. To quote Wikipedia: "Features of ML include a call-by-value evaluation strategy, first-class functions, automatic memory management through garbage collection, parametric polymorphism, static typing, type inference, algebraic data types, pattern matching, and exception handling. ML uses static scoping rules." (Rust may appear not to have exceptions, but ML "exceptions" have more in common with Rust's panic/recover than with Java-style exceptions.)

These things are small and simple but very useful and general, and they allow you to push a lot of work out into libraries written in plain old "userspace" Rust code (though this would be much more true if the language had HKT, grumble grumble) rather than having to "bolt on" ad-hoc features at the language level. The only novel-for-mainstream language-level feature in Rust is its ownership tracking (i.e. the borrow checker), which is about the right amount of innovation for a programming language to have, and by all accounts is working well. Good language design is as much about leaving things out (of the language itself, pushing them into userspace code) as it is about putting things in.
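To make the ML lineage concrete, here's a small sketch (all names are mine, purely for illustration) of algebraic data types, exhaustive pattern matching, and Option standing in where JS would use null/undefined:

```rust
// An algebraic data type: a value is exactly one of these variants.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// Pattern matching must cover every variant; the compiler checks this.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

// No null/undefined: absence is an explicit Option, and the type system
// forces callers to handle the None case before using the value.
fn first_even(xs: &[i32]) -> Option<i32> {
    xs.iter().copied().find(|x| x % 2 == 0)
}

fn main() {
    let s = Shape::Rect { w: 3.0, h: 4.0 };
    assert_eq!(area(&s), 12.0);
    assert_eq!(first_even(&[1, 3, 4]), Some(4));
    assert_eq!(first_even(&[1, 3]), None);
}
```

The Option case is exactly the contrast with "undefined" above: a missing value fails at the point it's produced, not three calls later.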

Simply put, at the moment, you wouldn't. But when wasm is mature enough (host bindings to control the DOM directly), there will be frameworks with something like 10x the performance of JS ones. This framework is a step in that direction. It also gives Rust programmers the option of not touching JS at all. Totally worth it.

The reason people like Rust more than JavaScript is that Rust is better at what it tries to do than JavaScript is at what JavaScript tries to do. That said, they don't try to do the same thing, so you shouldn't use Rust in all situations. Rust is focused on speed, safety, and a low bug count, but it's slow and annoying to write and refactor. If you are willing to make that trade-off then do; if not, then don't.

Why is JS bad at "what it tries to do"? I've honestly not seen an answer in these discussions, even though everyone acts like it's obvious.

From my experience, the main problems seem to be:

1) poor static analysis of code, leading to bad Intellisense etc. - Mostly solved by TypeScript.

2) DOM manipulation APIs. I don't see how Rust will help here. Especially considering JS frameworks solve this. I guess that's what's new in the OP.

Otherwise JS serves as a pretty good language for beginners/UI wiring.

But thanks for at least describing some strengths (and weaknesses) of Rust when directly compared with JS.

> poor static analysis of code, leading to bad Intellisense etc. - Mostly solved by TypeScript.

Typescript is not Javascript. Most people would agree that Typescript is a reasonable language.

The fact that it can be transpiled to Javascript is irrelevant, many languages can be transpiled to Javascript.

Well, it is still mostly JS. Claiming JS is crap (often stated as "beyond repair") while TS is good, just because of an (arguably) light addition, shows some serious cognitive dissonance. But never mind:

> The fact that it can be transpiled to Javascript is irrelevant

It has nothing to do with JS, right? :)

>What can this do that vue.js or react cannot?

It allows you to write your web app in Rust.
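To give a flavor of what that looks like (this is not Yew's actual API, just a hypothetical plain-Rust sketch of the Elm-style Model/Msg/update shape discussed above, where update takes &mut Model):

```rust
// Hypothetical sketch, not Yew's API: the Elm-architecture shape
// such a framework is built around.
struct Model {
    counter: i64,
}

enum Msg {
    Increment,
    Decrement,
}

// The borrow checker guarantees this &mut Model is the only live
// reference, so nothing can mutate state behind the framework's back.
fn update(model: &mut Model, msg: Msg) {
    match msg {
        Msg::Increment => model.counter += 1,
        Msg::Decrement => model.counter -= 1,
    }
}

// A real framework would render a virtual DOM; a string stands in here.
fn view(model: &Model) -> String {
    format!("Counter: {}", model.counter)
}

fn main() {
    let mut model = Model { counter: 0 };
    update(&mut model, Msg::Increment);
    update(&mut model, Msg::Increment);
    update(&mut model, Msg::Decrement);
    assert_eq!(view(&model), "Counter: 1");
}
```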

>And I don't mean it does it differently I'm looking for it does it better or JS framework doesn't do it.

"Better" depends on the use case, so merely by being different something might better fit another use case.

Ok...is there a use case where Rust is better? This seems like you're punting. Why would I want to invest the time and energy to learn this?

For some of the same reasons you might want to write your back-end in JS, but going the other direction. You could share code between front-end and back-end, but have a strongly typed, secure, fast language that's less prone to bugs to do it in. Not everyone's cup of tea, but if you've already got a lot of code written on the back-end, this might make a lot of sense.
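For instance (a hypothetical sketch, all names mine; in practice you'd likely derive serialization with a crate like serde rather than hand-rolling a codec): a type defined once in a shared crate and compiled into both the server binary and the wasm front end means the two sides can never silently disagree about the shape of the data.

```rust
// Defined once in a shared crate; both the server and the wasm-compiled
// front end depend on it, so the compiler keeps the two ends in sync.
#[derive(Debug, PartialEq)]
struct User {
    id: u64,
    name: String,
}

impl User {
    // "Server" side: produce the wire payload.
    fn encode(&self) -> String {
        format!("{}:{}", self.id, self.name)
    }

    // "Client" side: parse it back with the very same type definition.
    // A field mismatch would be a compile error, not a runtime surprise.
    fn decode(s: &str) -> Option<User> {
        let (id, name) = s.split_once(':')?;
        Some(User { id: id.parse().ok()?, name: name.to_string() })
    }
}

fn main() {
    let wire = User { id: 7, name: "Ada".into() }.encode();
    let back = User::decode(&wire).unwrap();
    assert_eq!(back, User { id: 7, name: "Ada".into() });
}
```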

You wouldn't invest time and energy to build a web app in Rust, but you would be able to write your front end in Rust instead of investing time and energy in the $JSFRAMEWORK of the year.

Of the year? So React is new or something? This is a super outdated meme at best.

I'd imagine that it's useful to be able to share a data context between a route handler, for example, and its rendered template and the embedded client-side "scripting", which may be quite extensive these days.

("Scripting" in quotes because Rust embedded in the HTML templates can be so much more than logic-less templates: a bit like a server-side template, the DOM, and client-side JavaScript all in one. And I'm just guessing, but because it's compiled rather than dynamic, it's nonetheless safe even though logic is in there...)

Wanted: Full stack rust developer
