Show HN: Tweening web visualisation, in Rust – (WASM) (rustween.mtassoumt.uk)
94 points by hirako2000 8 months ago | 40 comments
Using wasm-bindgen, and the tween-rs crate



My startup is betting big on Bevy, Dioxus, and WASM. (The thing we're building requires this.)

It's a bleeding-edge weird world, and there's definitely a lot of risk, sharp corners, etc. But it's also incredibly exciting and a breath of fresh air.

One of my big worries is picking the "wrong" tech and the community electing something else, leaving us in an evolutionary pit. We already see this in other areas. We chose Actix Web, and now Axum appears to be in the lead. That's not as big of a concern as the frontend stack changing, though.

Anybody else headed down this path?


You shouldn't worry too much about not picking the library/framework that ends up being the most popular: as long as the project fits your needs and doesn't die, you're good. For instance, Actix remains a solid choice, and having survived two changes in leadership, it has actually proved pretty robust. Conversely, even the most popular framework could disappear if the maintainer gives up and nobody can or wants to take over.

So the best thing you can do is make sure the maintainer doesn't burn out and give up.

As a company, that includes giving them dedicated help with triaging bugs or reviewing PRs (it doesn't need to be a full-time job; assign one member of your team a fraction of their work time to the task), or sponsoring them if you have some money to spare. Only do the latter if they are actually looking for sponsorship to spend more time on the project, because otherwise it's a recipe for burnout.


I've been down the path of picking the losing tech myself, so I'll offer this simple tip:

If the project is likely to hit production and be supported for many years to come, architect the code base to separate concerns. Most logic isn't part of the bindings to any framework.

That way potential migration isn't off the table. And it gives a certain peace of mind.

This small pet project isn't a demonstration of that, but a web application that serves data can quite simply abstract things like routing and parsing, keeping all the core business logic independent.

Just don't have any Actix, Bevy, or Dioxus imports except where they belong.

Add abstraction layers where needed; it's sometimes inevitable given how frameworks can spread their tentacles.

Wasm isn't going anywhere, so on that front, given first-class Wasm support in Rust itself, there's no worry to have.


"hexagonal architecture" (aka ports and adapters) offers some nice practical ideas on how to achieve this.

I find it works well with Rust. Rust traits are well suited to a ports and adapters setup. And in the Rust ecosystem, "frameworks" are most often designed to get out of the way: to be pulled into a project as a library, rather than imposing themselves throughout the codebase (like Rails, Django, Laravel etc. tend to do).
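In Rust that boils down to defining each port as a trait and keeping framework types out of the core. A minimal, hypothetical sketch (all names here are made up, none come from Actix or Bevy):

```rust
use std::collections::HashMap;

// The "port": a plain trait the core logic depends on. No framework
// types (Actix extractors, Bevy resources, ...) appear here.
trait UserRepository {
    fn find_name(&self, id: u32) -> Option<String>;
}

// Core business logic: depends only on the port, so it survives a
// framework migration untouched.
fn greeting(repo: &dyn UserRepository, id: u32) -> String {
    match repo.find_name(id) {
        Some(name) => format!("Hello, {name}!"),
        None => "Hello, stranger!".to_string(),
    }
}

// An "adapter" implementing the port; a real one might wrap a
// database pool or an HTTP client instead.
struct InMemoryRepo {
    users: HashMap<u32, String>,
}

impl UserRepository for InMemoryRepo {
    fn find_name(&self, id: u32) -> Option<String> {
        self.users.get(&id).cloned()
    }
}

fn main() {
    let repo = InMemoryRepo {
        users: HashMap::from([(1, "Ada".to_string())]),
    };
    // A framework handler would be a thin shim calling into
    // `greeting` exactly like this.
    println!("{}", greeting(&repo, 1)); // Hello, Ada!
    println!("{}", greeting(&repo, 2)); // Hello, stranger!
}
```

Swapping Actix for Axum then only means rewriting the thin handler shims, not the core.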


Unless your startup model revolves around releasing tooling in this space, why would you focus so much on tech this early on instead of just shipping stuff fast? I've seen founders nerding out on the tooling and picking exotic languages like D for things that could be done in PHP or Rails in 1/5th of the time and I can understand that for hobby projects, but I'd never bet my company on that.


It's an often repeated idea that Rails or PHP somehow let you ship faster than Rust (or Go, or Java?).

For one, it's simply not true in many cases: I know Rails (and Sinatra) very well and Axum or Actix slightly less so, but the time from idea to running server is hardly different between them. In this phase, that's measured in hours or days anyway.

Secondly, the primary thing that speeds up "getting your startup to production" is using known tech: stuff you are extremely familiar with, that is boring, and that is well supported on hosting and deployment. If you write Rust day in, day out, getting rbenv, Bundler, the right gems, IDE support, CI, linting, and whatnot set up will slow you down a lot when you are new to Ruby and Rails. Same for any language. I'm slow in PHP or Python because I keep hitting speed bumps that require experience, detailed knowledge of the stack, or best practices to figure out.

I now understand Rust's borrow checker and type checker, the things most people say slow them down. They slowed me down when I was new to them, the same way my inexperience with pipenv, phpstan, or webpack slows me down in Python, PHP, or JavaScript.

And lastly, the strictness of Rust, as with Java and C# (and TS), is also very quickly turned into a time saver. Refactoring, understanding, and maintenance are just so much easier and faster with static typing and checkers. And I've found it a crucial time saver especially in the first phase of rushing, pivoting, adapting, and rapid change. That pays off within days.


I was referring to the benefits that using a well-established framework/platform gives you, as opposed to implementing new features. The boring parts of the web are long solved in boring frameworks, but not in new ones, where you mess around trying to come up with a good project structure, form handling, auth, background workers, and all that. Performance-critical paths can be written in performant languages; no need to do the CRUD part with fancy tech imho.


I know.

And I was referring to the benefits of using a framework or platform that you know well. Which is rather different, but will often overlap.

The boring parts may be solved in boring frameworks, but if you don't know where to find them, don't know their ins and outs, don't know how to leverage them (or avoid parts), and so on, it just slows you down. Learning a language is so much more than learning a new syntax; that's the trivial part.

It's about learning where to find up-to-date information, which libs are good today, which ones are no longer preferred but still highly popular for legacy reasons. Which boring stuff saves time and which will come back to bite you. What "sharp knives" or "footguns" are lying around. And what might speed you up today but cripple you tomorrow.


See above: the thing we're building requires this.

It's not like you can easily replace Rust with something like PHP or Rails in most contexts; the realistic alternative would be C++, and there's definitely some productivity and stability benefit in using Rust, which can dramatically speed up your execution (especially if you don't have a rock star C++ team in the first place).


We're using Dioxus on the backend with Axum for https://github.com/bionic-gpt/bionic-gpt

Very pleased with the result. It's great having the compiler help out with UI work.


Shoot, I tried to select a Rust backend web framework purely on popularity and totally missed Axum.

I'm having a good experience with Warp, but as a new Rust user I feel like I'm investing a lot in learning Warp's system of traits. Perhaps I'll get better at internalizing such things, but for now it seems likely that the switching costs, even if only cognitive, might be a bit steep.


I'm building a game with Bevy targeting WASM currently :)

Good luck with your project!


Can I ask why not fyrox?


I got to know the developer of Fyrox, MrDimas, and commissioned him to implement blend shapes (morph targets) in the engine, as they had been missing. He's an amazing engineer and got it done quickly.

I then took the same task to Bevy. There are so many more people in the community. One of them stepped up right away and I commissioned him to implement similarly missing blend shapes. Same excellent engineering and delivery, but there were thirty times more people asking and involved in the process.

Bevy has community. Fyrox is a one man show.

Fyrox was a little ahead of Bevy, but Bevy has taken the time to develop its community and has made sounder architectural choices, better suited to becoming a much bigger project and engine.

If I had to recommend either, it'd 100% be Bevy. No hate or dunking on MrDimas, either, because he's an amazing engineer and has built something incredible. But Bevy absolutely appears to be the future, and their community is not going away.


bevy: 1M downloads, tons of example games, plugins and tutorials. large community of helpful people. school book example of well-run FOSS project.

fyrox: 12K downloads, first time I hear about it today.

I did a decent amount of research before settling on bevy two years ago, and I did not even encounter fyrox back then.

Since then I learned bevy and am happy working with it.

Following the bevy project's updates, I gained a great deal of respect for the project lead and the long-term goals of the project.

Also, since I'm mid project there's no good reason to switch frameworks now.

So why would I use fyrox?


Good point, community wins. Was just asking.

I dismissed fyrox. I was initially interested in their real-time scene renderer, but it lags terribly in my environment. I still think it has a nice architecture.


A scene editor is one of the major missing parts of bevy right now, but it's in the planning stage.

You can follow the issue at https://github.com/bevyengine/bevy/issues/85 if you are interested.


Interesting, thank you.


I'm curious how much faster Rust can be compared to JS optimized by V8 in this example. Can you provide some performance comparisons?


I don't have any benchmarks for these.

Others have done some comparisons:

"Versus our initial moment-based implementation, in Chrome we see a 78% improvement (183.93ms to 39.69ms), in Firefox a 90% improvement (269.80ms to 24.88ms), and in Safari an 83% improvement (166.56ms to 27.98ms)."

Ref: https://engineering.widen.com/blog/A-Tale-of-Performance-Jav...

More related to rendering things:

https://www.reddit.com/r/rust/s/wOzuEzFdM5

DOM elements are expensive, so it's probably not down to V8 itself in that second comparison.

Generally one should expect significantly higher performance with Rust compiled into (optimised) Wasm: 2x, 10x. I don't have strong numbers in hand to share right now.

But in a minority of cases it might be slower than on V8, as the latter has a few extremely highly optimised JS operations.


Eh, I would rather see a comparison to a WebGL approach, given we're doing tweening here. Fine not to have one; it just sort of leaves the question up in the air. Faster than Canvas at the very least!


Why would we compare to webGL?

Shaders would only (tremendously) improve rendering vs dom elements.

And Wasm can also leverage the GPU, so it would yield similar performance when comparing apples to apples.

V8 interpreting JS for CPU computation is, I think, what OP was asking about, as it is relevant to determining the best optimised route to the highest compute performance.

If most published benchmarks are correct*, then a graphics application compiled into Wasm, with GPU rendering alongside CPU compute logic, would perform better than its JS+WebGL counterpart.

Would be nice to benchmark that to confirm.

*And they probably are, as Wasm executes at near-native speed. V8 executes certain operations at near-native speed; the rest takes the overhead hit of the interpreter.


Why compare to WebGL? Tweening is for games, and making games in the DOM or in a Canvas is silly.


I'm sorry, I just may not understand. My point is that even graphics-intensive games are also compute (CPU) intensive, so that's where significant performance differences matter. WebGL, WebGPU, and Wasm all execute shaders on the GPU, so I don't see the overhead as critical or of much interest to benchmark. I could be wrong, and some drastically different results could be observed, but what I've read on the subject indicates otherwise. Or I don't understand what you mean.


>> Easing functions specify the rate of change of a parameter over time. Objects in real life don’t just start and stop instantly, and almost never move at a constant speed.

This reminds me of the ‘Illusion of Life’ video on animation https://youtu.be/yiGY0qiy8fY?si=GzjG7GwaH5xAlpH0


https://easings.net/ is popular in gamedev circles to quickly visualize different types of easings
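For the unfamiliar: an easing function just maps normalized time t in [0, 1] to eased progress. A small sketch in Rust with hand-rolled curves (this is an illustration, not the tween-rs API):

```rust
// Each easing function maps normalized time t in [0.0, 1.0]
// to eased progress in [0.0, 1.0].

// Constant speed: no easing at all.
fn linear(t: f64) -> f64 {
    t
}

// Accelerate from a standstill: quadratic ease-in.
fn ease_in_quad(t: f64) -> f64 {
    t * t
}

// Accelerate, then decelerate: quadratic ease-in-out.
fn ease_in_out_quad(t: f64) -> f64 {
    if t < 0.5 {
        2.0 * t * t
    } else {
        1.0 - (-2.0 * t + 2.0).powi(2) / 2.0
    }
}

// Interpolate a value along an easing curve, e.g. an element's
// x position over the course of a tween.
fn tween(start: f64, end: f64, t: f64, ease: fn(f64) -> f64) -> f64 {
    start + (end - start) * ease(t)
}

fn main() {
    // Halfway through an ease-in tween, only 25% of the
    // distance is covered, hence the "slow start" feel.
    println!("{}", tween(0.0, 100.0, 0.5, ease_in_quad)); // 25
    println!("{}", tween(0.0, 100.0, 1.0, ease_in_out_quad)); // 100
    println!("{}", tween(0.0, 100.0, 0.5, linear)); // 50
}
```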


On mobile, the tweens run at a different frame rate each time and stutter, which makes observing the effect of the curves difficult.


Thanks for reporting this. I couldn't see any frame rate drops, even on my rather old phone. Now I have a reason to dig into performance further.


The flash it gives on "Restart" is blinding/annoying. Not sure if it makes a difference, but I'm in dark mode in Firefox.

Cool idea btw! Combining tween + web-sys, that too up and running in 5 days. Kudos!


The restart button is just navigating back to the same URL, the white flash is your browser showing white in between two pages.

I don't know why browsers do that; it seems like they should be able to hold the old background color until the first paint of the new page. I have also had this annoyance going from one dark mode site to another: it just flashes pure white in between.


That's right, I took a shortcut and simply reloaded the page.

I could reset the animation instead, which would avoid the flickering. And I didn't test on Firefox, my bad.


It's a legitimate shortcut, don't take the issue of some browsers having less than ideal page navigation behavior too hard.


I'd play the animation in reverse and loop indefinitely until the user hits pause.

You could also animate all the graphs on the first page, with a dot moving along the lines.


Good news everyone!

https://developer.mozilla.org/en-US/docs/Web/API/View_Transi...

  Note: The View Transitions API doesn't currently enable cross-document view transitions, but this is planned for a future level of the spec and is actively being worked on.
Ah, this bit didn't make it into the current spec. See you in 2034.

I suppose we could have had this 25 years ago. It is almost as if web browsers are sabotaged in subtle ways, but who would try and mess with such an important platform.


We did have them 25 years ago.

https://learn.microsoft.com/en-us/previous-versions/windows/...

The documentation there may not make it clear, but you could do things like navigate from one page to the next, and fade in the new page, from a normal full page reload, with no intermediate flash of unstyled anything.

It was part of the attempt to lock the web into DirectX, and even Microsoft themselves deprecated this a long time ago. But it definitely worked. I remember playing with it. IE4 through IE6-ish had a lot of weird stuff in it to try to drive lock-in.


Bit more info here: https://www.htmlgoodies.com/javascript/dhtml-transitions/

Example to add a 1 second random fade-in:

  <META HTTP-EQUIV="Site-Enter" content="revealTrans(Duration=1.0,Transition=23)">


Time is a flat circle ;)


The View Transitions API is more about supporting hero animations, which native apps and SPAs can have.

I want something much simpler: if navigating between documents would reset to pure white, just hold the previous document's background color instead. Neither the outgoing nor the incoming site should have to opt in to anything for this, and it would avoid the hellish flash when going from dark mode reddit to some other dark mode site.


I've been noticing it more lately, especially at night when the room is darker. It seems like it has gotten worse, so I was wondering if there had been some kind of change or if I'm just noticing it more.

Mostly am using firefox these days.


I think the change is that more sites offer a dark mode, so the flash is more visible, even though it always happened.



