
Fast Vue SSR with Rust and QuickJS - jgalvez
https://github.com/galvez/fast-vue-ssr
======
the_duke
QuickJS is a simple interpreter, no JIT, so performance obviously won't be
competitive with V8.

The appeal is that it supports most of ES2019 while being an easy-to-build,
lightweight dependency that can be embedded into Rust applications. [1]

Would I use something like this for a high-traffic production service? Most
likely not, though it would be nice to see benchmarks.

But if you have a Rust backend and want a simple way to do SSR without having
to operate a separate Node service or linking v8, this might be a neat
approach.

ps: It would be nice to run QuickJS in a WASM runtime, while still offering
the same convenient API surface for Rust. This would provide sandboxing for
the QuickJS C code. It's in my backlog; I might get to it eventually. [2]

[1] [https://github.com/theduke/quickjs-rs](https://github.com/theduke/quickjs-rs)

[2] [https://github.com/theduke/quickjs-rs/issues/11](https://github.com/theduke/quickjs-rs/issues/11)

~~~
amelius
Personally I think it's a huge waste to re-write everything (badly) in Rust.
Why not link against an existing JS compiler/interpreter written in another
language?

~~~
jgalvez
That is precisely what this is doing.

------
kbumsik
So, how is it "Fast" without any performance benchmarks? Are we just assuming
it's fast because it's written in Rust?

~~~
jgalvez
Haven't had time to do proper benchmarks yet, but it does perform way faster
than a single Node process serving the same Vue.js app via Nuxt.js, while
using less memory. This is likely to vary for large, complex apps, for sure.
And yes, Node will beat QuickJS in benchmarks by a wide margin, but then
again, I don't think you can run 64 tiny threads with V8 like I do with this
thing :)

~~~
jgalvez
On my computer, single process Node vs single process Rust:

\- Rust: 20k requests in 10.04s

\- Node: 4k requests in 10.11s

Tested with `autocannon http://localhost:3030`.

Running multiple Node processes will beat it for sure.

But then you're using a lot more memory and CPU, I think.
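For what it's worth, the figures quoted above work out to roughly a 5x
throughput gap. A quick sanity check of that arithmetic (numbers taken from
the comment, not a new benchmark):

```rust
fn main() {
    // Requests completed over each ~10s autocannon run, as quoted above.
    let rust_rps = 20_000.0 / 10.04; // ≈ 1992 req/s
    let node_rps = 4_000.0 / 10.11; //  ≈ 396 req/s
    let ratio = rust_rps / node_rps; // ≈ 5.0x
    println!("Rust: {:.0} req/s, Node: {:.0} req/s, {:.1}x", rust_rps, node_rps, ratio);
}
```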

~~~
wmf
I'm curious about the latency since that's generally the point of SSR.

Also, a single process can consume just as much CPU as multiple processes, so
the efficiency isn't obvious unless you measure user time.

------
xucheng
It seems that there is a JS context per worker, and the workers are shared
among different requests. Further, the requests are handled by simple string
formatting and eval in the JS context. This means it should be trivial to
inject arbitrary JS code on the server and mount an XSS attack.
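The failure mode being described is easy to demonstrate: if user input (say,
the request URL) is spliced into the eval'd source with plain string
formatting, a crafted input can escape the string literal and run arbitrary
code. Emitting the input as a JSON string literal first keeps it inert. A
pure-Rust sketch of both patterns (the `render_call_*` helpers and the
escaping function are illustrative, not the project's actual code):

```rust
// Naive approach: splice user input straight into the JS source.
fn render_call_naive(url: &str) -> String {
    format!("renderApp(\"{}\")", url)
}

// Safer: emit the input as a JSON string literal so quotes, backslashes
// and control characters cannot terminate the literal early.
fn json_string_literal(s: &str) -> String {
    let mut out = String::from("\"");
    for c in s.chars() {
        match c {
            '"' => out.push_str("\\\""),
            '\\' => out.push_str("\\\\"),
            '\n' => out.push_str("\\n"),
            '\r' => out.push_str("\\r"),
            c if (c as u32) < 0x20 => out.push_str(&format!("\\u{:04x}", c as u32)),
            c => out.push(c),
        }
    }
    out.push('"');
    out
}

fn render_call_escaped(url: &str) -> String {
    format!("renderApp({})", json_string_literal(url))
}

fn main() {
    let evil = "\"); doEvil(); (\"";
    // The naive version lets the payload break out of the string literal
    // and become a second statement:
    println!("{}", render_call_naive(evil)); // renderApp(""); doEvil(); ("")
    // The escaped version keeps the payload inside one string argument:
    println!("{}", render_call_escaped(evil));
}
```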

~~~
jgalvez
Each thread runs the renderer sequentially, which means we can share a
VueRouter instance between them as well. At least so far, I haven't seen any
issues with it.

This is highly experimental, just a proof-of-concept so far. Security issues
need to be reviewed and addressed for sure.

~~~
spankalee
Client-side code can assume a single-user context. It could write user-
specific values to globals, use them in modules, etc., and it would be fine on
the client side. But run in a multi-user environment, it would be relatively
easy to leak data from one user to another. SSR should run each request, or at
least each session, in a fresh context.

~~~
jgalvez
That would be the case in a typical Node setting, but in this case, each
thread runs the renderer no more than once at a time. So we just have to
provide a fresh state.
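One way to picture the distinction: the worker's long-lived context behaves
like a map of globals that persists across renders, so "providing a fresh
state" has to mean wiping it before every request. A pure-Rust sketch of the
leak and the fix (illustrative only; the real worker holds a QuickJS context,
not a `HashMap`):

```rust
use std::collections::HashMap;

// Simulates one worker thread's long-lived JS context: globals persist
// across requests unless explicitly reset.
struct Worker {
    globals: HashMap<String, String>,
}

impl Worker {
    fn new() -> Self {
        Worker { globals: HashMap::new() }
    }

    // Unsafe pattern: render without clearing state left by the last request.
    // Returns whatever the previous request stored, to make the leak visible.
    fn render_reusing_state(&mut self, user: &str) -> Option<String> {
        let leaked = self.globals.get("session").cloned();
        self.globals.insert("session".into(), user.into());
        leaked
    }

    // Safer pattern: wipe per-request state before each render.
    fn render_fresh(&mut self, user: &str) -> Option<String> {
        self.globals.clear();
        let leaked = self.globals.get("session").cloned();
        self.globals.insert("session".into(), user.into());
        leaked
    }
}

fn main() {
    let mut w = Worker::new();
    w.render_reusing_state("alice");
    // Bob's render can observe Alice's session: a cross-user leak.
    assert_eq!(w.render_reusing_state("bob"), Some("alice".to_string()));

    let mut w = Worker::new();
    w.render_fresh("alice");
    // With fresh state per request, nothing carries over.
    assert_eq!(w.render_fresh("bob"), None);
}
```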

------
gutino
If memory is the concern, Node's V8 engine has a "lite" mode:
[https://v8.dev/blog/v8-lite](https://v8.dev/blog/v8-lite)

With that, perhaps you could spawn Node across all threads and do even better
in terms of performance/memory?

~~~
jiofih
V8-lite offers a ~25% reduction in memory usage at best, which is still one or
two orders of magnitude more than engines like QuickJS. And you still have to
deal with the massively slow and complex build.

------
Matthias247
Be aware of a couple of issues in the implementation:

\- The Warp handlers are blocked by the use of thread-blocking synchronization
primitives (locks, channels), which might harm performance and can even lead
to deadlocks.

\- The architecture of "passing work off to a threadpool via a channel and
waiting for it to complete" is not very resilient, and prone to behave badly
under high load. The reason is basically the infinite queuing: if new
requests come in faster than previous ones can be processed, the queue length
(and thereby the memory footprint of the app) will grow and grow. At some
point the application will just be busy serving old queued-up requests, which
might already have timed out on the client side and potentially even been
retried. That's a vicious circle, and you can only get out of it by shutting
down the app. One way to fix this is to limit the queue/channel size and
perform load-shedding on requests which cannot be enqueued.
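That load-shedding fix is cheap to implement with a bounded channel:
`std::sync::mpsc::sync_channel` has a fixed capacity, and `try_send` fails
fast once the queue is full, so overload becomes an immediate 503 instead of
unbounded queuing. A minimal sketch of the idea (not the project's actual
code; the capacity and request counts are arbitrary):

```rust
use std::sync::mpsc::{sync_channel, TrySendError};
use std::thread;
use std::time::Duration;

fn main() {
    // Queue at most 2 pending render jobs; beyond that, shed load.
    let (tx, rx) = sync_channel::<String>(2);

    // Worker thread: drains the queue slowly, standing in for SSR work.
    let worker = thread::spawn(move || {
        for job in rx {
            thread::sleep(Duration::from_millis(50));
            let _ = job; // the render would happen here
        }
    });

    let mut accepted = 0;
    let mut shed = 0;
    // A burst of 20 "requests" arriving much faster than they are served.
    for i in 0..20 {
        match tx.try_send(format!("request-{}", i)) {
            Ok(()) => accepted += 1,
            // Queue full: reply 503 immediately instead of queuing forever.
            Err(TrySendError::Full(_)) => shed += 1,
            Err(TrySendError::Disconnected(_)) => break,
        }
    }
    drop(tx);
    worker.join().unwrap();
    println!("accepted: {}, shed: {}", accepted, shed);
    assert!(shed > 0); // under this burst, some requests get rejected
}
```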

~~~
jgalvez
Still, it could be used reliably with the help of a load balancer and health
checks. I can see Kubernetes spawning pods running this server a lot quicker
than it does for Node ones.

------
mhd
I'll definitely use this once Fabrice Bellard gets around to writing TinyRust.

------
sradman
I understand that Rust's Hyper and/or Warp provide a very fast httpd engine,
but I would have thought that invoking JavaScript via Deno/V8, instead of
QuickJS, was the natural integration.

Maybe Deno is hard to invoke from Rust code.

~~~
chrismorgan
The Deno core is _easy_ to embed in Rust (or so they say; I haven’t tried it).
[https://deno.land/manual/embedding_deno](https://deno.land/manual/embedding_deno).
Note that this is the core and doesn’t include TypeScript support. Also note
that the rusty_v8 bindings will by default download a prebuilt version of V8
from GitHub, because otherwise it can be a mild pain to set up and takes a
long time to build. See
[https://github.com/denoland/rusty_v8](https://github.com/denoland/rusty_v8).

I can easily see why one would go with QuickJS: it’ll be much quicker to get
going.

------
maxfrai
I think you should post about this project to reddit:
[https://www.reddit.com/r/rust](https://www.reddit.com/r/rust)

------
dragonsh
It’s been an observation for quite some time: add Rust to any headline and it
will most likely end up on top of HN.

I didn’t expect such a trivial thing to be on the front page of HN, but
probably Rust marketing and evangelists are all on HN waiting to move it to
the top.

Still waiting for a good library in Rust not dependent on C or unsafe code.
Looking at most of the Cargo libraries, most of the good ones are wrappers
around a C or C++ library or use unsafe code.

~~~
Avi-D-coder
There will always be unsafe and often be C dependencies.

There is no community goal or plan to do away with unsafe. Until we have an
easy-to-use, fully dependently and linearly typed language, there will be some
degree of unsafety. Even if Idris 3 and ATS 4 come out with magical proof
inference, we will still be asserting many more complicated proofs.

~~~
dragonsh
The Rust community has been very vocal in positioning Rust as a C replacement,
and based on your statement it is not. So it’s better for systems programmers
to first focus on C/C++ to be closer to the hardware, and when they need some
higher-level work, look at Rust for writing higher-level libraries that don’t
deal directly with hardware.

~~~
dbaupp
The grandparent didn’t say anything about why C dependencies will often exist:
in almost all cases, it is not because they _must_ exist due to missing
functionality in Rust, but because it is easier to use a C library than
rewrite/translate into Rust. Once someone has done that process (or
implemented something equivalent), using a pure Rust dependency is nicer for
many reasons, but reimplementing a complicated library is always hard, no
matter the language used.

~~~
dragonsh
> it is not because they must exist due to missing functionality in Rust

This is the precise reason. Rust is a decade or two away from being a useful
replacement for C/C++, unlike other contending systems programming languages
like Go and Swift, which are at present more popular than Rust for writing
systems programs for networking, servers, and controlling Apple hardware
directly.

~~~
Avi-D-coder
Zig, Nim, Crystal, Go and Swift all use C libraries extensively. Nim and Zig
even make low-boilerplate C interop a key selling point.

Mozilla, Microsoft, Amazon, Apple and others' investments in Rust clearly
demonstrate the value of writing new systems software in Rust, alongside and
inside existing C/C++ code bases. Rust is clearly monetarily useful, despite
not being a systems programming panacea.

~~~
dragonsh
The difference is that most of those languages don’t claim to replace C and
are explicit in their support of C.

Rust evangelists promote Rust as a replacement for C and frown upon anyone who
holds a contrary view, including direct attacks on those opinions.

Rust has value like other programming languages and is not a panacea. As you
said, it’s just not a C replacement for another decade or two, or maybe never.

~~~
Avi-D-coder
Zig attempts to be a C replacement candidate, like Rust. What would make Rust
or Zig a valid replacement, in your opinion?

------
alttab
Using JS to do SSR? How the turntables.

~~~
tyingq
It wasn't well liked, but Netscape LiveWire did JS SSR in the '90s.

------
andybak
I hate post titles containing TLAs I don't recognise.

(Three letter acronyms before anyone accuses me of hypocrisy)

~~~
etbusch
SSR - Server Side Rendering

