Making a Statically-Linked, Single-File Web App with React and Rust (anderspitman.net)
378 points by anderspitman 10 months ago | 171 comments



> This is somewhat similar to what Electron accomplishes, but your backend is in Rust rather than JavaScript, and the user navigates to the app from their browser.

An option in the middle is webview[0], which will "appify" it by using the OS-native browser lib, so you don't have to carry Chromium, V8, or all the multi-process ugliness, but you still get the web stack.

Also, everybody, don't forget to check your Host headers when building apps that run HTTP daemons locally, or you'll be open to DNS rebinding attacks[1]. And if your use case can support it, a random port is nice too.

0 - https://github.com/zserge/webview

1 - https://en.wikipedia.org/wiki/DNS_rebinding
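
A minimal sketch of the random-port idea in Rust (std only, names illustrative): binding to port 0 asks the OS for an ephemeral port, which you can then hand to whatever opens the browser or webview.

    use std::net::TcpListener;

    fn main() -> std::io::Result<()> {
        // Port 0 = "give me any free ephemeral port".
        let listener = TcpListener::bind("127.0.0.1:0")?;
        let addr = listener.local_addr()?;
        println!("listening on http://{}", addr);
        // Hand `listener` (or just addr.port()) to your HTTP server here.
        Ok(())
    }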


> This is somewhat similar to what Electron accomplishes,

What? This is nothing like Electron. What it's like is shipping the `node` executable to the end-user along with a .js script. The whole point of Electron is that it bundles a browser.


The parent already covered your whole objection in their comment: "but your backend is in Rust rather than JavaScript, and the user navigates to the app from their browser" -- and went on to provide even more context and info showing they know the distinction perfectly well.

The parent's point is not that it's like Electron in that it bundles a browser, but that it's like Electron in that it allows you to bundle a web application into an executable.

The difference is that it doesn't bundle the rendering engine.


That was OP's quote, but "somewhat similar" still applies. For some, the whole point of Electron is that they can use web pages... meaning maybe there isn't only one "whole point" of it, and therefore, to some, "nothing like Electron" is incorrect.


I just wish electron could somehow harness the fact that everyone has a browser installed.


Well, Chrome doesn't really allow chrome.dll to be easily used. Electron has deep requirements on Chromium for V8, IPC, webview, custom protocol handlers, etc., IIRC, so it can't just easily switch browser engines. I'd say if you want this, the Chromium project would have to prioritize a stable Chrome ABI for their shared lib, which I doubt they would do. And it still doesn't solve the extremely memory-bloated approach they take to serving the web stack. So for that, the Chromium project would have to prioritize better runtime feature gating for memory reduction and maybe revive first-class support for single-process mode, which I doubt they'd do. And even then, you're carrying Node.

To sum up my rant: I'm afraid the fast-moving web stack, coupled with the fact that only a few companies can keep up, coupled with the fact that those companies are laser-focused on their own browser use case and not embeddability, combines to make this incredibly common desktop use case still suck. In the meantime, just hope these OS engines stay minimal and don't fall too far behind the standards you want.


This is what makes me want to see something like Electron developed over Firefox internals.

Packaging my own browser with my code so that I am now developing toward ONE client instead of any random thing that can speak HTTP, and at the same time being able to take the core of my app, tweak it a little, and plop it out on the web without a complete rewrite... that is completely amazing. It's what I've wanted since forever.

Even better would be the ability to optionally include features in my build. Like if I don't NEED WebGL, the MIDI and Audio APIs, etc, etc ... it'd be pretty nice to be able to optionally exclude those from the bundled browser to minimize size.

However, I don't want to be welded to Google. Mozilla's codebase seems ripe for such a development.


> This is what makes me want to see something like Electron developed over Firefox internals.

Tried, but abandoned, presumably based on prioritization: https://github.com/mozilla/positron. I was hoping Servo would help here, but from the outside looking in, the pieces are being moved into FF proper and the Servo browser's priorities have been reshifted toward being part of the VR team. Sure, they're still making a general-use browser (and that's quite a feat), but the embeddability game may suffer (I believe conforming to the CEF interface, which was their original goal, has gone stale).


It was called XULRunner and is thankfully already forgotten, beyond Mozilla's own stuff.


Having a browser installed isn't the problem. The problem is having a specific version of a specific browser installed because that's the only thing you've tested against.

(I'm not condoning this, and haven't worked with Electron myself, but that's my understanding of the motivation.)


But what happens if your JS stack requires a feature that is only in Chrome 65+ and someone else's stack relies on a feature that was deprecated/removed in Chrome 60? Furthermore, does that mean that if a user goes to Task Manager and ends the Chrome task, your app would be killed too?


We tried that. (DLLs, shared objects). Doesn't work.


Electron is just revivalism; it used to be called MSHTML.

Like everything in fashion, next season trends will be other ones.


Which browser? Why do I want to spend my time making my desktop app run in Chrome and Firefox and Internet Explorer and Edge... and all the old versions of these. Plus the fun of saying to my customer "oh you have to upgrade your OS before you can run our app, sorry".


Oh, but the power of the Web....


That’s literally what a web view does, though.


> ... DNS rebinding attack. And if your use case can support it, a random port is nice too.

Ah, that's a good explanation as to why zserge/webview recommends serving using ephemeral ports.

Also, I wanted to pull in from the readme: webview supports interacting with the JavaScript environment directly, so a web server isn't strictly required.


With all that work why not just link to gtk and build on that?

Why should a desktop app even care about host headers?


> With all that work why not just link to gtk and build on that?

Because the packaging and distribution experience sucks. Seriously, that's all it is. Make it so that I can run one command and turn my GTK app (written in a language that has a decent package manager and library ecosystem, i.e. not C or Vala) into single-file executables for all major platforms, and you'll take the market share back from Electron.


That exists, just not in 100% automated form.

I'll agree that packaging desktop apps is still annoying, but traditionally the packaging experience is handled elsewhere in your stack. I'm not sure that expecting a widget API to provide lifecycle management is the wisest choice...

Electron is a full stack, yes, but GTK slots into any number of stacks. Not my area but putting together a build script that outputs different formats (for an app that just requires some file copying for install) can't be that bad. Python in particular has lots of options for cross-platform packaging and distribution. (py2exe, etc)


> I'm not sure that expecting a widget API to provide lifecycle management is the wisest choice...

From a developer's point of view, Electron solves that problem. It's not about what the underlying library is, it's about what the whole package does.

> Not my area but putting together a build script that outputs different formats (for an app that just requires some file copying for install) can't be that bad. Python in particular has lots of options for cross-platform packaging and distribution. (py2exe, etc)

All the pieces are there, but no one's put them together in a nice, well-supported way. Packaging a Python application like that is a bunch of tedious manual gruntwork that's easy for a beginner to get wrong.


> With all that work why not just link to gtk and build on that?

If you're asking "why not link to gtk webkit?": because I don't want to statically compile or ship with the entire browser (not to mention Windows compat). If you're asking "why not build your app on gtk instead of web tech?": there are a million reasons and this question happens frequently, so not really sure it's worth rehashing here.

> Why should a desktop app even care about host headers?

It's one way to prevent DNS rebinding attacks. You can employ other methods to ensure the client is "authenticated" to use the server. OP's app listens on port 5000. I can have a page on example.com:5000, set the DNS zone to a really low TTL, and change its A record to 127.0.0.1 after first load, thereby letting the browser think I'm same-origin with just an IP change; then I can make AJAX calls to example.com:5000 to access the local daemon. That's how DNS rebinding attacks work, and Host header checking is one way for the local web server to prevent other pages from accessing it. Project Zero is finding lots of local HTTP servers that are reachable from web pages.
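
A minimal sketch of such a Host header check; how you read the header depends on your framework, so this just takes an Option<&str> (the function name is illustrative):

    fn host_is_allowed(host: Option<&str>, port: u16) -> bool {
        // Only accept the Host values a same-machine client would send.
        let allowed = [
            format!("127.0.0.1:{}", port),
            format!("localhost:{}", port),
        ];
        match host {
            Some(h) => allowed.iter().any(|a| a.as_str() == h),
            None => false, // no Host header at all: reject
        }
    }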


> there are a million reasons and this question happens frequently, so not really sure it's worth rehashing here.

then

> Project Zero is finding lots of local HTTP servers that are reachable from web pages.

Amazing. All signs point to not building apps that manifest as local web servers, and people continue to do so anyway.

A million reasons, huh? A million reasons against, more like it.


Well, I wouldn't recommend webkit-gtk either, but there's a whole lot of other controls and stuff in GTK that were designed for desktop apps. It just seems silly to go to all those lengths just to preserve a JS/HTML/CSS workflow when you can get the same thing, with less work (edit: and fewer potential vulns), with GTK and rc styles.


Besides stack familiarity, a big problem people have with GTK is that it feels even less native than Electron.


........ huh? GTK uses native controls on Windows and Linux (and I think Mac?)

The fact that you have to care about DNS rebinding attacks on a desktop app, where presumably that app does not do anything special with the network or DNS, is all kinds of smelly to me.

I wish I could show off some of my work in this area but we took an old JS/HTML/CSS 1.0 stack and did our 2.0 in GTK (with all the fancy graphics and everything), and we are much happier now -- a ten-year-old single-core Celeron system with little RAM runs our stuff great (whereas before we had issues)


Is webview more like Phonegap (throwback) than Electron?


They are about the same as far as UI/UX for the end user is concerned.


This looks awesome. Does it add much size to the binary?


Just the size of the tiny bit of C code you'd compile with. It uses shared libs already on the system (though I'm not familiar with how many Linux distros ship gtk-webkit2).


"Because I’ve been obsessed with static linking for as long as I can remember and I’m not really sure why."

Glad I'm not the only one.

When I was younger, I made a hobby Linux distro that ran completely on floppies with a custom filesystem hierarchy. All the binaries were in /Programs and were statically linked against uClibc. Even X was statically linked, there was some abandoned branch of XFree86 that ran with a VESA driver and fit under 1.44 MB. I thought that I was the bee's knees.

All this was still possible ~2007. Not sure how masochistic you'd have to be to try today.


Same here; although people think it's silly, I find it less error-prone. The same goes for C++ libraries: I only use single-header or amalgamated versions because it's less to support and ABIs won't break between versions.


Being able to include arbitrary data inside a static binary is definitely a nice side-effect of rust and golang being able to produce static binaries.

If you're interested in stuff like this for go, I've used https://github.com/GeertJohan/go.rice with great success.

[EDIT]

"feature" -> "side effect"

"rust and golang" -> "rust and golang being able to produce static binaries"


Yeah, it's definitely nice to avoid an annoying binary-to-source step and then having to waste a bunch of cycles to parse/compile said source. I do embedded work and run into this all the time with C and C++. Unfortunately, how to import binary blobs in C/C++ is toolchain-dependent. You very quickly run into a wall on how large a source file can be compiled, around 100 MB or so. gcc is the best at handling huge source files, but it's still slow.

For example, you can do the import with gcc this way: https://balau82.wordpress.com/2012/02/19/linking-a-binary-bl...

With Green Hills, there is a .rawimport directive.


Maybe for younger generations.

Apple and Microsoft systems have been supporting this on their native toolchains since the mid-90's.


Yes, including arbitrary data in binaries is probably as old as making binaries itself.

I think what is new is that in the mid-90s you didn't have an open source, memory-safe, ergonomic, strongly typed, type-inferred language that targets multiple platforms with ease, with a decent standard lib. Rust is delivering that. Golang is delivering that. I think those two are novel in that sense.

From where I'm sitting the last few decades have been ruled by VMs and the explosion of scripting languages, because no one wanted to use those Apple/Microsoft toolchains that have been around in the 90s. While those toolchains rested on their laurels (or didn't, can't tell the difference now), Java became the enterprise workhorse.


Well, Turbo Pascal, Delphi and Oberon also supported it, and they check some of those bullet points.

Many devs do actually enjoy those Apple/Microsoft toolchains.


Common Lisp as well. For certain definitions of memory safe.


The "decent standard lib" part is debatable, especially if you don't slip on the "targeting multiple platforms with ease" part.


Fair enough. Quicklisp + SBCL/Clozure is pretty solid but I suppose that wasn't the case 20 years ago.


For sure, and its standard library equivalent was very, very good for the time, just not by 2018 standards.


Haskell had that.

Java was marketed pretty closely to that.


There still isn't a good built-in way to do this portably in C++.


Qt has the moc/resource compiler that can do this for you. Not that you would/should shoehorn qmake into your project for it though, but they do it nonetheless.

Maybe you can use qmake as a prebuild step and include the generated moc files in your build? I wonder how many Qt headers are in the generated source files.


qmake is not the only way to build Qt projects; CMake is quite straightforward and qbs works extremely well, too, though is rarely used in the wild (pity). I'm sure other build systems have sugar for Qt as well; CMake and qmake are just wildly popular.


Was the "data fork" and "resource fork" of the original Mac binaries in 1984 something different than what you're talking about?


Also counts, Macs just weren't something we would care about on 80's Europe, beyond an occasional article on a computer magazine.


Aren't resource scripts (.rc) even older than that?


How does go.rice compare with go-bindata[0]? I only have experience with the latter.

[0] https://github.com/jteeuwen/go-bindata


I haven't compared them (as I haven't used go-bindata), but go.rice has a pretty ergonomic API -- I once took a look at go-bindata, but I much prefer go.rice. go.rice also seems more active in general.


This looks really cool. I'd love to have a library for Rust to make this more seamless. In particular it would be nice to have a "development mode" where you just host the HTML/JS files normally while you're working on them, then flip a production switch when you're ready to have them compiled into the binary. Does go.rice have something like this?


I actually thought that's what Rouille (https://github.com/tomaka/rouille) was, which was mentioned in the blog post, but it looks like he just uses the macro from that package (to include the strings), not the actual library, since at the end of the day he opts for Rocket (https://rocket.rs/)

[EDIT] - I didn't address your question about go.rice -- from what I know, it doesn't have this kind of thing built in, and I don't know that it even should... "flipping a production switch" is a pretty application-specific endeavor. Also, this sounds like something you should be doing with your build tools... you can't include something in the binary at runtime. Maybe I misunderstood what you were asking.

To clarify, the description for go.rice is:

> go.rice is a Go package that makes working with resources such as html,js,css,images,templates, etc very easy. During development go.rice will load required files directly from disk. Upon deployment it is easy to add all resource files to a executable using the rice tool, without changing the source code for your package. go.rice provides several methods to add resources to a binary.


Actually the include_str! macro is shipped with Rust. The router! macro for setting up the endpoints comes from Rouille.

My question is really geared towards this pain point: the React/JS stuff can be rebuilt quickly for almost instant feedback, but even for a simple app like this example, the Rust compilation takes a couple of seconds on my system. You have to pay that cost even if you're just updating the JS, because it has to be built into the binary. It would be nice, when you're not changing the actual Rust code, to be able to reload your JS without recompiling.
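
One sketch of that split, assuming the bundle lives at ui/dist/bundle.js and using debug_assertions as the "development mode" switch (no extra library needed):

    // Debug builds read from disk on every request, so a webpack rebuild
    // shows up without recompiling the Rust binary.
    #[cfg(debug_assertions)]
    fn bundle_js() -> String {
        std::fs::read_to_string("ui/dist/bundle.js")
            .expect("bundle.js missing; run the JS build first")
    }

    // Release builds embed the file into the executable at compile time.
    #[cfg(not(debug_assertions))]
    fn bundle_js() -> String {
        include_str!("../ui/dist/bundle.js").to_string()
    }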


Thanks for the clarification! I assumed it was from Rouille... I went back and re-read the area, and he's using Rouille, NOT Rocket -- I didn't notice this the first time through, didn't read closely enough.

Oh I think you absolutely can have that kind of flow -- go.rice DOES support that. Also, I still don't really understand because this problem seems to be easily solved by just changing your build script, or detecting environment at runtime. You could even check for the data, and if it's not present, fall back to disk.

I don't know a library that does it off the top of my head for rust though, since I'm not that familiar with rust dev.


I’ve come halfway to building this a few times; it’d be extremely helpful for mdbook, for example.


That feature's on my (currently just mental) TODO list for my http-serve crate (https://github.com/scottlamb/http-serve). I want to use it in my moonfire-nvr app.

I'm thinking something a little fancier than include_str! because I want to include a whole subdir of resources, and I probably want to embed the gzipped version (and uncompress into RAM for when there's no "Accept-Encoding: gzip") rather than the reverse.
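
A sketch of that idea, assuming the flate2 crate and a pre-gzipped copy of the asset (paths and names illustrative):

    use flate2::read::GzDecoder;
    use std::io::Read;

    // The gzipped file is what actually gets baked into the binary.
    static BUNDLE_GZ: &[u8] = include_bytes!("../ui/dist/bundle.js.gz");

    fn bundle_body(accepts_gzip: bool) -> Vec<u8> {
        if accepts_gzip {
            // Serve the embedded bytes as-is with Content-Encoding: gzip.
            BUNDLE_GZ.to_vec()
        } else {
            // Uncompress into RAM for clients without gzip support.
            let mut out = Vec::new();
            GzDecoder::new(BUNDLE_GZ)
                .read_to_end(&mut out)
                .expect("embedded gzip data should be valid");
            out
        }
    }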


Assuming I understand you, it would be easy via an interface.

Build an interface for go.rice (or go-bindata/etc) such that, depending on the environment, it either pulls the file from disk or from memory. Super easy.

I seem to recall seeing a Go library that actually has this functionality built in, but I've got no idea which it was.


Are these basically statically linked jars?


Rust can create static binaries -- AFAIK there isn't even such a thing as a "statically linked" jar -- if you're relying on a VM (the JVM in this case), your program isn't statically compiled. It's not even "compiled" in the strict traditional sense, since the target isn't the machine, it's a VM (the JVM).

But yes, it is just like including files in your classpath (and making sure they end up in the jar), then serving them from a Netty endpoint or something.


Yes, it can be compiled into pure, VM-free assembly.

It is only a matter of buying one of the commercial JDKs or, if doing GPL stuff, using the open source license from Excelsior JET.

Those not willing to pay and already on Java 9/10, running on Linux x64, can play with the initial AOT support.


> Are these basically statically linked jars?

A JAR is meant to run on the JVM. There is no such thing as a "statically linked jar". The process you're referring to is more like compiling bytecode to assembly, you're converting a JAR, a thing meant to run on the JVM, to native machine code (with assembly as the step in between).

> it's only a matter of ...

Yeah, it sounds simple in theory, but somehow I don't find many projects these days that use these commercial JDKs to generate VM free assembly. In fact, I don't even know one big tech company that does so -- maybe you could enlighten me.

> Those not willing to pay and already on Java 9/10, running on Linux x64, can play with the initial AOT support.

So just about the newest version of Java only just got initial support? Cool. Well, this brand new, community-led, thoroughly open-source language has this as a core tenet, and supports compiling to many platforms very easily.

Kudos to Java for getting better, embracing a more open development methodology (I think this has been the case since 8), listening to the community more about which features to include, but when compared to projects that took a fresh look at all these concepts, and started with the open community-based approach, Java doesn't compare (except in terms of speed, JVM is pretty heckin' fast).


US Army, US Navy, French Army, IBM real time deployments, factory automations, for your enlightenment.

Ah, and if you happen to own an Android phone running version 6.0 or newer.

The free beer version of Java never supported AOT because Sun was religiously against it.

The community never managed to gain enough mindshare to keep gcj going, which was used by Red-Hat to ship native compiled versions of Eclipse, Tomcat and JBoss.

Since Oracle thankfully has another opinion on AOT, they have a long-term plan to bootstrap Java and take C++ out of the equation.


Thanks, I didn't know that, definitely a huge TIL.

Also, Android went from Dalvik to ART right? Those are both still VMs? Latest android looks like ART + JIT, but you can't AOT a JIT (that's the whole point of doing it Just In Time)?

Maybe I should take a look at Java 9/10, but if an employer isn't requiring it, and library support is somewhere near similar, I'm definitely considering doing the project in Rust first, then Clojure, then Frege, then Java 9/10.

Outside of the insane wealth of libraries that exist as a result of no one really having a good cross-platform choice for the last few decades, I don't think I'd choose plain Java for a project today. The JVM, maybe, but not plain Java on top of it. Luckily, I'm not the only developer out there, since there are tons of people who love Java and are still supporting it, using it, making cool things with it, and pushing it to be better.


Android is a bit more complex than that.

Until version 5.0, there was only Dalvik, a register based VM with a very basic JIT compiler.

On version 4.4, ART was introduced, but you had to explicitly enable it and many OEMs did not have it available anyway.

ART was a pure AOT compiler, at installation time on-device.

Given that it could take hours to recompile everything when updates came to phones with lots of apps, Android 7 introduced a multi-mode interpreter/JIT/AOT toolchain.

On 7 and later, when an application starts for the first time, the interpreter written in hand optimized Assembly is executed, then control is given to the JIT, which in turn makes use of PGO.

A background AOT compiler takes the PGO data generated by the JIT and creates an optimized AOT binary for the application.

The next time an application is executed, the AOT compiled binary is used, until there is some kind of change, like an update or unexpected execution path, that requires a recompilation to take place.


> Those are both still VMs? Latest android looks like ART + JIT, but you can't AOT a JIT (that's the whole point of doing it Just In Time)?

Dalvik was a JIT VM. Then, for Android 5.0 (Lollipop), Android moved to AOT-compiled code (on installation and ROM upgrades, which is why, if you ever updated Android back then, you'd be sitting and waiting for a few minutes while Android "optimized apps").

Then recently (I don't remember if it was for Marshmallow or Nougat) Android went back to a JIT VM on install, followed by a background AOT compile. This way installation goes faster (it doesn't have to compile everything right away), ROM upgrades go way faster (you don't have to AOT compile tens to hundreds of apps before being able to use your phone), and Android can do guided optimization during AOT (it knows which hot-paths were taken).


You can see here:

http://openjdk.java.net/jeps/295

Basically, Java AOT is a half-assed solution like so many other Java solutions, e.g. Java GUIs, Java build tools, Java generics, etc.


Sure, because fully thought-out solutions like Python 3 do wonders to keep the community united.

Either one replaces the engines in mid-flight, or one parks the plane waiting for the perfectly thought-out solution.


> US Army, US Navy, French Army, IBM real time deployments, factory automations, for your enlightenment.

Sounds exactly like the kind of environments where all kinds of horrible crap get used. Doubly so with the mention of IBM.


I consider real-time JVMs with AOT compilation something worth using.

I just mentioned those, because they have Java code in production, with soft-real time deadlines for weapon targeting systems of a few milliseconds.

Usually the kind of stuff that gets mentioned that GC enabled languages aren't capable of.

It is all a matter of budget, and the army has lots of it, which incidentally is what allowed boring stuff like the Internet to exist.


>I consider real-time JVMs with AOT compilation something worthwhile using.

Sure, I don't have a beef with either (although someone could go non-JVM to get either AOT and/or real-time guarantees in a better form perhaps).

But I don't think "the X and Y army uses it" serves as proof that a technology is mature.

It's not like the army doesn't have all kinds of non-critical applications, and all kinds of legacy crap and modern crap apps lying around, doing some thing or another.

It's like saying "Facebook uses our app X" as proof of maturity, when the use could be in some horrible experimental project, that Facebook tried and semi-abandoned, or is the product of some engineers "20% time" stuff...


Well, if you want the actual production use cases, here are some of them:

French radar system for ballistic missile tracking and measurement.

http://www.militaryaerospace.com/articles/2009/03/thales-cho...

Aegis Weapon naval defence system, deployed across US Navy cruisers

http://www.spacewar.com/reports/Lockheed_Martin_Selects_Aoni...

USS Bunker Hill ballistic missile defence system weapons control

http://www.militaryaerospace.com/articles/2010/04/aonix-perc...


> if doing GPL stuff use the open source license from ExcelsiorJET.

Thanks for mentioning our product, but I have to make a few corrections:

First, the Excelsior JET Runtime license is sadly not GPL-compatible. Also, you cannot use it to target embedded systems, unless you buy Excelsior JET Embedded and pay royalties, though we are currently weighing the option to switch to OpenJDK to eliminate the latter.

Second, the Standard Edition is free even for commercial use (the above limitations apply).

Finally, other editions are available at no cost for use in public non-commercial projects. It does not matter whether the (software part of the) project is open source or not.

See https://www.excelsiorjet.com/free for details.


Thanks for the correction, I was on mobile and was writing from memory.


I turned my private website (not the one listed on my HN profile) into a single binary doing something similar to what OP is doing, except I didn't know about include_str!, only include!, so in my build.rs I first read in my static files and then I write them out as byte array declarations into intermediate Rust files which I then include! in my src/main.rs.
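
A rough sketch of that build.rs code-generation approach (directory name and identifier scheme are illustrative, not the code from the gist linked below):

    // build.rs
    use std::{env, fs, path::Path};

    fn main() {
        let out_dir = env::var("OUT_DIR").unwrap();
        let mut code = String::new();
        for entry in fs::read_dir("static").unwrap() {
            let path = entry.unwrap().path();
            // Turn e.g. "favicon.ico" into an identifier like FAVICON_ICO.
            let ident = path
                .file_name().unwrap().to_str().unwrap()
                .replace('.', "_").replace('-', "_")
                .to_uppercase();
            // include_bytes! resolves paths relative to the generated file,
            // so emit an absolute path.
            code.push_str(&format!(
                "pub static {}: &[u8] = include_bytes!({:?});\n",
                ident,
                fs::canonicalize(&path).unwrap()
            ));
        }
        fs::write(Path::new(&out_dir).join("assets.rs"), code).unwrap();
        println!("cargo:rerun-if-changed=static");
    }

The generated file can then be pulled into src/main.rs with include!(concat!(env!("OUT_DIR"), "/assets.rs")).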

Thanks to this short blog post by OP I learned something that will let me simplify some of my code.

However the thing that I am doing is not completely useless, because I have been able to embed fonts, favicon and images as well with the method that I am using.

The code I have written for this is only the beginning but if anyone wants to look at my code in order to see how I did it here you go:

https://gist.github.com/ctsrc/4c4cc05254d12bbc8937a0ea385fcd...

It's far from great but it's a start. Let me know how you would do it better :)

Speaking of Rust and websites, in Python I have fallen in love with the pyTenjin templating engine. http://www.kuwata-lab.com/tenjin/pytenjin-users-guide.html. Does anyone know of a templating engine like pyTenjin but for Rust?


> However the thing that I am doing is not completely useless, because I have been able to embed fonts, favicon and images as well with the method that I am using.

FWIW there's also an include_bytes! macro to embed binary data right into the executable without going through a separate rustification step.


I actually already use the include_bytes! macro within the build.rs. Looking at the source, I see that I create arrays of arrays of bytes, not just arrays of bytes, as opposed to what I originally said. It's been a few months since I wrote that code, so I hope I'll be forgiven for not remembering such details ;)

My eventual plan that I remember now was to make it so that I could reference static files (CSS, JS, icons and images, fonts) in my static HTML files and HTML templates, and build.rs would automatically include the bytes of those files.


I have now rewritten my code so that I include bytes directly in my src/main.rs like so:

    #[get("/fonts/<fname>")]
    fn fonts<'a> (fname: String) -> Result<Content<Vec<u8>>, NotFound<String>>
    {
        match fname.as_ref()
        {
            "autobahn.woff" => Ok(Content(ContentType::WOFF,
                include_bytes!("../static/fonts/Autobahn/autobahn.woff").to_vec())),

            "autobahn.ttf" => Ok(Content(ContentType::TTF,
                include_bytes!("../static/fonts/Autobahn/autobahn.ttf").to_vec())),

            "grobe-deutschmeister.woff" => Ok(Content(ContentType::WOFF,
                include_bytes!("../static/fonts/Grobe-Deutschmeister/grobe-deutschmeister.woff").to_vec())),

            "grobe-deutschmeister.ttf" => Ok(Content(ContentType::TTF,
                include_bytes!("../static/fonts/Grobe-Deutschmeister/grobe-deutschmeister.ttf").to_vec())),

            _ => Err(NotFound(format!("No such file: /fonts/{}", fname)))
        }
    }
In the meantime, since I first wrote my original code, the web framework I was using, Iron, has become unmaintained and they now urge people to pick a different framework. I chose Rocket [1] because it seemed to have the features I want and the documentation seemed good enough to get started (and it was).

For templating I found Askama [2], which I am using from git master in order to be able to use it together with Rocket. https://github.com/djc/askama/issues/71

Eventually I will generate the routes for the included bytes with build.rs again. My new code is better both short-term and for when I will create said build.rs.

Thank you masklinn for your comment about include_bytes! which made me look at my code again and rewrite it.

[1]: https://rocket.rs/

[2]: https://docs.rs/crate/askama/
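
For reference, basic Askama usage looks roughly like this (template name and fields are made up):

    use askama::Template;

    #[derive(Template)]
    #[template(path = "index.html")] // resolved under the crate's templates/ directory
    struct IndexTemplate<'a> {
        title: &'a str,
    }

    fn render_index() -> String {
        IndexTemplate { title: "Hello" }.render().unwrap()
    }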


You can do this in pretty much any language. Back in the early 2000s, people used to take the Perl source code and replace the main loop (you know, the part that actually opens files and reads in your "perl scripts") with the text of a cut-and-pasted Perl script. So when the executable ran, instead of opening a file, it simply ran the Perl code that was nothing but a (giant) string within the Perl interpreter executable. Nothing stopping you from including the code from a Perl HTTP server or other Perl framework instead. I think cPanel did something like this for many years, as their toolset was all Perl but distributed as a single binary file.


Can of course still do this with FatPacker: https://metacpan.org/pod/App::FatPacker

But AFAIK it only supports pure-Perl code, so one is SOL for most database drivers etc...


PAR[1] is what you're looking for.

It supports loading XS modules by overriding DynaLoader bootstrapping methods; it writes shared object file to a temporary file at the time it is needed.

1: https://metacpan.org/pod/PAR


Awesome, thanks for the pointer! CPAN still has lots of tricks up its sleeve, eh :)


There will apparently never be a replacement for Perl :)


Yea I haven't used fatpacker but I've heard it is only for "pure perl" and can't be used with anything that drops down to XS or C.


If this is interesting to you, I also suggest checking out my Rust crate bui-backend[0]. This is a library for building Browser User Interface (BUI) backends. The key idea is a datastore which is synced between the backend and the browser. The demo has several example frontends. The most interesting is perhaps the yew[1] framework, which is somewhat like React but in Rust. This lets you share code directly between the backend (natively compiled) and frontend (compiled to wasm). There is also a pure JavaScript frontend, an Elm frontend, and a Rust stdweb frontend in the demo.

0 - http://github.com/astraw/bui-backend 1 - https://github.com/DenisKolodin/yew


I don't see the point of this. Why not use nginx which comes with startup scripts and point it to the site's directory that includes the bundle/index files?

If the single binary just serves static files, you're better off with a web server that supports HTTPS, rules, vhosts, etc., and is battle-tested. Not to mention that this binary you created will need startup scripts of some sort, depending on the platform.


Ultra portability. And there shouldn't be any startup scripts, however, the binary does need to be compiled for the target platform.

I'm pretty sure the use case of this is more akin to an electron replacement, and not a production web server. So the user would just download and run the binary, and interact with the app from their browser.


I don't see the point of this either. I'm aware this will cost me a lot of hard-earned karma, but I feel I have to say this.

Strange how some people are obsessed with hype. I think what raised interest in this article was just the word "Rust" in the title. Rust seems to be the wet dream of many developers. They heard it's safe, so it will magically solve all their problems. But the usage of Rust in this case is meaningless. You could do the same with Python, Node.js, Java, plain nginx, or a million other (and better) ways. Just the first Google query returned a probably better solution if you like a single static binary: http://miniweb.sourceforge.net/. It would be two files instead of one: the miniweb binary and the static site compressed as 7z.

How does this thing scale? What about SSL? Do I have to put a reverse SSL-terminating proxy in front of it? Yes, you say? OK, why not just use that proxy to serve those static files too (nginx does that) and skip this thing completely? What about performance? Have you tried httperf on it?

I would appreciate it if this did more than just being a single binary -- like Facebook's HipHop compiler from PHP to a static executable or something like that. Sorry, but this is real bullshit.


You're bashing it based on some assumption that it's being heralded as an amazing piece of tech. It's just a static binary. If you want to argue the merit of static binaries then, by all means. I personally love them, it's one of the reasons why I like Go so much. I'm sure there's a ton of people who like static binaries, and who dislike them.

There's nothing else to see here, you either like dependency-less binaries or not.

Hell, if anything, I don't see the point of your point, calling static binaries "real bullshit". Depending on the scenario static binaries are either better for you, or worse for you - it's just a tool in a bag of tools. How can that possibly be bullshit? Is a wrench bullshit?


I understand that a statically bundled site as a Rust executable is not for me. However, I'm asking who it is for, especially considering my concerns about SSL and performance.

EDIT: I'm not against static binaries per se, only against this usage of them.


I've actually used static bundles (including css/etc) for a lot of internal applications, both at home and at work.

Basically anywhere where reducing difficulty of deployment and asset management outweighs the valid concerns you pointed out. It's really, really handy in my view.

Of course, I've got no plans or desire to run some production documentation site on this. Yet, that problem domain is vastly different than internal tooling, OSS apps, etc.

Just think of a note web app you write. Do you want your users to have to manage css/template/image bundles just to use your note app? Why not just make the binary work with zero configuration/management? Plus, if you wanted - you can of course support both, allowing the user to override css files if they so desire, without having to recompile the binary/etc.


If I theoretically had that need, I would first go with Docker, and if hardware is limited, then maybe this thing I just googled as part of researching the topic:

http://miniweb.sourceforge.net/. It would be two files: a 20k server binary and a 7z with the site resources. It would save me the trouble of recompiling the binary.


How about a web interface on an embedded device?


As widely used/loved as nginx seems to be, I couldn't for the life of me find any good resources that hand hold a bit more than just reading the docs and hoping for the best.

Know any good nginx books or courses?


Back in the day, Shopify distributed a standalone version of their store server so people could write themes. It was pretty neat. I could see this being used for something similar.


I agree with what you're saying in regards to simply using nginx to handle web requests. I don't agree that there needs to be a point to this to do it.


You're absolutely right, though I assume using Apache/nginx with a reverse proxy is pretty standard for this type of scenario. Having a single static binary makes it convenient for developing, testing, and deploying.


Have done several sites in this exact fashion, except coded in Nim, not Rust. A reasonably sized, musl-linked, absolutely standalone executable with SQLite baked in for data storage. That's two files all told - the program and the data. Super simple deployment, I usually stuff the thing behind a Caddy server these days. Never let me down so far.


I've never used Rust before, but have been meaning to look into it and play around some. OP mentions the include_str! function (maybe it's a function; I'm not sure what '!' signifies here) and it blew my mind.

I'm betting that almost every language I've used has something like this, but the succinctness of it, and how it's used to keep the source clean while still compiling in the text, is really cool.


Normally webservers serve files with the sendfile system call for zero copy IO. If the string is statically included in the binary then it first has to copy it into kernel space to send the data. So this is actually less efficient.

In theory this could be re-optimized with vmsplice, but I wonder whether that webserver actually does it because vmsplice's lifetime requirements are... complex (although a &'static str would meet those requirements). Or the binary could sendfile itself if it knew the position of the string in its binary image, but that information is not preserved by a &str.


> Normally webservers serve files with the sendfile system call for zero copy IO. If the string is statically included in the binary then it first has to copy it into kernel space to send the data. So this is actually less efficient.

This doesn't sound right to me - sendfile's "zero copy" is about eliminating the extra copy when you read() a file into a userspace buffer and then write() that file into a socket. If you have a memory-mapped file (which is what your executable itself is), and you pass an address inside that mapped region to write(), I would hope that the kernel will just access the buffers of that file directly - i.e., the fact that it's memory mapped means there is no separate copy of the file, there's a virtual page that references the single kernel buffer for that file. So at that point write() should behave just like sendfile().

If you need to process the file in the kernel itself, there will still be a copy, but you'd have that in either case. If you don't, I would again hope that both write() from a memory-mapped region and sendfile() know how to DMA the file from the disk to the network card.

I am not confident about this and it seems worth benchmarking. (Also, your production server probably uses HTTPS, at which point this is all irrelevant if your TLS encryption happens in userspace. If you're using in-kernel TLS, then you're definitely not doing DMA, and I would strongly hope that sendfile() and write() from a mapped buffer involve the same number of copies - one read from disk, at which point the kernel encrypts it into a writable buffer.)


Putting the string in the binary is really only useful for trying to do ultra-portable deployments like this imo. The overhead of copying the string into memory is pretty meaningless in the use case where you only expect a single, home-desktop user.

Now if this were a production server having to service thousands of clients, then the sendfile optimization becomes much more important.


Even in production though, you can just have a step on startup which copies built-in resources to a /tmp location and then sendfile's from there.
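
A sketch of that startup step, with the path and asset as placeholders:

    use std::io::Write;

    // Write the embedded asset out once at startup; whatever serves it can
    // then use sendfile-style zero-copy IO on a real file.
    fn materialize_bundle() -> std::io::Result<std::path::PathBuf> {
        let path = std::env::temp_dir().join("bundle.js");
        let mut file = std::fs::File::create(&path)?;
        file.write_all(include_bytes!("../ui/dist/bundle.js"))?;
        Ok(path)
    }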


> Normally webservers serve files with the sendfile system call for zero copy IO

Not if they are doing their own tls encryption which hopefully one day is the new normal.


I know what you mean! I first learned about `include_str!` more than a year ago and I've wanted to try this ever since.

The '!' means `include_str!` is a macro in Rust, similar to a C macro in function, but Rust has a much better system for macros and ships with many useful ones.


Others have mentioned that include_str! is a macro, which I suppose it is. But if you look at the source it says that it is a compiler built-in[0]. If you dig into that, you'll see that it happens inside the compiler[1] (well duh). I feel that that makes it slightly different than a normal macro in that it doesn't expand to a bunch of Rust code. I found that pretty neat.

[0] - https://github.com/rust-lang/rust/blob/master/src/libcore/ma... [1] - https://github.com/rust-lang/rust/blob/e8af0f4c1f121263e55da...


The ! indicates the object you are "calling" is a macro, which essentially replaces that call with the equivalent syntax which is produced by applying the macro.

In addition to the older hygienic macros (which have their own syntax to define them), Rust also supports proc_macros, which are actual code executed by the compiler and can preprocess a struct or function.

For more information, see the Rust Book (second edition) section on macros:

https://doc.rust-lang.org/book/second-edition/appendix-04-ma...


The ! indicates that it is a macro, so include_str! is run at compile time.


It's a macro.


There's a small typo in the article in /ui/dist/index.html - the script source should be '/bundle.js', not '/main.js'. I think the git repo corrected this error.


Thanks, fixed!


I do similar, except I just deploy to AWS as a static website. Super duper minimal everything.


If you only have static files, might as well go with Netlify. Can’t beat them for simplicity.


The tutorial stops where it starts to get interesting, actually implementing any logic with the backend other than serving static files. It would have been interesting to follow through a database transaction or see some data / image manipulation.


I actually agree. I extracted this tutorial from a project I'm working on developing an RTS game for school. I'm experimenting with having the game loop run in Rust and seeing if I can send render updates fast enough to smoothly run the UI in JS. It's working pretty well so far. I'm using Protobuf 3. I'd love to get capnproto or flatbuffers working eventually, but neither of them seem to have both JS and Rust support mature yet. I think the next obvious evolution from this tutorial would be to connect a sqlite db. Not sure how easy statically linking with it is though.


Are you talking about statically linking the database itself? Seems like it'd be better to keep the database separate from the binary, unless you're working with a read-only database.

For static-linking the sqlite3 library, if you're using rusqlite you can just specify the bundled feature in Cargo.toml.

    [dependencies.rusqlite]
    version = "your-version-number"
    features = ["bundled"]


Yeah I meant statically linking the sqlite library. Thanks for the tip on how to do it


I have such a thing in Haskell: https://github.com/shapr/sporkle/blob/master/app/Main.hs

It's not Rust, but it is a really crappy exercise tracking app I wrote for teaching a class.

The things different about my code: I build the HTML in the source itself, and I don't use React.


If you want to go super-minimal at the cost of being more dependent on Amazon, you can deploy the static website on S3 and use Route53 as an HTTP frontend.


Last week I learned that if you configure your bucket name the same as your domain, then a simple CNAME record will work without Route53.


I don't think you can make SSL work on AWS without Route53, though. Which is (for better or worse) about to become a big problem when Google starts penalising non-SSL sites. You also can't make a AAAA record, so no IPv6.


Put Cloudflare in front for SSL. Not sure if that makes the solution more or less complex, never worked with Route53.


As of two years ago, there was a bug with mobile Safari and hash-fragment preservation, which actually made a lot of client-side JS code very difficult to use directly from S3.


This is what I do on Google Cloud, presumably the same. Cloudflare for SSL, Google Cloud DNS, then a Google bucket. Real simple, real fast.


Yep, this is what I do for my site. 51 cents a month.


Yes. Here is a guide to deploy a React app on S3 and CloudFront (with a backend on EC2): https://stackoverflow.com/questions/41250087/how-to-deploy-a...


Inline JS and css? My first thought was that I'd rather leave the Rust part out and do what you suggest in order to deploy to static file cloud hosts.


And you can scale this so well; we do something like this with every branch/user story to test it on live environments. Super easy, once set up.


I've done this same thing before with Node using [nexe](https://github.com/nexe/nexe) and musl.

With a little bit of effort it's also possible to build and statically link native modules, eg I've had success with nodegit, uws and sqlite. (Needed to rebuild some of the node deps with musl, then rebuilding the native modules with musl, then modifying Node's node.gyp to --whole-archive include the additional .a/.o's (done in nexe), and using rollup to rewrite the .node includes to `process._linkedBinding('the registered module name')`, and then nexe builds the whole lot into a single executable.)

Nexe supports bundling resources (although less cleanly than `include_str` imo) so it can also bundle all your ui resources as well.


Fossil (http://fossil-scm.org/) has been doing something very similar for a while in plain C: it ships statically linked and includes a CLI and a web UI (not in React, however).


I love these things. I made my resume page by piping together two small Rust programs and a bash script to get a single HTML file, from a JSON in Firebase, a Handlebars template, and several static assets.


I really like this guide to get started.

It's a bit short though. I would really like to see the setup of a simple uni- or even bi-directional RPC or value-binding mechanism between React and Rust.


The project that I extracted this from actually uses a bi-directional websocket with proto3. It's just a simple message system and doesn't have binding. The code isn't great or documented but you can check it out here: https://github.com/anderspitman/battle_beetles


Thx! Nice.

Maybe a JSON API over a websocket on the same port as a transport would also be possible? Protobuf seems a bit overkill for something that is so tightly integrated, IMO. But if it works, it works.


The use of protobuf is mostly to provide a schema and enforce strict[er] typing on JS.


Cool small example. It's helpful to have the actual cli commands on hand.

There's just a small typo. The script path in the HTML mount file and the router path need to be the same, but do not match. Either one needs to change:

    // ui/dist/index.html

    <script src="/main.js"></script>
    


    // src/main.rs

    (GET) ["/bundle.js"] => {
        Response::text(bundle)
    },


What I really need is a tool to generate blog-post code snippets from actual code ;) Thanks, fixed!


How hard would it be to use something like this to deploy directly onto a hypervisor, like with Erlang on Xen?


Not sure about on a hypervisor, but if you're using Docker, you could do something like:

    FROM scratch
    COPY ./bin/app /app
    ENTRYPOINT ["/app"]

which will make a barebones image with no OS cruft. You need to make sure your binary _is_ truly statically linked though. The key being the inclusion of musl libc in the OP's build.

I've tried this with golang and you need to pass some odd flags I can't remember off the top of my head to get it to not dynamically link gnu libc, but once working you have container images that are incredibly small.

I'd imagine with a minimal container host like CoreOS you get largely the same effect as running Erlang on Xen.


Quite hard. I don't think any of the Rust unikernel projects have really gotten off the ground enough to even serve "hello world" web requests, and there have been several attempts.


I had a little Rust site running on rumprun a year or two ago. Not exactly the same, but same idea.


One reason this is nice is that certain build systems make it easy to copy one file or variable into a repository, so this might be easier than pushing a container to a private repository, which can be somewhat error-prone to configure. However, personally I might still use Docker with this :)


To simplify statically cross compiling Rust code for different target platforms, including musl and openssl support, check out https://github.com/messense/rust-musl-cross


Was really hoping this would include Windows and MacOS support somehow, but it looks useful even without them.


For me, the Rust syntax is hard to look at and read. I much prefer Go which reads like C, Perl, PHP, JS or many other common languages.

I'm wondering if people who prefer Python prefer Rust? I've been developing for 20+ years and the syntax alone makes Rust feel prohibitive to me.


To be fair, this sample of code includes macro usage (every function call that ends in an exclamation point is actually a macro). The syntax inside the macros is different from normal, legal rust syntax.

Still, I agree that Rust's syntax approaches C++ level of complexity at times. I find that writing actual statements and expressions is simple enough, but writing structs, lambdas, and function definitions requires knowing how to specify types, type bounds (if using generics), lifetimes (if using references), and Fully Qualified Syntax (for referencing types/items in other modules). And that's before actually having to come to terms with the borrow checker.

For what it's worth, once you get over the initial learning curve the cognitive burden goes down. Rust's syntax, to me, is actually visually distinctive enough that it's easy to parse out types, expressions, declarations, etc. from just a quick glance.


I find it has a lot of similarities with the C-family of languages. I'd say more with C++ though (generics, namespaces etc).


How does go look more like C than Rust?


I prefer Rust because it has an actual type system capable of allowing me to express constraints and relationships.

I highly doubt it's the syntax that's the issue. You're probably not used to caring about lifetimes and ownership. It's a different paradigm.


Tangentially related, but I've used https://github.com/mjibson/esc before and liked it. I found it was a bit simpler than https://github.com/GeertJohan/go.rice.


No other language forces you to think about ownership and lifetimes the way Rust does. I find the syntax really nice, in the sense that I always know what it is saying. The hard part is not understanding the words, but grasping the meaning. That's IMO what makes Rust harder than other languages that rely on fat runtimes.


I still find Rust's syntax relatively nice, and I love Python. A "systems" language with a syntax similar to Python's is https://nim-lang.org/


Anecdata: Not a fan of Python's syntax, but Rust looks readable to me.


You're not alone. I've tried a few times to dive into rust, but just find something about the syntax jarring. It's not so much bad, it's just unpleasant to look at for me. I'm sure it's a great language, but not for me for that reason.


Is there something like "50% statically-linked"? I'm trying to understand if static linking can be partial. I often see it done either all-in or not at all.


Remember that ultimately, you link multiple things in, and each can be linked dynamically or statically. So, a "100% statically linked" binary would have every library statically linked, a "100% dynamically linked" binary would have every library dynamically linked, with all the ranges in between.

This comes up with Rust due to what's in the article:

> This part can be skipped if you don’t need 100% static linking. Rust statically links most libraries by default anyway, except for things like libc.

Rust statically links Rust code by default, but has a dynamic link to libc. So Rust binaries are "mostly statically linked" for this reason. MUSL lets you link that final dependency, getting up to 100%.
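
For concreteness, the usual commands to get that fully static musl build on Linux (assuming an x86_64 target):

    rustup target add x86_64-unknown-linux-musl
    cargo build --release --target x86_64-unknown-linux-musl

Running ldd on the resulting binary should report that it is not a dynamic executable.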


Technically, no, but in practice, yes. For example, default static linking does not include the base C libraries that are pretty much available in every OS. You have to explicitly tell the linker to also link those in.


Sure, it means 50% of its dependencies are linked statically.


I wanted to use Rust for web development for a while but always struggled with just getting an OAuth2 example running in the past; has this improved recently?


Everything is always improving. Unless you let us know what you had issues with, no one can tell you if it's been made easier.

For instance, are these issues with rust you're having, or with a particular framework? Without any information, it's a meaningless question.


If this project discussed here is using React and Rust together, could I just use a React example for OAuth2 to get started? Up until some months ago, there were no examples of using any of the Rust web frameworks with OAuth2.


Can it pass C10k?


Or, you know... Python.


How do you make a statically-linked, single-file web app with python?


With something like py2exe.


Ha, Python is probably one of the hardest to distribute languages out there. Hence the creation of virtualenv, grumpy, py2exe. None are proper solutions.


What if you don't like Python?


It's very rare to only run one HTTP-based server app (port 80). So you probably want to put a proxy (like nginx) in front of it, but then you could just let it serve the files.


(misunderstood)


The JavaScript file is actually added to the Rust binary at compile time as a static string.


If you want the same thing only with C# instead of Rust you can use https://github.com/ElectronNET/Electron.NET


That seems to be a wrapper around Electron, the article talks about building a web server with all of the assets bundled inside the binary as strings.



