The benchmark result really confused me. But you'll find that the `actix` actors are extraordinarily lightweight and highly optimized around the equally lightweight and highly optimized Futures. The design is hard to beat, from a performance standpoint.
Also, do you have a list of speeds for the frameworks you've tested?
There are a lot though: https://github.com/flosse/rust-web-framework-comparison#serv...
Nice to see the front-page example using `impl`, the most recent improvement in ergonomics. Before 1.26 things would have looked different. This makes me more appreciative of the Rust team's and community's efforts to improve ease of use.
It allows functions to return an object that provides a certain interface without specifying the actual type of that object. This was only previously true by wrapping it in a 'box', which meant a heap allocation and dynamic dispatch. The 'impl trait' provides static dispatch and no other overhead, so produces equivalent code to returning the type directly, but with all the abstraction flexibility that you want.
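As a minimal sketch of the feature (the function name here is made up for illustration):

```rust
use std::fmt::Display;

// Returns "some type implementing Display" without naming the concrete type.
// The compiler still knows the type statically, so there is no boxing and
// no dynamic dispatch.
fn make_greeting(name: &str) -> impl Display {
    format!("Hello, {}!", name)
}

fn main() {
    let greeting = make_greeting("world");
    println!("{}", greeting);
}
```

The caller can only use the `Display` interface, but the generated code is the same as if the function had returned `String` directly.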
I wrote a post about it. Let me know if it helps.
I've seen georust, and it's on my backlog. There's a few functions that I saw missing (that I actively use). With mentorship, I'd love to contribute them. I'll open some issues and introduce myself in the coming days.
The TechEmpower benchmarks don't account for memory usage or startup time. Maybe this has something to do with them doing JVM consulting? :)
I have a 64GB RAM, 12 core server, and the JVM services take up about 25% of resident RAM (ignoring Kafka and other Java stuff).
I've wanted to learn Rust for a while, so I recently bit the bullet. I use gRPC everywhere (Dart/Flutter, Node, JVM, Python), so I decided to start by rewriting some small services in Rust using gRPC for comms.
For now, I've taken a Java service that used 500MB at peak down to under 10MB of RAM. I'm planning on eating into the big stuff over the coming months.
It's not making money, so it's a "hobby" yes. Consulting's paying the bills so I don't mind at this point.
E.g. here's a URL shortener gRPC server that runs on NodeJS, and a Rust client that can shorten URLs and get results back.
FYI: https://nevi.me/ errors out with a 502.
Regarding the RAM, I'll see how far I get with moving some services to Rust in the coming months, and what benefit I derive in the long run.
It's a learning experience. Right now, I can't port everything to Rust, because each platform has its advantages. Rust is still missing a lot of libs that exist in NPM and Maven, so it makes it difficult.
I've been exploring exposing some functions that exist in Java/Kotlin through gRPC (as everything runs on the same network anyways) until it's available on Rust.
I hope to blog about my experiences in the coming days/weeks.
(I might add that this is a question I've had for a while, and I did not check the source in detail today.)
I have not checked recently, but last I saw, the database libraries for Rust did not use async IO. Looking at (what I presume is) the code for the benchmark, it seems it imports the postgres and diesel crates. Last I heard, Diesel did not support async, and looking at the postgres crate, it does not mention async, which I assume it would if it were supported.
My whole point was that, sure, I can see how async IO is important for handling many concurrent HTTP requests, but each of those requests would still have to pass through the synchronous database driver, which uses threadpooling, right? Or what am I missing here? I can see how it has great performance on the plaintext and JSON benchmarks, but I don't understand what gives it such a large boost in fortunes or multiple queries.
For example, Iron does 300k on the plaintext/JSON benchmarks, but drops to 18k on fortunes, and as I remember it, the benchmark code is written in a fairly straightforward way. If the database layer supported 160k requests per second, I don't see why we would see such a huge drop. (Edit: 160k is the performance of Actix on fortunes.)
I also recall seeing numbers on the 10k order of magnitude from naive benchmarks of the various database libraries, without any HTTP part to the application. But I'm not sure; maybe I'm missing something or remembering incorrectly?
I actually forgot the DB tests were implemented; when I was sending in PRs, I was mostly thinking about plaintext and JSON, not database stuff. Sorry about that!
> Technically, sync actors are worker style actors. Multiple sync actors can be run in parallel and process messages from same queue. Sync actors work in mpsc mode.
So, you're still getting some degree of parallelism here. I wonder if that's it?
(You're right about the fact that the DB APIs are currently synchronous.)
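The worker-style "sync actors in mpsc mode" idea can be sketched with plain threads and a shared queue (everything here, including `double_all` and the doubling job standing in for a blocking database call, is illustrative and not an actix API):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Several worker threads pull jobs from one shared queue, so blocking work
// (like a synchronous DB query) still runs in parallel across workers.
fn double_all(jobs: Vec<u32>, workers: usize) -> u32 {
    let (tx, rx) = mpsc::channel::<u32>();
    let rx = Arc::new(Mutex::new(rx));
    let (done_tx, done_rx) = mpsc::channel();

    for _ in 0..workers {
        let rx = Arc::clone(&rx);
        let done = done_tx.clone();
        thread::spawn(move || loop {
            // Hold the lock only long enough to pop the next job.
            let job = match rx.lock().unwrap().recv() {
                Ok(j) => j,
                Err(_) => break, // queue closed, worker exits
            };
            // Stand-in for a blocking database query.
            let _ = done.send(job * 2);
        });
    }
    drop(done_tx);

    for j in jobs {
        tx.send(j).unwrap();
    }
    drop(tx);

    done_rx.iter().sum()
}

fn main() {
    // 2 * (0 + 1 + ... + 7) = 56
    assert_eq!(double_all((0..8).collect(), 4), 56);
}
```

With N workers, up to N synchronous database calls can be in flight at once, which may be part of why the fortunes numbers don't collapse the way a single blocking driver would suggest.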
However, Diesel uses `libpq` rather than `postgres` (and its child crate `tokio-postgres`). Moving from `libpq` to `rust-postgres` would be quite disruptive.
Looking at Rust's strengths, and given that MSFT has languages/compilers of its own, the use case is probably "mission-critical component that also needs to be blazing fast". But for now we're guessing.
fafhrd91 appears to be the primary author.
Everyone is saying great things about it, but I just want to point out that it's not for everyone.
From looking at the two, Actix is more minimalist with direct control, though it has some nice middleware built in. Rocket is a more Rails-like “everything works and is magic” approach. Both seem like they could be great but it depends on your use case.
So as a result I've opted to use D instead for my current project. I've been able to figure more out with Vibe.d in a shorter time, so I'll likely stick with D for this project, since time is a factor working against me.
Actix-web was almost a drop-in replacement.
Anyone have any insight into how Actix and Rocket compare? I'm interested mostly in ergonomics and safety.
I'm not experienced with web services, and my project was very limited and for learning purposes, but here's my takeaway:
I like both, but for me Rocket was far more ergonomic for creating routes: you deal with them as if they're simple functions, and the input and output are handled automatically (from the request and to the response).
Actix's advantage is actors and that it's easy to be fully async. I had some issues dealing with it, but most of my trouble was in extracting request data and building responses.
Once actix-web supports the same magic as Rocket (when proc-macros become stable), it will have the edge, unless Rocket becomes async-ready and stable-compatible before then.
For both, it's only a question of missing stable Rust features, and actix-web is already running on stable.
- stupidly easy
Other than Rocket being nightly-only, the other reason I switched to Actix was that Rocket doesn't have the ability to respond to requests directly from the middleware layer: you can only modify the response, not return early. This is pretty important for CORS and for catching all OPTIONS requests. There are a few workarounds, of course, but all of them felt hacky or verbose.
I have no complaints with Actix yet.
From a performance perspective, Actix is faster than Rocket under any type of load.
The actix library is fairly nice, though there are still a few cases where the use of globals and/or statics introduces problems.
One other thing that wasn't really apparent, but would have made my life easier, is a way to use an actor to handle a request, so I could have access to a context for thread-related activities (e.g. tokio handles). It feels wrong to just use Arbiter::handle there, especially for testable code.
I'm excited to use it again in my next service.
Actix does pretty well too.
This is an aside, and I sincerely apologize for that, but I am compelled...
I'm reading the greeting/hello-world example on this nice site and I notice unwrap_or(). That is a poor name: can it panic, as the "unwrap" part suggests (I have just enough Rust to know that), or can it not, as the "or" part suggests? The name is inherently ambiguous!
It's as if the .unwrap() that is festooned throughout such example Rust code has become so ubiquitous that someone felt it had to be used and so tacked on "_or". Why couldn't it just be .or() or perhaps .default()?
And so I investigate and things go rapidly downhill from there. Consider:
Again I'm sorry; this is clearly off topic, probably badly naive and possibly inappropriate in a few other ways to which I'm pathetically oblivious. I couldn't help myself.
unwrap_or: produce this value instead
unwrap_or_else: produce a value by running this closure instead
unwrap_or_default: produce a default value instead
In this case, the "container" is `Option`, but similar types, like `Result`, have the same methods.
> Why couldn't it just be .or() or perhaps .default()?
`or` returns `Option<T>`; `unwrap_or` returns `T`. And `.default()` already returns the default value for a `T`.
TL;DR: you have a lot of options (pun intended, sorry!) with what to do, but the names all follow a quite regular scheme.
One way to look at them is "unwrap", "or" and "or_else" are building blocks that have a common meaning across the different examples:
- unwrap: returns a plain T
- or: the left-hand side, unless it is a "failure", then use the argument value
- or_else: the left-hand side, unless it is a "failure", then use the argument function to create the value
For suffixes, "else" means executable code (a closure).
> > It's as if the .unwrap() that is festooned throughout such example Rust code has become so ubiquitous that someone felt it had to be used and so tacked on "_or". Why couldn't it just be .or() or perhaps .default()?
Because Rust doesn't have function overloading and thus you'd be missing most of the cases?
Or more generally: `Wrapper<T> -> T` versus `Wrapper<T> -> Wrapper<T>`.
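The distinction is easy to see in a few lines (a sketch using `Option<i32>`):

```rust
fn main() {
    let some: Option<i32> = Some(5);
    let none: Option<i32> = None;

    // `or` keeps the wrapper: Option<T> -> Option<T>
    assert_eq!(none.or(Some(2)), Some(2));
    assert_eq!(some.or(Some(2)), Some(5));

    // `unwrap_or` removes the wrapper: Option<T> -> T
    assert_eq!(none.unwrap_or(2), 2);
    assert_eq!(some.unwrap_or(2), 5);

    // `unwrap_or_else` takes a closure, only run in the None case
    assert_eq!(none.unwrap_or_else(|| 2), 2);

    // `unwrap_or_default` uses Default::default() (0 for i32)
    assert_eq!(none.unwrap_or_default(), 0);
}
```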
That is the insight I needed. Thank you.
Rust seems novel in that, despite having powerful abstractions and a rigorous type system, it does not support overloaded functions. I gather from some of the discussions about it that this design decision greatly simplifies the compiler implementation; supporting function overloading would mean answering several very tough questions, such as whether a function can be overloaded on argument lifetime.
So the Rust standard library has established conventions (in this case "unwrap", "or", and "else", combined in various ways) to deal with the permutations that naturally emerge given the lack of function overloading. It's important to understand and internalize these conventions, particularly when designing interfaces for use by others.
Neither Haskell nor ML (incl. its children) have function overloading.
As a general observation, Actix-web pulls in a lot of dependencies at install and compile time. Are all of those dependencies really necessary for a hello-world scenario?
Being a Rust newbie, I thought maybe I was using the wrong tool and started to look at hyper instead.
- Compile with --release.
- Before distribution, enable LTO and strip the binary.
- If your program is not memory-intensive, use the system allocator (assuming nightly).
- You may be able to use the optimization level s/z in the future as well.
- I didn’t mention this because it doesn’t improve such a small program, but you can also try UPX and other executable compressors if you are working with a much larger application.
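Putting the list above together, assuming a standard Cargo project, the release settings might look something like this (the commented-out `opt-level` line is the forward-looking s/z option mentioned above):

```toml
# Cargo.toml: settings for a smaller release binary
[profile.release]
lto = true          # link-time optimization: smaller, often faster binaries
# opt-level = "s"   # optimize for size; "s"/"z" need a newer toolchain
```

Then build with `cargo build --release` and run `strip` on the resulting binary before distribution.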
So, two releases :)
I guess I was tangentially pointing to complexity and abstraction there.
I've only played around with Rust a bit, but IMO, as a general rule: don't judge back-end frameworks by the size of the deliverables, unless we're talking about something ridiculous (5GB). It's extremely superficial and has a very low correlation with the quality of the actual tool.
See: "Why are Rust executables so huge"
To give a specific example: they can be used to mediate access to a resource without requiring complex synchronisation: instead of sharing a piece of mutable state (like a cache) and protecting it with a lock, you can ensure a single actor accesses the cache, and other code communicates with that actor.
This is particularly useful with asynchronous code, because it's not possible to have a fair, asynchronous "passive" mutex without suffering from the thundering-herd problem. If you try to implement such a mutex, you will find yourself needing to queue up lock requests and responses, and you will end up reinventing the concept of an actor.
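That cache-owning actor can be sketched with plain threads and channels (all names here are illustrative, not an actix API):

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Messages the cache actor understands.
enum Msg {
    Put(String, String),
    Get(String, mpsc::Sender<Option<String>>),
}

// Spawn a thread that exclusively owns the cache; callers talk to it
// through the returned channel instead of sharing state behind a lock.
fn spawn_cache() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // The cache is owned by this single thread: no Mutex needed.
        let mut cache: HashMap<String, String> = HashMap::new();
        for msg in rx {
            match msg {
                Msg::Put(k, v) => {
                    cache.insert(k, v);
                }
                Msg::Get(k, reply) => {
                    let _ = reply.send(cache.get(&k).cloned());
                }
            }
        }
    });
    tx
}

fn main() {
    let cache = spawn_cache();
    cache.send(Msg::Put("a".into(), "1".into())).unwrap();

    let (tx, rx) = mpsc::channel();
    cache.send(Msg::Get("a".into(), tx)).unwrap();
    assert_eq!(rx.recv().unwrap(), Some("1".to_string()));
}
```

Because requests queue up in the channel, contention is handled by ordering rather than by locking, which is exactly the "you end up reinventing an actor" observation above.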
Akka (http://akka.io/) is the implementation of the Actor Model on the JVM.
First, even if waiting for a response from the database is the largest single contributor to your response time, and you're running an extremely low-traffic service, there's still a benefit to reducing your total latency.
Second, if you're not running a low-traffic service, and have enough requests that you're approaching the memory or cpu capacity of your web server (or have an existing application that's already deployed across multiple servers), making significant reduction in your CPU or memory use can let you handle quite a bit more traffic with less hardware.
Third, not all web services involve little more than making a request to a single external slow database. The data for your service could be:
* Ephemeral, kept in-process
* In a database on the same server
* In a fast database (memcache)
* In need of nontrivial processing
* Processed by a separate service you're just acting as a proxy for
Also, what problem is this solving that countless other near identical web frameworks don’t already solve?
- Blazing fast!
You know, marketing stuff that tells you absolutely nothing about the software itself but is required to fill space on a website.
The thing is, a boring "Benchmarks" heading in the README doesn't get you as many GitHub stars as a "Blazingly fast!" heading+icon. And you know how addictive Internet Points can be.