The benchmarks are a bit lower than we wanted. I think this is mostly due to an immature ecosystem for doing basic HTTP - working off of C bindings to a proven, fast framework would probably speed this up by a few times.
Thanks for the heads up. We reworked our folder structure after writing the READMEs, and it looks like we forgot to update them. I'll get working on that.
The 200 is being served as `rust-http`'s default. Static will never serve a 404 - it only defers to the next middleware, and the docs server is overly simple, so it just falls through to the default.
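For anyone curious what "defer to the next middleware" means in practice, here's a minimal sketch of that fall-through pattern in plain Rust. The types and the `serve` helper are made up for illustration; this is not Iron's or rust-http's actual API.

```rust
struct Request {
    path: String,
}

struct Response {
    status: u16,
    body: String,
}

// Each handler either produces a response or defers by returning None.
type Handler = Box<dyn Fn(&Request) -> Option<Response>>;

fn serve(handlers: &[Handler], req: &Request) -> Response {
    for handler in handlers {
        if let Some(resp) = handler(req) {
            return resp;
        }
    }
    // Nothing claimed the request, so the framework's default kicks in -
    // and if that default is an empty 200, a missing file shows up as 200.
    Response { status: 200, body: String::new() }
}

fn main() {
    let static_files: Handler = Box::new(|req: &Request| {
        if req.path == "/index.html" {
            Some(Response { status: 200, body: "<html>...</html>".into() })
        } else {
            None // defer: no 404 here, let the next middleware decide
        }
    });

    let resp = serve(&[static_files], &Request { path: "/missing".into() });
    println!("{}", resp.status); // prints 200, the framework default
}
```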
It does sound odd that you compare your framework to a NodeJS framework. Comparing to Java frameworks (a fast language that can do both parallelism and concurrency) would be much more relevant.
The comparison with Go is really interesting because I don't think its http server does anything particularly fancy (like calling across FFI to non-Go code) and I don't see any obvious bottlenecks in rust-http that would cause an order of magnitude less throughput. Would be interesting to see some profiling.
Maybe redesign it to start some threads early, put them in a pool, and dispatch to them. I think I've seen a thread pool already baked into the Rust std or into Servo, I don't recall which. I think Servo even has a fancy work-stealing thread pool.
Also, the design of Nginx is sophisticated, because it uses event I/O plus a thread-dispatch design (at least it was like this the last time I hacked on it), and it's worth copying.
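As a rough illustration of the pool-plus-dispatch idea, here is a minimal sketch using only the standard library (pre-spawned workers pulling jobs off a channel). The worker count, the `Job` alias, and the fake "requests" are all made up; this is not rust-http's or Nginx's actual design.

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

// Jobs are boxed closures sent to whichever worker picks them up first.
type Job = Box<dyn FnOnce() + Send + 'static>;

fn main() {
    let (tx, rx) = mpsc::channel::<Job>();
    // The receiver is shared between workers behind a Mutex so that each
    // job is taken by exactly one thread.
    let rx = Arc::new(Mutex::new(rx));

    // Spawn the workers up front instead of one thread per connection.
    let workers: Vec<_> = (0..4)
        .map(|id| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || loop {
                // The lock is released as soon as a job has been received.
                let job = match rx.lock().unwrap().recv() {
                    Ok(job) => job,
                    Err(_) => break, // channel closed, shut down
                };
                println!("worker {} picked up a request", id);
                job();
            })
        })
        .collect();

    // An accept loop would push each incoming connection onto the channel.
    for n in 0..8 {
        tx.send(Box::new(move || println!("handled request {}", n))).unwrap();
    }
    drop(tx); // close the channel so the workers exit

    for w in workers {
        w.join().unwrap();
    }
}
```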
I am not surprised. Most of the machinery necessary for such things is still developing in Rust.
When I was testing my now-discontinued framework (widmann) using a very early version of rust-http, I felt lucky when it didn't drop or break on half of the requests. Reliability comes before performance in this regard, and Rust is still a bit away from done. I expect that to develop quickly once the general shape of the code is known and people start profiling and optimizing.
Same here. Without even looking into what is slow: Clojure with Compojure does 7500 req/s. If Rust wants to be the next close-to-bare-metal language, we need some serious performance improvements.
Simple question. According to [1], middlewares are cloned for each request. Is there a good built-in way of accessing shared mutable data, such as a connection pool?
We have a middleware for that! Have a look at http://github.com/iron/persistent, which wraps an RWLock in an Arc. It also provides a struct to make your middleware Share, so you can do similar things.
We also have session, which keeps a HashMap under an Arc, although it isolates the mutable data to a single session instead of sharing it across requests.
I would guess `Arc<Mutex<T>>`, `Arc<RWLock<T>>` and `Arc<SomeConcurrentContainer>` would all work (cloning them just bumps a ref-count - the handle gets copied, not the contents).
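For example, a quick sketch of what cloning the handle (not the data) looks like, with a Vec standing in for a real connection pool; the names here are made up for illustration:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Pretend this Vec is a pool of database connections.
    let pool = Arc::new(Mutex::new(vec!["conn-1", "conn-2"]));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            // Cloning the Arc only bumps a reference count; every thread
            // sees the same underlying Mutex and Vec, not a copy of them.
            let pool = Arc::clone(&pool);
            thread::spawn(move || {
                let guard = pool.lock().unwrap();
                println!("{} connections available", guard.len());
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}
```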
I've been curious about the HTTP implementation, particularly considering that `rust-http` is in bugfix-only mode and `teepee` is still in the design phase. Did you folks build it from scratch?
Are you planning on lending assistance to Teepee? We're kind of in limbo in terms of HTTP right now, and it would be a shame for Teepee to turn out to be vapourware :(
(A hint for any folks experienced with http - this could be your chance to make lasting mark on the Rust ecosystem!)
Teepee is getting back on track now; I just had a few weeks where I was unable to do much with it, but I started designing and developing things again yesterday.
This looks very nice. I've been following Rust and Go for a while now, and I finally have a small service to write that would be a good fit for either language, so I'm going to try both.
I noticed one of the repo's more recent issues was caused by a change upstream. The upstream Rust devs seem very accessible, but how often do you encounter these kinds of breaking changes? Acknowledging that Rust is pre-1.0, is there any way to stay up to date on these changes before they hit your app so you aren't blindsided?
The rate of change had slowed, but as we get closer to 1.0 it has sped back up again. We need to break as many things as possible as soon as possible, so that we can give the community some time to try it out before we christen a 1.0 release. Does that make sense?
Naw. The language itself is getting smaller and smaller over time. Most breaking changes these days (though not all!) are in libraries, which have individual stability markers, and the stability of the standard library doesn't block 1.0.
A year is a very, very, very long time in software. Why do you suggest that long? I'm curious.
We spent a lot of time talking with rust devs on #rust, which is full of extremely helpful, extremely nice people who help you as much as possible.
Usually we woke up in the morning to find everything broken by changes in the nightly, but it's pretty easy to grep for [breaking change] and work from there. Most changes take moments to fix.
Would you be willing to contribute a test implementation to our project [1]?
[1] http://www.techempower.com/benchmarks/#section=code