Actix: a small, pragmatic, and fast Rust web framework (actix.rs)
399 points by fafhrd91 on May 30, 2018 | hide | past | web | favorite | 123 comments

We use it at Sentry for one of our services and the experience has been great. The best part by far is that you can benefit from async io handling without having to write every one of your views in an async fashion. All the async complexity is offloaded into the extractors and the response sending.

Is this a new service built from scratch in Actix, or did you migrate the service from something else?

New service entirely.

Do you have any performance numbers, or a prior version of this service to compare it to? I'm curious how async in Rust compares to synchronous code in real-world applications, especially given its yet-incomplete state.

Absolutely no idea, since we did not replace an existing service. From everything I have seen in tests, performance is not a concern here and our bottlenecks are entirely elsewhere.

I evaluated a few different Rust web frameworks, where performance was the deciding factor. A minimum viable echo server was put through its paces with `h2load` on a relatively recent MacBook Pro. `actix-web` was literally 100x faster than the next fastest competing framework.

The benchmark result really confused me. But you'll find that the `actix` actors are extraordinarily lightweight and highly optimized around the equally lightweight and highly optimized Futures. The design is hard to beat, from a performance standpoint.

Have you tried to create a more complex web page and have the web frameworks render that as well? Sometimes being the fastest at the most trivial request isn't enough if it can't handle complex requests quickly as well.

Also, do you have a list of speeds for the frameworks you've tested?

A lot of Rust frameworks use sync io. The first generation does because the libraries for async didn't exist yet, and Rocket doesn't because (last I heard), the author said that he didn't feel the ergonomics were there yet, and that's one of Rocket's primary goals. So that leaves Actix, Gotham, and Shio, basically. Gotham hasn't been tuned for performance at all. I haven't seen any Shio benchmarks.

There are a lot though: https://github.com/flosse/rust-web-framework-comparison#serv...

From repo activity, Shio seems dead. And here is the state of Gotham: https://gotham.rs/blog/2018/05/31/the-state-of-gotham.html

Yeah, that Gotham news happened after I made this comment. Good to know!

Actix-web is great! I've adopted it for my projects. I can take full advantage of what Rust offers for concurrency and safety without going too deep into the weeds. Documentation, a growing number of examples, a very responsive author, and growing community are some of the reasons why I think this project is going to play a major role in Rust's web development story going forward.

> fn greet(req: HttpRequest) -> impl Responder

Nice to see the front-page example using 'impl', the most recent improvement in ergonomics. Before 1.26 things would have been different. This makes me more appreciative of the Rust team and community's efforts to improve ease of use.

To explain this to people who don't know the background - this is about 'impl NameOfTrait' in the return position.

It allows functions to return an object that provides a certain interface without specifying the actual type of that object. Previously this was only possible by wrapping it in a Box, which meant a heap allocation and dynamic dispatch. 'impl Trait' provides static dispatch and no other overhead, so it produces code equivalent to returning the type directly, but with all the abstraction flexibility you want.
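A minimal sketch of the before/after (the function names here are invented for illustration):

```rust
use std::fmt::Display;

// Pre-1.26 style: hide the concrete type behind a Box, paying a heap
// allocation and dynamic dispatch on every use.
fn boxed_greeting(name: &str) -> Box<dyn Display> {
    Box::new(format!("Hello, {}!", name))
}

// With `impl Trait`: the concrete type (String here) is still hidden
// from the caller, but dispatch is static and there is no allocation
// beyond the String itself.
fn greeting(name: &str) -> impl Display {
    format!("Hello, {}!", name)
}

fn main() {
    println!("{}", boxed_greeting("world"));
    println!("{}", greeting("world"));
}
```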

What is the use case? Are there simple examples one could go through?

Usually you return a closure with impl Fn. Another use case is to return an iterator.
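For instance, small sketches of both use cases (function names invented for illustration):

```rust
// Returning a closure: each closure has a unique, unnameable type,
// so before `impl Trait` you had to box it.
fn make_adder(n: i32) -> impl Fn(i32) -> i32 {
    move |x| x + n
}

// Returning an iterator: the real type is a long Map<Filter<Range>>
// chain that `impl Trait` keeps off the signature.
fn even_squares(limit: u32) -> impl Iterator<Item = u32> {
    (0..limit).filter(|n| n % 2 == 0).map(|n| n * n)
}

fn main() {
    let add2 = make_adder(2);
    println!("{}", add2(3)); // 5
    println!("{:?}", even_squares(6).collect::<Vec<_>>()); // [0, 4, 16]
}
```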

I wrote a post about it. Let me know if it helps.


Your code is nothing but magic. I have spent time trying to wrap my head around it. I am almost there, but I don't understand .map(<&str>::into). Monoid implements String.

Finally, that code is etched in my brain.


I've had relatively good success moving some microservices from Kotlin to Rust (mainly saving about 90% of resident memory utilization). I picked up Actix recently, and so far I'm enjoying using it. If Rust library support for geospatial tools were as good as turf.js, I'd be able to move a lot more stuff into Rust.

Turf development is funded by Mapbox. If you need more Geo functionality than that provided by https://github.com/georust/rust-geo (I'm one of the developers), contributions are actively encouraged, and we're happy to mentor people. Alternatively, feel free to pay for some dev time if you need something specific (e.g. OpenCage paid to have their geocoder included in https://github.com/georust/rust-geocoding, dual-licensed as Apache / MIT).

Hi urschrei, I naively tried to port turf to Rust, but given my past experience [0] and flaky time commitment, I thought it'd be too much to do alone.

I've seen georust, and it's on my backlog. There are a few functions I saw were missing (ones I actively use). With mentorship, I'd love to contribute them. I'll open some issues and introduce myself in the coming days.

[0] https://github.com/nevi-me/turf-kotlin

Hey, by the way, the link in the readme to the documentation is broken. It points here: https://docs.rs/geo/0.9.1/geo/

I would like to join so that I can be mentored. So, where do I start?

> mainly saving about 90% resident memory

The TechEmpower benchmarks do not care about memory usage and startup time. Maybe this has something to do with them doing JVM consulting? :)

What's your driver? Do you save enough server resources to make it worth it, or is it just a hobby project?

I run a public transport website that includes a few mobile apps. I've broken it down into quite a few microservices, but the bulk of it runs on Node and JVM.

I have a 64GB RAM, 12 core server, and the JVM services take up about 25% of resident RAM (ignoring Kafka and other Java stuff).

I've wanted to learn Rust for a while, so I recently bit the bullet. I use gRPC everywhere (Dart/Flutter, Node, JVM, Python), so I decided to start by rewriting some small services in Rust using gRPC for comms.

For now, I've taken a Java service that used 500 MB at peak down to under 10 MB of RAM. I'm planning on eating into the big stuff over the coming months.

It's not making money, so it's a "hobby" yes. Consulting's paying the bills so I don't mind at this point.


E.g. here's a URL shortener gRPC server that runs on Node.js, and a Rust client that can shorten URLs and get results back.



Thanks, situation is similar for me - a 5€ Digital Ocean Droplet (1GB RAM) goes much farther with Rust services than JVM based ones.

FYI: https://nevi.me/ errors out with a 502.

Thanks, the Ghost service was down. The blog's back up.

Regarding the RAM, I'll see how far I get with moving some services to Rust in the coming months, and what benefit I derive in the long run.

Same question: why move from Kotlin to Rust all the way?

To supplement the other sibling question that I answered:

It's a learning experience. Right now, I can't port everything to Rust, because each platform has its advantages. Rust is still missing a lot of libs that exist in NPM and Maven, which makes things difficult.

I've been exploring exposing some functions that exist in Java/Kotlin through gRPC (as everything runs on the same network anyways) until it's available on Rust.

I hope to blog about my experiences in the coming days/weeks.

Of the libs that are missing in Rust, which one would you find the most useful? I'm looking for project ideas to get stuck into.. :)

It's largely geospatial tools, which urschrei commented on above. Others are abstract, for example I use https://ignite.apache.org (I know I could use Redis), and that's one of the things tying me into the JVM. I use Apache Ignite as a distributed cache thingy.

I really want something in Rust that can read ORC files from HDFS and write them to some other database using JDBC or ODBC. It's difficult though. So many technologies there from the Java world :(

Could you link to your blog, please? I'd like to read about that when you do end up writing about the experience.

I assume it's [0], as can be found in his profile. Too bad it's throwing an error right now.

[0] http://nevi.me

It is, I didn't see that it was down. I just finished writing my last exam, so I'll have time to get it back up and running.

Not sure if I'm missing something, but for the TechEmpower Benchmarks [1] I had the impression that the bottleneck for other Rust libraries was accessing the database rather than handling HTTP requests. However, looking at the code [2], it seems that the Actix solution isn't doing anything special in this regard. Can someone give a quick description of what is causing such a huge performance boost for Actix compared to other frameworks?

(I might add that this is a question I've had for a while, and I did not check the source at [2] in detail today.)

[1]: https://www.techempower.com/benchmarks/

[2]: https://github.com/TechEmpower/FrameworkBenchmarks/tree/mast...

These kinds of tests are heavily reliant on having async IO; see https://news.ycombinator.com/item?id=17194761

Any chance you could elaborate on this? I don't really understand how it answers my original question.

I have not checked recently, but last I saw, the database libraries for Rust did not use async IO. Looking at (what I presume is) the code for the benchmark [1], it seems it imports the postgres and diesel crates. Last I heard, Diesel did not support async [2], and looking at the postgres crate [3], it does not mention async, which I assume it would if async were supported.

My whole point was that, sure, I can see how async IO is important for handling many concurrent HTTP requests, but each of those requests would still have to pass through the synchronous database driver, which uses threadpooling, right? Or what am I missing here? I can see how it has great performance on the plaintext and JSON benchmarks, but I don't understand what gives it such a large boost in fortunes or multiple queries.

For example, Iron is doing 300k on the plaintext/json benchmarks, but drops to 18k on fortunes, and as I remember the benchmark code, it is written in a fairly straightforward way. If the database layer supported 160k requests per second, I don't see why we would see such a huge drop. (Edit: 160k is the performance of Actix on fortunes.)

I also recall seeing numbers on the 10k order of magnitude from naive benchmarks of the various database libraries available, without any HTTP part to the application. But I'm not sure; maybe I'm missing something or remembering incorrectly?

[1]: https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...

[2]: https://github.com/diesel-rs/diesel/issues/399

[3]: https://crates.io/crates/postgres

> I can see how it has great performance on the plaintext and json benchmarks

I actually forgot the DB tests were implemented; when I was sending in PRs, I was mostly thinking about plaintext and JSON, not database stuff. Sorry about that!



> Technically, sync actors are worker style actors. Multiple sync actors can be run in parallel and process messages from same queue. Sync actors work in mpsc mode.

So, you're still getting some degree of parallelism here. I wonder if that's it?

(You're right about the fact that the DB APIs are currently synchronous.)

Actix provides an actor abstraction over synchronous code and allows you to communicate with it in an async manner. The TechEmpower benchmark uses sync actors for db operations, and the http part is async. I am not sure how this helps though; tokio-minihttp also uses a threadpool for db operations, and its results are not that good.
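For what it's worth, the worker-style pattern the quoted docs describe can be sketched with plain std threads and a shared queue. This is just the shape of the pattern, not actix's actual API; `run_workers` and the `msg * 10` "query" are invented for illustration:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Several workers pull messages from one shared queue and reply on a
// second channel. Pretend `msg * 10` is a blocking DB query: the
// queries run in parallel while the caller just sends messages.
fn run_workers(queries: Vec<u32>, n_workers: usize) -> Vec<u32> {
    let (tx, rx) = mpsc::channel::<u32>();
    let (res_tx, res_rx) = mpsc::channel::<u32>();
    let rx = Arc::new(Mutex::new(rx)); // share the single consumer end

    let workers: Vec<_> = (0..n_workers)
        .map(|_| {
            let rx = Arc::clone(&rx);
            let res_tx = res_tx.clone();
            thread::spawn(move || loop {
                // Hold the lock only long enough to take one message.
                let msg = match rx.lock().unwrap().recv() {
                    Ok(m) => m,
                    Err(_) => break, // queue closed: worker exits
                };
                // Blocking work happens here, outside the lock.
                res_tx.send(msg * 10).unwrap();
            })
        })
        .collect();
    drop(res_tx); // keep only the workers' clones alive

    for q in queries {
        tx.send(q).unwrap();
    }
    drop(tx); // close the queue so workers stop

    for w in workers {
        w.join().unwrap();
    }
    res_rx.iter().collect() // drain all replies
}

fn main() {
    let mut results = run_workers(vec![1, 2, 3, 4], 3);
    results.sort();
    println!("{:?}", results); // [10, 20, 30, 40]
}
```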

From my cursory understanding, it is the following. There is an async protocol implementation for the `postgres` crate called `tokio-postgres` [1] (it's a child crate that is enabled via a feature). However, either only Postgres supports async protocols, or there simply isn't an async driver for Rust outside of `tokio-postgres`.

However, Diesel uses `libpq` rather than `postgres` (and its child crate `tokio-postgres`). Moving from `libpq` to `rust-postgres` would cause a lot of breakage.


If you are doing sql queries in web requests, you probably should not invest in Rust for performance.

It appears that Actix's primary author works for Microsoft, does anyone know if Microsoft is using it internally for anything?

we use actix. but that is all i can share :)

Have you spoken with the Rust core team about Microsoft's use of actix? They love getting feedback from commercial users, and I believe they are willing to sign NDAs when necessary (there's certainly plenty of commercial users they seem to be unable to tell me about, and I ask often :P ). I'm happy to put you in touch with them if you'd like; see my email in my HN profile (and this invitation goes for anyone else out there using Rust in production capacities, of course!).

He's the author of actix

So MSFT uses Rust. :) That's actually quite a newsworthy thing; it could be a lot more newsworthy if we knew it was used for some mission-critical component that also needs to be blazing fast.

Looking at Rust's strengths, and given that MSFT has languages/compilers of its own, the use case is probably a "mission critical component that also needs to be blazing fast". But for now we're guessing.

Oracle also uses it, and they changed the JavaOne conference to the Oracle Code One conference, including Go and Rust related talks.

This is very cool! I’d seen their open source work but not seen this.

I've learned to hold my cheering whenever I read Oracle and open source in close proximity.

I hear you. At the same time, the particular project they're doing is an implementation of an open standard, to provide an alternative and prevent monoculture, so seems good to me. I'm not involved in the container space, so I can't speak to the quality, but that's what it looks like from here.

Here is the announcement.



> Oracle Code One is the most inclusive developer conference on the planet. Join discussions on Java, Go, Rust, Python, JavaScript, SQL, R, and more.

MSFT used `ripgrep` for Visual Studio Code IIRC.

Yes but who are you?


fafhrd91 appears to be the primary author.

I tried using actix for a recent project. I just could not get it to do what I wanted to. It always felt like a fight. I switched to rocket and everything is so much easier.

Everyone is saying great things about it, but I just want to point out that it's not for everyone.

Just to clarify for others, Actix is a more general actor-based programming framework. Actix-web is a nice little web server framework built on top of that, which you could integrate with other Actix components.

From looking at the two, Actix is more minimalist with direct control, though it has some nice middleware built in. Rocket is a more Rails-like “everything works and is magic” approach. Both seem like they could be great but it depends on your use case.

I tried out actix-web and, to be fair, I'm not very fluent in Rust yet, so I had a similar experience of not being able to figure some things out readily enough.

So as a result I've opted to use D instead for my current project. I've been able to figure more out with Vibe.d in a shorter time, so I'll likely stick with D for this project, since time is a factor working against me.

I had the opposite experience. I wrote an endpoint in 3 languages/frameworks last weekend: Java/Spring Boot, Elixir/Phoenix, and Rust/Rocket. (the last was mostly going off this blog post: https://lankydanblog.com/2018/05/20/creating-a-rusty-rocket-...) I was fighting with Rust Nightly failing to compile a couple different packages. All the GitHub issues seemed to point to finding the magical combination that would make things run. I gave up after an hour or so. I'd like to have something that runs on Rust Stable. Rocket was pretty cool _looking_ though.

Actix-web was almost a drop-in replacement.

Don't know anything about Actix, but can confirm that Rocket is excellent. The actor abstraction is very interesting, though.

Anyone have any insight into how Actix and Rocket compare? I'm interested mostly in ergonomics and safety.

I've done a simple hobby project in Rocket and ported it to Actix.

I'm not experienced with web services, and my project was very limited and for learning purposes. Here's my takeaway:

I like both, but for me Rocket was way more ergonomic: you create routes as if they were simple functions, with input and output dealt with automatically (from the request and to the response).

Actix's advantage is actors, and it's easy to be fully async. I had some issues dealing with it, but most of my troubles were in extracting request data and building responses.

When actix-web supports the same magic as Rocket (once proc-macros become stable), actix-web will have the edge, unless Rocket becomes async-ready and stable first.

For both, it is only a question of missing stable Rust features, and actix-web is already running on stable.

actix-web:

- easy
- async
- stable

Rocket:

- stupidly easy
- sync
- nightly

I also recently just finished porting a side project of my own from Rocket to Actix. I'm absolutely loving Actix so far!

Other than Rocket being nightly, the other reason I switched to Actix was because Rocket doesn't have the ability to respond to requests directly within the middleware layer, you can only modify the response but not return early. This is pretty important with regards to CORS and trying to catch all OPTIONS requests. There are a few solutions of course, but all of them felt hacky or verbose.

I have no complaints with Actix yet.

I think from an ergonomics standpoint, actix is very close to Rocket. Of course Rocket has some advantage, but actix compiles on stable and has zero codegen. As soon as proc macros stabilize, both will be on par.

From a performance perspective, actix is faster than Rocket under any type of load.

Interestingly, there is a proposal for Actix to be the HTTP layer of Rocket. https://github.com/SergioBenitez/Rocket/issues/17#issuecomme...

I'm in the same boat, at least for actix-web. The static lifetimes on the handlers (and, I think, state) make some things really painful, but they're also part of the reason it gets the speed it does.

The actix library is fairly nice, though there are still a few cases where the use of globals and/or statics introduces problems.

We've got some stuff in the pipeline that should relax that requirement. It plus async/await (which has a PR open in the compiler) should make all of this way easier.

Good to hear. I could use Box to get around the static stuff, but given the power of lifetimes, I'd much rather be able to use lifetimes correctly and not have to box everything.

One other thing that wasn't really apparent, but would have made my life easier, is a way to use an actor to handle a request, so I could have access to a context for thread-related activities (e.g. tokio handles). It feels wrong to just use Arbiter::handle there, especially for testable code.

Doesn't Rocket depend on nightly Rust? Doesn't it use features that may never make it into stable Rust?

Rocket should be usable on stable this year, but is nightly only for now.

I recently ported a very small microservice from Rocket to Actix and found the migration to be painless. In fact, its use of types and inference, along with integration with Serde, made it very easy. It also works on stable Rust and supports connection pooling against Postgres. This makes it a winner in my mind.

I'm excited to use it again in my next service.

What motivated you to move away from Rocket (just curious, I've only used Iron)?

Honestly, it was not being able to target stable that prompted the switch. Then after using actix I was impressed.

Browsing through the Actix guide (https://actix.rs/actix/guide/), I didn't find any explanation of what happens when an actor crashes or how crashes [1] are handled.


I feel like Let It Crash and Fail Fast make a lot of sense in a dynamically typed language, where, if you were to code defensively and try to make assumptions about where your program could possibly fail, you'd end up with a bunch of redundant error code and would probably have written error handling for places where the program might never have crashed. Rust's Result type makes sections where an error can occur pretty explicit, so I'm not sure it makes sense to follow the same methodology. I'm interested to hear what other people think.

First, you need to define what a crash means in Rust: a panic or an error. In the general case you cannot recover from a panic. In the case of an error, the type system prevents unhandled errors in actors. You can restart an actor, but that is controlled by developer action.

A panic in an actor will take down the whole (rust) server?

One thread in the best case, but the process may die. It depends on the panic.

I came across this the other day while looking into Flow, the C++ extension used to write FoundationDB. This Github issue asks about a benchmark on which Flow claims really good results:


Actix does pretty well too.

What a great-looking site. I've been really excited about actix for a while now. We just started some internal experiments with it here, and I'm looking forward to more.

Agreed. Good site.

This is an aside, and I sincerely apologize for that, but I am compelled...

I'm reading the greeting/hello-world example on this nice site and I notice unwrap_or(). That is a poor name: can it panic, as suggested by the "unwrap" part (I have just enough Rust to know that), or can it not, as suggested by the "or" part? The name is inherently ambiguous!

It's as if the .unwrap() that is festooned throughout such example Rust code has become so ubiquitous that someone felt it had to be used and so tacked on "_or". Why couldn't it just be .or() or perhaps .default()?

And so I investigate and things go rapidly downhill from there. Consider:

Good grief. The word ambiguous seems inadequate to describe what has emerged here.

Again I'm sorry; this is clearly off topic, probably badly naive and possibly inappropriate in a few other ways to which I'm pathetically oblivious. I couldn't help myself.

"unwrap" doesn't mean "panic". It means "to take out of some kind of container". So, the question is, what to do if the thing isn't in the container?

    unwrap: panic
    unwrap_or: produce this value instead
    unwrap_or_else: produce a value by running this closure instead
    unwrap_or_default: produce a default value instead
or and or_else are just like unwrap_or and unwrap_or_else, but they don't do the unwrapping; you keep the container.

In this case, the "container" is option, but similar types have the same methods, like Result.

> Why couldn't it just be .or() or perhaps .default()?

or returns Option<T>; unwrap_or returns T. And default already returns the default value for a T.

TL;DR: you have a lot of options (pun intended, sorry!) with what to do, but the names all follow a quite regular scheme.
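The scheme above, as a quick runnable demo (all of these are real standard-library methods on Option):

```rust
fn main() {
    let present: Option<i32> = Some(5);
    let absent: Option<i32> = None;

    // unwrap: take the T out, panicking if there is none.
    assert_eq!(present.unwrap(), 5);

    // unwrap_or: produce this value instead.
    assert_eq!(absent.unwrap_or(0), 0);

    // unwrap_or_else: produce a value by running a closure instead.
    assert_eq!(absent.unwrap_or_else(|| 2 + 2), 4);

    // unwrap_or_default: produce T's Default value (0 for i32).
    assert_eq!(absent.unwrap_or_default(), 0);

    // or: same fallback idea, but you keep the container.
    assert_eq!(absent.or(Some(7)), Some(7));

    println!("all unwrap variants behaved as described");
}
```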

The "unwrap" is referring to getting a plain T out of a Foo<T> container: all of those "unwrap" functions return a plain T.

One way to look at them is "unwrap", "or" and "or_else" are building blocks that have a common meaning across the different examples:

- unwrap: returns a plain T

- or: the left-hand side, unless it is a "failure", then use the argument value

- or_else: the left-hand side, unless it is a "failure", then use the argument function to create the value

For prefixes, "unwrap" means Option<T> -> T, while "or" means Option<T> -> Option<T> [0].

For suffixes, "_else" means executable code (a closure).

> > It's as if the .unwrap() that is festooned throughout such example Rust code has become so ubiquitous that someone felt it had to be used and so tacked on "_or". Why couldn't it just be .or() or perhaps .default()?

Because Rust doesn't have function overloading and thus you'd be missing most of the cases?

[0] or more generally Wrapper<T> -> T and Wrapper<T> -> Wrapper<T>

>> Because Rust doesn't have function overloading

That is the insight I needed. Thank you.

Rust seems novel in that despite having powerful abstractions and a rigorous type system it does not support overloaded functions. I gather from some of the discussions about it that this design decision greatly simplifies the compiler implementation; supporting function overloading would necessitate answering several very tough questions such as whether a function can be overloaded on argument lifetime.

So the Rust standard library has established conventions (in this case 'unwrap', 'or', and 'else', combined in various ways) to deal with the permutations that naturally emerge given the lack of function overloading. It's important to understand and inculcate these conventions, particularly when designing interfaces for use by others.

> Rust seems novel in that despite having powerful abstractions and a rigorous type system it does not support overloaded functions.

Neither Haskell nor ML (incl. its children) has function overloading.

Learn about the Option type: https://en.wikipedia.org/wiki/Option_type

Looks nice, really great landing page too. Informative and concise, didn't leave me asking "what is this?"

Ought to be good, site was designed by the creator of Flask :)

Wow, that's cool. Maybe I should finally learn rust!

I started porting one of my Rocket-based synchronous services to actix-web, and so far I'm pleased with the process.

I have just started playing with Actix-web. Rust-Noob as well. The yellow world example compiled to a binary that was ~ 5 MB.

As a general observation, Actix-web pulls in a lot of dependencies at install and compile time. Are all of those dependencies really necessary for a hello world scenario?

Being a Rust newbie, I thought maybe I was using the wrong tool and started to look at hyper instead.

5 MB is nothing compared to what you'd use for a similar project in Node, Python, or Ruby. Sure, it's not the tiniest it could be, but using tools like strip, not including debug symbols, etc., it gets pretty damn small. Honestly, at that point I think size becomes entirely pointless to even mention unless you need to run it in a super-constrained environment, which you're probably not when you're using the standard library :)

There's a lot that can be done to shrink a Rust binary[0]. A copy of the summary:

- Compile with --release.

- Before distribution, enable LTO and strip the binary.

- If your program is not memory-intensive, use the system allocator (assuming nightly).

- You may be able to use the optimization level s/z in the future as well.

- I didn’t mention this because it doesn’t improve such a small program, but you can also try UPX and other executable compressors if you are working with a much larger application.


s/z was made stable ten days ago: https://github.com/rust-lang/rust/pull/50265

So, two releases :)

Yes, I'm not constrained in any way that would keep me from running a 5 MB binary. I was curious whether this is an indication of what is to come for large projects...

I guess I was tangentially pointing to complexity and abstraction there.

rust doesn't use dynamic linking, which contributes a lot to that size.

Won't a crate compiled with 'crate_type = "dylib"' be dynamically linked if you specify '-C prefer-dynamic' when compiling your program?


This is pedantic, but rust does link dynamically, but only to libc.

It can, but for practical reasons, generally does not. For Rust code anyway; often anything bound via FFI is dynamically linked.

> I have just started playing with Actix-web. Rust-Noob as well. The yellow world example compiled to a binary that was ~ 5 MB.

I've only played around with Rust a bit, but IMO, as a general rule: don't judge back-end frameworks by the size of their deliverables, unless we're talking about something ridiculous (5 GB). It's extremely superficial and has a very low correlation with the quality of the actual tool.

> The yellow [sic] world example compiled to a binary that was ~ 5 MB.

See: "Why are Rust executables so huge"


I've recently been building an IRC bouncer and webapp with Actix, and it's been really smooth sailing so far -- excellent documentation, extensive examples, and everything I've touched so far has just worked the way you'd expect it to. It's a gem of a project.

Wanted to do the same exact thing! Is it public?

Not yet (still incomplete) but I'd be happy to drop you a line if/when that changes - send me an email?

Can someone explain to me what advantage the actor model provides for a web server?

Actors are generally a very powerful abstraction.

To give a specific example: they can be used to mediate access to a resource without requiring complex synchronisation: instead of sharing a piece of mutable state (like a cache) and protecting it with a lock, you can ensure a single actor accesses the cache, and other code communicates with that actor.

This is particularly useful with asynchronous code, because it's not possible to have a fair, asynchronous "passive" mutex without suffering from the thundering-herd problem. If you try to implement such a mutex, you will find yourself needing to queue up lock requests and responses, and you will end up reinventing the concept of an actor.
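A toy sketch of the cache example above with std channels (not any particular framework's API; `Msg` and `spawn_cache` are invented for illustration): the actor thread owns the cache outright, and callers send messages instead of taking a lock.

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Messages the cache actor understands.
enum Msg {
    Put(String, String),
    Get(String, mpsc::Sender<Option<String>>),
}

// The actor thread owns the HashMap outright: no Mutex, no shared
// mutable state. All access is serialized through its mailbox.
fn spawn_cache() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut cache = HashMap::new();
        for msg in rx {
            match msg {
                Msg::Put(k, v) => {
                    cache.insert(k, v);
                }
                Msg::Get(k, reply) => {
                    // Send a copy back on the caller's reply channel.
                    let _ = reply.send(cache.get(&k).cloned());
                }
            }
        }
    });
    tx
}

fn main() {
    let cache = spawn_cache();
    cache.send(Msg::Put("lang".into(), "rust".into())).unwrap();

    let (reply_tx, reply_rx) = mpsc::channel();
    cache.send(Msg::Get("lang".into(), reply_tx)).unwrap();
    println!("{:?}", reply_rx.recv().unwrap()); // Some("rust")
}
```

The reply channel in Msg::Get is the request/response queueing the comment describes: it is exactly what you'd end up reinventing if you tried to build a fair async mutex.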

This looks really nice! I learned Scala primarily to use the Play Framework which is a fabulous way to build large web applications. This looks spiritually quite similar, but with the advantages of Rust.

I started using Scala for Play and Akka and I'm currently using Akka[1] in production in many projects.

[1]: http://akka.io/ — Akka is an implementation of the Actor Model on the JVM.

Funny, I'm learning Scala for work after teaching myself Rust. It's definitely made the whole process a lot more straightforward thanks to the commonalities between them.

Question: What is the point of making the fastest web server possible when any kind of datastore attached to the server is going to be the bottleneck?

That's a good question, and something that's not always obvious without having been in a situation to get advantage from such an improvement.

First, even if waiting for a response from the database is the largest single contributor to your response time, and even if you're running an extremely low-traffic service, there's still benefit in reducing your total latency.

Second, if you're not running a low-traffic service and have enough requests that you're approaching the memory or CPU capacity of your web server (or have an existing application already deployed across multiple servers), a significant reduction in CPU or memory use can let you handle quite a bit more traffic with less hardware.

Third, not all web services involve little more than making a request to a single external slow database. The data for your service could be:

- Static
- Ephemeral, kept in-process
- In a database on the same server
- In a fast database (memcache)
- Something that requires nontrivial processing
- Processed by a separate service you're just acting as a proxy for

That's only true for certain workloads and applications.

Many modern databases are very fast and many modern web servers are very slow. I've run into many applications where database performance was not the bottleneck.

Why do so many Rust projects lead with telling you their number one feature is type safety? It’s Rust we get it, stop telling us about your type safety!

Also, what problem is this solving that countless other near identical web frameworks don’t already solve?

Even with an opinionated type system, it's possible to slack off a bit. There are different degrees of strictness/specificity when composing types.

Most projects use buzzwords like that though.

- Safe!
- Blazing fast!
- Powerful!
- Elegant!
- Shiny!
- Modern!

You know, marketing stuff that tells you absolutely nothing about the software itself but is required to fill space on a website.

The thing is, a boring "Benchmarks" heading in the README doesn't get you as many GitHub stars as a "Blazingly fast!" heading+icon. And you know how addictive Internet Points can be.

A few devs are over-enthusiastic and missed the 80s/90s wave of safe systems languages.

I really love seeing rust making progress. It's a fantastic (with slightly ugly syntax) language that I want to learn. :)



