Migrating from Warp to Axum (fasterthanli.me)
101 points by hasheddan on Nov 23, 2022 | 75 comments



I've gone through the process of auditioning a bunch of Rust web servers as well and came away with the opinion that, of them all, Axum and Tide have the best interfaces and features. We went with Tide, and I still believe that it is the _easiest_ to use in most cases - just with a few quirks that need to be changed and features that need to be added (or in some cases just made public). Sadly, however, it is not very actively maintained, and I fear for its future as that compounds with fewer and fewer people choosing it over time.

Axum by comparison has a very active community, but I found its request handling and middleware concepts much less ergonomic. If you're debating between Rust web servers, it's worth taking a look and giving Tide a chance. The async-std choice hasn't been an issue for us at all either.


Very similar conclusions to the ones I came to.

Just a couple of days ago I watched this video on YT - Designing Tide by Yoshua Wuyts [1] - and I really loved everything about Tide.

On the other hand, it seems that this guy was (is) sponsored by Microsoft to work on these things, which is a red flag. Not that I am against something being sponsored by them, but having a framework maintained by pretty much a one-person team at Microsoft seems like it could end at any time if some manager decides there is no budget for "Rust web research" anymore.

But I have also asked myself - how mature is Tide now, and how much development does it really need? I can't answer, since I have only used it a little over the last few days. I am curious whether somebody more in the know could explain.

So the question really is - could Tide, as it is now, be considered mature enough that it does not matter who maintains it (if anyone)? If so, then it might be the perfect framework. Anything extra could be developed as additional packages on top of it. Plus, it could always go through a revival (somebody else forking it and continuing to add features).

[1] - https://www.youtube.com/watch?v=laJA4QCjmxk


I came to similar conclusions. Tide is IMO the most ergonomic, but its future is unclear in terms of community/maintenance. Its choice to use async-std ended up kind of biting it, since projects have by and large chosen tokio. I have hope for the eventual “swappable async runtimes” initiative, but it’s probably going to be too late to help in these regards.


Yeah I think it's mostly a choice between Tokio and async-std. FWIW I chose Axum in a recent migration from Node. It's not perfect, but I found it to be pretty simple, even with controller/service/data abstractions.


> Oh and that type never "gets bigger" in a way that would cause compile-time explosions.

FWIW Axum did get hit by last year's "huge types" regression; the maintainer opted to bypass the issue by boxing routes internally: https://github.com/tokio-rs/axum/pull/404

Which is not really surprising, as it's built on tower, which at the time Lime noted was affected: https://fasterthanli.me/articles/why-is-my-rust-build-so-slo...

> ...I didn't actually trust warp that much. It's perfectly fine! It just.. has a tendency to make for very very long types, like the one we see above. Just like tower.


Fair, but if you were writing a web application on top of hyper directly, you probably wouldn't use one tower layer per route, which is somewhat equivalent to what warp does.

When I wrote this I worked at a company where the main Rust codebase had, uh, perhaps too many tower layers.


As someone that uses actix-web, what are the pros and cons of moving to Axum? I hear about it a lot these days. I know it integrates into the Tokio ecosystem well, including Tower, but I'm not sure what that concretely means for someone already using actix-web. When would I use Tower?


Warp/Rocket/Actix to Axum is a good improvement IMHO, especially because you get Hyper and Tower as well. I wrote an intro to these at https://github.com/joelparkerhenderson/demo-rust-axum


I'm writing a toy image sharing webapp with Axum; a $40/mo server is able to process 200,000 dynamic requests per second. A bit more than nginx and a bit less than varnish serving static files on the same hardware. This whole Rust thing has some potential.


I have a handful of services, some in Warp and some in Rocket, and I dislike both of those frameworks. I've been looking into axum so this is a nice read.

Honestly I don't think that Axum is right either. So for example, this:

    async fn create_user(
        Json(payload): Json<CreateUser>,
    ) -> impl IntoResponse {
        let user = User {
            id: 1337,
            username: payload.username,
        };
        (StatusCode::CREATED, Json(user))
    }
For context, if you haven't checked out Axum, this is from the Axum docs.

Rocket has a similar thing with its request guards: it has a json type that you put into the signature of a handler function, and it automatically plucks the value from the body and parses it as JSON.

What's weird to me is that it's coupling the request and response format to the logic. What if a client wants to post this as a form body? Write a separate endpoint? What if some old clients put the arguments in the query string? When I see this snippet as one of the intro examples for Axum, it feels like a red flag to me.


> What's weird to me is that it's coupling the request and response format to the logic.

It does not though? It lets you do it for your personal convenience. If you define a JSON API, you can just tell the framework that it takes JSON data, and it'll do the deserialisation for you.

> What if a client wants to post this as a form body? Write a separate endpoint? What if some old clients put the arguments in the query string?

Take a raw body and query strings and do the dispatching internally. Hell, you can ask for the request (https://docs.rs/http/latest/http/request/struct.Request.html) itself:

    async fn handler(request: Request<Body>) {
        // ...
    }
There you go, knock yourself out.

I think Warp would actually let you write different handlers for each of those cases, because it routes on the entire thing e.g.

    let some_route = path!("foo" / usize / "bar");
    some_route.and(json()).map(handler_json)
        .or(some_route.and(form()).map(handler_form))
        .or(some_route.and(query()).map(handler_query));

or you could tell it to unify these three filters and pass the data from whatever source it got to the same handler:

    path!("foo" / usize / "bar").and(
        json().or(form()).or(query())
    ).map(handler)
I'm sure this wouldn't work as-is and would require some tuning up or `unify()` calls, but you get the gist.

IIRC Axum only routes on the URL, so it can't do that; that's both why it doesn't build types as gigantic as warp's, and why you have to specify the extractors in the function, where warp doesn't need that (you'd just tell it that it's `payload: CreateUser`).
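
To illustrate (rough sketch, assuming axum 0.6 and the `create_user` handler from the docs snippet above): the router only knows about methods and paths, and the extractors all live in the handler signature.

    use axum::{routing::post, Router};

    // routing matches only on method + path; Json<CreateUser> lives in create_user's signature
    let app = Router::new().route("/users", post(create_user));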


sure you can get the underlying request, but if that’s the answer to everything that the framework author didn’t think of, that’s just an admission that the abstraction is wrong, which is kinda what I’m getting at. The entire conceptual model that views the Json<T> type as a handler argument type and then parsing the body based off of that is what Rocket does too. I think the entire strategy is conceptually incorrect. Axum may do Rocket better than Rocket, but if it’s using the same conceptual model, it seems like a lateral move. I’m looking for a new abstraction and conceptual model, not a better implementation of the same concepts or the same concepts with a larger pool of maintainers.


That's not a wrong abstraction, it's just not the abstraction level that you want to work at for this problem. So... create multiple endpoints that all call the same function to perform the behavior you want? Maybe call it a "controller".


I spent ten years writing Go HTTP servers and the abstraction used in net/http has served me well for a decade straight. I’ve been writing http servers in rust for 3 months. Rocket provides an abstraction with a lot of holes that makes me jump through a lot of hoops to do things that have been trivial and common in http programming for over a decade. I’ve written stuff using only Hyper, it’s very manual. Warp has the problems the article describes. With Go, I used the standard library HTTP implementation for a decade and was happy the whole time. I also wasn’t experienced with Go going into it, it was how I learned Go.

Axum and Rocket have the same general thrust, the abstraction being that you define types to see a thing that’s not an HTTP request. That’s the abstraction that I think is incorrect. That design strategy is actively making things complicated for me on a daily basis at my dayjob writing services in Rust.

I want something higher level than Hyper but with a different theory of abstraction than Rocket or Hyper want to provide. The theory of Rocket’s abstraction is that the framework handles the http request and response for you, what you see is something else. That’s not the toolkit I’m looking for. The toolkit I’m looking for makes it easy to interact with http request and response streams instead of making it easy to hide their existence.


You don't want to use request guards / automatic deserialization into strict types because it's not flexible enough. You're not willing to use the tiniest abstraction (a function) to perform the same behavior for different strictly encoded requests. (Well, you didn't really respond to the content of my comment at all, but nevermind.) You rejected ancestor's suggestion of using Request<Body>... which is exactly what is provided by Go.

I can't tell what you want; all of these opinions stacked together are incoherent.

> The toolkit I’m looking for makes it easy to interact with http request and response streams

> I want something higher level than Hyper

So higher level than hyper, but no higher level than hyper. Crystal clear.


no that's ... a pretty extreme misreading of what I'm saying. I'm not saying "I don't want any abstraction", I'm saying "I don't think this abstraction is a very good one, I think it has problems, and I don't think I would rewrite my existing services to use this framework as a result". Here, I'll provide two high-level alternatives.

Here's some pseudo-code of an endpoint that can accept json or form data as an alternative to the Json<T> abstraction that Rocket and Axum both currently utilize:

    async fn handler(thing: Thing) {
        // the Thing is read from the request by a request decoder.
        // The request decoder is chosen from a set of available
        // request decoders based on the Content-Type header.
    }

    let mut app = App::new();
    app.register_decoder(jsonDecoder);
    app.register_decoder(formDecoder);
    app.post("/thing", handler);
    app.run()
> You rejected ancestor's suggestion of using Request<Body>... which is exactly what is provided by Go.

That's not really accurate. net/http provides an abstraction that has survived for a decade and has been leveraged by a lot of tools to make middleware interchangeable. For example, gorilla/mux uses the net/http standard, and that has worked great for me for like 8 years running (unfortunately, that project lost its maintainer). The argument I'm making is that not all abstractions are equally good; Json<T> is an example of an abstraction that is used in Rocket that I have found to be cumbersome, and Axum is repeating that abstraction. It's one of the very first examples in their docs. Why couple handler logic to request encoding? I think that abstraction is wrong, I don't think it will withstand the test of time, and in another year or two we'll be back at it, updating our Axum services to use [some new thing].

So instead of the core abstraction being "every endpoint accepts whatever type it wants", the core abstraction could be "every endpoint accepts one value of the same type":

    async fn handler(req: Request<Body>) {
        // req.decoder looks at the content-type header and
        // picks from a list of registered decoders. If
        // the client picks an unsupported decoder it fails.
        let dec = req.decoder()?;
        let thing = dec.parse::<Thing>()?;
    }

    let mut app = App::new();
    app.register_decoder(jsonDecoder);
    app.register_decoder(formDecoder);
    app.post("/thing", handler);
    app.run()
So a really cool, useful, powerful, and general abstraction that I love in Rust is the string parse method: https://doc.rust-lang.org/std/string/struct.String.html#meth...

I honestly would rather have that for HTTP requests than making an assumption about the content encoding in the handler's signature.

> (Well, you didn't really respond to the content of my comment at all, but nevermind.)

I mean my argument is "a thing that is trivially expressible and easy to do in other stacks has poor ergonomics in this framework" and your response is basically "ok so take the product of all of your endpoints and all of your encodings, ez pz", which ... is also not ergonomic? Literally the opening prompt was me saying I think that coupling the encoding to the endpoint's logic means you'd have to write another endpoint and that feels wrong to me, so ... you're just telling me to do the thing that I specifically said is the thing that makes me think this abstraction is weak.


For what it's worth, it's not that difficult to manually implement the `FromRequest` trait for `Thing` so that it parses the request based on the Content-Type.


It’s literally how warp works but apparently they skipped right over that so…

The objection is also incredibly weird, I think I’ve “needed” a variable type intake all of once, and it was a mistake to do so (as it’s an easy path towards inconsistent handling at different levels of processing).


it’s a thread on an article about moving away from warp. I have a handful of warp services currently and we’re actively moving those services away from warp for other reasons. I’m not going to try to convince everyone at my org to stay on warp, it has the issues this article mentions.

My argument is not that it’s impossible, it’s that the whole value proposition of these frameworks is that they make you jump through fewer hoops than building on top of Hyper yourself, but it looks like a lot of the problems that I’ve encountered with Rocket are being replicated with Axum. There’s a very good chance we -will- move our services to Axum, I’m just not confident that this is really stable ground.

As for the specific example, I think you're missing the forest for the trees. I used that specific example because it's in the article and it's in Axum's readme, so it's safe to assume that people discussing the article would be familiar with that case.


A perfect API would consist of a few parts:

1. A request type and an associated response type

2. Impls for converting an HTTP request into the request type, and the reverse for response

3. A server type with an impl for handling the request

4. A client type with an impl for sending the request

The remaining challenge is making all this ergonomic for the simple cases.


1, 3, and 4 are already there, they're just a part of Hyper, not Axum/Warp/Rocket. 2 is basically the thing that Axum/Warp/Rocket provide. Hyper is kinda like net/http and Axum/Warp/Rocket are more feature rich. The thing is, they're super early. They don't remind me of the pared-down quality simplicity of the Gorilla Toolkit or even the feature paradise of Gin.

Honestly, a lot of the Rust http frameworks strike me as eerily similar to either Falcore or Revel. Falcore was a very early Go http application framework built by ngmoco, a now-defunct game company. Falcore didn't really gain a lot of traction, partially because it provided abstractions that weren't very ergonomic. Its whole thing was that the core abstraction was a modular pipeline. https://github.com/ngmoco/falcore

I think most people know Revel, it's a little less obscure. It's philosophically the precursor to Gin.


The frameworks provide 2 by hiding 1. This makes it impossible to use the request and response types for other purposes.


The examples showcased by Axum and co. are the "ergonomic simple cases" and it's easy to morph what's provided into any flavor you personally prefer with as many `impl`s and types. Here's[0] my jam rn.

[0]: https://github.com/dman-os/template_rust_web_api/blob/main/s...


Fun fact: you just described Go's net/http package.


It's also pretty much Express.js


not really, point 2 isn't in the standard library.



that’s not what the Axum example does. The function you’re linking turns an opaque run of bytes into an http request object. What the Axum example does is turn an http request object into a value of some other type T.


That's essentially what happens in Rust, but it's a layer of crates.


The Python framework FastAPI does this too. I think it does so because it's convenient, easy, and it's one less line of boilerplate. You trade off flexibility for having a very clean and simple "happy path".


> You trade off flexibility for having a very clean and simple "happy path".

You don't trade anything though, if you want the raw information you can just ask for that.


You don't have to specify the type in the signature; you can just as easily parse the request body manually. But in the instance where the endpoint only accepts json, it's simpler to write it this way.
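
Something like this (untested sketch, assuming axum 0.6, where `Bytes` is itself an extractor and `CreateUser`/`User` are the types from the docs example):

    use axum::{body::Bytes, http::StatusCode, response::{IntoResponse, Response}, Json};

    async fn create_user(body: Bytes) -> Response {
        // deserialize the body yourself instead of asking for Json<CreateUser>
        match serde_json::from_slice::<CreateUser>(&body) {
            Ok(payload) => {
                let user = User { id: 1337, username: payload.username };
                (StatusCode::CREATED, Json(user)).into_response()
            }
            // parse failed: reject it yourself, instead of letting the extractor do it
            Err(_) => StatusCode::BAD_REQUEST.into_response(),
        }
    }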


https://news.ycombinator.com/item?id=33721070

same line of reasoning as here


You can just have `Request` as a parameter and do those edge cases yourself. Or write your own Extractor which would handle that pretty easily.

I'm not sure of your use case where clients can send any format they want and the HTTP server is supposed to know and handle any format automatically (form, json, querystring, etc), but seems more like a legacy edge case than something you would do building a server from scratch.

Something like this (completely untested), but it should be pretty straightforward to handle your use case.

    #[derive(Debug, Clone, Copy, Default)]
    #[cfg_attr(docsrs, doc(cfg(feature = "json")))]
    pub struct FormOrJson<T>(pub T);
    
    #[async_trait]
    impl<T, S, B> FromRequest<S, B> for FormOrJson<T>
    where
        T: DeserializeOwned,
        B: HttpBody + Send + 'static,
        B::Data: Send,
        B::Error: Into<BoxError>,
        S: Send + Sync,
    {
        type Rejection = InvalidFormOrJson;
    
        async fn from_request(req: Request<B>, state: &S) -> Result<Self, Self::Rejection> {
            if json_content_type(req.headers()) {
                let bytes = Bytes::from_request(req, state).await?;
                let deserializer = &mut serde_json::Deserializer::from_slice(&bytes);
    
                let value = match serde_path_to_error::deserialize(deserializer) {
                    Ok(value) => value,
                    Err(err) => {
                        let rejection = match err.inner().classify() {
                            serde_json::error::Category::Data => JsonDataError::from_err(err).into(),
                            serde_json::error::Category::Syntax | serde_json::error::Category::Eof => {
                                JsonSyntaxError::from_err(err).into()
                            }
                            serde_json::error::Category::Io => {
                                if cfg!(debug_assertions) {
                                    // we don't use `serde_json::from_reader` and instead always buffer
                                    // bodies first, so we shouldn't encounter any IO errors
                                    unreachable!()
                                } else {
                                    JsonSyntaxError::from_err(err).into()
                                }
                            }
                        };
                        return Err(rejection);
                    }
                };
    
                Ok(FormOrJson(value))
            } else if has_content_type(req.headers(), &mime::APPLICATION_WWW_FORM_URLENCODED) {
                let bytes = Bytes::from_request(req, state).await?;
                let value = serde_urlencoded::from_bytes(&bytes)
                    .map_err(FailedToDeserializeQueryString::__private_new::<(), _>)?;
    
                Ok(FormOrJson(value))
            } else {
                Err(InvalidFormOrJson.into())
            }
        }
    }


> I'm not sure of your use case where clients can send any format they want and the HTTP server is supposed to know and handle any format automatically (form, json, querystring, etc)

Any server that has clients that aren't fully under your control that you can't force-update, where the clients today and the clients yesterday encode their requests differently.

I mean, it's the entire purpose of the `Content-Type` header. The whole concept is that the server has a set of encodings that it can understand, and the client can pick between them.

This isn't a new idea. Here it is in RFC 2068, from 1997: https://www.rfc-editor.org/rfc/rfc2068#page-116

These concepts have existed for decades.

In your comment, you make an implicit assumption: the assumption is that the person that writes the server is in control of the client. When we make tools that make it easier to construct software that assumes the server operator is in control of the client but do nothing to make it easier to build software in which the server operator is not in control of the client, we are making a political choice to place power in the hands of server operators at the expense of end users. That's not a political choice that I'm comfortable with.


I'm curious, have you attempted and found difficulties writing an extractor similar to the decoder you describe?


Coincidentally, last night I started porting an actix-web project to Axum. In my very brief experience, I have found Axum (0.6.0-rc5) to be more ergonomic compared to actix-web. However, I haven't rewritten all the existing features yet so my opinion could change in a few days.


I respect the people behind Warp, but it always struck me as a design that actively worked against the grain of the language rather than a design that worked gracefully with the language. A bit too "functional" for what Rust really supports.


Really glad to see this! More eyes on axum will make it better and better.

We recently decided to move from Rocket to axum for a non-user-facing service that supports our platform. Haven't made the decision yet to move the main API, but strongly considering it.

Rocket is really nice, and its recent stall at 0.5-rc has been jump-started, but I feel that axum has much more momentum. Sergio (Benitez, of Rocket) is fantastic, but only one guy. OTOH, Axum is mostly the pet project of one person as well. If I had to pick analogies, I'd say Rocket is aiming to be more like Django and Axum more like Flask. The latter scope seems much more sustainable by a single person.


> The axum::debug_handler macro is invaluable to debug type errors (there's some with axum too), like for example, accidentally having a non-Send type slip in.

Heh, yeah. For my recent project where I explored implementing the same little app in a few different languages[0], I chose Axum for the rust version.

The whole "extractor" system was pretty magical, and when I had this exact issue (non-Send argument), the compiler error was totally useless. I did see the docs about adding this extra macro crate for error messages but it seemed like a bit of a red flag that the framework was going against the grain of the language. Still, on the whole, I did enjoy working with Axum.

[0] https://github.com/losvedir/transit-lang-cmp
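
For anyone who hasn't seen it, the macro is just an attribute you slap on the handler while debugging (minimal sketch, assuming axum 0.6 with the `macros` feature enabled); it turns the opaque trait-bound errors into something that points at the offending argument:

    // with the attribute, a non-Send argument or a bad extractor
    // produces a targeted error instead of a wall of trait-bound spam
    #[axum::debug_handler]
    async fn handler() -> &'static str {
        "ok"
    }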


> I did see the docs about adding this extra macro crate for error messages but it seemed like a bit of a red flag that the framework was going against the grain of the language.

Is it really going against the grain of the language? Or is it more that Rust still has issues with error messages on complex / deep types, and that should be reported / fixed?

Because leveraging the type system seems to be pretty in line with normal Rust ideals, and Send errors are load-bearing type errors of the concurrency system.


I created my own web framework (<https://docs.rs/under/latest/under/>) to address some of my own perceived issues with a lot of current rust web frameworks - typing issues being one of them. Personally, I don't like the idea of endpoints requiring `#[handler]` or other "magic" derives, or guards - I wanted something with an endpoint that takes a request, and returns a result with a response. I'm still working on it, though.


I always thought Actix-Web was the tried & true rust web framework. I'd vaguely heard of Warp and never heard of Axum.

What's behind this sudden explosion of them? Are they all skins over the same core libs or what?


>I always thought Actix-Web was the tried & true rust web framework.

That is mostly still true, but Axum is developed under the same organization as Tokio and integrates very well with that ecosystem, including Tower middleware. So in that sense it has a feeling of being "official" insofar as such a thing exists. The community is big enough for it to not die if the maintainer were to disappear tomorrow.

It's also a pretty nice framework and has better compile times than Actix-web. If anything, it is less an explosion of frameworks than a consolidation. Axum and Actix are the de-facto frontrunners.
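
In practice the Tower integration just means any `tower::Layer` drops straight onto a router (rough sketch, assuming axum 0.6 and tower-http's `TraceLayer`):

    use axum::{routing::get, Router};
    use tower_http::trace::TraceLayer;

    let app = Router::new()
        .route("/", get(|| async { "hello" }))
        // generic Tower middleware, not anything axum-specific
        .layer(TraceLayer::new_for_http());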


Thanks - that gives me a nice overview of what's going on. Will check out Axum; the thing I was excited about in actix-web (actors) is apparently not part of it anymore anyway.


I've gone very recently through a rewrite from Rocket to Axum and very much love it so far.

The initial motivation was the need for web socket support, which Rocket doesn't have (yet). But I love how simple it is, and also that it does not want to be the entry point of the application. (I like an http server that's a library that can be embedded at any place in the application.) Another great thing is the examples/ directory in the Axum repository.

I had to use the latest version from GitHub though to get some of the features I needed, but maybe that's not the case anymore.


Judging by these comments, everyone is writing web apps in Rust these days. While I can understand the fascination that many HNers seem to have with Rust (been wanting to give it a try myself for some time now, but work and life always get in the way), I'm still not sure web apps are really a good fit for a language that has a reputation for being high performance, but more difficult than other languages commonly used for web development. I mean, you wouldn't build a web app in C/C++, and not only because of the potential security issues?


There was a post just yesterday about this topic, for which I gave my (general) rebuttal: https://news.ycombinator.com/context?id=33714300


I previously wrote web applications in Python.

My primary reason for moving was to capture the advantages of a strongly-typed language. I suppose Go would have been another reasonable choice in this regard.


Yup, working on that one too.

I liked the API of warp a lot, it somehow reminded me of Akka http a bit.

But all the points mentioned in the blog post resonate with me, which is why I've begun to migrate to axum.rs as well :)


Not about OS/2 Warp, apparently. I was imagining an OS/2 diehard user either migrating to some new version of ArcaOS or finally finding another worthy successor to the system.


I'm afraid I'm a bit too young for OS/2 Warp, but I can write about Haiku OS in the future if that'd help?


I did the exact same thing with my imageboard, plainchant [1]. I had the same experience that you did with Warp: the routing model was extremely clever, but never came to feel intuitive or ergonomic to me.

[1] https://github.com/jgbyrne/plainchant


Noob to rust, is there a reason to use if / if let over match statements or is it just a style preference?


Using "if let" is generally preferred style when there's only a single case that needs to be handled. (And I suspect that now that "if let ... else" is available that that will become the preferred style for handling two cases.)


> And I suspect that now that "if let ... else" is available that that will become the preferred style for handling two cases.

    if let Pattern(binding) = thing { ... } else { ... }
has been available for a long time. What was recently stabilized is

    let Pattern(binding) = thing else { ... };
which is syntax sugar for

    let binding = if let Pattern(binding) = thing { binding } else { ... };
And facilitates easier unwrapping in early exit or default scenarios, without an extra level of indentation.
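
Concretely (small sketch):

    fn first_char(s: &str) -> char {
        // early exit without an extra level of indentation
        let Some(c) = s.chars().next() else {
            return '?';
        };
        c
    }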


Yes, you're right of course. That's what I get for commenting while not fully awake :)


Yeah, clippy will yell at you if you use a match with a single arm - I personally don't hate matches with a single arm, but I'd rather not fight clippy (i.e. come up with my own clippy config, enforce it on every project I maintain, etc.)


I've hit this warning a few times where I expect additional match arms to be added in the future; it always bugs me that clippy, in that case, steers you towards a change you know you'll end up undoing.


That's why they're warnings, not errors. I don't really understand the inability of some people to ignore warnings while they're working.


This is what #[allow(clippy::whatever)] is for.
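
For the single-arm case that lint is `clippy::single_match`, e.g.:

  #[allow(clippy::single_match)]
  match maybe_value {
      Some(v) => println!("{v}"),
      _ => {}
  }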


In the sense that "match is a superset of if and if let's functionality", yeah it's just a style preference.

However, while style truly is subjective, I'm not sure anyone could convince me that

  match foo {
      true => println!("hello"),
      _ => {}
  }
is the same or better than

  if foo {
      println!("hello");
  }
The RFC for "if let" has some good motivating examples as well https://rust-lang.github.io/rfcs/0160-if-let.html


What about this code from the article?

    if cx.path == "/tags" {
        return tags::serve_list(cx).await;
    } else if cx.path.starts_with("/tags/") {
        return tags::serve_single(cx).await;
    }

    if cx.path == "/settings" {
        return settings::serve(cx).await;
    }

    if cx.path == "/search" {
        return search::serve(cx).await;
    }

    if cx.path == "/login" {
        return login::serve_login(cx).await;
    }

    if cx.path == "/patreon/oauth" {
        return login::serve_patreon_oauth(cx).await;
    }

    if cx.path == "/logout" {
        return login::serve_logout(cx).await;
    }

    if cx.path == "/debug-credentials" {
        return login::serve_debug_credentials(cx).await;
    }

    if cx.path == "/comments" {
        return comments::serve(cx).await;
    }

    if cx.path == "/latest-video" {
        return latest_video::serve(cx).await;
    }

    if cx.path == "/patron-list" {
        return patron_list::serve(cx).await;
    }

    if cx.path == "/index.xml" {
        return cx
            .serve_template("index.xml", Default::default(), mime::atom())
            .await;
    }


Oh, I wrote that code in anger forever ago; it had the merit of still working when I did the last round of cleanups on my website.

I could've sworn at some point some paths had something slightly more involved (strip slashes, some starts_with or trim_prefix, etc.) but the snippet as you've copied it is certainly... not great.


(Hacker News doesn't support markdown; indent any line two spaces to make it render as code)

I would probably write that with a match, personally. I didn't say match was useless, just that if and if let pull their weight as features, in my opinion, even though you could express them as a match if you really wanted to.


I really like the Elixir Plug-style API for HTTP. trillium.rs comes very close, but it's not as popular.


I'd like a single-threaded async pull-based http library


I'm not sure what you mean by "pull-based", but you can make the whole thing single-threaded with `#[tokio::main(flavor = "current_thread")]`

See https://docs.rs/tokio/latest/tokio/attr.main.html
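
Something like this (untested sketch, assuming axum 0.6 and its re-exported hyper `Server`):

    #[tokio::main(flavor = "current_thread")]
    async fn main() {
        // the accept loop and all handlers run on this single thread
        let app = axum::Router::new()
            .route("/", axum::routing::get(|| async { "hello" }));

        axum::Server::bind(&"127.0.0.1:3000".parse().unwrap())
            .serve(app.into_make_service())
            .await
            .unwrap();
    }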


I'm using Axum now... it's nice, but it could still use much more "batteries included"

For example, I would like something like Go's Context for passing temporary per-request data through middlewares...maybe this is possible with Axum but I'm not seeing a straightforward approach

Extensions also seem to be kinda magical and not straightforward

Immature ergonomics are still a hallmark of many Rust libs


Request extensions (a feature of the `http` crate, not specifically hyper/warp/axum) are "just" a typemap, which is "just" a HashMap where keys are TypeIds rather than strings. It's slightly less footgunny than something like Go's Context, although I still dislike it because checking for the presence of something happens at runtime only, so if you're messing with middleware composition, it's still too easy to accidentally get it wrong.


Maybe tower-request-id is an example of how to implement something like that?

https://github.com/imbolc/tower-request-id


This is exactly what I am doing, but you shouldn't have to modify the request itself to pass temporary state


Eh, that's definitely a matter of taste. You shouldn't have to pass an additional parameter everywhere you want context either; there are no absolutes here, just preferences.


Why not? Given Rust's semantics it's really no different than how Go's Context seems to be used, just more efficient.


> For example, I would like something like Go's Context for passing temporary per-request data through middlewares...maybe this is possible with Axum but I'm not seeing a straightforward approach

That's probably more of a Tower(-http) thing. It's the bit which handles all the middleware; axum is layered over that.

> Extensions also seem to be kinda magical and not straightforward

Yeah, the problem is that they come directly from the http crates so everything that's built on top assumes you know what it is.

The reality is that Extensions are just a typemap, meaning it's a k:v where keys are types. So you can define your own state type, shove it in the Extension, and know that you'll be able to retrieve that and there will be no collision (because it's your state type). For a more complete introduction, see https://blog.adamchalmers.com/what-are-extensions/
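
A rough sketch of that (assuming axum 0.6's `middleware::from_fn`-style middleware and the `Extension` extractor; `RequestId` is a made-up example type):

    use axum::{extract::Extension, http::Request, middleware::Next, response::Response};

    // your own type: its TypeId is the key into the extensions typemap
    #[derive(Clone)]
    struct RequestId(u64);

    async fn attach_request_id<B>(mut req: Request<B>, next: Next<B>) -> Response {
        req.extensions_mut().insert(RequestId(42));
        next.run(req).await
    }

    // downstream, any handler can pull it back out by type
    async fn handler(Extension(id): Extension<RequestId>) -> String {
        format!("request id: {}", id.0)
    }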

It's one of those topics which is so obvious afterwards that it's easy to forget it's completely opaque and obscure beforehand.



