Hacker News
How I Start: Rust (christine.website)
184 points by fanf2 19 days ago | 83 comments



In case it helps, Render (https://render.com) has first-class support for Rust webapp hosting without needing Docker.

And here's the Hello World URL for this tutorial! https://rust-nix-helloworld.onrender.com/hostinfo

More info at https://render.com/docs/deploy-rocket-rust. I'm the founder; happy to answer questions here or in our user chat at https://render.com/chat.


Didn't know about this render.com, very nice hosting. Thanks for this.


As much as I use Rust personally and for some of my own pet projects, for serious HTTP services and introducing it into a new developer team, I'm afraid it isn't feasible for my requirements. I'm put off by the immaturity of the crates ecosystem and the costs it will bring. Most of the crates are not even 1.0, including Rocket, which also requires a nightly build; that isn't good enough for my requirements. (I only use stable Rust.)

As a language it is mature, but the crates ecosystem is of lesser quality, especially for applications like HTTP servers. I've already overcome its learning curve, but the devs who swear by other languages that pay them well may not be so forgiving.


I totally agree that Rocket requiring nightly is a deal-breaker for production. I suspect the author picked Rocket because it feels more beginner friendly than alternatives.

Actix-web is in 2.0. The original author stepped away recently, but it has quite the community behind it.

If readers are curious why Rocket requires nightly, it has to do with the macros (that resemble Java annotations/python decorators). Specifically Rust wants all its macros to be hygienic - they shouldn't be able to (for example) create variables that clobber user-defined variables. Rust has multiple types of macros, but the type used by Rocket, procedural macros, don't have this hygienic property yet. For context, procedural macros operate on TokenStreams, whereas "normal macros" (declarative macros) operate on AST nodes. See the issue https://github.com/SergioBenitez/Rocket/issues/19


> Actix-web is in 2.0.

What's the situation with use of unsafe now?

Although toxicity that was shown by the community was obviously uncalled for, I was very put off by the original author's stance on the issue and am wary of using it in production.


The author chose a new maintainer who is fixing the unsafe issues. It seems to be much better now, as well as working on stable instead of nightly, and the maintainer is much less "toxic" than the original author.


The original author has handed it off to some of the other maintainers.


Don't know the current situation, but here's one from a couple months ago, from a maintainer (not the original author):

> Performance remains the key reason for the unsafe blocks. As per the safety dance charter, changing std Rust so as to eliminate the need for unsafe seems like the best course of action, but this work is beyond my comprehension.

https://www.reddit.com/r/rust/comments/efk7n7/actixweb_v20_r...


All issues dealing with undefined behavior (UB), segfaults and the like are currently closed. Actix-Web makes use of unsafe (as high-performance Rust usually does) so there could be some UB hiding.


"Closed" as in ignored, or solved with actions taken? Asking legitimately, because the stuff I'd read previously didn't fill me with confidence.


With actions taken, in all 5 issues. See all issues here https://github.com/actix/actix-web/issues?q=label%3Aunsafe+


Honest question: What is the point of using Rust if you have to use unsafe constructs to get performance?


unsafe is an escape hatch. You don't have to use unsafe to get performance, in the general case. Often, safe code is as fast or faster than unsafe code. But sometimes, it is needed. Even then, the point of using Rust is containment: you can write a safe abstraction over that unsafe code, and minimize the hunt for what you did wrong, if you've made a mistake and caused a problem.


It might be useful to think of unsafe portions the same way you would consider calling out to a C library from some managed language.

If you have one or two modules you're using that are calling out to C libs, and your managed language is segfaulting, it's probably somewhere to do with those libraries, or how you're utilizing them (maybe a bad API or abstraction). Rust just lets you take that reasoning further in the case of unsafe: you can see the unsafe code in the same codebase, it's basically the same language plus a couple extra features, and it can be kept much smaller (at least in the case where the functionality is implemented in Rust).

Unsafe isn't meant to imply "don't do this", it's meant to imply you're giving up the safety generally afforded. It's like driving a car without seatbelts and airbags down the freeway, not like driving a car at 100Mph on a windy cliff-side road.

1: Or to the same level, if using unsafe to abstract over a C lib also


Say you are having a pizza party and you are cooking 12 pizzas in your pizza oven. The fire alarm is going off, but you know there is no fire, and should there be a fire, it would be indistinguishable from the smoke pouring out of the oven. Let's say it is a fancy fire alarm which turns the oven off.

you could:

- Convince the fire alarm about the difference between pizza smoke and fire smoke (somehow?).

- Call the fire department to come put out the fire so you can finish cooking your pizza.

- Disable it until you have finished cooking the pizzas.

- Have a pizzaless party.

Fire alarms have little to do with performance, and even though they have false positives we tolerate them due to the impact a false negative would have.


What are your alternatives? Managed memory language or C/C++?

Rust has much more modern and ergonomic syntax, features and package management than C, so even inside unsafe blocks it's simply easier to write.


Managed memory languages like Swift, F#, C#, D, Nim, offer the productivity of GC, value types, unsafe if really needed and AOT compiled toolchains.


"productivity of GC" is often offset by much harder management of resources other than memory due to lack of deterministic destruction.

Also, benchmarks show Swift and C# are still about 2x-5x slower than Rust and C++, and even that only if you're very careful and write non-idiomatic code (e.g. by avoiding heap allocations). When you're not, and you use OOP abstractions heavily, 10x worse is a much more likely outcome.


That is only true when using a language that doesn't offer mechanisms for deterministic destruction when required.

Winning the benchmarks game is meaningless, other than as an "I feel good" kind of thing.

What matters is having the language features that allow you to optimize the 1% of the code base where it actually impacts the acceptance criteria of project delivery.

I have been replacing C++ systems with Java and .NET based solutions since 2006, it hardly matters that C++ wins in the micro-benchmarks.

And if really needed, only that 1% gets packed into a C++ library with managed language bindings, turning C++ into our "unsafe" module.


My point is these languages you mentioned don't offer good tools to optimize that 1%, and they make the other 99% an order of magnitude slower and more resource-hungry, to the point where it actually matters and annoys users. Also, neither C# nor Java offers deterministic destruction. Try-with-resources is a joke, because it is limited to a lexical scope. Java and C# are not true alternatives to Rust or C++. They are inferior on both the performance side and the abstraction/productivity side, and severely inferior if you want both in the same fragment of code.

In many applications there is also no single bottleneck and the split is not 1/99, nor even 20/80. After you optimize the most obvious bottlenecks you end up with a flat profile, where majority of time is taken by cache misses scattered across almost the whole codebase.

It might not matter for some apps where performance is less critical, but in this case you probably don't want to use Java or C# when there exist languages offering much better abstractions (and surprisingly - Rust and C++ can be higher-level than Java or C#, leading to better abstractions, shorter code and higher productivity). Don't conflate EASY with PRODUCTIVE. Easy languages are not always more productive (if it was true everybody would be coding in Scratch).


C++ can indeed be higher level than Java, except that you are forgetting about the money spent fixing C-related bugs, developer salaries, lack of tooling to plug into a cluster and just monitor it like JFR, VisualVM, ETW, and lack of interoperability between libraries due to conflicting requirements (RTTI, exceptions, STL avoidance, ...)

As for Rust, it has yet to offer something that matches Orleans, Akka, Kafka, ....

Then there is the whole IDE tooling experience, libraries like the one here that require nightly toolchains, and the lack of support for shipping binary libraries, which cargo might eventually get one day, but isn't here today.

Java and C# might be inferior products from your point of view, but as mentioned, what I get are rewrites from C++ into Java and .NET languages, not the other way around.

And I never sensed a lack of productivity, quite the contrary, especially since I usually don't have to think about which C++ flavor of the month I am allowed to use, or have political discussions about enforcing the use of static analyzers in the CI/CD pipeline (assuming there is even one to start with).


But now you're talking about tooling, not languages. Rust will eventually get there.

IDE support is already very good and better than for many popular languages (e.g. dynamic ones).

Performance profiling is also better than in Java. I'd take perf over visualvm any time.

Java had much worse tooling when it was at the age of Rust today.


Languages are not used in isolation; the days when grammar and semantics alone were enough to warrant a language change are gone.

IDE support can be considered very good if the baseline is the 90's Borland and Microsoft IDE experience, not what modern Java and .NET are capable of, which is finally starting to approach the Smalltalk and Common Lisp development experience of yore.

Rust IDEs still can't offer completion that works all the time, let alone all the other IDE features.

Perf is no match for VisualVM, as it is a Linux-only tool, and its usability is found lacking. It is so good that Google was forced to create their own graphical tooling after years of complaints from Android developers forced to use it.

Yeah, except not every business is willing to wait 25 years for Rust to achieve parity with today's Java ecosystem.

Note that C++ is 40 something years old and still there are domains that it is fighting against C, which require a generational change before being open to try out anything else.

Rust has a big selling story for OS low-level systems libraries, the niche C++ is heading to, with the caveat that C++ will never be as secure as Rust; that is where the language should focus.

To be honest, had Go supported generics from day one, or Nim or D gotten some corporate backing, I would never have considered Rust for hobby projects beyond the language-geek thing of trying out new languages every year.


> Yeah, except not every business is willing to wait 25 years for Rust to achieve parity with today's Java ecosystem.

Unfounded speculation. You've made this number up.

> Rust IDEs still can't offer completion that works all the time, let alone all the other IDE features

I didn't see any problems in IntelliJ. If there are cases where autocomplete doesn't work, they are rare edge cases and don't affect productivity. VSCode was a bit laggy, but that's probably a VSCode problem, not Rust's. This is what happens if you base a desktop tool on a browser running JS.

> Rust has a big selling story for OS low-level systems libraries

It is good at that, but this is not the primary reason to use Rust. Rust's selling point is explicit lifetimes, which make it virtually impossible to create the pointer hell I've found in every commercial Java codebase I worked on. It is the same level of productivity enhancement as introducing static types over dynamic ones. Explicit lifetimes make Rust a bit harder to write, but code is read 99% of the time and written only 1% of the time.


Not much.


Note: there's a PR in for stabilisation for the requisite macro feature, and it looks like it's going to be merged soon.


I did some digging for those interested.

Compile on Stable (Rocket): https://github.com/SergioBenitez/Rocket/issues/19

Stabilize Proc Macro Hygiene (Rust): https://github.com/rust-lang/rust/issues/54727

Partially Stabilize Proc Macro Hygiene (Rust): https://github.com/rust-lang/rust/pull/68717


One thing I've always wondered about this "not even 1.0" mentality is, why is a version number so significant? Sure it's an easy proxy, but it's not perfect. That number could be arbitrarily changed tomorrow and the maturity of the project hasn't necessarily changed.


Most versions don't mean much on their face -- every project has a different definition and way of versioning (usually undefined and fairly arbitrary, or based on client feature requirement and need-to-deploy).

But 1.0 is special -- pretty much everyone agrees on its meaning: it's ready for others to use it, and I'm willing to stand by it.

Sub-1.0 is also special, and generally agreed upon -- no one should consider this production-ready, and I'm not ready to defend it.


To me sub-1.0 means you can't rely on even point releases being non-breaking changes, more than the suitability for production use... though that difference sees some overlap.

Using node as much as I have, I've gotten used to updating dependencies fairly often and spending maybe a day a month on keeping things updated and smooth. I also tend to dig into nested dependencies and my bundle sizes (more on the front end).


I mean, they could be, theoretically, but generally authors try to make the number a good reflection of the project's status.


A question is what you would consider a more mature environment. E.g. C++ - a decades-old programming language - does not contain widely accepted solutions for building web services. There are some libraries, but I mostly have not seen them being used outside of the companies that developed them. I think Rust already has a better stance there.

Probably Go and node.js could be counted as good solutions, because their standard libraries cover the use-case very well. Or Java, thanks to a mature ecosystem around web services - even though the mainstream solutions (Servlet) focus on blocking IO.


I like node for really fast productivity, at the cost of a little potential stability... not the absolute best performance, but usually good enough.

Up from there would imho be Go or C#, mostly because they're faster than node, but still very accessible, and really good support around them for web services. Certain workloads are still not great. I think Java is probably okay, I just personally don't care for it, apparently it's gotten better than when I last used it regularly over a decade ago.

Rust is when you want balls out absolute performance in a language with newer, higher level features and constructs. The unsafe escape hatch is really there if/when you need it, but should not be the norm. The final executable size with Rust is also impressive and will work very well in some constrained environments, or when you need better execution performance and more control than the others provide (outside of C/C++, D and a couple others).

I've only built a couple smaller web apps with rust so far, but the output size is impressive. Being able to build to a bare container or with busybox if some in-container scripting is needed has been a great experience so far.


Servlets have supported asynchronous IO for quite a while now.

Then there is the .NET stack as well.

For distributed computing I hardly see a reason to pick Rust over Java or .NET eco-system.

If we would be speaking about something like real time audio engine, then Rust would probably be the tool I would pick.


Yep, and async support is causing a shakeout of the existing ecosystem too - could take years before the dust settles.

It'd be so nice if a major vendor picked up the language and built an ecosystem on it :)


For distributed computing I also don't see a point in using Rust versus the productivity of tracing-GC, strongly typed languages with AOT/JIT toolchains.

Most of those languages also have ways to stack-allocate and to allocate off the GC heap; it is a matter of learning how to use those features, when and if it actually matters.

99% of the time it won't matter.

Rust is a great replacement for the kind of low-level stuff like the C++ layer of Android, the UWP composition engine, game engine graphics rendering and similar.


I don't understand either. I mean, it is fun and all, but for a serious project, why bother?


I love the way this tutorial was written. I can't explain exactly what, but I found the entire experience to be very didactic. Congrats, and thank you! Wish we had more of this.


Author of the post here, feel free to ask me anything!


Why did you pick Rocket? For example, Actix-Web is a viable alternative that works on a stable compiler.


Great article, thank you! A suggestion: The code examples were somewhat hard to read. Perhaps you could add syntax highlighting (e.g. via prism or highlight.js).


It was really helpful for doing some first steps with Rust.

One suggestion: add a warning for macOS users that psutil::host::uptime is not available on macOS. I had to figure that out myself and it was quite frustrating.

I figure you didn't know that, and it's not like the article claims otherwise, but it would be useful to add I think.


Thanks, I had the same issue - and only figured it out because this comment came up in Google results for the error message.

I didn't know that! I've recently switched over to NixOS as my daily driver. I'll see what I can do to fix that for the future.


Why no `cargo run`?


I'm used to doing stuff with WebAssembly, which is beyond the reach of `cargo run` most of the time.


Actually, even for wasm you can make `cargo run` work. You just need to write a script for it and use it as the target runner. This is especially great for WASI, where you can just specify wasmtime as the runner and then cargo test / bench / run all work perfectly fine.
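For anyone wanting to try this, a sketch of the cargo configuration involved (the exact target triple and runner binary depend on your setup):

```toml
# .cargo/config.toml — a sketch; adjust the target triple to your toolchain
[target.wasm32-wasi]
runner = "wasmtime"
```

With that in place, `cargo run --target wasm32-wasi` (and `cargo test`) hand the produced `.wasm` to wasmtime instead of trying to execute it natively.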


I was actually just searching for this the other day but couldn’t quite figure it out. Do you have any guides on doing just this? Thanks!


Do you write code in that color scheme? (Dark green everything on a black background)


No, I'm not sure why the gruvbox theme I use has that for code examples. Here's what my coding view looks like: https://i.imgur.com/oWFlmgB.png


> routes!(...)

> routes_with_openapi!(...)

Macros, macros everywhere. I understand their use in basic functions like println!(), but I'm worried too much macros in client code will lead to problems (unreadable, clashes that will need custom names). If there are no problems with macros, why not making every function a macro then?


Rocket takes the position (or at least, this is what I understand, maybe it's changed) that developer ergonomics are the most important thing. Macros can significantly clean up boilerplate, and so Rocket tends to use a lot of them.

There are other options if you disagree, but this approach has gained it quite a following.


Macros are extremely important to make rust productive. It sometimes feels as if you were writing in a higher level language.

One of the best examples is the serde crate. What a beautiful library!


Macros are powerful, but they should be used sparingly because they often have outsize effects on compilation times and can degrade error message quality. I personally give myself a "macro budget", where macro-based solutions are prioritized based on their benefit to a particular project.


With great power comes great responsibility.

Based on my experience with macros, across several languages that support them, macros are beautiful until you need to debug them under time pressure, on a critical customer issue, in a code base you have never seen before.


Macros let you do work at compile time, like inspecting the types of their parameters. A macro can do things like automatically generate code that converts a function's return type into a JSON string.

If you need that, use a macro, but otherwise, you can and probably should use a function.


Love seeing more Rust. It's good stuff.

    extern crate rocket;
I thought "extern crate XYZ" was old-style? "use XYZ"?


It's still fairly common to use in conjunction with

    #[macro_use]
for when you want to easily import all of the macros from a crate across your entire project. But yes, you could do

    use rocket::{get, routes};
instead.


One thing I wish was covered in these is how to go past 'hello world', especially things like error responses and serialization models. It took me a while before I got to the point where I think I can be productive in both querying the data from the front-end and getting back the shape that I wanted. Couple of lessons learned:

- Start with separating your API models and DB models out early, you'll be thankful later. You can also help yourself out by implementing From/Into for your database models to your API models. Obvious lesson from CQRS, but since Rust doesn't easily allow for ad-hoc structs this is a must, especially since there are a lot of examples out there that put a Serialize/Deserialize onto the database object.

- If you have deeply nested objects, look into JSON:API or Juniper. There are a couple of crates that implement the jsonapi spec and I'm hacking on one of them to make it a bit more usable for my personal pet project, but Juniper will make your life easier especially if you're starting with a React frontend.

- Understand that there's still a tracking issue for a few items such as multipart data, although there's a third party crate that integrates the multipart crate into rocket. However if you need an upload you could probably get away with just returning a signed S3 endpoint to the user, which is what I ended up doing

- If integrating with redis or some other database that's not application-critical, there is no (as far as I'm aware) graceful degradation that comes out of the box with rocket_contrib, unless there was some config value I was missing. This is usually fine, since you can easily declare an ad-hoc fairing that connects to redis and falls back to an Option::None in the case where it couldn't connect.

And finally, if you want to support different error status codes yourself based off of mappings from internal errors (such as 409 conflict when inserting the same data), your best bet is an enum that's wrapped by a responder derive like so:

    #[derive(Responder, Debug)]
    pub enum UserApiError {
        #[response(status = 500, content_type = "json")]
        Unavailable(Json<UserApiErrorMessage>),
    }

    impl UserApiError {
        fn wrap_body(message: &str) -> Json<UserApiErrorMessage> {
            Json(UserApiErrorMessage {
                status: "error".to_string(),
                message: message.to_string(),
            })
        }

        pub fn unavailable(message: &str) -> Self {
            Self::Unavailable(Self::wrap_body(message))
        }
    }
Which can be called like:

    Err(UserApiError::unavailable("The database is gone!"))
Most of my other lessons learned come from diesel; both are an absolute pleasure to work with and have given me incredible confidence while coding.


This is fantastic advice, and provides a very nice UX when coding against it.

Just wondering, are you trying to protect some "inside" aspect (or allow graceful degradation) by having 500 as "Unavailable" rather than "Internal Server Error"? Also, I take it this code returns a body of { status: ..., message: ... } with a status of HTTP 500? So the enum variant "Unavailable" is just for programming syntax and unwraps around the Json<T> inner body when it's called by Rocket?

Finally, I see you are making your diesel calls inside the controller. Is there a reason to prefer this over creating methods that impl your structs that can do the same thing? Such as:

    impl User {
        pub fn read_all(con: &PgConnection) -> QueryResult<Vec<Self>> {
            users::table.get_results::<Self>(con)
        }
    }
rather than (in the controller)

    User::read_all().get_results::<User>(&*connection).map_err(|_| Status::ServiceUnavailable)?
I'm just getting in to Rust, and have been dabbling with Rocket and Diesel a bit lately. So this is a pretty interesting thread and like seeing how and why people are using it :)


So the answer to both of those things is that I basically hammered out a bit of code that looks sort of like what I have, but isn't exactly it. It should have been a 503 error instead.

In reality, the main thought is that the map_err in the controller should take some error and match against it for a known set of Error enums to translate to an enum that we want to get back out to the user.

> Also, I take it this code returns a body of { status: ..., message: ... } with a status of HTTP 500? So the enum variant "Unavailable" is just for programming syntax and unwraps around the Json<T> inner body when it's called by Rocket?

Yep! Exactly. It's a bit wonky but I thought it would be helpful to show off, especially in a world where JSON bodies with a shape of { status: "error" ... } are so common. In reality you can shove whatever you want in there and use serde to flatten the result. Off the top of my head, you could in theory do something like

    pub struct UserApiErrorMessage {
        status: String,
        message: String,
        #[serde(flatten)]
        _meta: HashMap<...>
    }
Where _meta holds some additional information you might want to dump to the user. In practice, I'm using this error enum approach to construct JSONAPI responses with appropriate status codes and the appropriate structure.

> Finally, I see you are making your diesel calls inside the controller.

So what I've found is that generally you actually don't want to embed your diesel get_result(s) calls inside your impls for your struct, since you're loading the entire set up into memory at once and you lose the ability to cut down. I've leveraged the `into_boxed` method of the query builder pretty heavily to allow for building up queries on the fly, which allows me to abstract the common bits into the impl. Code shows better than words so here we go:

    struct User { .. }

    type WithUsername<'a> = Eq<users::username, String>;

    impl User {
        pub fn read_all<'a>() -> BoxedQuery<'a, Pg> {
            users::table.order(users::created_at).into_boxed()
        }
 
        fn with_username(username: &String) -> WithUsername {
            users::username.eq(username)
        }
    }


    // In some method somewhere. Note: I don't know that this actually works out of the box because I didn't compile it
    User::read_all().filter(User::with_username(&username)).paginate(1).per_page(25).get_results::<User>()?
So I've cooked up a sort-of ORM for this, but I've abstracted away that underlying schema.rs file that's a huge part of diesel, since I think (but haven't proven) that leaking that file out into the rest of the codebase makes things tougher in the long run.

As to your point about the controller, I'd advise against it but hacked something together for the comment. In my project, I actually call out to simple services (think shitty service oriented architecture) that query the database and return a Result. What's interesting (or not) here is that my services actually call the `Into::into` of the database model, so my controller never even sees that they exist. Again, some code:

    // Actually in some other file
    mod service {
        pub mod users {
            pub fn get_user_page(conn: &PgConnection) -> Result<Vec<UserApiResponse>> {
                let users = User::read_all().paginate(1).per_page(25).get_results::<User>(conn)?;
                Ok(users.into_iter().map(Into::into).collect())
            }
            }
        }
    }

    #[get("/users")]
    fn index(connection: &PgConnection) -> Result<Json<Vec<UserApiResponse>>> {
        service::users::get_user_page(&*connection).map(Json).map_err(...)
    }
But now we come to what I think is the most pertinent question: why does this exist? Testing. Spinning up a DB in CI is quite easy but since rust runs all testing in parallel (yay fearless concurrency!), we can get into trouble when we delete some records from the database but we haven't set up our tests to handle that. I ran into some really stupid problems I caused for myself in assuming that somehow it would be like rspec. If you can basically abstract away the database access part into a dumb hashmap, you get a store per test that won't experience much contention (at least that's the hope). I'm still working on that last bit so my thoughts on it are far and away from complete.

EDIT: Final note on diesel: there is no into_boxed for inserts and into_boxed for updates is dark magic that I can never get working correctly, so I'd avoid being too clever with these and focus on Select/Delete instead


Not offtopic, but not quite the meat of your post: I would love to see a good solution for a drop-in JSON-API server library. Dunno how close what you're building is to that, but it would be very very very cool!


Something in the vein of juniper but for JSON:API? That would be incredibly useful! I'm not at that stage yet, mainly hacking on https://github.com/zacharygolba/json-api-rs to bring it up to rust 2018 and add some more niceties as well as bringing the rocket integration up to the async branch.

What I'm working with right now is more of a pattern. It's a bit rough still and allows for too much recursion and N + 1 queries galore, but since my usual queries from my front-end are for single items it's been pretty nice.

Maybe one day I'll be more up to snuff on macros to expand upon that DSL. I'm already starting to see some rough patches in my very tiny pattern with relationships


> Start with separating your API models and DB models out early, you'll be thankful later. You can also help yourself out by implementing From/Into for your database models to your API models. Obvious lesson from CQRS but since rust doesn't allow easily for ad-hoc structs this is a must,

Can you explain why? This advice doesn't make intuitive sense for me.


Not the parent, so I can't speak for them, but my experience is that it's pretty likely that you'll eventually run into situations where you really want to store something different in the DB than what you send out from your API.

For example, you may want to keep some fields internal and not expose them to users, or you may want to normalize your DB schema so some fields are stored in a linked table, or maybe you had something as timestamp+duration, and need to keep the same external API for compatibility, but also want to refactor it internally into two timestamps.


Yep! Serde allows us to cheat a bit since you can call for it to skip certain fields (such as password hashes), but my preference is for the field to not exist at all in your serialization structs. Less of a chance to forget that #[serde(skip)] flag


Sure! Let's start with two baseline diesel structs:

    #[derive(Serialize, Deserialize, Identifiable, Queryable)]
    #[serde(rename_all = "camelCase")]
    struct User {
        pub id: Uuid,
        pub username: String,
        pub created_at: DateTime<Utc>,
        pub updated_at: DateTime<Utc>,
    }


    #[derive(Serialize, Deserialize, Identifiable, Queryable, Associations)]
    #[belongs_to(User)]
    #[serde(rename_all = "camelCase")]
    struct Business {
        pub id: Uuid,
        pub name: String,
        pub user_id: Uuid,
    }
And their respective endpoints:

    #[get("/users")]
    fn user_index(connection: &PgConnection) -> Result<Json<Vec<User>>, Status> {
        // Going to pretend like we have some helper methods to help us out here
        let users = User::read_all().get_results::<User>(&*connection).map_err(|_| Status::ServiceUnavailable)?;
        Ok(Json(users))
    }

    #[get("/companies")]
    fn company_index(connection: &PgConnection) -> Result<Json<Vec<Company>>, Status> {
        // Going to pretend like we have some helper methods to help us out here
        let companies = Company::read_all().get_results::<Company>(&*connection).map_err(|_| Status::ServiceUnavailable)?;
        Ok(Json(companies))
    }
This is all well and good until you add, say, a UserPreferences model, which may or may not need its own route. Unlike in other languages, you can't easily tack extra attributes onto structs. So if you made the choice to embed your UserPreferences into your User API response, you'd have to either craft a JSON object with the json! macro or alter all of your models anyway to introduce the new structure. Now your response structure would be:

    #[derive(Serialize, Deserialize)]
    struct UserApiResponse {
        id: Uuid,
        username: String,

        preferences: Option<UserPreferencesApiResponse>,
        company_id: Option<Uuid>
    }

    impl From<User> for UserApiResponse {
        fn from(user: User) -> Self {
            Self {
                id: user.id,
                username: user.username,
                preferences: None,
                company_id: None,
            }
        }
    }

    impl UserApiResponse {
        pub fn with_preferences<P: Into<UserPreferencesApiResponse>>(&mut self, preferences: P) {
            self.preferences = Some(preferences.into());
        }

        pub fn with_company<C: Into<CompanyApiResponse>>(&mut self, company: C) {
            self.company_id = Some(company.into().id);
        }
    }

    #[get("/users")]
    fn user_index(connection: &PgConnection) -> Result<Json<Vec<UserApiResponse>>, Status> {
        let preferences = Preferences::read_all().get_results(&*connection).map_err(|_| Status::ServiceUnavailable)?;

        let users = User::read_all().get_results::<User>(&*connection).map_err(|_| Status::ServiceUnavailable)?;
        Ok(Json(users.into_iter().map(|user| {
            let mut user: UserApiResponse = user.into();
            user.with_preferences(preferences.first().unwrap().clone());
            user
        }).collect()))
    }
The code doesn't work out of the box but I hope that it conveys the idea that I'm trying to get across. This is sort of the same idea as Marshmallow in python or ActiveModelSerializers for Rails, but the problem is more pointed in rust because at least in python/ruby you can just shove on values as you need them (debatable if this is a good thing). The other thing that we've gained is consistency. Any change to an underlying serialization model is automatically reflected in any endpoint that may utilize it (also debatable if this is a good thing).

To really drive the point home, consider what would need to happen if we serialized out the database models directly from a REST standpoint. The steps would be:

    1. Fetch the user
    2. Fetch the preferences by filtering on user id (requires a query parameter)
    3. Fetch the company by filtering on user id (requires a query parameter)
In this new approach we only need a single request, and if we were using GraphQL/JSON:API we could cut things down even more.

If we stuck with our base schema we'd be locked into either always modifying our database schema models, which IMO ties too much to the data access layer, or having separate fetches per request, which isn't ideal if you're aiming for a 250ms response time per request. In this world, we can go a step further without ever altering our DB models, which lets us reason about them in a more dumb/CRUD way.


This is awesome. Is there a resource for someone just beginning with Rust (but with many years on Flask and Rails)?


The best way to learn Rust depends on your learning style; the Book (https://doc.rust-lang.org/book/index.html) is fantastic even as a reference. To learn Rocket I'd start with the official tutorial here: https://rocket.rs/v0.4/guide/getting-started/#getting-starte...

It might sound canned, but the rust community generally has fantastic documentation and the few edge cases you'll hit are generally well-known in the github issues with well-known causes. My advice would be that if you feel like the compiler is giving you too much grief, take a break for a day or two and then come back to it, you might find that what was giving you trouble isn't so bad anymore. The compiler is very helpful with error messages but because of the litany of information that it gives you it can become a bit of a brain overload, especially if you hit a particularly nasty diesel type error (compile time SQL is no joke!)

It's also good to know in advance that the rust standard lib is great, but fairly bare bones if you want things like JSON support. Check out https://www.arewewebyet.org/ for the tools that can help you out!

EDIT: The only other thing that I just thought of is that authz/authn doesn't have a good story in rust just yet, so I'd rely on an external service of some kind to handle that for you unless you want to do basic signed session cookies.


thanks for this - but i already know a bit of rust.

That's why your previous post was so awesome. It was a rough path to get to a production-ready API/web frontend.

For example, i would tell an Android beginner - "use coroutines. use Retrofit. dont use MVVM, but use ViewModels. Dagger is a big pain, but Koin can break in production", etc.


Sorry, I misunderstood! As far as I'm aware there's no curated list of best practices just yet, although that would be awesome! Most of this comes from past experience with things like Flask/marshmallow and a lot of trial and error. Maybe one day when I've shaken out all of the issues at hand and I've figured out a good way of handling various scenarios I'll write something up about it


> Adding the --vcs git flag also has cargo create a gitignore file so that the target folder isn’t tracked by git.

Doesn’t cargo make a git repository by default, anyways? Why doesn’t it add a sensible gitignore in the process?


Yeah but I'm guessing it's there in case someone has disabled it globally.


Hmm...I thought it did add the gitignore by default


It does.


Why "cargo init" instead of "cargo new"?

Also, wondering when Rocket will be usable without nightly. That makes me concerned that it is not stable enough.


I've used it for 6 months on the async branch and the master branch, and master was incredibly stable. Async is less so because it's a large-scale effort to try and convert over the entire API with minimal breakage (apart from fairings/middlewares). You can pin to a specific known-good nightly if you're needing extra sanity, although CI makes this hard with the nightly docker images. There's a tracking issue for proc_macro_hygiene here: https://github.com/rust-lang/rust/issues/54727

Looks like the last feature is almost stable!


I use `cargo init` because I'm used to creating project folders myself.


Rocket uses three nightly features:

* proc_macro_hygiene - this feature lets you call procedural macros in more places than you can today. https://github.com/rust-lang/rust/issues/54727#issuecomment-... might be a bit too in-the-weeds, but the TL;DR is that there seems to be a path forward for stabilizing what Rocket needs in a near-ish timeframe.

* proc_macro_diagnostics and proc_macro_span: these purely give Rocket better error messages in some situations. With the first thing stabilized, this could be dropped to get things compiling fully on stable, though diagnostics would regress from nightly.

In general, these last bits of things Rocket needs aren't changing a whole lot.


Async and stable support are both coming soon. I'd estimate within 3-6 months.


A few comments:

* missing `use rocket_contrib::json::Json`

* `unwrap_or` is better than `or` followed by `unwrap`

* the psutil crate already has hostname info

* the psutil crate doesn't provide an uptime function if you're on a unix system.

* once you set up the hostinfo route, your example says to curl the index

* your docker link is broken
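The `unwrap_or` point above is worth a tiny illustration — both forms produce the same value, but `unwrap_or` says it in one step and leaves no `unwrap` to second-guess in review (the `hostname` variable here is just a stand-in):

```rust
// Why `unwrap_or` beats `or(...)` followed by `unwrap()`.
fn main() {
    let hostname: Option<String> = None; // pretend the lookup failed

    // Roundabout: construct a Some just to immediately unwrap it.
    let verbose = hostname.clone().or(Some("unknown".to_string())).unwrap();

    // Direct: unwrap_or supplies the fallback in one step.
    let concise = hostname.unwrap_or("unknown".to_string());

    assert_eq!(verbose, concise);
    println!("{}", concise);
}
```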


> Rocket has support for unit testing built in

Do I really want my HTTP service framework to be my unit testing framework too? Could these not be separated?


I believe the author meant that Rocket provides helpers to make unit testing easy. The tests are written using the ordinary "#[test]" Rust annotation and run by the standard Rust test runner.
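For anyone curious what that looks like, here's a rough sketch against Rocket 0.4's local test client. It assumes the tutorial's `hostinfo` route exists and that rocket's macros are imported via `#[macro_use]`; it needs the rocket crate, so it's not standalone:

```rust
// Sketch only: exercises a hypothetical `hostinfo` route in-process,
// with no real network socket involved.
#[cfg(test)]
mod tests {
    use rocket::local::Client;
    use rocket::http::Status;

    #[test] // plain #[test] attribute; `cargo test` runs it as usual
    fn hostinfo_responds() {
        let rocket = rocket::ignite().mount("/", routes![super::hostinfo]);
        let client = Client::new(rocket).expect("valid rocket instance");
        let response = client.get("/hostinfo").dispatch();
        assert_eq!(response.status(), Status::Ok);
    }
}
```

So Rocket isn't acting as the test framework itself — it just hands you a client for dispatching requests inside an ordinary Rust test.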



