My ideal Rust workflow (fasterthanli.me)
127 points by ingve on Oct 27, 2021 | hide | past | favorite | 51 comments



This seems to be about "deploy-flow" or something, rather than the actual workflow, which is currently kind of horrible in Rust thanks to slow compile times and the lack of a REPL.

I've mostly written Lisp in my career, with some dips into PHP, JavaScript, and various other languages when needed, but mostly interactive Lisp environments so far (and lately, mostly Clojure).

How do people develop in Rust? I'm trying to learn it, but it's hard to jump into code-bases and understand the code as I cannot run snippets. I end up doing lots of copying and pasting into new "main" files just to run small snippets of code and see what they do.

Another way is to isolate parts into smaller test cases, but then I have the same code in two places, one in the actual program and one in the test case. And I still can't just select code to evaluate when I need to dig deeper into the statements.

The difficulty of evaluating snippets of code, together with the slow compile times, makes it really hard (for someone used to Lisps) to learn and understand Rust.

How does your workflow (when writing/reading code) actually look day-to-day? If you're writing a web server, are you really stopping the server, compiling again, and running the full server every time you change its logic? That seems absolutely bananas, and I'm sure I'm missing something vital for my own environment.


> How do people develop in Rust? I'm trying to learn it, but it's hard to jump into code-bases and understand the code as I cannot run snippets.

I might be able to help answer this! I've spent over 10 years of my career writing production code in Lisp or Scheme, and about 5 years now writing occasional production code in Rust. So maybe I can explain how the two workflows differ.

In Lisp, it's super-easy to redefine a function or test some code. You can constantly test small things as you work. And you can easily open a listener on errors and inspect the current state. It's genuinely great.

In Rust, you rely much more heavily on types and tests. You begin by nailing down a nice, clean set of data structures that represent your problem. This often involves heavy use of "enum" to represent various possible cases.

Once you know what your data structures look like, you start writing code. If you're using "rust-analyzer", you'll see errors marked as you type (and things will autocomplete). If you want to verify that something works, you create a function marked "#[test]", and fill it with exactly the same code you'd type into a listener. Maybe you run "cargo watch -x test" in the background to re-run unit tests on save.

Then, maybe 2 hours later, you'll actually run your program. Between rust-analyzer and the unit tests, everything will probably work on the first or second try. If not, you write more "#[test]" functions to narrow down the problem. If that still fails, you can start throwing in "trace!", or fire up a C debugger.
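A minimal sketch of that `#[test]`-as-listener loop; the `Shape` enum and `area` function here are invented for illustration, not taken from the article:

```rust
// A small data structure nailed down first, then a #[test] containing
// exactly what you might otherwise type into a listener.

enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

#[test]
fn rect_area() {
    assert_eq!(area(&Shape::Rect { w: 3.0, h: 4.0 }), 12.0);
}

fn main() {
    // main is only here so the sketch compiles standalone; normally
    // you'd just leave `cargo watch -x test` running in the background.
    println!("{}", area(&Shape::Rect { w: 3.0, h: 4.0 }));
}
```

With `cargo watch -x test` running, saving the file re-runs the test automatically.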

This workflow is really common in functional languages. GHC Haskell has a lovely listener, for example, but I rarely use it to actually run code. Mostly I use it to look up types. The difference is that in strongly-typed languages, especially functional ones, types get you very close to finished code. And some unit tests or QuickCheck declarations take you almost the rest of the way. You don't need to run much code, because the default assumption is that once code compiles, it generally works. And tests are basically just a written record of what you'd type in a listener.

For understanding code, the trick is to look at the data structures and the type signatures on the functions. That will tell you most of what you want to know in Rust, and even more in Haskell.

So that's why I don't particularly miss a listener when working in Rust. Does this answer your question?


Does “listener” mean the same thing as REPL here? I’ve never heard that term in this context before.


One might think of a Listener as a particular user interface for a read-eval-print loop. The REPL reads s-expressions, evaluates them, and prints the resulting s-expression(s). The Listener might be a window (or a terminal session) which allows typing, editing, commands, completion, help, etc., and which runs a REPL.


Yes. I think MCL (aka Macintosh Common Lisp, an absolutely delightful environment from the 90s) and Allegro Common Lisp (the best Windows Lisp of that period) both called the REPL window "Listener".


> This workflow is really common in functional languages.

That depends on the language. OCaml compiles really fast, has a relatively good toplevel with good tracing, and a great time-travelling debugger.


I can see the frustration coming from Lisp, but it's just business as usual in compiled, strongly typed system languages. Languages like Rust and C++ add insult to injury by having rather long compile times.

It's definitely a different culture, but for someone like me who's on the other side of the fence it's not a big deal at all. I lean heavily on the type checking to make sure that my code is correct, and find myself running code relatively infrequently because of that. I can literally code for hours without running any code, just running "cargo check" here and there to check my syntax and typing.

If I really want to evaluate a snippet to make sure that it does what I want I generally just add a #[test] function wherever I'm currently coding. This way I have access to the full environment (dependencies etc...) and I can run it on its own with "cargo test $the_test". And I may get a bonus unit test out of it. So basically TDD without the religion surrounding it.


> I can see the frustration coming from Lisp, but it's just business as usual in compiled, strongly typed system languages.

Not really. SML and OCaml let you evaluate selected code snippets in REPL just fine, and their compilation is fast. I'm not sure about Haskell, but I'd guess it's similar with ghci.

"Strongly-typed" is in the eye of the beholder, but the phrase "compiled, strongly-typed programming languages" generally doesn't bring to my mind C++ and Java of which this statement would be more accurate.


People confuse strong typing with static typing all the time. I guess this was the case here.


I did do that, I feel ashamed.


That's a good one. Brought a smile to my face.

From "What We Do In The Shadows": <https://youtu.be/yy4CN9DVPII?t=85>


Java IDEs let you evaluate random code fragments at runtime in the context of the current debugger stackframe. You can also change method bodies (not signatures) and have them hot-reload.

It might look like C++, but the JVM is really more like Smalltalk in nature.


Haskell is relatively slow to compile when compared to OCaml.


I've developed multiple microservices in Rust (using axum[0]) and cargo-watch[1] has really come in handy. I usually only have to wait ~5 seconds for my changes to re-compile and the server to spin up.

I can't really speak for massive projects, but in my experience after the first build the compile times aren't too bad. Only the changed code re-compiles.

Another commenter mentioned they can code for hours without needing to re-compile, and I've found that to be true also. I use rust-analyzer[2] in VSCode and I can usually get a lot done without needing to manually test every change. I guess it depends on whether I'm writing logic-heavy code or code architecture (custom traits, implementations, and whatnot).

[0] - https://github.com/tokio-rs/axum
[1] - https://github.com/watchexec/cargo-watch
[2] - https://github.com/rust-analyzer/rust-analyzer


It is just something you adapt to, in the same way I currently have no idea how people handle tricky refactors in interpreted languages with no compiler assist, but I assume you have developed strategies to do so.

Having a template project for experimentation with bits of code is one way, and some simple scripts could automate away most of the pain.


Generally there is no need to actually run the program frequently, so you just code for an hour or more and then run the result.

Since Rust has an actual type system, decent programmers usually produce code that works correctly the first time it successfully compiles, so running and testing it is not so important.

If you still need to run many code snippets frequently, create a command-line tool with an option parser (e.g. clap in Rust), and create a subcommand for each "code snippet"; you can then invoke them from the command line along with whatever options you need.
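A std-only sketch of that subcommand idea (clap would add real flag parsing and help text); the snippet names here are invented:

```rust
use std::env;

// Each function is one "snippet" you might otherwise paste into a scratch main.
fn snippet_parse(arg: &str) -> String {
    format!("{:?}", arg.parse::<i64>())
}

fn snippet_upper(arg: &str) -> String {
    arg.to_uppercase()
}

fn main() {
    let args: Vec<String> = env::args().collect();
    // Dispatch on the first argument as a subcommand name.
    match (args.get(1).map(String::as_str), args.get(2)) {
        (Some("parse"), Some(a)) => println!("{}", snippet_parse(a)),
        (Some("upper"), Some(a)) => println!("{}", snippet_upper(a)),
        _ => eprintln!("usage: snippets <parse|upper> <arg>"),
    }
}
```

Then `cargo run -- parse 42` runs just that one snippet.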

You can also use debuggers, or add eprintln! calls in existing codebases.
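For instance, a small sketch of the eprintln!/dbg! approach (`doubled_sum` is a made-up example function):

```rust
// `dbg!` prints file, line, the expression text, and its value to stderr,
// then returns the value, so it can wrap any expression in place.
fn doubled_sum(xs: &[i32]) -> i32 {
    xs.iter().map(|x| dbg!(x * 2)).sum()
}

fn main() {
    let total = doubled_sum(&[1, 2, 3]);
    eprintln!("total = {}", total); // plain stderr print
}
```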

If you are just learning the language and want to test simple examples you can also use online playgrounds like https://play.rust-lang.org


> decent programmers usually produce code that works correctly the first time it successfully compiles, so running and testing it is not so important.

Wow, that is a very big call.


I find that Rust playgrounds and inline tests are a good solution for trying out snippets. With rust-analyzer, you can run a test via an LSP code action from right inside the file you're working on, so it's a nice lightweight way to try out a couple of lines of code.


His hot reloading rust article is one of my favourites https://fasterthanli.me/articles/so-you-want-to-live-reload-...

This chap is up there with Eli Bendersky’s blog for me; I really enjoy fasterthanlime’s long-form articles.


Amazing article! Will save it for later reading. I jumped straight to the video to get an overview of how it works, and it seems to reload the contents of the file when it changes, which is cool and surely an improvement. Do people use this when they write web servers? I was trying to find whether there is any lib/tool you can include/run to get the functionality, but it seems very involved right now.

Seems to be missing the vital functionality of being able to evaluate snippets though, which is the biggest feature missing in Rust.


How exactly do you evaluate snippets in some environment, even with a REPL? I mean, most code in a large system is part of a context that is required, at the very least, to generate the inputs for your functions. I can't imagine how you could evaluate these snippets outside of this program environment except in the most trivial cases, when dealing with simple pure functions with only a few simple parameters.


Using Clojure as an example (with no regard to quality, not production code and so on). You might have a piece of code like this:

    (defn people-in-scenes [scenes]
      (->> scenes
           (map :subject)
           (interpose ", ")
           (reduce str)))
You could evaluate the full function, and then afterwards evaluate `(people-in-scenes [{:subject "Test"}])`.

But, usually you want to evaluate just a piece, in this case we want to evaluate `(->> scenes ...)` but `scenes` is not in scope?

So we evaluate `(def scenes [{:subject "Test"}])` first and now we have scenes in scope wherever we are, with our test data, and we can select arbitrary snippets around the codebase that depend on `scenes` being in scope.

Repeat for all vars that are expected to be in scope, and you can jump around freely.

Sometimes I leave statements that define common data structures to various names inside a `comment` block, so it's easy to jumpstart this interactive development when you're in a namespace.

There are other ways as well, but normally this is the easiest to get started with and does the trick for most cases.


(I am not OP and I have never done Lisp.)

>How exactly do you evaluate snippets in some environment even with REPL? I mean most code in a large system is part of a context that is required

This is a bad smell for me. You should always try to split your code into small pure functions that do one thing; it will help you with readability, refactoring, and testing, and depending on the environment you could just replace a function with a different one at runtime.


But you still need the types that are required, and you need to define the functions that your snippet calls. Those in turn could require their own types and functions, and so on. Unless you don't have much in the way of types, so you can easily define whatever you need. That still leaves the functions the snippet depends on.


I try to create new types if needed instead of passing too many parameters to a function. Also, say a function needs the user's email: you don't pass it the big User object, you pass only the email.

The issue I hit, though, is with code that evolves over years: it starts simple, and then each team member has to add some new feature, and the simple functions get larger and larger, the number of parameters increases, etc. The solution is to take the time to re-analyze the updated problem and refactor or rewrite the solution so it's again composed of simple parts that do one thing.

Let me know if I misunderstood the thing you said about types, or if you were referring mainly to the REPL.


I don't know how close you could get in rust to the kind of productivity lisp development has.

You would need the ability to keep your program running inside a debugger while patching memory on the fly: not just replacing instructions in a function body (which I reckon I could figure out with the help of that hot-reloading-in-Rust link I shared), but replacing function signatures at debug-runtime, and to be able to serialise the image state to disk and arbitrarily resume it later... It just feels like square peg, round hole.

Rust has a different development experience than Lisp. It doesn't lend itself to stuff like structural editing (which is really fun to watch an experienced dev doing - Emacs Lisp videos on YouTube are a good source for some mind-bending productivity).


You do need custom support to do this in Rust, but it's in the works - see https://github.com/google/evcxr for what seems to be the most current effort.


Wow, that looks really cool! Thanks for sharing, gonna give it a try.

Although, the repository seems a bit... Weird. The README ends with:

> This is not an officially supported Google product. It's released by Google only because the (original) author happens to work there.

Why would Google release it if it's not their product/project? Why is it under github.com/google? Does Google own projects that people happen to work on in their free time, just because they otherwise work at Google? Something smells fishy here, but it wouldn't be the first time Google does fishy stuff.


Google releases lots of projects like this, and yes many software developer contracts hand over all IP: https://news.ycombinator.com/item?id=2208056 (discussion is good but original link is dead).


Google is the legal copyright holder, but that does not mean they want to or have to consider it a product.


Holy fuck, I had no idea. How are developers still working at Google if this is true? Giving up ownership of all projects you do in your free time?


My understanding (based on second-hand knowledge) is that employees have two "choices" available to them:

* Have a "fairly flexible arrangement" to work on side projects when they want & potentially using Google hardware on work time but in exchange Google holds the copyright & it has to be published under the Google repo with the quoted disclaimer.

* Deal with a bunch more paperwork on a per-project basis.

If people don't have a particularly strong attachment to their side-projects the first option is the "low hassle" option for flexibility to do side projects at work.

I would agree it is unfortunate developers don't push back harder against such a wanton "IP" grab by their employer--but hey, I guess that's what we get without collective representation or respecting our own value as developers.


> If you're writing a web server, are you really stopping the server, compiling again, and running the full server every time you change its logic?

Yes. This is definitely how I'd change the server. I wrote a server that exposed our RDF storage engine as an HTTP SPARQL endpoint. I did that in C; today I would probably use Rust. I'd expect to restart the server maybe a dozen times in a day's work, maybe less, occasionally more.

Stronger type checking and other pre-compilation inspection might help you. You can have tooling that shows you that what you've written is nonsense and won't compile, in a typical IDE like VSCode or even vim.

I don't like REPLs for this type of work, whereas they're definitely what I want for say, live coding music performance or ad hoc database queries.


You might like this extension for VS Code: https://marketplace.visualstudio.com/items?itemName=rustnote...

I just got it to a usable state today; it does what you're asking, which is to run Rust code without a main function. Just open a .md file and you're good to go. It also saves the outputs to markdown so you can upload to blog sites. I've only tried it on Linux btw, it might not work on other platforms.


> How does your workflow (when writing/reading code) actually look day-to-day? If you're writing a web server, are you really stopping the server, compiling again, and running the full server every time you change its logic?

You can run cargo watch, which watches for file changes and then stops the server, recompiles, and restarts it for you.

For incremental changes you're looking at about 2-10 seconds of delay depending on the complexity of the framework in use; for example, Axum is faster than Actix.

Before you get to that state you're probably in VSCode with rust-analyzer, which will point out any compiler errors earlier in the lifecycle.

The Rust compiler does a lot. It's catching issues before they get to production.

I'm hoping that maybe the M1 Pro/Max chips can get my incremental compilations to sub-second, and then I get a fast workflow with the full Rust compiler safety net.


> are you really stopping the server, compiling again, and running the full server every time you change its logic?

Yes. I use something like `cargo watch -x run -w templates` for my personal project. If I make a change to any source file, this command will recompile my project, then restart the web framework's (Tide currently) http server. It also watches changes to the templates (Tera, which is like Jinja2). It's not nearly as fast as some of the dynamic language reloading I've used in other languages/frameworks, but it works and I can move pretty quickly this way.


I feel the pain of a lack of REPL. I frequently want to check a snippet of code, but need to go through the cycle of creating a test case, copying code, modifying it so that it works standalone without its context, running it and then deleting it. It's tiresome.

As for compilation, yes, it's slow in general, but once you compile a project initially, later incremental builds are usually quick.

Overall, I think in the future I'll see if I can use SML more and see how it goes.


This is how I keep feedback loops short:

- IntelliJ tells me any type issues instantly (I find this makes up some of the time for not having a REPL).

- Small tests kind of work like a REPL.

- A watch.sh script running in a terminal re-compiles and runs the code I am working on. In your web server example, I would be testing the units rather than the whole composite program. I can click file references to jump to the error.

- Fast machine (M1)


I was thinking of getting an M1 just for this.

Do you know how much faster it is? I.e., making a small change in code and rebuilding the binary.

Most articles on Rust compilation focus on a full build, which I don't need to do very often.


I upgraded from a 2014 MacBook Pro (dual core), it is at least 2x faster for both full and incremental Rust builds.


Reading the docs and the types helps me a lot. If you've done Clojure/PHP/JS, maybe you're not used to it.


Like C or C++, you attach a debugger and step through the code.


I'm coming from the opposite side -- my career has had me working almost entirely in compiled languages, mostly C, but I've also done very low-level embedded assembly and a fair bit of Python. For personal projects recently it's been Go, but I have also done some Rust and Haskell, and have played with some Lisps.

I think the thing you're really talking about is getting fast feedback on a bit of code.

The two basic strategies I've adopted are (1) writing a very small, isolated, standalone program or (2) leaning on a test runner.

I use (1) when the snippet is a basic standalone piece of code: either something that just uses fundamentals from the language, or in an environment where I can quickly generate an executable that is linked to the right libs. Then I can run it and see the output, often iteratively with something like `ls *.c | entr -s 'make && ./my-example'`. Sometimes this is also a convenient way to either step through a code snippet in gdb, or examine assembly output.

I use (2) when there are too many dependencies to make an isolated program feasible. I set up a test case to exercise my snippet, and either use a test runner's "watch" functionality or use entr like above, making sure to have the test selection set to only run my test (to keep the feedback loop tight). Then in one terminal I'm editing, and whenever I save I can see output in another terminal. The trick here is to avoid -- as much as possible -- rebuilding the whole system. If the thing you're working on is deep in a library, just build & run the tests for that library. Sometimes you still get burned by touching some file that requires recompiling a whole mess of stuff, but most of the time in my experience with a little bit of effort you can get the feedback cycle down to a small number of seconds.

> How do people develop in Rust? I'm trying to learn it, but it's hard to jump into code-bases and understand the code as I cannot run snippets.

Interestingly, I had a VERY similar reaction when trying to learn Lisp. I think it's a beginner vs. expert kind of mentality. You have a certain mindset to be able to work with Lisp, and don't even consciously think about all the little habits you have for working with it. I have a certain mindset for working with compiled static languages.

One thing I struggled with in Lisp was trying to keep track of the exact state of my REPL, given that I had experimented with a whole bunch of things. I found myself frequently having to exit and reload the REPL, reload files, etc. to get back to a known state, and then occasionally having to scroll back through my history because some critical piece of code that I entered directly in the REPL wasn't saved in my source file -- or worse, some definition that was set in a previous revision of a source file (and subsequently removed) was needed, so when restarting the system it was missing. And then it's a debugging exercise to figure out what critical thing is missing.

I've watched people do dynamic development with Lisp and it looks really cool, but I have no idea how you keep track of what lives where! In my mind, it's a pure, clean slate starting with source files. (Except for environment variables, and the file system, and databases, and maybe the network, and ...)


>Yes! That the markdown sources should be the only source code for a series, and that Rust files, terminal sessions, even screenshots should be "build artifacts", that can all be produced from the markdown by my content pipeline.

Sounds like somebody's re-inventing literate programming.


Driving right past the snark to point you at https://github.com/dvcrn/markright, whose author recently contacted me, and is going for something really similar to what I've tried to describe in my article.

Markright (& my own thing) are focused on iteration: you can think of the output as a Git repo, rather than just the end result.

I've also never seen a literate programming system that generates colored terminal output, screenshots, etc.

I'm not sure why you chose to write this comment that way instead of saying "Seems like a neat derivative/evolution of literate programming", which wouldn't imply anything negative about anyone, but here we are!


I can't touch on outputting a git repo, but org-babel can definitely give you colored terminal output and screenshots, etc.

But I love to see people play in the literate programming space and loved your article.


This article gave me agita. I'm in a similar boat in that I'm planning to write my own software basically from scratch in Rust, just for the fun of it. But all of the CI stuff and S3 build artifacts and a server running a private cargo repo and a reverse proxy... it gave me serious anxiety.

I'm going to stick with a monorepo and Github Actions for as long as I can. I hope cargo workspace support improves for stuff like clippy so I don't hit the RAM problem mentioned here by the time I'll have written enough code for it to be possible. And yeah, Github Actions sucks. But it's cheap and relatively easy. And it's customizable enough for my use case as long as I'm not afraid to write some glue code when I need to... and either make that glue public or let it live inside each repo that needs it. (Please improve everything about Github Actions, Github!)


"Hey it's Amos! Hi Amos!"


The CI section is painful to read.

Bloat costs money and time; imagine needing more than 2 vCPUs to compile your code "fast" enough...

Languages that take an eternity to compile are not something I want in my "workflow".

If I were to hire developers, I'd make sure they pick languages that compile fast. There is a reason why Go took off and is the language for cloud-native needs.

And there is a reason why nobody uses Rust for server stuff.


A very short online search would be enough for you to find out the extent to which you are wrong about that last statement, in case you're interested in facts!


I know I won't.



