In Rust, everyone is a print debugger. The only thing that really goes wrong in normal code (once it compiles) is "why is this value not what I expect?". Dropping down into GDB is way overkill.
* Rust code can have memory errors + undefined behavior, because Rust code can say "unsafe". Plenty of real projects use "unsafe". (Alternate reason: because the compiler has soundness bugs.)
* Memory errors + undefined behavior aren't the only reasons people like debuggers. Consider: there are plenty of other memory-safe (GCed) languages in which people find debuggers useful (such as Java). "The only thing that really goes wrong in normal code (once it compiles) is 'why is this value not what I expect?'" is arguably true there as well.
And, for the record, gdb works decently well with Rust code. Not perfectly (yet) but well enough to be useful. I have tried it (although I'm more of a printf debugger myself).
I do the bulk of my programming in Kotlin these days, and I'd say that the primary reason is because IntelliJ's debugging support is so good. Aside from the ones you mention, key advantages of a good IDE debugger include:
1.) Ability to see the value of every variable in scope without needing to decide a priori which variables are worth looking at.
2.) Ability to traverse the call-stack and identify at what point a computation went wrong without having to instrument every single call & variable.
3.) Ability to interactively try out new code within the context of a stack frame. When I find a bug, oftentimes I'll try 3-4 new approaches just by entering watch expressions until I find an algorithm that works well on the data. This would take 3-4 full runs without the debugger.
4.) Ability to set conditional breakpoints and skip all the data that's working properly, only stopping on one particular record. When your loops regularly have 100k iterations before they fail on one single iteration, that's a lot of log output to sift though (or a lot of unnecessary loop counters & if-statements) for a rarely-encountered case.
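To make point 4 concrete, here's a hypothetical sketch (record IDs and the `process` function are made up for illustration) of the manual instrumentation a conditional breakpoint saves you from writing:

```rust
// Hypothetical: without a conditional breakpoint, you end up hand-rolling
// the break condition in code just to observe one bad record.
fn process(record_id: u64) -> i64 {
    // Stand-in for real work; pretend record 99_999 is the one that fails.
    if record_id == 99_999 { -1 } else { record_id as i64 }
}

fn main() {
    for id in 0..100_000u64 {
        let result = process(id);
        // The printf equivalent of a debugger's `break if id == 99_999`,
        // which you must write, recompile, and later remember to delete:
        if id == 99_999 {
            eprintln!("record {id}: result = {result}");
        }
    }
}
```

With a debugger, the same condition is typed into the breakpoint dialog against the running process, with no rebuild.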
Honestly, when I'm dealing with memory errors and undefined behavior, I can count on my hands the number of times a debugger has saved me, versus the hundreds of times I've had to printf my way to victory, thanks to 2nd/3rd/Nth-order effects that cascade into the final corruption.
Don't get me wrong, they're handy but I find them much more useful for stepping flow than root-causing errors.
Also, if you're dealing with race conditions, the only way to safely root-cause is to stash away data somewhere in memory and print it later, since flushes/fences/etc. change behavior. Debuggers make that even worse.
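A minimal sketch of that "stash in memory, print later" technique (a real tracer would use a lock-free ring buffer rather than a `Mutex`, precisely to minimize perturbation):

```rust
use std::sync::Mutex;
use std::time::Instant;

// Sketch: record trace events in memory during the racy section and only
// print them afterwards, so the I/O can't reorder the race itself.
static EVENTS: Mutex<Vec<(u128, &'static str)>> = Mutex::new(Vec::new());

fn trace(start: Instant, what: &'static str) {
    // Cheap: one lock + push, no syscalls, no flushes.
    EVENTS.lock().unwrap().push((start.elapsed().as_nanos(), what));
}

fn dump() {
    let mut events = EVENTS.lock().unwrap();
    events.sort(); // order by timestamp now that timing no longer matters
    for (t_ns, what) in events.iter() {
        eprintln!("{t_ns:>10} ns  {what}");
    }
}

fn main() {
    let start = Instant::now();
    trace(start, "before critical section");
    trace(start, "after critical section");
    dump(); // all the I/O happens here, after the fact
}
```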
Love my debuggers for behavior issues, but each tool has its place.
>Also if you're dealing with race conditions the only way to safely root-cause is to stash away data somewhere in mem and print it later as flushes/fences/etc change behavior. Debuggers make that even worse.
I'm not sure how Rust's support is here, but in my experience it's the exact opposite. Debuggers with var-watch or conditional breakpoints can do this (and a heck of a lot more) on the fly, and that's almost always faster than re-compiling and running. Even at the extreme-worst case, you can be a print-debugger with a debugger without needing to rebuild each time, just re-run.
Your conditional breakpoint can change execution behavior through flushing cache/icache in a way that doesn't reproduce.
x86 is pretty orderly, so you usually don't see that class of bugs until you start getting onto other architectures, but when you do, man is it nasty. C/C++ `volatile` comes to mind in particular: MSVC makes it atomic and fenced, which isn't the case pretty much anywhere else.
Also debuggers don't help you with the 2nd/3rd order effects when you need to trace something that's falling over across 5-6 different systems. With print based debugging I can format + graph that stuff much faster than a debugger can show me.
Like I said, different tools for different uses. It's just important to know the right tool so that everything doesn't look like a nail.
>Your conditional breakpoint can change execution behavior through flushing cache/icache in a way that doesn't reproduce.
Yes, that is definitely true. But so can calling a print function that does I/O, since it often involves system-wide locks; I'm sure many here have encountered bugs that go away when print statements are added. But debuggers are definitely more invasive / have stronger side effects, and have no workaround, yeah.
Multiple systems: sorta. Past (legitimately shallow) multi-process debugging that I've done has been pretty easy IMO, you just add a conditional breakpoint on the IPC you want and then enable the breakpoints you care about. Only slightly more complicated than multi-thread since the source isn't all in one UI. Printing is language agnostic tho, so it's at least a viable fallback in all cases, which does make it a lot more common.
---
To be clear, I'm not saying there's never a need for in-bin "debugging" with prints, data collection of some kind, etc. You can do stuff that's infeasible from the outside, it'll always have some place, and some languages/ecosystems give you no other option. Just that the point where it's necessary comes far later than most people assume, when a sophisticated debugger exists. E.g. the printf debugging I encounter in Java is usually due to a lack of understanding of what the debugger can do, not any real benefit.
You should try it before making uneducated, general comments that don't add anything. Rust in CLion/IntelliJ using LLDB is terrible: breakpoints and call stacks work, but variable inspection and the rest are 99% broken.
I prefer ASAN/UBSAN to either debuggers or printf-style debugging in such cases, but it's not always available.
I think a lot of debugger vs printf-style debugging is a matter of preference and familiarity. I'm used to debugging embedded or distributed systems where debugger support is not so great, so I've gotten used to other techniques (including printf-style stuff). But a lot of people love using debuggers, and I find it elitist to tell them they're wrong.
Yeah, I've never had that pleasure except in toy scenarios but they are cool tools! Usually I'm dealing with 2-3 vendors worth of cruft and platforms that aren't publicly available.
You'd be impressed with the power of formatted printf + excel. Solved some fun issues like quaternion interpolation normalization via graphing and the like.
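To illustrate that workflow, here's a hypothetical sketch of the "printf + spreadsheet" approach applied to the quaternion example: emit one CSV row per step, then graph the norm column to spot where interpolation drifts from unit length. (Naive component-wise lerp rather than slerp is used deliberately so the drift is visible; all names here are made up.)

```rust
// Naive linear interpolation of two quaternions, component by component.
// Unlike slerp, this does NOT preserve unit length -- which is the bug
// a graph of the norm column makes instantly visible.
fn lerp(a: [f64; 4], b: [f64; 4], t: f64) -> [f64; 4] {
    [0, 1, 2, 3].map(|i| a[i] + (b[i] - a[i]) * t)
}

fn norm(q: [f64; 4]) -> f64 {
    q.iter().map(|x| x * x).sum::<f64>().sqrt()
}

fn main() {
    let a = [1.0, 0.0, 0.0, 0.0]; // identity rotation
    let b = [0.0, 1.0, 0.0, 0.0]; // 180 degrees about x
    // CSV header + rows: paste this straight into Excel and chart it.
    println!("t,norm");
    for step in 0..=10 {
        let t = step as f64 / 10.0;
        println!("{t:.1},{:.4}", norm(lerp(a, b, t)));
    }
}
```

The norm dips to about 0.707 at t = 0.5, which is the kind of shape that jumps out of a chart far faster than out of a debugger's variable pane.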
I did specifically mention "normal code". `unsafe` is not normal code. Obviously if there is a segfault, I'd fire up a debugger - GDB is just fine for such purposes.
The comparison with Java is interesting. With Java, I have often found that errors occur in a rather non-local fashion, due to dynamic code loading, confusing inheritance trees, and ubiquitous mutations and what have you. Maybe I'm not actually calling the function I thought I was, maybe because I have actually received a subclass of my expected class. Print-debugging is often too narrow to highlight the cause. In such a situation, I would fire up the debugger and inspect the general state of the application (which Java makes relatively easy to do).
In contrast, in Rust things tend to happen in a very constrained fashion. You can't randomly mutate things, you can't (without considerable effort) make complicated graph structures where everything can touch everything else. With the occasional exception of highly generic code, your call sites and function arguments are exactly what you expect. So I can rely on print-debugging to quickly find the cause of my problem.
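For reference, Rust's out-of-the-box print-debugging ergonomics are part of why this works so well: `dbg!` prints the file, line, expression text, and `Debug` value to stderr, and passes the value through unchanged, so it can be dropped into the middle of an expression. (The `Config` type here is just an illustration.)

```rust
#[derive(Debug)]
struct Config {
    retries: u32,
    verbose: bool,
}

fn effective_retries(cfg: &Config) -> u32 {
    // dbg! prints something like `[src/main.rs:10] cfg.retries = 3` to
    // stderr and returns the value, so no code needs restructuring.
    let base = dbg!(cfg.retries);
    if cfg.verbose { base + 1 } else { base }
}

fn main() {
    let cfg = Config { retries: 3, verbose: true };
    // {:#?} pretty-prints anything that derives Debug.
    eprintln!("{cfg:#?}");
    println!("{}", effective_retries(&cfg));
}
```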
Incidentally, the same is true of Haskell, even more so, except that due to laziness the evaluation order can be harder to ascertain: debug statements can appear in a strange order (or not at all).
You are being overly-pedantic in your interpretation of the comment you are responding to. It isn't claiming that we have absolutely no undefined behaviour or memory errors in rust. The point is that undefined behaviour and memory errors are rare in Rust development, so tools intended to help find memory errors are just a lot less useful.
Parent probably implies "with the exception of unsafe" when he says "normal code". Unsafe code is supposed to lack many of the benefits of Rust's memory model.
And that'd be a totally useful way of looking at it if most real Rust programs didn't have any "abnormal" (unsafe) code in them. They do, though, and it still must be debugged somehow. Maybe the "unsafe" is hidden away in some transitive dependency crate or even in std, but it's there.
It's incredibly useful to limit the regions of unsafety and use them to build reusable, well-tested safe abstractions, but it's a mistake to confuse that with eliminating unsafe entirely or ignore the possibility there could still be errors within them.
> And that'd be a totally useful way of looking at it if most real Rust programs didn't have any "abnormal" (unsafe) code in them. They do, though,
I'm willing to bet that the vast majority of Rust code (outside of std) is safe. I've written unsafe once ever, in years of writing rust.
I agree that it's unfair to generalize that debuggers have no use in rust, but it's fair to generalize and say that most rust developers do not experience segfaults, or other memory corruption issues that often call for a more advanced approach to debugging.
I'd guess that about 1% of Rust code is unsafe (that holds true for a project of mine), but almost all Rust projects depend on some crate's unsafe code. And I've hit segfaults caused by unsafe code in crates I depended on several times. (Most commonly, due to FFI code trying to duplicate a C library's ABI in a .rs file and not getting it exactly right for the version/config options the library was built with on my machine. This is a disturbingly brittle way of doing things, but it will probably be common until bindgen is distributed with rustup by default or some such.)
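A hypothetical sketch of the brittleness described above (the library and names are invented): the hand-written Rust declarations must match the C library's actual layout for the exact version and compile flags it was built with.

```rust
// Hand-written mirror of a (hypothetical) libfoo header. This must match
// the C struct's layout *exactly* as the library was actually compiled.
#[repr(C)]
pub struct foo_options {
    pub timeout_ms: u32,
    pub flags: u32,
    // If libfoo 2.x inserted a field here, or an #ifdef changed a type,
    // every access past this point would silently read garbage -- with
    // no compile-time error on the Rust side.
}

extern "C" {
    // Assumed signature; bindgen would generate this from foo.h at build
    // time instead of trusting a hand-written copy to stay in sync.
    pub fn foo_init(opts: *const foo_options) -> i32;
}
```

Nothing checks these declarations against the installed library, which is exactly why version/config drift turns into segfaults at runtime.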
You may not use the debugger often, but it's there if you need/want it, which is an important message that I think is lost with "all Rust programmers are print debuggers".
Congrats on only using unsafe once in years. That's pretty neat.
> I'm willing to bet that the vast majority of Rust code (outside of std) is safe. I've written unsafe once ever, in years of writing rust.
It's very much about project choice. I immediately ran into unsafe trying to test some functions marked extern. Then again, I was writing toy VMs and GC algos.
Not when you have a proper IDE setup where building + running it in debugging session are all done with a single action. I've done print debugging for a long time, and here and there it still makes sense, but I've found that it's honestly worth putting in the effort once per (decently sized) project to just set up the IDE properly. And honestly once you have done it once, it's mostly just copy pasting the same config from project to project.
A lot of us believe that we spend a much smaller portion of our time looking at or debugging existing code than we really do. If you don't believe it's a time suck then you have very little incentive to keep pushing to get better at it. So the majority of us quickly reach a point where we are satisfied that we 'know how to debug' but leave a lot of room for improvement on the table.
The best description I've heard for master-level debugging is that it's a process of narrowing down the problem space as cheaply as possible. Your brain is telling you that based on everything you 'know' about the code, the right answer should come out. If the wrong answer is coming out, something you 'know' is wrong.
After the most obvious failure mode doesn't reveal the problem, your next check may not be the second most obvious failure. Instead you're multiplying the cost of verifying an assumption times the likelihood it's correct times the 'area' of the problem space it eliminates. Checking things like "is it plugged in?" sounds stupid but brings down the worst-case resolution time by hours.
Long story short, let's say I'm sitting in an interactive debugger looking at a stack frame, expecting that a particular variable has the wrong value, but it's fine. The cheapest thing for me to do next is to look at all of the neighbors of the suspicious value, and those in the caller and on the return. With println, pretty much every subsequent check costs the same amount as the first one. And if there's no short path from starting the app to running the scenario, that cost could be pretty high.
If you believe that you have a high success rate on your first couple of guesses, then println works great for you. But what if you're wrong? Have you ever tracked how many attempts it usually takes you? Or are you too wrapped up in the execution to step back and think about how you could do better next time?
Also, I want to be clear that I'm not telling anybody how to debug, as long as you aren't making that choice for your whole team. Don't choose tools or code conventions that break interactive debugging because "println was fine for grandpa so it's good enough for me!" That's a big ol' case of Chesterton's Fence.
Having experienced the higher plane of fully integrated IDE / run / debugging with arbitrary expression evaluation, conditional breakpoints, etc, I can't even imagine how anyone could work with "print debugging".
I actually have gone in the opposite direction. I used to use a step-through debugger for all my debugging needs, but at this point I pretty much only do printf debugging.
I find that in most cases it's easier for me to figure out what's going on, because I can quickly scan a log of how different variables changed over time, instead of having to step through one step at a time.
Even if you have everything working just the way it should, in the majority of cases print debugging is enough, because problems boil down to the assumptions in your head about what a variable should be not matching what it actually is in your program.
You write tests? Consider treating selected variables of interest (printed to STDERR when debug mode is on) like one of many tests.
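A sketch of that idea, using `cfg!(debug_assertions)` as the "debug mode" switch (one of several possible switches; an env var check would work the same way, and the function here is just an illustration):

```rust
// Print selected variables of interest to stderr, but only in debug
// builds: cfg!(debug_assertions) is true for `cargo build`/`cargo test`
// and false for `--release`, so release binaries stay quiet.
macro_rules! dprintln {
    ($($arg:tt)*) => {
        if cfg!(debug_assertions) {
            eprintln!($($arg)*);
        }
    };
}

fn checked_mean(xs: &[f64]) -> Option<f64> {
    dprintln!("checked_mean: {} samples", xs.len()); // variable of interest
    if xs.is_empty() {
        return None;
    }
    let mean = xs.iter().sum::<f64>() / xs.len() as f64;
    dprintln!("checked_mean: mean = {mean}"); // eyeball alongside test runs
    Some(mean)
}

fn main() {
    println!("{:?}", checked_mean(&[1.0, 2.0, 3.0]));
}
```

Run under `cargo test` you get the trace for free next to the assertions; in release it compiles down to nothing interesting.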
Looking at memory and all the variables has its place, but as you said, only "here and there" - because when you have to do that, you have already lost: you are looking for a needle in a (hay)stack, and will lose much more time than just eyeballing the variables of interest you selected beforehand.
Just wanted to say that your rust projects are a blast to watch from afar on twitter. You've done seemingly really crazy things with Rust + Zelda Wind Waker.
I so deeply hate that there is so much good content on Twitter that is simply lost if I don't happen to login on the given day. God forbid I'm not on the platform at all. It's so weird how RSS + Blog is a better experience for everyone, except advertisers.
Anyway, I'm super curious what someone is doing with Rust and Zelda. How do I learn more without Twitter?
I was actually thinking of the crazy stuff the OP does to modify the game (geometry and collision, I think) [1] with rust. I hadn't even seen WindWakerBetaQuest - that's also really cool!
Convenient, easy to access / use debuggers are a boon for logic bugs. Being able to see the flow of the program and snapshots of state reduce the time it takes to identify and fix bugs significantly.
Perhaps everyone being a print debugger in Rust is less a compliment to the language, but a criticism of the tooling. I absolutely adore Rust, but understand there are still some vast gaps in the tooling.
Really it's just because the debugging experience sucks. If you could actually print out the value of local variables in the debugger, using their `fmt::Debug` representation, that would be great, but that just isn't the case yet. Instead, we're stuck with adding print statements and recompiling our code, which depending on the size of the project can take forever.
Tbh I do the same. GDB is just a massive tool that I feel uncomfortable with.
Maybe that is a missing niche in the market: a debugging protocol similar to the language server protocol used in VSCode (which RLS provides for Rust).
Then the IDE could integrate with any language and debug it, regardless of the details of how the language functions. And it could provide a better UI than GDB (which isn't a high bar; it's more like digging down to find the bar, because GDB's UI is horrid).