Rust 1.32 released (rust-lang.org)
430 points by steveklabnik 3 months ago | 161 comments



Wow, it's been a while since I caught one of these announcements, and I have to say those additions look pretty awesome. I myself am a print debugger, and the dbg macro looks like a major quality of life improvement in that regard.
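For anyone who hasn't tried it yet: dbg! wraps any expression, prints the file, line, expression text, and resulting value to stderr, and hands the value back. A minimal sketch:

  fn main() {
      let x = dbg!(21 * 2); // prints: [src/main.rs:2] 21 * 2 = 42
      dbg!(x + 1);          // prints: [src/main.rs:3] x + 1 = 43
  }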

Removing jemalloc as the default allocator should greatly cut down on binary size as well, if I understand correctly.

I've dabbled in Rust, but haven't had a major reason to use it for anything yet. I'm looking forward to when that time comes.


> I myself am a print debugger

In Rust, everyone is a print debugger. The only thing that really goes wrong in normal code (once it compiles) is "why is this value not what I expect?". Dropping down into GDB is way overkill.


This is a silly claim.

* Rust code can have memory errors + undefined behavior, because Rust code can say "unsafe". Plenty of real projects use "unsafe". (Alternate reason: because the compiler has soundness bugs.)

* Memory errors + undefined behavior aren't the only reasons people like debuggers. Consider: there are plenty of other memory-safe (GCed) languages in which people find debuggers useful (such as Java). "The only thing that really goes wrong in normal code (once it compiles) is 'why is this value not what I expect?'" is arguably true there as well.

And, for the record, gdb works decently well with Rust code. Not perfectly (yet) but well enough to be useful. I have tried it (although I'm more of a printf debugger myself).


I do the bulk of my programming in Kotlin these days, and I'd say that the primary reason is that IntelliJ's debugging support is so good. Aside from the ones you mention, key advantages of a good IDE debugger include:

1.) Ability to see the value of every variable in scope without needing to decide a-priori which variables are worth looking at.

2.) Ability to traverse the call-stack and identify at what point a computation went wrong without having to instrument every single call & variable.

3.) Ability to interactively try out new code within the context of a stack frame. When I find a bug, oftentimes I'll try 3-4 new approaches just by entering watch expressions until I find an algorithm that works well on the data. This would take 3-4 full runs without the debugger.

4.) Ability to set conditional breakpoints and skip all the data that's working properly, only stopping on one particular record. When your loops regularly have 100k iterations before they fail on one single iteration, that's a lot of log output to sift through (or a lot of unnecessary loop counters & if-statements) for a rarely-encountered case.


JetBrains' CLion IDE offers similar niceness for Rust - I imagine it's the same underlying debugger UI.


CLion's debugger has been extremely fickle for me, spoiling what is otherwise a decent IDE.


The llvm support in CLion is terrible, basically the only thing that works right is breakpoints.


Honestly, when I'm dealing with memory errors and undefined behavior, I can count on my hands the number of times a debugger saved me, versus the hundreds of times I've had to printf my way to victory thanks to 2/3/N-order effects that cascade to the final corruption.

Don't get me wrong, they're handy but I find them much more useful for stepping flow than root-causing errors.

Also if you're dealing with race conditions the only way to safely root-cause is to stash away data somewhere in mem and print it later, as flushes/fences/etc change behavior. Debuggers make that even worse.

Love my debuggers for behavior issues but each tool has its place.
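To make the "stash it in memory, print it later" idea above concrete, here's a minimal Rust sketch using a per-thread buffer (the helper names are made up, and a serious trace buffer would preallocate so the hot path stays quiet):

  use std::cell::RefCell;

  thread_local! {
      // Per-thread buffer: no locks or fences added on the hot path beyond the Vec push.
      static TRACE: RefCell<Vec<String>> = RefCell::new(Vec::new());
  }

  fn trace(msg: String) {
      TRACE.with(|t| t.borrow_mut().push(msg));
  }

  fn dump_trace() {
      // Called once the interesting window has passed, away from the race.
      TRACE.with(|t| {
          for line in t.borrow().iter() {
              eprintln!("{}", line);
          }
      });
  }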


>Also if you're dealing with race conditions the only way to safely root-cause is to stash away data somewhere in mem and print it later, as flushes/fences/etc change behavior. Debuggers make that even worse.

I'm not sure how Rust's support is here, but in my experience it's the exact opposite. Debuggers with var-watch or conditional breakpoints can do this (and a heck of a lot more) on the fly, and that's almost always faster than re-compiling and running. Even at the extreme-worst case, you can be a print-debugger with a debugger without needing to rebuild each time, just re-run.


Your conditional breakpoint can change execution behavior through flushing cache/icache in a way that doesn't reproduce.

X86 is pretty orderly so you usually don't see that class of bugs until you start getting on other architectures, but when you do, man is it nasty. C/C++ volatile comes to mind particularly. MSVC makes it atomic and fenced, which isn't the case pretty much anywhere else.

Also debuggers don't help you with the 2nd/3rd order effects when you need to trace something that's falling over across 5-6 different systems. With print based debugging I can format + graph that stuff much faster than a debugger can show me.

Like I said, different tools for different uses. It's just important to know the right tool so that everything doesn't look like a nail.


>Your conditional breakpoint can change execution behavior through flushing cache/icache in a way that doesn't reproduce.

Yes, that is definitely true. But so does calling a printing func that does IO, since it often involves system-wide locks - I'm sure many here have encountered bugs that go away when print statements are added. But debuggers are definitely more invasive / have stronger side effects, and have no workaround, yea.

Multiple systems: sorta. Past (legitimately shallow) multi-process debugging that I've done has been pretty easy IMO, you just add a conditional breakpoint on the IPC you want and then enable the breakpoints you care about. Only slightly more complicated than multi-thread since the source isn't all in one UI. Printing is language agnostic tho, so it's at least a viable fallback in all cases, which does make it a lot more common.

---

To be clear, I'm not saying there's never a need for in-bin "debugging" with prints, data collection of some kind, etc. You can do stuff that's infeasible from the outside, it'll always have some place, and some languages/ecosystems give you no option. Just that it's far later than most people encounter, when a sophisticated debugger exists. E.g. printf debugging in Java that I encounter is usually due to a lack of understanding of what the debugger can do, not for any real benefit.


> But so does calling a printing func that does IO, since it often involves system-wide locks

> the only way to safely root-cause is to stash away data somewhere in mem and print it later, as flushes/fences/etc change behavior.

Dude, I literally called that out in the root post ;).


Memory races, yea. Logical races no. But yep - I'd forgotten the context, agreed :)


You should try it before making uneducated, general comments that don't add anything. Rust debugging in CLion/IntelliJ using LLVM is terrible: breakpoints and call stacks work, but variable inspection and the rest are 99% broken.


I prefer ASAN/UBSAN to either debuggers or printf-style debugging in such cases, but it's not always available.

I think a lot of debugger vs printf-style debugging is a matter of preference and familiarity. I'm used to debugging embedded or distributed systems where debugger support is not so great, so I've gotten used to other techniques (including printf-style stuff). But a lot of people love using debuggers, and I find it elitist to tell them they're wrong.


Yeah, I've never had that pleasure except in toy scenarios but they are cool tools! Usually I'm dealing with 2-3 vendors worth of cruft and platforms that aren't publicly available.

You'd be impressed with the power of formatted printf + excel. Solved some fun issues like quaternion interpolation normalization via graphing and the like.


I did specifically mention "normal code". `unsafe` is not normal code. Obviously if there is a segfault, I'd fire up a debugger - GDB is just fine for such purposes.

The comparison with Java is interesting. With Java, I have often found that errors occur in a rather non-local fashion, due to dynamic code loading, confusing inheritance trees, and ubiquitous mutations and what have you. Maybe I'm not actually calling the function I thought I was, maybe because I have actually received a subclass of my expected class. Print-debugging is often too narrow to highlight the cause. In such a situation, I would fire up the debugger and inspect the general state of the application (which Java makes relatively easy to do).

In contrast, in Rust things tend to happen in a very constrained fashion. You can't randomly mutate things, you can't (without considerable effort) make complicated graph structures where everything can touch everything else. With the occasional exception of highly generic code, your call sites and function arguments are exactly what you expect. So I can rely on print-debugging to quickly find the cause of my problem.

Incidentally the same is true with Haskell, more so even, except due to laziness the evaluation order can be harder to ascertain - debug statements can appear in a strange order (or not at all).


Your first point is a rather silly response.

You are being overly pedantic in your interpretation of the comment you are responding to. It isn't claiming that we have absolutely no undefined behaviour or memory errors in Rust. The point is that undefined behaviour and memory errors are rare in Rust development, so tools intended to help find memory errors are just a lot less useful.

Your second point is spot on.


Parent probably implies "with the exception of unsafe" when he says "normal code". Unsafe code is supposed to lack many of the benefits of Rust's memory model.


And that'd be a totally useful way of looking at it if most real Rust programs didn't have any "abnormal" (unsafe) code in them. They do, though, and it still must be debugged somehow. Maybe the "unsafe" is hidden away in some transitive dependency crate or even in std, but it's there.

It's incredibly useful to limit the regions of unsafety and use them to build reusable, well-tested safe abstractions, but it's a mistake to confuse that with eliminating unsafe entirely or ignore the possibility there could still be errors within them.


> And that'd be a totally useful way of looking at it if most real Rust programs didn't have any "abnormal" (unsafe) code in them. They do, though,

I'm willing to bet that the vast majority of Rust code (outside of std) is safe. I've written unsafe once ever, in years of writing rust.

I agree that it's unfair to generalize that debuggers have no use in rust, but it's fair to generalize and say that most rust developers do not experience segfaults, or other memory corruption issues that often call for a more advanced approach to debugging.


I'd guess that about 1% of Rust code is unsafe (holds true for a project of mine) but almost all Rust projects depend on some crate's unsafe code. And I've hit segfaults caused by unsafe code in crates I depended on several times. (Most commonly, due to FFI code trying to duplicate a C library's ABI in a .rs file and not getting it exactly right for the version/config options the library was built with on my machine. This is a disturbingly brittle way of doing things but will probably be common until bindgen is distributed with rustup by default or some such.)

You may not use the debugger often, but it's there if you need/want it, which is an important message that I think is lost with "all Rust programmers are print debuggers".

Congrats on only using unsafe once in years. That's pretty neat.


> I'm willing to bet that the vast majority of Rust code (outside of std) is safe. I've written unsafe once ever, in years of writing rust.

It's very much about project choice. I immediately ran into unsafe trying to test some functions marked extern. Then again when writing toy VMs and GC algos.


Not when you have a proper IDE setup where building + running it in a debugging session are all done with a single action. I've done print debugging for a long time, and here and there it still makes sense, but I've found that it's honestly worth putting in the effort once per (decently sized) project to just set up the IDE properly. And honestly once you have done it once, it's mostly just copy-pasting the same config from project to project.


A lot of us believe that we spend a much smaller portion of our time looking at or debugging existing code than we really do. If you don't believe it's a time suck then you have very little incentive to keep pushing to get better at it. So the majority of us quickly reach a point where we are satisfied that we 'know how to debug' but leave a lot of room for improvement on the table.

The best description I've heard for master-level debugging is that it's a process of narrowing down the problem space as cheaply as possible. Your brain is telling you that based on everything you 'know' about the code, the right answer should come out. If the wrong answer is coming out, something you 'know' is wrong.

After the most obvious failure mode doesn't reveal the problem, your next check may not be the second most obvious failure. Instead you're multiplying the cost of verifying an assumption times the likelihood it's correct times the 'area' of the problem space it eliminates. Checking things like "is it plugged in?" sounds stupid but brings down the worst-case resolution time by hours.

Long story short, let's say I'm sitting in an interactive debugger looking at a stack frame, expecting that a particular variable has the wrong value, but it's fine. The cheapest thing for me to do next is to look at all of the neighbors of the suspicious value, and those in the caller and on the return. With println, pretty much every subsequent check costs the same amount as the first one. And if there's no short path from starting the app to running the scenario, that cost could be pretty high.

If you believe that you have a high success rate on your first couple of guesses, then println works great for you. But what if you're wrong? Have you ever tracked how many attempts it usually takes you? Or are you too wrapped up in the execution to step back and think about how you could do better next time?

Also, I want to be clear that I'm not telling anybody how to debug, as long as you aren't making that choice for your whole team. Don't choose tools or code conventions that break interactive debugging because "println was fine for grandpa so it's good enough for me!" That's a big ol' case of Chesterton's Fence.


Print debugging is a case of the Blub Paradox.

Having experienced the higher plane of fully integrated IDE / run / debugging with arbitrary expression evaluation, conditional breakpoints, etc, I can't even imagine how anyone could work with "print debugging".


I actually have gone in the opposite direction. I used to use a step-through debugger for all my debugging needs, but at this point I pretty much only do printf debugging.

I find that in most cases it's easier for me to figure out what's going on, because I can quickly scan a log of how different variables changed over time, instead of having to step through one step at a time.


Even if you have everything working just the way it should, in the majority of cases print debugging is enough, because problems boil down to the assumptions in your head about what a variable should be not matching what it actually is in your program.

You write tests? Consider treating the variables of interest (printed to STDERR when debug mode is on) as just another kind of test.
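A minimal sketch of what "print the variables of interest to STDERR when debug mode is on" can look like in Rust (the DEBUG environment variable and the helper name are just placeholders):

  use std::fmt::Debug;

  // Only emits output when the (hypothetical) DEBUG env var is set.
  fn debug_dump(label: &str, value: &impl Debug) {
      if std::env::var_os("DEBUG").is_some() {
          eprintln!("{} = {:?}", label, value);
      }
  }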

Looking at memory and all the variables has its place, but as you said only "here and there" - because when you have to do that, you have already lost: you are looking for a needle in a (hay)stack, and will lose much more time than just eyeballing the variables of interest you selected before.


Just wanted to say that your rust projects are a blast to watch from afar on twitter. You've done seemingly really crazy things with Rust + Zelda Wind Waker.

Thank you for sharing your hacks!


I so deeply hate that there is so much good content on Twitter that is simply lost if I don't happen to login on the given day. God forbid I'm not on the platform at all. It's so weird how RSS + Blog is a better experience for everyone, except advertisers.

Anyway, I'm super curious what someone is doing with Rust and Zelda. How do I learn more without Twitter?

Looks like this might be it? and then maybe I'm just missing out on images and videos that are only in some ephemeral tweet? https://github.com/CryZe/WindWakerBetaQuest


I was actually thinking of the crazy stuff the OP does to modify the game (geometry and collision, I think) [1] with rust. I hadn't even seen WindWakerBetaQuest - that's also really cool!

[1] Things like custom menus https://twitter.com/CryZe107/status/1026355408343126017

Playing around with physics https://twitter.com/CryZe107/status/991389826002931714

Implementing Super Mario Odyssey-style snow https://twitter.com/CryZe107/status/991104446812819456

Rendering the Rust logo into 3d namespace https://twitter.com/CryZe107/status/990644091963756544


Convenient, easy to access / use debuggers are a boon for logic bugs. Being able to see the flow of the program and snapshots of state reduce the time it takes to identify and fix bugs significantly.

Perhaps everyone being a print debugger in Rust is less a compliment to the language than a criticism of the tooling. I absolutely adore Rust, but understand there are still some vast gaps in the tooling.


Really it's just because the debugging experience sucks. If you could actually print out the value of local variables in the debugger, using their `fmt::Debug` representation, that would be great, but that just isn't the case yet. Instead, we're stuck with adding print statements and recompiling our code, which depending on the size of the project can take forever.


Tbh I do the same. GDB is just a massive tool that I feel uncomfortable with.

Maybe that is a missing niche of the market; a debugging protocol similar to the language server protocol used in VSCode (which the RLS provides for Rust).

Then the IDE could integrate with any language and debug it, regardless of the details on how the language functions. And it can provide a better UI than GDB (which isn't a high bar, it's more like trying to dig down to find the bar because GDB UI is horrid)


For me every bug I get that's not obvious/build stuff is something that GDB struggles with because of threads/program boundaries, etc.


Can anyone link me to a breakdown of how jemalloc fares against system allocators wrt Rust? I was of the impression that the standard system allocators had worse fragmentation properties? Is that not true, or is the idea that if you care about fragmentation, you should opt into jemalloc?


Rust doesn't have a runtime so this is almost completely application dependent.


What kinds of things does jemalloc perform better in? What are its "weaknesses"?


For example, jemalloc 3.x reduces memory usage of CRuby by ~50% and increases throughput by ~10%. It also makes an enormous difference with KV stores like Redis so it's compiled in by default.

Weakness is basically binary size and code complexity.


Why is jemalloc not included in glibc then?


jemalloc is the default allocator on FreeBSD. glibc uses ptmalloc for historical reasons.


jemalloc is a strong memory allocator. But I expect that the team felt that most applications don't need to ship their own memory allocator.

An issue that Rust has had to overcome is that it produces bloated binaries by default. Hello World type applications are hundreds of kilobytes long, because they ship jemalloc and libunwind. Removing one of those helps.


There was a mention somewhere that rustc (the compiler itself) still uses jemalloc; with glibc 2.24(?) it was about 10% slower. Compilers tend to use a lot of small allocations, so most code is probably less affected than that.

I think the big selling point of jemalloc when it appeared was that it was much better for multithreaded applications. But since then glibc has improved a lot in this area, and nowadays has a similar design with per-thread pools etc.


Although I'm somewhat embarrassed to admit, I bet that dbg! is going to be extremely helpful in my Rust programming. Really appreciate that the team is taking time to consider "trivial" usability/ergonomics additions to the language.


It's funny that this is the first time I've seen a language explicitly condone "print debugging." It's one of those things that everyone says you're not supposed to do and then does anyway.

Does any other language have a similar feature?


I think debuggers are not worth the cost for many kinds of debugging scenarios. They're great for stepping through projects you're not really familiar with or in situations where code seems to be in violation of baseline expectations, but fiddling around with breakpoints and watches and other UI particularities of the debugger carries more cognitive overhead than "print debugging". Additionally, I think the problem is better solved by using thoughtful and contextualized logging with appropriate severity levels. Couple this with a TDD approach to development and you'll end up in a workflow that is just faster than stepping through lines of code when you could have your assumptions verified through logs and test assertions.
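As a sketch of the "contextualized logging with severity levels" approach in Rust: the usual pattern is the `log` facade plus a backend such as `env_logger` (both assumed in Cargo.toml), filtered at runtime via RUST_LOG so the statements can stay in the code:

  use log::{debug, warn};

  fn process(record: &str) {
      debug!("processing record {:?}", record); // only shown when debug level is enabled
      if record.is_empty() {
          warn!("skipping empty record");
      }
  }

  fn main() {
      env_logger::init(); // run with e.g. RUST_LOG=debug ./app
      process("alpha");
      process("");
  }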


Agreed! Personally I've found that I can find and fix problems much faster with a few print statements than with a debugger--debuggers can make it harder to trace through the full execution of a program.


> debuggers can make it harder to trace through the full execution of a program.

Except you often don't want a full execution, you often just want a partial execution where you suspect the problem arises. I can assure you that a good UI debugger is extremely helpful. Command line debuggers less so.


Yeah, I feel like people who don't like debuggers either don't use IDEs with amazing debuggers, or don't take the time to learn and understand how to use them. You can do powerful things with a debugger in seconds. I still use print statements under other environments. I think 'debug' logging is useful. I prefer logging over "print" any day of the week.


This is why unit tests are so helpful. You only debug the part of the code that's broken. It's a kind of a different way of thinking and people often write what I consider to be "bad" tests -- i.e. tests that don't actually exercise real portions of the code, but rather make tautological statements about interfaces. I spend a considerable amount of time designing my code so that various scenarios are easy to set up. If you find yourself reaching for a fake/mock because it is hard to set up a scenario with real objects, it's an indication of a problem.

This extra work pays off pretty quickly, though. When I have a bug, I find a selection of tests that works with that code, add a couple of printf-equivalents and then rerun the tests. Usually I can spot the error in a couple of minutes. Being an older programmer (I worked professionally for over 10 years before Beck published the first XP book), I'm very comfortable with using a debugger. However since I started doing TDD, I have never once used one. It's just a lot faster to printf debug.

The way I've explained it before is that it's like having an automated debugger. The tests are just code paths that you would dig into if you were debugging. The expectations in the tests are simply watch points. You run the code and look at the results, only you don't have to single step it -- it just runs in a couple of seconds and gives you the results.

You may think that the overhead of writing tests would be higher than the amount saved with debugging and if it were only debugging, I think that would be true. However, one of the things I've found over the years is that I'm actually dramatically faster writing code with tests compared to writing it without tests (keep in mind that I've got nearly 20 years of TDD experience -- yes... I started that early on). I'm pretty good at it.

The main advantage is that when you are writing code without tests, usually you sketch together a solution and then you run the app and see if it works. Sometimes it does pretty much what you want, but usually you discover some problems. You use a debugger, or you just modify the code and see what happens. Depending on the system, you often have to get out of the context of what you are doing, re-run the app, enter a whole bunch of information, etc, etc. It takes time. Debuggers that can update information on the fly are great time savers, but you still have to do a lot of contextual work.

It takes me some extra time to write tests, but running them is super quick (as long as you aren't writing bad tests). Usually I insist that I can run the relevant tests in less than 2 seconds. Ideally I like the entire suite to run in less than a minute, though convincing my peers to adhere to these numbers is often difficult. That 2 seconds is important, though. It's the amount of time it takes your brain to notice that something is taking a long time. If it's less than 2 seconds (and run whenever you save the file), usually you will barely notice it.

In that way, I've got better focus and can stay in the zone of the code, rather than repeatedly setting up my manual testing and looking at what it is doing. Overall, it's a pretty big productivity improvement for me. YMMV.


I really like the idea of tests as materialized debugging sessions.


Any multi-instance, multithread-capable language must have an appropriate debugger.

> [src/main.rs:4] x = 5

How do you differentiate the value of 'x' for a given thread/instance/whatever? You shouldn't have to add that info by hand to the debug message.
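For what it's worth, tagging the output with the current thread is cheap to do by hand; a small sketch:

  use std::thread;

  fn main() {
      let x = 5;
      // ThreadId implements Debug, so it can go straight into the message.
      eprintln!("[{:?}] [{}:{}] x = {:?}", thread::current().id(), file!(), line!(), x);
      // e.g. prints: [ThreadId(1)] [src/main.rs:6] x = 5
  }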

edit: typo


Using a debugger for testing multi-threaded code is particularly painful. Tests and logs are especially superior in this case because you can make complex assertions that capture emergent behavior of a multi-threaded application. Pausing threads to step through them can often make it harder to observe the behavior you might expect to see when multiple threads are working together in real time.


> I think debuggers are not worth the cost for many kinds of debugging scenarios.

That just means your debugger has a prohibitively high cost to use. If it takes more than 2 clicks to launch a full debugging session of your project, you need a new IDE.


The cost isn't in starting the debugger, it's getting into what you think is the right point in the execution of the program, and then stepping through one step at a time until you notice something is off.

With printf debugging, you can put print statements everywhere you think something might be wrong, run the program, and quickly scan through the log to see if anything doesn't match your expectations.


Most debuggers have conditional breakpoints and print-breakpoints, no need to step through line by line manually. If something strange happens just right-click and insert a print as the program is running.

With print-debugging if you find a bug you have to stop your app, insert print lines, recompile, redeploy, relaunch, click through your app to reach the buggy location and then scan through the log. This really feels like stone-age once you've ever used an IDE.


This "print debugging is bad" idea never made sense. Debugging is largely an effort to understand what's going on inside a program - having the program "talk back" to you via prints can be a great way to do this.

Sometimes, print debugging is the only practical way to fix a bug. For example, very rare bugs which can only be reproduced by running many instances of the code for a long time, or situations where attaching a debugger is not feasible (as in live services).


Then one uses trace points like IntelliTrace or DTrace.


Does this also work with 10 replicas of something in Alpine containers on clusters spread across different regions?


IntelliTrace works perfectly fine on Windows clusters and across Cloud deployments on Azure.

Never used Alpine containers, so no idea how it works with DTrace / SystemTap.

However a quick web search revealed the following right away, surely there are other results available.

"Systemtap for CoreOS Container Linux"

https://medium.com/makingtuenti/systemtap-for-coreos-contain...

"App Trace Roll: Users Guide" . For Red Hat Linux clusters

https://docs.huihoo.com/rocksclusters/app-trace/4.1/index.ht...


The person who proposed it is a (previous) Haskell developer.

The dbg! macro is definitely inspired by https://hackage.haskell.org/package/base-4.12.0.0/docs/Debug....


How so? In Rust, the macro is just a very thin wrapper around some output formatting boilerplate. In Haskell, it exists because you'd otherwise have to change the type of the function to add print-debug statements.


Because it can be used to wrap any expression.

I haven't seen a debug function that returns the value again anywhere else.


It was definitely in Arc first, and probably in other lisps before that:

https://github.com/arclanguage/anarki/blob/master/arc.arc#L1...

Edit: Yes, it's in Common Lisp too, so it goes back at least to the 1980s and probably further.

http://www.lispworks.com/documentation/HyperSpec/Body/f_wr_p...


The debug printing method p in Ruby has been returning its argument for years now. Dbg seems possibly better for the use case though, as you get the location as well.


It also reminds me of Ruby's

  .tap { |x| puts x }
For those of you who don't know Ruby, its map/filter/reduce functions chain like this:

  values.map { |x| x + 2 }.select { |x| x > 3 }
So when you want to look at an intermediate result, there's the .tap() method that runs a lambda with that intermediate result, then passes it on to the next step in the chain.

  [0, 1, 2, 3].map { |x| x + 2 }.tap { |x| puts x }.select { |x| x > 3 }
This returns [4, 5] after printing [2, 3, 4, 5]. ("puts" is Ruby's println.)


Took me a while to find it, but Rust's iterators have an `.inspect` method that gives you a read-only reference, so println debugging works fine. For more advanced tap-like stuff, use the `tap` crate (which allows you to write `array.tap(|xs| xs.sort())` for example, even though `sort` mutates in place and doesn't return the array).
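A small illustration of `.inspect` on the same shape of pipeline as the Ruby example above:

  fn main() {
      let v: Vec<i32> = [0, 1, 2, 3]
          .iter()
          .map(|x| x + 2)
          .inspect(|x| println!("intermediate: {}", x)) // prints 2, 3, 4, 5
          .filter(|&x| x > 3)
          .collect();
      assert_eq!(v, vec![4, 5]);
  }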


It's pretty much exactly like Ingy's XXX perl module from 2006: https://metacpan.org/pod/XXX

Prints a YAML dump of "my str", followed by file/line number information, then returns its argument so it can be embedded in expressions:

    $ perl -MXXX -E 'say uc(WWW("my str"))'
    --- my str
    ...
      at -e line 1
    MY STR
Data::Dump's "ddx" (1996) is also commonly used for print debugging, except it doesn't return the argument.


Almost. dbg! also shows the statement in addition to what it evaluates to. That is:

  dbg!(n * factorial(n - 1))
shows up as:

  [src/main.rs:5] n * factorial(n - 1) = 2
  [src/main.rs:5] n * factorial(n - 1) = 6
  [src/main.rs:5] n * factorial(n - 1) = 24
Although, IIRC there was some crazy dumper module by dconway (who else) that worked similarly to this, but I don't recall if it returned the value.

...

So, I just looked it up, and I found it. Actually, Damian wrote two. One that he updated from 2014 to 2016, and one he started in 2017 and has maintained to the present. I have no idea the reason for that.

1: https://metacpan.org/pod/Data::Show

2: https://metacpan.org/pod/Data::Dx


Hehe nice ones, thanks for the links! :)

Yep, leave it to Damian. I remember he did something similar for a test module where it would show the expressions that were evaluated in the test failure output.


The `p` function in Ruby is designed for quick pretty printing. `console.log` and friends in the browser are really only useful for debugging. Go has a built-in `println` so that you don't need to import the `fmt` package, and has a `%+v` format helper to get a pretty printed debugging friendly representation of an object.


As a new Go programmer you just saved me a bunch of time, I didn't know about either of those.


Elixir has a very similar function, IO.inspect(..), which prints a debug version of its argument and then returns it, very similar to dbg! here. Coupled with Elixir's pipeline operator `|>` which passes the output into the first argument of the next function, it's pretty handy: foo() |> IO.inspect |> bar()


Print debugging is only necessary when support for trace points in debuggers is lacking for the respective language.

With good support, a couple of trace points, even on a live running instance is all that is needed, without any extra recompiles.


This is just not true. For example, some bugs are so rare that you can't hope to reproduce them by just looking at "a live running instance". In this case, print debugging can be the only practical way to go.


Which is exactly what trace points are for.

More devs should learn about JTAG, IntelliTrace, DTrace, ....

No need for manually writing printf-debugging and recompiling all the time, when the debugger can do that for you.


Do people really not want people to print debug? The professors of all my CS classes so far (from data structures to assembly) explicitly tell students to print debug.


When you're learning it's fine.

I would hope at some point you would move past this; once you get to enterprise scale, print debugging would be slow and laborious.


Elm has a Debug.log function which takes a string and any value, prints "String: value" in the console, and then returns the value.


Bash has a -x mode that does this.


Python/Ruby have long had a tradition of printf + unit test debugging. I assume Python has a debugger now but I honestly don't know anyone who uses it.


I use it all the time. If my program crashes, I swap `python foo.py` with `ipython --pdb -- foo.py` and it will drop me into an ipython session at the exception so that I can inspect variables and the backtrace.

Also, if I want to pause at a point and step from there, just drop `import ipdb; ipdb.set_trace()` at the line I want to set a breakpoint.


I use it all the time as well and know many people that do. It was only yesterday that I discovered it works in a Jupyter notebook! So I assume enough people use it that it was integrated.


I was using Python debuggers in 2002, courtesy of ActiveState.


print is considered a big antipattern in Python since it's equally easy to use the built-in logging.debug. With PyCharm, launching the debugger is as easy as right-clicking a file and pressing debug; I don't know anyone who doesn't use it.


One thing I noticed: ne means native endianness. It's probably because I'm used to C, but I'd have preferred he for host endianness and ne for network byte order.


The naming is far worse than that. For example, from_ne_bytes seems to imply that bytes could have an endianness. That would require bitwise addressing.


A byte cannot have endianness, but bytes can, and that's how I think one should read that name.


Wellll, the DSP I used to work on had 24-bit bytes. When we saved a data structure onto the SD card (that had 8-bit bytes) we had to decide what endianness to write the byte in. :-)


I don't understand? Isn't that to convert from an array of bytes to a native type? So those bytes need to be in a specific order (in this case, native endianness)?

Do you mean someone could get confused because they think 1 byte has endianness? Well then maybe they shouldn't touch these functions since they don't understand what endianness is.

It's weird that they have these methods on a u8 though. Probably just comes automatically because it's an integer type so they prefer to have a consistent API?

It seems to come from a macro used to define those primitive int types: https://doc.rust-lang.org/stable/src/core/num/mod.rs.html#38...


According to the naming, yes indeed 1 byte has endianness. I know that isn't the intention.

It isn't unheard of for 1 byte to have endianness. To be specific, we can call it "bit endianness". It matters when serializing bits to go over a wire, such as when bit-banging I2C or SPI.

An array doesn't normally have endianness either. I guess you could have a programming language that does the indexing backwards, with 0 at the end of the array. I've never heard of such a thing existing.

Compare this with the endianness functions typically used in C. We get ntohl for example. The size is specified by the "l", the argument is of type "long", and the return value is of type "long". No bytes are involved in that interface.

Compare with what the Linux kernel uses. Again, no bytes are anywhere to be seen. Functions like cpu_to_le32 take and return 32-bit values.

It looks like Rust is doing things Python-style, which is a scary thought. Instead of providing distinct ways to change endianness and to interpret data as integers, the functionality is crammed into one interface.


> It looks like Rust is doing things Python-style, which is a scary thought. Instead of providing distinct ways to change endianness and to interpret data as integers, the functionality is crammed into one interface.

Except it's not. These specific methods aren't some grand interface to change endianness. These are convenience routines for converting between bytes and integers, which is a not altogether uncommon thing to do. I certainly do it a lot.

The existence of these byte-to-integer conversion methods does not imply the non-existence of other methods to convert endianness within integers. Indeed, those methods have existed since Rust 1.0: `swap_bytes`, `from_be`, `from_le`, `to_be`, and `to_le` are all methods defined on {integer type} that return {integer type}.
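To make the distinction concrete, here is a small sketch with both families side by side (the byte-array conversions are the new 1.32 additions; the in-integer swaps have been there since 1.0):

  fn main() {
      let n: u32 = 0x1234_5678;

      // New in 1.32: integer <-> byte array, with an explicit byte order.
      let be = n.to_be_bytes();                 // [0x12, 0x34, 0x56, 0x78]
      assert_eq!(u32::from_be_bytes(be), n);

      // Since 1.0: change byte order within the integer itself, no bytes involved.
      assert_eq!(n.swap_bytes(), 0x7856_3412);
      assert_eq!(n.to_le(), if cfg!(target_endian = "little") { n } else { n.swap_bytes() });
  }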


You're confusing bit ordering and endianness. There are many microcontrollers that will change the bit ordering on i2c and spi buses to accommodate various devices, but that's not endianness.


Bit ordering and byte ordering are both endianness. Word ordering is also endianness, as is nibble ordering. It's all endianness.

The fact that we typically refer to byte ordering doesn't mean the others are not also endianness.


I’m waiting for async/await... that will start the party!


You and many others... the lang team has been hard at work. We want it to ship, but we also want to get it right. It will be a huge deal, for sure.


I'm waiting eagerly too, thanks for all the hard work you do!

I read somewhere in all the discussions that Carl was hesitant to go all-in on futures 0.3 (those are definitely landing in std, right?) in tokio before some additional ergonomics had landed, possibly some language feature among them but I may misremember...

Do you know the list of things and possibly their tracking issues? Futures in std, async/await, etc, something something more? Would love to be able to keep track and follow along :)


No problem, though I don't work on this aspect at all, to be clear :)

> (those are definitely landing in std, right?)

Yes.

> Do you know the list of things and possibly their tracking issues?

https://areweasyncyet.rs/ :D


Ha, that was a superb answer, thanks! :D I'll be checking that site!

I know you may not have your fingers in this particular flower pot, so the "you" was directed at the entire Rust community. However, while we're here you still get a big personal thanks -- your Rust for Rubyists got me hooked those years back and the docs/book (1st ed and then 2nd ed with Carol) work you've done since is nothing short of amazing -- and with these latest developments I hope you find a great place to continue your work <3


Cool, then serious movement on embedded please!


That's happening totally concurrently! We've already made big strides, and I'm expecting more good stuff this year. There's even a new, embedded focused conference!


Please consider approaching Segger. If a real IDE supports the Rust compiler and builds in real debugging, more than just some hobbyist weirdos (heh) will be able to consider it.


Besides inflammatory language not being appropriate (here or anywhere else, really), what makes you think that only "hobbyist weirdos" are using Rust? When I look at "This Week in Rust", it lists job openings for Rust developers every week, so we're way past the "bunch of enthusiasts" stage.


I don't know how to comfort someone who thinks a light-hearted "hobbyist weirdos (heh)" is offensive and inflammatory.

Show me serious embedded device mfgs using Rust. Actually using. Not some testimonial on the Rust site. The real truth is that no serious use will happen until there is IDE support. Segger is a company large enough to be taken seriously and, unlike Keil or IAR, small enough to get it done.

Great things start with hobbyist weirdos. IMO... once you stop looking for things to be offended by you can get to work, son.


Is async/await also the reason generators are delayed?


In a sense; generators are currently an implementation detail of async/await. The plan is to get async/await shipped first, and then stabilize generators afterward.


https://boats.gitlab.io/blog/post/romio/ you can start playing with it right now


If my experience with Node is any indication, you want to be fashionably late to this party. It takes a while for modules to adapt to such a big change, and the modules you are most interested in may have dependencies on those modules. It takes a bit of time for concurrency changes to percolate up the dependency graph.


? If a function returns a promise you are good, and you can promisify cb-style libs, so how was this an issue?


Perhaps, with the absence of an officially sanctioned, easy-to-use async story, people resort to stuff like thread-per-connection. Like rocket.rs. When said async stuff appears in the core language, it will still take time for other code to be rewritten.


Not that I use Rust for more than toy projects, and not that the choice of allocator makes (any) difference for those... but I'm stoked to see the system allocator used by default now. Tiny binaries are good binaries.


So, the question I have is, should I switch my projects back to jemalloc? Are there any guidelines for determining where it makes sense and where it doesn't? (Obviously to get the fully correct answer for my project, I should benchmark, but how do I know if it's even worth going to the effort of that?)


It depends. In the gamedev industry, we noticed performance improvements when switching the allocator. But “should” is relative.

You probably won’t get a 2x boost from it. But you might notice some improvements. It’s worth trying if you can test it quickly.

Software best practices are like flossing: good in the long term, but probably not crucial to keeping your bite.
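For anyone who wants to test it quickly: since #[global_allocator] was stabilized you can opt back into jemalloc with the `jemallocator` crate, essentially the snippet from the release announcement:

  // Cargo.toml: jemallocator = "0.1"
  use jemallocator::Jemalloc;

  #[global_allocator]
  static GLOBAL: Jemalloc = Jemalloc;

  fn main() {
      // Every heap allocation below now goes through jemalloc again,
      // so benchmarking before/after this change isolates the allocator's effect.
      let v: Vec<u64> = (0..1_000_000).collect();
      println!("{}", v.len());
  }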


I'm not incredibly familiar with gamedev, but I would expect most allocations in games to be performed by small-object allocators on top of the general-purpose allocator, so the choice of general-purpose allocator should not make that much of a difference, should it?


I love Rust too. But lately, I've been getting the feeling that it's trying to do "everything". Hope it doesn't end up the C++ way.

Don't get me wrong, the dbg macro isn't one of them. Heavy user of print debugging here.


LOL, my dyslexia confused me pretty bad with `dbg!` I was expecting this to hook into GDB somehow, and got really excited. It's still very neat, I'll be using it a lot I'm sure. Reminds me of https://github.com/reem/rust-inspect a little bit.

I'm also quite happy with the uniform path work that's landed. Rust now has easily one of the best namespace management systems I've used in a programming language.

Cheers!


Hey, that’s my library! I’m glad to see a similar macro getting added upstream, so you no longer need to install libraries like this one for easy print debugging.


You never merged my PRs, QQ. Just teasing.


How's the documentation coming along? I've had to take a break from it because so far I've only been able to use it for hobby projects. Coming back to Rust for me often means bouncing around different editions of the docs (especially if FFI is involved) and searching blog posts, forums and my own old code to try to remember how to do things. But then of course the language has changed or best practice has so it's all out of date.


I think it has come along really well. I started trying to learn it a few years ago, but going through the official documentation that was recently revised has been a breath of fresh air. Went from one of the most confusing experiences to one of my favorites. Highly recommend starting here

https://doc.rust-lang.org/book/index.html


As of the time of this post, the official standalone installer page incorrectly lists 1.30.0 as the latest stable release. For users who prefer or need standalone installers, please use the URL templates below to download your packages until this issue has been resolved.

The URL template for normal rust installers is:

    * https://static.rust-lang.org/dist/rust-1.32.0-{TARGET-TRIPLE}.{EXT}

    * https://static.rust-lang.org/dist/rust-1.32.0-{TARGET-TRIPLE}.{EXT}.asc
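For example, filling in the template for 64-bit Linux (glibc) should give something like (the exact target triple for your platform comes from the list linked further down):

    * https://static.rust-lang.org/dist/rust-1.32.0-x86_64-unknown-linux-gnu.tar.gz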
The URL template for additional compilation target installers (`x86_64-unknown-linux-musl`, `wasm32-unknown-unknown`, ..etc) is:

    * https://static.rust-lang.org/dist/rust-std-1.32.0-{TARGET-TRIPLE}.{EXT}

    * https://static.rust-lang.org/dist/rust-std-1.32.0-{TARGET-TRIPLE}.{EXT}.asc
To avoid a very long post (and a lot of scrolling), the list of links to all target installers supported by rust has been omitted from this post. Refer to the complete list of supported platforms in https://forge.rust-lang.org/platform-support.html.

The file extension for auxiliary target installers is `.tar.gz` (or `.tar.xz`) for all targets including Windows.

Note: Due to a known bug, browsing the complete list of all installers is not available on https://static.rust-lang.org. It is however still possible to access dated repositories via the following URL template:

https://static.rust-lang.org/dist/YYYY-MM-DD/

Installers for the current stable release of rust can be browsed at https://static.rust-lang.org/dist/2019-01-17/

Cheers!


Just recently I was looking at example code of an edition 2018 crate and what I feared actually happened: I didn't know any more which crates this example code used, because extern crate statements were missing. So yaay more manual figuring out I guess...


I never understood this argument: you just have to look in Cargo.toml.

Yes, for multi-target (eg lib + bin, ..) it can be a little more work to look it up.

But the new system is SO much more convenient to use overall.


For lib.rs in libraries or main.rs in binary-only crates this works great. But if you have multiple examples, then each example can use any dev-dependency it wants and any of the normal dependencies. It's super hard to figure out which crates you need to add to make an example compile locally.

To give a concrete example, please look at this [1] Cargo.toml and this [2] example and try to figure out the crates it's using. I've highlighted them in Cargo.toml, to show how irregular the use is [3].

We don't have test-only or binary only dependencies yet. They would make this a bit easier.

I want to focus on the code I'm writing, not on figuring out which crates I need to include. On the bright side though, the system is still better than what C++ has, where you don't know which header a particular symbol came from :).

[1]: https://github.com/djc/quinn/blob/e555d11a430b9760d149a27659...

[2]: https://github.com/djc/quinn/blob/e555d11a430b9760d149a27659...

[3]: https://gist.github.com/est31/7cb33e8a6b63c8798381bdb5e91a14...


Point taken!

I also wish we could specify /examples or /tests specific dependencies in Cargo.toml.

A question here, though, is whether such a complex structure is rare enough that making things easier for the 90%+ of crates is worth inconveniencing the complex cases.


The only example crate used is `jemallocator`, which still has an entry in Cargo.toml, and still has `jemallocator` in the source code.


Oh sorry, this isn't about the blog post but about the experience I had so far with the 2018 edition. The blog post is great. Glad to see dbg!, ? in macros, and be/le/etc. stable. Also looking forward to 1.33, which will stabilize Duration::as_millis.
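For anyone who missed the "? in macros" item mentioned above: `?` is the new Kleene operator for macro repetitions that match zero or one occurrences of a fragment. A tiny sketch (the macro name is made up):

  macro_rules! log_msg {
      ($msg:expr $(, $ctx:expr)?) => {{
          eprint!("{}", $msg);
          $( eprint!(" ({})", $ctx); )?
          eprintln!();
      }};
  }

  fn main() {
      log_msg!("starting");              // prints: starting
      log_msg!("starting", "worker 3");  // prints: starting (worker 3)
  }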


Ah, it's all good, no worries :)


But you can see it from Cargo.toml, can't you? I found extern crate statements unnecessary verbosity. I am happy it is now optional.
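For readers who haven't made the switch yet, the change being discussed is roughly this (using serde_json as a stand-in dependency, assumed to be in Cargo.toml):

  // Rust 2015 edition:
  //
  //     extern crate serde_json;
  //     use serde_json::Value;
  //
  // Rust 2018 edition: the `extern crate` line is no longer needed;
  // the dependency only appears in Cargo.toml and in `use` paths.
  use serde_json::Value;

  fn main() {
      let v: Value = serde_json::from_str(r#"{ "answer": 42 }"#).unwrap();
      println!("{}", v["answer"]); // prints 42
  }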


dbg! looks neat. It made me wonder if any of the statically typed compiled languages have an equivalent of Ruby's binding.pry or JS's debugger call? That is, something like

  x = foo();
  invoke_debugger(); // An interactive console appears here.
  y = bar(x)


On x86 systems, a breakpoint is implemented by replacing a byte of the target instruction with the "int3" instruction; on Unix systems, hitting it raises the SIGTRAP signal. So an invoke_debugger() that boils down to asm("int3") would suffice, if you were already under a debugger.

Well, the trouble is knowing if you need to spawn a debugger. On Windows, there's a nice API for checking if you're being debugged (IsDebuggerPresent). On Linux, you have to read /proc/self/status to see if there is somebody running ptrace on you. On other Unixes, well, the only trick is trying to spawn another process to ptrace you and hope it works.

It is possible, just tricky.
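A minimal Linux-only sketch of the /proc/self/status check described above (the function name is made up, and actually breaking into the debugger afterwards would still need something like raising SIGTRAP):

  use std::fs;

  // Returns true if some process (e.g. gdb) is currently ptrace-attached to us.
  fn debugger_attached() -> bool {
      fs::read_to_string("/proc/self/status")
          .ok()
          .and_then(|s| {
              s.lines()
                  .find(|l| l.starts_with("TracerPid:"))
                  .and_then(|l| l.split_whitespace().nth(1))
                  .and_then(|pid| pid.parse::<u32>().ok())
          })
          .map_or(false, |pid| pid != 0)
  }

  fn main() {
      println!("debugger attached: {}", debugger_attached());
  }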


The "statically typed" part is not a big issue (there are many statically-typed languages with REPLs, after all), but "compiled" could be. For example, while you can stick a read-eval-print-loop anywhere in your code when using Racket, you can't see any non-top-level bindings there, because they may or may not be there during runtime. The compiler is free to inline or fold or remove any such binding at will (as long as the behavior of the program remains the same). There's a debugger for Racket, and it's also possible to use it programmatically, but you cannot just call it like `debugger;` in JavaScript: you need to recompile a module so that it uses (a lot of) heavy instrumentation and only then you can set a breakpoint, also from the inside.

I'd expect something like this: https://github.com/krixano/Lydrige to have it easy in this regard - it's a statically-typed but interpreted language.


In theory you could inline assembly "int 3" or the like, but I'm not really aware of anyone doing things like this directly.


Well for "regular" code if you want a breakpoint it makes more sense to use the debugger anyway. If you generate machine code however (for a JIT for instance) it's definitely a useful trick since adding debugger support for generated code, while possible, is often pretty tricky and non-portable.


I use this sometimes with some C/C++ code.

Mostly, if I want a break somewhere I do this:

  *(int*)0 = 1;
Which will result in a crash, I attach my debugger, run the code and if it reaches that line it will break and I can inspect everything I want.

It's easier than adding a __asm(int3) because it's cross platform. On windows for example it's not possible to perform inline assembly in x64 builds, and compiler intrinsics are not portable.


Have you ever had issues with the compiler removing/ignoring that line when optimizations are turned on?


No, could it? If I ever noticed it I would make it a bit more complex.


On windows you can just call DebugBreak();


Yes, but most of my code is written to be as portable as possible, if not outright cross-platform.


Using undefined behaviour like this is the opposite of portable.


Visual C++ has the __debugbreak() intrinsic function.

If you have JIT debug enabled, you will be prompted to attach your debugger to the process when hitting such a breakpoint.


gdb or lldb provide debugger functionality for native code, but you don’t set breakpoints programmatically.


You can, actually. At least in x86 asm, you just have to add an int3 instruction to get a breakpoint (requires OS support of course).


Using the literal fragment specifier in serde-big-array already to reduce code size if possible: https://github.com/est31/serde-big-array/commit/c6e5669ffe65...
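For anyone unfamiliar with the 1.32 feature being referenced: the `literal` fragment specifier matches literal tokens directly. A toy illustration (unrelated to what serde-big-array actually does):

  macro_rules! square_of {
      ($x:literal) => {
          $x * $x
      };
  }

  fn main() {
      assert_eq!(square_of!(3), 9);
      // square_of!(some_variable); // would fail to match: not a literal
  }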


What is the status on incremental compilation and code completion?


Typo note:

> literal mactches against literals of any type;


Thank you! https://github.com/rust-lang/blog.rust-lang.org/commit/a775b... (will be deployed in a few minutes)


Oh it's just MD on Github.

... of course it is. Noted for next time ;)


> This is servicable


Why does every release of Rust appear on HN? (meanwhile Python, Node, Ruby, Perl, PHP, etc... don't enjoy that same treatment)


Other languages' releases do get submitted. I just checked Ruby, Python, and Node, and all their last releases were submitted and got some upvotes, but not a ton of discussion.


The Christmas release cycle of Ruby probably doesn't help either ;)



Rust is just doing more that's interesting for folks here to talk about.


I think a lot of Rust people are on HN


See the discussion in the previous thread https://news.ycombinator.com/item?id=18727312


Often the others (at least some of them) do get posted too. It's just a matter of what gets traction.


[flagged]


> Why does every release of Rust appear on HN?

Perfectly fine question.

> (meanwhile Python, Node, Ruby, Perl, PHP, etc... don't enjoy that same treatment)

This is the part that's getting downvoted. It's needlessly snide, especially because it's not actually true — you'll see posts about plenty of languages on the home page, posts about the progress of C++ working groups get frequent attention, etc.



Plug- Use QtCreator for a nice Rust GUI


what? how?



