#[debug_visualizer] seems like it will be very useful.
One of the current issues with Rust (and a place where C++ is still better) is debugging. Granted, I end up needing to debug Rust code much less often than C++ code, but it would still be nice to actually call Rust functions in IntelliJ (it seems there's only limited support in gdb and lldb), and to view data structures like maps opaquely instead of just their complex internals.
I'm a debugger specialist, not a Rust specialist - but I'm surprised the experience of viewing data structures is not better for you.
What platform / debugger are you on? I'd expect the GDB pretty-printers (which from the article it looks like Rust has been shipping by default for the standard library) to be taking care of this relatively well - e.g. looking at https://github.com/rust-lang/rust/pull/72357 it appears that `HashMap` and `HashSet` have printers.
(Raw C++ data structures without some form of pretty printing are also incomprehensible for normal development - and I'd be surprised if third party libraries were particularly good at shipping pretty printers, though I may be wrong!)
It’s gotten much better. Primitive structures that are defined by their fields, and builtins like `Vec`, `HashMap`, and `String`, are handled well. But for structures in library or user code that use “raw” or other non-trivial representations behind an opaque safe API, all I can introspect is those non-trivial fields, not the logical contents.
This annotation is exactly what those custom structures need.
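For reference, a minimal sketch of how that could look, assuming the attribute shape from the RFC (which spells it `debugger_visualizer` and takes `natvis_file` / `gdb_script_file` keys); the type and script names here are made up:

```rust
// Crate-level attribute pointing the compiler at a pretty-printer
// script; a reference to it is embedded in the debug info so GDB can
// auto-load it alongside the binary.
#![debugger_visualizer(gdb_script_file = "interned_printer.py")]

/// An opaque handle: the raw u32 is meaningless in a debugger
/// without the accompanying printer script.
pub struct Interned(pub u32);
```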
If you hunt around for the debugging working group and related issues, there's some interesting reading on exactly that (I'm on my phone, and it's been a while since I looked into this). The issue is that a debugger generally can't assume your code doesn't have bugs.
This would make bugs around state in Debug impls much more complicated, for example. This can creep in under a few layers of indirection - ultimately calling a function that gets the value of an interned string would be one example.
I'm not sure they'd do so automatically but, given the ability to call functions (which it sounds like, for Rust, GDB lacks) it should be possible to plumb together.
But debuggers like GDB have built-in infrastructure for expressing how to display structured, potentially nested data structures - it's built for the purpose and powerful, so it's nice to bring it to bear if you haven't already got something equivalent in the program.
The other important thing is that the debugger's pretty printers work on a core file, without needing a live process.
Though that “only” dates back to 2020, so people whose experience is older than that would have had a worse time, and may have ended up once bitten, twice shy - never noticing the improvements as a result.
> In this thread it is revealed that all (almost all) Rust devs are printf() not breakpoint devs, and why.
No it isn't? IDE debugging works just as well for Rust as it does for C++ in my experience - better even. Yes it sometimes shows raw internals of complex containers instead of the logical structure, but it does for C++ too.
In Rust I can just click "Debug test" next to any test and it will start it in a debugger flawlessly. Zero effort to set up. I've never seen anything close to that in C++.
Of course as someone else said, you need a debugger far far less in Rust than C++.
> In this thread it is revealed that all (almost all) Rust devs are printf() not breakpoint devs, and why.
I'm a non-proud printf() debugger. I understand that if I got used to debugger-based debugging it would be better in many ways. The thing is, it's an entirely different paradigm of development, and paradigm shifts can come with a significant productivity cost. Of course, learning, growing, and getting comfortable with new and different tools is part of being a software engineer.
Yes, I've used debuggers many, many times; I'm also relatively experienced with e.g. gdb command syntax. It just doesn't seem to fit my workflow, for various reasons. In very short, my development style is some form of extreme TDD: when I write code it's almost always a "write test, red, write code, still red, write code, green" loop. I know many people hate it, but this is what I've come to like over many years, and it's what makes me most productive. I am working on adopting different development paradigms recently, though.
That certainly makes sense - and that tight loop is very compelling.
What if you have to solve a bug outside of the development loop though? E.g. a bug somewhere in the whole system after you've done your development, where you don't have a root cause yet?
I write integration tests as well as unit tests - mostly integration tests, to a reasonable extent. I also don't commit all integration tests if that's impractical (e.g. it makes the test suite too slow); instead I keep equivalent unit tests. I use an integration test to understand the underlying bug, then write a unit test.

But, truthfully, to figure out where to even begin, I religiously use logs. Normally prod logs are disabled or silenced, so the first step of debugging is enabling logs, reproducing the error in prod, and getting the logs. Then I inspect them and develop a hypothesis about where the bug is. I write an integration test reproducing the exact same bug: red. I fix the bug: green. Then I decide whether I want to commit this test - is it useful, is it fast enough? If so, we're done. Otherwise, I undo my commit, write a unit test (red), and apply the previous fix as-is (green). As you can see, this is a mix of printf() debugging (logs are essentially printf statements) and TDD.

One problem with this approach, of course, is that if your logs aren't sufficient you may not know where to start. Then you need to either ssh into a dev env and play around, or recreate the infra on your laptop and use a debugger to step through. I make sure my logs and tests are excellent in order to prevent these scenarios, though. I also make sure the system is idempotent so that bugs are easy to reproduce (leadership tends not to like verbose logging enabled in prod due to storage costs, which means that before reproducing a bug you first need to enable logs).
I grew up on PHP (xdebug, var_dump, and stepping through the code) and C# with Visual Studio.
Compared to this, gdb is just meh. I know, I know - Rust compiles to a nice binary, there's no runtime/vm/interpreter/JIT/intermediate-language/bytecode...
but that's not what I miss in gdb. I miss the discoverability, the visual representation, the whole "I" from the IDE concept.
I wonder why there's no protocol that simply tells the debugger that to pretty-print type X, it should call function Y in the debugged executable itself (using normal C ABI and some agreed-on calling convention). Then, for example, rustc could just generate simple glue functions that delegate to `Debug::fmt` in order to get formatting that the programmer is already familiar with or indeed has implemented themselves.
(Some precautionary measures would of course be required to avoid infinite recursion and similar. But the same issues already exist in debuggers that allow pretty-printers to eg. run arbitrary python code. To avoid a pathological or buggy formatter breaking either the debugged program or the debugger itself, it might be best to execute the formatter in a separate child process.)
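A rough sketch of the kind of glue rustc could emit, with all names hypothetical - this is an illustration of the proposed protocol, not an existing feature:

```rust
use std::ffi::{c_char, CString};

pub struct MyType {
    id: u64,
}

impl std::fmt::Debug for MyType {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "MyType(#{})", self.id)
    }
}

// Hypothetical per-type glue: the debugger calls this with a pointer
// to a value in the debuggee's memory and receives a NUL-terminated
// rendering of `Debug::fmt`.
#[no_mangle]
pub unsafe extern "C" fn __debug_fmt_my_type(ptr: *const MyType) -> *mut c_char {
    let rendered = format!("{:?}", &*ptr);
    // The debugger would release this through a matching glue function.
    CString::new(rendered)
        .map(CString::into_raw)
        .unwrap_or(std::ptr::null_mut())
}
```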
Take the already-frozen process, make a clone with CoW memory, switch the clone to a sandbox that forbids most syscalls, run the pretty-printer function(s) in the clone process, abort if pretty-printing takes too long (hung on lock, infinite loop due to corrupt data structure, etc)?
Lots of devils in the details, but that sounds possible.
Even just formatting something as a string will likely require memory allocation for all but the most trivial cases, and memory allocation is not side effect free.
The impetus for Python, rather than making a call into the inferior/debugged process, is largely to support pretty-printing data structures within core files and crash dumps. While it is certainly possible, it is not a small amount of work to generate pretty-printers that run in the debugger itself - it would likely involve an entire toolchain to compile the `Debug` impls to something the debugger can execute. Compiling `Debug` impls to wasm, with the existing Python pretty-printers calling into that, would be my best guess at a good starting point.
That's a really neat use case for wasm. And, if there were debugger support for wasm pretty-printers, it would be easy to support in language toolchains which already compile to wasm.
You can view the data structures opaquely even though the functions are written not in Rust, but in Python.
One of the difficulties is that the Rust compiler will emit different debugging info depending on the compiler version. Perhaps I should look into how gdb deals with that (if at all).
Don't know about the long lines - everything wraps well for me - but on mobile (Firefox) the zero padding on the left side is brutal. The gigantic comment section at the end is a small hindrance, while the right sidebar kills 30-40% of the page width (on mobile), which is annoying on longer articles.
And thanks for the content :-)
I had wondered about this but am too green to get it. Is this the reason that in VSCode, when debugging, most variables show as basically raw bytes instead of something useful? That was very surprising to me and made debugging difficult. In Python I’m used to it just working.
Just to give some acknowledgment, the VSCode Python debugger is really great. I was a longtime print debugger, but the tooling makes the process so straightforward that it is now the only way I can operate.
I feel that Rust could really profit from an interactive REPL / notebook like Julia's Pluto. It makes visualisations during development, and their integration with documentation, so seamless.
Not exactly but for shared libraries in general it's possible to ship some accompanying Python that will get installed and automatically loaded by GDB.
You can even embed that Python script within the shared object using some scary ELF tricks - I was surprised, to say the least, to discover this is supported.
Nice to have the option to unwind across an FFI boundary. It's also good that (in a future release) erroneously unwinding across a C ABI boundary will abort, instead of being undefined behaviour.
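For concreteness, the distinction looks roughly like this ("C-unwind" is the new ABI string from the unwinding RFC; the functions are illustrative):

```rust
// Plain "C" ABI: under the newer behaviour, a panic escaping this
// function aborts the process instead of being undefined behaviour.
#[no_mangle]
pub extern "C" fn no_unwind(x: i32) -> i32 {
    assert!(x >= 0, "negative input"); // escaping panic => abort
    x * 2
}

// "C-unwind" ABI: a panic may propagate across the FFI boundary to a
// caller that is prepared to handle (or translate) the unwind.
#[no_mangle]
pub extern "C-unwind" fn may_unwind(x: i32) -> i32 {
    assert!(x >= 0, "negative input"); // escaping panic => unwind
    x * 2
}
```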
I write billing apps because I'm a masochist and feel there is deeper meaning to be gained through suffering. Lot of data and rules to process. Torn between learning Go or Rust next.
If you're already familiar with system languages like C++, I think you'd make progress in Rust a lot faster than you think. However, it takes a LONG ass time to truly know the language and internalize good intuition around the kind of code you need to write to take advantage of it.
I do love Rust, but one complaint is that early design decisions, made while my understanding was weak, are extremely hard to refactor out later on. Poor type choices can end up bleeding into the entire project and become nightmares to improve.
My advice is to use cargo workspaces to break your project up into small self-contained modules early, so that the inevitable refactors are less painful.
Clojure was basically made for manipulating data and enabling fast changes to rules. Couple it with XTDB, which makes a lot of sense for finance-related domains because of its bi-temporality.
> Temporal databases aim to make our programming lives easier around time, by baking time itself into the engine. One major feature of temporal databases is the ability to query the database as of a particular point in time.
I've been experimenting with Go recently, with Rust still on my to-do list. I really like the simplicity of Go, with a fairly small language of constructs that combine powerfully.
Having learned SML / Haskell in the past I've got a bit of a soft spot for languages that are utterly cruel to you with their compilation errors but lead you straight to bug-free code. What I've heard about Rust puts it in this camp.
One thing that does concern me about Rust: when I (now and then) look at what's changing in C++ it feels like a lot of new mechanisms and abstractions are required to address problems created by previous design decisions. I sometimes read about Rust and worry that I might need to embark on a similar journey.
> C has a spec. No spec means there’s nothing keeping rustc honest.
Can you find the flaw in this statement?
No spec doesn't mean there's nothing keeping rustc honest, it just means there isn't a spec keeping rustc honest.
> Any behavior it exhibits could change tomorrow.
And pigs could fly tomorrow as well. This is not a serious engagement with the ways the Rust developers go to great pains not to change behavior (such as crater runs). In many ways empirical evidence like crater is more valuable than some random document that might or might not be fully obeyed at any given time.
> That they can’t slow down to pin down exactly what defines Rust is also indicative of an immature language.
Rust has slowed down a lot, just not to Drew's liking. And he isn't acknowledging the ways in which the risks of moving fast(er than he would like) are mitigated.
> Safety. Yes, Rust is more safe. I don’t really care. In light of all of these problems, I’ll take my segfaults and buffer overflows.
Please leave the software industry. This is embarrassing.
> I especially refuse to “rewrite it in Rust” - because no matter what, rewriting an entire program from scratch is always going to introduce more bugs than maintaining the C program ever would.
What evidence is there for this position? My contention is that a Rust rewrite would maybe not start that way, but rapidly exceed the C version in quality because it is just a more modern language.
Many of Rust's new "features"—for example GATs—are really more removing restrictions. Code you would try to write, and then be surprised to learn it wasn't possible.
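The classic example is a "lending iterator", whose items borrow from the iterator itself - something you'd naturally try to write and couldn't before GATs. A sketch (the trait is illustrative, not from the standard library):

```rust
// The associated type is generic over a lifetime, which is exactly
// what GATs added.
trait LendingIterator {
    type Item<'a>
    where
        Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Overlapping *mutable* windows over a slice: inexpressible with the
// plain `Iterator` trait, straightforward with a GAT.
struct WindowsMut<'s, T> {
    slice: &'s mut [T],
    size: usize,
    pos: usize,
}

impl<'s, T> LendingIterator for WindowsMut<'s, T> {
    type Item<'a>
        = &'a mut [T]
    where
        Self: 'a;

    fn next(&mut self) -> Option<&mut [T]> {
        let end = self.pos + self.size;
        if end > self.slice.len() {
            return None;
        }
        let start = self.pos;
        self.pos += 1;
        Some(&mut self.slice[start..end])
    }
}

fn main() {
    let mut data = [1, 2, 3, 4];
    let mut windows = WindowsMut { slice: &mut data, size: 2, pos: 0 };
    while let Some(w) = windows.next() {
        w[0] += 10; // each window's borrow ends with the iteration
    }
    assert_eq!(data, [11, 12, 13, 4]);
}
```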
Rust compilation is strict but not cruel. The suggested fixes associated with compilation errors are very friendly to beginners (one notable exception is errors around lifetimes).
Your concerns regarding "feature creep" are reasonable and would be expected for a language trying to displace C++.
Not qualified to answer thoroughly, but I wouldn’t use Go for anything that it’s not designed for, which is cloud infrastructure. That’s actually its slogan and it excels there.
For Rust, you can likely encode most rules and transitions between states in the type system itself. It’s a super power in that niche.
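For instance, here's a minimal type-state sketch (all names invented) for a billing-style rule - an invoice can only be paid once it has been issued, and the invalid transition is a compile error rather than a runtime check:

```rust
use std::marker::PhantomData;

struct Draft;
struct Issued;

struct Invoice<State> {
    amount_cents: u64,
    _state: PhantomData<State>,
}

impl Invoice<Draft> {
    fn new(amount_cents: u64) -> Self {
        Invoice { amount_cents, _state: PhantomData }
    }

    // Consuming `self` makes the pre-transition value unusable,
    // so stale states can't linger.
    fn issue(self) -> Invoice<Issued> {
        Invoice { amount_cents: self.amount_cents, _state: PhantomData }
    }
}

impl Invoice<Issued> {
    // `pay` simply doesn't exist on a Draft invoice.
    fn pay(self) -> u64 {
        self.amount_cents
    }
}

fn main() {
    let paid = Invoice::new(12_500).issue().pay();
    assert_eq!(paid, 12_500);
    // Invoice::new(100).pay(); // compile error: no `pay` on Draft
}
```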
By that logic Python is a systems programming language because MicroPython exists.
Just because you can provide some niche examples where Go is used as a systems programming language doesn't mean it is a general purpose systems programming language.
Putting Go in the same category as C/C++/Rust/Zig is ridiculous.
The term "systems programming" is just way too vague. I imagine most languages could be contorted to write compilers and whatnot. Go doesn't seem special in that regard. However, the popular definition of systems programming, that of performing well in resource-constrained environments (probably butchered that a bit), is where languages like C/C++/Rust shine. For all your shiny high-throughput database, kernel development, bleeding edge graphics needs. I know you're an experienced programmer, so you probably understand this and view the terminology as gatekeeping. I personally don't know of a better term.
So just like writing a unikernel for USB key firmware, for example.
Yes, it is gatekeeping. Back in the 8- and 16-bit home computer days, or even before during the computer revolution, systems programming was anything that could help develop the whole stack.
From the gatekeepers' point of view, Xerox PARC never did any systems programming.
That USB key is listed with "512 MB or 1 GB DDR3 RAM". It seems to be meant for multiple USB "applications" so that's understandable, but it's not exactly minuscule. TinyGo does seem to run on other far more constrained platforms, which is impressive. But like I said, Go doesn't seem fundamentally different than Python when it comes to shoehorning it into embedded systems. Go doesn't seem capable of competing with C/C++/Rust where they really shine ("low-level"). The memory and performance profiles are significantly different. The TinyGo team recognizes the difference between Rust and Go (being two modern "systems languages") and suggests other benefits of Go that aren't efficiency/performance. To each their own, but there's a niche where almost all languages aren't suitable.
Like I said, I think the main issue here is terminology, but you brushed that aside. From a cursory glance, Xerox PARC definitely did low-level programming, so I'm not sure what "the gatekeepers' point of view" is supposed to mean.
If by low-level programming you mean implementing the CPU microcode necessary to have a full graphical workstation implemented in Smalltalk, Interlisp-D, Mesa, or Cedar, then yes.
What is low-level?
Coding in C back in the 8- and 16-bit home computer days, when C was useless without piles of compiler extensions beyond K&R C, requiring either inline Assembly or an external Assembler, while even BASIC provided better primitives for hardware access?
I'm not saying C/C++/Rust are the only low level languages. I also did say the existing terminology is bad. When someone says something is the job for C/C++/Rust, there's a certain niche that languages like Go and Python will probably never reach. I don't begrudge that traditionally slower languages are always getting faster, but as far as I can tell, it's ridiculous to say that you can create drop-in replacements for existing high-throughput C++ code in Go. There will be caveats. It seems fair to say that there will always be a need for languages like C that Go can't match. Just because the meaning of "systems programming" has shifted doesn't mean Go suddenly eats Rust's lunch. I'm not trying to gatekeep; I'm just only interested in a niche that I don't realistically see languages like Go breaking into. If performance (and especially memory usage) requirements aren't so strict, Go is perfectly fine, but I don't jibe with that kind of mindset for infrastructure. Seems like an excuse to make everything bloated (Electron being an infamous example).
Regardless, I don't think anyone in Go's first couple of years would have described its goal as being "designed for cloud infrastructure", as the parent did.
I agree with you. To me that sentence broadens its applicability. "Systems" narrows it a bit, but he didn't intend it to narrow that much. People misunderstood this use of the term: in this context, it's more related to systems thinking than to operating systems. systems =/= close to the metal
Readability and comprehension are very important. You want to minimize the translation between business and technical worlds in this space. C# worked well in the past if you didn't get distracted by all the frameworky tangents but it doesn't seem like there's as much new work in that language.
I could not find any evidence of this slogan or stated design scope limitation.
On the Go web site there wasn't anything in that direction, for example.
Closest I found by web searches was references to this 2014 post from an analyst company: https://redmonk.com/dberkholz/2014/03/18/go-the-emerging-lan... - in there it's opined that Go has a strong position in cloud infrastructure (but doesn't imply that it's designed or suitable only for that).
From what I've read, it seems like if you can learn Rust, Go would be a quick afterthought to pick up. I'm just not sure, if it were my own money paying for dev expenses, whether I would choose it.
That is very fair. I was useless the first month of Rust. I would say that new rust devs are basically in training and unavailable for ~4 weeks. Rust mastery takes much longer than that, of course.
However, the type checking in Rust is fantastic. The confidence in correctness that you have when it is complete is much higher than many other languages.
> I'm just not sure, if it were my own money paying for dev expenses, whether I would choose it.
Use case would probably be a major factor in the decision, e.g. picking up Rust from scratch to write a basic network service would probably be stupid unless it had very strict requirements known in advance (and even then you might want to write it in Go anyway, in case that works out, or to quickly uncover issues you hadn't predicted).
If you know in advance you have very strict resource constraints however, or you really need the reliability (of extensively encoding things in the typesystem), or you’re writing native extensions (python + rust via pyo3 is great, python + go is something I’d avoid unless the entire solution already existed in Go and I could expose it to python over a pipe), etc… then the investment would likely be more worth it.
Basically, how much return would you expect, Go requires very little investment, Rust a lot more.
This is a really interesting reply. I've thought of building workers in Go/Rust to help process the harder Sidekiq jobs in our Ruby solution. I'm kind of scared of making a Frankenstein, but yeah, it might actually be the right way in some respects.
Have you read the `semver` spec? It's very small and approachable: https://semver.org/ . If your project's dependencies follow semver, you can reduce your chance of dependency hell. And if you want to encode the date into a semver version string, that's possible: https://semver.org/#spec-item-10
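For example, a small sketch with the `semver` crate (the crate choice is mine, not part of the spec; API as in its 1.x line) showing a date riding along as build metadata per spec item 10:

```rust
// Cargo.toml: semver = "1"
use semver::Version;

fn main() {
    // Build metadata after `+` is ignored for precedence (spec item 10).
    let a = Version::parse("1.4.2+20240115").unwrap();
    let b = Version::parse("1.4.2+20230601").unwrap();
    assert_eq!(a.cmp_precedence(&b), std::cmp::Ordering::Equal);

    // The crate's total `Ord` does tie-break on build metadata, so the
    // two versions are still distinguishable when sorting.
    assert!(a > b);
}
```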
But that forces a very specific schedule on projects and ruins the semver meaning of what a version entails.
I really dislike date based versioning unless the major version is year aligned and is truly a big jump forward (like Jetbrains do it).
Otherwise how do you know when a breaking change happens? Or if a version is significant or not? With 1.69.1 I know how .1 relates to .69 and to the 1.
Calver I think is more useful for applications that get continuously developed and released, honestly. Like, whenever you are using a webapp that is just self-contained thing, semver doesn't necessarily mean much, it basically just boils down to "Big version number bump = more recent."
I think this is one of the reasons programmers get mad at, like, Google Chrome or Firefox or whatever for having version number 100, but literally nobody else cares. Version numbers just fundamentally mean something different to engineers, who think of consuming programs the way they do consuming an API, versus users, who just think "bigger number = more recent = good."
> Otherwise how do you know when a breaking change happens?
Linux is a good illustrative example of how this works. How do you know when a breaking change in Linux happens? The answer is the developers define a stable boundary (userspace) and stick to it, whether or not they use semver, and the other components are out of scope (kernel APIs). They simply have a different criterion for where the line gets drawn. This makes some people mad and some people happy, but it's not exactly new.
Similarly, in most of the calver applications I've seen and used, you typically just don't do huge breaking changes, you just do gradual migrations of existing things, with warnings, rollouts, cutovers, brownouts, etc to control what happens. So the answer is "when do they happen" is "they mostly don't." The release process and guarantees are just different.
Same with "is a version significant or not." How do you know if a version of Linux is significant? You go read the release notes. They come out once every 3 months, so you always know when to look. When you do calver based releases, that question just matters a whole lot less in some sense. Did a feature not get in this time by a hair? It will just get in next time, so there's no need to squeeze yourself or sweat bullets about extending your runway. Sometimes a lot of big features land in one release and sometimes they get held back. It's just how it works.
I don't think calver is very good for actual programmer-facing APIs, necessarily, but it can work elsewhere. E.g. webapps tend to have different versioning schemes and techniques for their REST APIs, so clearly semver isn't the only possible technique. Semver does help introduce some mechanical semantics that can be tool-checked, which is really useful for Rust, but in a lot of cases that isn't so relevant.