What a good debugger can do (werat.dev)
315 points by werat 79 days ago | 205 comments



I use GDB almost daily, and used Visual Studio pretty deeply for many years (and still a little bit, nowadays), but I must say I am still a "printf debugging" aficionado (or better: real logging).

I like many of the features that debuggers can provide, and I think this is a good article to set aspirational goals for what is possible. But my lived experience has generally been that it's a buggy, fuzzy, moving target in terms of overall user experience. GDB, bless its heart, is still somewhat a pile-of-bugs itself. I frequently have it crash, or otherwise get into a "so confused it needs to be restarted" state. Visual Studio is much more stable, though less powerful, too — less scriptable, at least.

Perhaps some day, debugger tech^H^H^H^H UX will advance to the point where it really delivers on its promises consistently and solidly. But after 20+ years in software, I am not holding my breath (: There are some situations where a debugger is just the thing you want (eg: hardware breakpoints can be a life-saver), but I find that's more the exception than the rule, at least in the corners of the software world I've worked in.

Compare the above with logging: It is simple and trustworthy. As you get good at designing a solid logging system, and interpreting the results, your life just gets better and better. If you get good at using a debugger, you can still be hit with gnarly weird behaviors, debugger apoplexy, optimized-out-code wackiness, etc. that are hard to control or predict.

Anyway, "debugger vs. logging" are often presented as some sort of either/or choice, and in some sense it is (you only have X time to spend; where would you like to spend it?), but in many senses it is not; both have their strengths. I just find that the cost/benefit for me has generally favored logging and testing, over the years.


I think one problem I have with gdb and to a lesser extent lldb is that when using them with an IDE, it's just a janky connection between a command-line program and a GUI program. Back in my Mac OS 7-9 days, using something like CodeWarrior, or even Visual C++ on Windows, stepping was so fast and smooth. It would respond instantaneously. I can now press the step button like 5 times and watch as it steps to each line. My machine is a bazillion times more powerful, so I have to wonder if that interface between the UI and the debugger is part of the problem. I realize there's also protected memory and separate processes, and all that. But man it's insane how much faster our machines are and how much worse just single stepping is.


You might want to look into the "TUI" mode in GDB. It's an ncurses-style interface, where it shows you the current code, and current line, and you can step along "visually." It is fast. Press Ctrl-L to re-draw the screen when the display gets messed up.
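For reference, you can either start straight into it or switch an existing session over ("./myprog" is just a placeholder; exact commands and bindings may differ slightly between GDB versions):

  $ gdb -tui ./myprog
  (gdb) layout src
  (gdb) layout split

`layout src` switches a plain session into the source view, `layout split` shows source plus disassembly, and Ctrl-X A toggles TUI on and off mid-session, if I remember the binding right.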

...

Then, you may notice that TUI mode steals certain keyboard commands, such as up/down arrow to scroll the source listing, rather than navigating command history.

Then, you might enable Vi-mode for GDB's readline, so you can use "j/k" to navigate command history, even in TUI mode. Plus Vi-mode is just better (:
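(For anyone following along: vi-mode is a readline setting, so a line like this in ~/.inputrc applies to GDB, bash, and anything else readline-based:)

  # ~/.inputrc
  set editing-mode vi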

Then, you may find that certain things don't work quite right in Vi-mode, because it's not the default and doesn't get as much testing. But you fuddle along because it's better than the alternative.

And thus you have arrived at my basic situation (:


I shouldn't have to sacrifice either good UX or speed of single stepping on a modern machine. It says a lot that that's even a real suggestion in our industry.


How long before a Lua implementation of gdb for neovim outperforms gdb in Vi-mode?


I learned C++ on CodeWarrior (and then vi) and its debugger made me require a debugger whenever I write code. I am currently using PHPStorm (or any JetBrains) and its XDebug debugger.

For Javascript, I use Brave/Chrome debugger.

...And I should use logs more.

I am always in shock when I see people work on large projects with no debugger. I do printf/console.log all the time but HOW DO PEOPLE NOT STEP-STEP-STEP??


> HOW DO PEOPLE NOT STEP-STEP-STEP

I think I have felt some of what you are feeling. The debugger is ... seductive (: For one thing, it is an educational experience, as you step along you are learning what your program is actually doing, which is a good mental checkpoint, especially when I was a more junior developer. I think a debugger as an educational tool is a big point in its favor. And so you may develop a positive relationship with your debugger, and want to bring it with you wherever you go.

But you may find, as I did, that the sparkle of that relationship fades somewhat in time. You will be better at knowing what the code does, without needing the debugger to tell you. You will hit situations where the debugger cannot help you, or is less helpful than the alternatives. You will get better at structuring your code so certain classes of problems simply don't happen as much, rather than using a debugger to peek-and-poke to fix things all the time. You may find the debugger to be a relatively exhausting high-touch real-time experience, compared to thinking about issues more at your leisure, or getting a larger understanding from other sources.

And so, with experience, and hopefully with an open mind, the debugger will settle into its proper station, in the pantheon of your various debugging aids. Not bad, but not the end-all-be-all either.

I am being somewhat flowery in my language, but all I am saying is: it's okay to rely on the debugger, at some stages. Give it time. Just don't forget about the alternatives or denigrate them needlessly. You want to build a big happy family of technologies and techniques (many of them residing solely in your mind) that can all work together.


“HOW DO PEOPLE NOT STEP-STEP-STEP”.

For me it’s because that is slow, tedious and exhausting. Often you need to cover a lot of code surface area to find a bug.

Worse, the bug may be something reported in production in a giant pile of code.

A good set of logs will go a very long way to at least narrowing down where the bad behavior is coming from, and is generally scalable.

The debugger is useful, but generally only at the extremes of “new to the code, totally clueless” and “really advanced bug that I need to bring out the big guns for”.

The logging approach clicks in really fast the minute you're talking about a multiprocess program, be it microservices or just plain old SOA.


This reminds me of one time I had a co-worker explaining to me how awesome the logging system he wrote was. I expressed my skepticism about using a complex one-off system for debugging purposes, on the basis that said system itself needs debugging. Then, no more than a couple of months after that conversation, a test breaks. The test switches between two code paths over some condition and emits a different message via said logging system; both messages should be appearing, but now there is just one. This implies either the condition is not detected or it just does not happen, perhaps a system issue? Could the new hardware revision we switched to recently be faulty? Nope, the older build of the same test runs fine.

Git bisect to the rescue, what do you know, it breaks on the commit adding another improvement to the logging system. The second message is getting eaten by the logging system, that's all.

I don't think this persuaded the co-worker to stick to the more common tools, but it definitely fortified me in my belief that a buggy tool you made yourself is in no way better than a buggy tool that thousands use all the time.


I have been programming professionally since 1986 and still nothing beats logging, or having chunks of specialized code to do dumps of some data to files so you can analyze them with tools better suited for the purpose. Ideally, though, you don't want to have to modify the code to diagnose the problem, especially if it's a crash caught in the wild and you have a chance to live debug it. I would love more useful visualization tools in the debuggers (mostly VS for me); that would be very helpful in all situations, like debugging crash dumps.


Most of the data I work with can't be visualized by printing. (I mostly work on 3D and video.) But I have found it invaluable to log data, then read it into another program. Sometimes I can just bring it into a spreadsheet and plot it, other times I need to write something to display it or analyze it. It's definitely an under-utilized technique!


> It is simple and trustworthy.

This sounds so naive for someone with 20+ years in the field...

Linux, for decades, couldn't get logging to the point that it at least doesn't lose messages (the problem with tail / logrotate that is quite obvious once you think about it, but it took many years to give up the approach).

I recently hit a bug where NVidia's driver abuses Linux kernel logging in some tight loop by spamming log messages at insane speed (happens when you have two video adapters, Intel and NVidia and an external monitor). An interesting side-effect here is that Linux logging tries to throttle loggers who output too much, so, from the log you cannot tell what's happening (because even though the system is burning calories trying to print a tonne of messages, nothing really gets printed).

Several iterations ago I worked on a product where logging had to be implemented as writes to a self-styled circular buffer in shared memory, and because there was too much info printed too quickly you only had a few seconds' worth of logs before a system crash... on a good day.

Not to mention the fun of stitching together logs coming from different places in your system with separate clocks.

Even simply processing hundreds of Gigabytes of logs on its own isn't a trivial task.

----

Many things are simple, when your task is simple. Logging is just one of those things.


> Many things are simple, when your task is simple. Logging is just one of those things.

I agree with much of what you said, and of course "logging" is not just a single point in the solution space — there is some function "troubleshooting_pain = f(your_project, your_approach)". I was trying to say that for "your_approach=logging" that function tends to return smaller values than for "your_approach=debugging", all other things being equal, in my experience.

Whereas your comments seem more oriented towards the "your_project" factor. Of course using logs is harder on a distributed system. But so is using a debugger, or just about anything else.

Perhaps I should have said "It is relatively simple and trustworthy, even if it can still get hairy at the extremes."


Both interactive and declarative debuggers work better in distributed systems than logging, because they can observe events as they happen and don't need to recreate the order in which they happened from records, which are very hard to make chronologically consistent.

Things like EBPF (which may implement sort of a declarative debugger) are perhaps the only tool you can hope to use in high-volume and high-frequency systems.

If I could only choose one technology used for software diagnostics, I'd choose debuggers over logging. Debuggers need more effort to develop them, and they aren't very good (yet), but they have potential. I don't believe that logging can be substantially improved to deal with difficult problems.


One thing that can be pretty nice which is kinda neither traditional debugging nor logging is DTrace (or similar). Basically event tracing on steroids. Maybe EBPF is in that vein? I don't have much personal experience with it, but I have heard some stories of good success on busy production systems.

I guess my (limited) experiences with distributed systems are different than yours. The notion of "pausing" the system to step through things interactively was usually untenable. Do you stop the one node that shows the issue, and let the others run, getting into who-knows-what shared state? Do you somehow attempt to stop them all, and hope/pray that they all are in the right state to make your cross-node analysis meaningful? This was mostly on Apache Spark, where parallelism was the name of the game. Maybe for some kind of long-running distributed system like Erlang it's a different story.


> Maybe EBPF is in that vein?

Not just that :) It was "inspired by". Well, it's the same idea.

> The notion of "pausing" the system to step through things interactively was usually untenable.

That's not what EBPF would be used for in such a system. You'd write a bit of code that can be loaded into a running program and executed as a particular condition occurs. Like how you can attach some code to evaluate on a breakpoint in many other debuggers.


Calling Visual Studio 'less powerful' than GDB is telling.

When you have a bug that doesn't happen until hours into your program execution, being able to set a breakpoint, edit-and-continue, and set the next statement is worth its weight in gold. You can solve in the moment what would take literally days, even weeks using a logging approach.

Sure they both have their place but having a debugger is indispensable, and Visual Studio's debugger is the undisputed king.


> undisputed king

Well, I personally do agree it's a better, more solid overall product. And also agreed that edit-and-continue in some highly stateful situation can be ::chef's kiss::

But I think there is room to dispute its kingliness (:

For instance, scripting GDB with Python is quite nice, on occasion.

Also, staying on Windows, WinDbg has some killer-feature functionality of its own.

Or back with GDB, take, for example, this classic post by Jonathan Blow, where there is some good back-n-forth discussion about GDB vs VStudio, with varying opinions:

* https://news.ycombinator.com/item?id=5125078

Plenty of disputers out there, I daresay.


> edit-and-continue

Breaks on enough codebases for me to not be in the habit of relying on it.

My own "killer feature" of VS debugging is being able to open up a crash dump, have VS auto-download matching pdbs from a symbol server, and auto-view the correct source file version/revision thanks to source indexing - without checking out by hand and turning my build stale. Killer ergonomics.

Sometimes I'll resort to windbg/cdb for memory pattern searches, automation, untangling wow-mangled crashdumps, etc. but VS itself is a nice first resort.


I've been wanting that symbol server style workflow for GDB for years. It looks like all the parts are there but I haven't found anyone who has plumbed them together into a complete system yet.


20+ years is... a lot. I do agree with the sentiment though. At first I was printf debugging because didn't know better. Then discovered debuggers and my mind was blown. But when I reached the point where I hit bugs that would magically disappear when running the program through a debugger, I finally understood that there's value in becoming good at both debugging styles.


Debugging issues with multithreaded code can be difficult because you could be looking at a race condition that only happens when the code is running at full speed, and the debugger pausing one or all threads could give you a different experience than the real world.


(Debug) logging can also change the frequency/order/synchronization of threads, masking race conditions.


Yeah, I have fought with this occasionally. It's one of the handful of "gotchas" you need to think of when designing logging, and interpreting its results.

Sometimes you can improve the situation by sticking things in a bigger memory buffer and tightening control of when things get flushed to disk, but there's always that fundamental "observing the system will change it" problem. Similar issues arise with a debugger too, of course.


Yeah, I had to debug some code with 5 threads that were supposed to be synchronized through the use of several semaphores; that was a bear to pin down.


Debuggers are chronically under-invested in. A lot of the pain points could be fixed just with more money and staff. It's a chicken-and-egg situation --- people don't use debuggers that much, for various reasons, so there isn't the investment, so debuggers don't improve, etc etc.


> Compare the above with logging: It is simple and trustworthy.

Is it though? Add concurrency and it's no longer simple. Have an operation running millions of times a second? Good luck even logging that fast enough. If you start adding logic to print only sometimes, now you get a poor-man's debugger.


Yes, see the other sequence of comments[1] where I clarify that a bit. I didn't mean it as an unalloyed "always maximally simple and trustworthy in all cases" claim. Sorry for the ambiguous wording.

Yet for me, in those high-concurrency/high-frequency situations, I feel like a debugger is also a pain to use and trust? In truth, I've had best luck solving those sorts of issues by old-fashioned thinking through it / talking it over with someone / creating hypotheses and testing them sort of analysis. And even for that, logging (or profiling, event tracing, and similar "dump lots of data" approaches) tends to produce better "see the big picture" information than a debugger's laser-like focus. Might come down to one's personality somewhat, too (:

[1] https://news.ycombinator.com/item?id=35101564


Great comment. Since it sounds like your jam, do you have any suggestions for reading on logging best practices in this context?


Hmm, I don't have much in the way of links; but I can brain-dump some of my accrued personal opinions, for what they're worth:

* Relatively early in your project, incorporate a "real" logging system, and start leaning on it. For me, in C++, I have most recently been using 'spdlog'[1] fairly happily. It accumulates data in memory and dumps it from a dedicated logging thread at a configurable frequency. That approach helps avoid logging getting in the way of your main program performance, but it has downsides (see below).

** You want logging to be easy to add wherever you need it. It should be about as easy as adding a literal "printf" statement; minimize the barrier to making use of it.

* Be able to slice/dice your logs by category of information, not just debug/warning/error. I find myself often making special "SYSTEM_XYZ_LOG(...)" macros, for different systems. Then, with a compiler flag I can enable/disable output for different facets of the program. Some of those I might leave on, and others are so specialized (or performance-impacting) that I only enable them when needed. (There is a rough sketch of this idea after the list.)

* It can be nice to have several log files, one per system (eg, in a video game: graphics messages go to one file, physics to another, UI to another, etc). However, it can also be useful to have a "combined" log that shows everything intermingled, so you can see the overall timing of things without having to correlate timestamps across files. Ideally your logging system supports having both.

* Along with the above, develop an ergonomic way of viewing various logs at once. My approach is pretty simplistic: I have a bunch of XTerm windows, and some of them are displaying logs, sometimes filtered variously (eg: `tail -F foo.log | grep -C5 interesting-pattern`). You could get fancier with tmux or something.

** Aside: the general skill of being able to wrangle lots of data from various text files is a good one to develop — not just for logging. I find this much easier to do in a Unix-style environment than on Windows.

* There is a tradeoff between logging synchronously (eg: writing to STDERR, unbuffered) or accumulate-then-flush (such as a buffered STDOUT, or a logger thread approach like in spdlog). If your program crashes, you might not see the most recent (and thus most relevant!) log messages in the latter approach. I usually have some "write to stderr right friggin' now" function for special cases when I need it. However, if you run your program in a debugger and the crash happens, you might be able to step the debugger to let the logging thread dump out whatever is not-yet-flushed-to-disk. I have had good success with that; I just have to remember to do it.

* When you generate too many logging messages, there is a tradeoff between "flush to disk to free up memory" and "just allocate more memory". If you are generating huge amounts of logs, it is possible you will use too much memory on the system, in the latter approach. But in some cases, the latter approach is faster. I've had good enough luck using the former approach and just using a "plenty big" memory buffer.

* If you are logging across multiple machines, you usually need to correlate events via timestamps. So, make sure your clocks are synchronized, or at the very least be aware of this issue (can be a pain).

* Don't be afraid to allow big sizes for your log files, at least when debugging. Storage is cheap, grep is fast. Depends on the scope of your project, of course.

* There's a difference between the logging for your codebase in general, and logging for very specific debugging purposes. I find it handy to have a special log for the latter case, which generally is empty, but when I am investigating something, I can write to that log and monitor it specially, to cut down on having to sift through a lot of noise. It is just the "debugging the current problem" log, and I delete those log statements once I am done. This is essentially the same as using ad-hoc "printf" statements, but using the real logging system, with the benefits that affords.

* Ideally, your log system should not construct its message unless it is actually wanted. Eg: if your log level is "warnings or worse", then LOG_INFO("foo={}", someValue) should not perform any string building work. This seems fairly common today, but some logging APIs don't get this right.

* In C/C++, logging should go through a macro, so it can be compiled out (or compiled out beyond a given severity level) depending on your build. spdlog supports this, and it is fairly easy to write your own, typically.

* A nice-to-have feature is to be able to only log the first N of a duplicate message, when desired. Sometimes (especially when things go wrong), your program will produce an outrageous amount of the same log message, which just drowns out the useful information and potentially use lots of memory. Some logging systems have explicit support for this concept. You can also roll your own (eg: by adding a timer or counter guarding some logging).

* Another nice-to-have feature is having a notion of pushing/popping scopes for logging (eg: log4j's "NDC" concept). In your code, this would correspond to lexical scopes. In the log statements, it would come across as some sort of "toplevel>outer>inner>" prefix or so. This is one thing I wish spdlog had.

** Aha! In digging up the docs for NDC, I found this[2], which does mention a book for your reading list: "Patterns for Logging Diagnostic Messages" part of the book "Pattern Languages of Program Design 3" edited by Martin et al. I cannot vouch for it.

And, as mentioned variously in this thread, logging is just one tool in the toolbox. Don't forget about performance counters, event traces, writing good tests, the debugger, etc. Good logging requires some up-front cost, but generally worth it, IME.

[1] https://github.com/gabime/spdlog

[2] https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4...
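To make the "per-system macro" and "compile it out" points above a bit more concrete, here is a rough sketch of the kind of thing I mean, assuming spdlog; the GFX/PHYS names and the ENABLE_* flags are made up for illustration:

  // logging.h -- hypothetical project-wide header wrapping spdlog
  #include <spdlog/spdlog.h>

  // Per-system categories, each toggled by a build flag (-DENABLE_GFX_LOG etc).
  // When a category is disabled, the call (and its arguments) compiles away
  // entirely; when enabled, spdlog still skips the formatting work if the
  // runtime log level filters the message out.
  #ifdef ENABLE_GFX_LOG
  #  define GFX_LOG(...)  SPDLOG_INFO("[gfx] " __VA_ARGS__)
  #else
  #  define GFX_LOG(...)  (void)0
  #endif

  #ifdef ENABLE_PHYS_LOG
  #  define PHYS_LOG(...) SPDLOG_INFO("[phys] " __VA_ARGS__)
  #else
  #  define PHYS_LOG(...) (void)0
  #endif

  // At a call site it is as cheap to type as a printf:
  //   GFX_LOG("uploaded {} textures in {} ms", count, elapsedMs);

In a real setup each category could also be routed to its own spdlog logger/sink, so you get the per-system and combined files mentioned above, but that is the general shape of it.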


Thanks for taking the time - I really appreciate it and lots to think about. This whole thread has definitely swayed my views on the relative merits.

Always enjoy reading people enthusing about tooling they know well.


> a good debugger supports different kinds of breakpoints, offers rich data visualization capabilities, has a REPL for executing expressions, can show the dependencies between threads and control their execution, can pick up changes in the source code and apply them without restarting the program, can step through the code backward and rewind the program state to any point in history, and can even record the entire program execution and visualize control flow and data flow history.
>
> I should mention that the perfect debugger doesn't exist.
People pretend Smalltalk doesn't exist?


Smalltalk images offered introspection, debugging of running processes, etc., but some of the later features listed seem quite impractical to add to the language. "Record the entire program execution," for instance.

That's more or less the point the author makes on the next line: "Different tools support different features and have different limitations." The ideal debugger does not necessarily make for the most ideal programming environment, just as a plane made out of steel is great for structural stability but may not be a good plane.


>> People pretends Smalltalk doesn't exist?

And Factor and many implementations of Lisp too!


>can step through the code backward and rewind the program state to any point in history

This seems impossible for anything that modifies external state.

Say 1. Open database connection 2. Commit data 3. Close database connection

How would you rewind right before #2 if you've already completed step #3? You'd need the socket/connection you already closed

Unless that means something more like "keep a running record of program state over time"


It can work but it depends on the system. One field in which it's very useful is game development. I'm working on a language that has time travel debugging as a feature, so you can rewind a game back to a previous state. I've found it useful when there's a momentary bug that would be hard to recreate. With time traveling, you pause, rewind, inspect state, fix the bug in situ, then resume from that point to check that the behavior is correct.

Here's an example: http://docs.mech-lang.org/#/examples/bouncing-balls.mec

If you want these kinds of features in other systems, they'll have to be architected to support them. For example, the external database will have to be rewound as well. If it doesn't support that feature, then cool language-level debugging features won't be as useful.


Have a look at https://rr-project.org/ (features and motivation are on that page). (Also mentioned in the article.)

It works on recorded state. It's not about executing a program again, it's about root causing a failure by going back in time after the failure happened. You can do things like start with a crash, retroactively set a break point and reverse run back in time until you hit it.

It's for real world applications. It was written specifically to debug Firefox, and has since been used for other applications of similar size.

It's basically GDB with extra commands, so very easy to use and learn if you know GDB. Highly recommend.
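For anyone who hasn't tried it, a typical session looks something like this ("./myprog" and "suspect_function" are just placeholders):

  $ rr record ./myprog
  $ rr replay
  (rr) continue
  (rr) break suspect_function
  (rr) reverse-continue

`rr record` runs the program once while recording it, `rr replay` drops you into a GDB session on that recording, `continue` runs forward to the crash, and `reverse-continue` then runs backwards in time until the retroactively-set breakpoint is hit.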


Time travel debugging generally works on record-and-replay, with there being a lot of research into figuring out what you need to record to get deterministic replay. Recording the results of system calls is a necessary step [1] to get deterministic replay, and that lets you replay even things that rely on modifying external state like database connections.

[1] Necessary, but not sufficient. Multithreaded applications require a lot more care, and rr relies on very accurate hardware counters to get multithreaded executions working correctly (and not all hardware supports these hardware counters!).


Let's say out loud the part that too often goes unstated:

You can rewind & replay a fixed execution of the program.

All the external interactions are recorded, and the replay can only literally replay what already happened.

You cannot change a variable or edit the code and branch off into a different execution path that talks to the outside world differently; the rest of the world did not get rewound with the program.


> Say 1. Open database connection 2. Commit data 3. Close database connection
>
> How would you rewind right before #2 if you've already completed step #3? You'd need the socket/connection you already closed

The key thing is that you're really rewinding the world, as visible to the program.

The program doesn't actually know what the network transaction with the database was, it just knows what syscalls it made and what results they returned. If, whenever it gets to talking with the database, you provide the same result as last time then it can't tell the database is gone.

This is self-supporting: if you do this consistently with external sources of information then the program will never go down any new code paths, guaranteeing that you'll always have a ready answer recorded when it needs it.


Yeah it's not perfect, but it often works well enough. Just another benefit of isolating code into stateless stuff.

One use case I have is for instance debugging a bug that's hard to trigger. When it finally happens and my breakpoint is hit, I can edit the code, hot swap it live, drop the current frame and then make it call my updated function again as if the original call never happened.


My guess is that you could attempt something like this by recording system calls as your program executes and then replay them in whichever order you need.

You would need a lot of storage space for some programs, but in simple cases that might still be useful.

So, in your example, playing the program back wouldn't really try to read from a closed socket, it would just hit debugger's database of stored system calls at a particular point and retrieve the stored response from that call.

This could get weird though if the program modifies itself as it executes. Not sure what to do in such case, but maybe there's a way to deal with it in special cases, not in general...


http://undo.io and http://rr-project.org both support self-modifying code.

I am a co-founder of undo.io, many of our customers do this.

It's not as bad as it first sounds because the replay of the program will modify itself deterministically. (Though as always with this stuff, there are some gotchas.)


Only the program state gets rewound, not anything external to the process (or whatever is hosting the program). Just like hitting a breakpoint only pauses the program, not the external world.


Yes, that's what it means. Ideally you'd record the state of memory and CPU after each instruction. In practice you can take snapshots at regular intervals and hook system functions to record their inputs and outputs. If a call has to be replayed the debugger intercepts it and gives the debuggee a recorded result.


Exactly. Except there are sources of non-determinism other than syscalls. Namely asynchronous signals, thread ordering, shared memory and non-deterministic instructions. They can all be dealt with though.


Is there a ST implementation with a time-traveling debugger? That's probably the one feature I miss when using sbcl/slime.


Depends what functionality you think "a time-traveling debugger" must provide, Smalltalk implementations usually provide something like this:

https://cuis-smalltalk.github.io/TheCuisBook/The-Debugger.ht...


To qualify as "time-travelling" you should at a minimum be able to step and run backwards and inspect the values of variables at a previous point in time.

Ideally all debug features (breakpoints, watchpoints, tracepoints &c.) will work running both forwards and backwards. To meet the standard of the "perfect debugger" from the TFA intro, I would say you should be able to run backwards, modify a function, and run forwards again. I'm not aware of any debuggers that let you do this.


Yeah. That's the normal way to work for a Smalltalk debugger.

I remember once debugging a thread without killing it. I asked a customer to click on a link that would hit a bug, while I had remotely opened the IDE that was serving that user session and set a conditional halt. I saw it halt, fixed the bug, saved the method with the halt removed, and let the thread run. All the user saw was a long request that ended up being served instead of hitting that bug.


I don't see any "running backwards" mentioned in this story. I'm familiar with "break, fix, restart" debugging from Lisp (which had a lot of cross-pollination with smalltalk)


> Ideally all debug features (breakpoints, watchpoints, tracepoints &c.) will work running both forwards and backwards.

Executing arbitrary code in the program's context is also a regular debug feature, but I don't think many reversible debuggers allow this.


Hence "ideally"


rr lets you call program functions just like gdb does (using the same gdb commands, in fact). The limitation is that any state changed by those functions is simply dropped when the function ends. It's still super-useful for dumping program state etc.

Pernosco supports this too.


afaik Smalltalk implementations usually provide step backwards but don't "run backwards".

"The stack of (framed) execution contexts gives a history of the computation so far. You can select any frame, view instance values in the receiver, view the arguments and method variables at that point."

So step backwards, modify a method, step backwards before that method send, and resume execution with the modified method. (Note: resume rather than restart, so the modified method has the preserved context unless we manually edit that context.)


Which of these can't visual studio or intellij do?


hot code reload is very constrained in what can be changed.

It is great that it exists, but it isn't Smalltalk where at any moment the debugger can come up, you can change the world and hit continue without any issues, with all live instances updated to the latest set of changes.


IntelliJ can do time travel if you install the plugin: https://plugins.jetbrains.com/plugin/14767-time-travel-debug...


Most of it can be done by the C# debugger, especially in the paid versions of Visual Studio.


Thanks very much for posting this article. I found it very intriguing. I read it carefully but I didn’t visit all the links or watch all of the in-line videos.

I would say that for the most part I am a printf-style debugger. I remember reading some years ago (on HN, I believe) about time travel debugging in - I believe - C# and I was really impressed but I don’t code in C# so it soon left my mind.

I have a deep appreciation for mastering one's toolset. I can't count the number of times that I learned something new about a tool (language, editor, shell, browser, whatever) that I use daily that changed my workflow - it has happened so many times. And as with all things code, sometimes new features are added.

I am going to try to make a point to re-read this article later and to visit each link and glean what I can.

I mostly code in Go. I wonder, does anybody know how much of this stuff might be supported there?


For time travel debugging in Go:

The Delve debugger for Go supports debugging rr traces: https://github.com/go-delve/delve/blob/master/Documentation/...

Undo (who I work for) maintain a fork that debugs our LiveRecorder recordings: https://docs.undo.io/GoDelve.html

Either rr (https://rr-project.org/) or our UDB debugger (https://undo.io/solutions/products/udb/) can do some time travel debugging of Go programs via GDB's built-in support for Go. I believe its weakness is in support for goroutines, since they don't map well onto its idea of how programs run.


The same tools don't work on Windows either. Delve works well as a remote debugger against Go on Windows Server, though. Another fun one: Go won't create dumps on panic on Windows when GOTRACEBACK=crash is set.


Not tried it unfortunately, but there's a time-travel debugger for C(++) somewhere.


I'm missing a few other items on this list that I use daily in Pharo

- being able to open the debugger directly from the program. What the "debugger" command does in JavaScript. Conditional breakpoints are easier to work with if they can be directly included in the source code

- be able to open another debugger on top of the code I see in the debugger. I'm doing the stepping, I'm at a certain point in the method, and I can simply mark the part of the method code that has already been executed or is yet to be executed and start stepping it with the next debugger. This is especially useful for code without side effects. Then I can continue stepping the original method.

- be able to have multiple debuggers open and compare their status

- to have more freedom in the visualization of values and objects. Having them open in other windows independent of the original debugger, being able to interact with them using code (which can be debugged independently)

- be able to save the state of the application and debuggers so that for very hard-to-reproduce errors, I can easily recover the hard-to-retrieve state and experiment with it repeatedly without worrying that I won't get the state again right away


> Conditional breakpoints are easier to work with if they can be directly included in the source code

The following code (probably - I haven't tested it) lets you create a conditional breakpoint that fires only when a certain method is on the call stack, by tracing down the call stack:

  debuggerIsInMethod: aMethodName
    | ctx |
    ctx := thisContext. "grab the call stack"
    [ ctx isNotNil ] whileTrue: [
      (aMethodName asSymbol = ctx method selector) ifTrue: [ ^ true ].
      ctx := ctx sender ].
    ^ false
So in another method, one that's deeper in the call stack, you can send (call) it by doing:

  someMethod
    "do work"
    (self debuggerIsInMethod: 'aMethodHigherUpTheCallStack') ifTrue: [ self halt ].
    "do more work"
Use case: Suppose someMethod is being sent/called from everywhere, but you only want to debug it when aMethodHigherUpTheCallStack is sent/called. With this conditional breakpoint (in code), you can :)


Most record-and-replay debuggers let you store the recording to debug as many times as you want.

Pernosco (which builds on rr) has some really powerful state-recording and state-sharing UI for long-term and collaborative debugging. https://pernos.co/about/notebook/


This is what a good debugger can do: https://twitter.com/yiningkarlli/status/1628612150041382912 (Tomorrow Corporation)


This was on hacker news a few days ago but there wasn't much interest, which is surprising.

Setting a data breakpoint, then rewinding time, then fixing the error in code (while game is running), hot-reloading code and continuing the run with it fixed. All from a recording of a panic dump from another user.

This is next level stuff. Extremely impressive.


Wow I'm just 5 mins into the video and this looks ridiculously powerful. This is essentially an integrated development environment for game development with everything every programmer dreams about!

How do I learn to make such tools? From what I heard, everything in the video is built in-house, including the language and compiler. Although I understand many game studios do similar stuff, this is by far the most impressive I've heard about.


Great demo! The article also has a link to it.


I have mostly found that the people who dismissed debuggers tended to be more Unix/Linux people, probably because raw gdb is such a huge pain to use. Windows and Visual Studio developers, where the debugging experience is so easy, tend to sing the praises of debuggers. I wonder if it's a bit of sour grapes for the Unix/Linux crowd?


As much of a Linux zealot as I am, you make a valid point. In college, I had a TA help me debug a C program on a VAX, and he blew through finding the problem using its native debugger, and wouldn't explain what he did. (I had an O where a 0 should have been. Or vice versa. It was a worse problem back in the days of actual terminals. He found it in literally 30 seconds after I had been bashing my head on the printout for a couple hours.) Anyway, it took me probably 10 years of professional coding before I discovered gdb, and then realized what that TA had done. All at once, I realized how far you could get without an actual debugger, and also why he never bothered to try to explain it to me. I wasn't ready. Not by a long shot. Seeing the right-click options on breakpoints in Visual Studio was... revelatory.


Depends on the UNIX; Solaris, HP-UX and NeXT/macOS are all UNIXes with a good debugging experience.

Naturally, none of them have had raw gdb as their debugging experience, but rather modern graphical debuggers.


> We can snapshot the program whenever something non-deterministic happens (syscall, I/O, etc) and then we just reconstruct the program state at any moment by rewinding it to the nearest snapshot and executing the code from there. This is basically what UDB, WinDBG and rr do.

QEMU does this too. This plus its GDB stub means one can time-travel-debug pretty much anything on any emulated architecture.

https://www.qemu.org/docs/master/system/replay.html


Where I work (IBM Watson Orders) they sometimes call me "the debugger guy". I love a good debugger and what it can do for me.

I don't use it only for debugging; it's also a great tool to help anyone understand a complex codebase like ours.

Not sure how or why the code reaches a particular function? Set a breakpoint and then look at the call stack. It's all right there.

And if you want to write some experimental code that will execute in the context of that breakpoint, just open the Python console. You can import any modules you need and test your new code immediately.

I also appreciate good logging. Especially when there was a problem yesterday in one of our customer's drive-thrus. I can't attach a debugger, but I've helped make sure we have solid information in the store logs.

Just the other day I was working on some testing code that would send an MQTT message to one of our services, and then it did a sleep(n) call to wait for that service to complete our task. The number of seconds to sleep was pure trial and error. Sleep too long and the tests take too much time to run. Don't sleep long enough and the tests become unreliable.

So I figured I would add a bit of code to that service to send another MQTT event back to the test code after completing its task. Instead of sleeping, the test code would just wait for that signal.

But where to put that MQTT call in the target service? It's a lot of rather complex code. So I slathered that service with hundreds of log.info("abcxyz") calls, every place I could find to put one.

It quickly became apparent from the log where the service completed the task and it stopped logging other messages. That's where I added the "I'm done" signal.

One thing that helped here is that our logging library adds the line number where each log.info() was called. So I didn't have to customize that log message, I could just copy and paste the same log.info() call hundreds of places.

I think every developer should learn how to effectively use both a debugger and logging. Otherwise you're working with one hand tied behind your back.

We had a conversation about this last week, including a link to an article with a title I found astonishing - "Debuggers are for Losers":

https://news.ycombinator.com/item?id=35013732


I call it debugger-driven-development and it is indeed great when you have to get quickly into huge, complex codebases.

It allows you to learn them very fast.

VS has a great debugger for C# which allows you to do a lot of stuff while running the program, so sometimes I even develop the software while having the debugger and breakpoints attached


This is funny, that is exactly the term I use too:

https://news.ycombinator.com/item?id=29387515

Beware: you may get criticized for it as I was in that thread... :-)


I am a huge proponent of debuggers.

Being able to look at state step by step beats having to stop, add log statements, recompile and run again; that loop is too slow (for me).

What concerns me more is that I end up working with contractors with 5+ years of experience who don't know how to set up a debugger for the code they are working on.

And that concerns me. It's not logging OR debugging. It's both. You use the best tool for the job.


It's definitely both. Ideally the debugger is never your first resort.

An extremely common failure mode for less experienced developers is messing about in a debugger all afternoon for a problem that should be solvable in 5 minutes. It's a very slow way to learn.


Weird, I've never run across that. I have wasted a lot of time rebuilding and retesting figuring out where to put a print statement and then figuring out what and how to print it, where a debugger would have let me turn breakpoints on or off and watch different variables in the same run, all without touching the code. I don't think I've ever thought "gee, this debugger sure is getting in my way! I wish I could do printf-debugging instead."


The world isn't printf debugging or debugger only.

Some things are really amenable to the debugger, especially simple bugs. Really especially ones that should have been caught by static analysis anyway but that's a separate issue.

The issue is that firing up a debugger can easily become a fishing expedition. If you don't understand the system behavior and you don't know where the real problem is, you can end up manually stepping through many many layers of a system trying to see what it's doing. Add asynchronicity and this can take hours before you get to the 20 lines of logic that is the actual problem.

Ideally you have better logging and introspection, so the problem is caught in a way that leads you to those 20 lines immediately.

And of course design can make a huge difference in how understandable and localizable things are.

The problem a lot of inexperienced developers run into is that if they are used to having a featureful debugger available, it can become the only tool they have in their toolbox, and they are rewarded by how quickly it helps them fix the kind of mistakes their inexperience leads them to make a lot of. When they run into a real issue, they can be hitting "step" for hours....

The other pernicious side of this particular coin is that it can lead to localized understanding of the problem and make it really easy to apply a localized solution. With inexperienced people this can quickly lead to band-aid solutions all over the place. With any luck someone more experienced is looking it over and pointing out they should go after the actual problems, but left unchecked it can make a real mess.


I assume the parent comment is talking about "thinking" (not that I have any personal experience with that).


I've found the exact opposite of this starting out. When I started using the debugger a whole lot of things just clicked into place.


To offer an opposite view, I haven't used (or missed) a debugger for decades, in a whole range of programming languages and environments. It was only recently, when I had to disentangle some legacy spaghetti code, that I set one up - and once the refactoring is done, I'll probably shelve it again.

The reason I don't usually use (/need) a debugger is that I know how the code should behave, because I thought about it in advance. Or, if it is not mine, I expect it to be readable and maintainable, otherwise I push for cleanup instead. If it is written in small manageable chunks, covered with tests and if it has good logging (which is necessary anyway - there will probably be no debugger available in production), I simply don't see the added value of a debugger. If it is not, it is not a debugger that is missing. :-)

That said, it is still a valuable learning tool because it helps understand the flow of the code, and it helps when refactoring spaghetti code. Reverse engineering also comes to mind... But other than that I can't be bothered to set it up either.


I usually wait for someone to accidentally try to merge a bunch of conditional logic wrapping logging, prints or stdio.write type stuff and take that as the opportunity to introduce them to the debug tab in the editor... and give a quick tutorial on conditional breakpoints and how to go up and down the call stack.

It's amazing how many developers go sometimes a decade before they learn to use a debugger.


The article provides a pretty good overview of the landscape of debuggers out there. I find myself using printf debugging whenever something is a dynamically evolving state and I need to monitor just the relevant bits of it and breakpoints when something is either crashing or requiring a deeper thought about how the code executes.

That said, I still think there's quite a bit of improvement to be made, which is why I started building a new debugger for myself which puts a lot more focus on breakpoint-less workflow, speed of iteration and scripting ability: (demo) https://www.youtube.com/watch?v=qJYoqfTfuQk It's geared mainly for gamedev, but I also do use it many times to e.g debug itself.


That's an excellent observation which I think agrees with my own.

If you already have a good idea of the control flow (and thus, expected behavior), but there's some minute detail that goes wrong, you just need a few strategically placed prints to see the state evolution. If the problem is control flow related, you might first need to get a grasp of that before going into the weeds.


This looks great, very interesting approach! Augmenting the code of a running process using a scripting language is a cool idea, excited to see what comes next. I will definitely keep an eye on the project.


I am personally a big fan of gdb. There are some things you need to know before it's usable.

1. Use TUI mode with gdbtui

2. Install the packages that will give you syntax coloring in TUI mode (source-highlight something)

3. Learn about Ctrl+L, or alias `refresh` to `r`, to redraw after your program prints (if it does)

Bonus 4. gdb uses readline just like bash and rlwrap so its command line is configurable with the ~/.inputrc file. I personally enjoy the vi mode. Tons of other options.

Bonus 5. alias gdbtui to `gdb -q -ex start --args` so it shuts up and sets a temp breakpoint on your main

Then you basically have a C interpreter under your hand. I remember writing a function for my class that would dump a .dot file based on the state of my tree that I could just call anytime in gdb and see my tree update in xdot. Wonderful time.
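For anyone who wants to steal that trick: it is just an ordinary function linked into the (debug) build that writes a Graphviz file, which you can then invoke from any breakpoint with `call`. A hypothetical sketch (Node and dump_tree_dot are made-up names, not from any particular codebase):

  // debug_dump.cpp -- helper compiled into the debug build
  #include <cstdio>

  struct Node { int value; Node *left; Node *right; };

  static void dump_node(std::FILE *f, const Node *n) {
    if (!n) return;
    std::fprintf(f, "  n%p [label=\"%d\"];\n", (const void *)n, n->value);
    if (n->left)  std::fprintf(f, "  n%p -> n%p;\n", (const void *)n, (const void *)n->left);
    if (n->right) std::fprintf(f, "  n%p -> n%p;\n", (const void *)n, (const void *)n->right);
    dump_node(f, n->left);
    dump_node(f, n->right);
  }

  // From GDB, at any breakpoint:  (gdb) call dump_tree_dot(root, "/tmp/tree.dot")
  void dump_tree_dot(const Node *root, const char *path) {
    std::FILE *f = std::fopen(path, "w");
    if (!f) return;
    std::fprintf(f, "digraph tree {\n");
    dump_node(f, root);
    std::fprintf(f, "}\n");
    std::fclose(f);
  }

Point xdot (or plain `dot -Tpng`) at the generated file and you get a fresh picture of the data structure each time you call it from the debugger.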


> I should mention that the perfect debugger doesn’t exist

It may not exist, but this demo has got to be the closest thing to the perfect debugger I've ever seen: https://youtu.be/72y2EC5fkcE


Wow, that’s absolutely insanely cool!


A whole world of creative opportunities open up when the toolchain and related debugging tools don't suck. Check out this wild video[1] of someone modifying and debugging a game in real time.

[1]: https://youtu.be/72y2EC5fkcE


I think the visualizations are highly underutilized. There's so much that can be derived from the code itself and the runtime profile.

That's why I'm creating the next generation debugger for Rails. https://callstacking.com/


I'm a debugger lover and Visual Studio's is pretty great but GDB is also awesome. However, when debugging embedded systems with a good trace probe it's another whole world, using on-chip trace capabilities and being able to break on things like register or peripheral access makes it even more exciting. And expensive, unfortunately, since it's such a niche thing.


Just curious, how does one build the tools you talked about? I'm referencing both hw and sw: the hw part I guess is "on-chip trace capabilities" and the sw part is "break on registers or peripheral access".

I'm not an embedded dev, nor am I a debugger developer, but I'm playing with toy OS dev, so just curious.


The setup I'm most familiar with is ARM-based microcontrollers, here's an overview from their docs: https://developer.arm.com/documentation/ihi0014/q/Introducti...

and the debugger from Keil (used to be independent, now part of ARM): https://www2.keil.com/mdk5/debug


Ah thanks, I thought they are yet to be made.


To me the debugger is perhaps the most important tool, and I'd much rather have a bad editor with a good debugger than a great editor without. I find that typing code is never the limiting factor in productivity, being able to understand what your code does is, and a good debugger is a game changer. The same goes for languages: there is a lot of discussion about what various languages do, but not what debuggers are available for them. Visual Studio has many faults, but IMO it runs circles around the competition when it comes to the debugger.


Debugging is the one reason I tolerate interpreted languages.

The fact that you can pause execution and start writing commands to execute on the fly in the debug console is VERY POWERFUL.


As a C developer, I REALLY want an interpreted C compiler with a crazy good debugger for this reason. On average I think C has really good debuggers (it's a simple language that has been around for a long time) but there is so much more one could do. A few years ago I wrote a prototype called OPA, with some cool potential features. https://www.youtube.com/watch?v=pvkn9Xz-xks



...and you don't need even 'interpreted languages' for that...


If someone's preferred language/framework/environment doesn't have a good debugger, or if they simply don't know how to set it up, some percentage of them will start insisting that they don't even want one anyway.

This is the basic Sour Grapes concept.


A debugger is a tool, not a goal. I can reach a GOAL without a debugger, using my software development skills only. I prefer to invest in quality of code, write more comments and documentation, perform refactoring, delete unused code, write a test case to reproduce the bug, add more command line parameters, write better error messages with more data, and so on, rather than invest time into a debugger. I'm the Software Developer, not a Software Debugger.


It's sad that Python does not really support some of these debugging methods.

E.g. you cannot really watch variable changes. There are some workarounds, like writing a custom __setattr__ in the case of an object, or checking all STORE_* operations. https://youtrack.jetbrains.com/issue/PY-30387 https://github.com/gaogaotiantian/watchpoints

Reverse debugging is also something I would like to have, and there are a few projects to support this, but it's not really well supported in standard CPython. https://foss.heptapod.net/pypy/revdb https://pytrace.com/


Python debugger is really sad for many more reasons:

1. Cannot interrupt a running program to get a debugger prompt; you have to code that functionality in yourself.

2. Cannot deal with threads or multiple child processes because it gets confused by where its output should go, and so it appears to be stuck, prompt doesn't show up, or input isn't being accepted.

3. If someone writes an overly general except, the debugger won't ever be called, because it relies on exceptions in order to be invoked.

4. Debugger cannot evaluate comprehensions correctly due to scoping issues (not as much of a debugger fault as much of the language fault).

5. Debugger cannot consistently modify variables inside the function call. Depending on circumstances if you execute an assignment statement in debugger, the value may or may not be set to the variable in current stack frame.

6. Debugger doesn't step into iterator implementation (iirc, I might be confusing with tracing).


This piece from Linus on why he doesn't like debuggers resonates with me [1], although I confess I use Python pdb quite often.

[1] https://lkml.org/lkml/2000/9/6/65


That sort of argument is somewhat defensible in a context like the kernel, where when things go haywire, you can't really expect there to be enough sanity to have a debugger work. But very little code runs in such a context, and it turns out that a well-written debugger has incredible features.

Also, Linus wrote this 22 and a half years ago, when the capabilities of debuggers were... far, far less. Time-travelling debuggers are really a game changer: just having the ability to travel back in time to figure out who set the value that caused the code to crash. Hot reload is also a wonderful thing (unfortunately, the fragmentation of tooling in Linux makes getting this working properly very difficult).


Lots of (most?) real-time and embedded code run in a state where suspending/resuming really doesn't work in any useful fashion, so it's logging, and minimal logging at that to figure out what's going wrong in-situ.

That said, much benefit is gained by writing the complicated bits in such a way that they can be tested/debugged/examined independently on a host system.


Hmmm, I have 20+ years of embedded sw programming experience and can tell you that the reason embedded software is oftentimes not easily debugged using a debugger is the crappiness of the debugger solution (debug probe, its firmware and its ecosystem). Also, high end embedded ICs often contain a serious amount of silicon bugs. The fact that it needs to run real-time is mostly not in the way of the debugging process. In other words, embedded processors and their ecosystems tend to be sub-par in terms of developer friendliness. Addendum: SHARC ADSP anomaly list: https://www.analog.com/media/en/dsp-documentation/integrated...


I'm with you there!

Be happy if you can flash a led... happier if you have two speeds... and Nirvana if it is a multi-color led.

Intense jealousy of your colleagues who have TWO leds on their boards!


> Time-travelling debuggers is really a game changer

Core dumps have existed forever. They give you a stack trace and register values at the time of crash. Even better, you don't need a debugger running at the time of crash and you can dig into dumps sent from nontechnical users.

Sure, Bret Victor's demo was cool. But time travel debugging is so completely oversold at this point that I can't take anyone seriously that mentions it.


I've debugged core dumps before. It's not been a particularly pleasant experience--good luck trying to do something like `call V->dump()` (dump out an easy-to-understand representation of a complex value to stdout... oh that doesn't exist anymore, can't use that functionality!)

The most useful aspect of time-travel debugging for me, personally, has been when the test case that causes a crash is refusing to be reduced, and the function that crashes does so on like the 453rd time it's called. Jumping straight to the crash, then reverse-continuing to a break point cuts out so much time of debugging (especially because it saves you if you accidentally continue the breakpoint one too many times; otherwise, you'd have to start the entire, tedious process from the beginning).


Something else that time-travel debugging helps a lot with, is that in an awful lot of cases, what you have in the crash dump is a broken state, with no way to identify how that state happened to be. Like "why the hell does this variable have this value?". Sure, you can do the whole digging work to find what can possibly change that variable, and try different scenarios to hit them in a debugger at the moment the problem might appear in a new session that is not even guaranteed to produce the same crash. But with record-and-replay type things, you just set a watchpoint on the value, continue backwards, and there you go, you find where the value comes from.

Now, imagine you did do that manually and spent a lot of time finding that location. In many cases, that only gets you one step closer to the root cause, and you have to repeat the operation multiple times. Yes, you _can_ do that without record-and-replay, but do you really want to? Do you want to spend hours doing something that could take you minutes?

(And that's not even mentioning even worse cases, where the value you're tracking goes between processes via IPC)


Have you ever used time-travel debugging? Core dumps give you a backtrace (assuming the stack is sane) which gives some clue of how I got here, but not _why_ I got here. e.g. assertion failure. I wrote that assertion because I believed this thing would always be true. Now I find it is not true. From a corefile I can't usually see why it is not true.

With a time travel recording I can put a watchpoint (aka data breakpoint) on the state that is supposedly impossible, and reverse-continue back to see exactly where it got set. (And repeat as required.) It really is very powerful.

Admittedly there are situations in which it's not practical to get a recording, but when you can... almost any bug becomes trivial.


Contrasting perspective from John Carmack [1] who drops into a debugger all the time to explore state, understand program behavior, debug things, etc.

Looks like there is no universal golden path. Differing environments make different approaches effective.

[1] https://www.youtube.com/watch?v=tzr7hRXcwkw


I really enjoyed this thought in that mail: “Tough. There are two kinds of reactions to that [time lost from a bug you introduced in kernel dev]: you start being careful, or you start whining about a kernel debugger.”


He also reminds people why everyone pulls from torvalds/linux to this day:

> People think I'm a nice guy, and the fact is that I'm a scheming, conniving bastard who doesn't care for any hurt feelings or lost hours of work if it just results in what I consider to be a better system.


Honestly, if most debuggers did what the author suggests in the first paragraph, I'd use them 10x more. Other than Pry for Ruby I've not found many debugger tools that let me drop into code, run parts of it, examine variables, etc., without needing a 4-year degree in using the debugger itself.


PyCharm/IntelliJ does everything you mention in a fairly simple GUI that doesn't make you understand how to set conditional breakpoints up front, but encourages you to realize when you want to.


do you know if it's possible to step back in pycharm/intellij? the article mentions some debuggers having this ability, but i never saw an option in pycharm.

learning how to undo my last step would save so much time. right now, i have to anticipate a risky step and run that line in the debugger console.


Unfortunately I don't think there's any time-traveling debugger - you can pretty easily go up and down the stack to see where the caller did x and write something in the console to do y, or you can set conditionals that would always trip when you are about to do something risky, but not go back in time.


This https://plugins.jetbrains.com/plugin/14767-time-travel-debug... gives time travel debugging for IntelliJ. (Disclaimer, I work for Undo.)

I don't think there is a solution for PyCharm.


Pdb comes to mind.


Unfortunately debug-ability usually stops when you have a distributed system in which each part of the system is playing a different song. I wish my job was working on a program that I could run on a single machine. Then I'll look into debuggers. For now I'm `print`ing and hopelessly looking into service logs in DataDog :(


Thankfully there are things like flight recorder and application insights that marry distributed computing monitoring with debuggers.


It’s a good reason to not unnecessarily distribute your system.


Would certainly be nice to see some of these PAAS offer a distributed debugger that could track RPCs and breakpoint across machines in the cloud. It seems feasible to "step into" a remote method assuming the request IDs were integrated.


Application Insights on Azure, goes a big deal into that direction.

https://learn.microsoft.com/en-us/azure/azure-monitor/app/ap...

Or Java Flight Recorder,

https://developers.redhat.com/blog/2021/01/25/introduction-t...


> Breakpoints, oh my breakpoints

This is a good list, but one thing missing: catchpoints. Particularly useful with time travel debugging. catch throw + reverse-continue immediately tells you who threw an exception, and you can continue debugging from there.


One style that I haven't seen discussed is a hybrid / mixture of in-code (e.g. printf trace-logs) and out-of-code (using external tools like an attached debugger) debugging.

What I do is I encode conditional breakpoints in the source code, (compile and) run the program with a debugger attached. The nice thing is you can have complex conditions using all kinds of functions from your surrounding code and you can have them permanently, even check them into the VCS. It is kind of like placing asserts which don't panic but trap into the debugger and they can be globally enabled / disabled.
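
A rough sketch of the same idea, translated to Python for illustration (the environment-variable toggle and all names are just illustrative):

    import os

    # Checked-in "source breakpoints": inert unless explicitly enabled,
    # e.g. DEBUG_TRAPS=1 python app.py, run under a debugger.
    DEBUG_TRAPS = bool(os.environ.get("DEBUG_TRAPS"))

    def trap_if(condition: bool) -> None:
        """Like an assert that traps into the debugger instead of raising."""
        if DEBUG_TRAPS and condition:
            breakpoint()

    def apply_order(order, inventory):
        # Arbitrarily complex condition, using anything from the program.
        trap_if(order.quantity > inventory.available and not order.backorder_ok)
        ...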


Conditional breakpoints are supported for most dynamic languages. With static ones like C++, your way is the only practical one, I believe.


I program in a hybrid of C/C++/Objective-C/Objective-C++/Swift and can use conditional breakpoints in all of those languages in lldb and Xcode. There's nothing special about conditional breakpoints that makes them not work in compiled or static languages.


Visual Studio supports conditional and printing breakpoints for both C and C++. I tend to use them rarely because they really hurt performance, though. I only use them if I need to be able to turn them on and off at will, which hardcoded __debugbreak()s don't allow, obviously.


I helped implement fast conditional breakpoints in UDB (time travel debugger for Linux) - https://www.youtube.com/watch?v=gcHcGeeJHSA

We used GDB's conditional breakpoint bytecode https://sourceware.org/gdb/onlinedocs/gdb/General-Bytecode-D... to get a speedup in the thousands of times vs plain conditional breakpoints.

That works for us because we've got in-process agent code that can evaluate the breakpoint condition without trapping. It should be possible to do this in other debuggers with a bit of work, though we have the advantage of in-process virtualisation to help hide this computation from the process.


Pernosco combines conditions, logging, and interactive debugging in a really nice way. https://pernos.co/about/expressions/


I've always been curious about debuggers. How do they work? How do they connect to a program and step through it? Why can't more compiled languages integrate debuggers inside of them so that you can debug the program without using a separate tool?

And why can't we create interfaces to debuggers so that other text editors can integrate with them, much like we do with LSPs?

*EDIT*

Thanks for all the responses! I've now heard from multiple sources that debugging on Linux is unpleasant, and it seems like the whole process is challenging regardless of the platform.


> How do they work?

Painfully. If you're on Linux, you get to use a mixture of poorly-documented (e.g., ptrace) and undocumented (e.g., r_debug) features to figure out the state of the program. Combine this with the debugging symbols provided by the compiler (DWARF), which is actually a complicated state machine to try to encode sufficient details of the source language, and careful reading of the specification makes you throw it out the window and just rely on doing a well enough job to keep compatibility with gdb.

> Why can't more compiled languages integrate debuggers inside of them so that you can debug the program without using a separate tool?

Because it's really painful to make debugging work properly. At least in the Unix world, the norm is for all of these tools to be developed as separate projects, and the interfaces that the operating system and the standard library and the linker and the compiler and the debugger and the IDE all use to talk to each other are not well-defined enough to make things work well.

> And why can't we create interfaces to debuggers so that other text editors can integrate with them, much like how we do we LSPs?

LSPs have the advantage of needing to communicate relatively little information. At its core, you need to communicate syntax highlighting, autocomplete, typing, and cross-referencing. You can build some fancier stuff on top of that information, but it's easily stuffed in a single box.

Debuggers need to do more things. Fundamentally, they need to be able to precisely correlate the state of a generated build artifact to the original source code of the program. This includes obviously things like knowing what line a given code address refers to (this is an M-N mapping), or what value lives in a given register or stack location (again an M-N mapping). But you also need to be able to understand the ABI of the source level data. This means you can't just box it as "describe language details to me", you also have to have the tools that know how to map language details to binary details. And that's a combinatorial explosion problem.


> Debuggers need to do more things

It's true that coming up with an interface for an abstract debugger is harder, but it's not impossible. Microsoft created Debug Adapter Protocol (https://microsoft.github.io/debug-adapter-protocol/), which is conceptually similar to LSP. It's not perfect, but covers most basic operations pretty well, while leaving to the debugger to deal with the implementation details.


If I'm coming up with a new language, let's call it Drustzig, I can implement the LSP and get support for IDEs (and possibly xref tools and the like) essentially for free. Now to get debugging support for my language... I have to traipse around through every major debugger and beg them to merge Drustzig patches to make it work.

The protocol you've linked (or the similar gdbserver protocol) essentially implements an IDE <-> debugger mapping. Well, most of one: everything is basically being passed as strings, so if you want smart stuff, you kind of have to build in the smart stuff yourself. It doesn't help the other parts of the process; if you want to build a new gdb, you have to do all the parsing of debug info for C, C++, Drustzig, etc. yourself... and you have to integrate the compiler expression stuff yourself. If you want to build a better time-traveling debugger, or a better crash debug format, or something similar, well, the gdb remote protocol lets gdb talk to your tool so all you have to implement is essentially low-level program state (e.g., enumerate registers of a thread, read/write arbitrary memory locations, etc.). But this isn't covered by the thing you've listed either, and it still relies on the debugger supporting a particular protocol.


I agree that language server and debugger are different beasts, but both LSP and DAP serve a purpose of re-using the same server (xrefs or debugger) with different IDEs.

> I can implement the LSP and get support for IDEs essentially for free

I mean, technically the same is true for DAP... You can implement DAP and get support for IDEs for free. But I agree that in the general case implementing a good debugger is harder than implementing a good language server.

If Drustzig requires a special debugger (e.g. because it uses a custom format of debug information), then you'd need to implement it, yes. However, existing debuggers can support new languages relatively easily if those follow standard conventions (e.g. use PDB and DWARF). For example, Rust support in LLDB basically comes down to a custom demangler.

Again, I'm not saying that DAP is perfect and solves debugging, but IMO it's a step in the right direction. Make it popular, make it extensible. Debuggers can be mostly language agnostic (within reasonable bounds), but they don't _have_ to be.


Have you tried using vim's :Termdebug? By default it's GDB but you can use rr or others inside of it. Just like anything else in vim you can extend it to fit your workflow.

It has some rough edges sure but I'm using it successfully with C and Rust. If you have tried but still don't like it what's your use-case?


On Linux or BSD, the ptrace system call allows one process to take control of another process; it can then observe and control the other process. Breakpoints are inserted by replacing an instruction with a special instruction that traps; the debugger can then take control. Watchpoints (the article calls them data breakpoints) are often implemented by making the containing page read-only, so that a write access traps (this means that there's overhead from other writes to the same page, the debugger has to just resume execution silently for those). A checkpoint can be implemented by forking the process and freezing the fork, so execution can go back to that point.


> How do they work?

A debugger is basically a very complicated exception handler that can, with the help of the kernel, intercept exceptions from other processes and access their memory. (This is for Windows, but I would guess that Linux is about the same.)

When process X wants to debug process Y:

1. Process X calls into the kernel and says it wants to debug process Y.

2. The kernel verifies that process X is allowed to do that. (You wouldn't want a low privileged user to be able to debug a service, right?)

3. The kernel triggers a debug break exception in process Y, usually.

4. Process X goes into a loop where it asks the kernel for the next exception from process Y, which is a blocking call until Y has an exception.

5. The kernel's exception handler catches the exception and passes details about it back to process X.

6. While process X is in control, process Y is suspended and process X can use other kernel calls to read and write process Y's memory.

7. When process X is done doing whatever, it tells the kernel to continue the previous event and asks for the next exception.

Process X will have, of course, loaded some libraries that help it navigate the structures in memory in process Y, starting with something at a well-known address like the PEB. That lets it do things like enumerate threads and loaded modules, find symbols, etc.

Relevant win32 calls are:

WaitForDebugEvent, ContinueDebugEvent, ReadProcessMemory, WriteProcessMemory

(Standard caveat that I haven't actually written one of these things in 20 years applies. Maybe there's new stuff now.)


About the latter, there is an LSP equivalent called DAP (debug adapter protocol).


There should be debuggers for interpreted dynamic languages that can be scripted with strict type systems, possibly like e.g. Prolog. If I have Python code that has a Person class defined in it, I could just check for invariant violations with the debugger, e.g. check self.parent != self.child and whatnot. Then if I come up with a nice check while debugging I should be able to save that as a unit test for the project straight from the debugger, etc.
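
As a rough sketch of that idea in today's Python (the Person fields and make_test_person are placeholders, not a real API), an invariant checker can drop into pdb on violation and later be kept as a test:

    import pdb

    def check_invariants(person):
        """Illustrative invariants for a hypothetical Person object."""
        violations = []
        if person.parent is person.child:
            violations.append("parent and child are the same object")
        if getattr(person, "age", 0) < 0:
            violations.append("negative age")
        if violations:
            print("invariant violations:", violations)
            pdb.set_trace()  # inspect the offending object interactively
        return violations

    def test_person_invariants():
        # A check that proved useful while debugging, kept as a unit test.
        # make_test_person() is a placeholder fixture.
        assert check_invariants(make_test_person()) == []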


Also edit to add:

we have pretty magnificent tools for the mapping to a shared semantics of syntax-

    tree-sitter :: any syntax -> 1 semantics
- we have tree-sitter, lsp, etc; any language can be added with an easy description.

But we don't have great tools for mapping (1 semantics -> any syntax) - in some sense, the inverse of tree-sitter:

    inverse-tree-sitter :: 1 semantics -> any syntax
If we did, we could have a universal any language meta-debugger library.

The key impact of this is that anybody from some individual language could add a plugin, which could apply to every supported language that uses the subset of features that apply to that plugin.

Think about the network effects of a language which has a large universe of plugins - now think about the even greater network effects if it could be a union of supported languages - that's strictly at least as good, if not vastly more network-effectful. So I think the benefit to people would increase rapidly if people contributed to it across many different languages, so it could be a good collaborative tool.

I bring this up under this discussion because the debugger would be the exact best place to apply something like the inverse of tree-sitter

Pretend I'm not ignoring that this would likely lead to a strangely organized library with lots of potential spots of inter-language interface annoyances, so it would probably not be strictly better... :)


People who learn the tools are much more productive. Two features I often miss: I sometimes want to copy whole data structures as JSON - there are workarounds for it in some languages. Then there is this annoying thing where some values are optimized away even in debug mode, so you have to add variables just for debugging, which is of course an anti-pattern.
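
For the JSON case in Python, one workaround (a rough sketch; it only handles simple object graphs and will still raise on circular references) is a helper you can call from the debugger prompt:

    import json

    def as_json(obj):
        """Best-effort JSON dump of an object graph, for debugging only."""
        return json.dumps(
            obj,
            default=lambda o: getattr(o, "__dict__", str(o)),
            indent=2,
            sort_keys=True,
        )

    # At a pdb prompt:  p as_json(order)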


> When people say “debuggers are useless and using logging and unit-tests is much better,”

Who? Who does say that? Freshman students who barely started coding?

> I suspect many of them think that debuggers can only put breakpoints on certain lines, step-step-step through the code, and check variable values.

That alone provides enough value to know debuggers are useful.


> Who? Who does say that? Freshman students who barely started coding?

Example from this thread: https://news.ycombinator.com/item?id=35098434


Do we know if that person is a freshman student who barely started coding?


There are many other experienced devs (me included) who simply don't see debuggers as worth the effort in most cases. There are exceptions, and every competent dev should know how to use one, but to me it is like using crutches - I run faster and better without them. I did use debuggers a lot at the beginning, though. Then I learned to think about problems and to simplify the design of the code, making debuggers much less useful.


I find it useful to run pdb when my program throws an exception. You can run your program with `python -m pdb -cc file.py args`, and it will leave you in a prompt when there's an exception or a breakpoint (eg the `breakpoint` statement). From there, you can evaluate expressions using local variables.
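
The in-code equivalent, if you'd rather not change how the program is launched, is roughly this sketch:

    import pdb

    def main():
        ...  # your program

    if __name__ == "__main__":
        try:
            main()
        except Exception:
            # Same idea as `python -m pdb -cc file.py`: open a prompt at the
            # frame where the uncaught exception was raised.
            pdb.post_mortem()
            raise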


Linus Torvalds famously dislikes them: https://lwn.net/2000/0914/a/lt-debugger.php3


At least he did 23 years ago. Have debuggers evolved in the past 23 years?


They have a little. But his main point is that debuggers distract developers from seeing the problem as a whole rather than just understanding what's going on in the vicinity of that problematic line. So, the debugger incentivises small, targeted fixes rather than bigger solutions for more systematic problems.

That aspect of debuggers hasn't changed in 23 years, because that's the whole point of debuggers anyway.

Some projects are better off with a debugger and some developers are more productive with them. But the other end of the spectrum also exists: projects that are better off without, and developers who are more productive without them. Not liking debuggers doesn't mean someone is stupid or inexperienced.

I'd even dare say it's more likely that an experienced developer uses debuggers less than people earlier in their careers. Seasoned developers working for a long time in the same code base are more likely to have insight into what's going on without stepping through the process execution.


> But his main point is that debuggers distract developers from seeing the problem as a whole rather than just understanding what's going on in the vicinity of that problematic line. So, the debugger incentivises small, targeted fixes rather than bigger solutions for more systematic problems.

Said differently by other old beards in this classic of Ken Thompson & Rob Pike: https://www.informit.com/articles/article.aspx?p=1941206

> When something went wrong, I'd reflexively start to dig in to the problem, examining stack traces, sticking in print statements, invoking a debugger, and so on. But Ken would just stand and think, ignoring me and the code we'd just written. After a while I noticed a pattern: Ken would often understand the problem before I would, and would suddenly announce, "I know what's wrong." He was usually correct. I realized that Ken was building a mental model of the code and when something broke it was an error in the model. By thinking about how that problem could happen, he'd intuit where the model was wrong or where our code must not be satisfying the model.


That could work in the old Unix days, and for toy problems today. But if Firefox crashes you can't just build a mental model of 10M lines of code and intuit the problem.


Having a mental model doesn't require you to read all the code. But even if it did, most people work with much smaller code bases anyway.


> But his main point is that debuggers distract developers from seeing the problem as a whole rather than just understanding what's going on in the vicinity of that problematic line. So, the debugger incentivises small, targeted fixes rather than bigger solutions for more systematic problems.

What a bizarre take. Just today I had to use a debugger to figure out where there was a deadlock in our program and why. No amount of printfs would have made that speedy. In the debugger I just stop the program once it's hung and look at all the threads to see which threads are holding which locks and which are waiting to obtain locks. From there I can see it's a lock inversion because we weren't being careful about the order of taking our locks in a few places. That led to doing a wider check to see if there were other similar cases, and whether we needed to rethink anything in particular.

The debugger generally gives me an overview of the whole program in a way that a bunch of printf statements can't. (Which isn't to say that printf doesn't have a place in debugging - it certainly does.)


There’s more than one way to skin a cat. People not using a debugger are not mindlessly adding prints as a poor man's debugger. Those people are figuring out the problem from a different perspective. In your case, one might try to come up with a theory as to what could cause a deadlock and think of ways to make it happen consistently. Before running anything, they would be trying to understand what could possibly go wrong. Then they test the theory. You can perhaps see that, if someone is always doing that, they can get quite good at it. They would also develop quite a good intuition on what to expect from the code base over time. Whereas someone who just goes straight to a debugger may not develop such an intimate relationship with the code base.

It depends on the individual's strengths and weaknesses as well. Neither way is going to be better for everyone. Each person has to find out what works for them.


Thank you for the link, well worth the read.


25 years in the industry. I used the debugger countless times because it was useful.

But I still default to printf/log/tracing-style debugging because it's often easier to get some signal under the conditions I care about, which are hard to reproduce and where it's hard to attach a debugger.


in my experience a large portion of go programmers seem to think this. I would guess (but haven't confirmed) that there's someplace where rob pike and/or russ cox are on the record telling folks that they don't really need debuggers because printf is great.


Rob Pike goes quite a bit further than that—don't even use printf or look at stack traces:

> A year or two after I'd joined the Labs, I was pair programming with Ken Thompson [...] Ken taught me that thinking before debugging is extremely important. If you dive into the bug, you tend to fix the local issue in the code, but if you think about the bug first, how the bug came to be, you often find and correct a higher-level problem in the code that will improve the design and prevent further bugs.¶ I recognize this is largely a matter of style. Some people insist on line-by-line tool-driven debugging for everything. But I now believe that thinking—without looking at the code—is the best debugging tool of all, because it leads to better software.

<https://www.informit.com/articles/article.aspx?p=1941206>


There are a lot of middle to late career developers who write code that is difficult to debug. It's a feedback loop. You write garbage code, the debugger is confusing, so you use the debugger less, so you get less feedback on your garbage code, until eventually other people can't debug your code either, or only invest the energy to do so when something has gone horribly wrong.

Or you write debugger-legible code, you use the debugger, and you get feedback when the code you wrote confuses the debugger, so you refactor it to be easier to diagnose the problem, then you commit those changes.


I found GDB to have a steep learning curve, even stepping through code was hard at first. Now I simply cannot work without it anymore. I have GDB scripts for visualizing all my data structures now. GDB script is a painful little language but it just changed everything for me.
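
For what it's worth, GDB also embeds Python, which some people find less painful than GDB's own command language for visualizers. A minimal pretty-printer sketch for a hypothetical `struct vec3 { double x, y, z; }` (the type is made up; load the file inside GDB with `source vec3_printer.py`) might look like:

    # vec3_printer.py -- load with: (gdb) source vec3_printer.py
    import gdb

    class Vec3Printer:
        """Show a hypothetical `struct vec3 { double x, y, z; }` on one line."""
        def __init__(self, val):
            self.val = val

        def to_string(self):
            return "vec3(%g, %g, %g)" % (
                float(self.val["x"]),
                float(self.val["y"]),
                float(self.val["z"]),
            )

    def vec3_lookup(val):
        # Only claim values whose struct tag matches our hypothetical type.
        if val.type.strip_typedefs().tag == "vec3":
            return Vec3Printer(val)
        return None

    gdb.pretty_printers.append(vec3_lookup)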


This is another example, a tracing time travel debugger for Clojure https://github.com/jpmonettas/flow-storm-debugger

Supports a bunch of stuff described there and more.


One of my favorite "debuggers" is OllyDbg.

I will never forget the first time I changed a few lines of assembly of a running game (starsiege tribes, around 2005), and then I saw friend-or-foe indicators through walls.

That kind of "hot reloading" was special.


It would be nice to have event-sourcing and associated tools set up where one could drop into a REPL of some collection of docker-compose'd microservices, and step through a heterogeneous collection of the stack--from HTTP request to db query, and back out again.

Certainly tools like App Dynamics and Datadog enable this app stack tracing, and Docker Compose enables composing services.

To be able to play back transactions backwards and forwards among services would be a dream.

Wondering if any folks have played with this kind of debugging?


I like this article. There are more such tips like that in this series: https://www.youtube.com/watch?v=A919j_5qE0k


Although the language I have been working on includes time travel as a built-in feature, I find that I almost never use it, because in a deductive language it is not that easy to create a bug.

Time travel is great for understanding someone else's large program, so that you can see where it goes as it is running. In large projects the instruction pointer hops around madly throughout the code, and seeing where it goes, like the path of a bird, is rather handy.


Just yesterday I was trying to dive into a container running some C program and I took a look at gdb... again, for the nth time... and faced with the whole ordeal of the gdbserver and gdb as a client and the symbol table yadda yadda yadda, I just rebuilt the program with some printfs displaying data, filename and line number and re-ran it. Done. gdb is stifling.


Have you tried using a Time Travel Debugger to record the process and then just debug the recording outside?

You can use rr or LiveRecorder (commercial product, which I work on) to generate the recording non-interactively then debug it "locally". Avoids the need to set up a client/server configuration, so long as you don't need to modify variable values at runtime, etc.


Containers are the real ordeal and not just here.


Visual Studio C++ did this in the 90s. I never understood why Unix devs preferred emacs/vim back then.


I preferred XEmacs, because it was the only thing that could improve my experience versus the Borland Turbo Pascal and C++ IDEs, and it was much better than plain Emacs or vi (vim was years away from materializing).

Nowadays I reach for emacs or vim, in that order, only if I am on a bare-bones installation.


I use the debugger occasionally, when the human factor creeps in and I inadvertently make a programming error.

The downside of debugging is obvious: it's not enabled in your production programs (although it could be).

Logging OTOH is always on, and it has to be good enough to let you figure out what happened on a remote machine, in a program whose state you can't recreate anymore.

This obviously makes it good enough for debugging programs running locally, and makes reaching for a debugger a rare occurrence. GUIs for debuggers of course lag behind, so you need to refresh your memory of the step, step over and breakpoint instructions, etc. on every one of these rare occasions.

And then they don't always work. Printing more complex data structures is often an unholy mess of pointers to strings and private class fields. Evaluating expressions will end up failing or crashing your program. The symbol you're trying to break on doesn't exist? But why? This makes debuggers unusable for a mediocre developer.


I would be curious if there are that many frontend people that don't use a debugger. You mostly have to open devtools in the browser anyway, and it has an unquestionably sophisticated set of debugging capabilities


The omniscient debugging in this article was new to me. Sounds like this could be quite useful. Does anyone here have experience using this?


Seems as good a hook as any to hang this rant...

> a good debugger supports different kinds of breakpoints, offers rich data visualization capabilities, has a REPL for executing expressions, can show the dependencies between threads and control their execution, pick up changes in the source code and apply them without restarting the program, can step through the code backward and rewind the program state to any point in history, and even record the entire program execution and visualize control flow and data flow history.

TL;DR: That's fantastic! But if you need any of it you're already "doing it wrong".

Context: I've recently been using a compiler for a C-like language that targets a simple 64-bit VM, the point being there's no debugger for the stack (the VM is written in Rust; I could put a debugger on that and just deal with the extra level of abstraction using e.g. features like those described above.)

So how do you cope?

Design simple and robust systems that can be understood in action easily via printf/log. Use tried-and-true off-the-shelf components (including algorithms and datastructures.) Write in small increments, compile often, never proceed without complete confidence in your understanding of the system. When the inevitable bugs appear, and they can't be discovered through just thought or a rereading of the code, then you bisect: often literally (LoC) but also conceptually. Solve the puzzle and eliminate the places where puzzles can hide in the first place.

I quite literally see how having a debugger would lead to worse code. (Even if you don't use it.) This isn't news btw:

> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?

-- Brian Kernighan, 1974

(The feeling is different: like instead of a wild adventure, programming this way feels like gardening in an orderly and well-kept garden.)


In the real world, programmers are brought into projects with literally millions of lines of code that have been developed over years or decades by tens or hundreds of programmers. They are under intense time pressure to fix bugs that they likely didn’t create. This is normal and debuggers are invaluable. No offence, but what you’re describing is a lovely ideal that only exists if you work mostly alone or on small things.


I hear you: people have been "doing it wrong" voluminously and for a long time. Ergo, fancy debuggers come in handy.

> This is normal...

That's the part that I have a problem with. We have leavened our entire complex civilization with software, it's important to get it right.


That's all good, but if the problem itself operates with more data than a human can reasonably operate with, this no longer applies.

I do 3D mesh programming. The amount of vertices, planes and other geometrical entities that I need to operate with in my algorithms is too big for me to do printf debugging. I can't just look at a long list of 3D vertex coordinates and visualize the mesh they make in my head. Moreover, I had to come up with my own 3D visualization tool once I got fed up with pen and paper for converting 3D coordinates to actual meshes, or using Blender for that. For me, a debugger (and debugging tools in general) is irreplaceable. I'm not saying my field is the most complex one, because it clearly isn't - but in that field, I can't think of any other way I could do what I do.


But it sounds like you had to come up with your own tools because a debugger wasn't enough?

That's my biggest criticism of debuggers having used both approaches: you forget that sometimes you need new tools. Whereas with prints you're constantly building new instrumentation for yourself.

https://merveilles.town/@akkartik/106138280776488247


I'm not sure I'm following. A visual debugger ticks a lot of boxes for my needs. Not all - some of them I had to tick myself, but I don't think that trying to reinvent the value that's already there in a debugger, would be particularly productive for me.

As for "forgetting", I don't think I forgot, because, well, I did make a new tool.

The main thing that I wanted to address in the parent comment is that needing tools to debug your code somehow means that the code is a mess - no, sometimes it doesn't.


> The main thing that I wanted to address in the parent comment is that needing tools to debug your code somehow means that the code is a mess - no, sometimes it doesn't.

Ah. I totally agree with that. Or at least, it's the sort of mess I don't know how to avoid yet.


Is your argument "Don't use debuggers because it will prevent you from writing a debugger?"


I'm not arguing "don't use debuggers."

I'm arguing, "don't just use debuggers."

I elaborate more on this argument in the first 2 minutes of this 4-minute video: https://handmade.network/snippet/1561


That makes sense. I'm also with you in that programmers forget that their development tools are also programs and they can modify and/or make more of them.


> Whereas with prints you're constantly building new instrumentation for yourself.

What does this mean? Formatting text differently is still text, which the parent post was just explaining doesn't cut it.


I'm constantly finding new places in my program to add prints to. This isn't just copy changes. (Though that has also had a huge impact occasionally in understanding something. Imagine emitting 2 variables and then focusing on 1 of them. A pattern can pop out of a screenful of iterations of a loop when things line up just right.)


There are debuggers that will let you halt your program, go back in time, and then print out each place where a specific variable changes. If that's "interacting with your program through a pane of glass" then shrug.


I'm very happy to hear it! Can you point me at a few of them?


Mentioned in TFA is "rr"; it serves up to gdb the history of the program and you can use tracepoints (also mentioned in TFA) to essentially retroactively add prints. Gdb is ... not particularly ergonomic, but it is eminently scriptable and writing programs to generate arbitrary logs post-facto is a useful skill.

Undo is another one for Linux programs, and I think I've seen developers from them post here.


Hallo, I'm an Undo developer (wave).

Indeed, using GDB's version of tracepoints (dprintf - https://doc.ecoscentric.com/gnutools/doc/gdb/Dynamic-Printf....) is really powerful, and replaying a trace with these installed to generate "logs I wish I'd had" is exciting.

It also has potential use if:

* you'd like to diff two executions (e.g. one successful, one failing) - you probably can't just compare raw execution as there'll be uninteresting variation but comparing extended logs could be useful

* you are debugging something at a customer site - they might not let you debug directly but you could iterate on plain text logs by shipping them additional tracepoints to run on a recording

We thought this was useful enough that we implemented a tool with its own DSL for simply specifying post-facto logging "probes": https://docs.undo.io/PostFailureLogging.html


The big problem with debuggers in my experience is the difficulty of setup. They take a lot of code to build, and if you use a slightly different language or compiler or OS you're SoL. Or at least facing some rabbitholes of unknown depth.

The most sophisticated debuggers seem to target C or Lisp. But lately I don't use either.

I've never gotten time-travel debugging in gdb to work. And it's been out for more than 10 years at this point. The last time was 2 years ago, so I forget the details. I do love tracepoints in gdb.

I've been watching https://rr-project.org for a while. The instructions still say, "build from source".

I looked at LLDB after your previous comment. Got it installed, but I can't actually get it to set a breakpoint on Linux. The docs seem optimized for Mac.

I have no doubt that once you get something set up just right you can do great things with it. But the power to weight ratio seems totally out of whack. Part of the goal of my projects recently has been to show that you can get a bunch of features far more simply if you build on a base of logging. If the programmer is willing to modify the program, a single tool can help debug programs in a wide variety of languages. They just have to follow a common and fairly simple protocol.

The big drawback of my approaches is that they don't scale for long runs of huge codebases. That hasn't been an issue for most programs I ever want to debug. Why should I pay the complexity costs of debugging gcc or Firefox for small programs?


> I've been watching https://rr-project.org for a while. The instructions still say, "build from source".

rr should be in most distros at this point. (It's been in Debian since at least 2014).


Thanks so much! I just tried it out and finally got to see time travel debugging working with my own eyes[1]. I'm glad to have it in my toolbox.

[1] Though I had to use sudo a few times, reboot, see some scary warning that it might not work reliably, read https://github.com/rr-debugger/rr/wiki/Will-rr-work-on-my-sy... and https://github.com/rr-debugger/rr/wiki/Zen.


FWIW it sounds to me like you're using what effectively amounts to fancy 3D printf, eh? In other words logging/printf help "visualize" non-geometrical entities.


Not _only_ that. It was an example of how tools help me deal with inherent complexity that cannot be dealt with by "just write better code".

I do use a debugger. During bug fixing, I need answers to various questions, such as: on what halfspace does a vertex lie? Is this point inside that polygon? What is the distance between two points? - and so on. If I didn't have a debugger, I'd have to stop the program, find the distinguishing features of the entities I'm interested in again - and this is often the hardest part, since the same code can be called thousands of times with different data - put in printfs, rebuild, relaunch and prepare for another cycle. With a debugger, I just put these questions into LLDB queries - and I get answers. It is so much faster.

Would it be possible for me to do my job without all that, using debug prints only? Theoretically, yes. Would it be practical? Absolutely not.


Meaning no disrespect, and I swear I'm not just being contrary for kicks, but it now sounds (to me) like you want to be using Common Lisp? It sounds like you're fighting your tools with your tools.


Why do you think so? I think I'm pretty comfortable with what I'm doing now, but maybe I don't know something.


'Design simple and robust systems...' and 'never proceed without confidence in your understanding...'. When I was a freshman, I would have said that. Then I entered the professional world. Complex problems are solved using complex solutions. Complete understanding is non-existent. Robustness is a relative term.


> never proceed without [total] confidence in your understanding

The fellow I learned that from was apenwarr, FWIW.



