- run the thing
- grab an output string of some sort (either output proper or logs or whatever else)
- grep for it
- set a breakpoint where the string is generated
- run again
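A minimal sketch of the "grep for it" step, assuming a hypothetical output string and source tree (in practice this is usually just grep -rn from a shell):

    from pathlib import Path

    NEEDLE = "payment declined"  # hypothetical string seen in the output/logs

    for path in Path("src").rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
            if NEEDLE in line:
                # Each hit is a candidate breakpoint target for the rerun.
                print(f"{path}:{lineno}: {line.strip()}")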
Imagine all the stuff that's probably in MS Office, for which the reason is totally lost. There were reasons, but they're not really retrievable.
Yes and no, I think. Codebases represent a collection of point-in-time decisions, and each one of those decisions had a reason. There's always a 'why' even if it's not captured.
> Imagine all the stuff that's probably in MS Office, for which the reason is totally lost. There were reasons, but they're not really retrievable.
True - and that's Really Bad. That means you're stuck not able to improve or change certain things, because you don't know why they were done that way - and when you have a userbase the size of Office, you can't just say 'f it' and break things - at least not lightly, as a code-level decision.
1. Green field+0 years: This particular portion of the system was designed in this way as a cargo cult.
2. Green field+2 years: Because that portion of the system was designed in that way, we had to modify this other portion of the system to achieve X.
3. Green field+3 years: Because of the change in 2, and this new constraint Y, we had to make this other change in a third portion of the system.
4. Green field+(3 to N) years: repeat steps 2 and 3.
5. Green field+N+M years: Wow, step 4 (or a previous step 5) has created a mess and exposed a serious architectural flaw from step 1 or a previous step 5. All of the logic in the various iterations of steps 2+3 is refactored as best as possible.
So, the "why" basically grounds out in "the refactoring we could afford to do in order to correct a mess that resulted from a cargo cult or previous refactoring of even older cargo cult".
Is that "why" functionally helpful? Usually not for me -- in either case, I can't really assume a lot and I have to proceed with laborious "just watch what the code does" debugging/logging that I would've done without the "why?".
But this is probably a ymmv situation.
Which, of course, has two negative implications. Not just one.
Of course every once in a while you see the super helpful commit log "fix bugs". Some of those are mine and I've even burned myself in the past.
I think there might be ways to mostly sidestep this, though. Docs that look more like changelogs than "docs." Store them reverse chronologically so someone new can either look back at recent history or try to jump in to when a particular feature was originally added. Lean in to there being a series of "whys" for the code.
If a developer doesn't check for a null value, are you really going to head to the documentation to find out why? Is the segfault really by design? It might sound facetious, but that's 90% of bugs: not specifically segfaults, but trivial mistakes, invalid values, and logic problems. Only a debugger can show you the steps and conditions that led to that invalid value. Sometimes, through debugging, you find a problem that does span multiple layers or components.
If something spans multiple layers or components then that's a design problem. When you are solving a bug like this you have to "debug" at a much higher abstraction level. You shouldn't use a debugger for this (as you've pointed out); you should use a team of peers (documentation alone is not the correct tool). That team might rely on documentation, but would have to keep in mind that documentation rots over time.
If you're solving the former with the processes of the latter, I'm surprised you get anything done. Documentation won't tell you where an invalid value originates from or why. Only extremely verbose logging or debugging can do that.
Yes. I often write code in a way such that no null is checked. But when I do so, I always document it properly, along with my reasoning for why, if the function is used properly, no null will be passed.
This way, you can be sure that if a null is dereferenced, my function is not the source of the bug; rather, it's a caller that does not obey the preconditions my function imposes on its parameters.
So you know that you are not supposed to "fix" this bug by introducing a null check to my function, but instead fix the bug in the other function that calls mine.
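A minimal sketch of that convention, with a hypothetical lookup function; the docstring and assert carry the "why there is no null check", so the next reader fixes the caller rather than this function:

    PRICES = {"apple": 100, "pear": 120}  # hypothetical lookup table

    def price_for(key):
        """Return the price in cents for `key`.

        Precondition: `key` is not None and is present in PRICES. Callers
        must validate their input; a None here is a bug in the caller, so
        this function deliberately performs no None check.
        """
        assert key is not None, "caller violated precondition: key is None"
        return PRICES[key]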
if (value == null)
print "Must try harder"
No. Static analysis is actually what is used most of the time, either with an automated tool or with your brain.
Of course debugging is also needed, but you want the right mix of the two, and you are doing static analysis even if you use a debugger a lot (at least I hope so; otherwise you are probably failing to fix your bugs correctly far too often).
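A tiny illustration of the kind of bug static analysis catches before any debugger session; the function is hypothetical, and a checker such as mypy flags the mismatch without running the program:

    # $ mypy pricing.py
    # error: Incompatible return value type (got "float", expected "int")
    def total_cents(price_cents: int, qty: int) -> int:
        return price_cents * qty * 1.07  # applying the tax factor yields a float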
If you're lucky, you've got the commit messages and associated bug/work tracker items that explain where the software comes from. I agree there's no substitute for planning documents, but those don't always survive confrontation with the enemy.
But now I'm working on nuts and bolts cybersecurity stuff for a tech giant. Stuff like public key infrastructure. Even if I know why things are done, stepping through someone else's code with expected behaviors, pathological cases, and error cases has been extremely helpful because there are so many more subtleties to worry about.
Yes, the ability to explore and understand a project via the debugger is a great and useful skill.
However, if it takes a debugger to understand the project, then, I'd say, it is a badly written and poorly documented project.
Wielding the low-level binary debugger well is also a good way to unnerve your coworkers and survive weird situations, like when you've got no tools because you've broken your tools with your tools ( https://www.usenix.org/system/files/1311_05-08_mickens.pdf ; content warning for hilarious hyperbolic rant)
I'm not a fan of this workflow. I like to start learning about something from the top down and there is basically no way to do that with this method. It's great for getting into the nitty gritty details of the code to solve a problem, but it's terrible for learning how the different chunks of code fit together or how the code looks from the top. For our developers with many years of experience, this isn't a problem. For newcomers like me, it is.
It probably doesn't help that this particular debugger is both written in and used for assembly language. If it were in a higher-level language, I could probably get more of a top-level view from the commentary, meaningful variable names longer than 8 characters, and the fact that 10 lines of Python does a lot more than 10 lines of C, which does a lot more than 10 lines of assembly.
Alas, inferring flow of control (stepwise) is challenging with async operations.
But even harder is inferring the data flows. So many abstractions, wrappers, transforms, slice & dice. Especially with these dynamic languages.
So in desperation I'll use/write "tracing wrappers" over interfaces to record/log inputs and outputs.
One example: I created TraceGL (a code-generating wrapper for OpenGL). Run my app, render a few frames, run the generated code, tweak till it works, then use that knowledge to fix my app.
One recent example: code-generated wrappers for S3, DynamoDB, and Redis, so I can figure out exactly what data is going where. Then I can fix the code.
Rob Pike has a quote about the primacy of data over code.
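A rough Python sketch of such a tracing wrapper, assuming a hypothetical client object (the real TraceGL/S3 wrappers were code-generated rather than reflective):

    import functools
    import logging

    logging.basicConfig(level=logging.DEBUG)
    log = logging.getLogger("trace")

    def traced(obj, label):
        """Proxy `obj`, logging the arguments and result of every method call."""
        class Tracer:
            def __getattr__(self, name):
                attr = getattr(obj, name)
                if not callable(attr):
                    return attr
                @functools.wraps(attr)
                def wrapper(*args, **kwargs):
                    log.debug("%s.%s args=%r kwargs=%r", label, name, args, kwargs)
                    result = attr(*args, **kwargs)
                    log.debug("%s.%s -> %r", label, name, result)
                    return result
                return wrapper
        return Tracer()

    # Hypothetical usage: s3 = traced(boto3.client("s3"), "s3")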
This is what you do when you can't rely on file names, variable names, function names, types, and comments.
A codebase that makes this the fastest possible workflow probably has some important design issues.
But I also try to write/architect my code so that the big stuff always gets logged. In cases where the logging isn't as good as it should be, a few hundred lines of reflection/meta-programming can enable just enough aspect-oriented programming capability to enable this approach.
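In that spirit, a short reflection-based sketch that patches logging into every method of an existing class in place (the service class is hypothetical), roughly the poor man's aspect-oriented programming described above:

    import functools
    import inspect
    import logging

    logging.basicConfig(level=logging.INFO)

    def log_all_methods(cls):
        """Rebind each method of `cls` so calls and returns get logged."""
        for name, fn in inspect.getmembers(cls, inspect.isfunction):
            @functools.wraps(fn)
            def wrapper(*args, _fn=fn, _name=name, **kwargs):
                logging.info("%s.%s called", cls.__name__, _name)
                result = _fn(*args, **kwargs)
                logging.info("%s.%s returned %r", cls.__name__, _name, result)
                return result
            setattr(cls, name, wrapper)
        return cls

    @log_all_methods
    class OrderService:  # hypothetical class standing in for "the big stuff"
        def place(self, item):
            return f"placed {item}"

    OrderService().place("book")  # logs the call and the return value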
- Create methods
- Delete methods
- Update methods
- Read methods
Creating and deleting things are usually the easiest starting points. You always have an idea of how an app does this, so it's pretty easy to figure out what you need to do to trigger the breakpoint. Usually the create and delete methods are named something obvious, so grepping for them is not terribly difficult.
If you work with JS libraries there is always some demo website; you can check for any event listeners on that element too, set breakpoints, and go from there.
Heck, even logging frameworks are this way. I wish we could sit down and rewrite some of these critical libraries for legibility. Especially as the number of libraries we have to interact with continues to grow day by day.
Us old folks that have been programming for 20 years don't even separate the two, there is no meaningful distinction. Programming is not a write-only operation (Perl excepted).
If it's an existing project/product, I get it running and find the entry point. If it's new, I write an entry point and get it running. Then I change something, or write something, and debug it. Is it working as expected? Maybe the execution flow isn't what I expected. Why do I always forget to initialize things right? Probably because every language thinks their version of native vs. abstract references is fancier.
Blah, I need to get to work cod... debug.... what am I doing today? Ah yea, writing documentation. Fack
“The EDSAC was on the top floor of the building and the tape-punching and editing equipment one floor below. […] It was on one of my journeys between the EDSAC room and the punching equipment that ‘hesitating at the angles of stairs’ the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs.”
— Sir Maurice Wilkes
Really need to squash the "Perl is write-only" meme. Modern Perl is quite readable, supported by tests and linters.
> Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
The loophole is be as clever as you can at making the code as clear as you can. And even then you aren’t always clever enough.
Edit: And why exactly is this opinion unacceptable? Being able to communicate what you've done (and hopefully why you did it) is an essential part of writing software that will be worked on and/or used by other people. Code that takes just as long to figure out how to use as it does to create from scratch is as good as no code.
A lot of people would take issue with that statement.
Most devs probably wouldn't notice that their backspace key went missing until they were fine tuning the phrasing of an email about why the best way to solve their spaghetti is to make the same spaghetti slingers port the codebase to $FAD_OF_THE_WEEK.
Edit: It should be obvious that this is a joke but if it's not you should probably check if your backspace key still works.
Back in the late 90's, when Java was still sort of new, my employer insisted that I install an IDE to do my Java development that I had been doing mostly in vi. The one they picked (for some reason) was Symantec's Visual Cafe. I found that I could reliably, reproducibly, crash the IDE by pressing the backspace key. Maybe that was their way of pushing a "no mistakes" philosophy?
JetBrains was the first one that was reliable. Slow, but not as slow as VA.
Given that the normal way to develop a Lisp program is to do it incrementally, your program (or someone else’s!) will break, and it will break often, and you’ll get launched into an interactive debugger. But it’s friendly, not requiring a completely separate, foreign toolchain, and much of the functionality works regardless of which editors/IDEs you’re using (though Emacs makes the experience much more ergonomic).
I think this is perhaps a bigger selling point of Common Lisp than all the great, oft-quoted things like metaprogramming.
Smalltalk is something like the ultimate debugger. The advanced Smalltalk coders would often do most of their coding in the debugger. (It's very easy to unwind the stack a few places, then resume execution.) The VisualWorks debugger was so nimble, people would put expressions to be executed and debugged into code comments! (The debugger would execute the code in the context in which it was evaluated, so this could be absolutely awesome.)
> Given that the normal way to develop a Lisp program is to do it incrementally, your program (or someone else’s!) will break, and it will break often, and you’ll get launched into an interactive debugger.
One of the most frequent demos we did in Smalltalk, was to write some empty scaffold methods, launch into an exception, then fill in everything in the debugger until the system was complete.
That's what some Lisp systems do, too.
> The debugger would execute the code in the context in which it was evaluated, so this could be absolutely awesome
That's what Lisp debuggers would do, too.
> launch into an exception, then fill in everything in the debugger until the system was complete.
That's actually an old story about Marvin Minsky:
Here's an anecdote I heard once about Minsky. He was showing a
student how to use ITS to write a program. ITS was an unusual
operating system in that the `shell' was the DDT debugger. You ran
programs by loading them into memory and jumping to the entry point.
But you can also just start writing assembly code directly into memory
from the DDT prompt. Minsky started with the null program.
Obviously, it needs an entry point, so he defined a label for that.
He then told the debugger to jump to that label. This immediately
raised an error of there being no code at the jump target. So he
wrote a few lines of code and restarted the jump instruction. This
time it succeeded and the first few instructions were executed. When
the debugger again halted, he looked at the register contents and
wrote a few more lines. Again proceeding from where he left off he
watched the program run the few more instructions. He developed the
entire program by `debugging' the null program.
One of the main debugging differences between Lisp and Smalltalk, though, is that Lisp systems often can run interpreted Lisp code (where Smalltalk runs bytecode), and then the debugger can see and change the actual source code while it is running. A Lisp interpreter runs off Lisp source code, which is still data, since in Lisp source = structured data.
Useful Lisp debugging experiences existed as early as late 60s / early 70s in BBN Lisp.
The BBN Lisp development then moved to Xerox PARC and was renamed to Xerox Interlisp.
Oftentimes, it's not necessarily how something works, but how well it breaks. Things will break, so engineer for it. You'll be happy, the team will be happy, and the business will be happy.
Debugging and error handling are first class activities. Building a thoughtful, "well engineered" tool chain around these activities is the mark of a great platform (and the developers behind it), in my opinion.
A bit of nitpicking: it is anything but "extraordinarily"; it is a completely ordinary activity.
I wish edit-and-continue, rewinding time, etc. were better supported across IDEs. Programming should be a "conversation with the machine", and ultimately with other programmers. When I'm stepping through my own code, I try to assume the role of an "outsider" and judge my code from the "outside".
The debugger is also the most important tool for me to understand code written by others.
But I think my workflow is similar to yours. I often set a breakpoint in my new code and just watch the state to see if it looks as expected, so small tidbits of usage here and there.
This is because of a good reason: implementing debugging is hard, very hard, and therefore expensive.
This is of course the main reason in practice.
I notice that times have changed a bit. This was true in software engineering like 10 years ago, but in recent years there is much more emphasis on good practices, constant refactoring, implementing lots of tests alongside the production code, etc. I debug maybe 1-2 hours per month, as a full-time programmer.
If everything is heavily design-by-contract, the code is clean, there are lots of tests, and everything is TDD'd, then most bugs are shallow and surface quickly. Good debugging skills and tools aren't so useful in this environment.
By contrast, if the code base is a mess and there are no tests I seem to spend about 60% of my time ascertaining the cause of unexpected behavior. Good debugging skills and tools are invaluable in this environment.
For example, I spend 50% of my time debugging another company's code, 20% of my time debugging my company's legacy code, 20% of my time refactoring/implementing unit tests (I'm sure like most engineers here, I consider those inseparable and essentially one activity), around 9% on new feature implementation written to the same standards as that refactored code, and maybe 1% of my time debugging code that is up to standard.
The 50% is the real killer because I have absolutely no control over the codebase that forms the bulk of it. It's produced by another company that's the primary contractor on the project and we just have to make our one small component work within the larger application. So it's not even like debugging a typical third-party dependency, in that we can't replace it with a different dependency and (due to terrible architecture) our code is riddled with hidden dependencies on their shitty, poorly documented code.
My point being: debugging skills are a core competency for most engineers because it's rare to have complete control over the code you interact with, and even rarer to have it all up to acceptable standards. Even in the best of environments, we all spend time debugging issues arising from third-party dependencies, whether it's an issue with the IDE, compiler, language design, server configuration, or literally any of the million things that can break your system unexpectedly on any given day.
I agree that this is all good practice and helps prevent common classes of bugs. Obviously it will also save you the time of debugging them.
But your stack doesn't end at your code. There's a whole bunch of other code yours is linked to, there might be an interpreter or vm, there's the OS, the kernel and firmware. So IMO it's a bit naive to say you don't need debugging tools/skills.
I didn't say that.
>But your stack doesn't end at your code. There's a whole bunch of other code yours is linked to, there might be an interpreter or vm, there's the OS, the kernel and firmware. So
Good quality code / projects would mean keeping a firm control over all of that stuff, so that when it breaks, it's fairly obvious what caused it (e.g. distinguishing between bugs caused by a different environment and bugs caused by newly pushed code).
That being said, it has been really bad for my debugging skills, which have atrophied tremendously. At work, where the code bases are complicated messes, I find myself not being able to debug things as well as I should.
We used to write an awful lot of tests on everything. When we stopped we increased productivity by almost 60% at almost no cost.
As I said, we still work with TDD, but we only do it when it makes sense.
In my current project we follow a "bug -> reproduce bug in testcase -> fix bug" method for some parts of the code. This has caused quite an extensive test suite to develop for those parts of the system.
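A minimal sketch of one turn of that loop, with a hypothetical parser that once crashed on thousands separators: the test is written first to reproduce the report, then the fix makes it pass and stays as a regression guard:

    def parse_amount(s: str) -> int:
        # The fix: strip the thousands separators that triggered the report.
        return int(s.replace(",", ""))

    def test_parse_amount_accepts_thousands_separator():
        # Written first, reproducing the production bug ("1,000" crashed).
        assert parse_amount("1,000") == 1000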
How often do the tests find a problem?
When it does find a problem, doesn't this mean that one hits a bug which existed before?
Can you give an example of a case when the test was useful?
Our regression tests have literally found hundreds of bugs over the years and saved us a lot of time and embarrassment.
There is a "Test Driven Debugging": when you look around a bit inside the production error dumps, then try to reproduce it with a test, then debug the test in a much smaller scope with something heavy like Valgrind or a Record-Replay debugger (https://github.com/mozilla/rr).
If you write tests regularly during development then you can also isolate and test parts fast when something goes wrong in production.
Here's what you're doing when you debug something with a debugger:
1) Figure out how to get the program into the state where the bug shows itself.
2) Analyze output from the program or its state (via the debugger) to determine whether or not the code changes you made fixed the bug.
All testing is, is writing a program that gets to the state mentioned in step 1, and having that program do the check you manually do in step 2.
Saying that testing is a waste of time implies that finding bugs, getting a program into the state where it shows the bug, and figuring out whether or not the bug is fixed is the easy part, whereas writing a few lines of source code is the hard and time-consuming part. This is unlikely to be true.
> in recent years there is much more emphasis on good practices, constant refactoring, implementing lots of tests alongside the production code, etc. I debug maybe 1-2 hours per month, as a full-time programmer.
Do you include "Why isn't this test passing?" as debugging? I would, and I spend a good amount of time answering that question.
Good logging practices help you pinpoint what code you need to debug.
It in no way replaces debugging.
"Unix for Beginners" (1979)
...obviously this is mildly facetious, but I want to point out the absurdity of assuming debugging is a difficult endeavor. That’s how it becomes a difficult endeavor.
If I just had something along the lines of gdb, I'd use it a lot less frequently; convenience and features have absolutely made debuggers a lot more attractive over the last few decades.
I think this serves nicely as a historical record of how the state of the art was 40 years ago.
Today, we have IDEs with integrated debuggers (not to mention Emacs and other editors usually integrate decently with gdb, lldb, pdb, etc.).
Using a debugger doesn't have to mean leaving your code and starting a different tool and different flow.
If you'd ask any of my colleagues what they'd do if there was a new policy saying use of debuggers were no longer permitted on a daily basis, the answer would more likely than not include the word "riot".
In a huge, complex code-base created by a diverse team of varied skill and competences, you can't assume all code is good or self-explanatory.
When you get confused about what happens where and why, the debugger is probably going to be your best friend.
I’d imagine you’d have the same effect if you mandated the use of debuggers.
And hey, I’m not arguing against their use, just saying they aren’t needed for most debugging. I would bet the vast majority of bugs in managed languages are easily discernible if you have decent type/assertion boundaries. Needing to use a debugger is certainly a smell about your code.
One less celebrated but very valuable approach to grasping a codebase is documenting what you've learnt. While you can learn about a codebase through exploratory debugging (mentioned by someone in a separate comment), you have to persist that knowledge on "paper" in some way. My favorite ways to do that:
- class diagrams (for static stuff like data structures, class relationships, etc.)
- sequence diagrams (for dynamic stuff like which functions call which and data passed between them)
Reverse engineering is probably a loaded term, but in general, digging in to understand the codebase helps you work more efficiently and avoid introducing bugs to the extent possible.
Young engineers are bad at debugging (bad meaning really bad) but good at programming puzzles.
This points to the fact that people value puzzles over experience and in turn miss the big picture.
Sure, you can code up a balanced RB tree in 5 minutes, but what will you do when packets start showing up in bursts on your service? Can you figure that out on a live system?
I despise this new bro culture of mugging up programming puzzles from LeetCode and HackerRank. Faltering at the first real-world challenge is not only disappointing, but an insult to the craft of engineering.
Until the application process changes, the applicants won't change.
Try upgrading the dependencies. Wrap the service in a harness to make sure it gets restarted when it fails. Forget about proactively looking for bugs; wait until they come in, and if you have time that week before they slip into the "stale, never to be addressed" pile, spend a few minutes trying to reproduce and mark "works for me".
If everything that one can do with a graphical debugger is single-step, step-into and continue, then they are using like 1% of its capabilities.
Taking advantage of a graphical debugger, especially when the environment supports code reload, is quite productive for interactive programming.
When I was working in Java 7 + Eclipse the ability to hotswap in changes to functions made debugging (even in production) a dream! Dropping a few new lines of conditional logging into a service was crucial for those 3am war room sessions.
Now, working in Java 8 + IntelliJ, hot swapping fails constantly because lambdas don't seem to compile in a stable fashion. Eclipse uses a more development-friendly compiler (ECJ), so I wonder if it has this solved.
3) Visual representation of data structures
4) Visual representation of ongoing tasks and thread stacks
5) Visualization of memory, CPU, IntelliTrace(.NET) and Java Flight Recorder data
6) Extra one as it is only for hobby coding, GPU debugging
I think it's useful to think of two independent 'axes' of debugging:
- single-stepping vs reading traces
- using specialist debugging software vs modifying the code
Most of the time I much prefer reading traces, and IDEs often have decent primitives for setting sophisticated tracepoints, but the UI support for them is often very weak (while adding a breakpoint might be a single key press).
So I've often found myself using 'printf'-style debugging because (until compile times get very long) it's more ergonomic than using the debugger's tracepoints.
I haven't tried rr yet. Does anyone know how good it is at this? I'd like to be able to do things like define a keypress for a given tracepoint and toggle its output on and off while I'm scrolling through a trace.
int* foo = ....               /* elided in the original */
*foo = 123;                   /* if foo is invalid, the crash happens here... */
printf("killroy was here");   /* ...yet the optimizer may reorder the store past this printf, so the message still appears and misleads */
That's why GCC advertises -O0 as «make debugging produce the expected results» (which in practice will avoid the problem in your example too).
There are lots of other ways to cause yourself trouble using printf to debug, of course (eg clobbering errno).
And the horror that descends upon you once you commit+push those statements.
At best it might be a slight embarrassment (that can be avoided by checking the diff before commit).
Then again, having killed the bug is more important than pretty commits.
if (debugLevel != 0) printf(....
Sometimes the condition compares debugLevel to some other value, but it will never print if debugLevel is zero, and it's going to be zero unless I set it to something else.
See my list of publications for a number of empirical studies on debugging and tools to support more efficient debugging: http://austinhenley.com/publications.html
- Anybody can look at the logs, not just developers.
- The logs show the sequence of events, not just a snapshot.
- Feasible to do in a live system.
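A small sketch of what that can look like with Python's stdlib logging (the function and its events are hypothetical); the output is a timestamped sequence of events anyone can read, live system included:

    import logging

    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
    )
    log = logging.getLogger("orders")

    def place_order(order_id: str, amount_cents: int) -> None:
        log.info("placing order id=%s amount=%d", order_id, amount_cents)
        # ... business logic would run here ...
        log.info("order id=%s accepted", order_id)

    place_order("A-17", 4200)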
I have found debugging to be fuzzy and largely based on experience, mental models, and internal (scientific method based?) approaches.
There seem to be relatively few debugging methodologies or frameworks (in the performance/systems realm there are USE, RED, ...?). I have only recently started to create a formal framework (a Google doc I update after each incident), after 9 years of engineering experience! Not because I wasn't interested in debugging, but because it literally took 9 years' worth of debugging incidents for patterns to start to become apparent to me (I work in devops/systems, so when there is a general feeling of a problem I'm usually tasked with finding root causes).
On a side note: so far the framework has been successful within my org and I hope to formalize it enough to write about it in the near future :)
Or somewhere mobile clients send their exceptions to?
Sadly the post is not very clear, so we can only guess.
I think it's useful to turn that on its head a little -- I know how the code should work, but a debugger can show me what it's actually doing, so I can see what's going wrong.
I don't think there's a big difference here between debugging and testing, say. Couldn't you also say that tests explain how your own code works?
I'm afraid of building a habit of making haphazard tweaks until it seems to DTRT in one case.
I like to think of debugging more as a science. It's literally a mix of theory and experiment: the program does something weird, you run some random experiments to gather more data, you come up with a theory that explains the results, you design an experiment to test your theory; a successful theory will then lead you to the correct fix.
A good debugger can help get the most data from your experiments. It's as if the LHC had sensors capable of tracking individual gluons, or the ability to replay a specific collision over and over, or even the ability to set up a desired collision from scratch.
If you have never ever written a bug, contact me for a job offer; I'll pay you 300% of my salary.
I know just the guy. He was interested in learning how to code.
But you're fallible. This is the same reason you need to use a profiler to see if your optimizations actually sped anything up. A debugger shows you what really happens, not what you think happens.
There isn't such a thing as "my own code".
Depending on the situation, I might spend most of my day debugging, as a way to better understand the program and as a code analysis tool.
This was especially useful if dealing with a code base that I was not familiar with, while in code that I wrote myself most of the time logging is sufficient.
I'm not sure why the author thinks that debugging is not a priority for tool makers, the debuggers that we currently have are awesome and have been so for years.
Bug handling is easy, fast, and friendly
Information on “code flow” is clear and discoverable
I use an interactive debugger inside the IDE or inside profiling tools. Debugging is something I do alone, so it does not need special process support.
We should learn and hone that skill of course, as it does not only offer a solution when lightning strikes, but a way for the developer to gain comprehension of the "internals".
I put together http://www.debug.coach/ because a guy I used to work with had better debugging skills than anybody and I noticed it was because he always knew the right questions to ask.
Of course, all degrees exists between the two extremes, and when going too far on one side for a given application is going to cause more problems than it solves (e.g. missing time to market opportunities).
Anyway, some projects, maybe those related to computer infrastructure or actually any kind of infrastructure, are more naturally positioned on the careful track (and even then it depends on which aspect; for example, cyber security is still largely an afterthought in large parts of the industry), and the careful track only needs debugging as a non-trivial activity when everything else has failed, so hopefully in very small quantities. But that does not mean good tooling is not needed when debugging really is required as a last resort. It is just that it is confined to unreleased side projects/tooling, or, when it happens in prod, it marks such a serious failure that, compared to other projects, it hopefully does not happen that often. In those contexts, a project that needs too much debugging can be at risk of dying.
So the mean "value" of debugging might be somehow smaller than the mean "value" of designing and writing code and otherwise organizing things so that we do not have to debug (that often).
gdb has an awful interface; it requires you to read a long manual and learn outdated command syntax before even using it; it is often faster to add logging and rerun the program. It is not a debugger for humans.
For comparison, the JS debugger in Chrome devtools works pretty well and is easy to use. It is really helpful and allows you to identify a problem very quickly (faster than reading an introduction page for gdb). But it has problems sometimes, when you try to set a breakpoint and it is set in another place, or is not set at all without any error message.
I saw some comments here against visual debugging; maybe their authors had experience with some buggy or poorly written debugger?
I also tried to use Firefox developer tools in several versions up to FF45 and almost every time something didn't work. And when they work, they choke on scrolling large minified files or heavy sites with lots of ads.
You're comparing an IDE to a command line environment. Linux has various IDEs that are more comparable, and I believe some standalone front ends to gdb; Windows also has windbg and dotnet mdbg, which are the command line equivalents of gdb that VS is interacting with for you.
gdb (and windbg for that matter) is much more powerful than Visual Studio debugging, though; much like vim and similar tools, it's not easy to learn but it's easy to use once you have learned. Stop expecting instant gratification.
That needs languages designed upfront with debugging etc as a first class live feature.
Seriously though, I imagine that junior devs would have such a view of debugging. But mature devs plan for it, as they know their limitations.
If you are careful when you program (assertions, tests, logging) you rarely need to use a debugger. Time spent in a debugger is invisible and lost. It is much better to spend time adding assertions, levels of logging, and writing tests.
Debuggers can be instrumented to do a lot of dev-only logging for you.
I do write tests and assertions, but if those fail, I am dropped into a debugger (if the program is executed interactively), where I can immediately inspect the state of my program and more often than not fix the problem on the fly.
I strive not to manage state but to write purely functional code; it makes life so much easier. See your second point as an example :)
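A tiny sketch of that workflow: the assertion states the contract, and when it fires in an interactive run you land in the debugger with the failing state in front of you (the function is hypothetical):

    import pdb
    import traceback

    def mean(xs):
        assert xs, "mean() of an empty sequence"  # a contract, not defensive coding
        return sum(xs) / len(xs)

    if __name__ == "__main__":
        try:
            mean([])  # the bug: some caller passed an empty list
        except AssertionError:
            traceback.print_exc()
            pdb.post_mortem()  # inspect locals right where the assert fired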
Everyone debugs the software they are building or working on.
Not everyone uses a debugger (depends on language, framework, etc).
IMHO debugging is best done through 1) proper logging 2) metrics 3) proper isolation and software architecture 4) crash early philosophy
Of course I can be wrong and write bullshit. But why egoistic?
Everyone would prefer to provide something fresh and clean than to wade through other people's poop.
I have found that code that needs a debugger is code that is written in an extremely stateful fashion.
The very reason why debuggers exist is to deal with code that has a dozen variables book-keeping complex state that you need to track closely to figure out how something works (or why something doesn't). Debuggers even have "watches" to track something closely so that it doesn't change underneath you, when you aren't looking.
To me this is a code smell. I would suggest that instead of using a debugger, we ought to be composing small functions together.
Small and pure functions only need a REPL to test.
Once you make sure the function is behaving right, you can almost just copy paste over the test cases you ran in your REPL into unit tests.
The simplest way to enforce this discipline is to do away with assignments, and instead constrain oneself to using expressions. This forces one to break code into smaller functions. Code written this way turns out to be much more readable, as well as reusable.
I suggest that we eschew the debugger in favour of the REPL.
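A short sketch of that REPL-first loop with a hypothetical pure function; the session transcript pastes into a unit test almost verbatim:

    from collections import Counter

    def top_words(text: str, n: int):
        """Pure expression style: no mutated locals, nothing to step through."""
        return Counter(text.lower().split()).most_common(n)

    # Tried in the REPL first:
    #   >>> top_words("the cat and the hat", 1)
    #   [('the', 2)]
    # ...then the same call pasted into a unit test:
    def test_top_words():
        assert top_words("the cat and the hat", 1) == [("the", 2)]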
There are a LOT of different ways to write software. Some more stateful than others. Some lower level than others. Some more asynchronous than others. Some more functional. Some more object-oriented.
Your experience with the software you write in your day to day is not the same stack, and may not even be in the same universe, as what others are attempting. Writing a React app is very different from writing a game in Unity, which is very different from writing a game engine in C++, which is very different from writing large-scale services that talk to clusters of computers and hardware devices, which is different from writing some low-level library in Rust or Haskell or doing data science in R or Octave.
Please fucking stop with this “one right way”, “you’re doing it wrong” dogmatic bullshit. Your limited worldview does not apply to ALL of software development.
Use a fucking REPL if you want. Use a debugger if you want. Use logs and traces if you want.
Just try to write good software, try to understand your code, and ignore dogmatic and valiantly bad advice and opinions like the parent comment on Hacker News.
And here I thought I was writing an innocuous comment on debugging code. Hyperbole much?
> There are a LOT of different ways to write software... Writing a React app is very different from writing a game in Unity, which is very different than writing a game engine in C++.
It's interesting that you brought up these 3 use cases. I started my career working for a shop that wrote a game engine in C++, then moved to Unity. I now write React apps for a living. Do I know you? :)
With the exception of a few corner cases, like portions of code that do rendering, and HFT software, where performance concerns trump everything else, what I have seen in that time is the following:
1) decomposing your code into smaller functions is objectively better than NOT doing so.
2) writing functions that don't mutate its local variables is objectively better than writing functions that DO.
By "objectively better", I mean that the code written conforming to the above have the following characteristics:
- more readable than code that does not.
- more testable than code that does not.
- more re-usable than code that does not.
It's on these two assumptions that I build my case for eschewing the debugger altogether in favour of the REPL. If you write code conforming to the above, you have absolutely no need for a debugger.
Having said that, it is possible that you need a debugger to wrap your mind around code bases that you inherit, which were written poorly (or equivalently, in a very "object oriented" fashion).
Using a debugger (as in, software where you step through code) and developing "debugging skills" are entirely different things. I was only commenting on the former.
> Your post is harmful because it suggest to junior developers that if they must use a debugger they must be developing their software in a poor fashion.
Would it be better if I said: "if you need a debugger, you are either mucking around in poorly written code that you inherited, or you're probably doing it wrong"?
> There are many good reasons to use a debugger, including to develop software or to explore a codebase and to debug your code.
I concede that exploration of a codebase is a valid use of a debugger.
Seeing that fn1 breaks on DataB is not as useful as seeing that fn1(fn2(DataA)) is the actual error.
I would agree that if you're using a debugger to verify over many iterations that something isn't going wacky with state, then yeah, that's not a great sign.
It would be nice if everyone wrote code that minimized state, or if every problem could operate with minimal state. But not all code is or should be like that.
"to do away with assignments" is not my favorite way of phrasing what you mean either; I would prefer people assign as many new variables as they like, and just avoid mutating them (i.e. making them `const` and using immutable data structures if performance allows)
While I would agree that it is much more important to have functions NOT mutate their local variables, doing away with assignments is a straitjacket which immediately forces you to decompose larger functions into smaller ones.
In fact, even in languages like Haskell, where mutation of local state is entirely done away with, I see constructs such as `where` and `let` being abused to construct inordinately long functions.
Longer functions tend to accrue even more length over a code base's life and become harder and harder to test as they do.
I generally follow the rule of threes ("if the code appears in three places the exact same way, extract it to a separate function") as this helps me balance readability of functions with size of functions.
I understand your advice applies very neatly to functional languages. I would love to live in a functional world. I don't. So that's why my advice is like that.
The janitors probably lack the skills to go to space and command a spaceship.