Why Isn't Debugging Treated as a First-Class Activity? (ocallahan.org)
244 points by dannas 35 days ago | 191 comments



One practice I don't think I've ever really seen properly discussed is exploratory debugging. Given a large codebase I'm not familiar with, I've found that this workflow is an incredibly fast way to get familiar with the overall flow and structure of the code:

    - run the thing
    - grab an output string of some sort (either output proper or logs or whatever else)
    - grep for it
    - set a breakpoint where the string is generated
    - run again
From there, you can explore the stack, and you start to get your bearings. How is this particular thing generated? Find the place in the stack where it first shows up, set a breakpoint, start over and debug from there. Repeat until you have a sufficiently solid grasp of the codebase that you can just navigate the source tree searching for where things are.
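For what it's worth, a minimal sketch of that workflow in Python (the function and string are hypothetical; breakpoint() is the stdlib hook that drops you into pdb):

  # Suppose grep showed the output string being built here:
  def render_rejection(order_id, reason):
      breakpoint()  # pauses execution in pdb the next time this runs
      return "order %s rejected: %s" % (order_id, reason)

  # At the (Pdb) prompt:
  #   where  - print the call stack that led to this string
  #   up     - move into the caller's frame and inspect its locals
  #   p reason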


To each their own; I used to enjoy that, but I absolutely hate it now. Given a codebase, I want to know the story. Why was the code written, what problem was being solved, why did they decide to solve it in this specific way? WHY, WHY, WHY? I need to know the whys. I want the story, I want to know the domains. How is the software layered? Can I explore one layer without going into other layers? I don't want to do so with a debugger. I want to do so with a document reader.


The downside is that this assumes a "why" exists. Codebases are often evolved rather than designed, and just are the way they are without persistent reasons for things. A common reason would be "because it was this other way before."

Imagine all the stuff that's probably in MS office, for which the reason is totally lost. There were reasons, but they're not really retrievable.


> Codebases are often evolved rather than designed, and just are the way they are without persistent reasons for things. A common reason would be "because it was this other way before."

Yes and no, I think. Codebases represent a collection of point-in-time decisions, and each one of those decisions had a reason. There's always a 'why' even if it's not captured.

> Imagine all the stuff that's probably in MS office, for which the reason is totally lost. There were reasons, but they're not really retrievable.

True - and that's Really Bad. That means you're stuck not able to improve or change certain things, because you don't know why they were done that way - and when you have a userbase the size of Office, you can't just say 'f it' and break things - at least not lightly, as a code-level decision.


The worst "why?" answers often go like this:

1. Green field+0 years: This particular portion of the system was designed in this way as a cargo cult.

2. Green field+2 years: Because that portion of the system was designed in that way, we had to modify this other portion of the system to achieve X.

3. Green field+3 years: Because of the change in 2, and this new constraint Y, we had to make this other change in a third portion of the system.

4. Green field+(3 to N) years: repeat steps 2 and 3.

5. Green field+N+M years: Wow, step 4 (or a previous step 5) has created a mess and exposed a serious architectural flaw from step 1 or a previous step 5. All of the logic in the various iterations of 2+3 is refactored as best as possible.

So, the "why" basically grounds out in "the refactoring we could afford to do in order to correct a mess that resulted from a cargo cult or previous refactoring of even older cargo cult".

Is that "why" functionally helpful? Usually not for me -- in either case, I can't really assume a lot and I have to proceed with laborious "just watch what the code does" debugging/logging that I would've done without the "why?".

But this is probably a ymmv situation.


The worst why that I regularly hear is, "the client changed their mind and it's the best we could do."

Which, of course, has two negative implications. Not just one.


Yes, that's the root cause of 2+3.


This is why a test suite is so important. There's no better source of "why" for a developer who is new to the codebase to turn to. Documentation is likely out of date, and pestering the original developer with questions takes both developers' time and is often not possible in cases where the original developer has moved on or forgotten the why. But tests show all the use cases that the original developer thought were important.


Test suites don't really explain the "why". The worst is having a hundred tests and every one of them verifying incorrect behavior - I've seen it happen.


Microsoft keeps a meticulous record of their code going back to the 90s. Going back past 2000 you start having to use older tools, but the change logs and bug databases are all there.

Of course every once in a while you see the super helpful commit log "fix bugs". Some of those are mine, and I've even burned myself with them in the past.


This is the same problem as "why aren't the docs up to date" and asking for the "why" is going to break down in the same place.

I think there might be ways to mostly sidestep this, though. Docs that look more like changelogs than "docs." Store them reverse chronologically so someone new can either look back at recent history or try to jump in to when a particular feature was originally added. Lean in to there being a series of "whys" for the code.


That’s what annotate is for. You can see the layers of sediment and work out the story.


> WHY

If a developer doesn't check for a null value, are you really going to head to the documentation to find out why? Is the segfault really by design? It might sound facetious, but that's 90% of bugs: not specifically segfaults, but trivial mistakes, invalid values and logic problems. Only a debugger can show you the steps and conditions involved that lead to that invalid value. Sometimes, through debugging, you find a problem that does span multiple layers or components.

If something spans multiple layers or components then that's a design problem. When you are solving a bug like this you have to "debug" at a much higher abstraction level. You shouldn't use a debugger for this (as you've pointed out); you should use a team of peers (documentation alone is not the correct tool). That team might rely on documentation, but would have to keep in mind that documentation rots over time.

If you're solving the former with the processes of the latter, I'm surprised you get anything done. Documentation won't tell you where an invalid value originates from or why. Only extremely verbose logging or debugging can do that.


> If a developer doesn't check for a null value, are you really going to head to the documentation to find out why?

Yes. I often write code in a way such that no null is checked. But if I do so, I always document it properly, along with my reasoning for why, if the function is used properly, no null will be passed.

This way, you can be sure that if a null is dereferenced, the source of the bug is not my function, but a function that does not obey the preconditions that my function imposes on its parameters.

So you know that you are not supposed to "fix" this bug by introducing a null check to my function, but instead fix the bug in the other function that calls mine.


I mostly agree with this sentiment, but I find it a bit friendlier to other developers to assert(x not null) in most languages, at least so that they can find out sooner that the error is, in fact, their fault.
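A small sketch of combining the two in Python (names are made up): the docstring records the "why", and the assert makes a violation blow up at the offending call site rather than somewhere downstream.

  def apply_coupon(order, coupon):
      """Apply 'coupon' to 'order'.

      Precondition: 'coupon' must not be None. Callers are expected to
      resolve a default coupon first; passing None is a bug in the caller,
      so there is deliberately no None handling here.
      """
      assert coupon is not None, "caller violated the not-None precondition"
      order.total -= coupon.amount
      return order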


How do you know if you made a programming boo boo unless you do a null check...

if (value == null) print "Must try harder"

?


I hope that you're leaving a comment in the function noting why you think that null should not be checked. Also, depending on the field, you might want to consider defensive programming.


The meaning of "defensive programming" is highly contextual even within a given "field". Also, major influential parts of the industry (last example in date being the C++ committee) are moving away from indistinct checks and exceptions, and toward contractual programming.


> Only a debugger can show you the steps and conditions involved that lead to that invalid value.

No. Static analysis can, and it is actually what is used most of the time, either with an automated tool or with your brain.

Of course debugging is also needed, but the right mix between the two is needed, and you are actually doing it even if you use a debugger a lot (at least I hope so; otherwise you are probably failing to fix your bugs correctly way too often).


But isn't breaking into a debugger and seeing the call stack often faster? Your brain won't help you much if you have thousands of files and more than 100,000 LOC, and with the debugger you just walk up the stack until you see where that NULL came from. And after fixing the bug, of course you can use your brain and search for similar code patterns to make sure the same mistake isn't repeated anywhere else.


This would be nice, but who wants to write the why?

If you're lucky, you've got the commit messages and associated bug/work tracker items that explain where the software comes from. I agree there's no substitute for planning documents, but those don't always survive confrontation with the enemy.


This worked for me when I was working with startup sized codebases doing mostly CRUD app stuff. Once you know how it's generally laid out you can start being productive.

But now I'm working on nuts and bolts cybersecurity stuff for a tech giant. Stuff like public key infrastructure. Even if I know why things are done, stepping through someone else's code with expected behaviors, pathological cases, and error cases has been extremely helpful because there are so many more subtleties to worry about.


For anything complex, you'll probably need a combination of both approaches (or some method of examining the existing codebase), and then go down another layer of whys. Why is the code written the way it is in the context of the business goals? Why is there this undocumented behavior? Why does the business requirement say x while the code does y?


Agree.

Yes, ability to explore and understand the project via debugger is a great and useful skill.

However, if it takes a debugger to understand the project, then, I'd say, it is a badly written and poorly documented project.


A lot of modern IDEs are debuggers too. So now you’re reading the code, in your document reader, but it’s got a bunch of the variables filled in with real values.


These are orthogonal. Yes, whys would be nice, but you will never actually get a satisfying answer. Debuggers are, I guess, the best option.


This is an unreasonably effective tactic. It even works if you don't have any source code; it's usually possible to get the disassembler to tell you where the start address of a string is being loaded into a register, so just breakpoint that and you've got your reverse engineering starting point. Or just overwrite it with another string. "Thank you for playing Wing Commander".

Wielding the low-level binary debugger well is also a good way to unnerve your coworkers and survive weird situations, like when you've got no tools because you've broken your tools with your tools ( https://www.usenix.org/system/files/1311_05-08_mickens.pdf ; content warning for hilarious hyperbolic rant)


Nice rant, good read, and I find myself wanting to know how they actually solved the problem.


I currently work at a small development shop whose main product is a debugger. The process you describe is our standard way of navigating the code base - load the debugger up in the debugger and poke around.

I'm not a fan of this workflow. I like to start learning about something from the top down and there is basically no way to do that with this method. It's great for getting into the nitty gritty details of the code to solve a problem, but it's terrible for learning how the different chunks of code fit together or how the code looks from the top. For our developers with many years of experience, this isn't a problem. For newcomers like me, it is.

It probably doesn't help that this particular debugger is both written in and used for assembly language. If it was in a higher level language, I could probably get more of a top level view from the commentary, meaningful variable names longer than 8 characters, and the fact that 10 lines of python does a lot more than 10 lines of C which does a lot more than 10 lines of assembly.


In VisualWorks Smalltalk, you could ask all the strings in the running system if they matched a pattern, even with regexes. Then you could ask the matching strings for all the objects that held runtime references to them, and quickly browse them.


I do much the same.

Alas, inferring flow of control (stepwise) is challenging with async operations.

But even harder is inferring the data flows. So many abstractions, wrappers, transforms, slice & dice. Especially with these dynamic languages.

So in desperation I'll use/write "tracing wrappers" over interfaces to record/log inputs and outputs.

One example: I created TraceGL, a code-generating wrapper for OpenGL. Run my app, render a few frames, run the generated code, tweak till it works, then use that knowledge to fix my app.

One recent example: code-generating wrappers for S3, DynamoDB, Redis. So I can figure out exactly what data is going where. Then I can fix the code.
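As a rough illustration, a generic tracing wrapper of that sort might look like this in Python (the wrapped client and logger names are made up):

  import functools
  import logging

  log = logging.getLogger("trace")

  class TracingWrapper:
      """Wrap any client object and log every method call's inputs and outputs."""

      def __init__(self, wrapped):
          self._wrapped = wrapped

      def __getattr__(self, name):
          attr = getattr(self._wrapped, name)
          if not callable(attr):
              return attr

          @functools.wraps(attr)
          def traced(*args, **kwargs):
              result = attr(*args, **kwargs)
              log.info("%s args=%r kwargs=%r -> %r", name, args, kwargs, result)
              return result

          return traced

  # e.g. s3 = TracingWrapper(real_s3_client)  # then use it exactly as before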

Rob Pike has a quote about the primacy of data over code.


The way you describe "exploratory debugging" makes it sound very close to binary reverse engineering: you put breakpoints/tracepoints at strategic places, run the program repeatedly, dump strategic memory addresses ... in an attempt to get a grasp of what the program is doing.

This is what you do when you can't rely on file names, variable names, function names, types, and comments.

A codebase that makes this the fastest possible workflow probably has some important design issues.


This process gets really fun when you don't have a debugger. Just source and program. Add log, recompile. Add log, recompile. oy.


I actually prefer logs to interactive debugging -- grepping through 20 different execution traces is much more pleasant than stepping through them, and I can discover everything instead of stepping through N times to fill out my mental image as new information becomes salient from previous debugging sessions. Also, I have never learned how to interactively debug anything parallel or especially distributed :(

But I also try to write/architect my code so that the big stuff always gets logged. In cases where the logging isn't as good as it should be, a few hundred lines of reflection/meta-programming can provide just enough aspect-oriented programming capability to enable this approach.


Additional fun when you are debugging something timing dependent like a livelock or a race condition.


I totally agree. One of the things that keeps me in Ruby land is the absolutely phenomenal debugging/REPL environment provided by Pry and related gems. When I want to understand how something works, I drop a binding.pry, run the code and start digging through the stack with the Pry toolchain. I feel like this process is so important and it's the first thing I find myself missing when I work with other languages. (I suspect you can get similar debugger/REPL functionality with Python's IPython, but I don't have a lot of experience with it.)
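On the Python side, a rough equivalent (assuming IPython or ipdb is installed; the function below is made up) might look like:

  # Stdlib: drop into pdb wherever you want to start digging.
  def handle_request(request):
      breakpoint()   # 'where', 'up'/'down', and 'p request' from here
      ...

  # Nicer REPL: open an IPython shell with the local scope available.
  #   from IPython import embed; embed()
  # Or point the stdlib hook at ipdb:
  #   PYTHONBREAKPOINT=ipdb.set_trace python app.py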


Whenever I explore a library, I do the same, but when I think of CRUD functions I always look for these first in this general order:

- Create methods

- Delete methods

- Update Methods

- Read Methods

Creating and deleting things are usually the easiest starting points. You always have an idea of how an app does this, so it's pretty easy to figure out what you need to do to trigger the breakpoint. Usually the create and delete methods are named something obvious, so grepping for them is not terribly difficult.

If you work with JS libraries there is always some demo website; you can check for any event listeners on the relevant element too, set breakpoints, and go from there.


You can tell people place no value on this by looking at how hard the call stack is to follow. Abstractions on abstractions so you can’t tell where you’re coming from or going to.

Heck, even logging frameworks are this way. I wish we could sit down and rewrite some of these critical libraries for legibility. Especially as the number of libraries we have to interact with continues to grow day by day.


Given a large codebase I was not familiar with, I wrote the Runtime Flow tool (https://vlasovstudio.com/runtime-flow/). It shows and records executed code in real time as you interact with an application. (Works only for .NET Windows programs.)


This is pretty similar to how I learn new systems.


The omission is interesting only in that it highlights that programming and debugging are one and the same.

Us old folks who have been programming for 20 years don't even separate the two; there is no meaningful distinction. Programming is not a write-only operation (Perl excepted).

If it's an existing project/product, I get it running and find the entry point. If it's new, I write an entry point and get it running. Then I change something, or write something, and debug it. Is it working as expected? Maybe the execution flow isn't what I expected. Why do I always forget to initialize things right? Probably because every language thinks its version of native vs. abstract references is fancier.

Blah, I need to get to work cod... debug.... what am I doing today? Ah yea, writing documentation. Fack


“By June 1949 people had begun to realize that it was not so easy to get programs right as at one time appeared. I well remember when this realization first came on me with full force.

The EDSAC was on the top floor of the building and the tape-punching and editing equipment one floor below. […] It was on one of my journeys between the EDSAC room and the punching equipment that ‘hesitating at the angles of stairs’ the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs.”

— Sir Maurice Wilkes


Right. Reading through code is just executing it on a high level, and very fuzzy and forgetful, virtual machine in your own brain.


My only complaint about my meatware computer is that it has a really bad ability to handle space complexity. With paper I can do OK with time complexity (although clockspeed is also an issue).


Mine is like a quantum computer. Not in performance, but that the register values decay within seconds to minutes.


I loved this comment. Great visualization!


> ... not a write-only operation (Perl excepted).

Really need to squash the "Perl is write-only" meme. Modern Perl is quite readable, supported by tests and linters.

http://modernperlbooks.com/


The problem with Perl is too many snowflakes.

https://flatline.org.uk/snowflake.pl.txt


Unfair! Perl can be perfectly readable and maintainable by the person who wrote the code. It is just that the language has been deliberately designed [1] to make it maintainable only by the same person who wrote the code.

[1] http://modernperlbooks.com/books/modern_perl_2016/01-perl-ph...


I really doubt that. I’ve read too much code written by some asshole with the same username as me. I spend too much time trying to keep that from happening and it still does.

> Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.

The loophole is to be as clever as you can at making the code as clear as you can. And even then you aren’t always clever enough.


That's an interesting reading of that chapter.


Possibly with the exception of COBOL, I think you can write write-only code in any programming language.


You can most definitely write write-only code in COBOL. Reading the words does not imply understanding the meaning. Prolixity leads to clarity in code just about as much as it does in prose. (It's unfortunate COBOL style has taken over almost completely.)


Going to second the write-only COBOL thing. I've read real-world COBOL code that generated reports with legions of impenetrable variable names. Without the context of the business logic it was nearly impossible to understand at times.


I'd take C code sprinkled with assembly or bash one-liners optimized for the fewest characters over Python, JS or PHP spaghetti that's partially documented by someone who cannot write proper sentences in whatever language they're writing documentation in. If you can't be sure your documentation actually reflects what the code does and how it does it, then just don't document it, or at least throw a disclaimer in the docs. This is even more important if "the docs" are your comments in the source and/or commit messages.

Edit: And why exactly is this opinion unacceptable? Being able to communicate what you've done (and hopefully why you did it) is an essential part of writing software that will be worked on and/or used by other people. Code that takes just as long to figure out how to use as it does to create from scratch is as good as no code.


Yes! Writing code that documents itself is the recommended practice when practical. It’s inexpensive relative to other forms of documentation, and it documents precisely what the code does, since it is the code. (And please don’t read this as “never write any other documentation”. This is just the easiest way to get some documentation.)


>Programming is not a write-only operation

A lot of people would take issue with that statement.

Most devs probably wouldn't notice that their backspace key went missing until they were fine tuning the phrasing of an email about why the best way to solve their spaghetti is to make the same spaghetti slingers port the codebase to $FAD_OF_THE_WEEK.

Edit: It should be obvious that this is a joke but if it's not you should probably check if your backspace key still works.


> check if your backspace key still works.

Back in the late 90's, when Java was still sort of new, my employer insisted that I install an IDE to do my Java development that I had been doing mostly in vi. The one they picked (for some reason) was Symantec's Visual Cafe. I found that I could reliably, reproducibly, crash the IDE by pressing the backspace key. Maybe that was their way of pushing a "no mistakes" philosophy?


In that era no IDE ran stable across its whole feature set. Crashing debuggers were very common. VisualAge had the most stable debugger but would crash just typing in code.

Jetbrains was the first one that was reliable. Slow, but not as slow as VA.


I think you’re wrong about most (seniorish) developers but I loved the way you said it haha.


Maybe they're just deleting code in command mode...


Debugging is an extraordinarily first-class, up-front, frequent activity in Common Lisp, as the facilities are built into the language itself. While Lisp gets a lot of flak for basically (up to a few daring power-user exceptions) requiring Emacs, SLIME/SLY [1,2] are environments that will make you feel closer to your program than you get in other environments, including graphical IDEs.

Given that the normal way to develop a Lisp program is to do it incrementally, your program (or someone else’s!) will break, and it will break often, and you’ll get launched into an interactive debugger. But it’s friendly, not requiring a completely separate, foreign toolchain, and much of the functionality works regardless of which editors/IDEs you’re using (though Emacs makes the experience much more ergonomic).

I think this is perhaps a bigger selling point of Common Lisp than all the great, oft-quoted things like metaprogramming.

[1] https://common-lisp.net/project/slime/

[2] https://github.com/joaotavora/sly


> Debugging is an extraordinarily first-class, up-front, frequent activity in Common Lisp, as the facilities are built into the language itself.

Smalltalk is something like the ultimate debugger. The advanced Smalltalk coders would often do most of their coding in the debugger. (It's very easy to unwind the stack a few places, then resume execution.) The VisualWorks debugger was so nimble, people would put expressions to be executed and debugged into code comments! (The debugger would execute the code in the context in which it was evaluated, so this could be absolutely awesome.)

> Given that the normal way to develop a Lisp program is to do it incrementally, your program (or someone else’s!) will break, and it will break often, and you’ll get launched into an interactive debugger.

One of the most frequent demos we did in Smalltalk, was to write some empty scaffold methods, launch into an exception, then fill in everything in the debugger until the system was complete.


> It's very easy to unwind the stack a few places, then resume execution

That's what some Lisp systems do, too.

> The debugger would execute the code in the context in which it was evaluated, so this could be absolutely awesome

That's what Lisp debuggers would do, too.

> launch into an exception, then fill in everything in the debugger until the system was complete.

That's actually an old story about Marvin Minsky:

--- Here's an anecdote I heard once about Minsky. He was showing a student how to use ITS to write a program. ITS was an unusual operating system in that the `shell' was the DDT debugger. You ran programs by loading them into memory and jumping to the entry point. But you can also just start writing assembly code directly into memory from the DDT prompt. Minsky started with the null program. Obviously, it needs an entry point, so he defined a label for that. He then told the debugger to jump to that label. This immediately raised an error of there being no code at the jump target. So he wrote a few lines of code and restarted the jump instruction. This time it succeeded and the first few instructions were executed. When the debugger again halted, he looked at the register contents and wrote a few more lines. Again proceeding from where he left off he watched the program run the few more instructions. He developed the entire program by `debugging' the null program. ---

One of the main debugging differences between Lisp and Smalltalk, though, is that Lisp systems can often run interpreted Lisp code (where Smalltalk runs bytecode), and then the debugger can see and change the actual source code while it is running. A Lisp interpreter runs off Lisp source code, which is still data, since in Lisp source = structured data.

Useful Lisp debugging experiences existed as early as late 60s / early 70s in BBN Lisp.

http://www.bitsavers.org/pdf/bbn/tenex/TenexLispRef_Aug72.pd...

The BBN Lisp development then moved to Xerox PARC and was renamed to Xerox Interlisp.


I think direct language support for "missing code here!" is a brilliant idea. Even when you have types and documentation, you often need to double-check that the values you'll get called with are what you expect.
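A rough Python approximation of that idea (hypothetical function): leave the missing piece as a stub, and running under python -m pdb (or with a post-mortem hook) drops you into the debugger with the real arguments in scope when the stub is hit.

  def reconcile_totals(invoice, payments):
      # Deliberately unwritten: hitting this at runtime raises, and a
      # post-mortem debugger session shows the actual invoice/payments
      # values needed to write the real implementation.
      raise NotImplementedError("missing code here!")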


I've heard Smalltalkers tell tales of starting an empty project by just hitting "Run", letting the debugger take exception to the fact that there is no startup function, then typing code as it becomes necessary to proceed.


That's what I just wrote!


I read the GP as saying “you don’t even need to create the scaffold methods” described in your post.


I've loved this aspect of CL. I often repeat a self-quote of mine (somewhat patronizingly) to my teams.

Often times, it's not necessarily how something works, but how well it breaks. Things will break, so engineer for it. You'll be happy, the team will be happy, and the business will be happy.

Debugging and error handling are first class activities. Building a thoughtful, "well engineered" tool chain around these activities is the mark of a great platform (and the developers behind it), in my opinion.


> Debugging is an extraordinarily first-class, up-front, frequent activity in Common Lisp

A bit of nitpicking: It is anything but "extraordinarily", it is a completely ordinary activity.


To nitpick your nitpick, the word "extraordinarily" here is not an adjective modifying "activity"; it is an adverb that stresses that debugging is uncommonly "first-class, up-front, and frequent" in Common Lisp compared to other languages.


Point taken. Thanks.


I'm using IDE debuggers as an integral part of my normal programming workflow, not for actual "debugging", but for stepping through new code to check if it "feels right". I hardly find bugs that way (that usually happens when the code is out in the wild and several unexpected things happen at the same time), but I add a lot of little changes based on the step-debugging... rename a variable here, add a forgotten comment there, and also think about what to do next. I'm quite sure I spend more time in the debugger than writing code.

I wish edit-and-continue, rewinding time, etc... would be better supported across IDEs. Programming should be a "conversation with the machine", and ultimately with other programmers. When I'm stepping through my own code, I try to assume the role of an "outsider" and judge my code from the "outside".

The debugger is also the most important tool for me to understand code written by others.


Edit-and-continue works in several languages. Java can hotswap, for instance. And you can to some extent rewind by dropping the current frame (unless your code had side effects, but that's just another reason to avoid them ;) )

But I think my workflow is similar to yours. I often set a breakpoint in my new code and just watch the state to see if it looks as expected, so small tidbits of usage here and there.


Edit-and-continue and the WYSIWYG GUI editor are two of the best features of VB6 that I wish other languages had.


> I wish edit-and-continue, rewinding time, etc... would be better supported across IDEs. Programming should be a "conversation with the machine", and ultimately with other programmers. When I'm stepping through my own code, I try to assume the role of an "outsider" and judge my code from the "outside".

There is a good reason for this: implementing debugging support is hard ... very hard, and therefore expensive.

This is of course the main reason in practice.


From the article: > One thing developers spend a lot of time on is completely absent from both of these lists: debugging!

I notice that times have changed a bit. This was true in software engineering like 10 years ago, but in recent years there is much more emphasis on good practices, constant refactoring, implementing lots of tests alongside the production code, etc. I debug maybe 1-2 hours per month, as a full-time programmer.


I tend to find that the amount of time/effort required for debugging is inversely proportional to good practices.

If everything is heavily design-by-contract, the code is clean, there are lots of tests, and everything is TDD'd, then most bugs are almost always shallow and surfaced quickly. Good debugging skills and tools aren't so useful in this environment.

By contrast, if the code base is a mess and there are no tests I seem to spend about 60% of my time ascertaining the cause of unexpected behavior. Good debugging skills and tools are invaluable in this environment.


Exactly, but I would argue that which of these environments you operate in isn't always under your or even your company's control, so debugging skills are indispensable for the overwhelming majority of engineers.

For example, I spend 50% of my time debugging another company's code, 20% of my time debugging my company's legacy code, 20% of my time refactoring/implementing unit tests (I'm sure like most engineers here, I consider those inseparable and essentially one activity), around 9% on new feature implementation written to the same standards as that refactored code, and maybe 1% of my time debugging code that is up to standard.

The 50% is the real killer because I have absolutely no control over the codebase that forms the bulk of it. It's produced by another company that's the primary contractor on the project and we just have to make our one small component work within the larger application. So it's not even like debugging a typical third-party dependency, in that we can't replace it with a different dependency and (due to terrible architecture) our code is riddled with hidden dependencies on their shitty, poorly documented code.

My point being: debugging skills are a core competency for most engineers because it's rare to have complete control over the code you interact with, and even rarer to have it all up to acceptable standards. Even in the best of environments, we all spend time debugging issues arising from third-party dependencies, whether it's an issue with the IDE, compiler, language design, server configuration, or literally any of the million things that can break your system unexpectedly on any given day.


> everything is heavily design-by-contract, the code is clean, there are lots of tests, and everything is TDD'd, then most bugs are almost always shallow

I agree that this is all good practice and helps prevent common classes of bugs. Obviously it will also save you the time of debugging these.

But your stack doesn't end at your code. There's a whole bunch of other code yours is linked to, there might be an interpreter or vm, there's the OS, the kernel and firmware. So IMO it's a bit naive to say you don't need debugging tools/skills.


>it's a bit naive to say you don't need debugging tools/skills.

I didn't say that.

>But your stack doesn't end at your code. There's a whole bunch of other code yours is linked to, there might be an interpreter or vm, there's the OS, the kernel and firmware. So

Good quality code / projects would mean keeping a firm control over all of that stuff, so that when it breaks, it's fairly obvious what caused it (e.g. distinguishing between bugs caused by a different environment and bugs caused by newly pushed code).


Couldn't agree more. When I work on my own software with a very nice and clean design, most bugs (which are quite rare) are almost always very superficial and simple. The most I ever need to do is hit the bug in the debugger and look at the stack trace, and the bug is obvious.

That being said, that has been really bad for my debugging skills, which have atrophied tremendously. At work, where the code bases are complicated messes, I find myself not being able to debug something as well as I should.


I agree on good practices, but often TDD is wasted time. Not on larger systems, but on small systems with limited functionality where you build using principles like SOLID so you always know where the flaw is, even without the test.

We used to write an awful lot of tests on everything. When we stopped we increased productivity by almost 60% at almost no cost.

As I said, we still work with TDD, but we only do it when it makes sense.


You don't have to do TDD for this to be useful or happen.

In my current project we follow a "bug -> reproduce bug in testcase -> fix bug" method for some parts of the code. This has caused quite an extensive test suite to develop for those parts of the system.


I'm curious... I can see the obvious value of having a testcase for a class of bugs, but a testcase for a particular bug instance which was already fixed earlier does not seem right, so I probably misunderstood something.

How often do the tests find a problem?

When it does find a problem, doesn't this mean that one hits a bug which existed before?

Can you give an example of a case when the test was useful?


This is known as regression testing, and it's extremely useful. For example, you find some bug where, say, a calculation produces a nonsense result. You fix it, then add a test for what you just fixed. 2 years from now, someone refactors the code and reintroduces the bug because their refactor didn't take into account all of the possible input cases. Your test now catches it.

Our regression tests have literally found hundreds of bugs over the years and saved us a lot of time and embarrassment.
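Such a regression test can be tiny; a sketch in pytest style (the function, numbers, and issue reference are made up):

  def apply_discount(total, percent):
      return total * (1 - percent / 100)

  def test_discount_not_applied_twice_issue_1234():
      # Pins the fixed behaviour so a later refactor can't silently
      # reintroduce the double-discount bug.
      assert apply_discount(100, 10) == 90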


Thanks. Of course. I see what you meant now.


But having a test framework always ready is useful.

There is a "Test Driven Debugging": when you look around a bit inside the production error dumps, then try to reproduce it with a test, then debug the test in a much smaller scope with something heavy like Valgrind or a Record-Replay debugger (https://github.com/mozilla/rr).

If you write tests regularly during development then you can also isolate and test parts fast when something goes wrong in production.


> often TDD is wasted time

I'd say it depends. I find that TDD works mostly as the complement to a good type checker. It is extremely useful when the PL has weak or no type checking (eg javascript, ruby, python, c, etc).


I don't think anyone suggested that TDD be used.

Here's what you're doing when you debug something with a debugger:

1) Figure out how to get the program into the state where the bug shows itself.

2) Analyze output from the program or its state (via the debugger) to determine whether or not the code changes you made fixed the bug.

Testing is just writing a program that gets to the state mentioned in step 1, and having that program do the check you manually do in step 2.

Saying that testing is a waste of time implies that finding bugs, getting a program into the state where it shows the bug, and figuring out whether or not the bug is fixed is the easy part, whereas writing a few lines of source code is the hard and time-consuming part. This is unlikely to be true.


    in recent years there is much more emphasis on good
    practices, constant refactoring, implementing lots of 
    tests alongside the production code, etc. I debug maybe 
    1-2 hours per month, as a full-time programmer.
I mean, sure, but in my career the practices you describe are rare. An awful lot of software development involves slogging through "legacy" code that is about as far away from those practices as it gets.


What do you mean by "slogging through legacy code" if not debugging, refactoring, and (hopefully/sometimes) writing tests?


Yeah that's what I mean. Because I spend a lot of time slogging through legacy code, I spend a lot of time debugging.


I guess I misunderstood/reversed the point you were making; sorry.


I love when a debate ends with two people realizing they essentially were agreeing all along ;-)


Aren't you agreeing with him? :)


> I debug maybe 1-2 hours per month

Do you include "Why isnt this test passing?" as debugging? I would and I spend a good amount of time answering that question.


Logging. You forgot to mention good logging practices. It is like predicting what information will be needed if you ever need to troubleshoot a piece of code.
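For example, a sketch of what that prediction can look like with Python logging (all names invented):

  import logging

  log = logging.getLogger(__name__)

  def charge(customer_id, amount_cents, attempt):
      # Record the identifiers and inputs you will wish you had when
      # troubleshooting later, not just "something went wrong".
      log.info("charge start customer=%s amount_cents=%d attempt=%d",
               customer_id, amount_cents, attempt)
      ...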


> Logging. You forgot to mention good logging practices.

Good logging practices help you pinpoint what code you need to debug.

It does in no way replace debugging.


> The most effective debugging tool is still careful thought, coupled with judiciously placed print statements.

"Unix for Beginners" (1979)


I'd argue that print and log statements are just a poor man's debugger. I don't deny the use and usefulness of logging in general, just its use for debugging. In order to catch a bug you need to cast a very wide net of print statements. A debugger allows you to seamlessly travel up and down the stack frames, look at different variables on the spot and investigate different code execution paths, even the ones that you didn't expect to at the start of the session. Doing the same with log statements is amazingly inefficient.


I’d call the debugger a nuclear powered print statement. Printing allows you to view particular state at a specific point in program flow; this is often all that is necessary to identify the bug. Doing the same thing with a debugger is amazingly inefficient.

...obviously this is mildly facetious, but I want to point out the absurdity of assuming debugging is a difficult endeavor. That’s how it becomes a difficult endeavor.


Note the date. There have been some changes in tools in the last 40 years.


It’s still true. A debugger is still unnecessary for daily debugging.


Unnecessary, maybe, but it's nice to have. I use IntelliJ's most days.

If I just had something along the lines of gdb, I'd use it a lot less frequently; convenience and features have absolutely made debuggers a lot more attractive over the last few decades.


> A debugger is still unnecessary for daily debugging.

I think this serves nice as a historical record of how the state of the art was 40 years ago.

Today, we have IDEs with integrated debuggers (not to mention Emacs and other editors usually integrate decently with gdb, lldb, pdb, etc.).

Using a debugger doesn't have to mean leaving your code and starting a different tool and different flow.

If you'd ask any of my colleagues what they'd do if there was a new policy saying use of debuggers were no longer permitted on a daily basis, the answer would more likely than not include the word "riot".

In a huge, complex code-base created by a diverse team of varied skill and competences, you can't assume all code is good or self-explanatory.

When you get confused about what happens where and why, the debugger is probably going to be your best friend.


> If you'd ask any of my colleagues what they'd do if there was a new policy saying use of debuggers were no longer permitted on a daily basis, the answer would more likely than not include the word "riot".

I’d imagine you’d have the same effect if you mandated the use of debuggers.

And hey, I’m not arguing against their use, just saying they aren’t needed for most debugging. I would bet the vast majority of bugs in managed languages are easily discernible if you have decent type/assertion boundaries. Needing to use a debugger is certainly a smell about your code.


I also think that the excessive overreliance on frameworks that are almost impossible to debug makes the practice a chore. I often program in the debugger because I prefer to think of programs in terms of data flows rather than control flows, and I hate when reflective frameworks take complete control over initialization without giving debugging hooks in the documentation. It makes understanding your program almost impossible.


That implies that either tests fail rarely, or when they do fail, it's always easy to determine what the root cause is. Both of those are far outside my experience. Which is it, for you?


More broadly, reverse engineering a codebase is a critical software activity. If you can't reverse engineer a codebase you can't debug it or add to it. Working with an existing code base is pretty much a given, unless you're developer #1 on a startup. Your success as a software developer hinges on how well you can work with an existing codebase.

One less celebrated but very valuable approach to grasp a codebase is the way you document what you've learnt. While you can learn about a codebase through exploratory debugging (mentioned by someone on a separate comment) you have to persist that knowledge on "paper" in some way. My favorite ways to do that:

  - class diagrams (for static stuff like data structures, class relationships, etc.)

  - sequence diagrams (for dynamic stuff like which functions call which and data passed between them)
With good documentation for your own learning purposes, it'll become easier to start working on a codebase and especially when you context switch, it'll help you return to a codebase more easily.


Ideally, documentation would be written so that there is no need to reverse engineer every project you wade into. Otherwise why share the source code in the first place?


There's various kinds of documentation of differing granularity. Typically the high level documentation (coarse grained if you will) about architecture, requirements, etc. stays good over time but finer grain documentation (the kind I mentioned in my earlier comment -- data structures, order of function invocation, etc.) gets stale quickly because source code is constantly changing -- it gets refactored, plenty of bug fixes are put in, etc.

Reverse engineering is probably a loaded word, but in general digging in to understand the codebase helps you work more efficiently and avoid introducing bugs to the extent possible.


Reading the comments here I get the impression folks are thinking about "debugging MY code". In my experience when you are debugging you are almost always looking at an issue in someone else's code.


I see a disturbing trend in the industry, especially in the Bay Area, from my experience.

Young engineers are bad at debugging (and bad means really bad) but good at programming puzzles.

This points to the fact that people value puzzles over experience and in turn miss the big picture.

Sure, you can code up a balanced RB tree in 5 minutes, but what will you do when packets start showing up in bursts on your service? Can you figure that out on a live system?

I despise this new bro culture of cramming programming puzzles from LeetCode and HackerRank. Faltering at the first real-world challenge is not only disappointing, but an insult to the craft of engineering.

/Rant over


Echoing this and an earlier thread...interviewers could have debugging exercises with an existing codebase. I have seen this ambitious approach at a few startups. Instead of asking yet another sorting algorithm or HackerRank brainteaser.

Until the application process changes, the applicants won't change.


Interviewing is only a small part of the picture. MVP has so eroded the notion of correctness that whole organizations don't even look for bugs anymore.

Try upgrading the dependencies. Wrap the service in a harness to make sure it gets restarted when it fails. Forget about proactively looking for bugs; wait until they come in, and if you have time that week, before they slip into the 'stale - never to be addressed' pile, spend a few minutes trying to reproduce and mark 'works for me'.


sorry what does "bro" culture have anything to do with the puzzle/algorithm heavy interview culture in the valley? That term is tossed around so much, and it doesn't seem to mean anything here


The trend isn’t just in the Bay Area. I’m seeing the same thing in the Midwest.


I have been using debuggers in anger since the Turbo Vision-based IDE for Turbo Pascal 6.0 on MS-DOS.

If all one does with a graphical debugger is single-step, step-into and continue, then they are using maybe 1% of its capabilities.

Taking advantage of a graphical debugger, especially when the environment supports code reload, is quite productive for interactive programming.


One of the best things about Java and having a strong IDE ecosystem.

When I was working in Java 7 + Eclipse the ability to hotswap in changes to functions made debugging (even in production) a dream! Dropping a few new lines of conditional logging into a service was crucial for those 3am war room sessions.

Now, working in Java 8 + IntelliJ, hot swapping fails constantly because lambdas don't seem to compile in a stable fashion. Eclipse uses a more development-friendly compiler (ECJ), so I wonder if it has this solved.


I have yet to try out that scenario on Eclipse, as I tend to rein in my FP enthusiasm due to team members.


what are the top 5 other things you find yourself doing with the visual debugger?


1) Going through the code

2) Edit-and-continue/REPL

3) Visual representation of data structures

4) Visual representation of ongoing tasks and thread stacks

5) Visualization of memory, CPU, IntelliTrace(.NET) and Java Flight Recorder data

6) Extra one as it is only for hobby coding, GPU debugging


He writes « Perhaps people equate "debugging" with "using an interactive debugger" »

I think it's useful to think of two independent 'axes' of debugging:

- single-stepping vs reading traces

- using specialist debugging software vs modifying the code

Most of the time I much prefer reading traces, and IDEs often have decent primitives for setting sophisticated tracepoints, but the UI support for them is often very weak (while adding a breakpoint might be a single key press).

So I've often found myself using 'printf'-style debugging because (until compile times get very long) it's more ergonomic than using the debugger's tracepoints.

I haven't tried rr yet. Does anyone know how good it is at this? I'd like to be able to do things like define a keypress for a given tracepoint and toggle its output on and off while I'm scrolling through a trace.


This pertains only to C and C++, but the problem with printf-type debugging is that it might throw you off. Consider:

  int* foo = ....
  ...
  *foo = 123;
  printf("killroy was here");
Now that code could actually print "killroy was here" but also crash on the pointer dereference. When this happens a naive developer will be thinking that the foo ptr access can't be the problem.


Alas, the same problem has been known to happen with debugger-provided breakpoints and tracepoints.

That's why GCC advertises -O0 as «make debugging produce the expected results» (which in practice will avoid the problem in your example too).

There are lots of other ways to cause yourself trouble using printf to debug, of course (eg clobbering errno).


> using 'printf'-style debugging

And the horror that descends upon you once you commit+push those statements.


What horror?

At worst it might be a slight embarrassment (and that can be avoided by checking the diff before committing).

Then again, having killed the bug is more important than pretty commits.


Either "don't do that then" or have a logging framework that lets you leave enough printf in to diagnose crashes in the field.


I used to debug using print statements, and that was because we were not allowed to use any IDE. Now that I know how to use IDEs properly, things have been better. And there is always logging and `git diff` to help you out. Those colouring schemes really come in handy.


Doesn't happen to me. My printfs start like this:

if (debugLevel != 0) printf(....

Sometimes the condition compares debugLevel to some other value, but it will never print if debugLevel is zero, and it's going to be zero unless I set it to something else.


A pre-commit hook can prevent that for you. (I use a linter that will only allow log statements if they're preceded by an exception comment, which I will only add if I actually intend to log things.)
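A rough sketch of such a check, as a Python script a pre-commit hook could run over the staged files (the marker comment is hypothetical, and checked on the same line here for brevity):

  import re
  import sys

  ALLOWED = re.compile(r"#\s*allow-debug")
  SUSPECT = re.compile(r"\bprint\(|\bbreakpoint\(|pdb\.set_trace\(")

  def check(path):
      with open(path) as f:
          return [(path, n, line.rstrip())
                  for n, line in enumerate(f, 1)
                  if SUSPECT.search(line) and not ALLOWED.search(line)]

  if __name__ == "__main__":
      problems = [hit for path in sys.argv[1:] for hit in check(path)]
      for path, n, line in problems:
          print("%s:%d: stray debug statement: %s" % (path, n, line))
      sys.exit(1 if problems else 0)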


What I do, in the situation where I have had to track down a really elusive bug and chased it around the code with print statements, is check out a fresh copy of the code and only merge in my fix.


Do you people not read through your changes before checking in? I mean, it's better to have a buddy review check-ins, but even rubber-ducking through the changes will catch a ton of mistakes.


There were Smalltalk server processes that would catch an exception and snapshot their memory image. A developer could then come along and open a full interactive debugger on a live, running version of the system, which they could then modify and run as a quick dev test.


The academic Software Engineering community certainly does put a lot of attention on debugging.

See my list of publications for a number of empirical studies on debugging and tools to support more efficient debugging: http://austinhenley.com/publications.html


I worked on petabyte-scale data at Mozilla. Here is a talk I gave on the history of debugging and the ways in which doing distributed large data analysis requires moving beyond printf() and STEP:

https://www.youtube.com/watch?v=QHJBPzfrokU


I agree with the author that a debugger is a poor fit for many systems. However, debugging is essential, and I think debugging by checking logs works very well in many cases (provided that the logging is good). Some of the advantages (over debuggers):

- Anybody can look at the logs, not just developers.

- The logs show the sequence of events, not just a snapshot.

- Feasible to do in a live system.

https://henrikwarne.com/2014/01/01/finding-bugs-debugger-ver...


A debugger that doesn't support those three things is just a lousy debugger.


There are many different kinds of debugging, and a debugger optimized with an interface for pinhole inspecting and stepping isn’t the best tool for logging and log processing; it is OK for them to be separate things.


I never said not to log or look at logs, but a good debugger should be possible to attach to a running process, and reverse debugging can give you access to past events.


Sure, but you are still dealing with a pinhole even if you can move that pinhole around. It isn’t a complete debugging experience, nor was it meant to be.


I feel like it has a lot to do with the ambiguity involved in debugging, and the fact that it is hard to teach.

I have found debugging to be fuzzy and largely based on experience, mental models, and internal (scientific method based?) approaches.

There seem to be relatively few debugging methodologies or frameworks (i.e., in the performance/systems realm there are USE, RED, ...?). I have only recently started to create a formal framework (a Google doc I update after each incident), after 9 years of engineering experience! Not because I wasn't interested in debugging, but it literally took 9 years' worth of debugging incidents for patterns to start to become apparent to me (I work devops/systems, so when there is a general feeling of a problem I'm usually tasked with finding root causes).

On a side note: so far the framework has been successful within my org and I hope to formalize it enough to write about it in the near future :)


"90% of coding is debugging. The other 10% is writing bugs” - Bram Cohen


Somebody once posted in a Slack channel I was part of asking for things they should include in a course they were teaching for new programmers. My #1 recommendation was how to read a stack trace. The second was how to put a logging statement into your code to capture state.


Just get them working on some mess of a WordPress project and you'll get them all reaching for xdebug right quick :D


The result of debugging will be summed up in a commit or a ticket in Gitlab. Why should Gitlab care about the process?


Maybe he wants something like a log analyzer similar to splunk?

Or somewhere mobile clients can send their exceptions?

Sadly the post is not very clear, so we can only guess.


Gitlab supports CI for build + test. An additional setup to run tests under gdbserver is a reasonable request.


I'm afraid of building a habit of making haphazard tweaks until it seems to DTRT in one case. I don't like the idea of needing the machine to explain to me how my own code actually works. The feeling of complete understanding is an important motivator to me.


> I don't like the idea of needing the machine to explain to me how my own code actually works.

I think it's useful to turn that on its head a little -- I know how the code should work, but a debugger can show me what it's actually doing, so I can see what's going wrong.

I don't think there's a big difference here between debugging and testing, say. Couldn't you also say that tests explain how your own code works?

> I'm afraid of building a habit of making haphazard tweaks until it seems to DTRT in one case.

I like to think of debugging more as a science. It's literally a mix of theory and experiment: the program does something weird, you run some random experiments to gather more data, you come up with a theory that explains the results, you design an experiment to test your theory; a successful theory will then lead you to the correct fix.

A good debugger can help get the most data from your experiments. It's as if the LHC had sensors capable of tracking individual gluons, or the ability to replay a specific collision over and over, or even the ability to set up a desired collision from scratch.


I find it quite useful to have the machine explain to me how other people's code works, especially once you're working on a system of sufficient size and legacy that a truly complete understanding is impossible. Especially over here in C++ land, where surprises lurk under every operator.


If you had a bug, then you didn't know what the machine was doing. You think you have an idea that it's doing X, while it does Z. If you have ever written a bug because you should have written ++x instead of x++, then you need the machine to tell you what it's doing, because it's not doing what you think.

If you have never ever written a bug, contact me for a job offer; I'll pay you 300% of my salary.


> If you have never ever written a bug, contact me for a job offer; I'll pay you 300% of my salary

I know just the guy. He was interested in learning how to code.


> I don't like the idea of needing the machine to explain to me how my own code actually works.

But you're fallible. This is the same reason you need to use a profiler to see if your optimizations actually sped anything up. A debugger shows you what really happens, not what you think happens.
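
A rough Python sketch of that kind of check (slow_path and fast_path are invented stand-ins for a before/after pair):

    import cProfile
    import pstats

    # Hypothetical before/after versions of the same computation.
    def slow_path(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    def fast_path(n):
        return sum(i * i for i in range(n))

    # Profile both and compare the measured times instead of guessing.
    for fn in (slow_path, fast_path):
        profiler = cProfile.Profile()
        profiler.runcall(fn, 1_000_000)
        pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)

Comparing the measured numbers beats eyeballing the code and assuming the "faster" version won.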


Good luck achieving that in feature implementations on distributed teams with 10+ devs.

There is no such thing as "my own code".


I used to spend my days in the Java debugger, stepping through framework code to troubleshoot application-level errors, because the stack trace we got obfuscated the actual error, or because everything was simply broken.

Depending on the situation, I might spend most of my day debugging, as a way to better understand the program and as a code analysis tool.

This was especially useful when dealing with a code base I was not familiar with; in code that I wrote myself, logging is sufficient most of the time.

I'm not sure why the author thinks that debugging is not a priority for tool makers, the debuggers that we currently have are awesome and have been so for years.


Debugging is hidden here:

    Bug handling is easy, fast, and friendly
    Information on “code flow” is clear and discoverable

I use an interactive debugger inside the IDE or inside profiling tools. Debugging is something I do alone, so it does not need special process support.


I think the approach we should take is to pour more effort into minimizing the risk of debugging being necessary: good practices, better type checkers, and logic models (e.g. there was an article posted on the front page here about using linear logic to push the boundaries of file systems). Debugging can be a time sink and can become a painful task for fast distributed systems.

We should of course learn and hone that skill, as it not only offers a solution when lightning strikes, but also a way for the developer to gain comprehension of the "internals".


In my opinion the reason that debugging isn't treated as a first-class activity is that it isn't taught at all in schools. Or at least it wasn't when I was in school. We were given about 2 sentences that said, "the debugger is at /usr/bin/gdb. It can help you step through your code to find a bug." And then we were thrown to the wolves with what is arguably one of the worst debuggers in the world with no documentation and no help.


I often wonder why it's not treated as a first class citizen when teaching programming. I think there should be a full CS course dedicated to it.

I put together http://www.debug.coach/ because a guy I used to work with had better debugging skills than anybody and I noticed it was because he always knew the right questions to ask.


While we will probably always have to debug (and if not the "code", then a specification formal enough that it can be considered code anyway), there are different ways to approach its role within the lifecycle of software development. At one extreme, one can "quickly" but somewhat randomly throw lines together without thinking much about it, then try the result and correct the few defects that those few tries reveal (and leave dozens to hundreds of other defects to be discovered at more inconvenient times). At the other, one can think a lot about the problem, study the software where the change must be made, and carefully write code that implements exactly what is needed, with very few defects on both the what and the how side. Note that the "quick" aspect of the first approach is a complete myth (if taken to the extreme, and except for trivially short runs or when the result does not matter much), because a system cannot be developed like that in the long term without collapsing on itself: there will be either spectacular failures or unplanned dev slowdown, and if the slowdown route is taken, the result will be poorer than if a careful approach had been taken in the first place, while the velocity might not even be higher.

Of course, all degrees exist between the two extremes, and going too far to one side for a given application can cause more problems than it solves (e.g. missing time-to-market opportunities).

Anyway, some projects (maybe those related to computer infrastructure, or really any kind of infrastructure) are more naturally positioned on the careful track (and even then it depends on which aspect; for example, cyber security is still largely an afterthought in large parts of the industry), and the careful track only needs debugging as a non-trivial activity when everything else has failed, so hopefully in very small quantities. That is not to say good tooling isn't needed when debugging really is required as a last resort. It is just that debugging is confined to unreleased side projects and tooling, or, when it happens in prod, it marks such a serious failure that, compared to other projects, those occasions hopefully do not happen that often. In those contexts, a project which needs too much debugging can be at risk of dying.

So the mean "value" of debugging might be somewhat smaller than the mean "value" of designing and writing code and otherwise organizing things so that we do not have to debug (that often).


I think that is because there are no good free open-source debuggers? In Visual Studio you just set a breakpoint and examine the variables; it works great. On Linux you are left alone against a command-line monster with weird, outdated command syntax.

gdb has an awful interface; it requires you to read a long manual and learn outdated command syntax before even using it; it is often faster to add logging and rerun the program. It is not a debugger for humans.

For comparison, the JS debugger in Chrome devtools works pretty well and is easy to use. It is really helpful and allows you to identify a problem very quickly (faster than reading an introduction page for gdb). But it sometimes has problems, where you try to set a breakpoint and it is set in another place, or is not set at all, without any error message.

I saw some comments here against visual debugging; maybe their authors had experience with some buggy or poorly written debugger?

I also tried to use the Firefox developer tools in several versions up to FF45, and almost every time something didn't work. And when they did work, they choked on scrolling large minified files or on heavy sites with lots of ads.


> In Visual Studio you just set a breakpoint and examine the variables; it works great. On Linux you are left alone against a command-line monster with outdated weird command syntax.

You're comparing an IDE to a command-line environment. Linux has various IDEs that are more comparable, and I believe some standalone front ends to gdb; Windows also has windbg and dotnet mdbg, which are the command-line equivalents of gdb that VS is interacting with for you.

Gdb (and windbg for that matter) is much more powerful than Visual Studio debugging though. Much like vim and similar tools, it's not easy to learn, but it's easy to use once you have learned it. Stop expecting instant gratification.


Because most languages treat edit, compile, test, and debug as separate phases. There is no organic evolution from your idea to code to a running app, tweaking it all the while without restarts, which are pit stops that completely break your momentum.

That needs languages designed up front with debugging etc. as a first-class, live feature.


One thing that may be relevant is that I was never introduced to debugging in university at all. You were expected to just invent useful techniques on your own. Without any exposure, it is hard to justify learning that over any other topic you feel like you ought to be learning.


If you needed to debug, it is a sign that you have made a mistake. Who wants to admit to making the mistake? :-)

Seriously though, I imagine that junior devs would have such a view of debugging. But mature devs plan for it, as they know their limitations.


Because debugging is not a first-class activity.

If you are careful when you program (assertions, tests, logging) you rarely need to use a debugger. Time spent in a debugger is invisible and lost. It is much better to spend time adding assertions, adding levels of logging, and writing tests.
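
A minimal Python sketch of what I mean (the function and numbers are invented; the point is the assertion plus leveled logging):

    import logging

    logging.basicConfig(level=logging.DEBUG)
    log = logging.getLogger("billing")  # hypothetical module name

    def settle(balance, payment):
        # Assert the invariants you relied on while writing the code...
        assert payment >= 0, f"negative payment: {payment!r}"
        new_balance = balance - payment
        # ...and log at levels you can dial up later instead of reaching for a debugger.
        log.debug("settle: balance=%r payment=%r -> %r", balance, payment, new_balance)
        if new_balance < 0:
            log.warning("account overdrawn by %r", -new_balance)
        return new_balance

    settle(50.0, 75.0)  # emits a DEBUG line and a WARNING about the overdraft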


Depending on the capabilities of a debugger, developers might have test failures drop them into an interactive debugger that allows reloads/re-execution of code units.

Debuggers can be instrumented to do a lot of dev-only logging for you.

I do write tests and assertions, but if those fail I am dropped into a debugger (if the program is executed interactively), where I can immediately inspect the state of my program and, more often than not, fix the problem on the fly.
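
In Python that workflow looks roughly like this (the failing function is made up; pdb.post_mortem is the real mechanism, and pytest's --pdb flag gives the same behaviour for failing tests):

    import pdb
    import sys

    def parse_port(raw):
        port = int(raw)  # raises ValueError on non-numeric input
        assert 0 < port < 65536, f"port out of range: {port}"
        return port

    if __name__ == "__main__":
        try:
            parse_port("http")  # deliberately bad input
        except Exception:
            # Drop into the debugger at the point of failure, with all frames intact.
            pdb.post_mortem(sys.exc_info()[2])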


I think that, while this might work, the approach to managing state and writing the tests is not ideal. For one, your tests should be fast (i.e. you should be able to rerun them with almost zero time penalty and see the results instantly). Second, if it’s easier to stop the flow and look at or modify the state than to see what the state should be via the test setup, it’s possible that the abstractions you’re using are not quite right and/or the code is tightly coupled.


> I think that, while this might work, the approach to managing state and writing the tests is not ideal.

I strive not to manage state but to write purely functional code; it makes life so much easier - see your second point as an example :)


I think there is a difference between debugging and using a debugger.

Everyone debugs the software they are building or working on.

Not everyone uses a debugger (depends on language, framework, etc).

IMHO debugging is best done through 1) proper logging 2) metrics 3) proper isolation and software architecture 4) crash early philosophy


This is such egoistic bullshit


> This is such egoistic bullshit

Of course I can be wrong and write bullshit. But why egoistic?


Not OP, but a guess: Implying you are careful, but others aren't.


For the same reason that the water treatment plant gets more respect than the sewage treatment plant.

Everyone would prefer to provide something fresh and clean than to wade through other people's poop.


Ask Bryan Cantrill. He is a first-class debugger.


Has anyone written IntelliJ plugins that deal with the debugger? I have an interesting idea wrt this, but writing an IntelliJ plugin has so far been extremely painful. If anyone out there is interested, send me a message and let's chat.


I would go so far as to suggest that if you need a debugger, you're probably doing it wrong.

I have found that code that needs a debugger is code that is written in an extremely stateful fashion.

The very reason why debuggers exist is to deal with code that has a dozen variables book-keeping complex state that you need to track closely to figure out how something works (or why something doesn't). Debuggers even have "watches" to track something closely so that it doesn't change underneath you, when you aren't looking.

To me this is a code smell. I would suggest that instead of using a debugger, we ought to be composing small functions together.

Small and pure functions only need a REPL to test.

Once you make sure the function is behaving right, you can almost just copy-paste the test cases you ran in your REPL into unit tests.

The simplest way to enforce this discipline is to do away with assignments, and instead constrain oneself to using expressions. This forces one to break code into smaller functions. Code written this way turns out to be much more readable, as well as reusable.

I suggest that we eschew the debugger in favour of the REPL.
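
For what it's worth, a rough Python sketch of that workflow, where the REPL session itself becomes the test (the function and examples are invented):

    def normalize(name):
        """Collapse whitespace and lowercase a name.

        These cases were tried at the REPL first, then pasted in verbatim:

        >>> normalize("  Ada   Lovelace ")
        'ada lovelace'
        >>> normalize("GRACE HOPPER")
        'grace hopper'
        """
        return " ".join(name.split()).lower()

    if __name__ == "__main__":
        import doctest
        doctest.testmod()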


This is perhaps one of the most harmful statements I’ve ever read on Hacker News. You’re not doing it wrong if you use a tool to help you understand what your program is doing. Should you follow good design patterns, like using small functions when writing your software? Yes. Should you minimize complex state? Sure. Should you use tools to help you understand what your program is doing? Absolutely. The two are not mutually exclusive.

There are a LOT of different ways to write software. Some more stateful than others. Some lower level than others. Some more asynchronous than others. Some more functional. Some more object oriented.

Your experience with the software you write in your day to day is not the same stack, and may not even be in the same universe, as what others are attempting. Writing a React app is very different from writing a game in Unity, which is very different than writing a game engine in C++, which is very different than writing large scale services that talk to clusters of computers and hardware devices, which is different than writing some low level library in Rust or Haskell or doing data science in R or Octave.

Please fucking stop with this “one right way”, “you’re doing it wrong” dogmatic bullshit. Your limited worldview does not apply to ALL of software development.

Use a fucking REPL if you want. Use a debugger if you want. Use logs and traces if you want.

Just try to write good software, try to understand your code, and ignore dogmatic and valiantly bad advice and opinions like the parent comment on Hacker News.


> This is perhaps one of the most harmful statements I’ve ever read on Hacker News.

And here I thought I was writing an innocuous comment on debugging code. Hyperbole much?

> There are a LOT of different ways to write software... Writing a React app is very different from writing a game in Unity, which is very different than writing a game engine in C++.

It's interesting that you brought up these 3 use cases. I started my career working for a shop that wrote a game engine in C++, then moved to Unity. I now write React apps for a living. Do I know you? :)

With the exception of a few corner cases, like portions of code that do rendering, and HFT software, where performance concerns trump everything else, what I have seen in that time is the following:

1) Decomposing your code into smaller functions is objectively better than NOT doing so.
2) Writing functions that don't mutate their local variables is objectively better than writing functions that DO.

By "objectively better", I mean that code written conforming to the above has the following characteristics:

    - more readable than code that does not
    - more testable than code that does not
    - more re-usable than code that does not

It's on these two assumptions that I build my case for eschewing the debugger altogether in favour of the REPL. If you write code conforming to the above, you have absolutely no need for a debugger.

Having said that, it is possible that you need a debugger to wrap your mind around code bases that you inherit, which were written poorly (or equivalently, in a very "object oriented" fashion).


I don’t disagree with your principles of good code quality. I disagree that by following those principles you no longer need to use a debugger or to develop debugging skills. Your post is harmful because it suggests to junior developers that if they must use a debugger they must be developing their software in a poor fashion. This is not true. There are many good reasons to use a debugger, including to develop software or to explore a codebase and to debug your code.


> I disagree that by following those principles you no longer need to use a debugger or to develop debugging skills.

Using a debugger (as in, software where you step through code) and developing "debugging skills" are entirely different things. I was only commenting on the former.

> Your post is harmful because it suggests to junior developers that if they must use a debugger they must be developing their software in a poor fashion.

Would it be better if I said: "if you need a debugger, you are either mucking around in poorly written code that you inherited or you're probably doing it wrong"?

> There are many good reasons to use a debugger, including to develop software or to explore a codebase and to debug your code.

I concede that exploration of a codebase is a valid use of a debugger.


Debuggers help to isolate _where_ code breaks as much as what state it was in when it broke.

Seeing that fn1 breaks on DataB is not as useful as seeing that fn1(f2(DataA)) is the actual error.
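
A contrived Python illustration of that point (the names mirror the comment above; the data is invented):

    def f2(record):
        # Silently produces bad output when the input is malformed.
        return record.get("value")  # returns None if the key is missing

    def fn1(x):
        return x * 2  # TypeError when x is None

    # The traceback points at fn1, but the real problem is the record passed to f2.
    fn1(f2({"val": 3}))

Run it under python -m pdb and type "up" at the crash: you move from fn1's frame to the composed call, where it becomes clear that f2 produced the None from a record with the wrong key.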


Yes, this is why they are useful. Generally speaking once I've found where the error is, it's real simple to find the one or two variables in scope, look at their values, and figure out the bug.

I would agree that if you're using a debugger to verify over many iterations that something isn't going wacky with state, then yeah, that's not a great sign.


It is true that you can answer any question by changing the question.

It would be nice if everyone wrote code that minimized state, or if every problem could operate with minimal state. But not all code is or should be like that.

"to do away with assignments" is not my favorite way of phrasing what you mean either; I would prefer people assign as many new variables as they like, and just avoid mutating them (i.e. making them `const` and using immutable data structures if performance allows)


> I would prefer people assign as many new variables as they like, and just avoid mutating them

While I would agree that it is much more important to have functions NOT mutating their local variables, doing away with assignments is a straitjacket which immediately forces you to decompose larger functions into smaller ones.

In fact, even in languages like Haskell, where mutation of local state is entirely done away with, I see constructs such as `where` and `let` being abused to construct inordinately long functions.

Longer functions tend to accrue even more length over a code base's life and become harder and harder to test as they do.


Eventually function decomposition becomes a proxy for assignment anyway, especially in languages where assignment and mutations are necessary. Function decomposition is not necessary for all code and can actually be harmful to one's understanding if done improperly.

I generally follow the rule of threes ("if the code appears in three places the exact same way, extract it to a separate function") as this helps me balance readability of functions with size of functions.

I understand your advice applies very neatly to functional languages. I would love to live in a functional world. I don't. So that's why my advice is like that.


That might be true for library-type code, but anything operational is going to have environmental factors that can cause induced misbehavior rather than a pure "bug" from improper implementation. Good debugging tools help induce these conditions and monitor behavior. For example, I might have a timing problem that I suspect is related to the network, and I can simulate a lossy network.


Prejudice. The same reason NASA janitors aren't glorified as the astronauts are.


Well, the astronauts could always wipe the floors themselves, or just work in a dirty building.

The janitors probably lack the skills to go to space and command a spaceship.



