What does debugging a program look like? (jvns.ca)
110 points by mfrw 89 days ago | 27 comments



> A bug is never just a mistake. It represents something bigger. An error of thinking that makes you who you are.

I like this quote from Mr. Robot. Most of the time a bug is not an error in logic, but a misunderstanding of how some component of the system works, or an unexpected outcome of the emergent behavior of basic rules. Always check your assumptions.


I can imagine many more types of bugs:

- fatigue bugs

- inattention bugs

- typo bugs

- unknown unknown bugs

- bad merge bugs

- upgrade bugs

- multiple teams or devs screwing up together bugs

I mean, you can always stretch the meaning of "assumption" to cover any of those. But realistically, to me "assumption" is more about things you consciously suppose.

After all, "unexpected outcome of the emergent behavior of basic rules" could be applied to pretty much anything. And when your definition can be applied to anything, it's not much of a definition.


I suppose most of those could/should be tested against, which would bring every bug back under "assumption bug". But yeah, I agree with you: the definition isn't wrong, it's just unhelpful.


- copy-n-paste bugs


That's more obviously an assumption failure though - the copy-paster has assumed that the top hit or similar-looking thing is going to work, without properly understanding it and modifying it as necessary.


You would be surprised by how many people don't know gdb's visual mode, tui, exists.

https://i.imgur.com/iIPW7lv.png

You can also view assembly and registers.
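
For anyone who hasn't tried it, a minimal session looks something like this (the binary name is just a placeholder); Ctrl-X A also toggles the TUI from a normal session:

    $ gdb -tui ./myprog     # start straight into TUI mode
    (gdb) layout src        # source window (the default)
    (gdb) layout asm        # disassembly window
    (gdb) layout regs       # show registers alongside source/asm
    (gdb) tui disable       # drop back to the plain prompt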


I never heard of tui (an ironic name for a gui), but I used ddd.


It’s not a GUI, it’s a text user interface. Nothing ironic about it.


I couldn't live without it! I don't program often in C or assembly, but when I learned about tui, learning C and assembly became much easier!

This, learning how to set breakpoints properly in GDB, analyzing binaries with IDA Pro, and a lot of searching via Google are what really helped me.
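
For anyone starting out, the breakpoint basics amount to a handful of commands (the file, line, and variable names here are made up):

    (gdb) break main           # stop when main is reached
    (gdb) break parse.c:42     # stop at a specific file:line
    (gdb) run                  # start the program
    (gdb) next                 # step over the current line
    (gdb) step                 # step into a function call
    (gdb) print some_var       # inspect state while stopped
    (gdb) continue             # run until the next breakpoint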


In re: "check your assumptions"

I once got to listen to a brief lecture by a genuine 10x programmer. The title was something like "Why I'm a 10x better programmer than you," and believe me, we were all eager to hear what he said, because we knew he was freakishly better than us. (I mention that because some people, reading that title, might get the wrong idea. Dude is cool, the title was tongue-in-cheek, it's all good.)

Anyhow, the main take away was this:

"Always have 100% confidence that you know exactly what the code is doing."

If anything ever happens to indicate that this is not the case (that you know exactly what the code is doing), stop immediately and do whatever is necessary to re-establish that condition (of knowing exactly what the code is doing).

And let me be clear, he meant down to the machine code.


> writing a unit test that reproduces the bug (if you can). bonus: you can add this to your test suite later if it makes sense

Why is that a "bonus"? If I manage to create a failing test, I'll definitely want it permanently in the test suite.
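
A minimal sketch of what that looks like in plain C with assert, using a hypothetical parse_port function whose off-by-one range check is the bug being reproduced:

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* hypothetical function under test: parse a TCP port, -1 on error */
    static int parse_port(const char *s)
    {
        char *end;
        long v = strtol(s, &end, 10);
        if (*s == '\0' || *end != '\0' || v < 0 || v > 65535)
            return -1;
        return (int)v;
    }

    /* written while debugging to reproduce the report; once it
       passes, it stays in the suite as a regression test */
    static void test_parse_port_range(void)
    {
        assert(parse_port("65535") == 65535); /* boundary is valid   */
        assert(parse_port("65536") == -1);    /* the reported bug    */
        assert(parse_port("abc") == -1);      /* garbage is rejected */
    }

    int main(void)
    {
        test_parse_port_range();
        puts("ok");
        return 0;
    }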


It's weird to read a whole article about debugging that doesn't discuss debuggers. "Use a debugger," it says, offhandedly, as if you already know how to do that. But the sort of person who needs this article almost certainly doesn't know how to use a debugger.

Debuggers have a mixed reputation on HN ("I never use a debugger; logging statements are way better") but IME too many developers, especially ones fresh out of school, literally have no idea how to set a breakpoint and step in/out.

For those who do know how to set a breakpoint, that's typically all they know (and so they assume that debuggers aren't very useful); they don't know how to drop ("restart") stack frames, run arbitrary code in the debugged process, set conditional breakpoints, ignore ("blackbox") files, debug remote processes, edit the in-process code without restarting, etc. etc. etc.
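
In gdb terms, a few of those look like this (the file, function, and variable names are made up):

    (gdb) break queue.c:87 if count > 100   # conditional breakpoint
    (gdb) call dump_state(&q)               # run arbitrary code in the debugged process
    (gdb) frame 3                           # select a frame further up the stack
    (gdb) return                            # force an early return from the selected frame
    (gdb) watch q.head                      # break whenever this value changes
    (gdb) target remote localhost:1234      # attach to a remote gdbserver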

Every developer working on code mostly written by other people should learn how to use their debugger's tools thoroughly.


I'd put tracing tools (dtrace, eBPF, etc.) in the same category. Many bugs only happen in production, where you can't run a debugger. They might also be triggered in a way that doesn't reproduce when stepping through code, for example race conditions or network faults.
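
For a taste of what that looks like, a bpftrace one-liner can observe a live system without stopping anything; this one counts syscalls by process name:

    bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'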


The humorous thing is Microsoft doesn't think debugging should be allowed in their latest SQL Server Management Studio release. Imagine trying to debug SQL stored procedures without it.

While I agree SQL stored procedures should be slim, I've had situations where I had to write very complex ones even though I argued against it.


Is there any SQL engine that does debugging of functions/stored-procedures well? If there is a way to do it well in postgres I'd love to know about it!


I just got started with postgres and have hated the procedure syntax so much that I've begged my manager to move to MSSQL or at least upgrade to the latest database version (we're currently on 9.3). Fortunately, the database is small enough that this really isn't a big deal. As for debuggers, my guess would be to search the web for one and try them out. Apparently pgAdmin 4 has this capability, though I've never used it; it's disabled on a default install and has to be enabled on the server. pgAdmin calls it the pldbgapi extension.
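
For anyone who wants to try that route, enabling it boils down to one statement per database, assuming the debugger plugin's shared library is already installed and loaded on the server:

    -- run in the target database; assumes the plugin_debugger
    -- library is loaded server-side (shared_preload_libraries)
    CREATE EXTENSION IF NOT EXISTS pldbgapi;

After that, pgAdmin 4 should offer a debugging option in a function's context menu.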


Sometimes the point of experimenting is to narrow down what you don't understand.

In this situation, it's useful to have one input the system works for and one input it fails for. (You can usually experiment to narrow down _why_ the failing case fails.)


I'd add the importance of understanding your code. Sometimes we do such a good job of breaking our code down into bite-sized chunks (components, etc.) that we can lose the big picture of how the app truly flows.

I've seen coders spend hours debugging problems, stepping through line-by-line in their IDE, and not finding the problem. Then when I suggest they stop, step through the creation of the bug, and just think about what algorithms and data changes are occurring at each step, they often have an epiphany and realize where the problem must lie.

By all means, use the tools at our disposal, but the most important part of debugging is simply to think.


> when I suggest they stop, step through the creation of the bug, and just think about what algorithms and data changes are occurring at each step, they often have an epiphany and realize where the problem must lie.

All the prior exploration probably often lays the foundation for that epiphany, though.


> accept that it’s probably your code’s fault

One “unpopular opinion” I have is that once you’ve written enough code, ignore this advice and assume that it’s the library’s fault very quickly into your debugging session. Some small fraction of the time you’re actually going to be right and you’ll have saved yourself some work, and the times you’re wrong I’d argue you’re not any worse off. For example, if you have a function returning an error, just assume that the framework is making a mistake somewhere: I find that I’m a lot more thorough and open-minded when reviewing other people’s code than my own, so I’ll look a lot harder for the reason it’s behaving in an unexpected way. Often this gives me insight into the internals of something new for free, along with the solution (oh, the error is being created at this check inside an internal function because I passed in NULL and I shouldn’t have).


How much code are you writing before testing the functionality/behaviour of the library in question? I've seen folks write pages and pages of code before trying to run anything, but my personal approach is to prototype the smallest working bit before integrating it into a codebase.


I write a lot of code in the debugger, to be honest.


This depends hugely on what kind of code you're doing. If you're mostly writing "glue" code between in-house libraries which may not be battle-tested, it's probably a reasonable assumption. If you're writing large amounts of application code yourself and the libraries you're using are widely used, then it's much less likely that there's something wrong with the libraries.


Again, it doesn’t really matter whether the library actually has a bug in it: you’re just tricking yourself into finding the bug by looking at code that’s not your own. However, from personal experience, well-used libraries and tools have more bugs in them than you’d think ;)


Alternatively, you can just internalize that anything running on your machines is your code. There's no difference between the piece of software you're writing and the library it's calling; you can and should step back and forth between them when debugging. Abstractions are cool, but there's great value in just getting your hands dirty and looking under the hood.


I can’t always fix “my” code, then :(


I'd generally agree with this (bumped up against more gcc, libc and kernel bugs than I'd have liked), but I find that my code has even more bugs than that. :)



