
I Don't Like Debuggers (2000) - deepaksurti
http://lwn.net/2000/0914/a/lt-debugger.php3
======
sriram_malhar
I loved what Rob Pike had to say about what he learnt from Ken Thompson:

```A year or two after I'd joined the Labs, I was pair programming with Ken
Thompson on an on-the-fly compiler for a little interactive graphics language
designed by Gerard Holzmann. I was the faster typist, so I was at the keyboard
and Ken was standing behind me as we programmed. We were working fast, and
things broke, often visibly—it was a graphics language, after all. When
something went wrong, I'd reflexively start to dig in to the problem,
examining stack traces, sticking in print statements, invoking a debugger, and
so on. But Ken would just stand and think, ignoring me and the code we'd just
written. After a while I noticed a pattern: Ken would often understand the
problem before I would, and would suddenly announce, "I know what's wrong." He
was usually correct. I realized that Ken was building a mental model of the
code and when something broke it was an error in the model. By thinking about
_how_ that problem could happen, he'd intuit where the model was wrong or
where our code must not be satisfying the model.

Ken taught me that thinking before debugging is extremely important. If you
dive into the bug, you tend to fix the local issue in the code, but if you
think about the bug first, how the bug came to be, you often find and correct
a higher-level problem in the code that will improve the design and prevent
further bugs.

I recognize this is largely a matter of style. Some people insist on line-by-
line tool-driven debugging for everything. But I now believe that
thinking—without looking at the code—is the best debugging tool of all,
because it leads to better software.```

[http://www.informit.com/articles/article.aspx?p=1941206](http://www.informit.com/articles/article.aspx?p=1941206)

~~~
writeslowly
This makes sense to me for certain types of codebases. I would almost never
bother stepping through code I wrote by myself for example.

But when I’m working on large software projects or with certain external
libraries, I tend to encounter bugs or design issues where I realize I made
the wrong assumptions about how someone else’s code works in the first place.
A good debugger is very useful in those cases, when the problem changes from
debugging your own logic to reverse engineering someone else’s.

~~~
toastking
This is why I think debuggers are necessary. Without being able to see a full
stack frame of information about variables it can be extremely difficult to
debug when using other people's code. So many errors boil down to assumptions
about what is in a value.

~~~
lugg
Why are you giving your functions uncertain data?

The first point at which your code sees uncertain data is the point where it
needs to clarify what it has.

The only way you get the errors you're talking about is if you ignore the
above practice.

Garbage in. Garbage out.

For code I write, there is usually only one option for what is in a variable:
the type of the data I put in there. This is true for dynamic languages as
much as for static ones.

In some situations I will also allow a null value. All that means is that
there was no data, and no default is wanted. Usually, I want a default.

Closing off all entry points for uncertain data you need to make assumptions
about is the first port of call when dealing with other people's or legacy
code. It's the way to reason about code without a debugger.

If you can't reason about code without a big question mark above every piece
of data you need a debugger.
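The boundary-validation idea above can be sketched in Python (the `parse_port`
helper and its default are invented for illustration, not anyone's actual
code): uncertain data is normalized exactly once, at the entry point, with a
default applied where no data means "use the default", so downstream code
never has to guess what a variable holds.

```python
def parse_port(raw, default=8080):
    """Normalize untrusted input once, at the boundary.

    After this returns, callers can rely on having a valid int in range;
    no downstream code needs to re-check or make assumptions.
    """
    if raw is None or raw == "":
        return default  # no data, and a default is wanted
    port = int(raw)  # raises ValueError on garbage: fail at the boundary
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

# Downstream code can now trust its input completely.
print(parse_port("8443"))  # 8443
print(parse_port(None))    # 8080
```

Everything past the boundary then deals only in known-good values, which is
what makes the code reasonable without a debugger.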

~~~
okcando
This amounts to saying that you don't need a debugger if you don't make
mistakes.

If your mental model of how the code you're interfacing with works is wrong,
then that practice won't help you. Your data validation will error out, and
you may find it was applied too strictly, too early: now you have not only
logic based on flawed assumptions but check code based on them as well.

Understanding the codebase you're programming against is a skill that can be
improved, but hoping to guard against every misunderstanding is probably
unrealistic. Given that, following a trace may help you identify
disagreements with your model more quickly.

~~~
lugg
Far from it.

I am saying you don't need a debugger in the scenario above because the only
time you need it in that case is if your inputs can't be trusted.

If you control the domain, the only way you can't trust your input is if you
fucked up.

The answer to that isn't "oh now I need a debugger" the answer is to go clean
your code.

While I'm here, I'd also like to point out this isn't an "I'm good and you're
shit" thing I'm describing; this is my day job. I make lazy crap code to get
things done, then I go in to modify it and find I can't reason about it, so I
go and clean up the mess.

Maybe I made it, maybe I didn't; it's beside the point. You have a mess,
clean that first, then reach for a debugger.

Hell, reach for a debugger to help clean up if you need to, just don't sit
there and tell me you need a debugger because code inherently needs to make
assumptions about what is in a variable. It doesn't. If it does, it's a code
smell.

> So many errors boil down to assumptions about what is in a value.

Only avoidable errors. They should not be dictating your tools or your
language.

------
pierrebai
This is a storytelling technique not specific to writing code. An advanced
practitioner will trash a technique every beginner must learn, in order to
signal a higher-level mastery. It's a discourse that happens in everything:
climbing, skiing, fishing, you name it.

It makes for an impressive story and establishes the speaker as a jedi master.

It's also mostly bollocks. Everyone has epiphanies. I sometimes work on a
problem for a while, go to the loo and have the solution come up. It's the
well-known phenomenon of stepping away for a while and letting the brain
unfocus from the details.

Yes, it works when the problem was something fundamental or architectural. It
rarely does when the problem was using the wrong variable, or a test that was
inverted. These are easier to find with traces or a debugger.

But dissing the low-level practices is so much better at getting that alpha
status...

~~~
_bxg1
There's definitely a self-serving aspect here, but importantly he's only
speaking to Linux Kernel development, not development in general. My takeaway
is that he only wants Jedi Masters working on his kernel, not that everyone
else is completely worthless.

------
pjc50
This was pretty much the high point of "coder machismo", along with Eric
Raymond and his power tool analogies. Presumably this is also why the kernel
doesn't have a test suite. Interesting to compare with the other paragon of
software development processes, SQLite, with its vast test suite. It doesn't
have a "debugger" per se, but it does have some tips:
[https://www.sqlite.org/debugging.html](https://www.sqlite.org/debugging.html)

Always difficult to tell what the cost of this was. What features did we not
get because developers were put off? What problems did it prevent? How did
this affect downstream Linux ecosystem development and culture?

Re: use of debuggers, I've found them most useful when dealing with other
people's code; you can short-circuit a _huge_ amount of "where is the code
that does this" or "how did I get here" by just setting a breakpoint and
getting a stacktrace.

~~~
cosinetau
I feel that I'm reading that tone in Linus's post, too. But after reading
these comments, it also appears that some people simply have different use
cases for these tools aside from stepping through logic.

And in other cases, the debugging statements have to be present in the code
before you can even step through the program, and their concerns revolve
around managing code when tools insert statements into it like that.

But it would be nicer if they wrote their stuff like their use cases weren't
the only ones, or the most important. What features have we lost because they
couldn't think past themselves?

------
simula67
He seems to be arguing that in order to weed out poor engineers he prefers to
keep things complicated. As hard as it would have been for me to accept this a
few years ago, I think there might be some truth to this argument.

For example, I would think that the best Java engineers are probably much
more productive than the best C++ engineers. However, the market is probably
so full of poor Java engineers that the average Java engineer is a lot worse
than the average C++ engineer.

C++ is such a complicated language that if you are a professional C++ engineer
and still employed, you must be fairly skilled. So, for example if you start a
tech company or an open source project it might be better to choose C++ as the
implementation language even though Java might make you more productive.

Funnily enough, Linus does not apply this logic to C vs C++ debate and has
come to the _opposite_ conclusion :
[http://harmful.cat-v.org/software/c++/linus](http://harmful.cat-v.org/software/c++/linus)

~~~
headmelted
Not a C++ engineer here, but I would _think_ that the vast majority of C++
gigs around are legacy systems - in which case most of the work would probably
be bug triage, that likely doesn't call for more than "fix my immediate
problem" thinking.

Obviously there's still a lot of greenfield C++ work out there too, but I
would expect that it's no longer the lion's share (C# is very much getting to
this stage of its life too).

I would guess that the top-level C++ folks have by-and-large moved on to those
greenfield projects naturally as it's where their skills are most required,
and where the most money is available to pay for them (e.g. fintech).

~~~
carlmr
Embedded is huge for C/C++ and there are still many greenfield projects.
Embedded (real-time, safety critical) usually requires GCless languages, which
already rules out the vast majority of super productive new languages.

I wish we moved to Rust (or even Ada), because most of the bugs I see
occurring couldn't happen there. But embedded compilers are usually lagging a
bit.

------
dmitryminkovsky
What a line:

> And quite frankly, I don't care. I don't think kernel development should be
> "easy". I do not condone single-stepping through code to find the bug.

Reminds me of this quote from the introduction to Log4j docs [0]:

> As Brian W. Kernighan and Rob Pike put it in their truly excellent book "The
> Practice of Programming":

>> As personal choice, we tend not to use debuggers beyond getting a stack
>> trace or the value of a variable or two. One reason is that it is easy to
>> get lost in details of complicated data structures and control flow; we
>> find stepping through a program less productive than thinking harder and
>> adding output statements and self-checking code at critical places.
>> Clicking over statements takes longer than scanning the output of
>> judiciously-placed displays. It takes less time to decide where to put
>> print statements than to single-step to the critical section of code, even
>> assuming we know where that is. More important, debugging statements stay
>> with the program; debugging sessions are transient.

The above quote made me reconsider my intense debugger usage that I had fallen
into at the time. On one hand you can't take all your cues from authority, but
on the other hand, what K&P said sounded like it might make sense. So I tried
using the debugger less and logging more, and over time I think it's made me a
better programmer. I think the key statement is:

> we find stepping through a program less productive than thinking harder and
> adding output statements and self-checking code at critical places.

which Linus also echoes:

> I happen to believe that not having a kernel debugger forces people to think
> about their problem on a different level than with a debugger. I think that
> without a debugger, you don't get into that mindset where you know how it
> behaves, and then you fix it from there. Without a debugger, you tend to
> think about problems another way. You want to understand things on a
> different _level_.

[0]:
[https://logging.apache.org/log4j/2.x/manual/index.html](https://logging.apache.org/log4j/2.x/manual/index.html)

~~~
ams6110
I've never really used debuggers routinely. They are a last resort for me. I
use print statements and logs for the most part.

~~~
mruts
I think debuggers are actually a lot less useful than people think. Just by
thinking, using a REPL, or putting in print statements you can solve probably
99% of all bugs.

Debuggers are useful when you really have no fucking idea what's going on.

~~~
tom_
Or instead of putting in print statements, you could use a debugger. You don't
even need to know which print statements you need, because when the program is
stopped you can look at whatever you like.

Thinking is still required, but I don't recommend relying on it, since it's
typically what gets you into this whole mess in the first place. At the very
least, don't rely on thinking to guess what the computer is doing, when you
can use the debugger to find out for certain.

------
Scuds
```
Apparently, if you follow the arguments, not having a kernel debugger leads
to various maladies:

- you crash when something goes wrong, and you fsck and it takes forever and
  you get frustrated.
- people have given up on Linux kernel programming because it's too hard and
  too time-consuming
- it takes longer to create new features.

And nobody has explained to me why these are _bad_ things.
```

Uhh, you're dealing with a largely voluntary unpaid workforce that requires a
lot of effort to become proficient, and whose work by definition is not user
facing. Any barrier to entry you throw up is going to decrease the viability
of the project and have knock-on effects years into the future.

~~~
theon144
>you're dealing with a largely voluntary unpaid workforce that requires a lot
of effort to become proficient

Are you, though?

"While many people tend to think of open source projects as being developed by
passionate volunteers, the Linux kernel is mostly developed by people who are
paid by their employers to contribute." \-
[https://thenewstack.io/contributes-linux-
kernel/](https://thenewstack.io/contributes-linux-kernel/)

And even if this were true, Linux is no ordinary FOSS project at risk of
losing "viability"; it has what I imagine is an absolutely minimal number of
casual contributors who would be turned off by this sort of supposed
"barrier to entry".

~~~
steveklabnik
That article is 17 years younger than the OP. I agree with you today, but was
that just as true back then? I’m not sure.

~~~
theon144
Fair point, it really may have been a different situation in 2000, I didn't
realize that.

------
cbanek
I never knew that Linus didn't like debuggers, but I completely agree, though
for slightly different reasons. I haven't stepped through something in a
debugger in probably 10 years.

I don't do kernel development, most of my stuff is distributed systems /
networking / devops type stuff. In this realm I feel like debuggers are almost
an antipattern because:

1. Distributed systems with a lot of nodes mean you might have to attach
debuggers to everything, and that's just too hard to control.

2. Anything with networking, like web services, distributed systems,
protocols, etc. will likely time-out the connections while you step through
line by line. Once your TCP socket gets closed on you from below or the other
side timing out waiting for a response, then you have to set up your debugging
environment all over again.

3. Most of the real hard bugs I have to deal with usually involve timing or
race conditions. This means that I might not be able to reproduce them in a
debugger, and sometimes even the act of attaching a debugger makes the timing
different enough to not repro the bug.

4. In production of these systems, most times you can't just hook up a
debugger and basically shut down the system while you try to diagnose a bug. I
collect what info I can from the logs and move on. In this situation, you need
great logs. Great/useful logs are built over time, by adding log statements to
the right places. So for every bug I diagnose, I keep the logging statements
in I used for debugging (although I usually don't leave them emitting
messages, because I have log levels).

~~~
throwaway808080
I worked on Browser devtools. The #1 used tool was console. Nothing beats
being able to put some console.log and see its output. Console.warn is even
better since it captures the stacktrace.

Logs give a timeline of change.

I had this idea of console.snap but never got around to it. The idea is that
it would not only capture the stacktrace, but a shallow copy of scopes at
every function. You can query the snap to see how a variable changed, or find
what set of variables caused something else to change later on.

I feel like this is the holy grail of debugging. Smarter logs that you can
reason with and do time travel analysis.

I never got around to doing it, but now, working at an analytics company, I
see it being very valuable, as it would save so much guesswork.
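The "snap" idea above could be sketched in Python (every name here is
invented for illustration; the actual proposal was for browser devtools): a
decorator records a shallow copy of each call's arguments along with the
call stack, building a queryable timeline of how values changed.

```python
import copy
import functools
import traceback

SNAPS = []  # a timeline of (function name, shallow-copied args, stack) records

def snap(fn):
    """Record a shallow copy of every call's arguments, plus the call stack."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        SNAPS.append({
            "fn": fn.__name__,
            "args": copy.copy(args),      # shallow copy, as suggested above
            "kwargs": copy.copy(kwargs),
            "stack": traceback.format_stack(limit=5),
        })
        return fn(*args, **kwargs)
    return wrapper

@snap
def set_total(total):
    return total * 2

set_total(10)
set_total(25)

# Query the timeline: how did `total` change across calls?
print([s["args"][0] for s in SNAPS if s["fn"] == "set_total"])  # [10, 25]
```

The shallow copy keeps the cost low while still letting you replay how a
variable evolved, which is the "time travel analysis" described above.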

~~~
cbanek
I totally agree - I love the ability in Python to print out the stack trace;
I use it all the time! I would love it if that also included variables
(although this is tricky, because you have to worry about cycles between the
variables if you are trying to print everything out. Maybe a list of
variables to print out with the stack trace would be enough?)
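For what it's worth, the Python standard library gets close to this; here is
a sketch (the helper name is made up) that walks the current frames and
prints each frame's local variables next to the trace. Using `repr` means
built-in containers guard against cyclic recursion on their own, printing
`[...]` instead of looping forever.

```python
import inspect

def dump_stack_with_locals():
    """Print the current call stack with each frame's local variables."""
    for frame_info in inspect.stack()[1:3]:  # skip this helper; show callers
        print(f"{frame_info.filename}:{frame_info.lineno} "
              f"in {frame_info.function}")
        for name, value in frame_info.frame.f_locals.items():
            # repr() keeps cyclic built-in containers from recursing forever
            print(f"    {name} = {value!r}")

def inner(x):
    y = x + 1
    dump_stack_with_locals()
    return y

def outer():
    item = {"id": 7}   # visible in outer's frame when inner is called
    return inner(3)

outer()
```

Running this prints the `inner` and `outer` frames with `x`, `y`, and `item`
alongside the file/line information, i.e. a stack trace with variables.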

~~~
throwaway808080
That’s why I suggested a shallow copy

------
scandox
> I'm a bastard. I have absolutely no clue why people can ever think
> otherwise. Yet they do. People think I'm a nice guy, and the fact is that
> I'm a scheming, conniving bastard who doesn't care for any hurt feelings or
> lost hours of work if it just results in what I consider to be a better
> system. And I'm not just saying that. I'm really not a very nice person. I
> can say "I don't care" with a straight face, and really mean it.

What's interesting to me is that a man could reach the age of 31 and still be
talking like this. It's teenage language.

At the same time he's also so clearly in command of his domain.

I think expertise and power in one domain can really stunt your development in
others.

~~~
pjc50
> teenage language.

It's machismo. We don't hear very much of this any more from adult
professionals in Western desktop jobs, but it's definitely out there. Even in
the tweets of certain high controversy members of government.

~~~
peterwwillis
Well, not to be too pedantic, but it's more career-focused hypermasculinity.
Machismo is an Iberian-originating concept that isn't solely about negative
properties. When immature people don't get taught what machismo or other
gender-focused value systems are actually about, they end up with just the
negative values. But you can have someone who's "macho" or "manly" regarding
their career and also not a dick.

------
saagarjha
> Without a debugger, you basically have to go the next step: understand what
> the program does. Not just that particular line.

On the contrary, I find that a lack of debuggers makes me lazy when
performing this step: I’ll go “yeah, these lines of code remove the item from
the list”, but then, having no way to check that, I get burned later when it
turns out the code actually did something subtly different from what I
expected. I think he is right, though, that debuggers are not always the
right tool for the job; occasionally they cannot produce relevant output
(there might be too much to look at, or the part you are looking at is
difficult to reproduce), so having these “big picture” skills is quite
useful.

------
PaulHoule
For kernel work I am inclined to agree with Linus.

Every kernel problem I have ever faced down was an intermittent problem
involving some sort of race condition. A debugger doesn't help you there
because:

1. Timings might be changed so you can't (possibly) reproduce the bug in the
debugger, or

2. The race condition involves some rare events that might turn up every week
on a server but that could take years to find in the debugger, so you can't
(practically) reproduce the bug.

In applications work (with say Python or Java), however, I am strongly against
debug printf's and any temporary changes to the source code which are
motivated by the needs of debugging.

Typically you wind up with the code (somewhere) in a state having some
temporary debug-related changes and an in-progress fix. It might be checked in
and pushed because that is how you get it up on the test server.

These temporary changes have a way of becoming permanent, thus somebody tells
your company that there is something at the bottom of the home page that
shouldn't be there and you realize that it had been there for two years
without anyone noticing.

Yes you can push back against that with code reviews and process, but wouldn't
you rather use that spray can of management on real problems instead of
avoidable problems?

Thus the debugger is great if it means you can _eliminate_ the use of
temporary debug code changes (e.g. not the in-progress fix).

I also like developing unit tests in the debugger in Java and C#. You edit
this, you edit that, you can look at data structures at any point in time...
It's a lot like using the REPL in Python.

~~~
m0llusk
What is being coded makes a big difference in development cycles. Most of what
I am currently working on are small services that log everything they do
anyway so adding some debugging output to the mix doesn't change much.

Where I have seen debuggers being useful at every point in the stack from
kernel to compiler to loader and beyond is with ports. If all you are doing is
running some existing well understood code on different hardware then being
able to poke at the internals when things go wrong can often help find the
cause quickly.

------
apo
> I do not condone single-stepping through code to find the bug.

Hard to take this post seriously because he offers no clue as to how he
actually finds the bug (being "careful" is ludicrously vague). That would be
an interesting read.

~~~
nickjj
I can't speak for Linus but I haven't used a debugger since VB6 and most of my
debugging process nowadays (web development) is done by print statements.

If you're getting unexpected output without syntax errors then it's due to
something along the way being set to the wrong value. Print statements are
very helpful for uncovering that.

As soon as you see where things are going wrong, you know exactly what needs
to be changed to fix it. With a bunch of print statements you can see the
state of the system in multiple spots at once.

Plus with print statements you have the option of keeping them around all the
time but tucked behind a DEBUG log level. In development, you can often take
the guess work out of things if you're always in a position to see the state
of your app at various steps. It can speed up development by preventing bugs /
unexpected output before it happens.
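The "tucked behind a DEBUG log level" approach described above looks like
this with Python's standard `logging` module (the function and messages are
invented examples): the statements stay in the code permanently and are
silent until you turn the level up.

```python
import logging

logging.basicConfig(level=logging.INFO)  # flip to logging.DEBUG in development
log = logging.getLogger(__name__)

def apply_discount(price, discount):
    # These stay in the code forever, silent unless DEBUG is enabled.
    log.debug("apply_discount called: price=%r discount=%r", price, discount)
    total = price * (1 - discount)
    log.debug("computed total=%r", total)
    return total

# Silent at INFO level; with DEBUG enabled you see the state of the
# system at each step, with no debugger attached.
print(apply_discount(100.0, 0.2))  # 80.0
```

Because the `%r` formatting is lazy, the arguments aren't even stringified
when the DEBUG level is off, so leaving the statements in costs almost
nothing.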

~~~
apo
> Print statements are very helpful for uncovering that.

Yes, I used to do that all the time. It was excruciating, only I didn't
realize it.

Then I started actually learning how to use a debugger. Efficiency in
uncovering bugs went through the roof.

~~~
nickjj
I don't find it bad at all. I see print / debug statements almost like tests.
They are always there, whereas debugging with a debugger is more ad hoc.

Plus, debugging is way less useful for web development where you have a bunch
of different levels of technology at play. Also, most editors can't even be
configured to work nicely with Docker.

------
zvrba
I'm amazed at the amount of worshipping (in the comments) of luddites when it
comes to debuggers. Citing Linus, the Log4j developers ("use logging
instead"), Kernighan and Pike... The latter two especially learned
programming on, and used, computers that were orders of magnitude less
capable than today's, and which could therefore only run orders of magnitude
less complex programs, programs more readily understandable by humans.

It's always the same two polarities: understanding the program's bigger
picture vs focusing on tiny details. For me, the debugger is THE tool for
understanding the bigger picture!

Example of a concrete problem that I solved with a debugger today: I was
populating a list in the UI, yet the elements were constantly disappearing.
Why were they disappearing?

It's a JavaFX app where the UI elements are bound to observable lists. So I
put a change listener on the list and a breakpoint inside it. The breakpoint
was hit a couple of times, and one of those times I saw a piece of my code in
a remote location [1] that was clearing the list! Root cause found, problem
solved.

EDIT: could I have logged the stack trace rather than using a breakpoint? Yes,
but the stack trace printout is far less actionable (harder to pinpoint
something interesting, can't inspect variables, etc.) than an interactive
stack trace in the IDE while the program is suspended.

[1] Why remote location? The UI is the frontend that communicates with a
backend. The UI is rather strongly separated from the backend bridge; the
bridge also runs in separate threads. So the bridge takes care of the back-
end, receives messages and interprets them and updates the elements (data
model) observable from the UI.

So my impression is that people who talk down debuggers simply haven't learned
how to use them effectively.
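Outside JavaFX, the same hunt can be approximated in plain Python; here is a
sketch (all names invented) of a list wrapper that records the call stack of
every `clear()`, pointing straight at the "remote location" that empties it.

```python
import traceback

class TracedList(list):
    """A list that records the call stack of every clear(), so the remote
    code that empties it can be identified after the fact."""
    def __init__(self, *args):
        super().__init__(*args)
        self.clear_sites = []

    def clear(self):
        # Capture who is clearing us, then do the real clear.
        self.clear_sites.append(traceback.format_stack(limit=5))
        super().clear()

items = TracedList(["a", "b"])

def remote_backend_bridge():
    items.clear()  # the surprise mutation, far from the UI code

remote_backend_bridge()
print(len(items), len(items.clear_sites))  # 0 1
# items.clear_sites[0] names remote_backend_bridge as the culprit.
```

This is the logged-stack-trace alternative mentioned in the EDIT above; the
interactive breakpoint version additionally lets you inspect variables while
suspended.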

~~~
throwaway808080
Using debuggers effectively takes skill. When I was younger I would ask other
people “hey how does this work?” Now I just debug it and go deep on
interesting function to see how deep the rabbit hole goes. It’s helped me pick
up things pretty fast.

------
ToFab123
Admittedly, I use a debugger, but in many situations I choose not to use it
when facing a bug. For some errors I want to be able to see what the cause of
the error is by looking in the logs. Why? Because attaching a debugger in
production is not always possible, so I want it in my logfiles. In
development I add more and more info to the trace log until I can fix the bug
only by looking at the logs. This has saved me countless times when facing a
problem in production.

------
Const-me
I think the OP is rationalizing the poor state of Linux debuggers.

Making a good GUI is hard, and goes against the terminal-centric Linux
culture. A debugger needs IDE integration, and integration again goes against
the culture; see the "doing one job well" mantra.

When I program for Linux, I don't like debuggers either. I use gdb sometimes,
but it's not too useful.

When I program for Windows, the Visual Studio debugger is awesome.

------
FrancisStokes
Reading these old Linus exchanges always has me in two minds. On the one hand,
it's great to peer into the mind of a genius and see the thought process, but
on the other - and in light of his stepping back after recognising his own
highly unprofessional behaviour over the years - I wonder if we should repost
and idolise these emails.

Are people - especially young and impressionable developers - able to separate
the two sides of the coin, or does it serve to plant a seed of normalisation
for this kind of communication?

~~~
enriquto
> Are people - especially young and impressionable developers - able to
> separate the two sides of the coin

As a young coder, I remember reading these old Linus posts as an exhilarating
breath of fresh air. Everyone around me was all-in on "object orientation"
and similar bullshit. When I say everyone I mean it: there was not a single
person who suggested even a hint of hypothetical disagreement with software
engineering mantra. Thus, it was refreshing to read these posts and realize
that there existed, after all, another side of the coin.

We should not be afraid that young and impressionable people see "all sides of
the coin". We should be afraid that they only get to see one side.

~~~
vkazanov
I don't really know if 33 is young or old but I really cannot agree more about
the mantra thing. And it's not only the OOP craze that completely ate the
industry at some point..! NoSQL-related discussions, all UIs and CLIs, the big
enterprise influences vs open source approaches, etc.

~~~
Cthulhu_
It's easy to echo who you perceive as smart people; it's harder to have your
own opinions, and be at a level where you can defend them in a satisfactory
way.

Somehow, the Go language is doing it, eschewing traditional OOP in favor of a
get shit done fast and a little dirty approach. It's not even because of the
charm of an individual either.

Then again, before Go there was JS as the big opponent to rigid OOP. I still
don't understand why they felt like they had to add half-baked classes / OOP
to it.

~~~
jes5199
yeah but people _hate_ writing Go. It’s like using the dullest possible knife.
It doesn’t have objects, it doesn’t have generics (so you can’t really do
Functional Programming) and it doesn’t really let you do C-style machine work
either. Its basic assumption is that you’ll have an endless supply of human
beings to implement everything, because the computer isn’t going to let you
abstract anything

~~~
sk0g
> people hate writing Go

Bit of a blanket statement. I enjoy it much more than the little Node I
investigated for work.

The lack of functional programming though, yeah. I feel like it might be due
to GC optimisations as well, however. From what I understand, the reason the
GC is so fast is because such a style of programming is heavily discouraged.

~~~
marcosdumay
> I enjoy it much more than the little Node I investigated for work.

I'm another one who prefers putting a bullet in my foot to putting it in my
head.

------
bitwize
Debuggers are to debugging what IDEs are to programming: they make you so much
more productive that as a professional programmer, forfeiting that
productivity is literally leaving money on the table. Something you can afford
to do if the project is a hobby project (which Linux still more or less is to
Linus), but when you have actual customers -- no way.

The quality and integration of the debugger alone made Visual Studio head and
shoulders above most development tools for other environments.

------
bsaul
Side remark: has anyone ever talked to Linus the same way he talked to
others, following a bug Linus coded himself? Like, calling something he did a
piece of trash, or something similar.

~~~
ElBarto
I've known outstanding guru-level engineers that were that blunt when I was a
student.

You need to know the person. In my experience most of them are very nice guys.
It's just that when they think something is not that great they will just say
"it's shit".

I was told many times my code was shit by these guys at the time and I didn't
take it as a personal insult but as constructive feedback. They liked that I
took it that way and we ended up good friends.

People should see through the words before being offended.

~~~
C4stor
How is saying "it's shit" "constructive feedback"? What is the constructive
part?

I'm not offended when developers say that, I just observe that they are unable
to build and express a reasoning, and that worries me about their abilities as
developers.

~~~
geezerjay
> How is saying "it's shit" "constructive feedback"?

It means the contribution isn't up to standard and therefore can't be
integrated, and it informs the contributor that he needs to improve his work
in specific aspects in order to avoid similar problems.

> What is the constructive part ?

You're focusing too much on irrelevant aspects of the communication (i.e.,
which word was used and how) and in the process ignoring (intentionally or
not) what was actually said. Typically Linus includes specific comments on
the problems present in, or caused by, a contribution, whether in the code or
the development process. At the very least it's easy to understand that this
sort of process works through negative feedback.

------
pixelbeat__
Some tend to use debuggers for development details, which I agree isn't great,
as one tends to focus on details rather than the overall structure.

But for actual debugging, especially for code you're not familiar with, they
can save huge amounts of time. Some more notes on this at:

[http://www.pixelbeat.org/programming/debugger/](http://www.pixelbeat.org/programming/debugger/)

------
Insanity
I kind of agree on the point about thinking of bugs at a different level. I'll
first try to reason about what went wrong and look through the code based on
the stacktrace.

But the debugger is nice for a certain set of problems. If you have a (Java)
lambda expression that is going fubar, it's nice to attach a debugger and just
analyze the lambda flow to see how each statement affects the data.

On one of the projects I worked on, we'd often repeat that we had to look for
the _root_ causes of mistakes. That's done without a debugger; it might just
be a conceptual error. E.g., if we had a NullPointerException, we could just
add a simple null check. But it was more interesting to reason about whether
it should even be possible for that variable to be _null_ anywhere in the
code, so we could rewrite it in a way that it can never be null and avoid the
check.
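The redesign described above, making it impossible for the variable to be null
rather than checking for null everywhere, can be sketched in Java like this
(the class and field names are hypothetical, not from the project in
question):

```java
import java.util.Objects;

// Hypothetical sketch: instead of a nullable field plus null checks
// scattered through the code, require the value once, at construction time.
final class Order {
    private final String customerId;  // can never be null after construction

    Order(String customerId) {
        // Fail fast at the boundary instead of checking at every use site.
        this.customerId = Objects.requireNonNull(customerId, "customerId");
    }

    String customerId() {
        return customerId;  // callers need no null check here
    }
}
```

Constructing `new Order(null)` now fails immediately at the one place the
invariant is established, instead of as a NullPointerException somewhere deep
in the call graph.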

Anyway, you probably should use a debugger, but it's just one tool of many.

------
rbanffy
What would be really interesting to have would be an extensive set of
automated tests.

A lot of the code can run in the build environment, supported by dummy
components, without needing to boot a VM. A lot would need that VM, though,
and the VM would need to be able to emulate _a lot_ of hardware for the tests
to be comprehensive. OTOH, this emulation code (or the config files that would
set it up) would serve as always-verifiable definitions of how the hardware is
supposed to work. And the dummies and emulation could be automatically
validated against actual live hardware when it's available.

------
burtonator
I've found that thinking and reasoning about units and following TDD is much
more important than a debugger.

The debugger generally sucks because by the time you have to use it, the whole
system isn't functioning properly.

If you can just find out WHICH component broke you can rip out that component
and build a test for it or expand upon an existing test.

It usually turns out some input to a function fell outside the range and
domain the function was written for, and adding another test for it and then
fixing the code makes the problem go away.
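That workflow, turning the offending input into a permanent test and then
fixing the function, might look like this minimal sketch (the function and its
domain are hypothetical; a real project would use JUnit rather than plain
assertions):

```java
// Hypothetical sketch: a function whose implicit domain (quantity > 0) was
// violated by a caller. The fix makes the domain explicit, and a regression
// test pins the bad input down so it can't break silently again.
final class Pricing {
    static double unitPrice(double total, int quantity) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        return total / quantity;
    }
}
```

The regression test then asserts both the normal case
(`unitPrice(10.0, 2) == 5.0`) and that `quantity == 0` is rejected, so this
class of breakage is caught by the test suite instead of by a debugger
session.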

The additional benefit here is that not only do you not need to use the
debugger, but since it's now a unit test in your code, things can't ever break
that way again.

A lot of people joke about how they don't write unit tests because they're
trying to save time.

In my experience, in any significantly complicated code base something massive
will break and knowing exactly WHEN it broke and in what commit can
dramatically lower the time needed to fix the problem.

------
mnm1
Maybe this applies to kernel development, but for web development with OOP
languages like JavaScript and PHP, I find a debugger to be incredibly useful.
Our apps have millions of lines of code and sometimes I watch other developers
struggle with print statements and other such techniques and never get
anywhere, whereas with a debugger I'm able to find and fix the issue within
minutes. They are not worse programmers than me, but they sure are inefficient
and frankly, using the wrong tool for the job. For functional languages like
Clojure, the debugger is almost useless, as they have a REPL-based interface
and debugging is mainly playing with the REPL and isolating the issue. Still,
I wouldn't give up my debugger for OOP-based languages for anything. I often
use it during development just to check that the code is running as intended,
before bugs even creep in.

------
jes5199
you can tell this is dated because “People think I’m a nice guy” is definitely
not how people talk about the man lately

------
charlesism
If you know the symptoms of NPD, this stuff is cringe-worthy...

    
    
        I'm a bastard. I have absolutely no clue why people 
        can ever think otherwise. Yet they do. People think I'm 
        a nice guy, and the fact is that I'm a scheming, conniving 
        bastard who doesn't care for ...
    

On the surface, the message is "Isn't it impressive how dominant I am?" but
the unintended message is "I have a personality disorder!"

~~~
Cthulhu_
Oh, he's aware of it. He just doesn't care, because it works for him and the
software he is ultimately responsible for.

I mean compare Linux to the Java ecosystem, which is similar in scale
(billions of installs / dependencies) but run by commission. How is that
working out? Less hurt feelings I guess but it's not making any significant
advances either and has been stagnant for years.

~~~
WalterGR
_it works for him and the software he is the end responsible for._

But does it work for the software he’s responsible for, or does the software
he’s responsible for work _in spite of_ his behavior?

 _How is that working out? Less hurt feelings I guess but it's not making any
significant advances either and has been stagnant for years._

Commissions that don’t hurt people’s feelings make something stagnant? Could
there be other forces at work to explain Java’s stagnation, other than a
commission that doesn’t hurt people’s feelings enough?

~~~
ben509
> Could there be other forces at work to explain Java’s stagnation, other than
> a [committee] that doesn’t hurt people’s feelings enough?

Not a chance. All of Java's roadmap discussion revolves around dealing with
legacy issues, and they even had an entire major release just trying to
refactor the language into modules because the core was so bloated.

You can't blame lack of funding, or that it's a niche product or anything like
that.

You can't blame lack of talent, because there were a ton of very smart people
working on it.

You can't blame novel ideas, because very little in Java was unprecedented.

Java's legacy traces inexorably back to:

a. features that no one could say no to

b. badly implemented features

------
kstenerud
> And quite frankly, I don't care. I don't think kernel development should be
> "easy".

Ah yes, the tried-and-true arrogant neckbeard stance that so defined unix
interfaces for decades: If it was hard to write, it should be hard to
understand.

But we've long since established that good UX is a good thing. In fact, UX has
been front and center on HN for so long that I'm rather surprised people still
cling to the old ways.

~~~
svnpenn
I find this dismissive.

We are not talking about some throwaway web app. We are talking about the
Linux kernel: something that has been used for decades and currently runs on
millions of devices every day, from the computers that power spacecraft down
to mobile devices.

UX certainly has its place, however I think changes and additions to the
kernel should be carefully considered. If excluding a debugger excludes some
careless programmers then maybe that is a good thing.

------
jopsen
It makes a lot of sense to prefer to think about the real problem.

Years ago at school, my group wrote a ray tracer from scratch. After a few
months we noticed that the vector class had an inverted scalar operation. The
program worked because we had inverted the entire vector space; we had just
fixed the code until it worked.

After fixing the vector class, most of the other formulas in the program
started to look more reasonable :)
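A tiny sketch of how that kind of bug can hide (hypothetical Java, not the
poster's actual code): if `scale` flips the scalar's sign, code built on top
of it can still render correctly by flipping other signs to compensate, which
is exactly an inverted vector space.

```java
final class Vec3 {
    final double x, y, z;
    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    Vec3 add(Vec3 o) { return new Vec3(x + o.x, y + o.y, z + o.z); }

    // Buggy version: scales by -s instead of s.
    Vec3 scaleBuggy(double s) { return new Vec3(-s * x, -s * y, -s * z); }

    // Correct version, after the fix.
    Vec3 scale(double s) { return new Vec3(s * x, s * y, s * z); }
}

// With the buggy scale, v.add(n.scaleBuggy(t)) equals the mathematically
// intended v.add(n.scale(-t)), so every call site quietly compensates by
// using the opposite sign from the one the formula on paper has.
```

Fixing `scale` then forces all the compensating call sites back to the signs
the underlying math actually has, which is why the other formulas "started to
look more reasonable".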

------
FourierTformed
If someone feels the need to tell you they are an unapologetic asshole, I
think that's a small dog with a smaller bark.

------
aboutruby
Hmm, I didn't think this was real when I saw it (but it seems to be very real
after checking):

> And quite frankly, I don't care. I don't think kernel development should be
> "easy".

Previous discussion on HN from 2015:

[https://news.ycombinator.com/item?id=10814514](https://news.ycombinator.com/item?id=10814514)

------
chrisg3
I'm personally not a fan of casting aside a tool as something never to be used
(especially as a matter of principle).

I personally do not use debuggers, but I can see why they might be used,
especially to step through someone else's convoluted code.

------
izacus
This is now 18 years old - is it still relevant? Was he right?

~~~
sam_lowry_
He was right at least in this part: «My biggest job is to say "no" to new
features, not trying to find them.»

------
raible
WGAF? I don't like source code control systems.

;)

------
jdlyga
I didn't like debuggers much back then either. But they've gotten better.

------
growlist
Imagine being so famous that you are known only by your first name, and what's
more, not just for making crummy pop music but for doing something that's
changed the world for the better. Thanks Linus.

------
dvfjsdhgfv
I, for one, miss the old Linus.

------
PorterDuff
and I sort of miss ICE machines.

Oh well.

------
htor
you bastard!!!

------
jondubois
>> Yet they do. People think I'm a nice guy, and the fact is that I'm a
scheming, conniving bastard who doesn't care for any hurt feelings or lost
hours of work if it just results in what I consider to be a better system.

Such honesty. I think that jerks who scheme and hurt individuals for the
common good are the best kinds of jerks and the world needs more of them.

Also, his points make perfect sense for Linux kernel development. It shouldn't
be easy to change. Stability is by far the most important feature. Even
excellent developers don't cut it for the kernel; you need complete freaks of
nature. It's impossible to even wrap one's mind around the sheer number of
systems that depend on the Linux kernel. Kernel development cannot be slow
enough. Most of the new code should be thrown away without batting an eyelash.
In fact, every line of code merged should be so good that it should deserve an
international conference dedicated to it and the author should get a medal.

Heck, every line of code should have a religion built around it; complete with
churches, priests, a pope, schools, etc...

