I Don't Like Debuggers (2000) (lwn.net)
186 points by oumua_don17 on March 22, 2019 | 239 comments

I loved what Rob Pike had to say about what he learnt from Ken Thompson:

> A year or two after I'd joined the Labs, I was pair programming with Ken Thompson on an on-the-fly compiler for a little interactive graphics language designed by Gerard Holzmann. I was the faster typist, so I was at the keyboard and Ken was standing behind me as we programmed. We were working fast, and things broke, often visibly—it was a graphics language, after all. When something went wrong, I'd reflexively start to dig into the problem, examining stack traces, sticking in print statements, invoking a debugger, and so on. But Ken would just stand and think, ignoring me and the code we'd just written. After a while I noticed a pattern: Ken would often understand the problem before I would, and would suddenly announce, "I know what's wrong." He was usually correct. I realized that Ken was building a mental model of the code and when something broke it was an error in the model. By thinking about how that problem could happen, he'd intuit where the model was wrong or where our code must not be satisfying the model.

> Ken taught me that thinking before debugging is extremely important. If you dive into the bug, you tend to fix the local issue in the code, but if you think about the bug first, how the bug came to be, you often find and correct a higher-level problem in the code that will improve the design and prevent further bugs.

> I recognize this is largely a matter of style. Some people insist on line-by-line tool-driven debugging for everything. But I now believe that thinking—without looking at the code—is the best debugging tool of all, because it leads to better software.


This makes sense to me for certain types of codebases. I would almost never bother stepping through code I wrote by myself for example.

But when I’m working on large software projects or certain external libraries I tend to encounter bugs or design issues where I realize I made the wrong assumptions about how someone else’s code works in the first place, and a good debugger is very useful in those cases when the problem changes from debugging your own logic to reverse engineering someone else’s.

This is why I think debuggers are necessary. Without being able to see a full stack frame of information about variables it can be extremely difficult to debug when using other people's code. So many errors boil down to assumptions about what is in a value.

Why are you giving your functions uncertain data?

The first point your code sees uncertain data is the point that it needs to clarify what it has.

The only way you get the errors you're talking about is if you ignore the above practice.

Garbage in. Garbage out.

For code I write, there is usually only one option for what is in a variable: the type of data I put in there. This is true for dynamic languages as much as static ones.

In some situations I will also allow a null value. All that means is that there was no data, and no default is wanted. Usually, I want a default.

Closing off all entry points for uncertain data you need to make assumptions about is the first port of call when dealing with other people's or legacy code. It's the way to reason about code without a debugger.
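A minimal sketch of that practice in Python (the function and field names are made up): uncertain data gets clarified at the first point the code sees it, and a default is applied when there's no data.

```python
def normalize_age(raw):
    """Clarify uncertain input at the boundary.

    After this function, callers can trust that an age is always a
    non-negative int; None means "no data", so a default is applied.
    """
    if raw is None:
        return 0  # no data, and a default is wanted
    age = int(raw)  # raises ValueError on garbage like "abc"
    if age < 0:
        raise ValueError("age cannot be negative: %r" % raw)
    return age
```

Downstream code then never has to wonder what's in the variable, which is the whole point.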

If you can't reason about code without a big question mark above every piece of data you need a debugger.

This amounts to saying that you don't need a debugger if you don't make mistakes.

If your mental model of how the code you're interfacing with works is wrong, then it won't help you. Your data validation will error out, and you might find that it was too much, too early. Not only do you have logic based on flawed assumptions, but check code as well.

Understanding the code base you're programming against is a skill that can be improved, but hoping to guard against all misunderstandings is probably unrealistic. Given that, following a trace may help you identify disagreements with your model more quickly.

Far from it.

I am saying you don't need a debugger in the scenario above because the only time you need it in that case is if your inputs can't be trusted.

If you control the domain, the only way you can't trust your input is if you fucked up.

The answer to that isn't "oh now I need a debugger" the answer is to go clean your code.

While I'm here, I'd also like to point out this isn't an "I'm good and you're shit" thing I'm describing; this is my day job. I make lazy crap code to get things done, then I go in to modify it and find I can't reason about it, so I go and clean up the mess.

Maybe I made it, maybe I didn't; it's beside the point. You have a mess: clean that first, then reach for a debugger.

Hell, reach for a debugger to help clean up if you need to, just don't sit there and tell me you need a debugger because code inherently needs to make assumptions about what is in a variable. It doesn't. If it does, it's a code smell.

> So many errors boil down to assumptions about what is in a value.

Only avoidable errors. They should not be dictating your tools or your language.

> This amounts to saying that you don't need a debugger if you don't make mistakes.

Not making mistakes certainly saves time.

I often use a debugger to inspect variables and check my assumptions about the application state at that point.

I generally like her writing when I come across it, but I disagree with that one.

Good programmers are humans and make mistakes, of course. However, that doesn't mean good programmers don't exist or that people can't vary a lot in skill or average quality of output.

She is correct that starting with some notion of "good people" is no silver bullet. But the conclusion that "we all suck", I hope that is hyperbole, because it isn't true. (Or maybe: everybody sucks a little bit differently.)

I interpreted her point as, "don't be overconfident or arrogant", such that you think that everyone else sucks. That you're too good to "lower" yourself to everyone else's level.

Yea, I find this take on my post odd.

I am the first person to understand developers are far from perfect, my self included. But it's just code, you can refactor and learn from mistakes.

The thing I don't buy is that this stuff is difficult or that you can't expect good behavior from professionals.

Avoiding side effects, or pushing them out to the edge where reasoning about them is clear is something you should have been taught when you got your expensive piece of paper.

Likewise, avoiding mutation, over-abstraction, early optimisation, and variable reuse are all really basic shit you need to understand. If you're not hiring for these basic qualities, what the fuck are you hiring for? Good looks?

I'm sorry, did you mean to imply something with that link?

Ironically, kernel space is one of the places where you're more likely to get "wrong" values being teleported into your program by bits of hardware, kernels running on other cores, and nightmare out-of-spec ACPI BIOSes.

An example inconvenient debugging story: https://mjg59.dreamwidth.org/11235.html

(I don't think DMA triggers hardware watchpoints, but if you set a hardware watchpoint and an address changes value without hitting the watchpoint, you know that something funny is going on.)

> But when I’m working on large software projects or certain external libraries I tend to encounter bugs or design issues where I realize I made the wrong assumptions about how someone else’s code works in the first place

This. A few days ago, I spent a couple of hours trying to get a Jest+Enzyme test to open in the Chrome inspector (apparently there's a bug which causes debugger statements in Jest tests to be ignored), because I hit an edge case bug in a method in the Enzyme library. If I had been able to step through it in the debugger, it would have taken me minutes to figure it out; instead I spent a couple of hours going through the codebase and figuring out where to put the log statements.

This is absolutely true.

But when everyone starts taking this view on it, the result is that changes get made tactically based on what makes your task work. With no understanding of the overall vision. The result undermines the integrity of the system, and makes debuggers ever more necessary going forward.

This is a good point to hold in your mind as you read or re-read Programming as Theory Building by Peter Naur: http://pages.cs.wisc.edu/~remzi/Naur.pdf.

> But when everyone starts taking this view on it, the result is that changes get made tactically based on what makes your task work. With no understanding of the overall vision. The result undermines the integrity of the system, and makes debuggers ever more necessary going forward.

I guess the pie in the sky vision is that tactically making your task work somehow becomes harmonized with the overall vision. Extreme Programming was supposed to do this through the high information exchange of pair programming, the constant refactoring, and the practice of there only being 7 or so large scale patterns for the whole of the application.

The way this works in most of the real world, is that it's supposed to work like this, but there's no pair programming, and you never get enough time to refactor.

Exactly. I mostly use the debugger to see the call stack so I can understand what calls what without having to read everything. I can work backward from the faulty behavior to better understand the context of the issue.

I rarely actually fire up the debugger, and I try to refrain from gratuitous prints (though I use them in emergencies or if a debugger isn't convenient).

I think this makes me a better programmer, though that's really hard to tell objectively.

For that case - rather than resort to the debugger - I've always gone through the code, come up with a theory of operation, and then instrumented it with either print statements or a scoreboard-style struct in some shared mem to validate.

So apparently a lot of people seem to equate debuggers with single-stepping through code.

You can pry my debugger from my cold, dead hands, but I don't even know how to step through code in my favorite debugger[1].

In my opinion, the only right way to fix a bug is to build a mental model of the code, and a debugger is a massive force multiplier in doing so. For any code that wasn't written by me in the past 6 months or so, the code itself is a mystery, and while reading the code is a big aid in understanding how it works, it can also mislead you. In particular, there is a class of bugs that arise when how the code actually works diverges from how the author thinks it works. A good author will structure the code to guide a reader toward how the code works, but when the author was mistaken, this can be very misleading (in particular, I will actively avoid reading comments when I know there is a bug in the code, because comments are the one part of the code that is never tested).

Another way of putting it: The source code is very good for telling you how the program is intended to operate, and a debugger is very good for telling you how the program actually operates.

1: I'm sure it's listed how somewhere here, but I've never felt the need for it: https://common-lisp.net/project/slime/doc/html/Debugger.html

I'm going to expose my ignorance here, but other than stepping through code, how would you use a debugger? Just to get stack traces and variables values at a specific break point?

A few examples off the top of my head:

1. Instrument the code in one way or another. At the simplest level, many debuggers can log each function call.

2. Change the definition of functions while the system is running. Most highly dynamic languages will let you do this. I'm told that there are C IDEs that can do this too though (modulo inlining anyways).

3. Change the timing characteristics of the program; if a race condition is suspected, ordering can be forced through the use of thread-local breakpoints, for example.

4. Inject specific data. Have a function called under normal operating conditions and modify some or all of its parameters. Think instant unit-test, but no need to mock anything.
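As a sketch of point 1, Python's standard sys.settrace hook can log every function call; this is roughly the mechanism a debugger's call-trace feature builds on:

```python
import sys

calls = []

def trace_calls(frame, event, arg):
    # Record the name of every Python function entered.
    if event == "call":
        calls.append(frame.f_code.co_name)
    return None  # no per-line tracing needed

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

sys.settrace(trace_calls)
fib(3)
sys.settrace(None)
# fib(3) enters fib five times: fib(3), fib(2), fib(1), fib(0), fib(1)
```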

> but I don't even know how to step through code in my favorite debugger

If your lisp supports it, pressing 's' in sldb should do it.

That's a nice quote, but why does it have to be an either-or? Sometimes you think about problems and sometimes you use debuggers. And sometimes you do both. I mean, for something like C/C++ I just view a debugger as an interpreter. Also, using print statements is just a less interactive version of a debugging process.

Personally I _love_ debuggers when coming to new codebases. I view code in the process of execution as the natural state of a codebase. Instead of hopping around the source code by hand, why not step into functions, continue execution, and let the program flow do it for you? Most code only has small parts that are important, and with debuggers I can usually find those parts right away.

I don't blame Torvalds for not wanting to use a debugger. He doesn't like them and doesn't want to support them. That's his choice. But I find it odd to just categorically dismiss them.


One of my favorite bugs I ever introduced was typing 0 instead of O in a variable name. Review your mental model all you want, a debugger is going to be a lot more useful in sussing that kind of thing out. Even just being able to see you have two variables on the stack RockOn and Rock0n will pretty much save you.

Hah, yes, I've done the same thing. It always bugged me how close together the two keys are on a US English qwerty keyboard.

A font with a discernible difference would help, too.

Agreed! I was young at the time. Also, my poor coworker was the one that had to find and fix it since I was in class that day. Who knows what font he was using.

> I don't blame Torvalds for not wanting to use a debugger. He doesn't like them and doesn't want to support them. That's his choice. But I find it odd to just categorically dismiss them.

I rather suspect that Torvalds' beef with using debuggers is that so many engineers get lazy and begin to use them as a substitute for thinking things through.

After all he does use debuggers. He said "I use gdb all the time, but I tend to use it not as a debugger, but as a disassembler on steroids that you can program." This indicates that he is using the debugger as a means to improve his mental models of the CPU / hardware.

Love that story.

I helped found a Mathematics of Finance masters program at my university, teaching a numerical methods course before we could hire specialized staff. The students got stronger each year. My last year at it, three very strong students were trying to implement a research paper, tying together all their work to strengthen their resume. They had an impression of me as hot-shot C programmer from a computer algebra system I had written (everything is relative) and they sought my advice when their code wouldn't work.

I was in deep inner cringe, aware of their hefty tuition, how I truly didn't understand a word of what they were saying. I had to think of something to say that would get them to leave my office, apparently satisfied. Then I heard their voices catch as they apologized for introducing a fractional time step as their algorithm shifted phases.

"No! Always a bad idea! Here's another way to do that. I don't know if that's your bug, but..."

The email that evening was profusely thankful, warning me they'd be back the next day with their new bug. Launder, rinse, repeat.

While I'll probably never hang a shrink shingle in Silicon Valley, as tempting as the idea is, one can also listen to one's own voice catch. Ever notice how everything that can go wrong cooking was actually anticipated and ignored at the time? Mathematical research teaches one to listen to the faintest anomalies. It's the universe whispering truths.

I've read that story several times. It's never sat well with me, and it's only now that I have a concise argument against it:

Pike is conflating the diagnosis process with the quality of the chosen solution. In practice, they are often unrelated.

A good debugging tool will not only show you the state of the system when the bug happens, but help you understand the execution path that formed that state. (If your debugger doesn't do that for you, then you should either get better at using it, or find a better debugger.)

A debugger _doesn't_ tell you how a problem should be solved; that's a separate decision which often has many other factors beyond "improve the design to be more future-proof".

The story also implies that it should be enough for the programmer to explore the mental model they currently have. But if I have a problem that needs debugging, and a minute's thought isn't enough to reveal it, then it's usually because that mental model _is wrong_. I've spent enough time dealing with engineers who furiously insist that a system shouldn't be doing something that it clearly _is_ doing (including myself) to know that your mental model can be just as much of a barrier to debugging as it is an aid.

So, as often with arguments about whether you, a smart engineer, can solve all problems with pure reason: No, you can't. There are often things that you don't know, and things that you don't know you don't know, and getting external assistance is often a huge time- and embarrassment-saver.

This is pretty much the Luke Skywalker turning off his targeting system approach, and the reason why IDEs are a blight on programming. I run into Java programmers who have incredibly weak mental modeling faculties. The are so reliant on a machine telling them what to do, invoking their toolchains, and boiler plating for them that they can't begin to understand what's really going on under the hood.

You've got to turn off the targeting system and start thinking about problems holistically. Programming is not yet a solved problem and the really valuable element in the equation is a human's ability to reason and predict by aggregating massive amounts of information.

Similar to the Feynman method:

1. Write down the problem
2. Think really hard
3. Write down the solution

Surprisingly, step 1 is the most important thing to me.

I often debug in the same way I design software. Step away from the computer and write something down.

It's probably a personal thing, but I solve problems much faster with a pen in my hand as opposed to a keyboard at my fingertips.

In my career, almost none of my projects have been greenfield, so the debugger is the tool I use to explore the code in action in order to create the mental model I can rely on when I don't have the debugger.

Debuggers often frustrate me. What I prefer to do is find a point of entry (e.g. a button in a GUI, or a command line argument). I then read through the code from that point of entry, tracing it down to where it goes (db, disk, etc.), logging key points of interest in my journal. I do this over and over until I have a really good idea of the code.

Some things to note. I've worked on code bases that were in the 10s of millions of lines of code (healthcare industry). I still use this technique, but with the understanding I may never have a complete model of a code base that size.

That is essentially the same approach, but mine is guided by the debugger: I like to watch the state changes as they happen.

Although I hate to imply that I'm anything near as great as Ken Thompson (I'm most certainly not), I do share his approach to "debugging". I've also long thought it weird that so many engineers that I've worked with think this is somehow an exceptional thing to do.

It only makes sense to me, although it does require you to actually understand what the code is (trying) to do.

I do sometimes use the standard debugging techniques, including debuggers, logging, etc. But I end up doing that only when I'm working with code that I don't fully understand, or if I'm totally at a loss as to what's going on. I'd say that covers 10-20% of cases.

Ideally people could find the problem quickly with the debugger, but then have the wisdom to raise it up into the greater context before deciding on a fix. Of course, many people don't do this in practice.

Yes, one thing I've noticed is that good code has application-specific invariants. You should think about it at a higher level than lines of source code and the values of individual variables.

Of course you can verify these invariants with a debugger. But when you hit a bug and immediately start digging in like Pike says, that tends not to be what you're thinking about.

Debuggers are great for putting in the minimal hack in a production codebase that doesn't disturb the code. But they're generally bad for developing software, because most (all?) software should have some invariants that allow the Thompson style of debugging.

You step back, think about what invariant could have been violated to cause the bug, put in a print statement somewhere to see if it was violated. Then you take another step back to think about how to fix it, possibly with a fundamental change to the data structures. That is, you design your software differently to avoid the bug, rather than just putting in the minimal hack.
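A toy illustration of that step-back-and-check move (the example is invented): instead of stepping through a transfer line by line, you assert the invariant -- money is conserved -- at the point you suspect.

```python
def transfer(accounts, src, dst, amount):
    accounts[src] -= amount
    accounts[dst] += amount

def check_conservation(accounts, expected_total):
    # The invariant: transfers move money around but never create or
    # destroy it. One targeted check replaces a stepping session.
    assert sum(accounts.values()) == expected_total, accounts

accounts = {"alice": 100, "bob": 50}
total = sum(accounts.values())
transfer(accounts, "alice", "bob", 30)
check_conservation(accounts, total)  # passes: money was conserved
```

If the check fires, you know the bug is in whatever touched the accounts, and you can start thinking about whether the data structures themselves should change.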


The other problem with debuggers is that once you've used one and modified the code, then everybody who modifies the code later probably has to use one too. The code has become subtler, but not in a good way. This makes development slower.


I think it also relates to another thing Linus said [1]:

Bad programmers worry about the code. Good programmers worry about data structures and their relationships.

It's easier to diagnose problems in data structures (by printing them) than to diagnose problems in control flow (which is greatly aided by a debugger).

One weakness of debuggers is that they don't print things in application-specific formats. You have to write debugger plugins, and those aren't set up for most projects, and they don't behave the same way on all platforms, etc.
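In-language, one workaround is to teach the data structure its own application-specific rendering; a Python sketch (the Span type is made up):

```python
class Span:
    """A source span in a hypothetical syntax tree."""

    def __init__(self, start, end):
        self.start = start
        self.end = end

    def __repr__(self):
        # The application-specific format you'd otherwise have to
        # teach a debugger via a plugin, per platform.
        return "Span(%d..%d)" % (self.start, self.end)

print(Span(3, 17))  # prints Span(3..17)
```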

So basically having a bug that you can't figure out without a debugger is a sign that you may be using too much CODE and not enough DATA. It's a smell that indicates a problem with the software design.

Of course, in the real world, sometimes you have to swoop in with a debugger and fix something. But I do think that there is something wrong with living in a debugger for months on end (which I have experienced on one dev team.)

[1] https://softwareengineering.stackexchange.com/questions/1631...

Could you explain more about these invariants?

Here's one example that comes to mind.
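A minimal Python sketch of the kind of record I mean (a connection record for a keep-alive protocol; the names and types are illustrative):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class State(Enum):
    CONNECTING = 1
    CONNECTED = 2
    DISCONNECTED = 3

@dataclass
class Connection:
    state: State
    # Keep-alive protocol fields: either both are set, or neither is,
    # and they may only be set while state is CONNECTED.
    last_ping_time: Optional[float] = None
    last_ping_id: Optional[int] = None

    def check_invariants(self):
        assert (self.last_ping_time is None) == (self.last_ping_id is None), \
            "ping time and ping id must be set together"
        if self.last_ping_time is not None:
            assert self.state is State.CONNECTED, \
                "ping fields only make sense while connected"
```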


As you can see, a number of invariants tie the different record fields together. Maintaining such invariants takes real work. You need to document them carefully so you don't trip over them later; you need to write tests to verify the invariants; and you must exercise continuing caution not to break the invariants as the code evolves.

It's also covered here under "make illegal states unrepresentable".


The author is advocating OCaml, which allows you to encode such invariants in the type system.

But 99% of state machines are not written in OCaml -- more often it's C or C++. For example, the entire Linux kernel :-/ So basically I'm saying rather than poking and prodding at the code with the debugger and flipping variables, you can think of your whole program as a state machine, and the state machine has invariants like the ones listed:

last_ping_time and last_ping_id are intended to be used as part of a keep-alive protocol. Note that either both of those fields should be present, or neither of them should. Also, they should be present only when state is Connected.


For another example, the "lossless syntax tree" data structure I use for my shell has a number of invariants:


I use the same data structure for generating errors and translating between two languages. So it really has to have some higher-level properties rather than just setting and getting variables.

In C# they have something called "Red-Green Trees" that are similar:


In summary, your debugger is not going to show you which nodes are red and green in this sense! It has no idea about such things. IMO it's better to think at a higher level when you can.

(Note that red-green trees are just a name for an application-specific concept they made up. It has nothing to do with red-black trees, which are a generic data structure. Even so, your debugger also has no idea about the invariants in a red-black tree implementation either!)


Maybe a simpler way to put it:

- the assert() statement in C and Python checks invariants at runtime. There's something of an art to doing this; I think it's covered in books like "Code Complete". You can grep a codebase for assert() and see what invariants the author wants to maintain.

- I'm not sure if they still teach "loop invariants" anymore, but it basically means "something that's true at every iteration of the loop", regardless of whether it's the first iteration, the last one, one in the middle, etc. If you have off-by-one errors and you stick in a -1 to fix it, that's a sign that you could think more rigorously about the loop.

Invariant means "a thing that's always true, regardless of the values of the variables." It's a higher-level property than what the debugger shows you.
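For example, a binary search with its loop invariant written out as asserts (a sketch, assuming a sorted input; the asserts are grep-able the same way):

```python
def bsearch(xs, target):
    """Binary search over a sorted list; returns an index or -1."""
    lo, hi = 0, len(xs)
    while lo < hi:
        # Loop invariant, true on every iteration: everything left of
        # lo is too small, everything from hi onward is too big.
        assert all(x < target for x in xs[:lo])
        assert all(x > target for x in xs[hi:])
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        elif xs[mid] > target:
            hi = mid
        else:
            return mid
    return -1
```

If an off-by-one sneaks in, it's one of these asserts that fires, not some mystery value three calls downstream.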

I think this is a drastically more appropriate response from a leader in a community.

Survival of the fittest, sure, but requirements being hard to change because of high technical accomplishment and a high standard, versus being grossly complex, are two different things.

I like Linus and his willingness to say what he thinks but sometimes he really is just an asshole from the old guard.

In fairness, he has recognized that it's not a good way to lead, and is working on it.


Day to day, I tend to do a bit of both. Most often, I attach a debugger when it’s code that someone else has written. I find that it’s an easy way for me to visualize the flow of data between function calls.

This is a story-telling technique not specific to writing code. An advanced practitioner will trash a technique every beginner must learn, to signal a higher-level mastery. It's a discourse that happens in everything: climbing, skiing, fishing, you name it.

It makes for an impressive story and establishes the speaker as a jedi master.

It's also mostly bollocks. Everyone will have epiphanies. I sometimes work on a problem for a while, go to the loo, and have the solution come up. It's the well-known phenomenon of stepping away for a while and letting the brain unfocus from the details.

Yes, it works when the problem was something fundamental or architectural. It rarely does when the problem was using the wrong variable, or a test that was inverted. These are easier to find with traces or a debugger.

But dissing the low-level practices is so much better at getting that alpha status...

There's definitely a self-serving aspect here, but importantly he's only speaking to Linux Kernel development, not development in general. My takeaway is that he only wants Jedi Masters working on his kernel, not that everyone else is completely worthless.

This was pretty much the high point of "coder machismo", along with Eric Raymond and his power tool analogies. Presumably this is also why the kernel doesn't have a test suite. Interesting to compare with the other paragon of software development processes, SQLite, with its vast test suite. It doesn't have a "debugger" per se, but it does have some tips: https://www.sqlite.org/debugging.html

Always difficult to tell what the cost of this was. What features did we not get because developers were put off? What problems did it prevent? How did this affect downstream Linux ecosystem development and culture?

Re: use of debuggers, I've found them most useful when dealing with other people's code; you can short-circuit a huge amount of "where is the code that does this" or "how did I get here" by just setting a breakpoint and getting a stacktrace.

I feel that I'm reading that tone in Linus's post, too. But after reading these comments, it also appears that some people simply have different use cases for these tools aside from stepping through logic.

And in other cases, the debugging statements have to be present in the code to even get to step through the program, and their concerns revolve around code management when using tools that insert code like that.

But it would be nicer if they wrote their stuff like their use cases weren't the only ones, or the most important. What features have we lost because they couldn't think past themselves?

He seems to be arguing that in order to weed out poor engineers he prefers to keep things complicated. As hard as it would have been for me to accept this a few years ago, I think there might be some truth to this argument.

For example, I would think that the best Java engineers are probably much more productive than the best C++ engineers. However, the market is probably so full of poor Java engineers that the average Java engineer is probably a lot worse than the average C++ engineer.

C++ is such a complicated language that if you are a professional C++ engineer and still employed, you must be fairly skilled. So, for example if you start a tech company or an open source project it might be better to choose C++ as the implementation language even though Java might make you more productive.

Funnily enough, Linus does not apply this logic to C vs C++ debate and has come to the opposite conclusion : http://harmful.cat-v.org/software/c++/linus

> He seems to be arguing that in order to weed out poor engineers he prefers to keep things complicated.

I don't know whether this is his goal (he does make it sound like it), but I think the consequence is another: In order to manage not having a debugger, things are by necessity kept simple. It reminds me of a quote from Brian Kernighan:

> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?

Since simplicity has other positive consequences apart from ease-of-debugging, there might sometimes be advantages to not using the fanciest tools all the time.

> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?

By taking twice as long to debug it ?

By not having to debug complicated situations most of the time, because you lacked the tools that would have allowed you to get into such a bad situation in the first place.

Phone a friend? Hire a consultant... handicapping yourself seems questionable unless you're playing a weak opponent.

> if you are a professional C++ engineer and still employed, you must be fairly skilled.

Nah. Many of the C++ gigs I've had were at places which only used fractions of the language. There are a lot of bad C++ programmers. I'm not disagreeing with your overall point about ratios though.

Supporting anecdata: I've seen some C++ where entire classes were copy-pasted (including `//TODO: deduplicate method` comments), because the author didn't want to use inheritance. And when they insisted their code was fully optimised, it turned out to be doing an unnecessary O(n^2) operation.

>Many of the C++ gigs I've had were at places which only used fractions of the language.

I don't think any project or person uses or even knows the entire language. I think even Bjarne himself said that no human can possibly know all of C++.

Seriously, in every C++ project I've seen, it seems like whoever's in charge picks some subset of the language to confine themselves to, and uses that. One of C++'s strengths is that you can do this; it's adaptable to your project and style.

I'm not sure this correctly characterizes Linus' opinion. It seems more that he's saying debuggers cause a kind of myopia by putting a specific failure under a microscope, and that he wants developers who are willing to undertake the harder task of understanding the failure at a system level. Not that he wants to impose complexity for its own sake to cull out weaker developers.

By simple statistics, most people are, on average, about average. This is true for any population large enough. I don't buy your assertion that C++ developers are better than Java developers, and all else being equal, bad Java code doesn't blow up in as destructive a way as bad C++ code.

Deliberately keeping a working environment hazardous and unsafe doesn't give you a breed of superhumans that never let accidents happen, it just leads to a lot of unnecessary accidents.

I think the main problem with a lot of high-level-language software developers (Java, ...) is that they often fail to consider the consequences of the fact that their software runs on real-world hardware: memory access, cache, network latency, bandwidth (memory and network), etc. Instead it's all about artificial constructs such as design patterns, OOP, etc., which often results in bloated, slow, and therefore unusable software.

> By simple statistics, most people are, on average, about average.

I'm pretty sure this isn't true. Averages can be skewed by outliers. It's likely that most people are better/worse than average. Consider how the average life expectancy was horribly skewed by infant mortality.
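(A toy illustration with invented numbers: heavy infant mortality drags the mean down, so most of the sample outlives the "average".)

```python
lifespans = [0, 1, 70, 72, 75, 78, 80]  # invented data with infant mortality
mean = sum(lifespans) / len(lifespans)
above = [x for x in lifespans if x > mean]
print(round(mean, 1), len(above))  # mean is about 53.7, yet 5 of 7 exceed it
```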

You're right, it isn't true. It doesn't even require outliers, just the curse of dimensionality. See for example https://www.thestar.com/news/insight/2016/01/16/when-us-air-...

You just made the exact opposite argument of Paul Graham's "Blub" article.

The "Blub Paradox" argues you should hire developers using the most productive tools, even if those tools are unpopular, because that naturally selects for the best developers.

You are arguing that you should hire the developers using the least productive tools, because if they can get anything working at all with those tools, they must be pretty talented.

No he didn't, because "productive" != "(not?) difficult", they're mostly orthogonal. LISP is productive according to PG because it has a kind of unlimited amount of abstraction, I don't think he would say that LISP is difficult. LISP/smalltalk: PG-productive, not difficult Java: PG-unproductive, not difficult C++: PG-unproductive, difficult (Forth? Haskell?): PG-productive, difficult

So there are points in all 4 quadrants I think.

Reformatted your list so it's easier to read:

LISP/smalltalk: PG-productive, not difficult

Java: PG-unproductive, not difficult

C++: PG-unproductive, difficult

(Forth? Haskell?): PG-productive, difficult

That's not what it says at all. The article claims that you should use the most 'powerful' language available when writing application software (that meets your performance requirements), and that doing so will result in the greatest productivity.

The "Blub Paradox" is the claim that you can only understand how powerful languages are in terms of the powers the languages you know have. So if you understand Python and C++ and not Lisp, you are unable to estimate the power of Lisp.

Weeding out is useless because there are far more "poor" or "average" developers than there are "great" developers. This means that you're catering to the tail end of a distribution and are way more likely to progress at a significantly slower rate because you're looking for the 10% of the population that is "the best". Also, that scale is quite subjective and opinionated.

While I didn't get this opinion from my read of Linus's post, Linus seems to want to make things unnecessarily hard, fails to offer arguments for why that wouldn't be bad, and simultaneously claims, with little supporting argument, that making things easy would just be bad outright.

I was under the impression that software should be made as simple and easy to understand as possible. So for example, should I use some esoteric bit manipulation to multiply a number by 2^n, or should I just use a library function? It may make sense if performance and memory are limited, but those are about the only reasons I can think of. Otherwise, you should have the code explicitly state your intent. It makes me think the kernel is a garbled mess, the way it's talked about in that post, without very easy ways of maintaining it.
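(To illustrate with a toy Python example: both lines multiply by 2^n, but the second states the intent directly.)

```python
x, n = 7, 3
print(x << n)    # bit-twiddling: left shift by n; prints 56
print(x * 2**n)  # explicit: multiply by 2 to the n; also prints 56
```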

Have you actually looked at the kernel code or worked with it? It's some of the cleanest code I've ever seen. It does have to do some things very differently due to it being kernel code rather than userspace, and it is a VERY large codebase (esp. when you add in all the driver code), but aside from that it's quite nice IMO.

Most commercial projects I've worked on really were a "garbled mess" by comparison, and far less maintainable.

I've looked at some of it, but it's been a long time.

> This means that you're catering to the tail end of a distribution and are way more likely to progress at a significantly slower rate because you're looking for the 10% of the population that is "the best".

It seems like he'd exactly agree. He goes on to say he doesn't want to add all those features, he'd rather go slower and his biggest job is to say "no" to new features.

But Linus is not god and is prone to his own biases. The fact that he's at the top and reviews all the commits is simply a product of history more than anything. I'm sure several "average" developers have worked on kernel code and had it accepted, given the sheer effort needed to maintain such a large code base.

I guess, my original point is, there will be "average" developers committing to the kernel just by statistics alone.

Not a C++ engineer here, but I would think that the vast majority of C++ gigs around are legacy systems - in which case most of the work would probably be bug triage, that likely doesn't call for more than "fix my immediate problem" thinking.

Obviously there's still a lot of greenfield C++ work out there too, but I would expect that it's no longer the lion's share (C# is very much getting to this stage of its life too).

I would guess that the top-level C++ folks have by-and-large moved on to those greenfield projects naturally as it's where their skills are most required, and where the most money is available to pay for them (e.g. fintech).

Embedded is huge for C/C++ and there are still many greenfield projects. Embedded (real-time, safety critical) usually requires GCless languages, which already rules out the vast majority of super productive new languages.

I wish we moved to Rust (or even Ada), because most of the bugs I see occurring couldn't happen there. But embedded compilers are usually lagging a bit.

> Obviously there's still a lot of greenfield C++ work out there too, but I would expect that it's no longer the lion's share (C# is very much getting to this stage of its life too).

Given that C# is the go-to language for new development on Windows, and with Microsoft now officially supporting cross-platform C# and .NET, I find that very hard to believe.

What’s the trajectory of, say, new GitHub projects that use C#/.NET?

Sure there is a lot of legacy work but I'm not sure it's the "vast" majority - at least, not out of proportion with other older general purpose languages.

It's still used in a lot of new embedded work. It's still the best option for a lot of HPC and numerics. Some of this work ends up being targeted mostly for inclusion in another language runtime, sure, but it's still new work.

What a line:

> And quite frankly, I don't care. I don't think kernel development should be "easy". I do not condone single-stepping through code to find the bug.

Reminds me of this quote from the introduction to Log4j docs [0]:

> As Brian W. Kernighan and Rob Pike put it in their truly excellent book "The Practice of Programming":

>> As personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two. One reason is that it is easy to get lost in details of complicated data structures and control flow; we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is. More important, debugging statements stay with the program; debugging sessions are transient.

The above quote made me reconsider my intense debugger usage that I had fallen into at the time. On one hand you can't take all your cues from authority, but on the other hand, what K&P said sounded like it might make sense. So I tried using the debugger less and logging more, and over time I think it's made me a better programmer. I think the key statement is:

> we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places.

which Linus also echoes:

> I happen to believe that not having a kernel debugger forces people to think about their problem on a different level than with a debugger. I think that without a debugger, you don't get into that mindset where you know how it behaves, and then you fix it from there. Without a debugger, you tend to think about problems another way. You want to understand things on a different _level_.

[0]: https://logging.apache.org/log4j/2.x/manual/index.html

I'm a little confused about their apparent implication that they have to single step all the way to the point that they want to inspect. Maybe older debuggers made you do this? But today, breakpoints are useful and seem to be widely supported, so you can run the program at full speed to the point of inspection, and then instead of getting a drive-by snap-shot as you do with a printout, you can do a more in-depth interrogation of the state of the system "Oh, this part of the state has value X as I expected, maybe it's this other part of the state instead--yes, there is something weird over here!" instead of discovering value X in the first run, adjusting your print statement, recompiling, and discovering the unexpected behavior.

All that having been said, surely there is still great value in applying a similar amount of consideration as to where to put the breakpoints as you would in determining where to put a print statement! (edit: and also in thinking hard about what new values to inspect if your initial guess is not borne out)

> "Oh, this part of the state has value X as I expected, maybe it's this other part of the state instead--yes, there is something weird over here!" instead of discovering value X in the first run, adjusting your print statement, recompiling, and discovering the unexpected behavior.

This is why I love debuggers. Well, the Chrome debugger. But I find that this situation is less common or doesn't happen at all when I'm writing Java with unit tests. It seems like dynamic environments like the browser are more ripe for debugger use as you describe. But then again, I've met some incredible developers who made really great browser UI (ie the RethinkDB UI) who told me they've never used the browser debugger.

The real trick is that gdb is scriptable. You don't print debug - you set your breakpoints, add some logic for deciding what you want to output and where, and then run at full speed.

I've never found something as nice as Ruby's binding.pry. I've added JRuby to Java projects temporarily just to get the interactive introspection of a REPL.

I wholeheartedly agree with K&R.

A lot of the hard problems I've debugged have been the interaction of multiple state machines. In these cases I needed to see the forest of interactions rather than the trees of instructions. On top of that, I felt it was faster with getting more information up front (the entire log) rather than identifying what I need to look at, starting the program with debugger attached, setup break points, get more information, and repeat (yes, a lot of my problems required relaunching the process or even rebooting so I could start from a known good state, yay hardware drivers).

I suppose Rob Pike has a R in it, but the R in K&R usually refers to Dennis Ritchie. This should probably be K&P.

K&P usually refers to Kernighan and Plauger's Software Tools, or The Elements of Programming Style.

For the book Kernighan and Pike wrote, The Practice of Programming, the canonical abbreviation is TPOP.

Sorry about the pedantry.

Bah, my brain auto-completed.

Personally I find debuggers invaluable when you're handed some huge chunk of code that someone else wrote and asked to fix it quickly. The "stop and think about it" only works when you already know the code in and out. If you're having to discover how it works to fix problems with it being able to single step through the flow is invaluable. Once you grok the code you can dial back the debugger use, but by that point you've fixed the bugs and moved on to the next project.

I admit that I am not a good programmer as many in this thread are.

I remember being scared and nervous looking at a large codebase and single-stepping through the code. I did not know how else to understand the code.

You mention that one needs to think at a different level. Could you offer pointers as to what that level could be? I lost my job as a developer and I am trying to get another developer position. However, I am absolutely scared of looking at a large codebase and not being able to understand it, and later not being productive.

Oh, I'm not sure I personally completely agree with any of those statements I quoted, including that one "needs to think at a different level", whatever that might mean. They were written by people who had an entirely different path into programming than me, and I think it's important to keep that in mind when considering what makes sense for you. I think practically each one of us can only think at the level at which we are currently capable of thinking, while understanding that we are capable of more.

Single stepping through code for me has also been a great entry point into large code bases. I think what happens over time, especially if you look into the internals of open source projects and libraries, is you learn to identify what kind of thing you're looking at: like, is this a thing that's driven by an event loop? Is this an event-driven system that calls some pipeline of functions through the stack? You begin to see that people generally make things in only a handful of different ways, and so it's easier to figure out how things are connected. But for me it took a lot of stepping through code to figure out a lot of these things, so I'm not sure either how else you can go about it.

I'm generally not a debugger user, not really out of principle but simply because I don't use one much, but I have two major exceptions: (1) highly recursive code, and (2) understanding a new codebase.

It is true you ought to build up a model of what code is doing to work on it properly, but I find trying to just read the code that I'm not familiar with to build that model is often both slower and worse than using a debugger to walk through it. Slower, because I'm sitting on the outside trying to stare in at a dynamic process based on my radically incomplete understanding of its static description and the stored state in my rather meager head (I think my raw ability to hold state in my head is probably a bit below average in the programming domain, as evidenced to me by the behaviors I've developed to compensate and other people's lack of need for them). Worse, because when simply reading code it's easy to miss the "validate_input(input)" call that in fact turns out to invoke a multi-thousand line subsystem that doesn't just validate the input, but also statefully mutates it based on all kinds of things and is actually 60% of the system despite its seemingly innocuous name. Stepping through can fix both problems much more quickly, and I'm not being paid to "not use a debugger".

I don't disagree that pervasive, continuous use of a debugger can afford bad habits, but the best answer is to simply not develop those bad habits, and then still use it when it is helpful... because there are days when it is very very helpful.

(Highly recursive code is an exception because Ye Olde Printe Statementse really break down at that point, and even if you make them work it can still be a lot easier to understand it from the inside than the outside, after the fact. I don't just mean "recursively folding on a list", I mean code that does things like walk heterogeneous trees for a DOM tree or a compiler AST and does a whole bunch of inter-related recursive stuff. Sometime you just have to admit your brainstack has overflowed and call in the tools. Best to avoid writing that code at all if you can, of course, but there's times when it is not avoidable.... because that's where the superpowers come from: https://steve-yegge.blogspot.com/2007/06/rich-programmer-foo...)

> It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is.

Stupid question - did their debugger not have breakpoints? Not that that would make using the debugger superior in all cases, but would eliminate time wasted "single stepping to the critical section".

I do agree with their overall point.

I've never really used debuggers routinely. They are a last resort for me. I use print statements and logs for the most part.

The problem with print statements is that you're now adding unnecessary code to the application that you then have to take out. How many of us have been lazy and just left them in? I know I have. Thus you violate KISS and add unnecessary complexity to code when you could've just used a debugger.

As for logs, those are a necessity and should be in every complex program not just because of exceptions, but because of program logic that you implement and want to know where you are, and were, while the program is in use.

It's not exactly challenging to have prints only show up in debug builds.
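For illustration (the function is made up), the Python analogue: an `if __debug__:` guard is stripped entirely when the interpreter runs with `-O`.

```python
def dequeue(queue):
    """Pop the head of a list-based queue (toy example)."""
    item = queue.pop(0)
    if __debug__:  # this whole block is compiled out under `python -O`
        print(f"dequeue -> {item!r}, {len(queue)} left")
    return item

q = ["a", "b", "c"]
dequeue(q)  # prints the debug line on a normal (non -O) run
```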

Yeah, but they're still in the code.

Tracing is nice when it is a feature built into the language implementation (i.e. the compiler or the interpreter).

Using a print statement to display the value of a variable is no different than putting a breakpoint at that line and just reading the state of your program off the locals window

The latter freezes your program for non-trivial amounts of time which can be problematic in some cases.

Yes, for example setting a breakpoint in your input method engine while it's relaying your keystrokes to the terminal with the debugger tends to go poorly. On the other hand, gdb supports the "commands" command, which allows you to automatically run a script when a specific breakpoint is run. That way, you can do essentially everything you can do by inserting logging statements directly into the code, but without having to restart the process.
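For instance, a sketch of that (the file name and variables here are hypothetical): a breakpoint that silently prints and continues behaves much like a print statement patched in at runtime.

```
(gdb) break input_relay.c:142
(gdb) commands
> silent
> printf "relaying key=%d, pending=%d\n", keycode, pending
> continue
> end
```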

With dynamic trace points and scripting there is no freezing.

There are significant differences, as discussed elsewhere. Additionally, you can't read globals or other complex computed expressions from the locals window...

Same, but I always feel like using a debugger has to be a better way to handle that stuff and that I just haven't invested enough time to get the workflow right. Maybe prints and log statements are better.

I was the same way until I worked at a place that would not let us merge clearly WIP code. Now, debuggers are life!

I think debuggers are actually a lot less useful than people think. Just by thinking, using a REPL, or putting in print statements you can solve probably 99% of all bugs.

Debuggers are useful when you really have no fucking idea what's going on.

Or instead of putting in print statements, you could use a debugger. You don't even need to know which print statements you need, because when the program is stopped you can look at whatever you like.

Thinking is still required, but I don't recommend relying on it, since it's typically what gets you into this whole mess in the first place. At the very least, don't rely on thinking to guess what the computer is doing, when you can use the debugger to find out for certain.

I use a repl with a debugger attached in clojure. Best of both worlds :).

I'd agree generally. I pretty much have only used debuggers when reverse engineering something or when I'm so hopelessly unable to see what the problem could be that I start suspecting the compiler might be at fault.

Even when reverse engineering something it's best not to get bogged down in the instruction-by-instruction details and instead try to follow the general program flow and reason about how you would do something if you were in the programmers' shoes.

This has been my experience as well: I only resort to the debugger when my other avenues have proven fruitless. They can also be useful for Heisenbugs.

However, I did have one job working on an embedded device with no console at all, and a JTAG debugger was invaluable there. But I would have preferred to have a console and be able to do printf debugging, as for easier bugs it's just a lot quicker.

I think it depends a lot on context. On a project with good automated tests, I find that I don't need to use the debugger very often.

But, those times when I had to use it, it saved me probably thousands of hours of work - especially when debugging complex software which I didn't write myself.

> Apparently, if you follow the arguments, not having a kernel debugger leads to various maladies:
> - you crash when something goes wrong, and you fsck and it takes forever and you get frustrated.
> - people have given up on Linux kernel programming because it's too hard and too time-consuming
> - it takes longer to create new features.
>
> And nobody has explained to me why these are _bad_ things.

Uhh, you're dealing with a largely voluntary unpaid workforce that requires a lot of effort to become proficient, and by definition is not user facing. Any barrier to entry you throw up is going to decrease the viability of the project and have knock on effects years into the future.

>you're dealing with a largely voluntary unpaid workforce that requires a lot of effort to become proficient

Are you, though?

"While many people tend to think of open source projects as being developed by passionate volunteers, the Linux kernel is mostly developed by people who are paid by their employers to contribute." - https://thenewstack.io/contributes-linux-kernel/

And even if this were true, Linux is no ordinary FOSS project at the risk of losing "viability", with what I imagine is an absolute minimal amount of casual contributors which would be turned off by this sort of supposed "barrier to entry".

That article is 17 years younger than the OP. I agree with you today, but was that just as true back then? I’m not sure.

Fair point, it really may have been a different situation in 2000, I didn't realize that.

Yet in practice, has it hurt Linux? Linus' entire point is that he doesn't think more is better. He'd rather have a smaller group of more talented individuals working the kernel. Even if you disagree, the premise of your argument, that Linux devs are largely an unpaid, voluntary workforce, is incorrect.

"Yet in practice, has it hurt Linux? "

Very much, yes. There are still so many bugs, and the monolithic nature which forces drivers to be integrated into the kernel requires that many more people at least understand the kernel better. Linus could still have his elite, Darwinistically selected core hacker group, but for the whole Linux ecosystem it is very bad that the barriers to the kernel are so arbitrarily high.

>forces driver to be integrated in the kernel

Isn’t that a result of the intentional refusal to guarantee ABI compatibility? Otherwise you end up with Microsoft’s much more difficult-to-fix situation (the ABI is maintained and vendors can trivially produce drivers without a concern, but it's difficult to update APIs in general). I'm not sure how a microservice architecture or something would sidestep the issue; it's more of a political question of who's in charge of maintaining the drivers.

"its more of a political question of who's in charge of maintaining the drivers"

And it's even more stupid to then set the bar artificially high, when you don't have enough elitist, ideologically pure hackers to do that.

What bar..? It gives freedom to kernel developers to update their apis more freely and maintain effective backwards compatibility, at the cost of extra work managing/updating the drivers themselves instead of relying on vendors to do it. The only bar then is that the driver needs to be in a sane state before being accepted into the kernel tree, so that it remains easy to maintain sanity. Which seems to me a very fair, if not necessary, ask.

It’s a pretty clearcut tradeoff, and they apparently have managed the manpower for it thus far... I don’t know why you need “elitist, ideological hackers” to support this strategy


Could you please stop posting uncivil and/or unsubstantive comments to Hacker News?

I never knew that Linus didn't like debuggers, but I completely agree, though for slightly different reasons. I haven't stepped through something in a debugger in probably 10 years.

I don't do kernel development, most of my stuff is distributed systems / networking / devops type stuff. In this realm I feel like debuggers are almost an antipattern because:

1. Distributed systems with a lot of nodes means you might have to attach debuggers to everything, and that's just too hard to control.

2. Anything with networking, like web services, distributed systems, protocols, etc., will likely time out the connections while you step through line by line. Once your TCP socket gets closed on you from below, or the other side times out waiting for a response, you have to set up your debugging environment all over again.

3. Most of the real hard bugs I have to deal with usually involve timing or race conditions. This means that I might not be able to reproduce them in a debugger, and sometimes even the act of attaching a debugger makes the timing different enough to not repro the bug.

4. In production of these systems, most times you can't just hook up a debugger and basically shut down the system while you try to diagnose a bug. I collect what info I can from the logs and move on. In this situation, you need great logs. Great/useful logs are built over time, by adding log statements to the right places. So for every bug I diagnose, I keep the logging statements in I used for debugging (although I usually don't leave them emitting messages, because I have log levels).

I worked on browser devtools. The #1 used tool was the console. Nothing beats being able to put in some console.log and see its output. console.warn is even better since it captures the stack trace.

Logs give a timeline of change.

I had this idea of console.snap but never got around to it. The idea is that it would not only capture the stacktrace, but a shallow copy of scopes at every function. You can query the snap to see how a variable changed, or find what set of variables caused something else to change later on.

I feel like this is the holy grail of debugging. Smarter logs that you can reason with and do time travel analysis.

I never got around to doing it, but now working at an analytics company, I see it being very valuable as it saves so much guess work.
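For what it's worth, a rough Python sketch of that "snap" idea (all names here are invented): record the stack with a shallow copy of each frame's locals, then query the snapshots later to see where a variable was visible and what it held.

```python
import inspect

_snaps = []

def snap(label=""):
    """Record the current stack, with a shallow copy of each frame's locals."""
    frames = []
    for info in inspect.stack()[1:]:  # skip snap() itself
        frames.append({
            "function": info.function,
            "lineno": info.lineno,
            "locals": dict(info.frame.f_locals),  # shallow copy only
        })
    _snaps.append({"label": label, "frames": frames})

def find(name):
    """Query all snaps for a variable: where was it visible, with what value?"""
    return [(s["label"], f["function"], f["locals"][name])
            for s in _snaps for f in s["frames"] if name in f["locals"]]

def inner(x):
    y = x * 2
    snap("after doubling")
    return y

def outer():
    x = 21
    return inner(x)

outer()
print(find("y"))  # y was only ever visible inside inner, with value 42
```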

I totally agree - I love the ability in Python to print out the stack trace; I use it all the time! I would love it if that also included variables (although this is tricky, because you have to worry about cycles between the variables if you are trying to print everything out. Maybe a list of variables to print out with the stack trace would be enough?)

That’s why I suggested a shallow copy

> 3. Most of the real hard bugs I have to deal with usually involve timing or race conditions. This means that I might not be able to reproduce them in a debugger, and sometimes even the act of attaching a debugger makes the timing different enough to not repro the bug.

Debuggers actually are often really good for that sort of thing. If you can break the code when the race happens and start looking at the stack and execution trace finding what happened is usually trivial. The big deal is by understanding very clearly what happened you know your fix actually fixes the problem.

I'm talking about race and timing conditions between multiple machines. It's a lot easier to just log the various points to find the ordering of things for these kinds of conditions. As long as your clocks are closely sync'd, it's easy to merge the logs together to look and see what multiple machines are doing as a call flow.

Of course understanding the root cause is the key to fixing any issue.

> I'm a bastard. I have absolutely no clue why people can ever think otherwise. Yet they do. People think I'm a nice guy, and the fact is that I'm a scheming, conniving bastard who doesn't care for any hurt feelings or lost hours of work if it just results in what I consider to be a better system. And I'm not just saying that. I'm really not a very nice person. I can say "I don't care" with a straight face, and really mean it.

What's interesting to me is that a man could reach the age of 31 and still be talking like this. It's teenage language.

At the same time he's also so clearly in command of his domain.

I think expertise and power in one domain can really stunt your development in others.

> teenage language.

It's machismo. We don't hear very much of this any more from adult professionals in Western desktop jobs, but it's definitely out there. Even in the tweets of certain high controversy members of government.

Well, not to be too pedantic, but it's more career-focused hypermasculinity. Machismo is an Iberian-originating concept which isn't solely about negative traits. When immature people aren't taught what machismo or other gender-focused value systems are actually about, they end up with just the negative values. But you can have someone who's "macho" or "manly" about their career and also not a dick.

You should try working at a bank. Finance professionals are probably the most immature group of people I have ever worked with.

> Without a debugger, you basically have to go the next step: understand what the program does. Not just that particular line.

On the contrary, I find that a lack of debuggers makes me lazy when performing this step: I’ll go “yeah, these lines of code remove the item from the list”, but then, having no way to check that, I get burned later when it turns out it actually did something subtly different from what I expected. I think he is right, though, that debuggers are not always the right tool for the job; occasionally they cannot produce relevant output (there might be too much to look at, or the part you are looking at is difficult to reproduce), so having these “big picture” skills is quite useful.

For kernel work I am inclined to agree with Linus.

Every kernel problem I have ever faced down was an intermittent problem involving some sort of race condition. A debugger doesn't help you there because:

1. Timings might be changed so you can't (possibly) reproduce the bug in the debugger, or

2. The race condition involves some rare events that might turn up every week on a server but that could take years to find in the debugger, so you can't (practically) reproduce the bug.

In applications work (with say Python or Java), however, I am strongly against debug printf's and any temporary changes to the source code which are motivated by the needs of debugging.

Typically you wind up with the code (somewhere) in a state that has some temporary debug-related changes and an in-progress fix. It might be checked in and pushed, because that is how you get it up on the test server.

These temporary changes have a way of becoming permanent, and then one day somebody tells your company that there is something at the bottom of the home page that shouldn't be there, and you realize that it has been there for two years without anyone noticing.

Yes you can push back against that with code reviews and process, but wouldn't you rather use that spray can of management on real problems instead of avoidable problems?

Thus the debugger is great if it means you can eliminate the use of temporary debug code changes (though not the in-progress fix).

I also like developing unit tests in the debugger in Java and C#. You edit this, you edit that, you can look at data structures at any point in time... It's a lot like using the REPL in Python.

What is being coded makes a big difference in development cycles. Most of what I am currently working on are small services that log everything they do anyway so adding some debugging output to the mix doesn't change much.

Where I have seen debuggers being useful at every point in the stack from kernel to compiler to loader and beyond is with ports. If all you are doing is running some existing well understood code on different hardware then being able to poke at the internals when things go wrong can often help find the cause quickly.

I use print with Python for small scripts; if I'm doing a big app I use the logging module and set the log level to debug when debugging, because that's what log levels are for.

Debuggers are nice for big apps that you don't truly grasp the scope of, I know debuggers have saved my bacon several times when dealing with quirks/bugs of UI frameworks. Just being able to inspect variables and check the call stack is a great way to find out why/when shit hit the fan.

> I do not condone single-stepping through code to find the bug.

Hard to take this post seriously because he offers no clue as to how he actually finds the bug (being "careful" is ludicrously vague). That would be an interesting read.

I can't speak for Linus but I haven't used a debugger since VB6 and most of my debugging process nowadays (web development) is done by print statements.

If you're getting unexpected output without syntax errors then it's due to something along the way being set to the wrong value. Print statements are very helpful for uncovering that.

As soon as you see where things are going wrong, you know exactly what needs to be changed to fix it. With a bunch of print statements you can see the state of the system in multiple spots at once.

Plus with print statements you have the option of keeping them around all the time but tucked behind a DEBUG log level. In development, you can often take the guess work out of things if you're always in a position to see the state of your app at various steps. It can speed up development by preventing bugs / unexpected output before it happens.
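A minimal Python sketch of that DEBUG-level approach (the function and logger names here are invented for illustration):

```python
import logging

# Configure once at startup; flip to DEBUG only while debugging.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")

def apply_discount(price, rate):
    discounted = price * (1 - rate)
    # A permanent "print statement", silent unless the level is DEBUG.
    log.debug("apply_discount: price=%s rate=%s -> %s", price, rate, discounted)
    return discounted
```

Switching `basicConfig(level=logging.DEBUG)` turns every tucked-away message on at once, without touching any call site.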

> With a bunch of print statements you can see the state of the system in multiple spots at once.

With a debugger you can see the state of the system in multiple spots at once, except that you don't need to guess in advance which spots you will care about, and you don't need to recompile / re-run every time you guess wrong.

> Print statements are very helpful for uncovering that.

Yes, I used to do that all the time. It was excruciating, only I didn't realize it.

Then I started actually learning how to use a debugger. Efficiency in uncovering bugs went through the roof.

I don't find it bad at all. I see print / debug statements almost like tests. They are always there. Whereas debugging with a debugger is more ad-hoc.

Plus, debugging is way less useful for web development where you have a bunch of different levels of technology at play. Also, most editors can't even be configured to work nicely with Docker.

You look at the code and think really hard.

And it's important to believe that the bug has really happened. To never even think that the error is impossible.

He also hints about guessing the cause through the higher level instead of going from the lower level with the debugger.

I'm amazed at the amount of worshipping (in the comments) of luddites when it comes to debuggers. Citing Linus, Log4J developers ("use logging instead"), Kernighan and Pike... especially the latter two learned programming on computers that were orders of magnitude less capable than today's, running orders of magnitude less complex programs that were far more readily understandable by humans.

It's always the two polarities: understanding the bigger picture vs focusing on tiny details. For me, the debugger is THE tool for understanding the bigger picture!

Example of a concrete problem that I solved with a debugger today: I was populating a list in the UI, yet the elements were constantly disappearing. Why were they disappearing?

It's a JavaFX app where the UI elements are bound to observable lists. So I put a change listener on the list and a breakpoint inside it. The breakpoint was hit a couple of times, and on one of those hits I saw a piece of my code in a remote location [1] clearing the list! Root cause found, problem solved.

EDIT: could I have logged the stack trace rather than using a breakpoint? Yes, but the stack trace printout is far less actionable (harder to pinpoint something interesting, can't inspect variables, etc.) than an interactive stack trace in the IDE while the program is suspended.

[1] Why remote location? The UI is the frontend that communicates with a backend. The UI is rather strongly separated from the backend bridge; the bridge also runs in separate threads. So the bridge takes care of the back-end, receives messages and interprets them and updates the elements (data model) observable from the UI.
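For what it's worth, the same "who is clearing my list?" hunt can also be done by instrumenting the collection itself. A hypothetical Python sketch (not the commenter's JavaFX setup) that records a stack trace at the mutation site:

```python
import traceback

class WatchedList(list):
    """A list that records who cleared it, for hunting surprise mutations."""
    def clear(self):
        # Capture the call stack at the moment of mutation.
        self.last_clear_stack = traceback.format_stack()
        super().clear()

def remote_location(items):
    items.clear()  # the culprit we're trying to locate

items = WatchedList([1, 2, 3])
remote_location(items)
# items.last_clear_stack now names remote_location and its caller
```

With a breakpoint instead of `format_stack()` you additionally get to inspect variables while suspended, which is the commenter's point about the interactive stack trace being more actionable.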

So my impression is that people who talk down debuggers simply haven't learned how to use them effectively.

Using debuggers effectively takes skill. When I was younger I would ask other people "hey, how does this work?" Now I just debug it and go deep on interesting functions to see how deep the rabbit hole goes. It's helped me pick things up pretty fast.

Admittedly, I use a debugger, but in many situations I choose not to use it when facing a bug. For some errors I want to be able to see what the cause of the error is by looking at the logs. Why? Because attaching a debugger in production is not always possible, so I want it in my logfiles. In development I add more and more info to the trace log until I can fix the bug only by looking at the logs. This has saved me countless times when facing a problem in production.

I think OP is rationalizing the poor state of Linux debuggers.

Making a good GUI is hard, and goes against the terminal-centric Linux culture. A debugger needs IDE integration, and integration again goes against the culture; see the "do one thing well" mantra.

When I program for Linux, I don't like debuggers either. I use gdb sometimes, but it's not too useful.

When I program for Windows, the Visual Studio debugger is awesome.

Reading these old Linus exchanges always has me in two minds. On the one hand, it's great to peer into the mind of a genius and see the thought process, but on the other - and in light of his stepping back after recognising his own highly unprofessional behaviour over the years - I wonder if we should repost and idolise these emails.

Are people - especially young and impressionable developers - able to separate the two sides of the coin, or does it serve to plant a seed of normalisation for this kind of communication?

> Are people - especially young and impressionable developers - able to separate the two sides of the coin

As a young coder, I remember reading these old Linus posts as an exhilarating breath of fresh air. Everyone around me was all-in on "object orientation" and similar bullshit. When I say everyone I mean it: there was not a single person who suggested even a hint of hypothetical disagreement with software engineering mantras. Thus, it was refreshing to read these posts and realize that there existed, after all, another side of the coin.

We should not be afraid that young and impressionable people see "all sides of the coin". We should be afraid that they only get to see one side.

I almost always find myself agreeing with Linus (I don't super agree with his thoughts on debuggers, but I can see the merits), but I think you can be skeptical about technology (OOP, C++, exceptions, debuggers) without declaring to the world you have no empathy for other humans. I'm glad to see Linus taking stock of his behavior and starting to consider how he makes others feel. I think that's really hard for anyone to do, but especially someone who's been as successful as he has been. It's a huge personal leap, and I think we should be proud of him.

I don't really know if 33 is young or old but I really cannot agree more about the mantra thing. And it's not only the OOP craze that completely ate the industry at some point..! NoSQL-related discussions, all UIs and CLIs, the big enterprise influences vs open source approaches, etc.

It's easy to echo who you perceive as smart people; it's harder to have your own opinions, and be at a level where you can defend them in a satisfactory way.

Somehow, the Go language is doing it, eschewing traditional OOP in favor of a get-shit-done-fast-and-a-little-dirty approach. It's not even because of the charm of an individual, either.

Then again, before Go there was JS as the big opponent to rigid OOP. I still don't understand why they felt like they had to add half-baked classes / OOP to it.

I always strongly disliked the OOP school of program design, even though I learned to parrot everything required by one's typical interview process.

Keeping data and functions together can be nice. In some rare cases inheritance is almost OK. But I still don't understand how it all went nuts and they started writing, say, parsers in that style...

But it's always nice to hear that there are other people having similar opinions. Especially if it's people like Linus.

yeah but people hate writing Go. It’s like using the dullest possible knife. It doesn’t have objects, it doesn’t have generics (so you can’t really do Functional Programming) and it doesn’t really let you do C-style machine work either. Its basic assumption is that you’ll have an endless supply of human beings to implement everything, because the computer isn’t going to let you abstract anything

> people hate writing Go

Bit of a blanket statement. I enjoy it much more than the little Node I investigated for work.

The lack of functional programming though, yeah. I feel like it might be due to GC optimisations as well, however. From what I understand, the reason the GC is so fast is because such a style of programming is heavily discouraged.

> I enjoy it much more than the little Node I investigated for work.

I'm another one who prefers putting a bullet in my foot to putting one in my head.

> I still don't understand why they felt like they had to add half-baked classes / OOP to it.

That one I'll never forgive :-)

>and in light of his stepping back after recognising his own highly unprofessional behaviour over the years - I wonder if we should repost and idolise these emails.

I'd say we should. For me his apologizing was more cringe worthy than anything in these emails.

Basically caving in to the American (and, by mimicry, some European) culture of "shame! shame! shame!"-style demands for public apologies.

We need more people standing up to what they see as BS in tech. And if they use strong language doing it, that's in a great tradition too.

>Are people - especially young and impressionable developers - able to separate the two sides of the coin, or does it serve to plant a seed of normalisation for this kind of communication?

Yeah, as if we suffer from too much candor and strong words in today's tech landscape.

There are ways of providing feedback without frequent profanities.

I will admit that I didn’t take the time to read through those articles, but I doubt any of them mention the positive effects of using profanity to deride others…

Because there has never been any positive effect from deriding others?

From social change (e.g. "look at these backwards bigots"), to running a tight crew ("wtf Kowalski, you stupid! You nearly got us all killed"), deriding others can just as well have positive effects.

Then there's the fact that some things naturally and inevitably deserve derision.

Not all ideas are derision proof.

Let's have Theranos or Fyre festival or non-vaccination as an example that we all agree on.

A healthy society should include a healthy dose of derision for derision worthy ideas.

(And if you think my argument above is stupid and deserves derision, then you've just made my point!)

And how and why exactly are those ways of providing feedback better than those with frequent profanities?

They are less likely to offend others and discourage people from contributing to the project.

>Of course, I'd also suggest that whoever was the genius who thought it

>was a good idea to read things ONE FCKING BYTE AT A TIME with system

>calls for each byte should be retroactively aborted. Who the fck does

>idiotic things like that? How did they not die as babies, considering

>that they were likely too stupid to find a tit to suck on?

Surely there are better ways to express a point without going at great length attempting to offend someone else. If you believe this sort of abusive attitude is acceptable, then why shouldn't people provide feedback with their fists as well? It can carry a similar degree of persuasion.

>They are less likely to offend others and discourage people from contributing to the project.

What if the purpose is to discourage the people making those mistakes from contributing to the project?

It also dissuades other people who may not make mistakes from contributing. I’m certainly not getting involved with a community run by someone who mirrors past abuse I’ve experienced, for example.

Then it's an inefficient method to do so, because along with discouraging these particular people, it may also discourage many other productive and competent developers who don't wish to condone this sort of abusive leadership.

Perhaps the intention was to discourage the kind of people that are easily offended from such "abuse".

Unless there's some demonstrated correlation that this subset of people is likely to be less productive contributing to a project like Linux, then this would just be bad leadership on behalf of Linus who'd exclude potential contributors based on personal whims.

You might consider simply emailing back with "please don't make mistakes contributing to the project". That works really well with me and everyone else I know. I don't need to be bullied to be convinced.

Then that’s not a particularly nice thing to do. People naturally make mistakes, and I’m pretty sure that the consensus is that belittling them when they do doesn’t really help.

And there are condiments other than mustard. So what?

Some of them happen to achieve similar effects in a more pleasant way.

Your condiment preferences are subjective, and your campaigning against mustard threatens to make the world a blander place.

Yes, let us hide old posts from public until candidness and frankness (driving forces of innovation and self-correction) is completely obliterated from the corporate world.

And let us also gently place everyone's thumbs into their mouths while they are asleep because it feels cute and cuddly.

Your sarcastic comment is absolute trash.

I just wanted to drive home the fact that we don't accept insults, and respectable discourse is always preferred.

I would not accept shit from people just because they have some authority in some place.

Yet I can't agree more with it.

That felt nice, didn't it?

I really like your nickname.

What's wrong with Linus's communication in this post? Someone asked for his opinion. Not only did he respond with his opinion, he also clearly communicated why he holds that opinion. Sure, he has written some abusive stuff, but that doesn't make everything else abusive. Or can you point out the abusive parts in this post?

Does your comment also apply to this specific e-mail?

I just read it and it seems pretty tame to me. It's direct, but if people get offended by this these days, by gods, we have a problem. Sometimes growing a thicker skin is the answer; one can't put all the burden on the messenger.

I mean, there are some Linus e-mails (and also Erik Naggum posts on Usenet) that I thought were awesome when I was young and impressionable but that I now think are over the line. But this specific example? If even that's not acceptable... phew.

(I also get the impression some commenters here do not realize what "fsck" refers to.)

When you say "genius" you mean his technical abilities, by "unprofessional" you probably mean those moments when he pointed out other people's technical mistakes in a... straightforward manner.

Some people are offended by this other side of his style, yes. TBH, I don't understand why it was such a big deal. He never spoke about non-technical things in that way. Meanwhile, his technical opinions were always clear, sound and to the point.

I would argue against the "straightforward" part. I would call him a blunt rambler. Here's an example post where someone goes through a Linus email and rewrites it for clarity: https://www.destroyallsoftware.com/blog/2018/a-case-study-in...

> This is a much better email. It has 43% as many words, but loses none of the meaning. It's still forceful and unambiguous. With fewer words, it's easier for someone to absorb the core message about unthinking deference to standards.

> It also doesn't berate anyone, building a needlessly antagonistic culture around the project. Writing this email instead of the original email doesn't require any extra work, and will save mileage on Linus' (or your) fingers besides.

Author commentary: https://twitter.com/garybernhardt/status/1009844030656561153...

> If you read apologists commenting about this email, they often completely miss Linus' actual point. He's fine with the change; he doesn't like the justification, which uses an appeal to authority. The very people who apologize for the useless bloat miss this because of the bloat!

> These people missed the signal because it was buried in noise. But most of them go on to argue that the noise is boosting the signal; that ranting is good because it drives the point home. But they didn't even detect the signal because they were so focused on defending the noise!

You didn't think it was a problem, but he did.

It's not like Linus coming to think it was a problem objectively makes it one.

I say make the world more interesting, not less... People are adults and they can decide. To the extent that I agree that society 'needs to change', the most serious problems (abuse) are all things which have happened in private. If someone wants to be boorish in public, everyone is aware and everyone can deal with it.

I don't think anyone else is idolising this kind of toxic behaviour. You're the one calling it 'genius'! I think it's an absolutely terrible way to speak to people and I hope I never have to interact with him to get something done in my job. I don't know how a professional person can be happy with knowing there are people who would avoid them like that.

>I don't know how a professional person can be happy with knowing there are people who would avoid them like that.

Well, they know there are hundreds of millions of others who use their work, thousands who worked with them, and a trillion-dollar industry built on their labor and effort, and that they have spoken their mind, with no BS and none of the fake niceties that are the norm.

I guess they can always take comfort in that.

They must be different to me - I'd feel really bad if I knew the way I spoke to people made them not want to talk to me.

Some don't want to be people pleasers in general. They'd rather burn bridges than censor themselves and be less blunt.

Others don't want certain kinds of people to "want to talk to them" in the first place, so they are non-polite towards their kind on purpose.

And a final similar case is wanting people to talk to you only if they can stand that level of discussion/no BS talk/bluntness/sense of humor/profanity/etc. So that you can be yourself among them, and they can be themselves with you, with no facades.

> I think it's an absolutely terrible way to speak to people

I, for one, pretty much prefer being spoken to that way, instead of a sanitized corporate newspeak that means exactly the same thing but uses "professional" words.

I agree. Corporate speak is the worst combination of ass-covering and abdication of responsibility.

I think people mean well when they talk about empathy and kindness, but those concepts assume that the only reality is how we feel and speak about things, when generally it's the underlying reality that is painful or harsh. And when you have to deliver a product, you can't avoid that reality.

I try to address it by frequently talking about my own fuckups and thus making it easier for junior devs to understand that failing is something we all go through.

But nothing can change the fact that it hurts to invest deeply in something and see it fail. We can't make pain go away, so it's better to just get it over with and learn to adapt to it and be resilient.

I don't think there's anything specifically corporate about it. I wouldn't speak to people like this socially either.

I read quite a lot of Linus emails, and honestly, he was harsh, but he was always constructive.

As a younger developer, what matters to me most is the constructive criticism. I don't want it phrased nicer (nor harsher) than needed; I just want it to be down to earth and constructive.

The Linux kernel was also probably a lot smaller back then. It's easier to reason about things when something goes wrong.

Personally I think that when projects grow, there is a point after which you can't really reason that much about what is going on; you just have to debug it.

Even then it was most likely still one of the largest software projects on the planet.

Actually, when the project is too big you can't debug it; you need to trace it.

Debuggers are to debugging what IDEs are to programming: they make you so much more productive that as a professional programmer, forfeiting that productivity is literally leaving money on the table. Something you can afford to do if the project is a hobby project (which Linux still more or less is to Linus), but when you have actual customers -- no way.

The quality and integration of the debugger alone made Visual Studio head and shoulders above most development tools for other environments.

Side remark: has anyone ever talked to Linus the same way he talked to others, following a bug Linus coded himself? Like calling something he did a piece of trash, or something similar?

Without defending that he does that, that's an unfair summary. When he's been publicly chewing people out on LKML it's not because someone just made an honest mistake, it's usually something like "a subsystem maintainer should know better" resulting in a systemic fuckup, not some isolated mistake or bug.

I've known outstanding guru-level engineers that were that blunt when I was a student.

You need to know the person. In my experience most of them are very nice guys. It's just that when they think something is not that great they will just say "it's shit".

I was told many times my code was shit by these guys at the time and I didn't take it as a personal insult but as constructive feedback. They liked that I took it that way and we ended up good friends.

People should see through the words before being offended.

There's being blunt, and there's being a bastard (to borrow Linus' word).

True story:

I once worked in a place where a new trainee engineer was struggling with a piece of work. There were a few reasons for this that weren't their fault (not a lot of explanation of the requirements). Nonetheless, the work wasn't getting done as fast as the PM wanted.

Now, the PM in this case had moved across to software development from a post-graduate background in teaching, which I only point out to explain that this particular PM considered themselves something of a big deal.

(Having moved around as an independent contractor I'm fairly used to seeing small fish in smaller ponds that think they're the office genius, but this was next-level conceit.)

Anyway, they decide to let this engineer go one morning. Their prerogative I guess, but the PM shares with everyone later on that same day that they were able to steer this person in the right direction on the way out the door by kindly explaining to them that it hadn't worked out because "they didn't have the natural aptitude for engineering and would be better served pursuing a different career path".

Most of my gigs are in a fairly small city with a smaller community of engineers, so I'm used to encountering people a couple of years down the road. Sure enough, I never saw this person around again in any of the meetups or in any other gig, so I assume they left the industry based on this manager's feedback.

So yeah, bastards abound.

Maybe he was right?

I didn't say it was a he.

Nonetheless, that would require this person to be an arbiter of the software industry.

Given that you're on HN, and therefore very likely in the industry yourself, does the idea of someone you've never heard of deciding who does and doesn't belong doing this work sit well with you?

If so, yikes.

And maybe he was wrong. What's your point?

I know this was referring to the parent, but just to add to this:

I wouldn't trust Linus Torvalds, Bill Gates, Dennis Ritchie (may he rest in peace), or whoever else to assess whether or not someone has a place in the software industry - so the idea of a mid-level manager in a nameless tech department doing so is utterly ridiculous to me.

How is saying "it's shit" "constructive feedback"? What is the constructive part?

I'm not offended when developers say that, I just observe that they are unable to build and express a reasoning, and that worries me about their abilities as developers.

> How is saying "it's shit" "constructive feedback"?

It means the contribution isn't up to the standards and therefore can't be integrated, and thus informing the contributor that he needs to improve his work in specific aspects in order to avoid similar problems.

> What is the constructive part?

You're focusing too much on irrelevant aspects of the communication (i.e., which word was used and how) and in the process ignoring (intentionally or not) what was actually said. Typically Linus includes specific comments on the problems present in or caused by a contribution, whether in the code or in the development process. At the very least it's easy to see that this sort of process works through negative feedback.

Their abilities as developers are judged by what they deliver. Obviously they need to deliver before they can call someone else's code "shit", or at least before they can say that and be taken seriously...

> when they think something is not that great they will just say "it's shit"

Linus doesn’t just say “it’s shit” though - he says “you’re shit”. That’s the problem.

Of course this is not something to do in general.

But let's face it, too many people in this industry think that they are geniuses and take it very badly when they are brought back down to Earth.

The guy has to sort through so many contributions that at some point it's not crazy to shut some people out. The effect of Sturgeon's Law on someone a bit rough like Linus...

From the very beginning of his collaborative effort, Linus took the role of a code reviewer/merger. That way, he essentially decided the feature set and direction of the project.

I don't know if he did any development other than committing patches/merging-rebasing branches after the collaboration started, but I'm guessing it was rare.

I know he did the groundwork for Git at least, you can see his own commits in the history of https://github.com/git/git; that was 14 years ago by now though, according to that.

Some tend to use debuggers for development details, which I agree isn't great, as one tends to focus on details rather than overall structure.

But for actual debugging, especially for code you're not familiar with, they can save huge amounts of time. Some more notes on this at:


I kind of agree on the point about thinking of bugs at a different level. I'll first try to reason about what went wrong and look through the code based on the stacktrace.

But the debugger is nice for a certain set of problems. If you have a (Java) lambda statement that is going fubar, it's nice to attach a debugger and just analyze the lambda flow to see how each statement affects the data.

On one of the projects I worked on, we'd often repeat that we had to look for the _root_ causes of mistakes. That is done without a debugger, since it might just be a conceptual error. E.g.: if we had a null pointer, we could just add the simple null check. But it'd be more interesting to reason about whether it should even be possible for that variable to be _null_ anywhere in the code. Then we could rewrite it in a way that it can't ever be null and avoid the check.
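That "rewrite it so it can't ever be null" idea, sketched in Python terms (the record/tags example is invented; the comment itself concerns Java-style null checks):

```python
def get_tags_naive(record):
    # Symptom-level fix lives at every call site: "if tags is not None: ..."
    return record.get("tags")  # may return None

def get_tags(record):
    # Root-cause fix: this function can never return None,
    # so no caller ever needs the null check.
    return record.get("tags") or []

# Safe even when the key is absent:
for tag in get_tags({"id": 1}):
    print(tag)
```

The check isn't scattered across callers anymore; the invalid state simply can't escape the one function.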

Anyway, you probably should use a debugger, but it's just one tool of many.

What would be really interesting to have would be an extensive set of automated tests.

A lot of the code can run in the build environment, supported by dummy components, without needing to boot a VM. A lot would need that VM, and the VM would need to emulate a lot of hardware for the tests to be comprehensive. OTOH, this emulation code (or the config files that set it up) would serve as always-verifiable definitions of how the hardware is supposed to work. And the dummies and emulation could be automatically validated against actual live hardware when it's available.
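A toy illustration of the dummy-component idea (the device and driver here are invented, not from the kernel): code written against an interface can be exercised in the build environment with a fake standing in for the hardware.

```python
class FakeBlockDevice:
    """A dummy component standing in for real hardware in the build env."""
    def __init__(self, sectors):
        self.sectors = sectors

    def read_sector(self, n):
        if not 0 <= n < len(self.sectors):
            raise IOError("sector out of range")
        return self.sectors[n]

def read_superblock(device):
    # "Driver" logic under test; it can't tell fake from real hardware.
    return device.read_sector(0)

# A test that needs neither a VM nor live hardware:
dev = FakeBlockDevice([b"SUPERBLOCK", b"DATA"])
assert read_superblock(dev) == b"SUPERBLOCK"
```

The same `read_sector` interface, implemented by real hardware or by QEMU-style emulation, is what would let the fake be validated against the real thing.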

I've found that thinking and reasoning about units and following TDD is much more important than a debugger.

The debugger generally sucks because by the time you have to use it the whole system isn't functioning properly.

If you can just find out WHICH component broke you can rip out that component and build a test for it or expand upon an existing test.

It usually turns out some input to a function broke the assumptions about its domain and range, and just adding another test for it and then making the tests pass fixes the problem.

The additional benefit here is that not only do you not need to use the debugger, but since it's now a unit test in your code, it can't ever break that way again.
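The loop described above (isolate the component, capture the bad input in a test, fix, keep the test) might look like this; the function and the offending input are invented for illustration:

```python
def normalize_score(raw):
    # The original assumed raw was in [0, 100]; a negative input broke callers.
    if raw < 0:  # the fix, driven by the regression test below
        raw = 0
    return min(raw, 100) / 100

# The input that broke the system, captured as a test forever:
def test_handles_negative_input():
    assert normalize_score(-5) == 0.0

def test_in_range_input():
    assert normalize_score(42) == 0.42

test_handles_negative_input()
test_in_range_input()
```

Once the regression test is checked in, that particular breakage can never silently return.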

A lot of people joke about how they don't write unit tests because they're trying to save time.

In my experience, in any significantly complicated code base something massive will break and knowing exactly WHEN it broke and in what commit can dramatically lower the time needed to fix the problem.

Maybe this applies to kernel development, but for web development with OOP languages like JavaScript and PHP, I find a debugger to be incredibly useful. Our apps have millions of lines of code and sometimes I watch other developers struggle with print statements and other such techniques and never get anywhere, whereas with a debugger I'm able to find and fix the issue within minutes. They are not worse programmers than me, but they sure are inefficient and frankly, using the wrong tool for the job. For functional languages like clojure, the debugger is almost useless as they have a repl based interface and debugging is mainly playing with the repl and isolating the issue. Still, I wouldn't give up my debugger for OOP based languages for anything. I often use it during development just to check that the code is running as intended, before bugs even creep up.

You can tell this is dated, because "People think I'm a nice guy" definitely doesn't reflect how people talk about the man lately.

If you know the symptoms of NPD, this stuff is cringe-worthy...

    I'm a bastard. I have absolutely no clue why people 
    can ever think otherwise. Yet they do. People think I'm 
    a nice guy, and the fact is that I'm a scheming, conniving 
    bastard who doesn't care for ...
On the surface, the message is "Isn't it impressive how dominant I am?" but the unintended message is "I have a personality disorder!"

Do you think Linus Torvalds has an exaggerated sense of importance?

These threads are always a bit disconcerting, as people get so sensitive and personally offended by them. And invariably, despite the astonishing success of Linus' leadership, we'll throw all that aside and posit that a committee of equals who treated each other with love and admiration would have been just as successful.

Or maybe it would be an abject failure, which odds tell us it would have been?

Cheers. I just feel vicariously embarrassed for him (in regard to his writing here, at least). I can see how one might mistakenly read more into what I wrote than that, but that's all I intended to say.

Oh he's aware of it. He just doesn't care, because it works for him and the software he is the end responsible for.

I mean compare Linux to the Java ecosystem, which is similar in scale (billions of installs / dependencies) but run by commission. How is that working out? Less hurt feelings I guess but it's not making any significant advances either and has been stagnant for years.

it works for him and the software he is the end responsible for.

But does it work for the software he’s responsible for, or does the software he’s responsible for work in spite of his behavior?

How is that working out? Less hurt feelings I guess but it's not making any significant advances either and has been stagnant for years.

Commissions that don’t hurt people’s feelings make something stagnant? Could there be other forces at work to explain Java’s stagnation, other than a commission that doesn’t hurt people’s feelings enough?

> Could there be other forces at work to explain Java’s stagnation, other than a [committee] that doesn’t hurt people’s feelings enough?

Not a chance. All of Java's roadmap discussion revolves around dealing with legacy issues, and they even had an entire major release just trying to refactor the language into modules because the core was so bloated.

You can't blame lack of funding, or that it's a niche product or anything like that.

You can't blame lack of talent, because there were a ton of very smart people working on it.

You can't blame novel ideas, because very little in Java was unprecedented.

Java's legacy traces inexorably back to:

a. features that no one could say no to
b. badly implemented features

    he's aware of it.
He comes off like someone who suffered abuse and mockery as a child. Was he aware he sounded like that? I doubt it.

That said, a behavior can be both cringe-worthy, and also productive. I can think of certain comedians, talk-show hosts, etc, who manage both.

Out of curiosity, why do you find it cringe worthy?

Do you think the statement is untrue? What makes you believe that he tries to convey "how dominant he is" instead of just stating a fact? Why would it be a problem that he has a personality disorder[1]?

[1] I believe that Linus himself acknowledged that when the CoC was added to the kernel documentation.

    why do you find it cringe worthy?
It's cringe-worthy the way someone who barks "do you not know who I am?" at a McDonald's employee is cringe-worthy. It's the grandiosity and lack of self-awareness.

    Why would it be a problem that he has 
    a personality disorder?
On reflection, it wasn't constructive for me to sound off about his mental health. It's none of my business. If my original comment were still editable, I'd delete it.

> And quite frankly, I don't care. I don't think kernel development should be "easy".

Ah yes, the tried-and-true arrogant neckbeard stance that so defined unix interfaces for decades: If it was hard to write, it should be hard to understand.

But we've already proven that good UX is a good thing. In fact, UX has been so front and center for so long on HN, I'm rather surprised that people still cling to the old ways.

I find this dismissive.

We are not talking about some throwaway web app. We are talking about the Linux kernel: something that has been used for decades and currently runs on millions of devices every day, from computers that power spacecraft down to mobile phones.

UX certainly has its place, however I think changes and additions to the kernel should be carefully considered. If excluding a debugger excludes some careless programmers then maybe that is a good thing.

It makes a lot of sense to prefer to think about the real problem.

Years ago at school my group wrote a ray tracer from scratch. After a few months we noticed that the vector class had an inverted scalar operation. The program still worked, because by just fixing the code until it worked we had inverted the entire vector space.

After fixing the vector class most of the other formulas in the program started to look more reasonable :)
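
The failure mode in that story, two inversions cancelling so the output still looks right, fits in a few lines (the Vec class and the midpoint example are invented to mirror the anecdote):

```python
class Vec:
    """Toy 3-vector mirroring the anecdote."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def scale_buggy(self, s):
        # The inverted scalar operation: divides where it should multiply.
        return Vec(self.x / s, self.y / s, self.z / s)

    def scale(self, s):
        # The corrected operation.
        return Vec(self.x * s, self.y * s, self.z * s)

def midpoint_buggy_era(a, b):
    # Downstream code "fixed until it worked": passing the reciprocal to
    # compensate for the inverted scale. Output is right, formula is wrong.
    s = Vec(a.x + b.x, a.y + b.y, a.z + b.z)
    return s.scale_buggy(1 / 0.5)

def midpoint(a, b):
    # After fixing the vector class, the formula reads like the math.
    s = Vec(a.x + b.x, a.y + b.y, a.z + b.z)
    return s.scale(0.5)

m1 = midpoint_buggy_era(Vec(0, 0, 0), Vec(2, 4, 6))
m2 = midpoint(Vec(0, 0, 0), Vec(2, 4, 6))
assert (m1.x, m1.y, m1.z) == (m2.x, m2.y, m2.z) == (1.0, 2.0, 3.0)
```

Both versions produce identical pictures, which is exactly why the bug survived for months; only the fixed one leaves formulas that look like the underlying math.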

If someone feels the need to tell you they are an unapologetic asshole, I think that's a small dog with a smaller bark.

Hmm, didn't think this was real when I saw it (but it seems to be very real after checking):

> And quite frankly, I don't care. I don't think kernel development should be "easy".

Previous discussion on HN from 2015:


I'm personally not a fan of casting aside a tool as something never to be used (especially as a matter of principle).

I personally do not use debuggers, but I can see why they might be used, especially to step through someone else's convoluted code.

This is now 18 years old - is it still relevant? Was he right?

He was right at least in this part: «My biggest job is to say "no" to new features, not trying to find them.»

He was right about being careful. About debuggers? Maybe. Attitude-wise? Doubtful.

WGAF? I don't like source code control systems.


I didn't like debuggers much back then either. But they've gotten better.

Imagine being so famous that you are known only by your first name, and what's more, not just for making crummy pop music but for doing something that's changed the world for the better. Thanks Linus.

I, for one, miss the old Linus.

and I sort of miss ICE (in-circuit emulator) machines.

Oh well.

you bastard!!!

>> Yet they do. People think I'm a nice guy, and the fact is that I'm a scheming, conniving bastard who doesn't care for any hurt feelings or lost hours of work if it just results in what I consider to be a better system.

Such honesty. I think that jerks who scheme and hurt individuals for the common good are the best kinds of jerks and the world needs more of them.

Also, his points make perfect sense for Linux Kernel development. It shouldn't be easy to change. Stability is by far the most important feature. Even excellent developers don't cut it for the Kernel; you need complete freaks of nature. It's impossible to even wrap one's mind around the sheer number of systems that depend on the Linux kernel. Kernel development cannot be slow enough. Most of the new code should be thrown away without batting an eyelash. In fact, every line of code merged should be so good that it should deserve an international conference dedicated to it and the author should get a medal.

Heck, every line of code should have a religion built around it; complete with churches, priests, a pope, schools, etc...
