But debugging is about "trying out random things". You can call it a Monte-Carlo tree search if you want to sound smart.
And I don't feel it is something worth teaching in universities, because it is 90% experience, and for me the point of universities is not to replace experience, just to give students enough that they are not completely clueless at their first job; the rest will come naturally.
What universities can teach you are the tools you can use for debugging (debuggers, logging, static and dynamic analyzers, etc.), the different classes of bugs (memory errors, the heap, the stack, injection, race conditions, etc.), and testing (branch and line coverage, mocking, fuzzing, etc.). How to apply the techniques is down to experience.
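To make the testing side concrete, here is a minimal sketch of the kind of fuzz harness such a course could cover, written against libFuzzer's entry point; parse_header is a hypothetical stand-in for whatever code you actually want to exercise:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical function under test: copies a length-prefixed field. */
    static int parse_header(const uint8_t *data, size_t size) {
        char field[16];
        if (size < 1) return -1;
        size_t len = data[0];
        if (len > size - 1) return -1;       /* bounds the fuzzer will hammer on */
        if (len >= sizeof(field)) return -1;
        memcpy(field, data + 1, len);
        field[len] = '\0';
        return 0;
    }

    /* libFuzzer calls this repeatedly with mutated inputs.
       Build with: clang -fsanitize=fuzzer,address harness.c */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_header(data, size);
        return 0;
    }

The harness is the whole lesson: feed the parser arbitrary bytes and let the sanitizer tell you when a bounds check is missing.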
In fact, what I find most disappointing is not junior programmers struggling with debugging; that is normal: you need experience to debug efficiently and juniors don't have enough yet. The problem is when seniors are missing entire classes of tools and techniques, as in, they don't even know they exist.
The "Monte-Carlo tree search" space is usually far too large for this to work well!
It is true that initially you may not know where the bug is, but you have to collect more evidence if possible: see if you can reliably cause the bug to manifest with some test, and explore it further, the goal being to form a hypothesis as to what the cause may be. Then you test the hypothesis, and if the test fails you form another hypothesis. If the test succeeds, you refine the hypothesis until you find what is going wrong.
Such hypotheses are not formed randomly. You learn more about what the problem may be by varying external conditions, reading the code, single stepping, setting breakpoints and examining program state, adding printfs, and so on. You can also use any help the compiler gives you, or techniques like binary search through commits to narrow down the amount of code you have to explore. The goal is to form a mental model of the program fragment around where the bug might be, so that you can reason about how things are going wrong.
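As a small illustration (with made-up names like find_user, assumed only for this sketch), a hypothesis can be encoded as a targeted check rather than a change to the logic: you log exactly the state the hypothesis talks about, and nothing else.

    #include <stdio.h>

    /* Hypothetical lookup for this sketch: returns NULL for an empty
       name, which is the behaviour we suspect causes the crash. */
    static const char *find_user(const char *name) {
        return name[0] ? "some-user-record" : NULL;
    }

    int main(void) {
        const char *inputs[] = { "alice", "", "bob" };

        for (size_t i = 0; i < 3; i++) {
            const char *u = find_user(inputs[i]);

            /* Hypothesis: the downstream crash happens because
               find_user() returns NULL for empty names. If this never
               fires, the hypothesis is wrong; form another one. */
            if (u == NULL)
                fprintf(stderr, "hypothesis hit: name=\"%s\" -> NULL\n",
                        inputs[i]);
        }
        return 0;
    }

If the check never triggers while the symptom still occurs, that is evidence against the hypothesis, which is just as valuable.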
Another thing to note is that you make the smallest possible change to test a hypothesis, at least if the bug is timing or concurrency related. Some changes may alter the timing enough that the bug hides. If the symptom disappears, it doesn't mean you solved the problem: you must understand why, and whether the symptom merely disappeared or the bug actually got fixed. In one case, as I fixed secondary bugs, the system stayed up longer and longer. But these are like accessories to the real murderer. You have to stay on the trail until you nail the real killer!
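A minimal sketch of the kind of timing-sensitive bug meant here, assuming POSIX threads (build with cc -pthread): two threads do unsynchronized increments and lose updates, and the classic trap is that adding a printf inside the loop perturbs the interleaving enough that the symptom fades while the race remains.

    #include <pthread.h>
    #include <stdio.h>

    #define ITERS 1000000

    static long counter = 0;   /* shared, intentionally unprotected */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < ITERS; i++) {
            counter++;         /* data race: load-add-store is not atomic */
            /* Adding a printf here tends to mask the race by slowing and
               serializing the threads -- the symptom fades, the bug stays. */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Expected 2000000; usually prints less because of lost updates. */
        printf("counter = %ld\n", counter);
        return 0;
    }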
Another way of looking at this: a crime has been committed, and since you don't know who the culprit is or where you may find evidence, you disturb the crime scene as little as possible, and if you do have to move something, you put it back afterwards.
But this is not usually what happens. People change things around without clear thinking -- change some code just because they don't like it or think it can be improved or simplified -- and the symptom disappears and they declare success. Or they form a hypothesis, assume it is right, proceed to make the "fix", and if that doesn't work, they make another similar leap of logic. Or they fix a secondary issue, not the root cause, so the same bug will manifest again in a different place.
I suspect that GP was talking about some notetaking tactics to systematically narrow things down while throwing educated guesses against the wall. Because so much of debugging is running in circles and trying the same thing again. No amount of notetaking can completely remove that, since mistakes in the observation are just as much an error candidate as the code you observe, but I'm convinced that some "almost formalized" routine could help a lot.
Good points on the tool side. While "debugger driven development" is rightfully considered an anti-pattern, the tool-shaming that sometimes emerges from that consideration is a huge mistake.
I worked with programmers around my junior year and some of them were in classes I was in. I thought they were all playing one-upsmanship when I heard how little time they were spending on homework. 90 minutes, sometimes an hour.
I was a lot faster than my roommate, and after I turned in my homework I’d help him debug (not solve) his. Then I was helping other people. They really did not get debugging. Definitely felt like a missing class. But it helped me out with mentoring later on. When giving people the answer can get you expelled, you have to get pretty good at asking leading questions.
Then I got a real job, and within a semester I was down below 2 hours. We just needed more practice, and lots of it.
This is why internships and real-world experience are so important. A course is typically 3 in-class hours a week over 12-14 weeks. After homework and assignments, it is ultimately maybe 40-80 hours of content.
Which means you learn more in one month at a normal, 40-hour-workweek job than you do in an entire semester of one course.
Not all hours are created equal. This is on the verge of saying “I took 1,000 breaths on my run, so if I do that again, it’s like going for a run.” Just because you’re measuring something, it doesn’t mean that you’re measuring the right thing. You’re just cargo-culting the “formal education is useless” meme.
Were you the sort of person who responsibly worked a little bit on the assignments over the course of the week/two weeks, or did you carve out an evening to try to get the whole thing done in one or two sittings?
My group did the latter. I think based on what we know now about interruptions, we were likely getting more done per minute than the responsible kids.
Including reading, we might have been doing 15 hours a week sustained, across 2-3 core classes.
But these were the sort of people who got their homework done so they could go back to the ACM office to work on their computer game, or work out how to squeeze a program we all wanted to use into our meager disk space quota.
Anything more than a B was chasing academia over practical knowledge. B- to C+ was optimal.
I believe that software-related college degrees are mainly there to get the horrible first few tens of thousands of lines of code out of people before they go into industry.
What do you mean by people trying random things? I think that approach (if I understand the term correctly) is more or less what debugging is: a form of scientific investigation.
If you observe a car mechanic trying to find the problem with a car, he would go like: "is this pin faulty? No. Is the combustion engine faulty? No. Are the pedals faulty? Yes." The mechanic starts with some assumptions and disproves them by testing each one until (hopefully) he finds the cause of the fault and is able to fix it. Similar types of investigation are important to how natural science is done.
So it would be helpful if you can clarify your intended meaning a bit more. Maybe I or someone else would learn from it.
Trying random things seems to be how a large number of professional software engineers do their jobs. Stack Overflow and now CodeGPT seem to contribute to this.
I'm not sure if software engineering classes in particular do, but at my university they teach C++ in the second required course, and they teach you about using GDB and Valgrind on Linux there. They don't explicitly teach systematic debugging, though, beyond knowing how to use those two programs.
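For anyone curious what that lab exercise tends to look like, here is a minimal off-by-one heap write of the sort Valgrind flags as an invalid write and GDB lets you step through; the file name and build line are just examples.

    /* offbyone.c -- a classic heap overflow for the tools to catch.
       Build:  cc -g -o offbyone offbyone.c
       Then:   valgrind ./offbyone      (memcheck reports the invalid write)
       or:     gdb ./offbyone           (break main, run, next, print i, ...) */
    #include <stdlib.h>
    #include <stdio.h>

    int main(void) {
        int *a = malloc(10 * sizeof *a);
        if (!a) return 1;
        for (int i = 0; i <= 10; i++)   /* off-by-one: writes a[10] */
            a[i] = i;
        printf("%d\n", a[9]);
        free(a);
        return 0;
    }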