For crashes: Get backtraces! Knowing the line and the state of the callstack makes many common errors trivial to fix. Some languages (Java, Python) are helpful and generate them automatically. Others (C, C++) require a debugger. I've seen lots of programmers ignore gdb or Visual Studio and spend minutes with printf() looking for the exact line of a crash. Code crashes all the time. Dealing with it methodically and effectively will add minutes - or even hours - to your day.
When students come to me with broken C code, I ask two questions: do you know which line it breaks on, and did you run it with valgrind?
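For contrast with the C workflow, a language like Python hands you the backtrace automatically. A minimal sketch (the buggy `average` function is invented for illustration):

```python
import traceback

def average(values):
    # Deliberate bug: crashes on an empty list.
    return sum(values) / len(values)

try:
    average([])
except ZeroDivisionError:
    # Python reports the file, line, and full call stack for free.
    crash_report = traceback.format_exc()

print(crash_report)
```

The point stands either way: whether the runtime gives you the stack or you have to ask gdb for it, the backtrace is the first piece of evidence to collect.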
Trivial bugs are those where the available evidence points directly to a cause. The scope of these grows with experience, and most bugs that come up in day-to-day coding are of this variety.
Deep bugs are reproducible, but require bisecting the problem to figure out the cause. Performing optimal bisections takes skill, experience, and raw brain capacity. This is where a good, though not virtuoso, developer can distinguish himself from his average colleagues in a major way.
Heisenbugs are the most difficult because you can't reproduce them, whether due to a concurrency problem or some limitation of the environment the bug occurs in. To get reasonable assurance that such a bug is gone, a functional language and pure mathematics may be the best tools, but radical creativity in code instrumentation, or expensive test scenarios with dedicated hardware, may also be necessary.
So, yes, being a productive programmer requires a level of skill where most bugs are trivial. But the latter two flavors exist no matter how smart you are, and being productive over the long term means being able to tackle them systematically, even if it takes days or weeks, and, if all else fails, figuring out a workaround.
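The bisection described for deep bugs is just binary search over a history of states. A hedged sketch, where `is_bad` is a hypothetical stand-in for "run the repro against this revision":

```python
def first_bad(revisions, is_bad):
    # Classic bisection: assumes the history looks like good...good bad...bad.
    # Each probe halves the suspect range, so N revisions need ~log2(N) runs.
    lo, hi = 0, len(revisions) - 1
    assert not is_bad(revisions[lo]) and is_bad(revisions[hi])
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid
        else:
            lo = mid
    return revisions[hi]  # the first revision that exhibits the bug

# Hypothetical history where the bug appeared at revision 13:
revs = list(range(20))
culprit = first_bad(revs, lambda r: r >= 13)
```

The expensive part in real life is the predicate, not the search; the skill the commenter mentions is in choosing a cheap, reliable `is_bad`.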
Observation in particular is quite a hard practice. On several occasions I've realized that the evidence had been staring me in the face from day one, but my eyes were too clouded to see it. Sometimes it helps to know the system inside out, but at other times that very knowledge turns out to be a blinding disadvantage.
Forget all that stuff. Just improve your code until it tells you the problem and its exact cause; you'll save a lot on the "Repeat" step.
IMHO, the hardest bugs are the ones where you have too much data and/or running through the test case takes a very long time. Usually the best way to chip away at these is to speed up the test phase by mocking things out, or just to do a good old stare at the code until the bug shows up. It's a measure-twice, cut-once approach, except with time: you can either run the test again, or you can stare at the code for twice as long and not have to run the test multiple times.
We eventually traced the problem to the metallic tape used to define features in the arena. When the metal roller at the back of the robot touched this after a few minutes of operating it caused a static discharge which stopped the computer!
For example, say you have a method that prints a string, and you "require" that all callers of the method actually provide a string, and that it's not zero-length:

- (void)printAString:(NSString *)aString
{
    NSParameterAssert([aString length] > 0);
    NSLog(@"Printing a string: %@", aString);
}
Asserts are also great to affirm that a method returns only "kosher" results:
- (NSData *)loadDataFromDatabase:(NSString *)databasePath
{
    NSData *someData = [aDataBaseClassInstance loadData:databasePath];
    NSAssert1(someData != nil, @"The |databasePath| is invalid: %@", databasePath);
    return someData;
}
While you have to be careful about where you use NSAssert()/NSParameterAssert() (in Obj-C methods) and NSCAssert()/NSCParameterAssert() (in C functions), they can make it easier to stay out of debugger land.
More in terms of tools: if you are tracking down a memory leak, get a memory profiler, don't guess blindly at what is leaking. What can be done in an hour of tracing rooted objects with windbg takes five minutes with a profiler. Likewise with performance profiling.
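The same principle extends beyond windbg: many runtimes ship a scriptable memory profiler. A minimal sketch using Python's built-in tracemalloc module (the "leak" here is simulated):

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# Simulated leak: something keeps a reference to a pile of buffers.
retained = [bytes(1000) for _ in range(1000)]

after = tracemalloc.take_snapshot()
stats = after.compare_to(before, "lineno")
top = stats[0]  # biggest growth, attributed to a file and line number
print(top)
```

Instead of guessing, the profiler names the exact line responsible for the growth, which is the "five minutes instead of an hour" the commenter describes.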
This I guess is less applicable to the web dev/machine learning domains that Gabriel operates in.
There were an awful lot of tasks I tried to achieve via pure coding, rewriting and rewiring blocks here and there, when all they actually needed was some analysis on paper.
When I sit at my desk coding, I tend to think in detailed, code-specific (C/C++ for me) ways. My thinking is constrained by how I think the code might look.
When I turn off the screens and grab a pen and pad, I think in abstract, mathematical ways. Once I have a mathematical solution, it's easy to turn that into a practical implementation.
1. Tools. I generally shy away from tools. I just don't like using anything that makes me more productive when I'm programming. I prefer to type out every line of code, occasionally cutting and pasting from the same program or something similar from the past, but not very often. I want the writing of code to be painstakingly slow and deliberate. Why? Because productivity is not the objective. Becoming "one" with my project is. I may not start as fast as others, but that doesn't matter. It's not how fast you start, it's how soon you deliver a quality product. Like memorizing a speech, cranking out the code by hand makes it "firmware" in my brain. It's not unusual for me to crank out 300 lines of code and then be able to reenter them on another machine from memory. So when it comes time to debug, refactor, enhance, or rework, those cycles go very quickly; that code is already in my brain's RAM and it got there the hard way.
2. Simple Algorithms. Yes! I love this example:
* EnterOrders 08/31/10 edw519
3. Debugging. I don't. I've seen 25 different debuggers and I hate them all. Print() is my friend, especially beyond the right hand margin. Better yet, write code that doesn't need to be debugged (See #1 & #2 above.) (Of course, this is much harder with someone else's code.)
4. References. Don't need no stinkin' manual. I have become extremely adept at about 4% of what's available to me, but that 4% accomplishes 98% of what I want to do. (OK, I'll refer to a manual when I need something from the other 96%, but that doesn't happen too often.)
Nice post, Gabriel. Got my juices flowing. We could talk about all kinds of other things too, like variable naming, how to iterate and branch, and never typing the same line of code twice, but this was a nice starting subset.
The thing is, every single bit of code you write depends on something else that you DIDN'T write. Everything. No exceptions.
And the thing about stuff you didn't write is that you end up making assumptions about it - often without even realising. Sometimes the assumption is simply that the code works. Sometimes, some of your assumptions will be wrong.
Debuggers help you check your assumptions. They're a very useful tool for this - more so than many people who say "I prefer printf" realise.
I apologise if you're not in that category, but did you know that many debuggers have a way to print a message (without stopping) whenever the program hits a certain line? Just like printf, only it doesn't have to be built in to your code ahead of time.
There are times when a debugger isn't the right tool for the job, but it's always better to make an informed choice.
The big advantage of printf as a debug technique is that it doesn't work unless you understand what you're doing. If you plan to debug using printf, you must write code that is easy to understand.
Great debuggers let you try to fix code that you don't understand. Worse yet, they make it reasonable to write such code.
Since the person reading the code is never as smart as the person who wrote the code, encouraging simple to understand code is extremely important.
Better to use a debugger, if you can. Not that I haven't ever sprinkled print's into the middle of a big mass of somebody else's code or framework just to get the initial lay of the land :-)
Regarding "-g", at the job a few years ago where we were using C for some in-house jobs, we skipped using "-O" for the most part and simply deployed with "-g" left on. Better safe than sorry for single-use software.
> The big advantage of printf as a debug technique is that it doesn't work unless you understand what you're doing.
This isn't true, nor would it be an advantage if it were.
Any fool can stick a printf statement into their code, just like any fool can run it in a debugger. You might get lucky and find the problem, or you might not. Understanding what you're doing will help you out whichever technique you use. Better yet, it will help you decide which technique is more appropriate to the problem.
That's like saying that you might get lucky if you make random changes to program code. Yes, it's true, but ....
> Understanding what you're doing will help you out whichever technique you use. Better yet, it will help you decide which technique is more appropriate to the problem.
Except that I was talking about understanding the program, not "what you're doing".
In my experience, debuggers let me find things, or, more often, think that I'm doing something useful, with less understanding than printf. YMMV.
A long time ago, I used debuggers and printf.
For years, I've instead added tests to check my assumptions -- and I run them multiple times an hour.
(I still use printf, since it works well with the tests when I run a subset of them.)
(Edit: This was for when I write code, not working with others.)
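A hedged sketch of that habit in Python: each assumption becomes a tiny assertion you can re-run constantly (`parse_price` and its rules are invented for illustration):

```python
def parse_price(text):
    # Assumptions this code makes about its input, written down as checks:
    # a string like "$12.50", dollars only, no thousands separators.
    assert text.startswith("$"), f"expected a leading $: {text!r}"
    value = float(text[1:])
    assert value >= 0, f"prices are non-negative: {text!r}"
    return value

# The tests that get re-run multiple times an hour:
assert parse_price("$12.50") == 12.5
assert parse_price("$0") == 0.0
```

When an assumption stops holding, the failing assertion points at it directly, which is exactly the role a printf or a debugger session would otherwise play.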
Tests are very useful, but they fulfill a different need to debuggers. In fact the two work well together: stepping through a failing test case with a debugger can be a very effective way to find the root cause of a problem.
This is pretty hard with your own code too. :-)
It never ceases to amaze me how often I go back to code I've written and think, "Why did I think this would work for all cases and exceptional conditions?". Then again, it's often hard to remember what I knew a year ago... maybe I wasn't even aware of these cases.
There is nothing like simply getting a "Hello World" up and running as a simple sanity check and starting point.
Users get to see something early on.
Developers build a structure to work in.
You have an integration platform.
You have something to demonstrate.
You have a better feel for progress.
As a solo founder I need working code that I can put out of mind while I produce more.
Personally, I'm fond of configurable logging, but I'm not under any illusions; logs are like freeze-dried debug sessions with fewer features. Logs verbose enough to contain the relevant information can also get borderline unwieldy and time-consuming to generate and process. The compiler I work with, when hosted in the IDE, could easily produce over 100GB of log output on a complex project test case, so I usually try to keep it down to 1-2GB or so, but that often leaves out the incriminating evidence, so it takes some back and forth... And the reason I'm using log files at all is often that debugging itself gets unwieldy on large test cases.
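A minimal sketch of what "configurable logging" can mean, using Python's standard logging module (the logger name and messages are invented):

```python
import io
import logging

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))

log = logging.getLogger("compiler.frontend")  # hypothetical subsystem name
log.addHandler(handler)
log.setLevel(logging.INFO)  # dial this up to DEBUG when hunting a bug

log.debug("token stream: %s", ["if", "(", "x"])  # suppressed at INFO
log.info("parsed 3 functions")                   # emitted

print(buf.getvalue())
```

The verbosity knob is the whole point: the 100GB-vs-2GB tradeoff above is just a question of which level each subsystem runs at.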
We're talking about programming, and that's not programming. That's called piling shit on top of shit.
So, it goes something like this:
1. Insert `import pdb; pdb.set_trace()` into the code where I think there's an issue.
2. `print foo`
3. Hmm, that doesn't seem right... `print baz(foo)`
4. Ah, I needed to change... tabs over to editor
This style is heavily influenced by the fact that I primarily program in Python and Ruby, where one generally learns the language by typing commands into a REPL rather than executing a file. When working on a Python project with a bunch of classmates who were good programmers, but used to Java and C++, I found that they considered this approach utterly unintuitive.
It's much better to use the tools as far as you can, use nice big descriptive names (since you never have to actually type them out, this is win-win) and try to be as consistent as possible. For projects I've been the driver on I never have to look at anything. I know how the project is going to be laid out, what the naming scheme will be like, etc. I can e.g. open up the Visual Studio solution file, Ctrl+n, type in the capital letters of the class I know will be of interest (e.g. WRI for WidgetRepositoryImplementation) and off I go. While you're still birthing out your first class I'll already be tightening up my tests.
And there are people vastly better/faster than I am at this.
But be sensible about this. For me, big code generators are a no-go. I use Spring because it (a) doesn't sprinkle its code anywhere I have to see and (b) regenerates the code every time, so there is no sync issue between changing a source and regenerating a dependency.
Always refactor when you touch code, including making the code consistent with current coding standards.
I don't hate to say this, but responses like yours are the classic example of hn's biggest problem: people talking when they should be listening.
I suppose your opinion could be accurate if we had a flux capacitor and you were making your prediction in 1979. But it's you versus 1,000,000 lines of code deployed, 1,000 successful projects, and 100 satisfied customers. They outrank you.
FWIW, I made two posts yesterday: this grandparent, which included intimate secrets of my success and earned nothing, and this little joke:
which earned 58 points.
Gabriel Weinberg wrote an interesting post which got my juices flowing and led me to share my experience (experience, not opinion). Then people like you tell me I'm wrong, or worse, that what actually happened couldn't have.
Any wonder why outliers with successes to share are reluctant to share them?
I don't care much for the "no true Scotsman" nonsense, but I don't agree that someone should be listening to outdated advice. If you meant it as a story about the glory years, then OK.
> This grandparent which included intimate secrets of my success and earned nothing and this little joke:
The one where you showed you were good at real estate? Are you here to earn imaginary points or to communicate with like minded people? You need a couple hundred karma to get downvote ability but after that? Who cares.
>Then people like you tell me I'm wrong
Your methodology is no longer the most effective way to develop software. One can still do it, but one would be artificially limiting what they're capable of.
EDIT: Edited to remove some of the unnecessary barbs. No one likes to be told they are what's wrong with anything but no one wins a flame war.
You're the one who started with unsubstantiated claims about how the OP was a poor programmer because he didn't follow your methodology. If edw has managed to find a method that works for him, why shouldn't he share it? You can take it or leave it. If you feel the need, you can also tell him where you think he is going wrong. But there is no need to start by insulting people or telling them that you can code circles around them. Edw may be the worse coder, but you've come across as the bigger dick.
>you can code circles around them
My point wasn't that I can code circles around him but that modern practice can code circles around him. Which is something he could take up as well.
And if there are too many modifications, a different trick will do. Change the breakpoint to a counted breakpoint with a nice high count, run until the corruption occurs, then check the pass count on the breakpoint. You can then set the count to the exact value needed to stop at the corruption, like stepping back in time (i.e. the famous "who modified this variable last" debugger feature).
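The counted-breakpoint trick can be sketched outside any particular debugger. A hedged Python emulation of "count the passes, then stop at exactly that pass" (the `Watched` class is invented for illustration):

```python
class Watched:
    """Emulated counted watchpoint: first run counts every write; second
    run sets stop_at to the observed pass count, and the setter raises,
    stopping execution at exactly the write you care about."""

    def __init__(self, stop_at=None):
        self.hits = 0
        self.stop_at = stop_at
        self._value = None

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, v):
        self.hits += 1
        if self.hits == self.stop_at:
            raise RuntimeError(f"stopping at write #{self.hits}: {v!r}")
        self._value = v

# First run: just count how many writes happen before things go wrong.
w = Watched()
for i in range(100):
    w.value = i
print(w.hits)
```

A second run with `Watched(stop_at=N)` then halts on exactly the Nth write, which is the "stepping back in time" effect described above.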
#3 - inspired a "printState() div" 10,000px off-page
#4 - Especially when there's more than one way to do it, relying on my little mind saves huge amounts of productive time.
I think this is a great reason to take a nap in the middle of the day if your environment is conducive to it. It works for me as long as the code is the top thing on my mind when I fall asleep.
Other debugging helpfuls:
- Rubber duck:
- or just look at your code in a different editor/syntax highlighter. Pull up ubuntu in VMware fusion ;-}
For some reason any attempt to implement a Y-combinator for recursion was crashing and for a couple of days I had no idea what could be wrong - then sitting on a bus on the way to visit my sister I was looking at a cinema from the bus and the thought came into my head "it's your aggressive re-use of applicative nodes".
Sure enough next day I removed this attempt at an optimization and everything worked!
I've had a few experiences like that, and I now know that if I hit a really tricky bug, the best strategy isn't to sit there all night looking at it but to go and get a decent night's sleep -- there is a very good chance you will know the answer by the next morning.
Next morning I looked at the note and it was exactly right.
It's great when I can't solve a problem in the evening, get a good night's sleep, and then am able to quickly solve the problem in the morning. Sometimes, though, I actually have dreams about sitting at my desk writing the code that I couldn't come up with during the day. On the one hand, it's great to wake up in the morning with the solution to my coding problem. On the other hand, I'm not sure it's healthy to have my dreams filled with the same thing I do all day while awake. Whenever that happens I definitely make some extra time for my hobbies that don't involve computers.
There's a decent chance that story is actually lifted wholesale from a similar one told about Dali, except it was a spoon instead of ball bearings. I suppose it's the best you can do after melting your alarm clock.
To prevent himself from crossing all the way over the "genius gap" into deep sleep, he would nap with his hand propped up on his elbow while he clutched a handful of ball-bearings. Then he would just drift off to sleep, knowing that his subconscious mind would take up the challenge of his problem and provide a solution. As soon as he went into too deep a sleep, his hand would drop and the ball-bearings would spill noisily on the floor, waking him up again. He'd then write down whatever was in his mind.
Taken from: http://www.wilywalnut.com/Thomas-Edison-Power-Napping.html
If there was one thing I could add, it would be: take notes. If you can explain to someone else the "whys" and "hows" of your solution to a problem, you've got it down pat. Sometimes we get something to work by googling a quick solution, but we forget to try to understand that solution. This doesn't serve us well. Taking notes helps me to sublimate what I learn on the job into hard-referenceable tomes of knowledge.
That said, writing about programming on your public blog is scary because you (usually) know that whatever topic you write on there are people who know more about that topic. You just have to get over that and learn to embrace the benefits of the conversations that ensue. Or write to a private audience.
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. -Brian Kernighan
I can add some of my experience (programming since I was 8 years old?); it works within my personal framework:
- Hybrid languages/technologies: for some solutions I need to use jython to develop in python + java libraries that you won't find in other languages (e.g. htmlunit).
- I always have an interactive console open to test stuff while using an editor. Sometimes the language doesn't have a good interactive interpreter; my preference is python (if using .net, then ironpython; if using java, then jython; etc).
- Since I do a lot of research for customers, research and proof of concept come first on the list, to reduce the real risk in the project.
- When I need optimization/speed, C/C++ is the language, but I try to glue it with python, ruby, COM, etc (SWIG is your friend).
- Sometimes debugging is the best option; sometimes it's focusing on the code and trying to find the bugs in my head.
- So I agree with Gabriel.
Maybe this is it?
A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity. -- The Tao of Programming
rjbond3rd: "I'm mangling the quote but what I've heard is: 'Tackle a difficult problem by redefining it as a series of solved problems.'"
What's the simplest thing that could possibly work? http://c2.com/xp/DoTheSimplestThingThatCouldPossiblyWork.htm...