1. Tools. I generally shy away from tools. I just don't like using anything that makes me more productive when I'm programming. I prefer to type out every line of code, occasionally cutting and pasting from the same program or something similar from the past, but not very often. I want the writing of code to be painstakingly slow and deliberate. Why? Because productivity is not the objective. Becoming "one" with my project is. I may not start as fast as others, but that doesn't matter. It's not how fast you start, it's how soon you deliver a quality product. Like memorizing a speech, cranking out the code by hand makes it "firmware" in my brain. It's not unusual for me to crank out 300 lines of code and then be able to reenter them on another machine from memory. So when it comes time to debug, refactor, enhance, or rework, those cycles go very quickly; that code is already in my brain's RAM and it got there the hard way.
2. Simple Algorithms. Yes! I love this example:
* EnterOrders 08/31/10 edw519
3. Debugging. I don't. I've seen 25 different debuggers and I hate them all. Print() is my friend, especially beyond the right-hand margin. Better yet, write code that doesn't need to be debugged (see #1 & #2 above). (Of course, this is much harder with someone else's code.)
4. References. Don't need no stinkin' manual. I have become extremely adept at about 4% of what's available to me, but that 4% accomplishes 98% of what I want to do. (OK, I'll refer to a manual when I need something from the other 96%, but that doesn't happen too often.)
Nice post, Gabriel. Got my juices flowing. We could talk about all kinds of other things too, like variable naming, how to iterate and branch, and never typing the same line of code twice, but this was a nice starting subset.
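Point 3's trick of pushing Print() past the right-hand margin can be sketched like this (a hypothetical Python example; the debug statement is shoved far to the right so it vanishes from view at normal window widths and doesn't clutter the code you're reading):

```python
def order_total(quantities):
    total = 0
    for qty in quantities:
        total += qty;                                                                   print("DBG qty/total:", qty, total)
    return total

print(order_total([3, 4]))  # prints the DBG lines, then 7
```

When the bug is fixed, the debug prints are easy to find and delete precisely because they all live out past the margin.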
The thing is, every single bit of code you write depends on something else that you DIDN'T write. Everything. No exceptions.
And the thing about stuff you didn't write is that you end up making assumptions about it - often without even realising. Sometimes the assumption is simply that the code works. Sometimes, some of your assumptions will be wrong.
Debuggers help you check your assumptions. They're a very useful tool for this - more so than many people who say "I prefer printf" realise.
I apologise if you're not in that category, but did you know that many debuggers have a way to print a message (without stopping) whenever the program hits a certain line? Just like printf, only it doesn't have to be built into your code ahead of time.
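That tracepoint idea can be sketched in pure Python with `sys.settrace` — a toy "logpoint" that prints whenever a line of a chosen function executes, without stopping and without editing the target code (function names here are hypothetical; real debuggers like gdb's `dprintf` do this far more efficiently):

```python
import sys

def make_logpoint(func_name):
    """Print a message every time a line of `func_name` runs --
    like a debugger tracepoint: no stopping, no edits to the target."""
    def tracer(frame, event, arg):
        if frame.f_code.co_name != func_name:
            return None              # don't trace other functions
        if event == "line":
            print(f"logpoint {func_name}:{frame.f_lineno}", dict(frame.f_locals))
        return tracer                # keep tracing inside func_name
    return tracer

def accumulate(n):                   # stands in for code we can't edit
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(make_logpoint("accumulate"))
result = accumulate(3)
sys.settrace(None)
print("result:", result)             # result: 3
```

The point isn't the implementation; it's that the "printf" can live outside the program, which is exactly what a debugger's tracepoint gives you.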
There are times when a debugger isn't the right tool for the job, but it's always better to make an informed choice.
The big advantage of printf as a debug technique is that it doesn't work unless you understand what you're doing. If you plan to debug using printf, you must write code that is easy to understand.
Great debuggers let you try to fix code that you don't understand. Worse yet, they make it reasonable to write such code.
Since the person reading the code is never as smart as the person who wrote it, encouraging simple-to-understand code is extremely important.
Better to use a debugger, if you can. Not that I haven't ever sprinkled prints into the middle of a big mass of somebody else's code or framework just to get the initial lay of the land :-)
Regarding "-g": at a job a few years ago where we were using C for some in-house work, we skipped "-O" for the most part and simply deployed with "-g" left on. Better safe than sorry for single-use software.
> The big advantage of printf as a debug technique is that it doesn't work unless you understand what you're doing.
This isn't true, nor would it be an advantage if it were.
Any fool can stick a printf statement into their code, just like any fool can run it in a debugger. You might get lucky and find the problem, or you might not. Understanding what you're doing will help you out whichever technique you use. Better yet, it will help you decide which technique is more appropriate to the problem.
That's like saying that you might get lucky if you make random changes to program code. Yes, it's true, but ....
> Understanding what you're doing will help you out whichever technique you use. Better yet, it will help you decide which technique is more appropriate to the problem.
Except that I was talking about understanding the program, not "what you're doing".
In my experience, debuggers let me find things, or, more often, think that I'm doing something useful, with less understanding than printf. YMMV.
A long time ago, I used debuggers and printf.
For years, I've instead added tests to check my assumptions -- and I run the tests multiple times an hour.
(I still printf since it works well with the tests if I run a subset of them.)
(Edit: This was for when I write code, not working with others.)
Tests are very useful, but they fulfill a different need to debuggers. In fact the two work well together: stepping through a failing test case with a debugger can be a very effective way to find the root cause of a problem.
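A sketch of that combination (the function and its assumption are hypothetical): the assumption lives in a test, and running the suite with pytest's `--pdb` flag drops you into the debugger at the point of failure, so you can step through the failing case directly:

```python
def parse_qty(field):
    # Assumption being pinned down: quantity fields are integers,
    # possibly padded with whitespace.
    return int(field.strip())

def test_parse_qty():
    assert parse_qty(" 12 ") == 12
    assert parse_qty("0") == 0

test_parse_qty()   # under pytest this runs automatically;
                   # `pytest --pdb` opens pdb if an assert fails
```

The test documents the assumption permanently; the debugger is there for the day the assumption breaks.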
This is pretty hard with your own code too. :-)
It never ceases to amaze me how often I go back to code I've written and think, "Why did I think this would work for all cases and exceptional conditions?". Then again, it's often hard to remember what I knew a year ago... maybe I wasn't even aware of these cases.
There is nothing like simply getting a "Hello World" up and running as a simple sanity check and starting point.
Users get to see something early on.
Developers build a structure to work in.
You have an integration platform.
You have something to demonstrate.
You have a better feel for progress.
As a solo founder I need working code that I can put out of mind while I produce more.
Personally, I'm fond of configurable logging, but I'm not under any illusions; logs are like freeze-dried debug sessions with fewer features. Logs verbose enough to contain the relevant information can also get borderline unwieldy and time-consuming to generate and process. The compiler I work with, when hosted in the IDE, could easily produce over 100GB of log output on a complex project test case, so I usually try to keep it down to 1..2GB or so, but that often leaves out the incriminating evidence, so it takes some back and forth... And the reason I'm using log files at all is often because debugging itself gets unwieldy on large test cases.
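A minimal sketch of configurable logging using Python's standard `logging` module (the logger name and messages are hypothetical) — the verbosity knob is what keeps the "freeze-dried session" down to a manageable size:

```python
import logging

log = logging.getLogger("compiler.frontend")   # one logger per subsystem

def configure(verbose: bool) -> None:
    # The verbosity knob: decided at startup, not baked into the code.
    logging.basicConfig(
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
        level=logging.DEBUG if verbose else logging.WARNING,
    )

configure(verbose=False)
log.debug("token stream: %s", ["if", "(", "x", ")"])   # suppressed at WARNING
log.warning("unterminated comment at line %d", 42)     # always emitted
```

Flipping `verbose=True` re-enables the debug flood only for the runs where you actually need the incriminating evidence.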
We're talking about programming, and that's not programming. That's called piling shit on top of shit.
So, it goes something like this:
1. Insert `import pdb; pdb.set_trace()` into the code where I think there's an issue.
2. `print foo`
3. Hmm, that doesn't seem right... `print baz(foo)`
4. Ah, I needed to change... tabs over to editor
This style is heavily influenced by the fact that I primarily program in Python and Ruby, where one generally learns the language by typing commands into a REPL rather than executing a file. When working on a Python project with a bunch of classmates who were good programmers but used to Java and C++, I found that this approach struck them as utterly unintuitive.
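A hypothetical sketch of the loop in steps 1-4 above (`foo` and `baz` stand in for whatever is under suspicion); the `set_trace()` call is behind a flag so the file also runs straight through:

```python
import pdb

def baz(foo):
    # step 3's helper: whatever transformation looked wrong
    return sum(foo) / len(foo)

def handle(foo, interactive=False):
    if interactive:
        pdb.set_trace()    # step 1: pause here, then `print foo`, `print baz(foo)`
    return baz(foo)

print(handle([2, 4, 9]))   # 5.0
```

Run with `interactive=True` and you get exactly the REPL-style poking described above, in the live context of the program.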
It's much better to use the tools as far as you can, use nice big descriptive names (since you never have to actually type them out, this is win-win) and try to be as consistent as possible. For projects I've been the driver on I never have to look at anything. I know how the project is going to be laid out, what the naming scheme will be like, etc. I can e.g. open up the Visual Studio solution file, Ctrl+n, type in the capital letters of the class I know will be of interest (e.g. WRI for WidgetRepositoryImplementation) and off I go. While you're still birthing out your first class I'll already be tightening up my tests.
And there are people vastly better/faster than I am at this.
But be sensible about this. For me, big code generators are a no-go. I use Spring because it (a) doesn't sprinkle its code anywhere I have to see it and (b) regenerates the code every time, so there is no sync issue between changing a source and regenerating a dependency.
Always refactor when you touch code, including making the code consistent with current coding standards.
I don't hate to say this, but responses like yours are the classic example of hn's biggest problem: people talking when they should be listening.
I suppose your opinion could be accurate if we had a flux capacitor and you were making your prediction in 1979. But it's you versus 1,000,000 lines of code deployed, 1,000 successful projects, and 100 satisfied customers. They outrank you.
FWIW, I made 2 posts yesterday: this grandparent, which included intimate secrets of my success and earned nothing, and this little joke:
which earned 58 points.
Gabriel Weinberg wrote an interesting post which got my juices flowing and led me to share my experience (experience, not opinion). Then people like you tell me I'm wrong or, worse, that what actually happened couldn't have.
Any wonder why outliers with successes to share are reluctant to share them?
I don't care much for the "no true Scotsman" nonsense, but I don't agree that someone should be listening to outdated advice. If you meant it as a story about the glory years, then OK.
> This grandparent which included intimate secrets of my success and earned nothing and this little joke:
The one where you showed you were good at real estate? Are you here to earn imaginary points or to communicate with like-minded people? You need a couple hundred karma to get downvote ability, but after that? Who cares.
>Then people like you tell me I'm wrong
Your methodology is no longer the most effective way to develop software. One can still do it, but one would be artificially limiting what one is capable of.
EDIT: Edited to remove some of the unnecessary barbs. No one likes to be told they are what's wrong with anything but no one wins a flame war.
You're the one who started with unsubstantiated claims about how the OP was a poor programmer because he didn't follow your methodology. If edw has managed to find a method that works for him, why shouldn't he share it? You can take it or leave it. If you feel the need, you can also tell him where you think he is going wrong. But there is no need to start by insulting people or telling them that you can code circles around them. Edw may be the worse coder, but you've come across as the bigger dick.
>you can code circles around them
My point wasn't that I can code circles around him but that modern practice can code circles around him. Which is something he could take up as well.
And if there are too many modifications, a different trick will do. Change the breakpoint to a counted breakpoint with a nice high count, then run until the corruption occurs, then check the pass count on the breakpoint. You can then set the count on the breakpoint to the exact value needed to stop at the corruption, like stepping back in time (i.e. the famous "who modified this variable last" debugger feature).
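In gdb terms, the counted-breakpoint trick looks roughly like this (the variable name, watchpoint number, and hit count N are hypothetical placeholders; `ignore` is gdb's counted-breakpoint command):

```gdb
(gdb) watch buf_len            # watchpoint fires on every write to buf_len
(gdb) ignore 2 1000000         # make it counted: skip the first million hits
(gdb) run                      # program runs until the corruption occurs
(gdb) info breakpoints         # shows "already hit N times" for watchpoint 2
(gdb) ignore 2 N-1             # substitute the real N: stop at the last write
(gdb) run                      # re-run lands on "who modified this" exactly
```

The second run replays the same execution, so the (N-1)th skip puts you on the final write — the poor man's reverse debugging.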
#3 - inspired a "printState() div" 10,000px off-page
#4 - Especially when there's more than one way to do it, relying on my little mind saves huge amounts of productive time.