
BIG disclaimer: I have NO formal training.

1. Tools. I generally shy away from tools. I just don't like using anything that makes me more productive when I'm programming. I prefer to type out every line of code, occasionally cutting and pasting from the same program or something similar from the past, but not very often. I want the writing of code to be painstakingly slow and deliberate. Why? Because productivity is not the objective. Becoming "one" with my project is. I may not start as fast as others, but that doesn't matter. It's not how fast you start, it's how soon you deliver a quality product.

Like memorizing a speech, cranking out the code by hand makes it "firmware" in my brain. It's not unusual for me to crank out 300 lines of code and then be able to reenter them on another machine from memory. So when it comes time to debug, refactor, enhance, or rework, those cycles go very quickly; that code is already in my brain's RAM and it got there the hard way.

2. Simple Algorithms. Yes! I love this example:

  * EnterOrders 08/31/10 edw519
  *
  return();
I now have a working program - Woo hoo! You say you want more features? No problem. Let's start enhancing it.

3. Debugging. I don't. I've seen 25 different debuggers and I hate them all. Print() is my friend, especially pushed out beyond the right-hand margin (sketched after #4 below). Better yet, write code that doesn't need to be debugged (see #1 & #2 above). (Of course, this is much harder with someone else's code.)

4. References. Don't need no stinkin' manual. I have become extremely adept at about 4% of what's available to me, but that 4% accomplishes 98% of what I want to do. (OK, I'll refer to a manual when I need something from the other 96%, but that doesn't happen too often.)
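
To illustrate the right-hand-margin trick from #3, a sketch in C (names made up; any language with a print works the same way):

  #include <stdio.h>

  int total_cost(const int *qty, const int *price, int n) {
      int total = 0;
      for (int i = 0; i < n; i++) {
          total += qty[i] * price[i];                                                    printf("DBG i=%d total=%d\n", i, total);
      }
      return total;
  }

A normal reading of the code never sees the debug lines, and a search for anything past column 80 finds every one of them when it's time to strip them.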

Nice post, Gabriel. Got my juices flowing. We could talk about all kinds of other things too, like variable naming, how to iterate and branch, and never typing the same line of code twice, but this was a nice starting subset.




I'm amazed and a bit saddened whenever I hear sentiments like this. These days more than ever.

The thing is, every single bit of code you write depends on something else that you DIDN'T write. Everything. No exceptions.

And the thing about stuff you didn't write is that you end up making assumptions about it - often without even realising. Sometimes the assumption is simply that the code works. Sometimes, some of your assumptions will be wrong.

Debuggers help you check your assumptions. They're a very useful tool for this - more so than many people who say "I prefer printf" realise.

I apologise if you're not in that category, but did you know that many debuggers have a way to print a message (without stopping) whenever the program hits a certain line? Just like printf, only it doesn't have to be built into your code ahead of time.
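
For example, gdb calls these dprintf points. Something like this (the file, line, and variable names are made up):

  (gdb) dprintf orders.c:142,"id=%d total=%d\n", order_id, total
  (gdb) run

Every pass over line 142 prints the message; nothing stops, and nothing had to be compiled in.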

There are times when a debugger isn't the right tool for the job, but it's always better to make an informed choice.


> I'm amazed and a bit saddened whenever I hear sentiments like this. These days more than ever.

The big advantage of printf as a debug technique is that it doesn't work unless you understand what you're doing. If you plan to debug using printf, you must write code that is easy to understand.

Great debuggers let you try to fix code that you don't understand. Worse yet, they make it reasonable to write such code.

Since the person reading the code is never as smart as the person who wrote it, encouraging simple-to-understand code is extremely important.


A big disadvantage of printf() in C (or similar non-memory-managed languages) is that printing values can disturb the "noise" on the stack that you were trying to find. That is, if you had a bug due to an uninitialized local variable or buffer overrun, the printf() call could "stabilize" things and cause a bug to stop manifesting.
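
A contrived sketch of the effect in C (what the uninitialized variable inherits is undefined, which is exactly why the bug comes and goes):

  #include <stdio.h>

  int buggy(void) {
      int total;                 /* bug: never initialized */
      for (int i = 1; i <= 10; i++)
          total += i;
      return total;              /* stale stack contents + 55 */
  }

  int main(void) {
      /* Uncommenting this printf changes what the stack holds where
         buggy()'s frame will land -- often something benign that
         makes the symptom vanish while you're hunting it. */
      /* printf("entering buggy\n"); */
      printf("%d\n", buggy());
      return 0;
  }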

Better to use a debugger, if you can. Not that I haven't ever sprinkled print's into the middle of a big mass of somebody else's code or framework just to get the initial lay of the land :-)


Uh, the -g compiler flag to enable debugging will create much more disturbance than a few printfs ever could, and debuggers are mostly useless for tracking down race conditions, as they change the timing completely. OTOH, I'd use a low-overhead logging package instead of printfs.
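
The low-overhead part is the point when timing matters. A sketch of the usual trick (single-threaded only as written; a real package uses per-thread buffers or atomics):

  /* Trace into a ring buffer in memory and dump it after the fact,
     so each event costs a couple of stores instead of a blocking
     write() that reshuffles the race. */
  enum { RING = 1024 };
  static struct { unsigned seq; int val; } ring[RING];
  static unsigned head;

  static inline void trace(int val) {
      ring[head % RING].seq = head;
      ring[head % RING].val = val;
      head++;
  }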


Good one: yes, race conditions are not going to manifest while you manually step through, slowly. You of course need to plan for, and read carefully, any code that will be vulnerable to them.

Regarding "-g": at a job a few years ago where we were using C for some in-house jobs, we skipped "-O" for the most part and simply deployed with "-g" left on. Better safe than sorry for single-use software.


I disagree with almost every sentence in your post, but there's one bit in particular I wanted to focus on:

> The big advantage of printf as a debug technique is that it doesn't work unless you understand what you're doing.

This isn't true, nor would it be an advantage if it were.

Any fool can stick a printf statement into their code, just like any fool can run it in a debugger. You might get lucky and find the problem, or you might not. Understanding what you're doing will help you out whichever technique you use. Better yet, it will help you decide which technique is more appropriate to the problem.


> Any fool can stick a printf statement into their code, just like any fool can run it in a debugger. You might get lucky and find the problem

That's like saying that you might get lucky if you make random changes to program code. Yes, it's true, but ....

> Understanding what you're doing will help you out whichever technique you use. Better yet, it will help you decide which technique is more appropriate to the problem.

Except that I was talking about understanding the program, not "what you're doing".

In my experience, debuggers let me find things, or, more often, think that I'm doing something useful, with less understanding than printf. YMMV.


>>Debuggers help you check your assumptions. They're a very useful tool for this - more so than many people who say "I prefer printf" realise.

A long time ago, I used debuggers and printf.

For years, I've instead added tests to check my assumptions -- and I run the tests multiple times an hour.
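
The kind of thing I mean, sketched in C (parse_qty is a made-up function under test):

  #include <assert.h>

  int parse_qty(const char *s);    /* the code under test */

  /* Each test pins down one assumption I'd otherwise keep
     re-checking by hand in a debugger. */
  static void test_parse_qty(void) {
      assert(parse_qty("3")  == 3);
      assert(parse_qty("")   == 0);    /* empty input: assumed, now checked */
      assert(parse_qty("-1") == -1);
  }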

(I still printf since it works well with the tests if I run a subset of them.)

(Edit: This is for code I write myself, not for working with others' code.)


If you're saying tests are an alternative to using a debugger, then I disagree.

Tests are very useful, but they fulfill a different need from debuggers. In fact the two work well together: stepping through a failing test case with a debugger can be a very effective way to find the root cause of a problem.


"Better yet, write code that doesn't need to be debugged (See #1 & #2 above.) (Of course, this is much harder with someone else's code.)"

This is pretty hard with your own code too. :-)

It never ceases to amaze me how often I go back to code I've written and think, "Why did I think this would work for all cases and exceptional conditions?". Then again, it's often hard to remember what I knew a year ago... maybe I wasn't even aware of these cases.


"2. Simple Algorithms. Yes! I love this example:

  * EnterOrders 08/31/10 edw519
  *
  return();
I now have a working program - Woo hoo! You say you want more features? No problem. Let's start enhancing it."

There is nothing like getting a "Hello World" up and running as a simple sanity check and starting point.


I like the game programming story about the black triangle to illustrate this point: http://rampantgames.com/blog/2004/10/black-triangle.html


These are called Tracer Bullets in the book "The Pragmatic Programmer" by Andrew Hunt and Dave Thomas. The advantages they list:

Users get to see something early on.

Developers build a structure to work in.

You have an integration platform.

You have something to demonstrate.

You have a better feel for progress.

As a solo founder I need working code that I can put out of mind while I produce more.


I've always heard "tracer bullet" as a request to a production system that selectively gets much more detailed logging or diagnostic output than usual.


Which is to say you haven't read The Pragmatic Programmer? I highly, highly recommend it. I bought my first copy about 8 years ago, and pull it off the shelf once a year or so to freshen up. I learn something new every time I do.


I know this as top-down programming.


I share your aversion to debuggers. Everyone is different, and plenty of people like them - but I've always thought that they consume brain cycles that otherwise could be engaging with the code. That's why I like print(); it keeps my head in the code.


When you're debugging code you didn't write, and you don't have time to read and understand that half a million lines because you need to ship in one week, and you have 2 hours to fix the bug, the debugger can come in handy.

Personally, I'm fond of configurable logging, but I'm not under any illusions; logs are like freeze-dried debug sessions with fewer features. Logs verbose enough to contain the relevant information can also get borderline unwieldy and time-consuming to generate and process. The compiler I work with, when hosted in the IDE, could easily produce over 100GB of log output on a complex project test case, so I usually try to keep it down to 1..2GB or so, but that often leaves out the incriminating evidence, so it takes some back and forth... And the reason I'm using log files at all is often because debugging itself gets unwieldy on large test cases.


> When you're debugging code you didn't write, and you don't have time to read and understand that half a million lines because you need to ship in one week, and you have 2 hours to fix the bug, the debugger can come in handy.

We're talking about programming, and that's not programming. That's called piling shit on top of shit.


No, it's what happens when you have code that's been maintained for decades by dozens of programmers, most of whom have since moved on to pastures new. Not all of the code has someone on the team who has full insight into it. Sometimes that code breaks because of changes elsewhere. Yes, it would be nice to fully understand the code before fixing the problem; but that's not always a luxury you have.


If you are clever enough to avoid situations where that is necessary, then it might be said that you are skilled in the career choices of a programmer. However, it does not say anything about your skill as a programmer per se.


I would argue that piling shit on top of shit would qualify as part of the job description for half of the corporate world, programmers and non.


Getting paid to move code around in a text editor != programming.


I use debuggers as ways of inserting print statements into the code, without knowing what exactly I want to print before running it.

So, it goes something like this:

1. Insert `import pdb; pdb.set_trace()` into the code where I think there's an issue.

2. `print foo`

3. Hmm, that doesn't seem right... `print baz(foo)`

4. Ah, I needed to change... *tabs over to editor*

This style is heavily influenced by the fact that I primarily program in Python and Ruby, where one generally learns the language by typing commands into a REPL rather than executing a file. When working on a Python project with a bunch of classmates who were good programmers, but used to Java and C++, I found they considered this approach utterly unintuitive.


I hate to say this, but you're never going to be very good (unless you change this). Someone good will have so many projects that they could never remember every detail of each of them, so every second you've spent burning this into your brain will be wasted.

It's much better to use the tools as far as you can [1], use nice big descriptive names (since you never have to actually type them out, this is win-win) and try to be as consistent as possible [2]. For projects I've been the driver on, I never have to look at anything. I know how the project is going to be laid out, what the naming scheme will be like, etc. I can e.g. open up the Visual Studio solution file, Ctrl+n, type in the capital letters of the class I know will be of interest (e.g. WRI for WidgetRepositoryImplementation) and off I go. While you're still birthing out your first class I'll already be tightening up my tests.

And there are people vastly better/faster than I am at this.

[1] But be sensible about this. For me, big code generators are a no-go. I use Spring because it (a) doesn't sprinkle its code anywhere I have to see and (b) regenerates the code every time, so there is no sync issue between changing a source and regenerating a dependency.

[2] Always refactor when you touch code, including making the code consistent with current coding standards.


> I hate to say this, but you're never going to be very good (unless you change this).

I don't hate to say this, but responses like yours are the classic example of hn's biggest problem: people talking when they should be listening.

I suppose your opinion could be accurate if we had a flux capacitor and you were making your prediction in 1979. But it's you versus 1,000,000 lines of code deployed, 1,000 successful projects, and 100 satisfied customers. They outrank you.

FWIW, I made 2 posts yesterday: this grandparent, which included intimate secrets of my success and earned nothing, and this little joke:

http://news.ycombinator.com/item?id=1649922

which earned 58 points.

Gabriel Weinberg wrote an interesting post which got my juices flowing and led me to share my experience (experience, not opinion). Then people like you tell me I'm wrong or, worse, that what actually happened couldn't have.

Any wonder why outliers with successes to share are reluctant to share them?


>people talking when they should be listening.

I don't care much for the "no true Scotsman" nonsense, but I don't agree that someone should be listening to outdated advice. If you meant it as a story about the glory years, then OK.

> This grandparent which included intimate secrets of my success and earned nothing and this little joke:

The one where you showed you were good at real estate? Are you here to earn imaginary points or to communicate with like-minded people? You need a couple hundred karma to get downvote ability, but after that? Who cares.

>Then people like you tell me I'm wrong

Your methodology is no longer the most effective way to develop software. One can still use it, but doing so artificially limits what you're capable of.

EDIT: Edited to remove some of the unnecessary barbs. No one likes to be told they are what's wrong with anything but no one wins a flame war.


"This isn't a dick size contest is it?"

You're the one who started with unsubstantiated claims about how the OP was a poor programmer because he didn't follow your methodology. If edw has managed to find a method that works for him, why shouldn't he share it? You can take it or leave it. If you feel the need, you can also tell him where you think he is going wrong. But there is no need to start by insulting people or telling them that you can code circles around them. Edw may be the worse coder, but you've come across as the bigger dick.


Programming today isn't like programming in the 70's. We're dealing with vastly more complexity now. Edw's old-school methodology may have served him well in his day, but a new person taking this on will never be able to keep up, for the same reason some guy with a mule and plow can't keep up with a modern farm.

>you can code circles around them

My point wasn't that I can code circles around him but that modern practice can code circles around him. Which is something he could take up as well.


Finding a buffer overrun is much easier with a debugger; if you have a deterministic test case, all you need to do is find the address of the corrupted memory (any point after you notice it's gone bad will do), then restart, set a hardware breakpoint on that address, and continue: usually it takes no more than a modification or two before you're looking at the culprit.

And if there are too many modifications, a different trick will do. Change the breakpoint to a counted breakpoint with a nice high count, then run until the corruption occurs, then check the pass count on the breakpoint. You can then set the count on the breakpoint to the exact value needed to stop at the corruption, like stepping back in time (i.e. the famous "who modified this variable last" debugger feature).
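
In gdb terms, the two tricks look roughly like this (the address and counts are made up):

  (gdb) watch *(int *)0x6055a0     # watchpoint on the corrupted word
  (gdb) run                        # stops at each write to that address
  (gdb) ignore 1 100000            # too many stops? skip past all of them
  (gdb) run                        # let the corruption play out
  (gdb) info breakpoints           # reports, say, "already hit 42 times"
  (gdb) ignore 1 41                # next run stops right at the fatal write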


#1,2 - http://news.ycombinator.com/item?id=1649922

#3 - inspired a "printState() div" 10,000px off-page

#4 - Especially when there's more than one way to do it, relying on my little mind saves huge amounts of productive time.



