Rob Pike: The Best Programming Advice I Ever Got. (informit.com)
427 points by chanux 1533 days ago | 137 comments

This reminds me of an episode in "Surely You're Joking, Mr. Feynman!" where he fixes a noisy radio just by trying to understand how the problem could happen, and concludes that the most likely cause is that the amplifier's vacuum tubes heat up first and generate a lot of noise before the rest of the circuit is ready. He swaps the tubes, the problem is gone, and the owner of the radio (who was very sceptical at first) goes around telling people about this twelve-year-old boy: "He fixes radios by thinking!"

The power of thought is strange. I happen to have this ability as well. That's mainly why I'm still employed.

A couple of years ago we had a four-man team. Three of those guys left, leaving me (the junior dev) to deal with the whole platform.

I didn't have any margin for error, but I didn't know a darn thing. I did, however, have the ability of the "hunch". Basically, when a problem happened, instead of opening the text editor and looking for the problem, I just stood back and thought about it. I'm not sure what I'm thinking about. It's strange. It's like my brain just traverses the infrastructure and comes up with the LOGICAL solution. Once I get that answer, I then go look at the code base. 99% of the time, it's my hunch that finds the problem.

I amaze myself all the time with this. To me it's just logic, but I'm sure it's something more. I'll probably never figure it out, but that's not to my detriment.

This also helps with writing new applications, because you already know what will work and what won't before you get there. It's craziness!

Me too. I used to really get frustrated when others couldn't see what I thought were obvious answers when looking at a situation. Now I've come to sincerely believe there's something strange in me that allows me to see these things, because when I've tried to use the Socratic method to get others to see what I see in various situations, I've realized there are a lot of people who actually can't. Some people can get it via experience, but I've been able to do the same thing in areas or situations where I have zero experience (and now I'm talking about life, not just technical stuff).

I don't want to get on a high horse and say I have superpowers or something like that. But for some reason I'm able to do something that others can't, even though I think it's just simple logic. It's sometimes a scary thing, because when I'm wrong, I have a difficult time figuring out why I'm wrong if someone else isn't able to do the same thing and "see" ahead.

Sometimes it seems to me that "seeing" is a better word than "thinking" here, because for the simpler things, it's not like I try to use brain cycles on the issues at hand.

And I hate saying this stuff, because it makes me sound arrogant when I know that I'm really incompetent at so many things.

It's sad to say, but the majority of people cannot follow a logical progression of more than two or three steps. It's not even an issue of "smart" vs. "dumb", as many of my quite clever coworkers have this issue. It seems to me to be predicated on the ability to keep a lot of state in your head. If in order to reach an answer you need to synthesize the results of two logical conclusions, and you can't keep the first result in your head while you work on the second, you're going to get lost.

> It seems to me to be predicated on the ability to keep a lot of state in your head.

There's some research and discussion among educators and neuroscientists about how working memory may be more important than IQ. IANAE but my anecdotal experience also supports this.


I think a lot of engineers get to this point through different methods.

For me, when I was a new engineer (no CS background), I'd never spend too much time on a bug before I got frustrated and would head over to a more senior engineer's cube for advice. Usually during the walk over, while I tried to properly form the exact question to ask, I'd figure out the answer. Dan (my go-to guy for this) got used to seeing me head his way and then just say hi; he'd ask "Figured it out?" and I'd say "Yep" and head back to my desk.

> I amaze myself all the time with this. To me it's just logic, but I'm sure it's something more. I'll probably never figure it out, but that's not to my detriment.

I vaguely recall a story that my grandfather used to be very good at doing math in his head, and actually lost this skill when he tried to understand his own thought process well enough to explain it to others...

Thank you for re-re-re-reminding me to read that.

link for anyone else who's been meaning to read this: http://amzn.com/0393316041

I recommend: http://www.amazon.com/Classic-Feynman-Adventures-Curious-Cha...

It has both "Surely You're Joking, Mr. Feynman!" and "What Do You Care What Other People Think?" in a nice hardcover.

Does the order of the two terminal conditions matter? / Think about it.

Does the order of the two terminal conditions matter? / Try it out!

Does the order of the two previous answers matter? / Yes. Think first, then try.

- The Little Schemer
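(In Go terms, and this is just my own sketch rather than anything from the book, the point is roughly this: a recursive function over a list asks two terminal questions, and asking them in the wrong order touches the first element of an empty list.)

    // member reports whether x occurs in s. The two terminal conditions are
    // "is s empty?" and "is the first element x?". The empty check must come
    // first: flip the order and member(x, nil) reads s[0] and panics.
    func member(x int, s []int) bool {
        if len(s) == 0 {
            return false
        }
        if s[0] == x {
            return true
        }
        return member(x, s[1:])
    }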

One of the best programmers I have known was a CS theory guy.

We were working on this tightly coupled multiprocessor machine, which was quite unstable and would hard-lock if something went wrong in the program (requiring a walk over to the machine room, hitting the reset button, etc.). We would start hacking at our assignments quickly and make innumerable trips to the machine room.

This guy, on the other hand, would just sit and stare into space for a while; then jot down the entire program on paper. Then he would enter it into the editor, fix a couple of typos that the compiler caught, and run it. His program always worked on the first or second attempt.

The funny part? He hated to program, and was a theoretician.

The best advice I had (from a theoretician) was that I should not try to randomly fix my code but I should understand why it does not work. How to do that?

Can you prove (in the mathematical sense) that it always does what you want?

Giving a full proof that takes language semantics into account is very difficult. I don't think it's a practical way to verify programmes, but maybe it will be in the future.

The advice isn't to mathematically prove anything, which as you mentioned is hard and may even be unsolvable.

The advice is to look at the issue as you would a mathematical problem, critically and analytically. Don't use shotgun debugging or quick fixes, but instead try to understand the code thoroughly, what its goal is and how it accomplishes it. Instead of testing with random data, mentally walk through all the possibilities and branches.

This is akin to making a mental model of what you are trying to do, and then verifying that the model is correct, and that your code matches the model.

I don't think the advice is to do a proof, but rather to design and think about it in such a way that you could, given enough time.

One of my CS theory profs had the policy, "A proof is anything that convinces me you could write a proof."

That's why it's very hard to juggle multiple projects at the same time. Every time you switch projects, you have to flush out the mental model of the old project and load in the mental model of the new one.

Same thing with interruptions: it takes a while to rebuild the mental model after being interrupted.

Ah, yes. I used to have a strong mental model of our security infrastructure and could troubleshoot problems just by hearing the description of the issues and a couple of variables. I would literally see visual gaps where my knowledge of the system stopped if I was drawn there. I would then fill in those gaps.

Now as a developer I haven't worked on a model long enough (my excuse) to be able to do that. But now I realize I already have a process for identifying gaps that I don't use.

I work with a guy who can do this and recently he has amazed me with how he can do it. Contrary to this though, sometimes I think he works in outdated models that limit his creativity.

I am beginning to realize this at my new job. Due to a couple of people leaving and stuff like that, I got handed a reportedly half-written code base with no test cases, and a knowledge-transfer document mainly written by the poor guy who had gotten the code before me and left the project within three weeks. In any case, a couple of months into the project I got handed a different requirement. Guess what: ten months after I took the job, the second tool is done and in testing while I am still working on the first one. But the main problem so far has been the lack of a clear visualisation of the requirements beforehand. Documenting a project's specifications (functional or technical) seems to be unheard of here. I swear taking the time to gather all the requirements up front would have halved the time I spent developing, demoing, getting feedback, correcting/fixing suggestions, etc. Sigh.

A professor of mine who worked at Bell Labs once made the same point. "In the old days we had to think a lot about how our punch card program worked because we'd only find out if it worked the next day. Nowadays you guys just throw crap at the wall and see what sticks. Find the middle ground."

Same with the former French department store programmer who taught us Pascal in high school - punch cards have shaped that generation. That mindset still exists in industries with physical processes, but "measure twice, cut once" is wisdom lost to the desktop generation - experimenting can replace some thinking... But we certainly overestimate how much.

I agree there's been an overall shift in styles, but instant-feedback aspects of programming also have a pretty long pedigree, in the form of Lisp's REPL.

That's one of the less obvious (to me, at least) benefits of test-driven development: when you're writing out your unit test, you're forced to think about how the implementation is going to work.
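For example, something as small as this (a made-up sketch using Go's standard testing package; Slugify and its rules are invented here): just writing the table of cases forces decisions about spaces, punctuation, and empty input before the implementation exists.

    // slug_test.go: hypothetical example. Slugify does not exist yet;
    // writing this test first is what forces the design decisions.
    package slug

    import "testing"

    func TestSlugify(t *testing.T) {
        cases := []struct{ in, want string }{
            {"Hello World", "hello-world"}, // decision: spaces become dashes
            {"Rob Pike!", "rob-pike"},      // decision: punctuation is dropped
            {"", ""},                       // decision: empty input stays empty
        }
        for _, c := range cases {
            if got := Slugify(c.in); got != c.want {
                t.Errorf("Slugify(%q) = %q, want %q", c.in, got, c.want)
            }
        }
    }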

I had the opposite reaction: I wonder if tests make it easier for you to fix code without forcing you to develop a mental model of it, assuming you're working in an unfamiliar codebase. That seems like something of a hidden drawback.

That may be possible, but it isn't inevitable. I use tests to validate that my mental model is correct. When I'm doing something greenfield, you'll see my tests are full of rather stupid-looking assertions of really basic stuff, and the reason for that is that about 5% of the time, my really basic, so-simple-it-couldn't-be-wrong stuff is wrong.

If you're building something that's going to be used as a foundation by lots of other things, those 5% errors add up really fast.

It does help in some sense, but not always. I found that if I write out test cases as if I am preparing a test scenario document for someone, in plain English, it works. If I have to open vim and write test cases, I seem to go into hack mode and write out the most trivial cases, causing painfully slow development. Test document + thinking/visualization works better for me.

I've found this as well. Just blindly writing test cases doesn't work so well unless you've already understood the higher level operation of what you're trying to build, and obviously does tend to slow down development.

I thought thinking in code was how every programmer worked. So people usually write code like they write words?

Well, it's not uncommon for code to be blurted out without first thinking about the implications - to similarly embarrassing effect.

I don't think in code. My mental model is the process, not the series of functions and objects that make up the process. When the process breaks, I'll dig around to see what code makes up that step of the process.

How do you find the middle ground? My hypothesis is that one can write the tests first and use them as a compass. If debugging proceeds monotonically, you have thought enough. If you fix the bug revealed by test r, but later when you fix the bug revealed by test s, test r starts failing again, that is a clue that you didn't think enough.

What the clue means will depend on the order of the tests. If the tests were written in order of increasing code coverage it is probably a clue that the algorithm needs more thought, but it could be a clue that one hasn't thought enough about test coverage and has one's tests in an unhelpful order.

I wonder if building these mental models has become more difficult with the rise of multi-layered/full-featured programming frameworks. I'm thinking of things like Spring or Rails. A given webapp might involve a dozen layers or so: a database language (SQL), a database wrapper (Hibernate or ActiveRecord), a server side language, templating languages, client side languages (javascript), CSS, HTML, etc... etc...

Understanding all of that seems overwhelming. IOW, being a "full stack" generalist is getting harder, imho.

> I wonder if building these mental models has become more difficult with the rise of multi-layered/full-featured programming frameworks.

I very much suspect this is one of the reasons that motivated Ken and Rob to create Go.

And see also Ken's comments about C++ at the recent Turing award winners celebration. ( http://amturing.acm.org/acm_tcc_webcasts.cfm )

> IOW, being a "full stack" generalist is getting harder, imho

But more valuable. Coding with people who started with Rails, I find they may not know that there are 1000x differences between things that seem equivalent to them. In ye olden dayes it was hard to be off by that much because the problems would surface sooner.

I know people who make incredibly good money cleaning up after teams that lack a single full-stack person.

I think the size and complexity of the program is the reason why we need to think first before we look at the code. I'll probably look at the obvious indicators (logs, stack traces, etc.) and then the "thinking" can begin. Obviously, with a huge program there is a higher probability of a local fix happening, because there are so many moving parts.

One of the things it has led me to do is assume that everything else is correct and I must be doing something wrong. We do that with compilers, interpreters, the OS, hardware, etc., too.

It's usually a decent starting assumption, but it can lead to wasted hours when the problem really is outside of what you've done.

Yes - there was once a bug that took me three weeks to track down, and it ended up being in the compiler.

I think this might be an advantage of pair programming that isn't generally talked about.

The time to find a bug with a debugger has a much tighter distribution than the time to find one by thinking about the code. There are some problems that you just can't get to the bottom of by thinking.

By having one person take each path, you get the advantages of both. It might seem that a single programmer should be able to achieve the same by switching between the two approaches, but the cost of context switching is so great that it will be more like starting again each time. So it's really hard to know when to stop thinking about it and to get the debugger out.

The advice that once you find the bug you should work out what the problem really is definitely holds solid, though.

I'd say beware the opposite as well.

If you're ‘just a’ developer in a larger company, you may rarely get to actually fix these high-level problems you discover. You kind of have to live with it, which is frustrating.

And if you're working alone, too much ‘high-level’ thinking may well slow down getting things done. If you don't have someone around with a more pragmatic attitude, be sure to have a bit of it yourself. =)

I don't understand the lesson to be about "high-level problems", but about not seeing the forest for the trees. A debugger can give an invaluably direct view of what is happening, but it can also be a very narrow view.

But the debugger is not the problem - in fact, when you don't know the code perfectly, a debugger can be very useful in enabling you to form a sufficiently complete model of the code in the first place.

It only becomes a problem if, at that point, you keep looking at details rather than stopping to think about what you've learned.

That's why most people 'here' are probably not 'just a dev' at a larger company. There is a lot that's frustrating about that position.

I agree with your second remark; you need pragmatism, but yes, as I get older I notice that I solve bugs in my head instead of debugging in the usual fashion. I strive to type as little as possible and to do that you need to do a lot of head work; 20 years ago, I was the opposite.

Can't agree more. In fact, at my current company working alone has led me to slow down on getting stuff done. My solution? Write obsessively, i.e. to the extent of opening up 750words.com and putting down whatever thoughts interrupt work, even if it is at 30-minute intervals.

This reminds me of two things:

Toyota's '5 Whys' root cause analysis: don't just fix the manifestation of the problem, but keep asking why it happened until you get to the real cause.

The NASA space shuttle programmers: when they find a bug, they go to great lengths to find how and where in their process of programming it was able to happen, sometimes finding other bugs before they surface.

As he says, it's indeed a matter of style/preference.

I've personally spent 10 years in academia deeply understanding many things that more often than not were good for nothing. Now I'm OK with just getting things to work quickly and not looking back.

PS: I'm a really fast typist.

This is a really relevant article. These days there are legions of programmers who don't think about underlying issues and just throw clever hacks in to get things working and move on. Worse yet, some of these types end up in managerial roles where they expect others to work as quickly and sloppily as they used to.

It's worth noting that the "throw in clever hacks to get things working and move on" approach has proved to be a fairly decent business model for smaller projects or smaller companies. Creates ugly code, sure...

Agreed. But there is a difference between knowingly doing something sloppy to get things done in the short term versus working like that on a regular basis and thinking it's just how programs are written.

At what point in a neophyte programmer's life should he/she switch from the "immediate & non-stop coding" Khan Academy approach recently discussed here on HN to this Ken Thompson "take a moment and think first" approach? Isn't there the danger that they might not be able or motivated to make the switch?

It's important to see that these aren't mutually exclusive, even though they sound quite contrary. On the one hand you have the hack -- the one-off prototype thrown together simply to verify what can or can't be done. On the other hand you have the design -- once you know what perspective is best, to build the system from that perspective and validate that your intuitions are building one coherent whole.

Kay's Law is that the right perspective is worth 80 points of IQ: that is, if you find the right way to look at a problem, you can do things with the ease of someone with a 180 IQ who didn't have the advantage of that perspective. How do you get a good perspective? Well, there are a couple different ways. Some are pre-made perspectives: for example, dynamic programming: "let's just build up all the solutions in order from n=0, reduce everything to some other problem we've already built up." Some are intermediate, like wishful thinking: just imagine that magically, you have functions which you don't have, so that you write an algorithm which is correct, but references a bunch of functions which don't yet exist.
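A tiny sketch of the wishful-thinking style in Go (the problem and the helper names are invented for illustration): write the top-level algorithm as if the helpers already existed, then go write them.

    // misspelledWords is "correct" by wishful thinking: it leans on words()
    // and inDictionary(), neither of which has been written yet.
    func misspelledWords(doc string) []string {
        var bad []string
        for _, w := range words(doc) { // pretend this exists: split doc into words
            if !inDictionary(w) {      // pretend this exists: dictionary lookup
                bad = append(bad, w)
            }
        }
        return bad
    }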

Some approaches to new perspectives are more general: hacking, for example, is when you try a bunch of things which just barely work, to find one which does. Another is duck solving: explain your problem out loud to an inanimate object (like a stuffed or rubber duck), and half the time you'll accidentally create a perspective purely for explaining the problem which is useful for solving it. These are very generally phrased because "problems" are a very general topic for intellectual discourse.

Use debugging to form a mental model.

When I'm trying to understand foreign code in less than enough time, I instrument it (almost always at inputs/outputs) and try to treat functions as black boxes. In an ideal world the black boxes would be working.
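Something like this, roughly (a Go sketch; priceFor stands in for whatever black box is under suspicion): log what goes in and what comes out, and only open the box if the in/out pairs look wrong.

    // priceForTraced wraps the suspect function and records its inputs and
    // outputs, treating the body as a black box. Uses the standard log package.
    func priceForTraced(sku string, qty int) (float64, error) {
        p, err := priceFor(sku, qty) // the black box under suspicion
        log.Printf("priceFor(%q, %d) = %v, %v", sku, qty, p, err)
        return p, err
    }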

Data flows are as important as algorithms. See this thing from Guy Steele.


In particular:

Fred Brooks, in Chapter 9 of The Mythical Man-Month, said this:

"Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious."

That was in 1975.

Eric Raymond, in The Cathedral and the Bazaar, paraphrased Brooks' remark into more modern language:

"Show me your code and conceal your data structures, and I shall continue to be mystified. Show me your data structures, and I won't usually need your code; it'll be obvious."

That was in 1997, and Raymond was discussing a project coded in C, a procedural language. But for an object-oriented language, I think this aphorism should be reversed, with a twist:

"Show me your interfaces, the contracts for your methods, and I won't usually need your field declarations and class hierarchy; they'll be irrelevant."

I think, however, that practitioners of both procedural and object-oriented languages can agree on Raymond's related point:

"Smart data structures and dumb code works a lot better than the other way around."

"Smart data structures and dumb code works a lot better than the other way around."

An interesting assertion. In your link Guy Steele observes the duality between objects (where it's easy to add new data types but harder to add new operations that work on all of them) and abstract data types (where it's easy to add new operations but harder to add new data types).

Guy says that the former tradeoff is almost always the right one, but I've encountered many situations where the latter was much more convenient. It's far preferable, IMO, to be able to choose which tradeoff you make based upon the constraints of your particular problem.
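A small Go illustration of the two sides (the types and operations are invented for the example): the interface side makes adding a new shape cheap but a new operation touches every type; the type-switch side is the reverse.

    package shapes

    import "math"

    // "Object" style: a new shape is one new type with an Area method;
    // a new operation means editing every existing shape.
    type Shape interface{ Area() float64 }

    type Circle struct{ R float64 }

    func (c Circle) Area() float64 { return math.Pi * c.R * c.R }

    // "ADT" style: a new operation is one new function with a type switch;
    // a new shape means editing every existing switch.
    func Perimeter(s Shape) float64 {
        switch v := s.(type) {
        case Circle:
            return 2 * math.Pi * v.R
        default:
            panic("Perimeter: unknown shape")
        }
    }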

This usually is not so much a language-level problem as a cultural problem; many programmers are infatuated with OO (I know I was at one time) and unaware of the tradeoffs OO makes or when it's appropriate to use another approach. Hopefully over time multiparadigm languages like Python will help make "objects vs ADTs" more of an engineering question and less of a religious one.

As a recent neophyte, I will use my anecdotal data of size 1 to hazard an answer. I believe that once the syntax is relatively well understood, a programmer should start mapping (mentally or on paper) the steps they need to successfully complete a program.

You are only so smart. Once the complexity of the programming model reaches a certain point a debugger is necessary to validate and discover the true nature of a system. Often that point is quite low.

This might be an argument for not going overboard with complexity in the first place, which is probably a view Pike and Thompson would subscribe to.

> Often that point is quite low.

I've found that liberal logging and careful error management have almost replaced the debugger entirely. In the last three months, I've only fired up the debugger twice, and that was when I was working with untyped memory and peeking at memory through the debugger was the most efficient way to get things done.
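Roughly this shape of thing (a minimal Go sketch; Config and parseConfig are invented): wrap errors with context on the way up, so the final log line already reads like the relevant history.

    // loadConfig returns errors that carry the path and the failing step,
    // which usually tells you where to look without attaching a debugger.
    func loadConfig(path string) (*Config, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("loadConfig: read %s: %w", path, err)
        }
        cfg, err := parseConfig(data)
        if err != nil {
            return nil, fmt.Errorf("loadConfig: parse %s: %w", path, err)
        }
        log.Printf("loadConfig: %s ok", path)
        return cfg, nil
    }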

I'm also one to use logging and error management to work my way through a program. I don't understand how anyone does it any other way. But surely the debugger helps a lot when no other way to reason about your code exists.

I have a lot of problems explaining this to the people I work with. Most of the errors we make are in some way related to having imperfect information through the debugger.

At some point complexity gets so high the debugger isn't even capable of handling the system under inspection. What are the tools we use in these cases?

I disagree; you'd be surprised how many people have the mental model of various large code bases (think Linux kernel) in their heads. It's not as if you are some savant memorizing all the lines of code, and it helps to differentiate between "complexity" and "a mess." :)

Linus refused to merge the kernel debugger patches for a very long time because he felt that if a bug could not be solved by thinking (and logging), then the code was too complicated.

Did he state what changed his mind?

> You are only so smart. Once the complexity of the programming model reaches a certain point a debugger is necessary to validate and discover the true nature of a system.

That is true but if you rely only on the tools, you will never gain a smarter understanding of the systems you are working with.

I just want to point out that Rob designed much of the logging infrastructure at Google. Take from that what you will.

For people who do not work at Google, is there something unusual (positive or negative) about Google's logging infrastructure?

Think about the scale at which it must operate. It is quite literally awesome. I wish I could give numbers. :-(

One should always try to write well-modularized code with lots of good abstraction barriers, so that you only ever have to hold a fraction of the complexity of the system in your head at a time. If you had to understand everything down to the transistor level you could never even understand a Hello World program. That's because the language you use gives you good abstractions so you don't have to worry about linking, operating systems, memory allocation, etc. When writing complex systems you should strive to create such good abstractions yourself.
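In Go that might look like depending on a small interface rather than on the concrete implementation behind it (the names here are purely illustrative):

    // Store is the whole contract the caller has to keep in its head.
    type Store interface {
        Get(key string) ([]byte, error)
        Put(key string, val []byte) error
    }

    // CopyKey never sees the disk layout, caching, or locking that live behind
    // the abstraction barrier in whatever concrete type implements Store.
    func CopyKey(s Store, from, to string) error {
        v, err := s.Get(from)
        if err != nil {
            return err
        }
        return s.Put(to, v)
    }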

Not to mention using a debugger to try to understand the "outside world": libraries, hardware, most of it with bugs that may affect you.

Yuck, a debugger exposes so little of the world that it's only appropriate in highly targeted approaches.

This is relevant: http://esr.ibiblio.org/?p=316

Also, I'm curious about something. Those of you who are good at building mentals models: are you also visual thinkers?

Some are visual thinkers but I am not.

I am a logical thinker and good at creating sensible (that is, high-level or very granular, depending on importance) abstractions in my head. Once this is done, it's easy to think "this abstract data gets computed by this function, which passes its output to this function, etc."

The actual solution will then appear from (0) the association of an (erroneous) result with a particular piece of data/behaviour, (1) strictly following through this model in my head, (2) a holistic understanding of the flow of data and the computational roles inside the system (or external to it in the case of end-users), or (3) a whim to subvert and reframe the question/problem in interesting ways. As Carl Jacobi said: "Invert, always invert."

Generally you progress from (0) to (2) before taking out a debugger. Most easy problems are solved at (0). (3) is unusual but intellectually gratifying if you find a way of solving a problem in a way that could not be understood simply through using a debugger and hacking a couple of lines of code.

Very. It's just how my mind puts things together, not something I can try to do or not do. I tend to see programs in blocks and patterns interacting rather than sequences of logical operations, probably why I tend to prefer the object-oriented paradigm over others. I get bogged down when I can't visualize the interactions. It isn't just programming, either -- working with derivative products in finance I formed very detailed visual mental models of how they operated.

Sometimes visual but more often I imagine the code physically. It has weight, or friction, or rigidity, depending on what aspects of the code I'm trying to think about.

Interesting, I should try using that: mechanical models. I have come across it (using a mechanical device as an analogy/metaphor for thinking about something) in other fields (http://www.ribbonfarm.com/2010/06/30/the-philosophers-abacus...), but never really used it in code/design consciously so far.

It's not actually mechanical. It's called kinesthetic learning or sensing. That's one of the three types: kinesthetic, visual, and auditory.

Yes, I recognize this. A piece of code may be ugly (visual) but it may also be solid (kinesthetic). Or beautiful/brittle.

Here's a link to the (incomplete) sample chapter on their website for 'The Practice of Programming' on Debugging.


I'd had this on my wishlist for a while; this just made me buy it.

Wonder how different it will be from Code Complete 2.

It's very similar to Code Complete, but much, much shorter. Also, Code Complete has a fair bit of dumb stuff mixed in with the gems of wisdom that comprise most of it; TPOP doesn't.

Thanks for that quick overview. :)

I think if you are just pushing through (brute force) to fix a bug "without thinking" then you will never figure it out. Also, "thinking first", as the article claims, without actually looking at the code or stack traces is just mental brute force.

Pretty much it's a combination of both: you look at the code, you think about what's happening and what could go wrong, you look at the code again, you think some more, you look at stacks, variables, output.. and then you think again and BOOM: you figure it out.

The best debugging tool is you. Use all you have.

Actually, I think it's a bit more subtle than that. By using stack traces and debuggers etc, I can come up with a working and correct solution. But, by thinking through the problem at a deeper level I can come up with an improved design which may simplify future programming and avoid similar bugs.

But one must also be careful not to fall into the trap of premature optimizations and building unnecessary complexity.

You've just hit a real actual problem in the code. Factoring that out isn't premature, it's responding to reality.

I was reacting to the idea that instead of fixing the immediate bug you just found you're going to redesign the whole thing to prevent that type of bug from occurring ever again in the future.

And simonh was pointing out that it's not really premature if it's already occurred :)

Don't get me wrong, you have a point - there is certainly a balance to be struck.

Then again, how many times have you found a bug caused by a single line or function call or bit of syntax that didn't do exactly what you thought, that was easily overlooked? Particularly in someone else's code in a language that isn't your main one. I think line level interactive debugging is valuable precisely because it tells you what you really know for certain at the local level, and so lets you reason more effectively about the big picture.

You can discern those through thinking, too. "Well, it got to this point and broke... hm... well, what do I know about this environment at this point. What assumptions am I making? Since this assumption isn't being fulfilled, it must be this issue... and that was caused here"

Also, this was pair programming: both people are making sure that what's written is what's intended to be written, so those issues should be (mostly) gone.

This is my method of debugging. I tend to think through code for a long time away from a keyboard before writing it, trying to understand corner cases, implications, and so on -- developing a mental model. Well, I wouldn't do this for everything, more for core aspects of a complex/somewhat-complex system.

My co-worker does the same thing. Thinks instead of opening the debugger. In fact, he never uses a debugger even though he's a hardcore low-level C/C++ guy. Another important thing is to have good logs. Don't log too much, but log the most pertinent info. Next he just looks at the code. If it's really tough, he adds a print statement or two. I've never seen him do more in 3 years of working with him.

He says he might use a debugger depending on the type of application he's working with.

Also, there's something to be said for extremely tight, minimalist, highly reusable code. It makes it easier to walk through it in one's head.

Ken was watching Rob soften the lid on the jar only to swoop in at the right time and deal the decisive blow. ;D

This is great advice for when you're working on something that follows logical principles and guidelines. It's also great advice to follow when you're working on a green field project where you're allowed to work out the best place for certain pieces of logic to live.

However, when you're working on legacy systems or have to refactor some horrendous code written by developers many leagues out of their depth, and nothing whatsoever is in its logical place, it's less effective than in other circumstances. This sort of critical thinking is a wonderful tool to have; but just like any other tool, there are times when its use is appropriate, and times when it isn't.

Very good advice. Even when I use a debugger or print statements to locate and fix the bug, I sit down afterwards and make sure I understand why the bug occurred and why the fix is correct.

Thanks for the advice. I've always had a feeling that this was the right approach, yet I was usually too lazy to actually do it. Seeing Rob Pike coming to this conclusion is a good motivation. I've also noticed that the most persistent and hardest to solve bugs came either from logical errors in my mental model or its implementation.

Good advice - after I noticed that I solved most of my tough debugging problems on the walk home from work, I started going outside to take a walk around the building whenever I got stuck. Removing yourself from immediate access to the code lets you think at a higher level about how it's organized and what could go wrong.

This is only true for man-made systems that you have written yourself or have been working with for long periods of time.

When you try to "think up" a business then bad things start happening. I think this is a huge trap for us programmers who want to become entrepreneurs.

Nature doesn't "think or do research" it "creates and tests aggressively".

I've recently had to deal with a similar situation, except the person that refactored my code didn't make such a big improvement. Instead he went on vacation and a lot of things broke when we released the code. Even though I'd looked at all his check-ins, and they looked harmless, I was not able to foresee the problems we experienced when this code ran in prod.

I'm conflicted, because on one hand I don't want to be the code police that simply refuses any changes from others, but at the same time it's hard to be responsible for a system when so many core updates are done by other teammates without proper testing.

I think the happy middle ground is: no such changes in the common trunk. Do them in a branch, and only merge them in when the benefits are clear, the team is on board with all the changes, you're ready to release to production, and you will be around during the release.

The advice is to be Ken Thompson?

I think this works because you're devoting more energy to the problem than to the sensory and mechanical parts of your brain that will be used to enact the fix. But it's not natural because socially it can look as if you're not 'working'.

Always something to be said for running the code through the compiler in your head. Nowadays you need a virtual distributed environment in your head to emulate the design.

Interesting to note that he has now replaced C with Go (GoLang)

I also found that interesting, but it's not a big surprise since he is one of the co-authors of Go...

Agreed - we'd probably all be most productive working in a language we'd designed ourselves, irrespective of the merits or demerits of that language for other programmers.

Yeah, but not everybody designs a language, right? [1] The usage of a language directly depends on comfort, productivity and efficiency. Maybe "it's Pike's own language" was a factor, but the relations in [1] hold true in a wider light.

I'm not bashing Go here, just to clarify - I'm just saying that given Rob Pike's deep knowledge of the language and its libraries, and his heavy design input to it, it doesn't mean much that he's productive in it (well, besides the implied statement that he thinks it's not a toy language any more).

I find that I'm much more productive working in languages designed by other people, because they have much larger standard libraries. :)

Fair - but if you can get Google to pay for a bunch of engineers to build the standard library for you, I bet it would work better ;-)

Go is very productive and efficient for me too...

But you're not supposed to use debuggers. You're supposed to use Test Driven Development.

Mental models. They underlie everything.

Read "Pragmatic Thinking and Learning": L-mode and R-mode, people...

Thanks, I learnt a lot today.

Golden Advice.

The lesson is very old: "Typing is no substitute for thinking." from Kemeny and Kurtz in their book on Basic.

Ken Thompson said that never happened; they didn't need to look at each other's code, because they were all "pretty good coders": http://www.informationweek.com/software/operating-systems/qa...

That quote (and you misquoted, btw) is from an interview discussing his working relationship with Dennis Ritchie. The particular quote is from a section of the article titled "On Collaborating With Dennis Ritchie"

"Dr. Dobb's: Was there any concept of looking at each other's code or doing code reviews?

Thompson: [Shaking head] We were all pretty good coders."

Nothing to do with Rob Pike, who has worked with Ken for decades since.

Your link is discussing the development of Unix, which was before Rob Pike came on the scene. Rob Pike is discussing an incident that happened a number of years later when Rob would have been significantly junior to Ken.

There is no reason that both versions of history can't be true.

On top of that, things would have grown in complexity since the early days.

Don't kid yourself. Systems may be larger and seemingly more complex, but the individual parts are as simple or complex as they ever were. Anyway, if things are more complex, the necessity of thinking them through is even greater.

In many ways Plan 9 is simpler than Unix, and Go is simpler than C.

Unix was also much simpler than Multics.

Software doesn't need to always grow in complexity, but it takes lots of determination to accomplish this.

A Go implementation is likely more complicated than a C one. You don't think that GC is free, do you?

You forget the fact that Go is a young language, while a modern C implementation has to support several standards, countless extensions, and probably decades of cruft in the codebase. Additionally, if you factor in that Go includes a tool that replaces most of what autotools does in the C world, we're talking about a huge blob of accidentally complex code.

It's very reasonable to think that the Go implementation is simpler.

I'm not forgetting. The make tools are not part if the language and irrelevant. Most C developers I know avoid extensions for portability reasons. The cruft part doesn't matter much for this discussion either, you can start from scratch if you want like TCC. The language definition of Go requires some things that C does not for any reasonable implementation. I think it is much more reasonable to think a C implementation will be much simpler.

Everything else you said I'll grant with "we have differing opinions" except

"The make tools are not part if[sic] the language and irrelevant"

No, they're quite a lot more than just make replacement. And definitely not irrelevant!

They are irrelevant to discussing implementation details of the languages.

Actually, the Go implementation is very simple, if a little arcane in the style of C used.

Simple compared to what? The Go implementation is very immature, I'll grant you that, but it is definitely more complex than an implementation of C at a similar point in its life. At the very least Go supports, fairly straightforwardly, a lot of C, and it has a GC and multiplexes userland threads over multiple cores.

* Assume C90

Ahh, if we are comparing equivalent implementations, you are definitely correct. GC and goroutines are much more complex than anything in C. In fact, off the top of my head, the only features truly missing in Go from C are unions (unfortunately) and the preprocessor (effectively a part of C).

Have you ever implemented GC or fibers? They can be pretty simple to implement, really.

There is a big difference between implementing them for fun and implementing them for a production system. Regardless, even if they are simple to implement they are more than what C90 has so, by definition, more complex implementation.

They're a lot simpler than printf, strxfrm, or mktime, all of which are in C90!

They are conceptually more high level than anything in C - that's what I meant.

Used to code in C...moved to work at Google Labs...now raves about Go being the most productive language EVARR and has replaced C...Hmmm.

I think you missed "coauthored Go..."

Not just that, he coauthored Go together with the coauthor of C.

Didn't realise either of those points. I guess that adequately explains my wondering. What's amusing now are the insecure defensive posts and the downvoting.

You're getting down voted because cynicism is cheap, and that's all you brought to the discussion.

After the ....Hmmmm should have been a thoughtful comparison of the two languages.

....Unless maybe you understand implicitly that you're probably not as smart as Rob Pike.

I find myself productive in Go too... and so do many people.
