A couple of years ago, we had a four-person team. Three of them left, leaving me (the junior dev) to deal with the whole platform.
I didn't have any room for error, but I didn't know a darn thing. I did, however, have the ability of the "hunch". Basically, when a problem happened, instead of opening the text editor and looking for the problem, I just stood back and thought about it. I'm not sure what I'm thinking about. It's strange. It's like my brain just traverses the infrastructure and comes up with the LOGICAL solution. Once I get that answer, I then go look at the code base. 99% of the time, it's my hunch that finds the problem.
I amaze myself all the time with this. To me it's just logic, but I'm sure it's something more. I'll probably never figure it out, but that's not to my detriment.
This also helps with writing new applications, because you already know what will work and what won't before you get there. It's craziness!
I don't want to get on a high horse and say I have superpowers or something like that. But for some reason I'm able to do something that others can't, even though I think it's simple logic. It's sometimes a scary thing, because when I'm wrong, I have a difficult time figuring out why, since no one else is able to do the same thing and "see" ahead.
Sometimes it seems to me that "seeing" is a better word than "thinking" here, because for the simpler things, it's not like I try to use brain cycles on the issues at hand.
And I hate saying this stuff, because it makes me sound arrogant when I know that I'm really incompetent at so many things.
There's some research and discussion among educators and neuroscientists about how working memory may be more important than IQ. IANAE but my anecdotal experience also supports this.
For me, when I was a new engineer (no CS background), I'd never spend long on a bug before I got frustrated and headed over to a more senior engineer's cube for advice. Usually, during the walk over, while I tried to properly form the exact question to ask, I'd figure out the answer. Dan (my go-to guy for this) got used to seeing me head his way and then just say hi; he'd ask "Figured it out?" and I'd say "Yep" and head back to my desk.
I vaguely recall a story that my grandfather used to be very good at doing math in his head, and actually lost this skill when he tried to understand his own thought process well enough to explain it to others...
link for anyone else who's been meaning to read this: http://amzn.com/0393316041
It has both "Surely You're Joking, Mr. Feynman!" and "What Do You Care What Other People Think?" in a nice hardcover.
Does the order of the two terminal conditions matter? / Try it out!
Does the order of the two previous answers matter? / Yes. Think first, then try.
- The Little Schemer
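To see the book's point concretely, here's a rough Python rendering of its member? idea (the book itself uses Scheme; this translation and its comments are mine):

    def member(x, lst):
        # Terminal condition first: an empty list contains nothing.
        # If this check came after the lst[0] comparison below, the
        # empty-list case would raise an IndexError instead of
        # returning False -- so yes, the order matters.
        if not lst:
            return False
        if lst[0] == x:
            return True
        return member(x, lst[1:])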
We were working on this tightly coupled multiprocessor machine, which was quite unstable and would hard-lock if something went wrong in the program (requiring a walk over to the machine room, hitting the reset button, etc.). We would start hacking at our assignments quickly and make innumerable trips to the machine room.
This guy, on the other hand, would just sit and stare into space for a while; then jot down the entire program on paper. Then he would enter it into the editor, fix a couple of typos that the compiler caught, and run it. His program always worked on the first or second attempt.
Funny part? He hated to program, and was a theoretician.
Can you prove (in the mathematical sense) that it always does what you want?
The advice is to look at the issue as you would a mathematical problem, critically and analytically. Don't use shotgun debugging or quick fixes; instead, try to understand the code thoroughly: what its goal is and how it accomplishes it. Instead of testing with random data, mentally walk through all the possibilities and branches.
This is akin to making a mental model of what you are trying to do, and then verifying that the model is correct, and that your code matches the model.
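One concrete way to practice that branch-walking is to check every small input exhaustively instead of probing with random data. A minimal sketch, with a made-up clamp function:

    def clamp(x, lo, hi):
        # Intended behaviour: pin x into the closed interval [lo, hi].
        if x < lo:
            return lo
        if x > hi:
            return hi
        return x

    # Enumerate every small case so each branch (below, inside,
    # above, and both boundaries) is exercised.
    for lo in range(-2, 3):
        for hi in range(lo, 3):
            for x in range(-4, 5):
                result = clamp(x, lo, hi)
                assert lo <= result <= hi
                if lo <= x <= hi:
                    assert result == x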
Same thing with interruptions: it takes a while to rebuild the mental model after being interrupted.
Now as a developer I haven't worked on a model long enough (my excuse) to be able to do that. But now I realize I already have a process for identifying gaps that I don't use.
I work with a guy who can do this and recently he has amazed me with how he can do it. Contrary to this though, sometimes I think he works in outdated models that limit his creativity.
If you're building something that's going to be used as a foundation by lots of other things, those 5% errors add up really fast.
What the clue means will depend on the order of the tests. If the tests were written in order of increasing code coverage, it is probably a clue that the algorithm needs more thought; but it could instead be a clue that one hasn't thought enough about test coverage and has one's tests in an unhelpful order.
Understanding all of that seems overwhelming. IOW, being a "full stack" generalist is getting harder, imho.
I very much suspect this is one of the reasons that motivated Ken and Rob to create Go.
And see also Ken's comments about C++ at the recent Turing award winners celebration. ( http://amturing.acm.org/acm_tcc_webcasts.cfm )
But more valuable. When coding with people who started with Rails, I find they may not know that there are 1000x differences between things that seem equivalent to them. In ye olden dayes it was hard to be off by that much, because the problems would surface sooner.
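A made-up example of the kind of gap I mean: in Python, membership tests on a list and on a set read identically but differ by orders of magnitude at scale.

    import timeit

    items_list = list(range(1_000_000))
    items_set = set(items_list)

    # The two lines below look equivalent, but the list scan is O(n)
    # while the set lookup is O(1) on average -- easily a 1000x gap.
    print(timeit.timeit(lambda: 999_999 in items_list, number=100))
    print(timeit.timeit(lambda: 999_999 in items_set, number=100))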
I know people who make incredibly good money cleaning up after teams that lack a single full-stack person.
It's usually a decent starting assumption, but it can lead to wasted hours when the problem really is outside of what you've done.
The time to find a bug with a debugger has a much tighter distribution than the time to find one by thinking about the code. There are some problems that you just can't get to the bottom of by thinking.
By having one person take each path, you get the advantages of both. It might seem that a single programmer should be able to achieve the same by switching between the two approaches, but the cost of context switching is so great that it will be more like starting again each time. So it's really hard to know when to stop thinking about it and to get the debugger out.
The advice that, once you find the bug, you should work out what the problem really is definitely still holds, though.
If you're ‘just a’ developer in a larger company, you may rarely get to actually fix these high-level problems you discover. You kind of have to live with it, which is frustrating.
And if you're working alone, too much ‘high-level’ thinking may well slow down getting things done. If you don't have someone around with a more pragmatic attitude, be sure to have a bit of it yourself. =)
But the debugger is not the problem - in fact, when you don't know the code perfectly, a debugger can be very useful in enabling you to form a sufficiently complete model of the code in the first place.
It only becomes a problem if, at that point, you keep looking at details rather than stopping to think about what you've learned.
I agree with your second remark; you need pragmatism. But yes, as I get older I notice that I solve bugs in my head instead of debugging in the usual fashion. I strive to type as little as possible, and to do that you need to do a lot of head work; 20 years ago, I was the opposite.
Toyota's '5 Whys' root-cause analysis: don't just fix the manifestation of the problem, but keep asking why it happened until you get to the real cause.
The NASA space shuttle programmers: when they find a bug, they go to great lengths to find how and where in their programming process it was able to happen, sometimes finding other bugs before they surface.
I've personally spent 10 years in academia deeply understanding many things that more often than not were good for nothing. Now I'm OK with just getting things to work quickly and not looking back.
PS: I'm a really fast typist.
Kay's Law is that the right perspective is worth 80 points of IQ: that is, if you find the right way to look at a problem, you can do things with the ease of someone with a 180 IQ who didn't have the advantage of that perspective. How do you get a good perspective? Well, there are a couple different ways. Some are pre-made perspectives: for example, dynamic programming: "let's just build up all the solutions in order from n=0, reduce everything to some other problem we've already built up." Some are intermediate, like wishful thinking: just imagine that magically, you have functions which you don't have, so that you write an algorithm which is correct, but references a bunch of functions which don't yet exist.
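Both of those perspectives are easy to sketch in Python (the toy examples below are mine, not Kay's):

    # Dynamic programming: build every answer up from n = 0 instead
    # of recursing down from the top.
    def fib(n):
        table = [0, 1]
        for i in range(2, n + 1):
            table.append(table[i - 1] + table[i - 2])
        return table[n]

    # Wishful thinking: the top-level algorithm is written first and
    # is already correct; the helpers are stubs to be filled in later.
    def words_of(document):
        raise NotImplementedError  # magic we'll write later

    def in_dictionary(word):
        raise NotImplementedError  # magic we'll write later

    def spell_check(document):
        return [w for w in words_of(document) if not in_dictionary(w)]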
Some approaches to new perspectives are more general: hacking, for example, is when you try a bunch of things that might just barely work, until you find one that does. Another is duck solving: explain your problem out loud to an inanimate object (like a stuffed or rubber duck), and half the time you'll accidentally create a perspective, purely in explaining the problem, that turns out to be useful for solving it. These are very generally phrased because "problems" are a very general topic for intellectual discourse.
When I'm trying to understand foreign code with less time than I need, I instrument it (almost always at inputs/outputs) and try to treat functions as black boxes. In an ideal world the black boxes would be working.
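Roughly the shape of what I mean, as a Python sketch (the trace decorator and parse_price are made-up illustrations):

    import functools

    def trace(fn):
        # Log a function's inputs and output so it can be treated as
        # a black box: if what goes in and comes out looks right, move on.
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            print(f"{fn.__name__}{args!r} -> {result!r}")
            return result
        return wrapper

    @trace
    def parse_price(text):
        return float(text.strip("$ "))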
Data flows are as important as algorithms. See this thing from Guy Steele.
Fred Brooks, in Chapter 9 of The Mythical Man-Month, said this:
"Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious."
That was in 1975.
Eric Raymond, in The Cathedral and the Bazaar, paraphrased Brooks' remark into more modern language:
"Show me your code and conceal your data structures, and I shall continue to be mystified. Show me your data structures, and I won't usually need your code; it'll be obvious."
That was in 1997, and Raymond was discussing a project coded in C, a procedural language. But for an object-oriented language, I think this aphorism should be reversed, with a twist:
"Show me your interfaces, the contracts for your methods, and I won't usually need your field declarations and class hierarchy; they'll be irrelevant."
I think, however, that practitioners of both procedural and object-oriented languages can agree on Raymond's related point:
"Smart data structures and dumb code works a lot better than the other way around."
An interesting assertion. In your link Guy Steele observes the duality between objects (where it's easy to add new data types but harder to add new operations that work on all of them) and abstract data types (where it's easy to add new operations but harder to add new data types).
Guy says that the former tradeoff is almost always the right one, but I've encountered many situations where the latter was much more convenient. It's preferable, IMO, to be able to choose which tradeoff you want based on the constraints of your particular problem.
This usually is not so much a language-level problem as a cultural problem; many programmers are infatuated with OO (I know I was at one time) and unaware of the tradeoffs OO makes or when it's appropriate to use another approach. Hopefully over time multiparadigm languages like Python will help make "objects vs ADTs" more of an engineering question and less of a religious one.
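The duality is easy to see in a stock Python example (the shapes are my illustration, not Steele's):

    # Object style: adding a new shape is one new class (easy), but a
    # new operation means touching every class (hard).
    class Circle:
        def __init__(self, r):
            self.r = r
        def area(self):
            return 3.14159 * self.r ** 2

    class Square:
        def __init__(self, s):
            self.s = s
        def area(self):
            return self.s ** 2

    # ADT style: adding a new operation is one new function (easy),
    # but a new shape means touching every function (hard).
    def perimeter(shape):
        if isinstance(shape, Circle):
            return 2 * 3.14159 * shape.r
        if isinstance(shape, Square):
            return 4 * shape.s
        raise TypeError(f"unknown shape: {shape!r}")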
I've found that liberal logging and careful error management have almost entirely replaced the debugger for me. In the last three months, I've only fired up the debugger twice, and that was when I was working with untyped memory and peeking at it through the debugger was the most efficient way to get things done.
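The pattern, roughly, as a sketch (load_order and db.fetch are made-up names, not a real API):

    import logging

    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )
    log = logging.getLogger("orders")

    def load_order(order_id, db):
        log.debug("loading order %s", order_id)
        try:
            order = db.fetch(order_id)  # db.fetch is a stand-in
        except KeyError:
            # Careful error management: record the context, then raise
            # something the caller can act on instead of swallowing it.
            log.error("order %s not found", order_id)
            raise LookupError(f"no such order: {order_id}") from None
        log.debug("loaded order %s: %r", order_id, order)
        return order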
I have a lot of problems explaining this to the people I work with. Most of the errors we make are in some way related to the imperfect information we get through the debugger.
At some point complexity gets so high the debugger isn't even capable of handling the system under inspection. What are the tools we use in these cases?
That is true, but if you rely only on the tools, you will never gain a deeper understanding of the systems you are working with.
Also, I'm curious about something. Those of you who are good at building mental models: are you also visual thinkers?
I am a logical thinker and good at creating sensible abstractions in my head (high-level or very granular, depending on importance). Once this is done, it's easy to think "this abstract data gets computed by this function, which passes its output to this function," etc.
The actual solution will then appear from (0) the association of an (erroneous) result with a particular piece of data/behaviour, (1) strictly following through this model in my head, (2) a holistic understanding of the flow of data and the computational roles inside the system (or external to it, in the case of end users), or (3) a whim to subvert and reframe the question/problem in interesting ways. As Carl Jacobi said: "Invert, always invert."
Generally you progress from (0) to (2) before taking out a debugger. Most easy problems are solved at (0). (3) is unusual but intellectually gratifying: you find a way of solving a problem that could not be reached simply by using a debugger and hacking a couple of lines of code.
I'd had this on my wishlist for a while. This just made me buy it.
Wonder how different it will be from Code Complete 2.
Pretty much, it's a combination of both: you look at the code, you think about what's happening and what could go wrong, you look at the code again, you think some more, you look at stacks, variables, output... and then you think again and BOOM: you figure it out.
The best debugging tool is you. Use all you have.
Don't get me wrong, you have a point - there is certainly a balance to be struck.
Also, this was pair programming: both people are making sure that what's written is what was intended to be written, so those issues should be (mostly) gone.
Also, there's something to be said for extremely tight, minimalist, highly reusable code. It makes it easier to walk through it in one's head.
However, when you're working on legacy systems, or have to refactor horrendous code written by developers many leagues out of their depth, where nothing whatsoever is in its logical place, it's less effective than in other circumstances. This sort of critical thinking is a wonderful tool to have, but just like any other tool, there are times when its use is appropriate and times when it isn't.
When you try to "think up" a business then bad things start happening. I think this is a huge trap for us programmers who want to become entrepreneurs.
Nature doesn't "think or do research" it "creates and tests aggressively".
I'm conflicted, because on one hand I don't want to be the code police that simply refuses any changes from others, but at the same time it's hard to be responsible for a system when so many core updates are done by other teammates without proper testing.
I think the happy middle ground is: no such changes in the common trunk. Do them in a branch, and only merge them in when the benefits are clear, the team is on board with all the changes, you're ready to release to production, and you'll be around during the release.
"Dr. Dobb's: Was there any concept of looking at each other's code or doing code reviews?
Thompson: [Shaking head] We were all pretty good coders."
Nothing to do with Rob Pike, who has worked with Ken for decades since.
There is no reason that both versions of history can't be true.
Unix was also much simpler than Multics.
Software doesn't always need to grow in complexity, but it takes a lot of determination to accomplish this.
It's very reasonable to think that the Go implementation is simpler.
"The make tools are not part if[sic] the language and irrelevant"
No, they're quite a lot more than just a make replacement. And definitely not irrelevant!
* Assume C90
...Unless maybe you understand implicitly that you're probably not as smart as Rob Pike.