Thinking first, writing later. When practicing for the ACM programming competitions in college, I discovered that when writing code before understanding the solution, I eventually needed to start over. I have never seen a counterexample.
Learning new things. Every year I try to learn a new major skill. Two years ago I learned Common Lisp, last year I hacked on some AI textbooks, and this year I'm teaching myself how to build websites in Python. Try for some variety - Alan Kay said "Perspective is worth 20 IQ points," and while I make no standardized testing claims, learning from multiple fields can connect the dots in interesting ways.
Reading code. Norvig's PAIP knocked me out of my Object-Oriented rut and changed how I think about coding problems.
Working with smart people. Surrounding yourself with motivated and effective workers has a great multiplying effect on your own productivity.
The thing is, sometimes writing code is how you best come to understand the solution. One of the things that made me a better programmer was realising exactly this. If there is something I don't understand, I now try to program it rather than thinking it over too much first. Call it prototyping if you want. It's a fine line, of course.
Thinking first, writing later .... I discovered that when writing code before understanding the solution, I eventually needed to start over.
I think this may depend on how and when you think most effectively. Having the discipline to be productive in pure thought is impossibly difficult to quantify. At times I feel productive working that way, but other times just writing code helps me think through things faster. I type fast so it's not a big burden to type a lot of stuff even if I have to delete it.
Of course, sometimes just drawing things out on paper helps as well. I seem to recall Dijkstra making some criticisms of that, but for me it is often highly effective.
Either you plan too much ahead and you end up tweaking your design forever, eventually building high-level cruft code that still won't get anything done because you never dig deep enough to find the real culprits in your solution.
Alternatively, you plan nothing at all and end up rewriting your program several times because you just won't see the big picture as you're just digging up dirt from many different spots.
I think it works best if you plan only as much as you initially can and then start coding a proof-of-concept version as quickly as possible. You can only plan what you know of, and it's not much at first, but it's something. Quickly getting to coding helps you understand the problem better, and you will throw away many approaches that turn out to be suboptimal or solutions to the wrong problem, but there's nothing wrong with that. When the coding brings in more knowledge, then you can plan a bit more, and repeat.
If you're good, you can do that most of the time. A conservative estimate might be half the time. More importantly, you will never hit the optimal planning-versus-coding point; you waver between the two sides, sometimes planning too much and sometimes coding too much.
Experienced programmers can keep the amplitude of that zigzagging low; inexperienced programmers rush from endpoint to endpoint, doing too much of either.
I can plan out the structures I'll put stuff in and the names of the classes and methods that'll work with the data, but I'm clueless until I start trying to implement the idea.
I suspect that once I've done enough things, it'll be easier to plan. I know when I go back to any of my old C# projects, I keep slapping my forehead and am able to see other (often better) solutions.
So I think the better advice is: Try until you know what you're doing, then toss it out and plan around all the terrible mistakes you made.
Design your program from a high level and iterate through phases down to a low level. So for instance, say you want to make a program that is a customer database.
High level 0: I need a customer database
Level 1a: I have these storage requirements.
Level 1b: I need these inputs
Level 1c: I need data to exit the system in these ways.
Level 2: Diagram of work-flow processes (how it interacts with real people).
Level 3: Diagrams of how the database will look, identify what types of objects you will need to work with (of course, this specific action gears you into OOP design and not functional).
Level 4: Problems. Spend some time bad-mouthing everything and how it works, go back to level 2, and when you're tired of crying about the thing, go on to the next level.
Level 5: Document how it works. This is your Manual.
Low Level 6: Write the code, and when you need guidance, RTFM.
Level 7: Throw the thing into the trash and start over, they wanted an inventory management system, another tribute to the synergistic perspicacity of business people and software engineering's verbal constipation.
Seriously, the most difficult problems I've faced have been solved easily just after taking a quick nap (30-40 mins). Most of the time it comes to the point where there are no solutions on the horizon, I'm pulling my hair out and biting my keyboard, and then, a quick nap later, most of the problem is solved in 20 minutes when I sit in front of my computer again. It probably has something to do with the unconscious mind.
But the problem is employers may not see this "sleeping to solve problems" act as productive as you do. I read that Google has "nap rooms", and I'll definitely have one of those when I start a company.
Let me explain:
You are actually inviting your right hemisphere (the creative side of your brain) to come out and play. The right hemisphere can't be forced to think on demand, so by taking a nap you are letting the unconscious work for you.
As a really interesting side note, Thomas Edison used to take a nap whenever he faced a difficult problem. He would nap with ball bearings in his hand; when he fell into a deep sleep, the dropping ball bearings would wake him up, and he would get up and tackle the problem. :)
PS: By “debunked” I mean it was discovered that the brain has more plasticity in how and where specific tasks are performed than initially assumed. Also, the high-level understanding of what doing “Math” or “Poetry” is has little connection to how the brain actually does this stuff. E.g., some people can count time accurately while reading; other people can't. The most probable explanation is that as you grow up, the brain chooses how to approach high-level problems in a fairly arbitrary fashion.
I agree with you on all of those items. It's a little scary, actually.
2) Learning how to communicate effectively. Keeping interested parties in the loop at all times. Not hiding mistakes or difficulties, or waiting until the last minute to let a PM know that a task is going to be late.
3) Not falling into the 'stupid user' trap. Your users aren't stupid. They know their business better than you do. You need to understand and accommodate their workflow, not the other way around.
Also agree on 3) If a user asks for something 'stupid' you can't just shoo them away. There's a reason they asked and your job is to find out what that reason was - and to come up with a solution for your client that fits the rest of the system. The client may not know what they want but they do know that there's a problem that needs to be addressed somehow.
Plus if nothing else, you'll be able to do that task much faster the next time.
"Any man who reads too much and uses his own brain too little falls into lazy habits of thinking."
To me, the best readings aren't ones that I learn directly from. They're the ones that spark thought processes inside my head. I haven't read past Chapter 2 of pg's "On Lisp", but I still consider it one of the best books on programming languages because it changed how I think about them in a lot of ways.
Avoid both of those.
When I have a problem I do one of two things: 1) I work on trivial things like code cleanup and reorganizing things, or 2) I get my head completely out of the computer. Go for a walk. Sit outside and just look around for a few minutes.
The anti-pattern here is popping off to look at your email, FB, or HN. It keeps the brain active in context-switching mode, instead of settling into one context and having it work on the problem behind-the-scenes.
>Avoid both of those.
I'll keep it short and sweet. Family, religion, friendship ... these are the three demons you must slay if you wish to succeed in business.
You know, a funny joke? Haha?
2. Writing down every question that occurs to me about the technology I am working with, at any point, specifically the behaviour of libraries and nuances of programming languages: I am not an expert in any programming language. Then chasing those questions until they are resolved.
3. When stuck with a slippery bug, attempting to reconstruct the bug in a toy program. If reconstructed, fixing it is easier. If not, I know it's not where I thought it was. Sometimes I never make it as far as actually writing the toy program; the intent is enough.
Programming is 10% writing code, and 90% reading code. Reading your own code (debugging, refactoring, coming back on Monday to something you wrote on Friday), and reading other code ("why doesn't this library work?").
Once you learn to read, a number of things happen. You are no longer a slave to your libraries; you can open them up, see exactly what they're doing, and either change your mental model or change the code. You don't have to get stressed out about "coding guidelines" anymore, because you actually understand what the code means. Inane details like tabs versus spaces and cuddled elses or whatever don't matter anymore, because you have seen all the possible combinations. And, you pick up the style of your peers, because you aren't coding in a vacuum anymore.
I could go on and on, but if I boil it down to one action item, it's "when you have a problem with a library, read it". Everything else comes from there. (And before you know it, you'll be mentioned in the Changelog of everything you use! Good for getting your next job.)
Thinking in a functional fashion also made it easy for me to pick up JS and start writing event-driven code almost immediately, because so much of modern JS relies on proper understanding of closures and asynchronous events. Furthermore, if you find you like functional programming, I also highly recommend teaching yourself Church's basic untyped lambda calculus; I found that it helped give me a much better understanding of the basic underpinnings of so many languages. Similarly, having an understanding of the fundamental concepts that underlie all programming languages -- such as the differences between call-by-value and call-by-reference, or static and dynamic scope, or the various kinds of OO systems (prototype-based, class-based, mixins) -- is really important. My knowledge of these has not only made it really easy for me to pick up and learn a language extremely quickly, but has also saved me on more than one occasion from a pitfall I would otherwise have fallen into (such as forgetting that older versions of Perl use dynamic scope by default).
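Python has the same closure subtlety that bites people in event-driven JS; a minimal sketch of capturing-by-variable versus capturing-by-value:

```python
# Closures capture variables, not values, so every callback below ends up
# seeing the loop variable's final value.
late = [lambda: i for i in range(3)]
print([f() for f in late])   # [2, 2, 2] - all three share one 'i'

# Binding the current value as a default argument captures it per-closure.
bound = [lambda i=i: i for i in range(3)]
print([f() for f in bound])  # [0, 1, 2]
```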
Also, as many other people here have mentioned, coding up solutions to problems from ACM competitions, Project Euler, Google Code Jam, etc. is a great way to get good at just coding. It is also a great way to familiarize yourself with a language.
If nothing is exciting, if there's resistance (usually from lack of sleep, lack of exercise) then I focus on restoring my routine, and working on something easier like copywriting, email, invoice admin etc. until the excitement for something else wells up again. But I keep working. Often you need to just press on before the excitement comes back.
In the long term, I pick technologies I'm excited about, even if they bring short-term costs. Connecting the dots, this process has led to near-perfect strategy in hindsight.
This is counter-intuitive, but embracing NIH has actually made me a better programmer. It's the spirit of vertical integration. It's taught me how to do things and developed my understanding. Rather than using say an SMTP client, if I don't know how it works I dive in and write one myself. It costs the project in the short term, but in the long term the project at least has one more programmer who understands the nuances of another protocol.
Imagine a Google that outsourced their filesystem, data center, server racks, JS engine, browser, caching, proxying, map/reduce, machine learning, DNS, etc. Would they be as good as the Google of today? That's what separates IT from hackers. IT configures and uses existing software. Hackers write their own. IT knows how to "cobble together" things. Hackers understand things. Without the intimate understanding that comes from NIH, there is no room for the hack.
Being a better programmer is a long term motivation, and it's often the methods that pay off in the long term that contribute most towards this goal.
When you reach a seemingly impassable problem, talking it over with other people and then sleeping on it will make everything clearer come morning.
If you work constantly you are a worse programmer than those who know to take breaks. These are the people who are thinking about what they're doing.
I certainly don't recommend quitting every time you have a problem you can't solve in half an hour, but there's a happy medium between that and wasting days (while you're getting paid, or at least wasting your project's time) working on something that the guy in the next room could sweep away in ten minutes.
2. Find a workflow that keeps you moving forward, no matter how slowly. Here is mine: basically, design top-down, then code bottom-up, testing each piece as you go. An example of what I do when I finally get to coding:
a. write the comment for the function
b. write the signature
c. write the tests
d. write the internals of the function
It will almost never work out that perfectly, but you should have the goal that, when you are done with that function, you never have to look at it again*
* you will. I do. Being perfect would be nice, but it just isn't going to happen.
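A sketch of steps a-d in Python, using a made-up helper function as the example:

```python
# a. The comment: what the function promises, written first.
#    normalize_whitespace collapses runs of whitespace into single spaces
#    and strips leading/trailing whitespace.

# b. The signature, written second.
def normalize_whitespace(text: str) -> str:
    # d. The internals, written last (after the tests below existed).
    return " ".join(text.split())

# c. The tests, written before the internals.
assert normalize_whitespace("  hello   world \n") == "hello world"
assert normalize_whitespace("") == ""
assert normalize_whitespace("one\ttwo") == "one two"
```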
When you're developing, there are a thousand little decisions to be made. Don't assume that the server will be up before the client. Don't assume that the file you open will be writeable. Don't assume that the database is there. Don't assume that the user knows what you know. Don't assume everything works.
Maybe it's because I spend lots of time in teams developing large (overly) complicated systems, but a common problem is simply finding out what went wrong. Strictly speaking, you can sometimes accept something as a precondition if it is explicitly stated as such--but you still need to at least fail loudly and obviously if it doesn't do what you expected. Silent failures suck.
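A minimal Python sketch of failing loudly instead of silently; the config-file shape and the 'db_url' key are invented for illustration:

```python
# Validate stated preconditions up front and raise a descriptive error,
# rather than limping along and failing mysteriously downstream.
import json
import os

def load_config(path):
    if not os.path.exists(path):
        raise FileNotFoundError(f"config file missing: {path!r}")
    with open(path) as f:
        config = json.load(f)  # raises on malformed JSON, loudly
    return validate_config(config, path)

def validate_config(config, source="<memory>"):
    # A hypothetical required key; the point is the explicit, named failure.
    if "db_url" not in config:
        raise KeyError(f"config from {source} has no 'db_url' entry")
    return config
```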
Don't assume anything while debugging, either. As mentioned elsewhere, when debugging I find it useful to take the symptom and think of every possible failure that could cause the symptom. Then, start paring down the list by doing more tests. Do not assume anything. Don't assume the cables are connected. Don't assume the code is current, or the driver is working, or the library documentation is correct. Verify, verify, verify.
By all means, use your experience to prioritize your search space. I personally prioritize stuff that's easy to check, or stuff that's known to have a problem. Of course, I generally suspect my own code long before turning my gaze toward third-party components. But don't assume! Remember the old saying, "it's not a compiler bug"? Well, yeah--that's really the last thing I'd suspect. But if I ruled everything else out, I'd at least do a Google search to see if anyone had brought up the possibility.
But of course, I've never gotten that far. So far, every debugging problem has been reducible to a failure to check my assumptions.
There's a fantastic Mark Twain quote that I was amazed to find the other day; apologies if I've included it here before: “It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so.”
Recently it was jQuery and the Play! Framework -- for the longest time I was trying to think of some "grand" project that I could write to use them both together and become the master of all that is web... then a year went by and I continued to try and figure out what this ultimate project would be.
What a waste.
Then one Saturday I was bored and sat down and wrote a simple collection of AJAX utilities that do the most basic crap that anyone could write - using those two technologies. It helped get me over that hump and I learned a lot about the two techs I was curious about.
I went ahead and stuck the utils online for anyone to use and moved on with my life.
It was a great exercise for me, and a method of write/release I plan on using from this point forward for just about anything.
How easy is your code to modify later when you need to add a new feature? I think that's the only measure of good code that I trust. I get a little proud of myself when I need to modify something I've written previously to add a new feature and it's relatively easy. So I look back at how I wrote it then and try to learn from it. The opposite is also true. If I need to modify my code to add something and it is truly painful, then I know I did something wrong way back when. The same is true for bugs found. If every bug found requires massive code changes, then I did a pretty piss-poor job initially. But if most bug fixes only require a couple of lines, then I did well.
This makes identifying good and bad practices especially difficult because it requires that you stick around for things to fall apart. This is one of the reasons that commercial software sucks so much and why consultants leave giant messes in their wake. The short-term priorities of business conflict with the long-term reality of determining software quality.
As a developer it's your obligation to learn from your past mistakes and better yourself.
* turning off Twitter & IM
* closing email and Google Reader
* headphones headphones headphones
* big ol' notepad for notes and doodles
Kill as many distractions as possible, make the ability to create & think ridiculously easy and just keep on truckin until you have something to test.
Last week, on a whim, I went cold turkey to vim.
The result: I'm probably staying with a (heavily modded) vim.
I added the standard bash/emacs/OS X input keybindings to insertion mode.
It feels like the best of both worlds.
The moral: it's good to branch out and try new things every now and then. You might get surprised.
That might mean committing my changes more frequently, refactoring smaller blocks of code, or avoiding the temptation to fit a lot of functionality in one class/function/file.
Before committing anything, I double-check everything, refactoring the code I have modified/added. This is easy because, once you have your new functionality or fix working, maintaining the new correct behavior as you modify the code is low-risk if you do it step by step.
English, being my second language, is an important part of my job. I try to read all the books, blogs, movies, and so on in English. Every time I come across a new word, I look it up in wordreference (plugin for Chrome), and add it to a list in Sidenote (Mac app). When I need to communicate through email, I sift through that list looking for words that may fit in my message. It is a simple practice that has helped me out big time in uplifting my skills (of course I share this list with my friends!). Certainly this is not directly related to programming but, as I see it, if you want to improve as a programmer, you're going to need to become rather fluent in English.
Writing a blog. Especially invaluable when you get feedback on your posts. Folks out there will give you a kick in the ass more often than you expect. Expect and embrace it. It's a great way to grow.
Get to know the finest blogs, books, and people in your domain. For example, years ago I programmed in Java, usually building web sites with the help of well-known frameworks (Spring, Struts, you name it). I thought I was quite competent until I came across a book called "Effective Java" by a dude named Joshua Bloch. Needless to say I was struck by it, and I ended up feeling like I knew nothing (literally). You can't program in Java and at the same time not know who Bloch, Goetz, Uncle Bob, and so on are. Same with Lisp and Norvig, Siebel, Graham.
I always strive to keep up with the basics. This is the killer skill that shines when everything else I try fails. With a new technology, starting out learning the ropes from the top abstraction layers makes me feel competent because I can get my job done, plus or minus. Basics alone aren't that helpful, but when something arises that strays from the standard way of doing things promoted by the top abstraction layers, a great deal of the time I will need to dig right into the core in order to be able to solve my issue. Well, plus or minus, I personally call this the Onion Theory. This is related to the 'darkness surrounding you' concept explained above. Awesome.
2- avoid mutable variables. Using the same name for two things is just another way to get tripped up.
3- if you're going to ship it, try to stay in the lower (50%) range of your abilities. This is how to get things done lightning fast in my opinion. Work on your skills and improve every day, but to produce something functional and enduring you should be technically conservative and produce something you thoroughly, thoroughly understand.
4- Your abstractions should be accurate, precise, and cognitively manageable.
5- Non-programming organization skills matter. Identify stakeholders, gather requirements, set priorities, PRODUCE, get feedback... repeat.
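Point 2 above can be sketched in Python: reusing one name for successive values invites confusion, while one name per meaning keeps every value unambiguous.

```python
# Easy to trip over: 'data' means three different things in sequence.
data = "  10, 20, 30  "
data = data.strip()
data = [int(x) for x in data.split(",")]

# Clearer: one name per meaning, each bound exactly once.
raw = "  10, 20, 30  "
trimmed = raw.strip()
numbers = [int(x) for x in trimmed.split(",")]

print(numbers)  # [10, 20, 30]
```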
I've had colleagues who had written hundreds of programs, but they were mostly the same boilerplate, same glue, with different business logic. Writing a lot of different, varied programs is key.
Refactoring your code, or possibly coming up with an entirely different approach to solving the problem, is what I consider one of the most important parts of becoming a better programmer.
I don't know if I can describe this very well, and I've only recently started explicitly noticing this about myself, but I seem to have a well-developed intuition of how much I am "in the dark" about a particular domain, problem, technology, library, behavior, etc. I seem to sense well (and can back up with explicit arguments if necessary, but it stars with a feeling) when there's too much darkness around me, and then my primary focus must be on learning/understanding/tinkering/iterating to make it go away, rather than groping around in it. But it's also important to know when to stop, to avoid the danger of depth-first devouring of information that can consume too much time (not unlike "wikipedia surfing", when you suddenly come to and realize you've been reading it for three hours).
I never imagined this to be any kind of special ability, until I started noticing that some otherwise competent people seem to lack it. So perhaps it's a useful habit, and perhaps it can be cultivated, but I'm not sure how, except by trying to be aware of your ignorance as much as possible.
This principle seems to apply in different situations: when designing, when debugging, when writing code to interact with someone else's code. It always pays to maintain a mental model that includes the gaps, and to estimate how important filling the gaps is.
For example, when debugging a difficult problem, like an elusive bug or mysterious behavior, I usually make conjectures of where the problem could broadly originate, and try to rule them out one by one. Maybe I can ask if things are already "bad" after this place or before this place in the source code. Maybe I can vary or reduce interaction with other systems to rule out the problem there. I'm half-explicitly half-intuitively dispelling the "darkness" around my understanding of the problem, forcing it to hide in fewer and fewer places. Suppose that one of the plausible conjectures is that this mysterious behavior may be caused by a bug in a core library or the compiler/interpreter of the language. I may need to "dive" there and start reading much unfamiliar code to learn and understand those domains, but I'm going to postpone this until absolutely necessary, and rule out easier domains first. I'm managing my ignorance explicitly.
(Now that I'm thinking about it, maybe this is why many people, including myself, often prefer debug prints to working in a debugger: debug prints are good at giving you useful negative information: "it's still OK here, the bug's not here". When working interactively with a debugger, this is more difficult to get at.)
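A toy Python sketch of debug prints as negative information; the pipeline stages here are hypothetical:

```python
# Each print marks a checkpoint: if the values look right there,
# the darkness has one fewer place to hide.
def process(records):
    cleaned = [r.strip().lower() for r in records]
    print(f"after clean: {cleaned!r}")   # still OK here?
    deduped = list(dict.fromkeys(cleaned))  # dedupe, preserving order
    print(f"after dedup: {deduped!r}")   # still OK here?
    return sorted(deduped)

print(process(["B ", "a", " b"]))  # ['a', 'b']
```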
Or imagine that you're thinking about how you will use a standard component - an SQL database, a "NoSQL" database, a network library, S3, a language, anything. Assume you understand the API. How much do you know about the constraints and limitations of that component, and how much should you know? I don't feel the need to throw together a piece of code that uses connect(), accept(), send() etc. before I use sockets pervasively, but if it were my first time writing a network client/server, I probably would. I've never used S3, so merely reading about it and reading the API wouldn't be enough to start something big using it. I'd have to tinker a little first, get some intuition, dispel some darkness. All this seems rather trivial to write out, but I think that we fail to act this way surprisingly often. I've seen people write multi-threaded programs in pure Python, complete with starting multiple threads, using locks, etc., oblivious to the existence of the GIL and the fact that they're losing rather than improving performance. I've almost done the same thing myself when I was new to Python (I still think that Python hides this aspect of its behavior much too well from outsiders and beginners, and am a little chagrined over it).
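The GIL pitfall mentioned above can be demonstrated in a few lines; this is a sketch, and the actual timings will vary by machine and Python version:

```python
# CPU-bound pure-Python work split across threads does the same total work,
# but the GIL lets only one thread execute bytecode at a time, so the
# threaded version is typically no faster (and often slower).
import threading
import time

def count_down(n):
    while n > 0:
        n -= 1

N = 2_000_000

start = time.perf_counter()
count_down(N)
count_down(N)
sequential = time.perf_counter() - start

start = time.perf_counter()
threads = [threading.Thread(target=count_down, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s, threaded: {threaded:.2f}s")
```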
Premature optimizations usually fall under this principle as well. When I'm optimizing prematurely, it's because I am not uncomfortable enough with the amount of darkness around the behavior of my system. I don't actually have a good understanding of where the bottlenecks are now or may be in the future, but it doesn't bother me enough; I'm groping in the dark without realizing it. If I do realize it, I will step back, try to look at my system with a critical eye and do some hard measurements before I try to optimize. This will usually be a good thing.
isn't quite the same thing, but it may be related. Flexibility with ignorance can lead to better understanding of one's ignorance and faster adaptiveness overall.
Per programming productivity, I think about it with stacks or queues. There's always something you can work on -- and eventually they all 'dovetail' together.
You're right that if the threads are mostly I/O-bound, it can be a net win, although I seem to recall that even in this case, Dave Beazley's work showed that even one CPU-bound thread is enough to spoil the party significantly for the rest.
Re:Premature optimizations. I believe the great Pele offers something to cure this... it might help you.
When I was younger, I hoped I knew pretty much everything I needed to know. Oh boy, that was hard. That effectively cut me off from learning, because if you don't admit you don't know something, you can never learn anything. My learning was diverted to many impractical programming mind games instead of bare hands-on programming.
Luckily, programming is a very binary thing: a program either works or it doesn't. If you don't know something or you don't understand something, you can't solve the problem. Enough cases where someone smarter had written code that I just couldn't have written the same way, or as elegantly, finally set me back on my feet.
Then I dipped slightly to the other side of the axis. I assume I don't know anything about a problem or a new piece of code until I study it enough to confirm that certain similarities to what I've seen before do exist. The downside is that it takes time until I find my confidence, but eventually the magic dissolves, I see how the program works, and I finally touch the code and start making modifications. But I remain very careful about making a big modification unless I'm really, really, really sure I know better.
While it is stressful to the ego, for me it's a much better way.
In other words, don't go against the grain of whatever framework/library you are working in. Learn the path of least resistance.
This leads to a surprisingly large number of practical habits.
1. I welcome code reviews. Some people can't stand their code being criticized by other people. To me the fact that someone else spends time improving my code is a clear win. Ego is not bruised if you start with assumption of being stupid.
2. I seek out and use tools that help me find bugs automatically and understand my code better. Static code checkers like clang analyzer or cppcheck or pychecker or lint. Valgrind, good memory and cpu profilers, good debuggers. Source Insight (an editor).
3. I see enormous value in continuous build systems and automated testing (be it unit tests or more holistic tests).
4. I step through new code I write in the debugger just to verify that it behaves the way I expect.
5. I stay away from complexity, both self-inflicted (like trying to be too clever when implementing something) or inflicted by the tool (e.g. I avoid using advanced features of C++). I avoid multi-threading as long as I can.
6. I add diagnostics to my code. Logging, asserts in debug builds, built-in crash dump submission to my site for analyzing crashes that happen in the wild.
7. I know that despite doing all I can to prevent it, the bugs will happen and will have to read my own code to fix them long after I wrote that code. Therefore I try to make the code as readable as possible for my future self. Balanced comments (not too much, not too little). No cryptic names for variables or functions. No long functions with complex logic. I take the time to make my code look consistent.
8. It's better if other people sweat writing and fixing bugs in their code than me in mine. I look for high quality, reputable components instead of re-inventing the wheel. I would much rather use SQLite than write my own persistence layer.
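Point 6 can be sketched in Python with the standard logging module plus asserts; the function and its invariant are invented for illustration:

```python
# Logging records what happened in the field; asserts catch broken
# invariants loudly in debug runs (they're stripped under 'python -O').
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("billing")

def apply_discount(price, rate):
    # The invariant: a discount rate is a fraction between 0 and 1.
    assert 0.0 <= rate <= 1.0, f"discount rate out of range: {rate}"
    discounted = price * (1.0 - rate)
    log.debug("apply_discount(price=%s, rate=%s) -> %s", price, rate, discounted)
    return discounted

print(apply_discount(100.0, 0.25))  # 75.0
```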
I used to write a lot of things from scratch, unnecessarily. Whenever I find myself working at a level of abstraction much lower than the problem I'm trying to solve, I start googling. This has led me to django, grid 960 (css), jquery ui, etc.
These days, writing a new app usually means seeing how far I can get with Drupal modules first.
This has helped immensely when I've gone back months later and tried to figure out what I was thinking; now, it actually makes sense!
2. Accepting that code I wrote a month ago sucks. If it doesn't then I haven't improved/developed. That's really the point of refactoring. I do my best today and move on.
3. Socializing. Yes, going out to hack/codefests etc. makes you better. BTW: Yes, you have to speak with other people/developers...sitting in the corner by yourself and drinking your latte is usually not enough.
The first time you forget the name of the function; the second time you remember the name but forget the semicolon; the third time you forget the underscore; and so on. Meanwhile, you slowly remember those errors. In the end, you write your code without any mistakes, right out of your head.
Keep notes - I use blogger.com
Write documentation - writing man pages has forced me to fix corner cases
Having projects with real users - they are annoyingly good at finding errors
Turn on -Wall, keep coding until you get no warnings
Often the best place to solve the problem is AFK.
Healthy body, healthy mind - combined with the above I like to bicycle, an hour in the pedals pays you back. Cooking proper food yourself from real ingredients feeds your brain.
TMTOWTDI - dabble in languages you don't use as your main such as Forth / Scheme / Assembler / J / Factor / Brainfuck