Noticing pain points. It takes time to develop a taste for good solutions. But when you modify or write code and find yourself saying, "this is harder than it should be," that's a code smell, and there is a better way to solve the problem globally.
Thinking first, writing later. When practicing for the ACM programming competitions in college, I discovered that when writing code before understanding the solution, I eventually needed to start over. I have never seen a counterexample.
Learning new things. Every year I try to learn a new major skill. Two years ago I learned Common Lisp, last year I hacked on some AI textbooks, and this year I'm teaching myself how to build websites in Python. Try for some variety - Alan Kay said "Perspective is worth 20 IQ points," and while I make no standardized testing claims, learning from multiple fields can connect the dots in interesting ways.
Reading code. Norvig's PAIP knocked me out of my Object-Oriented rut and changed how I think about coding problems.
Working with smart people. Surrounding yourself with motivated and effective workers has a great multiplying effect on your own productivity.
"Thinking first, writing later. When practicing for the ACM programming competitions in college, I discovered that when writing code before understanding the solution, I eventually needed to start over. I have never seen a counterexample."
The thing is, sometimes writing code is how you best come to understand the solution. One of the things that has made me a better programmer was realising exactly this. If there is something I don't understand, I now try to program it rather than thinking too much about it first. Call it prototyping if you want. It's a fine line, of course.
I definitely won't disagree with anything you said in the general case, but for this:
Thinking first, writing later .... I discovered that when writing code before understanding the solution, I eventually needed to start over.
I think this may depend on how and when you think most effectively. Having the discipline to be productive in pure thought is impossibly difficult to quantify. At times I feel productive working that way, but other times just writing code helps me think through things faster. I type fast so it's not a big burden to type a lot of stuff even if I have to delete it.
Of course, sometimes just drawing things out on paper helps as well. I seem to recall Dijkstra making some criticisms of that, but for me it is often highly effective.
Either you plan too much ahead and end up tweaking your design forever, eventually building high-level cruft code that still won't get anything done, because you never dig down to the ground to find the real culprits of your solution.
Alternatively, you plan nothing at all and end up rewriting your program several times, because you never see the big picture; you're just digging up dirt from many different spots.
I think it works best if you plan only as much as you initially can and then start coding a proof-of-concept version as quickly as possible. You can only plan for what you know of, and that isn't much at first, but it's something. Getting to coding quickly helps you understand the problem better, and you will throw away many approaches that turn out to be suboptimal or solutions to the wrong problem, but there's nothing wrong with that. When the coding brings in more knowledge, you can plan a bit more, and repeat.
If you're good, you can do that most of the time; a conservative estimate might be half the time. More importantly, you will never hit the optimal planning-versus-coding point, but waver around on both sides of it, sometimes planning too much and sometimes coding too much.
Experienced programmers can keep the amplitude of that zigzagging low; inexperienced programmers rush from one extreme to the other, doing too much of either.
Since I'm still in the early stages of learning to program (past the "durr what are for loops" stage, but still not good at a language), I've found that it's nearly impossible to really plan anything out.
I can plan out the structures I'll put stuff in and the names of the classes and methods that'll work with the data, but I'm clueless until I start trying to implement the idea.
I suspect that once I've done enough things, it'll be easier to plan. I know when I go back to any of my old C# projects, I keep slapping my forehead and am able to see other (often better) solutions.
So I think the better advice is: Try until you know what you're doing, then toss it out and plan around all the terrible mistakes you made.
I think it's more "think about what you wish to accomplish with the piece of code that you're about to write". Are you trying to write something that will satisfy users' desires? Then you darn well better think about every aspect of those desires, and make sure that the code you write satisfies them. Or are you trying to learn what users' desires are, or how the technology you have available will help you satisfy them? In that case, you may be better off just diving in and coding, but you should think about the question you want answered before you code.
I think you're both right. Some people do like the grandparent suggests and map their strategies out in advance. I personally do better with Brooks's "build one to throw away" approach. I like this better because I can't work with a plan like others do. I need to have implemented something before I can understand it. Others work better with a different approach. To each their own.
They're not mutually exclusive approaches anyway. Alternating between them quickly can be a good way to nibble away at hard problems. And it's not a big deal if you throw away some work in either mode. Think on what takes you there faster, not necessarily with less (wasted) work.
Design your program from the high level and iterate through phases down to the low level. So for instance, I want you to make a program that, let's say, is a customer database.
High level 0: I need a customer database
Level 1a: I have these storage requirements.
Level 1b: I need these inputs
Level 1c: I need data to exit the system in these ways.
Level 2: Diagram of work-flow processes (how it interacts with real people).
Level 3: Diagrams of how the database will look, identify what types of objects you will need to work with (of course, this specific action gears you into OOP design and not functional).
Level 4: Problems. Spend some time bad-mouthing everything and how it works, go back to level 2 and when you're tired of crying about the thing go onto the next level.
Level 5: Document how it works. This is your Manual.
Low Level 6: Write the code, and when you need guidance, RTFM.
Level 7: Throw the thing into the trash and start over, they wanted an inventory management system, another tribute to the synergistic perspicacity of business people and software engineering's verbal constipation.
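For what it's worth, the levels above can be sketched as a top-down Python skeleton before any internals exist. Everything here (class names, fields) is hypothetical, just to show each level becoming code:

```python
class Customer:
    """Level 3: one of the object types the diagrams identified."""
    def __init__(self, name, email):
        self.name = name
        self.email = email

class CustomerDatabase:
    """High level 0: 'I need a customer database.'"""
    def __init__(self):
        self._customers = {}  # Level 1a: the storage requirement, stubbed out

    def add(self, customer):
        # Level 1b: an input into the system
        self._customers[customer.email] = customer

    def find(self, email):
        # Level 1c: one way data exits the system
        return self._customers.get(email)
```

(And per Level 7, be prepared to throw it away.)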
I don't know if I can describe this very well, and I've only recently started explicitly noticing this about myself, but I seem to have a well-developed intuition of how much I am "in the dark" about a particular domain, problem, technology, library, behavior, etc. I seem to sense well (and can back up with explicit arguments if necessary, but it starts with a feeling) when there's too much darkness around me, and then my primary focus must be on learning/understanding/tinkering/iterating to make it go away, rather than groping around in it. But it's also important to know when to stop, to avoid the danger of depth-first devouring of information that can consume too much time (not unlike "wikipedia surfing", when you suddenly come to and realize you've been reading it for three hours).
I never imagined this to be any kind of special ability, until I started noticing that some otherwise competent people seem to lack it. So perhaps it's a useful habit, and perhaps it can be cultivated, but I'm not sure how, except by trying to be aware of your ignorance as much as possible.
This principle seems to apply in different situations: when designing, when debugging, when writing code to interact with someone else's code. It always pays to maintain a mental model that includes the gaps, and to estimate how important filling the gaps is.
For example, when debugging a difficult problem, like an elusive bug or mysterious behavior, I usually make conjectures of where the problem could broadly originate, and try to rule them out one by one. Maybe I can ask if things are already "bad" after this place or before this place in the source code. Maybe I can vary or reduce interaction with other systems to rule out the problem there. I'm half-explicitly half-intuitively dispelling the "darkness" around my understanding of the problem, forcing it to hide in fewer and fewer places. Suppose that one of the plausible conjectures is that this mysterious behavior may be caused by a bug in a core library or the compiler/interpreter of the language. I may need to "dive" there and start reading much unfamiliar code to learn and understand those domains, but I'm going to postpone this until absolutely necessary, and rule out easier domains first. I'm managing my ignorance explicitly.
(Now that I'm thinking about it, maybe this is why many people, including myself, often prefer debug prints to working in a debugger - debug prints are good at giving you useful negative information: "it's still OK here, the bug's not here". When working interactively with a debugger, this is more difficult to get at).
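To make the debug-print idea concrete, here's a sketch (the pipeline stages are made up) where each print is exactly that kind of negative information, forcing the bug to hide in fewer places:

```python
# Each print says "still OK here, the bug's not here."
DEBUG = True

def trace(label, value):
    if DEBUG:
        print(f"[{label}] {value!r}")
    return value

def pipeline(raw):
    fields = trace("after split", raw.strip().split(","))
    numbers = trace("after convert", [int(x) for x in fields])
    return trace("after sum", sum(numbers))
```

If the value still looks right "after convert" but wrong at the end, the darkness has been forced into the last step alone.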
Or imagine that you're thinking about how you will use a standard component - an SQL database, a "NoSQL" database, a network library, S3, a language, anything. Assume you understand the API. How much do you know about the constraints and limitations of that component, and how much should you know? I don't feel the need to throw together a piece of code that uses connect(), accept(), send() etc. before I use sockets pervasively, but if it were my first time writing a network client/server, I probably would. I've never used S3, so merely reading about it and reading the API wouldn't be enough to start something big using it. I'd have to tinker a little first, get some intuition, dispel some darkness. All this seems rather trivial to write out, but I think that we fail to act this way surprisingly often. I've seen people write multi-threaded programs in pure Python, complete with starting multiple threads, using locks, etc., oblivious to the existence of the GIL and the fact that they're losing rather than improving performance. I've almost done the same thing myself when I was new to Python (I still think that Python hides this aspect of its behavior much too well from outsiders and beginners, and am a little chagrined over it).
Premature optimizations usually fall under this principle as well. When I'm optimizing prematurely, it's because I am not uncomfortable enough with the amount of darkness around the behavior of my system. I don't actually have a good understanding of where the bottlenecks are now or may be in the future, but it doesn't bother me enough; I'm groping in the dark without realizing it. If I do realize it, I will step back, try to look at my system with a critical eye and do some hard measurements before I try to optimize. This will usually be a good thing.
In the case I had in mind, the threads were CPU-bound and the programmers naively thought that they were running concurrently. For someone coming from a C++ or Java background, that's a natural assumption to make, and Python doesn't put up many (any) red flags to warn you on the way. For example, note that the pydoc for the module threading manages to avoid any mention of the GIL. You can learn how to create and start Python threads without having any idea that they won't run Python code concurrently.
You're right that if the threads are mostly I/O-bound, it can be a net win, although I seem to recall that even in this case, Dave Beazley's work showed that even one CPU-bound thread is enough to spoil the party significantly for the rest.
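A minimal sketch of the trap being described (the function names are mine): CPU-bound work split across threads still computes the right answer, it just doesn't run concurrently on CPython, because the GIL lets only one thread execute Python bytecode at a time:

```python
import threading

def count_down(n, done, i):
    # Pure CPU-bound work: no I/O, so the GIL is contended the whole time.
    while n > 0:
        n -= 1
    done[i] = True

def run_cpu_bound_threads(num_threads=2, n=100_000):
    done = [False] * num_threads
    threads = [threading.Thread(target=count_down, args=(n, done, i))
               for i in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Correct result, but no parallel speedup over a single loop on CPython.
    return all(done)
```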
Seriously, the most difficult problems I've faced have been solved easily just after taking a quick nap (30-40 mins). Most of the time it gets to the point where there's no solution on the horizon, I'm pulling my hair out and biting my keyboard, and then, after a quick nap, most of the problem is solved within 20 mins of sitting in front of my computer again. It probably has something to do with the unconscious mind.
But the problem is employers may not see this "sleeping to solve problems" act as productive as you do. I read that Google has "nap rooms", and I'll definitely have one of those when I start a company.
Actually by taking a nap you are doing something really cool.
Let me explain:
You are actually inviting your right hemisphere (the creative side of your brain) to come out and play. The right hemisphere cannot be forced into thinking that much, so by taking a nap you are actually letting the unconscious work for you.
As a really interesting side note, Thomas Edison used to take a nap whenever he faced a difficult problem. He would nap with ball-bearings in his hand, and when he fell into a deep sleep, dropping the ball-bearings would wake him up, and he would tackle the problem. :)
It's related to early brain research which discovered some left/right specialization. Medically it's been mostly debunked, but for some reason pseudo-science types really latched on to the idea.
PS: By "debunked" I mean it was discovered that the brain has more plasticity in how and where specific tasks are performed than initially assumed. Also, the high-level understanding of what doing "Math" or "Poetry" means has little connection to how the brain actually does this stuff. For example, some people can count time accurately while reading; other people can't. The most probable explanation is that as you grow up, the brain chooses how to approach high-level problems in a fairly arbitrary fashion.
I can't agree with this enough, though I've found that the sweet spot for me is a 20-minute power nap. Any longer than that and I feel groggy rather than refreshed when I wake up and start coding again.
I'm a big fan of technologies that have steeper learning curves but pay dividends over time. Examples: Emacs, Lisp, XMonad, git, my Kinesis Advantage keyboard, Swype, etc. It's nice when technologies are user-friendly or obvious to use immediately (like Quicksilver), but if it's something I'll be using all the time, I go for long-term efficiency over early user-friendliness.
1) Being conscientious, actually caring about the work I put out. Spending that extra hour or two or four after I am 'done' with a feature, cleaning up the layout, making sure the field level validation all works, testing alternate paths, adding a few bells and whistles.
2) Learning how to communicate effectively. Keeping interested parties in the loop at all times. Not hiding mistakes or difficulties, or waiting until the last minute to let a PM know that a task is going to be late.
3) Not falling into the 'stupid user' trap. Your users aren't stupid. They know their business better than you do. You need to understand and accommodate their workflow, not the other way around.
1) is very important for me too. Many people will say 'oh don't take your code so personally' but that just doesn't work for me. When I write something I want to be proud of it and I'm going to feel bad if it doesn't work even if it wasn't my fault or responsibility.
Also agree on 3) If a user asks for something 'stupid' you can't just shoo them away. There's a reason they asked and your job is to find out what that reason was - and to come up with a solution for your client that fits the rest of the system. The client may not know what they want but they do know that there's a problem that needs to be addressed somehow.
Learning. I'll probably be in the middle of learning something new and interesting the day I die. Some on here will disagree with me, but I'm a huge advocate of learning at the expense of doing. Spend time to learn how to not only do something, but do it well. After all, most programming tasks are only useful for a limited time. Knowledge is usually useful for much, much longer. Sometimes I'm surprised at how useful useless facts really are.
Plus if nothing else, you'll be able to do that task much faster the next time.
"Any man who reads too much and uses his own brain too little falls into lazy habits of thinking."
To me, the best readings aren't ones that I learn directly from. They're the ones that spark thought processes inside my head. I haven't read past Chapter 2 of pg's "On Lisp", but I still consider it one of the best books on programming languages because it changed how I think about them in a lot of ways.
I find I can spend a whole day reading stuff and feel like I'm much better off, but if I spend half the time reading and half the time trying to apply what I've learned, I retain it soooooo much better.
I know it has probably been said a million times before, but I have found that in many contexts, trying to tackle problems in a purely-functional style has helped me immensely. To that end, learning languages like Scheme and Haskell forced me to think of programs in a more algebraic fashion -- so many things can be accomplished in a concise, readable, maintainable fashion using basic functional concepts like function composition/bootstrapping, closures, and higher-order functions. Furthermore, because much of it is typically referentially transparent, code written in this way is often fundamentally "safe" in the sense that it lacks side effects, making it much less likely to segfault or do catastrophically bad things.
Thinking in a functional fashion also made it easy for me to pick up JS and start writing event-driven code almost immediately, because so much of modern JS relies on proper understanding of closures and asynchronous events. Furthermore, if you find you like functional programming, I also highly recommend teaching yourself Church's basic untyped lambda calculus; I found that it helped give me a much better understanding of the basic underpinnings of so many languages. Similarly, having an understanding of the fundamental concepts that underlie all programming languages -- such as the differences between call-by-value and call-by-reference, or static and dynamic scope, or the various kinds of OO systems (prototype-based, class-based, mixins) -- is really important. My knowledge of these has not only made it really easy for me to pick up and learn a language extremely quickly, but has also saved me on more than one occasion from a pitfall that I would have otherwise made (such as forgetting that older versions of Perl use dynamic scope by default).
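A tiny sketch of the concepts mentioned above - function composition, closures, and higher-order functions (`compose` and `make_adder` are hypothetical helpers, not from any particular library):

```python
from functools import reduce

def compose(*fns):
    """Compose right-to-left: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

def make_adder(n):
    # A closure: the returned function captures n from this scope.
    return lambda x: x + n

inc = make_adder(1)
double = lambda x: x * 2
inc_then_double = compose(double, inc)  # higher-order: functions as values
```

Nothing here mutates state, so each piece can be tested and reasoned about in isolation - the "referentially transparent" property mentioned above.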
Also, as many other people here have mentioned, coding up solutions to problems from ACM competitions, Project Euler, Google Code Jam, etc. is a great way to get good at just coding. It is also a great way to familiarize yourself with a language.
I work on what is most exciting to me and then switch when the excitement wears off, switching back again when the excitement comes back.
If nothing is exciting, if there's resistance (usually from lack of sleep, lack of exercise) then I focus on restoring my routine, and working on something easier like copywriting, email, invoice admin etc. until the excitement for something else wells up again. But I keep working. Often you need to just press on before the excitement comes back.
In the long term, I pick technologies I'm excited about, even if they bring short-term costs. Connecting the dots, this process has led to near-perfect strategy in hindsight.
This is counter-intuitive, but embracing NIH has actually made me a better programmer. It's the spirit of vertical integration. It's taught me how to do things and developed my understanding. Rather than using say an SMTP client, if I don't know how it works I dive in and write one myself. It costs the project in the short term, but in the long term the project at least has one more programmer who understands the nuances of another protocol.
Imagine a Google that outsourced their filesystem, data center, server racks, JS engine, browser, caching, proxying, map/reduce, machine learning, DNS, etc. Would they be as good as the Google of today? That's what separates IT from hackers. IT configure and use existing software. Hackers write their own. IT know how to "cobble together" things. Hackers understand things. Without the intimate understanding that comes from NIH, there is no room for the hack.
Being a better programmer is a long term motivation, and it's often the methods that pay off in the long term that contribute most towards this goal.
I've found that when "nothing is exciting" it's almost always due to the "resistance" you mentioned. Realizing this has really helped me to avoid getting discouraged when it seems like the thrill is gone.
1. Embracing paranoia - enumerating everything that can go wrong with my code and testing for it. Over time, learning to think of more things that can break.
2. Writing down every question that occurs to me about the technology I am working with, at any point, specifically the behaviour of libraries and nuances of programming languages: I am not an expert in any programming language. Then chasing those questions until they are resolved.
3. When stuck with a slippery bug, attempting to reconstruct the bug in a toy program. If reconstructed, fixing it is easier. If not, I know it's not where I thought it was. Sometimes I never make it as far as actually writing the toy program; the intent is enough.
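As a sketch of habit 1, the "paranoid" version of even a trivial function enumerates its failure modes up front (`parse_port` is a made-up example; the checklist is the point):

```python
def parse_port(s):
    """Parse a TCP port, failing loudly on every bad input we can think of."""
    try:
        port = int(s)
    except (TypeError, ValueError):
        raise ValueError(f"not an integer: {s!r}")  # None, "abc", "3.5"...
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port
```

The habit is less about this particular function and more about the list: wrong type, unparseable, out of range - what else could break?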
Programming is 10% writing code, and 90% reading code. Reading your own code (debugging, refactoring, coming back to something you wrote on Friday on Monday), and reading other code ("why doesn't this library work?").
Once you learn to read, a number of things happen. You are no longer a slave to your libraries; you can open them up, see exactly what they're doing, and either change your mental model or change the code. You don't have to get stressed out about "coding guidelines" anymore, because you actually understand what the code means. Inane details like tabs versus spaces and cuddled elses or whatever don't matter anymore, because you have seen all the possible combinations. And, you pick up the style of your peers, because you aren't coding in a vacuum anymore.
I could go on and on, but if I boil it down to one action item, it's "when you have a problem with a library, read it". Everything else comes from there. (And before you know it, you'll be mentioned in the Changelog of everything you use! Good for getting your next job.)
That reminds me of my worst habit as a programmer: I want to solve every problem myself, instead of asking for help.
I certainly don't recommend quitting every time you have a problem you can't solve in half an hour, but there's a happy medium between that and wasting days (while you're getting paid, or at least wasting your project's time) working on something that the guy in the next room could sweep away in ten minutes.
Second that - there's a definite point of diminishing returns and it's worth knowing when you're reaching that point because after that you're just wasting time, burning out, and throwing off future efforts.
2. Find a workflow that keeps you moving forward, no matter how slowly. Here is mine: basically, design top-down, then code bottom-up, testing each piece as you go. Here's an example of what I do when I finally get to coding:
a. write the comment for the function
b. write the signature
c. write the tests
d. write the internals of the function
It will almost never work out that perfectly, but you should have the goal that, when you are done with that function, you should never have to look at it again*
* you will. I do. Being perfect would be nice, but it just isn't going to happen.
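To illustrate steps a-d on something tiny (`median` is just a stand-in example):

```python
# a. the comment/docstring comes first, then b. the signature:
def median(values):
    """Return the median of a non-empty list of numbers."""
    # d. internals, written last
    s = sorted(values)
    mid = len(s) // 2
    if len(s) % 2:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

# c. the tests, written before the internals were filled in:
assert median([3, 1, 2]) == 2
assert median([4, 1, 2, 3]) == 2.5
```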
Whenever I get my hands on a new technical book, I keep a list in a Google Docs document of all the new terms and concepts that show up. Every now and then I take a peek at that list. It helps me increase and/or reinforce my knowledge of the area.
Before committing anything, I double-check everything, refactoring the code I have modified/added. This is easy because, once you have your new functionality or fix working, maintaining the new correct behavior as you modify the code is low-risk if you do it step by step.
English, being my second language, is an important part of my job. I try to consume all my books, blogs, movies, and so on in English. Every time I come across a new word, I look it up in WordReference (a plugin for Chrome) and add it to a list in Sidenote (a Mac app). When I need to communicate through email, I sift through that list looking for words that may fit in my message. It is a simple practice that has helped me out big time in improving my skills (of course I share this list with my friends!). Certainly this is not directly related to programming but, as I see it, if you want to improve as a programmer, you're going to need to become rather fluent in English.
Writing a blog. It is especially invaluable when you get feedback on your posts. Folks out there will give you a kick in the ass more often than you expect. Expect and embrace it. It's a great way to grow.
Get to know the finest blogs, books, and people in your domain. For example, years ago I programmed in Java, usually building web sites with the help of well-known frameworks (Spring, Struts, you name it). I thought I was quite competent until I came across a book called "Effective Java" by a dude named Joshua Bloch. Needless to say, I was struck by it and ended up feeling like I knew nothing (literally). You can't program in Java and at the same time not know who Bloch, Goetz, Uncle Bob, and so on are. Same with Lisp and Norvig, Siebel, Graham.
I always strive to keep up with the basics. This is the killer skill that shines when everything else I try fails. With a new technology, starting out by learning the ropes at the top abstraction layers makes me feel competent because I can get my job done, plus or minus. Basics alone aren't that helpful, but when something arises that veers from the standard way of doing things promoted by the top abstraction layers, a great deal of the time I need to dig right into the core in order to solve my issue. I personally call this the Onion Theory. This is related to the 'darkness surrounding you' concept explained above. Awesome.
When you're developing, there are a thousand little decisions to be made. Don't assume that the server will be up before the client. Don't assume that the file you open will be writeable. Don't assume that the database is there. Don't assume that the user knows what you know. Don't assume everything works.
Maybe it's because I spend lots of time in teams developing large, (overly) complicated systems, but a common problem is simply finding out what went wrong. Strictly speaking, you can sometimes accept something as a precondition if it is explicitly stated as such--but at a minimum you still need to fail loudly and obviously if it doesn't hold. Silent failures suck.
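A sketch of what "fail loudly" can look like in practice (`load_settings` and its message are hypothetical): if you must accept a precondition, at least check it and raise an obvious error instead of limping along:

```python
import os

def load_settings(path):
    # Precondition: the settings file was put in place by an earlier step.
    # Don't assume it - check, and fail with an obvious message.
    if not os.path.isfile(path):
        raise FileNotFoundError(f"settings file missing: {path!r}")
    with open(path) as f:
        return f.read()
```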
Don't assume anything while debugging, either. As mentioned elsewhere, when debugging I find it useful to take the symptom and think of every possible failure that could cause it. Then, start paring down the list by doing more tests. Do not assume anything. Don't assume the cables are connected. Don't assume the code is current, or the driver is working, or the library documentation is correct. Verify, verify, verify.
By all means, use your experience to prioritize your search space. I personally prioritize stuff that's easy to check, or stuff that's known to have a problem. Of course, I generally suspect my own code long before turning my gaze toward third-party components. But don't assume! Remember the old saying, "it's not a compiler bug"? Well, yeah--that's really the last thing I'd suspect. But if I ruled everything else out, I'd at least do a Google search to see if anyone has brought up the possibility.
But of course, I've never gotten that far. So far, every debugging problem has been reducible to a failure to check my assumptions.
There's a fantastic Mark Twain quote that I was amazed to find the other day; apologies if I've included it here before: “It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so.”
For me, it has been writing small little utility projects using technologies that I'm curious about.
Recently it was jQuery and the Play! Framework -- for the longest time I was trying to think of some "grand" project that I could write to use them all together and become the master of all that is web... then a year went by and I continued to try and figure out what this ultimate project would be.
What a waste.
Then one Saturday I was bored and sat down and wrote a simple collection of AJAX utilities that do the most basic crap that anyone could write - using those two technologies. It helped get me over that hump and I learned a lot about the two techs I was curious about.
I went ahead and stuck the utils online for anyone to use and moved on with my life.
It was a great exercise for me, and a method of write/release I plan on using from this point forward for just about anything.
So I do a lot of stuff that's not programming (dealing with managers mostly). But I've begun to practice a simple routine - make tea, close my door, and do nothing but write code from 2 to 4 (or 1 to 3, or 1:30-3:30). Every day.
The first habit I try to impress on junior programmers is not to be satisfied when you've fixed a problem: you aren't done until you understand what went wrong in the first place, and have a plausible explanation as to why the change you made fixed it. (Bonus points if you go looking for other instances of the same problem.)
I believe that the quality of the code you write only shows itself over time. Every other metric we use to measure quality is subservient to one simple question: how does your code age?
How easy is your code to modify later when you need to add a new feature? I think that's the only measure of good code that I trust. I get a little proud of myself when I need to modify something I've written previously to add a new feature and it's relatively easy. So I look back at how I wrote it then and try to learn from it. The opposite is also true. If I need to modify my code to add something and it is truly painful, then I know I did something wrong way back when. The same is true for bugs found. If every bug found requires massive code changes, then I did a pretty piss-poor job initially. But if most bug fixes only require a couple of lines, then I did well.
This makes identifying good and bad practices especially difficult because it requires that you stick around for things to fall apart. This is one of the reasons that commercial software sucks so much and why consultants leave giant messes in their wake. The short-term priorities of business conflict with the long-term reality of determining software quality.
As a developer it's your obligation to learn from your past mistakes and better yourself.
1- Semantics matter. Names should neatly describe what they refer to.
2- Avoid mutable variables. Using the same name for two things is just another way to get tripped up.
3- If you're going to ship it, try to stay in the lower (50%) range of your abilities. This is how to get things done lightning fast, in my opinion. Work on your skills and improve every day, but to produce something functional and enduring you should be technically conservative and produce something you thoroughly, thoroughly understand.
4- Your abstractions should be accurate, precise, and cognitively manageable.
5- Non-programming organization skills matter. Identify stakeholders, gather requirements, set priorities, PRODUCE, get feedback... repeat.
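As a toy illustration of points 1 and 2 (the function is hypothetical): one descriptive name per meaning, rather than reusing a single variable for several different things:

```python
def total_price_with_tax(net_prices, tax_rate):
    subtotal = sum(net_prices)   # descriptive name, never reassigned
    tax = subtotal * tax_rate    # a new name for a new thing
    return subtotal + tax
```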
Never just shrugging some problem off with "boy, that's weird." I usually can't let something go until I understand what caused an issue. It eventually became unrealistic to chase down every oddity, but I've chased enough of them to learn a lot about different pitfalls.
Funny, because that's one of the things that I've learned: when to let something go as "that's weird" and just solve the problem without spending days digging into root causes. Sometimes it's worth it, but usually it just takes time away from getting other priorities accomplished.
Making peace with the fact that I'm way too stupid to write bug-free code.
This leads to a surprisingly large number of practical habits.
1. I welcome code reviews. Some people can't stand their code being criticized by other people. To me, the fact that someone else spends time improving my code is a clear win. Ego is not bruised if you start with the assumption of being stupid.
2. I seek out and use tools that help me find bugs automatically and understand my code better. Static code checkers like clang analyzer or cppcheck or pychecker or lint. Valgrind, good memory and cpu profilers, good debuggers. Source Insight (an editor).
3. I see enormous value in continuous build systems and automated testing (be it unit tests or more holistic tests).
4. I step through new code I write in the debugger just to verify that it behaves the way I expect.
5. I stay away from complexity, both self-inflicted (like trying to be too clever when implementing something) and inflicted by the tool (e.g. I avoid using advanced features of C++). I avoid multi-threading as long as I can.
6. I add diagnostics to my code. Logging, asserts in debug builds, built-in crash dump submission to my site for analyzing crashes that happen in the wild.
7. I know that despite doing all I can to prevent it, bugs will happen, and I will have to read my own code to fix them long after I wrote that code. Therefore I try to make the code as readable as possible for my future self. Balanced comments (not too much, not too little). No cryptic names for variables or functions. No long functions with complex logic. I take the time to make my code look consistent.
8. It's better if other people sweat writing and fixing bugs in their code than me in mine. I look for high quality, reputable components instead of re-inventing the wheel. I would much rather use SQLite than write my own persistence layer.
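To make point 8 concrete, here is a minimal sketch of leaning on SQLite (via Python's standard-library sqlite3 module) instead of hand-rolling a persistence layer; the table and key names are illustrative, not from the original comment.

```python
import sqlite3

# SQLite handles file formats, transactions, and crash safety for us.
conn = sqlite3.connect(":memory:")  # a real app would pass a file path
conn.execute(
    "CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)"
)

def save_setting(key, value):
    with conn:  # commits on success, rolls back on error
        conn.execute(
            "INSERT OR REPLACE INTO settings (key, value) VALUES (?, ?)",
            (key, value),
        )

def load_setting(key, default=None):
    row = conn.execute(
        "SELECT value FROM settings WHERE key = ?", (key,)
    ).fetchone()
    return row[0] if row else default

save_setting("theme", "dark")
load_setting("theme")  # "dark"
```

A few lines of glue like this replace a whole category of bugs (partial writes, corrupt files, ad-hoc formats) that a home-grown persistence layer would have to earn its way through.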
Assuming I don't know shit yet (but that sometimes I do know something).
When I was younger, I hoped I knew pretty much everything I needed to know. Oh boy, that was hard. It effectively cut me off from learning, because if you don't admit you don't know something, you can never learn anything. My learning was diverted into many impractical programming mind games instead of bare hands-on programming.
Luckily, programming is a very binary thing. A program either works or it doesn't. If you don't know something or don't understand something, you can't solve the problem. Enough cases where someone smarter had written code that I just couldn't have written the same way, or as elegantly, finally put me back on my feet.
Then I dipped slightly to the other side of the axis. I assume I don't know anything about some problem or new piece of code until I study it enough to confirm that certain similarities to what I've seen before do exist. The downside is that it takes time until I find my confidence, but eventually the magic dissolves, I see how the program works, and I finally touch the code and start making modifications. But I remain very careful about making a big modification unless I'm really, really, really sure I know better.
While it is stressful to the ego, for me it's a much better way.
Really taking the time to learn your current framework well. For example, my first rails site was written with a shallow understanding of both rails and ruby. I ended up writing reams more code than I needed to, and the code I wrote was usually a roll-my-own version of something rails/ruby already offered a cleaner, simpler, and more robust way of doing.
In other words, don't go against the grain of whatever framework/library you are working in. Learn the path of least resistance.
Prototyping semi-trivial cases. What I mean is that a solution to a problem might require some parameter, so I'll just pick a value of the parameter that makes it possible to solve the problem in a brute-force fashion and evolve the code base from that point. Because I'm a visual and interactive thinker, any kind of feedback that helps me explore the problem space as soon as possible makes me at least twice as productive.
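A hypothetical Python sketch of that workflow (the problem and numbers are invented for illustration): pin the parameter — here, the number of items — small enough that brute force is correct and fast, get real answers to react to, and only optimize later if the parameter grows.

```python
from itertools import combinations

def best_subset(items, weight_limit):
    """Pick the subset of (weight, value) pairs maximizing value
    under weight_limit. Brute force: fine while len(items) is small,
    which is exactly the point of the prototype."""
    best_value, best_combo = 0, ()
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            weight = sum(w for w, _ in combo)
            value = sum(v for _, v in combo)
            if weight <= weight_limit and value > best_value:
                best_value, best_combo = value, combo
    return best_value, best_combo

# Four items is trivially brute-forceable; the answers it gives are
# immediate feedback on whether the problem is even modeled right.
value, combo = best_subset([(2, 3), (3, 4), (4, 5), (5, 6)], 5)
print(value)  # 7, from the (2, 3) and (3, 4) items
```

Once the small-parameter version confirms the model, a dynamic-programming rewrite can be checked against it, which is the "evolve the code base from that point" part.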
I do a top down view and a bottom up view, even if just in my head. How should this look and function when it's done? What base classes will I need? Which parts go in between? Also, breaking up things I have left to do into small enough parts that they can mostly be done in a single sitting (obviously there are exceptions). Then I never have to leave feeling like I accomplished nothing.
I used to write a lot of things from scratch, unnecessarily. Whenever I find myself working at a level of abstraction much lower than the problem I'm trying to solve, I start googling. This has led me to django, grid 960 (css), jquery ui, etc.
These days, writing a new app usually means seeing how far I can get with Drupal modules first.
Heavy caffeine usage and liberal alcohol usage on occasion.
Avoid both of those.
When I have a problem, I do one of two things: 1) I work on trivial things like code cleanup and reorganizing things, or 2) I get my head completely out of the computer. Go for a walk. Sit outside and just look around for a few minutes.
The anti-pattern here is popping off to look at your email, FB, or HN. It keeps the brain active in context-switching mode, instead of settling into one context and having it work on the problem behind-the-scenes.
The biggest difference between my younger self and now is that I know how to plan my programming passively. I've reduced my actual work time by 2-3 times and increased productivity 2-3 times by simply thinking about problems and how I'm going to solve them so that when I put down code it just flows.
Code review. Oddly enough, it wasn't so much the issues that were raised by the reviewers that made me a better programmer (though they certainly helped), but the knowledge that every aspect of my code could be inspected, and I'd be expected to fix it, that forced me to double- and triple-check my work before sending it off. Since that time, even when I'm working on personal projects, I find myself coding defensively, and will often write sections of code a couple times until I'm satisfied that it is unambiguous, correct, and clear.
This has helped immensely when I've gone back months later and tried to figure out what I was thinking; now, it actually makes sense!
Thinking about problems away from the computer always seems to bring about the answer. It seems like I do my best thinking in the shower; the only problem is the usual 10-minute shower sometimes turns into 30-40 minutes.
1. Learning to love learning new languages. I have a few pet projects which are slightly longer than hello world that I practice with.
2. Accepting that code I wrote a month ago sucks. If it doesn't then I haven't improved/developed. That's really the point of refactoring. I do my best today and move on.
3. Socializing. Yes, going out to hack/codefests etc. makes you better. BTW: Yes, you have to speak with other people/developers...sitting in the corner by yourself and drinking your latte is usually not enough.
It is almost like when you learn to write. Each time you make an error and notice it, the next time you're going to write it right the first time.
The first time you forget the name of the function; the second time you remember the name but forget the semicolon; the third time you forget the underscore; and so on. Along the way, you slowly internalize those errors. In the end, you write your code without any mistakes, right out of your head.
Taking breaks for tea/showers/walks in the fresh air is the one thing with the best chance of helping me solve a problem. Of course, this is only effective after staring at it long enough that when I eventually do take a break, I can't stop thinking about it (and I'm so embedded in the problem that I know enough about it to continue thinking without referring back to it).
Understand the domain really well. You have to live and breathe it. Not just the specific problem you are trying to solve but the bigger picture. It will give you the freedom to make macro level decisions regarding your code that can have a high impact.
These improve your coding ability a lot. For example, if your application needs to scale well, you should always consider implementing efficient algorithms. Yes, you can use some open source libraries, tools, etc., but someone has to write those using efficient algorithms. This is one of the main reasons that companies like Facebook and Google recruit people using these "puzzles" or contests like Topcoder and Google Codejam.
I think these kinds of problems make you comfortable with algorithmic thinking. It may not always be necessary to implement algorithm X or whatever in your day-to-day code, but being comfortable with algorithms (by practicing) means you don't push off or miss the right moments to implement algorithm X.
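As a small illustration of the kind of algorithmic reflex the comment describes (my example, not the commenter's), here is the classic pair-sum problem: the brute-force version scans all pairs in O(n^2), while keeping a set of seen values solves it in one O(n) pass.

```python
# Brute force: check every pair, O(n^2) comparisons.
def has_pair_with_sum_naive(nums, target):
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

# One pass with a set of previously seen values, O(n).
def has_pair_with_sum(nums, target):
    seen = set()
    for n in nums:
        if target - n in seen:  # n's complement already appeared
            return True
        seen.add(n)
    return False

has_pair_with_sum([3, 8, 2, 5], 10)  # True: 8 + 2
```

Neither version is hard, but noticing that the set version exists — and reaching for it without prompting — is the habit that contest-style practice builds.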
Writing distributed code. Once your data is too large and your logic too distributed to run or debug locally, you are forced to become a much more careful programmer. I have to carefully read the code and think about what is going on, whereas before I often used debugging and stepping through the code as a crutch.