Ironically, I think this is something that affects seasoned programmers more than new ones; the more you know about the trade, the more voices you have saying "no, that's wrong! Don't do that!", which can freeze progress. The way I combat it is by writing down pseudocode in my source code - literally just English, non-compiling pseudocode - which I then "refactor" into working code (thus the first "green" is "it parses"). By making step 0 the expression of the idea rather than "writing the first line of code", I can get right into the process rather than getting hung up on the "how".
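To make the pseudocode-first workflow concrete, here is a small sketch of what it can look like in practice. The function, field names, and comments are all invented for illustration; the point is that the English comments came first and the code was "refactored" out of them:

```python
def merge_new_orders(existing, incoming):
    """Merge incoming orders into the existing list, newest first.

    This started life as plain-English pseudocode in the source file:
      # for each incoming order:
      #   skip it if we've already seen its id
      #   otherwise add it
      # sort everything newest-first
    ...and was then "refactored" step by step into the code below.
    """
    seen = {order["id"] for order in existing}
    merged = list(existing)
    for order in incoming:
        if order["id"] in seen:
            continue  # already have this one
        merged.append(order)
        seen.add(order["id"])
    merged.sort(key=lambda o: o["created_at"], reverse=True)
    return merged
```

The first "green" really is "it parses": the comments express the idea, and each comment line is then replaced by a line or two of working code.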
On the other hand, it is also important to quickly get into the state of a running program, because only that allows for iterative development where you can catch bugs shortly after you wrote them, rather than having tons of bugs after weeks of (not sufficiently tested) work.
It makes laying out the program much simpler and gives me a bird's-eye view of how easy it's going to be to maintain.
It's surprisingly effective.
It can also be explained as a way to postpone decisions you can't clearly make right now. First, you pretend to have made the decision, and later on you implement the decision "you wish you had made".
For smaller segments though, I'll just write out function signatures, and then go back and fill them out or restructure them.
aka coding by intention
I think a big shift in modern software development came about when people finally realized that developing software is by nature a "messy" process. Loose text files, REPL sessions filled with red lines on the console, randomly scribbled boxes with arrows on loose sheets of paper strewn about the room; traversing object graphs in your head while on the drive to work. There is no "outline" to frame any of that. The outline only exists after the software has been built.
I would agree about the general outcome, though - enterprise software often is built _despite_ the politics between various departments and vendors, and much of the solution is as a result of compromise and integrating with "not quite suitable for current purpose" legacy systems. That is where a large amount of the cost goes. (It's also important to remember that what we're building today is also going to become the legacy system of tomorrow, and to include that thought in the design).
Design is messy. The implementation need not be.
> “Design Patterns”, “Code architecture”, “Scalability”, “OOP”, “Maintainability”
Apart from scalability, the others can be implemented from day one.
Scalability is about a dumb proxy first, and changes in data structures next.
In software, you have two choices.
1) Dumb code that works, aka kludge.
The author is advocating kludge programming.
(I want to use this word distinctly from hacks, which are fewer and at least clever.)
I advocate this.
2) SQLite, games.
I advocate this too. Software that is meant to last requires rigor.
My point is, the author is considering all programming to be engineering programming. It's not. And I do hate project managers who confuse the two.
I even concede that you can mix and match the two (testing, for example). Why don't people see this middle ground?
The only way I can really describe it is (in terms of what you just said): "red, green, refactor" your life, not just your job. Constantly question everything you're doing, and find out what you don't like about it. Find the root cause of that feeling, and figure out what you can do to fix it. Find and take the steps to complete that.
Edit: I will add though, that the trick is to do so without stopping progress, as you said.
Instead, I would sculpt the entire piece of art roughly, almost unrecognizably, and then target large parts of the sculpture. Details like ears and eyes might even never get done, depending on how the requirements change for the sculpture.
I always approach writing software the same way: the first iteration is a hardcoded piece of software that just shows me it will actually work, and then I iterate and start smoothing out all of the sharp edges, of which there are a ton.
"Just do it" is the best advice for stuck people - you tell them to just start doing the thing, not worry about details, and eventually you'll do it right. This is good advice for a business guy who's pin-wheeling ideas in his head, a programmer with no product sense, or anyone else who wants but does not make.
When you know a little bit about what you want to build, and maybe even a bit about _how_ to build it - when you have some actual experience, skill, and will - then the right advice is typically to slow down and think it through.
Doing some nominal TDD has helped me quite a bit, because it does require writing code, but not code that I have to worry too much about screwing me over (and if it happens to be a pretty shitty, shortsighted test, I can delete it without, theoretically, hampering the app). Sometimes this physical act is enough to get me past the code-writing block.
Also, if I actually do the testing right, it prevents regression, improves code quality, etc. etc.
"Plan to throw one away. You will anyway."
The mistake most people make is that they mix learning and doing in a single round. It's much better to write a prototype with a specific goal of wringing as much knowledge from it as you can without being distracted by scalability or maintainability minutiae. Then, and only then, will you be ready to write The Real Thing with all of the bells and whistles. In the end you'll probably save more time than if you kept going back and forth waterfall-style between learning and doing on the same codebase. You'll certainly end up with a better result.
P.S. That quote is from 1975. To quote another great mind (Santayana): those who do not know...
I've heard many arguments where people will tell you, it won't scale. Or you're doing it all wrong by writing quick and dirty procedural code. Quite frankly scaling is about the nicest problem to have if you're a startup. Without traction your ugly code simply isn't going to matter. With traction you will gain funding and the ugly code problem quickly disappears like I assume it did at Facebook.
I'm certain I stole it from someone here on HN years ago (whoever it was, I owe you a beer!), but I've always called this a "Maserati Problem". That is, it's something I can think about while I'm driving down the road in my Maserati that I've purchased with money my startup has made already.
Scaling, in most cases, is a problem very much like "where am I going to store all this money that's pouring in?!" If you have it, you've already won.
We programmers have an ugly tendency to judge less-than-stellar code written by others. It makes us feel superior to pick apart mistakes others have made. The fact is, software development is a series of technical and non-technical trade-offs. I have certainly fallen into the same trap before. These days I just appreciate that they can ship a working product and get it off the ground.
What's so ugly about it? It's easy to understand which for me is the single most important thing when writing code.
But of course, before starting any major project, the first thing any developer should do is map out at least a basic mental design of the program - even if that map isn't written in pseudo-code or in a form of a flow chart.
A pure functional language is really good for writing dumb code that "just works" in the sense that it does what you need it to do even if it isn't very pretty or fast, while at the same time keeping it reasonably maintainable because the code will not be rife with accidental complexity and assumptions about global state.
Have to add bugs to Haskell code? Gosh, it seems impossible.
I'm probably remembering things with rose-tinted glasses.
These days, even in open source -- the once-proud land of eccentric hackers and the can-do spirit -- there is such a negative focus on bad developers. I can certainly see how the OP could feel paralyzed when just trying to write a line of code.
In reality we're all at different points in our journey and have had different experiences along the way. There's always room to improve ourselves and it's important to listen to criticism so that we know where to start. But sometimes you just need to cut loose and let it all hang out. I started writing a library I call "Horton," for this very purpose: whimsy, pleasure, and most of all to avoid the engineer mindset!
In order to gain insights and develop new ideas you have to find new hunches and investigate them. It's hard to develop hunches in a vacuum where your reality doesn't expand beyond your own self-made bubble. You've got to push once in a while... and people out there who complain so loudly about shitty developers? Get a life and help someone out. We're just making programs.
Update: https://github.com/agentultra/Horton - forgot the link...
I've certainly experienced this with some of the people I worked with recently. Anything you win in the short term gets lost several weeks--if not days--later when everything is a horrible mess. Don't even think about coming back to that code months later!
I've found that putting in a little bit of care for the future pays off even in the short term. Perhaps not on the scale of hours or days, but definitely weeks. Which is still pretty immediate.
However, how you do this is also very important. The handy rule I've been using is simple: simplicity. Improve your code by making it do less, not more. If you can make code more maintainable or general by simplifying, do it. If it would require adding new concepts or mental overhead, reconsider. Try to reuse existing abstractions as much as possible.
This usually--but not always--means making your code more functional. Do you really need to write this operation in place? Do you really need to tie everything together with a bunch of implicit state changes? Probably not! This is not to say that you should never use state: just be sure to only use it when it fits well and makes sense. And be explicit about it.
The functional style (at least in languages like Haskell) also lends itself very well to reusing simple and general abstractions. The key idea here is that these abstractions are not about doing more: they're about doing less. More general code simply leaves you less room to make mistakes. If you're working with lists of numbers, there are all sorts of arithmetic mistakes you can make; if you manage to write the same code against lists of anything, the only mistakes possible will be in list handling.
Haskell just makes this more practical by providing abstractions with a great "power-to-weight ratio": ones that manage to be simple, general and yet still expressive. Code with functors or monoids or traversables is easier to write than more concrete code and yet still flexible enough for many useful tasks. As a bonus, it's also more reusable. For free.
So the key idea is reduction: all these abstractions work by stripping concrete types of power. A functor can do much less than a list. A monoid can do much less than a number. This lets you write nicer code without becoming an "architecture astronaut" because instead of adding structure, you're really taking it away.
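The "general code leaves less room for mistakes" point can be illustrated even outside Haskell. The functions below are invented for illustration: a concrete version that can make arithmetic mistakes, and a general version over lists of anything, where no arithmetic is even possible:

```python
from typing import Sequence, TypeVar

T = TypeVar("T")

# Concrete: sums of adjacent numbers. Arithmetic mistakes are possible
# here (wrong operator, wrong index, an accidental off-by-one).
def adjacent_sums(xs: list[int]) -> list[int]:
    return [xs[i] + xs[i + 1] for i in range(len(xs) - 1)]

# General: the same shape of code against lists of *anything*. No
# arithmetic can sneak in; the only possible bugs are in list handling.
def adjacent_pairs(xs: Sequence[T]) -> list[tuple[T, T]]:
    return [(xs[i], xs[i + 1]) for i in range(len(xs) - 1)]

# The concrete version is then one small, obviously-correct step away:
def adjacent_sums2(xs: list[int]) -> list[int]:
    return [a + b for a, b in adjacent_pairs(xs)]
```

The general `adjacent_pairs` can do much less than `adjacent_sums`, and that is exactly why it is harder to get wrong - and, as a bonus, reusable for free.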
I've found that programming like this really helps in maintaining and extending the code later on. But it doesn't slow me down when I'm writing either - I actually save time because I spend less time getting the code correct in the first place.
These days, I've started being able to make large changes or add complicated features and have them work on the first try. Not every time, but surprisingly often. Certainly much more often than before! In large part, I think this is because of writing the code with an eye towards simplicity.
So the important insight: take the simple route, not the easy route.
Personally, after 16 years in the industry, I do stress about my code as described in the article... Not about making it work, because that will happen regardless, but about how. It affects my ability to code, because "make it work" is far drowned out by "make it work the way your peers will accept it." Many times, that "acceptable" approach is something I find far less intuitive, and far less maintainable... Also, that approach has changed so many times over the years and with each different team, it's hard to keep track. I can build far more in a weekend, on my own terms, than I can build in a week by someone else's. I am not complaining... I actually like to learn those terms, and I enjoy it as long as people are constructive about it rather than condescending. It's trying to pre-determine those terms that is the block, for me.
Seems he's fairly experienced. Anyway such things are highly subjective, and YMMV
You can tell if you're too slow when you've spent two weeks thinking about the program and have neither working code nor a detailed design, or if you've created a detailed design only to realize upon implementation you'd gone about it entirely the wrong way.
You can tell if you're too fast if you find yourself spending more time cleaning up messes of poorly-thought-out spaghetti code.
Sometimes you'll want to plan ahead because the nature of the problem means you'll have a harder time cleaning things up later.
Other times, you want to move quickly because creating castle-in-the-sky architectures is not the best use of your time (especially if it becomes paralyzing).
Experience helps you determine which type of situation you're dealing with.
I think when you take on the debt of poor code, you have to remember that the cost of fixing it will continue to increase the longer you are away from that code.
Now, I don't think you should count fixing things for scale as poor code - that is its own beast, which makes the most sense to handle once you have the problem, since you can't really tell what the problem will be until you have it.
Also, while it's easy to identify code which has had too little thought put into it, it's often very difficult to identify code which has had too much thought put into it. The best solutions don't look like the result of lots of hard work, but rather make hard problems look like simple problems that would be trivial to solve.
- make the function fit in your head. Literally. Make it fit in one screen, if you can. If it does not, think how you can make it fit on the screen - everything, including variable definitions.
- avoid branching as much as possible, unless it's the 'exit' branches. Avoiding branching can sometimes be done with tricks like passing function pointers. Do it - it makes code much more expressive and easier to read.
These two heuristics allow me to make the code much simpler to understand for $(me+6months). I saw the same patterns in real-world code at $work as well - using the above two principles makes the code dramatically easier to support later.
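The "pass function pointers instead of branching" trick translates directly to any language with first-class functions. A sketch in Python (the event names and handlers are invented for illustration):

```python
# Branch-heavy version: every new event type grows this if/elif ladder.
def handle_event_branchy(kind, payload):
    if kind == "click":
        return f"clicked {payload}"
    elif kind == "key":
        return f"pressed {payload}"
    else:
        raise ValueError(kind)

# "Function pointer" version: one flat lookup table, no branching at the
# call site. Adding an event type is one new entry, not a new branch.
HANDLERS = {
    "click": lambda payload: f"clicked {payload}",
    "key":   lambda payload: f"pressed {payload}",
}

def handle_event(kind, payload):
    return HANDLERS[kind](payload)  # a KeyError serves as the 'exit' branch
```

The table version also keeps the whole dispatch visible on one screen, which dovetails with the "fit in your head" heuristic above.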
Though at least for me there is a limit somewhere (2-3 lines?) where the returns start to diminish - there's noticeably more typing involved, and more mental context switching while debugging.
Exactly. The problem is that code can tend to be either crap (spaghetti) or too much code (too many methods and layers of extraction). So, I'd go one step further: code should do less, not more, and should be clear and understandable, without making unnecessary sacrifices. What is clear to me isn't clear to everyone else, but striving for clarity and simplicity isn't a bad thing.
Surely, you mean abstraction?
But, if you know better, do better please.
After writing tons of code, I've come to realize that 80% of the time, it takes about the same time to do it "right" (scalable/secure/modular) as it does to do it "OK".
Ex: Are plaintext passwords any faster to build/code than hashed and salted passwords? Nope.
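The password example holds up in code: doing it "right" is barely more work than doing it "OK". A minimal sketch using Python's standard library (function names and the iteration count are illustrative choices, not a security recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage - hardly more code than
    storing the plaintext would be."""
    salt = os.urandom(16)  # a fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison to avoid timing leaks.
    return hmac.compare_digest(candidate, digest)
```

Two short functions versus one assignment - the 80% claim above seems about right here.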
And, it's fine to intend to refactor later. But, too many organizations have no tolerance later to let engineers do things that don't have visible results. I've suffered through a stint of handling "legacy code" like that & it was painful & demoralizing.
In part, this is simply to get past the analysis paralysis or creative block and get going on something. But also because it can sometimes be easier to make something good when you have a clear bad example in front of you to contrast it with.
Occasionally, I find myself “stuck” when writing code – the coder’s “writer’s block” I guess. This quote by Ward Cunningham is both inspiring and truthful:
-- Once we had written it, we could look at it. And we’d say, “Oh yeah, now we know what’s going on,” because the mere act of writing it organized our thoughts. Maybe it worked. Maybe it didn’t. Maybe we had to code some more. But we had been blocked from making progress, and now we weren’t. We had been thinking about too much at once, trying to achieve too complicated a goal, trying to code it too well. Maybe we had been trying to impress our friends with our knowledge of computer science, whatever. But we decided to try whatever is most simple: to write an if statement, return a constant, use a linear search. We would just write it and see it work. We knew that once it worked, we’d be in a better position to think of what we really wanted. --
Next time you’re stuck, just write the simplest thing that could possibly work!
-- So when I asked, “What’s the simplest thing that could possibly work,” I wasn’t even sure. I wasn’t asking, “What do you know would work?” I was asking, “What’s possible? What is the simplest thing we could say in code, so that we’ll be talking about something that’s on the screen, instead of something that’s ill-formed in our mind.” I was saying, “Once we get something on the screen, we can look at it. If it needs to be more we can make it more. Our problem is we’ve got nothing.” --
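Cunningham's list - "write an if statement, return a constant, use a linear search" - can be taken completely literally. A sketch of a first pass (the functions are invented examples of "something on the screen"):

```python
# First pass: just write it and see it work. A constant and a linear
# search are enough to have something real to look at and react to.

def suggested_price(item):
    return 9.99  # return a constant; refine once we see it running

def find_user(users, name):
    for u in users:          # a linear search is fine until it isn't
        if u["name"] == name:
            return u
    return None
```

It's deliberately too simple - and that's the point: once it works, you're in a far better position to think of what you really wanted.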
Before you start writing, sit down with a friend in a bar and explain to him exactly what you're trying to say and do. If you can't communicate your idea in a concise, conversational way, then you're not ready to start writing.
Applied to code, I suppose this is similar to the concept of Rubber Ducky Debugging: http://en.wikipedia.org/wiki/Rubber_duck_debugging
If your team builds apps with 500k LOC, you need to be constantly focused on abstracting and factoring out patterns, or you will drown in your defect rate before the project even ships. If your team builds medical device software and you aren't doing this, you'll get someone killed.
If you're building a personal blog, your time is better spent by pasting jquery snippets.
There is a spectrum here and most projects fall somewhere in between.
"And all that is fantastically interesting, but completely beside the point. I just fell into the classic programmer trap of exploring and learning about (what I find) fantastically interesting things that will address all sorts of amazingly complex situations, but which the learning of said things resulted in absolutely nothing of tangible value being created."
1. Let me think about this some more before giving you an answer.
2. I will use my debugger.
3. I will create a new branch for this task.
4. I have commented my code and commented it well.
5. I have accounted for fail scenarios.
6. I have considered multiple solutions to this task.
7. I have not begun hacking away immediately at a task.
8. I have attempted exploiting my code.
1. Write code fast.
2. Refactor when necessary (either because you got to a dead end and can't keep writing code fast anymore, or because you're done and want to move on to another task).
3. Goto 1.
I am now working at a startup where the early devs have left. They did precisely what was needed at that point - build quickly, ensure it works for the 3-4 customers we had back then, and document it as well.
Fast forward a year later, and that same code is no longer extensible for many clients. It's rigid and tightly coupled, and rewriting some parts means we have to rewrite a whole lot more than we bargained for. We will have to burn through a lot of cash to build a whole new product, based on the scalability and maintenance lessons from a year of learning and growing to client #100.
The point is, there is always going to be "this is the right thing right now, it's what our customers want" and then something else for the future.
I have seen that people who have worked for a while, and in many startups, do the smart thing (as expected): while they follow "build for now", they also do the sensible thing of investing for the future (from experience).
I would still go one step further and at least apply the strategy pattern/polymorphism when you have a large if/else/case block going beyond 2 or 3 conditions in a controller or model or something. (assuming MVC web apps)
The fact is, writing some ugly code directly in a controller might get something working fast and prevent the freeze-up of your output, but once it's working you should immediately move it to the appropriate place, like a model or library, and apply a basic OO strategy pattern (perhaps without factories/interfaces until needed later) where appropriate to cut down on spaghetti/nested conditionals.
This really does not take much more time than stopping at working code that is ugly, and is a good middle ground where it is not painful to return to later.
There is still no good excuse for "taking a dump in the corner" of your code base just to get a marginal gain in output.
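A minimal sketch of the refactoring described above - a conditional block in a controller replaced by a basic strategy pattern, with no factories or interfaces until they're needed. The shipping example and its rates are invented for illustration:

```python
# Before: a growing if/else block in the controller.
def shipping_cost_branchy(method, weight):
    if method == "ground":
        return 5.0 + 0.5 * weight
    elif method == "air":
        return 12.0 + 1.5 * weight
    elif method == "pickup":
        return 0.0
    raise ValueError(method)

# After: each branch becomes a small strategy object living in the
# model/library layer; the controller just looks one up and calls it.
class GroundShipping:
    def cost(self, weight):
        return 5.0 + 0.5 * weight

class AirShipping:
    def cost(self, weight):
        return 12.0 + 1.5 * weight

class PickupShipping:
    def cost(self, weight):
        return 0.0

STRATEGIES = {
    "ground": GroundShipping(),
    "air": AirShipping(),
    "pickup": PickupShipping(),
}

def shipping_cost(method, weight):
    return STRATEGIES[method].cost(weight)
```

Adding a fourth shipping method is now a new class and one dict entry, with no nested conditionals to re-read - which is exactly why it's not painful to return to later.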
As such, when I'm not under those constraints, I sometimes go a little overboard factoring out duplicated code and looking for the beautiful solution. To me, beauty is the maximization of clarity and conciseness. Since I'm usually writing Perl, there's quite a lot to work with in that regard.
For example, yesterday I spent 4-6 hours writing approximately 40 lines of Perl (along with another 40 lines of tests).
On the plus side, what I generate during these binges is generally an efficiency multiplier for the other code I write, as well as a good way to really examine some of the more esoteric, but useful, ways to use your chosen language.
Last week, after getting to the first page of Google's search results, I'm getting about 12,000 hits a week. Monitoring shows my daily CPU usage is nearly 0%, and memory usage around 32MB. Today (3 months later) I finally did some SEO, and added compression, caching, and a CDN to drop page load time from ~2 seconds to under 500ms.
The thing is, if you don't do it right on the first go, you'll have a hard time getting it done later. All management cares about is whether it works; it's your responsibility to make it right from the start, and they probably won't give you time for refactoring/optimizations later, because time is money :(. Sometimes it's hard to explain the consequences. I have seen this too many times.
I believe you need to find the golden middle by not hacking things too much or overengineering the problem, though sometimes it's hard to define where that middle is.
A complex system that works is invariably found to have evolved from a simple system that worked.
I often share a similar sentiment with teams I work with: Don't throw solutions at undefined problems.
That commonly applies to performance and premature optimization.
I'll agree that there needs to be some thought as to what's being built before, and while, doing so, but spending time resolving as-of-yet-undefined problems means decision making based on high levels of uncertainty.
Get something working and let problems define themselves. Often times, they do so in a way that makes them easier to fix vs. relying on the speculative solution making I mentioned above.
If I find myself getting stuck in the cycle of "this can be written better, I'll just spend 2 hours improving it rather than finishing the framework" I stop, I think about it, then I leave the working-but-probably-crap code in place and move on.
Once the framework is in place, then I start making things better, one area at a time.
That said, don't dither too much over writing code. History favours the doers rather than the vacillators. Most software except for utilities has a quite short working life anyway. There is also the standard advice of "expect to throw one away".
So I completely agree on the underlying principle: don't do _anything_ prematurely, and that first and foremost includes actually writing any code at all.
Thinking about this stuff is the sign of either a newbie or a lack of up-front design.
Does that resonate enough for you?
Seriously, seeing stuff black&white is how bad code happens. (And so many other things in life.)
Also, test your app with LOTS of data. Even during early development. A slow algorithm/query will become very obvious under a high load.
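One cheap way to act on this: run the suspect function against increasingly large inputs and watch the timings. A sketch (the harness and the deliberately slow dedupe function are invented examples):

```python
import random
import time

def time_at_scale(fn, make_input, sizes):
    """Run fn on inputs of increasing size and report wall-clock time.
    A quadratic algorithm announces itself here long before production does."""
    for n in sizes:
        data = make_input(n)
        start = time.perf_counter()
        fn(data)
        print(f"n={n:>7}: {time.perf_counter() - start:.3f}s")

# Example culprit: a membership test against a growing list.
def dedupe_slow(xs):
    out = []
    for x in xs:
        if x not in out:      # O(n) scan every time -> O(n^2) overall
            out.append(x)
    return out

time_at_scale(dedupe_slow,
              lambda n: [random.randrange(n) for _ in range(n)],
              sizes=(1_000, 5_000))
```

Even at these modest sizes the growth is visibly super-linear; with realistic data volumes the problem becomes impossible to miss.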
They prey on the insecurity of the smartest people. It makes us easy to take advantage of, and compromises almost all of our leverage.
Every time you let them sit in your head and natter about how your work isn't good enough because it isn't in accord with what they see as "professional", you're letting them win. The problem with assholes is that their imprints live on even years after you get away from them. It takes conscious effort to get that lingering cognitive load out of your mind.
We need to step back and simplify: solve problems, and do it well. (This includes documentation and testing, when appropriate.) Make stuff, make it good, make what's good better. We need to get back to that, rather than slugging it out over "agile" and TDD and nonsense. Zed Shaw got it right: http://programming-motherfucker.com/
With that, I'm inclined to agree with the parent, though perhaps his use of Bad Guys does detract from the underlying message. I don't think there is some deep plot by born-evil people to keep programmers down, but I agree with the point that we could all do much better by working together in a positive way instead of trying to out-shame each other in hopes of capturing the top prize of impressing someone else.
Programmers are often oblivious, when they find themselves in work environments that are designed to play divide-and-conquer games against them, to what is going on. Executives often enable/encourage the worst traits of programmers in order to weaken the group.
The people who promote careful programming methodologies aren't trying to make people miserable. In fact I'd lay half the blame with the OP, who needs to rein in a tendency to overthink things. We don't need a villain here.
The Bad Guys are plentiful and they include scope creepers, late/non payers and idea guys offering equity.
Our tribe (to borrow from your parlance) is goodhearted and kind and usually sees the best in people.
That is until we are crushed under the disillusioning reality that the Screwheads (to borrow from HST's parlance) rule the world and will exploit us instinctively. Our relationship with the Bad Guys/Screwheads is no different than the lion and the wildebeest.
Fortunately we can outsmart the bastards and destroy their outdated business models by harnessing the power of networked machine intelligence.