Making good programmers feel like bad programmers (clipperhouse.com)
25 points by mwsherman on July 26, 2009 | 18 comments



The problem starts before the leaky abstraction: putting a bad or ill-fitting model over a problem to make the tool easier for non-experts to work with.

This is a common mistake, and it takes forms like: using a GUI for a large documentation project that is really a batch process underneath, putting a nice GUI on a commercial application that is higher-throughput on a green screen or equivalent, using a big hairy IDE when vi or emacs would be faster, or using a mouse when the keyboard is demonstrably faster.

Seems like we need a good treatise on how adding features or layers weakens rather than strengthens a tool. This article shows one of the perils.


Second that. I'm absolutely amazed at how unproductive I am in a Java/Eclipse environment vs. straight C, vi, and make. It's like I'm continuously scanning either the documentation or some obscure error message.

Of course, I pretty much grew up in C, but the difference feels larger than that fact can explain.


Perhaps you are writing different kinds of code in each.

For example, when I'm writing C, I'm usually just using the standard library and maybe a couple of OS routines. It's especially the case because I'm usually writing C in the context of a compiler, which is as close to a pure function of input to output as most programming gets.

On the other hand, when I'm writing in a higher level language, I'm tackling bigger problems and using far, far more third-party code. That's where the occasional documentation lookups and odd errors come in.


You are absolutely right: when I'm writing in C, I write data-processing code, usually elaborate filters and pipelines of filters.

The Java stuff is web applications.


I grew up with Java + IDEs and now work primarily with C++ + vim, and my experience is the opposite. I feel that I'm much more productive with an IDE, as long as the computer is fast enough to support it (not always a given; Eclipse/Netbeans/IntelliJ are pigs). I waste far more time switching between vim and an xterm to run builds than I did hitting F5 in Netbeans. And I just didn't make syntax errors in Java, because the IDE would instantly flag them and I'd fix them immediately, while they still happen occasionally in C++. I also spend far more time switching between Code Search windows than I did when I had type-aware autocompletion.

I suspect it's more that you'll be most productive in whichever environment you know best. I spent 3 years in Java IDEs; I've spent 2 years doing JavaScript/Python/C++ dev in vim. Given another year, they may reach parity.


You mean you were making syntax errors all the time :)

The IDE fixing them for you made the price of not knowing the syntax of the language small enough that you never bothered to learn the language properly.

Once you really grok a language, syntax errors are mostly gone; they belong to the learning phase, not to when you're writing serious-sized blocks of code. Sure, everybody forgets a quote or a comma every now and then, especially when switching languages. But in a compiled environment without an IDE, the cost of a syntax error (one more edit-compile-test cycle) is too expensive, so you really spend the time learning the language.

The "I suspect it's more that you'll be most productive in whichever environment you know best." is very true, but once you know then equally well I wonder which will win out.


Well, Eclipse's project manager is great for Java. For C I'd much rather use just a straight text editor - IDEs just get in the way.

C + emacs/vi is much faster than C + Eclipse, but C is always going to be faster to write than Java just because Java has so many more layers of abstraction that require so many more lines of code.


I don't understand this one: shouldn't higher-level abstractions require less code, rather than more? If not, why use the abstraction?


It's the difference between optimizing for lines of code written vs. lines of code maintained. Many Java patterns exist so that when you need to make a change, you can change it in one place and have it propagate to every place you've used that abstraction. This often means more lines of code in total, because you effectively have to put hooks into every place you're likely to change.
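
A minimal Java sketch of the kind of hook described above (the names are hypothetical, purely for illustration): callers depend on an interface rather than a concrete class, so swapping the pricing policy later is a one-line change at the construction site, at the cost of extra declarations up front.

    // Hypothetical example: the interface is pure overhead on day one,
    // but it localizes the change when requirements move.
    interface PriceCalculator {
        double price(int quantity);
    }

    class FlatPrice implements PriceCalculator {
        public double price(int quantity) { return quantity * 9.99; }
    }

    class TieredPrice implements PriceCalculator {
        public double price(int quantity) {
            return quantity * (quantity > 100 ? 7.99 : 9.99);
        }
    }

    class Checkout {
        private final PriceCalculator calculator; // the "hook"
        Checkout(PriceCalculator calculator) { this.calculator = calculator; }
        double total(int quantity) { return calculator.price(quantity); }
    }

    // Changing the policy touches one line; no caller of Checkout changes:
    //   Checkout checkout = new Checkout(new TieredPrice());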

Whether or not this is a good tradeoff depends on two things:

1.) How much work will be spent in maintenance vs. how much will be spent writing the original program.

2.) How good your guesses are as to how the program will end up changing. It does no good to add lots of hooks and abstractions if those hooks are in the wrong places.

Most of the "Java sucks" vs. "Java is the only sane language" battle comes from people not understanding that their answers to the above two questions are not likely to be the same as someone else's answers to the above two questions. For startups, it's basically a given that your guesses on how the software will change are wrong, and therefore the layers of abstraction that common Java style encourages are just wasted. Java really does suck for startups, unless you're using a platform where it's the only option (e.g. mobile or IDE plugins). But for large in-house IT departments, the vast majority of work is maintenance, and it's Somebody Else's Problem if the software doesn't quite fit the requirements. Java makes a lot of sense in these environments.


"It's the difference between optimizing for lines of code written vs. lines of code maintained."

It seems odd to me to read that. From what I've seen of it, Java seems hard to maintain. The capacity for abstraction is limited, leading to repeated or generated code in multiple locations. To make matters worse, hooking into things requires that they be designed to be hooked into. Contrast Common Lisp's method combination.
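
To make the "designed to be hooked into" point concrete, here is a hedged Java sketch (hypothetical names): the class is extensible only where its author explicitly left a hook, whereas Common Lisp's :before/:after method combination can attach behavior to an existing generic function without any such foresight.

    // Hypothetical illustration of a Java template method: subclasses
    // can only vary the steps the original author anticipated.
    abstract class ReportGenerator {
        final String generate() {  // fixed skeleton
            return header() + renderBody() + footer();
        }

        // The explicit hook; without it, no extension point exists.
        protected abstract String renderBody();

        protected String header() { return "=== report ===\n"; }
        protected String footer() { return "\n=== end ===\n"; }
    }

    class SalesReport extends ReportGenerator {
        protected String renderBody() { return "sales figures here"; }
    }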


This pretty much sums up why I'm sometimes hesitant to use frameworks: should my problem evolve past the boundaries of the framework, it can become more of a pain to use the framework than the raw constructs.

When it's no big deal to stay within the framework's boundaries, or when the framework interoperates clearly and easily with custom lower-level code, then using such a framework seems wise.

The latter--interoperability with lower-level code--does not seem to be true for ASP.NET, based on my brief experience.


The real problem with ASP.NET is that the abstraction it provides is completely divorced from the way the web actually works. It tries to provide a stateful way to program, and it is completely driven by events. Sure, it exposes the response and request, but trying to use them in anything but the most trivial ways is a complete mess.

It was great when I was just starting out as a web programmer and I didn't understand how the web worked. Now that I do get it, ASP.NET Webforms is a hassle at best and maddening at worst.


You should check out ASP.NET MVC. No postback, viewstate, or other junk; just request and response.


Oh yes. I started a fairly large program at work earlier this year and used Struts2 to start off. Things went swimmingly until I started having to modify the template files, and from then on I was spending far more time fighting the framework than working with it. Ended up scrapping it and building my own "framework" that works better for our project, in about a month.


Around 2 years ago I started a job as an ASP.NET developer, and this was exactly how I felt. It felt like everything I wanted to do and every problem I wanted to solve had a solution that just didn't quite fit.

It was the whole square-peg, round-hole scenario. I would need to do something that I knew I could easily do in another language, and the ASP.NET solution was almost the solution I needed, except that to get it to work right I had to spend as much time modifying and extending it as I would have spent on a much more elegant solution in another language.


Abstraction and convenience layers still need to be learned to be used properly. If you're used to doing things yourself and you shift to using helper tools, there is going to be a learning curve. The idea is that it pays off in the long run so it's worth learning.


Remember, the solution is always to add another layer of abstraction!


"You can solve any problem by adding another level of abstraction, except having too many levels of abstraction"





