Hacker News

Other people have cited numbers, but in my opinion, most software projects (more than 75%) end up:

* Over budget in dollars or time or both

* Fragile

* Insecure (sometimes profoundly so)

* Having major UI issues

* Missing major obvious features

* Hard to extend

* Full of bugs, both subtle and obvious

Pick at least five of the above. Keep in mind that most software projects are internal to large companies, or are the software behind APIs, or are otherwise niche products, though there are certainly plenty of mobile apps that hit 4-7 of those points.

Who is proud of ignorance? Original author:

> There is no point in giving me binary-tree-traversing questions; I don't know those answers and will never be interested in learning them.

He proudly states that he will never be interested in learning these topics, topics that are profoundly fundamental to computer science.

Why should it be considered harmful? Because anyone who is writing software should always be learning, and should never dismiss, out of hand, interest in learning core computer science concepts. They should be actively seeking such knowledge if they don't have it already. It's totally Dunning-Kruger to think that you don't need to know these things. His crack about learning "object oriented design" instead made me laugh: As if knowing OOD means that you don't need to know algorithms. To the contrary, if you don't understand the fundamentals, you can create an OOD architecture that can sink a project.

It's like the people who brag of being bad at math -- only worse, because this is the equivalent of mathematicians being proud of their lack of algebra knowledge.




So if the developers knew to traverse a binary tree, how many of the 75% succeed?

There is something I've mutated to my own liking called the 80/20 rule. In software, you will spend 20% of your time getting the application to 80% of its maximum potential performance. You can then spend the remaining 80% of your time gaining the extra 20%. If that is cost effective for your company, then by all means do it. If it isn't, 80% is just fine, and you've cut your development costs by four fifths.

For me, being able to "rote" an algorithm on some whiteboard falls into the last 20%, maybe. Collection.Sort, whatever it uses, is good enough. Hell, you get about 78% of that 80% by using indexes properly on your RDBMS.
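To make the point concrete, here's a minimal sketch (in Python, purely illustrative) of the "library sort is good enough" position: one call to the built-in covers the common case, with no hand-rolled algorithm in sight.

```python
# Illustrative only: CPython's built-in sort (Timsort) is O(n log n)
# and well tuned -- for most application code, this IS the "80%" case.
import random

data = [random.randint(0, 10_000) for _ in range(1_000)]
result = sorted(data)  # one library call, no whiteboard required

# Sanity check: the output really is in non-decreasing order.
assert all(result[i] <= result[i + 1] for i in range(len(result) - 1))
```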


> So if the developers knew to traverse a binary tree, how many of the 75% succeed?

I would say it's necessary but not sufficient. There is no perfect interview strategy. But for projects that are entirely developed by people who can't traverse a binary tree, I'd say the odds of failure are very, very high.
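For reference, this is the kind of exercise being argued about. A minimal in-order traversal sketch in Python (class and function names are mine, not from any particular interview):

```python
# Minimal sketch of the interview staple under discussion: in-order
# traversal of a binary tree. For a binary search tree, in-order
# traversal yields the values in sorted order.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def in_order(node):
    """Yield left subtree, then this node, then right subtree."""
    if node is None:
        return
    yield from in_order(node.left)
    yield node.value
    yield from in_order(node.right)

# A tiny BST:    2
#               / \
#              1   3
root = Node(2, Node(1), Node(3))
assert list(in_order(root)) == [1, 2, 3]
```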

Sure, a strong developer can hit that 80% of performance quickly (in probably less than 20% of the time), but a weak developer who doesn't know how to optimize probably won't even make 5% of "maximum potential performance."

I had to work with a tool once that had a "process" function that would take 2-3 minutes to do its work. It was used by level designers, and it was seriously impeding their workflow. The tool was developed by a game developer with something like 10 years of professional development experience (he was my manager at the time), and he thought it was as fast as it could go -- that he'd reached that maximum potential performance, give or take a few percentage points not worth chasing. He didn't want to spend another 80% of his time optimizing for a minor improvement, so he left it alone.

I looked at what it was doing, figured out how it was inefficient, and in less than an hour rewrote a couple of key parts, adding a new algorithm to speed things up. Bang, it went from minutes to 200ms. No, I'm not exaggerating. Yes, I had a CS background, and no, the experienced developer didn't.
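The comment doesn't say what the actual fix was, but a hypothetical sketch of the *shape* of such a rewrite (not the real tool) is replacing a repeated linear scan with a hash-based lookup, which alone can turn minutes into milliseconds at scale:

```python
# Hypothetical illustration only: swapping a list scan for a set lookup
# turns an O(n*m) loop into O(n + m). This is the kind of change that
# produces 100x-1000x speedups without touching the surrounding code.
def shared_ids_slow(a, b):
    # `x in b` on a list scans the whole list every time: O(n*m) total.
    return [x for x in a if x in b]

def shared_ids_fast(a, b):
    b_set = set(b)                       # build the lookup once: O(m)
    return [x for x in a if x in b_set]  # each membership test ~O(1)

a = list(range(0, 9_000, 2))
b = list(range(0, 9_000, 3))
assert shared_ids_slow(a[:300], b[:300]) == shared_ids_fast(a[:300], b[:300])
```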

If you end up with accidental n^3 algorithms baked into your architecture, 5% performance in development can be 0.1% performance in the real world, or worse as your N gets large enough. And yes, that's even if you index your data correctly in your database.
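Accidental cubic behavior rarely looks like three nested loops in one place; more often it's an O(n) operation hiding inside an O(n^2) loop. A hedged sketch (hypothetical names) of how it creeps in:

```python
# Sketch of an "accidental n^3": each layer looks innocent on its own.
# n rows, O(n^2) pairs, and comparing two length-m rows is O(m),
# so the total is O(n^2 * m).
def has_duplicate_row(rows):
    for i, row_a in enumerate(rows):
        for row_b in rows[i + 1:]:
            if row_a == row_b:   # element-by-element comparison
                return True
    return False

# Hashing each row once drops the total to O(n * m).
def has_duplicate_row_fast(rows):
    seen = set()
    for row in rows:
        key = tuple(row)         # hashable snapshot of the row
        if key in seen:
            return True
        seen.add(key)
    return False

rows = [[i, i + 1] for i in range(100)] + [[5, 6]]  # [5, 6] appears twice
assert has_duplicate_row(rows) == has_duplicate_row_fast(rows) == True
```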

And that's when your site falls down the moment you have any load, and you end up trying to patch things without understanding what you're doing wrong. In my example above I improved the speed by nearly a factor of a thousand. In an hour. That can easily mean the difference between holding up under load and falling over entirely, or the difference between being profitable (running 4 servers that can handle all of your traffic) and hemorrhaging money (running 4000 servers and the infrastructure to scale dynamically).

Which is why you need a strong developer to run the project to begin with. Maybe you're that strong developer; I'm not really speaking to you in particular. But I know a lot of developers who just don't have the right background to be put in charge of any projects and expect that they'll succeed.



