> The attitude that really basic Computer Science concepts like algorithms and algorithmic complexity are irrelevant is exactly why software projects are so frequently FUBAR.
You must mean "some software projects" and not "frequently." And who's proud of ignorance? Ignorance of what? And, what's more, why should it be considered a negative if someone isn't concerned, or even is proud, about ignorance of certain things.
Other people have cited numbers, but in my opinion, most software projects (more than 75%) end up:
* Over budget in dollars or time or both
* Fragile
* Insecure (sometimes profoundly so)
* Having major UI issues
* Missing major obvious features
* Hard to extend
* Full of bugs, both subtle and obvious
Pick at least five of the above. Keep in mind that most software projects are internal to large companies, or are the software behind APIs, or are otherwise niche products, though there are certainly plenty of mobile apps that hit 4-7 of those points.
Who is proud of ignorance? Original author:
> There is no point in giving me binary-tree-traversing questions; I don't know those answers and will never be interested in learning them.
He proudly states that he will never be interested in learning these topics, topics that are profoundly fundamental to computer science.
Why should it be considered harmful? Because anyone who is writing software should always be learning, and should never dismiss, out of hand, interest in learning core computer science concepts. They should be actively seeking such knowledge if they don't have it already. It's totally Dunning-Kruger to think that you don't need to know these things. His crack about learning "object oriented design" instead made me laugh: as if knowing OOD means that you don't need to know algorithms. On the contrary, if you don't understand the fundamentals, you can create an OOD architecture that can sink a project.
It's like the people who brag of being bad at math -- only worse, because this is the equivalent of mathematicians being proud of their lack of algebra knowledge.
So if the developers knew how to traverse a binary tree, how many of the 75% would succeed?
There is something I've mutated to my own liking called the 80/20 rule. In software, you will spend 20% of your time getting the application to 80% of its maximum potential performance. You can then spend the remaining 80% of your time gaining the extra 20%. If that is cost effective for your company, then by all means do it. If it isn't, 80% is just fine and you've cut your development costs by 4/5ths.
For me, being able to "rote" an algorithm on some whiteboard falls into the last 20%, maybe. Collection.Sort, whatever it uses, is good enough. Hell, you get about 78% of that 80% by using indexes properly on your RBDBMS.
> So if the developers knew how to traverse a binary tree, how many of the 75% would succeed?
I would say it's necessary but not sufficient. There is no perfect interview strategy. But for projects that are entirely developed by people who can't traverse a binary tree, I'd say the odds of failure are very, very high.
Sure, a strong developer can hit that 80% of performance quickly (in probably less than 20% of the time), but a weak developer who doesn't know how to optimize probably won't even make 5% of "maximum potential performance."
I had to work with a tool once that had a "process" function that would take 2-3 minutes to do its work. It was used by level designers, and it was seriously impeding their workflow. The tool was developed by a game developer with something like 10 years of professional development experience (he was my manager at the time), and he thought it was as fast as it could go -- that he'd reached that maximum potential performance, with maybe a few percentage points here or there, but not worth the effort to improve. He didn't want to spend another 80% of his time trying to optimize it for a minor improvement either, so he left it alone.
I looked at what it was doing, figured out how it was inefficient, and in less than an hour rewrote a couple of key parts, adding a new algorithm to speed things up. Bang, it went from minutes to 200ms. No, I'm not exaggerating. Yes, I had a CS background, and no, the experienced developer didn't.
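I can't show the actual tool, but to illustrate the general kind of change I'm talking about, here's a made-up Java sketch (not the real code): a linear scan inside a loop gets replaced with a prebuilt hash index, which is usually the whole trick.

```java
import java.util.*;

class AssetLookup {
    // Slow version: a full scan of the asset list for every id,
    // so the work grows as O(ids * assets).
    static List<String> resolveSlow(List<String> ids, List<String[]> assets) {
        List<String> out = new ArrayList<>();
        for (String id : ids) {
            for (String[] asset : assets) {      // re-scans everything each time
                if (asset[0].equals(id)) {
                    out.add(asset[1]);
                    break;
                }
            }
        }
        return out;
    }

    // Fast version: build the index once, then each lookup is O(1) on average.
    static List<String> resolveFast(List<String> ids, List<String[]> assets) {
        Map<String, String> byId = new HashMap<>();
        for (String[] asset : assets) {
            byId.put(asset[0], asset[1]);        // id -> payload
        }
        List<String> out = new ArrayList<>();
        for (String id : ids) {
            out.add(byId.get(id));
        }
        return out;
    }
}
```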
If you end up with accidental n^3 algorithms baked into your architecture, 5% performance in development can be 0.1% performance in the real world, or worse as your N gets large enough. And yes, that's even if you index your data correctly in your database.
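To put rough numbers on that: if n is 100 on a dev box and 10,000 in production, an n^3 step does (10,000/100)^3 = a million times more work in production, while an n log n approach grows by only a factor of about 200.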
And that's when your site falls down the moment you have any load, and you end up trying to patch things without understanding what you're doing wrong. In my example above I improved the speed by nearly a factor of a thousand. In an hour. That can easily mean the difference between holding up under load and falling over entirely, or the difference between being profitable (running 4 servers that can handle all of your traffic) and hemorrhaging money (running 4000 servers and the infrastructure to scale dynamically).
Which is why you need a strong developer to run the project to begin with. Maybe you're that strong developer; I'm not really speaking to you in particular. But I know a lot of developers who just don't have the right background to be put in charge of any projects and expect that they'll succeed.
You must mean "some software projects" and not "frequently."
In the 2000's and prior, it was common knowledge that the majority of software projects failed. Even if they succeeded on paper and shipped, they weren't actually used. By some estimates, it was something like 75% of software projects.
If you look at the contents of "ecosystems" like Steam and the various app stores, you'll see much the same. Most of the software out there is a de facto failure, and much of it is due to the ignorance of the programmers resulting in substandard programs.
Do you have a citation supporting the claim that those failures were due to poor performance and not far more significant problems like failing to correctly model the actual business problem or handle changes? That era was dominated by waterfall development, which is notoriously prone to failure due to the slow feedback loop.
This is highly relevant because one not uncommon problem with highly-tuned algorithms is the greater cost of writing that faster code and then having to change it when you realize the design needs to be different, especially if pride + sunk cost leads to trying to duct-tape the desired feature on top of the wrong foundation for a while until it's obvious that the entire design is flawed. That failure mode is especially common in large waterfall projects where nobody wants to deal with the overhead of another round.
Very large numbers of shovelware apps in the iPhone App Store had problems with crashing.
> This is highly relevant because one not uncommon problem with highly-tuned algorithms is the greater cost of writing that faster code and then having to change it when you realize the design needs to be different
Can you give me a specific example of this? Give me a specific example and, most likely, I'll give you a reason why the software architecture of that example is stupid.
> Do you have a citation supporting the claim that those failures were due to poor performance and not far more significant problems like failing to correctly model the actual business problem or handle changes?
"Poor performance" isn't the only negative result of using an insufficiently experienced developer, or a developer who doesn't have a full grounding in algorithms and data structures.
Someone without a full CS background (and without the ability to remember much of that background) is likely to know a few patterns and apply them all like a hammer to a screw. This leads to profoundly terrible designs, not only when you take performance into account, but also when it comes to finding the best model for the actual business problem, handling changes, permuting data in necessary ways, and other issues.
> That era was dominated by waterfall development which is notoriously prone to failure due to the slow feedback loop.
Extreme programming was the first formalized "agile" approach, and the very first agile project to utilize it was a failure. [1] A big problem was performance, in fact:
> "The plan was to roll out the system to different payroll 'populations' in stages, but C3 never managed to make another release despite two more years' development. The C3 system only paid 10,000 people. Performance was something of a problem; during development it looked like it would take 1000 hours to run the payroll, but profiling activities reduced this to around 40 hours; another month's effort reduced this to 18 hours and by the time the system was launched the figure was 12 hours. During the first year of production the performance was improved to 9 hours."
Nine hours to run payroll for 10,000 people. We're not talking about computers in the '70s with magnetic tapes. This was 1999 on minicomputers and/or mainframes. If that wasn't a key algorithmic and/or architectural problem, then I would be amazed.
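For scale: nine hours is 32,400 seconds, which works out to a bit over three seconds of compute per employee's payslip.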
When you design a system using agile, you often end up with an ad hoc architecture. Anything complicated really needs BOTH agile and waterfall approaches to succeed. You need to have a good sense of the architecture and data flow to begin with, and you need to be able to change individual approaches or even the architecture in an agile manner as you come across new requirements that you didn't know up front.
> This is highly relevant because one not uncommon problem with highly-tuned algorithms is the greater cost of writing that faster code and then having to change it when you realize the design needs to be different
I'm going to say [citation needed] for this claim.
I gave an example in another thread of having improved the speed of a game development level compiler tool by a factor of about 1000, with no major architectural changes, and it took me about an hour.
At the same time I performed a minor refactor that made the code easier to read.
Bad design == Bad design. That's it. Good design can include an optimized algorithm. A good design tends to be easy to extend or modify.
Very rarely it makes sense to highly hand-tune an inner loop. The core LuaJIT interpreter is written in hand-tuned assembly for x86, x64, ARM, and maybe other targets. It's about 3x faster than the C interpreter in PUC Lua, without any JIT acceleration. That is a place where it makes sense to hand-tune.
> pride + sunk cost leads to trying to duct-tape the desired feature on top of the wrong foundation
That's another orthogonal error. It makes me sad sometimes to throw away code that's no longer useful, but I'll ditch a thousand lines of code if it makes sense in a project.
Every line of code I delete is a line of code I no longer need to maintain. I'm confident enough in writing new code that I don't feel any worry about deleting old code and writing new. This is as it should be. Sad for the wasted effort, yes. But I know that the new code will be better.
> The C3 system only paid 10,000 people. Performance was something of a problem; during development it looked like it would take 1000 hours to run the payroll, but profiling activities reduced this to around 40 hours; another month's effort reduced this to 18 hours and by the time the system was launched the figure was 12 hours. During the first year of production the performance was improved to 9 hours.
And I happen to know for a fact that naive string concatenation was a big part of the performance problem! (Bringing it back to my original comment.)
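For anyone who hasn't run into it, the classic pattern looks like this (a Java sketch purely for illustration; C3 itself wasn't written in Java):

```java
import java.util.List;

class ConcatDemo {
    // Naive: each += copies everything built so far, so the loop
    // is O(n^2) in the total size of the output.
    static String naive(List<String> lines) {
        String report = "";
        for (String line : lines) {
            report += line + "\n";
        }
        return report;
    }

    // Linear: append into a buffer and build the final String once.
    static String buffered(List<String> lines) {
        StringBuilder sb = new StringBuilder();
        for (String line : lines) {
            sb.append(line).append('\n');
        }
        return sb.toString();
    }
}
```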
You must mean "some software projects" and not "frequently." And who's proud of ignorance? Ignorance of what? And, what's more, why should it be considered a negative if someone isn't concerned, or even is proud, about ignorance of certain things.