On the other hand, I'd like to have a full-stack CEO as well. Not just someone who can only say "make it work", "make it faster", "make me some money".
And by full-stack CEO, you mean someone who understands all aspects of the business and isn't a pandering, disconnected figurehead? In a lot of companies they'd be called an operating CEO or, more often, the COO/president.
Am I exaggerating in terms of salary? Probably... these days, who can learn every single technology (from hardware to software) under the sun?
Web services, REST, ORMs, databases, data modeling, data warehousing, virtualization, security, capacity planning, storage, performance tuning, backup and recovery, test automation (you do test your own code, right? not just throw shitty code over the wall to QA?), build and deployment strategy.
[No, NoSQL doesn't count]
There are really two types of coders: the scrappy I'll-figure-it-out-as-I-go-along full-stack coders, and the I'm-working-9-to-5-and-have-50-certifications coders.
Of course I exaggerate a bit, but businesses like to put people in little boxes. That means that the vast majority of business programmers (which are most of them) are working in one specialized area. They can write code, but don't know how the database works. Or they can make a kick-ass website, but have no idea how the back-end works. They can make the best library you ever saw, but they have no people skills. It's very common.
In fact, "full stack" to me means making things people want, which includes a lot more than just bits and bytes. It starts with words coming out of somebody's mouth and ends with them being happy. Lots of little steps in between.
If anything, the generalist is a dying breed.
If you knew a little about assembler, then maybe you'd waste a lot of time rewriting code instead of fixing the DB. Full-stack programmers are valuable for knowledge that is an inch deep but a mile wide. Any fix a full-stacker performs is going to seem trivial to an expert in that field.
Consider, too, that expert-level knowledge of the sort a DB performance person brings is often (rightly or wrongly) more expensive than that of a generalist. Maybe the expert can bring the runtime down by a further factor of 100, from 10 minutes to about 6 seconds, but at that point you have to seriously question whether it's worth it. In the article's example, the analysis step could only be run twice in a week because it took 24 hours to execute (and, presumably, the other 3 days of the week were spent doing something with the results). Since it might take a full day and a half of work to actually do anything with the results, getting an analysis run down from 10 minutes to 6 seconds clearly isn't worth it. If it were me, I might even have stopped once I got the runtime below an hour, because then it could be run over a lunch break or during a meeting or something.
I suppose my point is basically equivalent to Knuth's quip about premature optimization. In this particular example, there seems to be no reason to optimize further, because the system is probably 95% as useful as it can possibly get. Eliminating roughly 47.7 hours of runtime per week lets you squeeze in one more run per week, but cutting the remaining runtime down to 12 seconds per week (which is essentially zero) would seem to have a very small marginal gain.
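To put rough numbers on the diminishing returns described above, here's a small sketch using the figures from the example (24 hours per run, two runs per week; the helper name `weekly_saving` is just mine):

```python
HOUR = 3600  # seconds

def weekly_saving(old_s, new_s, runs_per_week=2):
    """Hours of wall-clock time saved per week by a per-run speedup."""
    return runs_per_week * (old_s - new_s) / HOUR

day = 24 * HOUR      # original runtime per analysis
ten_min = 10 * 60    # after the generalist's factor-of-~100 fix
six_sec = 6          # after a hypothetical expert's further factor of 100

# First speedup: 24 hours -> 10 minutes per run.
print(f"24h -> 10min saves {weekly_saving(day, ten_min):.1f} h/week")
# Second speedup: 10 minutes -> 6 seconds per run.
print(f"10min -> 6s saves {weekly_saving(ten_min, six_sec):.2f} h/week")
```

The first factor of 100 recovers nearly two full working days per week; the second recovers about 20 minutes, which is why it's hard to justify paying specialist rates for it.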
Knowing enough to not do bone-headed things is what gets you acceptable performance. And, it gives you the ability to know where your critical paths are, and to hand those critical paths off to the most expert person in the room.
In my case, at least, I'm "full stack", but as soon as it's feasible it'll be someone else's job to do what I've done, a whole lot better.
Until then, though, I'm full stack because I'm intimate with my entire platform (I wrote it, of course), not because I have some deep knowledge of everything I'm doing... most of what I'm learning is how not to do things.
That's the problem with computing: often you don't have a very good understanding of whether there's a good reason for the thing to be slow. Only if there is such a conceptual reason (say, you're solving a problem involving very many variables that must be optimized) should you start thinking about whether a different approach is in order.
There seems to be a battle here between full-stackers and specialists, but if you think of a full-stacker as "an extremely experienced generalist", then I think there are plenty of "extremely experienced specialists" who fit that description too. That is to say, an extremely experienced "database optimizer" is going to know how the presentation layer hits the database and how the database hits the disk, and I expect they would know many different database technologies.