I have seen far more damage done by the people producing "mostly working prototypes in hours or days" than by people who want to write beautiful code. I understand that a fixation on producing something beautiful can lead to paralysis, but that is rarely the case. In most cases the prototype grows cancerously and becomes impossible to fix within quite a short time. My advice: listen to the experienced programmers who have screwed up more times than you have, even if they seem to be obsessed with "beautiful code".
Most of the damage I've seen was from people who think they can write beautiful code, but instead write a slow overengineered framework and use clout to force everyone else's code into that framework. Sadly, these people are often smart and experienced. It's more of an attitude problem: they are not content with making a library that will sit at the bottom of the call stack, when they could instead make a framework that sits at the top.
I've worked at a couple of companies now that have this problem. I know that one is now on its third attempt at a second-gen rewrite of its aging core product. They can't ship it because they haven't found that magic formula to make it the most perfect program ever[0]. Sadly, this has been the case across multiple teams.
[0] I call this the Neo Architecture. They're looking for "the one". The architecture that will allow for any CR to be handled elegantly and beautifully, where all concerns are completely separated, where all data is perfectly abstracted. There's no such thing. It doesn't exist. Just ship already!
> I have seen way more damage done by the people "mostly working prototypes in hours or days" than people who want to write beautiful code
Because people who write "beautiful code" don't often achieve anything that can be seen. Successful companies are based on quickly written code that delivers.
They could actually be better and much faster than humans for initial diagnosis. This has far greater implications; e.g. early detection meaningfully reduces the burden on the whole system.
That has been available since the 1970s. For 80% of cases, healthcare is not a very complex compsci problem. The problem is that the presentation of symptoms is highly subjective and needs highly complex interpretation that no AI will ever be able to achieve, plus there is the liability conundrum to deal with.
No one claims AI will be the authority; it is good for finding candidates much earlier. We never had devices that match the success rate of specialists on several eye problems in one go, and definitely not in the '70s. I don't understand what you are arguing against, to be honest.
So many programs and devices are used where someone would be liable if they malfunctioned. In a production line for example, if something goes wrong and it has to be turned off, every hour costs $$$ to the production plant owner. Similarly for robots: there have been cases where industrial robots have killed people. Accidents with machines can happen in so many industries. If the machine is wrong in 0.2% of cases, that's a risk that can be calculated. If its rate of misdiagnoses is equal to the rate of a human expert, then replacing non-experts with it will improve patient experience. Of course, there might be super experts whose patients would be worse off if they were treated by an AI.
The jury is out on whether the Luddites were indeed troglodytes[1]. If you were to apply the 30 year nostalgia rule in 2049, then we might be hankering for the present, where 'AI' and the surrounding issues are still in their infancy.
However, a pragmatic approach would suggest that any form of AI and its derivatives would be assistive in the medical field and play a hybrid role, rather than being a panacea[2].
I guess Google engineers had some experience with those hard-earned lessons before Facebook even existed. The diagram you mentioned is very typical of microkernels and has been a subject of research for decades.
I don't think they actually get "angry", but rather pretty annoyed. I do the same thing. If someone asks me for help or something and I see them "hunting and pecking", then I get slightly annoyed that this person hasn't spent a couple hours learning how to properly type. It would save them so much time! I just want everyone to be more productive and something like typing is a pretty easy place to gain productivity.
I fall back on touch typing when I want to keep typos at a minimum, but it's so much slower for me than "hunting and pecking" (even though I don't look at the keyboard when doing it, so it's hardly "hunting"). But that speed increase comes at the cost of accuracy. If I could get speed parity with touch typing, it'd be amazing. I've spent days trying to learn. I don't know if I'll get there. Please forgive me, senpai. :C
I do get angry too, in the sense that this person has not spent any time optimizing something that is their full-time job. How can a person who cannot optimize basic things for themselves be expected to lead a group? It smells a lot like either corruption or incompetence.
At this point, I hope you'd own up to taking on the burden of proof that a person who cannot optimize basic things for themselves cannot be expected to lead a group.