
I'll disagree with you on your first point. One of the hallmarks of specialists is that they know in practice how the software they specialize in interacts with the rest of the system, where a generalist might not. For example, most specialists in any performance-critical software are intimately familiar with the Linux kernel's behavior around things like I/O scheduling and cache eviction, because of how it affects their program of choice. Generalists, by contrast, rarely know any one part of the system in enough depth to quickly diagnose those problems, which is why companies without suitable onsite expertise so often bring in a specialist to resolve them.
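To give a concrete (if hypothetical) flavor of the kind of kernel interaction I mean, here's a minimal Python sketch; the file path is made up. After a one-off sequential scan, a specialist might use posix_fadvise(2) to tell the kernel it can drop those pages from the page cache, instead of letting the scan evict hotter data:

    import os

    # Hedged sketch: the path is hypothetical. After a one-off sequential
    # scan (say, a backup or compaction pass), hint to the kernel that we
    # won't reuse these pages, so the page cache keeps hotter data instead.
    fd = os.open("/var/lib/mydb/segment-0001.log", os.O_RDONLY)
    try:
        while os.read(fd, 1 << 20):  # read in 1 MiB chunks until EOF
            pass
        # Linux-only: advise the kernel to drop this file's cached pages.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)

That's not something a generalist tends to reach for, because you only learn it by watching one particular program fight the page cache.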

I'll agree with your second point, to some extent. Generalists are rarely tied to any one technology and therefore can be very good at getting your organization to use the right tool (avoiding the common "hammer-nail" problem). However, just as frequently, I see generalists picking the wrong tool for the job because, again, they aren't familiar enough with the tools already at their disposal to know all their capabilities, or to make an informed decision about whether the new tool's gains are worth the complexity of another layer in the stack. And, of course, nobody feels the pain of extra layers quite like the DevOps folks do.

I'm not sure what to make of your third point. Isolated processes talking to each other is just a different strategy from a monolithic design. There are advantages and disadvantages to each. It's unclear to me that monolithic means "better designed," and in fact there are good security arguments to the contrary. But maybe I'm misreading what you're saying.
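To make the tradeoff concrete, here's a minimal sketch (the function name and input are mine, not yours): in the isolated-process design, a risky step like parsing untrusted input runs in its own address space, so a crash or memory bug there can't touch the parent.

    import multiprocessing

    def parse_untrusted(data, conn):
        # Hypothetical parser; a crash here kills only this child process,
        # not the parent, and the child never shares the parent's memory.
        conn.send(data.decode("utf-8", errors="replace").strip())
        conn.close()

    if __name__ == "__main__":
        parent_conn, child_conn = multiprocessing.Pipe()
        worker = multiprocessing.Process(target=parse_untrusted,
                                         args=(b"  untrusted bytes  ", child_conn))
        worker.start()
        # The timeout guards against a hung or crashed child.
        result = parent_conn.recv() if parent_conn.poll(5) else None
        worker.join()
        print(result)

The monolithic version is just a direct function call: simpler and faster, but any bug in the parser now shares the whole program's memory and privileges, which is the security argument I'm gesturing at.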
