
Overspecialization is the source of organizational smells in a lot of medium-sized engineering companies. It's often better to have generalist engineers with some specialization in the areas you need than a bunch of pure specialists, for a few reasons:

- (pure) Specialists often don't understand how their decisions affect other systems (and middle management or communication isn't always a solution)

- (pure) Specialists tie you to a particular technology when in reality you may need to evolve to use other technologies.

- If you need a bunch of different specialists just to get something simple done (perhaps something you don't do often enough to have a process for), then because they're siloed, the work gets a lot more complex and usually ends up badly designed, since it's harder to iterate across teams. A generalist can get simple things done that span different skill sets.




I will disagree with you on your first point. One of the characteristics of specialists is that they know in practice how the software in which they specialize interacts with other software, where a generalist might not. For example, most specialists in any performance-critical software are pretty intimately familiar with the behavior of the Linux kernel when it comes to things like I/O scheduling and cache eviction, because of how it affects their program of choice. Generalists, by contrast, rarely know any part of the system in enough depth to quickly diagnose such problems. Companies without suitable in-house expertise often end up bringing in specialists to resolve exactly these problems.
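As a concrete illustration of the kind of kernel interaction I mean: database and storage specialists commonly use posix_fadvise to keep a bulk write from evicting hotter data out of the page cache. A minimal Python sketch (Linux-specific; the file and sizes here are just illustrative, not any particular system's behavior):

```python
import os
import tempfile

# Write a blob, then hint to the kernel that we won't re-read it soon,
# so its pages don't evict hotter data from the page cache.
data = b"x" * (1 << 20)  # 1 MiB of dummy payload

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    f.flush()
    os.fsync(f.fileno())  # pages must be written back before the hint matters
    if hasattr(os, "posix_fadvise"):  # Linux-only; absent on some platforms
        os.posix_fadvise(f.fileno(), 0, len(data), os.POSIX_FADV_DONTNEED)
    path = f.name

# The file is still fully readable; only the caching hint changed.
with open(path, "rb") as f:
    assert f.read() == data
os.unlink(path)
```

A generalist can read this in a man page; the specialist knows from experience when it matters and when the kernel will ignore the hint anyway.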

I'll agree with your second point, to some extent. Generalists are rarely tied to any one technology and therefore can be very good at getting your organization to use the right tool (avoiding the common "hammer-nail" problem). However, just as frequently, I see generalists picking the wrong tool for the job, because they again aren't intimately familiar enough with the tools already at their disposal to understand all their capabilities or be able to make an informed decision about whether the gains of the new tool are worth the added complexity of introducing another layer to their stack. And, of course, nobody feels the pain of adding extra layers to the stack quite like DevOps do.

I'm not sure what to make of your third point. Isolated processes talking to each other is just a different strategy from a monolithic design. There are advantages and disadvantages to each. It's unclear to me that monolithic means "better designed," and in fact there are good security arguments to the contrary. But maybe I'm misreading what you're saying.

-----



