I don't question that this decision is sometimes (often) driven by the need to increase publication count. (Which, in turn, happens because people find it easier to count papers than to read them.) But there is a counterpoint here: if you write, say, a 50-pager (not super common but also not unusual in my area, applied math) and spread several interesting results throughout, odds are good many things in the middle will never see the light of day. Of course one can organize the paper to try to mitigate this, but sometimes it is better and cleaner to break a long paper into shorter pieces that people can actually digest.
Well put. Nobody wants salami slices, but nobody wants War and Peace, either (most of the time). Both are problems, even if papers are more often too short than too long.
Not only that, but in the academic world 20 papers with 50 citations each are worth more than one paper with 1,000. Even though the total citation count is the same, the former gives you an h-index of 20 (and an i10-index of 20), while the latter only gives you an h-index of 1 (ditto for i10).
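To make that arithmetic concrete, here's a minimal sketch of how the h-index and i10-index are computed (h-index: the largest h such that at least h papers each have at least h citations; i10: the number of papers with at least 10 citations). The citation lists below are the hypothetical numbers from this thread, not real data.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

many_small = [50] * 20   # 20 papers, 50 citations each (1,000 total)
one_big    = [1000]      # 1 paper with 1,000 citations

print(h_index(many_small), i10_index(many_small))  # 20 20
print(h_index(one_big), i10_index(one_big))        # 1 1
```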
Though truthfully it's hard to say what's better. All of these can be hacked (a common way to hack citations is to publish surveys; you also just get more by being at a prestigious institution or being prestigious yourself). The metric is really naïve, but it's commonly used because actually evaluating the merits of individual works is quite time consuming and itself an incredibly noisy process. But hey, publish or perish, am I right?[0]
That's a fantastic example of "that which gets measured gets optimized." The academic world's fascination with these citation metrics is hilarious; it is so reminiscent of programmers optimizing for whatever metric management has decided is the true measure of programmer productivity. Object code size, lines of code, tickets closed, and so on...
It's definitely a toxic part of academia. Honestly, if it weren't for that, I'd take an academic job over an industry one in a heartbeat.
Some irony: my PhD was in machine learning. Every intro course I know of (including mine) discusses reward hacking (aka Goodhart's Law). The irony being that the ML community has dialed this problem up to 11. My peers who optimize for this push out 10-20 papers a year. I think that's too many and means most of the papers are low impact. I have similar citation counts to them but a lower h-index, and they definitely get more prestige for that, even though it's harder to publish frequently in my domain (my experiments take a lot longer). I'm with Higgs, though: it's a lazy metric and imo does more harm than good.