Hacker News

"Ranking Almost Every General in the History of Warfare"

Not even close to "almost every general". Even granting his own admission that he left out the Mongols, his analysis is obviously very, very Western-centric. Somehow omitting the most populous continent in the world, today and probably throughout history, didn't dissuade the author from making such a bold claim. The most obvious omissions are the Chinese and Japanese generals. It's not as if those two countries lacked wars, writing about war ("The Art of War", anyone?), or a historical tradition. And let's not forget the Indian subcontinent has its own history of warfare.

His comment about Rommel was kind of awful as well. Rather than trying to work out why his model didn't generate the expected result, he tried to convince people that their perspective is wrong. Rommel may not be the impressive general he's popularly believed to have been, but the author's understanding of Rommel is quite shallow. Rommel's exploits go all the way back to WWI: he was the youngest recipient of the Pour le Mérite, awarded after he captured over 9,000 prisoners with just 150 soldiers.

This is an example of someone with technical skills applying those skills to a field they don't really understand and wrapping it up with a bold claim.




Garbage in, garbage out. If your data set comes from scraping Wikipedia, you're going to have these kinds of flaws and omissions. What's the alternative, though, other than hiring an army of grad students? Even if flawed, it's at least interesting. I wish the author had gone into more detail about counterintuitive results (like Rommel), but part of the point of an exercise like this is to find instances where the model disagrees with common wisdom. If you jump straight to rejecting the model without asking why, you don't learn anything.

Some other thoughts:

- Analysis is limited to results of individual battles. That's a very narrow slice of a general's actual job.

- He ties WAR to overall W/L, which isn't great, but the data doesn't give you many options.

- The model rewards "underdog" wins. This sounds like a decent proxy for skill, but it seems like a big part of the job is avoiding being an underdog in the first place.

- Army size and casualty figures for anything pre-17th century (and that's generous) are extremely suspect.
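To make the "rewards underdog wins" point concrete, here's a toy sketch of what credit-above-expectation scoring looks like. The logistic form, the steepness constant, and both function names are my assumptions for illustration, not the article's actual model:

```python
import math

def expected_win_prob(own_size: int, enemy_size: int, k: float = 2.0) -> float:
    """Toy logistic estimate of win probability from relative army size.

    The logistic shape and steepness k are illustrative assumptions only.
    """
    log_ratio = math.log(own_size / enemy_size)
    return 1.0 / (1.0 + math.exp(-k * log_ratio))

def win_score(own_size: int, enemy_size: int, won: bool) -> float:
    """Credit above expectation: actual result (1 or 0) minus expected
    win probability, so an underdog's win is worth far more than a
    favorite's, and a favorite's loss is heavily negative."""
    p = expected_win_prob(own_size, enemy_size)
    return (1.0 if won else 0.0) - p
```

Under this toy scoring, winning at 10,000 vs 30,000 earns roughly +0.9, while winning the mirror-image battle at 30,000 vs 10,000 earns only about +0.1, which is exactly the dynamic the bullet describes: the model pays you for being outnumbered, even though avoiding that situation is a big part of a general's job.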


Agreed. Trying to compare generals across ~10 different centuries without taking into account the specifics and context of each campaign renders the results useless.

I wouldn't trust this model to compare Montgomery to Patton during the Sicily invasion, let alone compare Caesar to Napoleon.


One thing I'd be curious to generate from this data set, given the historical span it covers, is the correlation between expected battle outcome and actual battle outcome.

As someone who picked up a computational military modeling course in college, I can say that attempting to model ancient warfare is a vastly different task than modeling modern battles.

My gut says that modern warfare de-correlates more strongly from numeric advantage due to the increased speed and lethality of available force types.

Also, for the author: if you want to be more accurate, start calculating actual expected outcomes from the forces. Lanchester's Laws are as good a place as any to start.

https://en.m.wikipedia.org/wiki/Lanchester%27s_laws
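As a reference point, here's a minimal sketch of the square-law (aimed-fire) case from the linked page, stepping two forces forward until one is wiped out. The Euler step size and the effectiveness coefficients are arbitrary illustration values, not anything from the article:

```python
def lanchester_square(a: float, b: float, alpha: float, beta: float,
                      dt: float = 0.01) -> tuple[float, float]:
    """Integrate Lanchester's square law, dA/dt = -beta*B and
    dB/dt = -alpha*A, with simple Euler steps until one force reaches
    zero. alpha and beta are per-unit effectiveness coefficients.
    Returns the surviving strengths (losing side clamped to 0)."""
    while a > 0 and b > 0:
        a, b = a - beta * b * dt, b - alpha * a * dt
    return max(a, 0.0), max(b, 0.0)
```

With equal effectiveness, the square law says the larger force wins with roughly sqrt(A² - B²) survivors: 1000 vs 700 leaves about sqrt(1000² - 700²) ≈ 714 on the larger side (the crude Euler integration lands close to this), which is why concentration of force matters so much more than a linear head-count comparison suggests.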


Eh, scraping any encyclopedia would introduce insurmountable bias and error. As you say, only primary sources are usable here.


Yeah, didn't mean that as a knock against wikipedia specifically.


>This is an example of someone with technical skills applying those skills to a field they don't really understand and wrapping it up with a bold claim.

Except he forgot to close with "which is why you should fund my startup."


OK, we'll put "some" above.


Has HN considered color coding title changes, or even the portions of the title changed in smaller or subtle changes like here?


Can't say we have. What would be the benefit?


1) People reading comments where people complain about the title get an indication that it was changed, so there are fewer "The title looks fine to me" / "oh, they changed it already" comments that don't add to the discussion in any way.

2) If only the changed portion of the title is colored, there's an indication of whether it was a subtle change to better fit the article content (even if the title previously matched the article) or an entire rewrite because it wasn't descriptive enough, or was just plain misleading or flame-bait.

3) It may help you recognize it from the main page if you see it later, knowing it previously had a different title.

4) This is separate from what I asked about before, but if all comments from HN staff noting changes followed the format "Changed the title from $OLD to $NEW because $REASON", I think it would cut down on the fluff comments about whether the title changed (as in #1) and also cut down on the comments that opine about HN's editorializing of titles, which seems to occur fairly commonly when the title changes on submissions with sufficient activity.

#4 would probably help the most, but even an orange asterisk after the title in a span with a title tag with the original title would be good. I'm just against loss of information in general, and would like to know what the original submitted title was in some cases, and it's not always obvious. Maybe I'm just weird.


It's almost 2018 and you're still peddling intolerant anti-sensationalism wrongthink? Get with the times! /s



