I'm not claiming to give a formal definition of terrorism; I'm just saying that if such a definition exists, it should definitely include the Paris attacks, otherwise it would be meaningless.
Regardless of that, the events you mentioned are massive, coordinated, state-level actions. I just can't accept labeling those as terrorism without that word losing all meaning. Yes, those events are horrible, but not every horrible thing that involves killing civilians is terrorism, even when it's brutal mass murder. There are other words too, like genocide, war crimes, massacre, assassination, etc.
Hiroshima happened in the midst of a full-blown state-vs.-state war, in which the US was not the aggressor against Japan.
While it is true that the atomic bomb was hoped to be as useful psychologically as it was physically, there is no true comparison here; this is arguing for the sake of making a point.
True. Ironically, it's generally understood by Americans that this was done to break Japan's will to continue fighting. It worked. Everything we do in war is to break the will of the enemy to continue.
Thus unless we expect one side to nobly surrender, we ought to expect the most extreme atrocities to occur in war.
When you think about it, it's silly to draw an arbitrary line around "humane" combat; if we really wanted to be humane, we'd have our leaders compete at chessboxing or something else that minimizes the destruction of property and loss of life.
This is one reason why lynx are often suggested as a first big predator to reintroduce. They keep to forests, while sheep graze on open ground. Sheep can be protected with fences (keeping them from spreading too thin), livestock guardian dogs, and active shepherding. Farmers just resent the effort.
I built a couple of systems this way in my early days of breaking large systems down into smaller ones (around 2006-2011), using ESI (Edge Side Includes) with Varnish as the layer of orchestration and integration.
There used to be some real drawbacks to this, especially as Varnish wasn't really happy with potentially hundreds of ESI calls per HTTP request, yet that was exactly what we were doing: a call to a JSON API that returned a collection would produce a document containing ESIs pointing to each item, and each item could in turn contain ESIs to hydrate itself, and so on.
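A hypothetical sketch of that fan-out pattern (the `<esi:include>` element is from the ESI spec; the URLs and document shape here are made up for illustration). The backend emits a collection document whose items are stitched in by Varnish, not by the application:

```
{
  "things": [
    <esi:include src="/things/1"/>,
    <esi:include src="/things/2"/>,
    <esi:include src="/things/3"/>
  ]
}
```

Each of `/things/1` etc. could itself contain further `<esi:include>` tags for its own subresources, which is how one request ballooned into potentially hundreds of ESI calls.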
One of the interesting bugs was that if you had a lot of ESIs, only the first 10 or 20 succeeded and the rest failed. We raised bugs for this at the time (Varnish 2.1.4) and they were all resolved by Varnish 3.
Varnish is great at this now, and the approach can and does work, but it still requires discipline to avoid other pitfalls: circular references, error handling when you're serving something other than text/html, and the extra strain on Varnish now that you're forcing it to orchestrate and manage connections for every subresource.
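For reference, a minimal sketch of turning ESI processing on in modern Varnish (4+) VCL; in the Varnish 2/3 era this lived in `vcl_fetch` rather than `vcl_backend_response`. The `X-Do-ESI` header is an assumption, just one way to let the backend opt in per response:

```
sub vcl_backend_response {
    # Only parse the body for <esi:include> when the backend has
    # marked the response as an ESI document (hypothetical header),
    # so Varnish isn't scanning every response.
    if (beresp.http.X-Do-ESI) {
        set beresp.do_esi = true;
    }
}
```

Gating this on a header (or on content type) is part of the discipline mentioned above: it keeps ESI processing off the responses that were never meant to contain includes.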
But yeah, it works.
However... I personally moved away from building this kind of thing as it really wasn't an elegant solution.
This approach made it more difficult to add things like tracing, soak testing of new endpoints, and many other things that really increase in importance after you've gone through the first iterations of many simple v1 services.
Semantic Versioning is not a magical cure-all. It's still up to the author to make the right choice about which number to bump.
We have a node/gulp-driven build with, ultimately, about 500 dependencies. We don't store node_modules with the code; we run npm install on each build. This leads to periodic failures because some sub-sub-sub-dependency bumped its patch version (the third number) when it should have bumped the minor (the second), causing unwanted build-time behavior. Even if our package.json lists specific versions of top-level dependencies instead of ranges, this will happen. It's aggravating.
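To illustrate why exact pins don't help (package names here are made up, and the comments are annotations, not valid JSON):

```
// your package.json: gulp-widget is pinned exactly
{ "dependencies": { "gulp-widget": "3.2.1" } }

// but gulp-widget's own package.json declares a range,
// which npm resolves fresh at install time:
{ "dependencies": { "left-util": "^1.4.0" } }
```

So a broken left-util 1.4.7, published as a "patch" release, reaches your build even though nothing in your own manifest changed.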
Projects exist to bundle up a known-good node_modules and restore it at build time to avoid this. npm also offers shrinkwrapping, which locks the exact resolved version of every transitive dependency, and peer dependencies, which let a package declare which versions of a host dependency it works with. IMO, if SemVer actually worked in practice, none of this would be necessary.
Some of this is an indictment of node/npm (maybe Ruby handles it better), but I still think SemVer is overrated.