No, but it would have taken a lot more words to convey the same idea, so I'm happy just giving the gist. Most of the time, you don't want to grate your onions.
That's going to end up with a slush of onion fibre + onion juices. Very much different from even small bits of cut onion. Some recipes call for blended onions though.
As I understand it (not completely), according to the SHIELD Act, the losing party in a patent dispute has to pay legal fees for both sides, which are usually substantial. If the lawyers who really own the patent represent themselves, they are in danger of having to pay out if they lose. So they effectively hired a plaintiff that they could 'represent'. Handing out 10% of the proceeds if you win, in exchange for halving the costs if you lose, is a reasonable hedge.
However, a law firm is not supposed to do that, which is why they are in trouble.
Not a direct answer, but Ethernet is sometimes brought up as a successful example of Worse is Better. At one point Token Ring was a serious competitor - it had careful designs to avoid collisions when the network was busy, prioritize traffic, etc. But it was comparatively slow and expensive. Ethernet just says "eh, retry on collision." And that simplistic foundation has carried on to where we have a standard for 800 Gigabit Ethernet.
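That "just retry" is more or less one loop of binary exponential backoff. A rough Python sketch of the classic CSMA/CD behaviour - the transmit/collision-detection callables and constants are stand-ins for illustration, not any real NIC interface:

    import random
    import time

    MAX_ATTEMPTS = 16      # classic Ethernet gives up after 16 tries
    SLOT_TIME_S = 51.2e-6  # slot time for 10 Mbit/s Ethernet

    def send_with_backoff(transmit, detect_collision):
        # Binary exponential backoff, roughly as in CSMA/CD.
        for attempt in range(1, MAX_ATTEMPTS + 1):
            transmit()
            if not detect_collision():
                return True  # frame got through
            # Collision: wait a random number of slot times in
            # [0, 2^k - 1], with k capped at 10.
            k = min(attempt, 10)
            time.sleep(random.randint(0, 2 ** k - 1) * SLOT_TIME_S)
        return False  # too many collisions, give up

Compare that to Token Ring's token management and priority machinery and it's not hard to see why the cheap option won.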
A lot of "why TF would anyone you do this?" questions can be answered with variations like this. Borrowing from economics, the discount rate defines how much more valuable something is RIGHT NOW vs at some point in the future. If the system is down and we're losing money by the minute until it is up; if the company will be broke next week unless we make customer X happy; if you have to get to the airport in 2 hours - it is completely rational to make the quick fix. The costs of not doing so outweigh the costs of doing it again properly.
It really becomes a problem when the same short term calculation becomes the standard - you can quickly rack up costs that future you or future company will never pay down.
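To put a toy number on that discount-rate framing (all figures made up):

    def present_value(future_amount, annual_rate, years):
        # Standard discounting: what money received later is worth today.
        return future_amount / (1 + annual_rate) ** years

    # Hypothetical: doing it properly saves 50k of rework a year from now,
    # but the outage is burning 1k per minute right now.
    savings_later = present_value(50_000, annual_rate=0.10, years=1)  # ~45,455
    outage_cost_per_hour = 1_000 * 60                                 # 60,000

With numbers like these, one hour of downtime already costs more than the discounted value of the future savings, so the quick fix is the rational call. The trap is exactly the one above: keep plugging emergency numbers into that calculation on an ordinary Tuesday and the debt compounds.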
I sometimes wonder about the commercial impact of source code leaks. In this case, nobody noticed until the product was commercially irrelevant, but what might have happened if some competitor had noticed?
My guess is probably nothing. Having the interpreter source code is a liability for other companies in case of an infringement lawsuit. Are there good examples where a source code leak actually led to significant consequences for a company?
My hunch is that companies tend to be paranoid and vastly overestimate the commercial importance of their source code. Are there really that many realistic opportunities to copy secrets from one mature source tree to another and commercially benefit from it? These code bases are likely totally different, use different design patterns, different internal APIs, data models are different, maybe even different languages.
Anyone who has done integration work between two totally foreign-to-each-other code bases knows that the integration effort is often greater than just writing the code from scratch.
The biggest risk is probably someone getting their hands on the entire project, including code, art assets, build infrastructure, and just compiling an identical program to release under their own name. But that would be obvious and probably easy to prove/litigate.
> Are there really that many realistic opportunities to copy secrets from one mature source tree to another and commercially benefit from it? These code bases are likely totally different, use different design patterns, different internal APIs, data models are different, maybe even different languages.
When you're stuck failing to make something work, it can be a large benefit to be able to look at how somebody else managed to make it work. Sometimes it's a bit you forgot to set on a register somewhere, sometimes it's a sequence of operations that tickles a hardware bug which can be avoided by doing things in a different order. On a higher level, sometimes the issue is that the A API is implemented as "return <error>" and only the corresponding W API is actually implemented. Or the trick to make the API work is to cast one of the many objects you already have into a non-intuitive, poorly documented interface, allowing you to call a method which returns yet another object which allows you to do what you actually want. And so on.
This was one of the ideas pursued in the Reagan-era Strategic Defense Initiative, aka Star Wars. They weren't able to develop a system that would work against ICBMs. Powering an aircraft is both easier and harder; the aircraft presumably isn't making evasive maneuvers and trying to stop you from powering it, but the economic and safety constraints are harder.
IIRC, one of the problems with SDI was that at the time the only way to make lasers powerful enough to destroy an ICBM was to pump the lasing medium with a nuke, making them one-shot devices.
Sensing may still be a problem today, especially as stealth is also improving, but detection in general is much easier than it was in the Reagan era.
That doesn't sound like an airline I would care to fly!
During takeoff a 747 consumes power at a rate of about 90 MW. Having something outside the plane, whether in orbit or ground-based, pumping that much power into the plane while I'm in it, sounds quite alarming. Not to mention issues with aiming, power loss, etc.
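For what it's worth, the 90 MW figure roughly checks out as thrust times speed (all numbers ballpark, from memory, not a spec sheet):

    engines = 4
    thrust_per_engine_n = 270_000  # ~270 kN per engine at takeoff, 747-400 class
    takeoff_speed_m_s = 80         # roughly 155 knots at rotation

    power_w = engines * thrust_per_engine_n * takeoff_speed_m_s
    print(f"{power_w / 1e6:.0f} MW")  # ~86 MW, same ballpark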
To power a plane with renewable sources, it seems most practical to generate power on the ground and use that to produce synthetic fuel.
"zero-error policy" as described here is a remarkable euphemism. You might hope that the policy is not to make any errors. In fact the policy is not to acknowledge that errors can occur!
Grating the Gordian knot, if you will.
https://youtu.be/glIUUrh6qtQ?t=40