The robustness principle is so compressed that it invites the reader to project an interpretation onto it.
The original Usenet comp.mail.pine newsgroup post by Mark Crispin (father of the IMAP protocol):
This statement is based upon a terrible misunderstanding of Postel's
robustness principle. I knew Jon Postel. He was quite unhappy with
how his robustness principle was abused to cover up non-compliant
behavior, and to criticize compliant software.
Jon's principle could perhaps be more accurately stated as "in general,
only a subset of a protocol is actually used in real life. So, you should
be conservative and only generate that subset. However, you should also
be liberal and accept everything that the protocol permits, even if it
appears that nobody will ever use it."
See my potted history of Postel's law: http://ironick.typepad.com/ironick/2005/05/my_history_of_t.h...
That said, it's still unclear how far this extends: the example given is of an unknown error code, which might lead you to think that the requirement is "syntactically well-formed input where you can't 100% determine the semantics." That's a far cry from the way browsers handle malformed HTML. Similarly, you have to apply some judgment about what meaning an agent can reasonably extract.
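That narrower reading is already codified for HTTP: RFC 9110 tells clients to treat an unrecognized status code as the x00 code of its class, which is liberal about unknown semantics while staying strict about syntax. A minimal sketch (the known-code set and helper name are illustrative, not from any comment here):

    # Treat an unrecognized but well-formed HTTP status code as the x00
    # code of its class, per RFC 9110. Liberal on unknown semantics,
    # strict on malformed syntax.
    KNOWN_CODES = {200, 204, 301, 302, 304, 400, 401, 403, 404, 500, 503}

    def effective_status(code: int) -> int:
        if not 100 <= code <= 599:
            raise ValueError(f"malformed status code: {code}")  # reject bad syntax
        if code in KNOWN_CODES:
            return code
        return (code // 100) * 100  # e.g. an unknown 452 is handled as a 400

    assert effective_status(452) == 400
    assert effective_status(200) == 200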
He does acknowledge that Postel's Maxim might be essential to any widely deployed protocol that wants to be successful. He also acknowledges that his alternative is inapplicable to the early life of a protocol.
The two main flaws in the reasoning are that incompatibility or bugs are not intentional, and that success is contingent on something 'just working'. From a thousand-foot view, you want errors, whatever their source, to propagate as little as possible and to affect as little of a network as possible. Postel's Maxim provides that effect. Being strict ensures that some process somewhere, over which you have no control, will affect your system.
Fortunately, it's being applied everywhere, notwithstanding purists. Your house's electrical input gets filtered to provide a standard voltage. Your computer's power supply filters that further and aims to provide stable voltage and current. Your electronics are surrounded by capacitors... and it goes up the stack. It's just good engineering.
The author's reasoning is simple: we want a proof of concept of an idea ASAP (lacking formal specifications of anomalies and error bounds), and when it's successful, error bounds and boundary conditions, including specifications thereof, should be communicated and implemented. That seems like a cogent and professional point to make, given the complexity of our systems.
Read through the error-recovery specification for HTML5. It's many pages of defined tolerance for old bugs. Then read the charset-guessing specification for HTML5, which is wildly ambiguous. (Statistical analysis of the document to guess the charset is suggested.) The spec should have mandated a charset parameter in the header a decade ago. If there's no charset specification, documents should render in ASCII with hex for values > 127.
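A minimal sketch of that proposed fallback, assuming a backslash-hex escape style (the comment doesn't specify one):

    def render_fallback(data: bytes) -> str:
        # No declared charset: pass ASCII through and show anything above
        # 127 as a hex escape instead of guessing an encoding.
        out = []
        for b in data:
            out.append(chr(b) if b < 128 else f"\\x{b:02x}")
        return "".join(out)

    print(render_fallback(b"caf\xc3\xa9"))  # -> caf\xc3\xa9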
Which do you think most people will choose?
However, Python source code is not typically dynamically generated, while HTML is, increasing the probability of errors the site author could not trivially predict and the user can do nothing about.
Character-set and language tags are useless in practice, even the dumbest heuristics defeat them. Statistical analysis is so effective that encoding metadata should be forbidden, not required.
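For illustration, the kind of statistical detection the commenter means is available off the shelf, e.g. the third-party chardet package (one example; the comment names no tool):

    # pip install chardet -- an off-the-shelf statistical encoding detector
    import chardet

    sample = "Dépêche-toi, café crème".encode("latin-1")
    print(chardet.detect(sample))
    # e.g. {'encoding': 'ISO-8859-1', 'confidence': 0.7, 'language': ''}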
"Fail early and hard, don't recover from errors" is a recipe for disaster.
That principle, applied to critical systems software engineering, leads to humans getting killed. E.g. in aerospace the result is airplanes falling out of the sky. Seriously. The Airbus A400M that recently crashed in Spain did so because somewhere in the installation of the engine control software the control parameter files were rendered unusable. The result was that the engine control software failed hard and the engines shut off, even though this would have been a recoverable error: just have a set of default control parameters hardcoded into the software, putting the engines into a fail-safe operational regime.
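A sketch of the hardcoded fail-safe fallback this comment describes; the parameter names, values, and file format are entirely hypothetical (real engine control software is nothing this simple):

    import json

    # Hypothetical conservative regime to fall back on; values illustrative only.
    FAIL_SAFE_PARAMS = {"max_thrust_pct": 60, "fuel_trim": 0.0}

    def load_engine_params(path: str) -> dict:
        try:
            with open(path) as f:
                params = json.load(f)
            if "max_thrust_pct" not in params:
                raise ValueError("incomplete parameter file")
            return params
        except (OSError, ValueError) as err:
            # Recoverable: log loudly and keep the engine in a degraded,
            # fail-safe regime instead of shutting it down.
            print(f"WARNING: {err}; using hardcoded fail-safe parameters")
            return dict(FAIL_SAFE_PARAMS)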
In mission and life critical systems there are usually several redundant core systems and sensors, based on different working principles, so that there's always a workable set of information available. Failing hard renders this kind of redundancy futile.
No, Postel's Maxim holds as strong as ever. The key point here is: "Be conservative in what you send", i.e. your implementation should be strict in what it subjects other players to.
Also, being strict in what you expect can easily be exploited to DoS a system (Great Firewall RST packets, anyone?).
The point of the draft is best summarized as "if you can detect that the other side has a problem in its implementation, raise red flags early and noticeably." It's not safe to recover to some default, because that can make you think things are working when they're not--imagine if the engine control software defaulted to assuming a different type of engine than the one installed. The resulting confusion could equally destroy the engines; something similar is what caused the Ariane 5 rocket to explode.
You're misinterpreting 'fail fast' - it doesn't mean 'entire system should fail catastrophically at slightest problem' or 'systems should not be fault-tolerant'. It just means that components should report failure as soon as possible so the rest of the system can handle it accordingly instead of continuing operation with an unrecognized faulty component leading to unpredictable outcomes.
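In code, the distinction might look like this sketch (all names hypothetical): the component raises as soon as it detects a fault, and fault tolerance lives in the surrounding system, which falls back to a redundant sensor.

    class SensorFault(Exception):
        """Raised as soon as a component knows it cannot give a valid reading."""

    def read_altitude(raw: float) -> float:
        # Fail fast: report the fault immediately rather than return garbage.
        if not -500.0 <= raw <= 60000.0:
            raise SensorFault(f"implausible altitude reading: {raw}")
        return raw

    def fused_altitude(primary: float, backup: float) -> float:
        # The system, not the component, decides how to degrade: fall back
        # to the redundant sensor when the primary reports a fault.
        try:
            return read_altitude(primary)
        except SensorFault:
            return read_altitude(backup)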
Fail hard and don't recover is absolutely fine in many scenarios, especially ones where no lives or expensive property are on the line.
Control software for jet engines is a whole different kettle of fish from sharing photos online. I would dare say most of us here have never worked on software that critical. The approach---from design to implementation to testing---is formalized to a degree most of us in the "agile" world of web apps could not tolerate.
It's a game-theoretically successful strategy to get your implementation to work with everyone. But when you accept sloppy input, you allow sloppy implementations to become popular.
Eventually the de facto protocol becomes unnecessarily complicated, and you need to understand the quirks of popular implementations.
Expecting someone to (say) read the HTTP spec and write a compliant implementation without tests that everyone else is using as well is lunacy, and leads to the nightmare we have today.
Standards without engineering to back them up are bad.
Side effect: Committees that produce "ivory tower" standards that are unimplementable will find that their work is ignored.
Another side effect: Standards will get simpler, because over-complex nonsense will be obvious once the committee gets down to making an exemplar actually work.
Not that it will ever happen...
[I helped write an HTTP proxy once. The compliant part took a couple weeks; making it work with everyone else's crappy HTTP implementation was a months-long nightmare on rollerskates]
That same pattern exists elsewhere too, so people often need to do "API science" to figure out how to use various tools. The common result is discovering how to use those tools seemingly effectively, but incorrectly.
This means forgive all mistakes:
Be liberal in what you accept, and conservative in what you send
This means forgive no mistakes:
Protocol designs and implementations should be maximally strict.
I would suggest an alternative,
Forgive most mistakes, but always let them know they made a mistake:
Be conservative in what you send, and as liberal as possible in what you accept, but always let them know what they could have done better.
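A sketch of that middle ground as a tiny header-line parser (hypothetical; here the tolerated sloppiness is a bare LF terminator and stray whitespace):

    import warnings

    def parse_header_line(line: str) -> tuple[str, str]:
        # Accept slightly sloppy "Name: value" lines, but warn about each fix.
        if line.endswith("\r\n"):
            line = line[:-2]
        elif line.endswith("\n"):
            warnings.warn("line terminated with bare LF; send CRLF instead")
            line = line[:-1]
        name, sep, value = line.partition(":")
        if not sep:
            # Some mistakes stay fatal: there is nothing sensible to recover.
            raise ValueError(f"unparseable header line: {line!r}")
        if value != value.strip(" \t"):
            warnings.warn(f"extra whitespace around value of {name.strip()!r}")
        return name.strip().lower(), value.strip()

    print(parse_header_line("Content-Type: text/html \n"))
    # warns twice, still returns ('content-type', 'text/html')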
Though I am beginning to think it comes down to "do you want to make it easy to run on a dev box, or easy to run in production?"
I think the lack of formality actually hurts non-technical users, because it makes the tools harder to program against.