Another, better way would have been to keep SPDY, since there is usefulness there, as a separate effort, and to get to HTTP/2 incrementally by using an iteration of something like AS2/EDIINT (https://tools.ietf.org/html/rfc4130), which does encryption, compression and digital signatures on top of existing HTTP (HTTPS is usable as it is today but not required, since the protocol uses the best compression/encryption the server currently supports). That standard still adheres to everything HTTP and hypertext-transfer based; it does not become a binary file format but relies on baked-in MIME.
An iteration of that would have been better for interoperability, security and speed. I have previously implemented it directly from the RFC for an EDI product, and it is used for sending financial EDI/documents by the largest companies in the world (Wal-mart, Target, the DoD) as well as most small and medium businesses with inventory. There are even existing interoperability testing centers set up for testing and certifying products that implement it, so that the standard works across all vendors and customers. An iteration of this would have fit in just as easily and been more flexible on the security, compression and encryption side, all over plain HTTP if you want, since it encrypts the body.
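For context on the "baked-in MIME" part: an AS2 message is ordinary MIME parts layered inside S/MIME envelopes, carried in an HTTP POST. Here's a rough sketch using only Python's stdlib email package; the real signing/encryption (CMS) is elided, and the payload bytes are placeholders, not real EDI or ciphertext.

```python
# A rough sketch of the MIME layering RFC 4130 builds on. Real AS2 would
# produce actual CMS (S/MIME) bytes for the signed/encrypted layers; here
# they are only labelled via MIME content types, not computed.
from email.mime.application import MIMEApplication

# Innermost layer: the EDI payload travels as an ordinary MIME part.
edi = MIMEApplication(b"ST*850*0001~...~SE*3*0001~", "edi-x12")
print(edi["Content-Type"])       # application/edi-x12

# AS2 then wraps that part in S/MIME layers (sign, compress, encrypt),
# each advertised purely through MIME content types:
envelope = MIMEApplication(b"<CMS-encrypted bytes would go here>",
                           "pkcs7-mime", smime_type="enveloped-data")
print(envelope["Content-Type"])  # application/pkcs7-mime; smime-type="enveloped-data"
```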
I've used AS2 extensively (in EDI) and, to be frank, fuck that. AS2 is a really bad version of HTTPS: you take HTTPS, you remove the auto-negotiation (email the certificates!), you disable CA certificate checking (self-signed for all the things), and then you allow HTTPS as an optional transport underneath AS2 (which is a huge nightmare in its own right).
Imagine this scenario: two people want to interconnect. Here's the process:
- They insecurely email their public key (self-signed) and URL (no MitM protection)
- You insecurely email your public key (self-signed) and URL
- They have an HTTPS URL
- Now, the thing to understand about AS2 is that when you connect to THEM, you give them a return URL where they confirm receipt of the transaction (the MDN); there's a sketch of this round trip after this list.
- HTTPS becomes a giant clusterfuck in AS2 because people try to use standard, popular HTTPS libraries (ones that do CA checking, domain checking, and other checks that are fine for typical web-browser-style traffic but not for specialised AS2 traffic). In the context of AS2, where certificates are often local and self-signed (some even use those for HTTPS) and the URL rarely matches the certificate, those libraries fall over all of the time.
- Worse still, some sites want to use HTTP only or HTTPS only, so when you connect to an HTTPS URL but give them an HTTP MDN URL, sometimes it will work, sometimes they will try the HTTPS version of the URL and then fall over and die, and other times they will error out just because of the inconsistency.
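To make the return-URL dance concrete, here is a minimal sketch of the async-MDN round trip, assuming the requests library. The partner IDs, URLs and Message-ID are invented for illustration; the header names are the ones RFC 4130 uses, and the S/MIME body construction is elided.

```python
# A minimal sketch of sending one AS2 message and asking for an async MDN.
import requests

def send_as2_message(payload: bytes) -> None:
    # In real AS2 the payload would be an S/MIME body: signed with our
    # self-signed key and encrypted with the partner's emailed certificate.
    headers = {
        "AS2-From": "MY-COMPANY",              # hypothetical IDs agreed over email
        "AS2-To": "THEIR-COMPANY",
        "Message-ID": "<12345@my-company.example>",
        "Content-Type": "application/pkcs7-mime",
        # Ask for a receipt (MDN)...
        "Disposition-Notification-To": "edi@my-company.example",
        # ...delivered asynchronously to this return URL. Note the trap: if
        # this is http:// but we connected over https://, some partners choke.
        "Receipt-Delivery-Option": "https://my-company.example/as2/mdn",
    }
    requests.post("https://their-company.example/as2", data=payload, headers=headers)
```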
Honestly, I used AS2 for over five years, and looking back, it would have saved everyone hundreds of man-hours to have just used HTTPS in the standard way and implemented certificate pinning (e.g. "email me the serial number," or heck, just list it in your documentation).
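Certificate pinning along those lines is only a few lines with the stdlib. A sketch, assuming the partner has published a SHA-256 fingerprint of their (possibly self-signed) certificate out-of-band; the fingerprint here is a placeholder.

```python
# Pin the partner's certificate by fingerprint instead of trusting a CA.
import hashlib
import socket
import ssl

PINNED_SHA256 = "ab12..."  # placeholder: the fingerprint the partner published

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    ctx = ssl.create_default_context()
    # Self-signed certs won't chain to a CA, so skip chain/hostname checks
    # and rely on the pin below instead.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)  # the peer's cert, DER-encoded
    if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
        sock.close()
        raise ssl.SSLError("certificate fingerprint does not match pin")
    return sock
```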
The only major advantage of AS2 is the MDNs. But even there, there is massive inconsistency: some partners return bad MDNs for bad data, while others only return bad MDNs for bad transmission of data (i.e. they only check that what was received matches what you sent 1:1, so you could send them a series of 0s and get a valid MDN, because they validate the data later and then email you).
To be honest, I hate MDN errors. They don't provide human-readable information in an understandable way. They're designed for automation, which rarely exists in the wider world (between millions of different companies running hundreds of different systems).
Give me an email template for errors any day; that way there can be a brief generic explanation plus formatted data to better explain things. The only thing MDNs do well is data consistency checking, which is legitimately nice, but almost every EDI format I know already has that built in (i.e. segment counters, end segments, etc.).
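For example, here's a sketch of that built-in checking using ANSI X12: the SE segment carries a count of all segments in the transaction set (ST through SE inclusive), so truncation is detectable with no MDN at all. The 850 skeleton below is a toy with made-up element values.

```python
# Verify the X12 segment counter: SE01 must equal the number of segments
# from ST through SE inclusive.
def x12_count_ok(transaction: str) -> bool:
    segments = [s for s in transaction.strip().split("~") if s]
    se = segments[-1].split("*")
    return se[0] == "SE" and int(se[1]) == len(segments)

# Toy 850 (purchase order) skeleton: ST + BEG + SE = 3 segments, SE01 = 3.
po = "ST*850*0001~BEG*00*SA*PO123**20150217~SE*3*0001~"
assert x12_count_ok(po)
```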
If I were to re-invent AS2, I'd build the entire thing on standard HTTPS: no HTTP allowed, no hard-coded certificates (i.e. you receive a public key the same way your web browser does), certificate pinning as a key part, and MDNs scrapped in favour of a hash sent as a standard header in the HTTPS stream. Normal HTTP REST return codes would indicate the outcome (e.g. 200 OK/202 Accepted on success, 400 with Md5Mismatch/InvalidInput/etc. on failure).
That way nobody has to deconstruct an MDN to try to figure out the error. Handling a small handful of HTTP codes is much easier to automate than the information barrage an MDN contains; it's easier for machines and easier for humans.
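A sketch of what the receiving side could look like, using Flask and SHA-256; the header name Content-SHA256, the /edi route, and the error string are invented for illustration.

```python
# "Hash header instead of MDN": verify the body digest and answer with a
# plain HTTP status code, so failures are visible immediately in the
# response rather than in a receipt document.
import hashlib

from flask import Flask, request

app = Flask(__name__)

def queue_for_processing(body: bytes) -> None:
    ...  # hand off to the EDI translator; out of scope for this sketch

@app.route("/edi", methods=["POST"])
def receive_edi():
    body = request.get_data()
    claimed = request.headers.get("Content-SHA256", "")
    if hashlib.sha256(body).hexdigest() != claimed:
        return "Sha256Mismatch", 400   # nothing to deconstruct
    queue_for_processing(body)
    return "", 202                     # accepted for later processing
```

The sender side is symmetric: compute the same digest over the body before POSTing, then branch on the status code. No MDN parser anywhere.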
I wasn't saying to use AS2 directly, but an iteration of it with all of the old pain points solved; the standard is a decade old now. Some parts of it wouldn't be needed, and an iteration is exactly what's called for.
The things AS2 got right: it rides on top of the existing MIME/HTTP infrastructure, and it does encryption/compression of whatever type the server/client specify. There is also some real benefit to encryption/compression/digital signing over plain HTTP.
HTTP/2 might be the first protocol for the web that isn't based on MIME, for better or for worse. We are headed toward a binary protocol that is still called the Hypertext Transfer Protocol.
HTTP/2 looks more like TCP/UDP, or the small layer on top of them that you might build for multiplayer game servers. Take a look at the spec and all the binary blocks that look like file formats from '93: https://http2.github.io/http2-spec/. It is a munging of HTTP/HTTPS/encryption into one big binary ball. It will definitely be more CPU intensive, but I guess we are going live either way!
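To see how un-MIME-like that is, here's a minimal sketch that parses the fixed 9-octet frame header every HTTP/2 frame starts with (per section 4.1 of the spec linked above):

```python
# Parse an HTTP/2 frame header: 24-bit payload length, 8-bit type,
# 8-bit flags, then a reserved bit plus a 31-bit stream identifier.
def parse_frame_header(data: bytes):
    if len(data) < 9:
        raise ValueError("need at least 9 octets")
    length = int.from_bytes(data[0:3], "big")
    frame_type, flags = data[3], data[4]
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

# A SETTINGS frame header: zero-length payload, type 0x4, stream 0.
assert parse_frame_header(b"\x00\x00\x00\x04\x00\x00\x00\x00\x00") == (0, 4, 0, 0)
```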
Plus, AS2 was a huge improvement over nightly faxing of orders, which large companies were still doing as late as 2003. AS1 (email-based) and AS3 (FTP-based) were available as well, but HTTP with AS2 is what all fulfillment processes use now. Yes, it has tons of problems, but the core idea of encryption/compression/signatures/receipts over existing infrastructure is nice. Everything else you mention exists, and those definitely are the bad parts, though much of that wouldn't be needed in the core.