If you own both TLS endpoints, you can pin both public keys, and any attempt to intercept the connection against the two pinned keys will fail unless both private keys are compromised.
But that defense only stands if the TLS certificates are imprinted at both endpoints beforehand, which no web browser can do during mutual TLS unless the browser not only takes the client-side mTLS cert but also remembers the TLS server's public-key cert ahead of time; not an easy thing for an average web user to set up, even if the browser chose to support it.
But you can do this strong dual-public-key pinning for your own set of endpoints, and no BGP hijack can get past it unless the private keys on both sides are compromised; it is just that web browsers cannot bother with prior fingerprinting of TLS servers for a variety of reasons (namely expiration, revocation, domain churn, …).
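For concreteness, a minimal sketch of that kind of prior imprinting on the client side, in Python; the pinned fingerprint is a hypothetical placeholder recorded out of band beforehand:

    import hashlib
    import socket
    import ssl

    # Hypothetical pin: SHA-256 of the server's DER-encoded certificate,
    # recorded out of band before any connection is attempted.
    PINNED_SHA256 = "replace-with-known-fingerprint"

    def connect_with_pin(host, port=443):
        ctx = ssl.create_default_context()  # normal PKIX validation still runs
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        der = sock.getpeercert(binary_form=True)
        if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
            sock.close()
            # A rerouted endpoint presenting a different (even CA-signed)
            # certificate fails here, regardless of any BGP hijack.
            raise ssl.SSLError("server certificate does not match pin")
        return sock

The same check, mirrored on the server against the client's certificate, gives the dual pinning described above.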
This new big-data collection of TLS server certs surely can only be rudimentary protection, and certainly not against a timing attack, no?
My assertion now is that the current design of TLS handling in web browsers is purposely crippled for ease of use. The TLS protocol itself, however, remains robust against such a MitM at the IP-reroute level if verification of TLS certs is done both ways and on BOTH sides; kinda like IPsec, right?
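As a sketch of what "both ways and on BOTH sides" can look like in practice, here is the server half in Python; the file names (server.pem, server.key, private-root-ca.pem) are hypothetical:

    import socket
    import ssl

    # Hypothetical files: server.pem/server.key hold the server's chain and
    # key; private-root-ca.pem is the private root that signs client certs.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.pem", "server.key")
    ctx.load_verify_locations("private-root-ca.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED  # no valid client cert, no connection

    with socket.create_server(("0.0.0.0", 8443)) as srv:
        with ctx.wrap_socket(srv, server_side=True) as tls_srv:
            # accept() performs the handshake; a client whose chain fails
            # validation raises ssl.SSLError instead of getting through.
            conn, addr = tls_srv.accept()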
If you're requiring your client to expect a particular certificate, e.g. certificate pinning, then obviously you don't have this problem. You're also then not participating meaningfully in the internet PKI. You can just self-sign your root and be your own CA. You also don't need mTLS in that case - the two concepts are orthogonal.
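A minimal sketch of that self-signed-root approach on the client side, assuming a hypothetical my-root-ca.pem:

    import ssl

    # Trust only our own self-signed root; the system trust store is not
    # consulted at all, so the internet PKI is out of the picture.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations("my-root-ca.pem")  # sole trust anchor
    ctx.check_hostname = True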
This is fine in a closed ecosystem, where you can also compel your trust roots into the clients and have a mechanism where you can push out updates to endpoint/public key mappings. This is why you still see HPKP for mobile apps.
If DANE were actually a thing then it would also be an option for more open internet use cases; but the reality is that it too comes with its own security and operational problems, and essentially no TLS implementation support as a result of the chicken-and-egg problem.
In any case, this is all outside of TLS, which only generally assumes PKIX.
You left me wondering how to harden against a compromised intermediate CA if I were to start relying on my CA provider and their ecosystem to supply a secured CA cert for ease of management, so that we can do all the things we need to do; but only for one direction of the mTLS.
And I am not even talking about the incomplete mTLS handling found in a typical web browser, but about a well-designed mTLS server/client app.
There is a sharp difference between HTTP-based mTLS and browser-based mTLS. Browser-based mTLS is sorely incomplete and does not reliably block the connection when verification of the CA chain fails.
Fortunately for me, I think, if just one half of a non-browser mTLS exchange is under the domain of a private root CA, there should remain that thin layer of security against a BGP hijack, as long as the two mutual root CAs (one public, one private) are pinned and remembered at the software level (such properly secured mTLS is still not found in web browsers, but hopefully it is in our non-web REST/HTTP API software).
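A sketch of that client half, with hypothetical file names: public-root-ca.pem is the pinned public root used to verify the server, and client.pem/client.key are issued under our private root:

    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations("public-root-ca.pem")  # pinned at software level
    ctx.load_cert_chain("client.pem", "client.key")  # private-root identity

    host = "api.example.internal"  # hypothetical endpoint
    with ctx.wrap_socket(socket.create_connection((host, 8443)),
                         server_hostname=host) as tls:
        tls.sendall(b"GET /health HTTP/1.1\r\nHost: api.example.internal\r\n\r\n")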
We say “non-web” to mean anything not usable by a web browser.
And we often hear new interns in the test department complain loudly that they cannot even use a web browser to exercise our non-web REST/HTTP/mTLS APIs. We don’t even let them use curl/wget during their I&T stage (only within their unit tests).
And even then, curl/wget can’t reach our corner test cases.
At this point, I really hope we don’t need to consider DANE for our scenario.
Even if our own intermediate CA (signed by a public root CA) that we control were compromised (via BGP hijack), since this is a properly deployed non-web mTLS setup and our private root CA covers the client side of mTLS, by design we should be safe against such a BGP hijack.
Yeah, with a private root CA for the client side of mTLS and a deployment that does full chain validation on both sides, we should be fine there.
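For completeness, a small sketch of tightening that chain validation in Python; the same flag applies to a server-side context:

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # Reject certificates that are not strictly RFC 5280 conformant,
    # on top of the usual chain building back to the pinned root.
    ctx.verify_flags |= ssl.VERIFY_X509_STRICT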
I am quite sure that someone has looked into this full-deployment mTLS for web-browser usage (my Mozilla bug report on this has been open for years).