The described attack is one of the more straightforward applications of a BGP hijack, and as mentioned, not specific to Bitcoin or any cryptocurrency. The problem is ultimately with BGP, slow adoption of RPKI, and inconsistent certificate issuance/transparency practices. We will definitely see more of this, because once the attacker accomplishes the BGP attack, the rest of the exploit is extremely simple. It doesn't rely on victims doing anything out of the normal course of operations.
But guess what? The Bitcoin network itself is _also_ vulnerable to BGP hijacking, but the attack vectors are much more complex and so we haven't seen many of these attacks yet. The literature is really interesting though, e.g. Apostolaki 2017 [0] and Tran 2020 [1].
Indeed, a number of internet services and applications are vulnerable to BGP hijacking and interception attacks, including TLS/digital certificates, anonymity systems such as Tor, and cryptocurrencies such as Bitcoin.
The attackers were advertising that they handled the routes for the 2 IP addresses to which developers.kakao.com resolves. They did this so the CA would send the domain validation request to their systems (which is done over HTTP), "proving" they owned the domain, allowing them to get a valid TLS cert for developers.kakao.com.
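In ACME terms (RFC 8555) this validation step is the HTTP-01 challenge: the CA fetches a token-derived string over plain HTTP from whatever host the route leads to. A minimal sketch of the response the challenge expects (the JWK bytes here are a stand-in; real thumbprints use the RFC 7638 canonical JSON form of the account key):

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    """Unpadded base64url encoding, as required by ACME (RFC 8555)."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def key_authorization(token: str, account_jwk_json: bytes) -> str:
    """Build the HTTP-01 response body: token '.' base64url(SHA-256(JWK)).

    Whoever can serve this string at
    http://<domain>/.well-known/acme-challenge/<token>
    "proves" control of <domain> -- which is exactly what a BGP
    hijacker of the domain's IP addresses can do.
    """
    thumbprint = b64url(hashlib.sha256(account_jwk_json).digest())
    return f"{token}.{thumbprint}"
```

Since the CA's fetch follows ordinary internet routing, whoever wins the BGP advertisement for those two IP addresses gets to answer it.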
But during that time, wouldn't they have also received HTTPS traffic from real users trying to load developers.kakao.com? I understand BGP hijacks can be somewhat localized, but still. The attackers are presumably getting real user HTTPS traffic for a domain for which they don't yet have a valid certificate. Which means those requests would fail to negotiate TLS properly; or the attackers could simply not run anything on [IP address they hijacked]:443, which would make the requests for the JS file on developers.kakao.com time out.
My question is, wouldn't real users whose traffic was also hijacked during that window before the TLS cert was issued notice the web application not working? Wouldn't a rise in incoming support requests or something indicate that something was breaking?
If you can maintain a route to the real host, you could pass through real HTTPS traffic back to it until you got your certificate, making it undetectable from the outside (besides maybe some extra latency).
Hi, I am one of the authors of this blog post, and I felt it would be relevant to mention that this attack (where HTTPS traffic is forwarded back to the victim before a certificate is issued) is actually exactly what we did in our live demo at HotPETS 2017: https://www.youtube.com/watch?v=TYBq2ammTRg
In the case of the KLAYswap attack, based on public routing data, it is highly unlikely the adversary forwarded any traffic back to Kakao. As mentioned in the comments, certificate issuance only takes seconds, so it would not have been a very noticeable outage. Also, the adversary did not seem to mind causing connectivity issues given that they actually checked the referer HTTP header to only serve their malicious JS file to people that were downloading it from KLAYswap.
Thanks for the clarification. Could you address the comment left directly on the post?
How was this client-side JavaScript being used as part of the security of the transaction? Shouldn't the transaction have been recorded as part of an atomic database transaction, which should only be possible on the correct server?
I'm not privy to any details, but a simple possibility would be to replace the destination address of any transfers initiated during the time period. You could even rewrite the DOM to hide double checks. It might even have been possible to initiate the server-side request directly - but even if there was a 2FA confirmation of the transaction that included the address, I could easily see $2m worth of transfers successfully rerouted. This attack could easily have been mounted against a bank with online wire transfers - the challenge there would be laundering the money.
I don't know much about this specific third party javascript and it's possible that tighter CSP policies could have mitigated some of the attack vectors. However, once you can MITM, it's really game over.
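For what it's worth, a script-src allowlist would look like the header below (hostname taken from the article). It would not have stopped this particular attack, since the malicious file was served from the legitimate, allowlisted Kakao hostname; only something like Subresource Integrity pins the actual file contents:

```
Content-Security-Policy: script-src 'self' https://developers.kakao.com
```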
Before we blame usage of the js CDN, they could have performed the TLS attack on KLAYswap instead and then just reverse proxied while strategically rewriting data. It's possible that with this specific setup that the js CDN was the lowest hanging fruit.
This is such a fundamental and structural attack (BGP poisoning to subvert certificate ownership validation) that I find the comment by Phillip Hallam-Baker hard to take seriously. Perhaps there is a hobby horse against automatic certificate exchange (there is a mention of EV certs, which were such an obvious scam), or it's just a middlebrow rant about "javascript dependent pages".
There are many attacks possible with this method and plenty that will work on the simplest request/response Web 1.0 architectures.
Sure, you might know somewhere the internet was broken, but you wouldn't know what caused it. It's a downstream issue from you, so it's hard to deal with beyond filing a ticket with some service provider.
I posted another response to this article, and I still don't quite get the gist of his response.
> The system we built in the 90s would have protected against this attack.
Is "the system" the human to human EV certificate process? There were so many obvious and awful social and technical engineering holes in that process that I find it hard to believe anyone would defend it?
Certificate expiration was always a danger. There was a single technical contact that could be spoofed or could just walk away with the private key. The wider web audience never noticed the EV annotation, and basic human factors tell you they never would. Heck, we can't get people to notice HTTP downgrade attacks. The only protection is TLS everywhere - which would never have been possible with slow and expensive authorization methods.
(I suppose it was good business for Verisign though...)
I never understood why, after finding out that users don't notice the EV security badge, Google/Mozilla decided the best course of action was to remove it completely.
Rather the conclusion should've been that the browser gives the user a warning when attempting to submit information to a website that doesn't have an EV security certificate.
Or at the very least allow websites to use HSTS to prevent downgrading an EV certificate.
Of course, it wouldn't have helped in this case as it was the third party site that had the bad certificate. That fact is not signalled anywhere. I don't recall anybody (even the most ardent supporters of EV) saying that the browser should warn you if any third party references were not EV.
That doesn't make sense since any content loaded from a second site has the ability to compromise the first. If nobody said that, they weren't thinking hard enough. Browsers already have this kind of transitive security enforcement -- you can't load a page with TLS that then pulls JS via plaintext http.
I think a website with a form on it that involves transferring any sum of money around should pay hundreds of dollars per year to certificate authorities.
Not every building needs to implement the security measures of Fort Knox. That doesn't mean that the security measures in-place at Fort Knox shouldn't exist.
There should be a quick, visible way for a user to tell the difference between a connection that is merely encrypted and one where the identity of the certificate owner has been vetted. Then the user can decide for any given form whether they are comfortable sending that data over that kind of connection.
Sometimes, the alternative to companies paying hundreds of dollars per year is customers losing millions.
The comment made my whole day. I love it when 'old-guard' engineers who literally built the playground we're on come out to yell at everyone to get off the damn grass.
It's a weird combination of inspiring and humbling at the same time. Inspiring to think that this massive ecosystem of interconnected computers was actually made by real people, and humbling because yeah, I should get off the damn grass.
PHB was there when Satoshi dropped the white paper. He didn’t get it at the time. He still doesn’t get it, commenting on the same forum recently. PHB, dude, time to admit you were wrong. Not only does this technology work wildly better than you claimed, it has more value today than it did 13 years ago, both as a savings device and as a trustless medium of exchange.
I won't claim to have the degree of technical expertise to authoritatively comment on the actual implications of his comment.
What I often find most valuable from these kinds of comments is the fact that these risks were acknowledged and either accounted for in the original specs, even if in a flawed manner, or explicitly discounted as being out of scope.
What I take away from the comment on the article is that from a security perspective "we're not there yet" even if the cryptocurrency technology is useful, widely adopted, and frankly not going away.
I don't think his issue is that cryptocurrency exists, or is being used widely on the world-wide web, but rather that underlying technologies (DNS/HTTPS/PKI) are being taken for granted and used in a manner that they weren't originally designed for.
EV is expensive bits issued by a trusted third party.
Cryptocurrency, Bitcoin, is expensive bits issued by a non-trusted third party.
I don’t see how you can be bullish on the first without being bullish on the second, yet I can see how being bullish on the second can mean bearish on the first. Solutions involving no TTP are better than solutions involving TTP.
I like PHB and he has done great things in the field of cryptography. His belief in centralization of public key systems for security runs counter to the cypherpunks who want to decentralize everything despite the costs.
Ah, while thinking of an adequate response to this comment, I think I may have discerned the nature of his complaint in context of what is happening at large. To both clarify my own thoughts and hopefully share what may be a useful revelation I've taken the time to write out a somewhat long-form response below.
---
*TTP Transactions*
When transactions are brokered through a TTP, such as by swiping a credit card at Walmart, the transaction is handled by Visa, Mastercard, et al., who retain the ability to reconcile various types of disputes and fraudulent transactions after the fact with minimal loss of money. At any point the central authority can technically reverse charges without the consent of either originating party.
My non-expert, "educated layman's" understanding is that secure online digital communication was designed with the express intent of being used in this kind of scenario. You have a number of transactions, of which some non-zero quantity are fraudulent, and you rely on the affected parties to initiate some kind of arbitration via the trusted third party. Prior to crypto, very few (if any) transactions performed online were irreversible economic transactions requiring *full trust* in the security of the communication channel to mitigate fraudulent transactions. Which brings me to what I believe to be his primary point of contention, putting gripes about specific decisions and technologies aside...
---
*Non-TTP Transactions*
Cryptocurrencies that require zero trust in a third party have no(?) mechanism with which to mitigate fraud.
That means your entire chain of communication, from the physical layer all the way up to the end user, must be hardened against forms of irrecoverable tampering that would otherwise be a recoverable nuisance in a TTP model.
As one of the engineers who designed and put into place a large part of the security around the communication channel being used, PHB appears to be taking offense at the assertion that this exploit was due to a flaw in "the web's" security.
Simply put, it was never designed to be the first and final layer of defense against fraudulent activity, and if we want cryptocurrency to be 'the future' we should walk in the path of our predecessors and design a communication stack that explicitly addresses the concerns of being used for non-TTP transactions.
We need to harden the stack at all layers, as many components were not explicitly designed to protect under the zero-trust model, especially given the exploitable financial incentives that could not have existed before. Thank you, I feel your insight has helped strengthen my understanding too.
Your site is probably vulnerable to this attack and there is very little you can do to prevent it. You can monitor for BGP hijacks but you can't really do anything about it when it happens.
The best you can probably do is pay for a TLS certificate signed by a CA that only signs certificate requests after actual, human verification, and pin that in a CAA record. This should prevent other certificate authorities from handing out a domain-validated TLS certificate.
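A CAA record is a one-line DNS change; a sketch with a hypothetical domain and illustrative CA values:

```
example.com.  3600  IN  CAA  0 issue "trusted-ca.example"
example.com.  3600  IN  CAA  0 iodef "mailto:security@example.com"
```

The `issue` tag names the only CA permitted to issue certificates for the domain; `iodef` gives CAs an address to report rejected issuance requests to. As noted elsewhere in the thread, CAA only helps if the permitted CA's own validation is robust.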
If you're going the monitoring route, keep the number of domains your site connects to to a minimum. Self-host your fonts, frameworks, etc., so you only need to monitor your own host. If the developers of the attacked site had hosted their own JS, they might have noticed that something was up when the attack took place - they can't know for sure whether a route change for an external company like Kakao is intentional or a mistake.
The core of the problem is that advertisements by routers that have no business announcing specific routes like these are taken at face value. Certain ISPs that do not require RPKI or other security mechanisms will happily route traffic headed for your IP addresses to some router on the other side of the globe.
Web technologies like DANE were proposed for this purpose (using DNS to distribute the certificate of your website, with DNSSEC validating the DNS records), but DANE has seen little actual implementation so far. HTTP key pinning (HPKP) was implemented in some browsers for a while, but its potential for holding websites hostage and general unfriendliness towards bigger websites caused it to be phased out.
If you have apps that operate your service, you can pin your TLS certificate in those and that should protect you against such hijacks.
It's not enough for everyone involved to have CAA enabled. They need to have CAA enabled and to select a certificate authority that does effective domain ownership validation, which - as the article suggests - means (at minimum) multiple-origin checking of network-based challenge protocols like HTTP-01.
Personally, I think anyone who has a heightened attack risk ought to contemplate a CA that does some form of more thorough validation.
Let's Encrypt is the only CA that actually already had a defense against this attack.
> In multiple vantage point verification, a CA performs domain control validation from many vantage points spread throughout the Internet instead of a single vantage point that can easily be affected by a BGP attack. As we measured in our 2021 USENIX Security paper, this is effective because many BGP attacks are localized to only a part of the Internet, so it becomes significantly less likely that an adversary will hijack all of a CAs diverse vantage points (compared to traditional domain control validation). We have worked with Let’s Encrypt, the world’s largest web PKI CA, to fully deploy multiple vantage point validation, and every certificate they sign is validated using this technology (over a billion since the deployment in Feb 2020). Cloudflare also has developed a deployment as well, which is available for other interested CAs.
> But multiple vantage point validation at just a single CA is still not enough. The Internet is only as strong as its weakest link. Currently, Let’s Encrypt is the only certificate authority using multiple vantage point validation and an adversary can, for many domains, pick which CA to use in an attack. To prevent this, we advocate for universal adoption through the CA/Browser Forum (the governing body for CAs).
That defense alone is still not perfect ("some BGP attacks can still fool all of a CA’s vantage points"), but that's the state of the art.
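The intuition can be sketched as a quorum over independent vantage points; this is a toy model of the idea, not Let's Encrypt's actual implementation:

```python
from typing import Callable, Iterable

def validate_from_vantage_points(
    fetchers: Iterable[Callable[[], str]],
    expected: str,
    quorum: float = 1.0,
) -> bool:
    """Domain control validation from multiple network vantage points.

    Each callable fetches the challenge response for the domain as seen
    from one network location. A localized BGP hijack poisons only some
    paths, so requiring (near-)unanimous agreement defeats it.
    """
    results = []
    for fetch in fetchers:
        try:
            results.append(fetch() == expected)
        except Exception:  # an unreachable vantage point counts as a failure
            results.append(False)
    return len(results) > 0 and sum(results) >= quorum * len(results)
```

A hijack visible from only one of four vantage points produces a disagreeing answer at that point, so unanimous validation fails and no certificate is issued.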
Something that is also easily glossed over when it comes to certificates is that a browser will happily accept any valid certificate, with no notion or presumption of specifically who should have issued the certificate.
You don't start with a lawsuit in the case of this kind of theft.
You hope the FBI or some other law enforcement/criminal investigators can track down and prove the guilt of the thieves.
Jumping to the question of "who do I sue" implies that you want to get money out of someone not directly involved in the crime. In the example of gold in a safe, the question might be "can I sue the safe company to try to recoup some of my loss, even though it wasn't their fault that my opsec was shit?".
If there exists no explicit legal agreement that was violated (e.g. the bank that promised to secure your gold was robbed) then you're just being an overly litigious asshole acting in a way that will make our entire society weaker and more vulnerable.
> As part of their platform, KLAYswap relied on a javascript library written by Korean tech company Kakao Corp. When users were on the cryptocurrency exchange, their browsers would load Kakao’s javascript library directly from Kakao’s servers
Am I the only one that yelled "NO" at their monitor when reading this snippet?
It is usually not good practice to load js from 3rd-party domains into your app. Just grab the file and serve it yourself.
For things like payments, I would hope that most of it is server-side, and that js doesn't play a critical role in it at all.
As I am fond of saying these days: the key fingerprint is the identity. The TLS public key infrastructure (PKI) exists to map an alphanumeric string (domain) to a particular key fingerprint. So for critical stuff involving, say, executable cryptographic code, it would be good if there was a way to specify the fingerprint as well as the domain to the browser in the provided link. Completely inflexible, but that is what you want in this case.
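The web platform already has a mechanism close to this for scripts: Subresource Integrity, where the link itself carries a digest of the expected file. A sketch of computing the `integrity` value (the script body below is a stand-in):

```python
import base64
import hashlib

def sri_hash(body: bytes, alg: str = "sha384") -> str:
    """Compute a Subresource Integrity metadata value for a script body.

    Used as: <script src="https://example.cdn/lib.js"
                     integrity="sha384-..." crossorigin="anonymous">
    The browser re-hashes the fetched bytes and refuses to execute the
    script if the digest does not match -- even if TLS checked out.
    """
    digest = hashlib.new(alg, body).digest()
    return f"{alg}-{base64.b64encode(digest).decode('ascii')}"
```

The obvious trade-off is exactly the inflexibility mentioned above: every upstream release of the library requires updating the attribute.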
The PKI is great, but there are times when you really don't want to have to trust a third party...
Which CA was used for the exploit? I know Let's Encrypt has done work to validate from multiple paths [1]. If it were a more lax CA, then just pointing CAA at somewhere like Let's Encrypt could possibly be sufficient protection.
Just wanted to summarize the challenges faced by today’s web browser.
The only way to survive a BGP hijack intact is also a deal breaker for today's typical web browser.
Web browsers do not bother with DNSSEC (which remains quite relevant, despite the article not saying so).
It is true that this particular BGP hijack doesn't change your endpoints' DNS records (such as A, AAAA, PTR, NS, MX). Nor did this attack attempt to fake the DNS CAA record, which could have protected web browsers; faking the CAA record was not needed this time, but it will be leveraged next. So DNSSEC, as secured as it is now, is not culpable here, nor would it ever be. It is the DNSSEC-unprotected CAA record that will be the next target.
The culpability for the malicious PKI substitution that occurred during this event lies with the web browser; it is NOT the fault of the TLS protocol, nor of HTTP. The problem is in how applications use TLS: the TLS API, while it makes heavy use of PKI certificates, leaves PKI management in the "other people's problem" category. The TLS protocol and its API are not the problem; they still work as designed.
PKI management is the problem. And, judging by the direct user experience, the web browser is also the problem.
However … your data can still survive a BGP hijack (DNS-based or not) intact and secure, if your B2B use case calls for a secured endpoint using HTTPS/mTLS with the following aspects:
- use a pair of server and client certificates, with each public key stored on the opposite end, for mTLS (mutual two-way server/client TLS)
- run a DNSSEC resolver on both ends, configured in cacheless, authoritative-only lookup mode (no forwarder), with the root zone trust anchor configured/compiled in
- deploy DNS CAA records for your PKI, ensure that each CA in the chain honors CAA, and ensure that all the needed CAA records are secured under DNSSEC
- have your endpoint app properly use the TLS/SSL function calls AND verify the PKI chains, but not before verifying, via DNSSEC-only DNS queries, the records needed for chain validation
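The mTLS item on this list, expressed with Python's standard `ssl` module (file paths are hypothetical, and the cert-loading calls are commented out so the sketch stands alone):

```python
import ssl

def make_mtls_server_context() -> ssl.SSLContext:
    """Server-side TLS context that *requires* a client certificate.

    With CERT_REQUIRED, the handshake fails unless the client presents
    a certificate chaining to a CA we explicitly trust -- the "mutual"
    part of mTLS.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Hypothetical paths -- supply your own key material:
    # ctx.load_cert_chain("server.crt", "server.key")
    # ctx.load_verify_locations("client-ca.pem")  # private client root CA
    return ctx
```

A plain web browser never builds a context like this for you; it is the non-browser API client/server pair described below that can.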
It remains arguable that, given the above, one can still use a third-party CA to sign your intermediate CA, which in turn issues the certificates you need for your endpoints. I assert that if you split your server PKI and client PKI across two different vendors, you get pretty good security, provided you handle your own PKI distribution securely and your own intermediate CA certificate is also imprinted into your app's mTLS.
That summary should help your data survive a BGP hijack in your "B2B" pipeline.
The only (arguably) remaining defense against BGP-hijack crypto heists is for you to run your own DNS resolver/cache on your own host, configured to emit only DNSSEC-verified DNS answers, while you use your web-based cryptowallet.
This DNSSEC-only DNS caching is not a common thing to find on most desktops because it impacts the many web sites that do not use DNSSEC (looking at you, Google.com).
Of course, this is assuming that your cryptowallet provider is ALSO using DNSSEC properly.
Sadly, none of today's web browsers (including all of the DNSSEC Validator browser extensions I could find) perform any detection of bad/unsigned DNS responses, much less any active defensive blocking.
I blame the web browsers' developers for disabling these DNSSEC validation capabilities through their poorly planned but overly restrictive deployment of the newer browser extension APIs.
Those DNSSEC Validator browser extensions USED to work, but only back then and ONLY in passive detection mode; nowadays they're just dead extensions garnering nothing but downvotes.
Your only practical day-to-day recourse amid all that jazz is to set up a dedicated VM that does this DNSSEC-only DNS caching, running your choice of major web browser, for the best-secured crypto exchanges. QubesOS is one excellent option, but VMware or VirtualBox would do just fine in a pinch.
DNS was not attacked in this case, the IP addresses themselves were redirected through BGP. It would protect against BGP hijacks of the DNS server, but that wasn't what happened here.
There are ways to protect against such attacks (RPKI for BGP routers, DNSSEC + DANE or HPKP for servers) but none of those are available in most modern browsers.
I'm a strong proponent of DNSSEC, but it can't solve _all_ problems. Only with DANE would it benefit the situation.
If the target (a crypto wallet website) runs both the authoritative DNS with DNSSEC and the website, you should be able to detect a change in signing as invalid.
The key thing, though, is that the authoritative DNS server must also reside on the web server.
And gotta turn that DNS cache off or keep TTL short.
No. DNS was not affected by the attack. DNSSEC verifies that the contents of the DNS responses were not altered or spoofed. During this attack, the DNS records were left alone. The DNS response returned the real IP addresses in response to queries and the DNSSEC signature would still be valid.
The IP addresses themselves were hijacked. DNS has no authority over IP addresses so DNSSEC would be pointless.
No, but an endpoint-to-endpoint DNSSEC server/client setup with no recursion enabled would have your host's DNS cache do the hard authoritative lookups, should alert you to such a BGP hijack, and, if configured DNSSEC-only, would presumably prevent you from going through with your crypto-based transaction in that scenario.
Of course, TLS, HTTPS, JavaScript and TCP attack vectors are another whole ball of wax.
No - the point is that if your CA allows an HTTP-01 challenge for certificate issuance, then anyone who can hijack the IP addresses (to which your DNS points) with a BGP attack can subvert that CA to issue their own TLS certificates and subsequently impersonate your service.
All of that could have been averted by having another TLS key pair issued, this time on the client side, with its public key then given to the website, no?
But who does that nowadays? I still do for my mutual TLS APIs.
It’s only if the client public key (which is stored only on the web server) could ALSO get hijacked that all of it would be … for naught (to the hacker’s gain, that is).
Mutual TLS doesn't help here - the server TLS implementation can easily be configured to accept any certificate. Mutual TLS gives you mutual authentication, but if the server doesn't care whether the client is authentic or not (because it's been replaced by a malicious implementation) then how does that help?
…if the server doesn’t, which is the case for nearly all web browsing today.
Not necessarily for a properly configured mutual-TLS non-browser API handler, but the article is talking about what most people use, which is a web browser.
I don't really understand what you're describing here: which part of the following do you disagree with?
TLS permits client (initiator of the session) and server (target of the session) to have certificates. The purpose of the client certificate is that the server may authenticate the client. The purpose of the server certificate is that the client may authenticate the server. Hence: mutual authentication.
If the server is replaced by a malicious implementation that has the private key associated with a genuine certificate, the client will validate the server as authentic. This happens first in the TLS handshake.
The malicious server will then happily receive the client certificate and accept it without any chain validation, because it doesn't really care whether the client is authentic or not.
To repeat the point: the use of client certificates does not and cannot protect the client.
If you own both TLS endpoints, you can keep both keys, and any attempt to intercept against the two different sets of public keys will fail unless both private keys (and only both) are compromised.
But that defense only stands if the TLS certificates are imprinted at both endpoints beforehand, which no web browser can do during mutual TLS unless the browser starts not only taking the client TLS cert but also remembering the TLS server’s public key cert ahead of time; not an easy thing for an average web user to set up, even if the browser chose to support it.
But you can do this strong protection of dual public-key exchange for your own set of endpoints, and no BGP hijack can get at it unless both sides’ private keys are compromised; it’s just that web browsers can’t bother with prior fingerprinting of the TLS server for a variety of reasons (expiration, revocation, domain churn, …).
Such a big data collection of TLS server certs surely could only be rudimentary protection, and certainly not against a timing attack, no?
My assertion now is that the current design of TLS handling in web browsers is purposely crippled for ease-of-use purposes. The TLS protocol, however, remains robust against such a MitM at the IP-reroute level if verification of TLS certs is done both ways and on BOTH sides; kind of like IPsec, right?
If you're requiring your client to expect a particular certificate, e.g. certificate pinning, then obviously you don't have this problem. You're also then not participating meaningfully in the internet PKI. You can just self-sign your root and be your own CA. You also don't need mTLS in that case - the two concepts are orthogonal.
This is fine in a closed ecosystem, where you can also compel your trust roots into the clients and have a mechanism where you can push out updates to endpoint/public key mappings. This is why you still see HPKP for mobile apps.
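That endpoint-to-public-key mapping boils down to a digest comparison at connect time; a minimal sketch of the pin check (the pin value would ship with the app):

```python
import hashlib
import hmac

def cert_matches_pin(cert_der: bytes, pinned_sha256_hex: str) -> bool:
    """Certificate pinning: accept the peer only if the SHA-256 of its
    DER-encoded certificate equals a fingerprint shipped with the app.

    hmac.compare_digest avoids leaking the match position via timing.
    Real deployments usually pin the SubjectPublicKeyInfo rather than
    the whole certificate, so routine reissuance doesn't break the pin.
    """
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    return hmac.compare_digest(fingerprint, pinned_sha256_hex.lower())
```

Because the expected key is fixed in advance, a BGP hijacker with a freshly mis-issued certificate still fails this check.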
If DANE were actually a thing then it would also be an option for more open internet use cases; but the reality is that it too comes with its own security and operational problems, and essentially no TLS implementation support as a result of the chicken-and-egg problem.
In any case, this is all outside of TLS, which only generally assumes PKIX.
You left me wondering how to harden against a compromised intermediate CA if I were to start relying on my CA provider and their ecosystem to supply a secured CA cert for ease of management, so that we can do all the things we need to do - but only for one direction of the mTLS.
And I am not even talking about the incomplete mTLS handling found in a typical web browser, but about a well-designed mTLS server/client app.
There is a sharp difference between HTTP-based mTLS and browser-based mTLS. Browser-based mTLS is sorely incomplete and does not reliably block the connection if CA chain verification fails.
Fortunately for me, I think, if just one half of the non-browser mTLS exchange is under the domain of a private root CA, there should remain that small, thin layer of security against a BGP hijack, as long as the fixation and memorization of the two mutual root CAs (public and private) is done at the software level (such properly secured mTLS is still not found in any web browser, but hopefully in our non-web REST/HTTP API software).
We say “non-web” to be anything not useable by a web browser.
And we often hear new interns in the test department complain loudly that they can't even use a web browser to exercise our non-web REST/HTTP/mTLS APIs. We don't even let them use curl/wget for their I&T stage (only within their unit tests).
And even then, curl/wget can't reach our corner test cases.
At this point, I really hope we don’t need to consider DANE for our scenario.
Even if our own intermediate CA (signed by a public root CA) that we control were to get compromised (via BGP hijack), with a properly deployed non-web mTLS and our private root CA covering the client side of mTLS, by design we should be safe against such a BGP hijack.
Yeah, with a private root CA for the client-side mTLS and a full chain-validation deployment of the mTLS mechanism, we should be fine there.
I am quite sure that someone has looked into this full mTLS deployment for web browser usage (my Mozilla bug report on this has been open for years).
Having a CAA record and pointing to a CA like Let's Encrypt that uses domain validation from multiple vantage points on the Internet is a useful idea and raises the bar for an adversary.
I should point out that BGP attacks can also target the CAA records themselves, since the DNS ecosystem is itself insecure. This is why such attacks are not easy to defend against, and they require holistic improvements across internet routing, PKI, and web security practices.
It is only because web browsers are still not doing DNSSEC-only queries and validation that CAA can so easily be spoofed.
To prevent a DNSSEC-secured DNS CAA record from being tampered with under a BGP hijack, both endpoints must deploy their own cacheless, authoritative-only DNSSEC resolvers before commencing mTLS endpoint communication. (As if a web browser would ever do that. /s) This places the trust of DNSSEC right at the DNS root, whose signing key is stored in an HSM key vault. So, hijacking DNS infrastructure to tamper with the CAA record is only possible because 1) web browsers don't bother with DNSSEC and 2) domain owners don't bother with DNSSEC protection.
But even with insecure DNS, I assert that a properly designed REST/HTTPS/mTLS API (which would be totally unusable by web browsers) would still not be intercepted under a BGP hijack scenario, so that is still an option for any B2B scenario.
Just for clarity: almost nobody uses DNSSEC. It's not just Google. Verify yourself with any list of popular/important domains; just write the shell loop to do a `dig ds +short $domain` across all of them.
DNSSEC only protects DNS records, which include the mapping of domain name to IP address (and vice versa); nothing else.
But why bother when web browsers don't even use DNSSEC themselves? Because you can still send data securely under a BGP hijack between your two endpoints by using a DNSSEC-protected CAA record and properly CA-verified mutual TLS (mTLS).
[0] https://btc-hijack.ethz.ch/files/btc_hijack.pdf
[1] https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=915...