Heh. I have a 30-node Tinc network over the internet, but some hosts are behind NAT. It keeps randomly losing routes between these nodes. It even has the infuriating habit of losing the route a few seconds after I successfully establish an SSH connection.
Also, traffic seems to be decrypted and re-encrypted by relaying nodes. For end-to-end encryption, you need "ExperimentalProtocol = yes", which was introduced in Tinc 1.1, a version that was never formally released.
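(From memory, so double-check against the 1.1 documentation: it's a single line in each node's tinc.conf, something like

    # /etc/tinc/<netname>/tinc.conf -- Tinc 1.1pre only, the option doesn't exist in 1.0
    Name = somenode
    ExperimentalProtocol = yes

where "somenode" is just a placeholder for whatever the node is called.)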
I'd like to rewrite something like it in a language I'm familiar with (perhaps based on cjdns' protocol, which is better documented than Tinc's), but it's not easy.
While that's pretty convenient, I'm worried about what happens when the vendor shuts down the website. "Ugly broken vendor tools" can be run forever in a VM running an old system, but a website would be gone forever unless it's purely client-side and someone archived it.
> They should be tied to cryptographic keypairs (client + server).
So now, if a website leaks its private key, attackers can exfiltrate cookies from all of its users just by making them open an attacker-controlled link, for as long as the cookie lives (and the users don't revisit the website to pick up the rotated key).
> If the web server needs a cookie, it should request one
This adds a round-trip, which slows down the website on slow connections.
> the client can submit again to "reply" to this "request"
This requires significantly overhauling HTTP and load-balancers. The public-suffix list exists because it's an easy workaround that didn't take a decade to specify and implement.
> So now, if a website leaks its private key, attackers can exfiltrate cookies from all of its users just by making them open an attacker-controlled link
This attack already exists in several forms (a leaked TLS private key, a DNS hijack, a CA validation attack, etc.). You could tack a DNS name onto the crypto-cookies if you wanted to, but DNS is trivial to attack.
> This adds a round-trip, which slows down the website on slow connections.
Requests are already slowed down by the gigantic number of cookies constantly being pushed by default. The server can send a reply header once telling the client which URLs need cookies perpetually, and the client can store that and choose whether to send the cookies preemptively or only when requested. This gives the client much more control over when it leaks users' data.
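For illustration only (the header name and syntax here are completely made up on the spot, just to show the shape of the idea), that reply header could look something like:

    HTTP/1.1 200 OK
    Cookie-Urls: /account/*; send=always
    Cookie-Urls: /*; send=on-request

The client remembers that mapping and only attaches cookies preemptively for the first pattern.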
> This requires significantly overhauling HTTP and load-balancers
No change is needed. Web applications already do all of this all the time. (For example, the Location: header is routinely sent by web apps in response to specific requests, to say nothing of REST and its many request and response methods, statuses, and headers.)
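Concretely, the request/reply dance can be expressed with HTTP as it exists today. A sketch, where the status code and the Want-Cookie header are invented purely for illustration:

    GET /account HTTP/1.1          (client sends no cookie by default)

    HTTP/1.1 401 Unauthorized      (server decides it actually needs one)
    Want-Cookie: session

    GET /account HTTP/1.1          (client resubmits with the cookie)
    Cookie: session=abc123

There is nothing in there that a load-balancer needs to understand; it's just another request/response pair.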
> The public-suffix list exists because it's an easy workaround
So the engine of modern commerce is just a collection of easy hacks. Fantastic.
> This attack already exists in several forms (leaking a TLS private key, DNS hijack, CA validation attack, etc).
An attacker who gets the TLS private key of a website can't use it easily, because they still need to fool users' browsers into connecting to a server they control while believing it's the victim domain, which brings us to:
> You could tack a DNS name onto the crypto-cookies if you wanted to, but DNS is trivial to attack.
It's not. I can think of two ways to attack the DNS: either 1. take control of or MITM the victim's authoritative DNS server, or 2. poison users' DNS caches.
> Requests are already slowed down by the gigantic amount of cookies constantly being pushed by default
Yes, although adding more data and adding a round-trip have different impacts (high-bandwidth, high-latency connections exist). Lots of cookies plus more round-trips is always worse than lots of cookies plus fewer round-trips.
> The server can send a reply-header once which will tell the client which URLs need cookies perpetually, and the client can store that and choose whether it sends the cookies repeatedly or just when requested.
Everyone hates configuring caching, so in most cases site operators will leave it at the default of "send everything", and we're back to square one.
> No change is needed.
I was thinking that servers would need to remember state between the initial client request and when the client sends another request with the cookies. But on second thought, that's indeed not necessary.
> It's not. I can think of two ways to attack the DNS.
There are at least a dozen different attacks on DNS, but the main ones regarding record validation include multiple types of spoofing and MITM (at both the DNS and IP levels), cache poisoning, account takeover (of either the nameserver or the registrar), DoS attacks, etc.
Cache poisoning is the easiest method, and contrary to whatever Cloudflare says, it's trivial. The DNS transaction ID is 16 bits. All you have to do is flood the shit out of the resolver with spoofed replies and eventually one of the transaction IDs will match, and your attack is successful. It's low-bandwidth, takes at most a couple of hours, and nobody notices. This is one of the many reasons you can't just trust whatever DNS says.
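Back-of-envelope, in case that sounds exaggerated (all numbers are illustrative, and this assumes the resolver's UDP source port is known or fixed, i.e. no source-port randomization):

    # Rough cost of blindly brute-forcing a 16-bit DNS transaction ID.
    # Assumes the resolver's source port is known/fixed and that we can trigger
    # a fresh lookup for the victim name every few seconds; numbers illustrative.
    txid_space = 2 ** 16                # 65536 possible transaction IDs
    spoofed_per_window = 200            # spoofed replies landed before the real answer
    p_per_round = spoofed_per_window / txid_space
    expected_rounds = 1 / p_per_round   # ~328 forced lookups on average
    seconds_per_round = 10              # time to trigger and race one lookup
    hours = expected_rounds * seconds_per_round / 3600
    print(f"~{expected_rounds:.0f} lookups, ~{hours:.1f} hours on average")

Even with conservative numbers it lands in the "couple of hours, low bandwidth" range.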
What HTTP messages to cache is not always a matter of choice, as is the case with HSTS. But it could be made one if testing of this proposal (which, again, I came up with in 2 minutes) showed better results one way or the other.
But all this is moot anyway cuz nobody gives a crap.
It's indeed not a good one. Discord refined instant messaging and bolted other things like forums on top, but it isn't fundamentally different. Google Wave was (and still is) a completely different paradigm. Everything was natively collaborative: it mixed instant messaging with document editing (like Google Docs or pads), and any widget you could think of (polls, calendars, playing music, drawing, ...) could be added by users through sandboxed JavaScript. The closest thing today that I can think of is DeltaChat's webxdc.
The fact that there is intense disagreement between countries about what is “obviously true” shows that this is still happening.
The beliefs of the masses are simply shaped to suit political interests.
Concrete example: “boys should be circumcised”. If the answer were objectively obvious to educated people, why does the US have such a different position on it than Europe?
"send end-to-end encrypted (E2EE) emails to anyone, even if the recipient uses a different email provider" but the video shows Gmail asking the recipient to authenticate. How does that work? If a Gmail user sends an email to my self-hosted server, there is nowhere to authenticate me to.
And it means that either Gmail stores the decryption keys or the actual email does, so what is the threat model in which E2EE is useful here?
The only "advantage" I see is that now recipients must manually archive these "encrypted" emails if they want to keep access to them in the future (so most of them won't). That would be consistent with Google's strategy with AMP's editable emails.