
Why not “Why not WireGuard?” - Foxboron
https://tailscale.com/blog/why-not-why-not-wireguard/
======
XelNika
> This section of the article is confusing at first because it talks about
> “road warrior” users (who generally have dynamic IP addresses) not being
> supported by WireGuard. But this is not true; standard WireGuard happily
> works as long as at least one end (usually the central VPN concentrator) has
> a static IP address.

Maybe I don't understand the issue to be solved (I was unfamiliar with the
'road warrior' term so I looked it up [1]), but I think both Avery Pennarun
and Michael Tremer misunderstood what WireGuard is capable of. WireGuard works
just fine when the _physical_ network interfaces have dynamic IPs (on both
ends even). You can use the basic _wg_ or _wg-quick_ tools to create a VPN
server on a server with a dynamic IP and have clients connect from dynamic
IPs. However, all the _tunnel_ IPs must be static, unlike e.g. OpenVPN, where
the server assigns the client a tunnel IP during authentication.
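For what it's worth, that setup is just the stock _wg-quick_ config; something
like the sketch below (interface names, keys, addresses and the hostname are
all made up):

```ini
# /etc/wireguard/wg0.conf on a roaming client (dynamic physical IP)
[Interface]
PrivateKey = <client-private-key>   # placeholder
Address = 10.0.0.2/24               # tunnel IP: static, chosen by hand

[Peer]
PublicKey = <server-public-key>     # placeholder
Endpoint = vpn.example.com:51820    # a dynamic-DNS name works; it is
                                    # resolved when the interface comes up
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25            # keeps NAT mappings alive
```

The server never needs to know the client's physical IP in advance; it learns
it from the source address of the first authenticated packet, which is also
what lets clients roam between networks.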

> (for example, so you can get to an office network whose home connection uses
> dynamic DNS). It’s true that plain WireGuard does not support this
> configuration out of the box.

I have done literally that for over a year now. I cannot think of a way to
interpret this that would make Avery's claims true.

> Someday, WireGuard will need to be upgraded to support a second cipher
> suite. When this happens, users will be able to configure it peer-by-peer to
> allow one cipher suite or the other, or both, exactly as they would with any
> other VPN.

That is not how I interpret what Donenfeld has said [2] about the future of
WireGuard. I believe the software will get a v2 without backwards
compatibility. However, even that is a non-issue because sysadmins can run
both protocols in parallel during the transition.
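Running both in parallel would presumably look like any other pair of
interfaces. A sketch, assuming hypothetical v2 tooling that keeps today's
config format (keys, subnets, and the second port are made up):

```ini
# /etc/wireguard/wg0.conf -- the existing v1 interface, untouched
[Interface]
PrivateKey = <v1-private-key>       # placeholder
Address = 10.0.0.1/24
ListenPort = 51820

# /etc/wireguard/wg1.conf -- a hypothetical v2 interface on its own UDP
# port and tunnel subnet; clients migrate whenever they are ready
[Interface]
PrivateKey = <v2-private-key>       # placeholder
Address = 10.0.1.1/24
ListenPort = 51821
```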

[1]
[https://en.wikipedia.org/wiki/Road_warrior_%28computing%29](https://en.wikipedia.org/wiki/Road_warrior_%28computing%29)

[2] Page 3 of
[https://www.wireguard.com/papers/wireguard.pdf](https://www.wireguard.com/papers/wireguard.pdf)

~~~
CalChris
Yes, it seemed to me that Donenfeld was pretty opinionated about 'updating
breaks compatibility'. Donenfeld is the anti-Torvalds: we _will_ break user
space. This is the closest quote I could find.

    
    
      If holes are found in the underlying primitives,
      all endpoints will be required to update.
    

Unlike IP packets, _struct message_header_ doesn't even have a version number.
I suppose _type_ could be used for that, since it's 32 bits.

An interesting question is how this will play out with WireGuard going into
the Linux kernel itself.

~~~
XelNika
Breaking user space might not be popular with users, but it makes sense for
security. Besides, anyone could build protocol negotiation on top of WireGuard
v1/v2 if they wanted to.

I'm not a kernel dev, but WireGuard v2 will probably be implemented as a
separate kernel module so they can coexist. It makes sense to do it that way
since they're practically entirely separate protocols.

~~~
CalChris
I’m not sure that coexistence works without a version number going over the
wire.

~~~
XelNika
Sure it does. They are two separate packages, run on two separate network
ports, and are completely incompatible. Just pretend that WireGuard v2 is
unrelated to WireGuard. "OpenVPN and WireGuard coexistence doesn't work
without a version number going over the wire" would be an absurd statement,
and the same applies to WireGuard and WireGuard's descendants.

~~~
CalChris
I suppose you’re right. It is left to the user to not try to connect v1 to v2.
Not sure what happens if on Tuesday morning I connect my v1 client to what was
on Monday night a v1 server.

That said, I admire opinionated design.

~~~
XelNika
The two versions can run in parallel so you will never be in a situation where
the sysadmin is forced to take down the v1 server to set up a v2 server. It's
the sysadmin's responsibility not to put users in that situation. Users might
try to use the wrong version at first, but there's no reason for an admin to
break the working VPN before the transition is complete.

------
brobinson
Glad someone did a rebuttal of that article. I came across it a few days ago
and was shocked to discover it was by the guy behind IPFire, which I had
previously heard good things about.

~~~
geofft
Yes, agree - the original article displays such a bad understanding of
security that customers should avoid taking any operational advice, security-
related or not, from the person who wrote it.

The author of the original article seems to be making the following
assumptions:

\- "Secure" is only about what protocols you support, and not about the
quality of implementation, whether the protocols are implemented accurately,
etc. In the author's world, Heartbleed (introduced by the heartbeat extension
added for DTLS, the protocol underlying Cisco's VPN implementation) doesn't
exist, the Pulse Secure vulnerability that left Travelex completely
non-operational doesn't exist, etc.

\- That said, "3DES with MD5" is a good cryptographic protocol to use, and you
should avoid fancier things like "AES-256 with SHA1."

\- You should only consider using black boxes from companies like Cisco and
Juniper (authors of Pulse Secure). If it's not compatible with those
appliances, it's not worth using.

\- You can't ever upgrade software in a reliable fashion. Everyone is running
old, unpatched versions of software, and it's better for that unpatched
software with all the well-known zero-days to connect to your corporate
network than to be blocked from connecting.

\- You should directly connect your internal network to "third parties that
you do not control," and you should let those third parties dictate your
software choices instead of vice versa.

In this world, you have nothing other than constant disasters. I guess it's
good for job security, especially for someone whose job is professional
support for IPsec.

It's a shame it needed debunking, but I'm glad someone did.

~~~
Spooky23
I agree the original article was really dumb, and sounded like it was written
by an enterprise network guy who has only worked with and been trained by big
vendors.

That said, the fatal flaw of WireGuard is the crypto model. Nobody who
interacts with the US government, regulated US industries, or some F500
companies will be able to use it, because the fixed algorithm choice can't
possibly be FIPS 140-2 validated. You'll always need that Cisco/Pulse/whatever
box.

That may be a feature for some, but it kneecaps the possibility of it being as
ubiquitous as it might be otherwise.

~~~
tptacek
It's a trap, because your options boil down to:

(a) Using FIPS cryptography, which is inferior, to placate the USG.

(b) Using best-practices cryptography, and being incompatible with some places
that mandate FIPS.

(c) Supporting both FIPS and non-FIPS cryptography, which creates handshake
bugs and gives you lowest-common-denominator security.

I think you can reasonably argue that (a) and (b) are both superior to (c); I
think WireGuard made the right choice here.

In the long run, for enterprise environments, I don't think this much matters.
Even with FIPS cryptography, stodgy enterprises won't adopt WireGuard until it
becomes so prevalent elsewhere that they have to, simply because IPSEC has
more standardization. So the WireGuard task is simply to make it so prevalent
everywhere else that stodgy enterprises no longer have a vote.

------
brunoqc
I can't wait for the tailscale app for Android.

If I die from covid before I get it I'll be very upset.

~~~
dfcarney
(Co-founder here). It's coming, I promise. I'm as excited as you are. We
wanted to iron out the kinks on mobile (i.e. iOS) before duplicating them on
another platform. Thankfully, we're past that point now.

------
nikisweeting
Finally, I've been waiting for someone to write a proper response debunking
that original article. I was surprised when it got as much attention as it
did, since it clearly seemed out of date.

------
thoraway1010
I just wish the WireGuard folks did IPv6 v2 :)

Kidding (kind of)!

~~~
MayeulC
I quite like Yggdrasil's [1] approach, and there are other alternatives. Of
course, it doesn't fix everything, but with many projects working on different
aspects of networking, we might get back to an "internet" that's closer to a
mesh between non-interoperable networks (GNUnet, cjdns, private VPN-backed
LANs, Zigbee, etc.).

Because, on your local network, you can run whatever you want (though it had
better be IPv6-aware software that uses DNS; that's easier to fool, the way
NAT64 or Yggdrasil do).

[1]: [https://yggdrasil-network.github.io/](https://yggdrasil-network.github.io/)

~~~
thoraway1010
Thanks for this - I'm not an expert at all in the networking space side of
things, but even dabbling with ipsec / ipv6 combos made my head spin coming
from an ipv4 background.

Basically I was left wondering: how in the world is the gigantic set of
interlocking RFCs (at least to me, IPv6 seems to involve a lot of complex
stuff) the best way to network these days?

------
nickodell
>In contrast, a hypothetical WireGuard protocol v2 can offer just two suites,
the old one and the new one, with simple advice: use the new one if you can,
and allow the old one for old nodes until they’re upgraded. There’s nothing
unusual about this, except you don’t need to be a cryptography expert to
configure it.

I don't think this is going to be as simple as the author thinks. Look at
Git's migration away from SHA1. There was no designed mechanism for switching
hash functions, and lots of code assumed 20-byte hashes. Three years after the
first SHA1 collision was discovered, Git has not switched to a new hash
function. I don't mean to be alarmist - no one has created a practical attack
on Git objects, but the time to switch cryptographic primitives is when they
start showing weakness, not after they are definitely broken.

IPSec has a standard method for supporting multiple cipher suites, and
negotiating a common suite. It might be very complicated, but we have no way
of comparing it to WireGuard, because WireGuard doesn't implement the same
feature.

~~~
AgentME
SHA1 had already been found to be weak when git was made, though only a few
months before git's release. Linus would have done better to use a strong
hashing algorithm to begin with than to try to incorporate cryptographic
agility.

Regardless, I think cryptographic agility makes a lot more sense for a storage
format like git than it does for a wire format. A shared repo being upgraded
from gitv1 to an incompatible gitv2 would need everyone to switch versions at
once. With WireGuard, a server could offer support for both v1 and v2 clients,
then both v1 and v2 clients can connect to it at the same time, and clients
can upgrade on their own schedule. The protocol doesn't have to make any
special allowances, like making any data structures interpretable by both,
besides to have both v1 and v2 on different ports or have different initial
connection messages.

WireGuard was developed in response to protocols like IPSec, OpenVPN, and TLS,
which emphasized configurability and cryptographic agility to a fault. Not
only were they hard to use and easy to misconfigure, but each one of them was
at times vulnerable to downgrade attacks because of subtle bugs in their
ability to be configured. It's better to be able to have few code paths and
verify they all work as expected, than it is to have many code paths, only
verify the happy path, and hope that there's no way for users to fall down an
unstudied insecure path by their own fault or because of attackers.

~~~
tptacek
Doesn't agility make _even less_ sense in a file format? Browsers, for
instance, don't need format agility to tell a PNG from a JPG. Meanwhile, PDF
tried to provide exactly that capability, and it is a world-historic security
disaster. The cryptographic equivalent is PGP, and... I rest my case.

~~~
AgentME
In git's case, by cryptographic agility I was imagining a Gitv2 which would
produce SHA-256 hashes by default on new commits, and was able to still
understand SHA-1 hashes on old commits so it could continue to read data
produced by Gitv1 clients or Gitv2 clients in SHA-1 mode without needing an
upfront convert-the-whole-repo step. Definitely less preferable than the
alternative of just supporting SHA-256 from the very start, though. It may
also be less preferable than making git auto-convert repos forward to a new
SHA-256-only version.

