The last time I set up a WireGuard server, I used this script (with minor modifications) to create the users:
But, the next time I set up a WireGuard server, I'll use this web-based admin UI, which displays a QR code to easily configure a mobile client:
It automatically handles server provisioning and user creation, and can also generate QR codes.
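Under the hood, the server-side part of such a script boils down to a few `wg` commands. A minimal sketch (the `wg0` interface name and the 10.0.0.0/24 subnet are assumptions; it falls back to placeholder keys when the `wg` tool isn't installed, so the flow is visible either way):

```shell
# Generate a keypair for the new client; placeholders are used when
# the wg tool is unavailable (this is a sketch, not a hardened script).
CLIENT_PRIV=$(wg genkey 2>/dev/null || echo "CLIENT_PRIVATE_KEY")
CLIENT_PUB=$(echo "$CLIENT_PRIV" | wg pubkey 2>/dev/null || echo "CLIENT_PUBLIC_KEY")
CLIENT_IP="10.0.0.2/32"   # next free address in the assumed tunnel subnet

# Register the peer with the running interface (needs root and wg):
wg set wg0 peer "$CLIENT_PUB" allowed-ips "$CLIENT_IP" 2>/dev/null || true

echo "added peer $CLIENT_PUB with $CLIENT_IP"
```

The client then gets its private key plus the server's public key and endpoint in its own config.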
Outbound port filtering is incredibly common on public and guest wifi networks, and I found three use cases in my first week where OpenVPN on 443/tcp would have worked fine. The inability to run WireGuard over TCP, and so bypass most outbound port filtering via tcp/443 et al., makes it unusable in my daily life. I can understand why TCP isn't performant, but my choice isn't performant vs. non-performant. It's works somewhat vs. GFY.
And yes, I've seen the UDP-over-TCP forwarding hacks. They don't work on iOS, and some look outright dangerous (hello, open proxy).
Hopefully this can be addressed before WireGuard hits 1.0.
I use port 4500, which is typically used for IPsec NAT traversal, and have found it available (and working) on most networks where the default WireGuard port was filtered.
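In wg-quick terms that's just a different ListenPort on the server and a matching Endpoint on the client; a minimal sketch (the hostname is a placeholder):

```ini
# Server-side [Interface] excerpt (sketch):
[Interface]
ListenPort = 4500

# Client-side [Peer] excerpt (sketch):
[Peer]
Endpoint = vpn.example.com:4500
```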
At this point, if I were designing a VPN for client devices, I'd have a mode that looked as close to HTTPS as possible. There is one tool to tunnel over WebSocket, but this was already sucking up too much of my play time. :)
Cisco AnyConnect, while expensive and bloated, works great: it initially connects on 443/tcp and then tries to set up UDP. If UDP fails, it just sticks with the TCP connection and "just works".
I do have some questions about the assurance CryptoVerif provides. I'm a non-mathematician who does research in high-assurance security and tracks formal verification results. What I've read indicates problems can kick in at places like these:
1. The logic or modelling used isn't proven sound. There's been tons of analysis into things like set theory or HOL Light. Is CryptoVerif using a proven logic?
2. Recent designs favor kernelized models where nearly every step or transition can be proven consistent, or simply can't go wrong to begin with. Does CryptoVerif do this, or is there a lot of complexity in it?
3. The solvers might themselves make mistakes. One mitigation was having them simply drive the steps in 2 that were easy to check, and/or produce a log of work for a trusted checker. Any risks in the automated parts to worry about?
4. Small, verifiable TCB. HOL Light and Milawa were explicitly designed for this. Does CryptoVerif have a trusted checker? Are the design and implementation of above done in a tiny, easy-to-verify fashion? The EasyCrypt paper's comparisons to CryptoVerif and CertiCrypt made me think it doesn't.
5. Verification activities on the code itself, source and/or binary, from static analysis to property-based testing. Empirical evidence suggests these catch problems proofs sometimes miss, esp due to specification errors. Even CompCert had a few. Any of this on CryptoVerif's building blocks?
The answers to these questions might be helpful to people interested in boosting the assurance of this or other tools in the future. After all, the proofs can only be as strong as the foundations they're built on.
- There is no verified kernel. All algorithms, including an equational prover for simplification of games, are implemented in ~30k to 40k LOC of OCaml. The correctness of CryptoVerif’s game transformations has been proved manually. A formal link between these proofs and the OCaml code is not established;
- During a proof, after each game transformation, CryptoVerif checks a list of invariants. This has already allowed us to find implementation errors in the past.
- There are regression tests using proofs that have been done before with CryptoVerif, with both proofs that are supposed to succeed and proofs that are supposed to fail.
To conclude, CryptoVerif has no high-assurance guarantees on its code like Coq and others, but it is the only tool capable of handling complex real-world protocols.
Which is it? (My understanding is that a proof assistant is interactive, like Coq, Agda, etc.)
1) it generates intermediate games automatically during the proof [see footnote1 for a brief explanation of “games”]. This is in contrast to the other major tool that works in the computational model, EasyCrypt, where the user has to write down these games manually.
2) it has an automatic mode in which the tool is able to perform certain proof steps on its own, using a built-in proof strategy. For “simple” protocols in “simple” attacker models, it can even finish the entire proof automatically, where “simple” depends on the capabilities of CryptoVerif.
CryptoVerif also has an interactive mode that allows the user to guide the proof. This is necessary for more complex protocols and attacker models like the one we developed for WireGuard. Once you've found the necessary proof steps in interactive mode, you put them into your proof/model file. Then, CryptoVerif can re-check the entire proof in one go.
As CryptoVerif only works with protocols, it's not a general proof assistant like for example Coq, and that's why it's sometimes called “protocol prover”.
CryptoVerif: https://cryptoverif.inria.fr
EasyCrypt: https://www.easycrypt.info/
[footnote1] “Games” are a concept of the proof technique, which is used to prove cryptographic security: security properties are expressed as a game; if the adversary wins the game, the security property would be broken. We prove that the adversary wins the game only with a negligible probability. We do this by transforming the game more and more, until we reach a game where we can easily prove that the adversary cannot win it, or only with a small (negligible) probability.
“Games” are a ...
CryptoVerif outputs what might be seen as program transformations:
P = Q
Verifying crypto protocols is essentially the production of a chain
P1 = P2 = ... = Pn
The reason for the "game" terminology is historical. Von Neumann's theory of economic games precedes the development of programming languages for interactive behaviour by many decades.
The language of CryptoVerif is a probabilistic process calculus with interaction by message passing.
Your description of CryptoVerif's output and the proof technique is accurate; thanks for detailing it for fellow readers. I like the comparison to optimising compilers. In our case, we look at the distribution of messages that are observable by the adversary, and prove that these distributions are only distinguishable with negligible probability. The computations done are actually changed such that they would no longer be useful in a real-world protocol. For example, in one proof step we replace the result of an encryption with a randomly generated bitstring. If the encryption scheme is secure, the adversary is not able to distinguish these two cases.
Interestingly, that paper uses a variant of pi-calculus, i.e. an idealised programming language for message passing, as 'games'.
Is it usable?
I remember reading that Cloudflare is implementing it in their VPN product.
Almost every distribution has packages for it and it's just a matter of copying the example wg-quick configurations from the internet. Personally I've written a more complete script that allows you to dynamically add new devices to a server (and generate a QR code for the Android client).
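The QR part of such a script is straightforward: write out the client config and pipe it through qrencode. A sketch, where the server key, endpoint, and addresses are all placeholders (assumptions), and qrencode is only invoked if it's installed:

```shell
# Placeholders -- substitute your real server public key and endpoint:
SERVER_PUB="SERVER_PUBLIC_KEY"
ENDPOINT="vpn.example.com:51820"
CLIENT_PRIV=$(wg genkey 2>/dev/null || echo "CLIENT_PRIVATE_KEY")

# Write the client-side wg-quick config:
cat > client.conf <<EOF
[Interface]
PrivateKey = $CLIENT_PRIV
Address = 10.0.0.2/32
DNS = 1.1.1.1

[Peer]
PublicKey = $SERVER_PUB
Endpoint = $ENDPOINT
AllowedIPs = 0.0.0.0/0
EOF

# Render the config as a terminal QR code for the Android client to scan:
command -v qrencode >/dev/null && qrencode -t ansiutf8 < client.conf || true
```

The Android app scans the QR code straight from the terminal, so no file transfer is needed.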
Yes, it's not mainline yet, but it appears to be getting quite close, and people still use VirtualBox even though its drivers are out-of-tree. Even though Donenfeld hasn't started tagging CVE-eligible releases, I'm willing to bet that WireGuard is more secure than its alternatives (has anyone even attempted a formal proof of OpenVPN or IPsec?).
There are a few userspace implementations too.
I've been using SoftEther in the meantime, and man, is it flexible. It punches through NATs with wild abandon, doesn't get blocked since it communicates over 443 (if you so configure it), is super portable, and offers solid security and performance as far as I can tell.
That said, I miss being able to use a trusted CA for provisioning. Every new client needs to be set up at the server, which is annoying. I would also like to see the possibility of Ethernet-style virtual interfaces instead of raw IP interfaces, in order to be able to bridge those. However, IPsec didn't allow me to do that either. Allowing non-unicast traffic would also be a huge benefit for some applications.
I think all or some of this could be done by a different implementation using the same protocol. Maybe something like IPsec's IKE daemon, but for WireGuard, could handle key exchange and dynamic routing, allowing dynamic clients and probably fully meshed networks easily.
I'll think about writing up a guide on doing this.
For Ubuntu/Debian there is a package that installs everything. The config is very simple, simpler than OpenVPN's.
For mobile there are "official" clients that use QR codes for config.
Following tutorials like this https://www.stavros.io/posts/how-to-configure-wireguard/ (which seems like a good one, from a fellow HN user) always ends the same way: I'm able to connect to the server, but my internet doesn't work after that. Probably something related to the iptables rules or firewall, I assume?
I'm using the official mac client and Ubuntu 18.10 running on a server at OVH.
PostUp = iptables -A FORWARD -i %i -j ACCEPT
PostUp = iptables -A FORWARD -o %i -j ACCEPT
PostUp = iptables -t nat -A POSTROUTING -o $HOST_NIC -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT
PostDown = iptables -D FORWARD -o %i -j ACCEPT
PostDown = iptables -t nat -D POSTROUTING -o $HOST_NIC -j MASQUERADE
sysctl -w net.ipv4.ip_forward=1
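That sysctl command only lasts until reboot. To make forwarding persistent, a drop-in under /etc/sysctl.d is the usual approach on modern distros; a sketch (the file path is an assumption, run as root in practice; it falls back to the current directory when it can't write to /etc):

```shell
# Persist IPv4 forwarding across reboots; /etc/sysctl.d is the common
# convention (an assumption -- adjust for your distro).
conf=/etc/sysctl.d/99-wireguard.conf
echo 'net.ipv4.ip_forward = 1' 2>/dev/null > "$conf" \
  || { conf=./99-wireguard.conf; echo 'net.ipv4.ip_forward = 1' > "$conf"; }
sysctl --system 2>/dev/null || true   # reload all sysctl files; needs root
```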
I'm still working on getting IPv6 correctly configured when I'm on work wifi, which is dual-stack IPv4/IPv6. Traffic flows fine from home on my personal iPhone X with WireGuard.
Our IT has reddit.com blocked, and somehow my connection to reddit is still being blocked even though WireGuard and 188.8.131.52 are set up on my VPS/WireGuard setup.
When I was setting up WireGuard on macOS Mojave it would connect but I wouldn't be able to load pages. Other devices didn't have any issues so the server config was fine. Turns out the autodetected MTU was too large and the packets were getting dropped. A dead giveaway that this is happening is that pages will (mostly) load over HTTP, but get stuck negotiating a secure connection over HTTPS. I'm guessing this is because of the larger packet sizes required to do the handshake.
Try setting the MTU to something really low, like 100, to see if things start working, and adjust from there.
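With wg-quick you can pin this in the client's [Interface] section; a sketch, where 1280 is just a conservative starting point (the IPv6 minimum MTU), not a tuned value, and the key/address are placeholders:

```ini
[Interface]
PrivateKey = CLIENT_PRIVATE_KEY
Address = 10.0.0.2/32
MTU = 1280
```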