Hacker News
I made my own WireGuard VPN server (techcrunch.com)
227 points by axiomdata316 71 days ago | 121 comments



I was going to post this tomorrow, but I figure people here might find it useful now. I wrote an article on how to configure Wireguard for common scenarios:

https://www.stavros.io/posts/how-to-configure-wireguard/


I think the wg0 references should be swapped with `%i`, as the config is dependent on the filename with that setup. Neat write-up, though.
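For anyone following along, this is the systemd template-unit trick: `%i` expands to the instance name, so `wg-quick@wg0.service` reads `/etc/wireguard/wg0.conf`. A simplified sketch of such a unit (wireguard-tools ships the real one, which has a few more directives):

```ini
# /etc/systemd/system/wg-quick@.service (simplified sketch)
[Unit]
Description=WireGuard via wg-quick(8) for %I

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/wg-quick up %i
ExecStop=/usr/bin/wg-quick down %i

[Install]
WantedBy=multi-user.target
```

With this, `systemctl enable wg-quick@wg0` follows the config filename instead of hard-coding `wg0`.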


Oh, great suggestion, thanks for that! I'll update it as soon as I get home.

EDIT: Updated and published, thanks again.


Nice, that's the kind of writeup I was waiting for. :) Planning on setting this up in the next couple of weeks.

However, I found a huge error:

> Linus Torvalds himself said that he loves it, which took the software world by storm, as we weren’t aware that Linus was capable of love or any emotion other than perkele.

Linus belongs to the Swedish minority living in Finland, so while he's probably able to perkele he jävlas most of the time.


Currently he belongs to the Swedish minority living in Portland, Oregon (USA). ;-).


This is a great guide, thank you! One suggestion though (from someone who has never set up Wireguard before): Could you elaborate on what the "AllowedIPs" setting does exactly? The name confuses me as your comments suggest that it doesn't have anything to do with permissions (as in "allow connections from the following IP addresses") but rather specifies the IP address or range of IP addresses that locally (on the machine I'm configuring) should be associated with the remote peer in question. (Such that packets to these IP addresses get routed to said remote peer.) Is this correct?


Almost: It specifies which addresses or ranges the remote peer can send packets from. Basically, if a packet comes in on the interface with an address that's not in AllowedIPs, it gets dropped. That's why for the "tunnel everything" example, the AllowedIPs are "0.0.0.0/0", i.e. a wildcard, so that packets from all addresses are allowed to come in from the remote server.
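A sketch of how that plays out in the two configs (keys, addresses, and the endpoint are placeholders):

```ini
# Client: send everything through the tunnel, and accept packets from
# any source address arriving back out of it.
[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0

# Server: this peer may only appear as (and is routed at) 10.0.0.2.
[Peer]
PublicKey = <client public key>
AllowedIPs = 10.0.0.2/32
```

So AllowedIPs acts both as a routing-table entry for outbound packets and as a source filter for inbound ones.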

EDIT: I've updated the article with more information and a link to the official docs, which explain this better.


Ahh, that makes sense. Thanks a lot!


You mentioned that there are lots of little networking things that you don't know about. Have you tried a setup where you use another network namespace to put only certain apps living in that namespace through the tunnel? REF vpnshift. https://github.com/crasm/vpnshift.sh
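For readers who haven't seen the pattern: roughly, you create the interface in the init namespace (so the encrypted UDP goes out the real uplink) and then move it into an otherwise empty namespace, so apps launched there can only reach the tunnel. A sketch with illustrative names and addresses, run as root:

```shell
ip netns add vpn
ip link add wg0 type wireguard
wg setconf wg0 /etc/wireguard/wg0.conf
ip link set wg0 netns vpn
ip -n vpn addr add 10.0.0.2/32 dev wg0
ip -n vpn link set lo up
ip -n vpn link set wg0 up
ip -n vpn route add default dev wg0

# run a single app through the tunnel
ip netns exec vpn sudo -u "$USER" firefox
```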


We use Wireguard extensively at Monzo. It’s such a workhorse: extremely performant, straightforward to administer. Having dealt with the horrors of IPSec before, I can safely say I’d never go back.


I used IPSec before because OpenVPN was just too slow.

Is Wireguard similar to OpenVPN when it comes to configs?


WireGuard is simpler to configure than OpenVPN and there's much less to tweak.

A real-world config file can be under 10 lines for the client and under (10 + 5 * n_clients) lines for a server.

Private and public keys are short base-64 encodings of 256-bit keys and can be generated with the wg command line tool. These live inside the config file.
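For concreteness, a minimal client config of the kind described (placeholder keys and endpoint; generate real keys with `wg genkey | tee privatekey | wg pubkey > publickey`):

```ini
[Interface]
PrivateKey = <client private key, 44 base64 chars>
Address = 10.0.0.2/32    # Address is a wg-quick extension

[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
```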


Streisand makes it very easy to set up a privacy-focused, secure VPN, including support for WireGuard and Tor. Just provide credentials for your preferred cloud provider and it will do the setup very quickly and give you easy instructions for using the VPN and proxy on desktops and mobile phones. https://github.com/StreisandEffect/streisand


I too use Streisand. It has a vast plethora of options. I would warn that this blessing can become its Achilles' heel, as it introduces a potentially large attack surface. I limit every deployment of it to WireGuard and OpenConnect only, as these are the only options I use.


True, it’s a Swiss Army knife with way too many options. It’s OK for a vacation with your family in a country that blocks Wikipedia, or a visit to a place with shitty WLAN and blocked ports. Streisand takes 15 minutes to set up and you’re done. If you need something permanent like a company VPN, you really need to spend more than 15 minutes. Way more than that.


OK, that's cool and all.

But what's really cool about WireGuard is how simple it is. It's far simpler than OpenVPN, and vastly simpler than IPSec/IKEv2. Given SSH logins to a pair of VPSes, it takes maybe an hour at most to set up a link.

https://www.wireguard.com/quickstart/


Does the quickstart apply to MacOS? I follow the instructions and can't get past `wireguard-go wg0`.


Earlier today I wrote some documentation on setting up WireGuard on macOS, it may be useful: https://www.timdoug.com/log/2018/08/04/#wireguard_macos

Happy to answer questions if anything's unclear.
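One common stumbling block (and my guess at the parent's problem): macOS only exposes utun devices, so wireguard-go has to be given a utun name rather than wg0. Roughly (utun3 and the config path are illustrative; check which device wireguard-go actually created):

```shell
sudo wireguard-go utun          # picks the next free utunN device
sudo wg setconf utun3 /usr/local/etc/wireguard/wg0.conf
sudo ifconfig utun3 inet 10.0.0.2/32 10.0.0.1 up
```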


Is it arrogant that I feel like I've basically invented this once before? The combination of asymmetric keys for identity and symmetric rotating keys for actual message sending is something that's just always made sense to me, but I never bothered trying to implement it, since I was usually working in a web context and "just use HTTPS" was a better solution than trying to work out a custom scheme.

[EDIT] - when I think about it, the management with kernel internals is definitely something I couldn't do. Writing kernel-level drivers/software is not my cup of tea.


Just naive, not arrogant (it's OK and encouraged to think through things yourself, even if there already is a solution; you can't expect to come up with actually new, good things on day 1).

You've described the obvious part - this is in any intro crypto course. But it's not even getting the details of that right (which is _a_ challenge) that's interesting, because that's not where WireGuard innovates. Well, it does actually propose a nice scheme, but still - OpenVPN (in TLS mode), IPSEC (with IKE, else it's DIY) and, in a way, HTTPS already have working schemes.

WireGuard shines in other areas: the simple (a factor of 500 less code than IPSEC/IKE), performant and fully in-kernel (both unlike OpenVPN) implementation, ifaces instead of xfrm policies (for IPSEC. Though that's not entirely fair, you can have VTIs; they just don't scale well, we've found.), the lessons learned wrt crypto agility (tl;dr: don't), implicit source verification, the beautiful DoS cookie implementation (feels nicer than a TA key), that servers are silent/invisible unless you know their public key (can't provoke any response packets without it), all connections are p2p. To name a few. Read the WireGuard paper. It's short and very approachable.


> ... that servers are silent/invisible unless you know their public key (cant provoke any response packets without) ...

Yes, this is beautiful.


> lessons learned wrt crypto agility (tl;dr: don't)

This definitely doesn't qualify as a lesson learned. It's maybe, maybe, best current practice, if you squint.

We've even seen in other threads the WireGuard author arguing that oh, well, if there's a problem with, say, the key exchange (it has a "conventional" elliptic curve key exchange, so an adversary with a big quantum computer definitely breaks that), then you can still rescue things with PSKs. That says nothing about problems in any other primitive, and it doesn't stand up to a laugh test. If PSKs were fine you would already use PSKs and not have all this complicated public key dance at all.

In reality if there's any problem you have to throw WireGuard itself away. So the hope is maybe all these primitives are fine. And that hope will seem 100% on the money right up until it isn't. We will see then, I think, valuable character information about its designers and key proponents, in light of how they react to that particular lesson, for example in helping people migrate off their now fatally damaged baby.

The lesson learned wasn't "don't" it was that "more is always better" is wrong, we didn't need fifteen mediocre symmetric block ciphers without much cryptanalytic research behind them in TLS. But zero agility _definitely_ hasn't proven itself to be the right choice here either, it's just less work to implement.


If reconfiguring a service to use a different crypto algo was easy then I’d agree with you. But it is often really hard and makes config difficult.

To be honest, when something like WireGuard becomes victim to a flaw in one of the primitives then I’d rather they release a WireGuard 2 to address it - keep as much config the same as possible but throw away any notion of backwards compatibility with vulnerable versions.


No. Were Chapoly to be broken next year, we wouldn't "throw away" WireGuard. We'd simply instantiate it over a new set of primitives. That is part of the point of deriving it from Noise.

The idea behind zero-negotiation cryptography is that if there's a problem with the primitives behind the protocol, you version the whole protocol, rather than hoping that some complicated negotiation scheme can future-proof it. No part of TLS's security problems have stemmed from "fifteen mediocre symmetric block ciphers"; rather, almost all of them have come from the TLS protocol's complicated handshaking.


It's not about the inconvenience for WireGuard's author in writing a modified protocol document, "simply" instantiating the protocol with new primitives (or parameters, or any changes at all) means throwing WireGuard away as far as all the end users are concerned. For the sort of hobbyist "Hey, I hacked this together in five minutes, awesome" types running Wireguard today in a trivial pair-wise configuration this is no trouble. For a system with fifty thousand ordinary users this is a complete nightmare, and I expect what they'll do is consider that a lesson learned, no more WireGuard.

But perhaps at least some of them will try to muddle along as others have described below, let's try both versions and see what works. Then we don't need a flag day... Whereupon of course they've got a downgrade problem.

Because TLS has been such an overwhelming success, there have been a lot of problems.

Let's start with BEAST. BEAST attacks the way TLS 1.0 does IVs. Is that anything to do with "complicated handshaking" ? Nope, not related at all.

Soon after that there's Lucky 13. This attacks CBC modes with a timing-based padding oracle. Handshaking? Nope.

How about RC4 being broken - Handshake problem? Nope, the RC4 stream cipher is just not as good as intended.

CRIME was a compression oracle, still not handshaking.

Coppersmith's and ROCA, as well as the Debian hilarity show that even minting private keys might be too hard, at least in RSA, for people to do it correctly. Once again, not related to handshaking.

Now Logjam really does come down to the complicated handshake, although it does also need a client which is a total sucker.

Some? Yes. "Almost all" isn't correct. Lots of problems with cryptographic primitives and implementation bugs, maybe WireGuard will be popular enough to have so many implementations and to have such a large number of researchers trying to break it...


> Let's start with BEAST.

That's an interesting one to pick. It could not be fixed by negotiation and configuration at all, in the end. The entire protocol version had to be thrown away.


Beast works on TLS 1.0

The die-die-die draft would never have been written if we were looking at a situation where TLS 1.0 was gone in practice on the web, let alone SMTP and other slower-moving environments. Even PCI's day-late, dollar-short June 2018 deadline would have been irrelevant if you're right. You aren't right.

No, BEAST's story is much more interesting than that, there were three mitigations and one remains widely in use.

One "configuration" mitigation was to move to RC4 since that's not a block cipher so the CBC mode attack doesn't work by definition. This is no longer considered acceptable because RC4 is way weaker than we'd thought at the time.

The next mitigation is "empty fragments". An attacker only gets to peek at certain data using BEAST (and can then expand the attack and get everything), but we can arrange for the vulnerable block to have zero length, so the attacker achieves nothing for all their effort. We waste a few bytes on the wire, but this works fine by the specification. Alas, some real-world implementations had been written to reject these "empty fragments", which had never been needed previously, and so you couldn't actually use this in products like web browsers.

Instead, client implementations began doing a 1/n-1 split, which moves a single byte instead of nothing. The attacker has a 1-in-2^16 chance to guess this using BEAST, but it's a single byte, so they had a 1-in-2^8 chance to guess it without BEAST, and thus BEAST is worthless if you can do this.

Of course it's still _wasteful_ but it's both protocol compliant and an effective mitigation. How about that?


'pvg is right. You're rattling off lots of random technical details about the attack, but his single sentence captures a core problem with "agility": despite having all the complicated handshaking, a whole protocol version had to be burned to fix the problem.

Your confused argument suggests that because BEAST involved chained IVs and not the TLS handshake, it's an example of how the handshake didn't sabotage TLS. You're right: it's not. It's an example how the handshaking, which sabotaged TLS through other attacks, didn't buy anything material for TLS.


The confusion seems to lie with you here if it's with anyone. The stop gap RC4 preference is explicitly enabled by handshaking.

Beyond that though you seem to have your history out of order, BEAST is a TLS 1.0 attack, but TLS 1.2 was finished years before BEAST was published.

It's not a case of a protocol version "had to be burned to fix the problem" but two subsequent protocol versions had already shipped years ago which fixed the underlying cause and, as is the reality for which you seem so ill-prepared, scarcely anybody had upgraded.

All these years later financial services companies are getting waivers from PCI saying their cheap third rate POS terminal "has a mitigating control" and so it's fine that they still do TLS 1.0. So that's nice. Nice for them I mean. Hey, if you're "lucky" one day they'll be relying on a ten year out-of-date WireGuard, as enthusiastically endorsed by Thomas Ptacek. And every time you're asked about it you can spit feathers. How dare they. Like I said, if you're lucky.

As messy and uphill as it has been, the journey from SSLv2 to TLS 1.3 was only possible with the handshake.


> but two subsequent protocol versions had already shipped

That doesn't really address the 'agility didn't help here' point at all. The argument against agility as a design approach is straightforward and widely articulated - it's obviously informed by the SSL/TLS experience but doesn't outright depend on it. It's not too difficult to imagine that there could be some sort of coherent counter-argument. If you have one, though, it's impossible to parse it out from this enthusiastic mixture of non-responsive detail and superciliousness.


But it did help there, as I already pointed out and you ignored twice.


It did not help here. In fact, it amplified the problem. We knew RC4 to be broken when BEAST was announced. Nevertheless, people enabled it, as a "stopgap", to deal with BEAST. RC4 is arguably less secure than BEAST-affected TLS (BEAST was in practice quite difficult to trigger), and, either way, all those RC4 deployments ended up also needing to be scoured off the Internet!


Thai Duong did BEAST during the brief window he worked with me at Matasano, so I'm pretty sure I have the chronology right.


Well, it is resistant to to the vast majority of downgrade attacks that plague just about every other widely used crypto protocol. While it's true that if a problem comes up, you'll have to throw the entire protocol suite, that's basically no worse than any of the others.

The only case where agility can be helpful is when there are different vulnerabilities known in the MAC, cipher, etc - but there is some combination you can mix-and-match that is still believably secure. I tend to side with Donenfeld here, that this improbable possibility is not worth supporting.

Note that WireGuard does have an "entire protocol" version; it's possible to support more than one at a time. However, it does away with the 50 mix-and-match versions that an agile protocol has, and the downgrade attacks that mean the whole thing is only as strong as the weakest combination.


> Note that WireGuard does have an "entire protocol" version;

Where? Neither a search nor a brief manual inspection of the protocol design mentions such a feature.

> it's possible to support more than one at a time

Just a moment ago you were cheering on WireGuard's simplicity and lack of vulnerability to downgrade attacks, and now we're talking about how an implementation might actually offer more than one, whereupon you're back to worrying about downgrade attacks.

TLS has a number of specific anti-downgrade countermeasures, it will not surprise anyone to discover that WireGuard's paper as well as not describing this "entire protocol" version feature you've mentioned also has nothing about downgrade prevention...


TLS was designed to avoid downgrade attacks and failed. It's a uniquely bad example of the argument you're trying to make.


As discussed previously, TLS is old and successful, which gives you a lot more to talk about than protocols too young to say their own names and barely heard of outside the nerdiest circles.

The most recent example where attackers get to do a TLS downgrade is Logjam, where they tie things up for long enough to solve the Discrete Log problem and then use their solution to cover up an earlier lie so that the downgrade protection doesn't trigger.

This sort of trick also wouldn't work against TLS 1.3's downgrade protection, because rather than use a removable sentinel they baked the sentinel into the shared entropy. This sets up a catch 22. If you remove the sentinel to hide your tracks, the connection fails due to a mismatch, but if you leave the sentinel there then a client knows what's up.
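For the curious, that sentinel is concrete enough to sketch: per RFC 8446 section 4.1.3, a TLS 1.3-capable server that is pushed down to an older version sets the last 8 bytes of ServerHello.random to a fixed value, and since that random feeds the handshake transcript and keys, a MITM can't strip it without breaking the Finished check. A toy client-side check:

```python
# RFC 8446 section 4.1.3 downgrade-protection sentinels ("DOWNGRD" + 0x01/0x00)
DOWNGRADE_TLS12 = bytes.fromhex("444f574e47524401")
DOWNGRADE_TLS11_OR_BELOW = bytes.fromhex("444f574e47524400")

def downgrade_sentinel_present(server_random: bytes) -> bool:
    """True if the 32-byte ServerHello.random carries a downgrade sentinel."""
    return server_random[-8:] in (DOWNGRADE_TLS12, DOWNGRADE_TLS11_OR_BELOW)
```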


I might just be sleep deprived from hauling myself to Vegas, but this is word salad to me. SSL 3.0 and TLS were designed by some of the best cryptographers in the world specifically to avoid downgrade attacks, and the approach failed. Obviously, TLS 1.3 doesn't have glaring downgrade attacks; it's a new protocol.

TLS is, as I said, a terrible example of how complicated handshaking can defend against downgrade attacks (for the best possible example of this, see DROWN, which actually manages to leverage SSL 2 attacks against TLS!).

On the other hand, WireGuard can't have downgrade attacks: the precise configuration of crypto constructions used in the protocol is baked into the constructions themselves as domain separation parameters.


Sure, let's blame the travel.

You're very kind to Paul Kocher in naming him one of the "best in the world". Maybe he'd just gone to Vegas too when he decided you could get away with abusing RSA decryption to implicitly authenticate the recipient?

In grouping cross-protocol attacks like DROWN in as downgrades, you undo your own argument. WireGuard, under this unusual definition, can become vulnerable to downgrade: bad guys might get your WireGuard v2 private keys by abusing the legacy WireGuard v1 protocol, the deliberate lack of compatibility between the two notwithstanding.

Of course the specifics of DROWN can't work against WireGuard, nor against TLS 1.3, but you're insisting this isn't about specifics, it's a general "lesson"


On that last sentence - TLS 1.3 can be vulnerable here under the same circumstances as TLS 1.2, I have no idea what I was thinking here, I was reminded by Appendix E.8 of the RFC this morning so I guess appendices are good for something.


I'm pretty comfortable with my assessment of Kocher. It's easy to look up what he's been up to.

Help me understand how exactly someone would get WireGuard's v1 Noise primitives, which bake the v1 construction identifiers and a v1 constant into the derivation of all secrets as the literal first step of the protocol, to generate something that would be comprehensible to a hypothetical v2 of the protocol.


My crystal ball is cloudy.

No, no, you're right, back in 1996 the "best in the world" were incompetent buffoons, whereas today we know exactly how to do this right and we definitely haven't overlooked anything /s


This isn't an answer to the question I asked, it's just huffing.


You didn't ask a question. You did ask me to "help you" see into the future and I informed you that my crystal ball is cloudy.


That's a weird way to say "ok, I was wrong about it being possible to recapitulate DROWN using WireGuard v1 and any hypothetical WireGuard v2".


An ability to recognise that "I don't know a successful attack" isn't the same as "it's impossible to attack" is why I don't share your (as yet unjustified) confidence in zero agility. It's a microcosm of this thread.


I'm sorry, but you're being slippery, and I'm not going to pretend that's a valid argument. It is not my claim that nobody will find vulnerabilities in WireGuard. I hope they do! Generalizing and weaponizing crypto exploits is something I work on professionally and I'm a cheerleader for team breaker.

My claim is that a cross-protocol attack like DROWN will not work on WireGuard, and I explained why.

Rather than acknowledge that explanation, you either (a) rattle off irrelevant details or (b) retreat to abstractions. But there's no abstraction to retreat to. DROWN isn't an unknowable attack. You can just read the paper to see how it works. After you do that, I suggest you read the WireGuard white paper, which is short and straightforward (the cryptographic details make up only about a third of it). You've said several things on this thread that lead me to believe you haven't yet read it, and I think you'll find it enlightening.


> Where? Neither a search nor a brief manual inspection of the protocol design mentions such a feature.

Pages 9 and 10 in the whitepaper, see IDENTIFIER and CONSTRUCTION.

That being said, supporting broken crypto is a bad idea. Should any of the crypto primitives be broken, the way to move forward would be to replace them, not have WireGuard support 2 protocol versions simultaneously.


You are correct, it isn't explicitly mentioned. I've reread the technical paper just now, and recalled the (internal, with colleagues) discussion I had in the past, from which I drew this conclusion: "whole protocol version" is an emergent property of WireGuard, not a designed one. It is possible that a newer version of WireGuard will respond to both a newer and an older protocol properly (provided it wants to, and it can tell them apart, which should be trivial, e.g. by changing a MAC parameter, in the unlikely case that the newer version would keep the MAC), or alternatively, just use different UDP ports, one for each version. I apologize for mistakenly remembering that this was a public discussion.

> Just a moment ago you were cheering on WireGuard's simplicity and lack of vulnerability to downgrade attacks, and now we're talking about how an implementation might actually offer more than one, whereupon you're back to worrying about downgrade attacks.

Yes, but I don't think there's a conflict here - technically, you could have two versions running on two machines, and pick the "right one" through sticky UDP load-balancer session after you've seen which one replies (so long as the two versions are sufficiently different that an initiation packet for one does not qualify for the other). What I'm saying is that it can also be implemented within the same kernel, and I believe that one way or another, it will. In that sense, it's about as bad as TLS.

I believe Wireguard's opinionated design is better because it inherently pushes this up the stack; The way it is implemented now, if (and when) it needs to be replaced, it will be of comparable difficulty to switch WG1 to WG2 or WG1 to (something-else-that's-comparable)1; Unlike TLS in which the upgrade is usually "well, get the new version of this lib and you'll be ok" -- which is essentially guaranteed to be the path chosen, and since backward compatibility is often prized, is much more likely to be exploitable.

Technically speaking, the only things one can say for sure are "zero agility is simpler to implement" and "protocol agility requires a lot of care to avoid downgrade attacks, but provides backward compatibility". Everything else basically depends on the hard-to-estimate probabilities one assigns to "diligence of upgrades", "importance of security", "seriousness of vulnerabilities", and "difficulty of configuration", which vary a lot based on your axioms.

Based on my experience, I prefer the approach WG is taking. YMMV.


> I've reread the technical paper just now, and recalled the (internal, with colleagues) discussion I had in the past, from which I had this conclusion - "whole protocol version" is an emergent property of wireguard, not a a designed one

No, it's not merely emergent. The handshake identifier string explicitly has a "v1" in it, and message types have explicit type identifiers.


The whole Noise construction is "versioned" by the primitives it's instantiated with, through domain separation constants. That's what the weird English strings in the protocol do. Whole-protocol versioning is part of Noise, and thus of WireGuard.
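To make that concrete, a small sketch using the CONSTRUCTION and IDENTIFIER strings from the WireGuard whitepaper (the "v2" strings below are hypothetical): the chaining key and handshake hash are seeded from those strings, so any change of primitives diverges at the very first message.

```python
import hashlib

def initial_state(construction: bytes, identifier: bytes):
    # WireGuard whitepaper: Ci := HASH(CONSTRUCTION), Hi := HASH(Ci || IDENTIFIER),
    # where HASH is BLAKE2s with a 32-byte digest.
    ck = hashlib.blake2s(construction).digest()
    h = hashlib.blake2s(ck + identifier).digest()
    return ck, h

V1 = initial_state(b"Noise_IKpsk2_25519_ChaChaPoly_BLAKE2s",
                   b"WireGuard v1 zx2c4 Jason@zx2c4.com")
# A hypothetical v2 with different primitives starts from an unrelated state:
V2 = initial_state(b"Noise_IKpsk2_25519_AESGCM_SHA256",
                   b"WireGuard v2 (hypothetical)")
```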


I've had this feeling frequently in the past. Many years ago, when I was just learning computer science concepts, I would be repeatedly surprised to see solutions discussed for which I had already had an inkling. It would usually turn out that my ideas weren't really novel or perhaps weren't really workable. With more experience and education this changed, and I've had an enjoyable career solving problems and inventing.

Inventive minds like yours will often have insights that aren't original, but over time and with more practice, some of these will be original and practical and valuable, and you will more often have the satisfaction of making important discoveries.


>Is it arrogant that I feel like I've basically invented this once before?

Probably.


Yeah... -- reading the wireguard paper[0] now, and my cursory analysis ("oh this is like that time that I...") of how it works/what it needed to accomplish is suuuuper naive.

[0]: https://www.wireguard.com/papers/wireguard.pdf


The mathematical foundations and power/elegance (and often even simplicity) of cryptographic primitives make it really easy to get the completely wrong idea about the difficulty and disturbing fragility of actual cryptographic engineering. Happens to pretty much everyone.


All right, spill the beans. Which one are you? Diffie or Hellman?


Haha, I didn't mean I invented the key exchange mechanism -- just that after learning how easy it was to generate asymmetric public keys (on the shoulders of multiple giants), and how easy it was to generate symmetric AES keys randomly (on the shoulders of more giants), I envisioned building services that would deliver public keys to a central repo and communicate over regular HTTP, but using encrypted payloads and some MITM protection.

I thought this was clever, then I realized I had just "discovered" HTTPS.


openssl genrsa 2048? ;)

But in any case drafts for TLS over HTTP (also known as aTLS) exist: https://tools.ietf.org/html/draft-friel-tls-atls-00 https://tools.ietf.org/html/draft-friel-tls-over-http-00


Thanks so much for the pointer, those two drafts are now in my reading list!


It's giants all the way up, shoulder on shoulder.


Hmmm, I've always wondered why you'd want to run your own VPN server. The very first step is to rent a server from Microsoft/Google/DigitalOcean/etc. and then set up the VPN on that server, right?

But how does it help you stay anonymous? You are still paying for that server using your CC and they'd have your address and all other info. Unless I can rent the server anonymously, I can't see any point to run my own VPN server. That's why I'm paying a third party, PIA for example, to rent and run that server for me.


I'm renting a VPS in the Netherlands. I use it mostly to circumvent the internet censorship my country does (mostly porn sites, but they occasionally break legit sites, and what's wrong with porn anyway). There's no legal punishment for circumventing blocks AFAIK, so I feel pretty safe even if someone found out I'm doing it (it's not like I'm hiding: gigabytes of traffic on the standard OpenVPN port, lol). But anyway, I'm sure the Netherlands provider won't tell Kazakhstan anything even if they asked, and I don't do anything to warrant Interpol engagement. It's all about the attacks you want to mitigate, I think. If you want to break into the Pentagon, this won't work, I guess.


Still, be safe. If the government gets proactive and starts making house visits to people who have suspicious long-running connections to a foreign party, transferring GBs of data, you don't want to be caught with loads of illegal-to-own content on your machine.

Doing that kind of analysis has become very easy and cheap recently.


There are many more uses for a VPN than anonymity, like securing communication between my devices and networks, or bypassing/escaping restricted networks (geolocked, or captive portals and proxies).


In the UK, all ISPs are obliged to hold a log of all your surfing. I suspect it's every dns request made and every parseable http get/post. If you host your own vpn, even inside the UK, you sidestep the generic bulk data collection. Obviously, if GCHQ want you, they'll get you.


The Government wrote themselves a law (the "Investigatory Powers Act 2016") that says the Home Secretary can write to a service provider and ask them to record all the connections made.

Some of Britain's smaller ISPs don't like this sort of stuff (e.g. Andrews and Arnold's "implementation" of the government's opt-in child-friendly censorship was to have a checkbox on their application page; if you say you want censorship, it says you should choose a different ISP...), and so the smart money is on the government not having written to ISPs at all. The backbone providers are few, bigger, and much more corporate. No colourful personalities apt to make the government look a fool in a televised hearing. So any letters probably went to the backbone providers, although so far as I know none have come forward to say so.

In 2017 the ECJ told the UK government that such mass surveillance is prohibited, and it is also widely rumoured that the government then told the backbone providers to pause the collection.


> But how does it help you stay anonymous? You are still paying for that server using your CC and they'd have your address and all other info. Unless I can rent the server anonymously, I can't see any point to run my own VPN server. That's why I'm paying a third party, PIA for example, to rent and run that server for me.

I don't see the purpose of VPNs as anonymity, and I think many people are making a mistake to see them that way. If anyone really wanted to know who you are, they could file a suit and subpoena PIA. Now you're trusting that PIA really doesn't keep any logs. You have no way of verifying this. Or, if someone has some suspicions about who (and where) you are, they could subpoena their billing information. If someone (particularly a government) really wants to find you, a VPN does little to stop them. You've just introduced a single middleman.

On top of that, you can only take PIA's assurances that they themselves are not tracking you.

I run my own VPN server, hosted in a DO droplet, to get around networks that filter certain ports and websites. That's it. I have considered adding a VM with read-only access to my NAS to the VPN but that's all.


I live in a country, of which there are many, that blocks most internet content of interest and heavily logs and monitors the rest. However, having developed this habit, I quite like my ISP being unable to monitor me.


You can also run the VPN server at home. Then you dial in and you will be able to access all your home services!


What?

E: NVM, you mean to access your stuff without NAT etc.


Most people use vpn to hide from their own ISP, that's a sad state of affairs.


> Unless I can rent the server anonymously

Perhaps if you used Bitcoins to pay?


We're down-voting questions now? Wow.


I run my own VPN server with a hosts file on it that blocks ads and tracking domains. I then use it for my mobile phone as an ad blocker.

I'd wager that quite a few people would like to pay for a managed VPN service that did the same thing... However, just as you occasionally need to disable your ad blocker on some websites, you might occasionally need to circumvent the blocking on the VPN. I wonder how that might be done.
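For anyone curious, the hosts-file trick is just null-routing domains. A made-up sketch of how such a blocklist gets built (the domain names are placeholders, not a real blocklist):

```shell
# Turn a plain list of ad/tracker domains into hosts-file entries
# that resolve to 0.0.0.0, i.e. nowhere. Domains here are made up.
printf 'ads.example.com\ntracker.example.net\n' > domains.txt

# Prefix each domain with the null address, hosts-file style.
awk '{print "0.0.0.0 " $0}' domains.txt > blocklist.hosts

cat blocklist.hosts
```

The resulting file can be appended to /etc/hosts on the VPN server (or fed to a resolver like dnsmasq), so every client using the tunnel's DNS gets the blocking for free.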


I recently did this with openvpn+Pi-Hole running on a NATted £4/year VPS. It's great to have DNS blocking for your phone's apps and so cheap I'll setup another one for redundancy at some point.

OpenVPN on Android lets you select apps to bypass the VPN, so I have a second browser on standby if I ever need direct internet access. Or I could use the disable for x minutes feature of pi-hole.


Where can I find a £4/year VPS?


https://lowendspirit.com/

Proper IPv6, shared IPv4 with a port range forwarded for you to use.


Bandwidth allowances seem far too low for use as a VPN or proxy server though?


500 GB/month is fine for my needs.


I run my own VPN server because a) I already have a server for TeamSpeak and other smaller stuff, and b) it allows me to use public wifi more securely.


Some VPS providers allow you to pay with bitcoin. In theory this means you could stay 100% anonymous.


bitcoin is hardly anonymous


Forgive my ignorance, I’m new to bitcoin. If I buy bitcoin with cash at an ATM, how could a transaction be traced back to me?


The easy answer is that somebody can photograph you putting the cash in the machine. Cameras are pretty small these days and facial recognition is cheap.

Besides that, though, the entire mechanism of Bitcoin is one where all transactions are recorded publicly in perpetuity. It's really the polar opposite of private in that respect. It means that if your identity is ever linked to a particular bitcoin address (as in the not-so-unlikely scenario that the authorities or some other attacker are monitoring bitcoin-for-cash machines), then all transactions linked to that address can be linked to you.


I see. In hindsight my comment about “100%” anonymity was misguided. My original intuition was that a cash-bitcoin payment had to be at least “more” anonymous than the alternative of a credit card, but now I’m not so sure. Presumably tracing a credit card payment would require a court order or hacking a bank account/email, but the bitcoin scenario only needs a big database of faces which I guess you could get by crawling Facebook or something(?). Anyway, your main point about all transactions being recorded forever is well taken. Thanks for clarifying my understanding.


The other bit that people seem to neglect with BTC is that it is not immune to ordinary network analysis. Yes, it's a very clever system that creates a trustless layer of distributed storage, but it still works via IP. If you want to spend some coins, you still need to sign your transaction and submit it somehow to the network. Analysis of where that transaction came from (for example, by controlling the node where you submitted it, or even just simple sniffing of your connection) can correlate a physical location or IP address with a BTC address. Tor or similar measures can of course help with this, but you will probably always leak some information about your computer/OS/ISP/general physical location via things like packet timing, if you haven't accidentally leaked even more via things like forgetting to mask your MAC address.

In short, as always, it depends on your threat model. Are you trying to hide the fact that you forwarded sensitive nuclear secrets to overseas actors from a determined U.S. Government investigation? Sorry, you probably can't, at least without a bit of luck. Are you trying to fool Netflix long enough to stream latest season of Arrested Development? Probably doable.


> Cameras are pretty small these days and facial recognition is cheap.

Masking your face is also cheap and easy to do if you are really concerned about it.


Beware of DNS leaks [0]! They are easy to overlook when you configure your own VPN solution.

[0] https://en.m.wikipedia.org/wiki/DNS_leak


If you use OpenVPN, add the following (without quotes) at the end of your config file:

"block-outside-dns"

then run this test:

https://www.dnsleaktest.com/

I pass the extended test with it. Very easy and worth doing.
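For context, roughly where that sits in a hypothetical client config (the server name is a placeholder). One caveat worth knowing: as I understand it, block-outside-dns only takes effect on Windows clients; on Linux the usual approach is up/down scripts that rewrite the resolver config instead.

```ini
# client.ovpn (sketch, not a complete config)
client
dev tun
proto udp
remote vpn.example.com 1194   # placeholder server
block-outside-dns             # Windows: firewall off DNS queries outside the tunnel
```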


The GDPR prompt implementation is terrible: it shows the article, then redirects to the prompt, then reloads the article! You have to press back three times to return to HN.


I truly appreciate WireGuard's simplicity, but what's the best way to handle key management and peer address assignment in larger deployments?


That's the only feature request I have as well. Something where you could reference a file would be ideal, for example your config could be:

PrivateKey = !peername

PublicKey = !peername

And the actual credentials could be in a separate file with a specific name, with one peer per line:

peername:pubkeystring:privkeystring

That would make deployments much easier, as the credentials could be handled separately.

However, the peer address assignment is another good question. One file per peer would be better, I think, but you'd need something like another directory, and, at that point, you might as well write a script to take all your config and concatenate it into one file. That having been said, I don't know why that can't be a part of wg-quick.
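Something like that script could be pretty small. A hypothetical sketch of the idea: one "name:pubkey:allowed-ip" line per peer in a credentials file, expanded into [Peer] sections (the names and keys below are placeholders, not real keys):

```shell
# One peer per line: name:publickey:allowed-ip
cat > peers.txt <<'EOF'
laptop:PUBKEY_A:10.0.0.2/32
phone:PUBKEY_B:10.0.0.3/32
EOF

# Expand each line into a WireGuard [Peer] section.
while IFS=: read -r name pubkey ip; do
  printf '# %s\n[Peer]\nPublicKey = %s\nAllowedIPs = %s\n\n' \
    "$name" "$pubkey" "$ip"
done < peers.txt > wg0-peers.conf

cat wg0-peers.conf
```

You'd then concatenate wg0-peers.conf onto your [Interface] section to produce the final wg0.conf, keeping credentials out of the main config.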


What is the recommended way of UDP tunnelling wireguard? Is it necessary to run something like pwnat alongside it, or is there a better way?

With a TCP-based VPN there are services such as ngrok for exposing only the VPN endpoint behind a NAT, but nothing equivalent exists for UDP.

Unless the Android and other clients can work this way, I believe it will limit the uptake of this.


For the sides that need to listen for an incoming connection (your server, typically), you'll need a direct port open, which might just be forwarded through a NAT.

For clients, it works like all other UDP applications, it just reaches out to the remote address and the NAT keeps a temporary mapping of the source and destination ports and addresses.


On the client, is it possible to set up an “always on” VPN, such that when the client restarts on reboot, there is no internet connection until the VPN is up? Or, when either the client or server glitches, the endpoint computer/mobile does not connect to the internet in the clear?


On Ubuntu, for each wifi/wired connection settings, there's a checkbox and dropdown to automatically connect to one of your VPNs.


It's really frustrating that there isn't an option to enable automatic VPN for all connections.


I would like to try out WireGuard.

Has anyone had any experience with TunSafe?

https://tunsafe.com/


It's closed source crypto software. You can make your own judgement on how acceptable that is for you. Jason Donenfeld (Wireguard author) strongly recommends against it [1].

[1] https://lists.zx2c4.com/pipermail/wireguard/2018-March/00244...


Worth noting there was a HN discussion on it from 5 months ago. [0]

I got the impression that ludde (the author of TunSafe) seems to think that people should trust him because he wrote uTorrent and he is, for reasons not specified, opposed to releasing it open source and suggests that the author of wireguard has a "general dislike against non open-source applications."

For what it's worth I really cannot understand the mentality that would lead to you creating a high-quality, benign VPN client that you have no intention to charge for or sell and not wanting eyes (which have certainly been offered) to look over it before users rely on it.

[0] https://news.ycombinator.com/item?id=16515637


I'm still confused as to why Tunsafe isn't open-source. I feel like if he wanted people to trust it, open-sourcing it would be a way to go.

There's nothing wrong with non open-source applications in general, but when it's a closed-source client of an open-source VPN server, you can't trust it, sorry.


I don't want to lose control over it. If I open source it then anyone could just take it and rebrand it and pretend it's theirs.


> If I open source it then anyone could just take it and rebrand it and pretend it's theirs.

Not necessarily. Depends on what license you use for the code, but if it's not a copyleft license they can't even create a (legal) copy if you maintain copyright and don't give anyone a license to copy the code.


It's probably less about legal forks and more about bad actors forking it, inserting spyware, and buying Google ads against "vpn server" and "tunsafe". It seems like it's more of a problem for open source software targeting nontechnical users, and I imagine uTorrent had a lot of issues with it.


I can see that angle. We're all used to curated packages from distro maintainers, whereas Windows software is like the wild West, especially for those not so technically competent.


Then put it under the GPLv3 or something similar.


Interestingly he just opensourced it.


I've heard a lot that WireGuard is faster than OpenVPN. I must admit that my OpenVPN connection manages only a few megabits, even though my server allows 100 Mb/s and my home connection allows 300 Mb/s. I haven't tested WireGuard, as it still seems too hard to configure, but I wonder why it's more performant.


I found it similarly hard to configure, so when I finally managed it, I wrote up how so you can try my configs: https://www.stavros.io/posts/how-to-configure-wireguard/


The white paper has a detailed section explaining the kernel implementation: https://www.wireguard.com/papers/wireguard.pdf


Mostly because it uses a kernel module instead of a pure user-space daemon using `tun` drivers.


So this requires maintaining an out-of-tree kernel module? Does it require rooting on Android?

I find OpenVPN plenty fast for personal use (with adjusted send/receive buffers and AES-NI on the server)


It does not require rooting on Android; the WireGuard app[0] falls back to the userspace implementation if the kernel module is not available.

[0] https://play.google.com/store/apps/details?id=com.wireguard....


For now, yes. There are efforts to get it into the Linux kernel, see https://lists.zx2c4.com/pipermail/wireguard/2018-July/003176...


In practice DKMS works well and is automated in many distros. You just notice updates take a bit longer as the WireGuard kernel module is built.

I believe you can use wireguard-go on android, which doesn't require a kernel module.


Wow, this is the best TechCrunch article I've seen reach the front page in a while! Dude, whoever wrote this deserves a big-ass raise!


IMO they simply had nothing else to write about. This is the first time I've seen a “how to” on TechCrunch.



> Algo VPN runs on any Ubuntu server, but the easiest way to host your server is to create an account on DigitalOcean.

Advertorial?


I feel like saying the entire thing was written to direct people towards DO is stretching it just a tad. Possible that they have some affiliate program with DO? Sure. But Algo itself says that DO is the "most user friendly" on their github page.


Considering Google Cloud is offering $300 in credit, its console is top notch, AND it offers browser-based SSH, DO is not as obvious a choice.

The sustained-use discount and top-notch network further the case.



