Dead Simple VPN (balaskas.gr)
148 points by stargrave 3 months ago | 68 comments



If you already have OpenSSH installed, it has a built-in tunnel you can activate with a single command-line argument that exposes a SOCKS server on localhost.


It can also tunnel IP packets or Ethernet frames, see http://rkeene.org/viewer/tmp/ssh-ip-tunnel.txt.htm
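
For the curious, a rough sketch of the layer-3 variant (this assumes root on both ends, `PermitTunnel yes` in the server's sshd_config, and the host name, tun numbers and addresses here are just placeholders):

    # open the SSH connection and create tun0 on both ends
    ssh -w 0:0 root@server

    # then give each side of the point-to-point link an address
    ip addr add 10.9.0.2/32 peer 10.9.0.1 dev tun0   # on the client
    ip addr add 10.9.0.1/32 peer 10.9.0.2 dev tun0   # on the server
    ip link set tun0 up                              # on both ends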


I use it all the time to tunnel Firefox wherever I go, and even restrictive firewalls usually let it through.

I also port-forwarded 443 to my SSH server in case port 22 is blocked.
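
If it helps anyone, the simplest variant I know of is to just have sshd listen on 443 as well (a sketch, assuming nothing else is already bound to 443 on that box; user and host are placeholders):

    # /etc/ssh/sshd_config on the server: listen on both ports
    Port 22
    Port 443

    # from behind a restrictive firewall, SOCKS proxy over 443
    ssh -p 443 -D 1080 -N user@your-server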


I'm very impressed with the blog. At least one post a month for ten years? Whoa. Impressive.


jedisct1 sure is prolific with all these lean and friendly crypto-related applications.

dnscrypt-proxy, libsodium, libhydrogen, minisign, dsvpn, probably others I've never heard of.


minisign is pretty cool. I've added its signature to my software releases along with md5/sha256.

https://github.com/jedisct1/minisign


I don't think libsodium belongs on that list. It's friendly enough, but pretty far from lean.


This is exactly what I have been looking for. One executable, symmetric keys and any port I want.

TCP is sometimes a must (library Wi-Fi that only allows known ports), but UDP is (I think?) better for wrapping TCP traffic.


UDP is just an IP packet with port information. That is all. That’s why it is so easy to do something like TCP/UDP instead of TCP/IP, even though nobody calls it that. TCP over TCP is bad because TCP uses packet loss as its congestion signal, and when the outer connection retransmits lost packets itself, the inner TCP layer never sees that loss and its retransmission timers fight the outer layer’s (the classic TCP-over-TCP meltdown).


Just to clarify... IP = L3, UDP|TCP = L4. In your example TCP/UDP is still TCP/UDP/IP.


Yes obviously. But practically it doesn’t matter as UDP doesn’t add anything except expanding the addressing to include port numbers. That’s like saying that we must always list all protocols below TCP: TCP/IP/ETH/Copper wire. It doesn’t matter because what you run IP over can be ETH, 802.11, pigeons, door to door salesmen, etc. Same with TCP/UDP: it doesn’t matter if UDP is implemented on top of something other than IP. In fact, if you don’t make a distinction between IPv4 and IPv6, two wildly different protocols, you don’t actually care what TCP runs on top of in a TCP/IP combo.


> That’s like saying that we must always list all protocols below TCP: TCP/IP/ETH/Copper wire.

My point was your conflation of L3 and L4. And in fact TCP does see some changes from v4 to v6. Consider, for example, that ARP is replaced by Neighbor Discovery (ND), and that is added to the TCP/IPv6 stack. So... if we're splitting hairs now, then not exactly.


I am not conflating them. But consider that you can run IP over IP. What level protocol is IP in this case? What about IP/IP/IP/IP? Is IP on the left level 7?

Also, when you are using TCP you don’t care whether it is using ND or ARP. That’s what abstraction means. The implementation of course cares, but when you use it you do not, and if you do, the implementation is incorrect.


> I am not conflating them. But consider that you can run IP over IP. What level protocol is IP in this case? What about IP/IP/IP/IP? Is IP on the left level 7?

There is only one L3 in the local stack at any given point in the packet flow. Sure, you can run IP in IP, but the parent, or root, IP stack is the one in control until the encapsulated IP is decapsulated and moved up. Encapsulating an L3 protocol doesn't magically make it something else as far as OSI is concerned.

Finally, TCP isn't an abstraction, and it has rules based on its version. So, yes, if you're writing network applications at the TCP level you very much do care about ND or ARP.


Look, if you believe in the sanctity of the network layers, sure. I don't. In a TCP/UDP scenario, to me UDP is the packet layer because that's how I'm using it. If I am doing TCP/IPv6/UDP/IPv4, which I have actually done in real life when IPv6 access required that you set up a tunnel over UDP/IPv4, I am free to think of IPv6 as the level 3 protocol because that's functionally what it is.

As for ARP vs ND, when I do

    bind('localhost', 5555, 'TCP')
I really don't care whether localhost is IPv6 or IPv4 or pigeons throwing rocks with notes.


> Look, if you believe in the sanctity of the network layers, sure. I don't.

And herein lies the issue. A layer 3 protocol can never be anything else, unless you're writing some sort of dissector or have another special use case. This isn't a "belief"; it is what it is. But even when said layer 3 protocol rides a higher layer, let's say IPv4 on UDP, it's still a layer 3 protocol that is layer 4 payload until decapsulated and processed as such. What you describe is encapsulation and decapsulation; this isn't new or interesting, and in most cases it's generally a bad idea just from an overhead perspective.

I'm also not sure your pseudocode is legit, considering you're binding a socket that's already been defined, and the socket() call is the real question mark here that differentiates IPv4 and IPv6. I think you may have glossed over the fact that some platforms don't support binding v4 on v6 sockets, while some do. Regardless, your example is neither universally correct nor ideal in many cases.


The problem is that the layer models break down above IP/level 3. Imagine you defined IPv77 which was identical to IPv4 but included port info. So this is equivalent to UDP/IP. What layer protocol is that?

Or say you managed to run UDP directly on top of ETH. Because if you don’t need routing across different networks this is perfectly possible. A protocol is just two interfaces: where it interacts with the lower level protocol and higher level protocol. It is all encapsulation, that’s my point. This is why the layer model is mostly not really used when discussing these things.

So back to the example of UDP being implemented as a layer 3 protocol: when using it, you wouldn’t actually know the difference, would you?

As for my code example, you are right. It should be like so:

    s = socket(TCP, IPv6);
    bind(s, "localhost", 5555);
    listen(s);
    c = accept(s);
Note that past the call to socket() and storing the remote address, I don’t care whether TCP here runs on carrier pigeons or paper planes. Heck, I could treat the addresses as completely abstract strings if I want and never know that they are the names of two pigeons. TCP abstracts that away from me. I don’t worry about things like packet delivery, congestion, routing, etc. When I use the TCP interface, I can ignore most of the underlying details. When I implement it, it’s the opposite: I care a whole lot because I need a different implementation for every underlying protocol.


I'm sorry but you're making things up at this point. Feel free to read the first paragraph of RFC 768 [0], most notably the last sentence:

"This User Datagram Protocol (UDP) is defined to make available a datagram mode of packet-switched computer communication in the environment of an interconnected set of computer networks. This protocol assumes that the Internet Protocol (IP) is used as the underlying protocol."

As for your code, again, you make a lot of assumptions. As I mentioned before, opening a socket as IPv6 does not guarantee backwards compatibility with IPv4 addressing. So, again, my point is that you do have to care about the version of IP. In your original example you conveniently skipped the socket call and claimed "See, I can just consume any IP!". Now you're just glossing over the glaring assumption, as I stated in the last post, that not every platform guarantees backward compatibility with IPv4 on an IPv6 socket.

Also... your pigeon analogy makes zero sense for the argument you're attempting to make, because it's irrelevant to how these protocols actually work in real computing environments.

If you care to read them, the OSI definitions state that layer 3 protocols are concerned with deciding which path data will take. UDP and TCP have no routing facilities I'm aware of, because port numbers don't define a path. So, again, UDP is not a layer 3 protocol just because you feel like it is.

[0] https://tools.ietf.org/html/rfc768


I understand and agree with you that IPv4 and IPv6 are OSI layer 3 protocols and UDP and TCP are layer 4. At the same time, the OSI model above the fourth layer has been widely considered mostly useless, and some have called into question the whole thing. Yes, RFCs exist that point to what you are saying. But again, I can 100% implement TCP on top of UDP without ever thinking about IP packets. And no, I don't care which layer 3 protocol TCP is using underneath, so long as I can initialize the socket properly and get the local/remote addresses. Take a look at a complete example of a TCP server: https://www.cs.cmu.edu/afs/cs/academic/class/15213-f99/www/c...

Where in there do you see anything about ARP or ND? Or hops vs TTL? Or IP packet header sizes? Or header checksums? Or priority flags? You can take this code, add a couple of macros for getting the correct AF_ value and structs and the code would be 100% abstracted away from any mention of the underlying layer 3 protocol.

Google has been developing transport layer protocols on top of UDP for years now (QUIC being the obvious example). Why? Because getting a new layer 4 protocol deployed across the internet is nearly impossible. Yet, if we all agreed that it was something we wanted, those same protocols could run directly on top of IP, which makes them functionally transport layer. If it helps you sleep at night, you can say that they are temporarily exiled to be encapsulated in UDP, but really they'll return some day to their proper place in the OSI model.


What does TCP have to do with IPv4/IPv6 changes and the L2/L3 glue ARP/NDP?


Even simpler for some use cases:

`ssh -D 1080 -C -q -N root@your-vps`


Are you sure that -C is a good idea? Wouldn't it be possible in theory to exploit something similar to CRIME/BREACH?

> root@your-vps

People allow for root ssh connections?


Yes, what's wrong with that?

Security is a multi-dimensional matter; you can't just rely on rules like "no ssh to root" or "passwords should be more than 20 characters".

In my case ssh is allowed from 2 IP addresses (a much more useful rule than "no ssh to root", btw!) with key auth (password auth disabled). I don't see any problem with that.


Some do, although I too prefer non-root; I used it here for the simplicity of the example.


> People allow for root ssh connections?

Keys should make it secure, and a personal VPS obviates audit requirements, so sure.


So correct me if I am wrong, but this is doing IP in TCP, right? IIRC, this is a big issue for TCP flow control, which relies on packet loss to detect congestion: as you encapsulate stuff in a TCP stream, there will be no more packet loss and the tunnelled TCP will not throttle correctly.

I did not read the code yet, so maybe there is something to simulate congestion-related packet loss.


From the README:

TCP-over-TCP is not as bad as some documents describe. It works surprisingly well in practice, especially with modern congestion control algorithms (BBR). For traditional algorithms that rely on packet loss, DSVPN has the ability to couple the inner and outer congestion controllers, by setting the BUFFERBLOAT_CONTROL macro to 1 (this requires more CPU cycles, though).


This is an intriguing quote. The biggest problem I saw with TCP over TCP is not during regular operation, but when there was noticeable packet loss on the line. Then the accumulated retransmits on the stacked TCP layers killed it most of the time. Admittedly that was maybe a decade ago, after which I concluded it’s just a BadIdea(Tm) and never revisited it.

I had a similar experience lately whenever using mobile internet with poor coverage (e.g. a high-speed train Brussels-Paris or Brussels-London), which results in frequent and large variations in latency. Of course, that was all with decoupled congestion control.

If anyone has compared the performance of this solution on poor connections with and without BUFFERBLOAT_CONTROL, it would be very interesting to know the results!


WireGuard reconnects in less than a second whereas OpenVPN takes a good 8 (!!) seconds to reconnect. If you have an unstable connection, that's awful. If you only need SSH, Mosh (which uses UDP) could suffice, though.


Does this provide some benefit over Algo? (https://github.com/trailofbits/algo)


Algo is a set of scripts that sets up IPsec and WireGuard VPNs on a Linux server. This dsvpn is an entirely new VPN implementation, like IPsec or WireGuard, written by jedisct1 from scratch.


Does this provide some benefit over a Wireguard setup? (https://www.wireguard.com/)


There's a comparison section on GitHub [1] – WireGuard doesn't support TCP while this does.

[1] https://github.com/jedisct1/dsvpn#why


The normal solution to that problem is to tunnel the UDP over TCP, for example with this: https://github.com/wangyu-/udp2raw-tunnel
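
Going from memory of the udp2raw README (so treat the exact flags as an assumption), wrapping a WireGuard endpoint looks roughly like this:

    # server: accept fake-TCP on 443 and forward to the local WireGuard port
    ./udp2raw -s -l 0.0.0.0:443 -r 127.0.0.1:51820 -k "some-password" --raw-mode faketcp -a

    # client: expose a local UDP port and point the WireGuard Endpoint at 127.0.0.1:3333
    ./udp2raw -c -l 127.0.0.1:3333 -r your-server:443 -k "some-password" --raw-mode faketcp -a

You typically also need to lower the WireGuard interface MTU a bit to leave room for the extra headers.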


> OpenVPN is horribly difficult to set up.

That statement could use some more explanation, in my opinion. I never found it to be particularly difficult.


Using easy-rsa and choosing the right ciphers and other protocol options always seemed overly complex and opaque to me. WireGuard was a breath of fresh air in comparison.


There are multiple setup scripts like [1] that make it super easy to set up OpenVPN.

[1] https://github.com/Nyr/openvpn-install


Makes it super easy but even more opaque, and that is not a good thing for security-critical software.


Isn't it common advice to not roll your own crypto? If this software has sensible defaults, I see no problem with using it.


It is, but with this script it's a third party rolling their own crypto for you :D. OpenVPN should have good defaults instead.


Compared to Wireguard though...

It sounds like OP would've liked to use wg really, but only had TCP 80/443 to play with.


Compared to Wireguard, OpenVPN is easy to set up wrong but difficult to set up right.

Most people configuring their own OpenVPN installation will think the job done when it begins functioning, but there is a difference between functional and secure; just look at telnet! OpenVPN makes it too easy to create a system that 'works' but isn't secure.


Suppose you tunnel wireguard traffic through a plain-jane unencrypted TCP connection?


ah excellent. Everything makes sense now. :-)


Pretty damn cool, but I can't see this giving you much in the way of anonymity. Still, it should be fine for getting through to region-locked DRM content.


I believe this wouldn't work for Netflix because they check for known commercial IP addresses (DigitalOcean or AWS for example).


In those cases, I reverse-SSH the VPN ports to my server, put it on a tiny computer, fly over to a country with unlimited DSL and place it behind a router at a friend's house. Easy.


For very, very, large values of easy.


`ssh -D 9090 $HOMESERVER` and configuring Firefox is pretty easy


For the record, I meant `autossh -M 20001 -R 10.8.0.94:11194:localhost:1194 me@my-server.com`, not dynamic port redirection over SOCKS5. (I create a new interface+route for VPN connections to have more control over it.)


I think autonomous indigenous communities are sitting on a huge untapped market of data centers operating under their own globally competitive data privacy laws

Casinos are so 20th century


Indigenous communities are probably fine with people coming onto their land and leaving an embarrassing amount of money behind.

It’s probably less hassle than dealing with Uncle Sam, the CIA/FBI and, more than anything else, billing random weirdos who have funds on their prepaid card this month and next month, who knows.

Cash is better, I guess.


They have been doing this for years in Canada, but the price of a server is extremely high.


Can you give an example of a tribe and product offering? I’ve been researching Canadian indigenous communities and it seems their autonomy is even less than that of US tribes, but the provincial and federal governments are just much less staffed (big area, low population, low interest, apathy).


The Mohawk nation is very high tech compared to some of the others.

When working for an unnamed poker company, we had a server hosted there. When a hardware problem happened we had to drive out and replace it ourselves.

The cost is high and getting accepted requires knowing someone, but it's the safest place in North America to host gaming-related material.

Remember Bodog? I believe they hosted game servers there as well.


That's an idea, but it might make for a somewhat fragile business model, subject to the whims of Uncle Sam's attention.


Sure yeah the Feds can regulate anything they want on tribal land

For most business and domestic issues they just lack enforcers, interest, will

For data yeah it would create a silly game if there were attractive hosting privileges. But there still might be a business play here: people flock to Iceland and Switzerland on pure misinterpretations of what a law means in practice.

There was a founder here who chose Iceland servers because of no formal data-sharing agreement with the US, and people showed case law where the FBI merely asked the Reykjavik police to tap a server.

Point being that it could still be a cash cow for US reservations


That wasn't just any server, it was The Silk Road. The Icelandic authorities went through their own court process, though, getting whatever their equivalent to a search warrant is first, then turning over a complete image of the server to the FBI.


> The Icelandic authorities went through their own court process, though, getting whatever their equivalent to a search warrant is first, then turning over a complete image of the server to the FBI.

Which reinforces my point? These countries offer people nothing.



What are the implications of:

https://eprint.iacr.org/2019/447

"Practical Key-recovery Attacks on Round-Reduced Ketje Jr, Xoodoo-AE and Xoodyak"?

As far as I understand, an attack on a round-reduced variant doesn't mean the full-round primitive is broken, but it is still something to think about.


The security of block ciphers, permutations, hash functions and higher-level constructions is studied by modifying the function until some of its claimed properties are invalidated.

What is being analyzed is not the actual function. How many changes had to be made, and how significant they were, is an indication of the security margin of the actual function.

In this paper, key recovery requires either a made-up mode, or an existing mode used incorrectly (nonce reuse with the same key and different messages) 2^43.8 times. In addition, the permutation had to be reduced to 6 rounds. The normal number of rounds is 12, which makes a huge difference.

The analysis actually shows that the security margin of the real construction is extremely comfortable.


DSVPN does not seem to support PFS [1], which would immediately disqualify it for any purpose for me.

[1] https://en.wikipedia.org/wiki/Forward_secrecy


While nice to have, this is not terribly useful in the context of a VPN.

PFS would prevent the following scenario: you’re suspected to be an axe murderer, the police asks the cloud provider to tap your VPS traffic, and, later, asks for a dump of that VPS to get the key. Haha, the key is still valid, so the previously recorded traffic can be decrypted!

PFS however would not prevent the following more likely scenario: the police asks for the key, and effortlessly decrypts everything from now on. PFS doesn’t provide post-compromise security, which is far more important.

But since a VPN server is essentially a proxy that decrypts traffic between itself and a client in order to forward the decrypted packets to remote servers, here’s an even more likely scenario: the VPS is tapped, and the packets exchanged with the VPN client are not something to waste any time on, since for each of them the server also sent or received a decrypted copy.

That being said, a simple way to get PFS and post-compromise security is to change the key regularly. If you’re just someone who needs a personal VPN to work in a coffee shop whose public WiFi has overzealous firewall rules, this is not something you have to worry too much about.

If you’re an axe murderer, just add key rotation to your post-crime routine.
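
A rotation sketch, assuming the key is just the 32-byte pre-shared file both ends load at startup (the paths, host name and the service unit here are made up for the example):

    # generate a fresh 32-byte key
    dd if=/dev/urandom of=vpn.key.new count=1 bs=32

    # push it to the server, then swap it in on both ends
    scp vpn.key.new me@my-server.com:/root/vpn.key
    mv vpn.key.new ~/vpn.key

    # restart both ends so they pick up the new key
    ssh me@my-server.com 'systemctl restart dsvpn'
    systemctl restart dsvpn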


Um, I think you're wrong about the following claim. Let's use wireguard as an example.

> PFS however would not prevent the following more likely scenario: the police asks for the key, and effortlessly decrypts everything from now on. PFS doesn’t provide post-compromise security, which is far more important.

Wireguard achieves PFS using regular ephemeral Diffie-Hellman key exchanges. If I understand the scenario you describe, the police have the initial (long-term) key (from either endpoint or both) and tap the VPS traffic. The long-term key can probably be used to get the initial negotiation of the tunnel, but after the first ephemeral key exchange they can't decrypt any more of the traffic. Why? Well, both endpoints keep their secret (random) portion of the key exchange on their own machine and continually wipe it (and the derived keys) after every key exchange. So the police need the interstitial derived keys from either endpoint to decrypt the traffic, but it'll probably be too late once the conversation is over.


I don't really understand networking much.

> dsvpn server /root/vpn.key auto 443 auto 10.8.0.254 10.8.0.2

So what do those last two IPs mean? Similarly for the client.


You can leave that to `auto`.

A VPN creates a virtual network between two machines. Each end needs an IP address.

In your example, `10.8.0.254` is the address that will be assigned to the server and `10.8.0.2` is the address that will be assigned to the client. These addresses are only valid within the tunnel. They are not the real IP addresses.

If you do `ping 10.8.0.254` from the client, you'll get a response from the server. If you do `ping 10.8.0.2` from the server, you'll get a response from the client. If you do `ping 10.8.0.2` from the client, you'll get a response from itself. Which is not very useful.

These IP addresses are not reachable outside the tunnel. The network is private.

You can use any pair of addresses you want as long as they are not used by anything else.
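
If I'm reading the usage right (treat the argument order as an assumption; it mirrors the server command quoted above, with the two tunnel addresses swapped and a placeholder host name), the matching client invocation would look something like:

    dsvpn client /root/vpn.key your-server.example.com 443 auto 10.8.0.2 10.8.0.254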


Since this is a point-to-point tunnel, you can get by without assigning IP addresses to the tunnel endpoints and route by tun device instead. It saves some `ip addr` configuration and reduces two NATs (once on leaving the local tun and once on the server) to just one (on the server only).
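
A minimal sketch of what that looks like with iproute2 (the interface name and prefixes are assumptions; this replaces the `ip addr` step, not the VPN setup itself):

    # bring the tun device up without assigning it an address
    ip link set tun0 up

    # send traffic for the networks you care about straight into the tunnel
    ip route add 192.0.2.0/24 dev tun0

    # or catch everything with the classic pair of /1 routes
    ip route add 0.0.0.0/1 dev tun0
    ip route add 128.0.0.0/1 dev tun0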



