IP traffic over ICMP tunneling (github.com)
193 points by vampire_dk on Nov 11, 2015 | 83 comments

SoftEther, a multi-platform, open-source VPN package, supports ICMP and DNS tunnelling among other things (SSL, OpenVPN, IPsec, etc.)


SoftEther is an amazing piece of software. As mentioned, I was a little dubious of it back when it was closed source and touted so many features. After it went open source I trialled it on a small network (~40 users) and it was rock solid.

My only gripe was user management. It lacks LDAP and PAM integration, substituting RADIUS instead. No biggie, I can set up a SoftEther->RADIUS->LDAP bridge. The caveat is that SoftEther will only apply RADIUS authentication to accounts you manually define, so you must create a user in SoftEther and enable RADIUS for them. This may have been fixed by now, but it was rather annoying back then!

After reading the product page, this product seems too good to be true (OSS, supports every OS and every type of VPN). Is there some kind of catch? How come I've never heard of it until now?

It's written in C++. /s

I suspect it hasn't gotten a lot of widespread publicity in English-language forums because it comes from a university project in Japan, so there aren't a lot of English-speaking contributors (there are only 9 contributors to the official repo).

Additionally, it was only open sourced in 2014, so as an open source project it's pretty young.

I know that among the pfSense crowd there was originally suspicion that it was backdoored by a government agency (not because there was proof, but because it had such a great feature set while being so new a project). Now that it's open sourced, I'm interested to see if people pick it up. It definitely looks legit.

I have been using it since its early releases in 2003 and the feature set really reflects the continuous development.

I hope you meant 2013. It looks like the earliest beta users started in 2012.

Before SoftEther went open source, it was sold commercially as early as 2004. It was renamed PacketIX VPN around 2005 and is still sold commercially under that product name at https://www.softether.jp/1-product/11-vpn (it's up to v4.0).

I remember reading that there was a 10-year agreement that finally ended at the start of 2014, allowing the original developer to go open source. The source tree is different from PacketIX's though, so some of the features are still not available yet in the open source version.

I've deployed SoftEther and I can confirm that the software does work incredibly well and is jam packed with features.

I too was concerned about the legitimacy of the software. But it is open source now, which should aid scrutiny.

I don't use it at the moment, but my gut feeling is that it is legit software. Sometimes there is just no way to know, though...

Not the first of its kind; just look it up on Wikipedia: https://en.wikipedia.org/wiki/ICMP_tunnel

Any captive portal these days also blocks ICMP.

Most firewalls block ICMP these days: the days of blacklisting are over, and ICMP is not the kind of traffic that gets whitelisted. Why would it be?

The only way these days is to misuse DNS. But even that works less and less reliably.

Why would you block ICMP rather than police it, or only allow certain types and sizes? I hope the people blocking ICMP don't ever try to run IPv6.

Of course they aren't going to try IPv6. Not because of ICMP; they just won't try it.

I agree that some captive portals/firewalls do block ICMP but still I've seen many in my country which don't.

Well, the question then is: what's the point, other than a personal exercise? There is plenty of ICMP and multi-protocol tunnel software out there for both Linux and Windows, and much of it doesn't require administrative privileges.

Also, ptunnel comes standard with some Linux distros these days (Ubuntu, and probably most of its derivatives). As far as raw performance goes, ptunnel is also the highest-performing one, capable of achieving about 150 kbps, which isn't that bad considering the sheer number of packets and the overhead you get.


> Well the question is then what's the point other than a personal exercise?

What's your question? Is it "What's the point of blocking ICMP?"? Or is it the opposite question?

If it's the former, then there are sysadmins out there who cargo-cult their network configuration and listen to folks like Gibson Research Corporation who've been giving really bad advice [0] for the past decade+.

[0] Specifically, they strongly recommend dropping all traffic to ports that don't have listening services, along with all ICMP, rather than rejecting said traffic and allowing all non-problematic ICMP. They also have a "handy" tool [1] to make it look like doing anything else is "DANGEROUS": (The tool reports [2] if your site responds to ICMP echo requests.)

[1] https://www.grc.com/shieldsup

[2] Ping Reply: RECEIVED (FAILED) — Your system REPLIED to our Ping (ICMP Echo) requests, making it visible on the Internet. Most personal firewalls can be configured to block, drop, and ignore such ping requests in order to better hide systems from hackers. This is highly recommended since "Ping" is among the oldest and most common methods used to locate systems prior to further exploitation.

I tried using some but couldn't get them to work, probably because many were developed a long time ago and there have been many recent changes in the kernel.

Exactly. I've seen many captive portals that don't block ICMP.

Any major captive portal re-routes DNS requests to its login IP and blocks any IP traffic leaving the local network. That essentially prohibits any ICMP request to the outside world.

Sure, but they still need to look up the name.

Yeah, that was my thought. Does everyone really think that the companies building these security platforms haven't thought of this? It's not an obscure protocol or anything, and ICMP has plenty of other potential abuses that would lead network admins to block it.

> The only way these days is to misuse DNS

How about IP over TCP SYN?

A few years back, I was assigned to work at a BigCorp's premises. They had really tight network security: all outward connections were blocked except through a dedicated HTTP proxy. This was bad news, since stuff like SSH is absolutely essential in my job.

After a few days of mobile tethering, I realized I could ask their HTTP proxy to open an HTTPS connection to a server outside the network, but instead of sending HTTPS traffic through the proxy, I could send any traffic, such as SSH. With this, I was ultimately able to open an SSH tunnel to my own shell server running OpenVPN outside their network, which then allowed (surprisingly stable and fast) access to the internet at large, via an OpenVPN tunnel wrapped in an SSH tunnel pretending to be an HTTPS tunnel.

I don't recall whether ICMP was allowed out at the BigCorp., but I am pretty sure someone will one day find a tool like this quite useful in a similar situation.. :)

Sounds like they enabled HTTP CONNECT without limiting the accepted port range, so any protocol will go through.
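A minimal sketch of the CONNECT handshake being described, assuming the proxy forwards the port unchecked (the host name below is a placeholder):

```python
def connect_request(host: str, port: int) -> bytes:
    """Build the CONNECT request a client sends to an HTTP proxy.

    After the proxy answers "HTTP/1.1 200 ...", the socket becomes a raw
    byte pipe to host:port, and you can run SSH (or anything) over it.
    """
    return (
        f"CONNECT {host}:{port} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        "\r\n"
    ).encode("ascii")
```

With OpenSSH this is usually wired up via ProxyCommand and a helper such as corkscrew rather than by hand.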

Maybe this is why Microsoft Azure never allowed ICMP traversal through their outer firewall despite frequent requests from users...

It's common to run OpenVPN on TCP port 443 (HTTPS) to avoid such restrictions.

The Great Firewall of China and others need deep packet inspection, heuristics, and AI to find tunnels set up that way.

Blocking based on port is a silly thing. He probably just ran SSH on port 443 to get around it.

That's another possibility, the comment above can be interpreted either way.

In any case, they were probably running something like Squid with very basic layer-7 filtering, so if something comes in on 443 they have no option but to forward it.

I think they allowed HTTP CONNECT to any port. Not that it would have been hard to move OpenVPN or SSH to a different port.

My school blocks all protocols except HTTP and HTTPS. However, they actually MITM HTTPS connections (I have to install a root certificate from them) so they can decrypt and see everything people do. So unfortunately typical TCP proxies don't work, even on port 443, since their transparent proxy makes the handshake. Instead, I have to put the payload in an HTTP request acting as a download, and another request acting as an upload, and make an OpenVPN connection over that.

That should be criminal; requiring users to install a root cert is just sad. Nice workaround, though!

My university uses an HTTP proxy too. The proxy makes connections only on ports 80 and 443, so OpenVPN on port 443 is pretty much the only option. I wish Google Hangouts would work over HTTPS.

I just tried iodine and icmptunnel. I can't say for sure, but I think icmptunnel was faster, at least on my internet connection.

I have also tried it. It works quite smoothly; performance is pretty good!

That's good news for me. :)

That is what I would expect to happen. The problem is that most captive portals still let DNS through, but not ICMP.

Having said that, I'm sure there are other uses for such a tool :).

The documentation for this is superb! I know what it does, why I'd want to use it, how to use it, where to use it and what it looks like in wireshark. Well done!

Thanks for the feedback :)

For anybody that's tried both - how do these compare to DNS tunnels (e.g. iodine), in terms of speed and reliability?

DNS is less reliable but could give you bigger throughput (you can send and request large records). ICMP packets will arrive quicker, be more reliable, and bypass the various DNS hijackers (common with many ISPs) along the way.

DNS also requires you to have a DNS server and a domain, and you'll need something to constantly clear the cache on the local machine, otherwise you'll eventually run out of room even if you use the maximum available DNS record size. If anything along the way keeps your DNS queries in cache, you might be screwed and run out of space very quickly.

If you need internet access, an ICMP tunnel will be better: bandwidth will be limited, but it will be more or less a point-to-point tunnel. If you need to exfiltrate data without explicitly maintaining a bi-directional tunnel, DNS is the way to go; it will also work in more restrictive captive-portal setups than ICMP. Today ICMP is usually blocked outright, while DNS sometimes works, especially in the common case where the restricted network offers some whitelisted sites (e.g. airport wifi that allows you to access the airport's site and the local train service but blocks everything else).
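To make the caching problem above concrete: upstream data in a DNS tunnel has to be packed into query names, with each label capped at 63 bytes, and every query must be unique or a caching resolver will swallow it. A rough sketch (base32 chunking with a sequence-number label; the domain is a placeholder):

```python
import base64

MAX_LABEL = 63  # RFC 1035 caps each DNS label at 63 bytes

def encode_query(data: bytes, seq: int, domain: str) -> str:
    """Pack raw bytes into DNS labels under a domain you control.

    The sequence-number label keeps every query name unique, which
    defeats caching resolvers along the path.
    """
    b32 = base64.b32encode(data).decode("ascii").rstrip("=").lower()
    labels = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    return ".".join([format(seq, "x")] + labels + [domain])
```

Tools like iodine use a similar scheme (with denser encodings where the resolver path tolerates them); this is only meant to show why the name-length and cache constraints bite.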

Even with ICMP you can send/receive large messages. There is no restriction on the maximum payload length.

I haven't seen a single network stack that doesn't limit the size of the trailing payload, or of the packet in general (MTUs ;)). Go try to push 65507 bytes of payload into the message and tell me how it goes.

In any case, a DNS tunnel offers you both TCP and UDP tunneling at much higher throughput. I'll take a look at your code when I have the time and see how it compares to ptunnel or ICMP shell.

"A correctly-formed ping packet is typically 56 bytes in size, or 84 bytes when the Internet Protocol header is considered. However, any IPv4 packet (including pings) may be as large as 65,535 bytes."

Ref: https://en.wikipedia.org/wiki/Ping_of_death ;-)

Normally, Ethernet or wifi imposes an MTU of 1500, and I size my ping packets accordingly.
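Sizing "accordingly" means leaving room for the IP and ICMP headers inside the link MTU. A quick sanity check of the arithmetic:

```python
IP_HEADER = 20    # IPv4 header without options
ICMP_HEADER = 8   # ICMP echo header (type, code, checksum, id, seq)

def max_icmp_payload(mtu: int) -> int:
    """Largest echo payload that fits one unfragmented packet."""
    return mtu - IP_HEADER - ICMP_HEADER

print(max_icmp_payload(1500))  # Ethernet → 1472
print(max_icmp_payload(1492))  # PPPoE DSL → 1464
```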

1500 is the MTU of Ethernet; it's often not sustainable on an end-to-end connection (especially once you add frame overheads). If you use that high a payload size, you'll get considerably worse performance than, say, limiting it to around 500 bytes; you're welcome to try it. Also, with how global traffic is managed, smaller packets tend to get priority, since they can be queued quicker. Backbone connectivity uses much bigger frames than Ethernet, so more often than not, generating more, smaller packets will increase your overall throughput (up to a limit), unless you are on a very controlled network.

What do you mean by "it's often not sustainable"? Throughput on a server is higher at higher packet size, so if you're doing a download I'd expect the server to send 1500 byte packets. It's pretty easy to saturate a link with 1500 byte packets, and it's much harder to do so at lower packet sizes (from the server's perspective) since the per-packet processing costs start to dominate over the per-byte costs. Admittedly my knowledge of this sort of stuff is mostly intra-DC; is there some other factor that you're referring to that supersedes this on the web?

I'm not aware of prioritizing smaller packets on the backbone, sounds like something that would be targeted at small flows (i.e. first N packets in a flow get a priority bump)? More info on that would be appreciated.

I don't know about cable, but DSL usually uses PPPoE. It has an overhead of 8 bytes, lowering the MTU of those connections to 1492 bytes.

It doesn't matter; the MTU setting on your end is meaningless for WAN and ISP/interlink-grade networks, since they don't use Ethernet. FDDI's frame size is 4500 bytes (ATM's is about double that), minus whatever overhead, so usually 4200 and change. ISP/WAN routers don't care about how many Mbit/s they transfer but about how many packets they route per given unit of time; as packets get packed into a single frame, the smaller the packet, the more packets they can transfer in each frame.

Also, from a more high-level point of view, if you think about it, the small packets are the most critical ones, at least as far as responsiveness goes. DNS is limited to 512 bytes over UDP, and TCP 3-way-handshake packets are tiny; those are the packets that need to get to and back from their destination as fast as possible. Delays in data transfers mean slower speeds; delays in handshakes mean that your application can fail or hang. Other important traffic such as VoIP [0] also uses very small packet sizes for this same reason: most critical services need to transfer very little data (per given unit of time) but need to update it as frequently as possible, to provide the illusion of real time and to mask the latency. The same goes for other things like online/multiplayer gaming, and so on. Pretty much, if you want your service to be as responsive as possible, limit your packet size to the smallest size possible and increase your PPS; this will ensure that your packets get to their destination quicker.

[0] VoIP packet sizes: http://www.cisco.com/c/en/us/support/docs/voice/voice-qualit...

The only time you would want to use large packets is pretty much when you can have a buffer. That means you need to handle fewer packets per second, which lowers CPU consumption (across the entire path), so video streaming and the like can use pretty much as large an MTU as they want, unless they start getting fragmented.

> It doesn't matter the MTU setting on your end for WAN and ISP / interlink grade networks is meaningless [as] they don't use Ethernet...

It absolutely does matter and is quite meaningful. :)

If you set your edge router's Internet-facing MTU to 9k, and the upstream equipment's MTU is smaller than that, then either your packets will be dropped, or PMTU Discovery will try to figure out the MTU of the path. (Better hope everyone along the path is correctly handling ICMP! :) )

> The only time you would want to use large packets is pretty much when you can have a buffer...

Or if you have high-volumes of data to move and want to dramatically increase the data:Ethernet_frame_boilerplate ratio. :)

> Also ... if you think about it the small packets are the most critical ones... [because they need to be dispatched as quickly as possible.]

Yes, but a larger MTU shouldn't affect this. Set whatever socket options are required to get those packets on their way as soon as they're created, and your system shouldn't wait to fill an Ethernet frame before sending that packet.

Poor choice of words on my part. If you configure jumbo frames on your uplink you are going to kill your network stack; if you limit it too much, you'll have a huge overhead. The point is that for transferring data, especially when responsiveness is important if not paramount, utilizing the maximum potential frame size you can push without fragmentation will generally yield a poorer result in real-world applications.

> [I]f you configure [jumbo] frames on your uplink you are going to kill your network stack...

I can't agree with that statement. If upstream devices support a larger-than-1500-byte MTU, OR PMTU discovery works correctly, then you are absolutely not going to "kill your network stack". At worst (in the PMTU discovery phase) you'll see poor performance for a few moments while the MTU for the path is worked out, and then nothing but smooth sailing from then on.

> The point being is that for transferring data especially when responsiveness is important if not paramount utilizing the maximum potential frame size you can push without fragmentation would generally yield a poorer result in real world applications.

I'm not sure what you're saying here. Are you saying:

"If you configure your networking equipment to always wait to fill up a full L2 frame before sending it off, you'll harm perf on latency-sensitive applications."?

If you're not, would you be so kind as to rephrase your statement? I may be particularly dense today. :)

However, if you are, then that statement is pretty obvious. I expect that few people configure their networks to do that. However, I don't see what that has to do with the link's MTU. Just because you have a 9k+ MTU, doesn't mean that you have to transmit 9k of data at a time. :)

I work for an ISP, and it is all Ethernet in the interior, both for residential and commercial customers. The small amount of frame relay and other things that are requested run on the Ethernet network from edge to edge.

> 1500 is the MTU of Ethernet...

It's the de-facto MTU of much of The Internet. Baby Jumbo (MTU >1500 but <9k), Jumbo (MTU ~9k), and Super Jumbo (MTU substantially larger than 9k) frames exist, and are supported by many (but -sadly- not all) Ethernet devices.


> Also with how global traffic is managed smaller packets tend to get priority...

Do you have a reliable citation for this? I would expect that core and near-core devices would handle so much traffic, that they all would be using MTUs far higher than 1500 bytes per frame.

It's a pretty standard QoS measure; network schedulers, especially for multiplexed/aggregated networks, will have a bias toward small packets. You should be able to find performance statistics for various token-bucket configurations that demonstrate this.

Do you have a cite for that? :) I know that CoDel doesn't bias for small packets; it treats all flows equally and tracks traffic on a bytes-transferred (rather than packets-transferred) basis.

Can you please clarify this comment? It sounds like you're saying Ethernet cannot maintain a line-rate transfer at the maximum MTU. But that can't be what you're saying; anyone could run an iperf/netperf test, or even a large crafted-packet transfer, and prove this wrong.

Thanks I'll look into this :)

If the layer above (i.e. IPv4) can create fragments, you can send up to the maximum payload of the L3 protocol: up to a 64 KiB IPv4 packet over 1500-byte Ethernet.
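As a sketch of that arithmetic: fragment offsets are expressed in 8-byte units, so each Ethernet-sized fragment carries 1480 bytes of the original datagram:

```python
import math

IP_HEADER = 20  # IPv4 header without options

def fragment_count(total_len: int, mtu: int) -> int:
    """Fragments needed to carry one IPv4 datagram over a link.

    Every fragment except the last must carry a multiple of 8 payload
    bytes, because the fragment-offset field counts 8-byte units.
    """
    per_fragment = (mtu - IP_HEADER) // 8 * 8
    return math.ceil((total_len - IP_HEADER) / per_fragment)

print(fragment_count(65535, 1500))  # maximum-size datagram → 45 fragments
```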

By far the biggest impact on tunneling over DNS is whether or not you can make direct DNS queries to the server running iodine.

I did some tests a while ago and found that iodine ran at ~98% of the non-tunneled speed when I could access the server directly, since the traffic is then wrapped in big TXT queries and it's really efficient.

But the common case for using it is that you can only look up names through a local DNS server, and then it's usually ~0.5% or so of the usual speed, i.e. 1-2 KB/s at best.

I haven't tried comparing both; I don't have many resources. All I can say is that using icmptunnel, one couldn't tell whether it was using the tunnel or the direct internet connection. Hence ICMP tunneling was very fast.

Although I'm interested in comparison as well :)

I remember postponing my payments to the ISP, which hadn't blocked ICMP for anybody, by using ptunnel: http://www.cs.uit.no/~daniels/PingTunnel/

It was pretty much usable circa 2008...

btw, in Debian (and probably its derivatives), it is just an apt-get away from being installed.

Hans is a nice one too: http://code.gerade.org/hans/

Using ICMP replies only, on both sides, is more convenient than ICMP request/reply. In that case you do not need to write this, for example:

    echo 1 | dd of=/proc/sys/net/ipv4/icmp_echo_ignore_all

I used to restrict ICMP to echo/reply using -m icmp in iptables, but this uses just that kind of packet...

Is there any way to stop things like this at the corporate firewall?

High end firewalls will monitor ICMP and can restrict the size of the payload. They'd probably also notice the large number of ICMP packets.

Big corporate places can completely restrict things and prevent any traffic from internal hosts to the internet. You can use proxying for web browsing etc. and then monitor that to check for any unauthorised traffic.
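As a toy illustration of the payload-size and rate checks described above (the thresholds here are invented for illustration, not taken from any real product):

```python
MAX_SANE_PAYLOAD = 128   # bytes; stock ping tools send 32-56 byte payloads
MAX_SANE_RATE = 50.0     # echo requests per second from one host

def suspicious_icmp(payload_len: int, pkts_per_sec: float) -> bool:
    """Flag ICMP echo traffic that looks like a tunnel rather than a ping."""
    return payload_len > MAX_SANE_PAYLOAD or pkts_per_sec > MAX_SANE_RATE
```

A tunnel pushing MTU-sized payloads at high packet rates trips both checks; an ordinary ping trips neither.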

Out of curiosity, why would you drop echo requests or replies? I know the usual rationale is "so no one will know if the IP is in use or not," but that argument doesn't hold water. Scanning toolkits have been fast enough for the past decade to skip ICMP altogether; they know you're home even with ICMP blocked.

Edited for spelling

They said they only allow echo and reply, not that they drop them.

That's even worse. PMTU detection is OK but not perfect, and "fragmentation needed but DF bit set" ICMP messages are important.

Yes. In my opinion they should restrict the payload size of an ICMP message. Blocking all echo/reply can have adverse impact on other applications as well.

OK, I see the "length" extension in man iptables-extensions (Debian 8), so, for example, to drop pings with a packet size greater than 85 bytes:

    # iptables -A FORWARD -p icmp --icmp-type echo-request -m length --length 86:0xffff -j DROP
Still, until someone checks the code of this tool, or tries it in a working test environment, we won't know if the rule stops this tool.

Update: as for the number of packets, there is -m limit and other recipes.

If you're going to do that, set the maximum length to 128 bytes. Different ping tools use different-sized payloads; I know of some common ones that generate packets by default that would be blocked with that limit.

Also, instead of using the plain limit match, check out hashlimit. It can apply a rate limit on a per sender, destination, or sender+destination basis. The recent match may also be of interest.

A couple of million small packets in a short timeframe will still eat up your resources. If an application needs ICMP echo to pass transparently through your firewall, then you should probably review your need for that application; you're one step away from becoming a partner in someone else's amplification attack.

ICMP echo isn't amplification, as long as you don't respond to multicast/broadcast addresses. It's still 1:1 reflection, so you probably want to rate limit if it's simple (FreeBSD and Linux come out of the box with sane default rate limits).

It is amplification if you allow the packets through transparently because all the hosts behind your firewall will respond if you send an echo request to the broadcast address.

So you're going to have to do a little more configuration than just allowing a maximum packet size. If you're going to allow ICMP to transit at all, you should also limit the allowed set of addresses (you should do that regardless, but echo can be used for amplification by virtue of the broadcast feature of the IP protocol). Hence the 'one step away'.

This was known as the 'smurf' attack. Fortunately this is now mostly a thing of the past. But poking holes in your firewall for ICMP is a delicate affair.

Sorry, I had lost the context that it was a network firewall (not a host firewall), and missed the 'one step away' as well.

Ah ok. It was only a one line context switch so that was an easy mistake to make.

There are DNS tunneling apps which will (usually) get past those captive portals that block ICMP. It's just slower.

I have tested it; the internet speed difference is negligible. Yup, DNS tunneling apps can also be used.

Can someone explain to a non-network person the significance of being able to tunnel IP traffic over ICMP?

You can potentially bypass firewalls and prevent inspection of your traffic, depending on how the network is configured.
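Concretely, the tunnel works by stuffing each outgoing IP packet into the payload of an ICMP echo request, which most equipment treats as an innocuous ping. A minimal sketch of building such a packet (the kernel normally supplies the surrounding IP header when you send it over a raw socket):

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum, as used by the ICMP header."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total >> 16) + (total & 0xFFFF)
    return ~total & 0xFFFF

def icmp_echo_with_payload(payload: bytes, ident: int = 0x7454, seq: int = 1) -> bytes:
    """ICMP echo request (type 8, code 0) smuggling an arbitrary payload."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum = 0 first
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

The server side does the reverse: it receives the echo, extracts the payload, injects it into its own network, and returns the answer inside an echo reply.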

Some of the very early IP telephony apps (20 yrs ago) used the very same trick.

Has anyone tested this against the Chinese GFW and gotten it working?

Normal tunnels also work around the GFW, so there's no reason to do ICMP tunneling.

The GFW attacks most normal tunnels, causing unreliable performance.

