
IP traffic over ICMP tunneling - vampire_dk
https://github.com/DhavalKapil/icmptunnel
======
NetStrikeForce
SoftEther, a multi-platform, open-source VPN software package, supports ICMP and DNS
tunnelling among other things (SSL, OpenVPN, IPsec, etc.)

[http://www.softether.org](http://www.softether.org)

~~~
redwards510
After reading the product page, this product seems too good to be true (OSS,
supports every OS and every type of VPN). Is there some kind of catch? How
come I've never heard of it until now?

~~~
tw04
I know that among the pfSense crowd there was originally some suspicion that it
was backdoored by a government agency (not because there was any proof), simply
because it had such a great feature set for such a new project. Now that it's
open sourced, I'm interested to see if people pick it up. It definitely looks
legit.

~~~
Laforet
I have been using it since its early releases in 2003 and the feature set
really reflects the continuous development.

~~~
hrez
I hope you meant 2013. Looks like the earliest beta users started in 2012.

~~~
level3
Before SoftEther went open source, it was sold commercially as early as 2004.
It was renamed PacketIX VPN around 2005 and is still sold commercially under
that product name at
[https://www.softether.jp/1-product/11-vpn](https://www.softether.jp/1-product/11-vpn)
(it's up to v4.0).

I remember reading that there was a 10-year agreement that finally ended at
the start of 2014, allowing the original developer to go open source. The
source tree is different from PacketIX's though, so some of the features are
still not available yet in the open source version.

------
PinguTS
Not the first of its kind; just look it up on Wikipedia:
[https://en.wikipedia.org/wiki/ICMP_tunnel](https://en.wikipedia.org/wiki/ICMP_tunnel)

Most captive portals these days also block ICMP.

Most firewalls block ICMP these days as well, because the days of blacklisting
are over and ICMP is not the kind of traffic that gets whitelisted. Why would it
be?

The only approach that still works these days is to misuse DNS, but even that
is becoming less and less reliable.

~~~
vampire_dk
I agree that some captive portals/firewalls do block ICMP, but I've still seen
many in my country that don't.

~~~
dogma1138
Well, the question then is: what's the point, other than as a personal exercise?
There is plenty of ICMP/multi-protocol tunnelling software out there for both
Linux and Windows, and much of it doesn't require administrative privileges.

Also, ptunnel comes standard with some Linux distros these days (Ubuntu, and
probably most of its derivatives), and as far as raw performance goes ptunnel is
also the highest-performing one, capable of about 150 kbps, which isn't that bad
considering the sheer number of packets and the overhead you get.

[http://manpages.ubuntu.com/manpages/gutsy/man8/ptunnel.8.htm...](http://manpages.ubuntu.com/manpages/gutsy/man8/ptunnel.8.html)
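
For reference, a minimal ptunnel setup looks roughly like this (the hostnames are placeholders; see the man page above for the full option list):

    # On a machine outside the restricted network (needs root for raw ICMP):
    ptunnel

    # On the restricted client: forward local port 8000 over ICMP to that
    # proxy, which connects onward to an SSH server of your choosing:
    ptunnel -p proxy.example.com -lp 8000 -da shell.example.com -dp 22

    # Then use the tunnel:
    ssh -p 8000 localhost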

~~~
simoncion
> Well the question is then what's the point other than a personal exercise?

What's your question? Is it "What's the point of blocking ICMP?"? Or is it the
opposite question?

If it's the former, then there are sysadmins out there who cargo-cult their
network configuration and listen to folks like Gibson Research Corporation
who've been giving _really_ bad advice [0] for the past decade+.

[0] Specifically, they _strongly_ recommend dropping all traffic to ports that
don't have listening services, along with all ICMP, rather than rejecting said
traffic and allowing all non-problematic ICMP. They also have a "handy" tool
[1] to make it look like doing anything else is "DANGEROUS": (The tool reports
[2] if your site responds to ICMP echo requests.)

[1] [https://www.grc.com/shieldsup](https://www.grc.com/shieldsup)

[2] Ping Reply: RECEIVED (FAILED) — Your system REPLIED to our Ping (ICMP
Echo) requests, making it visible on the Internet. Most personal firewalls can
be configured to block, drop, and ignore such ping requests in order to better
hide systems from hackers. This is highly recommended since "Ping" is among
the oldest and most common methods used to locate systems prior to further
exploitation.
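
For what it's worth, "reject instead of drop, and allow non-problematic ICMP" can be as simple as something like this (an illustrative iptables sketch, not anything GRC or this tool recommends):

    # Allow the ICMP types that things actually rely on...
    iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
    iptables -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT  # incl. fragmentation-needed (PMTU)
    iptables -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
    # ...and reject (rather than silently drop) the rest.
    iptables -A INPUT -p icmp -j REJECT --reject-with icmp-admin-prohibited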

------
vesinisa
A few years back, I was assigned to work at a BigCorp's premises. They had
really tight network security: all outbound connections were blocked except
through a dedicated HTTP proxy. This was bad news, since stuff like SSH is
absolutely essential in my job.

After a few days of mobile tethering, I realized I could ask their HTTP proxy to
open an HTTPS connection to a server outside the network, but then send any
traffic I liked through it instead of HTTPS - like SSH. With this, I was
ultimately able to open an SSH tunnel to my own shell server running OpenVPN
outside their network, which then gave me surprisingly stable and fast access to
the wider internet, via an OpenVPN tunnel wrapped in an SSH tunnel pretending to
be an HTTPS tunnel.
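
Something along these lines reproduces the first hop of that setup, assuming the proxy allows CONNECT and OpenBSD netcat is available (the proxy name, ports and hosts here are all made up):

    # Ask the corporate proxy to CONNECT to the outside server, speak SSH
    # over that connection, and expose a local SOCKS proxy on port 1080:
    ssh -o ProxyCommand='nc -X connect -x proxy.corp.example:3128 %h %p' \
        -D 1080 -p 443 user@shell.example.com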

I don't recall whether ICMP was allowed out at the BigCorp, but I am pretty sure
someone will one day find a tool like this quite useful in a similar
situation... :)

~~~
Laforet
Sounds like they enabled HTTP CONNECT without limiting the accepted port
range, so any protocol will go through.

Maybe this is why Microsoft Azure never allowed ICMP traversal through their
outer firewall despite frequent requests from users...

~~~
andrewchambers
Blocking based on port is a silly thing. He probably just ran ssh on port 443
to get around it.

~~~
Laforet
That's another possibility, the comment above can be interpreted either way.

In any case they were probably running something like squid with very basic
level 7 filtering, so if something comes on 443 they have no option but to
forward it.

~~~
vesinisa
I think they allowed HTTP CONNECT to any port. Not that it would have been
hard to move OpenVPN or SSH to a different port.
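
If anyone wants to try it, moving the services is a one-line change on the server (a sketch only; the extra port and hostnames are just examples):

    # In /etc/ssh/sshd_config, listen on 443 in addition to the default 22:
    #     Port 22
    #     Port 443
    # Or in the OpenVPN server config:
    #     proto tcp
    #     port 443

    # Client side, after reloading the daemon:
    ssh -p 443 user@shell.example.com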

------
piyush8311
I just tried iodine and icmptunnel. I can't say for sure, but I think icmptunnel
was faster, at least on my connection.

~~~
vampire_dk
That's good news for me. :)

~~~
p4bl0
That is what I would expect to happen. The problem is that most captive
portals still let DNS through but not ICMP.

Having said that, I'm sure there are other uses for such a tool :).

------
redwards510
The documentation for this is superb! I know what it does, why I'd want to use
it, how to use it, where to use it, and what it looks like in Wireshark. Well
done!

~~~
vampire_dk
Thanks for the feedback :)

------
victorhooi
For anybody that's tried both - how do these compare to DNS tunnels (e.g.
iodine), in terms of speed and reliability?

~~~
dogma1138
DNS is less reliable, but it could give you bigger throughput (you can send and
request large records). ICMP packets will arrive quicker, will be more reliable,
and will bypass the various DNS hijackers (common with many ISPs) along the way.

DNS also requires you to have a DNS server and a domain, and you'll need
something to constantly clear the cache on the local machine, otherwise you'll
eventually run out of room even if you use the maximum available DNS record
size. If anything along the way keeps your DNS queries in cache, you might be
screwed and run out of space very quickly.

If you need internet access, an ICMP tunnel will be better: bandwidth will be
limited, but it will be more or less a point-to-point tunnel. If you need to
exfiltrate data without explicitly needing to maintain a bi-directional tunnel,
DNS is the way to go, and it will also work in more restrictive captive-portal
situations than ICMP. Today ICMP is usually blocked outright, while DNS
sometimes works, especially in the common case where the restricted network
offers some whitelisted sites (e.g. airport wifi that allows you to access the
airport's site and the local train service but blocks everything else).
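
For anyone who wants to compare the two approaches themselves, a basic iodine setup is roughly the following (domain, password and tunnel address are placeholders, and the domain needs an NS record delegating it to the server):

    # Server side, outside the restricted network:
    iodined -f -P secret 10.0.0.1 t.example.com

    # Client side, behind the captive portal / firewall:
    iodine -f -P secret t.example.com
    # Traffic can then be routed over the dns0 tun interface it creates.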

~~~
vampire_dk
Even with ICMP you can send/receive large messages. There is no restriction on
the maximum payload length.

~~~
vampire_dk
Normally, an Ethernet or Wi-Fi link restricts the MTU to 1500, so I size the
ping packets accordingly.
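
As a quick sanity check of that sizing (the host is a placeholder): with a 1500-byte MTU, the largest echo payload that fits in a single unfragmented packet is 1500 - 20 (IP header) - 8 (ICMP header) = 1472 bytes.

    ping -c 1 -M do -s 1472 shell.example.com   # fits in one frame
    ping -c 1 -M do -s 1473 shell.example.com   # too long once DF is set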

~~~
dogma1138
1500 is the MTU of Ethernet, but it's often not sustainable on an end-to-end
connection (especially once you add frame overheads). If you use that large a
payload size you'll get considerably worse performance than, say, limiting it to
around 500 bytes; you're welcome to try it. Also, with how global traffic is
managed, smaller packets tend to get priority since they can be queued quicker,
and backbone connectivity uses much bigger frames than Ethernet, so more often
than not generating more, smaller packets will increase your overall throughput
(up to a limit), unless you are on a very controlled network.

~~~
theptip
What do you mean by "it's often not sustainable"? Throughput on a server is
higher at higher packet size, so if you're doing a download I'd expect the
server to send 1500 byte packets. It's pretty easy to saturate a link with
1500 byte packets, and it's much harder to do so at lower packet sizes (from
the server's perspective) since the per-packet processing costs start to
dominate over the per-byte costs. Admittedly my knowledge of this sort of
stuff is mostly intra-DC; is there some other factor that you're referring to
that supersedes this on the web?

I'm not aware of prioritizing smaller packets on the backbone, sounds like
something that would be targeted at small flows (i.e. first N packets in a
flow get a priority bump)? More info on that would be appreciated.

~~~
Ded7xSEoPKYNsDd
I don't know about cable, but DSL usually uses PPPoE. It has an overhead of 8
bytes, lowering the MTU of those connections to 1492 bytes.

~~~
dogma1138
It doesn't matter; the MTU setting on your end is meaningless for WAN and
ISP/interlink grade networks, since they don't use Ethernet. The FDDI frame size
is 4500 bytes (ATM is about double that), minus whatever overhead, but usually
4200 and change. ISP/WAN routers don't care about how many Mbit/s they transfer
but about how many packets they route per given unit of time, and since packets
get packed into a single frame, the smaller the packet, the more packets they
can transfer per frame.

Also, from a more high-level point of view, if you think about it, the small
packets are the most critical ones, at least as far as responsiveness goes. DNS
is limited to 512 bytes over UDP, TCP 3-way handshake packets are tiny, and
those are the packets that need to get to and back from their destination as
fast as possible; delays in data transfers mean slower speeds, while delays in
handshakes mean that your application can fail or hang. Other important traffic
such as VoIP [0] also uses very small packet sizes for the same reason: most
critical services need to transfer very little data (per given unit of time) but
need to update as frequently as possible to provide the illusion of real time
and to mask the latency. The same goes for other things like online/multi-player
gaming, and so on. Pretty much, if you want your service to be as responsive as
possible, limit your packet size to the smallest size possible and increase your
PPS; this will ensure that your packets get to their destination quicker.

[0] VoIP packet sizes:
[http://www.cisco.com/c/en/us/support/docs/voice/voice-qualit...](http://www.cisco.com/c/en/us/support/docs/voice/voice-quality/7934-bwidth-consume.html)

The only time you would want to use large packets is pretty much when you can
have a buffer. That means you need to handle fewer packets per second, which
lowers CPU consumption (across the entire path), so video streaming and the like
can use pretty much as large an MTU as they want, unless they start getting
fragmented.

~~~
simoncion
> It doesn't matter the MTU setting on your end for WAN and ISP / interlink
> grade networks is meaningless [as] they don't use Ethernet...

It absolutely _does_ matter and is quite meaningful. :)

If you set your edge router's Internet-facing MTU to 9k, and the upstream
equipment's MTU is smaller than that, then either your packets will be
dropped, or PMTU Discovery will try to figure out the MTU of the path. (Better
hope _everyone_ along the path is correctly handling ICMP! :) )
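
If you want to see what PMTU discovery actually settles on for a given path, a couple of quick checks on Linux (the hostname is a placeholder):

    # tracepath reports the path MTU hop by hop, no root needed:
    tracepath shell.example.com

    # Or probe manually with DF set; oversized probes report the limiting MTU:
    ping -c 1 -M do -s 8972 shell.example.com   # 9000-byte frame minus headers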

> The only time you would want to use large packets is pretty much when you
> can have a buffer...

Or if you have high-volumes of data to move and want to dramatically increase
the data:Ethernet_frame_boilerplate ratio. :)

> Also ... if you think about it the small packets are the most critical
> ones... [because they need to be dispatched as quickly as possible.]

Yes, but a larger MTU shouldn't affect this. Set whatever socket options are
required to get those packets on their way as soon as they're created, and
your system _shouldn't_ wait to fill an Ethernet frame before sending that
packet.

~~~
dogma1138
Poor choice of words on my part. If you configure jumbo frames on your uplink
you are going to kill your network stack; if you limit it too much you'll have
huge overhead. The point is that for transferring data, especially when
responsiveness is important if not paramount, using the maximum frame size you
can push without fragmentation will generally yield a poorer result in
real-world applications.

~~~
simoncion
> [I]f you configure [jumbo] frames on your uplink you are going to kill your
> network stack...

I can't agree with that statement. If upstream devices support a
larger-than-1500-byte MTU, _OR_ PMTU discovery works correctly, then you are
absolutely _not_ going to "kill your network stack". At worst (in the PMTU
discovery phase) you'll see poor performance for a few moments while the MTU for
the path is worked out, and then nothing but smooth sailing from then on.

> The point being is that for transferring data especially when responsiveness
> is important if not paramount utilizing the maximum potential frame size you
> can push without fragmentation would generally yield a poorer result in real
> world applications.

I'm not sure what you're saying here. Are you saying:

"If you configure your networking equipment to _always_ wait to fill up a full
L2 frame before sending it off, you'll harm perf on latency-sensitive
applications."?

If you're not, would you be so kind as to rephrase your statement? I may be
particularly dense today. :)

However, if you are, then that statement is _pretty_ obvious. I expect that
few people configure their networks to do that. However, I don't see what that
has to do with the link's MTU. Just because you have a 9k+ MTU, doesn't mean
that you _have_ to transmit 9k of data at a time. :)

------
xpinguin
I remember postponing my payments to the ISP (which didn't block ICMP for
anybody) and getting by with ptunnel:
[http://www.cs.uit.no/~daniels/PingTunnel/](http://www.cs.uit.no/~daniels/PingTunnel/)

It was pretty much usable circa 2008...

By the way, in Debian (and probably its derivatives), it is just an apt-get away
from being installed.

------
matiasb
Hans is a nice one too:
[http://code.gerade.org/hans/](http://code.gerade.org/hans/)

------
fl0m
Using ICMP reply only, on both sides, is more convenient than ICMP
request/reply. In that case you do not need to do something like this:

    echo 1 | dd of=/proc/sys/net/ipv4/icmp_echo_ignore_all

------
txutxu
I usually restrict ICMP to echo/reply using -m icmp in iptables, but this uses
exactly that kind of packet...

Is there any way to stop things like this at the corporate firewall?

~~~
vampire_dk
Yes. In my opinion they should restrict the payload size of ICMP messages.
Blocking all echo/reply can have an adverse impact on other applications as well.

~~~
txutxu
OK, I see "length" extension in man iptables-extensions (Debian 8), so for
example, to drop pings with a packet size greater than 85 bytes:

    
    
        # iptables -A FORWARD -p icmp --icmp-type echo-request -m length --length 86:0xffff -j DROP
    

Still, until someone checks the code of this tool, or sets up a working test
environment, we won't know whether the rule actually stops it.

Update: as for the number of packets, there is -m limit and other recipes.

~~~
ryan-c
If you're going to do that, set the maximum length to 128 bytes instead.
Different ping tools use different-sized payloads, and I know of some common
ones whose default packets would be blocked by that limit.

Also, instead of using the plain limit match, check out hashlimit. It can
apply a rate limit on a per sender, destination, or sender+destination basis.
The recent match may also be of interest.
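
For what it's worth, a hashlimit rule along those lines might look like this (the name and rate are made up, not a recommendation):

    # Rate-limit echo-requests per source+destination pair, drop the excess:
    iptables -A FORWARD -p icmp --icmp-type echo-request \
        -m hashlimit --hashlimit-name icmp-echo \
        --hashlimit-mode srcip,dstip \
        --hashlimit-above 10/second -j DROP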

------
de_wq912AesppE5
There are DNS tunneling apps which will (usually) get past those captive
portals that block ICMP. It's just slower.

~~~
vampire_dk
I have tested it; the difference in internet speed is negligible. Yup, DNS
tunneling apps can also be used.

------
callumlocke
Can someone explain to a non-network person the significance of being able to
tunnel IP traffic over ICMP?

~~~
thebakeshow
You can potentially bypass firewalls and prevent inspection of your traffic,
depending on how the network is configured.

------
Sami_Lehtinen
Some of the very early IP telephony apps (20 yrs ago) used the very same
trick.

------
xbeta
Anyone test this against China GFW and get it working?

~~~
legulere
Normal tunnels also work around the GFW, so there's no reason to do ICMP
tunneling.

~~~
thaumasiotes
The GFW attacks most normal tunnels, causing unreliable performance.

