
VMWare Fusion IPv6 NAT Black Holes - l1n
https://rachelbythebay.com/w/2016/03/27/wonky/
======
readams
This is a bit of a shot in the dark but my guess here is that they're doing
this because their stack is not able to properly deal with ICMPv6 packets on
the return path. In ICMPv6 for some reason they designers saw fit to add the
IP header information into the ICMP checksum so that if you're doing a NAT or
other rewrite then you need to recompute the checksum for the ICMP packet, and
if it's an error packet you need to do this for the inner packet as well.
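For illustration, a rough sketch of why a rewrite forces a recompute: the ICMPv6 checksum is calculated over a pseudo-header that includes the IPv6 source and destination addresses (RFC 4443 / RFC 8200 section 8.1), so changing either address invalidates it. The addresses below are documentation-prefix examples, not anything specific to Fusion:

```python
import ipaddress
import struct

def checksum16(data: bytes) -> int:
    # Standard Internet one's-complement checksum over 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmpv6_checksum(src: str, dst: str, icmp: bytes) -> int:
    # Pseudo-header: src, dst, upper-layer length, zeros, next
    # header (58 = ICMPv6), per RFC 8200 section 8.1.
    pseudo = (ipaddress.IPv6Address(src).packed
              + ipaddress.IPv6Address(dst).packed
              + struct.pack("!I", len(icmp))
              + b"\x00\x00\x00\x3a")
    return checksum16(pseudo + icmp)

# An echo request with its checksum field zeroed out:
msg = b"\x80\x00\x00\x00\x00\x01\x00\x01ping"
before = icmpv6_checksum("2001:db8::1", "2001:db8::2", msg)
after = icmpv6_checksum("2001:db8::aaaa", "2001:db8::2", msg)  # NATed source
assert before != after  # the translator must recompute the checksum
```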

It seems plausible that their network stack wasn't up to the task of handling
this, so they jury-rigged this sort of odd connection forwarding.

That's the only thing that I can think of here as otherwise there's just no
planet where this makes any sense. That said, NAT for IPv6 is a generally
problematic concept and they probably were flying a bit blind on how to
implement it since there's no real standard way to do this. IPv6 was really
designed around the idea that every endpoint would have a unique, globally
routable address.

------
0x0
Did they lay off the whole team to the point where they can't even push
updates? Their second-to-last blog post is about a hotfix that they haven't
released as a proper 8.1.1 patch in several months; you have to download a
random file from their blog and manually patch it in via the
terminal...?!

[http://blogs.vmware.com/teamfusion/2016/01/workaround-of-nat...](http://blogs.vmware.com/teamfusion/2016/01/workaround-of-nat-port-forwarding-issue-in-fusion-8-1.html)

~~~
matheweis
Presumably yes - people that worked there described it as: "VMware has decided
to lay off the entire Fusion and Workstation teams."

[https://twitter.com/jdotk/status/691771635771244545](https://twitter.com/jdotk/status/691771635771244545)

They haven't released any patches since... :(

~~~
nailer
If you're on OS X, Veertu uses the built-in OS X hypervisor and has _waay_
less interactive latency than Fusion did, seemingly as a result. It's also
significantly cheaper.

~~~
toyg
My problem with Veertu is the same as with Hyper-V: it's not cross-platform.
With all its warts, VMW offered the best turnkey solution for small teams
occasionally sharing VMs in offline contexts.

------
rleigh
It's not just NAT that's broken. On both Windows and Linux hosts, with bridged
networking, SLAAC doesn't work for Linux or FreeBSD guest systems. It does
_eventually_, after somewhere between 5 and 30 minutes, but for machines on
the physical LAN it's virtually instantaneous. Something is dropping the
router advertisements, but eventually one gets through. Once the guest has an
address, it then works just fine.

Not so great when all the systems you want to talk to are v6 only, and the v4
NAT address is just for legacy use.

------
wtallis
I totally understand that the observed behavior may not be what was intended,
but there's clearly some complexity of the sort that doesn't happen by
accident. What was VMWare _trying_ to do, and which parts of this mess were
unintentional? Is this an experimental feature that was correctly disabled for
IPv4 but accidentally left on for IPv6, or was it intended to be released and
on for both?

~~~
msbarnett
> What was VMWare trying to do, and which parts of this mess were
> unintentional?

It appears that they were trying to build an ad-hoc connection-forwarding
faux-NAT for the guest's IPv6 connection to the host's, to approximately
mirror the NAT they can do for IPv4.

IPv6 really doesn't want to be NATed, though, and they've done a poor
approximation of it.

------
api
Why implement NAT for IPv6 at all?

~~~
rachelbythebay
I wish I didn't have to use it. Bridging the Linux VM means it escapes from
the Mac-level VPN which I need to get actual work done. I'd have to tunnel in
from both systems and would probably set off some alarms for being in two
places at once. Ugh.

And yeah, as mentioned elsewhere, bridging onto a wireless situation is even
worse.

~~~
DanielDent
People are going to have to wrap their head around the brave new v6 world.

I get "logged in from a new IP address!" alerts with a service I use almost
every time I log in, even though my IPv6 prefix hasn't changed. Deciding it's
a completely new IP just because something changed in the last 64 bits is
probably a bad idea in a v6 world.
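One way to cut that noise (a sketch, not how any particular service does it; the right prefix length varies by deployment) is to compare only the routing prefix before deciding an address is "new":

```python
import ipaddress

def same_v6_prefix(known: str, seen: str, prefixlen: int = 64) -> bool:
    # RFC 4941 privacy addresses rotate the low 64 bits frequently,
    # so only the routing prefix is a stable signal here.
    net = ipaddress.IPv6Network(f"{known}/{prefixlen}", strict=False)
    return ipaddress.IPv6Address(seen) in net

print(same_v6_prefix("2001:db8:1:2::aaaa", "2001:db8:1:2:f00:ba4:1:9"))  # True
print(same_v6_prefix("2001:db8:1:2::aaaa", "2001:db8:9:9::1"))           # False
```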

Devices being multi-homed is intended to be standard practice.

My iPhone regularly has multiple IPv6 addresses, with different reachability
characteristics for different addresses. There's an address used by my carrier
for voice, there are addresses I locally administer, there are addresses from
a stable prefix I use, addresses from dynamic prefixes provided by my
upstreams, ...

The v6 world is a world where many devices have many addresses and addresses
do not all have the same scope.

Application developers are going to need to get used to this new normal.

An application I manage has ~40% of users accessing it over IPv6, most of
which would have a degraded experience if we didn't offer v6 connectivity.

IPv6 is here, it's here to stay, and applications are going to need to
understand the new world they live in.

~~~
mirimir
Yes. And it's not just multiple IPv6 addresses. I don't know iOS, but Windows,
OS X and Debian all use only "privacy-friendly" (RFC 4941) IPv6 addresses when
talking to remote devices. And those change frequently. That gets to be a pain
when you're pushing static routes. NAT was so easy.

~~~
DanielDent
I'm not sure I understand what your use case is for pushing static routes &
would be interested to understand what you mean.

I've been trying a few different approaches to routing. Putting link-local
addresses in routing tables has worked well in some deployments.

Debian & OS X use MAC-based addresses in addition to their privacy addresses.
[https://www.danieldent.com/blog/remote-ipv6-device-fingerpri...](https://www.danieldent.com/blog/remote-ipv6-device-fingerprinting/)

~~~
mirimir
I'm pushing static routes over OpenVPN tap to get IPv6 assigned to remote
LANs. In my (very limited) experience, MAC-based IPv6 addresses don't reach
the Internet ([http://test-ipv6.com](http://test-ipv6.com) for example). However,
they do get revealed via WebRTC in Firefox (default install). IE and Safari
block WebRTC by default.

~~~
DanielDent
My testing has shown they are accessible over the internet :(.

They are not marked as 'preferred' and won't be used by default. But they are
still available for use if someone goes out of their way to do so.

~~~
mirimir
Thanks. I was going from [http://test-ipv6.com/](http://test-ipv6.com/).
Unless the "privacy" address is routed, it reports no IPv6 connectivity. I'm
guessing that ping6 should find them, right?

------
newman314
I thought this was posted not that long ago.

But in any case, I was wondering if this had anything to do with happy
eyeballs but did not hear any further input.

EDIT: Upon rereading, this is the followup post.

------
chris_wot
Yeah, it's very unlikely this will be resolved given the team who developed
Fusion was retrenched.

VMWare are no longer, in my view, a particularly innovative company.

------
caf
This sort of thing isn't all that uncommon - enterprise "network optimiser"
devices like [http://www.riverbed.com/](http://www.riverbed.com/) work in this
way too. Hopefully not buggy, though.

------
majke
I can definitely say that for IPv4, VMWare Fusion NAT does not forward inbound
ICMP path MTU messages. For IPv4, VMWare Fusion hosts are a black hole.

~~~
rampole
Yep, I was hit by the same thing - downloading the Homebrew installation
script from GitHub to an OS X guest hangs; once I decreased the MTU from 1500
to ~1450 it worked better.

