
Why removing the TCP pseudo-header doesn't help mobility - signa11
http://pouzinsociety.org/node/47
======
mjevans
"TCP+IP should have been split differently"

After reading I agree with some of the divisions that occur, but think that if
more layer splitting happens it will be laterally (with defined interfaces in
a 'meta' protocol layer).

* IP shouldn't fragment
* Connections should only exist in higher layers

Connections probably should be divided into different logical parts, but
interact as closely related components of a validated and reliable "pipe".
Supporting mobile endpoints (in my mind) clearly means dropping the implicit
authentication of an interface address and instead using a stronger
cryptographic one. That's also why the connection layer needs to remain
unified, even if the components within it are distinct: attacks on the
connection are equivalent to packets that failed checksum.

~~~
pulsarpietro
Due to my lack of personal knowledge, it's not quite clear to me what the
problem is with mobile devices. It seems to be the fact that they very often
change IP address.

Am I right? Is there any resource you would recommend to help me make up my
mind?

~~~
mjevans
That's part of the problem. When the Internet was created computers had
finally progressed to being /always on/ and /always connected/; it was
possible to relay data (slowly) among these mainframes/big iron machines and
have useful and expedient exchange with experts working at other institutions.

That carried forward until the dialup era. At that time the majority of users
shifted to being /clients/ that intermittently connected to servers to check
their inboxes and sometimes to remote services to request static public
content. However, they were still in a binary state of online or offline, and
losing the connection was a hard change.

Mobile devices lose connection for other reasons, mostly the fact that they
are wireless. They have limited power, speak across a noisy and highly
contended medium, and also move around. They are able to poll (somewhat
infrequently) but are not always connected. They are in the uncanny valley as
hosts; not disconnected enough to warrant a big change of state but also not
connected enough to rely on two way communication.

They also happen to be extremely prolific. If not making up a majority of
devices that happen to use some services, they are clearly an extremely large
market segment. While I find it impossible to create in-depth content on them,
some rely upon them as their only computer.

A protocol that is suitable for mobile connections would also alleviate some
annoying failure modes caused by ISP maintenance and DHCP lease expiration.

------
zamadatix
"In addition, TCP supports “fan-in”. This allows multiple connections to the
same well-known port at the same time. ... This creates a security problem,
because the server must rely on the source IP address, and source port-id
(values that it did not create) to distinguish this connection. This is the
source of many spoofing attacks."

I really don't see how "source port" is relevant in this section; the issues
are all about source IP plus the lack of source verification. Using node IDs
without source verification results in the same issue, source port or not.

Most of the other points I agree with but they are written with an agenda,
well summed up in: "The reader should be getting a sense of how what may have
seemed like a good idea at the time is proving to be a cascading series of
patches that increase complexity and generally introduce new problems."

As a network guy I'll say it's much easier to look back at the last 40 years
with the view of how all systems are ethernet + IP + TCP/UDP today and say
"well obviously this is dumb". The same would probably be said for RINA in 40
years if it took off (haven't read through how it works so I can't judge it).
The point is you can't skip over the middle 40 years of how it should have been
done and offer a high-level "just do <x> technology direction instead" as
evidence that it was done wrong. You have to go back and describe the full
alternative and how it would make sense each step of the way (or at least how
the full alternative could be implemented today). If the idea is "we think we
could do better" then the article should have been written in the form of
discovery of what's broken today.

Could there be a few good improvements to how networking is done now that we
are a bit further along in both knowledge and technology? Sure, plenty, but
I've yet to see one that covers both all use cases, is actually possible to
implement in hardware, and can be decentralized.

~~~
toast0
I'm dealing with some of the 30+ year old broken stuff today, and I don't
think TCP / IP is that bad.

That there are some holes in the spec isn't really so bad when they didn't
become readily apparent for 20+ years; many things are fixable with changes on
just one end of the connection too (if you skip some 'required' sends, you can
avoid ack storms, etc).

IP Fragmentation is pretty awful. I wouldn't say it never worked, because I
see some legitimate traffic using it. But with decades of hindsight, it was
never a better solution than breaking up data into right sized packets, and it
would probably have been better for routers to truncate packets and forward
rather than fragment or drop.

IP could use the 16-bit ID field as a 16-bit original length: if you need to
truncate a packet, change the total length, leave the original length alone,
adjust the IP and TCP/UDP checksums (sorry, other protocols; maybe IP should
have checksummed the whole packet or owned the data checksum), and send it on
its way.
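The truncate-and-annotate idea above can be sketched as follows. This is a
hypothetical illustration: reusing the Identification field as an
original-length field is the proposal here, not anything in the IP spec, and
the sketch only recomputes the IP header checksum, leaving the TCP/UDP
checksum adjustment aside.

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """Internet checksum: one's-complement of the one's-complement sum of
    16-bit words (caller zeroes the checksum field before computing)."""
    total = 0
    for (word,) in struct.iter_unpack("!H", header):
        total += word
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def truncate_packet(packet: bytes, new_total: int) -> bytes:
    """Truncate an IPv4 packet to new_total bytes, recording the original
    Total Length in the 16-bit Identification field (hypothetical reuse)."""
    ihl = (packet[0] & 0x0F) * 4                   # header length in bytes
    orig_total = struct.unpack_from("!H", packet, 2)[0]
    hdr = bytearray(packet[:ihl])
    struct.pack_into("!H", hdr, 2, new_total)      # new Total Length
    struct.pack_into("!H", hdr, 4, orig_total)     # ID field := original length
    struct.pack_into("!H", hdr, 10, 0)             # zero checksum field
    struct.pack_into("!H", hdr, 10, ipv4_checksum(bytes(hdr)))
    return bytes(hdr) + packet[ihl:new_total]
```

A receiver that understands the convention can compare the ID field against
Total Length to see how much data the path shaved off.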

For TCP, the acking packets could use the mss option to indicate the new
effective MTU (under the current spec, MSS is only valid on SYNs, but let's
pretend), providing an _in band_ way to notify the sender, since clearly the
out of band methods haven't worked out so well.
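As a sketch of what that in-band signal might look like: the MSS option
encoding below (kind 2, length 4) is the standard one from the TCP spec; the
only hypothetical part is allowing it on ACKs rather than just SYNs.

```python
import struct

TCPOPT_MSS = 2  # TCP option kind for Maximum Segment Size

def mss_option(path_mtu: int, ip_hdr: int = 20, tcp_hdr: int = 20) -> bytes:
    """Encode a TCP MSS option advertising path_mtu minus header overhead.
    Under the hypothetical relaxation above, a receiver that saw truncation
    could echo the new effective MTU this way on an ACK."""
    return struct.pack("!BBH", TCPOPT_MSS, 4, path_mtu - ip_hdr - tcp_hdr)
```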

For UDP, the peers can figure it out themselves. If individual datagrams must
be larger than the path MTU, you have to frame it yourself, but you have more
options than what IP provides.
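A minimal sketch of do-it-yourself framing over UDP, assuming the application
picks a chunk size under the path MTU; the 4-byte (index, total) header layout
is invented for illustration, not any standard.

```python
import struct

def frame(payload: bytes, chunk: int) -> list[bytes]:
    """Split payload into chunks, each prefixed with (index, total) so the
    receiver can reassemble regardless of arrival order."""
    pieces = [payload[i:i + chunk] for i in range(0, len(payload), chunk)] or [b""]
    total = len(pieces)
    return [struct.pack("!HH", i, total) + p for i, p in enumerate(pieces)]

def reassemble(frames: list[bytes]) -> bytes:
    """Stitch frames back together by index (assumes all frames arrived)."""
    parts = {}
    total = 0
    for f in frames:
        i, total = struct.unpack_from("!HH", f)
        parts[i] = f[4:]
    return b"".join(parts[i] for i in range(total))
```

Unlike IP fragmentation, the application can retransmit or resize individual
chunks, which is the extra flexibility the comment is pointing at.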

OTOH, I bet nobody expected janky path MTUs to be so prevalent.

~~~
bogomipz
>"I'm dealing with some of the 30+ year old broken stuff today..."

I am curious which behaviors you are referring to. Can you elaborate?

>"IP Fragmentation is pretty awful. I wouldn't say it never worked, because I
see some legitimate traffic using it."

I am curious what types of traffic you have that are doing IP fragmentation
regularly?

~~~
toast0
I'm trying to get data on clients with path MTU problems, with an eye toward
fixing it through statistics; assuming I can only intervene on the server
side, because the clients are on Android, so path mtu blackhole detection is
there, but disabled (thanks Google), and you can't set any useful socket
options from Java anyway.

I work for a messaging service, but won't tell you which one.

I first saw IP fragments when some nice people were doing a chargen reflection
DDoS against our www servers, but the reflector hosts were sending fragmented
UDP responses, and also filtering the first fragment out (probably because the
first fragment has the port number, and chargen is clearly bad, but their
firewall didn't think to drop or reassemble fragments). Since fragment
reassembly was a big performance problem, I thought maybe we could just
disable it, but actually we see a trickle of fragments on legitimate
connections. I didn't track it further than we can't turn it off, but we can
set the reassembly buffer to be incredibly small, which was enough to weather
the storm (and knowing we could turn it off in case of high fragmentation rate
was nice too).
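A sketch of the knob being described, assuming a Linux host: the IPv4
fragment-reassembly memory budget lives in the sysctls below (the paths are
the standard Linux ones; the values are just an illustrative "incredibly
small" setting, not the ones actually used).

```python
# Linux sysctls capping IPv4 fragment-reassembly memory; fragments beyond
# the budget are dropped. Writing these files requires root.
REASSEMBLY_LIMITS = {
    "/proc/sys/net/ipv4/ipfrag_high_thresh": "65536",  # drop above ~64 KiB
    "/proc/sys/net/ipv4/ipfrag_low_thresh": "32768",   # low-water mark
}

def apply_limits(limits=REASSEMBLY_LIMITS):
    for path, value in limits.items():
        with open(path, "w") as f:
            f.write(value)
```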

~~~
bogomipz
Thanks for the response. Interesting, do you mean Android has PMTU detection
disabled in that Google is filtering "fragmentation needed but DF bit set"
ICMP replies? That would be very surprising for Google, I would think. Or are
you referring to something else?

~~~
toast0
Android either doesn't enable PMTU detection, or it's only enabled in recent
versions, or it's configured such that it's not effective. I've seen pcaps
with iOS devices doing it though.

I don't think Google would intentionally filter out the ICMP fragmentation
needed packets, but there are lots of problems actually getting them, between
routers that drop packets without sending them, firewalls that drop all ICMP,
and NAT devices that don't know how to direct them to the underlying client.

Edit: sorry, I really mean PMTU probing. If you get a connection but no acks
when you send a large packet, then after a significant timeout it's worth
trying smaller packets, even if you didn't get an ICMP that says you should.
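That probing idea can be sketched as a walk down candidate sizes; the list of
common MTU plateaus and the `ack_received` callback (send a probe of a given
size, report whether an ack arrived before the timeout) are both hypothetical
illustrations, not anything from the thread.

```python
# Common MTU plateaus seen in practice (illustrative values).
PROBE_SIZES = [1500, 1492, 1400, 1280, 576]

def probe_path_mtu(ack_received, sizes=PROBE_SIZES) -> int:
    """Try progressively smaller probe sizes until one is acknowledged,
    without waiting for any ICMP "fragmentation needed" hint."""
    for size in sizes:
        if ack_received(size):
            return size
    raise RuntimeError("no probe size was acknowledged")
```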

~~~
bogomipz
Ah OK, interesting that this behavior exists in iOS as well. Thanks for the
clarification. Cheers.

