

Let's not use HTTP over TCP for mobile apps anymore - chetanahuja
https://packetzoom.com/blog/

======
floatboth
IETF is working on better stuff – and it will be _standard_.

RFC 7413 TCP Fast Open, already deployed on Android and Chrome OS.

RFC 6824 Multipath TCP, already deployed on iOS.

And the most important upgrade: HTTP/2.

Sorry, I'm not using commercial proxies with proprietary protocols. I believe
that anything as low level as transport layer shouldn't have a "pricing" link.

P.S. QUIC is an interesting experiment, maybe it will become a standard like
SPDY became HTTP/2. But UDP can't always be used... My university's WiFi just
blocks all UDP completely, for example.

~~~
chetanahuja
Just to get the pricing thing out of the way, the pricing is not to license a
protocol but to actually distribute app content -- from our servers worldwide.
And our current customers are finding that it's pretty comparable to what they
may already be paying other providers for such a service.

Agreed that the standards will get there at some point. And it'll be a joyous
day for all. But I'll have to humbly disagree with you that TCP Fast Open or
Multipath or HTTP/2 provide the answer for the mobile app use case today.

TFO will save you one round-trip for the handshake... but that's about it.

Multipath is only available for Apple's own use the last I heard so it's not
really a factor... but even if it does become available, it's still TCP.

And in both cases, TCP is still hampered by the usual problems outlined in the
blogpost. No concept of a session, slow-start, exponential backoff,
unrealistically long timeouts etc. etc.
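The exponential-backoff complaint above is easy to make concrete. Here's a small sketch of classic TCP-style retransmission timer doubling (in the spirit of RFC 6298); the initial RTO, cap, and attempt count are illustrative values, not taken from any particular stack:

```python
# Sketch of TCP-style retransmission backoff: after each failed
# retransmit the retransmission timeout (RTO) doubles, so a few
# consecutive losses on a flaky mobile link can stall the connection
# for many seconds before the sender tries again.

def rto_schedule(initial_rto=1.0, max_rto=60.0, attempts=6):
    """Return the timeout (seconds) waited before each retransmission."""
    timeouts = []
    rto = initial_rto
    for _ in range(attempts):
        timeouts.append(rto)
        rto = min(rto * 2, max_rto)  # exponential backoff, capped
    return timeouts

waits = rto_schedule()
print(waits)       # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
print(sum(waits))  # 63.0 seconds of dead air across 6 straight losses
```

Six consecutive losses, which are not rare on a congested cell link, add up to over a minute of waiting, which is the "unrealistically long timeouts" point.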

I guess the meta point here is that TCP is designed for the general case and
it's great for what it is meant for. But by narrowing the use-case to native
mobile apps, one can constrain the problem domain and gain more freedom in
what you can design.

------
Synchro
It all sounds very interesting, however, the tone of the web site comes across
like some crappy download accelerator. When you follow the "nerdy details"
link you don't get anything meaningful, just bland assurances and pretty
animations, i.e. entirely non-nerdy. Nowhere do I see basic technical info
like "It's a new IP transport protocol used in place of TCP", or "It's
compatible with existing networks because...", or "It's (better than|different
to) (HTTP/2.0|QUIC|SPDY) because...".

The discussion here, while short, has been much more informative than the site
itself. I don't see any good reason to dumb-down the site like that - you're
selling a protocol layer, so you're not going to have any non-nerdy customers!

Why are resumable downloads done in that layer? I have no trouble with
downloads resuming across network switches when handled by higher-level
protocols like chunked HTTP or even FTP.
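To make the point about higher-level resume concrete: plain HTTP/1.1 already supports this via `Range` requests (RFC 7233), where the client asks for the missing bytes and the server replies `206 Partial Content`. A minimal sketch, with no real network I/O, just the header a client would send after a dropped connection:

```python
# Resuming a download at the HTTP layer: ask only for the bytes we
# don't have yet. A server that supports range requests responds with
# 206 Partial Content starting at that offset.

def resume_request_headers(bytes_already_downloaded):
    """Headers for resuming a download from a given byte offset."""
    return {"Range": "bytes=%d-" % bytes_already_downloaded}

# After fetching 1 MiB of a file and then switching from WiFi to LTE,
# the client simply re-issues the request from where it left off:
print(resume_request_headers(1048576))  # {'Range': 'bytes=1048576-'}
```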

Similarly, why would you even think about bundling multiple resources in one
transfer at that level? Surely that's a higher level thing (like HTTP/2.0),
and thus independent of transport? There appears to be a lot of layer-leakage
going on - not that it's necessarily bad, but the reasons for it need to be
clearer.

~~~
chetanahuja
_"It all sounds very interesting"_

Thanks. I'll take this positive beginning at face value :-)

_"Nowhere do I see basic technical info like 'It's a new IP transport
protocol used in place of TCP', or 'It's compatible with existing networks
because...', or 'It's (better than|different to) (HTTP/2.0|QUIC|SPDY)
because...'"_

I'm sorry if the material on the site is not technical enough for your taste.
We've just launched and consider the site and the blog a work in progress.
(Follow us on twitter at @packetzoom and me personally at @IAmChetanAhuja for
updates).

Having said that, our customers are not necessarily people following the
network protocol worlds closely. The stuff that you find "bland" and "pretty"
is actually very helpful for a large portion of our target audience.

Now of course, those who click on the "Learn" link and don't find enough in-
depth info are welcome to, and do, contact us directly. Every single mail we
get on info@ or support@ addresses is read by a human in our (rather small)
team and yes, we respond. Just as I'm responding to more detailed technical
questions here.

_"You're selling a protocol layer, so you're not going to have any non-nerdy
customers!"_

No, we're not selling "a protocol layer". We're selling a drop-dead simple way
to speed up downloads. Some of our customers are more interested in details of
the protocol than others. But the majority of them just want their apps to be
fast and couldn't care less about QUIC/SPDY/HTTP2.0 or whatever, and don't
even want to know. Their questions are more like: How easy is it to integrate?
(Very.) Do I have to rewrite my code? (No.) What do I have to change on the
server side? (Nothing.)

And those of our customers who do want to go in-depth into details of the
technology, do. They just talk to us and we explain what we're doing to the
extent it doesn't reveal trade secrets.

_"a lot of layer-leakage going on - not that it's necessarily bad, but the
reasons for it need to be clearer"_

This seems to be the crux of your issues with what we're doing. The reasons
are ultra clear to us: apps are faster and users are happier. That's the only
thing that matters.

We've found that merging layers allows us to handle, say, out-of-order
delivery or dropped packets more smartly and more naturally. For far too long,
user experience has been held hostage to diktats from the ivory tower. It's
about time we start thinking from the ground up to create technology suited to
what users need, not to maintain some platonic ideal of "layer separation".

------
hennaheto
I have seen a demo and there was a huge speed increase (the app loaded content
much quicker) with their technology. Is it the perfect solution? I have no
idea, I am not a network person. But if you are looking to speed up your app I
would check it out.

My big question is price, do I need to be a big company to afford this? It
seems like it could get expensive.

Yes there are other people (as floatboth and jhugg pointed out) working to
solve the same problem, but that is awesome! I want some competition,
companies pushing the limits, standards organizations making it work
everywhere and academics researching it. Who knows, maybe one day my phone
will load content so fast that I believe ATT really does have 4g in my area.

~~~
chetanahuja
_"Who knows maybe one day my phone will load content so fast that I believe
ATT really does have 4g in my area"_

Heh... let's start with acknowledging a pretty good burn right there.

And thanks for the vote of confidence. Much appreciated.

As for price, we're working with a whole bunch of customers right now who've
been using various solutions for serving content from the cloud (aws,
cloudfront, CDN's... what have you), and they find the price to be comparable
to what they're already paying. In addition, as users find the app experience
more pleasant, engagement goes up.

Note that as an app starts using our system for content delivery, our
caching proxy means that other backend bandwidth bills drop off. So
overall, most customers find us very affordable for the value they get.

------
mnemonicfx
So, if I understood correctly:

If I use an HTTP 1.0/1.1 with PacketZoom, I'll be communicating on the
internet with HTTP 1.0/1.1 over PacketZoom?

What if I want to communicate on the internet with other protocols such as
WebSocket, SPDY, or HTTP 2.0? Does it work the same way?

~~~
chetanahuja
Actually, the way it goes is: the application code makes HTTP calls through
common libraries, and the PacketZoom framework, if linked into the app, can
intercept those requests and reroute them to our proxy over PacketZoom's own
protocol. So we're not intercepting any particular wire protocol as such...
the interception happens at a higher level, and your existing stack shouldn't
matter.
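The interception idea described here can be sketched in a few lines. This is purely illustrative, not PacketZoom's actual API: the app keeps calling an ordinary `http_get()`, and if an accelerator SDK has installed itself, the call is rerouted through its own transport; otherwise it falls through to the normal stack.

```python
# Hypothetical sketch of SDK-style request interception. All names
# here are made up for illustration.

_interceptor = None  # set by the SDK at startup, if present

def install_interceptor(fn):
    """Called by the SDK when it's linked into the app."""
    global _interceptor
    _interceptor = fn

def http_get(url):
    """The app's usual HTTP call; unaware of how it's transported."""
    if _interceptor is not None:
        return _interceptor(url)  # rerouted, e.g. over a UDP proxy
    return "fetched %s via plain TCP/HTTP" % url

def sdk_proxy_fetch(url):
    return "fetched %s via accelerator proxy" % url

print(http_get("https://example.com/a.png"))  # plain path
install_interceptor(sdk_proxy_fetch)
print(http_get("https://example.com/a.png"))  # intercepted path
```

The point is that the app code and the server don't change; only the transport underneath the HTTP library call does.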

------
shauncrampton
Does it do anything to solve router congestion when it does happen and how can
it tell the difference? Cell towers are often bandwidth limited on their
uplinks; is this protocol going to out-compete TCP for bandwidth and slow
everyone else down?

~~~
chetanahuja
Currently we're focusing on the download portion of the transfer so I'd
refrain from commenting on the uplink protocol for the time being.

But the general question about router congestion handling is a valid one. And
the answer is no, we're not just flooding the network with a blast of packets.
Apart from anything else, that strategy would wreck throughput anyway, since a
busy router will likely drop packets from the UDP queue long before it loses
anything from the TCP queues.

So we rely on precise knowledge of network conditions at a fine grained level
(the grain gets finer as we add more users to our network) to utilize the
available bandwidth. We also use other techniques to predict queue congestion
on the network path _before_ it gets to the level of dropping packets. Note
that (most) TCP implementations _rely_ on packet loss to discover usable link
capacity. We don't! One of the advantages of creating your own protocol is
that you can use the state of the art research in congestion control
strategies without being constrained by legacy concerns.
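Detecting congestion before loss is the territory of delay-based schemes (TCP Vegas, and more recently BBR, work in this spirit): a rising RTT relative to the minimum observed RTT signals a queue building at the bottleneck, so the sender can back off before packets are dropped. A minimal sketch with an illustrative threshold; this is not PacketZoom's actual algorithm:

```python
# Loss-free congestion signal: if recent round-trip times climb well
# above the base (minimum) RTT, a bottleneck queue is probably filling.

def queue_building(rtt_samples_ms, threshold=1.25):
    """True if the last few RTTs exceed the base RTT by `threshold`x."""
    base = min(rtt_samples_ms)
    recent = sum(rtt_samples_ms[-3:]) / 3.0
    return recent > base * threshold

print(queue_building([40, 41, 40, 42, 41]))  # False: link is idle
print(queue_building([40, 41, 55, 70, 85]))  # True: queue is growing
```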

The upshot is, our protocol is actually fairer to competing traffic compared
to TCP on the whole. Hope that alleviates your concerns.

------
jhugg
Totally lazy commenting by me here, but isn't there a mobile shell that was
designed to work well over high-latency, iffy connections?

This? [https://mosh.mit.edu](https://mosh.mit.edu)

Maybe we’re talking about the same problems.

~~~
chetanahuja
Hey it's not lazy if you actually found the link ;-)

Yeah, I've used mosh and it comes under the general category of UDP-based
protocols (of which there are many). The purpose of designing the PacketZoom
protocol (and its attendant infrastructure) is to build something specifically
_for_ the mobile native app use case. That includes things like app-level
sessions, moving between networks without breaking the session, taking a
single round-trip to set up the session, and zero round-trips thereafter to
start file transfers, etc.
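The "session survives a network switch" idea can be sketched simply: instead of identifying a connection by the TCP 4-tuple (which changes when the device hops from WiFi to LTE), the server keys its state on a session ID carried in every packet; QUIC's connection IDs work on a similar principle. All names below are illustrative:

```python
# Server-side session table keyed by session ID, not by client address.

sessions = {}  # session_id -> state

def handle_packet(session_id, client_addr, payload):
    state = sessions.setdefault(session_id, {"bytes": 0})
    state["addr"] = client_addr    # just update the return address
    state["bytes"] += len(payload)
    return state

handle_packet("abc123", ("10.0.0.5", 4433), b"hello")            # on WiFi
state = handle_packet("abc123", ("172.16.9.2", 4433), b"world")  # on LTE
print(state["bytes"])  # 10 -- same session continues, no re-handshake
```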

And then, once transfers have started, handling packet drops with more
measured backoff and retransmit policies than are available with TCP.
E.g., PacketZoom protocol packets don't have to arrive in order. The receiving
side can send feedback to the sender with an exact list of which packets
went missing. The sender can then base its backoff decisions not just on the
fact that a packet dropped, but on how many dropped and where.
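The receiver-side feedback described above can be sketched as follows: track which sequence numbers arrived (out of order is fine) and report the exact gaps, so the sender can retransmit selectively rather than backing off blindly. Purely illustrative, not PacketZoom's wire format:

```python
# Receiver computes the exact set of missing sequence numbers to
# report back to the sender (similar in spirit to TCP SACK).

def missing_packets(received, highest_sent):
    """Exact list of sequence numbers the sender should retransmit."""
    got = set(received)
    return [seq for seq in range(1, highest_sent + 1) if seq not in got]

# Packets 3 and 6 were lost; 7 arrived before 5 -- out-of-order arrival
# doesn't block delivery of what did arrive.
print(missing_packets([1, 2, 4, 7, 5], highest_sent=7))  # [3, 6]
```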

Lots of other details like this allow us to handily beat TCP in all varieties
of networks all over the world in speed and user experience.

