Ask HN: Why don't more apps use peer to peer networking?
58 points by api on Aug 13, 2014 | 43 comments
I think my question boils down to: is there a fundamental reason apps don't use p2p networking, or is it just that there aren't any good app P2P SDKs or programming techniques out there?

(Note: by P2P in this context I mean over the Internet, not the emerging wireless "Internet of things" P2P networking stuff. That's a bit different, and has a different use case.)

Take an example: SnapChat. (Just using them as a hypothetical here.) Why didn't they architect their app to send snaps directly when possible? It would have saved them a lot on bandwidth for starters. If they wanted to also store snaps on their servers they still could have done so, but they could have saved considerably on downstream bandwidth costs by sending snaps "horizontally" between users if these users happen to be online.

Is it just that it would have been too much work development-wise, or is there a more fundamental reason companies like this pass on P2P?

Spotify used a P2P protocol but last I heard they were moving away from it. Netflix -- about as bandwidth heavy as you can get -- doesn't do it. Skype has moved away.

Why?

The only reasons I can think of are:

(1) It's hard to program and there are few good SDKs to make it easier.

(2) Some users -- enough to be meaningful -- have bandwidth caps even on wired Internet connections.

(3) Cellular data connections almost always have bandwidth caps, and so users on these networks dislike p2p apps eating their bandwidth.

Which of these is most significant? Or are there other reasons?




I have worked on multiple applications that use P2P networking, and the fundamental problem is this:

Not all networks support P2P connections, and those that do often require intervention by the user.

So you have two issues: some users will never get to use your application, and those who potentially could will likely need customer support to help them configure their network correctly.

Corporate networks are the worst for this. They aren't going to change their rules for your application (yes, they might, but don't assume that starting out).

A much greater percentage of home networks can support P2P networking, but your application probably needs to support STUN as well as UPNP.
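
For the curious, the STUN piece is conceptually small: you ask a public server what your address looks like from the outside. A minimal sketch in Python, assuming a reachable public STUN server (stun.l.google.com here) and skipping all error handling:

    import os, socket, struct

    def stun_public_addr(server=("stun.l.google.com", 19302)):
        # Minimal STUN Binding Request (RFC 5389): 20-byte header with
        # type 0x0001, zero length, the magic cookie, and a random
        # transaction ID; the body is empty.
        txid = os.urandom(12)
        req = struct.pack("!HHI", 0x0001, 0, 0x2112A442) + txid
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(2.0)
        sock.sendto(req, server)
        data, _ = sock.recvfrom(1024)
        # Walk the response attributes looking for XOR-MAPPED-ADDRESS
        # (type 0x0020), which holds our public ip:port XORed with the
        # magic cookie.
        pos = 20
        while pos + 4 <= len(data):
            atype, alen = struct.unpack_from("!HH", data, pos)
            if atype == 0x0020:
                port = struct.unpack_from("!H", data, pos + 6)[0] ^ 0x2112
                ip = struct.unpack_from("!I", data, pos + 8)[0] ^ 0x2112A442
                return socket.inet_ntoa(struct.pack("!I", ip)), port
            pos += 4 + alen + (-alen % 4)  # attributes are 32-bit padded
        return None

That tells you your own mapping; whether two peers can then actually reach each other through those mappings is the hard part the rest of this thread is about.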

Some home routers will never work; others can if you configure them correctly. And that's where it gets messy. Is it enough to tell a customer to 'go figure it out' when it pertains to router configuration? You might get away with it with PC gamers, but I'd argue any other segment of people will have no idea how to do it and will need some help. So now you have to try to figure out the end user's home router configuration as best you can, remotely. That's a huge drain on customer support resources, which in a small startup usually means the developers.

So why go P2P with all these headaches? Unless you have a really strong reason to use P2P, like keeping latency down between peers, you don't bother.


My guess is it's a combination of development complexity and (probably more importantly) firewall/router issues.

It's very hard — impossible, really — to deploy P2P technologies on a mass scale without thousands of users encountering problems with their routers & firewalls.

For mass market products, you can't get away with asking people to whitelist your app, make sure port 28777 is open for UDP, etc.

Many P2P systems have freeloader issues, which can usually be resolved/avoided/ignored on desktop systems, but when you add mobile into the equation — with its paltry bandwidth limits and sky-high overage charges — the potential for it to become a problem is much greater.


Hmm... so...

(1) Developer ease of use... must be almost as easy as simply opening a 'dang socket'.

(2) Must be able to fall back on non-p2p easily and transparently if p2p is not possible for a given customer.

(3) Must be able to set a bandwidth quota on mobile devices, or possibly fall back to non-p2p easily and transparently when on a capped cellular connection.

(4) (via another reply) Must not do constant keepalives, at least on smaller mobile devices like phones, since they eat battery life -- must support some kind of sleep mode with instant wake.

This being HN, I have ulterior market research motives with this ask post. I have thoughts about building something, and want to know if it's worth doing. :-)

But I think it's an interesting question in the abstract too.


I have a hunch that WebRTC is going to make P2P a much more mainstream technology. Once most people are running browsers with WebRTC support, and there are good high-level libraries available for developers, I think we'll start to see it used for all kinds of things — not just real-time video conferencing, but all kinds of information sharing and dissemination, both real-time and who-cares-about-real-time, just because WebRTC will be there, will (mostly) work, and will be (relatively) easy.

You mentioned something about saving bandwidth costs, and I admit I've had a few ideas for bootstrap startups where I've thought, "Well, I could do this, but the second it became semi-popular I'd go broke and have to shut it down," and then, "If only there was a way to have the apps distribute the data P2P, so I didn't have to pay for every single user downloading from the server."


Check the thread and see my other reply.


> (1) Developer ease of use

You can't fix it with developer tools. You'd need to completely rewrite the way many hundreds of thousands of diverse, entrenched, autonomous entities with a vested interest in your failure think about networking.

Most users are behind NAT. Several thousand employees in an office or students in a dorm will have the same public IP address and will never, under any circumstances, administer the router where NATing takes place.

For users who do not pay their own ISP bills, it's impossible. By design. You are not going to convince the entire IT management/security establishment that they should allow arbitrary connections to their users' machines.

With unlimited resources and political power, you might be able to convince all the SOHO router manufacturers to include some kind of standardized port-forwarding API. It will be several years before all the old devices are replaced, though. But you might get to a point where there are peer-to-peer systems for nontechnical users who own their routers.

You still need client-server for when people are on an institutional connection/mobile.

There are no peer-to-peer connections between users behind firewalls or NAT. This is true even in BitTorrent - if you're behind a firewall, your only peers are people who are not behind firewalls (seedboxes in datacenters, geeks who went into their router config, and people who are wasting IPv4 space by handing out fully routable IPs to individual clients.)


Most of this is incorrect. Look into hole punching as a starting point. BitTorrent, SIP VoIP phones, desktop Skype, etc. all do this. It's sort of a hack that became a semi de facto standard, but so is NAT.

There are some percentage who cannot do p2p even with smart traversal techniques. In my system I have measured this to be <5% of total users, few enough that free relaying is basically free to provide.

The hardest problems surround mobile battery life, quotas, and coarse-grained, limited multitasking. These are also likely solvable. They may require a rethink at the encapsulated layer 3 level, like TCP proxy ACK or linear coding. Or maybe push notifications. Still researching this.


>BitTorrent

http://www.bittorrent.com/help/manual/chapter0203

BitTorrent users who do not have port forwarding enabled do not communicate with each other. BitTorrent has a highly technical user base and chances are good that someone with the file you want actually has enabled port forwarding or is running a server in a datacenter (seedbox). That wouldn't work for mainstream messaging.

>desktop Skype

Firewalled Skype users do not talk directly to each other, they talk to Supernodes (servers) running on nodes that do have port forwarding / no firewall.

Conscripting random PCs as servers drew public outrage and Microsoft eventually (citing scalability and reliability) made all the Supernodes server-class hardware sitting in Microsoft datacenters.

>SIP VoIP phones

If you are in an office and can receive calls on a SIP phone, it's because your organization runs (or pays someone to run) a central server called a gateway, usually but not always as part of a PBX. The most common open-source PBX is called Asterisk.

People who call you from the outside are not connecting to your phone, but to your organization's gateway. Your gateway then sits on a (V)LAN with your phone, or your phone maintains an outgoing connection to it.

If my friend has her router configured but I do not, then (absent a gateway) I could call her but she couldn't call me. So you use central gateways.

Many VoIP phones come with VPN clients so that they can behave like peers to phones within the organization if deployed at employees' homes or in certain branch office configurations where bridging is not already in place. In this case, you still have centralization at the level of the VPN server.


It's very weird that the BitTorrent docs say that, since I have seen differently in practice. Both Transmission and uTorrent support uTP, a TCP-like protocol over UDP that can traverse NAT.

There is something like a standard. Few products use this standard by name, but many use its techniques.

http://en.m.wikipedia.org/wiki/STUN

So you are mistaken. Either that, or for years I have suffered from packet hallucinations and clinical delusions of NAT traversal, because I am literally watching it happen right now. :)

Here are some scenarios I have recently personally tested using simple UDP hole punching (a minimal sketch of the punching step follows the list):

* Institutional/enterprise NAT: two universities, one large corporation, one hotel

* Double NAT, with a virtual machine behind a physical host that is itself behind NAT

* Double NAT on a cellular connection (tethering)

* NAT behind "privacy tunnel" VPN

* Several consumer cable modem routers
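
Here's that minimal sketch, assuming a rendezvous server has already told each side the other's public ip:port. (A hedged toy; real code needs retries with jitter and periodic keepalives to hold the mapping open.)

    import socket

    def punch(local_port, peer_addr, attempts=10):
        # Both peers run this at the same time, each aiming at the
        # other's public ip:port. Each outbound packet creates or
        # refreshes a mapping in our own NAT; once both mappings
        # exist, packets flow directly.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", local_port))
        sock.settimeout(1.0)
        for _ in range(attempts):
            sock.sendto(b"punch", peer_addr)
            try:
                data, addr = sock.recvfrom(64)
                if addr == peer_addr:
                    return sock        # hole is open in both directions
            except socket.timeout:
                pass                   # early packets often die; retry
        return None                    # give up and use a relay fallback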

But it certainly is hard. It involves either complex or clever solutions, and to field a reliable product there must be relay fallback for the percentage of users who cannot traverse. (Since it's a small percentage it's virtually free to provide that.) That fallback must be fast and transparent to the user, and done in a way that is maximally compatible. I do TCP over port 443, which is almost always open, and have considered literal HTTP encapsulation over 80 as a second fallback that would work even through proxies.

It can be done. Like I said, I have a system in the field now seeing a 90-95% success rate with direct connectivity. No firewall config is needed, nor are users conscripted to act as relays.

Like I said elsewhere on the thread: coping with the limitations of mobile is harder than dealing with NAT. I think that can be done too but there are still question marks, especially around iOS.

The idea is to do all the super hard stuff once and then make it available in a very nice SDK that's also backed by a service with an enterprise SLA, support, etc. (All cheaper than handling all bandwidth yourself, of course, not to mention lower latencies for most users.) Hide all the complexity from both the user and the developer and offer something professional. There are P2P libraries out there, but none that are professional, polished, or backed.

Target would probably be games, video, and other more demanding applications. I was just using SnapChat as a simple example but it's probably not the best one.


Interesting! I didn't realize so much of the NAT in place was full-cone. I thought it would all be symmetric.

Non-symmetric NAT seems like a major security vulnerability, though. Couldn't I hijack any application whose eAddr:Port I knew simply by replying faster than the actual server? My traffic would appear to the client as identical to traffic from the actual server, no?

Why do people deploy NAT that is not symmetric?

Also, why couldn't Skype and BitTorrent simply use those techniques rather than relying on certain users to serve all the firewalled traffic?

What is the demographic of your product where you're seeing a 90-95% success rate?


Linux, BSD, and most home routers are full cone. Apparently my cellular tethering feature is also full cone. Most importantly, every carrier-grade NAT I've seen (used on cellular nets and by some ISPs) is full cone.

It's actually hard for me to test symmetric, since I don't have access to, nor have I found, any networks that are symmetric. I have to simulate it on a test net with a special iptables config.

Symmetric seems rare in the field, though it also seems most common in "enterprise" environments. My guess is that some major vendor like Cisco or Juniper sells symmetric NAT engines. All the users I've heard from who are behind symmetric are in large corporate or university networks. I've never even heard of a symmetric NAT consumer device.

My guess is that full cone is most common because (a) it's simpler at the router and (b) it guarantees maximum compatibility. As I said: NAT traversal is a "de facto standard" used by a number of products. If those products don't work or are slower than molasses, users complain that "the Internet is broken." That sword cuts both ways. Vendors probably found that fewer complaints about "broken Internet" came when they went to full cone NAT.
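A crude way to tell the two apart from the inside: send from one local socket to two different destinations and compare the mapped ports. Sketched here with hypothetical "reflector" servers (any two STUN servers would do the same job):

    import socket

    # Hypothetical reflectors: each replies to any UDP packet with the
    # sender's public "ip:port" as ASCII text.
    REFLECTORS = [("reflector-a.example.com", 4000),
                  ("reflector-b.example.com", 4000)]

    def looks_symmetric():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(2.0)
        ports = []
        for server in REFLECTORS:
            sock.sendto(b"whoami", server)
            reply, _ = sock.recvfrom(64)
            ports.append(int(reply.decode().rsplit(":", 1)[1]))
        # An endpoint-independent ("full cone"-ish) NAT reuses one
        # external port for the same local socket; a symmetric NAT
        # maps a fresh port per destination.
        return ports[0] != ports[1]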

There are ways of traversing symmetric NAT, but none of them can be guaranteed to be successful the way full cone hole punching is, and some might look like a port scan or an attack and set off alarm bells. They include psychotic techniques like:

* Proposing that the symmetric NAT box assigns ports sequentially and trying the next few (may look like a port scan; see the sketch after this list).

* Proposing that the symmetric NAT box may assign ports using the standard K&R linear congruential rand() function, inductively determining the PRNG state, and guessing the next ports by extrapolating the next sequence numbers (also may look like a port scan).

* Impersonating a DNS server and sending a packet that refers to the NAT-t target as if it were a delegated DNS server (apparently some symmetric NATs have DPI logic to make stuff like this work, which can be exploited... but this can look like an attack).

* Having the host behind symmetric NAT probe its own exterior ports 1024-65535 and determine the port assignment schema by doing NAT-t against itself (looks like an attack).

* Impersonating IPSec and triggering "make IPSec VPNs work" DPI logic in the router (absolutely requires 'root' on the host and SOCK_RAW, which Windows doesn't have).

* Various "Jackass stunts" with ICMP that can set off IDS systems.
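
To make the first trick concrete, a toy sketch (hypothetical framing; this spraying is exactly the part that reads as a port scan to an IDS):

    import socket

    def spray_sequential(peer_ip, last_mapped_port, width=16):
        # If the symmetric NAT allocates external ports sequentially,
        # the mapping for our flow is probably a little above the last
        # port some third party observed. Lob packets at the next few.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for port in range(last_mapped_port + 1,
                          last_mapped_port + 1 + width):
            sock.sendto(b"punch", (peer_ip, port))
        return sock  # then listen for a reply as in normal hole punching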

I would not do any of that stuff in a Real Product(tm), but it's all cool in a cool hack kind of way.

Nevertheless, the statistical rarity of symmetric means that just relaying people's traffic for free is basically... free... at least within reason.

As far as I know BitTorrent does use hole punching, but maybe not in the official bittorrent.com client. Take a look at Transmission and uTorrent. They use uTP, which is basically an application-mode TCP stack that runs over UDP, and they pair it with hole punching. I have used them many times behind NAT and they work... you'll notice that if you're behind NAT, nearly all of your peer links are uTP instead of TCP.

I don't know about Skype. I know they use servers only on mobile, but I don't know what their desktop client does. Skype is super closed and obfuscated, so the only way to tell there is to fire up a sniffer.

My demographic is mostly consumers and small businesses, so it's going to be mostly consumer routers and hotels and such. I do see some symmetric with .edu users.

Product is here: https://www.zerotier.com/

If it can't do NAT-t, it will relay UDP (for free). If it can't do UDP at all it will fall back to relayed TCP over port 443 that looks like TLS. It works just about everywhere. Being able to NAT-t just makes it run a lot faster.

I'm considering taking the engine behind this, SDKifying it, and then trying to make it play nice on mobile. The latter is IMHO much harder than dealing with NAT, and if I could do it, it would make this a first-in-class product. But I don't want to spend the time if nobody wants such a thing.


If you could get that going on both mobile and desktop apps, I would definitely implement it across several pieces of software I work on. It'd likely be a year from now before it was implemented, but the amount we'd save would arguably outweigh the downsides.

Of course we'd implement user settings to disable it, transfer only public data, make sure the client is robust and secure, etc., but peer-to-peer in this day and age seems to be the way to go.


I'm considering doing just that. I already have most of it done in fact... It's the engine behind this:

https://www.zerotier.com/

It does most of the things I listed. The remainder would be a matter of proper packaging and integration with mobile OSes for the mobile version.

The idea would be to package it with an ultralight IP stack so that each app appeared on a virtual network as an IP endpoint. Servers could join the same virtual networks using the already-released software I just linked, and services could just talk standard IP as if they were talking to any other network. No rewrites on the server end needed, and the client may just need to link in a library and call some init functions.

Respecting battery life might be hard, but I see no reason integration with Apple's or Google's push notification systems couldn't be used to wake the endpoint app when it needs to do something. The idea is that desktop nodes could run in a heavier mode than mobile nodes, with the latter going dormant after a few minutes of inactivity and waking on coarse-grained push.

Drop me an email at contact@zerotier.com if you're really interested in this kind of thing. I'm doing some research to see if this is worth building.


It's not so much 'Developer ease of use' but 'End user ease of use'.

If end users can't use your product because it's blocked by their firewall then that's going to cause you problems (either in extra support costs, or lost customers).


Developer time is one of the most important things to optimize. Bandwidth is really cheap by comparison.

P2P isn't completely reliable. There are many cases where you can talk to a server but you can't talk to a peer, ranging from evil firewalls to excessive layers of NAT to simple things like the target device being offline. Thus, you must code a fallback that talks to a server if you want reliability. This server fallback will work for all situations, so it's necessarily easier to just use it for everything and not bother to code the P2P bit at all.
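
In sketch form, that fallback pattern: try the direct path, then give up and use the server. (try_hole_punch here is a hypothetical stand-in for whatever traversal you attempt, like the punching sketch elsewhere in this thread.)

    import socket

    def try_hole_punch(peer_udp_addr):
        # Stand-in for a real NAT traversal attempt; returns a
        # connected socket on success or None on failure.
        return None

    def connect_to_peer(peer_udp_addr, relay_host):
        sock = try_hole_punch(peer_udp_addr)   # 1. try the direct path
        if sock is not None:
            return sock, "direct"
        # 2. Fall back to an outbound TCP connection to a relay.
        #    Outbound TCP to 443 works on almost every network, which
        #    is why the server path is the one you can never skip
        #    building.
        tcp = socket.create_connection((relay_host, 443), timeout=5)
        return tcp, "relayed"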

P2P is also really hard to do well. It's pretty easy to do poorly: have one device tell the other device what its IP address is and a port to connect to, then connect to it. In practice, this fails about 99% of the time because approximately all consumer internet users are behind NAT these days. So then you enter the wonderful world of NAT traversal, meaning you have to deal with horrifying things like UPnP, NAT-PMP, and STUN. And this is all assuming both sides keep the same IP address throughout the connection! Now consider when your smartphone user goes from Starbucks, where he has WiFi, to the bus, where he only has LTE, to home, where he has WiFi again.

Bandwidth is cheap. Let's say it would take Snapchat one developer-month to implement this, or about $10,000. (I'd wager this is a strong underestimate, both in terms of the time required and the cost of that time.) Amazon S3, to take a random example, charges 12 cents per GB of outgoing transfer to the internet at lower use levels. You can buy 83TB with that $10,000. If your typical Snapchat image is 1MB (they're low resolution, right?) then you'd have to P2P 83 million images before you broke even on the investment. Factor in a more realistic timeframe, a more realistic cost, and the opportunity cost of not having that developer work on something more useful, and the break-even point goes up by an order of magnitude or more.
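
Spelled out, using the same assumed numbers ($10k of developer time, $0.12/GB egress, 1 MB per snap):

    dev_cost = 10_000          # one developer-month, USD (estimate above)
    egress_per_gb = 0.12       # S3 outbound, USD/GB at lower tiers
    snaps_per_gb = 1_000       # 1 MB snaps, decimal GB as S3 bills
    break_even = dev_cost / egress_per_gb * snaps_per_gb
    print(f"{break_even:,.0f} snaps")  # ~83 million before P2P pays off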

P2P does get used where it pays off well. That's either high-bandwidth stuff or low-latency stuff. WebRTC does P2P whenever it can. Apple's FaceTime does P2P when they're not disabling that functionality to placate patent trolls. Skype does (or at least did, I seem to recall some changes) P2P for audio and video. And of course nothing beats BitTorrent for sending massive amounts of data to large numbers of people. But it just doesn't pay off unless you're really sending a lot of stuff.


1) Users don't care.

2) It's complex, so it costs quality developer time. Even then there are no guarantees.

3) The old-school approach is cheap/easy/fast, additionally enabled by fast, quality clouds that make scaling relatively easy.

4) An increasing number of clients do not accept incoming connections, whether because of IPv4/NAT or because of cellular/mobile/WAN connections that don't accept incoming traffic.

5) It's a competitive market for apps, web services, and related products. Any increase in latency or in support calls is prohibitive. If even one in 10 people needs support for port forwarding, that's a deal killer.

6) In an increasingly mobile world, incoming connections and anything that hogs battery (bandwidth, or even just being awake) are a disadvantage.

7) There's no easy money in P2P: no monthly data plans, centralized towers, centralized servers, etc. Sure, a mesh network of smartphones with millions of clients could do cool things. But where do AT&T/Verizon make money? And without AT&T/Verizon, is Samsung going to make a P2P/mesh-network phone if they need to sell millions to break even?

As an example, Skype years ago, with mostly desktop/laptop clients, was largely P2P (just login was centralized). With increasing numbers of tablets, WAN connections, and smartphones, they switched to central servers.

So sure, you might be able to spend a man-year and get an awesome, robust, and performant solution. But your competition will have spent that time actually making users happier and stealing your market.


It's because startups rarely have time and resources to create a solid infrastructure from the beginning. Easy and simple are preferred. The product needs to be launched and validated as fast as possible.

So by the time a product becomes successful, its core is already using inefficient solutions. And very few companies upgrade them, because the effort is not really worth it business-wise.

If we were driven by real solutions instead of money, P2P would probably be king. Direct communication everywhere.


Let me give you a comprehensive answer on this that goes back to first principles. I've worked on apps like SnapChat, so I am probably pretty close to an authority on why apps like that don't use p2p.

The first problem is that mobile devices are pretty much inherently asynchronous. There are apps that you would use at the same time as another person (like real-time games) but especially on cellular, lag is an issue. This pushes people into designing products that can tolerate lag measured in seconds (because that isn't shockingly bad performance on cellular networks for apps that use your standard off-the-shelf tools like REST/HTTPS/AWS for example). This produces a lot of asynchronous, or semi-asynchronous applications.

Now partly due to those product designs, and partly due to people having lives, they use these apps asynchronously. You pull out SnapChat, fire off a message, and go back to reading Reddit or whatever. Snapchat is off. There's no way to reach you.

Okay, so why don't we run SnapChat in the background? Well, there are layers of reasons. The first layer is that it costs energy, and the mobile revolution is in large part possible because software and hardware developers got very aggressive about energy management. If we ran things in the background like you do on your laptop, your phone would need to be as big as your laptop and plugged in regularly like your laptop. There are also practical problems, like announcing every time your network address changes, or even figuring out when your network address changes, which is hard to do passively. I'm glossing over some networking details, but there's a deep principle here: within the design of the existing TCP/IP/cellular stack you can't have reliability, energy efficiency, and decentralization. You must pick 2.

Apple, very presciently IMHO, has decided to legislate a lot of rules about background processes that I don't have time to go into here but basically they try to regulate the types of work that apps can do in the background to only the energy-efficient kind. The rules are actually pretty well-designed but they're still rules and they restrict what you can do. Android doesn't have this limitation but unless your product is Android-only you're going to comply with the iOS rules when you design a feature that communicates across platforms.

Okay, so we can't run Snapchat in the background. But what if two users happen to have it open? We can use p2p then right?

Well sure. But the user may be on a network that blocks p2p traffic. That is their network's fault, but they still e-mail you to complain, and leave bad reviews for your product, because as far as they can see "the Internet works" so it's your app's fault.

So what you do is you design a scheme that uses p2p by default and client-server as a fallback. There are actually apps that work like this. Problem here is, instead of getting support tickets about it not working, now you get support tickets about it being slow.

And there are ways to solve this, like permanently giving up on p2p after a certain number of failures. But the first experience is still pretty bad, which is what counts in mobile. And I remind you, this p2p feature is already a scheme that only works in the 0.3% of cases where users actually have the app open at the same time, and now you want to add code that disables p2p in even more cases than it's disabled already. This process continues until basically zero actual customers will ever use the feature.

And we haven't even gotten to cases like "Why didn't this message get delivered to all my devices?" because there is just zero chance that any customer, anywhere, will have all his devices turned on at the right time to receive incoming p2p connections.

Now non-messaging products like Spotify or Netflix are more plausible, but you still have to ask who wins here. Customers are worse off, both because of connectivity problems and because of the increased bandwidth bills and energy-efficiency losses that come with rebroadcasting content to other users. Developers are worse off because they probably have to build both client-server and p2p architectures, because p2p isn't reliable enough on its own. Support is worse off because almost any issue is potentially a p2p-related issue: have you tried disabling p2p and seeing if the issue persists?

There's really no reason, certainly no compelling business case, to inflict that much pain on any mobile product I can think of. I mean, there's probably a place where p2p makes sense--we live in a big world--but in general it makes things much worse for everybody.


Short version: there's no incentive to build or support reliable _infrastructure_ for p2p, so apps won't _design_ around it.


There is an incentive for real-time audio/video/gaming applications, though: latency.

And many such applications do use P2P.


You certainly can. The thing is, though, that the hops surrounding the server are generally the fastest hops in the whole path. The server is sitting in a big datacenter somewhere with multiply-redundant, professionally-managed network connectivity.

Doing a direct shot from client to client would be somewhat faster, but not much. Meanwhile, there are usually better optimizations you could do in the same amount of time.


That's not a valid generalization.

Yes, if User A, User B, and the server are in a hub city, such as Dallas, then the difference could be small in either direction.

But if user A is in San Francisco, and user B is in San Diego... then routing through the server in Dallas is almost always going to be significantly slower.

So if latency is a concern and you want to route through your server, you now need to cover the network with servers. That's a big deal.


If such an infrastructure did exist, do you think people would use it?


Those are all very good points.

I'm pretty much certain that standard-issue p2p approaches will not work on mobile without a rethink, but I do think it's possible.

SnapChat may not be the best example, since it's a very ephemeral, quick, and (as you say) asynchronous app. A better use case would be something with more prolonged engagement, like a game, video conferencing, telepresence, etc. Those also have significantly higher bandwidth requirements that would make something like this more interesting.

Also, it sounds like if it can be done on iOS, it can be done anywhere, so if one were going to take a 'do the hard stuff first' approach, one should start with iOS.


NAT.

The vast majority of end-users cannot accept incoming connections. The vast majority of end-users cannot initiate connections to each other.

The only way around that is centralization.

BitTorrent users who are behind firewalls are only participating in a subset of the network - people who can receive incoming connections. That's okay because there's probably at least one who has the file you're looking for, but it wouldn't facilitate direct messaging.

Skype was centralized on Supernodes. Two users make outbound connections to a Supernode, the Supernode passes traffic between the two parties. It was (sorta) decentralized because Supernodes were sometimes just random users who could accept incoming connections. Microsoft found it more reliable and scalable to just host the servers in datacenters instead of co-opting random PCs as servers.
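
The supernode trick is mechanically simple, which is why random PCs could do it. A toy sketch of the relay half, assuming exactly two clients that each send an initial hello (their outbound packets are what open their NAT mappings toward the relay):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 40000))

    peers = []
    while len(peers) < 2:              # wait for both parties to check in
        _, addr = sock.recvfrom(64)
        if addr not in peers:
            peers.append(addr)

    while True:                        # shuttle traffic between them
        data, addr = sock.recvfrom(2048)
        if addr in peers:
            other = peers[1] if addr == peers[0] else peers[0]
            sock.sendto(data, other)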

In our NATed, firewalled world, there is no such thing as true P2P between mainstream users.


There are a number of ways around NAT [1]. I don't know of a 100% guaranteed solution but NAT is not impenetrable.

[1]: http://en.wikipedia.org/wiki/NAT_traversal


P2P is definitely complex to develop. And on phones P2P sucks battery life due to keepalives between peers; this is one of the reasons Skype is removing P2P. P2P push notifications could help, but they'll probably never exist.


[deleted]


And there's also the fact that everything is now monetized through the server, so eliminating the server probably puts you out of business.


Things are monetized through the service, not necessarily the server. Having everything go through a server all the time just adds cost. That makes me doubt that the need to monetize is a fundamental reason for p2p being rare... but I could be wrong.

(I deleted the parent because it was redundant with my other reply. Oops. :)


SIP is a good example of a protocol where signalling is done through a server, but the media can (optionally) go via another path.


1 - complexity

P2P is just hard, and not that pertinent for most products. And of course there are network issues with special ports and UDP that make these solutions sometimes impossible to deploy in the enterprise world.


It really comes down to the fact that it's hard. Creating a secure P2P network takes a lot of time, a lot of smart people, and a lot of money. Especially on mobile, the design is often not reasonable. However, a semi-P2P network could be an option. For example, a snap is sent and the app attempts to deliver the message directly to the recipient. If the recipient's app is open, there is no problem, but if it isn't, the message gets sent to a Snapchat server and saved for later. This is the design I used for a P2P email project I worked on, and it worked.
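
That semi-P2P design reduces to a few lines of logic on the sender. A sketch with a made-up ACK convention, assuming the direct path is reachable (hole already punched or no NAT in the way); the real work is the server-side queue:

    import socket

    def send_snap(payload, peer_addr, server_addr):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(2.0)
        try:
            sock.sendto(payload, peer_addr)      # try direct delivery
            ack, addr = sock.recvfrom(16)
            if addr == peer_addr and ack == b"ACK":
                return "delivered-direct"
        except socket.timeout:
            pass                                 # app closed / unreachable
        sock.sendto(payload, server_addr)        # store-and-forward fallback
        return "queued-on-server"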


There is a definite lack of P2P SDKs, and those that exist address a very narrow set of problems. The advent of digital currencies may provide incentives for users to prefer P2P apps. Getting paid to put your computer to work will appeal to many people.

Maidsafe is a good example, but developing for that SDK is a major undertaking. I am working on peercentrum (https://github.com/pmarches/peercentrum-core) which may appeal to a more general developer crowd.


Servers are dirt cheap, developers and customer service reps are very expensive. P2P solutions have the potential to impose high reputational costs if users start getting angry about data or battery usage, or experience connection problems. In a very few edge cases (very high bandwidth use, very low customer value) it might make sense to use P2P, but in those cases you probably don't have a real business proposition.


I am surprised podcast apps don't use P2P to download media.

1) Podcasts are made available to everyone at the same time, and are rarely downloaded after the next episode comes out. Lots of people are trying to download the same file at the same time.

2) It would save publishers a lot of bandwidth.

3) Latency is not really an issue.

Seems like a perfect fit.


One factor, I would guess, is that companies want control of the data their app is sending so they can data-mine it. That's easy if everything is going via a central server, and it looks legitimate. A P2P app would specifically have to "phone home" to do this, which makes the data mining element obvious.


Check out maidsafe.net - those guys think that the internet should have been designed as peer-to-peer in the first place!


I'd like a p2p chess app to play with my brother using our mobile phones and nothing else.

Is that even possible?


Chrome and Firefox support WebRTC on mobile, so you can use it from a web app.


It could probably be done with AllJoyn.


That's a different thing in that it's hyper local, basically people next to you. Cool but different.


Why would they?


Because there's never a guarantee the app is open, especially on mobile.

If we take your Snapchat example: if one user is trying to send a message to another and their app is closed or their device is off, where does the message go?

On the desktop you have the luxury of mostly running in the background to pick up the ping, but that's almost never the case on mobile.



