Very long story short: the overhead of maintaining link-state information in meshed networks swamps the available bandwidth at even modest node counts (think 20-40). Basically, as nodes are constantly moving and coming on- and off-line, every node has to know which nodes are in the connection graph, so they all exchange that information. But that eats up the bandwidth amazingly quickly, leaving no room for actual traffic.
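A back-of-envelope illustration (all the numbers below are invented, just to show the shape of the problem): each node's link-state advertisement gets flooded, so every node retransmits every other node's updates, and the chatter grows as O(n^2) on a single shared channel:

    # Rough back-of-envelope: why link-state flooding swamps a wireless mesh.
    # All numbers are illustrative assumptions, not measurements.

    CHANNEL_KBPS = 1000        # assumed usable capacity of the shared channel (~1 Mbps)
    UPDATE_BYTES = 120         # assumed size of one link-state advertisement
    UPDATES_PER_NODE_S = 1.0   # assumed churn: each mobile node re-advertises once/second

    for n in (10, 20, 40, 80):
        # Each advertisement is flooded, so every node retransmits it once:
        # n originators * n relays = O(n^2) transmissions/second,
        # all competing for the same shared spectrum.
        tx_per_s = n * n * UPDATES_PER_NODE_S
        kbps = tx_per_s * UPDATE_BYTES * 8 / 1000
        print(f"{n:3d} nodes: ~{kbps:7.0f} kbps of routing chatter "
              f"({kbps / CHANNEL_KBPS:.0%} of the channel)")

With these (made-up but plausible) constants, 40 nodes already want ~150% of the channel just for routing updates.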
There are some other issues with meshed networks (latency, battery usage when you're just a client node, etc.), but especially for those of the kind described/dreamed about here (also known as MANETs -- mobile ad-hoc networks), the routing issue is a huge one.
Finally, I should note the above isn't just academic; several companies have tried to build products based on these kinds of networks -- military contractors being notable among them -- and have pretty much come up short.
So until the really fundamental issues are worked out, which is in the realm of academic research, not implementation, we probably aren't going to see anything like this.
That's not some academics' problem. It's an engineering problem. What those companies have proven is that, given the particular approach they took and the constraints they had, they could not make their mesh networks work.
So the starting point for an engineer is: what types of mesh networks work without needing to globally broadcast topology updates in real time?
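One answer to that question is geographic (greedy) forwarding, a family of protocols that needs only one-hop neighbour state instead of flooded topology. A minimal sketch with invented positions (real protocols like GPSR add a perimeter-routing fallback for the local-minimum case):

    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def greedy_next_hop(my_pos, neighbour_positions, dest_pos):
        """Pick the neighbour geographically closest to the destination.

        Needs only local, one-hop state -- no flooded topology updates.
        Returns None at a local minimum (where protocols like GPSR fall
        back to perimeter/face routing)."""
        best = min(neighbour_positions,
                   key=lambda nid: dist(neighbour_positions[nid], dest_pos),
                   default=None)
        if best is None or dist(neighbour_positions[best], dest_pos) >= dist(my_pos, dest_pos):
            return None  # no neighbour makes progress toward the destination
        return best

    # Example: node at (0, 0), two neighbours, destination at (10, 0)
    print(greedy_next_hop((0, 0), {"a": (3, 1), "b": (1, 4)}, (10, 0)))  # -> "a"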
It always amazes me how quickly people accept someone else's incorrect approach to a problem, or dismiss a new paradigm because others haven't had success. Meanwhile they spend enormous amounts of energy tweaking the parameters of an outdated paradigm to try to make it work a little longer.
The main thing holding back new, superior approaches to a problem is always the people who tried and failed and then kept saying it was unworkable, thus preventing others from trying.
Hi, we spent a few years working on mesh networks at One Laptop Per Child (as engineers rather than academics, obviously). We ended up giving up, for the reasons gsteinb88 mentioned -- another cause of bandwidth saturation was that broadcast packets turn into multicast on mesh.
I'm not saying this to discourage others from trying. But they should definitely be teams of network protocol experts, rather than a generalist team trying to put a product together as we were. We used 802.11s, and being the largest deployment of a protocol you weren't planning on having to fix yourself isn't an enviable position.
I'm incredibly interested in the challenges you guys faced -- the mesh network and view source parts of OLPC were (to me) the most important and life-changing aspects. Bummed to hear that at least one of them was problematic.
Was there a blog post or anywhere you guys catalogued these problems, your approaches, and what could be learned? I'm looking to take on a similar challenge as a generalist and would love to learn from your attempts :)
Thanks! It looks like there were a bunch of academic papers about the mesh performance -- if you search for anything by (or citing) Ricardo Carrano, he was one of the more academic engineers who worked on the problem.
For example, here he proposes a hack on top of 802.11s to switch between reactive and proactive mesh routing modes based on the measured network density -- if you're a small group of kids in a village you want a reactive (on-demand) network, even though its route-request floods saturate the channel more easily as density grows; if all the kids come into the same classroom at once you need to quickly switch to proactive mode to avoid using up all the spectrum with route discovery:
The 802.11s spec is pretty useless for high density areas without such a hack, it seems. I suppose the takeaway message is that your use of mesh networking needs to be extremely tailored to your specific use case, quite possibly including writing custom routing software.
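I haven't re-read the paper, but the density-triggered switch described above might look roughly like this (the thresholds and the hysteresis band are my own invention for illustration, not Carrano's numbers):

    # Toy sketch of density-triggered routing-mode selection, loosely in the
    # spirit of the hack described above. All thresholds are invented.

    REACTIVE, PROACTIVE = "reactive", "proactive"   # 802.11s HWMP has both modes

    def pick_mode(neighbour_count, current_mode, low=8, high=15):
        """Hysteresis between on-demand (reactive) and tree-based (proactive)
        routing: sparse village -> reactive; packed classroom -> proactive,
        so route-discovery floods don't eat the spectrum."""
        if neighbour_count >= high:
            return PROACTIVE
        if neighbour_count <= low:
            return REACTIVE
        return current_mode  # inside the hysteresis band, don't flap

    mode = REACTIVE
    for seen in (3, 6, 20, 12, 5):   # neighbour counts sampled over time
        mode = pick_mode(seen, mode)
        print(f"{seen:2d} neighbours -> {mode}")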
Another problem: I've never seen a meshnet protocol that isn't vulnerable to a variety of flooding, cache-pollution, DDoS, and other attacks that could basically make the network unusable.
This is a problem for the idea that mesh nets should be used to create resilient decentralized networks. Unless the meshnet is safe against attacks, any government or other organization that wanted to take a meshnet down might actually find it easier than taking down a multi-path tree-shaped or centralized infrastructure.
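For a sense of why the defences are weak: the obvious mitigation is per-neighbour rate limiting of control traffic, but in a network with cheap, fleeting identities an attacker just rotates link-layer addresses. A toy sketch of that (insufficient) mitigation, with all names and rates assumed:

    import time

    class TokenBucket:
        """Naive per-neighbour rate limit on routing/control packets.
        Raises the bar against dumb flooding, but not against Sybil
        attackers who rotate identities -- which is the harder problem."""
        def __init__(self, rate_per_s=5.0, burst=10):
            self.rate, self.capacity = rate_per_s, burst
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    buckets = {}  # neighbour id -> bucket

    def accept_control_packet(neighbour_id):
        return buckets.setdefault(neighbour_id, TokenBucket()).allow()

    # A chatty neighbour: the first ~10 packets (the burst) pass, the rest drop.
    for _ in range(12):
        print(accept_control_packet("mac:aa:bb"), end=" ")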
I agree that in general the idea is pretty stupid, but what about adding a couple of layers of master nodes? As Google has shown, by trading off a bit of decentralisation you can gain a lot of performance (see GFS, etc.).
Do you have any more information on this? I'm definitely curious what the problem looks like when stated in specific algorithmic terms (big-O and all that).
Let's not forget that wireless has very limited bandwidth in the first place. Once you start "sharing" your bandwidth by relaying packets for other nodes, in any large network the bandwidth available for you to use will drop off a cliff.
In other words, even setting aside maintenance overhead, simple relay overhead is also going to bring the network to its knees.
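To put rough numbers on it: the classic Gupta-Kumar capacity result says per-node throughput in a random ad-hoc wireless network scales on the order of W / sqrt(n log n). A quick illustration with an assumed channel capacity (the constant factors are ignored, so treat these as shapes, not predictions):

    import math

    W_KBPS = 10_000  # assumed raw shared capacity of the radio channel

    for n in (10, 100, 1000, 10000):
        # Gupta-Kumar style scaling for random ad-hoc networks:
        # per-node throughput ~ W / sqrt(n * log n), up to constants.
        per_node = W_KBPS / math.sqrt(n * math.log(n))
        print(f"{n:6d} nodes: ~{per_node:7.1f} kbps each")

The per-node share falls off steeply long before you reach city-scale node counts.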
This is really interesting. Thanks for this comment.
So does something like the Multipeer Connectivity framework exist mostly because it's a much weaker version of true mesh networking? (Or is it not mesh networking at all?)
Do you think weaker versions of mesh networking that don't solve the underlying problems could also be valuable?
You're accepting his defeatist attitude too quickly. The problems he lists are easily surmountable. The military uses mesh networking with a great deal of success. A globally distributed, up-to-date copy of the network topology is a faulty starting point.
Doesn't Sonos employ some type of mesh-based network? It appears to work reasonably well in practice, but I'm not sure whether it's actually a better alternative to standard wifi. Does anyone happen to know more about this?
The article states, "Perhaps most importantly, the idea of mesh networking seems to be a fit with our current trend toward decentralization..."
I don't think that's really an accurate statement. The examples given in the article (Uber, Airbnb, Bitcoin) could reasonably be called "decentralized" (though one could argue that in a case like Airbnb, the fact that you go through a company means it isn't actually decentralized), but on the whole I'd say the trend, for the bulk of the population, is actually toward more centralization. Think Facebook, Google, Twitter, etc. More people are storing their data in fewer places, so I think the trend is the opposite of what the author states. In that sense, it's not at all surprising that mesh networking is not popular.
The cellular network example depends on somewhere having a reliable, direct connection to the parent network.
So, the need for a reliable, direct connection trumps the need for mesh networking.
If you solve the problem of having a reliable, direct connection somewhere, it's more effective to extend that solution than to also take on the problems of mesh networking.
Good timing for a mesh networking post, since this year's WirelessBattleMesh starts tomorrow.
"The Wireless Battle of the Mesh is an event that aims at bringing together people from across the world to test the performance of different routing protocols for ad-hoc networks, like Babel, B.A.T.M.A.N., BMX, OLSR, 802.11s and Static Routing."
I believe the answer is that it's basically a pipe dream. For some reason these sorts of ideas infect people with breathless enthusiasm: telcos won't be necessary anymore, the reliability will be so much better, etc. Of course the reality is different. There are technical reasons for this, such as those mentioned in another comment, but I also think the human factor is important. People would have to be convinced to administer their mesh nodes, and of course there are network effects here (there's really only a point to this if there's a critical mass, at least around you). When you think of the average user, what they really want is to pay some nominal fee to make the maintenance of their uplink Somebody Else's Problem -- they don't want to reconfigure their mesh node just to check facebook &c.
Has anyone worked with hybrid networks? I can see how the routing overhead blows up with scale, but what if you keep everything within a few hops of a node that offloads the traffic to a more traditional system? It could be a good way to extend network range while reducing infrastructure.
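Something like this, maybe -- a toy sketch where each node only searches a small hop budget for the nearest offload gateway, so routing state and flooding stay local (the topology and names are invented):

    from collections import deque

    def route_to_gateway(graph, src, gateways, max_hops=3):
        """BFS out from src; return the path to the first gateway found
        within max_hops. Keeps routing state and discovery traffic local --
        beyond the gateway, the traditional network takes over."""
        seen, q = {src}, deque([(src, [src])])
        while q:
            node, path = q.popleft()
            if node in gateways:
                return path
            if len(path) > max_hops:  # hop budget exhausted on this branch
                continue
            for nb in graph.get(node, ()):
                if nb not in seen:
                    seen.add(nb)
                    q.append((nb, path + [nb]))
        return None  # no gateway reachable within the hop budget

    mesh = {"a": ["b"], "b": ["a", "c", "gw1"], "c": ["b"], "gw1": ["b"]}
    print(route_to_gateway(mesh, "a", {"gw1"}))  # -> ['a', 'b', 'gw1']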
This is the approach I've been thinking of. Launching a mesh in a specific area near a backbone provider where you could cut a deal for reasonable internet connectivity. As the mesh expands, users (in theory) pay less and less, especially as more backbone nodes are connected.
Looking at trends like that is pointless. Interest in anything technical will decrease as more non-technical people come online and the Internet reflects that. I bet there are more people (in absolute numbers) interested today than in 2005.
Do you think the percentage of technical people will increase over time? In the same way that almost no one was literate 1000 years ago and now it's the reverse.
Disclaimer: I'm the creator of a mesh network of sorts -- a decentralised, anonymous 'reddit' called Aether. [0] [1] I'm probably biased.
The main problem with these networks is not declining user interest per se; it's that they are, by their very nature, (a) very hard to engineer and (b) almost impossible to monetise. The engineering effort required to give a decentralised network the resilience (and visible uptime) of the most basic Rails web app on Heroku is orders of magnitude beyond just hosting that app.
For (a): decentralisation is a cruel mistress; every assumption you make about the computers connecting to your network slowly falls apart as you move deeper into the architecture, engineering and design of your services -- which, fortunately, makes it as fascinating a challenge as it is a hard one. On the more unfortunate side, you eventually hit cases that require you to entirely reëngineer your stack to support them, and not providing those users access is philosophically against the democratic access decentralisation offers. For starters, anything higher-level than C/C++ is probably a non-starter, a lesson I learned the hard way. My own work is built in Python, and getting it to perform adequately on even medium-grade computers has been a challenge. A distributed network means distributed loads, and there are a significant number of computers that aren't able to pull their own weight on the Internet, especially in the developing world.
It also means you'll be bogged down with a lot of platform-support issues, small bugs, etc.
For (b), there are two issues. The first is that in a decentralised network it's almost impossible to target a specific person or entity to pay, unless you're willing to create an exception for yourself (i.e. all computers able to call your servers, so that in the absence of your servers the network goes down). But if you're willing to do that, you could just have built a traditional system in the first place -- and you've made it both subject to your whims and as fragile as any other centralised service. That's a no-go.
Beyond this, the moral imperative is that you shouldn't charge for a service that doesn't cost you money to maintain. Charging for additional services is fine, but then you have to figure out how to attach an additional service in such a way that the core network keeps functioning without it.
In general, it's a massive engineering task that isn't lucrative, and the effort required to keep the network functional increases exponentially as it scales. For me, I've been stuck packaging the new version for the last few months because PyInstaller doesn't like the way I split Aether into two applications -- you end up with corner cases nobody on the entire Internet has ever encountered before. There's no Stack Overflow answer for that. It's fun, but it's also tiring.
Question: If you assume that at some point (until it becomes ubiquitous), the mesh network must connect to existing broadband providers for bandwidth, how could you avoid charging?
I'm not suggesting profiting off of users, but rather covering the bandwidth costs incurred by each user in the network. In theory, the costs approach zero over time (once density is reached in each distribution of nodes), but surely some monetary transfer is ok.
Aether is not a full-fledged mesh network in that sense; it still relies on an existing IP connection between nodes. But in the case of full-blown mesh networks, there need to be 'exits' from the mesh to the normal Internet, and those points would cost money to run. In that case, though, the exits would be clearly identified -- not fleeting, interstitial nodes on the mesh -- and you could direct payment to them. The problem arises when you try to transfer money from one mesh node to another, not from a mesh node to a known location (the exit).
Agreed. I'm approaching this thinking that the exit points can be backbone providers and we could build simple software to enable users to pay for the bandwidth (to the exit) that they've utilized. Connectivity within the network would be free and we could even place peering points at the exit to reduce outbound bandwidth.
Then it just becomes a question of incentivizing users by consistently driving down costs (in theory) to promote density of nodes.
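To make the "simple software" idea concrete: metering at a known exit could be as simple as this sketch (the price and all names are my assumptions; settlement is left to whatever payment rail you pick):

    from collections import defaultdict

    PRICE_PER_GB = 0.05  # assumed wholesale transit price, passed through at cost

    class ExitMeter:
        """Count bytes each mesh user pushes through a known exit node.
        Billing only works because the exit is a fixed, identifiable party --
        the point above about fleeting interior nodes still stands."""
        def __init__(self):
            self.bytes_out = defaultdict(int)

        def record(self, user_id, nbytes):
            self.bytes_out[user_id] += nbytes

        def invoice(self):
            return {u: round(b / 1e9 * PRICE_PER_GB, 4)
                    for u, b in self.bytes_out.items()}

    m = ExitMeter()
    m.record("alice", 2_500_000_000)   # 2.5 GB through the exit
    m.record("bob", 700_000_000)
    print(m.invoice())  # -> {'alice': 0.125, 'bob': 0.035}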
Yes, this could be an intermediate state between a full-blown mesh network in which everything is fully peered and a fully centralised network like today's Internet. It all depends on everybody carrying their own weight, though, and of everything discussed, that seems to be the farthest-off piece so far. Phones and other mobile devices make up an increasingly sizeable chunk of the pool, and those devices are barely able to connect and do their own work, much less work for others, due to battery restrictions that are ultimately bound by the laws of physics.
This is also the reason Aether will most probably never have a mobile version. Not until they finally make nuclear fusion work, and compact it enough to make batteries with it!