The radio dongles are superfluous and I can't think of any engineering reason why all cell phones can't just communicate with others nearby already. Maybe not for cellular voice and data, but certainly for something akin to AppleTalk back in the 80s:
https://en.wikipedia.org/wiki/AppleTalk
When I got to college in 1995, all of the Macs were on the campus-wide LAN and lots of Mac users had their public folders shared. There were tons of apps and games and even little personal BBS-style areas where people posted stories and you could share files to their drop box. It was all free and open and frictionless and stands out vividly in my mind as a vision of what we thought the internet was going to be.
If we had that, it would be trivial to run something like IPFS to bypass the ISPs, even without paid cellular service. Then anyone could connect through their tunnel provider of choice and bypass any privacy concerns. The speed would be proportional to the number of nodes, so many thousands of times faster than the internet is today, or ever can be.
With the debates around net neutrality and the cost of streaming, it's like we've forgotten that the only cost of the internet could be the electricity required to run a cell phone.
You’ve just outlined a lot of reasons why we don’t:
1) Filesharing went from “harmless, only a few geeks do it” to “theft on a level that will put ‘us’ out of business”
1a) Media companies and device makers merged, which means that device makers are now beholden to the copyright regime. Previously they argued "we just make tools, we can't be responsible for how people use them" but we've lost those allies.
2) Openly sharing folders by default was great when everyone knew what they were doing, but the public now recognizes that there are real consequences when people don't know what they are doing. From being "exposed as an imposter" (harmless and common imposter syndrome) to all manner of life/death consequences. It's a whole different landscape.
3) No protocol is secure forever. Having all devices communicate with each other means that it’s only a matter of (usually a short) time before people can view other people’s personal information, and that leads right to point 2.
4) Similar to point 1, service providers have deals with device manufacturers, and those providers don't want people to be able to bypass them. They even write terms of service that say you're not allowed to share with your neighbours, so that even in the densest cities they can charge all 500 households that are within wifi range of each other, even though that creates a worse experience as the signals clobber each other.
The electricity required is precisely why we don't do it.
And it's not so much the power itself as the fact that the extremely low energy density of lithium batteries means we have to minimize power usage as much as possible.
I see no reason why mesh networking should need much more power than traditional mobile data. If we assume that the vast majority of potential mesh members are unwanted (i.e. you are not running a 'fully open' mesh network), then you can trivially ignore data that doesn't come from an originator you have pre-approved. Similar to Ethernet MAC filtering; I believe this is how it is implemented in silicon, and indeed that seems to be the case: https://docs.kernel.org/driver-api/80211/mac80211.html?highl...
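To illustrate the kind of originator filtering I mean, here is a minimal sketch (the frame layout and the approved list are made up; real hardware does the equivalent comparison in silicon, before the CPU ever sees the frame):

    # Sketch: drop frames whose originator isn't pre-approved, so the
    # CPU never has to handle them. The frame format is hypothetical.
    APPROVED_ORIGINATORS = {
        bytes.fromhex("aabbccddeeff"),
        bytes.fromhex("112233445566"),
    }

    def should_process(frame: bytes) -> bool:
        if len(frame) < 6:
            return False
        originator = frame[:6]  # assume a 6-byte source address header
        return originator in APPROVED_ORIGINATORS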
If you were relaying a lot of traffic for approved peers, then sure, you are going to have a higher power profile. However, if you are doing this on an ongoing basis you are probably not battery-constrained (e.g. you are connected to a larger power supply such as a vehicle), and you are potentially talking about lower signal strengths and closer peers. So it's really an application-specific concern.
If you read datasheets for radio transceivers, one of the first things that will jump out at you is that simply keeping the analog radio front-end powered on -- either in listening or receiving mode -- is by far the biggest source of power consumption. (At least, this is true for all of the microcontrollers and relatively low-power embedded devices I've looked at. I can't guarantee how true it is for things like phones.)
For instance, the popular ESP8266 chip uses roughly 1mA in sleep mode, 15mA when the CPU is awake but the radio is off, 50mA when receiving, and 100-200mA when transmitting.
Nordic Semi's comparable nRF52810 uses less power, but follows a similar pattern: <1mA sleeping, 2mA with CPU active, 6mA receiving, 8mA transmitting.
So if you tell the radio hardware to discard packets that aren't addressed to you, you may be able to save a small amount of power by avoiding unnecessary CPU activity, but you will still be using dramatically more power than a non-mesh network in which your node could go to sleep when not in use.
In order to have any chance of getting reasonable battery life from a small mobile device, you need a protocol that establishes short time slots for devices to wake up and find out if they have any incoming data, so that they can spend as much time as possible with their radios turned off. This is difficult to achieve in a decentralized mesh.
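To put rough numbers on that, here is a napkin calculation using the nRF52810-class figures above and a nominal 3000mAh battery (the 0.5mA sleep current stands in for the "<1mA" quoted; treat all figures as ballpark):

    # Average battery life for a node that listens for some fraction
    # of the time and sleeps for the rest.
    BATTERY_MAH = 3000.0
    I_RX_MA = 6.0      # radio in receive/listen mode
    I_SLEEP_MA = 0.5   # deep sleep (the "<1mA" above)

    def battery_life_hours(rx_duty_cycle):
        avg_ma = rx_duty_cycle * I_RX_MA + (1 - rx_duty_cycle) * I_SLEEP_MA
        return BATTERY_MAH / avg_ma

    print(battery_life_hours(1.0))   # always listening: 500 h (~3 weeks)
    print(battery_life_hours(0.01))  # 1% duty cycle: ~5400 h (~7.5 months)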
While I don't doubt your line of reasoning, one would assume these factors were taken into account by the specialists who designed the protocols, and that they would have used obvious strategies like reducing the time domain for new peer associations and adding predictive schedules for ongoing transfers. These must be common building blocks in radio protocol engineering.
Also, relatively speaking, modern wireless transfer rates are very fast, so transmissions will only be sporadic, and 200mA is not a lot of current. Ignoring other factors, napkin math says you would have to hold 200mA consumption solidly for 15 hours to deplete a modern smartphone's 3000mAh battery. Most mobile mesh networking use cases will have long since concluded by that point.
According to a 2017 PhD dissertation[0], on a sample device cellular eats way more power than wifi in the normal idle mode (p56). Therefore, if we ditch cellular there may actually be power budget savings when idle. However, all radio communications are totally dwarfed by display power (p58, p67, p72), especially at high brightness levels (p59).
> While I don't doubt your line of reasoning, one would assume these factors were taken into account by the specialists who designed the protocols, and that they would have used obvious strategies like reducing the time domain for new peer associations and adding predictive schedules for ongoing transfers. These must be common building blocks in radio protocol engineering.
In wifi, this is handled by the base station, if I'm correct: it assigns slots to the associated receivers. Assigning slots in a mesh network without coordination is effectively a graph coloring problem and difficult to solve.
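For concreteness, here is what slot assignment looks like when a central coordinator does have the whole interference graph (a toy sketch; "colors" are time slots). The hard part in a decentralized mesh is that no single node ever sees this full picture:

    # Greedy slot ("color") assignment on a known interference graph.
    # Trivial with global knowledge; mesh nodes have no such view.
    interference = {  # toy topology: edges connect nodes that would collide
        "A": {"B"},
        "B": {"A", "C"},
        "C": {"B", "D"},
        "D": {"C"},
    }

    slots = {}
    for node in interference:
        taken = {slots[n] for n in interference[node] if n in slots}
        slots[node] = next(s for s in range(len(interference)) if s not in taken)

    print(slots)  # {'A': 0, 'B': 1, 'C': 0, 'D': 1}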
My professor for my master's thesis was the author of LMAC, which attempted to assign slots to each node. It worked great in theory, but not so much in practice.
LMAC (2003) @ https://research.utwente.nl/files/5427399/VanHoesel_INSS04_0... seemed to target energy efficiency on bursty, short messages on fixed topology wireless networks, whereas we were discussing the potential for ad-hoc mesh for a general use case using existing wireless chipsets.
To take up the tangent, it seems to me that the graph coloring problem can be (1) solved universally (any local mesh will have distinct boundaries which make absolute coordination feasible) by adding some sort of explicitly negotiated or progressively developed shared state to act as a coordination layer, (2) solved temporarily, for example by implementing a best-effort failover/fallback (CSMA/CA feeds back to "color"), or (3) ignored.
Solution class 1 is obviously challenging to apply to mobile and ad-hoc mesh networks. Solution class 2 seems more realistic for this category of use case. Solution class 3 works well if you have full control of the application layer (e.g. a wireless sensor network that is push-only, with short packets and high temporal tolerance for retransmissions).
Feels a bit like "Energy optimization / throughput / ad-hoc topology / traffic generality. Choose any three."
I think if someone could build and demonstrate a cheap node that could be set up to run things like Matrix, and post it to HN, it could take off. I think a lot of people know about mesh networks but have no idea how to set one up, whether it's legal, or whether they can afford it (put me in this camp). But if it's something that is straightforward and that you could cheaply deploy all around the city, then I think people would start putting them up.
This used to be pretty common until applications like Napster took the use case for p2p mainstream with copyrighted music filesharing. Then p2p got a lot more scrutiny. Also, since it is hard to create a search index for wireless devices that may be offline, the mesh network becomes a bit of a darknet.
Traditionally in the US, carriers own customers and mobile device manufacturers distribute through them. Many customers lease-to-own handsets on 'plans', with in-handset DRM to ensure the customer can't easily leave the 'deal'. However, Apple began to change this status quo by launching the iPhone and demanding customer ownership. Carriers derive revenue from customer network use, so any carrier that sees a major push toward dropping them is not going to be happy. But it will happen eventually, though not if Android fails to support it. We desperately need better firmware. Everyone is locked in. There is no better time for an open phone...
If you try to ban the future, it will just happen elsewhere. - Paul Graham (2017)
Note that Adam Dunkels, who wrote this writeup and posted it here, is one of the most accomplished embedded programmers in the world; if you've done TCP/IP on a computer without an OS in the last ten years, you probably used his lwIP stack. What you might not know is that lwIP was originally written for his free-software operating system Contiki, which can run working web browsers not only on a Commodore 64 but even a Commodore PET.
A city-sized IPv6 mesh network built out of handheld-sized devices was science fiction 25 years ago. Metricom's Ricochet showed it was possible, without the IPv6, about 23 years ago. And after decades of resistance and sandbagging from regulators, Thingsquare is finally making it happen in real life.
This sounds like one of those tools that was built internally that could spin out to be its own project/startup. It's such a clever and easy way to solve their problems.
What's OP's view of the Helium network? It basically solved the issue of network deployment for LoRaWAN. It covers the vast majority of American population centers and Western Europe, and soon South America and APAC.
OP here. I think Helium(*) is super interesting. They are building an access network for LoRa by using crypto coins to incentivize people to deploy base stations. You essentially get crypto coins for every packet that passes through your own base station. And anyone who wants to use the network pays for access using crypto coins.
The idea is that if you are designing and selling a product, say, a dog collar, you can build Helium(*) capabilities into your product by including a LoRa chip in it. Your customers can then use the Helium(*) network to communicate with the dog collar, for example to locate a missing dog. The fees for this service then flow to the people who have deployed the base stations that facilitate this communication.
To me, this seems like a great way both to build an access network and to get people invested in (and excited about) the process. There are now also a bunch of companies taking advantage of this network for their products.
(What we at Thingsquare are doing, and what is discussed in the article, is a little different from what Helium(*) is doing. We are providing a single-purpose network for one system/product, such as a street lighting system. That entire system is connected using its own mesh network, and that mesh network is typically not used for anything but that product's communication needs.)
There are three hotspots available for a relatively obscure IoT network. Not only that, if you look at the data tab on these, they're actually getting use. Really cool.
I would be curious to read about use cases from people who use the Helium network as a network. When I previously looked online, all the information I saw was about people setting up the miners; I didn't see anything about what using the network is actually like.
I was disappointed to see that the network seems intended for transmitting very small amounts of data. For example, by my messing around with their calculator, transmitting a gigabyte of information would cost ~400 dollars.
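For reference, the arithmetic behind that figure, assuming the commonly cited rate of one Data Credit ($0.00001) per 24-byte message (treat the numbers as approximate):

    # Rough cost of pushing 1 GB through a network priced per 24-byte packet.
    GB = 1_000_000_000
    BYTES_PER_PACKET = 24
    USD_PER_PACKET = 0.00001  # 1 Data Credit

    packets = GB / BYTES_PER_PACKET    # ~41.7 million packets
    print(packets * USD_PER_PACKET)    # ~$417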
LoRa is for low data rates, such as sensors, location trackers, and IoT applications. If your town has a Starbucks, you most likely have coverage right now.
The network gets global utilization, processing about 40M packets a day. Not big in dollar terms yet, but people are indeed using it. Definitely nobody is suggesting you watch YouTube or even send emails with it.
If a packet is 24 bytes then 40 million packets a day is slightly less than a gigabyte of data a day. Seems like really low throughput for a global network that extends to every town with a Starbucks.
Do you have a source for the 40m packets number? Does that include the packets that the miners send to each other for proof of coverage?
I'm having trouble imagining the uses for an expensive, slow, unreliable, bespoke network that's limited to the areas with helium miners nearby. If you have an example of someone using this I'd love to read more.
About usage, this tech has historically been used for industrial enterprise applications, like sensors, valves, and environmental data. A large-scale network has never been available to individuals, so there aren't many consumer-facing applications. I think the first big one will be very cheap package tracking.
I think that's what they're referring to, www.helium.com.
"Mining HNT with Hotspots is done via radio technology, not expensive or wasteful GPUs. … Hotspots work together to form a new global wireless network and undertake ‘Proof-of-Coverage’."
"Tokens & Data Credits: The network uses two units of exchange: HNT, a new cryptocurrency, and Data Credits.
Proof-of-Coverage: Our novel proof-of-work algorithm enables Hotspots to be rewarded trustlessly.
Helium LongFi: LongFi combines the low-power, long-range LoRaWAN wireless protocol with the Helium Blockchain."
I get the impression it's to create a cryptocurrency motivation for providing good network coverage. I don't get how it works, but I assume, like all things involving blockchain, there are externalities or unintended effects that make it not a good solution.
Yes, it's an economic mechanism to build wireless networks. There are definitely externalities, as you say. However, what I find amazing is that they successfully built the network when all other traditional means of doing so had failed. See the recent bankruptcy of Sigfox.
Helium is the quintessential example of Blockchain doing obvious social good when all traditional means of accomplishing the same goal had failed.
I can't quite figure out from the site: I provide some connectivity to the network, but don't I have to worry about anonymous traffic, as if I were running a Tor exit node? Seems to me that should be a big concern.
Helium reached a scale orders of magnitude larger than TTN in 1/5th the time. As TTN is a community effort it can't really fail in a traditional sense, but it has been lapped several times over at this point.
Similarly to Bitcoin, miners are paid by a combination of minting HNT (whose issuance halves over time) and fees paid for network usage. I'm very curious to see how this will play out over time. If network usage and fees don't increase while HNT issuance drops, will miners stop mining? Would the revenue still be enough to incentivize long-term maintenance? Unlike Bitcoin, if miners stop mining, that directly reduces the value of the network due to a decrease in coverage. Is there a possibility of a spiral, where network usage drops due to reduced network coverage, and then miners stop mining due to the drop in usage?
I've not really dug into the details as to what solutions Helium has, but it is quite interesting to see how this experiment will play out.
All great questions. Helium mining is unique in that operational expense is close to zero; once set up, it's actually more trouble than it's worth to turn off. In many cases, turning off literally means climbing a tower. Power and bandwidth cost a cup of coffee a month. Capex is shrinking as the network matures and lower-power hardware can do the same job as previously beefier units.
A much more realistic risk is sudden insolvency of a specific vendor (who are also responsible for maintaining firmware). There are about 30 approved vendors, growing monthly, but some have a larger share of the mining pool than others. The community is anticipating long term business risks and devising mechanisms to prevent a sudden loss of a large percentage of nodes due to business risk.
I firmly believe there is a significant flywheel developing here and I recently left my FAANG job to build in this ecosystem.
Wow, what a cool service. It would be fascinating to see what could be done if you bought a lot of plugin devices and then just gave them out to neighbors.
The page really isn't clear but I assumed it was for IoT devices like a coffee maker. If all the coffee makers talk to each other, great, but someone has to connect to the internet at some point. I'm clearly missing some context here.
MAC address filtering comes with its own downsides. For one, it’s very easy to spoof. So you can act as a node in the mesh network if you copy the MAC address of a known node.
Implications: elevated access within the mesh network; capturing traffic between clients and the mesh; and localized disruption of the mesh due to unstable connections caused by duplicate MAC addresses on a single network.
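As a sketch of how low the bar is (using scapy; the addresses and interface name are placeholders):

    # Send a frame with a forged source MAC. "Identity" at the MAC layer
    # is just a writable field. Requires root and a suitable interface.
    from scapy.all import Ether, Raw, sendp

    spoofed = Ether(src="aa:bb:cc:dd:ee:ff",   # MAC of a known mesh node
                    dst="ff:ff:ff:ff:ff:ff") / Raw(b"hello mesh")
    sendp(spoofed, iface="wlan0")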
I think both the statement and the criticism are oversimplifications.
It sounds like the criticism hints at the hidden node problem. (Briefly: One of the largest problems in wireless networking. A and B can hear each other, B and C can hear each other, but A and C can't. So when C wants to talk to B, how does it know if A is already transmitting and B's receiver is already listening? In large networks this gets hugely important.)
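A toy sketch of that situation, with a made-up three-node topology:

    # C senses the channel, hears nothing (it can't hear A), transmits,
    # and collides with A's ongoing transmission at B.
    hears = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}  # who can hear whom
    transmitting = {"A"}  # A is already sending to B

    def channel_busy_at(node):
        return any(tx in hears[node] for tx in transmitting)

    print(channel_busy_at("C"))  # False: C thinks the channel is free
    transmitting.add("C")
    collision_at_B = sum(1 for tx in transmitting if "B" in hears[tx]) > 1
    print(collision_at_B)        # True: B hears both A and C at once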
But it sounds like the simulation is already maximally pessimistic about it -- there's no RF attenuation in the drawer, so all the devices occupy the radio channel whenever they're transmitting. So I would imagine that software that works well in this case, would probably also do decently in the real world. (As opposed to software tested on a naively-simplified network of RF cabling and attenuators, which would not model the hidden node problem's complexities very well.)
So, while the real world will surely be more complex than they hint at in their sentence, I think the simulation drawer is also better than you hint at in yours.
There are nuances, of course, and frankly there are RF network simulators they could be using, but I think the presented MAC-filter approach is not terrible.
OP of the article here. You are absolutely correct. The statement in the article is very much a simplification: in the real world, wireless communication is tricky. Not only are there issues such as the hidden node problem (and the exposed node problem, for that matter), but it also changes over time.
It is possible to take these issues into consideration, but as you say, that requires a very different infrastructure than what this article is talking about. The way we do it at Thingsquare is to use a software-based simulator where we can both emulate the software on the devices and have full control of the simulated radio medium - if we need it. Sometimes using RF attenuators and/or cabling is useful.
In the end, what it comes down to is that each tool has one or more use cases where it shines, but there is no one tool that fits all.
I'm not a HAM guy - would you care to distill the lesson that they are ignoring? I infer that, by definition, this is not a mesh, but more of a train. And I further infer that a single node falling out would break the chain and split the "mesh". But I'm just guessing.
It doesn't say how many nodes could hear each node in the emulated environment, only that it's a subset of the total, unlike the unemulated environment where they're all in a drawer. There's nothing there that precludes having enough connectivity for nodes to route around failures, as far as I can see.
https://futureprepping.com/mobile-phone-mesh-networks/