One missing feature: deferred message propagation. As far as I understand, while messages will be rebroadcast until a TTL is exhausted, there is no mechanism to retain in-transit messages and retransmit them to future peers. While this adds overheads, it's table stakes for real-life usage.
You should be able to write a message and not rely on the recipient being available when you press send. You should also be able to run nodes to cache messages for longer, and opt in to holding messages for a greater time period. This would among other things allow couriers between disjoint groups of users.
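To make that concrete, here's a minimal sketch in Go of what I mean (all names invented here; this is not BitChat's actual design): each node keeps a cache of undelivered messages and offers the unexpired ones to any newly seen peer, dropping them only once the TTL runs out.

    package main

    import (
        "fmt"
        "time"
    )

    // storedMessage is a hypothetical envelope for an undelivered message.
    type storedMessage struct {
        id        string
        recipient string
        payload   []byte
        expires   time.Time
    }

    // store caches messages for retransmission to peers seen in the future.
    type store struct {
        messages map[string]storedMessage
    }

    // offerTo returns every unexpired message the peer doesn't already hold;
    // expired entries are pruned as a side effect.
    func (s *store) offerTo(peerHas func(id string) bool) []storedMessage {
        now := time.Now()
        var out []storedMessage
        for id, m := range s.messages {
            if now.After(m.expires) {
                delete(s.messages, id) // TTL exhausted: stop carrying it
                continue
            }
            if !peerHas(id) {
                out = append(out, m)
            }
        }
        return out
    }

    func main() {
        s := &store{messages: map[string]storedMessage{
            "m1": {id: "m1", recipient: "H", payload: []byte("hello"),
                expires: time.Now().Add(5 * 24 * time.Hour)},
        }}
        // A new peer appears: hand over anything it doesn't already hold.
        for _, m := range s.offerTo(func(string) bool { return false }) {
            fmt.Printf("retransmit %s toward %s\n", m.id, m.recipient)
        }
    }

A node that opts into holding messages longer would simply use a larger expiry window.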
I’ve read all the posts and, as the 'old man of the village', I would suggest taking a look at FidoNet. It was running 40 years ago, for more than a decade, before the internet was available to the average person.
Store-and-forward, hierarchical organization, scheduled transmissions, working over dial-up and radio links, everything is there.
There is nothing new to invent, and it was far more reliable than the 10m real-world range of BT5 (not the 1km claimed for lab devices, which aren't commercial phones).
A BT5 mesh only works under well-defined conditions, which usually coincide with the cases where you don't actually need it.
FidoNet has a lot of it solved, for sure. But doesn't it rely upon pre-configured paths between nodes in order to handle message routing?
If so, then: Wouldn't it fall down completely when operating in the ever-shifting and inherently disorganized environment that a sea of pocket supercomputers represents?
I don’t take concepts as a 'full package'. I evaluate what is worth taking based on the requirements. The brilliant part of FidoNet is the asynchronous persistence.
In a 'sea of supercomputers,' a real-time mesh (like Bluetooth) fails because it requires an end-to-end path right now. Store-and-Forward allows a node to hold a message until it 'sees' any valid peer, turning every 'meat-bot' into a mobile post office.
My main concern with this entire discussion is the reliance on Bluetooth to achieve the result.
If we truly want to build a free and open intercommunications system, we must put all ideas on the table, establish clear targets (a doomsday system or inviting a friend for a drink), and evaluate what is truly available versus what is not.
Only from that foundation can we begin to define a project that survives the real world.
Well, no: In the scenario I outlined, there's now still just one node with the message for H. A passed it to K, and promptly forgot about it (having passed it along to "any" valid peer).
---
In your scenario, both A and K store the message for H -- suggesting replication (or perhaps, redundancy) by visiting peers. And maybe replication is OK.
It seems obvious that it can spiral out of control, but our pocket supercomputers do have a fair bit of bandwidth even at Bluetooth speeds, and flash memory is very cheap and available (a gigabyte of flash can hold a lot of short-ish text messages and costs very little).
So the network can afford quite a lot of replication in an effort to promote distribution -- and maybe that can work. Maybe the message isn't stored by just A and K, but also E, I, O, and U because they happened to stroll by and see the outbound message for H.
But there must be limits, if for no other reason than without limits then any single bad actor can ruin the whole works by exceeding the bandwidth and storage capabilities of the network.
These limits could be hop-based, or time-based, or geography-based, or any/all of the above.
Suppose a message lives until any of 50 hops or 5 days or 50 miles is exceeded? Yeah, maybe something like that works. The capabilities can be mathed to find some version of "ideal," and probably enforced somehow to prevent bad actors from doing too much bad stuff.
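A sketch in Go of what enforcing "any of 50 hops or 5 days or 50 miles" might look like (the constants are just the invented numbers from above, and the geography check is one plausible choice):

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // Limits after which a carried message is dropped; the exact numbers
    // would need to be mathed against real bandwidth and storage.
    const (
        maxHops  = 50
        maxAge   = 5 * 24 * time.Hour
        maxMiles = 50.0
    )

    // expired reports whether any single limit has been exceeded.
    func expired(hops int, created time.Time, milesFromOrigin float64) bool {
        return hops > maxHops ||
            time.Since(created) > maxAge ||
            milesFromOrigin > maxMiles
    }

    // haversineMiles approximates great-circle distance between two
    // lat/lon points, for the geography-based limit.
    func haversineMiles(lat1, lon1, lat2, lon2 float64) float64 {
        const earthRadiusMiles = 3958.8
        toRad := func(d float64) float64 { return d * math.Pi / 180 }
        dLat := toRad(lat2 - lat1)
        dLon := toRad(lon2 - lon1)
        a := math.Sin(dLat/2)*math.Sin(dLat/2) +
            math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*
                math.Sin(dLon/2)*math.Sin(dLon/2)
        return 2 * earthRadiusMiles * math.Asin(math.Sqrt(a))
    }

    func main() {
        created := time.Now().Add(-6 * 24 * time.Hour)      // 6 days old
        d := haversineMiles(37.77, -122.42, 37.80, -122.27) // ~10 miles
        fmt.Println(expired(12, created, d))                // true: age limit hit
    }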
(But we're very rapidly straying very far from Fidonet's normal distribution behavior here, and dismantling that concept was the main crux of how I got to thinking about how these things might theoretically work to begin with.)
Thanks for posting - this is really interesting. An idea perhaps whose time may have come. Out of interest (no criticism implied), do/have you used this tech? And if so, what was your experience?
I never actually used Fidonet. I started on BBS systems just as the internet was becoming affordable, and I made the switch early.
However, I apply the concepts of FidoNet almost every day. I often design offline-first devices, where store-and-forward logic is a necessity, not an option. Many are deployed in remote areas where signals are never optimal and a high-gain antenna is the only solution.
I also prioritize binary protocols over structured JSON; you have a much higher probability of delivering a few hundred bytes of binary data than a bloated text object when the link budget is tight.
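A toy comparison in Go of what that buys you (the struct and field names are invented, not any real protocol's wire format; errors elided for brevity):

    package main

    import (
        "bytes"
        "encoding/binary"
        "encoding/json"
        "fmt"
    )

    // reading is a hypothetical telemetry sample.
    type reading struct {
        Node  uint16  `json:"node_id"`
        Seq   uint32  `json:"sequence"`
        Temp  float32 `json:"temperature_celsius"`
        Humid float32 `json:"relative_humidity"`
    }

    func main() {
        r := reading{Node: 7, Seq: 123456, Temp: 21.5, Humid: 63.2}

        // Structured JSON: self-describing, but every key costs bytes.
        j, _ := json.Marshal(r)

        // Packed binary: fixed layout agreed on by both ends.
        var b bytes.Buffer
        binary.Write(&b, binary.BigEndian, r)

        fmt.Printf("json:   %d bytes %s\n", len(j), j)
        fmt.Printf("binary: %d bytes\n", b.Len())
        // On a tight link budget, 14 bytes is far more likely to arrive
        // intact than ~80 bytes of JSON.
    }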
Finally, I never expect Wi-Fi to work beyond 5-10m when the router is placed inside a metal structure (hence my skepticism about BT on a cruise ship).
this is prob the 100th time ive read about bitchat here, and the comments are largely the same (use briarchat, none of these really work that well, i dont like jack dorsey, etc) every time.
but this is interesting. and i agree strongly with this: "While this adds overheads, it's table stakes for real-life usage."
i suppose events like iran are really making me wonder if this stuff is possible. it feels like anyone who's under the chokehold of regimes has completely run out of options, but even in America I'm getting the sweats wondering if there's going to be a time where such tech is needed. from what i gather none of these decentralized p2p messengers work well at all, but I also haven't truly tried. I can think of some moments that would've been viable test grounds though. Was at the Outside Lands festival in San Fran and cell service was pretty much DOA due to the volume of people trying to hit the same tower(s). Even AirTags, which everyone in the group had on their beltloop, weren't working.
It's funny how 3 or 4 similar BLE systems each are slightly different, and yet no one wants to just merge all the features for an obviously superior product. Everyone seems fine squabbling about which incomplete app/system is better.
Just take what's there and include the obvious next steps:
- Meshtastic's and Meshcore's ability to use relay nodes for long-range BLE networks (which Briar doesn't allow)
- Store and hold encrypted messages, as noted above.
- Ability to route through the internet, prioritize routing methods, disable internet routing, etc.
- Ability to self-host server for online relays (similar to Matrix)
Bitchat does work with Meshtastic as of the most recent release. It also lets you self host a relay, because it uses Nostr relays. I'm not so sure about white/black listing so yours DOES get used, but you can absolutely host one. Routing through the Internet is something both Bitchat and Briar support, Briar through tor, Bitchat through Nostr (optionally also through tor). Disabling Internet routing at this time may require turning off Internet for Bitchat -- haven't dug on that one.
I do like the store and forward idea, though a thought on that is that while it makes sense for DM's, it makes less sense for group chats, which, being real time, make the shelf life of messages a bit short. It makes good sense for forum like content though. I think so far Bitchat has treated this as a bit out of scope, at least at this stage of development, and it is a reason that indeed, Briar is still quite relevant.
Bitchat only just recently even added ad hoc wifi support, so it's still very early days.
> while it makes sense for DM's, it makes less sense for group chats, which, being real time, make the shelf life of messages a bit short.
Neither are real time once you introduce delayed communication. Not sure I see the distinction.
Actually, I'd argue that unreliable transport breaks the real-time assumption even without introducing delayed communication. Is there immediate feedback if your message can't reach its destination?
Lack of retention can actually be a feature in these types of situations. It should be opt-in. The government would then need to infiltrate the network in order to read the conversations, instead of just retrieving the messages from the cache on a confiscated phone.
I'd consider end-to-end encryption to also be table stakes, at least opportunistically after the first message in each direction. With encryption, cached messages are far less harmful (though they still leak very useful metadata); without encryption it seems almost trivial to spy on any communications.
E2E encryption probably isn’t enough to protect activists trying to organize. Without onion routing, where you pre-compute some nodes in the network that the message MUST transit prior to delivery, each decrypting a layer until it arrives at the recipient (like Tor), you still leak who’s talking to whom.
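A conceptual sketch in Go of the layering, using AES-GCM from the standard library (real onion routing like Tor negotiates per-hop keys via key exchange and adds padding, so treat this as illustration only; error handling elided):

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "fmt"
    )

    // seal encrypts msg under key with AES-GCM, prefixing the nonce.
    func seal(key, msg []byte) []byte {
        block, _ := aes.NewCipher(key)
        gcm, _ := cipher.NewGCM(block)
        nonce := make([]byte, gcm.NonceSize())
        rand.Read(nonce)
        return gcm.Seal(nonce, nonce, msg, nil)
    }

    // open reverses seal with the same key.
    func open(key, ct []byte) []byte {
        block, _ := aes.NewCipher(key)
        gcm, _ := cipher.NewGCM(block)
        n := gcm.NonceSize()
        pt, _ := gcm.Open(nil, ct[:n], ct[n:], nil)
        return pt
    }

    func main() {
        // One symmetric key per pre-computed relay (in reality negotiated
        // per hop, never shared up front like this).
        hopKeys := make([][]byte, 3)
        for i := range hopKeys {
            hopKeys[i] = make([]byte, 32)
            rand.Read(hopKeys[i])
        }

        msg := []byte("meet at the fountain")

        // Sender wraps innermost layer first, so the first relay peels first.
        onion := msg
        for i := len(hopKeys) - 1; i >= 0; i-- {
            onion = seal(hopKeys[i], onion)
        }

        // Each relay removes exactly one layer; no single relay sees both
        // the original sender and the final plaintext recipient.
        for _, k := range hopKeys {
            onion = open(k, onion)
        }
        fmt.Println(string(onion))
    }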
Neither E2EE nor Tor is enough to protect someone being targeted by state-level actors. They're helpful, but if you're a high enough value target, they only slow down your adversary. If you're relying on algorithms on your computer to protect you, you should be prepared to meet the hacking wrench. [1]
If the political environment gets bad enough, you may expect to die anyway, and the TTL difference that obfuscation provides means the difference between making a small improvement before the inevitable, or not.
If the user can get immediate access to older messages then normally those messages will be available on a confiscated phone. That's why things like Signal have you set a retention period. A retention period of zero (message is gone when it scrolls off the screen) is safest.
If you want to protect older messages you can have the user enter a passphrase when they are in a physically safe situation. But that is only really practical for media like email. Good for organizing the protest but perhaps not so great at the protest.
>At its core, BitChat leverages the Noise Protocol Framework (specifically, the XX pattern) to establish mutually authenticated, end-to-end encrypted sessions between peers.
I actually wrote a Noise implementation and someone wanted to make a Bitchat implementation with it, but my impl only supports BLAKE2B (and I got the impression this person really didn't know what they wanted to do in the first place). It's kinda sad more haven't moved to BLAKE2B (or BLAKE3, which I almost never hear anyone talking about).
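For what it's worth, BLAKE2b is readily available in Go via golang.org/x/crypto, including its built-in keyed mode (just the hash calls here, not a Noise handshake):

    package main

    import (
        "fmt"

        "golang.org/x/crypto/blake2b"
    )

    func main() {
        // Plain BLAKE2b-256 digest.
        sum := blake2b.Sum256([]byte("hello"))
        fmt.Printf("%x\n", sum)

        // BLAKE2b also has a built-in keyed mode, so it can act as a
        // MAC without a separate HMAC construction.
        h, err := blake2b.New256([]byte("an up-to-64-byte secret key"))
        if err != nil {
            panic(err)
        }
        h.Write([]byte("hello"))
        fmt.Printf("%x\n", h.Sum(nil))
    }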
Not just deferred message propagation, but also a way to setup medium to high powered rebroadcasting stations. For political unrest scenarios, you don't always need 2-way communication, but you do need to distribute critical info. A listen-only mode makes it very difficult to track individual users (no RF transmissions), and would cover a large percentage of a critical use case.
All of this is solved with the store-and-forward model that you highlight.
A Lora dongle seems to be better than BT, though potentially incriminating.
iOS definitely made a name for itself, to the ire of many, for this many moons ago, but it's a fairly ubiquitous default behavior for mobile phone operating systems now (because battery life), even on Android.
If the idea is to have devs implement each kata, wouldn't it be more effective to provide not only automated tests, but also code which should be used as a basis for each challenge?
For example, if supporting a dev tag to serve assets from the filesystem, why not include a simple webserver which embeds the contents?
This would allow aspiring gophers to go straight to the high value modification of a project rather than effectively spend most of the time writing scaffolding and tests.
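For example, a minimal Go scaffold using the standard embed package, with a hypothetical -dev flag to serve assets from the filesystem instead (assumes an assets/ directory next to the source):

    package main

    import (
        "embed"
        "flag"
        "io/fs"
        "log"
        "net/http"
        "os"
    )

    //go:embed assets
    var embedded embed.FS

    func main() {
        // Hypothetical -dev flag: serve assets from disk for live editing,
        // otherwise from the copy embedded in the binary.
        dev := flag.Bool("dev", false, "serve assets from the filesystem")
        flag.Parse()

        var assets fs.FS
        if *dev {
            assets = os.DirFS("assets")
        } else {
            var err error
            assets, err = fs.Sub(embedded, "assets")
            if err != nil {
                log.Fatal(err)
            }
        }

        http.Handle("/", http.FileServer(http.FS(assets)))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }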
A large percentage of git users are unaware of git-absorb (https://github.com/tummychow/git-absorb). This complements just about any git flow, vastly reducing the pain of realising you want to amend your staged changes into multiple commits.
This sits well alongside many TUIs and other tools, most of which do not offer any similar capability.
I gave it a try a few months ago, and wasn't impressed. About a quarter of the time it got confused about the commit it should squash into, and left the repo in a half-applied state. This inconsistency was enough for me to not trust it when it did work, so I stopped using it.
Honestly, it's too much magic for my taste. And, really, it's not much manual work to create fixup commits for the right commit anyway.
I see the usefulness. But my client is magit, and committing and rebasing are so quick that this would save perhaps 30 seconds to one minute in my workflow. And I do not like most Rust tools, because they're too dependency-heavy.
Definitely. The instant fixup feature is just three keystrokes away (s c F). The only thing this helps is when you don't want to spend the extra brain cycles to figure out which commit to fixup on.
The task that absorb speeds up is finding the commit where each hunk was last changed. The actual committing and rebasing is still basically the same.
Git blame via `M-x vc-annotate` in Emacs. But if I have a clean PR, that usually means one to three commits (if it's not a big refactoring), so the whole point becomes moot. In magit, if you create a fixup or a squash commit, it will present you with the log to select the target.
Yes, or magit-blame, but if you still have multiple commits in your history that you are working on, and you need to break up the current changes in a bunch of instant fixups, figuring out which one is the right one can be a bit time consuming. I'm not convinced that automatically amending to the last commit that touched that line is safe, but I'm willing to try git-absorb.
This is exactly what I want when baking bread: I have a fixed sequence of steps, spaced quite far apart, and this is pretty much perfect: a series of relatively short breaks when autolysing and kneading, then waiting 10 hours overnight, then waiting 75 minutes after proofing.
I'm not sure how well this will work on a mobile; the service worker might be stopped after a few hours, particularly with the screen off overnight
Yes! I noticed this too. I really want to go with the PWA / open approach, however after a bit more research it looks like the PWA would be barely usable on mobile without the "closed APIs" a native app has :/
Will research what is the thinnest "wrapper" I can use to avoid maintaining 3 different apps.
This would be more impactful if we could see that the cost to US purchasers was actually 39% more. Sadly, some manufacturers spread the cost across all consumers, which means non-US customers are actually paying some of the tariff costs too.
I imagine some manufacturers used tariffs as a reason to lower the price of the products they import into the US while also raising the price outside of the US to balance that change, but that doesn't mean the manufacturer or their customers outside of the USA are paying anything towards tariffs. The entire tariff transaction is between the customer and the US government, and it's all transacted within the USA.
Tariffs are a tax, paid on the value of imported goods, by US citizens who are buying things from outside of the USA. That's it. They are not paid by anyone outside of the US.
Let’s say I’m a widget seller in the US, and my widgets cost $100 to import from Switzerland before tariffs. I retail them at $150 USD in the US, but I sell internationally. In the UK for example, I retail them at £113 (simple conversion, obviously it doesn’t really work like this).
Now tariffs are imposed, my import cost per widget is $139. Not only do I have to jack up my US price to $189, I have to jack up my UK price to £142, meaning UK customers are also paying the tariff now.
Even if you’re a bit smarter about your logistics and use an FTZ or drawback against the import duties, imagine you sell two widgets, one where you don’t pay import duties (bound for the UK) and one where you do (remaining in the US). Your total cost to import is $239.
Instead of making your US customers eat all the cost of the tariff, you might instead adjust your retail prices to $170 and £128 respectively. Again, now your British customers are paying an increased price due to the tariffs.
That only works if you have no competition in the UK. Why would your customers there continue to buy from you when you are now more expensive than the competition?
Edit: for that matter, if you could raise your prices without losing any customers, why did you wait for the tariffs? You should have already done it.
Ok, so I should have said that I make gizmos which depend on Swiss widgets, I’m not just round-tripping Swiss widgets through the US, I’m adding some value somewhere.
I didn’t say I wouldn’t lose any customers. I probably will, but this way I probably lose the fewest.
It's very likely that for luxury items the price is what people are willing to pay. And it's adjusted for each country accordingly.
Thus, the change may simply be that the profit margin for sales into the US drops (or rather, that it skews that way).
But there are still many commodities where you're not pricing the product based on branding.
These commodities will likely still have the same price on the international market. And thus, consumers in the US will see the effects of tariffs in the price.
Such commodities could be finished goods, but also parts, machines or feedstock for industry in the US.
I'd also guess that if you look at what middle class people buy, these commodities make up a larger percentage of the expenditure -- than it does for wealthy people.
Making tariffs a very regressive tax.
Most people won't care about the price of a luxury watch.
But most people will buy aluminum cans, etc.
Switzerland would sell the same widgets to the UK for much less, since they wouldn't be hit by the tariffs and also wouldn't be paying to ship Europe->America->Europe.
They don't sell for the highest possible price, because sales would suffer; they try to sell at the price point that brings them the most profit. Adding tariffs will shift the point of maximum profit upwards, drastically so for low-margin goods.
Companies sell for the highest price they can without losing more sales than it's worth, exactly as you say. I thought that was such common knowledge it didn't need to be mentioned.
Adding tariffs means that you might have to lower your profit margins to remain competitive or that you will have to stop exporting to that nation completely, or start manufacturing in that nation. Which is the purpose of tariffs.
The manufacturer is subsidising the tariffs if they lower their price in the US to counteract part of the tariff. When they charge other markets more to make up for the cost, they are making those markets pay for the subsidy.
That might be a short-term strategy to avoid losing market share in the States, and it's rational if you think the tariffs are temporary. For goods like iPhones, which are truly global, that might last. But it doesn't look like a stable equilibrium in the long term for any good which can support multiple suppliers, because manufacturers who don't do this will be more competitive in non-US markets.
Seems to have been the case with PS5 and Xbox consoles. The rest of the world was effectively subsidizing US gamers for a while, until prices there were jacked up even higher.
Aren't most subscriptions significantly cheaper outside the US? I don't know specifically about those, but YouTube's premium is pennies on the dollar in places like Ukraine, Turkey, etc.
The tariff is applied to the import value. For many products you'll get a significant markup on top within the US for distribution, which is not affected by the tariffs.
> Sadly some manufacturers spread the cost across all consumers
Of course not. They charge the highest price they possibly can in each market, regardless of other factors. They're not compensating this here or that there. Every company always charges as much as it can get away with; that is the core function of business.
This was a big worry initially when the tariffs were announced but it doesn’t actually seem to be happening. Most manufacturers are not adjusting their price structure because the effects are super hard to estimate (don’t forget that the US is still just 20% of worldwide demand)
The gaming market is a good example of how they tried to mitigate this. First, Nintendo tried to avoid raising the price of the Switch 2 by instead increasing the prices of accessories. Then they raised Switch 1 prices. They might still end up needing to raise the Switch 2 price if this keeps up.
Sony and Microsoft at first did price hikes outside of the US, an example of how other countries may be paying for US tariffs indirectly. But as of a month ago they relented, and eventually both did price hikes on their systems.
This might have been true three months ago, but it isn't any more. Narrow margin business like independently owned coffee shops are already seeing consumables increase in price by up to 3x, which then leads them to have to add "tariff surcharges" that show up on their POS devices.
The profit margin on selling brewed coffee is normally counted in thousands of percent. Any tariff on the imported coffee beans does not have any noticeable effect on operating costs of a coffee shop, so you can safely assume that the owners are just lying in order to jack up their prices, which might work depending on customer base.
Operating costs for coffee shops mainly come from other things than the beans, such as rent, utilities, wages.
I think we agree?! I’m arguing that the tariff is being passed on to US prices and not distributed onto the worldwide customer base. A manufacturer that doesn't adjust their price structure is passing the price on because the tariff is applied by the government and not by the company selling the product.
Is there a risk that this will underemphasise some values when the source of error is not independent?
For example, the ROI on financial instruments may be inversely correlated to the risk of losing your job. If you associate errors with each, then combine them in a way which loses this relationship, there will be problems.
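A toy Monte Carlo illustration in Go (numbers invented): if you sample the two error sources independently when they're actually inversely correlated, the probability that both go wrong at once comes out far too low.

    package main

    import (
        "fmt"
        "math"
        "math/rand"
    )

    func main() {
        rng := rand.New(rand.NewSource(1))
        const n = 500_000
        rho := -0.7 // assumed: returns drop as job-loss risk rises

        bothBadIndep, bothBadCorr := 0, 0
        for i := 0; i < n; i++ {
            z1 := rng.NormFloat64()
            z2 := rng.NormFloat64()
            ret := z1                                    // portfolio return (standardised)
            riskCorr := rho*z1 + math.Sqrt(1-rho*rho)*z2 // job-loss risk, correlated (2D Cholesky)
            riskIndep := rng.NormFloat64()               // job-loss risk, correlation dropped

            // "Both bad" = return one sigma low AND job-loss risk one sigma high.
            if ret < -1 && riskIndep > 1 {
                bothBadIndep++
            }
            if ret < -1 && riskCorr > 1 {
                bothBadCorr++
            }
        }
        fmt.Printf("P(both bad), independent model: %.4f\n", float64(bothBadIndep)/n)
        fmt.Printf("P(both bad), correlated model:  %.4f\n", float64(bothBadCorr)/n)
        // The independent model makes the joint worst case look far rarer
        // than it really is.
    }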
Despite what you get in Australia being pretty reliable, it's too expensive to justify quite yet. My 8kW solar is connected to a Fronius inverter, but until I find a less expensive option I can't justify adding a battery.
A 13kWh system is over $AUD10k, and the ROI is on par with the expected lifespan of the battery.
If sodium cells can bring the price down to $AUD100 it would indeed be a massive game changer.
Unfortunately it doesn't seem to help at all, I think mainly because (at present) Go's PGO basically inlines hot functions, and the important code here is all in one big function.
It is only mildly effective because of how anemic the Go compiler is. And even then it's extremely limited. If you want to see actually good implementations, look into what the OpenJDK HotSpot and .NET JIT compilers do with runtime profiling and recompilation (.NET calls it Dynamic PGO).
I've also found htmx a great way to retain server-side rendering while minimising the recalculation cost of client changes.
By avoiding the need to add lots of client-side logic while still getting very low latency updates, it's given me the best of both worlds.
The way Django's template system works also makes it so easy to render a full page initially, then expose views for subcomponents of that page with complete consistency.
On a tangent, Django's async support is still half-baked, meaning it's not great for natively supporting long polling. Using htmx to effectively retrieve pushed content from the server is impeded a little by this, but I slotted in a tiny Go app between nginx and gunicorn and avoided async hell without running out of threads.
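The shim really can be tiny. A minimal sketch of that kind of Go long-poll relay (routes and payloads invented here): each open htmx poll costs one goroutine instead of a gunicorn worker, and Django just POSTs a notification when content changes.

    package main

    import (
        "log"
        "net/http"
        "sync"
        "time"
    )

    // hub fans notifications out to all currently waiting long-poll requests.
    type hub struct {
        mu      sync.Mutex
        waiters []chan string
    }

    func (h *hub) wait() chan string {
        h.mu.Lock()
        defer h.mu.Unlock()
        ch := make(chan string, 1) // buffered so publish never blocks
        h.waiters = append(h.waiters, ch)
        return ch
    }

    func (h *hub) publish(msg string) {
        h.mu.Lock()
        defer h.mu.Unlock()
        for _, ch := range h.waiters {
            ch <- msg
        }
        h.waiters = nil
    }

    func main() {
        h := &hub{}

        // Django (behind gunicorn) POSTs here whenever new content exists.
        http.HandleFunc("/notify", func(w http.ResponseWriter, r *http.Request) {
            h.publish(`<div id="updates" hx-swap-oob="true">new content</div>`)
        })

        // htmx long-polls here; holding the request open costs a goroutine,
        // not a worker thread.
        http.HandleFunc("/poll", func(w http.ResponseWriter, r *http.Request) {
            select {
            case msg := <-h.wait():
                w.Write([]byte(msg))
            case <-time.After(30 * time.Second):
                w.WriteHeader(http.StatusNoContent) // let htmx re-poll
            case <-r.Context().Done():
                // client went away; its buffered channel is dropped on next publish
            }
        })
        log.Fatal(http.ListenAndServe(":9000", nil))
    }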