I was thinking exactly this. I am the maintainer of ntfy.sh, and my costs are $0 at the moment because DigitalOcean covers them 100% since the project is open source. Otherwise it would be around $100, though I must admit the setup is quite oversized. However, my volume is much, much higher than what is described in the blog.
I suspect that the architecture can be improved to get the cost down.
There are some other interesting repos from the same author, namely https://github.com/pijng/goinject, which lets you inject code as part of preprocessing. Feels a lot like Java’s annotation magic.
Thanks for sharing. I wasn’t even aware Go had pre-processors, or that modifying the AST like that is even possible.
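For anyone equally curious: the standard library makes basic AST rewriting surprisingly approachable. Here's a minimal toy sketch of my own (not goinject's actual API) that parses a file, renames every function, and prints the result:

```go
package main

import (
	"go/ast"
	"go/parser"
	"go/printer"
	"go/token"
	"os"
)

func main() {
	src := `package demo

func Hello() string { return "hello" }
`
	fset := token.NewFileSet()

	// Parse the source text into an AST.
	f, err := parser.ParseFile(fset, "demo.go", src, parser.ParseComments)
	if err != nil {
		panic(err)
	}

	// Walk the AST and rewrite every function declaration's name.
	ast.Inspect(f, func(n ast.Node) bool {
		if fn, ok := n.(*ast.FuncDecl); ok {
			fn.Name.Name = "Injected" + fn.Name.Name
		}
		return true
	})

	// Print the modified source. A real preprocessor would feed this
	// back into the build (goinject, if I understand it right, hooks
	// the compile step via -toolexec).
	printer.Fprint(os.Stdout, fset, f)
}
```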
I wholeheartedly disagree. I think Stripe's docs and developer experience are some of the greatest, if not the greatest, I've ever seen.
It has great user docs, API docs, and developer-centric UI elements like copy-pastable IDs, a webhook event browser, time-travel features, test mode (!!), and you can even look at the exact API calls that the Stripe UI itself is making. I've told many colleagues how awesome the docs and experience are...
IMHO, for most things, the data model is straightforward and well explained. Of course there are complicated topics and quirks, but that's just because payments is not easy in general.
I'm clearly a Stripe fanboy, but I am not affiliated in any way.
I agree. I think Stripe is complicated because accepting payments is complicated. It's easy to start a new service that only supports 80% of use cases, especially if you don't have to consider fraud or regulatory requirements. But that remaining 20% is what kills your simplicity.
Fun fact: `ping 0` works because 0 is the decimal IP notation for 0.0.0.0. It's one of my favorite age-old WAF bypasses, since it doesn't match the octet-notation regexes that are often in place.
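To make the bypass concrete, here's a quick Go sketch (illustrative only): a typical dotted-quad regex never matches "0", while inet_aton-style parsing treats a bare number as the full 32-bit address:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

func main() {
	// The kind of dotted-quad pattern a naive WAF rule might use.
	octets := regexp.MustCompile(`^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$`)
	fmt.Println(octets.MatchString("0")) // false -- the filter never catches it

	// inet_aton-style parsing: a bare number is the whole 32-bit
	// address, so "0" is 0.0.0.0 and "2130706433" is 127.0.0.1.
	n, _ := strconv.ParseUint("2130706433", 10, 32)
	fmt.Printf("%d.%d.%d.%d\n", byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
}
```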
That reminds me of a just-for-fun plugin I wrote for my (long-dead) Dropbox-like file sync solution Syncany that would store the shared files as PNG images on Flickr.
At the time, Flickr and Google Picasa (now Photos) gave you something like 1 TB of image storage for free, so I thought it'd be a dope backend. It actually worked really, really well... And it was nice to see your data as images, though since the files were packed and encrypted, it all just looked like static.
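The core trick is tiny. A rough sketch of the idea in Go (the actual Syncany plugin certainly looked different, and a real version would also record the payload length so the padding can be stripped on download):

```go
package main

import (
	"image"
	"image/png"
	"os"
)

// bytesToPNG packs arbitrary data into an RGBA image. Encrypted
// input looks like random pixels -- i.e. static. PNG is lossless,
// so the bytes survive the round trip intact.
func bytesToPNG(data []byte, path string) error {
	// 4 bytes per pixel; fix the width and grow the height as needed.
	pixels := (len(data) + 3) / 4
	w := 256
	h := (pixels + w - 1) / w
	img := image.NewRGBA(image.Rect(0, 0, w, h))
	copy(img.Pix, data) // remaining pixels stay zero (padding)

	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	return png.Encode(f, img)
}

func main() {
	if err := bytesToPNG([]byte("secret chunk contents"), "chunk.png"); err != nil {
		panic(err)
	}
}
```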
Pēteris' model with healthchecks.io was a large inspiration for me, and it is the reason ntfy.sh follows the same model: open source, self-hostable, fun-driven development.
Devil's advocate. As someone who has developed a Linux based appliance with over 100k live units across the globe, it seems insane to NOT have access to the thing you're selling and that you have to maintain. If your thing breaks or gets bricked by an update, you will call support and expect them to fix it. You don't want to send in your device or have a support technician come to your house to fix it.
So yes, to the conspiracy theorists it may look like a secret backdoor -- it sorta is. But in many cases I bet it's just a safety net for developers and support to fix things.
I speak for myself and my own experience working for $oldjob. Other companies or countries may of course use this differently. And of course companies get sold and such, so you never know.
> As someone who has developed a Linux based appliance with over 100k live units across the globe, it seems insane to NOT have access to the thing you're selling and that you have to maintain.
I’ve developed Linux devices selling that many units (and more) and I’m baffled that anyone would think this is a viable way to handle things at this scale.
Units like this should have a firmly read-only Linux firmware that can only be changed by signed updates. The only data you would actually get or modify is the diagnostic data or the contents of the settings. Both of those can be sent through mechanisms that shouldn’t require SSH access.
The correct way to handle this is with a debug info feature. Put something in the app that will zip up logs and configuration files and send them in for support, with the user’s explicit permission obviously. If you can’t figure it out from logs, you can use their config files to clone the situation on a device in the office.
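To show how little machinery that takes, here's a rough sketch of such a debug-bundle feature in Go (the paths and names are invented for the example; a real device would have its own log and config locations):

```go
package main

import (
	"archive/zip"
	"io"
	"os"
	"path/filepath"
)

// writeDebugBundle zips the given files so the user can attach them
// to a support request -- with their explicit permission.
func writeDebugBundle(out string, files []string) error {
	f, err := os.Create(out)
	if err != nil {
		return err
	}
	defer f.Close()

	zw := zip.NewWriter(f)
	defer zw.Close()

	for _, path := range files {
		src, err := os.Open(path)
		if err != nil {
			continue // skip unreadable files rather than failing the whole bundle
		}
		dst, err := zw.Create(filepath.Base(path))
		if err != nil {
			src.Close()
			return err
		}
		_, err = io.Copy(dst, src)
		src.Close()
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = writeDebugBundle("debug-bundle.zip",
		[]string{"/var/log/syslog", "/etc/myapp/config.yaml"})
}
```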
The bigger issue is: Who are you going to task with SSHing into customer devices? With 100K or more people filing support requests, it would be insane to have engineers handling those requests with anything having to do with SSH. It would be equally insane to hand off access to customer support people and give them the keys to SSH into customer devices.
I agree that that is the gold standard: an immutable Linux that is well tested on your own hardware and upgraded exactly like that.
At the time, I inherited a system that had 30-50k units deployed and was updated via Debian/APT. Older units were running Ubuntu 10.04 (it was 2016) and were hopelessly outdated. We managed to pull every single device to Ubuntu 16.04 and designed a fully automated, image-based update mechanism for them (I've linked it in other posts). We aimed for read-only base systems, but it was too tricky, so the images stayed read-write, with configs migrated across upgrades.
At the time, customers even had access via SSH (similar to NAS devices these days).
I think what you are describing works for well-defined hardware with a medium-complexity software stack, or at least something that is limited in terms of peripheral device usage.
The appliance I was managing made heavy use of RAIDed disks, ZFS, loop devices, dmsetup, and many other Linux tools that we have all seen fail in horrible ways.
Not having SSH access, and not being able to diagnose lockups or hanging processes (D-state issues) on a live system, would have severely crippled our ability to fix these issues. Many of them I'm sure we would not have been able to. We had failing disks, slow disks, failing RAM, hanging loop devices, corrupt loop devices, hanging ZFS, hanging ZFS, hanging ZFS, many of whose bugs we fixed upstream, and and and...
On top of that, we had a "bring your own device" product that literally allowed people to use whatever hardware they wanted. That makes the read-only firmware approach even trickier.
As I said at the beginning, I agree with you in principle, but there are many cases where it's not so black and white. And I can fully understand the rationale for providing remote access.
Side note: I would never have expected to be downvoted on HN for respectfully expressing an opinion about a subject I have knowledge of, just because it is the "unpopular" opinion. On Reddit, I'd expect to be downvoted for something folks don't like, but on HN I thought the button was just for use against trolling and such.
Re your side note, yes this is the new HN. People use the downvote as a lazy "I disagree". On the plus side, that's mainly the people who tend to read and react within the first 30 to 60 minutes of a comment being posted. After that the votes usually right themselves.
If you sold it, you should not have remote access to it.
Auto-update is de facto isomorphic with remote access capability but that doesn't mean you should have a remote shell. At most, maaaaybe a way for the customer to enable a shell for developer support.
Otherwise, an A/B setup to avoid remote bricking, plus DFU or whatever the current standard is for customer-driven unbricking in exceptional cases. But really, test all the forward and reverse update cases, and keep a handful of samples of all shipped hardware so you can make sure everything actually works, and so you can figure out how to fix it when you mess it up. Always test upgrades starting from factory fresh, with every version you ever shipped from the factory. (I've run into products where, several updates in, version X would work or not based on the original factory version from forever ago, because of original config or something that didn't get migrated properly but never caused problems until recently.)
If you have the ability to update firmware, you have the ability to add remote access whenever you like. You're already trusting the vendor either way.
That said, this current situation of an always-on SSH connection/backdoor is just begging to be exploited by an irate employee, curious intern, or worms. It's impossible to know what sort of safeguards the vendor has in place, if any.
Putting a lock on a nuke is good, but not building the nuke at all is better.
That is correct. But it is possible to design a system with short-lived auth tokens/keys and frequent key rotation. I designed such a system at $oldjob for remote access (see [1]). Obviously there is always some risk, and there are always syseng/ops people with access.
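For illustration, the core of such a scheme can be sketched in a few lines of Go (names, TTLs, and key handling here are made up, not what $oldjob actually shipped): HMAC-signed tokens with a short expiry, so a leaked token ages out quickly and rotating the signing key invalidates everything outstanding at once.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strconv"
	"strings"
	"time"
)

// issue mints a token for one device, valid for ttl.
func issue(key []byte, device string, ttl time.Duration) string {
	exp := strconv.FormatInt(time.Now().Add(ttl).Unix(), 10)
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(device + "|" + exp))
	sig := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	return device + "|" + exp + "|" + sig
}

// verify rejects anything expired or not signed with the current key.
func verify(key []byte, token string) bool {
	parts := strings.SplitN(token, "|", 3)
	if len(parts) != 3 {
		return false
	}
	exp, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil || time.Now().Unix() > exp {
		return false // malformed or expired
	}
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(parts[0] + "|" + parts[1]))
	want := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	return hmac.Equal([]byte(want), []byte(parts[2]))
}

func main() {
	key := []byte("rotate-me-often") // in practice: pulled from a KMS and rotated
	t := issue(key, "device-1234", 15*time.Minute)
	fmt.Println(t, verify(key, t)) // true while the token is fresh
}
```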
That's a fair argument, but it doesn't appear that updates are high on Sleep Number's priority list:
> The hub includes Python 2.7.18. While extremely old (keep in mind the Hub appears to have been last updated in 2018)
If we give them the benefit of the doubt, perhaps they intended to keep it up to date. But ultimately, companies need to either be transparent about their remote access and manage it responsibly, which includes keeping the system patched, or give up that access.
I am not defending them for not keeping their stuff up-to-date, but it is very common practice for embedded systems to be hopelessly outdated. I've done what OP describes with IPMI/BMC systems for $mainboardmanufacturer1 and $mainboardmanufacturer2 (both really big name brands), and their BMC systems were equally outdated. It was almost comical, but really sad at the same time.
Moral of the story is to firewall things off really well, I suppose.
At $oldjob, I designed an upgrade mechanism to do A/B image updates so things were always up to date, or at most 2-3 weeks out of date. See [1].
For small embedded systems that do not have enough space/bandwidth, this may not be feasible though.
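The core of the A/B pattern is conceptually small; here's a sketch of the general idea (not the actual mechanism from [1], and the slot layout is invented): write the new image to the inactive slot, verify it, and only then flip the boot flag, so an interrupted or corrupt update never touches the running system.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// Hypothetical slot layout: real devices would use raw partitions and
// a bootloader variable rather than plain files.
var slots = map[string]string{"A": "/data/slot_a.img", "B": "/data/slot_b.img"}

func applyUpdate(active string, img io.Reader, wantSHA256 string) error {
	inactive := "B"
	if active == "B" {
		inactive = "A"
	}

	f, err := os.Create(slots[inactive])
	if err != nil {
		return err
	}
	defer f.Close()

	// Stream the new image into the inactive slot while hashing it.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), img); err != nil {
		return err
	}
	if hex.EncodeToString(h.Sum(nil)) != wantSHA256 {
		return fmt.Errorf("checksum mismatch, boot flag stays on slot %s", active)
	}

	// Only now mark the new slot bootable. A watchdog would flip back
	// automatically if the first boot from the new slot fails.
	return os.WriteFile("/data/boot_slot", []byte(inactive), 0o644)
}

func main() {
	f, _ := os.Open("/tmp/new-image.img") // source of the new image (example path)
	defer f.Close()
	if err := applyUpdate("A", f, "expected-sha256-hex"); err != nil {
		fmt.Println("update aborted:", err)
	}
}
```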
Even if it didn’t have the intentional backdoor… you probably should be treating it as hostile anyway.
Even where not intentionally hostile, not intentionally privacy invading, not trying to fetch updates so it can show you more ads, not… most of this stuff is so hopelessly out-of-date and full of security vulnerabilities it’s only not hostile out of luck.
I don’t connect anything to WiFi unless absolutely necessary. And by that I don’t mean “the device demands it” (I just won’t buy the damn thing) but “it’s a core part of the functionality I’m asking of it”. I’ll prefer zwave/zigbee, Bluetooth, or something else wherever possible when communication is required. (If I were forced to use this bed and it had no manual controls I would definitely have used Bluetooth, avoiding this whole issue.)
And even for the devices that do get a WiFi connection… they run entirely isolated, on a separate SSID and VLAN from my normal devices and traffic, and with a whitelist for what traffic is allowed.
As far as I’m concerned the only difference between this bed and the other devices is that we know about the issues with this bed. We have no reason to believe that the other devices are any better, and in fact a pretty large body of evidence suggesting that they’re probably not.
> And even for the devices that do get a WiFi connection… they run entirely isolated, on a separate SSID and VLAN from my normal devices and traffic, and with a whitelist for what traffic is allowed.
This is what I do today, and honestly I'm about to give up. We lost. Trying to get stuff like AirPlay/DLNA to work via mDNS across subnets is already impossible, and telling family to switch networks if they want to control X with their phones is just a shit solution. I have to disable 90% of my vehicle's "infotainment" screen to not feel spied upon, which breaks the app I use for remote starts, etc.
Maybe when the "Mega-Hack of 2025" happens and all IoT devices go nuclear something will change. But for now, if you buy a device it expects to be on one giant /24 and anything different creates problems. I'm starting to spend way more time than I want maintaining all the various pieces of networking glue that keeps my devices and home automation functioning. It's no longer fun, and I'm tired of fighting it.
I still have an ancient sleep number bed, with no connectivity. It's leaking, and old enough to drink. I'd like to replace it, but still can't bring myself to do it because of articles like this.
I've never felt more like Abe Simpson yelling at a cloud.
> This is what I do today, and honestly I'm about to give up. We lost. Trying to get stuff like AirPlay/DLNA to work via mDNS across subnets is already impossible, and telling family to switch networks if they want to control X with their phones is just a shit solution. I have to disable 90% of my vehicle's "infotainment" screen to not feel spied upon, which breaks the app I use for remote starts, etc.
I guess I never really specified, but I was only referring to "this random IoT/embedded crap" when I said devices.
My main network has all of our computers, phones, tablets, etc. None of it is really restricted or isolated for the reasons you mention.
The main network _also_ has things like the Apple TV. On balance, it's (1) a device from a reputable vendor that (2) gets regular patches and updates and (3) would be an absolute pain in the dick to isolate.
(The whole reason I own the Apple TV in the first place is because I was never going to hook the Smart TV crap up to the network because I have zero trust that it will be secure or receive useful updates (I'm sure they'll find a way to shove more ads in it...) and it works fine as a TV without it.)
If I were to try and boil this sort of intuitive sense down to a somewhat useful heuristic... if it has a keyboard or has somewhere I can plug one in it's probably going on the main network by default.
My isolated network (well, networks) are for everything else.
There's one for my IP cameras that has no external routing. It only allows communication from Blue Iris to individual cameras and vice-versa. These are all cheap cameras full of security holes and a compromise has a high impact on my privacy (someone literally watching me in my house). Additionally, since most of them are wired this provides some protection against somebody pulling a camera off my wall and connecting a different device to that cable.
Another is for my home automation stuff. I've managed to build it out almost entirely with zwave, but there are still a few things on wifi. This also has no external routing, only allowing communication between Home Assistant and devices. I didn't achieve this by carefully curating firewall rules, but carefully choosing what I purchased. When I needed an air quality monitor, I ended up buying from a less well-known German company at a higher price specifically because "operating with no internet connection or app" was one of their supported use cases. Generally, anything that Home Assistant lists as needing the manufacturer's API for the integration just gets no further consideration.
Not to get too engineering-manager-y, but look at each risk in terms of the likelihood, impact, and effort to mitigate:
- The likelihood of the Apple TV being compromised is pretty low. The impact if it were is maybe moderate; everything within the network is still _secure_ in other ways. The effort to mitigate this through network isolation (as you're saying) is very high. Screw it, main network. We'll mitigate as much as we can by ensuring that updates are installed.
- The likelihood of one of our computers being compromised is moderate. The impact to the network is moderate. The effort to mitigate this through network isolation is, again, very high.
- The likelihood of this $20 Chinese IP camera being chock full of vulnerabilities is 100% (I've found vulnerabilities myself!). The impact is very high (someone watching me in my home). The effort to mitigate is very minimal (totally isolate from the network and greater internet, use my own DVR instead of their broken mobile app and cloud service). It's getting isolated.
- The likelihood of this wifi door lock being insecure is pretty high (though the likelihood of it being compromised by someone with physical access to my house is low). The impact is moderate. The effort to mitigate by buying a zwave lock instead is... pretty near nil. Risk avoided entirely!
As far as effort and risk, this strikes the right balance for me. It may or may not for you. The only advice I'd give is don't let the perfect be the enemy of the good. Don't burn yourself out chasing perfect and fall back to "bad" if "good enough" is an option.
While 2.7.18 is ancient, it's also the last version of Python 2 there will ever be.
I've got several programs stuck on 2.7.18, as they have sizable dependencies that never got updated to Python 3 -- unless I'm willing to rewrite several large Python packages, I'm stuck here forever. As long as the program isn't network connected, I don't see a problem with pinning a Python version and a set of packages and leaving the software running forever.
It does seem insane. But the support engineer having local network access after remoting in, without the customer's willing consent, also seems insane. It's obviously there so they can fix these devices, but shortcuts made for engineers are such a common security risk.
Ideally you would have a backdoor on the device that's open only to the local network. The user runs an app on their PC and provides willing consent for someone to complete a support task by giving a one-time code (OTC) to the engineer. The app then discovers the device and hosts the session for the engineer. If the user can't perform such a task, they can probably buy a device with one button on it that will, or pay for a callout or a return.
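Roughly, the device side of that could be as small as this sketch (all names, ports, and addresses are invented): the helper listens on the LAN only and refuses everything until the user relays the on-screen one-time code to the engineer.

```go
package main

import (
	"bufio"
	"crypto/rand"
	"fmt"
	"net"
	"strings"
)

func main() {
	// One-time code the user reads out to the support engineer.
	b := make([]byte, 4)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	otc := fmt.Sprintf("%x", b)
	fmt.Println("Support code (valid for this session only):", otc)

	// Listen on the LAN only -- nothing here is reachable from the
	// internet, and nothing runs without the code.
	ln, err := net.Listen("tcp", "192.168.1.50:7022") // device's LAN address (made up)
	if err != nil {
		panic(err)
	}
	conn, err := ln.Accept()
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	line, _ := bufio.NewReader(conn).ReadString('\n')
	if strings.TrimSpace(line) != otc {
		fmt.Fprintln(conn, "bad code, goodbye")
		return
	}
	// With consent established, hand the connection to a real
	// (audited, time-limited) diagnostic shell or log streamer.
	fmt.Fprintln(conn, "consent confirmed, starting support session")
}
```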