The reason I'm considering a Pi cluster is resilience and repeatability. The reason I don't have one yet is that (like you) I'm unconvinced it's the right way to get that.
At least in theory, a Pi cluster has better failure modes than a single machine even if it's less powerful overall. And yes, I'm currently running on an old laptop -- but it's all a bit ad-hoc and I really want something a bit more uniform, ideally with at least some of the affordances I'm enjoying when deploying stuff professionally.
Your house is a single failure domain, so you're not really going to be resilient to a lot of common failures. Most of these home labs have every device plugged into the same UPS, so there's really no difference between the power failure domain of 10 Pis and that of one desktop computer plugged into a UPS connected to your home router.
Do yourself a favor and buy a NAS and a compatible UPS. Any modern NAS software will speak one of the UPS IP protocols to handle graceful shutdown if your power goes off. Once you have the money, buy a second NAS, put it in a relative's house, set up a wireguard/tailscale tunnel between the two devices, and use it for offsite backups.
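As a rough sketch of the offsite-backup half (assuming the remote NAS is reachable over the tunnel at a hostname like `backup-nas` and accepts rsync over ssh -- the names and paths below are placeholders, not anything standard):

```python
#!/usr/bin/env python3
"""Minimal offsite-backup sketch: push local NAS shares to a remote NAS
over an existing wireguard/tailscale tunnel using rsync-over-ssh."""

import subprocess
import sys

REMOTE = "backup@backup-nas"                        # hypothetical peer on the tunnel
SHARES = ["/volume1/photos", "/volume1/documents"]  # local shares to mirror
DEST = "/volume1/offsite"                           # destination root on the remote NAS

def mirror(share: str) -> int:
    """Mirror one share; -a preserves metadata, --delete keeps an exact copy."""
    cmd = ["rsync", "-a", "--delete", f"{share}/", f"{REMOTE}:{DEST}{share}/"]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    failures = [s for s in SHARES if mirror(s) != 0]
    sys.exit(1 if failures else 0)
```

Most NAS vendors ship something equivalent as a scheduled task, so in practice you'd likely just point that at the tunnel address rather than rolling your own.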
It's not power failure I'm worried about, it's failure of other systems. But even so, yes, it's more likely that the systems that are supposed to ensure redundancy will fail than the otherwise-non-redundant services.
For what it's worth, my house isn't a single power domain -- I'm running some of my services on an old laptop that's still got some battery. But it goes to sleep if the power goes off, and needs physical intervention to recover. Which is an excellent example of redundant systems providing multiple single points of failure and of the perils of using random left-over hardware to run stuff.
Separating storage, compute, and database into dedicated appliances makes sense for home labs. Getting redundant units early is key.
A pre-built NAS (configured for RAID 5 at least) is worth the cost. Storage should be set-and-forget: drives will fail, and hot-swapping a replacement with an automatic rebuild should mean zero downtime to your life. Commercial NAS solutions have proven backup workflows.
For compute and databases, home setups can mirror to cloud or remote locations. Proxmox makes this straightforward with its web admin - just a few clicks to spin up replicas.
Modern consumer hardware and internet are quite reliable now. Business-grade equipment performs even better.
On the other hand, suddenly not offering cover at all is a problem for people who have established interests in a property.
I can see an argument for not writing new policies in an area. But I can also make an argument for allowing existing policyholders to renew -- maybe not at the previous rate, but at an appropriate rate for the risk.
As a matter of public policy, we ought to match the risk the bank puts on a homeowner through the mortgage with the risk the insurer assumes while the homeowner keeps paying their premiums -- not let the insurance company shift the risk back onto the homeowner if it notices the risk has gone up before the loss is realised.
Alternatively, we need to start treating buildings insurance more like (UK) life cover: I took out decreasing life insurance when I took out my mortgage, it'll pay off the mortgage if I die. The amount of cover goes down every year to roughly match me paying off my mortgage. No matter what happens to my health in the meantime, if I keep paying the premiums then I keep the cover -- even if I wouldn't qualify for new cover.
Or maybe we need to say that if an insurance company declines to renew because they think the risk has risen too much, the customer should be allowed to claim on the expiring policy even if the house is still standing, because it's obviously worthless, and it's obviously due to a risk that was covered by the policy.
If you want a longer insurance term, it needs to be agreed upfront. I'd guess insurance companies are well aware of the risks of writing long-term policies and so don't usually offer them. That being said, your comparison to term life insurance is quite apt -- I wonder if such insurance policies actually exist. I would guess they'd cost more than a yearly renewing policy, but who knows.
Your other proposals, as extensions to yearly terms, certainly go too far. Annual renewal policies are commonplace, and it should be well understood that there's no obligation on any party to continue them.
Oh, definitely. At least not without a lot of discussion around how much the extra insurance would have cost. I'm not in a position to implement it either :).
If we're going to have state intervention, though (and it seems to be at least under discussion in CA, though I've no idea how seriously), then rather than an insurer of last resort, we (or rather they) should consider what they actually want from their insurance.
It's obviously not, or you wouldn't find so many people (successfully!) arguing against your interpretation.
That's one of the problems with a codified constitution that's as ossified as the one in the US: the language used gets interpreted, and so the meaning of the language depends on the interpretations favoured by whoever's currently holding the reins.
The first test of a class of tests is the hardest, but it's almost always worth adding. Second and subsequent tests are much easier, especially when using:
Parametrised tests let you test more things without copying boilerplate (there's a rough sketch after this list), but don't add variants just to inflate the count. Having said that:
Exhaustive validation of constraints, when it's plausible. We have ~100k tests of our translations for one project, validating that every string/locale pair can be resolved. Three lines of code, two seconds of wall-clock time, and we know that everything works. If there are too many variants to run them all, then:
Prop tests, if you can get into them. Again, validate consistency and invariants.
And make sure that you're actually testing what you think you're testing by using mutation testing. It's great for having some confidence that tests actually catch failures.
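A rough pytest/hypothesis sketch of the parametrised, exhaustive, and property-based points above -- the `slugify`/`resolve_string` functions and the tiny translation table are made up purely for illustration:

```python
import re
import pytest
from hypothesis import given, strategies as st

# Toy stand-ins so the sketch is self-contained; in real life these would be
# your application code and your actual translation catalogue.
TRANSLATIONS = {
    ("greeting", "en"): "Hello", ("greeting", "de"): "Hallo",
    ("farewell", "en"): "Bye",   ("farewell", "de"): "Tschüss",
}
ALL_KEYS = sorted({key for key, _ in TRANSLATIONS})
ALL_LOCALES = sorted({loc for _, loc in TRANSLATIONS})

def resolve_string(key, locale):
    return TRANSLATIONS.get((key, locale))

def slugify(text):
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")


# Parametrised: one test body, many cases, no copied boilerplate.
@pytest.mark.parametrize(("raw", "expected"), [
    ("Hello World", "hello-world"),
    ("  spaces  ", "spaces"),
    ("trailing---", "trailing"),
])
def test_slugify_examples(raw, expected):
    assert slugify(raw) == expected


# Exhaustive: stacking parametrize gives the full key x locale cross product --
# the "every string/locale pair resolves, in three lines" pattern.
@pytest.mark.parametrize("key", ALL_KEYS)
@pytest.mark.parametrize("locale", ALL_LOCALES)
def test_every_translation_resolves(key, locale):
    assert resolve_string(key, locale) is not None


# Property-based: when the input space is too big to enumerate, assert an
# invariant over generated inputs instead of checking specific outputs.
@given(st.text())
def test_slugify_is_idempotent(raw):
    assert slugify(slugify(raw)) == slugify(raw)
```

Something like mutmut or cosmic-ray can then mutate `slugify` and check that at least one of these tests goes red -- that's the "am I actually testing what I think I'm testing" part.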
In the UK, if you're at all over the line when the lights change, you're considered "in the junction" and are expected to leave the junction -- the next phase should give you priority to do so. The only way to run a red light is to start crossing the line while the light is red -- although plenty of drivers will start to inch across while they're supposed to be waiting :P.
The most annoying scenario is where a driver has either stopped or inched forwards far enough that they can't actually see the lights any more and don't know when they've changed.
Because: no, I don't remember when maintaining a blog was the way to build your developer brand, nor that thoughtful technical writing alone would get you anything -- if no-one reads what you write, the result is the same as it's always been.
I do think there was a distinct period 10-15 years ago where, particularly when trying to break into the startup scene as a rookie developer coming from a non-CS education background, the blog was a useful place to write about technology just to show you were actually connected to "the scene" somehow even though you had limited professional experience. Somewhere out there I still have a neglected personal blog with posts from those days where I was hacking open source firmware onto a WiMAX receiver so I could use it as a router, sharing source for some audio processing effects plugins I wrote in college courses, things like that... cool stuff for a college kid, but not what I spend time doing now years into my career. At this point my credibility is from my professional work.
Of course, that's exactly why it eventually became standard for every aspiring rookie developer to have a blog; and eventually these just turned into straight-up programming TIL blogs as more and more people who weren't ever tech hobbyists entered the field. The signal quality diminished, and in the modern world of ML-assisted ATS resume screening it may not even be a signal at all.
Basically, "building a brand" as a certain kind of technologist had value when relatively few people were doing it. Now I think its very fair to question as a tool for getting hired.
Anybody who writes because they like writing should certainly continue to do so!
As the article points out, the TPM is not in a good place, architecturally, to use for DRM: there's no path from the TPM to the screen that's not under OS (and thereby user) control.
Currently, no. But once (undetectable) OS modification is no longer possible, making the undecrypted media unreadable is just a few API restrictions away.
On Android phones, for example, you cannot screenshot banking apps. And if you root (modify the OS of) your phone, banking apps refuse to work.
However, for the question at hand, that's irrelevant: a better (for DRM) solution exists today, and they're already using it.
I'm not saying that the TPM is incapable of being abused by manufacturers and OS authors, but the FSF really weakens their argument when they predicate it on something that's not actually true. Ex falso quodlibet (you may prove anything if you rely on a falsehood).
The phones are using their TPM equivalent to do it securely, though -- there's not nearly enough entropy in a lock screen to provide robust security, but the boot-time unlock depends on both the screen lock and the hardware, and the hardware will rate limit attempts to use it to turn lock screen inputs into usable encryption keys.
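Conceptually it's something like the sketch below -- pure illustration, since the real logic runs inside the secure element/TEE and the names here are made up:

```python
import hashlib
import os
import time

# Stand-in for a per-device secret that, on a real phone, never leaves the
# secure element; everything below would run inside that hardware, not here.
_HARDWARE_SECRET = os.urandom(32)
_failed_attempts = 0


def derive_unlock_key(pin: str) -> bytes:
    """Turn a low-entropy lock-screen PIN into a usable encryption key.

    A 6-digit PIN alone is trivially brute-forced offline; mixing it with a
    device-bound secret forces every guess through this specific device, and
    the throttling below caps how fast those guesses can be made.
    """
    if _failed_attempts >= 5:
        time.sleep(30 * _failed_attempts)  # real hardware escalates the delay
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), _HARDWARE_SECRET, 200_000)


def record_failed_attempt() -> None:
    """Called when the derived key fails to decrypt anything useful."""
    global _failed_attempts
    _failed_attempts += 1
```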
Which is also where the A1, A7, and A8 meet.