reminds me of the recently enacted FAIR plan in California for last-resort wildfire insurance. It got state dispensation to carry otherwise-disallowed, lopsided balance sheets to cover more people -- but if a small fraction of those people do experience wildfire it'll go bust!
--
edit: see below, I was wrong about FAIR being newly enacted
The California FAIR Plan was created in 1968, so I’m not sure where you’re getting your information.
It was entirely self-funded by premiums until the Eaton and Palisades fires and, unlike the NFIP, it still hasn’t been bailed out by the federal government.
> However as of this year it's got ~$600B of exposure and $400MM in funds.
The California FAIR Plan only has $377 million in liquid funds because it pays premiums for $5.75 billion worth of reinsurance. Roughly 1% of their clientele (by value) would have to lose their homes to put the plan under strain. That's what may happen with the Palisades/Eaton fires, for the first time in nearly sixty years. Current estimates are $5-9 billion in claims, so the current worst-case scenario is a multibillion-dollar bailout by the state (not federal!), which is well within the state's budget.
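Rough math behind that 1% figure, using the exposure number you quoted:

    1% of ~$600B total exposure      ≈ $6B in claims
    $377M cash + $5.75B reinsurance  ≈ $6.1B available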
Also a nitpick: (almost?) no one is receiving $3 million on a FAIR Plan, even in the Palisades or Malibu. That's the theoretical maximum, but since it doesn't cover the value of the land, the actual coverage is much lower.
I have a lot more to say about the circumstances of these latest fires (several of my friends lost their homes in both neighborhoods), but suffice it to say I don't think this disaster is representative of future liabilities.
Oh, and next time I'll be sure to quote your entire original post, because this:
> It got state dispensation to carry otherwise-disallowed, lopsided balance sheets to cover more people -- but if a small fraction of those people do experience wildfire it'll go bust!
is shit you edited in after you were called out on your ignorance of the FAIR Plan's origins.
You don't know the first thing about how insurance works in California, and it shows.
Have some dignity. It's a lot cheaper than California fire insurance.
I use an eGPU via USB4/Thunderbolt (I think it's the same? Not entirely clear). Works out of the box on Linux. No real setup needed. The main downside is that removing it tends to make the system somewhat unstable and lock up (sometimes hours later) after a kernel "oops". I need to look into that because it's probably a relatively minor Linux kernel issue. But minus that: it works great.
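In case it helps anyone hitting something similar, this is roughly how I poke at it (assumes the bolt daemon/boltctl is installed; the grep patterns are just what I happen to look for):

    # list Thunderbolt devices and their authorization state
    boltctl list

    # follow the kernel log around unplug to catch the eventual "oops"
    sudo dmesg --follow | grep -iE 'thunderbolt|pcieport|oops'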
I have read that Thunderbolt and OCuLink are very different in this regard. Whereas Thunderbolt devices can be plugged in at any time, OCuLink devices need to be plugged in at boot time. This seemingly innocuous detail hints at why OCuLink performs better: it comes down to PCIe vs. Thunderbolt in general.
While PCIe as a standard allows for hot swapping, I would be quite surprised to learn that any motherboard or GPU supports it, at least in the consumer space.
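If you're curious about your own hardware, a rough way to check whether a port even advertises hot-plug capability (needs root for the full capability dump):

    # PCIe downstream ports report hot-plug support in their slot capabilities;
    # look for "HotPlug+" (and "Surprise+" for surprise removal) on SltCap lines
    sudo lspci -vv | grep -iE 'sltcap|hotplug'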
I just bought an external Thunderbolt eGPU box (even though it'll never support a GPU with its mini form factor) to host a Blackmagic 4K display card. Luckily, I'm still on the last-gen i9 CPU, so it worked right out of the box once I found the slightly older software. I've read people have issues getting it to work on the M* series chips, though.
I greatly appreciate these kinds of tools, but I always err on the side of what's installed by default wherever possible so I can work across hosts as soon as I land.
agreed. and the setup for this tool in particular looks… complicated and annoying, at least at first glance
for myself, if i want a shell script to be _portable_ i just write it in POSIX sh and try to be smart about dependencies (quick sketch at the end of this comment)
and if i don't care about portability, i'd rather just use a nicer shell like bash or zsh or fish (i'd actually like to mess with ysh at some point)
i feel like i'm much more likely to encounter a system with one of those shells available than one with modernish installed, and the idea of introducing a bundling/build step into shell scripts is deeply unappealing to me.
i can see why this exists, i think, and i imagine there are people who find it useful. i simply am not among them.
i also find it disappointing that their most basic example shows the setup in bash instead of sh, but that might just be me.
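to make the POSIX sh point concrete, here's a minimal sketch of what i mean (curl/tar are just stand-in dependencies; everything here should run under dash or busybox ash):

    #!/bin/sh
    # stick to POSIX constructs: no arrays, no [[ ]], printf instead of echo -e
    set -eu

    die() {
        printf '%s\n' "$1" >&2
        exit 1
    }

    # check dependencies up front instead of assuming they exist
    for cmd in curl tar; do
        command -v "$cmd" >/dev/null 2>&1 || die "missing dependency: $cmd"
    done

    main() {
        target=${1:-.}
        [ -d "$target" ] || die "not a directory: $target"
        printf 'working in %s\n' "$target"
    }

    main "$@"

shellcheck -s sh catches most accidental bashisms in something like this, too.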
I get wanting some level of portability, but what kind of systems do you still encounter (and want to run your scripts on) that have sh yet lack Bash? I would've expected that to be the baseline nowadays.
For me it's small Alpine containers running in k8s, and trying to get weird stuff running on my Kobo ereader (I can quickly get to a chroot with bash, but the base system doesn't have it).
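Easy to see with a throwaway container if anyone's curious (alpine:3.19 is just an example tag, and this assumes Docker is handy):

    # stock Alpine ships busybox ash as /bin/sh and has no bash at all
    docker run --rm alpine:3.19 sh -c 'command -v bash || echo "no bash here"; readlink /bin/sh'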