Hacker News | cmcconomy's comments

strange, Teams is not on my list. Maybe you are not using "(new) teams (for business and school) v2 (new)"


Is that the new version? /s


reminds me of the recently enacted FAIR plan in California for last-resort wildfire insurance. It got state dispensation to carry otherwise-disallowed, lopsided balance sheets to cover more people -- but if a small fraction of those people do experience wildfire it'll go bust!

--

edit: see below, I was wrong about FAIR being newly enacted


The California FAIR Plan was created in 1968 so I’m not sure where you’re getting your information.

It was entirely self-funded by premiums until the Eaton and Palisades fires and, unlike the NFIP, still hasn't been bailed out by the federal government.


thanks - I was wrong about the plan being new.

However, as of this year it's got ~$600B of exposure and $400MM in funds. At $3MM/residence, that's 133 homes before they're bust, right?

see:

https://ains.assembly.ca.gov/system/files/2025-05/assembly-f...

https://calmatters.org/economy/2025/02/homeowners-insurance-...
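the napkin math, for anyone checking (using my own figures above, which assume every claim hits the $3MM maximum):

```shell
# $400MM in funds / $3MM max payout per residence;
# integer division floors 133.3 to 133
echo $((400000000 / 3000000))
```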


> However as of this year it's got ~$600B of exposure and $400MM in funds.

The California FAIR Plan only has $377 million in liquid funds because it pays premiums for $5.75 billion worth of reinsurance. Roughly 1% of their clientele (by value) would have to lose their homes to put the plan under strain. That's what may happen with the Palisades/Eaton fires, for the first time in nearly sixty years. Current estimates are $5-9 billion in claims, so the current worst case scenario is a multibillion dollar bailout by the state (not federal!), which is well within the state's budget.

Also a nitpick: (almost?) no one is receiving $3 million on a FAIR plan even in the Palisades or Malibu. That's the theoretical maximum but since it doesn't cover the value of the land, the actual coverage is much lower.

I have a lot more to say about the circumstances of these latest fires (several of my friends lost their homes in both neighborhoods) but suffice it to say I don't think this disaster is representative of future liabilities.


Oh and next time I'll be sure to quote your entire original post because this:

> It got state dispensation to carry otherwise-disallowed, lopsided balance sheets to cover more people -- but if a small fraction of those people do experience wildfire it'll go bust!

is shit you edited in after you were called out on your ignorance about the FAIR plan's origins.

You don't know the first thing about how insurance works in California and it shows.

Have some dignity. It's a lot cheaper than California fire insurance.


in fact, I did not edit the original contents of my first message at all, out of respect for the reader. The edit merely acknowledged your point.


Sorry, I was a bit drunk when I wrote this reply. Should have deleted it, but it's too late.


I had a NUC with an eGPU, connected via a simple USB/Thunderbolt connection, and I recall it was a nightmare to set up


I use an eGPU via USB4/Thunderbolt (I think it's the same? Not entirely clear). Works out of the box on Linux. No real setup needed. Main downside is that removing it tends to make the system somewhat unstable and lock up (sometimes hours later) after a kernel "oops". I need to look into that because it's probably a relatively minor Linux kernel issue. But minus that: it works great.


I have read that Thunderbolt and OcuLink are very different in this regard. Whereas Thunderbolt devices can be plugged in at any time, OcuLink needs to be plugged in at boot time. This seemingly innocuous detail points to why OcuLink performs better: it comes down to PCIe vs. Thunderbolt in general.


While PCIe as a standard allows for hot swapping, I would be quite surprised to learn that any motherboard or GPU supported it. At least in the consumer space.


Lenovo's TGX 'extension' (I guess? It was for their eGPU solution) allowed hot swap, but support for it is definitely not very broad.


AFAIK OcuLink is pretty much pure PCIe wiring, while Thunderbolt has a whole protocol, much like USB, that adds some overhead.


I used an eGPU with a 2013 MBP for gaming; it was great when my other machine shat itself.

The other machine was also a NUC the "Skull Canyon" and it was much more finicky about using the eGPU.


Same for my wife's old Mac Mini. Finally gave up on it and bought her a new M4 Pro


I just bought an external Thunderbolt eGPU box (even though it'll never support a GPU with its mini form factor) to host a Blackmagic 4K display card. Luckily, I'm still on the last-gen i9 CPU, so it worked right out of the box once I found the slightly older software. I've read people have issues getting it to work on the M* series chips though.


a more general-purpose option is to use https://github.com/akavel/up

e.g.

cat ./file.txt | up

and inside up, jq away!
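to give a sense of the workflow (a sketch with a made-up log file; up just re-runs whatever pipeline you've typed against the buffered stdin as you edit it, and Ctrl-X writes the finished pipeline out as a script):

```shell
# fake input; any command's output works the same way
printf 'ERROR disk full\nINFO all good\nERROR net down\n' > /tmp/demo.log

# interactively you'd run:  up < /tmp/demo.log
# then type a pipeline and watch the preview update live.
# a finished pipeline might be:
grep ERROR /tmp/demo.log | cut -d ' ' -f 2
```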


Sadly, up doesn’t seem very active nowadays with some bugs unfixed for a while.

Great idea, though!


if you want to fall back to a less hardcore intro, try Spacemacs or Doom Emacs; both are very vim-friendly


this guy is like Matt Drudge: he caught lightning in a bottle and is holding tight onto that moment of relevance


remarkable


grep for /* and omit /v2 ?


crazy if true


I greatly appreciate these kinds of tools, but I always err on the side of what's installed by default wherever possible, so I can work across hosts as soon as I land


agreed. and the setup for this tool in particular looks… complicated and annoying, at least at first glance

for myself, if i want a shell script to be _portable_ i just write it in POSIX sh and try to be smart about dependencies

and if i don't care about portability, i'd rather just use a nicer shell like bash or zsh or fish (i'd actually like to mess with ysh at some point)

i feel like i'm much more likely to encounter a system with one of those shells available than one with modernish installed, and the idea of introducing a bundling/build step into shell scripts is deeply unappealing to me.

i can see why this exists, i think, and i imagine there are people who find it useful. i simply am not among them.

i also find it disappointing that their most basic example shows the setup in bash instead of sh, but that might just be me.
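concretely, the "smart about dependencies" probing i mean looks like this in plain POSIX sh (a sketch; the `have` helper and the curl/wget pair are just illustrative):

```shell
#!/bin/sh
# command -v is specified by POSIX; bashisms like `type -P` are not
have() { command -v "$1" >/dev/null 2>&1; }

# pick whichever downloader the host happens to have
if have curl; then
  echo "using curl"
elif have wget; then
  echo "using wget"
else
  echo "no downloader found" >&2
  exit 1
fi
```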


I get wanting some level of portability, but what kind of systems do you still encounter (and want to run your scripts on) that have sh yet lack Bash? I would've expected that to be the baseline nowadays.


For me it's small alpine containers running in k8s, and trying to get weird stuff running on my kobo ereader (can quickly get to a chroot with bash, but the base system doesn't have it).

