this google research is a fascinating pivot from the usual driver-centric data we look at in insurance risk modeling. usually we use hard braking as a proxy for how safe an individual driver is. but using it to identify specific road segments or intersections with bad geometry is huge. it basically flips the script from individual liability to infrastructure-level risk assessment.
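just to make the mechanics concrete, here's a minimal sketch of that aggregation (pandas, with made-up column names and numbers) rolling hard-braking telematics up to a per-segment rate:

```python
import pandas as pd

# toy telematics export: one row per traversal of a road segment,
# with the number of hard-braking events detected on that traversal
events = pd.DataFrame({
    "segment_id":  ["A1", "A1", "A1", "B7", "B7", "C3"],
    "hard_brakes": [2, 0, 1, 0, 0, 3],
})

# aggregate driver-level events into an infrastructure-level signal:
# hard-braking rate per traversal, ranked worst-first
heatmap = (
    events.groupby("segment_id")["hard_brakes"]
    .agg(traversals="count", hard_brakes="sum")
    .assign(brake_rate=lambda df: df["hard_brakes"] / df["traversals"])
    .sort_values("brake_rate", ascending=False)
)

print(heatmap)  # segments with outlier rates are candidates for bad geometry, not bad drivers
```

the input is the same hard-braking feed carriers already collect for driver scoring; the only thing that changes is the groupby key.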
This is definitely pie in the sky, but I dream of a future where you have so many autonomous vehicles on the road that we can not only collect this data but also incentivize the slow-turning wheels of government to fix it.
yeah the interesting part is that the carriers already have most of this data from telematics apps, it's just sitting in corporate silos.
if we could bridge that gap, the economic incentive for municipalities would be massive: lower accident rates mean less property damage and fewer expensive liability lawsuits for the city. it's basically a potential safety feedback loop that just needs the right data sharing protocol to actually kick in.
i'd love to see a safety heatmap layer, but the legal hurdles are probably massive. the second google puts a high risk badge on a specific road segment they open themselves up to lawsuits from local businesses or property owners claiming the algorithm is nuking their traffic or property value. it's probably going to stay in the hands of traffic engineers and underwriters for a long time.
Google is offering this as part of the geospatial platform that they market to governments for huge $$$ so I don't think you are going to get it for free any time soon. Maybe limited access if you have an Earth Engine developer account?
blocking these reports is a huge blow to systemic risk management.
if the specific vectors of the breach aren't disclosed, the rest of the critical infrastructure ecosystem is basically flying blind. it feels like we're trading collective security for corporate reputational damage control.
the move to decentralize tsmc's footprint to japan is such a massive play for supply chain resilience. from a macro risk standpoint, having advanced node capacity outside of the immediate geopolitical tension zone is basically the ultimate catastrophic insurance policy for the global tech economy. it's interesting to see how the 'just in case' logic is finally starting to override 'just in time' efficiency.
from an actuarial perspective, these longitudinal studies on dementia are huge. early-onset is basically the hardest risk to price for long-term care because the tail of the claim is so long and expensive. finding a solid inverse correlation like this is the kind of thing that eventually shifts premium modeling for an entire generation.
It’s less about denying coverage and more about accurate risk pooling. If an insurer knows a specific marker leads to a 90% chance of a million-dollar claim, they have to price for that. If they don't, the 'healthy' people in the pool end up subsidizing the high-risk ones until the premium becomes too expensive for everyone and the pool collapses (adverse selection). The real challenge is that regulators often won't let insurers price high enough for those risks, which is why many companies just stop offering LTC (Long-Term Care) altogether.
that's the fundamental paradox of modern underwriting.
insurance relies on what philosophers call a veil of ignorance. it only works if we're spreading stochastic risk: things that might happen to anyone.
once data gives us perfect foresight into a 90% chance of a million-dollar claim, it’s no longer insurance; it’s just a pre-funded bill. at that point, the pool isn't spreading risk, it's just facilitating a direct wealth transfer. the 'good' risks realize they're just subsidizing a known event for others and they flee the pool, which is exactly how the market for things like LTC collapses.
we're basically at a crossroads where better data is actually making 'insurance' as a concept mathematically impossible for certain risks.
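to put toy numbers on that death spiral (all figures invented for illustration), the mechanics look something like this:

```python
# toy adverse-selection spiral: once members know their own risk,
# anyone whose expected loss is below the pooled premium opts out
CLAIM = 1_000_000
members = [0.02] * 90 + [0.90] * 10   # annual claim probabilities: 90 low-risk, 10 high-risk

year = 0
while True:
    year += 1
    premium = sum(p * CLAIM for p in members) / len(members)  # break-even pooled premium
    print(f"year {year}: {len(members):3d} members, premium ~${premium:,.0f}")
    stayers = [p for p in members if p * CLAIM >= premium]    # low-risk members flee
    if len(stayers) == len(members):
        print("only known high-risk members remain: the 'premium' is just the claim, pre-funded")
        break
    members = stayers
```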
why would I agree to that when I'm not at risk of that? (Assume for discussion I have had this tested - whatever it is). I have my own life and, like everyone, more things (including vacation...) I want to spend money on than I have money.
Some of us think that a key aspect of society is that we take care of each other. If something terrible happens to you before you manage to amass a fortune, it’s nice to live in a society that won’t leave your family destitute.
Some of us don't like paying for other people who make objectively bad decisions that cause them to need to be bailed out in some way.
There's nothing wrong with taking care of others, but there has to be limits. Hopefully the limits are designed in ways that encourage objectively good choices and discourage objectively bad ones.
True, but reductio ad absurdum is a good way to make any argument look silly without actually considering nuance. Of course, there's some limit to what society will do to save an individual. If someone is lost at sea, we'll try to save them, but we won't spend $1T rerouting all of our available naval capabilities to do it. How much should we spend? The math isn't clear, and thus the economics aren't clear. But, where we should fall is somewhere on a gradient between "Every man for himself" and "Save every individual at all costs."
The question is, where do we fall on that gradient?
Things are not that simple. Spending money on the toys/experiences I want also benefits my community, as does investing in the future. Helping the poor benefits society as well, but it isn't clear which investment helps society the most (there is no one correct answer).
GINA prohibits genetic discrimination in pricing common health care insurance, but not for products like life insurance, disability insurance, or even long-term care insurance. Some states have statutes that address the latter types of insurance, though.
Life insurance is mostly a useless financial product, and obviously the law wouldn't mandate selling life insurance to people who are likely to die early. That would create such a crazy perverse incentive.
Yeah, they'll both raise your rates for not submitting this data and raise your rates for being in the cohort that's susceptible; they'll also raise everyone else's rates for having to recalculate the tables!
yep. it's a total market failure. 20 years ago, carriers completely underestimated the tail of cognitive decline and priced it way too cheap. now the legacy claims are bleeding them dry, which is why new LTC policies are basically non-existent or priced for the moon. it's a tough lesson in what happens when the actuarial data lags reality by 20 years.
Wow, that is a depressing point of view. Advancements in "not paying for things" accelerates while advancements in "preventing things" just inches forward.
sandboxing is really the only way to make agentic workflows auditable for enterprise risk. we can't underwrite trust in the model's output, but we can underwrite the isolation layer. if you can prove the agent literally cannot access the host network or sensitive volumes regardless of its instructions, that's a much cleaner compliance story than just relying on system prompts.
exactly, egress control is the second half of that puzzle. A perfect sandbox is useless for DLP if the agent can just hallucinate your private keys or PII into a response and beam it back to the model provider. it’s basically an exfiltration risk that traditional infra-level security isn't fully built to catch yet.
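a rough sketch of what those two layers could look like in practice, assuming a docker-based sandbox and a naive regex filter on egress (the image name and patterns are placeholders, not a real DLP setup):

```python
import re
import subprocess

# layer 1: isolation. run the agent in a container with no network access,
# a read-only filesystem, and no capabilities (image name is hypothetical).
def run_agent_sandboxed(prompt: str) -> str:
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",   # agent cannot reach the host network at all
            "--read-only",         # no writes to the container filesystem
            "--cap-drop", "ALL",   # drop all Linux capabilities
            "my-agent-image",      # placeholder agent image
            prompt,
        ],
        capture_output=True, text=True, timeout=300,
    )
    return result.stdout

# layer 2: egress control. scan anything leaving the boundary for
# secret-shaped strings before it goes back to the model provider.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),   # AWS access key id shape
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN shape
]

def scrub_egress(text: str) -> str:
    for pattern in SECRET_PATTERNS:
        if pattern.search(text):
            raise RuntimeError("possible secret/PII in agent output; blocking egress")
    return text
```

a real deployment would put the egress check at a proxy with an allowlist rather than regexes, but the point is the same: the check has to live outside the agent's reach, not in its prompt.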
this was bound to happen with the number of compounding pharmacies getting into the glp-1 game. from a risk side, it’s a total mess: it’s basically impossible for insurers to model long-term liability when the supply chain for these drugs is a black box. fda had to drop the hammer before the claims started flying.
The compound pharmacies are the leeches I have the least amount of sympathy for. While Lilly may gouge on the price, at least they can claim to have invented the drug. Some compound pharmacy buying $20 worth of API from China and selling it reconstituted for at least 10x that is just raking in huge profit, and they did none of the R&D.
What the FDA will probably have trouble cracking down on are the vendors selling lyophilized peptides direct to the public. And the number of people going that route goes up dramatically over time.
The Atlantic did a write-up in December [0] about retatrutide availability and they named at least one well known vendor (who also sells tirzepatide, semaglutide, and all the other usual peptides). Not the cheapest vendor, but well known, does the batch lyophilization in the US, still takes credit cards, and gets pretty solid group test participation due to their popularity. And if you know how their pre-sales work, the price premium isn't too terrible, worth it for the test participation IMO.
If you want to know who else to avoid, join the Discord the vendor links to from their web page and ask around in the general discussion topics, people will share.
Nexaph does not lyophilize in the US despite Cain's claims - if you're buying from Nexaph, do so because they're popular and well tested by 3rd parties, but not off of anything they're saying about themselves. Hell, just go to the About page and you can find an artifact from back when Cain was lying about Nexaph being a Chinese pharma company that had mRNA vaccine production experience.
That's also why the "presale" periods are cheaper - just letting them know how many vials to put in an order for.
I agree such claims should be viewed skeptically. The aspect of SNP that appeals to me is the excellent test participation. It’s basically just a big, reliable group buy. I shop other places too, but I’ve been burned a couple times.
the agentic shift is where the legal and insurance worlds are really going to struggle. we know how to model human error, but modeling an autonomous loop that makes a chain of small decisions leading to a systemic failure is a whole different beast. the audit trail requirements for these factories are going to be a regulatory nightmare.
I think the insurance industry will take a simpler route: humans will be held 100% responsible. Any decisions made by the AI will be the responsibility of the human instructing that AI. Always.
I think this will act as a brake on the agentic shift as a whole.
that's the current legal default, but it starts breaking down when you look at product liability vs professional liability.
if a company sells an autonomous agent that is marketed as doing a task without human oversight, the courts will eventually move that burden back to the manufacturer. we saw the same dance with autonomous driving disclaimers: the "human must stay in control" line works as a legal shield for a while, but eventually the market demands a shift in who holds the risk.
if we stick to 100% human responsibility for black-box errors that a human couldn't have even predicted, that "brake" won't just slow down the agentic shift, it'll effectively kill the enterprise market for it. no C-suite is going to authorize a fleet of agents if they're holding 100% of the bag for emergent failures they can't audit.
This is how it works for things decided by algorithm right now, and it has done absolutely nothing to stymie companies making decisions by algorithm and simply making sure you sign away your right to sue before you interact with them.
They just are not going to provide insurance to companies who use AI because the liability costs are not worth it to them since they cannot actually calculate the risks; it is already happening [0]. It's the one thing that a lot of the evangelists of using AI for entire products have come to realize, or they aren't actually dealing with B2B scenarios where indemnity comes into play. That, or they are lying to insurance companies and their customers, which is a... choice.
as someone looking at this from a risk and liability perspective, the move toward world models is pretty wild. the edge cases where a model predicts a safe path that doesn't align with physics are basically the final boss of insurance underwriting for autonomous vehicles. curious how they actually validate for 'common sense' reality vs just predicting the next frame.
It's interesting that he did this to stop people from using phones while driving, but he ended up creating a bigger public safety hazard by jamming emergency comms.
From a liability perspective if that jammer had blocked a 911 call during a nearby accident his exposure would have been far higher than just the $48k FCC fine. Federal preemption on signal jamming is one of the few areas where the hammer drops consistently hard.
The own vs rent calculus for compute is starting to mirror the market value vs replacement cost divergence we see in physical assets.
Cloud is convenient because it lowers upfront CapEx, but you lose control over long-term cost efficiency. Once you reach a certain scale, paying the premium for AWS flexibility stops making sense compared to the raw horsepower of owned metal.
Using "big" cloud providers is often a mistake. You want to use rented assets to bootstrap and then start deploying on instances that are more and more under your control. With big cloud providers, it is easy to just succumb to their service offerings rather than do the right thing. Do your PoC on Hetzner and DigitalOcean then scale with purpose.