Assume 120 homes in an area on yearly leases, with lease-end dates evenly distributed across the months. If every renter moves out at lease end and it takes a month to re-rent each unit, ten units sit empty in any given month, for a vacancy rate of 8.3%.
Obviously not every renter moves every lease end, but also some units are in places no one wants to live, others are mispriced, some need renovation or extensive cleaning, etc.
But from a cursory sanity check the 6.9% number is likely reasonable for a fairly tight rental market.
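As a quick back-of-the-envelope check of that arithmetic (just restating the assumptions above in code, nothing more):

```python
# 120 homes on yearly leases, lease ends spread evenly across the months,
# every unit sits empty for one month before it is re-rented.
homes = 120
turnovers_per_month = homes / 12                         # 10 lease-ends per month
vacant_unit_months_per_month = turnovers_per_month * 1   # one vacant month each

vacancy_rate = vacant_unit_months_per_month / homes
print(f"{vacancy_rate:.1%}")                             # -> 8.3%
```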
I feel for the victim facing Google’s continued inability to provide customer service. Maybe the media attention will escalate this to someone who can fix it.
As an aside, I am relieved to see this is from the UK. Whenever SIM-swap stories end up on HN, there are always comments about how it’s due to some unique incompetence of US-based cell service providers.
While it's no excuse for their subpar customer verification practices, cell carriers globally have been telling tech companies for years not to use control of a phone number as an auth factor, and they've been doing it anyway.
For a long time I wondered why there was such a big push for PQ even though there was no quantum computer and a reasonably working one was always 15 years in the future.
… or was there a quantum computer somewhere and it was just kept hush hush, hence the push for PQ?
The answer turns out to be: it doesn’t matter if there is a quantum computer! The set of PQ algorithms has many other beneficial properties besides quantum resistance.
The point is that a lot of secrets need to remain secret for many years. If some government found a way to break elliptic curves in the same way that the number field sieve weakened RSA (hence we now need 2048-bit keys rather than the 512-bit keys we were using in the 90s), we’d be fucked for many years to come as all those secrets leaked.
So there may not be quantum computers now. But if there’s going to be one in 20 years, we need our crypto to be resilient now.
I’m a physicist working on QC. I know we actually don’t know whether a “secret” QC exists somewhere, but given that major theoretical and engineering breakthroughs are needed to build a fault-tolerant one (and all QC companies are facing this regardless of whether their qubits are optical, superconducting, trapped ions, etc.), I’d put that possibility at near zero. Consider also the talent and expertise that would be needed for such an endeavour…
Very nice! I hope you folks will go far. Best of luck :)
That said, the entire field is still so far from a seriously useful QC that I still wouldn’t bet there’s a secret one somewhere in some government lab. Those are my two cents, and I may be wrong of course.
I’m not claiming there is. There might be, but I find it unlikely. When the NSA develops practical QC systems, a lot of QC research will suddenly go quiet. That hasn’t happened.
There is a viable pathway to low error rate, scalable quantum computers on a less than 10 year time horizon though.
There is a long history of this technology, and the comparison to cold fusion is unwarranted. This is peer-reviewed, accepted science. The basic technique was worked out under a DoE study in Texas, with an Australian collaborator. She (Dr. Michelle Simmons, who is widely respected in this field) then went out and raised money to scale up.
The basic idea is that they use scanning probe microscopes to create structures on a silicon surface with atomic precision, which can then be manipulated by the surrounding chip as a solid-state qubit. You still need error correction, but it ends up being a small constant factor rather than combinatorial blowup.
Full disclosure: I’m fundraising a startup to pursue a different manufacturing process that would enable the same type of quantum computer, but with nitrogen vacancies in diamond instead of silicon (and therefore still higher reliability).
One way or the other, highly reliable quantum computers are right around the corner and are going to take a lot of people by surprise.
This is also something that people outside academia apparently don't understand. Peer review doesn't tell you anything about the validity of the science. It only ensures the methodology was correct. The original Pons & Fleischmann paper passed peer review and was published in the Journal of Electroanalytical Chemistry. It only got retracted after other people tried and failed to reproduce it. If you want to know whether science is legit or not, look out for reproduction by independent actors - not peer review.
Indeed. Peer review is table stakes for the conversation, not an acceptance criterion for "true". Plenty of things get published that are generally regarded as wrong by those who work in the field.
There's journal peer review, and then there's scientific community peer review which involves acceptance of ideas and replication. They're not the same thing and unfortunately not often distinguished in speech or writing ("peer review" describes both). I thought that on HN it would be clear I was talking about the latter.
In this case, three separate labs have replicated this work. It's solid.
Peer review in fundamental science is almost universally understood straightforwardly as part of the process of publishing said science. The other kinds you are referring to (there's actually more than one) are more common in other fields. Peer review in physics is very far from acceptance in general.
Maybe, but that’s a very recent redefinition of terms. Peer review only became a standardized mechanism in the '70s through the '90s, depending on the field. Until very close to the present, saying “passing peer review” meant something akin to the Popperian notion of “ideas that survive attempts at falsification by the broader community of scientific peers.” In all my interactions with academia pre-pandemic, it meant precisely this. Something wasn’t peer reviewed because it was published (surviving the editorial process), but because it was published and cited by later works without credible refutations emerging.
> California-based startup PsiQuantum was given an “inside run” to a controversial $1 billion investment by Australian taxpayers as the only company that government engaged with in a thorough due diligence process.
According to the linked post there are PQ algorithms that will fit this niche:
> This variety of different trade-offs gives developers a lot of flexibility. For an embedded device where speed and bandwidth are important but ROM space is cheap, McEliece might be a great option for key establishment. For server farms where processor time is cheap but saving a few bytes of network activity on each connection can add up to real savings, NTRUSign might be a good option for signatures. Some algorithms even provide multiple parameter sets to address different needs: SPHINCS+ includes parameter sets for “fast” signatures and “small” signatures at the same security level.
Embedded/IoT is typically slow and small which is not a space PQ fits into.
I also think the article is overly optimistic: it claims ECC is “hard” because of the need for careful curve selection (even though we have very good established curves), yet I find it hard to believe that PQ algorithms are immune to parameter-selection problems and implementation challenges of their own.
There has been research on the intersection of IoT and PQ signatures specifically at least, e.g. see "Short hash-based signatures for wireless sensor networks" [0] [1]. Unlike SPHINCS+ which is mentioned in the article, if you're happy to keep some state around to remember the last used signature (i.e. you're not concerned about accidental re-use) then the scheme can potentially be _much_ simpler.
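To make that concrete, here is a rough sketch of the flavour of stateful scheme being described: Lamport-style one-time signatures, where the only state the signer keeps is a counter of which key it has already used. This is my own illustration of the general idea, not the construction from the cited paper, and it omits everything a real deployment needs (Merkle aggregation of the public keys into a single root, constant-time comparisons, durable state storage):

```python
import hashlib
import os

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits_of(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign_once(sk, msg: bytes):
    # Reveal one secret per digest bit. Signing two different messages with
    # the same key leaks enough secrets to forge, hence the need for state.
    return [sk[i][bit] for i, bit in enumerate(bits_of(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits_of(msg)))

class FewTimeSigner:
    """Pre-generates n one-time key pairs; the only persistent state is an index."""

    def __init__(self, n: int = 16):
        self.keys = [keygen() for _ in range(n)]
        self.next_index = 0  # the state that must never go backwards

    def public_keys(self):
        return [pk for _, pk in self.keys]

    def sign(self, msg: bytes):
        if self.next_index >= len(self.keys):
            raise RuntimeError("all one-time keys used")
        i = self.next_index
        self.next_index += 1  # advance state *before* releasing the signature
        sk, _ = self.keys[i]
        return i, sign_once(sk, msg)
```

The verifier just looks up public key i and calls verify; refusing to ever accept a reused index is exactly what "remembering the last used signature" buys you.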
The state is enormous. Dedicating megabytes and megabytes to key state is painful. And so is tracking state across components and through distribution channels. If you’re not afraid of that then just use symmetric crypto and be done with it.
To be clear my comment is specifically only relating to signature schemes, not encryption.
> The state is enormous
The scheme I linked to points towards efficient "pebbling" and "hash chain traversal" algorithms which minimize the local state required in quite a fascinating way (e.g. see https://www.win.tue.nl/~berry/pebbling/).
> tracking state across components and through distribution channels
Assuming you have reliable ordering in those channels I don't see how the stateful nature of such schemes makes it hugely more complex than the essential hard problem of key distribution.
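A minimal S/Key-style hash-chain sketch (again my own illustration, not the scheme from the paper) shows why ordering is the main requirement: the signer's state is a single index, and a verifier with reliably ordered delivery only needs the last value it accepted. The pebbling / hash-chain-traversal algorithms linked above address the signer's side, regenerating chain values without storing the whole chain; as I recall they get roughly O(log n) storage with O(log n) hashes per output, instead of the naive all-or-nothing below.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def make_chain(seed: bytes, n: int):
    """Naive construction: store the whole chain H(seed), H(H(seed)), ... (O(n) memory)."""
    chain = [H(seed)]
    for _ in range(n - 1):
        chain.append(H(chain[-1]))
    return chain  # chain[-1] is the public anchor

class ChainSigner:
    def __init__(self, seed: bytes, n: int = 1000):
        self.chain = make_chain(seed, n)
        self.i = n - 1                 # state: index of the last value released
        self.anchor = self.chain[-1]   # published out of band

    def next_value(self) -> bytes:
        if self.i == 0:
            raise RuntimeError("chain exhausted")
        self.i -= 1                    # values are released in reverse chain order
        return self.chain[self.i]

class ChainVerifier:
    def __init__(self, anchor: bytes):
        self.last = anchor             # state: the last value we accepted

    def accept(self, value: bytes) -> bool:
        # With reliable ordering, a single hash ties each value to the previous one.
        if H(value) == self.last:
            self.last = value
            return True
        return False
```

Usage is just signer = ChainSigner(b"seed"), verifier = ChainVerifier(signer.anchor), then verifier.accept(signer.next_value()) for each step.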
Also, we are talking about mitigating a large, tangible downside risk of a sudden breakthrough in the space: all the secrets stop being secret. "Reasonable" timeline estimates for how far away we are matter for things like whether/how much we invest in the tech, but when defending against downsides an optimistic timeline is the pessimistic case, and we should be pessimistic when preparing regulations and mitigations.
> … or was there a quantum computer somewhere and it was just kept hush hush, hence the push for PQ?
If there were a quantum computer somewhere, or close to one, it would be reasonably likely for it to be secret.
Look at the history of crypto in the mid-to-late 20th century, for example: small groups in the Allied intelligence agencies, the NSA, and so on certainly had more knowledge than was public by a wide margin, for years to decades.
That's not quite correct. The first (public) brute-forcing of DES was done in 1997 by the DESCHALL project, distributing the search across tens of thousands of volunteers' computers for weeks [1]. The EFF then spent $250,000 to build a dedicated DES cracker ("Deep Crack"), which required an average of four days per key found [2].
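The arithmetic is roughly consistent with the commonly cited figure of about 90 billion keys per second for Deep Crack (treat that rate as approximate):

```python
# Back-of-the-envelope check; ~90 billion keys/sec is the commonly cited
# search rate for the EFF's Deep Crack machine.
keyspace = 2 ** 56
keys_per_second = 90e9

average_days = (keyspace / 2) / keys_per_second / 86_400
worst_case_days = keyspace / keys_per_second / 86_400
print(f"average ~{average_days:.1f} days, worst case ~{worst_case_days:.1f} days")
# -> average ~4.6 days, worst case ~9.3 days
```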
Income limits usually still apply. In my state, it's up to $32K for pregnant women or for those with a child under age 1. For children between 1 and 5, it goes down to about $20K.
If you're above those limits, no Medicaid. You can go on ACA/Obamacare plans, but those are (much) more expensive even with subsidies, at least in my state.
A birth, all in, is surely way above $10k. So people are literally expected to pay $5k+ to have a child? If that's not stressful enough for many poorer people, I don't know what is.
If it's an emergency of some sort, it's probably 100% covered. The thing, though, is that Medicaid is run similarly to Medicare, where private insurers get involved but provide really shitty plans with terrible formularies and very limited choices of providers. Also, there's widespread healthcare-provider bias, stigma, and discrimination against Medicaid patients.
> Out of pocket costs cannot be imposed for emergency services, family planning services, pregnancy-related services, or preventive services for children. Generally, out of pocket costs apply to all Medicaid enrollees except those specifically exempted by law and most are limited to nominal amounts. Exempted groups include children, terminally ill individuals, and individuals residing in an institution
There's an incredible variety of health care needed in the 9 months before birth, and well after. That care is poorly covered in the US compared to most developed nations.
We were both in college when my first child was born, with one $13/hr summer internship's worth of income. Medicaid covered 100% of everything; I paid $0 out of pocket.
This is false. Medicaid coverage is better than even European healthcare systems. It will cover rare disease drugs that aren’t paid for in the EU (it has to by law).
Plus the OOP expenses are basically zero.
While it's true that not all doctors accept new Medicaid patients, you can find care.
> In 2021, Joseph et al. published a paper in Obstetrics & Gynecology demonstrating that the entire recorded increase in maternal mortality since 2003 was due to a change in the way data was gathered. In 2003, U.S. states began to include pregnancy checkboxes on death certificates. This led to a whole lot more women who died while pregnant being identified as such. The apparent steady increase in maternal mortality was due to the fact that states adopted this new checkbox at different times:
> In fact, when the authors looked at the common causes of death from pregnancy, they found that these had all declined since 2000, implying that U.S. maternal mortality has actually been falling. Meanwhile, a CDC report in 2020 had found the same thing as Joseph et al. (2021) — maternal mortality rose only in states that added the checkbox to death certificates.
The CNN article is about this [1] study, which is based on OECD 2023 maternal mortality data. OECD says here [2] about "Definition and Comparability":
> Maternal mortality is defined as the death of a woman while pregnant or during childbirth or within 42 days of termination of pregnancy, irrespective of the duration and site of the pregnancy, from any cause related to or aggravated by the pregnancy or its management but not from unintentional or incidental causes. This includes direct deaths from obstetric complications of pregnancy, interventions, omissions or incorrect treatment. It also includes indirect deaths due to previously existing diseases, or diseases that developed during pregnancy, where these were aggravated by the effects of pregnancy.
Edit: [1] Also references [3], a 2022 CDC report saying over 80% of pregnancy-related deaths were determined to be preventable.
That may be relevant to something, but not to why the difference is so drastic between Norway and US.
It is indicative of the US healthcare system, however, that up until 2003 it wasn't even known, statistically, that women were actually dying in childbirth.
It is very relevant. The US definition of maternal death is very expansive. The expanded definition counts a death from any cause, as long as the woman was pregnant or recently pregnant.
The prototypical example is murder by a spouse. While tragic and extremely important to collect for policy reasons, it is not what “maternal death rate” typically measures.
The study cited uses OECD data. If the US does not adhere to the OECD guidelines for the data fields, for example by collecting a too broad measure and not correcting for it, studies are going to compare apples to pears. Not saying that the conclusion is false. But researchers should do their due diligence on the way international statistics are compiled.
If the US collects the data in a different way and then doesn't publish anything else, there is no other data available. All you can do is include a note that explains why the numbers aren't comparable.
Sorry, the 'should' probably has an unintended negative connotation when talking about a specific study.
To delve a little deeper. They seem aware (under HOW WE CONDUCTED THIS STUDY [1]): "While the information collected by the OECD reflect the gold standard in international comparisons, it may mask differences in how countries collect their health data. Full details on how indicators were defined, as well as country-level differences in definitions, are available from the OECD."
They do not mention the specific CDC caveat mentioned above regarding the check box on US death certificates.
And then the clincher: the study points to the CDC [2], where this effect is explicitly mentioned as a possible issue with reporting via death certificates ("Efforts to improve data quality are ongoing, and these data will continue to be evaluated for possible errors.").
I'll leave the interpretation to you. They mention there is a gold standard and that some countries might not follow that gold standard. The conclusion is mainly based on US CDC data vs. OECD non-US data. They link to a CDC report mentioning this issue. Should they mention this fact in the main body of the study, or is this transparent enough?
Going back to the Noahpinion link with graph above in this discussion. For me the time series gives quite the hint that ICD-10 is not being followed appropriately and that false conclusions may arise. If this were my report, I'd take one or two paragraphs to explain why this issue doesn't affect my conclusions in the main body of text.
And then even a 'how to solve this (partially)': as an actuary, I know death is very unlikely at childbearing ages. Show a comparison table of deaths per 100k for women aged 20-40 across countries, including the 'US-Black' category. If that comparative line is a lot flatter (my expectation), I would really presume there is a data-collection issue. The other interpretation would fail Occam's razor (that non-pregnancy deaths in the US / US-Black categories are less likely than in other OECD countries). First inkling: [OECD - 3], US ASMR in women up to 20% higher than in other countries.
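A sketch of the kind of cross-check being proposed; the numbers below are placeholders, purely hypothetical and not OECD or CDC figures, just to show the shape of the comparison:

```python
def per_100k(deaths: float, population: float) -> float:
    return deaths / population * 100_000

# Hypothetical inputs for illustration only -- substitute real extracts for
# women aged 20-40 (e.g. from CDC WONDER and the OECD mortality tables).
all_cause_rate = {
    "US":     per_100k(deaths=90_000, population=45_000_000),
    "Norway": per_100k(deaths=1_000,  population=700_000),
}
reported_maternal_rate = {  # per 100k live births, placeholder values
    "US":     22.0,
    "Norway": 2.0,
}

for label, rates in [("all-cause (women 20-40)", all_cause_rate),
                     ("reported maternal", reported_maternal_rate)]:
    print(f"{label}: US/Norway ratio = {rates['US'] / rates['Norway']:.1f}x")

# If the all-cause ratio is close to 1x while the maternal ratio is ~10x,
# a reporting/definition difference becomes the more plausible explanation.
```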
Reporting differences don’t fix the fact that they also claim that 80% of these deaths are preventable.
The US healthcare system is always being designed around profit requirements and care constraints, and not vice versa. Nobody here (save for Medicare) really knows what the proper reimbursement is for care, and we waste needless amounts of time and money on quackery (naturopaths, supplements, chiropractic) instead. The reason why we open more “cancer centers” rather than adequate emergency or trauma care is because these hospital systems want to sell a Veblen good to wealthy people with cancer. There’s hope though, if we erase the weird private insurance industry we might start seeing prices and care reflect needs vs. means.
NVSS has reported monthly updates on this since the 60s, it's wrong to say it wasn't known statistically I think. Maternal mortality review committees have existed since the 1930s also which provide extra data. Maternal mortality is one of the most important vital metrics to track for any country so it indeed would be surprising not to have more data.
It’s amazing how often you find out the differences in metrics are due to how data is collected not due to actual differences.
I read a good paper(1) about newborn death rates in Cuba. It's often touted that Cuba has amazingly low newborn death rates, which obviously means communism has far better healthcare than capitalist systems.
Turns out it’s a reporting artifact. If you correct for it, they have the same death rate as other Central American countries with similar GDP per capita.
It's explained in another comment. The US tracks it by asking "was this person who died pregnant?" If the answer is yes, then it's a "maternal death".
Norway only counts pregnant women who died because of their pregnancy.
> Among the 525 pregnancy-related deaths, an underlying cause of death was identified for 511 deaths. In 2020, the six most frequent underlying causes of pregnancy-related death—mental health conditions, cardiovascular conditions, infection, hemorrhage, embolism, hypertensive disorders of pregnancy—accounted for over 82% of pregnancy-related deaths (Table 4).
> Among the 525 pregnancy-related deaths, a preventability determination was made for 515 deaths. Among these, 430 (84%) were determined to be preventable (Table 6).
This shows they didn't just take a yes/no for pregnancy and +1'd the statistic, like you suggested. They reasoned about the causality and preventability.
> This shows they didn't just take a yes/no for pregnancy and +1'd the statistic, like you suggested.
I didn't suggest that.
What I said was how the numbers are reported. The US reports all deaths in pregnant women, regardless of cause. Norway only reports maternal deaths when the cause is pregnancy complications.
So you want to use only the richest and most sophisticated EU countries but then compare them against a federation of US states that includes the likes of Mississippi and West Virginia?
The difference between New York (the highest US state by per capita GDP, ~$91,000) and Ireland is larger than the difference between Mississippi and a per-capita GDP of literally zero.
Also, I forgot, it's a disingenuous comparison if you know anything about how life, subsistence, marginal savings rate & co work, which I assume you do if we're discussing these kinds of topics.
The scale of "GDP per capita and how people are living" is roughly this:
At a "GDP of literally zero" you're DEAD.
At a GDP of 1k, you can afford a cheap bicycle.
At a GDP of 10k, you can afford small, old, beat up and unsafe cars.
At a GDP of 30k, you can afford almost all modern amenities, they'll just be smaller, older, have fewer features.
At a GDP of 80k you can do whatever the hell you want if real estate expenses aren't killing you.
So no, you can't freely compare a country at 10k with one at 80k and try to bail out the comparison with PPP.
And 1 billion plus 30k is, for all practical purposes, still 1 billion. Percentages matter, thresholds matter. The person with 1 billion plus 30k doesn't have a materially different life from the person with 1 billion. The person with 30k is reasonably well off; the person with 0 is dead. The person with 40k is also reasonably well off, while the person with 10k is poor (and NOT US-poor, but poor by world standards; which, BTW, is about the global average, which makes the average person in the world poor by modern development standards).
It's an average. You can average zero in with 140 and still have a higher average than the average of 90 and 40: (0 + 140) / 2 = 70 versus (90 + 40) / 2 = 65. And what do you expect if you don't average in the zero?
How does moving the discussion to the legibility of the rate of change help us understand why large numbers of women are still dying from pre-industrial causes in the richest nation?
Man! This, plus the teenage suicide/mental health stats also possibly being an illusion (Obamacare changed data rules at the same time mobile social media was taking off, obfuscating everything), has really thrown me for a loop. Not sure what to believe!
A related effect is there is a real tendency in online debates to use countries that speak exotic foreign languages as examples. So there is no way of working out what the data actually represents, what the known strengths and weaknesses are or what they are trying to measure. Or what the legal framework is.
I got a great laugh out of that; they've done an impressive job anglicising their website. But it doesn't really change the fundamental point. It doesn't take long to get to "Most of the content here is only available in Norwegian" [0]. And the articles on the Norwegian version of the site seem to be different from those on the English one.
It can take a surprising amount of research sifting through who-knows-what to figure things out. One fun introductory challenge I recommend is figuring out what the components of the inflation index actually are; it usually takes a few rounds of sleuthing unless you have a muscle memory of where the right manual is. It is hard enough in the same language and with a familiar government. It isn't easy to do in a foreign language and unfamiliar government.
If you're mostly interested in blog posts, Google Translate is great for exotic languages.
But for the data they're all there in English [0].
And if you're after methodology, analysis or understanding medical data, they follow WHO standards and publications are all in English on pubmed.gov [1] for the explicit purpose of international collaboration (which is the norm in medicine and public health for most developed nations).
I applaud the enthusiasm but I'm not that interested in Norway's medical system. I'm making a point about the larger issue of using foreign data. I spend a lot of time arguing with people on the internet for fun and education; and it is extremely common to get a cheerful comment which - after a few hours of investigation - appears to be an incorrect interpretation of data.
It is hard enough to do for systems that are part of the English speaking world or big, easy to track metrics. It is substantially harder to do for fiddly data series from foreign systems where the primary source material is in a different language.
> And if you're after methodology, analysis or understanding medical data, they follow WHO standards and publications are all in English on pubmed.gov
This goes to the main point - if it turns out that they don't follow WHO standards in an area or there is critical data not on pubmed.gov, what is the expected path for finding that out?
Because in English I have a much better chance of being able to figure that out. The countries are familiar and there is a better chance that the criticisms of the major institutions are well known. In a Norwegian context that already rather challenging task is even harder.
EDIT
An example occurs to me a few minutes later; there was an interesting theory that Japan had a lot of old people because there were unusually strong pension & tax incentives to lie about elderly relatives being alive when they were in fact dead.
The Japanese stats office could be following WHO standards and publishing all their information on pubmed.gov, and the series would still be incomparable with other countries if there is an unusual incentive for the stats to deceive coming from an unexpected angle.
Keeping on top of that sort of thing in foreign legal systems is simply hard.
For the point of arguing with strangers, yes, I agree that neither PubMed nor any other entities will provide you with what you need. I don't think that it is possible to acquire an understanding of an issue without some domain knowledge, at least on how to get the data.
But to gain a deeper understanding of the flaws of any country's health (or any) system, there is no way around that except by comparing it with data from other countries. And that might be hard, which is why professionals spend a lot of time on it.
I don't think that is an in-depth look at methodology; they seem to be talking about how the WHO does things. And that doesn't seem to translate the graphs.
But regardless, the bigger point is that the default position isn't that Stats Norway data is automatically comparable with everyone else's data. The world is large and complicated; it is quite easy for small details between systems to do surprising things.
Are you saying that Norway speaks an exotic foreign language, so we should ignore their results because some people feel we can't trust their information? Does that mean we should not compare the US system to these other nations? Who can we compare it to in that case: the UK, Australia and New Zealand?
You can make judgements on uncertain data. It is a reasonable thing to do. It just happens that, given the number of people who muck up data that should be familiar to them, I say there is a lot of misplaced confidence in how well people understand other countries - confidence that often grows because the average person has very limited material to cross-reference with because they can't read a lot of publicly available stuff.
Yeah this roenxi user is one of the most talented mental gymnasts on HN. In past arguments I have been honestly suspicious that I was taken in by a performance artist.
OK, that's all fine; this kind of discrepancy/error happens all the time in statistics. But you for some reason completely avoid the massive discrepancy between 0 and what the US reports. The fact that it's slowly falling from relatively stratospheric heights gives no comfort to ordinary US citizens, when clearly it can be done much, much better.
I think we all know the most probable main reason: US healthcare is a business with huge prices compared to anywhere else in the world, including nations with higher salaries, not a public service. So it's all nice and top notch if you have millions in some form, not if you are in the remaining 95% of the country. General compassion for fellow citizens in need is not a strong point of the US in general, is it.
People like me could move literally anywhere in the world if we wanted. I moved to Switzerland from my crappy home country, for example. But hell will freeze over sooner than I would want to raise my kids or grow old in the US, no thank you, for many reasons, this being one of the biggest.
The core problem is that Grants Pass is being asked to shoulder the homeless burden for the region. They are not being provided with any means of help, and their ability to remove those who cause an excessive burden is being taken away.
Look at Grants Pass on Google Maps. It's a small mountain town that looks like it has a low tax base in the best of times. If, at enormous cost to their tax base, adequate shelter is built, the only result will be more homeless people gravitating to their tiny town because it's better than anywhere else in the region.
The problem and the solutions are regional. The party that should really be getting sued is the state of Oregon or the federal government, but that is much more difficult to do.
If the ruling goes against Grants Pass, the result will be even more off-the-books policies to harass and intimidate the homeless until they move to the next town over, because every small town will see them not as human beings but as a potentially immense local tax liability that has to be moved before someone notices.
To be fair, in a world of good LSP implementations, grep/find are really primitive tools to be using.
Not saying this isn't better than a more sophisticated editor setup, just that grep and find are a _really_ low bar.
Not sure if that's making things "fair". Grep & find are insanely powerful when you're a CLI power user.
Nonetheless, I'm particularly curious in which cases the AI tool can find things that are not easy to find via find & grep (e.g. URLs that are created via string concatenation and thus do not appear as a string literal in the source code).
Perhaps a larger question there, what's the overall false negative rate of a tool like this? Are there places where it is particularly good and/or particularly poor?
I evaluate a lot of code, like ten to twenty applications per year currently, and terminal tooling is my go-to. Mostly the basic stuff: tree, ripgrep, find, wc, jq, things like that. I also use them on top of output from static analysis tooling.
It's not as slick as SQL on an RDBMS, but very close, and it integrates well into e.g. vim, so I can directly pull in output from the tools and add notes when I'm building up my reports. Finding partial URLs, suspicious strings like API keys, SQL query concatenation and the like is usually trivial.
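For flavour, here is a rough Python equivalent of that kind of sweep (my own sketch, not the commenter's actual tooling). Like grep, it only flags what appears literally in the source, which is exactly the gap the AI-assisted tools claim to close:

```python
import os
import re

# Illustrative patterns only -- a real review would use a larger, tuned set.
PATTERNS = {
    "partial URL":       re.compile(r"https?://[\w./-]+"),
    "possible API key":  re.compile(r"(?i)(?:api[_-]?key|secret|token)\s*[:=]\s*\S{16,}"),
    "SQL concatenation": re.compile(r"""(?i)["'][^"']*(?:SELECT|INSERT|UPDATE|DELETE)\b[^"']*["']\s*\+"""),
}

SOURCE_EXTENSIONS = (".py", ".js", ".ts", ".java", ".go")

def sweep(root: str) -> None:
    """Walk a source tree and print lines matching any of the patterns above."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(SOURCE_EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    lines = f.read().splitlines()
            except OSError:
                continue
            for lineno, line in enumerate(lines, 1):
                for label, pattern in PATTERNS.items():
                    if pattern.search(line):
                        print(f"{path}:{lineno}: {label}: {line.strip()[:120]}")

if __name__ == "__main__":
    sweep(".")
```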
For me to switch to another toolset there would have to be very strong guarantees that the output is correct, deterministic and the full set of results, since this is the core basis for correctness in my risk assessments and value estimations.
When we reach that world, let me know. I'm still tripping over a "python-lsp-server was simply not implemented async so sometimes when you combine it with emacs lsp-mode it eats 100% CPU and locks your console" issue.
Possibly. Definitely why it has been locking up on me when I added lsp-mode.
Lsp-mode will schedule one request per keypress but then cancel that request at the next keypress. But since the python LSP server doesn't do async, it handles cancel requests by ignoring them
If emacs hard blocks on LSP requests, that may be on emacs as well.
I recommend you try ruff-lsp; although it does not cover everything and is more for linting, it's high quality.
Personally I don't like the fragility/IDE-specificity of a lot of LSP setups.
I wish every language just came with a good ctags solution that worked with all IDEs. When this is set up properly I rarely need more power than a shortcut to look up tags.