Hacker News | jeffbee's comments

People made self-guided missiles with 1940s technology, in the 1940s. It can't be too much of a surprise if someone right now can make guided missiles in their garage with 2026 electronics. At this point the "guided" feature is trivial, the "missile" part is doable, and the weapon has probably become the tricky part.

Throwing in an aside here: anyone interested in 1940s war technology must check out the old BBC documentary The Secret War (1977), which goes into depth on the engineering challenges of the war and how they were solved.

Well worth a watch. I think I watched it on Youtube.

I think the hard part was, and will usually continue to be, making the whole thing work effectively together with enough performance to be useful in practice. It's a lot of details across a lot of disciplines to get right.

Your comment is broadly misleading. In fact, I would say that "shadow stats" guys like you have enabled the destruction of the system by creating the space to cast doubt on the valid methods used by BLS. BLS unemployment metrics have a valid basis, and where they differ from Eurostat the differences are minor and have a rational basis (such as a 16 vs. 15 year old starting age).

It is tough, though, for me to fully buy labor statistics when it has become the norm recently for them to be revised down. This spans back into Biden's term as well, so it isn't just one party either.

With a valid measure I would expect a roughly even distribution over time between underestimates and overestimates. For a valid measure worth considering, I'd also expect the stat to be released later, when revisions are less likely because more actual data has been collected.


> With a valid measure I would expect a roughly even distribution over time between underestimates and overestimates

This is a reasonable hypothesis. It's also wrong, and I'll explain why.

If measurement errors were iid, you'd be correct. But they're not, and it's well documented that they're not. Early survey results carry a directional response bias, because the employers with the least change respond first. So the earliest releases tend to match whatever was going on before. Then the employers who had to do paperwork respond. And then, finally, someone gets around to calling the folks who never got back. Some of them aren't around anymore.

So yeah, the directional tendency in revisions is well documented. And for a long time, the early releases were appreciated. But maybe American statistical and media literacy is such that only final releases should be released, which would mean we’d always be working with data 6 months to a year out of date.


That's all well and good in theory, but jobs report data over recent years have noticeably shifted towards downward monthly revisions. Prior to the pandemic response, the graph [1] looks much more balanced with regard to positive and negative revisions.

[1] https://www.apmresearchlab.org/blog/how-abnormal-are-the-rev...


> but jobs report data over recent years have noticeably shifted towards downward monthly revisions. Prior to the pandemic response, the graph [1] looks much more balanced with regard to positive and negative revisions

Yes. The reasons for this are well documented. The methodology for the preliminary estimates is only changed through a rigorous process, which means the published estimates lag the best estimates, something the primary sources note in every release if one gets past the headlines.

Also, if you have one year of massive job gains and four years of flat and falling, you’ll spend most of your epoch biased one way. Again, not a sign of methodological problems. Just a predictable methodological artifact that folks are supposed to be able to incorporate before using, much less emotionally reacting to, the data.


Why would the shift to a new methodology bias the estimates to one end? I would expect a new methodology to make comparisons of data between the two systems to potentially be unhelpful, but I wouldn't expect a valid methodology to bias one way or another.

Related: I wouldn't expect past data to bias a current estimate. If 6 or 12 months of positive growth biases the next prediction, that falls into the hot hands fallacy. It isn't predicting based on current conditions, it's predicting based on recent past behavior and extrapolating forward. This only makes sense to do if the data is not yet available, and even then the extrapolation isn't a useful estimate of current conditions.


> If 6 or 12 months of positive growth biases the next prediction it falls into the hot hands fallacy

It's a sample of a sample. The full sample is the final release. The early results are the preliminary releases. When firms change things, they take longer to respond. So whichever way the economy is moving, there will be bias in that direction. If the economy is turning, you won't know the direction. If it's accelerating or slowing down, you don't know the magnitude. Sometimes context clues can help. Sometimes they can't. There is no known statistical treatment for intuiting the missing data before one has it.


Sure, but it's totally ridiculous to post about that without discussing the survey response rate, which is the cause of that drift. People are attributing it to political meddling, and that is baseless.

Naturally all of this metadata about the BLS surveys is available for free from the BLS, so you can just go look at it.


Interesting that you're claiming this is baseless without providing any sources for your alternative. How do you know that (a) the response rate is down meaningfully and (b) that data shows a strong correlation or causation between the two?

That is a reasonable position; however, the assumption that it is the administration gaming them, rather than other motivated parties, is open for discussion.

It is in fact not at all reasonable. They are saying that the BLS stats can't be trusted because they totally misunderstand the survey methodology. That isn't a reason!

I'd counter that if we were doing a good job gathering data, these structural biases could be compensated for with more conservative initial numbers.

At some point a lack of decision to take compensating action becomes faking the numbers.


> if we were doing a good job gathering data that these structural biases could be compensated for with more conservative initial numbers

There is no "more conservative." The data will bias in the direction of the trend. The point of the data is, in part, to measure that trend. Fucking with it to make it politically correct to the statistically illiterate is precisely the sort of degradation of data we're worried about.

(They’re also useless as a time series if the methodology changes quarter to quarter. That’s the job of analysis. Not the data.)


What you wrote suggests the data will bias predictably, which matches my understanding.

Reporting biased data by default, on the assumption that the audience already compensates for the bias, seems like a weak argument against improving.

They can provide for the continuation of data visibility/granularity by releasing the prior numbers as previously calculated and at the same time changing the calculation of the headline number to be better compensated.

The simpler argument is that changing it at all will result in a negative step change in the reporting that no one wants to take accountability for.


> What you wrote suggests the data will bias predictably

Ex post facto. Before the fact, we don’t know.

Imagine you know the weather will be a strong gust regardless of direction. Averaging the models will produce a central estimate. But you know it will be biased away from the center. You just don’t know, until it happens, in which direction.

> They can provide for the continuation of data visibility/granularity by releasing the prior numbers as previously calculated and at the same time changing the calculation of the headline number to be better compensated

They do. These data are all recalculated with each methodological change. They’re just deprecated indices the media don’t report on because they’re of academic, not broad, concern.

> simpler argument is that changing it at all will result in a negative step change in the reporting

Simpler but wrong. Those data would be useless for the same reason we don’t let CEOs smooth revenues.


> It is tough, though, for me to fully buy labor statistics when it has become the norm recently for them to be revised down.

There have been revisions since forever, and this is because they depend in part on surveys; if companies (and the people within them) don't bother responding in a timely or accurate manner, then that's going to throw the sampling off.

> CES estimates are considered preliminary when first published each month because not all respondents report their payroll data by the initial release of employment, hours, and earnings. BLS continues to collect payroll data and revises estimates twice before the annual benchmark update (see benchmark revisions section below).

* https://www.bls.gov/opub/hom/ces/presentation.htm#revisions

Post-COVID surveying seems to have become more difficult (and BLS budget stagnation/cuts haven't helped). This has been a known issue for a while; see Odd Lots episode "Some of America's Most Important Economic Data Is Decaying":

> Gathering official economic data is a huge process in the best of times. But a bunch of different things have now combined to make that process even harder. People aren't responding to surveys like they used to. Survey responses have also become a lot more divided along political lines. And at the same time, the Trump administration wants to cut back on government spending, and the worry is that fewer official resources will make tracking the US economy even harder for statistical departments that were already stretched. Bill Beach was commissioner of labor statistics and head of the US Bureau of Labor Statistics during Trump's first presidency and also during President Biden's. On this episode, we talk to him about the importance of official data and why the rails for economic data are deteriorating so quickly.

* https://www.youtube.com/watch?v=nfgpqVixeIw


My argument wasn't that there shouldn't be revisions, though, only that recent years have shown consistently negative revisions rather than a roughly even distribution.

If response rates are down or something else is making surveys more difficult, it's reasonable that confidence intervals would widen and the size of revisions would increase. It's unreasonable that difficulty in surveying would lead to a consistent bias in results, though; that's a methodological issue at best.


> My argument wasn't that there shouldn't be revisions though, only that recent years have shown consistent negative revisions rather than a roughly even distribution.

It's been too many moons since I took a prob/stats course to comment accurately on population sampling, but how valid is the assumption that errors 'should' skew both positive and negative?


If errors are skewed in one direction there would likely have to be a factor forcing it, like sampling and response bias.

That's always possible, though again I question the validity of the measure and results if it's getting consistently skewed results. Either the methodology is faulty or the results simply can't be trusted because they can't reliably get good data.


I don't say stuff like this very often, but are you actually blaming a victim for dealing with the reality of the government bsing its own stats, instead of the government that allowed this bs to continue? BLS had only one thing going for it, and it is mostly that it was used for a long enough time that changing the methodology would prevent us from being able to compare it to prior time ranges. That is it. Otherwise, the methodology itself is seriously flawed ( and likely was from the get-go, but these days it is absolutely the worst possible mix of options ).

Honestly, your comment made me mildly angry. That said, can you say why you believe parent's comment is misleading?


Do you have a substantive complaint to make about the BLS methodology? So far all I see in your remark is shadowstats vibes.

I've never met a single person willing to attest to filling out a BLS survey. Not once. If their methodology is built on that + unemployment data from state unemployment agencies + data from payroll processors, anyone not collecting state unemployment benefits is invisible to the system, and half of the payroll is actually not even constituted of U.S. citizens.

Admittedly, if I could find a single instance of someone willing to vouch or share insight on having filled out a BLS survey, that'd cure a healthy chunk of skepticism. There'd still be the other distortions in the data to account for, but I'd at least have an instance proving that yeah, there is somebody filling out these surveys and it isn't just something they say they do to make their magic unemployment number sound legit.

Note, I'm in a massive sceptical shit phase at the moment. Last decade has burned my optimism hard. So when it comes to my ability to assume benevolent intent right now, there's a heavy bias against doing it, and a heavier bias in the direction of "what would be the easiest way to keep the System limping along?" The answer to that is "say you do one thing, in reality do another, and as long as no one comes lookin', it's gold." The finance industry runs on Trust moreso than anything else, and there ain't much to be said for Trusting anything you can't verify these days. Not from other humans.


> I've never met a single person willing to attest to filling out a BLS survey

I’ve never met a single chicken farmer. Does that mean I should be sceptical about them existing? Like, what sort of metric is this for truth finding?

> to assume benevolent intent

No need. Markets move on these data. The rich and powerful bet their money on what they say.


No one's ever met a Gideon either.

> No one's ever met a Gideon either

I’ve never != nobody has.


It used to be an old Jay Leno late night bit

> if I could find a single instance of someone willing to vouch or share insight on having filled out a BLS survey, that'd cure a healthy chunk of skepticism

It comes from the Census bureau, a letter like this: https://old.reddit.com/r/frederickmd/comments/1p1j1my/did_an...

https://www.google.com/search?udm=2&q=%22current+population+...

They only reach ~100k unique households per year, so you'd need to survey a few hundred people to find a respondent: https://en.wikipedia.org/wiki/Current_Population_Survey#Meth...

> Note, I'm in a massive sceptical shit phase at the moment

How might one distinguish such "scepticism" from ignorance?


See, Census letters are one thing. BLS is another. I've actually received Census letters. BLS ones, not so much, and given they claim to be collecting data through surveys all the damn time, I'd expect to have been able to find someone who filled one out. It's weird, to me, that my luck has been so bad in finding someone with context on it. At best I only find someone who knows BLS uses surveys as part of their methodology, usually through reference to the site. No one ever seems to be able to primary vouch for having been the one surveyed.

>How might one distinguish such "scepticism" from ignorance?

Ignorance doesn't seek to invalidate itself. Scepticism does. It does not enrich my life knowing there's a methodology to collect a "high value statistic" while not finding any on-the-ground proof of people who actually have primary exposure to the methodology. One can't reason around what the system is actually measuring without a sampling of that. I can find screenshots of UI at times. I find papers around low response rates. I never find an actual person who says "Yeah, I get dinged to do those every few months, once every few years..."

I sure as hell know, when I'm gathering statistical data, that sampling bias evaluation requires footwork, and if you do that footwork, if your methodology works, it shouldn't take you that long to run into someone you've surveyed if you're doing it right. If you're not, and you're only hitting "the usual suspects", you're not getting a representative sample/measuring what you think you are.

So I look for payroll people, or people who have done payroll in the U.S., and ask if they've actually ever been directed to provide input. I've been doing it for the last few years. Nobody seems to recall ever having been asked to participate in what amounts to billion-dollar-money-movement-at-stake jury duty.

So yeah. This is kind of a weird tic of mine at the moment. Ranks right up there with the time I felt the inexplicable urge to figure out what the deal with zoning as applied to city planning was and how it worked. Something just doesn't add up. I hate that. Mental equivalent of a thumb detection via hammer.


> I've never met a single person willing to attest to filling out a BLS survey.

Unless you have introduced yourself with this question to thousands of people, this is a totally meaningless statement. It says more about your social circle, your grasp of descriptive statistics, and the weird online stew you are soaking your brain in than it says about the CPS.


I can't tell if you are serious or not. Let's assume for a moment that there was once a benefit to the BLS survey methodology ( I would argue otherwise, but w/e ). Is it a good methodology today?

So my main argument ( and frankly the only argument that should matter ) is that it is a bad fit for the goal of estimating these values ( even though we do know its failure modes ). Is that not enough?


What are the alternatives, and do other countries' labor statistics agencies use them?

The alternative is to build something better. Just about anything is better than the current survey system. What I would propose is something akin to a "derived real-data unemployment system". All this data exists now, but is distributed. It could be stitched together, if one were so inclined.

<< do other countries

No, it doesn't mean I am wrong.


"BLS CPS is worse than a hypothetical better thing" is tautological, void, and without meaning.

It must be nice to live in a simple binary world.

You made the argument and provided zero supporting evidence. As it stands, it's merely an opinion, and appears to be an uninformed one until you prove otherwise. That's what people are asking you to do.

Sigh, your supporting evidence is a record of someone saying something, which itself is merely an opinion... men in glass houses and all that. The interesting thing about my opinion is that while it may not be AS informed as yours, it is notably above the average level of knowledge when it comes to BLS.

<< That's what people are asking you to do.

No. What I am being asked to do is: "Show me a better way, but I only accept a better way that is already utilized by someone else". Not a recipe for a thoughtful exchange of ideas.


It amuses me how contradictory the two bullet points from the article are.

- Strict limits on governmental regulation, wherein any restrictions must be demonstrably necessary and narrowly tailored to a compelling public safety or health interest.

- Mandatory safety protocols for AI-controlled critical infrastructure, including a shutdown mechanism and compulsory annual risk management reviews.

How were the necessity and scope of the second rule shown to satisfy the first rule?


You can read the actual bill here: https://legiscan.com/MT/text/SB212/id/3212152/Montana-2025-S...

In essence, it doesn't really mandate anything; it says you should have a plan, and only for "critical infrastructure facilities":

"Section 4. Infrastructure controlled by critical artificial intelligence system. (1) When critical infrastructure facilities are controlled in whole or in part by a critical artificial intelligence system, the deployer shall develop a risk management policy after deploying the system that is reasonable and considers guidance and standards in the latest version of the artificial intelligence risk management framework from the national institute of standards and technology, the ISO/IEC 4200 artificial intelligence standard from the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems. A plan prepared under federal requirements constitutes compliance with this section."

So it's essentially lip service to AI safety, probably to quell some objections to a bill that otherwise limits regulation of tech platforms.


I did read it. The point is there are no findings that justify the regulation in light of the grant of rights in the same bill. The only WHEREAS that approaches the level of a finding amounts to "many are saying..."

The 2nd rule is clearly intended to be a shield and distraction. It's there to pretend the law serves the public, when in reality it's designed to defend datacenter builders from the public interest. Politicians can talk about meaningless sci-fi concepts like SkyNet and how they can defeat it with off switches, instead of real issues like noise pollution, tax giveaways, electricity prices, and mass surveillance.

> any restrictions must be demonstrably necessary and narrowly tailored to a compelling public safety or health interest

This should be the default policy on regulation. We shouldn't need a specific law to enact it.


Probably one applies for individuals while the other, as described, applies for infrastructure.

Orwell called it “double speak”

Not quite. He coined "newspeak" and "doublethink".

You are correct - mixed it up

It seems like a normal-sized motherboard? For comparison here is the ifixit teardown of a PixelBook Go (happens to be the laptop I am using right now). https://guide-images.cdn.ifixit.com/igi/LT6YEIeE1Svh4WCk.hug...

Nobody hates ALPRs more than tax evaders. I love ALPRs because they bring lawless sociopaths out of the woodwork.

From the company that brought you the Lung Brush.

"How dare they say mean things about the manner in which I destroyed a nation?"

I’m no big fan of DOGE but our fiscal trajectory is utterly unsustainable, much more nation destroying than the particular cuts being mentioned here. I hate that it is now a republican talking point, but we do need a focus on raising revenue and reducing expense — and there is no easy ‘fraud’ win on expense, most of these are on real things that big coalitions of people want but we cannot afford without a large increase in revenue-as-%-GDP (ie. middle & working class tax increases), inflation (effective middle & working class tax increases), or a technological productivity boom.

Ok, then let's address the 52% elephant in the room instead of making cuts to the 3%: https://en.wikipedia.org/wiki/Government_spending_in_the_Uni...

Reducing defense spending by a fractional amount will have more of an impact than completely eliminating science spending altogether. The Iran tally is up to what, $11b now after a single week?


Defense is 12% of federal spending, not 52%. Definitely a bigger impact & waste than science budgets, I agree - but even cutting 100% of it would not close our hole. As I said, I’m no big fan of DOGE — but the problem is a real one despite the common tendency to put fingers in our ears or propose non-solutions, whether of the ‘tax the rich’, ‘cut DEI spending’, or ‘end all military expenditures’ variety. Not a single one of those, nor any combination of them, gets us there. We have to make real hard choices.

Those numbers still scale whether you're talking about total or discretionary spending, which means science/grants are an even smaller fraction of the 3%.

Why start cutting from the smallest piece of the pie? My point is that defense spending is already outsized and increasing while we cut science spending. Instead of increasing it, why not cut it and provide 100x the savings before cutting science spending? Doesn't seem like such a hard choice.


The top 5-6 expenses (SS, Medicare, interest, health, defense, income security)

https://fiscaldata.treasury.gov/americas-finance-guide/feder...

Going to be hard to cut into these, and the middle/working class is shrinking as wealth concentrates and wealth inequality expands. Perhaps if there weren't so many middlemen taking slices w/o providing value...


Source on the middle class shrinking? How can both the middle and working class be shrinking? Real median incomes are increasing.

The fuckwit in the video is personally responsible for crushing the productivity boom. Higher education is, or at least was, one of America's chief export industries.

But why would megacorp and billionaire tax increases be off the table? You didn't even mention them... And before someone points out that they pay - yes, they pay _something_, then get tax cuts or legal loopholes, and in the end they don't really pay.

We could liquidate all billionaire wealth and raise taxes on income >$500k to 100% and it would not close the fiscal hole.

I didn’t mention it because I was only mentioning the options that will avert our fiscal crisis.


I don't even know why this is downvoted. Standard technique in Texas. Harris County does not have 40 DPS offices for its 5 million people. The current backlog to get a DPS drivers license appointment in Harris County is 45 days. The next available appointment in Kerrville is tomorrow. That is inequitable.

But anyway, none of that is the real core issue with the idea of voter ID. The real issue is that there are many living Americans who were born in jurisdictions that steadfastly refused to issue birth certificates to Black people.


This doesn't have to be binary... there can be multiple sources of disenfranchisement. They all add up.

Why does HN take this guy seriously? OK, he got pacman running on a PS3. Great. Remember the time he stood up on a stage discussing the path to Level 5 autonomous driving? Comma.ai actually produces a barely-adequate lane keeping system for obsolete cars. Remember fixing Twitter search in 12 weeks? This guy is a total charlatan.

On the question of whether Hotz knows what AI can or cannot do, the answer is demonstrably "no".


Yeah this is the very first time I am hearing that templates are "extremely cheap". Template instantiation is pretty much where my project spends all of its compilation time.

It depends on what you are instantiating and how often you're doing so. Most people write templates in header files and instantiate them repeatedly in many many TUs.

In many cases it's possible to only declare the template in the header, explicitly instantiate it with a bunch of types in a single TU, and just find those definitions via linker.


On the few occasions that I have looked at clang traces to try to speed up the build (which has never succeeded), the template instantiation mess largely arose from Abseil or libc++, which I can't do much about.
