Hacker News | Zanni's comments

Serious question: why is this bad? Is it just the 3% false negative rate? I don't see the negative privacy implications of face recognition when the alternative is to present your face (via photo ID) anyway.

I enjoy traveling to Berlin for vacation, as it's a totally different atmosphere around privacy. Default payment is cash, your entry and exit from train stations is not tracked (surveilled perhaps, but you do not tap-in/tap-out or god forbid tap your credit card every time you step on a train like SF or NYC), and it's against the law to publish photographs of someone without their consent.

Ask IBM what becomes of databases full of people's names associated with their movements.


In both SF and NYC you can still buy transit passes anonymously with cash if you so desire.

Convenience won, though, it seems.


I think this is silly given how much Germany is actively helping a country whose PM has an ICC arrest warrant out against him.

Germany is still facilitating an alleged genocide. The only thing that has changed is the profile of the victims. The situation now is even worse, given that practically everyone in the world knows what’s happening but life is going on as normal.


You could have made a sensible argument about how security policies in Israel move in the wrong direction, even if it isn't at all on topic. But you stumbled here too.

It’s a reply to this part of OP:

> Ask IBM what becomes of databases full of people's names associated with their movements.

None of this matters. If a state wants to commit a genocide, they will. Collection of IDs being there or not is a minuscule bump in the road there.


It is not against the law to publish photographs of someone without their consent. People post me to Instagram without my consent in Berlin all of the time.


There are some carve-outs for including you in a picture of something else, but there is, at least, social backpressure against swinging a camera around.

I appreciate the response, but it seems that database can be constructed with or without facial recognition because photo ID is already required. So, I ask again, why is this bad?

Showing ID to pass a gate is somewhat different than having a timestamped record of the fact that you passed a gate, but I agree that given it's already surveilled it's not a big difference. Still, small differences add up.

I mean, it is within living memory for many HN'ers that you could travel freely in the United States without doing either. It's a post-9/11 thing that an airline ticket is tied to a unique person and requires a matching photo ID.

There was a time when America's security forces did not routinely surveil its own peoples' movements.


When I was a kid there were classified ads like "Pan Am NYC Dec 20-28, E. Smith, $200 o.b.o" for people who wanted to resell their ticket because they couldn't make the trip. There were no ID checks then.

In the 1990s the airlines jumped at the opportunity to require ID checks so they could take control of the secondary market.

It was still possible to buy a ticket like "E. Smith", but that option was cut off a few years later.


The leviathan is often arbitrary and capricious.

(American here) A quick search turns up Pew Research, National Academies Press (associated with the Library of Congress), AmericaUnderWatch dot com, Politico and Georgetown Law .. all with varying responses to this question. In the case of social structure and law, there are many layers, interwoven, and difficult or impossible to fit into chat-level responses.

Universe was the only series I watched (due to Scalzi being a technical advisor), but I found it infuriating that they let Tim Roth's character continue to have the run of the ship after he proved himself a psychopath.


Do you mean Robert Carlyle? Tim Roth kind of looks similar to him, but doesn't appear to have been on any Stargate.


That is who I mean, thank you. It's been a few years ...


I worked as a telephone operator for a couple of summers, and pay phones were a pain in the ass. The protocol for a long-distance call (and this was back when long distance was expensive, up to a couple of dollars/minute) was to collect for three minutes up front and refund if they went under. If the call went longer, the caller was expected to stay on the line and pay the balance, which could be considerable. Often they just bailed. There was one exchange near a military base where we'd have to handle lengthy long-distance calls from lovesick/homesick recruits, and so many of them bailed at the end of the call that we took to breaking in every three minutes to collect more coins. Worst part of the job.


It seems to be intentional. (I was confused by this at first, too). If your starting grid contains, e.g., a dot, you can safely assume it's a one-ship. But if you add a ship, the initial state is a block that resolves to a dot or an end piece, as appropriate, when you complete it.


Physical books are great. I love physical books. But (among other downsides) they take too much space. I've got just over 2,500 books on my Kindle. That's more than 200 linear feet of shelf space if they were physical books.

My ideal library is a hybrid: physical for art books, kids' books, reference books, large format or special editions, signed copies or sentimental treasures, etc., and digital for everything else. An iPad as a digital card catalog for the entire collection and a couple of Kindles for actual reading. (Impossible to browse on a Kindle ...)


I couldn’t find an average thickness for books, so I quickly measured the second-from-top shelf of my closest bookcase. (The top seemed unfair, as it holds a collection of “the world’s famous orations” which seem abnormally thin.) Shelf in question: https://ibb.co/CznTZ5m

(Googling, I just seemed to get spine thickness and format sizes, but not an average actual printed thickness.)

It has 21 books in 28 inches.

1.34 inches per book.

I get 279 linear feet for 2,500 books.

Checking Amazon and Walmart, the bookcases they sell seem to be 5-6 tiers with 24-inch width. (Though checking my non-built-in ones, all of the ones I own are around 4 feet.)

Using the Amazon average, you’d need 24 book cases to fit all of them… which does seem like a lot. Wonderful but a lot.
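For anyone who wants to redo the arithmetic, here's a quick Python sketch using the numbers from this thread. (Note: using the exact 28/21 ratio rather than the rounded 1.34 in/book gives ~278 linear feet instead of 279.)

```python
import math

books = 2500
inches_per_book = 28 / 21              # 21 books across 28 inches of shelf
total_inches = books * inches_per_book

linear_feet = total_inches / 12        # ~278 linear feet of shelf space
shelf_per_case = 24 * 6                # a 24-inch-wide, 6-tier bookcase
cases_needed = math.ceil(total_inches / shelf_per_case)  # ~24 bookcases
```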

I do have a kindle, and I love it. But something about physical books is great.

I have 5 built-in bookcases of various sizes, and six 4-foot-wide bookcases that are 5 or 6 tiers… and yet I still have bins and bins of books in my attic.

Initially I had thought “that’s manageable” but upon some self reflection you make an excellent point.


I appreciate the fact-checking and the self-reflection :) I used 1" as an estimate (I have a lot of paperbacks), but 1.34" is probably a better estimate for hardbacks.


For what it's worth, I loathe this sort of thing (damning by vague questioning): "But, if a composite is full of a measure that’s biased, how accurate is it going to be?"

Well, I don't know, and you haven't told me. Is it a consistent high bias? Low? Variable? How biased? How accurate?

What you're trying to say, without accepting responsibility for actually saying it, is "It's an inaccurate composite because it's based on biased measurements." But you don't know that, or you'd say it without weasel words.


From the study linked [https://link.springer.com/article/10.1007/s40279-024-02066-5] (with acronym meanings added inline for readability): "The lowest pulse arrival time standard deviation (PAT σ) at 2.0 times the standard deviation corresponded to 88.4% of the Region of Practical Equivalence (ROPE) for SDNN (Standard Deviation of Normal-to-Normal intervals), and 21.4% for RMSSD (Root Mean Square of Successive Differences). As the standard deviation of PAT increases, the equivalence between photoplethysmography-derived "heart rate variability" (PRV) and electrocardiography-derived heart rate variability (HRV) decreases for both SDNN and RMSSD. The width of the highest density interval (HDI), which encompasses 95% of the posterior distribution, increases with increasing PAT σ. This increase occurs at a higher rate for RMSSD than for SDNN."

So for "how accurate?", grossly and irresponsibly oversimplifying, the Apple method is roughly 90% accurate, and devices using the inferior method are roughly 20% accurate.

I would be really interested to know of any devices not locked down to Apple ecosystem that also use this approach, if anyone has any insight.


The Quantified Scientist has tested a whole buttload of smart watches against a chest-strap heart rate monitor:

https://youtube.com/@thequantifiedscientist?si=4u-u1VI7eMrXD...

Apple Watches are at the top, but there are many other watches that are almost as good.


Is there a searchable text version of this resource?


Not sure, but every video shows a huge graph with all of the watches on it, ranked by accuracy. If you just view the latest video you can get an idea of the best watches, or where a watch you might be looking at falls, then you can search for the specific video where he tests it to see specifics.


In the video description click on "Show transcript".


But it seems each video is about a different model; having to search N transcripts still isn't optimal.


He's not written books on the subject, but you can set up a search yourself. There's also a blog with easier access than transcripts: https://www.robterhorst.com/post/apple-watch-ultra-2-vs-seri...


And your comment was much more informative than the article.


That's obviously untrue.


This is such a ridiculously wide divide that I'm surprised Apple hasn't used this, or something similar to this, as a marketing tactic. Imagine being able to say your medical insights are 70 percentage points more accurate than your competitors'. I do understand this is apples to oranges, since these are often devices in a completely different price range, but still. I was surprised to hear this, and I have been loosely following the medical wearables space for a while.


I would imagine Apple would be quite happy saying as little as possible about the sensors in their watches. It seems like their largest surface area for licensing litigation. Touting performance today is fuel in court tomorrow.

https://time.com/6692718/apple-watch-masimo-alivecor-patent-...


I mean, they'll definitely market it as 4.5x more accurate. Or, for nice round numbers, 5x -- and while we're within orders of magnitude it's not unreasonable to say "up to 10x more accurate."


I believe there are non-Apple devices that use this approach as well, but they're certainly failing in making that information easy to find.

That being said, being the only watch with the FDA's blessing is a pretty effective marketing point:

https://www.apple.com/healthcare/apple-watch/


There are a number of watches that were given a blessing, as you put it, by the FDA. Pixel, Samsung, Garmin, Withings...


Apologies, you're correct. I vaguely remember reading they were "first" to get FDA approval for some feature, but if that were ever true, it is sorely outdated information. I fell for the hype, it seems.

Unfortunate that I can't delete or edit the post at this point.


Where can one find a list of those watches/manufacturers?


Not remotely the same idea, but the Magic Puzzle Company [1] makes fantastic puzzles with "two solutions." They're traditional puzzles with hand-drawn art, but the first solution is in four distinct pieces that can be reassembled to make a slightly larger rectangle with a hole in the middle. (Similar to Sam Loyd's Missing Square puzzle. [2])

Then, a second set of pieces (in a separate envelope) allow you to fill in the hole with an image that (typically) wildly alters the interpretation of the original image.

[1] https://magicpuzzlecompany.com/ [2] https://proofwiki.org/wiki/Sam_Loyd%27s_Missing_Square


Surprising reference to The Goal [1], which Mr. Beast "used to make everyone read ..." and still recommends. The Goal is a business novel about optimizing manufacturing processes for throughput and responsiveness rather than "efficiency" and is filled with counter-intuitive insights. Presenting it as a novel means you get to see characters grapple with these insights and fail to commit before truly understanding them. Excellent stuff, along the lines of The Phoenix Project [2], with which I assume many here are already familiar.

[1] https://en.wikipedia.org/wiki/The_Goal_(novel) [2] https://www.goodreads.com/book/show/17255186-the-phoenix-pro...


Theory of Constraints is fascinating because, as MrBeast points out here, it seems extremely obvious. I've had numerous interactions on this site where a person dismisses an insight from ToC as "obvious" and then 2 sentences later promulgates the exact type of intuition that ToC disproves.


Yeah, this is the brilliance of the novel format. Someone presents an insight, and it can seem obvious in isolation but then seems obviously wrong in context. "Of course we should favor throughput over efficiency" is obvious until you realize it means, for example, allowing idle time on incredibly expensive machines to favor responsiveness, which just seems wasteful.

In the novel, you get to see the characters bang their heads against these "paradoxes" again and again until it sinks in.


>is obvious until you realize it means, for example, allowing idle time on incredibly expensive machines to favor responsiveness, which just seems wasteful.

Weird how things that seem to make sense in one context seem to make no sense in another. If you told me a factory runs its widget-making machine at 70% capacity in case someone comes along with an order for a different widget, or for twice as many widgets, at first glance I'd think that's a bad idea. If your customers can keep your widget machine 100% full, reserving part of the machine on the chance that something new will come along seems wasteful. And through cultural osmosis, the idea of not letting your hardware sit idle is exactly the sort of thing that feels right.

And yet, we do this all the time in IT. If, instead of a widget machine, you told me that you run your web server at 100% capacity all the time, I'd tell you that's also a terrible idea. If you're running at 100% capacity with no spare headroom, you can't serve more users if one of them sends more requests than normal. Even though intuitively we know that a machine sitting idle is a "waste" of compute power, we also know that we need capacity in reserve because demand isn't constant. No one sizes (or should size) their servers for 100% utilization. Even with something like a container cluster, you don't target your containers at 100% utilization, if for no other reason than you need headroom while extra containers spin up. Odd that, without thinking it through, I wouldn't have applied the same idea to manufacturing machinery.
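The server-headroom intuition has a standard queueing-theory form: in an M/M/1 model, mean time in the system is 1/(μ − λ), which blows up as utilization approaches 100%. A rough sketch, with an entirely made-up service rate:

```python
# M/M/1 queue: mean time in system T = 1 / (mu - lam),
# where mu is the service rate and lam the arrival rate.
# Utilization rho = lam / mu; as rho -> 1, T explodes.
mu = 100.0  # requests/sec the server can handle (illustrative number)

for rho in (0.5, 0.7, 0.9, 0.99):
    lam = rho * mu
    t_ms = 1000 / (mu - lam)  # mean time in system, in milliseconds
    print(f"utilization {rho:.0%}: mean latency {t_ms:.0f} ms")
```

Going from 50% to 99% utilization multiplies mean latency 50x in this model, which is why nobody sizes for 100%.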


This is a key insight many need to be aware of: the thing that gets sacrificed to obtain efficiency is resilience.

The goal is to master the bend-not-break model.

You can make a bridge that can handle a 10-ton load for half the material of one that can take 20 tons. 99% of the time this isn't an issue, but that outlier case of an 18-ton truck can be disastrous. This is why power cables have sag in them, in case there is an extreme cold snap. It's why trees sway and bend with the wind, so that anything but the most extreme events does not break them; extending that analogy, grass is much weaker but can handle even higher winds. The rigid are brittle.

I'm not saying not to strive for efficiency, but you also have to allow those efficiency gains to provide some slack in the system. Where I work, there is a definite busy season, so for most of the year we operate at about 70% utilization and it works out great. Most people are not stressed at all. It means that during those 2 months of the year when it is all hands on deck, everyone is in peak condition to face it head on.

In my previous job in manufacturing, efficiency was praised over everything else; it was 100% utilization all of the time. So when the COVID rush came, it practically broke the business. After a year of that unrelenting pace, we started to bleed out talent. Over the next 6 months, they lost all their highest talent. From those I still spoke with a year later, they said they lost about two thirds of their business over the following 12 months; they are now on the edge of collapse.

Slack allows a bend, pure efficiency can lead to a break. There is a fine line between those two that is very difficult to achieve.


I see the parallel you're drawing, but even the core idea is, I think, different enough to be worth dissecting.

In manufacturing, you keep spare capacity to allow for more lucrative orders to come in. If you don't expect any, you run at 100%. For instance, when Apple pays TSMC all the money in the world to produce the next iPhone chip, they won't be running that line at 70%; the full capacity is reserved.

Or if you're a bakery, you won't keep two or three cake baking spots free just in case someone comes in with an extraordinary order; you won't make enough on that to cover the lost opportunity.

We run our servers at 70% or even 50% capacity because we don't have control over what that capacity will be used for, as external events happen all the time. A manufacturer receiving a spike of extra orders can just refuse them and go on with their day. Our servers getting hit with 10x the demand requires efforts and measures to protect the servers and current traffic.

Factories want to optimize for efficiency, server farms want to pay for more reactivity, that's the nature of the business.


I think even for a company like TSMC these ideas are important to understand.

To give you an example, TSMC might have a factory with 10 expensive EUV lithography tools, each capable of processing 100 wafers per hour. Then they have 4 ovens, each able to bake batches of 500 wafers per hour.

TSMC could improve efficiency by reducing the number of ovens, because they are running at only 50% capacity. But compared to the cost of the EUV tools, the ovens are very cheap. They need to be able to produce at full capacity even when some ovens break down, because stopping the EUV tools because you don't have enough ovens would be much more expensive than operating with spare capacity.
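The hypothetical numbers above make the bottleneck logic concrete: line throughput is set by the slowest station, so oven utilization is irrelevant as long as ovens aren't the constraint. A quick sketch:

```python
# Bottleneck analysis for the hypothetical fab line above.
stations = {
    "euv_litho": 10 * 100,  # 10 tools x 100 wafers/hour = 1000 wafers/hr
    "ovens":     4 * 500,   # 4 ovens x 500 wafers/hour  = 2000 wafers/hr
}

# The line runs at the rate of its bottleneck station.
throughput = min(stations.values())
utilization = {name: throughput / cap for name, cap in stations.items()}
# euv_litho runs at 100%, ovens at 50% -- cutting an oven saves little
# but risks starving the far more expensive EUV tools.
```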


> Or if you're a bakery, you won't keep two or three cake baking spots free just in case someone comes in with an extraordinary order; you won't make enough on that to cover the lost opportunity.

I think it's always worth thinking about what you can leave slack / idle space in. For example, you might not keep multiple stations free, but you might invest in a larger oven than you need for the cakes you currently make. Or you might invest in more bakery space than you need, including more workspace than you can use at 100% utilization. Not because you necessarily anticipate higher demand, but because you might get a customer asking for a cake bigger than your standard. Or because you might have a customer placing a large order and need some extra room to spread out, or to have a temporary helper do some small part of the job even if they can't use the space as a full station.

But also idleness might look like "you don't spend all of your time baking orders for customers". If you never build in slack for creating, experimenting and learning, you'll fall behind your competition, or stagnate if your design and art is a selling point.


Even for a server farm, you can prioritize the web traffic and still use the excess capacity for CI or whatever.


> using only part of the machine for the chance that something new will come along seems wasteful.

Because it is. My brother works in industrial manufacturing machinery supply. I can assure you the overwhelming majority of manufacturing machines on the planet are not only run constantly but as near to 99.999% uptime as possible. So much so that operators are even loath to turn them off for critical maintenance, preferring to let the machine break down so they don't get blamed for being the person to "ruin productivity".

This book sounds like one of those flights of fancy armchair generals are so fond of going on.

Perhaps it works in small boutique shops making specialized orders but that is a slim minority of the overall manufacturing base. I could see why the advice would appeal to HN readers.


It really depends on whether the capacity is fixed or not. If capacity is fixed and demand is unlimited (e.g., because you just can't get more EUV light sources this year), then you should probably run as close to 100% utilisation as possible.

But if you can easily scale production capacity, you should not strive for 100% utilisation. You should expand capacity before you reach 100%, because if you are running at 100% you will not be able to take any more orders and lose the opportunity to grow your business.


Yeah it mostly only works for small boutique shops like the Toyota Production System or Ford’s manufacturing line.

And yes, a lot of manufacturing doesn’t behave this way. That’s the “counter” part of “counter-intuitive” revealing itself.

This comment is yet another of these excellent cases in point!

You really don’t see how “they’re afraid to turn them off even for critical maintenance” might be actually suboptimal behavior in the long run?


One of the most insightful things I heard someone say at Toyota (in an interview) was that they replace their tools (drill bits and the like) at 80% wear instead of letting them get to 100% and break.

Why waste that 20%?

Because if the tool breaks and scratches a $200K Lexus, then that might be a $20K fix, or possibly even starting from scratch with a new body! Is that worth risking for a $5 drill bit they buy in boxes of 1,000 at a time? No.

Then the interview switched to some guy in America looking miserable, complaining how his bosses made him use every tool until breaking point. He listed a litany of faults this caused, like off-centre holes, distorted panels, etc.

And you wonder why Tesla panels have misaligned gaps. Or why rain water leaks into a "luxury" American vehicle!


Toyota uses its price premium and reputation to achieve this. It's not something every company can do, and I don't mean in theory; I mean that the economics don't support it. Most buyers cannot and will not pay an extra premium for reliability. The reality is that letting tools break/damage/fix/replace is actually cheaper overall; otherwise it would not be the popular choice.

If tomorrow Ford decided to adopt this process, it would be a decade before the market believed they had changed their ways. Would they survive this gap? I don't know; the new Ford Mach-E is not selling, so I doubt it, but I'm not an economist. People don't buy Fords for the reliability. They buy them because they're cheaper, and the risk of downtime is less important to them than the price premium. Don't forget that in order to capture those saved resources you must be disciplined all the time, and most people/corps cannot achieve that.


Toyota’s strategy is cheaper, and their cars are very cost competitive.

PS: “It’s too expensive to save money with your methods!” Is the most common response I get from customers to this kind of efficiency improvement advice. Invariably they then proceed to set several million dollars on fire instead of spending ten thousand to avoid that error. It’s so predictable, it is getting boring.


I would really recommend coming into these conversations with more curiosity!

Toyota makes some of the cheapest and some of the most expensive cars on the market. They don't "use" their reputation to do this, their reputation is the result of excellent production.

You're missing the point with Ford, which is another very successful manufacturer that uses techniques and philosophy similar to Toyota's, and not similar to what your brother's machine shops do.


Edit: Sorry, missed your Poe's law. People buy Fords because they are cheaper, for the most part. People who have more money buy Toyota. This is just market segmentation between a couple of the biggest brands.

Companies that have hammered out an effective cost/production/time ratio are not something you can compete with without becoming the same thing as them. Which is why factory managers are literally afraid to turn machines off for any reason.

My brother constantly tells me how, when they do repairs, they will see something within 1-3 months of failing and tell the factory manager. He said that, almost without exception, they ask whether it will increase the repair time today, and of course the answer is yes. They always decline and deal with it when it breaks, at a greater time/cost. I think this is more an effect of the toxic work relationship that MBAs have forced on everyone.


What are you arguing here exactly? Most production systems work the same way as your brother's, which is to say they suck. We're pointing to a methodology with a very strong track record of producing systems that don't suck, such as Toyota's and Ford's (empirical disproofs of your claim that such an approach is only applicable to boutique shops).


>Toyota's and Ford's (empirical disproofs of your claim that such an approach is only applicable to boutique shops).

Where was this provided? I didn't see you or any other poster provide a claim or evidence that Toyota or Ford intentionally leaves unused production capacity. I had a busy day, so I may have missed it somewhere.

As far as I'm aware, they also run their assembly as close to 99.999% of the time as possible.

My brother is not a manufacturer. He works for an engineering company that makes and maintains manufacturing equipment. He has worked in the plants of nearly every major company you can name, fixing their stuff or installing new stuff. It's a whole world I did not know about until he started. I'm just forwarding some stories he tells. Not sure why you think you know more than all the people involved.


Interesting -- I'll have to read The Goal! I've only read the reference material around ToC, so this sounds additive :)


This sounds intriguing. Of note for anyone with an Audible membership: The Goal is in the free library.


It's also included in Spotify Premium for free.


All cars should be taxed on a formula of miles and weight. I own an EV, and I'm happy to pay my share, especially for the increased weight due to the battery. I'd love to see taxes that incentivize smaller, lighter cars in general.


And even if the power is green, less use is always better. Less tire wear and, in general, less energy used, which means it can be used for other things, or less needs to be produced in the first place.


Yes. Card counters (advantage players at blackjack) studied shuffle tracking. I haven't seen much good material published on it, but I played on a team with a math whiz (physicist) who ran the numbers. For the casino we were playing at the time, the last 20 or so cards would be distributed over the bottom half of a six-deck shoe. A strategic cut would bring those three decks to the front, and you could play with a slight advantage.

Suppose you're at a full table and 12 of the last 20 cards are aces or tens (rare but possible). These get shuffled and cut into the first three decks, giving you a true count of four (12/3) for the first half of the shoe, which is significant.

We never really put it into practice, though, since: 1) you have to track the count for the last 20 cards in addition to the regular count, 2) shuffles change, 3) dealers are inconsistent, 4) casinos use different shuffles, and 5) the typical advantage is likely to be much smaller.

My knowledge on this is at least 20 years out of date, though, so who knows?
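A loose sketch of the slug-tracking arithmetic in that example, in Python. This glosses over real counting-system details and just mirrors the hypothetical numbers above:

```python
# A "slug" of 20 tracked cards, 12 of which are tens or aces.
slug_size = 20
high_cards_in_slug = 12

# Baseline: 20 of the 52 cards in a deck are tens or aces,
# so a fair 20-card slug would hold about 7.7 of them;
# 12 makes this slug unusually rich.
expected_high = slug_size * 20 / 52

# A strategic cut spreads the slug over the first ~3 decks of the shoe.
decks_slug_lands_in = 3
true_count = high_cards_in_slug / decks_slug_lands_in  # 4.0, per the example
```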


The bias I saw would likely be difficult to exploit without studying the mathematics a lot more. It seems to pass several statistical tests, but the problem I found was more about independence.

Let's call the two split piles L for left and R for right, and consider the resulting deck to be defined by a sequence of these letters indicating whether each card was taken from the left or right pile, so you can specify a shuffle uniquely like "LRRLLLLLLRRR...". This method of labeling shuffles has the advantage of being extremely easy to record. It also makes the bias I found apparent: there are way too many "LR" and "RL" pairs compared to "LL" and "RR", relative to what the "7 shuffles is enough" model suggests there should be.

For some reason, when I did all this analysis last year, I was mostly thinking about how to ensure my MTG decks are sufficiently shuffled; I didn't really think about the implications for card counting. However, normal riffle shuffling seems to have less bias (though still present) than the mash shuffling I looked at, and I think most casinos use machines for shuffling these days.
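One way to see what the "7 shuffles is enough" (Gilbert-Shannon-Reeds) model predicts for those pair frequencies is to simulate riffle shuffles and tally adjacent same-source (LL/RR) versus alternating (LR/RL) pairs. A rough sketch, assuming the standard GSR drop rule (drop from a pile with probability proportional to its size):

```python
import random

def gsr_riffle_labels(deck_size=52, rng=random):
    """Simulate one GSR riffle; return the L/R source label of each card."""
    # Cut point ~ Binomial(deck_size, 1/2), as in the GSR model.
    left = sum(rng.random() < 0.5 for _ in range(deck_size))
    right = deck_size - left
    labels = []
    while left + right > 0:
        # Drop a card from a pile with probability proportional to its size.
        if rng.random() < left / (left + right):
            labels.append("L")
            left -= 1
        else:
            labels.append("R")
            right -= 1
    return "".join(labels)

def pair_counts(labels):
    """Count (LL+RR, LR+RL) adjacent pairs in a label string."""
    same = sum(1 for a, b in zip(labels, labels[1:]) if a == b)
    return same, len(labels) - 1 - same

rng = random.Random(0)
same = diff = 0
for _ in range(2000):
    s, d = pair_counts(gsr_riffle_labels(rng=rng))
    same += s
    diff += d
print(same, diff)  # GSR baseline: roughly equal counts
```

Recording your own shuffles as L/R strings and comparing their pair counts against this baseline shows the kind of alternation excess described above.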

