Hacker News | Enginerrrd's comments

> it simply does not require much intelligence (relatively speaking) to build something that points a gun at something and pulls a trigger?

I could not disagree more. A big part of that is also knowing when NOT to pull the trigger. And it’s much harder than you’d think. If you think full self driving is a difficult task for computers, battlefield operations are an order of magnitude more complex, at least.


We have fully autonomous weapons, and have had them for over a century. We call them "landmines".

I expect autonomous weapons of the near future to look somewhat similar to that. They get deployed to an area, attack anything that looks remotely like a target there for a given time, then stand down and return to base. That's it.

The job of the autonomous weapon platform isn't telling friend from foe - it's disposing of every target within a geofence when ordered to do so.


Well, I assume that they are at least not supposed to attack their autonomous "comrades". Masquerading as such will be one obvious tactic, no? You could argue that they would use e2e-encrypted messages as friend-or-foe designation, but I would imagine a contested area would be blanketed with jammers, leaving only other options (light? But smokescreens. Audio? Also easily jammed). So this isn't as easy as most people think.

Edit: No, I don't think a purely defensive stance like landmines is sufficient, or what the people in command have in mind.

We have landmines today. Why spend much more making marginally better, highly intelligent ones with LLMs?


Also, a longer quote from Douglas Adams might be appropriate here (also appropriate to agentic vibe coding ...)

Click, hum.

The huge grey Grebulon reconnaissance ship moved silently through the black void. It was travelling at fabulous, breathtaking speed, yet appeared, against the glimmering background of a billion distant stars to be moving not at all. It was just one dark speck frozen against an infinite granularity of brilliant night. On board the ship, everything was as it had been for millennia, deeply dark and silent.

Click, hum.

At least, almost everything.

Click, click, hum.

Click, hum, click, hum, click, hum.

Click, click, click, click, click, hum.

Hmmm.

A low level supervising program woke up a slightly higher level supervising program deep in the ship's semi-somnolent cyberbrain and reported to it that whenever it went click all it got was a hum.

The higher level supervising program asked it what it was supposed to get, and the low level supervising program said that it couldn't remember exactly, but thought it was probably more of a sort of distant satisfied sigh, wasn't it? It didn't know what this hum was. Click, hum, click, hum. That was all it was getting. The higher level supervising program considered this and didn't like it. It asked the low level supervising program what exactly it was supervising and the low level supervising program said it couldn't remember that either, just that it was something that was meant to go click, sigh every ten years or so, which usually happened without fail. It had tried to consult its error look-up table but couldn't find it, which was why it had alerted the higher level supervising program to the problem.

The higher level supervising program went to consult one of its own look-up tables to find out what the low level supervising program was meant to be supervising.

It couldn't find the look-up table.

Odd.

It looked again. All it got was an error message. It tried to look up the error message in its error message look-up table and couldn't find that either. It allowed a couple of nanoseconds to go by while it went through all this again. Then it woke up its sector function supervisor.

The sector function supervisor hit immediate problems. It called its supervising agent which hit problems too. Within a few millionths of a second virtual circuits that had lain dormant, some for years, some for centuries, were flaring into life throughout the ship. Something, somewhere, had gone terribly wrong, but none of the supervising programs could tell what it was. At every level, vital instructions were missing, and the instructions about what to do in the event of discovering that vital instructions were missing, were also missing. Small modules of software - agents - surged through the logical pathways, grouping, consulting, re-grouping. They quickly established that the ship's memory, all the way back to its central mission module, was in tatters. No amount of interrogation could determine what it was that had happened. Even the central mission module itself seemed to be damaged.

This made the whole problem very simple to deal with. Replace the central mission module. There was another one, a backup, an exact duplicate of the original. It had to be physically replaced because, for safety reasons, there was no link whatsoever between the original and its backup. Once the central mission module was replaced it could itself supervise the reconstruction of the rest of the system in every detail, and all would be well.

Robots were instructed to bring the backup central mission module from the shielded strong room, where they guarded it, to the ship's logic chamber for installation.

This involved the lengthy exchange of emergency codes and protocols as the robots interrogated the agents as to the authenticity of the instructions. At last the robots were satisfied that all procedures were correct. They unpacked the backup central mission module from its storage housing, carried it out of the storage chamber, fell out of the ship and went spinning off into the void.

This provided the first major clue as to what it was that was wrong.


And the arms industry has been pushing smart mines for decades, so that they can keep selling them despite the really bad long-term consequences (well beyond the end of hostilities) and the Ottawa Treaty ban. In the end, land mines keep killing people even though the mines are supposed to be sufficiently advanced not to target persons.

From a security perspective, the "return to base" part seems rather problematic. I doubt you'd want these things to be concentrated in a single place. And I expect that the long-term problems will be rather similar to mines, even if the electronics are non-operational after a while.


"Smart mines" specifically can be designed so that they're literally incapable of exploding once a deployment timer expires, or a fixed design time limit is reached.

It just makes the mines themselves more expensive - and landmines are very much a "cheap and cheerful" product.

For most autonomous weapons, the situation is even more favorable. Very few things can pack the power to sit for decades waiting for a chance to strike. Dumb landmines only get there by virtue of being powered by the enemy.


You don’t need Anthropic for this use case, so obviously this use case is not what the current fight is about.

You don't need Anthropic for any use case. They don't ship VLAs either - nothing from Anthropic's entire model lineup can run on a killer drone.

Which raises the question: why did the Pentagon try to pressure Anthropic at all?

On the principle of it? Political reasons? Or was the real concern "domestic warrantless surveillance"?


"Since the end of the Vietnam War in 1975, unexploded ordnance (UXO)—including landmines, cluster bombs, and artillery shells—has killed over 40,000 people and injured or maimed more than 60,000 others." - Google AI Overview "How many children were maimed by landmines after the vietnam war"

I guess by that definition, a bullet is also autonomous. It will strike anything in its path of flight, autonomously without further direction from the operator.

Bullets don't kill people, etc. etc.

If anything represents the logical conclusion of that tired fallacy, it'll be actually autonomous, "thinking" drones which make the targeting decisions and execution decisions on their own, not based on any direct, human-led orders, but derived from second-order effects of their neural net. At a certain point, it's not going to matter who launched the drones, or even who wrote the software that runs on the drones. If we're letting the drones decide things, it'll just be up to the drones, and I don't love our chances making our case to them.


Yes, but it doesn't have to be error-free. The friendly-fire rates in symmetrical hot wars are pretty high; it's considered a cost of going to war.

If autonomous weapons lead to a net battlefield advantage, I agree with the GP, they will be used. It is the endgame.


The big asterisk in what you're saying is that, like self-driving cars, it's hardest when you want to be the most precise and limit the downsides. In this paradigm, false positives and false negatives have a very big cost.

If you simply wanted to cause havoc and destruction with no regard for collateral damage then the problem space is much simpler, since you only need enough true positives to be effective at your mission.

The ability to code with AI has shown that it requires an even higher level of responsibility and discipline than before in order to get good results without out-of-control downsides. I think the ability to kill with AI would be the same way, but even more severe.


> A big part of that is also knowing when NOT to pull the trigger

"In a press conference, Musk promised that the Optimus Warbots would actually, definitely, for real, be fully autonomous in two years, in 2031. He also extended his condolences to the 56 service members killed during the training exercise"


I've not watched all of Robocop (too much gore for me), but I have seen the boardroom introduction of the ED-209.

That's how I imagine a Musk demo of this kind of thing would play out, if his team can't successfully manage upwards.


And the US learned the lesson the hard way in Iraq that in fact even human intelligence struggles with this. There were major problems throughout the war with individual soldiers not adhering to the published rules of engagement.

Yes, but the important bit is that autonomous drones can't be held accountable for not adhering to the published rules of engagement.

I keep getting timeouts so I'm unable to test this. However, I have a suggestion:

What's really needed IMO is a drop-in tool to increase the ranking of thoughtful comments and decrease comments that drive engagement by making people angry. You need your tool to score comments on a scale for THAT. Combine that with policy mandating its use on algorithmically ranked sites for an audience above a threshold size, and you have a tool to bring civility back to society. I don't think angry comments should be censored. I think they just should not be artificially amplified into everyone's feeds. While not perfect, there's a wonderful difference between Hacker News comments and Reddit comments, and a great deal of it stems from the culture of self-moderation here.

Amplifying people with nuanced takes on things would go a long way, honestly. As it stands, adversary countries are using this artificial anger amplification as a weapon, and it's thus far been devastatingly effective.


Ethical humans are pretty hard to come by if you put them under a microscope.

"Not beating women" doesn't require a microscope.

I agree, but when you're dealing with celebrities, people sometimes lie and exaggerate, and third parties sometimes extrapolate beyond any semblance of grounded facts. So most people subject to that level of scrutiny and fame are likely to have some allegations against them, whether true or not.

Hendrix’s girlfriend Kathy Etchingham claims he never abused her. Some third parties dispute her claims about her relationship.

His arrest record suggests at least some type of altercation with a previous girlfriend but it’s far from clear cut to me.

People are complex and reality is complex. I myself was subject to false accusations about abuse from a disgruntled ex girlfriend (who actually WAS in fact physically and mentally abusive to me and I have the scars to prove it).

But regardless, I have zero issues reflecting on a person’s accomplishments and talents even in the context of them being a horrible person. In fact, I find that part of the intrigue of really talented people. Reality and people are quite multi-dimensional. The only general rule I know is that nobody is perfect and holding up ANYone as some example of moral perfection is almost certainly wrong.


>it's sort of like running a JS crypto miner in the background on your website.

To be honest, I wish the web had standardized on that instead of ads.


To be honest, my working heuristic for over a decade now has been to assume that if someone openly admits to reading 4chan without any hesitation, caveats, or embarrassment, they can quite likely be lumped in with a general basket of deplorables.

Is it totally fair? No. Is it reasonably high probability? IMO, yes. Is there likely information value on 4chan that could be difficult to find elsewhere? Probably. Is it worth my time and aggravation to sort through it? No.


In case anyone was curious like me: the standard deviation of lifespan is ~12-15 years in developed countries.

So environmental effects, sleep, diet, lifestyle, etc. (i.e. modifiable factors) maybe account for half of that, so like 6-7.5 years of variance. Which… sounds about right to me.


Lifespan is not even half the story though; health span is much more important. Your life is completely different if you can ski or split your own wood at 80+ vs being barely able to use stairs at 50. Both might die at 90, but one "lived" 30 years more.


Yup.

I'm not really afraid of getting old, but I'm afraid of becoming decrepit.

My grandma has been decrepit for over 5 years now. She can't walk and has no bladder or bowel control, so she just sits on the couch and shits herself all day. She's not living, she's merely surviving. She was living with my mom for a while, but my mom decided she couldn't handle it anymore and put her in an assisted living facility.

If I get to the point where I couldn't cook my own meals and wipe my own ass, just put a bullet in me. I do not fear dying, but I do fear spending years of my life not being able to actually do anything.


My dad died at the end of last year, and was not too different from your grandma. For him the main problem was chronic pain from his failing body. Even fairly powerful opioids from a pain management doctor only helped a bit. Basically all he could do was sleep, eat meals, and sit in his chair in pain.

I feel similar to you, but I wonder if it's one of those things where age changes your perspective. Dad was in assisted living and had several stints in rehab/nursing home facilities, and in both there were quite a few people with what I'd call poor quality of life who were still holding on to life.


Something we youngsters (I'm 69) may not realize is that people in assisted living still have friends and frequently even sex lives while they are there. They read, play games, and watch movies, just like us. They might not be able to do all the things they could when they were younger, but their lives are not necessarily over.


I am looking forward to playing 3 decades of great computer games once I am too old to go out into the woods or do martial arts.

I love gaming, but I am still too young to do it properly.


Any idea what kind of games you'll want to play by then?

I suspect it won't be hair-trigger combat games in dark dungeons where every strike results in a blizzard of gems and stars flying around the screen while teenagers scream into the mic.

But if you like Sudoku and crosswords you'll probably be good. That's my jam anyway.


I've been playing Factorio and the base game is 100 hours easily, there are mods that ratchet it up to 500+. It's great brain exercise too, constantly refactoring, solving for bottlenecks, etc.


I would love to be that mentally spry in my old age. I'm not convinced I will be though.


Witcher (all of them), Baldur's Gate 3, Mass Effect, Assassin's Creed.

Probably GTA 6, if it's out by then.


Hopefully by then there will be VR versions of The Longest Journey, or sanctioned or unsanctioned AI-generated/slop adaptations of Trek/Wars/Who, etc.


This. I've bought a lot of games over the last 15 years that I haven't touched let alone finished. I hope to at least play them some day.


Of course, some truly do “live” there, and good for them.

And others just sit there waiting to die, unable to even feed themselves.

I saw plenty of examples of both when my grandmothers were in assisted living homes. Unfortunately my grandmothers both tended towards the latter case.


nice


I am close to what you describe about your dad, and I am 42. I have no idea what to do. I don't want to live this way. And I don't want to die, not really, although I am at peace with the idea. I can't find what is wrong with me, except for the fact that it is related to pain regulation mechanisms somehow. This has been going on for 10 years already.

The only thing that helps now are opioids in dosages nobody would prescribe. I was prescribed opioids at some point during these years, and I still don't know if this was a mistake by the doctor. Now I am in pain AND opioid-dependent. But I am not sure I would not have ended my life sooner if not for the temporary relief I had.

The government does not allow me to get a few years of better quality life in return for dying early from an overdose, etc. I am bitter about it, and often wish government officials had the pain I do. Maybe I did not do enough, or people close to me could have been more pressing in asking to do more earlier. That's a consequence of a culture where people don't get into other people's business. I sometimes hope it is not too late still, but everything is harder now, and I still don't have any good ideas or the willpower to execute them.


> there were quite a few people with what I'd call poor quality of life who were still holding on to life.

The next question would be “did they have any alternative”


It is probably more than half the story. Health span is strongly correlated to life span, although not completely. The median "health span gap" is about 10 years, and has widened by roughly one year over the past 20 years. However, this is probably just due to an aging population and not necessarily from any factors you can control fully.

I wouldn't be surprised if "health span" (although defining it is difficult) exactly mirrors the heritability pattern of mortality.


> The median "health span gap" is about 10 years

It depends on the definition. If you're even just 20 kg overweight, you're living a wildly different life than you'd have if you were fit; you're closing so many doors by default and making a bunch of things much harder than they should be. But you're still considered "healthy" here.


This is such an underappreciated fact. Lots of people think 20kgs overweight is normal, they'll call you skinny and tell you to eat more if you're a healthy weight. An adult man of average height should probably not weigh more than 80kg. It could be okay if you're very muscular but most likely you'd be better off losing a few kilos. And being extremely muscular to the point where your BMI says overweight isn't exactly good for your health either. Though probably better than just being fat.


My Dad (age 81) tore his rotator cuff splitting wood recently. It's slow to heal and he's in a lot of pain which (along with his Alzheimer's) is really getting him down.

Maybe even if you're still fit and strong in your 80s you should let someone else split your wood for you


I can't speak for him but the reason I want to live somewhere where I split wood at the end is so that I can expire either from want of heat when I become incapable of splitting, or so that the exertion causes me to keel over and expire in nature when it's time.


I hope I’m able to do this when the time comes

https://www.thisamericanlife.org/779/ends-of-the-earth

Basically assisted death


It's a remarkable tragedy how many people don't understand your point.

https://en.wikipedia.org/wiki/Disability-adjusted_life_year

Too many people think your life is a binary 'living or dead' when that's not the case at all. I didn't even understand it fully till I was hit by a car.


I'm sorry that happened.


Yeah, been working in IT since forever (sitting work all day), but started lifting recently and it already made remarkable improvements in my wellbeing. Should've started sooner of course, but I'm still well in time.


A lot of people think it's a niche exercise activity, and it shouldn't be — it's for all ages, including those in their 80s and 90s according to reports.


One of the most consistent health research findings I've heard in recent years is the benefits of weight training for older adults. Hopefully the message is being received.


It is one thing to receive the message but a much different thing to act on the message.

From going to the gym for decades now, I don't see older people acting on this at all. A big problem is the CNS takes so much longer to recover as you get older. Starting lifting at an older age is really an uphill battle. I don't know a single person who has ever started lifting over 45 and kept with it. I know a guy that lifts in his 80s but our first conversation about lifting was 35 years ago. I am part of the old crowd at the gym and everyone I know has lifted for decades.

The message really needs to be that you have to start lifting young so you still lift when you are old. You need to become so addicted to lifting that you will still be doing it when you're only lifting to get less weak and figuring out how to train around various injuries. Not going to the gym is inconceivable to me, but I just don't see how I could have started past 45. Even the difference between early-40s and late-40s lifting was night and day for me.


This plus stretching / yoga has been amazing as I'm entering my 40s. For a while I was just lifting and I had strong muscles, but they were short and tight. Not everyone has that problem, but just noting strong muscles are half the picture; being strong and flexible makes life feel effortless after years of being a desk jockey.


As many of the health nutters say, the goal is "live well, drop dead."


100%. Now that I'm getting older, I observe the even older people I know.

Some live a very painful and limited life. Others are 85+ and still go out to run, play soccer etc. Amazing to see.


Life span is easier to measure. You get the official birth dates table, you get the official death dates table, you just subtract the numbers and call it a day.


It is almost never reasonable to assume normality and make calculations like this. This is particularly the case when you are dealing with lifespan, which isn't normally-distributed even in the slightest. The actual ranges are likely smaller than you are stating here, and variance is just not a very practical or interpretable metric to use when dealing with such a skewed distribution.

We should be stating something like a probability density interval (i.e. what is the actual range / interval that 95% of age-related deaths occur within), and then re-framing how much genetic variation can explain within that range, or something like it. As it is presented in the headline / takeaway, the heritability estimate is almost impossible to translate into anything properly interpretable.

https://biology.stackexchange.com/questions/87850/why-isnt-l...


Since lifespan can't follow a power law distribution, I suspect the error in variance from assuming it IS normally distributed is far less than you're suggesting.

Like even if I'm off by a factor of 2, then only ~3ish years are explainable by environment/exercise/diet/etc. Then... OK... that's really not that bad of an error in this context. That also feels a little low to me. I'd have guessed around 5-8 years anyway based on my experience with healthcare and life.


> I suspect the error in variance from assuming it IS normally distributed is far less than you're suggesting. [...] Like even if I'm off by a factor of 2 [...]

You would be deeply mistaken. Robust statistics texts (e.g. Wilcox) are full of examples of distributions that have zero skew and are even nearly indistinguishable by eye from a Gaussian, but where the differences in variance and thus resulting differences in conclusions drawn are profound. Heck, a sample from a Cauchy distribution looks not too bad, but in fact the variance is not even defined (or effectively infinite, and, thus, meaningless).

And even if you have enough data that statistical issues are not a concern, the problem is that most summary metrics (like effect sizes, heritability, etc) are developed under the assumptions of near-normality AND minimal skew, so that the effect size can be interpreted as something about the overlap and or positioning of the bulks of the distributions. But when skew and long tails are involved, the bulk itself is what is messed up, making most such metrics largely uninterpretable.

I.e. it isn't just that variance is hard to measure accurately here, it is that, even if measured accurately, variance isn't actually a meaningful metric here.

The few metrics that do remain interpretable in such cases tend to be those like HPDI in Bayesian methods, which look at actual distribution shapes and try to quantify a bulk in a sensible location. Likewise, meaningful effect sizes for skewed and long-tailed data need to actually take into account distribution overlap in meaningful regions. Heritability does not do this, as it is an explained variance metric.
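The Cauchy point above is easy to demonstrate numerically. A minimal stdlib-only sketch (seeded for reproducibility; the exact numbers are incidental): the sample variance of a normal sample settles near the true value, while the sample variance of a Cauchy sample — whose variance is undefined — is dominated by a handful of enormous outliers no matter how large the sample.

```python
import math
import random

random.seed(0)

def cauchy_sample():
    # Standard Cauchy via the inverse CDF of a uniform draw
    return math.tan(math.pi * (random.random() - 0.5))

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n = 100_000
normal = [random.gauss(0, 1) for _ in range(n)]
cauchy = [cauchy_sample() for _ in range(n)]

print(sample_variance(normal))  # settles near the true value, 1
print(sample_variance(cauchy))  # blown up by a few huge tail draws
```

Re-running with larger n makes the normal estimate tighter and does nothing to stabilize the Cauchy one — which is the sense in which variance is a meaningless summary there.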


One note: the standard deviation of the remaining effects would be sqrt(1/2) as large, not 1/2 as large. So more like 8.5-10.5 years.
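A back-of-the-envelope check of that correction (Python, using the rough SD figures from upthread): if heritability explains half the variance, the remaining variance is half the total, so the remaining standard deviation scales by sqrt(1/2) ≈ 0.707, not 1/2.

```python
import math

total_sd = [12.0, 15.0]  # rough SD of lifespan in developed countries, per upthread
# Half the *variance* remaining means the SD shrinks by sqrt(1/2), not 1/2
residual_sd = [sd * math.sqrt(0.5) for sd in total_sd]
print([round(sd, 1) for sd in residual_sd])  # → [8.5, 10.6]
```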


This is a nice example/re-stating of what the heritability % "means" here.

I'm curious, with something like smoking/drinking, how you can be confident that you've untangled genetic predispositions to addiction or overconsumption from those "modifiable factors". I guess that's just captured within the 50% heritability? And if you could confidently untangle them, you might find heritability is higher than 50%?


Heritability is a pretty funky concept because it's contextual to a certain point in time, environment, and population, effectively.

An example I like is that if you measured the heritability of depression in 2015, and then you measured the heritability of depression in 2021, you would likely see changes due to environmental effects (namely, there's the pandemic/lockdowns and this could conceivably cause more people to experience depressive symptoms). Let's assume we make those measurements and the rate of depression did increase, and we could tie it causally to the pandemic or related events.

In that scenario, the heritability of depression would have decreased. I don't think anyone would argue there were massive genetic changes in that 6 year time period on a population scale, but the environment changed in a way that affected the population as a whole, so the proportion of the effect on the trait which is genetically explained decreased.

For something like lifespan in the above example, you can imagine that in a period of wartime, famine, or widespread disease the heritability would also decrease in many scenarios (if random chance is ending a lot of lives early, how long the tail of lifespan is influenced genetically is much less important).

Given that note, it's generally tricky to talk about whether heritability increases or decreases, but with more accurate estimates of how genetic predispositions form you could see the heritability of certain traits increase with the environment held stable, as there's certainly ones that may be underestimated or genetic factors that aren't currently accounted for in many traits.

*edit: I realized I never mentioned the other thing I wanted to mention writing this! since you mentioned what the percent heritability means here, I think the best way to think of it is just "the proportion of phenotypic variation for this trait in a measured population which is explained by genetic variation." So it's dependent on the amount of variation in several aspects (environmental, genetic, phenotypic).
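The depression example can be sketched with toy numbers (entirely made up, just to show the ratio at work): hold genetic variance fixed, add environmental variance from a shared shock, and the heritability estimate falls even though nothing genetic changed.

```python
# Toy illustration: heritability as the share of phenotypic
# variance explained by genetic variance, h2 = Vg / (Vg + Ve)
def heritability(vg, ve):
    return vg / (vg + ve)

vg = 1.0       # genetic variance, held fixed
ve_2015 = 1.0  # hypothetical environmental variance pre-pandemic
ve_2021 = 3.0  # a population-wide shock adds environmental variance

print(heritability(vg, ve_2015))  # 0.5
print(heritability(vg, ve_2021))  # 0.25 — "less heritable", same genes
```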


Some epigenetic effects are semi-heritable too, eg maternal exposure can be transmitted. That's in addition to environmental effects like you mentioned. Two otherwise identical cohorts can inherit the same genetic predisposition for depression where one manifests and another does not entirely due to their circumstances.

Evolution is just super super messy.


Lifespan isn't as important as healthy lifespan. Lifestyle can mean the difference between being able to complete an Ironman triathlon at age 80 vs being bedbound.


> the standard deviation of lifespan is ~12-15 years in developed countries.

That seems rather higher than I would have expected, at least if one corrects for preventable accidents and other such things (that I would expect to shift the results away from a normal distribution).


Lifespan is a quite skewed distribution, so the SD looks large because it is in fact a poor summary of the bulk of the distribution. The actual part we care about for age-related mortality is narrower than such an SD would imply if we had a normal distribution (simple image example: https://biology.stackexchange.com/a/87851).


> at least if one corrects for preventable accidents and other such things

You can't really correct for these. Yes there are genuine accidents that will kill you under any circumstances, but for a lot of things both your odds of having an accident and the odds of surviving it are strongly linked to age. As a simple example, despite driving significantly less, the elderly get into more car accidents and suffer worse injuries in those accidents than people earlier in life. Only the age range of 15-24 has higher car accident fatality rates.

There is no such thing as death by old age. At most there are deaths in the elderly that don't get attributed to a specific cause (typically because of so many different things going on at once and no desire to cut up grandma after the fact to see which straw broke her back) which we tend to refer to as "died of old age" but it's not a recognized medical cause of death. People die of diseases, injuries, and various other things, many of which are strongly influenced by age but also heavily influenced by other factors.

You can set a cutoff point and say these things don't count as age related deaths whereas these others do. As long as you're consistent with these choices, you can learn something useful. But a wide enough net that is widely agreed to cover what we think of as aging is going to include a lot of other maladies, whereas a narrower selection criteria is probably going to yield wildly different results from one analysis to the next.


There is death by old age. You’re just not supposed to write it on the statement because the age is there already.


Environmental effects are not necessarily modifiable. It includes randomness, background radiation, unknown risk factors, anything which is not genetic.


I have one on my watch. It's a Citizen with a circular slide rule / E6B flight computer. I need my reading glasses to use it, but it's fun.

It can reliably get me 2 sig figs, and a decent guess at a third. But… if I think about it for a minute, I can usually get that in my head anyway. Being able to set up a ratio is great, though, for unit conversions and things.

It's also really good for answering that question when driving where you're like: OK, if I go 10 mph faster, how much sooner will I get there? That's otherwise hard to do mentally.
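That driving question is just a difference of two ratios, which is exactly the kind of thing a slide rule (or a one-liner) makes easy. A toy sketch with hypothetical numbers:

```python
# Time saved by driving faster: d/v - d/(v + bump), converted to minutes
def minutes_saved(distance_miles, speed_mph, bump_mph=10):
    hours = distance_miles / speed_mph - distance_miles / (speed_mph + bump_mph)
    return hours * 60

# e.g. 120 miles: 60 mph takes 2.0 h, 70 mph takes ~1.71 h
print(round(minutes_saved(120, 60), 1))  # → 17.1
```

Note the savings shrink as the base speed rises — the same trip at 70 vs 80 mph saves only about 13 minutes.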

Most of the benefit of using a slide rule in my experience comes not from using it, but from thinking LIKE you’re going to use a slide rule. You learn to freely use scientific notation with ease, and mental estimation to get the order of magnitude right.

And just my 2 cents, but circular slide rules are where it’s at.


> " You learn to freely use scientific notation with ease, and mental estimation to get the order of magnitude right."

This is what I was trying to get at in my other comment [1]!

[1] https://news.ycombinator.com/item?id=46872141


I have one of these Citizens, it was my first "nice" watch.

They're really cool


I have a Rotary Henley Chronograph and love using it for baking ratios.


>... has there ever been verified science that shows exercise is unhealthy?

Yes, the extremes of endurance have certainly been shown to have a negative effect on heart health, and possibly also colon health, but the amount of exercise required to get into the danger zone here is so high almost no one that isn't a competitive athlete would achieve it. (Although, amateur marathon runners might.)


I think that the engineering is very challenging and the market for this is nonexistent.

First, anyone truly concerned about this for actual use cases just isn't going to bring their phone with them at sensitive times. Especially after the infamous chip bag Italian metadata incident.

Second, it's conspicuous and kinda suspicious, so its use is limited primarily to virtue-signaling privacy advocates or crazy people, and the latter aren't usually big spenders.

Third: the engineering sounds challenging. All that metal in an undeployed fashion is going to reflect and interfere with reception (it isn't an Iron Man suit; it has to get packed somewhere). That may also interfere with RF safety approvals? Finally, avoiding RF leakage is surprisingly difficult in practice.


> chip bag Italian meta data incident

I’m not familiar with this one and search didn’t get me anything that seemed relevant. Got a link to something describing the incident?



What is the chip bag Italian meta data incident?


What is the infamous chip bag Italian meta data incident? I have had a quick search but nothing obvious jumped out. I'm very curious!


One thing I noticed is that the water matters.

I noted that when visiting my sister down in the Bay Area, I had to steep for quite a bit less time before the bitter tannins would start creeping in. Like 1.5-2 mins tops for cheap PG Tips. But that same tea up north could sit for 3-4 minutes before the bitter tannins would creep in.

It was a marked difference so there are obviously some confounding factors. I suspect the water chemistry matters a fair amount.


Other solutes in the water, like calcium chloride, can indeed greatly affect the solubility curves of the flavor compounds.

You can buy premixed packages of salts to dissolve into distilled water to precisely reproduce the composition of the well waters of some famous breweries, even though the result mostly still tastes like water.

