
Finasteride is scary bro, too many firsthand accounts of issues for me.

Say more please.

took 1 pill, pp fell off

tl;dr While it can have undesirable side effects, those tend to show up long-term. But fear of those side effects makes it a powerful nocebo for some people, to the point where they report side effects almost instantly (nausea, anxiety, sexual dysfunction), despite that being pharmacologically improbable.


It's mostly nocebo: they read about it online for weeks before trying it, and by then they're so psyched out that they watch for every little thing, effectively making it a self-fulfilling prophecy.

I started dutasteride when I was 20 and have been on it for many years. It really does stop hair loss; I can tell because mine was very fast and aggressive before I started.


So if someone's semen consistency changed to completely watery after two weeks, would it be their eyes producing the nocebo effect? And how is the doctor who went on it for a few months, lost erections, then regained them after quitting, explained by a nocebo?

Again, I've heard at least 5 of these first-hand, which is kind of wild for a drug I've probably only talked to 20 people about in total.

We're terrible at detecting changes over long periods of time. If anything, I'd think long-term users would be much more prone to bias.


Safari is better than Chrome in many ways, arguably most.


Sure, it does have some benefits, like lower energy consumption, and I hear good things about JavaScriptCore (Safari's JS engine). That said, many features are missing, in part because they would encroach on iOS apps' territory.


The missing-features complaint was true years ago, but Apple significantly increased its investment in Safari about three years ago and it has really gained ground. If you subtract all the Chrome-invented features, they aren't too far off.


> so many of the features are missing, and one part is it encroaching on the iOS apps territory.

Be careful when listing those features. Many of those "encroaching" features are Chrome-only non-standards.


And some of them, like WebGPU, are Khronos IP that Apple has no reason to object to except on an ideological and profit-maximizing basis. I wonder why Apple would deliberately avoid an API that might obsolete the requirement for games to use the App Store? Do you have any ideas?


> like WebGPU, are Khronos IP that Apple has no reason to object to except on

You do know that Apple is basically the original author of WebGPU, right (together with Mozilla)?

> I wonder why Apple would deliberately avoid an API that might obsolete the requirement for games to use the App Store

And your fantasy of Apple deliberately avoiding it is based on what exactly?

https://webkit.org/blog/9528/webgpu-and-wsl-in-safari/

https://webkit.org/blog/14879/webgpu-now-available-for-testi...


> You do know that Apple is basically the original author of WebGPU, right (together with Mozilla)?

Apple is the original author of a lot of tech they end up abandoning. Certainly a lot of Khronos IP, paging through their history.

> And your fantasy of Apple deliberately avoiding it is based on what exactly?

Based on a four-year (!!!) porting time from macOS Safari to iOS Safari. Basically textbook foot-dragging there.


> Apple is the original author of a lot of tech they end up abandoning.

Doubtful

> Certainly a lot of Khronos IP, paging through their history.

Everyone abandons Khronos IP, or doesn't really support it, paging through history in general, because Khronos IP ends up a designed-by-committee crapfest. Meanwhile, WebGPU is not and has never been Khronos IP. It's developed within a W3C working group: https://www.w3.org/2020/gpu/

> Based on a 4 year (!!!) porting time from MacOS Safari to iOS Safari.

Based on a 4-year porting of what from macOS Safari to iOS Safari?

- The WebGPU spec is still in draft status, so things can change. It's literally in stage 2 of 5 of spec development.

- Neither Safari nor Firefox has enabled WebGPU by default yet. The fact that Chrome rushed and enabled it by default does not make the spec or the standard finished and ready to be enabled everywhere.

- WebGPU can be enabled with a toggle in advanced settings in Safari on iOS (as is the case with most new features in all browsers).


A fact that appears to be lost on the majority of users who have a say in what browser they use: https://gs.statcounter.com/browser-market-share/desktop/worl...


The problem is that when Chrome came out it was heavily marketed and targeted toward developers. Developers took it up and then built websites in and for Chrome. The end result is that many websites work better in Chrome than in Firefox or Safari. It's a vicious cycle of continuing dependency.

I'm doing my part to break the cycle: I support the underdogs by using Safari as my daily driver and developing primarily for Safari and Firefox.


> It's a vicious cycle of continuing dependency.

Or a vicious cycle of continued development. There are definitely things that Chrome does that nobody else should copy, but there's also a lot of stuff like WebGPU and WebRTC that should be standard. Firefox doesn't drag their feet in the same way Apple does, and they certainly don't resist standardization by trying to limit what a user can do on their device.

I have no real love for Google. ChromeOS sucks, Android is only tolerable when you de-Google it, and YouTube is perpetually regressing to a shittier state. But Chromium the browser is great, and it's the only browser I install on my Mac or Linux box when I get set up at work. I want to love Firefox like I used to, but Mozilla as a business is just about as functionally inept as Google or Apple at this point. I'm done trying to be a browser ideologue, I'm embracing post-browser realism here.


The data doesn’t show they drag their feet though. If anything FF is behind.

I genuinely enjoy Safari as a user more than Chrome. As a developer, the dev tools suck. But as a user, the UI is far more minimal and nice. Every single action feels 2-3x faster: opening and closing, opening or moving tabs, etc. Battery lasts significantly longer. And I never really run into anything that doesn't work, ever. Plus I never worry about the latest hidden checkbox I have to find to keep my data from being soaked up. Hide My Email is also dope.

The more responsive and thoughtful UI and battery/performance alone would have sold me. But the privacy and modern features it’s gotten over the last years make it better imo.

Just want to give a perspective, as I feel people should update their priors from the 2021 “Safari is the new IE” era.


You said the data doesn't show they drag their feet, and then proceeded to present an anecdote about your personal preferences and use cases, treating thoughtful UI and battery life as the features rather than web standards, the quality of their implementation, or the lack of third-party browser engines on iOS: https://news.ycombinator.com/item?id=31902707

Sure, they have recently implemented some features like IndexedDB, but the data does indeed show that they dragged their feet!


They did years ago; as of late they're in fact moving faster than others. I think my point stands: they're no longer clearly behind in features, and are probably near tied if you subtract the Chrome-only stuff and take into account that there's a variety of things Safari has that others don't now.


> The data doesn’t show they drag their feet though. If anything FF is behind.

Literally no? We must not be on the same page: both of the technologies I name-dropped were Chrome- and Firefox-exclusive for half a decade. And they're certainly not the only features Mozilla and Google agree upon; Apple deliberately gimps features that benefit PWAs so that browsers artificially cannot compete with their native apps.

> Just want to give a perspective as I feel people should update priors from 2021 “Safari is the new IE”

I'm sorry; people will keep calling Safari "the new IE" for as long as Apple carbon-copies Microsoft's Internet Explorer strategy from the 90s. You can run from it and insist it's not true, but Apple will cling to their ecosystem control whether it's rational or not. This is why we have to hit them with antitrust, to spare the market more of their irrational, self-serving harms.


I mean if you do the analysis on features supported on CanIUse, Safari is not really behind in any meaningful way. There are some missing features relative to Chrome, but they actually support a number of things other browsers don't. It's not clear-cut like it was years ago. Sorry if that's inconvenient.


> WebRTC that should be standard

What is WebRTC good for? I've never understood. It probably has some use for in-browser video chats, but other than that?

I'm asking because at some point the Chrome you are praising prevented my Mac from sleeping for half a year or more because "webrtc has active peer connections". I had no conferences open in the browser, just (I thought) regular web pages.

So what can you do with WebRTC behind the user's back then, and why is it moral to do it?


Yep


On a more general note, fines need to be something like 5-10x the benefit, not 1-2x. Of course, the government has limited resources, limited ability to win trials, etc., so fines need to go far beyond mere compensation.


> fines need to be something like 5-10x of the benefit, not 1-2x

There is zero chance Amazon was saving more than $1mm at these two warehouses from hiding quotas since the law went into effect. We’re already at 5x+ (most likely 10x+) benefit before considering damages.


My back-of-the-napkin math: Amazon warehouses run 24/7, so a million dollars divided by 365 is about $2,740 per day, which works out to about $114 per hour. Amazon warehouses employ about 1,500 people (and the SoCal warehouses probably employ more).

I would be absolutely gobsmacked if they saved that little.
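
To spell the back-of-the-envelope arithmetic out (a rough sketch; the $1mm cap and the ~1,500 headcount are just the ballpark numbers from the comments above):

    // Rough back-of-the-envelope check, using the ballpark figures above.
    const annualSavings = 1_000_000;     // claimed upper bound on savings, $/year
    const hoursPerYear = 365 * 24;       // the warehouse runs 24/7
    const workers = 1_500;               // approximate headcount per warehouse

    const perFacilityHour = annualSavings / hoursPerYear; // ~$114 per facility-hour
    const perWorkerYear = annualSavings / workers;        // ~$667 per worker per year

    console.log(perFacilityHour.toFixed(0), perWorkerYear.toFixed(0)); // "114 667"

Roughly $667 per worker per year, i.e. a couple of dollars per worker per workday, which is why saving only that much from hiding quotas seems implausibly low.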


I'd be interested to see what labor costs are at these facilities. I could easily buy that, if they rightfully disclosed the quotas they put people under, they would need to pay significantly more in order to attract and retain workers. Maybe 10% - 20% more, including taxes and benefits?


It greatly reduces my autoimmune issues, and a friend reported a dramatic reduction in allergy issues...


Are you willing to share more about either?

That's fascinating.


I have Ehlers-Danlos and another condition, with quite a few ongoing effects; it's actually why I experiment with drugs in general. I wasn't much overweight, but I gave it a shot since it seemed like a fascinating drug, and the results were shocking to me.

Perennial light sleeper, and I couldn't drink caffeine either, due to its extremely long half-life in me and my sleep issues. On it, I fall asleep at night like a "normal" person for the first time. Able to drink coffee too. I chalked that up to the fact that it does have a bit of a drowsiness effect, but... then I noticed I was just having a bit less back pain, and that the tingling in my feet and my muscle fasciculations were down.

I've been on and off it ever since, though I take lower doses and breaks since I don't need it for weight. But I was getting "attacks" of inflammation every ~6 months, and since starting it 1.5 years ago, not a single attack (knock on wood).

My thought there is: yes, fasting is great for a lot of things and maybe helped, but I can't help thinking it's some other effect. I've tried a lot of things, even other peptides that supposedly work, and had zero effect across hundreds of supplements. This stuff had huge effects.

The friend's case I can't speak to much; they simply told me they'd started it, and a couple of months later, without my prompting, mentioned the allergy thing.


Thanks for sharing.

I wonder how that works!

Also, curious what type of allergy the friend has, and what type of relief.


I once used RxJS for a dev tool platform. We had a use case where we had to take a tree-like structure of user data and recursively resolve all the async nodes.

I took it to the RxJS Discord after a couple of days of pounding my head on it. One of the creators was there and was super helpful.

We went back and forth on the problem at least 6 times each, with new attempts. He tried quite a few variations, but none ever worked.
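
For context, the shape of the problem was roughly this (a minimal Promise-based sketch; the AsyncNode type and loadChildren resolver are made up for illustration, not our actual API):

    // Hypothetical node shape: each node may need an async call to load its children.
    interface AsyncNode {
      id: string;
      value: unknown;
      loadChildren?: () => Promise<AsyncNode[]>;
    }

    interface ResolvedNode {
      id: string;
      value: unknown;
      children: ResolvedNode[];
    }

    // Depth-first: resolve this node's children, then recurse into each of them.
    async function resolveTree(node: AsyncNode): Promise<ResolvedNode> {
      const children = node.loadChildren ? await node.loadChildren() : [];
      return {
        id: node.id,
        value: node.value,
        children: await Promise.all(children.map(resolveTree)),
      };
    }

Written with plain Promises it's straightforward; expressing the same recursion as an RxJS stream was where we kept getting stuck.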


This is an anti-human ideology as bad as the worst of communism.

Humanity only survives as much as it preserves human dignity, let's say. We've designed society to give rewards to people who produce things of value.

These companies take that value and give nothing back to the creators.

Supporting this will lead to disaster for all but the few, and ultimately for the few themselves.

Paying for your (copyrighted) inputs is harmony.


These models literally need ALL the data. The work it would take just to account for all the copyrights, let alone negotiate with and compensate the creators, would be infeasible.

I think it’s likely that the justice system will deem model training as fair use, provided that the models are not designed to exactly reproduce the training data as output.

I think you hit on an important point though: these models are a giant transfer of wealth from creators to consumers / users. Now anyone can acquire artist-grade art for any purpose, basically for free — that’s a huge boon for the consumer / user.

People all around the world are going to be enriched by these models. Anyone in the world will be able to have access to a tutor in their language who can teach them anything. Again, that is only possible because the models eat ALL the data.

Another important point: original artwork has been made almost completely obsolete by this technology. The deed is done, because even if you push it out 70 years, eventually all of the artwork that these models have been trained on will be public domain. So, 70 years from now (or whatever it is) the cat will be out of the bag AND free of copyright obligations, so 2-3 generations from now it will be impossible to make a living selling artwork. It’s done.

When something becomes obsolete, it’s a dead man walking. It will not survive, even if it may take a while for people to catch up. Like when the vacuum tube computer was invented, that was it for relay computers. Done. And when the transistor was invented, that was it for vacuum tube computers.

It’s just a matter of time before all of today’s data is public domain and the models just do what they do.

…but people still build relay computers for fun:

https://youtu.be/JZyFSrNyhy8?si=8MRNznoNqmAChAqr

So people will still produce artwork.


> The amount of work it would take just to account for all the copyrights, let alone negotiate and compensate the creators, would be infeasible.

Your argument is the same as Facebook saying “we can’t provide this service without invading your privacy” or another company saying “we can’t make this product without using carcinogenic materials”.

Tough luck, then. You don’t have the right to shit on and harm everyone else just because you’re a greedy asshole who wants all the money and is unwilling to come up with solutions to problems caused by your business model.


This is bigger than the greed of any group of people. This is a technological sea change that is going to displace and make obsolete certain kinds of work no matter where the money goes. Even if open models win, and no single entity or group makes a large pile of money, the follow-on effects from wide access to models trained on all public data will STILL unfold.

People who try to prevent models from training on all available data will simply lose to people who don’t, and eventually the maximally-trained models will proliferate. There’s no stopping it.

Assume a world where models proliferate that are trained on all publicly-accessible data. Whatever those models can do for free, humans will have a hard time charging money for.

That’s the sea change. Whoever happens to make money through that sea change is a sub-plot of the sea change, not the cause of it.

If you want to make money in this new environment, you basically have to produce or do things that models cannot. That’s the sink or swim line.

If most people start drowning then governments will be forced to tax whoever isn’t drowning and implement UBI.


Maybe the machines will just pay for more of our leisure time, as they were originally designed to do? It may just be as simple as that?

Remember the 4-hour work week? Maybe we are almost there?

Let’s face it, most people in a developed country have more free time than they know what to do with, mostly spent on HN and social media, of course :)


Check out the short story Manna by Marshall Brain for some speculative fiction on exactly these subjects.

https://marshallbrain.com/manna1


>Tough luck, then. You don’t have the right to shit on and harm everyone else just because you’re a greedy asshole who wants all the money

It used to be that property rights extended all the way to the sky. This understanding was updated with the advent of the airplane. Would a world where airlines need to negotiate with every land-owner their planes fly above be better than ours? Would commercial flight even be possible in such a world? Also, who is greediest in this scenario, the airline hoping to make a profit, or the land-owners hoping to make a profit?


Your comment seems unfair to me. We can say the exact same thing for the artist / IP creator:

Tough luck, then. You don’t have the right to shit on and harm everyone else just because you’re a greedy asshole who wants all the money and is unwilling to come up with solutions to problems caused by your business model.

Once the IP is on the internet, you can't complain about a human or a machine learning from it. You made your IP available on the internet. Now, you can't stop humanity benefiting from it.


Talk about victim blaming. That’s not how intellectual property or copyright work. You’re conveniently ignoring all the paywalled and pirated content OpenAI trained on.

https://www.legaldive.com/news/Chabon-OpenAI-class-action-co...

Those authors didn’t “make their IP available on the internet”, did they?


First, “Plaintiffs ACCUSE the generative AI company.” Let’s not assume OpenAI is guilty just yet. Second, assuming OpenAI didn’t access the books illegally, my point still remains. If you write a book, can you really complain about a human (or in my humble opinion, a machine) learning from it?


> So people will still produce artwork.

There's zero doubt that people will still create art. Almost no one will be paid to do it though (relative to our current situation where there are already far more unpaid artists than paid ones). We'll lose an immeasurable amount of amazing new art that "would have been" as a result, and in its place we'll get increasingly bland/derivative AI generated content.

Much of the art humans create entirely for free, in whatever spare time they can manage after their regular "for pay" work, will become training data for future AI. But it will be extremely hard for humans to find, as it will be drowned out by the endless stream of AI-generated art that will also be the bulk of what AI finds and learns from.


AI will just be another tool that artists will use.

However the issue is that it will be much harder to make a career in the digital world from an artistic gift and personal style: one's style will not be unique for long as AI will quickly copy it and so make the original much less valuable.


AI will certainly be a tool that artists use, but non-artists will use it too so very few will ever have the need to pay an artist for their work. The only work artists are likely to get will be cleaning up AI output, and I doubt they'll find that to be very fulfilling or that it pays them well enough to make a living.

When it's harder to make a career in the digital world (where most of the art is), it's more likely that many artists will never get the opportunity to fully develop their artistic gifts and personal style at all.

If artists are lucky then maybe in a few generations with fewer new creative works being created, AI almost entirely training on AI generated art will mean that the output will only get more generic and simplistic over time. Perhaps some people will eventually pay humans again for art that's better quality and different.


The prevalence of these lines of thought makes me wonder if we'd see a similar backlash against Star Trek-style food replicators: "Free food machines are being used by greedy corporations to put artisanal chefs out of business. We must outlaw the free food machines."


>one's style will not be unique for long as AI will quickly copy it and so make the original much less valuable

Note that the fashion industry doesn't have copyrights, and runway fashions get copied very quickly. Fashion designers still exist in such a world.


There are alternative systems. One would be artists making a living in other ways, such as live performances, meet-and-greets, book signings, etc.

We could also do patronage. That's how musicians used to be funded. Even today we have grants from public/private institutions.

We could also drift back into "owning the physical media." We see this somewhat with the resurgence of records.

NFTs would have been another way, but, at least initially, they failed to become generally accepted into the popular consciousness.


I'll gladly put money on music that a human has poured blood, sweat, tears, and emotion into. Streaming has already killed profits from album sales, so live gigs are where the money is, and I don't see how AI could replace that.


Lol, you really want content creators to aid AI in replacing them without any compensation? Would you also willingly train devs to do your job after you've been laid off, for free?

What nonsense. Just because doing the right thing is hard or inconvenient doesn't mean you get to ignore it. The only way I'd be OK with this is if literally the entire human population were equal shareholders. I suspect you wouldn't be OK with that little bit of communism.


There is no way on Earth that people playing by the existing rules of copyright law will be able to compete going forward.

You can bluster and scream and shout "Nonsense" all you want, but that's how it's going to be. Copyright is finished. When good models are illegal or unaffordable, only outlaws -- meaning hostile state-level actors with no allegiance to copyright law -- will have good models.

We might as well start thinking about how the new order is going to unfold, and how it can be shaped to improve all of our lives in the long run.


I think there’s no stopping this train. Whoever doesn’t train on all available data will simply not produce the models that people actually use, because there will be people out there who do train models on all available data. And as I said in another comment, after some number of decades all of the content that has been used to train current models will be in the public domain anyway. So it will only be a few generations before this whole discussion is moot and models are out there that can do everything today’s models can, unencumbered by any copyright issues.

Digital content creation has been made mostly obsolete by generative AI, except where consumers actively seek out human-made content because that’s their taste, or where there’s something humans can produce that models cannot. It’s just a matter of time before this all unfolds. So yes, anyone publishing digital media on the internet is contributing to the eventual collapse of people earning money to produce content that models can produce. It’s done. Even if copyright delays it by some decades, eventually all of today’s media will be public domain and THEN it will be done. There are zero odds of any other outcome.

To your last point, I think the best case scenario is open source/weight models win so nobody owns them.


> We've designed society to give rewards to people who produce things of value

Is that really what copyright does though? I would be all for some arrangement to reward valuable contributions, but the way copyright goes about allocating that reward is by removing the right of everyone but the copyright holder to use information or share a cultural artifact. Making it illegal to, say, incorporate a bar you found inspiring into a song you make and share, or to tell and distribute stories about some characters that you connected with, is profoundly anti-human.


Ah yes, my favorite was the early COVID numbers: some of the "smartest" people in the SF techie scene were on Facebook daily, thought-leadering about how 40% of people were about to die in the likely case.


Let's be honest, everyone was speculating. Nobody knew what the future would bring, not even you.


The difference is some people were talking a whole lot confidently, and some weren’t.


Genuine question: unless he was directly involved with the genocide, was he not doing the exact same thing the Allies were doing? It’s not a war crime to participate in a war for your country.

The US bombed German and Japanese civilians in massive numbers.


The atrocity is that von Braun's V-2 factory was an extermination-through-labor camp. About 12,000 people were forcibly worked and tortured to death to produce those weapons, numerically more deaths than the V-2 itself caused as a weapon in Britain. Von Braun was aware of this, was complicit in it, and oversaw parts of it as a high-ranking SS officer: his Wikipedia entry quotes a survivor testifying that "von Braun went to the concentration camp to pick slave laborers".


Assuming this is true (I have no reason to believe it's not) it's clearly a damning indictment of von Braun himself. And it calls into question his accounts, so it seems like odds are he was "actually" a Nazi as opposed to someone affiliating with the party to avoid punishment or execution.

I still don't see what that has to do with the original comment that mentioned him. If we're talking about him in depth, absolutely mention it and dig into it. I guess the thing I'm having trouble reconciling in my head is the need to, upon a passing reference to someone orthogonal to the main point, say it's "worth including" that they're a controversial figure. The controversy seems irrelevant to me. It seems to border on virtue signaling, this need to say "oh by the way, Nazis are bad" when that (objective fact) has nothing to do with anything.

I see your other comment and I get the point you're trying to make but I don't think it has anything to do with speaking respectfully or with any sort of courtesy about a Nazi, just about trying to make a point.


I can see your point, but I think it's worthwhile to understand the full context, even if it's irrelevant on the surface.

I wasn't familiar with von Braun before reading this thread, and I appreciate the extra info. Complex figure. Maybe even a really bad guy. But, also interesting that his work was useful in getting us to the moon.

Maybe we can all appreciate that dichotomy.

Even more interesting to note: without your initial pushback I wouldn't have read more detail about him, so I owe it to your resistance that I actually explored this facet of the man's alleged history and read a bit about it.


> The US bombed German civilians and Japanese civilians in mass numbers.

Yes, agreed. I'm not arguing that the standards of Nuremberg were actually the right ones.


A significant chunk of the adult male population of Nazi Germany was involved with the genocide. Hitler made sure there was blood on as many German hands as possible. In retrospect the Allies were extremely lenient[0] on Germany and Japan and they probably could have punished them way, way worse.

As for Allied war crimes, many of those were only criminalized after-the-fact. For example, the justification for nuking Nagasaki was "well, there's a factory nearby, so that's a valid military target".

[0] For example, "fiduciary duty to shareholders" was a valid excuse that saved several businessmen at Nuremberg, despite them running forced labor camps that were deadlier than Auschwitz.


The justification at the time for nuking Hiroshima and Nagasaki was field-testing two different novel bomb designs. The urgency came from Germany's surrender (removing it as a potential test bed) and from the rapid depletion of pristine targets in Japan.

The H&N bombings followed close on the heels of bombing 72 other cities (including Tokyo) as part of an ongoing HE + incendiary campaign with list of targets.

These specific targets were selected for atomic testing because they had not been bombed before and served as "clean" test beds for before-and-after comparisons, in addition to having some containing topography.

What's important to remember is that they were selected from a long list of targets that were all scheduled to be bombed; the fact that they were low priority from a military standpoint is what had "saved" them from already having been bombed.

When Hiroshima was bombed the only prior atomic test at Trinity was on a tower with a lot of external controls .. it wasn't even certain at the time that this would work as a bomb let alone "end the war".

The military compulsion to battlefield test a weapon that had consumed more R&D budget than any ever before in history was intense, and the WW's were rapidly closing out with Germany defeated and Russia closing in on Japan.

After the bombings, immediately after, came a lot of retrofitting justification, more so with the Cold War .. but it was never as clear cut and about swift endings and saving casualties as came to be believed.

People forget that atomic bomb or not the US was already committed to levelling all cities within Japan.

For more, and a deeper dive into the many takes on dropping the bomb, see:

https://blog.nuclearsecrecy.com/2013/03/08/the-decision-to-u...

for example (it has many references to many historic viewpoints)


Yes, and it's not even like there was a prior plan to use two bombs and see what happened - the military was fully planning to continue the atomic bombings as the cores became available (the third expected to be available by late August). It was only Truman intervening that stopped it at two.

It seems, unsurprisingly, that the military didn't really see the atomic bombs as anything other than a really big bomb - it was only later that they came to be seen as something qualitatively different, and "a bomb so big it is war-ending" is really only something you can know in hindsight.


> It seems, unsurprisingly, that the military didn't really see the atomic bombs as anything other than a really big bomb - it was only later that they came to be seen as something qualitatively different

That, arguably, came with advances in delivery methods. A-bombs alone don't end wars. Multiple sides each putting them on advanced bombers and intercontinental missiles, made to hit quickly and be effectively impossible to stop - that's when nuclear weapons graduate from being just bigger bombs to being existential threats and/or tools for keeping world peace.


> The justification at the time for nuking Hiroshima and Nagasaki was field testing two different novel bomb designs

That was probably one reason, but by no means the only one, nor even IMO a very significant one (and the article you link to, which is a good one from a good historian whose entire nuclear secrecy blog is worth reading, does not make the claim you are making--it gives a number of justifications that were made at the time, and the one you give is not one of them).

The Gar Alperovitz book referenced in the article is also worth reading, as is another historical study, Racing the Enemy [1] by Hasegawa. The latter book is not solely about the decision to use the bomb, but more generally about the process by which the war with Japan ended, but that decision and the process that produced it of course play a large role.

[1] https://www.amazon.com/Racing-Enemy-Stalin-Truman-Surrender/...

> it wasn't even certain at the time that this would work as a bomb

AFAIK there was no doubt that the implosion method used in the Nagasaki bomb would work after the Trinity test. And there was never any doubt that the gun-type method used in the Hiroshima bomb would work--they didn't even bother to test it before the Hiroshima bombing. The only question was what the practical yield would be under bombing conditions. But that could have been assessed by bombing tests on uninhabited locations, as was done after the war.

> it was never as clear cut and about swift endings and saving casualties as came to be believed.

> People forget that atomic bomb or not the US was already committed to levelling all cities within Japan.

These things are quite true. They do not, however, mean that wanting to field test two different bomb designs was a significant factor. Based on my reading I don't actually think it was one at the political level (what the military people thought was another matter, but the key decisions were made at the political level). Politically, I think the biggest factors involved were uncertainty about what it would actually take to get Japan to surrender, and the desire (at least once Truman came into office) to keep the Soviet Union from playing any part in postwar Japan, and more generally to deter them from expanding further.


All good points, and as you say there's little on the record that supports my take on why the two post-Trinity bombs were always going to be used.

Circling back to the time itself: there are copious notes on materials, orders, and deployments, and boxes of documentation, and then relatively little on the underlying thought given to whether the two bombs should even be used at all. At the time, that is; there's no shortage of after-the-fact recollections, of course.

While I'm not a historian, more an applied geophysics dabbler, I'm familiar with the material, I've met and talked with Alex Wellerstein, I once interviewed Mark Oliphant, and I've spoken with a number of the technical people who came out for the Emu Fields and Montebello tests.

In the context of an ongoing bombing campaign and truly vast amounts of money and resources spent on creating new weapons, it's difficult to imagine a scenario in which the bombs would not have been field-tested. If that kind of take is the foundation, then everything written becomes bookkeeping and for-the-record.

In life, where there's momentum politics will often follow rather than lead.

Regardless, I simply have an opinion (well, many and not all consistent with each other) and not a book or a career; I principally enjoy jolting people who have a particular fixed view of many events in history to become aware of wider fields of opinion and to consider how consensus views evolve over time.


> For example, the justification for nuking Nagasaki was "well, there's a factory nearby, so that's a valid military target".

Erm, what? The justification for nuking Nagasaki was that demonstrating overwhelming might would swiftly end the war (it did), while continuing conventional war would cause many more casualties over time.

Maybe you meant to say that because a lot of ships, bombs, and military equipment were made there (though it wasn't merely "a factory nearby"), they dropped the bomb there rather than somewhere else.

Wiki: https://en.wikipedia.org/wiki/Atomic_bombings_of_Hiroshima_a...


Many would consider those war crimes also.


Yes, but that side "won". So the important takeaway is that if you're going to commit war crimes, you must win the war to avoid being charged.


Who spends $10k/year on a car? That seems incredibly high. Also, I'd say most people spend more than $10k/year indirectly on upgrading their health, but I'd count things like "not eating cheap food all the time" as one of those, or, for example, moving to less crime-ridden areas. And people will rationally spend money like that before they try experimental drugs.

Also, you can't really buy this OTC anyway, so it's not really a substitutable good.

A bit of an aside, but I am bullish on these compounds. Tirzepatide is a discovery on the same order of magnitude as any, potentially even bigger than all the recent ML stuff, and it's not even close to its full potential. The data shows it's the only thing we have that really squashes diabetes and obesity with minimal side effects, while also having big positive effects on addiction, heart, bones, liver, brain, and immune system.

The addiction effects alone could change the world tremendously for the better if it's made easier to get and easier to ingest. I gave one of my Mounjaro shots I wasn't going to use to someone who had been trying to quit cigarettes for a decade and they were basically in tears a few days later telling me they went two full days without smoking, the first time they'd ever even gone a few hours since they were young.


> Who spends 10k/year on a car, that seems incredibly high.

You may need to recalibrate your intuition. Average payment on a new car purchase in the US is almost $9,000/year. Average used car payment is >$6,000/year. This is before taxes, insurance, gas, etc. It is common to spend $10,000/year on owning a car in the US.

The Americans who don't spend a lot of money on car-related expenses are the outliers. Despite this, the median American household still has $12,000 per year left over after all of their ordinary expenses like paying for cars. Average Americans have high incomes, profligate spending patterns, and poor savings behavior.


Once you pay off the car, the monthly payment becomes $0. Most people don’t replace it with a brand-new car every 3-5 years.


You still have oil changes, preventive maintenance, gasoline, and insurance to pay for.


Avoid using averages: they are meaningless to the average person.


> Who spends 10k/year on a car[?]

The average new car buyer. The average payment on a new car is now $726 a month per this source (some sources have even higher numbers). That's about $8,700 per year. That plus insurance, fuel, and maintenance puts you well over $10k.

https://finance.yahoo.com/news/average-auto-loan-payments-ex...


Car payments aren't "spending money on a new car". You don't own it yet, you're still buying it.

The insurance, fuel and maintenance are correct though.


If you put 200,000 miles on a $50,000 40 mpg car over 10 years, that's $50,000 for the car, $15,000 for insurance, $25,000 for gas, and almost certainly more than $10,000 for tires, maintenance and repairs.

There is a rule of thumb that the purchase price of a car is 1/3 of its cost over its lifetime. That's pretty generous in 2024, since cars now have better mileage, better reliability, and higher prices, but the non-purchase costs are still almost certainly more than half of the total.

That's over $10,000 a year.


The average car purchase price in the US is under $34k.


Does that include used vehicles? Average for new is $47k. https://fortune.com/2024/02/28/how-expensive-new-used-cars-o...

And new is appropriate, since used price is payment from one owner to another. IOW, if one side of the transaction gets a good deal the other gets a bad one, leaving average unchanged.

Average annual capital cost is new price divided by lifetime.


First: the average number there is according to Kelley Blue Book's "proprietary editorial process", and I have doubts. Given that it's spiked in recent years, we can assume that 90% of people today aren't at this new inflated price even if it's accurate, and that many more are holding out or buying used now.

Second: every one of your numbers is rounded up quite a bit, especially mileage, as the average would be about 135k miles by year 10.

Third: you left out selling your $50k car in year 10 with 135k miles on it.

Fourth: EAC is one way to get to the truth, not the only one.


I never claimed the numbers were average. The OP claimed that $10,000 a year on a car is uncommon. I'm saying that you don't have to go much above average to pay $10,000 a year on your car -- that spending $10,000 a year is common, even if a little less than 50% of people do so.


I'd guess closer to 10% than 50% do; the more typical amount would be closer to $5k/year, making the original claim pretty far off.


$10k/yr on a car is not incredibly high

https://www.edmunds.com/bmw/x3/2022/cost-to-own/


Did you just link a high end SUV as proof?


It's not far off. The average new car transaction price is now $47K. I can't understand how so many people are able to afford such expensive cars but somehow they're making it work.

https://www.coxautoinc.com/market-insights/kbb-atp-january-2...


It comes down to buy price minus sell price. People buying $47k cars are not driving them into the ground or totally writing them off, outside of rare cases. Not that it isn't still a significant sum of money, but there is also a large second-hand market involved.


Not just that, but $47k is the average for a new car. The average for a used car is $27k. From a cursory look, 70% of cars are bought used.

So the actual average purchase price is closer to $34k.


"incredibly high". This means finding an amount that high strains believability. I don't own one but I see X3s every time I commute to work. Obviously I see far more expensive cars too but I wanted to keep it somewhat reasonable.


Yes, it does strain believability, because it’s far off from reality.

The average purchase price for a car is about $33.5k in the US. That BMW starts at $47k.


Yes, $10k/yr is higher than average, but it's not impossible and it isn't unheard of. It's believable.

