No, it's just stronger than the opposing point: that there exists a technology so potentially negative that it's worth freezing/delaying our technological development to avoid it.
The "opposing point" I am making is that technology is ethically neutral and needs to be examined as as it functions in the world, on a case-by-case basis (and implemented accordingly). I can't personally say I know many people who fear dangerous technology so much that they advocate for a complete halt to all technological development everywhere.
And how are you gonna do that? Not even the very inventors of technology can predict how it’s gonna be used (especially for bad) and by whom. Are you gonna have a committee of bureaucrats who never invented anything in their life come in and promise they will protect you from any possible downside? And are you gonna believe them?
What do you mean "how are you going to do that"? Are you seriously trying to assert that, e.g., the FDA has never successfully blocked a harmful drug from going to market? In another comment you yourself concluded that not adopting nuclear exacerbated the climate crisis; isn't this exactly the kind of prediction you're saying is impossible? Scientists and engineers make estimates about the large-scale impact of technological adoption all the time, and often they're very good at it (for example, we're very clear about what the combustion engine is doing to our environment).
> Are you gonna have a committee of bureaucrats who never invented anything in their life come in and promise they will protect you from any possible downside?
This is such an absurd strawman that I almost didn't want to address it. Nobody is asking to live in the perfect nanny state, although if that's how you view everyone who disagrees with you, no wonder you're into e/acc. It is both possible and entirely achievable for a government to collaborate with scientific experts to decide how and where to deploy different technologies. The fact that this process doesn't have a 100% hit rate is emphatically not an argument for abandoning the concept of industrial/technology regulation entirely. I personally enjoy not having industrial runoff in my groundwater or dangerous drugs available for sale in the pharmacy, and I believe this kind of prudence is a major contributor to the progress in human quality of life that you touted elsewhere.
edit: also, how is this not its own form of "doomerism"?? In one breath you make these wildly sweeping statements about human history and the unstoppable power of progress, and then here you state that it's basically impossible to ever make informed estimates about the consequences of our actions. It seems to me like you can only hold one of these two positions consistently.
> What do you mean "how are you going to do that"?
I mean that in order to regulate a technology you would normally develop it first, to see how it's used. The EU (where I am unfortunately residing) just reached an agreement to regulate AI without any AI products having been originally developed here at all!
Previously they made it practically illegal to develop charging standards other than USB-C.
This is "the strawman" I am talking about: regulating emerging tech into oblivion out of fear something bad may come out of it.
> the FDA
The same FDA that regulated insulin so hard they handed a practical monopoly to a producer who then promptly decided to jack up the prices? Drug prices in the US are so high that buying trips to Mexico and Canada come out cheaper. This is what happens when you decimate competition.
You see a few dangers narrowly avoided by the FDA, like thalidomide, but you never see the millions dead because life-saving drugs spend decades in approval limbo or never get developed in the first place.
> Nobody is asking to live in the perfect nanny state
Are you sure? Because that is clearly where the EU is headed. Have you forgotten the pandemic and its unspeakable abuses? Can't you see the ever-accumulating regulations? Can't you hear people asking the state to intervene more and more, while politicians are more than happy to promise anything for votes? That is why I am into e/acc.
You’re lightly changing the topic here, I think. I brought up regulation as one type of situation where someone makes estimates about the future societal impact of a technology. Setting aside the big libertarian rant for a sec, I’d like to go back to the original question. Be it an individual, researcher, lab, corporation, or regulatory agency: do you believe that when developing a new technology it is both possible and prudent to assess its future social impacts, and to continue or discontinue development accordingly? Or do you believe that it is always the best idea to continue development regardless of estimated impacts?
> do you believe that it is always the best idea to continue development regardless of estimated impacts?
Emphatically yes. The possible downsides are much smaller than the expected upside (as history shows), and we are really bad at predicting either in advance anyway (also as history shows). Finally, we really really really need those upsides to change our fate as a species, currently one cosmic accident (meteorite, virus, nuclear escalation, etc.) away from extinction.
Also even if the good guys stop, the bad guys will certainly accelerate - so in a competitive world there is really no advantage in voluntarily giving up technological development.
Got it, so when, say, a private individual or lab decides that the cost of continuing a line of research or development exceeds the predicted benefit/profit and cancels it, you think that's a missed opportunity regardless of the context? Every invention everywhere should be shipped and never rolled back, regardless of whether it is profitable (on any time horizon), because there's a chance it could eventually return social benefits?