The fear of technology is the fear of man's capability to misuse it. It's got nothing to do with economic or political preference and everything to do with the fact that Machiavellian people exist in all societies.
I think this is naive. I really wish all the bad in the world came from Machiavellian people and no one else. But there are a lot of good people (or at least not outright evil people) doing bad things through incompetence, ignorance, mistakes, and so on. I think questioning the direction technology takes under capitalist incentives (profits over people) is really important. You can't dismiss those concerns by saying "technology is only bad when people are bad".
Bad in and of itself isn't a universal constant or pillar; morality is relative.
Not sure if you've read The Prince, but Machiavelli never suggested "evil" intent, just that a good ruler must embrace all strategies to achieve their goal. The ruler may not think of themselves as "bad", but should accept doing questionable things to advance whatever agenda they feel is "right". And what is "right" all depends on your perspective, doesn't it?
There are no moral absolutes, just majorities that subscribe to a similar set of red lines.
Capitalism creates machines that put profit and shareholder value above all else, and that breeds some pretty twisted notions of what is "right".
Historical communism put the will and power of the state above the welfare of the people and called that "right". We all know how well that turned out.
I'd rather say it's potentially naive to think that ideas such as "good" and "bad" are absolutes.
I'll rephrase my point so we can steer clear of the pedantic discussion about Machiavellianism and "good" or "bad":
There are a lot of people doing things they consider bad without actually wanting to do them. Thinking that all consequences of technology are intended misses half the picture, as it dismisses the fact that there will be unintended and unwanted consequences. It is a naive take that does not allow space for discussing how to predict, detect and avoid those unintended consequences.
> It is a naive take that does not allow space for discussing how to predict, detect and avoid those unintended consequences.
(I think you might want to reconsider using "naive"; it's belittling and reeks of r/iamverysmart.)
It's not about the misuse of technology. Technology gets created without a Hobbesian Leviathan: there is no universal overseer that knows when something is invented and can pull the brakes on whether progress should be stopped. Nor can the uses of a technology be predicted at the time of its invention, especially when it comes to fundamental research.
An example: Maxwell formulated his famous equations in the 1860s, and they became foundational in enabling radio in the 1890s. Radio in and of itself has enabled all kinds of amazing communications breakthroughs that have made humanity richer. But it also enabled true modern warfare, as it expanded the capability of nations to orchestrate massive military engagements across multiple theatres.
Should we have stopped Hertz and Marconi? Where would this debate be held? Who enforces the outcomes of these debates in the geopolitical space we live in today? There's a simple practical problem with the whole situation.
Once the genie is out, it's unstoppable. See nukes: once the US used one, the race was on for everyone else to invent it independently.
Perhaps your issue is with my use of "misuse", since that implies a correct use exists. Let's just say that a "correct" use (or many) exists in the eyes of the original inventor or creator. It takes the imagination and motivations of others to re-apply that knowledge.
One could go so far as to say that all technology, in and of itself, is dangerous.
You started this thread by saying that the fear of technology has nothing to do with the political and economic system and everything to do with Machiavellian people, and now you're saying that "incorrect" uses are the problem of other people. But I'm not talking about that. I'm not talking about someone using radio for warfare or nuclear energy for destruction. Those are covered by what I said earlier ("I wish all bad things happened because of bad people"): they are things I consider bad that were done on purpose. The consequences of using the atomic bomb were clear, intended and understood.
The problem comes when someone uses technology for some purpose and unintended consequences follow. For example, judicial systems using AI to predict recidivism and adjust sentences. It's a system created to improve the situation (people with a low chance of recidivism receive lighter sentences, as the goal of rehabilitation is accomplished earlier) but, if the system picks up certain biases or incorrect proxy measures, it can make mistakes and keep certain people in prison far too long, which decreases their opportunities to rebuild their lives after prison. An unintended consequence of the system is that it might actually increase recidivism, or discriminate against and condemn people. It's not "misuse" of AI, it's unwanted consequences.
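To make the proxy point concrete, here's a minimal, hypothetical sketch in Python (entirely synthetic data and made-up feature names like `prior_arrests`; not any real system) of how a seemingly neutral feature can produce very different error rates for two groups with identical underlying behaviour:

```python
# Toy illustration of proxy bias: both groups reoffend at the same 30% rate,
# but one group is "over-policed", so the recorded proxy (prior arrests) is
# inflated for it. A model trained only on that proxy flags that group's
# non-reoffenders as high risk far more often.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)        # two demographic groups, 0 and 1
reoffend = rng.random(n) < 0.3       # same 30% base rate for everyone

# The proxy the model actually sees: group 1 accumulates extra recorded
# arrests for the same behaviour.
prior_arrests = rng.poisson(1 + 2 * reoffend + 1.5 * group)

X = prior_arrests.reshape(-1, 1)
model = LogisticRegression().fit(X, reoffend)
pred = model.predict(X)

# False positive rate: people who would NOT reoffend but get flagged anyway.
for g in (0, 1):
    mask = (group == g) & (~reoffend)
    print(f"group {g}: false positive rate = {pred[mask].mean():.2f}")
```

On this toy data the flagged-by-mistake rate should come out several times higher for group 1 than for group 0, even though nobody "misused" anything and both groups behave identically. That's the kind of unintended consequence I mean.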
And in that context it is warranted to ask what the incentives of the economic system are and how they will affect technology. In a capitalist society, the incentives make people focus on profits, and people's wellbeing is not necessarily important in that context. In a non-capitalist society you wouldn't have to worry about, for example, insurance companies using AI to predict car crash probabilities and denying people (sometimes wrongly) insurance and the ability to drive a car.
That's what I mean when I say that reducing bad outcomes (however you want to define "bad") to people wanting those outcomes does not help at all. Because then you stop worrying about all the people with good intentions who do things with unwanted consequences. For example, if you don't talk about the unwanted effects of AI and how to avoid them, you will have data scientists creating models without worrying about indirect effects. Most of the time you won't need to "stop progress", just be mindful of a bunch of extra things.
And, of course, ignoring the system in which technology is developed and used blinds you to a whole class of problems created by the incentives of the system (capitalism or whatever).