I like the points you're making. The proportion of us who aren't convinced by this zeitgeist's most promoted fears is larger than it seems.
I'm not willing to consider a bot swarm to be human though, just a new human tool used to do stuff humans have been doing forever.
Since I've been watching, society has been one step behind the state-of-the-art manipulation techniques. It seems like society gains a big increase in resistance to a technique when we collectively agree on a catchy label for it. Morton Downey Jr. and all the big audience talk shows of the 80s and 90s were perfecting ways to control the fears of people (particularly parents). They seemed to become a lot less influential when we started calling it "Trash TV".
24-hour news adopted the lessons learned from Trash TV, and really refined them after 9/11. The Iraq War, the Patriot Act, and the financial crisis were all sold to the public with the same techniques that made your mom ask you if you were huffing canned air.
It wasn't until a few years after internet-scale techniques were put into use that society accepted "fake news" as a label for the last cycle's technique. We've always had "shill", but it doesn't seem to be the label that starts inoculating us. To be fair, we did mitigate a class of techniques with "clickbait". I'm super curious what phrase we'll finally settle on to blunt artificial consensus and fake conversations.
The fear I come back to is that those who manipulate and exploit have perfected not pushing people too far, so things will always get worse and we'll never again have a sinusoidal rebound to an era of greater optimism.
Two things help assuage that fear for me. 1) Reading historical examples of conspiracies to manipulate and exploit. There are a ton of things in American history alone that make the current suspected intrigues pretty milquetoast. The Business Plot was crazy! Bay of Pigs. Mohammad Mosaddegh. MK Ultra. Tuskegee. All that makes me feel less worried that we've got a steepening slope to dystopia.
2) The various revisions to the PlayStation 4's "TV and Video" section. I don't have to worry about companies riding fascism's razor edge; they're as unsubtle as ever.
I say consider it human in the same line of thought as the government saying "consider a corporation a person". It simplifies the abstraction to make it easier to reason about.
But in another line of thought, what is consciousness, what is awareness? If you live in a vacuum devoid of stimuli, eventually thoughts cease: everything becomes predictable and there's nothing left to think or reason about. In that line of thinking, humans are dependent on other humans for the continuance of thought, ideas, etc.
So in that line of reasoning, what is an AI? AIs today are created, maintained, directed, trained, modified, and destroyed by humans. They are not programmed to 'make mistakes' or to have a tolerance or appreciation for making mistakes, and that's the biggest difference I see right now between what it means to be human and what it means to be a machine. But functionally, they operate much like any human: they are dependent on humans for a variety of tasks and better than humans at a variety of tasks, just like you might compare one human to another.
So when it comes to reasoning about AIs, it's honestly much easier for me to reason about them as if they were human, because that supersedes all the technology language (similar to how you might interact with a person without treating their neurology or psychology as a factor in anticipating or trusting their behavior) and lets you treat them as having intent that can be reasoned about, simplified. Whether or not that intent is coupled with the humans 'behind the wheel', the point is that you can consider 'human + machine' to be a singular entity, because at the very least, without data, there is no machine.
I think it's important to reason about things this way because it separates out details that, while significant to the actual research and engineering of AI, are not necessarily significant to its effect on the individual, society, the global economy, etc. AI appears to operate at a finer granularity, but I'm not even sure that matters, because what is granularity when it comes to the reasoning systems people use to function in the modern world? Is it even possible to use that kind of terminology when comparing ways of thinking?
And I think that is sort of a middle ground between everyone in a panic and everyone who thinks "same old shit, different decade". It could be different; we don't know.
> The fear I come back to is that those who manipulate and exploit have perfected not pushing people too far, so things will always get worse and we'll never again have a sinusoidal rebound to an era of greater optimism.
I don't know about that. Machine learning is, in part, used to predict (or control) the future before the future has happened. That kind of seems like trying to bite your own teeth.