I’ve said it before and I’ll say it again. Regardless of where you stand on specific issues like censorship, public health, and misinformation, I think it’s really dangerous that counter-consensus opinions cannot be expressed in the open without individuals facing intense public scrutiny. There needs to be some kind of correction mechanism in case the consensus opinion is misinformed.
Maybe it would help to put it in terms of a (not-so-implausible) scenario folks here are more familiar with. Suppose one day a niche topic in computer science became the subject of mass public concern, e.g., a computer virus escaped the lab and programmable computers became heavily regulated. Anyone caught running unapproved software could face criminal charges. "But Turing machines can be good, creative tools," you protest. "No, we cannot predict their behavior and unsanctioned software endangers us all," says your friend. As a programmer whose creative freedom depends on libre software, how would that situation make you feel?
That might be what comes to pass in the far future. It's possible for something to be a good and creative tool and also capable of terrible things.
I think the gain-of-function research that may have led to COVID could possibly have had interesting applications, but it also (possibly) led to something terrible; probably best to regulate it more.
In fiction, AI is banned in Dune and Warhammer 40k because of past events. It makes sense that if something really bad happened because of AI, you might want to put regulations on its development.
I think that banning general-purpose computers only makes sense if you believe AI cannot be controlled. A more reasonable approach would be to design programs in such a way that they, by their very nature, fulfill certain criteria that cannot be escaped. Think of a linear type system or substructural logic: you have a core calculus which keeps track of resources and effects so that computation has no way to propagate outwards unchecked (see the sketch below). Safety is not so much imposed by some external guard condition as woven into the very fabric of computation itself.
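Here's a rough sketch of the flavor of thing I mean, using Rust's ownership (which is affine rather than strictly linear, since values may be dropped) as a stand-in for a substructural type system. `NetCap` and its methods are invented names for illustration: an effect can only happen by consuming a capability token, and the compiler itself forbids duplicating or reusing it.

```rust
// `NetCap` is a hypothetical capability token for the network effect.
// It derives neither Clone nor Copy, so it cannot be duplicated,
// and its private field prevents construction outside this module.
struct NetCap(());

impl NetCap {
    // In a real system this would be handed out by a trusted root,
    // not freely constructible by arbitrary code.
    fn acquire() -> NetCap {
        NetCap(())
    }

    // Consumes the capability by value: after this call the token is
    // gone, so the effect cannot be replayed or propagate unchecked.
    fn send(self, msg: &str) {
        println!("sending: {msg}");
    }
}

fn main() {
    let cap = NetCap::acquire();
    cap.send("hello");
    // cap.send("again"); // compile error: `cap` was moved above
}
```

The point is that the "guard" isn't a runtime check someone could patch out; misuse simply isn't a well-typed program.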
It doesn't seem possible to me to ban general-purpose computers. Any society that tries would, at best, lose to the ones that don't. It seems impossible to me to have enough surveillance to both prevent unauthorized code and still allow effective, competitive software development.
It may not be technically possible, but that won't stop misguided people from trying to regulate Turing machines and exploit their vulnerabilities. Rather than putting so much energy into regulation, surveillance, and offensive measures, I suspect the "winning" strategy is to focus on secure, verifiable computing. Whoever masters this first will grow an immunity to (human and AI) cyber-threats and gain a significant competitive advantage in the digital economy.
The extent to which truly secure computing is possible depends in large part on which of the many possible "cryptographic universes" we live in (i.e., how complexity classes like P and NP relate to each other).
There are different kinds of secure computing. The kind of security you are referring to is based on cryptographic schemes like homomorphic encryption or zero-knowledge proofs. The kind of security I am referring to is language-based security.
It depends on the specific opinion. I don't think there's anything wrong with people not being allowed to express the opinion that minority groups should be systematically exterminated.
Transparency, for a start. Leaving our most important decision-making to only those who have the right credentials is like security through obscurity -- credentials can always be hacked by sufficiently motivated actors. Write down a program describing the logical steps you took to reach some decision, and release it. Once it's open source, formalize all the steps using type theory so anyone can verify them (see the sketch below). It's the only reasonable way to do secure computing.
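As a toy illustration of what I mean, here's a rough sketch in Lean 4 with made-up names (`approve`, `threshold`, `riskScore`): the decision rule is an ordinary program anyone can read, and next to it sits a machine-checked theorem about its behavior that anyone can verify.

```lean
-- Hypothetical decision rule: approve an application iff its
-- risk score is below a published threshold.
def threshold : Nat := 50

def approve (riskScore : Nat) : Bool :=
  decide (riskScore < threshold)

-- The "formalize" part: a machine-checked proof that every
-- approval really does satisfy the stated criterion.
theorem approve_sound (r : Nat) (h : approve r = true) : r < threshold := by
  unfold approve at h
  exact of_decide_eq_true h
```

The decision logic and its justification live in the same artifact, so checking the reasoning requires no trust in credentials, only a proof checker.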