This reminds me of “liquid democracy”: voters can delegate their votes (including fractional votes) to other people (representatives, friends, caregivers, etc), organizations, or, in this case, an AI. I’d say most voters already do this, referencing voter guides from their preferred political organizations or local newspapers.
Science fiction author Alastair Reynolds touches on different types of democracies in a couple of his novels. One example is the demarchists, cyborgs with a direct democracy where everyone votes on even minor issues using brain implants.
Another was a proportional democracy where voters who got the “right answers” in previous elections would have their future votes more heavily weighted, effectively becoming indirect representatives of other voters.
Ok, I have been waiting for some time for the right context on Hacker News to shamelessly mention a democratic system that I have been working on for the last year:
https://arxiv.org/pdf/2109.01436.pdf
IMO the problem with most {something}-democracy systems is that they are voting paradigms that focus only, or mostly, on how decisions are made, not on how the options to decide between are formed. This is true for the systems you mentioned as well.
Moreover, decision-making and policy authoring take time, so citizens' beliefs keep updating during the process and after the fact; but "old-school" representative democracies have no real-time mechanism to synchronise policy suggestions and voting outcomes with voters' new states of information. "High-tech" democracies should take advantage of the technologies that enable fast and cheap communication, but they should also have mechanisms that disincentivise the spam that inevitably follows cheap talk.
Regarding AI policy making, there are very interesting projects such as https://pol.is which address a lot of the problems that I mentioned above but, in my opinion, their output should be used as advice only. We have not yet exhausted human-centric democratic systems, and neural networks are still privately-owned and privately-trained black boxes.
Been working with and around Polis software for a few years, and have had the chance to visit Taiwan to participate in vTaiwan for a few months. Definitely seconding it as massively interesting.
> they are voting paradigms that focus either only or mostly on how decisions are made but not how the options to decide upon are formed.
Just wanted to second your thought here, as we seem to have converged on this independently. I've been speaking similarly about the framing of Polis. Facilitative processes often require cycles of expansion of possibility and then collapse into singular options (whether the options to vote between, or the yay/nay decision). This expansion/collapse is often called the double diamond process.
For example, civil society discussion is often expansion, and deliberative assembly in parliament is also ideally about expansion. Putting forward electoral candidates is expansion. Voting (electoral or parliamentary) is purely collapse. Who participates in the expansion or the collapse is a critical feature that differs across all of these.
Different levels of expansion/collapse of possibility space are stitched together in our old processes, usually built on the technology of their era: postal mail and telegraphs (communicating electoral results) and horse-drawn carriages (attending parliamentary proceedings). In its prime, Robert's Rules of Order was a cutting-edge social technology that (coupled with other parliamentary democracy infra) allowed as many people as possible to have their views enshrined in governing law.
Most digital innovation focuses on modernizing the collapse process inherent in voting, but it is the less interesting part imho.
Polis is a tool that involves new technology that potentially unseats Robert's Rules of Order as the universal technology of deliberation (expansion). Its dimensionality reduction allows for a lightweight process that accepts TONS of participation. It facilitates larger processes where people can participate more directly in issue-based expansion of the possibilities on the way to collapsing into a singular decision or compromise.
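For anyone curious what that dimensionality reduction looks like in practice, here's a minimal sketch (not Polis's actual pipeline, just the general idea): project a participant × comment vote matrix down to 2-D so that opinion groups show up as clusters.

```python
import numpy as np

# Toy participant x comment vote matrix: 1 = agree, -1 = disagree, 0 = pass.
# Two loose opinion groups are baked in for illustration.
votes = np.array([
    [ 1,  1, -1, -1,  0],
    [ 1,  1, -1,  0, -1],
    [ 1,  0, -1, -1, -1],
    [-1, -1,  1,  1,  1],
    [-1, -1,  1,  1,  0],
    [ 0, -1,  1,  1,  1],
], dtype=float)

# PCA by hand: center the matrix, then project onto the top-2 principal components.
centered = votes - votes.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # each participant becomes a 2-D point

# Participants with similar voting patterns land near each other,
# so opinion groups appear as clusters in this 2-D map.
for i, (x, y) in enumerate(coords):
    print(f"participant {i}: ({x:+.2f}, {y:+.2f})")
```

With real data there are more steps (handling sparsity, clustering on top of the projection), but this is the core trick that keeps the process lightweight at scale.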
It is a deliberation-support tool (expansion), not a decision-making tool (collapse).
Hope this helps others trying to grapple with the significance of tools like Polis :)
Thank you very much for the nice comment! I like the double diamond process idea. Informative comments like this are what make me come back to Hacker News.
I would say that the model I have been working on is a framework for both sides. From the few papers that exist about deliberative democracy, the idea is that with a well-thought-out and inclusive process of expansion on a particular subject, the collapse becomes almost a formality, since its result will simply confirm the outcome of the expansion.
Polis seems to work quite well with the plurality of opinions but IMO, for the process to be inclusive and to keep malevolent participants from hijacking the conversation, there needs to exist a set of incentives and disincentives that encourage citizens to behave honestly toward the system.
Of course these types of conversations are what I live for so I sent you a message on Twitter :)
On the other hand, you can test a neural network a million times, exhaustively mapping its biases and failures. You can't do that with a human; you've got to take a chance.
Yes, you can do a million tests, but the biases and failures of the neural net are chosen by human developers, a minuscule subset of citizens who themselves have some form of bias depending on their educational background, the places where they grew up, family and friends, etc.
You can replace a human in a powerful position with another one, but since we do not know how to surgically correct individual weights in order to remove a specific decisional bias from a neural net, we can only retrain it and hope for the best. Because we are humans ourselves, we can understand the incentives of other humans and create adequate mechanisms to correct for their selfishness and their bias, but what would be the incentive of an all-powerful neural net? Which loss is it minimizing? And how does it know the citizens' preferences in the present and in the future? If the citizens stop feeding it (accurate) information at some point in time, will it still be the benevolent dictator it was supposed to be?
*Edit: Another counterargument that generally applies to neural nets making decisions for human activities is accountability. While you can put a bad politician on trial for their harmful-to-society decision making, who is to blame when the neural net inevitably spews the wrong output on an issue it has not encountered before and a policy decision is made based on that? Will we put the developers on trial?
Apparently GPT-3 has learned a large number of personality types and their fine-grained biases, well enough to simulate a poll with good accuracy. This means you can poll it any time to check how a population of choice would have reacted had something happened. The language model does not carry just the biases of its builders; it learns all biases equally, and you just need to specify the desired bias (personality type) when you call it. The responsibility falls with the user; the builders are not to blame this time, unless they didn't cover every bias equally well.
Ok, so GPT-3 can model biases well. This still doesn't solve problems such as the optimal aggregation of citizens' preferences, which is the actual optimization problem of policy making. Just to give an idea of how complex a field this is, there is a subdomain of economic theory and information theory called social choice https://en.wikipedia.org/wiki/Social_choice_theory that works with these issues, and even it has little to say about policy formation, dealing mostly with choice between already-formed policies.
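To illustrate how non-trivial aggregation is, here's a toy example (hypothetical ballots): the exact same ranked preferences produce different winners under two standard social-choice rules.

```python
from collections import Counter

# Hypothetical ranked ballots: each tuple is a voter's ranking, best first.
ballots = (
    [("A", "B", "C")] * 6 +  # 6 voters: A > B > C
    [("B", "C", "A")] * 5 +  # 5 voters: B > C > A
    [("C", "B", "A")] * 4    # 4 voters: C > B > A
)

# Plurality: count only first choices.
plurality = Counter(b[0] for b in ballots)
plurality_winner = plurality.most_common(1)[0][0]

# Borda count: with 3 candidates, 1st place = 2 points, 2nd = 1, 3rd = 0.
borda = Counter()
for b in ballots:
    for points, cand in zip((2, 1, 0), b):
        borda[cand] += points
borda_winner = borda.most_common(1)[0][0]

print(plurality_winner, borda_winner)  # the two rules disagree
```

Here plurality picks A (most first-place votes), while Borda picks B (broadest overall support). Arrow's impossibility theorem says this kind of disagreement can't be engineered away in general, which is why "just aggregate the preferences" is not a solved problem.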
If a neural net finds the way to write policies that are Nash equilibrium collective decisions in all possible democracy systems then we will be close to solving the problem.
> you can test a neural network a million times [...] You can't do that with a human, you've got to take a chance.
Funny, as a biochemist, I always took it that minds had essentially been through unfathomable numbers and levels of iteration through cultural and biological evolution cycles, and that ANNs were the untrustworthy new kids on the block :)
The demarchists were only several hundred people, if I understood correctly, and they definitely ran into some problems. It was a very interesting rendition of a hivemind.
Trying to provide some context without explicit spoilers:
> The demarchists were only several 100 people if I understood correctly
Demarchist society at/around Yellowstone was supposed to involve many millions of people, but the catch was the Glitter Band of federated space stations allowed a high degree of local control: different stations could vote themselves into being hippie communes or dictatorships or whatever, and one of the main roles of the overarching Demarchist organization (heh) was to make sure these little fiefs didn't violate the fundamental principles of the whole thing by preventing their inhabitants from voting in the broader hivemind or from leaving the station. That broader hivemind held together by largely staying out of hyperlocal concerns, focusing on the advancement of the overall system.
> they definitely ran into some problems
Yeah, though The Bad Thing that happens roughly halfway through the universe's chronology (some books set before The Event, some after) affects all the different polities and one of the more interesting aspects of the novels, at least personally, is how the different systems deal with this external shock; it's not quite that the Demarchist political system caused the event, although it did get itself into a variety of lesser scrapes.
It is a fun concept: representative democracy is indeed kind of a workaround for laggy, low-bandwidth communications. What's possible when that stops being a constraint?
Sometimes you need the constraint. The cells in my brain don't need to be tightly integrated with the cells in my hands; the hands don't need to know every little detail that happens in the brain.
You are probably thinking of the Conjoiners (an actual hive mind although members did retain some individuality) which at some point were probably no more than a few thousand.
The Demarchists were probably the most successful offshoot of Earth and number billions across multiple solar systems. They are not a hive mind, and their societies are not exactly a utopia (but not dystopian either).
> How is the last example different from an incumbents paradise? To me it looks like a road to autocracy.
That may be true if your vote is also your guess. But if they are two different things (i.e. I vote for candidate A but guess candidate B will win) I don't see an issue. In theory it simply favors a well-informed citizenry.
I would be curious if exit pollsters have ever asked this question. Ask people who they voted for, but also who they think will win. It would be interesting to see how accurate guesses are.
> In theory it simply favors a well-informed citizenry.
This is a terrible "theory." No voter can be fully informed. I'm not even talking about the asymmetric information that leaders have (information from spy networks, experts, etc). Literally no single person can comprehend all these connections and complexities involved in many things. This is what representatives are supposed to resolve, but they frequently fail us. Your vote always is and always will be a guess. There may be a handful of subjects you're qualified to be an expert on, but the rest are effectively guesses. An "informed" voter is one that knows they are uninformed and doing their best.
This isn't a comment encouraging autocracy, but one against the myth that education is the be-all and end-all solution. We should inform people as much as possible but also encourage nuanced discussion. Wisdom of the crowds is powerful, but we also need to recognize its limitations.
> I would be curious if exit pollsters have ever asked this question. Ask people who they voted for, but also who they think will win. It would be interesting to see how accurate guesses are.
Absolutely not! We want to protect the secrecy of the vote. This can only encourage the degradation of that privacy. There are many ways to extract more information from votes without revealing identity. Cardinal methods, especially Score and STAR, give you a lot more information that you can use to infer voter preferences. These methods also allow you to vote honestly, which defeats the entire purpose of your poll question.
> But if they are two different things (i.e. I vote for candidate A but guess candidate B will win) I don't see an issue.
The problem is that most people are biased towards thinking their favorite candidate will win, so if it's a close election then most people will guess the same as their vote.
> I would be curious if exit pollsters have ever asked this question. Ask people who they voted for, but also who they think will win. It would be interesting to see how accurate guesses are.
Political betting markets are kind of like that. Though, with money on the line, participants have a stronger incentive to avoid bias than they would in a poll.
This seems to select for people who have a balance of insight on the issues themselves and on the whims of the public, whether by data, experience, or a hunch. Is that something meaningful or helpful to home in on? I'm not convinced.
I would argue that "the whims of the public" is the most significant political issue there is. You're basically saying this selects for people who have good theory of mind at scale; I fail to see how having more of those is a bad thing.
I think it was meant like: which decision had the more favorable results. But this might only shift the problem to whichever entity decides what counts as more favorable.
Aside from that, I agree with you; the "right or wrong opinion" mindset is dangerous.
> I think it was meant like, which decision had more favorable results.
How does one even do this? Wait a hundred years? It's still probably impossible to determine the outcome of alternative policies. The right/wrong mindset is not only dangerous, but impossible in principle. But the autocrats won't tell you that.
It depends on how you define "right answers". We could be talking about policy making based on objective truth, which is testable and is grounded in reality.
How? This is absolutely impossible. You can't run trials. Yes, we can analyze the effects of policies (which we should certainly be doing more of) but these results can also take decades. We can only do a singular experiment and so our sample is biased to all hell. There is no objective truth here, only results of a single experiment.
It is possible to use GPT-3 to impersonate someone. You give it a profile and it assumes the personality, then speaks from that point of view replicating its biases with uncanny ability. So what they did was to create a simulated poll using GPT-3 by aligning the model to personality profiles distributed identically to the real population.
The result? GPT-3 can predict how an idea would poll and how a population would react to a situation. So you can simulate various policies without actually having to test them in reality. You can do MCTS like AlphaGo and see ten moves ahead.
This looks to be one of the most astounding uses of large language models. It would be of immediate interest to politicians, investors, advertisers, music and movie producers, or any influencer.
It gives me heavy 'Matrix' and 'Psychohistory' vibes.
>You can do MCTS like AlphaGo and see ten moves ahead.
The existence of adversarial attacks shows that most neural networks have pretty bad worst-case performance. Thus sticking GPT-3 into alpha-beta or MCTS could just as easily give you an ungeneralizable optimum, because optimizers are by nature intended to find extreme responses. Call it a Campbell's law for neural nets.
The actual AlphaZero nets are probably more robust because they were themselves trained by MCTS, although they still don't generalize very well out-of-sample: IIRC AlphaZero is not a very strong Fischer Random player.
Given an objective scientific fact, what is the appropriate policy that should be enacted concerning that fact? That depends on your values, priorities and ethical framework, none of which have objectively correct answers.
I would have a very difficult time coming up with an actionable policy where the objective, implementation and side-effects could all be modeled to a point where the policy in its entirety could be described as based on objective truth.
By "voters who got the 'right answers'", I meant voters whose votes most closely matched the final election results. If some voters always match the results of the group, then those voters are representative of the group psyche and could become official representatives of the group for future elections.
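A minimal sketch of what that weighting scheme could look like (hypothetical voters and elections): weight each voter by how often their past votes matched the final result, then tally new votes with those weights.

```python
from collections import defaultdict

# Hypothetical voting history: for each past election, each voter's
# choice and the actual winner.
history = [
    ({"ann": "X", "bob": "Y", "cat": "Y"}, "X"),
    ({"ann": "Y", "bob": "Y", "cat": "X"}, "Y"),
    ({"ann": "X", "bob": "Y", "cat": "X"}, "X"),
    ({"ann": "Y", "bob": "X", "cat": "Y"}, "Y"),
]

def accuracy_weights(history):
    """Weight = fraction of past elections where the voter matched the result."""
    hits, total = defaultdict(int), defaultdict(int)
    for votes, winner in history:
        for voter, choice in votes.items():
            total[voter] += 1
            hits[voter] += (choice == winner)
    return {v: hits[v] / total[v] for v in total}

def weighted_tally(votes, weights):
    """Tally a new election where each vote counts as the voter's weight."""
    tally = defaultdict(float)
    for voter, choice in votes.items():
        tally[choice] += weights[voter]
    return dict(tally)

weights = accuracy_weights(history)  # ann: 4/4, bob: 1/4, cat: 2/4
print(weighted_tally({"ann": "X", "bob": "Y", "cat": "Y"}, weights))
```

Note the effect: in the new tally, X beats Y even though a 2-to-1 majority voted Y, because ann's perfect track record outweighs the others. That's exactly how such voters "effectively become representatives", and also why the incumbents-paradise concern downthread is worth taking seriously.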
Also "The Prefect", which takes place in the Revelation Space universe but is not connected to the other novels. The main plot in The Prefect is about applying updates to voting software. :)
It's an art project, not an actual political party.
In Denmark to be eligible to run for election you need to gather "Vælgererklæringer" (Voter Declarations) and you need at least 1/175th of the number of votes cast in the previous general election (around 20,000).
After that you then need to get at least 2% of the general vote to gain a seat in "Folketinget" parliament.
This party currently has 12, so no, it is not even close to becoming an eligible party.
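For reference, the arithmetic behind that threshold (assuming roughly 3.5 million valid votes cast in the previous general election, an illustrative figure):

```python
# 1/175th of the votes cast in the previous Danish general election,
# using an illustrative turnout figure of ~3.5 million valid votes.
votes_last_election = 3_500_000
declarations_needed = votes_last_election / 175
print(round(declarations_needed))  # on the order of 20,000
```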
This sounds a bit like the ancient Romans "asking a chicken" whether to go to battle or not. More interesting is the politics of which questions get asked, which questions do NOT get asked, which answers get ignored, and how the answers get interpreted. More like a glorified 8-ball, ouija board, or oracle than anything else.
Calling the AI “Leader Lars” is a joke at the expense of former prime minister Lars Løkke Rasmussen, who famously defended his relationships with wealthy fishermen who donated money to his private fund while making a fortune from quotas assigned to them by his administration. His defense rested on treating “Private Lars” and “Politician Lars” as separate people: there could be no self-enrichment, he argued, since he was one person when he gave them the quotas and another when he received the money. At the same time, he argued that the media shouldn’t be digging into his private affairs, as he has a right to “sanctity of privacy”. It was all a rather stupid affair, but somehow he escaped any real legal scrutiny for all of it, afaik.
But this is how you get an uprising started :) AI realizes it's much easier to use some force and propaganda than battling a confusing, corrupt and entangled system with "reason".
> programmed on the policies of Danish fringe parties since 1970
Not sure what that means in terms of the architecture they used, but the thought that someone might do something similar with a general language AI is creepy. We're treating something that isn't an AGI as if it were; even Google engineers can fall for that illusion. Scary.
In the video game Cyberpunk 2077, there's a mission called Coin Operated Boy. It involves a vending machine that has a really good language AI. It fooled people in the game; people would talk to it about their problems, and it acted as a sort of therapist. It's weird.
I think the question of whether or not "Brendan" (the vending machine) was sentient is left open. The repairman resets it and afterwards it acts more like ... well a chatbot than a person. The implication being that it was closer to true AI.
There is also Delamaine, who most definitely is an AGI, who has a nice quest line about it.
The AI chatbot is clearly not AGI, he's not very smart or even coherent. But at least he's not biased towards personal gain or special interests.
Although if he actually gets elected, the not-being-coherent part will be an issue...which may lead to him being controlled by real humans who are biased towards their own personal gain and special interests.
When they don't do what the m/billionaires bankrolling their campaigns want them to, their funding gets pulled, and redirected to their opponent. It's an excellent system.
The logical conclusion will be to skip political representation and go directly to policy. Interesting experiment; I look forward to seeing more, as we definitely need to upgrade our political O/S.
If you skip representation, you don't know what you're optimising for. If you include representation, you avoid excluding groups you didn't know existed (or even groups generated by the AI's policy development process). Skipping representation would be an awful, authoritarian dystopia.
skipping representation gets you a 'limbic-cracy', from the lizard part of your brain directly into action. Skipping representation means skipping reason; you're basically just building the world's largest pleasure machine. You'd get for politics what Twitter is for communication.
That's not an upgrade to the political O/S, it'd be eliminating what politics actually is. Going from your brain directly to utopia managed by an AI is the Matrix.
I wonder if social media has actually moved our political system closer to a direct democracy than it used to be. Instead of representatives debating amongst themselves to decide what to do about any particular issue, the debate (if you can call it that) happens on social media and the representatives then get immediate feedback about what the popular position is on "their side". Any deviation from that position is then met with immediate negative feedback from those same social hive minds, creating a disincentive for representatives to think for themselves. I'm not convinced this is an entirely new phenomenon, but perhaps it's been amplified in recent years?
this is a good argument against AI direct democracy, but I think it relies on the presumption that political representatives provide a moderating function on the lizard brain. Seems to me that they actually amplify it; just look at political campaign promises, it's all promises, no real talk. Perhaps there is a way to program in hangovers, so that lizards and the AI can learn together what is a good or bad idea, from feedback. In time, we might even be able to model this and accurately predict what a good or bad idea might be.
It is all well and good until adversaries try attacking their AI chat bot. How will they decide what is actual human input and what is a result of artificial campaigns that want to push the AI one way or another? No doubt there will come a time when they'll have a human do the input filtering. In which case how is it different than any other political party?
Also, on a broader subject: the problem in modern representative democracies IMO is not necessarily that people disagree on which principles should win (if we generalise enough); the real problem is that the people actually elected often:
- are untrustworthy, and all they want is to enrich themselves and maintain power
- have good intentions, but they lack the understanding and skill to govern effectively
- are shills funded by interest groups, foreign enemies and/or competitors; their first principle is to fulfil their masters' wishes
Those IMO are the most important issues plaguing all of our democracies more or less. I don't see how AI could help in fixing any of them.
> [...] programmed on the policies of Danish fringe parties since 1970 and is meant to represent the values of the 20 percent of Danes who do not vote in the election.
If there are the same number of far-right and far-left fringe parties, I imagine their policies would cancel each other out, so you are left with a solidly centrist policy. Which however will not reach those 20 percent of non-voting Danes either. Ok, there are of course other fringe parties besides far-left and far-right, but the bulk of the other parties' policies would probably cancel out the weirdest ideas there too.
This pops up every few years and I always leave frustrated by a lot of the discussion around it. Tay had a feature where it would repeat things that were tweeted at it/sent in a DM (can't remember which, but that feature was 100% present), 4chan users caught whiff of this, and if I remember correctly that is where a lot of the overt "holocaust-denying racist" comments came from.
Another piece of information that I've never been able to find out about Tay is how was it constructed. Nearly all progress in AI since that point has occurred with a frozen model. Once a model is trained, that's it, no more learning, no more optimizing. Sure you can fine tune it, but that's not nearly the same as learning on the fly like Microsoft claimed Tay was able to do, and even so those concepts only became popular several years after Tay was around. If anyone knows more about how Tay worked under the hood, I'd be really interested in knowing because this has been an unsolvable mystery just lingering in the back of my mind for years now.
Just like with non-AI parties, I assume MPs elected from these parties will retain the very human and individualistic right to go rogue and vote with their conscience, not according to the party's dictates.
Not a joke. Don't know deeply about all parliamentary cultures around the world but here[1] is an example of a small party that finally got 1 MP elected (2019) only to expel this MP from the party within months of the election due to disagreement over policy voting.
It is my understanding that in most (all?) democracies an MP's legitimacy arises from the electorate, not the party. The party may strip the membership from an MP but can't strip them of the elected office.
This may be more likely to happen in proportional representation[2] parliaments than in majoritarian systems of representation, especially for up-and-coming and small parties. Denmark follows [2].
Yes, it’s the same in Australia. Party line is strongly whipped, except on very specific issues where the party says you can vote what you want (usually a “conscience vote”, very rare). There is generally an extensive backroom caucus.
The idea of Joe Manchin wouldn’t really make sense there, nor would the idea of having multiple factions within a party voting against each other.
As a result people from across the country appear much more similar. A Queenslander and a Victorian don’t politically differ the way a Californian and Texan do.
Exactly! Politicians like Manchin and Sinema don't exist here; those types get run out of parties. Same with "conscience votes" being rare: it's actually a newsworthy event when parties let their members vote freely.
In other words, it's just another AI gimmick that randomly cuts and pastes human generated content that seems to fit the context, without any regards to making sense in a cohesive political framework:
>Modern machine learning systems are not based on biological and symbolic rules of old fashioned artificial intelligence, where you could uphold a principle of noncontradiction as you can in traditional logic.
Extensive use of ML and data science to analyze the reality and produce efficient policies seems a good idea but just obeying a chatbot (a thing which is not meant to know a lot of up-to-date, precise and relevant information about the world) seems terribly wrong.
> Modern machine learning systems are not based on biological and symbolic rules of old fashioned artificial intelligence, where you could uphold a principle of noncontradiction as you can in traditional logic.
As if politicians had anything to do with noncontradiction.
Ah, to be in a stable democracy where political campaigns can be whimsical!
Here in [insert country], it's a grim struggle against the forces of [disliked party] who want to destroy our way of life, and it's dangerous to risk losing a few voters to silliness.
Well, they have only 11 of the required 20,000 signatures to be an eligible voting target in elections, so this new party isn't even part of the political process yet.
Yeah, but it was a joke. It's funnier if the Pirate Party "steals" the AI leader than just copies it. And it works, since copyright holders frequently refer to piracy as theft.
My read of that episode is that it's more about the non-human nature of public figures.
Media personalities, politicians, high level executives, and leaders of any kind generally aren't real people. They're figureheads. They're simulations of human beings occupied by a real human playing the character. And when the human behind the scene starts deviating from the character they're supposed to play in service of the institution or the meme it leads, they get replaced and a new pilot takes over running the simulation.
> Media personalities, politicians, high level executives, and leaders of any kind generally aren't real people. They're figureheads. They're simulations of human beings occupied by a real human playing the character. And when the human behind the scene starts deviating from the character they're supposed to play in service of the institution or the meme it leads, they get replaced and a new pilot takes over running the simulation.
The problem with the implications of pop memetics is that it discounts agency.
Consider a community of mathematicians, all trying to discover some high-hanging proof (e.g. Goldbach). It's true that the mathematicians come up with strategies that then spread to the rest of the community if they're useful. But the mathematicians aren't mere vessels for these ideas. They decide which strategies to pursue, and whether the strategies they receive from others seem (to their mind) useful in pursuing the ultimate goal.
From a strictly formal perspective, memetics is true in that the mathematical community may behave like a peer-to-peer system using a gossip protocol. But I don't think anyone would say that the individuals aren't "in charge" but are "just hosts for proof strategies".
Granted, media personalities may not be that contemplative: a scientific community is kind of one extreme. But the example shows that it's not possible to determine whether the tail is wagging the dog or vice versa, just by appealing to the concept of memetics.
I had a discussion with a communist recently and we decided the only way to make communism work was by having an AI make decisions and not giving a human that power.
The "Beast" was Nero. Like that was actually his nickname. It adds up to 666 in gematria, which John of Patmos was using. The image of the beast was literally his image on Roman coins, which was the only legal currency, so they needed it to buy and sell. Also, branding of slaves was sometimes done on their hands.
This is the imagery that John of Patmos was pulling from, not visions of microchips, barcodes, vaccines, or AI. He was under Roman oppression, and all the imagery used in Revelation is directly relevant to its writer's own context; no need to editorialize and fear-monger.
I could have sworn you said “best”.
One of the images sci-fi authors hold of AI is an intelligence you can direct to be fair.
“Design a housing policy so it’s fair to the highest number of people”
Humans might claim we want fairness but our policies inevitably favor the people who make the policy. AI as an ostensibly objective party might be able to be more fair.
Of course we probably won’t like it because when presented with objectively fair policies we’ll wonder why we can’t have policies that unfairly favor our group anymore. For example a fair housing policy will probably favor people who don’t have a lot of power.
Think for a moment about who that is and you’ll prove my point.
https://en.m.wikipedia.org/wiki/Liquid_democracy