At least with a human president, there is the chance that they will grow up and shrug off the orders they are given. The power actually lies with the person that was elected, not the invisible people who paid to get them the job.
But how do they know Watson would find the expected value of these things positive? Maybe Watson would be a Republican. And this all leads down a huge rabbit hole of ethical and political philosophy and stuff.
Like, should Watson take into consideration his likelihood of being elected? In that case, much of his neural network should be dedicated to predicting voter outcome. And that seems pretty obviously problematic.
Now that I've gotten that out of the way, I'm going to respond to your counterpoint in character.
If Republican Watson ran against Democrat Watson, for the first time, we would have a debate purely about issues and policies, and not about personal character. As for the platforms themselves, you might view those political platforms as wrong, but millions of Americans agree with them. This is just the ultimate expression of democracy: a statesman whose views conform to those of the people.
>Every 8-bit CPU I know of can be programmed to look up values in a hash table and spit out the expected results, just like human politicians program their brains to do.
That's true. But if Watson is 1% better than an 8-bit CPU, then that justifies spending millions on the upgrade, since the role of President has such a large effect on how well the government runs.
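A quick back-of-the-envelope sketch of that claim (every number below is an illustrative assumption, not real data):

```python
# Back-of-envelope: a small relative improvement in decision quality,
# applied to a very large budget, can dwarf the cost of the upgrade.
# All figures here are illustrative assumptions.

federal_budget = 4_000_000_000_000  # roughly $4T in annual spending (assumed)
decision_share = 0.01               # fraction of spending sensitive to presidential decisions (assumed)
improvement = 0.01                  # Watson is "1% better" at those decisions (assumed)
upgrade_cost = 50_000_000           # hypothetical price of the upgrade

expected_gain = federal_budget * decision_share * improvement
print(expected_gain > upgrade_cost)  # True: under these assumptions the upgrade pays for itself
```

Of course, the conclusion is only as strong as the assumed numbers.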
Pragmatically speaking, I think we could capture the political spectrum in 4 to 5 parties: Watson-left-liberalist mode (for the Sanders crowd), Watson-authoritarian-right (for Trump supporters), etc.
Edit: Sanders correction based on comment below.
The problem is that Watson has been talked up as almost a strong AI, when it's actually a really good classifier, annotator and summarizer. While there's a great role for Watson-style systems in policy development and practice, they are only one in a battery of ML and analytic techniques, none of which can stand alone without a fully human point of view.
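To make "really good classifier" concrete: at its core, classification is just scoring text against labeled categories. A deliberately toy sketch (nothing like Watson's actual pipeline, which isn't described here):

```python
# Toy text classifier: score a document against hand-labeled keyword sets.
# This is nowhere near Watson's real NLP stack; the point is only that
# "classify and summarize" is pattern matching, not strong AI.

CATEGORIES = {
    "economy": {"tax", "budget", "market", "trade"},
    "healthcare": {"insurance", "hospital", "medicare", "doctor"},
}

def classify(text):
    words = set(text.lower().split())
    scores = {label: len(words & keywords) for label, keywords in CATEGORIES.items()}
    return max(scores, key=scores.get)

print(classify("the budget deficit and tax policy"))  # economy
```

A system like this annotates and routes text impressively at scale, but it has no point of view, which is the commenter's point.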
Easy enough. If Watson turned out to be a Republican they would start tweaking the parameters until it became a Democrat.
That’s very unlikely.
Any logical system based on the concept that human life is worth existing on its own (no matter what the person has contributed to society) automatically ends up with the necessary conclusion that things like subsidized healthcare are mandatory.
Obviously, one could give the program the basic assumption that human life is not worth anything, and it should instead focus purely on profit, and it might end up with a more republican ideology.
But giving an AI with access to nuclear weapons the assumption that human life isn’t the most important factor is... a bad idea.
It's easy to derive conclusions from moral axioms, but very difficult to do actual politics in a country full of voters, corporations, lobbyists, etc. Artificial intelligence is not a magic solution to that.
Fundamental Republicans believe capitalism and the free market are the system that "works best" to ensure a level playing field for everyone. True Republicans work to ensure a fair marketplace for everyone, both poor individuals and rich corporations, providing citizens the ability to better their condition and increase their freedom.
Fundamental Democrats believe that the freedom of citizens is constantly at risk from outside factors, and that the government is the best agent to maintain citizens' free will.
True Republicans believe that human nature is fundamentally good, and that the government increases equality by maintaining a free market, while Democrats suggest that human nature is often weak, requiring the government's intervention to protect society from itself.
In reality, very few Republican and Democrat politicians actually represent these values. Often, Republican views of the free market disproportionately benefit the rich, and Democrats attempt to redistribute wealth instead of fixing inequality at the source.
Although I consider myself independent, I personally would side with a republican interpretation of health care. Studies have shown that government programs in fields such as healthcare are inefficient when compared to their private counterparts, as a competitive market increases supply, and therefore decreases costs, as opposed to monopolies or government programs that provide a single source of service.
There is no need to choose – you can easily provide a minimum standard to everyone, and let the market handle everything above that.
Which is the concept of the social market economy in general: Everyone gets at least a specified minimum level of service, provided by society, paid by everyone (the social part) – and everything above that level is done with a fair and free market (the market part).
Edit: of course, in practice this would devolve into 50+% of the country in essence saying "Whatever I think the Bible says should be our axiom", completely ruining the discussion.
But then the AI may decide life is suffering, and end suffering by ending life...
Asimov’s Laws of Robotics end up with the previously mentioned axiom, though.
No, they really shouldn't. Asimov's Laws of Robotics were a plot device intended to be subverted by the robots in his stories; they were never meant to be taken seriously.
After all, neither of those experiments has ever gone seriously wrong.
And that abortion is illegal. :-)
Whether a given bunch of cells may be considered human is still very much an open question, but whether a cell or cluster of cells is alive should be pretty easy to agree on.
Actually, a strict utilitarian model would probably conclude that it's not worth aborting to save the mother's life if the baby is viable, since the baby would ultimately produce more value for society than the mother would were she to live out her life.
And that's the sort of reasoning that makes everyone hate utilitarian ethics.
Second, Utilitarianism roughly translates to "the greatest good for the greatest number." If a pregnant, utilitarian woman were granted the power of foresight, she would abort her child only if he/she were to provide a negative net utility to society. If the child were to provide a net benefit to humankind, she would not abort.
Third, your assumption that unborn (and born) children are a burden to society is correct, but that initial investment is small with regards to the average net "benefit" a grown human creates. It must also be noted that the vast majority of humans benefit humankind through their work (although some have greater impact than others).
No person can see the future, however, so most utilitarians would never abort their children, as the probability that their offspring benefit humanity as a whole is greater than the chance that they destroy value.
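The decision rule this part of the thread keeps circling is plain expected-value arithmetic. A minimal sketch, with purely illustrative probabilities and utilities:

```python
# Expected-utility decision rule: act only if expected net utility is positive.
# The probabilities and utility values below are illustrative placeholders,
# not claims about any real-world case.

def expected_utility(p_benefit, utility_if_benefit, utility_if_harm):
    return p_benefit * utility_if_benefit + (1 - p_benefit) * utility_if_harm

# A high probability of a modest benefit outweighs a small chance of harm:
ev = expected_utility(p_benefit=0.9, utility_if_benefit=100, utility_if_harm=-50)
print(ev > 0)  # True
```

The hard part, as the thread notes, is that nobody actually knows these probabilities in advance.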
Frankly, I doubt any woman considers the ethical implications of abortion when she undergoes one; she is primarily concerned with family, relationship, and personal problems. If one thinks beyond personal convenience and looks at the bigger picture, abortion is morally unjust from the standpoint of almost every popular ethical system.
Think hard about the choice your parents made by not aborting you. Do you think they made the right decision? Whether you're old, young, rich, poor, hopeful or hopeless, I imagine you'll answer yes.
Worth to whom? A human life has value, but not to everybody. Or at least not the same value for everybody. Keeping people alive at any cost, and imposing to people how much they should contribute for achieving that, is not something everyone can agree with.
Oh, unless the people are politicians, is that what you are saying? Of course, not everyone can agree with that, either. But it is already the norm, so this is just an extension to the rule. So the question is just how much they would agree to contribute.
Essentially, what if maximizing individual human liberty was the basis of the program, not a utilitarian notion of maximizing net-human life.
Caveat: even with human liberty as a basis for developing a political system, you could still end up rationalizing mandatory, subsidized health-care i.e. maximizing freedom entails a poor person shouldn't have to lose his liberty to poor health etc. This is my position, but I don't necessarily think it is the inevitable conclusion of attempting to maximize for human liberty.
If you start solely from that premise: "that human life is worth existing on its own (no matter what the person has contributed to society)" you're very unlikely even to reach taxation (a forced contribution to society) let alone forced subsidies or making anything mandatory. Most of government is based in the notion that someone's only value is in what they contribute to society -- from "tax-dodgers" to "benefit leeches" the vernacular is all about the amount of cash that gets paid into the social coffers.
Much as I appreciate subsidised healthcare, it's not an "automatic conclusion". It is a negotiated compromise, and largely based on nationalism, not the value of the individual (eg, the NHS came into existence post-war, as part of the national rebuilding. Its beginnings very much relied on the war effort and large-scale conscription having devalued individual freedom amongst the public).
It's become more popular since then because it turns out to work pretty well as a system. Not so much from pure logic as because healthcare gets more expensive over time (effectively, healthcare can exert a rent on people's lives), and social control of healthcare is a way of putting a cap on its costs at the expense of those in healthcare who could charge much more (eg, watch the NHS junior doctors complaining about the contract changes).
Not just facts - the designers of the AI also choose the underlying assumption and models. Even the very idea of using an AI implies a certain set of biases and intentions.
Characteristics common enough across humans that they can safely be extracted as a factor are rare. Most of the time we try to compromise so we can call our differences "close enough". The process of finding and/or creating those compromises is what we call "politics", and while computers can certainly help as a tool, the process needs human involvement by definition.
Attempts to turn over any kind of political or social decision-making to an algorithm are simply a way to disguise the concentration of power. The algorithm's designers ultimately end up with the power, while others are denied it.
The racist tactic known as "redlining" is a pre-AI example. Black people aren't denied housing directly; they simply "don't qualify" for a loan, with the real reasons obscured behind a proprietary "credit worthiness" equation.
Instead of using AI as a decision-maker, a place where AI (and other technology) might actually be useful is as the facilitator and/or part of the "panel of experts" used in Delphi methods. While people tend to jump on bandwagons and make stupid decisions when unorganized, we have a lot of examples where a general crowd of people makes very good decisions when focused on a specific goal and given enough structure to allow for iterative refining of ideas.
A few of his comments that are relevant to the use of technology with politics and society:
>We have these extraordinarily limiting constraints from a past in which we did not have the tools to have anything other than extraordinarily limiting constraints. But, now we do have the tools, and the tools are running away with us faster than the social institutions can keep up.
>I think countries ought to set up Departments of the Future. [...] We are on the edge of having the technology to be able to say, let us run a constant, dynamic, updated review of everything that science and technology is thinking about [...] then let us use the same techniques to ask the public in general, not politicians, whether they like that idea, whether they feel that they could live with that idea. And then, like a Delphi technique, re-run it until everybody stops changing their mind.
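The "re-run it until everybody stops changing their mind" step is essentially iteration to a fixed point. A toy Delphi-style simulation (the update rule, in which each panelist moves part-way toward the group median, is an assumption for illustration; real Delphi panels exchange written rationales, not just numbers):

```python
import statistics

# Toy Delphi loop: panelists see the group median each round and move
# part-way toward it; iteration stops when nobody changes their mind.
# The "move 50% toward the median" rule is an illustrative assumption.

def delphi(opinions, pull=0.5, tolerance=0.01, max_rounds=100):
    for round_no in range(max_rounds):
        median = statistics.median(opinions)
        updated = [o + pull * (median - o) for o in opinions]
        if all(abs(u - o) < tolerance for u, o in zip(updated, opinions)):
            return updated, round_no
        opinions = updated
    return opinions, max_rounds

final, rounds = delphi([1.0, 4.0, 7.0, 10.0])
print(max(final) - min(final) < 0.1)  # True: opinions have converged
```

Whether real voters converge like compliant panelists is, of course, the open question.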
>Collate all [research laboratories and business R&D] together and process them using stuff like big data to see what the pattern looks like becoming, and then layering on top of that social media analytics to say, if this was coming, would you like it, and if not, why not? In other words, to have a sort of 24 hour a day referendum.
>... it’s no longer important to teach people to be chemists or physicists or anything ‘ists because those jobs are gone, and if they’re not gone today they’re gone tomorrow. And unless we know the old tools of critical thinking and logic and such, we will not be able to handle what follows. So, we’re wasting our time training people to be things that will no longer exist in 10, 15, 20 years time.
>Every single value structure is meaningless [...] commercial society will be destroyed at a stroke. The trouble is the transition period [...] how we get from here to there. The vested interests, I mean, we’re going to have to shoot every one of them – nobody, nobody is going to give way to this. [...] All cultural values relate to scarcity, ultimately.
It is absolutely normal to have an establishment and an elite that influence the government's and the president's decisions.
Only in the most dictatorial and absolutist types of government would you see a lone person at the top deciding on issues without consulting anyone.
But I'm not sure that such a thing has ever existed. Even maniacs with a cult following behind them like Hitler had to balance the interests of different factions within the system.
What we have to get rid of is certain categories of influence that have a negative impact on the majority (bribery, for example).
My main point though is that we tend to place too much emphasis on how much new tech can help solve age-old, human nature driven problems.
The game's backstory is that the Cold War turned hot, nukes were launched, and one (or more) underground cities were built in the US, administered by a paranoid computer that hates and fears communists.
My current Paranoia character works in "TechServices" and sometimes makes known his opinion (something that is actually illegal to do) that "The Friend Computer" isn't the real ruler; its designers and programmers are.
It is fun to see the implications in the game, especially as the mindset of the players affects their characters and behaviour: some people are loyal to the computer, some consider the computer the enemy, and some consider the computer only a tool and are loyal (or hostile) to the "High Programmers" who have access to the computer's administration code.
The end result is a rather byzantine situation where everyone (including The Computer) is plotting against everyone else, all being sufficiently paranoid.
Granted, it may not work out so rosily in real life- the reasoning log would likely be liberally redacted due to factors classified from public knowledge (just as some of Obama's more puzzling stances may, charitably, be explained by things he's not allowed to tell us)- but that's its own problem.
Having AI's as advisers, that would be totally different. In the end we can still hold responsible the people that listened to that advice as it's their responsibility to check if the advice from their AI advisers is real.
That doesn't change the AI, though - since it still is physical hardware in some place, even if you could verify the software you can still exploit the hardware.
This has so much potential for a Philip-K-Dick-ian rabbit hole of paranoia and insanity that it made me chuckle. I think you might be hand-waving away a lot of complexity here. :)
Insert a layer of expert representatives (akin to congresspersons) to generate the PRs?
Making the system resilient doesn't sound like anything more than another engineering problem to me.
I interpreted @mtgx's statement as referring to the information that would be input into the AI. Namely, if presidents are already getting bad advice from advisors, replacing the president with an AI might not improve things if the same bad advice is still being fed in.
This was literally the best usage of AI as comedy ever made.
"Anyone smart enough to be a good president wouldn't want to be president."
This, because of the pressure and stress involved...
Watson, with human advisers, should be relatively stress-proof...a possible plus...
I guess then the joke would evolve to: "Anyone smart enough to be a good adviser wouldn't want to be."
... unless there's leverage against the elected, something that would subjectively seem to compromise integrity more than the corruption we've slowly gotten used to.
No other area of human activity is so unscientific and so replete with falsehoods and lies as politics. We don't tolerate physicists or doctors lying, but lying in politics is a norm. Add to this a mix of dogmas, ideologies (often based on backward religious ideas), wishful thinking, and yes, men's testosterone power plays, and you have a recipe for non-progress, wars, subjugations, geopolitical games, etc etc etc. Even ideas that look like a noble cause often backfire and result in death and destruction.
Scientifically-based politics should be the norm in the 21st century.
This statement is already a normative one. I don't even know where to start in terms of breaking down this incredibly feeble argument but one place to start would be the fact that a move to "Scientifically-based politics" would be a political move in and of itself. Then you have to consider that you're asking people to get rid of ideologies, "backward religious ideas" and replace them with "science." Science done by whom? A bunch of California tech companies? I wonder what that world would look like.
This comment reads like something from /r/juststemthings or /r/justneckbeardthings and I honestly can't even tell if this is a joke or not.
Merkel’s government, despite being criticized for never having an opinion of its own, did this quite well and handled most things competently.
Technocracy was also a theme of many communists. It has also reappeared in the form of the Futurist Party and some other small movements like the Venus Project.
For an interesting review of these ideas, seriously read this: http://slatestarcodex.com/2014/09/24/book-review-red-plenty/
>This book was the first time that I, as a person who considers himself rationally/technically minded, realized that I was super attracted to Communism.
>Here were people who had a clear view of the problems of human civilization – all the greed, all the waste, all the zero-sum games. Who had the entire population united around a vision of a better future, whose backers could direct the entire state to better serve the goal. All they needed was to solve the engineering challenges, to solve the equations, and there they were, at the golden future. And they were smart enough to be worthy of the problem – Glushkov invented cybernetics, Kantorovich won a Nobel Prize in Economics.
Project Cybersyn was a really cool idea that tried to actually implement these ideas in the real world just as computers were becoming advanced enough to do these things. But unfortunately it didn't last very long:
If my company violates ITAR rules when we do aerospace work, we can go to jail. A Secretary of State handles classified email without regard for security and on servers she controls, and she could be the next President. A President lies and manipulates facts and suffers no consequences. A Senator makes up shit, lies, cheats, and makes promises he will never fulfill, and is never held accountable. A Mayor changes a vote and nothing happens to him. A government organization spends lavishly and goes so far as to use its might to punish people who do not align with its politics, and nobody is fired or goes to jail. Politicians launch us into bullshit wars and are not held accountable for any of it. They give money lavishly to other countries (and brutal dictators) while our own kids suffer and schools have to lay off teachers.
No, the elephant in the room isn't the lack of science (although some would be good), it's the lack of consequences. It's the lack of honesty. It's the lack of accountability and restraint. It's that and more. And it won't change until people wake up and demand it, which is unlikely until things become FUBAR.
The system is utterly broken and needs to be adjusted if it will be effective in the next 200 years.
2. Analysing religions seems to be within the domain of sociology.
3. Determining the truth of different religions is kind of tricky, because a lot of them are about the afterlife, and unless there is a way to return we can't get any information about it. But if the local priest claims that "anyone who desecrates the temple will immediately be smitten by lightning", defecating on the altar seems to be a surefire way to find out. And of course, if a valkyrie hands you a mug of mead after your death, you know you shouldn't have prayed to Cthulhu after all.
Scientific beliefs are objectively testable and falsifiable. Religious beliefs are neither (though they are often subjectively testable). This doesn't mean religion is bad, but it means that it can't be understood in scientific terms.
>Back in the old days, there was no concept of religion being a separate magisterium. The Old Testament is a stream-of-consciousness culture dump: history, law, moral parables, and yes, models of how the universe works. In not one single passage of the Old Testament will you find anyone talking about a transcendent wonder at the complexity of the universe. But you will find plenty of scientific claims, like the universe being created in six days (which is a metaphor for the Big Bang), or rabbits chewing their cud. (Which is a metaphor for...)
>Back in the old days, saying the local religion "could not be proven" would have gotten you burned at the stake. One of the core beliefs of Orthodox Judaism is that God appeared at Mount Sinai and said in a thundering voice, "Yeah, it's all true." ... The vast majority of religions in human history - excepting only those invented extremely recently - tell stories of events that would constitute completely unmistakable evidence if they'd actually happened. The orthogonality of religion and factual questions is a recent and strictly Western concept. The people who wrote the original scriptures didn't even know the difference.
It's no different than what happens when religion gets mixed up in politics. The separation of church and state protects the church as much or more than it does the state. Politics and power corrupt whatever they touch.
There are ways to improve the impact of science (and religion and education, etc) on the political world, but mixing them together too much will just poison the positive traits, rather than lifting up politics.
We have a democratic republic in the United States so popular / populist majorities can't inflict their absolute will on unpopular / unpopulist people / regions.
Populists today, at least in the US, aren't as dangerous as they are / were in countries where changing the constitution is much easier for a powerful / popular leader.
What if the computer decides there is no way to feed the population at the current (or a future) growth rate and implements a mandatory two-children-only policy?
A computer wouldn't understand tact, politics, or empathy; and even if it did, its solution wouldn't be much better than what we get now.
Purge everyone who doesn't agree with the purge.
150 milliseconds later, law-enforcement drones and robots start executing.
Not really, judging from the size of aisles filled with homeopathic remedies. Some people practically have to be dragged away, kicking and screaming, by government agencies so that they stop harming themselves.
It's always people. Stupid people ruining beautiful ideals. (And that includes me, of course.) Until we can fix people, it's futile to wish for "scientific" politics.
I'm kinda optimist, so I believe better education will improve the situation, but who knows.
More realistic is to get politics out of as many areas of life as possible.
There is a big problem: how do you do that? I thought about this recently on another forum. Every scientific statement has the form of an implication: if you do A, you will get B. So there is no starting point. You might say you start from some axiom system, but then you have to agree on that. Even in classical propositional logic, even for the same resulting formal system, there is a large (infinite?) variety of axiomatic systems that can lead to it.
So, what I would like to see, would be - equip everybody with Watson and let them vote, direct democracy style.
More than anything, this ad makes me think of how long of a way we have to go before this type of AI will be useful to humans, much less able to run a country. Why doesn't President Obama consult Watson before making decisions? Because every decision would require a massive data collection and processing project, training and tuning of delicate models and rigorous testing. And the net result would be what? Processing of factual information that any human could get by reading a brief prepared by an aide?
We're worried about AI representing an existential threat or creating widespread unemployment, but right now the best and brightest computer in the world can't provide any more practical political value than Monica Lewinsky. This is a good ad campaign, timely and provocative. But it also highlights how the path ahead is as long and arduous as a trip to Mordor. glhf, IBM!
From the policy platform this could be Bernie's VP running mate.
- Single-payer national health care.
- Free university level education.
- Ending homelessness.
- Legalizing and regulating personal recreational drug use.
- Shift bulk of electrical generation to solar, wind, hydroelectric, and wave farm.
- Review/Repair/Replace/Remove highways, bridges, dams.
- Upgrade and subsidize metropolitan public transit solutions for the next century.
- Build and subsidize metropolitan high-speed communication networks.
- Ensure a minimum-wage that meets a reasonable cost of living.
- Ensure fair and safe working conditions.
- Ensure global environmental commons protections.
It's just a positive platform, that doesn't address divisive issues (abortion, immigration, etc...)
This doesn't inspire my confidence in their system's ability to run the nation.
- No idea, the president's database server is down...
- And where are the autonomous tanks flying to?
- Access denied...
- Oh crap, not again..
I assume they're relying on non-ML people who make the money decisions to get caught up in the hype and force it on their organizations, but in my (very limited) experience that hasn't worked out. The people in charge at the organizations I've worked at have been just as skeptical of the hype.
Except this isn't anything to do with Watson-the-Jeopardy-winner. It's a set of web services which are useful for doing analysis on various things.
The question answering service was withdrawn last year: https://developer.ibm.com/watson/blog/2015/11/11/watson-ques...
We already have someone running for president with this agenda.
Are you honestly and seriously suggesting that the US tax system is absolutely at the limit and there's no room to pay for any of these policies? For example, we shouldn't have any more tax brackets over the $400,000 one? Have you looked at the proposed tax brackets that actually pay for these plans? How do you explain the idea that 3 new tax brackets ($500k-2m 43%, $2m-10m 48%, and $10m+ 52%) would destroy the economy? How exactly do people who make $10m+ single-handedly sustain our economy, why would a 10% tax increase on their over-$10m income end that, and why do the thousands of dollars of reduced overall expenses (including taxes) for almost everyone else not matter at all in your math?
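For what it's worth, the brackets quoted in the comment compute as ordinary marginal rates: each rate applies only to the dollars inside its band. A sketch covering just the three proposed top brackets (the treatment of income below $500k is simplified away here):

```python
# Marginal tax on income above $500k using the three proposed top brackets
# quoted in the comment: 43% on $500k-2M, 48% on $2M-10M, 52% above $10M.
# Income below $500k is ignored (an illustrative simplification).

BRACKETS = [  # (lower bound, upper bound, marginal rate)
    (500_000, 2_000_000, 0.43),
    (2_000_000, 10_000_000, 0.48),
    (10_000_000, float("inf"), 0.52),
]

def top_bracket_tax(income):
    tax = 0.0
    for lo, hi, rate in BRACKETS:
        if income > lo:
            tax += (min(income, hi) - lo) * rate
    return tax

# A $10M+ earner pays 52% only on the dollars above $10M, not on everything:
print(top_bracket_tax(11_000_000) - top_bracket_tax(10_000_000))  # roughly 520000
```

This is the standard rebuttal to "a 52% bracket would destroy the economy": the marginal structure means nobody's whole income is taxed at the top rate.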
I believe not: Watson is a solution to the Big Database problem of finding relevant information in a massive corpus, but I am not convinced that that is what enables human intelligence. I prefer the view that intelligence is necessarily context-specific (there is no objective "intelligent" act or being) and is enacted between an environment, body, and culture, rather than being the processing of patterns of simulation in the brain.
However, I can't shake the feeling that Watson is just telling us what we want to hear.
Time and time again, Hollywood (and Dr. Stephen Hawking) warns us that once given enough power and control, the end-game policy of an AI with ambitions of omnipotence is Kill All Humans.
But it reads awfully seriously. I don't see any of the telltale signs of satire.
So I think a better explanation is this is a clever marketing stunt for IBM.
What if Watson says vegetables should be banned and smoking should be mandatory? What if he's right?
I think this is one of the most important statements on the entire page.
All in all, this is a terrible idea for all kinds of reasons that have no connection to the false idea that Watson is intelligent. ;)
"If you are interested in the intersection between technology and politics we invite you to donate to the Electronic Frontier Foundation ( https://supporters.eff.org/donate ). For 25 years the EFF has been a champion for civil liberties, privacy, and education on politics around emerging technologies. With your support they will continue to aid in technological progression with humanity in mind."
//Uncomment for production
//#define kill_all_humans 0
Watson, the Watson logo, Power7, DeepQA, and the IBM logo are
copyright IBM. The Watson 2016 Foundation has no affiliation
with IBM. The views and opinions expressed here in no way
represent the views, positions or opinions - expressed or
implied - by IBM or anyone else.
Registrant Name: Aaron Siegel
Registrant City: Los Angeles
Registrant State/Province: California
One Corporation, under God!
Since Google\FB\Twitter\Amazon etc. are all being run algorithmically, there's no reason to indefinitely keep paying the execs 300x what the engineers make. Let's just pay Watson. Watson as a VC sounds much more interesting.
If you are not just being cynical---which I wouldn't blame---I would revisit this thinking: executive orders and vetoes alone seem like reason enough to have the office. Even if you don't agree with all orders or vetoes, this kind of stuff certainly seems like more than entertainment to me.
> Watson as a VC sounds much more interesting.
That actually seems like a really cool idea. There is a developer API; someone should start one!
Even if an AI president were in place, providing policy direction to maximize some agreeable set of end goals, the humans around the president would not understand the nuances, nor be able or willing to implement the policy, given their own agendas. I feel that system is bound to fail.
Now, if the AI president were able to do behind-closed-doors deals with politicians and special interest groups, some of the policies might actually get implemented. But the end goals could shift pretty radically in that scenario.
I am having a hard time visualizing a scenario where an AI president would be effective or useful.
Maybe a human president could use an AI advisor to get data-driven arguments to support his/her policies.
now that would be some serious dogfooding!
I don't think we can trust an AI just yet. For example, I've had arguments with people who wanted me to feed motivation (cover) letters from job applicants into Watson to determine "cultural fit" (I'm in the tech recruitment business atm). IMO these technologies are way over-hyped for now, and we are walking down a very dangerous path, because marketing pushes in this direction and the technology is far from ready.
To prove my point, I tried feeding Joseph Mengele, Stalin, and Bin Laden writings into Watson to see how it evaluates the data. As expected, Watson had some "great things" to say about these characters.
Another feeling I get is that reading info about ourselves in this context is like reading a horoscope. People read two things that are true (but vague), and the third thing may not be true, but they shrug it off as "oh, I didn't know this about myself yet ... I'll have to monitor myself in future to see if this is right". We are prone to being "open" to such statements as long as they sound like a positive trait. But is it true? In that sense, machine learning might fool us into thinking we have removed bias (but we cannot remove bias like this).
I honestly think this technology should come with a warning label, because people who have no idea how the data is being prepared or analysed will interpret the output verbatim and take it at face value.
Here is the link http://blog.valbonne-consulting.com/2015/06/13/using-big-dat...
Every politician seems incompetent because there is no way any human being can gather and analyse the wants and needs of every single constituent and form a strategy that benefits as many people as possible. There's just not enough time in the day or brainpower available, no matter how you divide the workload.
AI can solve this. Maybe not as a candidate but at least as a raw information parser.
If someone gathered and open sourced the information on what everyone wanted in relation to some policy we could even have competing AIs that parse it in different ways to figure out the best way to tackle a problem.
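One simple way such competing parsers could aggregate an open preference dataset is positional voting, e.g. a Borda count. A minimal sketch with made-up ballots (the policy names are purely illustrative):

```python
from collections import defaultdict

# Minimal Borda count over ranked policy preferences: each voter ranks
# n options, and an option earns (n - position - 1) points per ballot.
# The ballots below are made-up illustrative data.

def borda(ballots):
    scores = defaultdict(int)
    for ranking in ballots:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - position - 1
    return max(scores, key=scores.get)

ballots = [
    ["transit", "healthcare", "tax-reform"],
    ["healthcare", "transit", "tax-reform"],
    ["healthcare", "tax-reform", "transit"],
]
print(borda(ballots))  # healthcare
```

Different aggregation rules (Borda, Condorcet, approval) can pick different winners from the same data, which is exactly why competing parsers would disagree.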
But we don't want electronic voting, so electronic politicians would be a very bad idea.
For anyone interested, here is a recent presentation I gave on Watson and a summary of what it can do.
IBM Watson: Building a Cognitive App with Concept Insights
I cannot imagine how it can possibly get any worse.
Your imagination is severely lacking then.
The executive branch should do the same thing, but the legislature should try it first. That's what I was saying, just the order of operations.