Short answer: Yes.
What is the evidence for this claim, upon which the premise of this article rests? The existence of algorithms that simulate human game play today is hardly it.
The authors have fallen into the trap of accepting the nomenclature of "artificial intelligence" without further questions. There is nothing "artificial" nor "intelligent" about it.
Rather, machine-learning algorithms are trained on a diet of human-derived data that simply reflects existing human biases. The danger lies in human programmers who fail to examine those biases, not in the algorithms themselves. Thus I personally am much more fearful of human-made decisions than non-human ones.
Don't hate the player, hate the game.
>We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
This survey is a few years old. Discussion and knowledge of AI risk have increased considerably since then, and AI itself has made remarkable progress in that time, especially in just the last 5 years. Who knows where it will be in 40 years. Look where computer technology was 40 years ago.
Superintelligence goes well beyond that. A machine with cognitive abilities far beyond humans. I don't think such a machine is very unlikely even in the near future. It's unreasonable to believe that humans are the pinnacle of intelligence. We are just the very first intelligent creature to evolve. Our brains are heavily resource constrained by size and energy. And neurons are many orders of magnitude slower than transistors. And also far larger and less compact.
The tasks humans are hired to perform by employers, and the processes which can be used to break democracy by exploiting big data, are far more constrained than the scope of cognition required to pass the Turing test.
>>The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. ...We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of. This is a logical result of the way in which our democratic society is organized. Vast numbers of human beings must cooperate in this manner if they are to live together as a smoothly functioning society. ...In almost every act of our daily lives, whether in the sphere of politics or business, in our social conduct or our ethical thinking, we are dominated by the relatively small number of persons...who understand the mental processes and social patterns of the masses. It is they who pull the wires which control the public mind.<<
The current fit being thrown by intellectuals and the media is due to their having lost this power to control the narrative and ideas, because they failed to adapt to the new media that have replaced the old.
Dumb evolution was able to create human-level intelligence with just random mutations and natural selection. Surely human engineers can do better. But in the worst case, we could reverse engineer the human brain.
Whether a true 'generalist' AI is possible in the foreseeable future is debatable.
> they can do AI research as good or better than human researchers.
Now your AI is not just a 'generalist' but rather a specialist in AI. A very big leap of faith has occurred here. This also presumes that the AIs are even capable of effective invention and improvisation rather than mimicry and optimization (the only two features we have seen from the very best of cutting-edge ML work so far).
> And they can make AIs that are even better, which in turn can make even better AIs, and so on.
All of which is predicated on the limiting factor being software and not hardware. If it is the latter then these postulated AIs are hitting the same brick wall as humans and thinking about the problem faster or harder does not magically make the necessary hardware appear.
Sure, but see the survey I posted above. The rate of progress of AI is incredible. We will almost certainly be approaching human level in a few decades at most.
>Now your AI is not just a 'generalist' but rather a specialist in AI.
That's what general intelligence is. The ability to learn different specializations. AI researchers are not literally born as AI researchers and capable of nothing else.
> This also presumes that the AIs are even capable of effective invention and improvisation rather than mimicry and optimization
Why wouldn't they be, if they are generally intelligent and can do all the same tasks humans can? What's magical about invention that would prevent computers from ever doing it?
>All of which is predicated on the limiting factor being software and not hardware.
All the same arguments apply to hardware. Hardware has been improving exponentially for a much longer time than AI has. And I think hardware may already be close enough. Transistors are orders of magnitude faster and denser than biological synapses.
The rate of progress in AI is actually not "incredible"; it is in line with the advances in hardware, which have made old research suddenly applicable to a wider range of problems. To someone on the outside looking in, it may appear as though magic is happening, but the field is progressing only marginally faster than it has over the previous few decades. What has changed significantly, and what produces all of these "incredible" results you see, is the larger data sets available and the improved hardware on which to run massively parallel but weakly connected computations.
As for why AI can't invent or improvise I am simply suggesting that so far we have only seen optimization and in fact invention and similar feats may actually require more work than we realize. A statistical simulation (based upon a huge corpus) of how a human would respond to various situations is NOT general intelligence, and so far you are just making hand-waving assumptions that paper over a large number of hard problems that no one has a clue how to solve.
A true Turing-Test-passing machine still seems to be more than a quantum leap away.
In any case, I think there has been tons of progress since the 80s. Taking the best algorithms from the 80s and running them on modern hardware would still fail. The core ideas of backpropagation and gradient descent were there, but not much else.
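For readers unfamiliar with the terms: gradient descent is just iteratively nudging parameters against the error gradient, and it is the engine underneath backpropagation. A minimal sketch in Python (a one-parameter least-squares fit, not a real network; all names here are my own invention for illustration):

```python
def grad_descent(xs, ys, lr=0.01, steps=1000):
    """Fit y = w * x by minimizing squared error with gradient descent."""
    w = 0.0
    for _ in range(steps):
        # gradient of sum((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad  # step against the gradient
    return w

# Data generated from y = 3x: the fitted weight should approach 3.
xs = [1.0, 2.0, 3.0]
ys = [3.0, 6.0, 9.0]
w = grad_descent(xs, ys)
```

The 80s insight was that this same update can be pushed backwards through many layers of a network (backpropagation); what was missing was the data and hardware to make it pay off.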
All I'm trying to say is, let's please keep a cool head and a wholly skeptical approach about all this.
Note that no one is claiming that Skynet will appear any day now, just that we will very likely have human-level AI within a few decades. I don't think that's a terribly wild or speculative claim.
The Commodore PET, Apple II, and TRS-80 all came out exactly 40 years ago (1977)
By the way, there is a list of who does and who does not think there is a serious risk for society (of any type) within the next 50 years: https://goo.gl/Oe7S6e
The earliest that machines will be able to simulate learning and every other aspect of human intelligence:

  Within 10 years            6    5%
  Between 11 and 25 years    3    2%
  Between 26 and 50 years   14   11%
  More than 50 years        50   41%
  Never                     50   41%
  Total                    123  100%
The divergence in poll results seems to suggest we should put even less credence on polls here than usual.
Therefore, we need to be proactive about policies that would protect against unfettered influence by those controlling the technology.
You've said the equivalent of "guns don't kill people. People kill people".
I don't disagree about the need for policies; I just think those policies need to be directed toward humans as the weak link in the chain, not machines. This is not like gun control where a single person with a weapon can do a lot of unchecked damage, and thus access to guns needs to be controlled. Very few have access/can do damage with an algorithm.
This is definitely the point to make a "did you read the full article" check.
My reading of it was that people were clearly seriously concerned about the current state of the art. The article also considered future possibilities.
Someone (can't remember who) said it better: "You don't want to be an edge case in this brave new world."
"No one knows, not even the programmers who wrote it! Deep learning is a black box!"
Maybe we shouldn't call supervised learning techniques "Artificial Intelligence", but instead "Memoized Intelligence"?
Then again, it's probably a spectrum from memoized intelligence to "true" intelligence.
I guess the difference is that supervised learning systems can't yet make an autonomous effort to acquire new knowledge from others.
What I mean to say is, progress is incredibly, impossibly blazingly fast. 2060 is extreeeemely far away.
In fact the idea goes back even further to the 1940s, when the first papers were published about how NNs could possibly work.
Edit: found this page that has the history: http://www.scholarpedia.org/article/Deep_Learning
The AI that plays Go is now better than all Go Players and it trains by playing itself. It was bootstrapped by analyzing human games, but now it's past that. Theoretically reinforcement learning AIs could have access to deeper levels of understanding than just by copying humans because they can build upon experience. These are limited domains for now, but I think the first startling AI will come from chatbots that can come up with novel arguments. Watch the chatbot space. When someone comes up with a really good chatbot that can come up with creative responses, that will really be an AI turning point. All the deep learning ones I've seen so far are like talking to someone with moderate dementia. They forget things from earlier in the conversation and go into conversational loops.
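To make the self-play idea concrete, here is a toy sketch of my own (it has nothing to do with AlphaGo's actual architecture): a tabular agent that learns a tiny Nim-like game purely by playing against itself, with no human games at all. The game, update rule, and all names are assumptions for illustration.

```python
import random

# Nim-like game: players alternately take 1-3 stones from a pile;
# whoever takes the last stone wins. The agent learns by self-play:
# both sides consult (and update) the same value table.

def train(pile_size=10, episodes=30000, eps=0.1, lr=0.5):
    q = {}  # (stones_left, move) -> estimated value for the player moving
    random.seed(0)
    for _ in range(episodes):
        stones, history = pile_size, []
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if random.random() < eps:          # explore occasionally
                move = random.choice(moves)
            else:                              # otherwise play greedily
                move = max(moves, key=lambda m: q.get((stones, m), 0.0))
            history.append((stones, move))
            stones -= move
        # The player who made the last move won; walk the game backwards,
        # flipping the reward sign between the two alternating players.
        reward = 1.0
        for state_move in reversed(history):
            old = q.get(state_move, 0.0)
            q[state_move] = old + lr * (reward - old)
            reward = -reward
    return q

q = train()
# Optimal play from 10 stones is to take 2, leaving a multiple of 4.
best = max((1, 2, 3), key=lambda m: q.get((10, m), 0.0))
```

This is Monte Carlo self-play rather than AlphaGo's deep reinforcement learning, but the bootstrap-from-own-experience principle is the same: the agent's opponent improves as the agent does, so it can surpass any fixed corpus of example games.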
"The algorithms that power Libratus aren’t specific to poker, which means the system could have a variety of applications outside of recreational games, from negotiating business deals to setting military or cybersecurity strategy and planning medical treatment – anywhere where humans are required to do strategic reasoning with imperfect information.
“Poker is the least of our concerns here,” said Roman V Yampolskiy, a professor of computer science at the University of Louisville. “You have a machine that can kick your ass in business and military applications. I’m worried about how humanity as a whole will deal with that.”
This one point simply cannot be stressed enough.
Amazon and Netflix get score voting for their products, yet we can't even get measly, pathetic IRV?
Imagine if Amazon had to use IRV or FPTP to determine "the best products"... It'd go out of business pretty quick!
But given the potentially higher cost of implementation, I would be very happy with approval voting (score of 0 or 1), which would be compatible with most existing ballots. "You may vote for zero or more of the following candidates."
Approval voting, as with score, breaks the stranglehold of two-party rule, allows for rather than discourages policy overlap among parties (breaking down polarization), and increases voter satisfaction.
Range-3 seems like the minimum desirable range. It would allow voters to indicate whether they disapprove of, are neutral toward, or approve of a candidate. The only historical adoption of range-3 I am aware of is the electoral council of the Republic of Venice, and I believe they used it as a constant component of their process for selecting the Doge, without ever abandoning it.
The costs of switching to something even better like range-5 may also be over-estimated. Range-5 seems like it would work with existing optical-scan ballots. The cost of acquiring range-voting tabulation code at the state level should also be fairly low, given how much simpler the preference-aggregation method is in range voting than in IRV, and Maine is already undertaking a switch to the latter.
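As a concrete illustration of that simplicity gap, here is a toy tally of both methods in Python (candidates and ballots invented for the example): score/range voting is a single summation pass, while IRV needs iterated elimination rounds.

```python
# Score voting: sum each candidate's scores in one pass; highest total wins.
def score_winner(ballots):
    totals = {}
    for ballot in ballots:                 # ballot: {candidate: score}
        for cand, score in ballot.items():
            totals[cand] = totals.get(cand, 0) + score
    return max(totals, key=totals.get)

# IRV: count first choices among remaining candidates, eliminate the
# weakest, and repeat until someone holds a majority.
def irv_winner(ballots):
    remaining = sorted({c for b in ballots for c in b})
    while True:
        firsts = {c: 0 for c in remaining}
        for ballot in ballots:             # ballot: ranked list of candidates
            for cand in ballot:
                if cand in remaining:
                    firsts[cand] += 1
                    break
        leader = max(firsts, key=firsts.get)
        if firsts[leader] * 2 > len(ballots) or len(remaining) == 1:
            return leader
        remaining.remove(min(firsts, key=firsts.get))

score_ballots = [{"A": 5, "B": 3, "C": 0},
                 {"A": 0, "B": 4, "C": 5},
                 {"A": 2, "B": 5, "C": 1}]
ranked_ballots = [["A", "B", "C"], ["C", "B", "A"], ["B", "A", "C"]]
```

Note the difference in shape: score tallies can be summed per precinct and added up centrally, while IRV's rounds need the full ballot data in one place, which is part of why its real-world tabulation costs run higher.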
Using more ranges is silly. There's no reason to ever give a candidate less than maximum vote, if you want to give them a chance at winning. If you don't, then there's no reason to give them anything above 0.
There is no reason to give your favorite candidate less than max score, and no reason to give your least favorite candidate more than 0 score, but the middle does serve a purpose. Scoring other candidates in the middle is to boost their chances of winning against your less-favored candidates at the cost of boosting their chances of winning against your more-favored candidates.
There are some variants, such as Range2Runoff, which beats Range often in simulations. I hear that it is much better than pure Range when there are many strategic voters.
And that's when people have no incentive to lie...
Here's a hypothetical example involving 3 different shampoos. All 3 work. One smells bad. One smells decent. One smells great.
If I were to review these on Amazon, I would likely give them 3, 4, 5 stars respectively. All 3 of them worked, but I have a preference. I could imagine a theoretically worse shampoo that didn't work, or even worse, made all my hair fall out. Such a shampoo would be worth a rating of 0 stars. I don't give my least favorite shampoo I actually used 0 stars because I can imagine that in the future I may come across something worse, and it doesn't seem correct to put a functional product on the same level as a harmful one.
If I were to vote (using a score system) for which one of these 3 shampoos I would like my workplace to stock, then it is much simpler. 0, 2, 5 stars for each option, respectively. I don't have to worry about a hypothetical worse 4th option in the future skewing my results now. If a future election is held with that worse, 4th option, then I can give it a 0 and adjust my previous 0 and 2 star ratings accordingly.
However, approval is something that could be installed with relatively low cost and low effort. And it is nearly immediately understood by anyone. Score voting requires a tiny bit more education (although it too is easy to understand; and I suspect most voters would get the hang of it after a couple elections).
I would be disappointed with IRV/ranked because it seems the most difficult to understand, requires rework of ballots, and disallows equal scoring (thereby reducing expressiveness). It would be a shame to squander voting reform momentum to end up with IRV.
I feel an incredible sense of urgency about this, but am not sure how to go about acting on it. It's hard to convince people that this is part of the root of the problem. Most folks I talk to see the voting system as some kind of technicality, and don't understand that it's actually on a foundational level.
It seems like range/score voting could use a big marketing and education overhaul. Fairvote is not helping the situation.
It's also so simple. It's incredible we don't have this already.
I'm not sure if this was a rhetorical question...but the answer is that districts are drawn by the ruling party in order to assure its continued dominance. The incentives for these legislators are totally different from Amazon's.
A very interesting example happened in Georgia in the 1990s. The Republicans and the Black Caucus (who are mostly Democrats) worked together to pack black voters into a few concentrated districts. This resulted in both more black Democratic legislators and more Republican legislators.
 says there is already software to do this, and  gives a terrible example of a district in Pennsylvania.
Oops. It's late here.
Unfortunately, as soon as you lay down some guideline, and perhaps a purely rational one, that puts one side or the other at a loss, you'll have objections to the rule.
I hate to say it, but I don't think your process solves anything and actually just adds complexity to the process. It probably makes matters worse.
Why do other parties not get a say?
Until recently, we haven't had a precise way to define it:
"The challenge for reformers, however, is that the courts need a judicial standard, which is different from a mere scientific standard. A judicial standard must be judicially discernible — it has to be derived from a specific right given in the Constitution. It also has to be judicially manageable — it needs to provide courts with clear guidance on how to rule." 
This judicial standard is what is being proposed to the Court in an ongoing case concerning Washington. (It derives from Equal Protection and the First Amendment.)
That's popularity, just like FPTP.
Say I'm a photographer, but due to Alt-Amazon's draconian policies, I can choose one and only one camera. And not only that, I can purchase one and only one lens! What do I pick? Compact or full-blown setup?
But most photographers have at least a couple options to shoot with, depending on the needs of the situation.
And the star rating system is classic score voting.
Lately, normal seems to have become the exception.
How many people here work for companies that threaten democracy in this way? How many people have a google or facebook tracker on their personal site? This kind of change is the sum of small individual responsibilities.
The early dream of the internet was one of free expression, but people -not corporations or governments- have worked very hard to undermine that.
It's a coordination problem. You're right, in a sense; at the end of the day it's people doing this, and if they all stopped then it wouldn't happen. Unfortunately, that's what it would take: They'd all need to stop, simultaneously.
In the meantime our world is such that any company or person who refuses on ethical grounds will be outcompeted, and go out of business.
It isn't something anyone decided to do. It's just implicit in the system, which itself wasn't really deliberately built. We'll break our backs lifting Moloch to heaven, and then...
Well, there is no "and then". Humanity will have achieved its final purpose. Maybe it won't go that far, but I'm skeptical.
Everyone doesn't need to stop simultaneously. Some people need to stop and convince others that it's worthwhile to do so. People who refuse to stop should be shown the consequences of what they're doing and be forced to make a decision that they otherwise might not have been aware of. Some people will still make that decision and will be able to justify it completely, some people will change.
Your part in all of this, even though you don't believe change is possible, could be to be less dismissive and condescending when it's brought up.
PHP has like a 70% market share for web backends, Haskell/Rust/etc. basically none.
VHS beat Betamax.
I'd say that's individuals outsourcing their decisions to the perceived or real pressures and expectations of others. Also, while you can surely describe individuals making decisions as groups in aggregate, as a simplification, groups don't actually decide anything. They don't even do anything. And where those external expectations are also based on outsourced decision making, you can sometimes ignore the whole chain.
Interestingly, when individual people are powerless but no single person is responsible, and also nobody is responsible for doing what they "have" to do, that's kind of even more anarchic and desolate than even the anarchy of everyone against everyone. I mean, in the latter case there are even still "ones" there, not just one blob, one river of people flowing where they can't help but go.
Interestingly, the old Greeks kind of self-servingly considered slaves as obviously born to be slaves, because if they weren't, they'd simply kill themselves rather than be slaves. The Spartans certainly were big on that, at least if the writers are to be believed. Not that I want to glorify their outlook, but I find it fascinating how people constantly pretend anyone has to do anything other than die at some point. As in "I can't do that, or I would get fired" -- fine, but own your decision, don't call it anything but a decision. I would rather argue with someone who says they decided to be a selfish or cowardly person, than tolerate someone who is "good" just because that's more convenient or others expect it.
Anyways, here's someone who paid orders of magnitude more attention to this than most of us do today:
> Private interests which by their very nature are temporary, limited by man's natural span of life, can now escape into the sphere of public affairs and borrow from them that infinite length of time which is needed for continuous accumulation. This seems to create a society very similar to that of the ants and bees where "the Common good differeth not from the Private; and being by nature enclined to their private, they procure thereby the common benefit."
> Since, however, men are neither ants nor bees, the whole thing is a delusion. Public life takes on the deceptive aspect of a total of private interests as though these interests could create a new quality through sheer addition. All the so-called liberal concepts of politics (that is, all the pre-imperialist political notions of the bourgeoisie)-such as unlimited competition regulated by a secret balance which comes mysteriously from the sum total of competing activities, the pursuit of "enlightened self-interest" as an adequate political virtue, unlimited progress inherent in the mere succession of events -have this in common: they simply add up private lives and personal behavior patterns and present the sum as laws of history, or economics, or politics. Liberal concepts, however, while they express the bourgeoisie's instinctive distrust of and its innate hostility to public affairs, are only a temporary compromise between the old standards of Western culture and the new class's faith in property as a dynamic, self-moving principle. The old standards give way to the extent that automatically growing wealth actually replaces political action.
> Hobbes was the true, though never fully recognized, philosopher of the bourgeoisie because he realized that acquisition of wealth conceived as a never-ending process can be guaranteed only by the seizure of political power, for the accumulating process must sooner or later force open all existing territorial limits. He foresaw that a society which had entered the path of never-ending acquisition had to engineer a dynamic political organization capable of a corresponding never-ending process of power generation. He even, through sheer force of imagination, was able to outline the main psychological traits of the new type of man who would fit into such a society and its tyrannical body politic. He foresaw the necessary idolatry of power itself by this new human type, that he would be flattered at being called a power-thirsty animal, although actually society would force him to surrender all his natural forces, his virtues and his vices, and would make him the poor meek little fellow who has not even the right to rise against tyranny, and who, far from striving for power, submits to any existing government and does not stir even when his best friend falls an innocent victim to an incomprehensible raison d'etat.
> For a Commonwealth based on the accumulated and monopolized power of all its individual members necessarily leaves each person powerless, deprived of his natural and human capacities. It leaves him degraded into a cog in the power-accumulating machine, free to console himself with sublime thoughts about the ultimate destiny of this machine, which itself is constructed in such a way that it can devour the globe simply by following its own inherent law.
> The ultimate destructive purpose of this Commonwealth is at least indicated in the philosophical interpretation of human equality as an "equality of ability" to kill. Living with all other nations "in the condition of a perpetual war, and upon the confines of battle, with their frontiers armed, and cannons planted against their neighbours round about," it has no other law of conduct but the "most conducing to [its] benefit" and will gradually devour weaker structures until it comes to a last war "which provideth for every man, by Victory, or Death."
> By "Victory or Death," the Leviathan can indeed overcome all political limitations that go with the existence of other peoples and can envelop the whole earth in its tyranny. But when the last war has come and every man has been provided for, no ultimate peace is established on earth: the power-accumulating machine, without which continual expansion would not have been achieved, needs more material to devour in its never-ending process. If the last victorious Commonwealth cannot proceed to "annex the planets," it can only proceed to destroy itself in order to begin anew the never-ending process of power generation.
-- Hannah Arendt, "The Origins of Totalitarianism"
Leaves each person powerless? Check. Everybody is a slave to warfare, those who don't wage war perish, and there isn't even a tyrant you could assassinate. Planned obsolescence so we can hustle harder? Check. And oh boy, we can't wait to do this to other planets. We'll be driven by corporate, brainless greed even before we set foot on any of them; it's going to be so much more efficient than the destruction of this environment.
But the real kicker to me is
> It leaves him degraded into a cog in the power-accumulating machine, free to console himself with sublime thoughts about the ultimate destiny of this machine
that is, the fact that many today don't consider this degradation, but elation. That's all we have: our "communities", our hopes and dreams for this utopia-like world with constant new things to consume, and other abstractions. Reflecting on oneself as an individual in the naked here and now? Not so keen on that.
Show concern about the data that is collected, how it is collected, how it benefits others, how it is safeguarded.
Offer operational alternatives that have a lesser negative impact with comparable benefit.
Challenge people to show that their algorithms treat people equitably and don't exacerbate existing disparities.
Get a job somewhere else in an industry that helps people or positively impacts society in some way.
Quit and quietly wage cyber-war against your former employer from an abandoned missile silo? Just spitballing here.
That's the origin of the dissonance, the lies, the blame, and the general negativity of the blamers in power today. I won't grace them with the term "far right", given that right-leaning views are fair enough in some circumstances. So is leaning left; but left-leaners don't blame and right-leaners do, so you get the natural polarization between the groups we see today.
This group of outright liars is another thing entirely, however.
There is a solution; it's treaties, regulations, and so on. The problem is caused by defection being a worthwhile strategy, so the solution is to change the landscape so that it isn't. That won't happen if we don't try it, though.
To a major degree, the story of western civilization is the story of building our way out of such traps. It isn't hopeless.
Plenty of people stop all the time, plenty never started.
> In the meantime our world is such that any company or person who refuses on ethical grounds will be outcompeted, and go out of business.
Again, no. Plenty of individuals and corporations refuse to cut plenty of corners that would give them 0.N% more profit on purely ethical grounds. If the world was actually like you described, nobody would even survive the first day after their birth, they'd just get eaten.
- What's up friend?
- Saving for a bicycle!
- But don't you work in a bicycle factory?
- Yeah... I tried to bring home the parts. But I ended up with a machine gun :(
I think the best we can hope for is to design systems that are difficult to undermine or subvert. Distributed/mesh systems, encryption, and so on - things which at least delay the onset of corruption, centralization, and control. This is obviously a big ask, but it's also something that can be worked on by a few, instead of requiring that the many don't work towards the reverse.
The obvious courses of actions have also been discussed here - governing and ethics bodies for code/coders as a start.
It's even likely that unions will be required as the industry grows older. This is probably the least popular opinion to state on HN.
But that idea isn't an issue yet. We're relatively far off from it happening or it becoming a political/collective necessity.
: http://www.spektrum.de/news/wie-algorithmen-und-big-data-uns... (in German)
The article is very broadly written, when the imminent threat might be manipulation of upcoming elections (France, Germany); "fake news" isn't even mentioned.
Where do you see clickbaity bias besides the title and lead (I haven't checked thoroughly)? Many phrases/stereotypes in German pop-science articles will only make sense in the context of the German media discourse/bubble (if they make sense at all).
I'd be surprised if that kind of concept will remain limited to China. Whenever there's massive amounts of money to be made, it will find its way to the US (Credit Score, for example, is already reality). It must be very tempting for the current administration.
That's why they spy on activists (civil liberties, climate change, animal abuse, it doesn't matter), journalists, libertarians, OWS, BLM, Linux users, etc.
Credit scores are at least regulated, viewable, and disputable. The newer pseudo-credit-scores derived from social media are completely opaque.
> It must be very tempting for the current administration.
I'm much less concerned about government than about corporations. If GOOG/FB can make money by selling a "Citizen Score" to banks, employers, etc., they will do so.
There is no reading of American history where the press or the judiciary function as some conduit for the immortal "brilliance" or protection of the framers. One needs only look at recent history, the condition of minorities in the modern carceral state, or being lied into war in Iraq, to recognize this. Further back, the judiciary was used to justify segregation, eugenics, and mass disenfranchisement. And the history of the press speaks quite bleakly for itself.
This kind of mentality, where individual agency is sacrificed in favor of "mythic" protections, is a big part of the reason things have become so precarious.
See also the novel "The Clansman", the Word of that narrative made flesh, and "Birth of a Nation", in which the modern movie was born.
Sacrificing individual agency to mythic protections has a name. It's collectivism. And it takes quite the storyteller to kill that sort of story.
The housing crisis? "Housing never goes down." Vietnam? "The domino theory." Manifest Destiny? "Go West, young man."
The problem is that we're adapted to narrative as our principal means of information exchange. So we try to construct counter-narratives. Those work just about as well.
The antidote to narrative is rhetoric.
It's actually spelled hypocrisy, but that's a fun misspelling for those who know Greek roots.
Otherwise, I think a huge reason for our moral progress in the US, slow and unsteady though it may be, is having a document like the constitution to point to when advocating for better conditions.
The constitution enshrined the diminishment of blacks' humanity, had to be amended to extend the franchise to women, and still says that slavery is fine under the right circumstances. It's as much a reflection of our failings as it is our ambitions, and has been wielded violently to uphold or change the status quo. It has no inherent trajectory towards progress other than the one we actively embody.
The more insidious statist leaders create deep, subtle, lasting harm. It's the Bushes and Obamas who set up the largest illegal domestic spying operation ever and got away with it because, even when called out, people trusted them not to commit evil acts with the store of data they were collecting. It was Obama who routinely violated journalists' rights yet continued to be generally supported by mainstream news outlets.
That genre of politician will be making a comeback after Trump, and that's when people will start feeling comfortable again and go along with anything that promises stability and safety. And I believe that's the environment where the type of manipulation described in this article ("liberal paternalism") can really thrive.
Uhm, is this a joke?
How about having an anti-Semite for a chief strategist? Or selecting a woefully unqualified campaign donor to lead the Department of Education? How are you feeling about transgender rights? (And let's not pretend this is just about bathrooms -- if you can't use the bathroom in public, you can't go out in public.) Maybe you care about sexual assault?
 - http://www.nbcnews.com/politics/2016-election/trump-campaign...
 - https://www.washingtonpost.com/news/answer-sheet/wp/2017/01/...
 - http://www.npr.org/sections/ed/2017/02/23/516837258/5-questi...
 - https://www.washingtonpost.com/politics/trump-recorded-havin...
I'm not writing that to be funny, it's a serious issue. Also applies to climate change deniers who refuse satellite data and moon landing deniers who deny everything from NASA… not that I wish to imply any overlap.
It's happening all over the world already; people are already crying for democracy to be reduced in their countries after AI-based analytics companies have spread fake news and targeted advertising. It's already extremely effective even though people are fully aware of its existence.
It's been a common trend of the past few years: the impact of media is vast, and it's widely subverting democracy right now. I am convinced it's a grave threat to our societies, but I'm not sure how to stop it without doing equally bad things.
Perspective matters when considering the brilliance of the Framers. The debate over the electoral college and the effectiveness of a voter in rural Wyoming vs one in urban LA is a great example.
Personally I think they got a lot right but that the US is a vastly different country now. How do you make adjustments or further codify rights without disenfranchising others though? In theory through amendments but that seems unlikely in the current political environment.
I wouldn't want to live in a society where all of our rules and laws are determined by popular vote. That is a tyranny of the majority. With all due respect to Thomas Paine, just because a majority believes something to be true does not make it so..."common sense" is bs.
Also, calling our version of democracy a "sham" or an "illusion" is ignorant at best. That people are dissatisfied with the results or that opinions can be manipulated does not make it any less so. It is ultimately our own fault when we have things like single issue voters, a lack of voting and/or people who only vote during presidential election years, and those who fall for sloganeering.
There is a reason that billions of dollars are spent on political elections and not simply rigged. It's because your vote does matter.
And they still get this money even though they are wealthy enough to not rely on state subsidies.
"Democracy is the worst form of government, except for all the others."
Like the vast majority of the population confusing a Constitutional Representative Republic (where voters elect representatives, who then vote on behalf of their constituents) with a Democracy, where citizens vote directly.
Which leads to the general population not understanding:
- Why there is an electoral college instead of a direct vote for the president.
- Why there are two divisions of the legislative branch: the House, which is proportionally represented, and the Senate, which originally was appointed by the state legislatures.
Companies like Facebook are pushing for more direct voting, which should give you pause, as the tyranny of the majority is a very real thing; if you don't believe so, look at California, which voted down gay marriage despite being a liberal bastion. The average voter is not informed enough to vote on specific legislation, and is easily manipulated by the mass media.
Removing another safeguard to prevent the tyranny of the Majority is never a good idea.
PS: There is a good reason that, when searching through the Constitution, you will not see the country defined as a Democracy, or even the word "democracy" used. Rather, it is explicit in the fact that we live in a Republic.
Norway is a democracy and not a republic; Zimbabwe is a republic and a dictatorship; France is a republic and a democracy.
>The United States shall guarantee to every state in this union a republican form of government, and shall protect each of them against invasion; and on application of the legislature, or of the executive (when the legislature cannot be convened) against domestic violence.
Also, explain Napoleon who was Monarch over the Republic of France
Yes, our constitution guarantees that each state will be a republic, not a monarchy.
> Also, explain Napoleon who was Monarch over the Republic of France
The French state while Napoleon was emperor is known as the "First French Empire", which replaced the "First French Republic". Eventually, after both the Empire and the Old Regime were re-abolished in 1848, the resulting non-monarchical state was called the "Second French Republic". So your example actually supports my point.
Democracy is the people electing on the issues. It wasn't conceived with universal vote in mind, in fact, it was very elitist in nature.
Republicanism is the people electing representatives to weigh the pros and cons of each choice for them. In practice it is much closer to our current system, its flaws included.
I never said that I did.
Democracy is being hidden. They've renamed it and called it Populism.
In 2014, FQXi held an essay contest, sponsored by Scientific American, called "How Should Humanity Steer the Future?". First prize went to German physicist Sabine Hossenfelder of the Frankfurt Institute for Advanced Studies.
Her award-winning proposal  was to hook up everybody to a recommendation engine "to give the user an intuitive feeling for how well a decision matches with recorded priorities", so they would not have to think for themselves. Because, as she succinctly put it, "They don't like to think" ("they" being "most people": "It is time to wake up. We’ve tried long enough to educate them. It doesn’t work.").
Granted, the technological contraption needed to make this work would currently be Google Glass-clunky, but fear not; the essay helpfully adds that "If such a feedback in the future can be given by a brain implant, it will be like an additional sense."
No, I am not making this up.
Let's, for a moment, assume that AI becomes highly capable in most human tasks. In that case, I've a pretty radical view on this topic:
AI can create a heaven for humanity. It can get rid of capitalism. It can eradicate poverty, hunger, etc - But yes, it comes with "Terms and Conditions".
Some people might rename such existence as being non-democratic, non-independent, etc. But don't we already live under an umbrella of rules imposed by a democratic system? Many would agree that the rules exist for a reason...right? If AI was ruling such a system - it would have such rules for a reason.
Can a human live peacefully in heaven? We really don't know.
An AI program today exists for a predefined purpose - or works towards a predefined goal. It does not define its own goals (same can be said about most humans).
On the flip side, AI could wreak havoc, most likely when controlled by humans. Let's say we have democratic-government-controlled AI (assuming all governments have similar capabilities in AI), i.e. the purpose of the AI is defined by the government, but plans and actions are taken care of by the AI. Two countries with two different AIs now start to differentiate between humans: CountryA citizens, CountryB citizens. If some activities in CountryA are a strategic threat to CountryB, CountryB's AI takes actions, then CountryA's AI takes actions, and it goes on... Humans are slow and imperfect, but not stupid. They will stop at a certain loss. An AI that has been ceded control won't stop: it's fast and could do a lot more damage in a short span.
Maybe intelligence is just randomly connected neurons whose connections evolve via feedback. Or maybe we do not understand something fundamental about intelligence.
Since our attempts to date have been dismal failures, and every religion (the formalization of millennia of human experience about how to live) says man is flawed, I think it's very likely that humans cannot live peacefully absent any need to work for survival.
People often conceive of democracy as the pinnacle of development for a society, but nature doesn't care about democracy. It selects for efficiency.
And on average, I guess people are really less interested in having their vote count than in having a greater quality of life.
But we aren't close to this point right now socially nor technologically. Might be different in 50 years though. Particularly when the end effects of "democracy" (used loosely because that's all we have ever seen) are seen played out.
I'm not advocating either. Just predicting.
That's only true up to the moment when their vote doesn't count, though, because if their vote doesn't count, quality of life is mere coincidence, or the side effect of someone else's goals at best.
So, yeah, people don't care that much about the symbolic act of having their vote counted (and why should they?), but they do ultimately care if they lose control of their life, which, after all, nominally is the point of a democratic process.
This is our fault (the many of us who have worked on profiling tech, or for companies that do). But there are things to do about it.
The problem is largely a coordination and logistics issue -- there are far more people and resources concerned by the problem than think things are going well. The problem is that they are spread out and have trouble focusing on issues such as this, and so end up being largely ineffective.
Here we see our second fault: how many of us have taken the time to actually do something about it, rather than complain while we spend our work hours exacerbating the problem?
Many of us have backgrounds in crowd sourcing, in logistics, in machine learning (especially things like auto-summarization) -- why are we applying those skills (only) to cab rides and not to making democracy more efficient and robust?
We've spent too much time on offense -- against people, to build addiction, manipulation, and control -- and if we want a healthy society, one that isn't going to tear itself to shreds on behalf of a few wealthy radicals, we need to work on defense -- deprogramming psychological traps, tools that increase autonomy and self-control, media meta-analysis, etc.
Politics only works if we engage with it, putting ourselves and our ideas out there, and technologists have been reluctant to, because many of us saw that it would devolve into this mess. Well, the mess happened anyway -- can we at least get engaged about trying to fix it?
I agree that people of multiple political camps are using technology, but my experience has been that all camps are exploiting rather than empowering people. You seem to be of the opinion that I'm advancing a particular political agenda here. This is actually a policy-neutral concern -- I'm worried about how we're having the debate, not the outcome.
(And there are some I know of, like CBT apps and productivity timers, but I don't think they're that effective as engineered. They also don't cover, eg, meta-analysis.)
I will not go into the technological details, as I am just an end user. But I will say this: you are mistaken when you say people are manipulated. I see these techniques as a cost-effective way of dispensing your message. People always vote based on their interests. Those can be short term or long term.
Have a good one! :)
It would essentially remove the career politician and allow people to actually understand things like what their politicians/highest-value contributors did, rather than what we have now. No one would ever go for it, but I actually think it'd be great if nurses, doctors, social workers, environmentalists, and yes, entrepreneurs who had done well, charity founders, bin men etc. all got more voting rights than me, who types into a computer all day for no real societal improvements.
The rich get a disproportionate amount of say anyway, it'd be more even than it is currently.
All public good works would obviously have to be recorded through the system and you might eventually be able to do away with money based upon this. I think it could definitely be worth building and doing away with local government as a starter ;-)
Your system wouldn't work for entire classes of people: coal miners, welders, fishermen, and yes, software engineers, to name a few.
Not everyone can work in a job that provides "real societal improvements", whatever that means. And not everyone wants to.
Yes, the rich get a disproportionate amount of say, but that's because they can just afford to spend money on advertising and lobbying.
Their vote isn't actually worth more than mine or yours. No one's is, and that's great, in my opinion.
I'm not sure what you mean by this. Here are some links I'm providing to you.
What we have now is basically what you propose, except with an ensemble of ranking algorithms based on algorithm selection and execution from the public.
Without a better source of algorithm selection, why would it end up any better?
Everyone already has a personal measure of value for the people they're voting on and a system for computing that value.
Ed: If you don't have personalized algorithms, the algorithm selection process will be co-opted to create an autocracy.
We actually see a blend of those problems in the modern system, where voters have disengaged from the primary process and allow the aggregate ranking algorithm to go unchecked.
Of course, more ranking algorithms doesn't resolve how you mediate conflicting outputs (which you've failed to address) and why that would produce better outcomes than elections.
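To make the mediation problem concrete, here is a minimal sketch (all names illustrative, not from the thread) of one possible merge rule for conflicting ranked outputs, a simple Borda count. The point it illustrates is that the merge rule itself is a policy choice: a different rule over the same personal rankings can produce a different winner.

```python
# Minimal sketch: merging conflicting outputs of several "personal ranking
# algorithms" with a Borda count. Names and data are purely illustrative.

from collections import defaultdict

def borda_merge(rankings):
    """Each ranking is a list of candidates, best first.
    Returns candidates ordered by total Borda score (ties broken by name)."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, candidate in enumerate(ranking):
            scores[candidate] += n - 1 - position  # top spot earns most points
    return sorted(scores, key=lambda c: (-scores[c], c))

# Three voters' personal algorithms disagree on the order:
rankings = [
    ["Alice", "Bob", "Carol"],
    ["Bob", "Carol", "Alice"],
    ["Alice", "Carol", "Bob"],
]
print(borda_merge(rankings))  # prints ['Alice', 'Bob', 'Carol']
```

Swapping Borda for, say, plurality or instant-runoff over the same inputs can change the outcome, which is the unresolved "mediating conflicting outputs" step the comment points at.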
One man's junk is another man's treasure!
Reminds me of the trap I often get into of thinking that if something isn't on google, it must not exist.
People often get caught thinking that something's common monetary value is equivalent to its worth, to its potential. That's just not true!
Well, I know all the Smart Nation, Data.gov, Gov Tech and data science initiatives, but I think the influence is way exaggerated here.
Does this have any grounding in facts or reason?
I found this page listing Fortune 500 companies that failed from Dec 2000 to Jan 2012:
I don't know how accurate that is, but it lists 24 failures (about 5% of the Fortune 500), of which one company (Hostess) is listed twice, and at least two (American Airlines and GM) are still around.
"Almost all companies and institutions have already been hacked, even the Pentagon, the White House, and the NSA."
Maybe we can learn from governance systems that have survived over a long period of time? Say, Swiss direct referendums. What I also like in their system is that elected officials are rotated frequently. For instance the president is elected for a period of just 1 year.
"All animals are equal"
"All animals are equal. But some animals are more equal than others"
(for any who haven't heard of Betteridge's law, "Any headline that ends in a question mark can be answered by the word no")
I'd say those people are the real threat to democracy.
Of course democracy will morph too towards trust networks where privacy, anonymity and invisibility will be key factors in collective decision making. Market forces at their best.
I am a big follower of Bruno Frey and his political theories for liberty, one of the modern panarchists.
Maybe it should even be formal: to vote, you must voluntarily renounce your UBI entitlement for the next electoral cycle.
Which might apply to some, but is a huge caricature of the whole. And even where it does apply, it's more because of a lack of opportunities and a felt sense of systematic injustice than because people are actually lazy stereotypes due to DNA or something.
Bread and circuses
In fact, a person creating much monetary worth could be many orders of magnitude more useless and destructive for society than a welfare recipient.