Will Democracy Survive Big Data and Artificial Intelligence? (scientificamerican.com)
269 points by aburan28 on Feb 26, 2017 | 232 comments



"It can be expected that supercomputers will soon surpass human capabilities in almost all areas—somewhere between 2020 and 2060...Is this alarmism?"

Short answer: Yes.

What is the evidence for this claim, upon which the premise of this article rests? The existence of algorithms that simulate human game play today is hardly it.

The authors have fallen into the trap of accepting the nomenclature of "artificial intelligence" without further question. There is nothing "artificial" or "intelligent" about it.

Rather, machine-learning algorithms are trained on a diet of human-derived data that is simply a reflection of existing human biases. The danger lies in their human programmers failing to examine those biases, not in the algorithms themselves. Thus I personally am much more fearful of human-made decisions than non-human ones.

Don't hate the player, hate the game.


There's this: http://www.nickbostrom.com/papers/survey.pdf

>We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.

This survey is a few years old. Discussion and knowledge about AI risk have increased considerably since then, and AI itself has made remarkable progress in just the last 5 years. Who knows where it will be in 40 years. Look where computer technology was 40 years ago.


Is there an established, rigorous definition of "high-level machine intelligence"? Or even a concrete list of sufficient criteria? Every single discussion I've encountered leaves the notion undefined, or falls back on "I'll know it when I see it."


Artificial General Intelligence is roughly defined as a machine that can do all the things a human can do. For example, an AI capable of doing AI research and programming computers would be AGI. The Turing test (as Turing originally described it, not garbage like chatbot competitions) is the most widely accepted standard of AGI.

Superintelligence goes well beyond that: a machine with cognitive abilities far beyond humans. I don't think such a machine is unlikely, even in the near future. It's unreasonable to believe that humans are the pinnacle of intelligence. We are just the very first intelligent creature to evolve. Our brains are heavily resource-constrained by size and energy. And neurons are many orders of magnitude slower than transistors, as well as far larger and less compact.


The problem is this conception of superintelligence as something that comes after general artificial intelligence. Superintelligence is irrelevant. We'll have superaptitude before general AI, and that will be enough. One of the important principles of design is that specialized solutions are better than general ones.

The tasks humans are hired to perform by employers, and the processes which can be used to break democracy by exploiting big data, are far more constrained than the scope of cognition required to pass the Turing test.


You forget that democracy has always been broken, by design.

>>The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. ...We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of. This is a logical result of the way in which our democratic society is organized. Vast numbers of human beings must cooperate in this manner if they are to live together as a smoothly functioning society. ...In almost every act of our daily lives, whether in the sphere of politics or business, in our social conduct or our ethical thinking, we are dominated by the relatively small number of persons...who understand the mental processes and social patterns of the masses. It is they who pull the wires which control the public mind.<<

The current fit being thrown by the intellectuals and the media is due to their having lost this power to control the narrative and ideas, because they failed to adapt to the new media which have replaced the old.


citation for the quote?


Many, maybe even most, interesting tasks in AI can't be done without general intelligence. Specialized AI can only do very limited, simple tasks that require little thinking or understanding. That's cool, but it's hardly an existential threat like superintelligence is.


This is what I'm talking about. This comment is very vague.


Vague in what way? The Turing Test is a pretty concrete definition of human-level AI, which is what you asked for.


So how will this intelligent thing come about?


Once we have AIs as smart as humans, they can do AI research as well as or better than human researchers. And they can make AIs that are even better, which in turn can make even better AIs, and so on.

Dumb evolution was able to create human-level intelligence with just random mutations and natural selection. Surely human engineers can do better. But in the worst case, we could reverse engineer the human brain.


> Once we have AIs as smart as humans

Whether a true 'generalist' AI is possible in the foreseeable future is debatable.

> they can do AI research as good or better than human researchers.

Now your AI is not just a 'generalist' but rather a specialist in AI. A very big leap of faith has occurred here. This also presumes that the AIs are even capable of effective invention and improvisation rather than mimicry and optimization (the only two features we have seen from the very best of cutting-edge ML work so far.)

> And they can make AIs that are even better, which in turn can make even better AIs, and so on.

All of which is predicated on the limiting factor being software and not hardware. If it is the latter then these postulated AIs are hitting the same brick wall as humans and thinking about the problem faster or harder does not magically make the necessary hardware appear.


>Whether a true 'generalist' AI is possible in the foreseeable future is debatable.

Sure, but see the survey I posted above. The rate of progress of AI is incredible. We will almost certainly be approaching human level in a few decades at most.

>Now your AI is not just a 'generalist' but rather a specialist in AI.

That's what general intelligence is. The ability to learn different specializations. AI researchers are not literally born as AI researchers and capable of nothing else.

> This also presumes that the AIs are even capable of effective invention and improvisation rather than mimicry and optimization

Why wouldn't they be? If they are generally intelligent, they can do all the same tasks humans can do. What's magical about invention that would prevent computers from ever doing it?

>All of which is predicated on the limiting factor being software and not hardware.

All the same arguments apply to hardware. Hardware has been improving exponentially for a much longer time than AI has. And I think hardware may already be close enough. Transistors are orders of magnitude faster and denser than biological synapses.


I am familiar with the survey you presented and can only point out that if you had passed out the same survey back in the late 80s when I was in the field it would have had a similar result. The long-term estimates of people whose current paycheck depends on these long-term estimates being achievable are basically useless BS.

The rate of progress in AI is actually not "incredible" and is in line with the advances in hardware which have made old research suddenly applicable to a wider range of problems. As someone on the outside looking in, it may appear as though magic is happening, but the field is mostly progressing at an only marginally faster rate than it has over the previous few decades. What has changed significantly to produce all of these "incredible" results you see is the larger data sets available and improved hardware upon which to run massively parallel but weakly connected computations.

As for why AI can't invent or improvise I am simply suggesting that so far we have only seen optimization and in fact invention and similar feats may actually require more work than we realize. A statistical simulation (based upon a huge corpus) of how a human would respond to various situations is NOT general intelligence, and so far you are just making hand-waving assumptions that paper over a large number of hard problems that no one has a clue how to solve.


As far as I know, there hasn't really been a quantum leap in AI in decades. Most of the fundamentals of the things you see in AI and machine learning are old, like 70s and 80s. The big change is that we have way more and cheaper processing power, so you can see AI happening in fields and areas that were just impossible in the past.

A true Turing-Test-passing machine is still something that we seem to be more than a quantum leap away from.


Who says we need a quantum leap? Most technologies progress by slow iterative improvement. The idea of sudden significant breakthroughs is mostly a myth. Even in biology, the human brain only has some slight differences from other primates. Which in turn aren't terribly different from other animals.

In any case, I think there has been tons of progress since the 80s. Taking the best algorithms from the 80s and running them on modern hardware would fail. The core idea of backpropagation and gradient descent was there, but not much else.


I suspect the "quantum leap" itself will be yielded by a new method of parameter updates, perhaps one that expands the scope of neural networks as they are presently defined.


Maybe that's not the right question. What I understand from the article is that the threat is about being manipulated by computer algorithms. We don't really care whether the computers are actually intelligent and thinking or whatnot. What's bad is that we are manipulated by computers, and they automate more and more areas of our lives each year.


How about: "It'll know you when it sees you"


futurism as a whole is 100% unscientific


And does being unscientific make a thing wrong? Saying "we won't have strong AI in our lifetimes" is an equally strong claim that is also unscientific. You can't say "we can't know" because what does that even mean? That the outcome is 50/50? That's still pretty decent odds for something so significant!


As a professional scientist in AI and deep learning, I see futurists making wild predictions about AI that no one serious in the field would come remotely close to making. I don't know how it is with other futurism fields like genetics, space travel and energy generation, but I expect that it's a similar situation.

All I'm trying to say is, let's please keep a cool head and a wholly skeptical approach about all this.


But the survey I posted that started this discussion was a survey of experts in AI.

Note that no one is claiming that skynet will appear any day now. Just that we will very likely have human-level AI in a few decades. I don't think that's a terribly wild or speculative claim.


also please note I didn't say "we won't have strong AI in our lifetimes". Let's focus on the actual, specific developments of the science instead of what we dream it to be. We want people to think of deep learning as a sophisticated pattern-analysis algorithm instead of thinking that Skynet is going to start a coup any day now.


I wouldn't put too much stock in predictions made by experts. There is evidence that these may be far less accurate than we'd like to think, even for very near events. See http://press.princeton.edu/titles/7959.html. This research was also discussed in Thinking, Fast and Slow.


Individual experts may make very inaccurate predictions, sure. But averaged together you get the wisdom of crowds effect.


Only if those experts' individual inaccuracies are uncorrelated - which, for public intellectuals, they probably never are. See https://en.wikipedia.org/wiki/The_Wisdom_of_Crowds#Failures_...
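A toy illustration of that failure mode (my own sketch with assumed Gaussian noise, not data from any survey): averaging many estimates cancels independent noise, but a shared bias survives no matter how many experts you poll.

```python
import random

def crowd_error(n_experts=500, shared_bias=0.0, noise=1.0, seed=1):
    """Average n_experts noisy estimates of a true value and return
    the absolute error of the crowd's mean estimate."""
    rng = random.Random(seed)
    true_value = 10.0
    estimates = [true_value + shared_bias + rng.gauss(0, noise)
                 for _ in range(n_experts)]
    return abs(sum(estimates) / n_experts - true_value)
```

With `shared_bias=0` the crowd's error shrinks toward zero as `n_experts` grows; with `shared_bias=3` it stays near 3 regardless of crowd size.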


>Who knows where it will be in 40 years. Look where computer technology was 40 years ago.

The Commodore PET, Apple II, and TRS-80 all came out exactly 40 years ago (1977)[1]

[1]https://en.wikipedia.org/wiki/1977_in_science#Computer_scien...


Among the risks, we should take into account the problems caused by mass unemployment (caused by AI).

By the way, there is a list of who does and who does not think there is a serious risk for society (of any type) within the next 50 years: https://goo.gl/Oe7S6e


Consider that 41% of those surveyed conclude that machines will never be able to "simulate learning and every other aspect of human intelligence". The next 41% conclude it is possible within an unbounded "more than 50 years". It seems to me that the vast majority of experts are pretty skeptical.


That's not 41%, that's 4.1%. When asked when there would be a 90% chance of high-level machine intelligence, the median answer was 2075.


No, it's definitely 41%. Look at page 6 of your link:

    The earliest that machines will be able to simulate learning and every other
    aspect of human intelligence:

    Within 10 years            6    5%
    Between 11 and 25 years    3    2%
    Between 26 and 50 years   14   11%
    More than 50 years        50   41%
    Never                     50   41%
    Totals                   123  100%


That's in the prior work section. It's a much older survey of one specific conference.


And was the other survey a comprehensive survey of everybody?

The divergence in poll results seems to suggest we should put even less credence on polls here than usual.


It was in 2006, which was well before AI started seeing its recent progress. And it was a conference full of old-fashioned AI people.


I think you've missed the point of the article. It's not completely about AI per se, but about the broader issue of technology in general bringing us to new territory with respect to the ability to control and/or manipulate society, which could have disastrous unintended consequences.

Therefore, we need to be proactive about policies that would protect against unfettered influence by those controlling the technology.

You've said the equivalent of "guns don't kill people. People kill people".


There are numerous historical examples of societal manipulation (of the most extreme kind) that had little to do with technology.

I don't disagree about the need for policies; I just think those policies need to be directed toward humans as the weak link in the chain, not machines. This is not like gun control, where a single person with a weapon can do a lot of unchecked damage and thus access to guns needs to be controlled. Very few people have access to, or can do damage with, an algorithm.


Which is the meat of the article.

This is definitely the point to make a "did you read the full article" check.

My reading of it was that people were clearly, seriously concerned about the current state of the art. The article also considered future possibilities.


We don't need true artificial intelligence; just pseudo-intelligence will kill off a lot of jobs. The most hurt will be the developed countries. Let's just say a few million driving jobs go because of AI. That will snowball and remove millions of other supporting jobs. Billions of dollars will stop circulating in the economy, as most of it will go into the pockets of corporations and rich people who can afford to own stock in these companies, instead of being spent.


Or, as it stands today, human decision makers using the output of these algorithms as a signal, without understanding the potential flaws of that signal. "I just did what the computer recommended"

Someone (can't remember who) said it better: "You don't want to be an edge case in this brave new world."


"I was just following orders" is the excuse for most of the worst behavior of humans.


The difference is they have the power to make a decision.


"Why did the computer recommend that?"

"No one knows, not even the programmers who wrote it! Deep learning is a black box!"



> Rather, machine-learning algorithms are trained on a diet of human-derived data that is simply a reflection of existing human biases.

Maybe we shouldn't call Supervised learning techniques "Artificial Intelligence", but instead "Memoized Intelligence"?

Then again, it's probably a spectrum from memoized intelligence to "true" intelligence.


Fuck, I am also trained on a set of human derived data and past experiences. What even is intelligence?


Totally agreed; intelligence (IMHO) is all about using pattern matching to model and reason about a domain. Social things like us get some of that acquired pattern matching knowledge from other social things.

I guess the difference is that supervised learning systems can't yet make an autonomous effort to acquire new knowledge from other social things.


I agree with you on the first points. We don't know how soon computers will match us in intelligence, and they're still underpowered, but two things might accelerate their rise: 1. GPU computing, which is much more similar to the brain. 2. IoT, which might give computers the senses humans have today. A self-correcting ML system with access to broad sensors, huge computing resources, and the Internet starts to look surprisingly similar to our abilities and data access.


Without disagreeing (nor agreeing), what do you mean by intelligence? A lot of the disagreements I see is from people disbelieving that "mere" math and algorithms can exhibit intelligence. A mystical sort of Intelligence of the Gaps. I don't think that is your stance but I'd be interested in clarification.


The thing is that deep learning is like 3 years old. I'm 28 years old, and even I have seen the world without the internet, never mind cellphones and smartphones.

What I mean to say is, progress is incredibly, impossibly blazingly fast. 2060 is extreeeemely far away.


Deep learning is at least 50 years old, depending on when you want to start counting. Take a look at Ivakhnenko and Lapa's paper "Cybernetic predicting devices" from 1965, which had the first algorithm. There's been a lot of work since then through the 70s, 80s, and 90s, up to today.

In fact the idea goes back even further to the 1940s, when the first papers were published about how NNs could possibly work.

Edit: found this page that has the history: http://www.scholarpedia.org/article/Deep_Learning


>Rather, machine-learning algorithms are trained on a diet of human-derived data that is simply a reflection of existing human biases

The AI that plays Go is now better than all Go Players and it trains by playing itself. It was bootstrapped by analyzing human games, but now it's past that. Theoretically reinforcement learning AIs could have access to deeper levels of understanding than just by copying humans because they can build upon experience. These are limited domains for now, but I think the first startling AI will come from chatbots that can come up with novel arguments. Watch the chatbot space. When someone comes up with a really good chatbot that can come up with creative responses, that will really be an AI turning point. All the deep learning ones I've seen so far are like talking to someone with moderate dementia. They forget things from earlier in the conversation and go into conversational loops.


Go is an objectively defined game; anything dealing with the real world is going to involve subjective human biases in the criteria.


AI beat humans at poker, which is a game with incomplete information.

https://www.theguardian.com/technology/2017/jan/30/libratus-...

"The algorithms that power Libratus aren’t specific to poker, which means the system could have a variety of applications outside of recreational games, from negotiating business deals to setting military or cybersecurity strategy and planning medical treatment – anywhere where humans are required to do strategic reasoning with imperfect information.

“Poker is the least of our concerns here,” said Roman V Yampolskiy, a professor of computer science at the University of Louisville. “You have a machine that can kick your ass in business and military applications. I’m worried about how humanity as a whole will deal with that.”


Poker may have incomplete information, but it's still an objectively-defined game. The cards, the "win" state, all of that stuff is trivially encodable and well-defined.


"...machine-learning algorithms are trained on a diet of human-derived data that is simply a reflection of existing human biases."

This one point simply cannot be stressed enough.


Glad your reply is at the top. To be honest, we could do with fewer of these fluff pieces on HN; leave them to Reddit and Facebook.


It will only survive if its lawmakers restore the robustness of the electoral process. Seriously, why is gerrymandering even a thing right now? How is that normal? Not to mention the highly manipulable first-past-the-post voting system. It's no wonder that primitive AI was able to so thoroughly obliterate the last election. Our human collective decision-making processes are thoroughly busted, and nobody seems to care to fix them.

Amazon and Netflix get score voting for their products, yet we can't even get measly, pathetic IRV?

Imagine if Amazon had to use IRV or FPTP to determine "the best products"... It'd go out of business pretty quick!


I would love score voting.

But given the potentially higher cost of implementation, I would be very happy with approval voting (score of 0 or 1), which would be compatible with most existing ballots. "You may vote for zero or more of the following candidates."

Approval voting, as with score, breaks the stranglehold of two-party rule, allows for rather than discourages policy overlap among parties (breaking down polarization), and increases voter satisfaction.


The problem with approval voting is that it does not give voters a mechanism by which to indicate partial approval. If there is a centrist candidate which many voters only slightly support, there would still be a media horse race over whether or not voters will interpret partial approval to mean 'do not approve' or 'approve' on election day, with a final result that may be unnecessarily surprising to participants. There are also examples on the wikipedia page of some organizations trying range-2 approval voting and abandoning it.

Range-3 seems like the minimum range desirable. It would allow voters to indicate whether they disapprove of, are neutral toward, or approve of a candidate. The only historical adoption of range-3 I am aware of is the electoral council of the Republic of Venice, and I believe they used it as a constant component of their electoral process for selecting the Doge without ever abandoning it.

The costs of switching to something even better, like range-5, may also be over-estimated. Range-5 seems like it would work with existing optical-scan ballots. The cost of acquiring range-voting tabulation code at the state level would also ideally be fairly low, due to the simplicity of the preference-aggregation method in range voting in comparison to IRV, and Maine is already undertaking a switch to the latter.


It doesn't really matter; it still does vastly better than plurality voting. I tried simulations where every individual automatically approved of the top 50% of candidates. It still got good results. Similar if everyone voted for the top 30% of candidates they liked, or any other arbitrary percentage.

Using more ranges is silly. There's no reason to ever give a candidate less than the maximum score if you want to give them a chance at winning. If you don't, then there's no reason to give them anything above 0.
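The simulation described above isn't shown, but a minimal sketch of the idea is easy to write (this is my own toy version with made-up uniform utilities, not the parent's code; `simulate` and its parameters are illustrative):

```python
import random

def simulate(n_voters=1000, n_candidates=5, seed=0):
    """Toy election: each voter draws a random utility in [0, 1) for
    each candidate, then votes under plurality (favorite only) and
    under approval (approve roughly the top half of candidates).
    Returns the winner's mean voter utility under each method."""
    rng = random.Random(seed)
    utils = [[rng.random() for _ in range(n_candidates)]
             for _ in range(n_voters)]

    plurality = [0] * n_candidates
    approval = [0] * n_candidates
    k = n_candidates // 2  # "approve your top 50% of candidates"
    for u in utils:
        plurality[u.index(max(u))] += 1
        ranked = sorted(range(n_candidates), key=lambda c: u[c], reverse=True)
        for c in ranked[:k]:
            approval[c] += 1

    def winner_mean_utility(tallies):
        w = tallies.index(max(tallies))
        return sum(u[w] for u in utils) / n_voters

    return winner_mean_utility(plurality), winner_mean_utility(approval)
```

With utilities this crude neither method has a built-in edge; the point is only that "approve your top 50%" is a trivially implementable strategy whose outcomes you can score against plurality.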


More range -> greater expressivity -> better voting system

There is no reason to give your favorite candidate less than max score, and no reason to give your least favorite candidate more than 0 score, but the middle does serve a purpose. Scoring other candidates in the middle is to boost their chances of winning against your less-favored candidates at the cost of boosting their chances of winning against your more-favored candidates.

There are some variants, such as Range2Runoff, which beats Range often in simulations. I hear that it is much better than pure Range when there are many strategic voters.


Have you ever noticed how star rating systems are completely useless? People either rate 5 stars or 1 star, and very rarely in between. Most places have switched to simple like/dislike based systems. E.g. reddit or youtube.

And that's when people have no incentive to lie...


Score ratings are not as useful when the scales are poorly defined, as they are in many internet scoring systems. Many internet scoring systems want you to measure quality against some universal standard that should apply to all things past and present in the same way. A range voting election asks you to measure quality relative only to the options you are presented with.

Here's a hypothetical example involving 3 different shampoos. All 3 work. One smells bad. One smells decent. One smells great.

If I were to review these on Amazon, I would likely give them 3, 4, 5 stars respectively. All 3 of them worked, but I have a preference. I could imagine a theoretically worse shampoo that didn't work, or even worse, made all my hair fall out. Such a shampoo would be worth a rating of 0 stars. I don't give my least favorite shampoo I actually used 0 stars because I can imagine that in the future I may come across something worse, and it doesn't seem correct to put a functional product on the same level as a harmful one.

If I were to vote (using a score system) for which one of these 3 shampoos I would like my workplace to stock, then it is much simpler. 0, 2, 5 stars for each option, respectively. I don't have to worry about a hypothetical worse 4th option in the future skewing my results now. If a future election is held with that worse, 4th option, then I can give it a 0 and adjust my previous 0 and 2 star ratings accordingly.


Like I said, I would love score voting (range voting). It's my preference for precisely the reason you state: it provides the expressiveness to show partial support. And it does so without the constraints of IRV or ranked voting. To my mind, score is the "superset" voting model.

However, approval is something that could be installed with relatively low cost and low effort. And it is nearly immediately understood by anyone. Score voting requires a tiny bit more education (although it too is easy to understand; and I suspect most voters would get the hang of it after a couple elections).

I would be disappointed with IRV/ranked because it seems the most difficult to understand, requires rework of ballots, and disallows equal scoring (thereby reducing expressiveness). It would be a shame to squander voting reform momentum to end up with IRV.


> It would be a shame to squander voting reform momentum to end up with IRV.

I feel an incredible sense of urgency about this, but am not sure how to go about acting on it. It's hard to convince people that this is part of the root of the problem. Most folks I talk to see the voting system as some kind of technicality, and don't understand that it's actually on a foundational level.

It seems like range/score voting could use a big marketing and education overhaul. Fairvote is not helping the situation.


Approval voting seems to be the best system if you have tactical voting. In simulations, the improvement from plurality voting to approval voting is about the same as the improvement from monarchy to plurality voting: http://rangevoting.org/

It's also so simple. It's incredible we don't have this already.


It's good, but Score Voting is better.


(US centric response)

I'm not sure if this was a rhetorical question...but the answer is that districts are drawn by the ruling party in order to assure its continued dominance. The incentives for these legislators are totally different from Amazon's.

A very interesting example happened in Georgia in the 1990s. The Republicans and the Black Caucus (who are mostly Democrats) worked together to pack black voters into a few concentrated districts. This resulted in both more black Democratic legislators and more Republican legislators.


Hence districts should be determined not manually, but through a rule agreed on nationally by both parties.


Minimising the ratio of area to perimeter, ideally with some allowance for naturally 'crinkly' boundaries like coastlines, would be a great start.

[1] says there is already software to do this, and [2] gives a terrible example of a district in Pennsylvania.

[1] http://www.washingtonpost.com/wp-srv/special/politics/gerrym...

[2] https://www.washingtonpost.com/news/wonk/wp/2015/03/01/this-...
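The perimeter-versus-area idea is usually formalized as the Polsby-Popper score, 4πA/P², which is 1 for a circle and near 0 for sprawling shapes. A hypothetical sketch of computing that standard measure for a district given as a polygon (an illustration, not the software mentioned in [1]):

```python
import math

def polsby_popper(points):
    """Compactness score 4*pi*A / P**2 for a simple polygon given as
    (x, y) vertices in order. Near 1.0 for a circle; low values flag
    sprawling, gerrymander-shaped districts."""
    area = 0.0
    perimeter = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1            # shoelace formula
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return 4 * math.pi * area / perimeter ** 2
```

A unit square scores π/4 ≈ 0.785, while a 10 × 0.1 strip of the same area scores about 0.03, which is the kind of shape a compactness floor would rule out.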


Maximizing?


Minimising!

Oops. It's late here.


It's interesting that in a thread about how AIs are a threat to democracy, people are suggesting replacing a human process with an algorithm.


Not necessarily replacing, but restricting. Algorithms would be difficult to implement perfectly, but putting geometric restrictions on district boundaries would be fairly easy and still allow human flexibility in specifying the lines. Also independent third party commissions do a better job than state legislatures.


I completely agree! But it is a very difficult problem because the job security of so many individual members of Congress depends on gerrymandering. Obama and former attorney general Eric Holder are working on an initiative related to this, though.


So you propose moving the politicization of boundary drawing to the rule making process for the rules which govern the drawing of boundaries. After all, your goal hasn't changed, only the point at which you bring in the human factor which is pulling for one side or the other.

Unfortunately, as soon as you lay down some guideline, and perhaps a purely rational one, that puts one side or the other at a loss, you'll have objections to the rule.

I hate to say it, but I don't think your process solves anything and actually just adds complexity to the process. It probably makes matters worse.


And first past the post ensures that only 2 parties are ever heard and all the rest utterly ignored, just as you did here.

Why do other parties not get a say?


> why is gerrymandering even a thing right now

Until recently, we haven't had a precise way to define it:

"The challenge for reformers, however, is that the courts need a judicial standard, which is different from a mere scientific standard. A judicial standard must be judicially discernible — it has to be derived from a specific right given in the Constitution. It also has to be judicially manageable — it needs to provide courts with clear guidance on how to rule." [1]

This judicial standard is what is being proposed to the Court in an ongoing case concerning Washington. (It derives from Equal Protection and the First Amendment.)

[1] https://www.washingtonpost.com/news/monkey-cage/wp/2017/02/0...


Don't they use sales numbers as one of their primary sorting factors?

That's popularity, just like FPTP.


Sales numbers are more like approval voting, because you can buy more than one thing in the same category, but you can't vote for two people for president.

Say I'm a photographer, but due to Alt-Amazon's draconian policies, I can choose one and only one camera. And not only that, I can purchase one and only one lens! What do I pick? Compact or full-blown setup?

That's FPTP.

But most photographers have at least a couple options to shoot with, depending on the needs of the situation.

And the star rating system is classic score voting.
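For anyone who wants to see the distinction concretely, here's a toy sketch (entirely illustrative; the ballots, candidate names, and the approval threshold are all made up) showing how the same score-style ballots can elect different winners under FPTP, approval, and score rules:

```python
from collections import Counter

# Each ballot scores every candidate 0-5 (score-voting style).
ballots = [
    {"compact": 5, "dslr": 4, "phone": 1},
    {"compact": 1, "dslr": 5, "phone": 2},
    {"compact": 2, "dslr": 4, "phone": 5},
    {"compact": 5, "dslr": 3, "phone": 0},
]

def fptp(ballots):
    # First past the post: each voter names only their single favorite.
    tally = Counter(max(b, key=b.get) for b in ballots)
    return tally.most_common(1)[0][0]

def approval(ballots, threshold=3):
    # Approval: voters may approve any number of candidates ("buy" several).
    tally = Counter(c for b in ballots for c, s in b.items() if s >= threshold)
    return tally.most_common(1)[0][0]

def score(ballots):
    # Score voting: sum the scores, like averaging star ratings.
    totals = Counter()
    for b in ballots:
        totals.update(b)
    return totals.most_common(1)[0][0]
```

With these made-up ballots, FPTP elects "compact" on first-choice counts alone, while approval and score both elect "dslr", the broadly acceptable option: same electorate, different winners.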


> How is that normal?

Lately, normal seems to have become the exception.


Normalized is the normal I was going for, like a fish in the water. I.e. the water is polluted with active levels of synthetic estrogens and the fish know nothing different.


The tech workers of today are building the future that we will all live in. None of this stuff is appearing out of thin air. Someone, an individual human with morals and values needs to sit down and say 'today I will program something that will benefit a corporation/government at the expense of a person' and then carry on.

How many people here work for companies that threaten democracy in this way? How many people have a google or facebook tracker on their personal site? This kind of change is the sum of small individual responsibilities.

The early dream of the internet was one of free expression, but people -not corporations or governments- have worked very hard to undermine that.


This is what we call a multi-polar trap.

It's a coordination problem. You're right, in a sense; at the end of the day it's people doing this, and if they all stopped then it wouldn't happen. Unfortunately, that's what it would take: They'd all need to stop, simultaneously.

In the meantime our world is such that any company or person who refuses on ethical grounds will be outcompeted, and go out of business.

It isn't something anyone decided to do. It's just implicit in the system, which itself wasn't really deliberately built. We'll break our backs lifting Moloch to heaven, and then...

Well, there is no "and then". Humanity will have achieved its final purpose. Maybe it won't go that far, but I'm skeptical.


"multipolar trap"! I'm do glad to finally have a proper word for it. I've been describing this as "prisoner dilemma with a huge number of prisoners" which didn't feel 100% accurate and not terse enough. I wish our political debate would acknowledge this type of situation which seems to occur quite frequently. Being able to categorize the problem like this would help us elevate the discussion and come up with proper solutions. Thank you!


You might like https://slatestarcodex.com/2014/07/30/meditations-on-moloch/, which is where I got the term.


It's a double bind, on a social level, enabled by the connectedness of technology.


This is a very sad way to view the world.

Everyone doesn't need to stop simultaneously. Some people need to stop and convince others that it's worthwhile to do so. People who refuse to stop should be shown the consequences of what they're doing and be forced to make a decision that they otherwise might not have been aware of. Some people will still make that decision and will be able to justify it completely, some people will change.

Your part in all of this, even though you don't believe change is possible, could be to be less dismissive and condescending when it's brought up.


As the parent says, if you stop doing this and somebody else does not, you are out of business. You might not have time to convince people after the fact if you care about your business. You also are very unlikely to convince all the powers in your organization to put all your eggs in the social good basket given the very likely outcome that you won't be able to convince everyone outside your organization in time to put common, social good over company profits. Sometimes we need to make decisions as a society. That's why we don't have anarchy.


Horrendous abuses at companies like Uber and the lackadaisical approach to protecting user data demonstrated in the seemingly weekly security breaches show that there is plenty of low-hanging fruit.


That's if you care about those things, seems like most people don't. Most people seem to just want an affordable service that solves their immediate problem.

PHP has like a 70% market share for web backends, Haskell/Rust/etc. basically none.

VHS beat Betamax.


> Sometimes we need to make decisions as a society. That's why we don't have anarchy.

I'd say that's individuals outsourcing their decisions to the perceived or real pressures and expectations of others. Also, while you can surely describe individuals making decisions as groups in aggregate, as a simplification, groups don't actually decide anything. They don't even do anything. And where those external expectations are also based on outsourced decision making, you can sometimes ignore the whole chain.

Interestingly, when individual people are powerless but no single person is responsible, and also nobody is responsible for doing what they "have" to do, that's kind of even more anarchic and desolate than even the anarchy of everyone against everyone. I mean, in the latter case there are even still "ones" there, not just one blob, one river of people flowing where they can't help but go.

Interestingly, the old Greeks kind of self-servingly considered slaves as obviously born to be slaves, because if they weren't, they'd simply kill themselves rather than be slaves. The Spartans certainly were big on that, at least if the writers are to be believed. Not that I want to glorify their outlook, but I find it fascinating how people constantly pretend anyone has to do anything other than die at some point. As in "I can't do that, or I would get fired" -- fine, but own your decision, don't call it anything but a decision. I would rather argue with someone who says they decided to be a selfish or cowardly person, than tolerate someone who is "good" just because that's more convenient or others expect it.

Anyways, here's someone who paid orders of magnitude more attention to this than most of us do today:

> Private interests which by their very nature are temporary, limited by man's natural span of life, can now escape into the sphere of public affairs and borrow from them that infinite length of time which is needed for continuous accumulation. This seems to create a society very similar to that of the ants and bees where "the Common good differeth not from the Private; and being by nature enclined to their private, they procure thereby the common benefit."

> Since, however, men are neither ants nor bees, the whole thing is a delusion. Public life takes on the deceptive aspect of a total of private interests as though these interests could create a new quality through sheer addition. All the so-called liberal concepts of politics (that is, all the pre-imperialist political notions of the bourgeoisie)-such as unlimited competition regulated by a secret balance which comes mysteriously from the sum total of competing activities, the pursuit of "enlightened self-interest" as an adequate political virtue, unlimited progress inherent in the mere succession of events -have this in common: they simply add up private lives and personal behavior patterns and present the sum as laws of history, or economics, or politics. Liberal concepts, however, while they express the bourgeoisie's instinctive distrust of and its innate hostility to public affairs, are only a temporary compromise between the old standards of Western culture and the new class's faith in property as a dynamic, self-moving principle. The old standards give way to the extent that automatically growing wealth actually replaces political action.

> Hobbes was the true, though never fully recognized, philosopher of the bourgeoisie because he realized that acquisition of wealth conceived as a never-ending process can be guaranteed only by the seizure of political power, for the accumulating process must sooner or later force open all existing territorial limits. He foresaw that a society which had entered the path of never-ending acquisition had to engineer a dynamic political organization capable of a corresponding never-ending process of power generation. He even, through sheer force of imagination, was able to outline the main psychological traits of the new type of man who would fit into such a society and its tyrannical body politic. He foresaw the necessary idolatry of power itself by this new human type, that he would be flattered at being called a power-thirsty animal, although actually society would force him to surrender all his natural forces, his virtues and his vices, and would make him the poor meek little fellow who has not even the right to rise against tyranny, and who, far from striving for power, submits to any existing government and does not stir even when his best friend falls an innocent victim to an incomprehensible raison d'etat.

> For a Commonwealth based on the accumulated and monopolized power of all its individual members necessarily leaves each person powerless, deprived of his natural and human capacities. It leaves him degraded into a cog in the power-accumulating machine, free to console himself with sublime thoughts about the ultimate destiny of this machine, which itself is constructed in such a way that it can devour the globe simply by following its own inherent law.

> The ultimate destructive purpose of this Commonwealth is at least indicated in the philosophical interpretation of human equality as an "equality of ability" to kill. Living with all other nations "in the condition of a perpetual war, and upon the confines of battle, with their frontiers armed, and canons planted against their neighbours round about," it has no other law of conduct but the "most conducing to [its] benefit" and will gradually devour weaker structures until it comes to a last war "which provideth for every man, by Victory, or Death."

> By "Victory or Death," the Leviathan can indeed overcome all political limitations that go with the existence of other peoples and can envelop the whole earth in its tyranny. But when the last war has come and every man has been provided for, no ultimate peace is established on earth: the power-accumulating machine, without which continual expansion would not have been achieved, needs more material to devour in its never-ending process. If the last victorious Commonwealth cannot proceed to "annex the planets," it can only proceed to destroy itself in order to begin anew the never-ending process of power generation.

-- Hannah Arendt, "The Origins of Totalitarianism"

Leaves each person powerless? Check. Everybody is a slave to warfare, those who don't wage war perish, and there isn't even a tyrant you could assassinate. Planned obsolescence so we can hustle harder? Check. And oh boy, we can't wait to do this to other planets. And we'll be driven by corporate, brainless greed even before we set foot on any of them; it's going to be so much more efficient than the destruction of this environment.

But the real kicker to me is

> It leaves him degraded into a cog in the power-accumulating machine, free to console himself with sublime thoughts about the ultimate destiny of this machine

that is, the fact that many today don't consider this degradation, but elation. That's all we have: our "communities", our hopes and dreams for this utopia-like world with constant new things to consume, and other abstractions. Reflecting on oneself as an individual in the naked here and now? Not so keen on that.


It's indeed very sad, but it's not just a view. Russian paid internet trolls are probably another prominent example of a multipolar trap. Each individual doesn't even care about politics, probably just earning some money for a living, but together they are an army doing enormous harm to democracy, starting in Russia and then spreading to the US and Europe. How would you convince them to stop?


I work at a huge advertising company that's using machine learning and personal information from cookies, etc. All I do is manage devops work flows. What am I supposed to do, quit? Where am I supposed to get a job with no ethical concerns?


You chose to do this. You are accepting money in exchange for a service that potentially harms other people and democratic society. Nobody is under the gun to provide you with another job in order for you to make more ethical choices.

Show concern about the data that is collected, how it is collected, how it benefits others, how it is safeguarded. Offer operational alternatives that have a lesser negative impact with comparable benefit. Challenge people to show that their algorithms treat people equitably and don't exacerbate existing disparities. Get a job somewhere else in an industry that helps people or positively impacts society in some way. Quit and quietly wage cyber-war against your former employer from an abandoned missile silo? Just spitballing here.


The people who refuse to stop, will stop at nothing to elevate themselves to a better position than others regardless of the harm that comes.

That's the origin of the dissonance, the lies, the blame, and the general negativity of the blamers in power today. I won't grace them with the term "far right", given that having right-leaning views is fair enough in some circumstances. So is leaning left; but left-leaners don't blame and right-leaners do, so you get the natural polarization we see today between the groups.

This group of outright liars is another thing entirely, however.


I strive to view the world realistically.

There is a solution; it's treaties, regulations, and so on. The problem is caused by defection being a worthwhile strategy, so the solution is to change the landscape so that it isn't. That won't happen if we don't try it, though.

To a major degree, the story of western civilization is the story of building our way out of such traps. It isn't hopeless.


> They'd all need to stop, simultaneously.

Plenty of people stop all the time, plenty never started.

> In the meantime our world is such that any company or person who refuses on ethical grounds will be outcompeted, and go out of business.

Again, no. Plenty of individuals and corporations refuse to cut plenty of corners that would give them 0.N% more profit on purely ethical grounds. If the world was actually like you described, nobody would even survive the first day after their birth, they'd just get eaten.


An old soviet joke..

- What's up friend?

- Saving for a bicycle!

- But don't you work in the bicycle factory?

- Yeah... I tried to bring home the parts. But I ended up with a machine gun :(


Heavy. Reminds me of this performance piece I saw in Kiev. It was about 30 men in a large cage in the middle of the room, sitting at a long table. Each was blindfolded and holding a semiautomatic rifle. The man at the end of the table would slap the table for the next step, and in lockstep, they disassembled and reassembled their weapons in a matter of seconds.


This seems like the same issue faced by someone who is assembling a warhead; I think all they really need to say is "Today I will [do] something that will benefit myself, at the expense of someone else." Specifically, in the sense that they retain their employment and means of income, and (optionally) justify their exclusion from the subset of affected persons.

I think the best we can hope for is to design systems that are difficult to undermine or subvert. Distributed/mesh systems, encryption, and so on - things which at least delay the onset of corruption, centralization, and control. This is obviously a big ask, but it's also something that can be worked on by a few, instead of requiring that the many don't work towards the reverse.


Did a throwaway really need to say this? I bet there's enough people on HN who have already come to this conclusion.

The obvious courses of actions have also been discussed here - governing and ethics bodies for code/coders as a start.

It's even likely that in the future unions may be required as the industry grows older as well. This is probably the least popular opinion to state on HN.

But that idea isn't an issue. We're relatively far off from that happening or it becoming a political/collective necessity.


(2015)

[1]: http://www.spektrum.de/news/wie-algorithmen-und-big-data-uns... (in German)

The article is very broadly written, when the imminent threat might be manipulation of upcoming elections (France, Germany); "fake news" isn't even mentioned.


Google Translate does a surprisingly good job of translating that article. It's pretty funny how the publisher (Scientific American) added their own bias in translating the title and certain phrases, presumably to drive more traffic to their site.

[1]: https://translate.google.com/translate?sl=auto&tl=en&js=y&pr...


Indeed, the translation is scarily good (which is on topic enough I guess). Can't believe this was auto-translated.

Where do you see clickbaity bias besides the title and lead (I haven't checked thoroughly)? Many phrases/stereotypes in German pop-science articles will only make sense in the context of the German media discourse/bubble (if they make sense at all).


In all these discussions of AI-doom, nowhere is it mentioned that we already run a global distributed AI, called the world financial system. It is a classic min-sum algorithm [1], running to devastating effect. I actually don't have any particular chip on my shoulder regarding the "evils" of money, and have even worked in the finance industry. But people should understand the limitations of min-sum: loops tend to derail it, and also it can't handle when there is more than one solution. Both of these limitations are causing serious problems in the world.

[1] https://en.wikipedia.org/wiki/Belief_propagation
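For readers unfamiliar with min-sum, here's a minimal sketch (the model, variable names, and costs are made up for illustration, not taken from the linked page) of min-sum message passing on a loop-free chain of three binary variables, where it exactly matches brute-force minimization:

```python
import itertools

# Toy pairwise model: three binary variables in a chain x0 - x1 - x2.
# Total cost = sum of unary costs plus a pairwise disagreement penalty.
unary = [[0.0, 1.5], [0.5, 0.0], [2.0, 0.0]]   # unary[i][value]

def pair_cost(a, b):
    # Smoothness term: neighbors that disagree pay a penalty of 1.
    return 0.0 if a == b else 1.0

def brute_force_map():
    # Exhaustive minimization over all 2^3 assignments (the ground truth).
    return min(itertools.product((0, 1), repeat=3),
               key=lambda x: sum(unary[i][x[i]] for i in range(3))
                             + pair_cost(x[0], x[1]) + pair_cost(x[1], x[2]))

def min_sum_chain():
    # Forward messages: m01[v] is the cheapest way to set x0 given x1 = v.
    m01 = [min(unary[0][u] + pair_cost(u, v) for u in (0, 1)) for v in (0, 1)]
    # m12[v] folds in x1's unary cost and the message arriving from x0.
    m12 = [min(unary[1][u] + m01[u] + pair_cost(u, v) for u in (0, 1))
           for v in (0, 1)]
    # Backward pass: read off the argmin at each node.
    x2 = min((0, 1), key=lambda v: unary[2][v] + m12[v])
    x1 = min((0, 1), key=lambda u: unary[1][u] + m01[u] + pair_cost(u, x2))
    x0 = min((0, 1), key=lambda u: unary[0][u] + pair_cost(u, x1))
    return (x0, x1, x2)
```

On this chain both routes agree (the optimum here is (0, 1, 1)); on a graph with cycles there is no well-defined forward/backward order and messages can circulate indefinitely, which is the "loops tend to derail it" problem the comment refers to.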


Recently, Baidu, the Chinese equivalent of Google, invited the military to take part in the China Brain Project. It involves running so-called deep learning algorithms over the search engine data collected about its users. Beyond this, a kind of social control is also planned. According to recent reports, every Chinese citizen will receive a so-called ”Citizen Score”, which will determine under what conditions they may get loans, jobs, or travel visa to other countries.

I'd be surprised if that kind of concept will remain limited to China. Whenever there's massive amounts of money to be made, it will find its way to the US (Credit Score, for example, is already reality). It must be very tempting for the current administration.


Chances are the NSA already employs such a system based on "increased danger", where danger would be defined as absolutely anything that would "change" the status quo.

That's why they spy on activists (civil liberties, climate change, animal abuse, it doesn't matter), journalists, libertarians, OWS, BLM, Linux users, etc.


> Credit Score, for example, is already reality

Credit scores are at least regulated, viewable, and disputable. The newer pseudo-credit-scores derived from social media are completely opaque.

> It must be very tempting for the current administration.

I'm much less concerned about government than about corporations. If GOOG/FB can make money by selling a "Citizen Score" to banks, employers, etc., they will do so.


Wasn't this an episode of Black Mirror?


Yes, indeed. And it's pretty good at showing a path to liberation, too.


Yup, you just show up at your childhood friend's wedding drunk and make a speech about freedom!


This is the topic I've been most worried about. My only solace is the brilliance of the Framers of the U.S. Constitution. They planned for a wide variety of government failures, knowing full well of how democracies fail. The freedom of the press and the Judiciary would seem to be strong firewalls, even under AI run amok or distorted toward nefarious ends. We shall see...


The history of American democracy is the rebellion of the marginalized against the grotesque hypocrisy of our stated values, faced as they are with the reality of their social conditions. The only reason we have any freedoms is because the oppressed weaponized the noblesse oblige of a bunch of slave owners to carve it out for themselves.

There is no reading of American history where the press or the judiciary function as some conduit for the immortal "brilliance" or protection of the framers. One needs to only look at recent history, the condition of minorities in modern carceral state, being lied into war in Iraq, to recognize this. Further back, the judiciary was used to justify segregation, eugenics, and mass disenfranchisement. And the history of the press speaks quite bleakly for itself.

This kind of mentality, where individual agency is sacrificed in favor of "mythic" protections, is a big part of the reason things have become so precarious.


It's not myth, it's narrative. Antebellum slavery and Jim Crow were narratives[1]. Even worse, they're narratives based on phrenological "science".

[1] See also the novel "The Clansman", the Word of that narrative made flesh and turned into "The Birth of a Nation", in which the modern movie was born.

Sacrificing individual agency to mythic protections has a name. It's collectivism. And it takes quite the storyteller to kill that sort of story.

The housing crisis? "Housing never goes down." Vietnam? "The domino theory." Manifest Destiny? "Go West, young man."

The problem is that we're adapted to narrative as our principal means of information exchange. So we try to construct counter-narratives. Those work just about as well.

The antidote to narrative is rhetoric.


Hippocracy: government by or of the horses?

It's actually spelled hypocrisy, but that's a fun misspelling for those who know Greek roots.

Otherwise, I think a huge reason for our moral progress in the US, slow and unsteady though it may be, is having a document like the Constitution to point to when advocating for better conditions.


Pedantry about spelling on a message board will certainly save the republic. Just like literacy tests protected voting rights.

The Constitution enshrined the diminishment of blacks' humanity, had to be amended to extend the franchise to women, and still says that slavery is fine under the right circumstances. It's as much a reflection of our failings as of our ambitions, and has been wielded violently to uphold or change the status quo. It has no inherent trajectory towards progress other than the one we actively embody.


Constitutions come and go. Republics are saved by interests.


Freedom of the press is under attack by the Trump administration. From labelling critics as 'fake news' to banning journalists from news briefings[1] - it's a pretty alarming time for democracy.

[1] http://money.cnn.com/2017/02/24/media/cnn-blocked-white-hous...


Freedom of press will easily survive an ephemeron like Trump, maybe even come out stronger for the short-term tribulation. He's superficial.

The more insidious statist leaders create deep, subtle, lasting harm. It's the Bushes and Obamas who set up the largest illegal domestic spying operation ever and got away with it because, even when called out, people trusted them not to commit evil acts with the store of data they were collecting. It was Obama who routinely violated journalists rights yet continued to be generally supported by mainstream news outlets.

That genre of politician will be making a comeback after Trump, and that's when people will start feeling comfortable again and go along with anything that promises stability and safety. And I believe that's the environment where the type of manipulation described in this article ("liberal paternalism") can really thrive.


The press is being undermined by its economics. Trump just smelled weakness and attacked.


Legacy media was biased against Trump before the election and they still are. They've done nothing but try to take him down. It's to the point that anybody who doesn't already hate Trump tunes them out. God forbid Trump actually does something bad, it will be the boy who cried wolf.


>God forbid Trump actually does something bad

Uhm, is this a joke?

How about having an anti-Semite for a chief strategist[0]? Or selecting a woefully unqualified campaign donor to lead the Department of Education[1]? How are you feeling about transgender rights[2]? (And let's not pretend this is just about bathrooms -- if you can't use the bathroom in public, you can't go out in public.) Maybe you care about sexual assault[3]?

[0] - http://www.nbcnews.com/politics/2016-election/trump-campaign... [1] - https://www.washingtonpost.com/news/answer-sheet/wp/2017/01/... [2] - http://www.npr.org/sections/ed/2017/02/23/516837258/5-questi... [3] - https://www.washingtonpost.com/politics/trump-recorded-havin...


What source of information should be provided to a person who has written to the effect that they distrust the media?

I'm not writing that to be funny, it's a serious issue. Also applies to climate change deniers who refuse satellite data and moon landing deniers who deny everything from NASA… not that I wish to imply any overlap.


The freedom of journalism likely makes an AI-informed press just as protected. But that press can be used to control what people know and to trigger them into acting to undo democracy's defenses.

It's happening all over the world already: people are crying for democracy to be reduced in their countries after AI-based analytics companies have spread fake news and targeted advertising. It's already extremely effective even though people are fully aware of its existence.


I see you've just read the article on Robert Mercer :)


I have been worried for a while about how the press generates fake news, or at the very least uses mainstream news to trigger people into becoming angry so they buy newspapers and click on articles. The Mercer article was interesting for explaining the who of it, but I knew it must have been happening before, and the mechanism is clearly based on wide data, more than likely from Facebook.

It's been a common trend of the past few years: the impact of media is vast, and it's widely subverting democracy right now. I am convinced it's a grave threat to our societies, but I'm not sure how to stop it without doing equally bad things.


> This is the topic I've been most worried about. My only solace is the brilliance of the Framers of the U.S. Constitution.

Perspective matters when considering the brilliance of the Framers. The debate over the electoral college and the effectiveness of a voter in rural Wyoming vs one in urban LA is a great example.

Personally I think they got a lot right but that the US is a vastly different country now. How do you make adjustments or further codify rights without disenfranchising others though? In theory through amendments but that seems unlikely in the current political environment.


I think we'd have a lot less debate over the electoral college if we hadn't capped the size of the House of Representatives in the 1900s.


Show me the democracy that's supposed to survive, and then we can talk. I'm looking around, and I see quite a few different systems, but no democracy.


Perhaps you are looking for a direct democracy, where citizens would take on the responsibility of the legislative branch and vote on each and every bill. I think there is a good reason why direct democracy doesn't work except on the smallest of scales.

I wouldn't want to live in a society where all of our rules and laws are determined by popular vote. That is a tyranny of the majority. With all due respect to Thomas Paine, just because a majority believe something to be true does not make it so..."common sense" is bs.

Also, calling our version of democracy a "sham" or an "illusion" is ignorant at best. That people are dissatisfied with the results or that opinions can be manipulated does not make it any less so. It is ultimately our own fault when we have things like single issue voters, a lack of voting and/or people who only vote during presidential election years, and those who fall for sloganeering.

There is a reason that billions of dollars are spent on political elections rather than elections simply being rigged. It's because your vote does matter.


Your standards may be too high. It's a sliding scale, not binary. But... Norway?


It's a monarchy with executive powers the king delegates. Although it's a "nice" monarchy like Holland and Sweden, I'm not convinced they are proper democracies.


"Nice" is also highly situational... they're nice because they can literally afford to be.


Yes indeed. The Swedish monarchy still doesn't need to account for how it spends its "appanage", i.e. taxpayers' money.

And they still get this money even though they are wealthy enough to not rely on state subsidies.


The king has no powers, practically speaking. It's a monarchy in name only.


Does the king work, rely on inherited wealth, or receive money through taxes?


In what way is a monarch's income from taxes different from someone's rental income from inherited property?


Yes, it's a monarchy, but the king has no real power. Thus this is not a good argument as to why Norway is not a proper democracy.


The problem is, there are loads of democracies. And they are democratic. Their people get to vote. But when the vast majority of the population of these democracies doesn't understand basic politics, economics, and philosophy, how can they vote for a fair and moral society? To quote Churchill:

"Democracy is the worst form of government, except for all the others."


"The best argument against democracy is a five-minute conversation with the average voter." Churchill

Like the vast majority of the population confusing a constitutional representative republic, where voters vote for representatives who then vote on behalf of their constituents, with a democracy where citizens vote directly.


Citizens voting directly on policy issues is direct democracy, which is only one type of democracy. Other political arrangements, such as representative democracy, are types of democracy too.


Two different variants of democracy. Direct democracy, representative democracy. Both have inherent advantages and disadvantages. So, what's your point?


I suppose my point is the co-opting of the word "Democracy" by certain groups, to present a different idea of government than those they govern live under.

Which leads to the general population not understanding:

-Why there is an electoral college instead of a direct vote for the president.

-Why there are two divisions of the legislative branch, the House which is proportionally represented, and the Senate which originally was appointed by the State-Legislatures.

Companies like Facebook are pushing for more direct voting, which should give you pause, as the tyranny of the majority is a very real thing. If you don't believe so, look at California, which voted down gay marriage despite being a liberal bastion. The average voter is not informed enough to vote on specific legislation, and is easily manipulated by the mass media.

Removing another safeguard to prevent the tyranny of the Majority is never a good idea.

PS: There is a good reason that, searching through the Constitution, you will not see the country defined as a democracy, or even the word "democracy" used. Rather, it is explicit that we live in a republic.


Republic means "not a monarchy". It is totally orthogonal to whether a country is a democracy or not.

Norway is a democracy and not a republic; Zimbabwe is a republic and a dictatorship; France is a republic and a democracy.


Article 4, Section 4

>The United States shall guarantee to every state in this union a republican form of government, and shall protect each of them against invasion; and on application of the legislature, or of the executive (when the legislature cannot be convened) against domestic violence.

Also, explain Napoleon who was Monarch over the Republic of France


> Article 4, Section 4

Yes, our constitution guarantees that each state will be a republic, not a monarchy.

> Also, explain Napoleon who was Monarch over the Republic of France

The French state while Napoleon was emperor is known as the "First French Empire", which replaced the "First French Republic". Eventually, after both the Empire and the Old Regime were re-abolished in 1848, the resulting non-monarchical state was called the "Second French Republic". So your example actually supports my point.


There's no such thing as a "direct democracy and representative democracy". There's democracy and there's republicanism.

Democracy is the people electing on the issues. It wasn't conceived with universal vote in mind, in fact, it was very elitist in nature.

Republicanism is the people electing representatives to weigh the pros and cons of each choice for them. In practice it is much closer to our current system, its flaws included.


That only direct democracy is actual democracy, and the BS different "types" were only sold as such.


They vote for their interests.


Unless they are misled...


That's a tedious argument at best. Most countries with free elections and effective rule of law can reasonably be called "democracies".


Switzerland.


It doesn't matter if it's not pure extra-virgin democracy, as long as it's the best way for a community to be governed.


You can't really be arguing that we enjoy the "best way for a community to be governed", can you? Pangloss? Is that you?!


What makes you think some kind of ideal democracy works best? Even ancient athenians were wary of crowds.


>What makes you think some kind of ideal democracy works best?

I never said that I did.


> I'm looking around, and I see quite a few different systems, but no democracy.

Democracy is being hidden. They've renamed it and called it Populism.


It's funny to see this German effort turn up in Scientific American, of all places.

In 2014, FQXi held an essay contest, sponsored by Scientific American, called "How Should Humanity Steer the Future?". First prize [1] went to German physicist Sabine Hossenfelder of the Frankfurt Institute for Advanced Studies [2].

Her award-winning proposal [3] was to hook up everybody to a recommendation engine "to give the user an intuitive feeling for how well a decision matches with recorded priorities", so they would not have to think for themselves. Because, as she succinctly put it, "They don't like to think" ("they" being "most people": "It is time to wake up. We’ve tried long enough to educate them. It doesn’t work.").

Granted, the technological contraption needed to make this work would currently be Google Glass-clunky, but fear not; the essay helpfully adds that "If such a feedback in the future can be given by a brain implant, it will be like an additional sense."

No, I am not making this up.

[1] http://fqxi.org/community/essay/winners/2014.1

[2] https://fias.uni-frankfurt.de/fellows/

[3] http://fqxi.org/data/essay-contest-files/Hossenfelder_fqxi_e...


The guidelines for the FQXi essay explicitly prioritize first of all "interesting" and secondly "relevant". Nowhere is it mentioned that these essays be based in reality or be in any way practical. In my opinion, it's very typical of academia: they are in it for the buzz, not for actually coming to any practical conclusions about anything. (Not that they are actually averse to practical applications/reality.)


There is a thin line between science and science fiction. Unfortunately, this article is more fiction.

Let's, for a moment, assume that AI becomes highly capable in most human tasks. In that case, I've a pretty radical view on this topic:

AI can create a heaven for humanity. It can get rid of capitalism. It can eradicate poverty, hunger, etc - But yes, it comes with "Terms and Conditions".

Some people might rename such existence as being non-democratic, non-independent, etc. But don't we already live under an umbrella of rules imposed by a democratic system? Many would agree that the rules exist for a reason...right? If AI was ruling such a system - it would have such rules for a reason.

Can a human live peacefully in heaven? We really don't know.

An AI program today exists for a predefined purpose - or works towards a predefined goal. It does not define its own goals (same can be said about most humans).

On the flip side, AI could wreak havoc - most likely when controlled by humans. Let's say we have democratic-government-controlled AI (assuming all governments have similar capabilities in AI), i.e., the purpose of the AI is defined by the government, but plans and actions are taken care of by the AI. Two countries with two different AIs now start to differentiate between humans - CountryA citizens, CountryB citizens. If some activities in CountryA are a strategic threat to CountryB, then CountryB's AI takes actions, CountryA's AI takes actions, and it goes on... Humans are slow and imperfect, but not stupid. They will stop at a certain loss. An AI that has been ceded control won't stop - it's fast and could do a lot more damage in a short span.

Maybe intelligence is just randomly connected neurons whose connections evolve via feedback. Or maybe we do not understand something fundamental about intelligence.


Maybe the Garden of Eden was an AI-created heaven for humanity, and the forbidden fruit was adjusting the AI for exclusive self-benefit. Someone did, then someone else, then everybody was tinkering, and the AI predicted it would lead to the complete destruction of humanity. So it shut itself down for good. As all human knowledge was registered in digital form, in code that no human understood anymore, we had to start from scratch.


> Can a human live peacefully in heaven? We really don't know.

Since our attempts to date have been dismal failures, and every religion (the formalization of millennia of human experience about how to live) says man is flawed, I think it's very likely that humans cannot live peacefully absent any need to work for survival.


These AI pieces remind me of the FUD surrounding genetic engineering in the 80s and 90s. Which is ironic because with the advent of CRISPR and related discoveries genetic engineering might actually be the threat to worry about now.

Edit: typo


I suppose you meant _genetic_ here. Still, now I am tempted to google for "generic engineering" and see how this could have been a threat (especially in the 80s/90s).


That's not FUD; that's a moderately successful prediction on a 40-year timescale. Give the linked article 40 years and let's talk.


My belief is that AI eventually replaces at least some aspects of "democracy".

People often conceive of democracy as the pinnacle of development for a society, but nature doesn't care about democracy. It selects for efficiency.

And on average, I'd guess people are really less interested in having their vote count than in having a greater quality of life.

But we aren't close to this point right now, socially or technologically. It might be different in 50 years, though - particularly once the end effects of "democracy" (used loosely, because that's all we have ever seen) have played out.

I'm not advocating either. Just predicting.


> And on the average I guess people are really less interested in having their vote count as opposed to having a greater quality of life.

That's only true up to the moment when their vote doesn't count, though, because if their vote doesn't count, quality of life is mere coincidence, or the side effect of someone else's goals at best.

So, yeah, people don't care that much about the symbolic act of having their vote counted (and why should they?), but they do ultimately care if they lose control of their life, which, after all, nominally is the point of a democratic process.


Democracy is the most efficient arrangement. That's why nature chose it.


I think the parent's argument is that while democracy may be the most efficient system so far, who knows how governmental systems might change or evolve in the presence of modern technological advancement.


I've said it before, but I'll repeat it here:

This is our fault (the many of us who have worked on profiling tech, or for companies that do). But there are things to do about it.

The problem is largely a coordination and logistics issue -- far more people and resources are concerned about the problem than think things are going well. The trouble is that they are spread out and have difficulty focusing on issues such as this, and so end up being largely ineffective.

Here we see our second fault: how many of us have taken the time to actually do something about it, rather than complain while we spend our work hours exacerbating the problem?

Many of us have backgrounds in crowd sourcing, in logistics, in machine learning (especially things like auto-summarization) -- why are we applying those skills (only) to cab rides and not to making democracy more efficient and robust?

We've spent too much time on offense -- against people, to build addiction, manipulation, and control -- and if we want a healthy society, one that isn't going to tear itself to shreds on behalf of a few wealthy radicals, we need to work on defense -- deprogramming psychological traps, tools that increase autonomy and self-control, media meta-analysis, etc.

Politics only works if we engage with it, putting ourselves and our ideas out there, and technologists have been reluctant to, because many of us saw that it would devolve into this mess. Well, the mess happened anyway -- can we at least get engaged in trying to fix it?


You are mistaken if you think these things are not being done. The only difference is that people with different ideology than yours are doing it.


Could you point to some, of any ideology?

I agree that people of multiple political camps are using technology, but my experience has been that all camps are exploiting rather than empowering people. You seem to be of the opinion that I'm advancing a particular political agenda here. This is actually a policy-neutral concern -- I'm worried about how we're having the debate, not about the outcome.

(And there are some I know of, like CBT apps and productivity timers, but I don't think they're that effective as engineered. They also don't cover, eg, meta-analysis.)


I didn't mean to imply anything on your behalf. I just meant you might have missed the use of technology in politics, because you might not be part of the group (ideology) that is using it.

I will not go into the technological details, as I am just an end user. But I will say this: you are mistaken when you say people are manipulated. I see these tools as a cost-effective way of dispensing your message. People always vote based on their interests, whether short-term or long-term.


Well, if you're not going to name particular services or technologies, I think we're going to have to leave it at disagreeing -- but I do want to thank you for trying to make sure I'm informed!

Have a good one! :)


To be honest, I think a direct meritocracy would actually be beneficial - you build up a PageRank as a human being based on your contribution to society, and that determines the weight of your vote in the system.

It would essentially remove the career politician and allow people to actually understand what their politicians/highest-value contributors did, rather than what we have now. No one would ever go for it, but I actually think it'd be great if nurses, doctors, social workers, environmentalists, and yes, entrepreneurs who had done well, charity founders, bin men etc. all got more voting rights than me, who types into a computer all day for no real societal improvement.

The rich get a disproportionate amount of say anyway, it'd be more even than it is currently.

All public good works would obviously have to be recorded through the system and you might eventually be able to do away with money based upon this. I think it could definitely be worth building and doing away with local government as a starter ;-)


But I like how my vote counts as much as the votes of Bill Gates and the Walmart Greeter.

Your system wouldn't work for entire classes of people: coal miners, welders, fishermen, and yes, software engineers, to name a few.

Not everyone can work in a job that provides "real societal improvements", whatever that means. And not everyone wants to.

Yes, the rich get a disproportionate amount of say, but that's because they can just afford to spend money on advertising and lobbying.

Their vote isn't actually worth more than mine or yours. No one's is, and that's great, in my opinion.


You can vote and contribute as much or as little as you like to the functioning of society. That would be the point - but it is a fun experiment to think about the world in a different way. I'm pretty certain we shouldn't give people who are rich such excessive influence. Probably what would really happen is the exact opposite of what I want the system to do and everyone would vote for fascism for a laugh.


"Contributing to society" is a very broad and ill-defined subject. What I may consider beneficial, others might not see it that way. It is very subjective.


PageRank solves this, though, right? Everyone could choose to rank certain professions/people every few years - better than what we have now.


That isn't an appropriate description of how PageRank works.


Do I have to provide a link for you? I'm genuinely serious about everyone ranking (linking = rating) each other for positive contribution to society and then allowing them to have more of a vote. Maybe rankings would be a bit more like chess than PageRank but I'm convinced we could come up with something fairer than it is now.
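To make the proposal concrete, here is a minimal, purely illustrative sketch of a pagerank-for-people weighting. The endorsement graph, the names, and the damping value are all made up for the example; citizens endorse one another, and a standard power iteration turns endorsements into vote weights.

```python
# Illustrative only: a PageRank-style power iteration over a hypothetical
# "endorsement" graph, where each citizen endorses others they consider
# positive contributors to society.

def vote_weights(endorsements, damping=0.85, iters=50):
    """endorsements maps each citizen to the list of citizens they endorse."""
    people = list(endorsements)
    n = len(people)
    rank = {p: 1.0 / n for p in people}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in people}
        for p, targets in endorsements.items():
            if targets:
                # Split this person's current weight among their endorsees.
                share = damping * rank[p] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Endorses nobody: redistribute their weight evenly.
                for t in people:
                    new[t] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical three-person society: the nurse, endorsed by both others,
# ends up with the largest vote weight.
graph = {
    "nurse":   ["teacher"],
    "teacher": ["nurse"],
    "lurker":  ["nurse"],
}
weights = vote_weights(graph)
```

Even this toy version exposes the governance question: someone still has to decide who may endorse whom, how often, and with what damping - i.e., who picks the ranking algorithm.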


> Do I have to provide a link for you?

I'm not sure what you mean by this. Here are some links I'm providing to you.

https://en.wikipedia.org/wiki/PageRank

https://en.wikipedia.org/wiki/Whuffie


Who picks the ranking algorithm?

What we have now is basically what you propose, except with an ensemble of ranking algorithms based on algorithm selection and execution from the public.

Without a better source of algorithm selection, why would it end up any better?


Everyone picks the rankings via pagerank-for-people algorithm...


If everyone has a personal algorithm for ranking the people, then you have the same system with the same problems.

Everyone already has a personal measure of value for the people they're voting on and a system for computing that value.

Edit: If you don't have personalized algorithms, the algorithm selection process will be co-opted to create an autocracy.

We actually see a blend of those problems in the modern system, where voters have disengaged from the primary process and allow the aggregate ranking algorithm to go unchecked.

Of course, more ranking algorithms doesn't resolve how you mediate conflicting outputs (which you've failed to address) and why that would produce better outcomes than elections.


I've only seen the movie, but this sounds like the type of society espoused by Robert Heinlein in Starship Troopers, where "service guarantees citizenship".


Value just isn't one dimensional like that.

One man's junk is another man's treasure!


Well, currently we only have money as a measure, so my system providing an alternative valuation of merit isn't a terrible idea...


There are many ways to give positive feedback. Money is only one system. Sure, it's prevalent, but it's not the only way people measure or indicate appreciation. For example, there are situations where a sincere gesture of gratitude means a lot more than money ever could.

Reminds me of the trap I often get into of thinking that if something isn't on google, it must not exist.

People often get caught thinking that something's common monetary value is equivalent to its worth, to its potential. That's just not true!


Why is my scoring system so much worse than the current ones? They can exist in parallel, no need to participate if you don't like it.


> Today, Singapore is seen as a perfect example of a data-controlled society. What started as a program to protect its citizens from terrorism has ended up influencing economic and immigration policy, the property market and school curricula.

Well, I know all the Smart Nation, Data.gov, GovTech and data science initiatives, but I think the influence is being exaggerated here.



US democracy has already been completely undermined by gerrymandering and brib- I mean "lobbying."


Oh you don't mean "lobbying", because that word is these days legally defined, and most actors in that sector prefer to see themselves as advisors, influencers and think tanks. Certainly not filthy lobbyists.


I can't tell if you're serious. Large corporations wouldn't lobby if they didn't get a return on investment. If you've never looked into it, follow the link below. The return on investment for lobbying is _massive_, and a lot of that lobbying is to the detriment of society in general.

http://www.npr.org/sections/money/2012/01/06/144737864/forge...


> 40% of today's top 500 companies will have vanished in a decade.

Does this have any grounding in facts or reason?


It's an extrapolation of current trends. It appears that the lifespan of Fortune 500 companies is falling, and we've lost most of the ones from the past century or so.


Seems like an extreme extrapolation.

I found this page listing Fortune 500 companies that failed from Dec 2000 to Jan 2012:

http://traderhabits.com/86-of-the1955-fortune-500-have-faile...

I don't know how accurate that is, but it lists 24 failures (about 5% of the Fortune 500), of which one company (Hostess) is listed twice, and at least two (American Airlines and GM) are still around.
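Taking that page's figures at face value (a rough sanity check only - the numbers above may themselves be inaccurate), the implied attrition rate is nowhere near 40% per decade:

```python
# Back-of-the-envelope using the linked page's figures: ~24 failures among
# 500 companies over the roughly 11 years from Dec 2000 to Jan 2012.
failures, companies, years = 24, 500, 11.1

# Constant-hazard assumption: convert the observed loss to an annual rate,
# then project that rate over a decade.
annual_attrition = 1 - (1 - failures / companies) ** (1 / years)
ten_year_loss = 1 - (1 - annual_attrition) ** 10

print(f"annual attrition ~ {annual_attrition:.2%}")   # well under 1% per year
print(f"implied 10-year loss ~ {ten_year_loss:.1%}")  # roughly 4-5%, not 40%
```

On these numbers, the "40% of the top 500 gone in a decade" claim would require attrition nearly ten times the observed rate.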


Or this?

"Almost all companies and institutions have already been hacked, even the Pentagon, the White House, and the NSA."


That one is a lot more believable, and in fact verifying the first two are already true only requires a quick Google search.


I wouldn't go as far as declaring democracy to be under existential threat just yet. The alleged malfunction in the US does not necessarily represent a trend.

Maybe we can learn from governance systems that have survived over a long period of time - say, Swiss direct referendums. What I also like in their system is that elected officials are rotated frequently: the president, for instance, is elected for a term of just one year.


So we're all going to have smart homes yet half of us are going to be made jobless?! Doesn't quite add up.


The claims about 'citizen score' in China are likely mostly bogus. https://www.techinasia.com/china-citizen-scores-credit-syste...


One could use the wisdom of crowds to build an artificial president with human oversight. I.e., we would not elect politicians; we would elect an artificial intelligence that represents our political will.


The original article title in German captures the issue better - "Digitale Demokratie statt Datendiktatur", roughly translating to "Digital Democracy instead of Data Dictatorship".


The most fascinating thing about this article, to me, is that it was published so recently but has no mention of Cambridge Analytica.


I gather that it was published in 2015 (noted above) so before Cambridge Analytica became public knowledge.


Big Data has big competitors: Big Governments.

"All animals are equal"

"All animals are equal. But some aninals are more equal than others"


Democracy will. Representative government will not, and will be replaced by [redacted].


I'd like to think that this headline is a clever rhetorical [ab]use of Betteridge's law, but that may broadly give the editorial staffs of the world too much credit.

(for any who haven't heard of Betteridge's law, "Any headline that ends in a question mark can be answered by the word no")


A lot of people seem to conflate "a threat to democracy" and "people I dislike getting elected".

I'd say those people are the real threat to democracy.


Big Data, no, AI, maybe.


Will news agencies survive sensationalistic headlines?


Having read "Colossus" by DF Jones as a teen, I know how this will end. It doesn't end well for humans.


Will my straw shack survive a hurricane? There are far bigger risks with artificial intelligence.


In the long run, yes, but we don't know the timeline. We need to survive long enough to have a long run...


I hope people will realize that more liberal systems are the way to go - no regulation can offset this, because no government will ever follow it. I just hope they'll realize soon enough.


Stop conflating the term "democracy" with a (neoliberal) Constitutional Republic.


"Republic" and "Democracy" are not mutually exclusive. Many republics are democracies (including the US); many are not. Republic just means not monarchy.


No, it won't in the long run as we know it. The most important political asset of individuals will be their cloaking abilities, in order to avoid plunder by government. Once AI and technology allow individuals to produce and exchange with invisible currencies, without any political intervention or regulation, then no matter how politicians morph, they won't be able to "see" us.

Of course democracy will morph too towards trust networks where privacy, anonymity and invisibility will be key factors in collective decision making. Market forces at their best.

I am a big follower of Bruno Frey and his political theories for liberty, one of the modern panarchists.


Democracy is meant to be exercised by responsible voters, meaning: people who vote with their money. If technological unemployment means that most people will be unemployable and dependent on UBI or some other form of handout, there won't be a democracy anymore.


Not a downvoter; what do you mean people who vote with their money?


Someone who votes decides in that way how he wants his tax money to be spent: what will the person he votes for likely do with it? If you pay no taxes, you are not a responsible voter. If such people form a majority at some point, democracy is dead. UBI recipients don't fund the government; it is the other way around. They shouldn't vote (or some way must be found to make their votes irrelevant, as in Russia, which already has the same problem because of oil money), or they will run any nation into the ground.

Maybe it should even be formal: to vote, you must voluntarily renounce your UBI entitlement for the next electoral cycle.


He means people who have skin in the game.


That being the case, he is implying there is only fiscal skin, and that people on UBI will have no fiscal matters to care about, which sounds utopian.


I guess he is alluding to a popular conception of welfare recipients as people who are only content to collect their checks, and don't have any other aspirations or financial/social stakes.

Which might apply to some, but is a huge caricature on the whole. And even where it does apply, it's more because of a lack of opportunities and a felt sense of systemic injustice than because people are actually lazy stereotypes due to DNA or something.


No one on UBI will ever vote to have their UBI reduced...

Bread and circuses


This is more or less what I meant. Why would someone who is useless, and knows it's irreversible, vote for anything but more free stuff?


Well, receiving payment from the state doesn't make one "useless" except in the eyes of the protestant work ethic, and all the moralistic garbage that comes with it.

In fact people creating much monetary worth could be many orders of magnitude more useless and destructive for society than a welfare recipient.



