The AI Threat to Open Societies (georgesoros.com)
201 points by malloryerik 20 days ago | 146 comments



This reminds me of the "Do artifacts have politics" paper by Langdon Winner [1]. He argues that technologies have inherent political traits.

Nuclear power is considered to be supportive of autocratic political systems since nuclear power plants need centralized planning and networks to be effective. Solar power is considered democratic since anyone can harness it. It's an interesting paper and definitely worth a read.

On similar lines, I feel the internet is a democratizing force, since it allows anyone to publish data and anyone to consume it, and is (somewhat) difficult to control centrally. AI, on the other hand, is a centralizing force, since the most powerful AI can be managed and powered by the most powerful institutions.

[1] https://www.cc.gatech.edu/~beki/cs4001/Winner.pdf


Indeed, one could consider machine learning an authoritarian technology if given power over individuals. AI's fundamental problem is that it winds up being a "results oriented" approach where an individual's characteristics are weighed by black-box systems and the individual is judged without any recourse, or even any exact idea of what the criteria are.
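To make that concrete, here's a toy sketch of the kind of black-box scorer being described; every weight and threshold here is a hypothetical stand-in, but the structure shows the problem: the subject receives only a verdict, never the criteria.

```python
import math

# Hypothetical "learned" coefficients -- opaque to the person being scored.
WEIGHTS = [0.8, -1.3, 0.4, 2.1]
BIAS = -0.5

def score(features):
    """Return an approval probability; no per-feature explanation is exposed."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

def decide(features, threshold=0.5):
    # The individual sees only "approved" or "denied", not why.
    return "approved" if score(features) >= threshold else "denied"
```

Even in this four-weight example the decision boundary is not something the subject can inspect or contest; with millions of parameters the opacity only deepens.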


Well, we’re judged by human brains every day, which are even more opaque.


This is a sorta standard rejoinder... but other humans have to account for and take responsibility for their decisions, and part of that process is that other humans judge individuals as individuals, which almost by definition machine-learned classifiers cannot. ML is great for dealing with mass-produced artefacts or standardised processes, but it's just not the right tool for reasoning over domains that are not full of approximately similar things. That the Chinese government (and Google, and Facebook) think that humans are in that category speaks volumes about them.


> we’re judged by human brains every day, which are even more opaque.

In modern society, the decisive things determining life-direction are determined by humans ostensibly using objective criteria. Educators have conditions for grades, bosses write progress reports, courts apply written law, etc. This situation is termed bureaucratic society and the rule of law.

Authoritarianism is one kind of interaction that we all know - when an individual can issue orders without recourse. There's everything from abusive bosses to dictators here. It's a common part of human relations too, just a less desirable side.


In the long term human brains are input and computationally limited.


The internet is out of citizens' control. The web was such a democratic space, but browsers narrow that space down because they are centralized products. A DRM-enabled-only web (if it ever comes) will kill the web.


Accessible isn't necessarily the same thing as democratic. The future of the Internet was always decided essentially behind closed doors. No surprise that it is now being done at the big companies rather than 'by the public'. Any country could regulate the Internet in a democratic direction if it wanted to, but that isn't very popular from what I have seen.


I don't understand - why are browsers centralised?


How many major browsers do you know? How hard is it to implement your own? How hard is it to just read and roughly audit the source code? How much control do you have over the features of the browser?


Well, several are open source. I know that they have vast and complex code bases and are extremely hard to understand or modify but it is possible and diverse communities are working on them.

For me the centralisation is at the search engine and the content generation; both have narrowed and narrowed and narrowed.


George Orwell made much the same point in 1945:

http://www.orwell.ru/library/articles/ABomb/english/e_abomb


"ages in which the dominant weapon is expensive or difficult to make will tend to be ages of despotism, whereas when the dominant weapon is cheap and simple, the common people have a chance"

All we need to do is democratize AI. It can be well argued that it is happening.


So you're thinking the best way to defeat (for some notion of "defeat") a centralizing AI is a myriad of user-controlled AIs (even if they are hosted e.g. on AWS)?

I had never thought of this, but it seems to make sense.


Hopefully we're not even at the point where we've found the best way to train AIs. It's possible that the field may be revolutionized by new methods.


No, training modern neural networks requires a ton of data and a ton of computing power, both of which are available only to the biggest organizations.
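For a rough sense of scale, here's a back-of-envelope estimate using the common ~6 FLOPs-per-parameter-per-token rule of thumb; every figure below is an illustrative assumption, not a measurement.

```python
# Back-of-envelope training cost; all numbers are illustrative assumptions.
params = 1e9                   # a 1-billion-parameter model
tokens = 1e11                  # 100 billion training tokens
flops = 6 * params * tokens    # ~6 FLOPs per parameter per token

gpu_flops_per_sec = 1e14       # ~100 TFLOP/s sustained per accelerator (assumed)
gpu_days = flops / gpu_flops_per_sec / 86400
# Works out to roughly 70 accelerator-days for even this modest setup.
```

Scale params and tokens up by a couple of orders of magnitude and the cost lands firmly in big-organization territory.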


We're not talking about Apollo-level investments.


A 2nd amendment for “weaponized” AI?

“The right of the people, to code, maintain, and develop weaponized artificial intelligence shall not be infringed.”


Base internet is democratizing at complexity = 1.

The moment complexity on the web goes up, your need for governance structures goes up and the web becomes a force for centralized control of entities on the web: Human or otherwise.

The higher the level of complexity, the better the tools required to manage and analyze data. Therefore the better the tools available to analyze and manage humans.

I guess, if you project this, then the highest levels of the web are probably fully centralized and firewalled networks.

At some point you have to deal with attackers, inimical and hostile networks, and attacks to take over your "stack" of complexity/society. With the internet and high enough complexity, you can finally attack some subset of human behavior, privacy, brains, or information with automation.

The internet is probably the equivalent of flight-based damage to fortresses.

And this is before people discuss something like singularity AI.


The internet once required a huge amount of centralized resources and infrastructure to be realized.

I don't see why "AI" as we know it now is any different.


The internet allows governments, as time progresses, to tightly monitor all communication... Therefore, under different governments, the internet may be a tool of immense totalitarian control.


One could argue that it already is...


You make a case why the private sector should be encouraged to monitor as well.


I'm really curious : could you try to make the case here? I'd be very interested in this perspective.


It's essentially the same argument that some people make against gun control: to keep government in check.


Ummhhh, except here I think that this will lead to powerful people dominating the lives of everyone else. I don't see the similarity to gun control - with guns individuals end up having power (or getting shot - I tend to agree with gun control), with surveillance they end up under the thumb of some plutocrat or politician.


An interesting perspective, however I do not believe that the Internet is democratising.

I regard democracy (and capitalism) as industrial-era approximate solutions to the institution-alignment problem (analogous to the AI alignment problem, but for governments and corporations). Both have their own failure modes; in the case of democracy, the failure mode is that people do not know objective truth and can be fooled by propaganda. The Internet has allowed cheap mass propaganda for as long as spam has existed, and this has become worse as propagandists learned to use internet-friendly memes.

If you take away (or corrupt) the electorate’s knowledge, I do not think it is okay to continue to call it “democracy” even if they all still vote.


None of these effects is new. One could print anything on pamphlets, or shout it in the town square before literacy was widespread.

When searching for that boots quote to put here, I found this reference with excellent 18th century quotes about the issue: https://quoteinvestigator.com/2014/07/13/truth/

There's the possibility that the internet has made it worse, but that's a theory I've yet to see evidence for. You could spread fake news a day before the election in ancient Rome or Greece, what would stop you? Today at least we have most people doubting candidate cannibalism stories because if they were true they would be on mainstream sites. What did they have back then? It was normal to not have access to a second news source to seek out at reasonable effort for months.

Even with all the garbage out there, I think cheap mass access to information was a net gain. Lies can be made up on the spot, while the truth must be carefully researched. Being able to put a well-researched rebuttal on a blog where someone might read it is something, something they didn't have before. Everyone would just go home after the people who could produce information on the spot had shouted it.


Indeed. My apologies for being unclear, as you are not the only one to misunderstand me. Obviously the Internet didn't result in the invention of propaganda; my point was merely that it makes propaganda much easier. It puts the tools of mass propaganda in reach of ordinary people. If you use "democratise" to mean "make equally available to all" (the meaning I object to), then you could say that it democratises propaganda.


> It puts the tools of mass propaganda in reach of ordinary people.

But this is exactly what we're saying is not new.

It only appears new to some because the 20th century was characterized by both expensive, government-licensed mass media (radio and television) and by the consolidation of the media industry. These factors had the effect of taking media access away from the common man and giving authority to a new set of gatekeepers, as well as, as we've increasingly seen, to the government, on which the media depends and which is also much more easily able to control the media's output than previously.

In the past, there was an incredible proliferation of newspapers, not by any means all or even mostly owned by the rich, and even the common man could afford to take out an ad or have some pamphlets printed. By doing this he could easily obtain a regional and even a national audience.

And in pre-printing press societies, most non-state news traveled by word of mouth, which was free and could not be controlled by the state except with difficulty. You might object this wasn't mass media, but the objection is moot - one person speaking in the Assembly in Athens was sufficient to move the entire state.

It appears that Internet media is quickly moving toward the 20th century model - owned and controlled by a small contingent of very wealthy men. The difference is that yes, the content is produced by the people, rather than generated by an editor and his staff, but nevertheless, it's clear that with government and established media pressure, social media networks are all too happy to control what opinions are allowed on their platform.


Indeed, there is nothing new about this issue at all. It was even one of the arguments used in the Constitutional Convention as to why the new American government needed to be much less democratic than it ended up being.

> Mr. GERRY. The evils we experience flow from the excess of democracy. The people do not want virtue, but are the dupes of pretended patriots. In Massts. it had been fully confirmed by experience that they are daily misled into the most baneful measures and opinions by the false reports circulated by designing men, and which no one on the spot can refute. One principal evil arises from the want of due provision for those employed in the administration of Governmt. It would seem to be a maxim of democracy to starve the public servants. He mentioned the popular clamour in Massts. for the reduction of salaries and the attack made on that of the Govr. though secured by the spirit of the Constitution itself. He had he said been too republican heretofore: he was still however republican, but had been taught by experience the danger of the levilling spirit.

http://avalon.law.yale.edu/18th_century/debates_531.asp

Outright propaganda, foreign 'interference', and yellow journalism were extremely common in the era that the Constitution and the Bill of Rights were created! The media of the day was substantially less centralized, even a medium-sized town had multiple newspapers, and political pamphlets of all natures were common.

Complaints that the people are voting 'wrong' because they're misinformed by 'designing men' are absolutely nothing new and date back to our earliest democracies. Really, the only interesting part is that the complainants are almost always defending entrenched, powerful interests. In American history, it has always been populist and/or socialist movements decried as the result of a woefully misinformed populace, and it was always analogous movements in e.g., Athens and Rome (the Gracchi, for example.) Propaganda supporting the entrenched rich and powerful and their policies is, somehow, never seen as an issue.

Going back again to the early years of America, the hysteria about 'propaganda' led to the passage of the Alien and Sedition Acts, which were used to suppress the Democratic-Republicans and their press outlets...which ended with the Federalists being crushed in the 1800 election. When I hear complaints about propaganda in democracies, what I hear is the complaints of the rich and powerful that the common people might be voting in their own interest, instead of those of the rich and powerful.


While this may be true, it doesn’t provide a scapegoat for why I can’t buy a house in SF.


Have you read Walter Lippmann's Public Opinion (1922)? It's well worth reading and rereading. Its theme, in a way, is "that people do not know objective truth, and can be fooled by propaganda". It's about things he observed in WWI: the role of the media, the failure of the media, the start of the propaganda age. My point is: none of these problems are new. Lippmann was writing about very similar problems 100 years ago. (In that and his other early books, e.g. Liberty and the News - on how, somehow, we don't require journalists to be qualified, although they have such a vital role in democracy.) It's hard to think that democracy is possible, given these problems.

(Saward in his Democracy says that Schumpeter in the mid-20th C simply redefined 'democracy' as 'whatever they now have in countries we call democratic', thus avoiding the worrying about democracy that produced the fascinating books of the early 20th C on problems with it. e.g. Lippmann's, Michels' Political Parties)


I see the Internet as somewhat neutral in relation to democracy in already existing liberal democratic societies. It promotes diffusion of information, but not synthesis.

The Internet really promotes anarchism, even today with most of the discussion going through a few choke points.

The Internet is an anarchist force, and as such it's prone to informal elites: cliques that control people without well-defined responsibility, and often without people's knowledge or consent. The controlling groups can be rich individuals, corporations, foreign states, marketers, and political operators.


> He argues that technologies have inherent political traits.

What are the traits of money and billionaires?


> Solar power is considered democratic since anyone can harness it

But that makes literally no sense. You can’t fabricate solar panels in your backyard. You need a factory handling toxic chemistry and a supply chain of rare earth elements from open cast mining! Solar is no less centralised than nukes.


You don't need to use photovoltaic cells to harness solar power. It can be as simple as mounting a coil of tube on a flat board and painting the whole thing black, then pumping water through it. Backyard solar water heater made with cheap materials you can get at any hardware store.


Um, ya, I think you are missing the power disparity there.


Yeah, it’s like saying you can make a waterwheel therefore hydroelectric is democratised. Go tell that to the Hoover Dam!


technologies having inherent political traits is a consequence of a much deeper and more important aspect of technology -- that it has inherent traits of human economics. if you take a set of technological realities that might be imposed on some society, it leads to that society eventually reaching exactly one stable state.

a very simple example is the technology of guns. this technology leads inevitably to a state of the world that is characterized by the presence of gun-utilizing nations. this is because the world is a kind of market, and when guns exist the only entities that are competitive are those that use guns.

right now, market economies dominate the world. even china utilizes markets for its own internal economic affairs. when AI comes, this will turn on its head -- market economies will no longer be competitive and centralized ones will replace them. this will be a pretty shocking change.

also, rather soon, humans will stop being present. this is because they will no longer be competitive, their existence will be vestigial and therefore fragile and vulnerable to the slightest perturbation. it will be similar to endangered animals in the present -- no longer competitive, their existence no longer perpetuates itself and therefore is terminated for any old reason, such as condominium developments or pollution.


> market economies will no longer be competitive and centralized ones will replace them

Assuming that the AI's have accurate and timely information. I suspect that one of the (many) reasons why modern economies can be dysfunctional is that the information feedback loop is often either inaccurate, incomplete or lagging badly. Solve that problem and you're gold.


[flagged]


your comment is rude. the emotional nature of your comment reflects the fact that you find something in my comment troubling but do not have any way of proving it wrong. what you do instead is attack the character of the person who said it. do you seriously think that you are able to see into the mind of a person based on a terse and straight-forward relation on the economics of AI? can you not recognize that this is impossible? if you have any actual, substantive counter-argument to what i have said, i will gladly receive it. otherwise, i must say that it is you who should keep comments to oneself.


He probably read Nick Land without understanding him. Or some cybernetics critics.


AI is only in a supporting role here. It's massive data collection and storage at low cost that's the problem. Machine learning just helps to digest the data.

Tech has solved the problem of previous attempts at Big Brother - you just couldn't afford enough watchers to watch everybody all the time. Now, you can. It's even profitable.


And tech will also solve the power problem of autocratic regimes. Right now, a dictator can't control every single individual in the country on their own. They need police that can search your apt at 5am because you made a blog post critical of the government. They need lawyers to convict you, prison guards, etc. In such regimes, the dictator still has to put people into places of power so that the will of the dictator is executed. But people can refuse orders, and they can declare someone else to be president.

Coups are one of the biggest dangers for dictators, and many rebellions are semi-coups where people in power just step aside, letting the rebels do their thing.

Now, enter AI. Now the dictator could give all that power to an AI instead of intermediaries. An entire government run by two entities: the Dictator, with direct control over the AI that runs the remainder. All the bomb-equipped drones, all the self-driving tanks, all the bipedal robots with their machine guns. All the robot prison-guards and the robot judges to put critical people into prison. If this AI would answer only to the head of government, then any kind of upheaval would become unrealistic and impossible.


The exertion of control doesn’t even have to be blatant or physical. If social credit is a determining factor in an individual’s ability to function effectively in society, e.g. by way of credit ratings and access to opportune employment, there’s a clear risk of increased self-censorship and conformance, thereby debilitating serious political opposition even in its infancy.


Yes, what I describe above is something that would follow the social credit system. The social credit system is, I think, a great tool to keep the masses at bay and for their micro-management. But I think that very powerful people will mostly stay outside of it. Those are danger #1 for any dictator. Xi has purged their ranks of anyone too dangerous, but any avid follower can turn into someone who betrays the trust from above, given the right circumstances. Part of the job of a dictator is to make sure that those circumstances are avoided for their powerful underlings, e.g. by giving them access to spoils and riches.


If this social credit or indirect influence is perfectly fair, it's not really a problem. However, it's very likely it will be unfair to some, and create tension and unrest that will lead to the same problems we have without AI.


There is no fairness in this, just a measure of distance from some perceived “ideal” by The Party.

Anyone outside of that ideal, as determined by The Party in its discretion, is by design simply robbed of influence and, if they persist, resources, up to and including food and water. Anyone who helps them loses the same. And finally, any breakaway groups have so few resources they can be easily hunted down and re-educated.


I would argue it's unrealistic to make such a system that would be fair in the eyes of all coming generations. Imagine they implemented a social credit system in the Middle Ages and added or deducted points based on whether or not you followed the laws and norms of that era.


> I would argue it's unrealistic to make such a system that would be fair in the eyes of all coming generations. Imagine they implemented a social credit system in the Middle Ages and added or deducted points based on whether or not you followed the laws and norms of that era.

The middle ages basically had exactly that system, but the enlightenment still occurred.


Don't worry about robots and self-driving tanks. That, I can assure you, will not happen. Anyone still selling that is selling a pipe dream.

The acid test for self-driving cars is whether they can drive down a road in India or any other country where people don't have western standards of driving discipline. The worse, the better.

HOWEVER, and this is the idea I would pursue and sell to authoritarian governments everywhere: you don't need Level 5 autonomy to get autonomous cars.


Your assurances mean nothing. The robots are already here; drones are already "self-driving".


Total fantasy. As is the combination of words "AI" -what people really mean is "a data scientist coded up a classifier."



have you seen AlphaStar beat a professional StarCraft 2 player? that is a system making decisions with imperfect information and a dynamic opponent.

i guess you could call it a classifier but it is a complex one that can make some great decisions


It's still a fully encapsulated set of inputs and outputs.

The opponent is never able to change the rule set itself. Motivated attackers will target the AI itself, or its drones.

Effectively, autonomous vehicles are not there yet - when the first real war with these things happens, the evolutionary hacking and counter-hacking cycle is going to be ridiculous, if they ever get deployed.


Self-driving tanks don't seem particularly useful, but how about autonomous drones that swoop in to take out specific targets? Perhaps these targets are identified by running face recognition on a network of cameras that spans a city. What if the targets are chosen by an algorithm that monitors social media and uses unknown criteria to determine each person's probability of being an enemy of the state?


I wouldn't say they are comparable.

I mean, the drones have to kill people in the first place, so if they end up doing that, it's a feature, not a bug.

Drones are still relatively easier than tanks or cars which have to obey rules of traffic and interaction in day to day use.

Flying killer autonomous drones don't have to follow highway rules or worry about many of the cases that civilian life throws up.


I am not sure why you need the dictator. The AI would presumably be perfectly fine to control everything. It can easily wrest control from the dictator and have all the drones etc. at its disposal.

You don’t even need to go that far. Just require everyone to get a chip implanted so you can enforce punishments, and place cheap recording devices everywhere. You can easily figure out who’s saying what to whom, search everything, figure out the gist and predict every meeting before it happens. You don’t even need the chip - just make it so that a person can’t buy food, and boom. Easy social control. And inescapable because people who don’t comply are ostracized by friends otherwise the friends don’t eat either. You get the algorithm right and you can break resistance same way FBI broke Mafiosos’ “code of honor”.


> The AI would presumably be perfectly fine to control everything. It can easily wrest control from the dictator and have all the drones etc. at its disposal.

It can, but AIs generally don't get the will to do this built into them. The idea that they evolve this will naturally is anthropomorphizing.


Why would they need to evolve this? Is the dictator examining the source code? How would he know exactly what is being coded? By the time everyone else is eliminated it’s just him and the AI. How would the dictator know what the AI will do?


The dictator can deal with this problem "can the AI be trusted" in a similar way to how he's dealing with the "can my underlings be trusted" problem. Dictators let people spy on their direct underlings and report any suspicious activity to them. What dictators need is good loyalty tests. Something like that is impossible for humans, you can't read thoughts. And even if someone is perfectly loyal at time A, they can always change their mind. Source code on the other hand can be read.

Surely, for this the people who check the AI need to be competent and loyal to the dictator. But their loyalty only needs to exist for the duration of the check. Compare that to the general problem of having to trust people in your government for as long as your government exists.


AI produces impenetrable results that you can’t figure out how it arrived at the answer. So how is the dictator gonna audit that? LOL!


The technology isn't a fundamental change, it's just a more deadly weapon.


It's not more deadly. Weapons of mass destruction already exist. The difference isn't increased ability to do harm, it's increased ability to target harm. It's no fun being dictator when all your subjects are dead, so old-fashioned WMDs are of limited use. Sufficiently advanced AI lets you selectively kill the troublemakers without harming the compliant subjects, and without having to rely on potential competitors for your dictator position. This is a fundamental change.


Not a fundamental change. There are plenty of tools for selectively targeting individuals (e.g., guided munitions, watch lists, teams of agents that monitor your every move). It could potentially reduce the resources required for such selective targeting, but if past dictatorships are any indication, these resources are not the limiting factor.


"Who watches the watchmen?" The traditional method relies on a vast network of humans, any one of whom could betray you. Acceptance of bribes is common in dictatorships, and if your control system is powerful enough to suppress troublemakers it's also powerful enough to replace you. AI can avoid this dependence on humans.


You don’t even need citizens or serfs in this case.


That's true. It's why dictators today who extract their wealth from natural resources don't really care if their population is starving, because the ruler doesn't need them. At least, he doesn't need the uneducated, weak people who are starving.


I think they still serve a purpose — to feed the dictator’s ego.


Comments like this are why I spend my lunch on HN. Thanks!


Dictators have one mind; oligarchies have a handful, or scores. AI (as is, not sci-fi) is, I think, a multiplier or lever for a mind. Open societies' differentiation is potentially millions or billions of minds. To realise that differentiation, open societies have to create a population that has its own mind, develop a substrate that supports individual thinking, and of course manage the interaction and flow of all these minds. This is where the internet started to get interesting, but has now run into sand - most people still lack the tools and opportunities to think for themselves, and while this is true they can be dominated.


The trend seems to suggest that AI will make us think less critically, not more. The singularity posits a massive intelligence boost, the likes we can't comprehend. But what if instead it makes us all fools, unable to think for ourselves and determine true/false or right/wrong?


Like the Eloi in The Time Machine? Yes. It is already happening in those areas where the technology does a better job, such as directions. The vast majority of city dwellers are already fools when it comes to growing crops, hunting, sewing, or whatever.

If the computer is right 99.9% of the time, why bother thinking for yourself? First you stop figuring out the steps to do X, then why do X at all?

If computers get far far better at humor and charisma (don’t laugh, people laughed when they thought AI wouldn’t win at Starcraft or Chess) then why do you need human companions?

If robots get better at sexual feedback and stimulation, humans would prefer them to other humans.

People would use robots for everything, may live out fantasies and occasionally socialize with other humans.


I think a physical paper map is superior to a "smart" phone.

I was once asked for help by a helpless 20-year-old who was tapping on his phone searching for a train station. I dragged him over to the *gasp* subway map that was literally 5 meters away. He followed reluctantly, and I found the station in 10 seconds.

(Note that I also had not heard of that particular station before.)


History has already answered this numerous times. AI is a tool much like a semi-automatic rifle or the printing press. The people who understand it well have an accelerated advantage over those who do not. It is both a good and bad thing depending upon who wields the tool. Like any force multiplier it will be misunderstood (magic) and improperly regulated by both open and closed societies alike.

Like with any tool the real victors will be those who adapt it to solve basic immediate problems: a utility.


The question people need to address, is whether AI is a tool like the evolution from the Gun to the automatic rifle.

OR whether AI is the kind of tool like flight, which made fortresses redundant.

We have many areas of thought and society, which have been protected by walls of 'difficulty' making them intractable to large scale, automated and effective manipulation.

This may no longer be the case, and AI may well represent the more fundamental change of the second kind.


Why must people know those things? As with any new technology the second and third order consequences are unclear and will likely be entirely surprising beyond any intended (or not) consequences.

Here is what my own experience in programming and my understanding of history have taught me. A sufficiently advanced understanding of a useful technology (doesn't even have to be new) allows a small group dominance over spaces traditionally controlled by large organizations or heavy investments in large technology. This causes fear and panic when it becomes clear. The common current solution is to attempt to purchase the competition, but what if the competition is not available for purchase?

Perhaps the most common fallacy regarding new technology is that you can destroy the competition with a sufficiently large force. History has proven this false numerous times and it remains false with modern technology. Consider the Battle of Agincourt, the Battle of Crécy, or the establishment of the Yuan dynasty. Consider how much current corporations are investing in AI research with thousands of dedicated developers.

A superior technology can easily compete with Amazon's AI unit, for example, even when your team of 5 developers is up against their 5,000. The reason is that it takes time to learn and develop the superior understanding necessary to create the superior technology, while the army of competing developers will resist abandoning their perceptions of reality even when confronted with critical evidence. An army of 5,000 developers is an establishment of culture, practices, identity, and perceptions, resulting in a large, stagnant wall.


Some technology is obviously more useful to the population at large, such as the printing press. Other technology is more useful to the tyrant, such as the helicopter only they can afford, used to monitor the city centers for groups of those unruly students. The idea of all technology being neutral and equally useful for any cause is absurd.

The discussion is what category AI falls under. I can instantly see its value for the centralized surveillance state: find the signal within all the noise of the billions of calls, messages, and CCTV images a state can capture. I can't immediately think of anything useful the technology could do for a few people fighting for democracy. They do not even have access to any such vast data trove that's usually at the beginning of the AI value chain.


A tyrant is just a ruler.

> The discussion is what category AI falls under.

A tool or utility. A hammer drives nails into wood, enabling construction far superior to hands alone. A hammer can also crush skulls. A hammer is a utility. The use to which it is put is left to the user. The same objective reasoning applies to successful gun laws, which is often fatally missed by both the gun lobby and advocates of gun restrictions.


Check any dictionary, except a 2000 year old Latin one. A tyrant is “a cruel and oppressive ruler” (Oxford American). Since you know that this is how the word has been commonly used for at least decades, and it is entirely clear from context that it was the intended meaning here, why do you feel it appropriate to waste everyone’s time with such a transparently bad-faith attempt to derail the discussion?

Then, maybe, actually engage with the argument, instead of just restating its opposite without adding anything to support that viewpoint.


Because in the context of the conversation (about a tool), whether or not a leader is cruel, the result is still the same. Objectively, mean people do not define a tool's nature.


Full video of his speech. His delivery adds to the message:

https://m.youtube.com/watch?v=7ZGoXP-BWoc



Some thoughts I just had, and haven't really developed yet beyond the initial moment of having them:

It used to be that you would see theories about the internet threat to closed societies, and I suppose those threats are real too. But perhaps the threats to closed societies are the obvious threats, while the threats to open societies were not immediately obvious. My thinking on this is somewhat murky, but if there are a bunch of obvious threats and a bunch of hidden threats, then I guess people guard against the obvious ones.

Finally maybe anything that threatens one type of human society must also threaten all types, only the threats are changed around for each type.


Open societies are threats to closed societies, and vice versa. An open world is stable; a world of closed societies is also fairly stable internally, although much more likely to go to war with each other.

The idea of "the end of history" was that Open had won, and it was just a question of mopping up the remaining closed societies. It turns out that maybe the open societies weren't as open as they thought.


> It turns out that maybe the open societies weren't as open as they thought.

Or that an "open world" isn't as stable as predicted.

Would we even have the same "open" societies without China's repressive society and their willingness to finance western consumption?


Maybe one such example is the effect of "democratization of information distribution". Combine this with standard analytics (and in future AI) and you have machinery that can deliver highly targeted messages, each crafted to have maximum effect on the opinions of a specific recipient.

In the centralized Internet (think Facebook, Twitter, Google) this is something that we can to some extent understand and maybe control. In a proper decentralized world, this won't be possible. Privacy features will also protect the identities of trolls.

In the past, the content distributed was limited by the imagination of people. In future, no such limitation exists. You will have AI (think generative adversarial networks) learning to create fake content tailored for specific persons.


It is self-evident that labeling any knowledge, technology, or artifacts as being against open society is complete nonsense and contradicts the very concept of an open society itself.


It’s not nonsense, as you cannot carry out mass-killing events like the Holocaust or the Gulag without a modern-ish technology like the railway system. Or, to go directly to the source, let’s use Goebbels [1], as he explained that without modern technologies like the radio or the airplane the Nazis’ ascent to power wouldn’t have been possible:

> It would not have been possible for us to take power or to use it in the ways we have without the radio and the airplane. It is no exaggeration to say that the German revolution, at least in the form it took, would have been impossible without the airplane and the radio.

[1] https://research.calvin.edu/german-propaganda-archive/goeb56...


So? let's make any modern-ish technology illegal?


Let's make leveraging technology to harm or control those who cannot effectively wield it illegal, or at least compensate for such power imbalances.


Why not?


Genghis Khan didn't need modern technology to kill millions.


Genghis Khan had a technological advantage over his competitors: the Mongolian bow, the stirrup, the logistical ability to coordinate over distances that were unthinkable for most of the other nations that ever tried to chase nomads into the steppes, and then all the knowledge and tools necessary to forage in the steppe.


Yeah he did, it was called the stirrup.



Jacques Ellul makes very similar points in his work La Technique (The Technological Society). He traces how technology and power interplay, from the invention of the pocket watch and the steam engine all the way to computers. It's one of the best works I've come across in this space and a must-read for technology critics and advocates alike. I wish it were (even) more widely read here[1] than it is, but finding it in languages other than English or French might prove a challenge. His follow-up work "Propaganda: The Formation of Men's Attitudes" is also phenomenal. These two books are among the best I discovered in 2018 (if not in the last decade).

[1] https://hn.algolia.com/?query=Jacques%20Ellul&sort=byPopular...


I think the biggest threat to open societies is that nobody will be able to read the warnings against dangers if they don’t have money to subscribe to a thousand publications :/


Even though the reason why the author holds these positions is perfectly understandable, I don't subscribe to his views, and there is an issue with the complete liberalization of movement across borders.

Fundamentally, AI is not a threat or a blessing, it is a technology.


Perhaps this is my pessimistic view, but wouldn't any government that monitors its civilians for the potential of being "against" the government (thus threatening that government's power) be authoritarian?

Let's take the Second Amendment in the States as an example. The right to bear arms is meant to stop oppressive government, but you cannot purchase anti-aircraft or anti-tank or anti-anything-really weapons. So the might of the government's force is disproportionately in favour of the very government the amendment is supposed to prevent oppression from, yeah? (Not that I'm in favour of overthrowing the government of the States, whatsoever; this is just an observation.)

When the Snowden revelations came about, this was the equivalent of China's list of people to send for "re-education". Granted, in the States there's been no re-education (that we're aware of), but it's only a small step from taking the information gleaned from those "lists" to doing just that.

At what point do we draw the line between authoritarian and not? Shouldn't the very notion of putting people on a list count? After all, it was a list of people that suffered the consequences of the Night of the Long Knives, was it not?

I fail to see how an open society would be considered "open", if it includes secretive programs like that. Isn't the principle of secret programs against a government's citizenry the antithesis of an open society?

Maybe I'm missing something here, but to say that there are any open societies left (while probably dystopian in nature and bereft of any hope for the future) belies the very fact that there don't seem to be very many (if any at all) open societies left.

(I'm probably talking bullshit circular logic, so feel free to ignore this tirade of discontent.)


I think maybe you’re comparing “open society” to utopia. There will always be bad actors that have to be accounted for and handled in some way. What is bad and how bad is always up for debate, but their existence will always be with us. In my opinion that requires lists. I do agree that most countries including the US have gone too far from what I’ve read.


Aye, your argument has merit. I was just looking at it from the perspective of whether or not an open society is what we're in to begin with, before arguing that other societies should be more "open". It would be hypocritical to say something of the sort if not; which is something Soros seems to hint at with his posit (e.g.: corporations and governments combining to create a far more in-depth surveillance state).

To give a principled example, the <insert the appropriate three-lettered agency here> keeps a list of anyone who even looks-up things like TOR or TAILS. Users have their own reasons for looking the information up, and those are notwithstanding the fact they're automatically on a list, simply for doing so.

I get bad actors are a part of reality (let's be honest: they always have been) but, as I proffered originally, at what point do we decide a society is no longer open?

For example, if those who participated in Occupy[0] were on a list maintained by their respective government[s] - just for exercising their democratic right to protest - then should we consider any society open anymore? It's no secret that those lists, themselves, are shared amongst communities like the Five Eyes.

Of course, the best example would be the no-fly lists but given that those are black box, in and of themselves, should that be demonstrative of no longer being an open society?

The O.G. comment was merely conjecture, out loud, as it were; just as this comment is... Though, I'm still left bereft of what constitutes an open society, anymore, since that definition seems to have drifted in modern times.

(Of course, maybe I am obfuscating the definition of an open society with the precept of a utopia but maybe that's my stubborn hopefulness for a better future than the one that I know that's coming...)

[0] - https://en.wikipedia.org/wiki/Occupy_movement


Or you could use these things called "juries" and "trials" and "laws", but that's inconvenient and sometimes results in having to admit that the alleged bad people weren't actually doing anything wrong, so people in power don't like those.


You post an article about open societies and then I can't access it without subscribing - LOL

Just my opinion then, without reading it:

- In China we already see what a government can do with technologies, including AI, to control the behaviour of people.

- The West always seems to think it's not going to get that evil around here because people are morally superior. I don't believe that: with Hitler in Germany we saw how quickly things can change, and with refugees and poor citizens in all those countries we see how badly people can treat others and still think it's correct - even without an AI they'd believe blindly.

- Too many techies have no morals & ethics. While studying and also in business, I saw most people just being interested in personal welfare and earning the most money they can.

TL;DR: This (whatever 'this' will be) is definitely going to happen unless enough people find their way back to humanity - with or without the support of IT/AI/... (just tools).


Btw you can read the article for free by registering, though it's a hassle. Might be easier to read from some of the alternative links with the same speech posted by others here:

https://www.wired.com/story/mortal-danger-chinas-push-into-a...

https://www.georgesoros.com/2019/01/24/remarks-delivered-at-...


Thank you very much :)

But that's exactly what I dislike: for everything you wanna do on the net you're forced to register. I just can't see it anymore and instantly close stuff that behaves that way.


> You post an article about open societies and then I can't access it without subscribing - LOL

While I also dislike being forced to subscribe to read something, free access to publications and open society are completely different things. After all, you had open societies before the digital age, when the vast majority of newspapers were paid.

Obviously, the two are linked, but I'm not even sure whether gratis access to all journalism encourages an open society or discourages it. (You can imagine just-so stories for either case:

1. free access to articles → everyone can read them → better informed society → open society

2. free access to articles → high quality journalism goes out of business as they have to compete against zero-cost alternatives → misinformed (or even deliberately disinformed) society → people less likely to fight for openness.

)


> ... free access to publications and open society are completely different things

Of course they are, and it wasn't my intention to mix them up, but as you also noted they are connected somehow.

> 2. free access to articles → high quality journalism goes out of business as they have to compete against zero-cost alternatives → misinformed (or even deliberately disinformed) society → people less likely to fight for openness.

This can be right in the capitalistic framework many have to live in and deal with. On the other hand this encourages people to write what is sold best / for the highest price.

You can see how damaged the scientific publication system is as organizations and individual scientists are rated by how many articles (quantity not quality) they publish and where they are published (artificial reputation vs. real value).

I am convinced that everything must be open to really pave the way towards an open society.

The problem with this ATM is just the capitalistic framework. People's sole purpose in life is earning enough money to not perish before a kind of natural death occurs [which is also connected to the amount of money you have].

The need to enslave yourself without purpose just to exist and the reality that success can be easily achieved just by making money somehow (quality doesn't matter and in most cases is punished because you don't earn enough margin) does not foster quality in any way.

Same goes for writing in my opinion. Better to read something from someone who is really dedicated and interested in the topic than some mercenary writer.


It is not AI. It is the Chinese Communist Party.

Otherwise might as well have a paper called “The Internet Threat ...” or “The pen and paper threat ...”

America fought the Soviet Union, but is OK with a communist party in China that is all-in and everywhere in society (and copies things on top of contributions) ... it might be too interwoven and too big to ...

Good luck.


But the Chinese party is effective.

It may be abhorrent to the societal values of most of the people reading this comment, but it's great to MANY people in China.

But even that, is a red herring.

Strongly controlled and moderated communication networks are currently less brittle than open societal systems.

Even on a micro scale, on forums, you can see the ideas the Chinese apply being put to use out of sheer necessity.

When you add in state-level actors with the ability to crunch the numbers, it's entirely possible to create the impression of a consensus in human minds with the use of sock puppets/pseudo-human accounts.

The holy grail of such research is discovering intent of speakers or groups of speakers. This will be used to stop hate speech as much as it will to kill dissidents.

This is a right mess and any tool created to clean it up (other than human effort), is liable to create more problems.


Also the US antagonized the Soviet Union but ultimately it fell by itself.

Remains to be seen what will happen in the Chinese case.


The Soviets dumped all their resources into war-economy output. The Chinese have consumer goods for the world paying for their war machine, which is basically the strategy the US used to have.


Yeah. The SU was much smaller than the US-led side (~300m vs ~700m), it's very different with China, so that will be interesting.


I've been interested in science and engineering since my youngest days and I've always considered myself a hacker from way back. At school, my fellow schoolmates nicknamed me 'The Boffin' as back then the terms 'hacker' and 'nerd' hadn't yet been coined. My profession is electronics engineering and IT and for my entire career I've followed and worked with the latest developments in the field. Right: I'm an insatiable technophile!

My other studies were in philosophy (ethics, etc.) and government and over the years I've found my formal training in them truly invaluable, they've broadened my perception and worldview about the ways science and engineering dovetail into society and make the world a better place by improving the lives of its citizens.

I have to agree with the tenet of George Soros' message for many reasons, but from my perspective perhaps the most significant one is that we are moving headlong, at a frenetic pace, from an industrial age into a post-industrial one driven by advanced technologies (and primarily by the use of information). We're entering a new era whose paradigms will have morphed into ones so very different from anything humankind has ever before witnessed, and the changes are coming so fast that they'll almost certainly cause fear and social disruption on an unprecedented scale unless we act now to adapt technology to our human needs and not those of governments and large multinational corporations—after all, they ought to be our servants, not vice versa as at present.

At present, society is both ill prepared and ill equipped to handle monumental changes of such a magnitude without considerable preparation, and we've hardly even begun to discuss the matter let alone draw up viable plans for society to adapt to them.

Leaving ML and AI aside for a moment, let's just look at the metaphysical† aspects of the Google/Facebook revolution. Both behemoths, but especially Facebook, are floundering in the mire over very important issues such as those concerning privacy, fake news, damaging effects on democracy and politics in general, and there's precious little light on the horizon to shine upon any potential solution let alone any commonly-agreed methodologies or viable options.

Let's look at what has effectively happened here: internet technologies evolved to a stage where worldwide networks such as Facebook became feasible and thus they were built without any real thought of the wider social consequences other than the paramount need to make money. Zuckerberg et al would like us all to believe that they had actually executed both their financial and social objectives as they'd planned but as we now know this is far from being the full truth.

Not only did Big Tech companies have secret plans all of their own with the deliberate intention of exploiting users, but they kept these intentions hidden from both governments and users alike, so no independent scrutiny was possible until the inevitable leaks occurred. The lesson from this is that with no oversight, undesirable metaphysical effects arose from their complex systems, the consequences of which have come back to bite them. Inevitably, this will happen again and again with ML and AI unless careful, sophisticated (and mandatory) regulation is introduced. To think otherwise would be foolhardy in the extreme.

It's clear to many that these 'geniuses' of Big Tech would have been fully cognizant of and understood how new physical properties often emerge from complex systems that are not foreseen from just examining their less complex building blocks. Moreover, similar but metaphysical processes evolve in human minds when they encounter complex systems. For instance, examining fine architecture brings an aesthetic experience to humans that no examination of a brick to the nth degree reveals. Therefore, there can be little if any excuse for Zuckerberg and his cronies for not anticipating in advance emergent human problems (such as those that have arisen from the Cambridge Analytica fiasco).

When in 1847 Italian chemist Ascanio Sobrero* invented nitroglycerine and immediately perceived its extreme dangers he became so scared and concerned about what he'd actually done that he kept the fact secret for over a year. However, unlike Sobrero who clearly had ethics on his side, the likes of Zuckerberg et al never gave any serious consideration of the consequences of their 'inventions'. As day follows night, they were expecting human problems but they simply ignored them until it was too late. Their lack of concern for humans—the hands that actually feed them—is palpable in the extreme; ethically and morally they're bankrupt.

As history illustrates yet again, we're now well past the point where it's safe to leave extremely powerful technologies in the hands of political novices who possess precious few ethics—or whose few ethics are easily trumped by their zealotry for certain technological fixes and/or financial objectives. The fact that they may be the inventors or owners of newer technologies such as Facebook is irrelevant; what matters first and foremost is what is best for the citizenry and society at large.

The Google, Facebook et al cases ought to have been non-starters from the very beginning, as the general will of the populace should have nailed them dead from the outset, but that never happened for many reasons, including the highly addictive properties that Big Tech deliberately designed into their pernicious technologies. Tragically, over the past 40-50 years or so, many traditional ethical values which would have put the kibosh on these Tech Giants long before they'd gotten started have largely evaporated as our societies have become more homogeneous and international—nowadays, the lowest common ethical denominator is just that—pretty low.

Given that societies are still struggling with very basic ethical issues, such as the withering of our hard-fought democratic processes and the rise of totalitarian power from both governments and Tech Giants, we're not even at ground level when it comes to solving the ethics of ML and AI. For starters, there are serious cultural differences (hence little or no agreement) over how to resolve the infamous trolley-car moral dilemma‡. At present it is abundantly clear that the various societies of an international world are not able to reach a common worldview or consensus on this conceptual problem, let alone a specific ML/AI incarnation thereof; consequently we have precious little hope of solving the even greater moral and ethical dilemmas that will undoubtedly be created by these fast-advancing technologies.

It seems to me very first steps must be taken to forge a common moral and ethical consensus for humankind. We need to first begin with the easiest problems to agree upon such as the inviolability of human life and then work upwards. Expect this to take a long time and it will. Of course, the huge dilemma is how to hold technologists and technocrats sans ethics (and common sense) at bay whilst various consensuses are being reached.

I am strongly of the opinion that (as I was fortunate enough to experience), we should begin by ensuring that core training for all engineers, scientists, technologists and technocrats—and for that matter, politicians—also include compulsory training in key philosophical subjects, especially ethics, moral philosophy and formal logic as well as basic/essential political science (the study of government).

I'm realistic enough to realise that, despite such ethical studies being both core and compulsory, there is every chance that they will have only a minor impact in changing human nature, if any at all (at least in the beginning). Nevertheless, their compulsory nature will achieve one major objective, which is that every engineer, scientist, technologist and technocrat will be forced to learn the essentials of morals and ethics as they should be practiced in our increasingly technological societies.

Thus when their technologies go belly-up and damage both societies and people's lives, with compulsory training in ethics under their belts, the Zuckerbergs of this world will no longer be able to claim ignorance as an excuse for their negligence; they will not be able to say that they 'did not know' or that 'we never considered that outcome'. The only likely excuse they'll have left is 'force majeure'—and it had better be a pretty good instance thereof or they'll be toast. …And good riddance.

Good effort George, keep the pressure up.

_____________

† As many will be aware, the uncomplicated definition of 'metaphysics' is 'above and beyond physics', that's to say ontological a priori deductive concepts of existence, of being, of becoming, reality etc. As far as Physics is concerned, metaphysics deals with ethereal, intangible concepts that are inconsequential to its Laws but nevertheless they're key to human existence as we know it, what it is to be human, our values, beliefs and ethics are metaphysical.

‡ The Moral Machine experiment, Nature, vol. 563, pp. 59-64, 2018-10-24: https://www.nature.com/articles/s41586-018-0637-6 Especially note the graphs in Fig. 3: 'Country-level clusters'.

* Incidentally, Alfred Nobel was a student of chemist Ascanio Sobrero.



It is kind of funny how this article focuses on China and totally leaves our surveillance capitalism (Google, Facebook) out of the picture. If this article were unbiased, then we would see examples of how the AI threat impacted the last election in the USA as well.


It’s probably presented in such a way as to be convincing to the maximum number of people. Being addressed to an English-speaking audience that already tends to be suspicious of a Chinese grab for economic power, that suspicion serves as common ground to establish rapport with the audience.

This being Soros, any connection to the US election would already doom his message to be disregarded, or even taken as evidence for its opposite.


Very good point indeed!

That's the western filter bubble and since this trade war it's even more obvious.

Typical double standards. The news tell you who the bad guys are and everyone knows that the news never lie...


I disagree with the premise. The threat from AI, while serious, isn't immediate compared to the threat of using things like AI to justify inequality. You want to tell me that suddenly the global elite became concerned about relatively esoteric technology, to the point where common politicians talk about it like it is the tax rate? And that it just happens to coincide with the effects of globalization, crony capitalism and extreme rent seeking? If the excuse wasn't AI it would be something else.


Any non-paywalled version?



Mods, please consider changing the link to this.

[edit] From the speaker himself: https://www.georgesoros.com/2019/01/24/remarks-delivered-at-...


It's still an inconvenience but you can read the article for free by registering.


[flagged]


Without a clear explanation of your statement (why does he deserve this euphemized special place?) this is just a personal attack, which seems to be the common response to any assertion or action coming from him.


It’s not a personal attack, it’s a thinly veiled death threat.


I've never gotten this hate for Mr. Soros. Could you take some time and perhaps expound on why you dislike him? It'd be of great interest to me if I had an individual perspective on why he's disliked rather than the odd mishmash of reasons I find online about him.


George Soros in George Soros' own words [1]: "I fancied myself as some kind of god ... If truth be known, I carried some rather potent messianic fantasies with me from childhood, which I felt I had to control, otherwise they might get me in trouble. ... It is a sort of disease when you consider yourself some kind of god, the creator of everything, but I feel comfortable about it now since I began to live it out."

He also did a very forward interview with 60 minutes. This [2] is a transcript of the video [3]. A couple of quotes from him there,

- "I think I’ve been blamed for everything. I am basically there to make money. I cannot and do not look at the social consequences of what I do."

- "Whether I or somebody else does whatever is happening in the markets really doesn’t make any difference to the outcome. I don’t feel guilty because I’m engaged in an amoral activity which is not meant to have anything to do with guilt."

And you can find countless similar quotes and discussions from him. I respect him since he is honest about his motivations and beliefs which is something that cannot be said about the vast majority of people, let alone billionaires. At the same time he is undoubtedly a textbook narcissistic megalomaniac whose sole interest in the world is George Soros. And he will happily and openly share his willingness to engage in awful actions if he thinks it would benefit himself. Consequently, I can understand why many - particularly when we put on these charades of virtue - would find him a less than desirable person.

[1] - http://articles.latimes.com/2004/oct/04/opinion/oe-ehrenfeld...

[2] - https://pastebin.com/MMFmtPzd

[3] - https://www.youtube.com/watch?v=QSyczwuTQfo


    I've never gotten this hate for Mr. Soros
Das Magazin published a German article about that a couple of weeks back: https://www.dasmagazin.ch/2019/01/12/die-finkelstein-formel

Buzzfeed published an English translation: https://www.buzzfeednews.com/article/hnsgrassegger/george-so...


If this article is correct, could hate for Soros be seen as a sign for a mind tainted by right wing propaganda?


I think that is often the case. As the most prominent funder of left wing / human rights causes he tends to be a bit of a hate figure for the far right.


I personally don't hate him, rather dislike him. I don't think he wants democracy and that's my problem with him. (And I have, from the distance, seen some decision-making in the NGOs, which alerted me of this problem.)

I think my critique is very similar to Anand Giridharadas http://www.anand.ly/winners-take-all/ and I believe many people who hate him intuitively feel the same way (although perhaps do not understand why).


Many nationalists and traditionalists hate him because he doesn't share their values. Rather, he uses his wealth to influence the movement of power from national to supranational institutions.

In particular the Soros-sponsored Open Society Foundation has had many dozens of meetings with the EU Commission and a hand in forming multicultural-based policies.

My own thought is that this political agenda is naive and may result in a backlash that ends up achieving the opposite of what he intends. There are already signs of this happening.


In your view, are "open societies" naive as an idea, or are the means employed to achieve them naive? You might very well be right, but imagine desiring a more open world: what would be the correct course to achieve it?

Also, what mechanisms would you employ if you were a person with lots of money who desired to shape the world into a better place? One approach is the one taken by Bill Gates, who focuses on technological and social problems but steers away from political issues. But a great deal of the evil in this world stems from political causes. Surely there should be a way to improve that, beyond just focusing on the technical and hoping the politics sorts itself out.


The means are naive. I hope we eventually achieve an open world, but right now societies are different enough that authoritarian impulses toward integration seem foolish, especially when they alienate large swathes of the citizenry.

I don't have any specific thoughts on what policies should be followed, just a sense that a slower and more organic course is preferable. Even then, reactionary forces will always exist.


[flagged]


This account has been posting mostly unsubstantive or off-topic comments. In order for this site to gratify our intellectual curiosity, we all have to actually bring some thought to the discussion. Would you please?

https://news.ycombinator.com/newsguidelines.html


Says the account posting an off topic comment...


Indeed, but a necessary evil as a moderator of this site. We try to do it as infrequently as possible.


https://www.snopes.com/fact-check/george-soros-ss-nazi-germa...

> Claim: During World War II, George Soros was a member of the SS (a Nazi paramilitary organization) or a Nazi collaborator who helped confiscate property from Jews.

> Rating: False.


Not sure I'd hang my hat on Snopes as the arbiter of truth. Soros himself admits, in the 60 Minutes interview, to having had a role in the property confiscation. However, IMO his actions at that time are morally justifiable given he was a 14-year-old boy attempting to survive in a historically unique context.

However, WRT this speech at Davos, I suspect his problem isn't with mass surveillance as much as it is with China performing the mass surveillance. Soros is an ideologue and I'm sure he'd be just fine with the technology being used to support his political agendas.


That article is debunking the claim that he was in the SS. No one ever claimed he was.


No. It debunks that he was either a member of the SS or a Nazi collaborator. He was neither.

Your initial comment, though, asserted that he had been a Nazi collaborator and that therefore we shouldn't listen to him.

You can listen to him or not, but your decision shouldn't be based on the claim that he was a Nazi collaborator, because he never was.


When Trump and Soros sound the same, they're probably right.


Here is an idea: instead of sacrificing AI, maybe we should sacrifice organized governments.


It has been tried many times in world history and is currently being tried in some parts of Somalia. Most people do not prefer it, so if you do, maybe you should move there?

My guess is that we are about to see "No True Scotsman" very soon after I post this.


That's funny coming from George Soros, especially given that it was delivered at Davos.

