Nuclear power is considered supportive of autocratic political systems, since nuclear plants require centralized planning and networks to be effective. Solar power is considered democratic, since anyone can harness it. It's an interesting paper and definitely worth a read.
Along similar lines, I feel the internet is a democratizing force, since it allows anyone to publish data and anyone to consume it, and it is (somewhat) difficult to control centrally. AI, on the other hand, is a centralizing force, since the most powerful AI can be owned and operated by the most powerful institutions.
In modern society, the decisive factors determining the direction of one's life are decided by humans ostensibly using objective criteria. Educators set requirements for grades, bosses write progress reports, courts apply statutes, and so on. This situation is termed bureaucratic society and the rule of law.
Authoritarianism is one kind of interaction that we all know: an individual who can issue orders without recourse. It covers everything from abusive bosses to dictators. It's a common part of human relations too, just a less desirable side.
For me the centralisation is at the search engine and the content generation; both have narrowed and narrowed and narrowed.
All we need to do is democratize AI. It can be well argued that it is happening.
I had never thought of this, but it seems to make sense.
“The right of the people, to code, maintain, and develop weaponized artificial intelligence shall not be infringed.”
The moment complexity on the web goes up, your need for governance structures goes up, and the web becomes a force for centralized control of entities on the web, human or otherwise.
The higher the level of complexity, the better the tools required to manage and analyze data. Therefore the better the tools available to analyze and manage humans.
I guess, if you project this, then the highest levels of the web are probably fully centralized and firewalled networks.
At some point you have to deal with attackers: inimical and hostile networks, and attempts to take over your "stack" of complexity/society. With the internet and high enough complexity, you can finally attack some subset of human behavior, privacy, brains, or information with automation.
The internet is probably the equivalent of what flight-based attack did to fortresses.
And this is before people discuss something like singularity AI.
I don't see why "AI" as we know it now is any different.
I regard democracy (and capitalism) as industrial-era approximations of solutions to the institution-alignment problem (analogous to the AI alignment problem, but for governments and corporations). Both have their own failure modes; in the case of democracy, the failure mode is that people do not know objective truth and can be fooled by propaganda. The Internet has allowed cheap mass propaganda for as long as spam has existed, and this has become worse as propagandists learned to use internet-friendly memes.
If you take away (or corrupt) the electorate’s knowledge, I do not think it is okay to continue to call it “democracy” even if they all still vote.
When searching for that boots quote to put here, I found this reference with excellent 18th century quotes about the issue:
There's the possibility that the internet has made it worse, but that's a theory I've yet to see evidence for. You could spread fake news a day before the election in ancient Rome or Greece; what would stop you? Today at least most people doubt candidate-cannibalism stories, because if they were true they would be on mainstream sites. What did they have back then? It was normal to go for months without access, at reasonable effort, to a second news source.
Even with all the garbage out there, I think the cheap mass access to information was a net gain. Lies can be made up on the spot, while the truth must be carefully researched. Being able to put a well-researched rebuttal on a blog where someone might read it is something, something they didn't have before. Back then, everyone would just go home after the people who could produce information on the spot had shouted it.
But this is exactly what we're saying is not new.
It only appears new to some because the 20th century was characterized by both expensive, government-licensed mass media (radio and television) and by the consolidation of the media industry. These factors had the effect of taking media access away from the common man and giving authority to a new set of gatekeepers, as well as (we've increasingly seen), the government, which the media depends on, and which is also much more easily able to control the output of the media than previously.
In the past, there was an incredible proliferation of newspapers, not by any means all or even mostly owned by the rich, and even the common man could afford to take out an ad or have some pamphlets printed. By doing this he could easily obtain a regional and even a national audience.
And in pre-printing press societies, most non-state news traveled by word of mouth, which was free and could not be controlled by the state except with difficulty. You might object this wasn't mass media, but the objection is moot - one person speaking in the Assembly in Athens was sufficient to move the entire state.
It appears that Internet media is quickly moving toward the 20th century model: owned and controlled by a small contingent of very wealthy men. The difference is that yes, the content is produced by the people rather than generated by an editor and his staff, but nevertheless, it's clear that with government and established-media pressure, social media networks are all too happy to control what opinions are allowed on their platforms.
> Mr. GERRY. The evils we experience flow from the excess of democracy. The people do not want virtue, but are the dupes of pretended patriots. In Massts. it had been fully confirmed by experience that they are daily misled into the most baneful measures and opinions by the false reports circulated by designing men, and which no one on the spot can refute. One principal evil arises from the want of due provision for those employed in the administration of Governmt. It would seem to be a maxim of democracy to starve the public servants. He mentioned the popular clamour in Massts. for the reduction of salaries and the attack made on that of the Govr. though secured by the spirit of the Constitution itself. He had he said been too republican heretofore: he was still however republican, but had been taught by experience the danger of the levilling spirit.
Outright propaganda, foreign 'interference', and yellow journalism were extremely common in the era that the Constitution and the Bill of Rights were created! The media of the day was substantially less centralized, even a medium-sized town had multiple newspapers, and political pamphlets of all natures were common.
Complaints that the people are voting 'wrong' because they're misinformed by 'designing men' are absolutely nothing new and date back to our earliest democracies. Really, the only interesting part is that the complainants are almost always defending entrenched, powerful interests. In American history, it has always been populist and/or socialist movements decried as the result of a woefully misinformed populace, and it was always analogous movements in e.g., Athens and Rome (the Gracchi, for example.) Propaganda supporting the entrenched rich and powerful and their policies is, somehow, never seen as an issue.
Going back again to the early years of America, the hysteria about 'propaganda' led to the passage of the Alien and Sedition Acts, which were used to suppress the Democratic-Republicans and their press outlets...which ended with the Federalists being crushed in the 1800 election. When I hear complaints about propaganda in democracies, what I hear is the complaints of the rich and powerful that the common people might be voting in their own interest, instead of those of the rich and powerful.
(Saward in his Democracy says that Schumpeter in the mid-20th C simply redefined 'democracy' as 'whatever they now have in countries we call democratic', thus avoiding the worrying about democracy that produced the fascinating books of the early 20th C on problems with it. e.g. Lippmann's, Michels' Political Parties)
The internet really promotes anarchism, even today with most of the discussion going through a few choke points.
The internet is an anarchist force, and as such it's prone to informal elites: cliques that control people without well-defined responsibility, often without people's knowledge or consent. The controlling groups can be rich individuals, corporations, foreign states, marketers, and political operators.
What are the traits of money and billionaires?
But that makes literally no sense. You can’t fabricate solar panels in your backyard; you need a factory handling toxic chemistry and a supply chain of rare-earth elements from open-cast mining! Solar is no less centralised than nukes.
A very simple example is the technology of guns. This technology leads inevitably to a state of the world characterized by the presence of gun-utilizing nations, because the world is a kind of market, and when guns exist the only entities that are competitive are those that use guns.
Right now, market economies dominate the world. Even China utilizes markets for its own internal economic affairs. When AI comes, this will turn on its head: market economies will no longer be competitive, and centralized ones will replace them. This will be a pretty shocking change.
Also, rather soon, humans will stop being present, because they will no longer be competitive; their existence will be vestigial and therefore fragile and vulnerable to the slightest perturbation. It will be similar to endangered animals in the present: no longer competitive, their existence no longer perpetuates itself and is therefore terminated for any old reason, such as condominium developments or pollution.
Assuming that the AIs have accurate and timely information. I suspect that one of the (many) reasons modern economies can be dysfunctional is that the information feedback loop is often inaccurate, incomplete, or badly lagging. Solve that problem and you're gold.
Tech has solved the problem of previous attempts at Big Brother - you just couldn't afford enough watchers to watch everybody all the time. Now, you can. It's even profitable.
Coups are one of the biggest dangers for dictators, and many rebellions are semi-coups where people in power just step aside, letting the rebels do their thing.
Now, enter AI. The dictator could give all that power to an AI instead of to intermediaries: an entire government run by two entities, the Dictator with direct control over the AI that runs the remainder. All the bomb-equipped drones, all the self-driving tanks, all the bipedal robots with their machine guns. All the robot prison guards and the robot judges to put critics in prison. If this AI answered only to the head of government, then any kind of upheaval would become practically impossible.
Anyone outside of that ideal, as determined by The Party in its discretion, is by design simply robbed of influence and, if they persist, resources up to and including food and water. Anyone who helps them loses the same. And finally, any breakaway groups have so few resources that they can be easily hunted down and re-educated.
The Middle Ages basically had exactly that system, but the Enlightenment still occurred.
The acid test for self-driving cars is whether they can drive down a road in India or any other country where people don't have Western standards of driving discipline. The worse the better.
HOWEVER (and this is the idea I would pursue and sell to authoritarian governments everywhere), you don't need Level 5 autonomy to get autonomous cars.
I guess you could call it a classifier, but it is a complex one that can make some great decisions.
The opponent is never able to change the rule set itself. Motivated attackers will target the AI itself, or its drones.
Effectively autonomous vehicles are not there yet; when the first real war with these things happens, the evolutionary hacking and counter-hacking cycle is going to be ridiculous, if they ever get deployed.
I mean, the drones have to kill people in the first place, so if they end up doing that, it's a feature, not a bug.
Drones are still relatively easier than tanks or cars, which have to obey rules of traffic and interaction in day-to-day use.
Flying killer autonomous drones don't have to follow highway rules, or worry about the many cases that civilian life throws up.
You don’t even need to go that far. Just require everyone to get a chip implanted so you can enforce punishments, and place cheap recording devices everywhere. You can easily figure out who’s saying what to whom, search everything, figure out the gist, and predict every meeting before it happens. You don’t even need the chip - just make it so that a person can’t buy food, and boom: easy social control. And it's inescapable, because people who don’t comply are ostracized by friends - otherwise the friends don’t eat either. Get the algorithm right and you can break resistance the same way the FBI broke the Mafia’s “code of honor”.
It can, but AIs generally don't get the will to do this built into them. The idea that they evolve this will naturally is anthropomorphizing.
Surely, for this the people who check the AI need to be competent and loyal to the dictator. But their loyalty only needs to exist for the duration of the check. Compare that to the general problem of having to trust people in your government for as long as your government exists.
If the computer is right 99.9% of the time, why bother thinking for yourself? First you stop figuring out the steps to do X, then why do X at all?
If computers get far far better at humor and charisma (don’t laugh, people laughed when they thought AI wouldn’t win at Starcraft or Chess) then why do you need human companions?
If robots get better at sexual feedback and stimulation, humans would prefer them to other humans.
People would use robots for everything, might live out fantasies, and would only occasionally socialize with other humans.
I was once asked for help by a helpless 20-year-old who was tapping on his phone searching for a train station. I dragged him over to the (gasp) subway map that was literally 5 meters away. He followed reluctantly, and I found the station in 10 seconds.
(Note that I also had not heard of that particular station before.)
As with any tool, the real victors will be those who adapt it to solve basic immediate problems: a utility.
Or whether AI is the kind of tool, like flight, that made fortresses redundant.
We have many areas of thought and society that have been protected by walls of 'difficulty', making them intractable to large-scale, automated, effective manipulation.
This may no longer be the case, and AI may well represent the more fundamental change of the second kind.
Here is what my own experience in programming and my understanding of history have taught me. A sufficiently advanced understanding of a useful technology (doesn't even have to be new) allows a small group dominance over spaces traditionally controlled by large organizations or heavy investments in large technology. This causes fear and panic when it becomes clear. The common current solution is to attempt to purchase the competition, but what if the competition is not available for purchase?
Perhaps the most common fallacy regarding new technology is the belief that you can destroy the competition with a sufficiently large force. History has proven this false numerous times, and it remains false with modern technology. Consider the Battle of Agincourt, the Battle of Crécy, or the establishment of the Yuan dynasty. Consider how much current corporations are investing in AI research, with thousands of dedicated developers.
A superior technology can easily compete with Amazon's AI unit, for example, even with a team of 5 developers against their 5000. The reason is that it takes time to develop the superior understanding of technology needed to create the superior technology, while the army of competing developers will resist abandoning their perceptions of reality even when confronted with critical evidence. An army of 5000 developers is an establishment of culture, practices, identity, and perceptions, resulting in a large stagnant wall.
The discussion is what category AI falls under. I can instantly see its value for the centralized surveillance state: finding the signal within all the noise of the billions of calls, messages, and CCTV images a state can capture. I can’t immediately think of anything useful the technology could do for a few people fighting for democracy. They don't even have access to the kind of vast data trove that usually sits at the beginning of the AI value chain.
> The discussion is what category AI falls under.
A tool or utility. A hammer drives nails into wood, enabling construction far superior to hands alone. A hammer can also crush skulls. A hammer is a utility; the use it is put to is left to the user. The same objective reasoning applies to successful gun laws, which is often fatally missed by both the gun lobby and advocates of gun restrictions.
Then, maybe, actually engage with the argument, instead of just restating its opposite without adding anything to support that viewpoint.
It used to be that you would see theories about the internet's threat to closed societies, and I suppose those threats are real too. But perhaps the threats to closed societies are the obvious ones, while the threats to open societies were not immediately obvious. Thinking on this is somewhat murky, but if there are a bunch of obvious threats and a bunch of hidden threats, then I guess people guard against the obvious ones.
Finally, maybe anything that threatens one type of human society must also threaten all types; only the threats are changed around for each type.
The idea of "the end of history" was that Open had won, and it was just a question of mopping up the remaining closed societies. It turns out that maybe the open societies weren't as open as they thought.
Or that an "open world" isn't as stable as predicted.
Would we even have the same "open" societies without China's repressive society and their willingness to finance western consumption?
In the centralized Internet (think Facebook, Twitter, Google) this is something that we can to some extent understand and maybe control. In a proper decentralized world, this won't be possible. Privacy features will also protect the identities of trolls.
In the past, the content distributed was limited by people's imagination. In the future, no such limitation will exist: you will have AI (think generative adversarial networks) learning to create fake content tailored to specific persons.
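The GAN idea alluded to above can be caricatured in a few lines: a generator adjusts itself until a discriminator can no longer tell its output from the real thing. The sketch below is a deliberately toy, stdlib-only caricature (a single-parameter "generator", a mean-comparison "discriminator"; all names are invented for illustration), not a real neural GAN:

```python
import random
from statistics import mean

rng = random.Random(0)

def real_samples(n):
    # "Real" content: values centred on 4.0, a stand-in for the
    # statistical signature of genuine posts.
    return [rng.gauss(4.0, 0.5) for _ in range(n)]

def looks_real(x, real_mu, fake_mu):
    # Toy "discriminator": a sample looks real if it sits closer to
    # the mean of real data than to the mean of the fakes.
    return abs(x - real_mu) < abs(x - fake_mu)

# Toy "generator": one parameter theta; fakes are theta + noise.
theta = 0.0
for _ in range(200):
    # Generator update: nudge theta toward whatever the discriminator
    # currently rewards, i.e. the statistics of real content.
    theta += 0.1 * (mean(real_samples(64)) - theta)

# Once trained, fakes match the real distribution, so the
# discriminator is fooled roughly half the time.
real_mu = mean(real_samples(256))
fakes = [theta + rng.gauss(0.0, 0.5) for _ in range(256)]
fool_rate = mean(looks_real(x, real_mu, mean(fakes)) for x in fakes)
```

A real GAN replaces both toys with neural networks trained jointly against each other; the point here is only the adversarial loop, which is what lets tailored fakes converge toward the statistics of genuine content.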
> It would not have been possible for us to take power or to use it in the ways we have without the radio and the airplane. It is no exaggeration to say that the German revolution, at least in the form it took, would have been impossible without the airplane and the radio.
IBM and the Holocaust
Fundamentally, AI is not a threat or a blessing, it is a technology.
Let's take the second amendment in the States as an example. The right to bear arms is meant to stop oppressive government, but you cannot purchase anti-aircraft or anti-tank or anti-anything-really weapons. So the might of the government's force is disproportionately in favour of the very government the amendment is supposed to prevent oppression by, yeah? (Not that I'm in favour of overthrowing the government of the States, whatsoever; this is just an observation.)
When the Snowden revelations came about, this was the equivalent of China's list of people to send for "re-education". Granted, in the States there's been no re-education (that we're aware of), but it's only a small step from taking the information gleaned for those "lists" to doing just that.
At what point do we draw the line between authoritarian and not? Shouldn't the very notion of putting people on a list count? After all, it was a list of people that suffered the consequences of the Night of the Long Knives, was it not?
I fail to see how an open society would be considered "open", if it includes secretive programs like that. Isn't the principle of secret programs against a government's citizenry the antithesis of an open society?
Maybe I'm missing something here, but to say that there are any open societies left (however dystopian in nature and bereft of hope for the future they may be) belies the very fact that there don't seem to be very many (if any at all) open societies left.
(I'm probably talking bullshit circular logic, so feel free to ignore this tirade of discontent.)
To give a principled example, the <insert the appropriate three-lettered agency here> keeps a list of anyone who even looks up things like TOR or TAILS. Users have their own reasons for looking the information up, but notwithstanding those reasons, they're automatically on a list simply for doing so.
I get bad actors are a part of reality (let's be honest: they always have been) but, as I proffered originally, at what point do we decide a society is no longer open?
For example, if those who participated in Occupy were on a list maintained by their respective government[s] - just for exercising their democratic right to protest - then should we consider any society open anymore? And it's no secret that those lists are themselves shared amongst communities like the Five Eyes.
Of course, the best example would be the no-fly lists but given that those are black box, in and of themselves, should that be demonstrative of no longer being an open society?
The O.G. comment was merely conjecture, out loud, as it were; just as this comment is... Though, I'm still left bereft of what constitutes an open society, anymore, since that definition seems to have drifted in modern times.
(Of course, maybe I am obfuscating the definition of an open society with the precept of a utopia but maybe that's my stubborn hopefulness for a better future than the one that I know that's coming...)
 - https://en.wikipedia.org/wiki/Occupy_movement
Just my opinion then without reading it:
- in China we already see what a government can do with technologies including AI to control the behaviour of people
- the West always seems to think it won't be that evil around here because people are morally superior (I don't believe that: with Hitler in Germany we saw how quickly things can change, and with refugees and poor citizens in all those countries we see how badly people can treat others and still think it's correct - even without an AI they'd believe blindly)
- too many techies have no morals & ethics. While studying, and also in business, I saw most people being interested only in personal welfare and earning the most money they can
TLDR; This (whatever 'this' will be) is definitely going to happen unless enough people find their way back to humanity - with or without the support of IT/AI/... (just tools)
But that's exactly what I dislike: for everything you want to do on the net, you're forced to register. I can't stand it anymore and instantly close anything that behaves that way.
While I also dislike being forced to subscribe to read something, free access to publications and open society are completely different things. After all, you had open societies before the digital age, when the vast majority of newspapers were paid.
Obviously, the two are linked, but I'm not even sure whether gratis access to all journalism encourages an open society or discourages it. (You can imagine just-so stories for either case:
1. free access to articles → everyone can read them → better informed society → open society
2. free access to articles → high quality journalism goes out of business as they have to compete against zero-cost alternatives → misinformed (or even deliberately disinformed) society → people less likely to fight for openness.
> 2. free access to articles → high quality journalism goes out of business as they have to compete against zero-cost alternatives → misinformed (or even deliberately disinformed) society → people less likely to fight for openness.
This can be right within the capitalistic framework many have to live in and deal with. On the other hand, this encourages people to write whatever sells best / for the highest price.
You can see how damaged the scientific publication system is as organizations and individual scientists are rated by how many articles (quantity not quality) they publish and where they are published (artificial reputation vs. real value).
I am convinced that everything must be open to really pave the way towards an open society.
The problem with this ATM is just the capitalistic framework. People's sole purpose in life is earning enough money not to perish before a kind of natural death occurs [which is also connected to the amount of money you have].
The need to enslave yourself without purpose just to exist, and the reality that success can easily be achieved just by making money somehow (quality doesn't matter and in most cases is punished because you don't earn enough margin), do not foster quality in any way.
Same goes for writing in my opinion. Better to read something from someone who is really dedicated and interested in the topic than some mercenary writer.
Otherwise might as well have a paper called “The Internet Threat ...” or “The pen and paper threat ...”
America fought the Soviet Union, yet is OK with a communist party in China that is all-in and everywhere in the society (and copies things on top of its contributions) ... it might be too interwoven and too big to ...
It may be abhorrent to the societal values of most of the people reading this comment, but it's great to MANY people in China.
But even that, is a red herring.
Strongly controlled and moderated communication networks are currently less brittle than open societal systems.
Even on a micro scale, on forums, you can see the ideas the Chinese apply being put to use out of sheer necessity.
When you add in state-level actors with the ability to crunch the numbers, it's entirely possible to create the impression of a consensus in human minds with the use of sock puppets / pseudo-human accounts.
The holy grail of such research is discovering intent of speakers or groups of speakers. This will be used to stop hate speech as much as it will to kill dissidents.
This is a right mess and any tool created to clean it up (other than human effort), is liable to create more problems.
Remains to be seen what will happen in the Chinese case.
My other studies were in philosophy (ethics, etc.) and government and over the years I've found my formal training in them truly invaluable, they've broadened my perception and worldview about the ways science and engineering dovetail into society and make the world a better place by improving the lives of its citizens.
I have to agree with the tenet of George Soros' message for many reasons but from my perspective perhaps the most significant one is that we are moving at a frenetic pace headlong from an industrial age into a post industrial one that's driven by advanced technologies (and primarily through the use of information). We're entering a new era whose paradigms will have morphed into ones so very different from anything humankind has ever before witnessed and the changes are coming so very fast that they'll almost certainly cause fear and social disruption on an unprecedented scale unless we act now to adapt technology to our human needs and not those of governments and large multinational corporations—after all, they ought to be our servants, not vice versa as it is at present.
At present, society is both ill prepared and ill equipped to handle monumental changes of such a magnitude without considerable preparation, and we've hardly even begun to discuss the matter let alone draw up viable plans for society to adapt to them.
Leaving ML and AI aside for a moment, let's just look at the metaphysical† aspects of the Google/Facebook revolution. Both behemoths, but especially Facebook, are floundering in the mire over very important issues such as those concerning privacy, fake news, damaging effects on democracy and politics in general, and there's precious little light on the horizon to shine upon any potential solution let alone any commonly-agreed methodologies or viable options.
Let's look at what has effectively happened here: internet technologies evolved to a stage where worldwide networks such as Facebook became feasible and thus they were built without any real thought of the wider social consequences other than the paramount need to make money. Zuckerberg et al would like us all to believe that they had actually executed both their financial and social objectives as they'd planned but as we now know this is far from being the full truth.
Not only did Big Tech companies have secret plans all of their own with the deliberate intention of exploiting users but they kept these intentions hidden from both governments and users alike thus no independent scrutiny was possible until the inevitable leaks occurred. The lesson from this is that with no oversight, undesirable metaphysical effects arose from their complex systems the consequences of which have come back to bite them. Inevitably, this will happen again and again with ML and AI unless careful and sophisticated (and mandatory) regulation is introduced. To think otherwise would be foolhardy in the extreme.
It's clear to many that these 'geniuses' of Big Tech would have been fully cognizant of and understood how new physical properties often emerge from complex systems that are not foreseen from just examining their less complex building blocks. Moreover, similar but metaphysical processes evolve in human minds when they encounter complex systems. For instance, examining fine architecture brings an aesthetic experience to humans that no examination of a brick to the nth degree reveals. Therefore, there can be little if any excuse for Zuckerberg and his cronies for not anticipating in advance emergent human problems (such as those that have arisen from the Cambridge Analytica fiasco).
When in 1847 Italian chemist Ascanio Sobrero* invented nitroglycerine and immediately perceived its extreme dangers, he became so scared and concerned about what he'd done that he kept it secret for over a year. However, unlike Sobrero, who clearly had ethics on his side, the likes of Zuckerberg et al never gave any serious consideration to the consequences of their 'inventions'. As surely as day follows night, human problems were to be expected, yet they simply ignored them until it was too late. Their lack of concern for humans—the hands that actually feed them—is palpable in the extreme; ethically and morally they're bankrupt.
As history illustrates yet again, we're now well past the point where it's safe to leave extremely powerful technologies in the hands of political novices who possess precious few ethics—or whose few ethics are easily trumped by their zealotry for certain technological fixes and/or financial objectives. The fact that they may be the inventors or owners of newer technologies such as Facebook is irrelevant; what matters first and foremost is what is best for the citizenry and society at large.
The Google, Facebook et al cases ought to have been non-starters from the very beginning, as the general will of the populace should have nailed them dead from the outset, but that never happened, for many reasons, including the highly addictive properties that Big Tech deliberately designed into their pernicious technologies. Tragically, over the past 40-50 years or so, many traditional ethical values which would have put the kibosh on these Tech Giants long before they'd gotten started have largely evaporated as our societies have become more homogeneous and international—nowadays, the lowest common ethical denominator is just that—pretty low.
Given that societies are still struggling with very basic ethical issues—the withering of our hard-fought democratic processes, the rise of totalitarian power from both governments and Tech Giants—we're not even at ground level when it comes to solving the ethics of ML and AI. For starters, there are serious cultural differences (hence little or no agreement) over how to resolve the infamous trolley-car moral dilemma‡. At present it is abundantly clear that the various societies of an international world cannot reach a common worldview or consensus on this conceptual problem, let alone on a specific ML/AI incarnation thereof; consequently, we have precious little hope of solving the even greater moral and ethical dilemmas that these fast-advancing technologies will undoubtedly create.
It seems to me the very first steps must be taken to forge a common moral and ethical consensus for humankind. We need to begin with the problems easiest to agree upon, such as the inviolability of human life, and then work upwards. Expect this to take a long time—it will. Of course, the huge dilemma is how to hold technologists and technocrats sans ethics (and common sense) at bay whilst the various consensuses are being reached.
I am strongly of the opinion (as I was fortunate enough to experience) that core training for all engineers, scientists, technologists and technocrats—and, for that matter, politicians—should include compulsory study of key philosophical subjects, especially ethics, moral philosophy and formal logic, as well as basic/essential political science (the study of government).
I'm realistic enough to realise that despite such ethical studies being both core and compulsory, there is every chance that they will have only a minor impact on changing human nature, if any at all (at least in the beginning). Nevertheless, their compulsory nature will achieve one major objective: every engineer, scientist, technologist and technocrat will be forced to learn the essentials of morals and ethics as they should be practised in our increasingly technological societies.
Thus, when their technologies go belly-up and damage both societies and people's lives, the Zuckerbergs of this world—with compulsory training in ethics under their belts—will no longer be able to claim ignorance as an excuse for their negligence; they will not be able to say that they 'did not know' or that 'we never considered that outcome'. The only excuse they'll likely have left is 'force majeure'—and it had better be a pretty good instance thereof or they'll be toast. …And good riddance.
Good effort George, keep the pressure up.
† As many will be aware, the uncomplicated definition of 'metaphysics' is 'above and beyond physics'—that's to say, ontological, a priori, deductive concepts of existence, of being, of becoming, of reality, etc. As far as physics is concerned, metaphysics deals with ethereal, intangible concepts that are inconsequential to its laws, yet they are key to human existence as we know it; what it is to be human—our values, beliefs and ethics—is metaphysical.
‡ 'The Moral Machine experiment', Nature, vol. 563, pp. 59–64, 2018-10-24. https://www.nature.com/articles/s41586-018-0637-6 Especially note the graphs in Fig. 3: 'Country-level clusters'.
* Incidentally, Alfred Nobel was a student of chemist Ascanio Sobrero.
This being Soros, any connection to the US election would already doom his message to be disregarded, or even taken as evidence for its opposite.
That's the western filter bubble and since this trade war it's even more obvious.
Typical double standards. The news tells you who the bad guys are, and everyone knows that the news never lies...
 From the speaker himself: https://www.georgesoros.com/2019/01/24/remarks-delivered-at-...
He also did a very candid interview with 60 Minutes. This is a transcript of the video. A couple of quotes from him there:
- "I think I’ve been blamed for everything. I am basically there to make money. I cannot and do not look at the social consequences of what I do."
- "Whether I or somebody else does whatever is happening in the markets really doesn’t make any difference to the outcome. I don’t feel guilty because I’m engaged in an amoral activity which is not meant to have anything to do with guilt."
And you can find countless similar quotes and discussions from him. I respect him since he is honest about his motivations and beliefs, which is something that cannot be said of the vast majority of people, let alone billionaires. At the same time, he is undoubtedly a textbook narcissistic megalomaniac whose sole interest in the world is George Soros. And he will happily and openly share his willingness to engage in awful actions if he thinks they would benefit him. Consequently, I can understand why many - particularly when we put on these charades of virtue - would find him a less than desirable person.
 - http://articles.latimes.com/2004/oct/04/opinion/oe-ehrenfeld...
 - https://pastebin.com/MMFmtPzd
 - https://www.youtube.com/watch?v=QSyczwuTQfo
I've never gotten this hate for Mr. Soros.
I think my critique is very similar to Anand Giridharadas's (http://www.anand.ly/winners-take-all/), and I believe many people who hate him intuitively feel the same way (although perhaps without understanding why).
In particular the Soros-sponsored Open Society Foundation has had many dozens of meetings with the EU Commission and a hand in forming multicultural-based policies.
My own thought is that this political agenda is naive and may result in a backlash that ends up achieving the opposite of what he intends. There are already signs of this happening.
Also, what mechanisms would you employ if you were a person with lots of money who wanted to shape the world into a better place? One approach is the one taken by Bill Gates, who focuses on technological and social issues but steers away from political ones. But lots and lots of the evil in this world stems from political causes. Surely there should be a way to improve that, besides just focusing on the technical and hoping the politics will sort itself out.
I don't have any specific thoughts on what policies should be followed, just a sense that a slower and more organic course is preferable. Even then, reactionary forces will always exist.
> Claim: During World War II, George Soros was a member of the SS (a Nazi paramilitary organization) or a Nazi collaborator who helped confiscate property from Jews.
> Rating: False.
However, WRT this speech at Davos, I suspect his problem isn't with mass surveillance as much as it is with China performing the mass surveillance. Soros is an ideologue and I'm sure he'd be just fine with the technology being used to support his political agendas.
Your initial comment, though, asserted that he had been a Nazi collaborator and that therefore we shouldn't listen to him.
You can listen to him or not, but your decision shouldn't be based on the fact that he had been a Nazi collaborator, because he never was.
My guess is that we are about to see "No True Scotsman" very soon after I post this.