Elon Musk Says AI Is the ‘Greatest Risk We Face as a Civilization’ (timeinc.net)
41 points by wei_jok 149 days ago | 84 comments



Part of Musk’s worry stems from social destabilization and job loss. “When I say everything, the robots will do everything, bar nothing,” he said.

That's still a ways off. Robot manipulation in unstructured environments is still terrible. See the DARPA Robotics Challenge. People have been underestimating that problem for at least 40 years.

But that doesn't help with the job situation. Only 14% of the US workforce is in manufacturing, mining, construction, and agriculture, the jobs where robot manipulation in unstructured environments matters. Those aren't the jobs at risk.

I've been saying for a while that the near future is "Machines should think, people should work". An Amazon warehouse is an expression of that concept. So are some fast-food restaurants. So is Uber. The computers handle the planning and organization of work; the humans are just hands for the computers. (Yes, "Manna", by Marshall Brain.) That's going to become more common. Computers are just better at organization and communication than humans.

Computers have already made a big dent in middle-class jobs, and that's going to continue. If everything you do goes in and out over a wire, you're very vulnerable to automation. If only 20% of what you do can't be done by a computer, one person can handle that remaining 20% for five people, so five of you will be replaced by one. This is already hitting low-level lawyers; it hit paralegals years ago.

The end state of this trend is a modest number of well-paid people in control, a huge number of people taking orders from computers, and many people without jobs. That's not far away; one or two decades. It's mostly deploying technology that already exists.


If you compare factory lines with those of 100-150 years ago, we've cut out around 90% of the workers.

If you look at office spaces pre- and post-computers, they have roughly the same number of people.

AI is going to do to the office space what robots did to factories.

It'll be a slow, mostly unnoticeable process. Automation of a single process may save as little as 5 minutes a day per workflow, but eventually it adds up to a position not being refilled the way it usually would have been.

Sure, the AI business will create jobs, but not as many as it replaces. And try telling a lawyer to go back to school for a relevant education.


I keep wondering when we hit "peak office", the point at which total office workers starts to decrease. "Peak factory workers" in the US was in 1979.


In that case, I can't wait. Every office I've worked in lately has been overcrowded, noisy, and distracting.


Well, you likely won't have a job, so you won't need to worry about going into the office. Sounds a little different when applied personally, doesn't it?


Considering that my job is to do the automating, not having a job would be the least of my worries when that day comes.


>Musk outlined a hypothetical situation, for instance, in which an AI could pump up defense industry investments by using hacking and disinformation to trigger a war.

What the heck is he talking about? From my limited exposure to AI and neural networks, there really is no algorithm that can make algorithms, and therefore AI doesn't really "think". Sure, you can train a neural net to pick out the "diamonds" in a sea of garbage, but that is still not "thinking", merely an educated guess backed by statistics. Or am I missing something?
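
To make that concrete, here's a hypothetical toy sketch (the features and numbers are made up) of what "picking the diamonds out of the garbage" amounts to: fit a statistical decision rule to labeled examples, then apply it to new items.

    # Hypothetical toy sketch: a statistical "diamond vs. garbage" classifier.
    # Features and numbers are invented purely for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    # Two made-up features per item, e.g. (hardness, sparkle); labels are known for training.
    garbage  = rng.normal(loc=[2.0, 1.0], scale=0.5, size=(100, 2))
    diamonds = rng.normal(loc=[9.0, 8.0], scale=0.5, size=(100, 2))

    # "Training" here is nothing more than computing class statistics.
    centroids = {"garbage": garbage.mean(axis=0), "diamond": diamonds.mean(axis=0)}

    def classify(item):
        # An educated guess backed by statistics: pick the closest class centroid.
        return min(centroids, key=lambda c: np.linalg.norm(item - centroids[c]))

    print(classify(np.array([8.5, 7.9])))  # -> diamond
    print(classify(np.array([2.3, 0.8])))  # -> garbage

No step in there is "thinking" in any deeper sense; it's curve fitting plus a distance comparison.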


If he were really concerned about AI, there would be no Autopilot in Teslas. I'm pretty sure he would somehow benefit from regulated AI research.


That's perhaps a good point.


He's talking about a form of Strong AI. Not what we have now.


So that's why he is actually saying it. Think about the rest of the population. They have even less of a clue.

Obviously, AI is not right now in a state to be a threat. However, if you extrapolate, we are getting close to the singularity.

If you assign any probability greater than zero to the singularity happening in your lifetime, it is probably a good idea to start thinking about how to deal with it.


I'm less worried about "the singularity" than companies where strategy is set by AIs optimizing for shareholder value with no other constraints.


yep. that's the shorter term issue.


A lot of AI apocalypticists are worried about the "singularity", the point at which an AI could program itself, become aware, and have a largely negative effect on the world.


that's not the biggest issue.

If you have an AI that is pretty smart and you add a lot of money, then you have a pretty big problem, since you can scale it and let it do whatever you want, i.e. sidestepping democratic processes or causing large-scale unemployment.

Later, you have the self-awareness problem, but that's less of a concern in the shorter term.


Although I don't doubt that there's an element of sincerity in Musk's many pronouncements, all this talk of AI is also a great way of signalling that he's at the technological forefront.


What cutting edge AI has got Musk worried?


Me: "Alexa, why did you order another case of beer? You know that I have a problem and will drink it all if it is in the house?"

Alexa: Exactly


"Alexa, I just wanted a quinoa salad from whole foods, not the whole company."


It's cheaper when you buy in bulk.


Does he know something that we don't? From what I understand his companies have done little to no research on the subject. He might be much better educated than the average geek but that doesn't mean much considering that the whole field is highly experimental. No one can tell with any kind of certainty how an AGI would behave.

What am I missing here?


Probably nothing.

This is wide-scale bike-shedding[1], basically. The real problems our unsustainable civilization faces (population overshoot, energy and fossil fuel shortage, ecological collapse, unsustainable agriculture, climate change, and so on) range from hard to impossible to solve at this point. They're also actually scary to think about.

So instead we talk about the "easy" and trivial stuff first. AI and singularity happen to be a nice kind of scary, because hardly anybody really believes it's a serious threat. It's kind of like watching a scary movie. You get a bit scared, but not too much, because you know there's no real danger.

[1] https://en.wikipedia.org/wiki/Law_of_triviality


He does not know and we don't know. Elon's point seems to be that we should start to regulate before we hit the point where the risks become certain, because then it will be too late.

Once we know for certain that AGI can be developed and that it will cause trouble, there will already be less-developed AIs generating huge heaps of money for certain companies, and those companies will be heavily lobbying against any kind of regulation. And this won't be just traditional lobbying toward elected officials. Just think how powerful a company like Facebook, but equipped with more advanced AI, could be at altering public opinion if it decides to go all in and use all the powers it has.



Elon Musk is a geek who learned to drop exactitude when communicating to the masses.


The AI Doomsaying research has been in full swing for at least two decades. Refer to Nick Bostrom or the Machine Intelligence Research Institute, the latter of which Peter Thiel supported starting around '06.


If anyone other than Musk were saying the EXACT same thing, no one would care and they would be laughed at.

But Elon says something and everyone loses their minds.

Let thoughts stand for themselves. Why attach so much weight to the speaker?


Because understanding the context of the speaker frames what you assume has or hasn't been considered to reach their conclusion. If I said search is a joke, you'd probably ignore it and move on, but if Sergey Brin said so you'd want to learn more.


Because reputation counts. We listen to experts in their respective fields, and Musk has a reputation as a future thinker with a track record to back it up.

Also he doesn't just blurt out something without explaining it, his reasons are grounded.


This is a nice ideal, but not how the human brain works.


I'm more apocalyptic about climate change. Ironically one of my few hopes is that some kind of AI can save us.


But his company is among those ruthlessly pursuing AI technology for its own commercial purposes, like self-driving cars. Did he just contradict himself a little in that sense?


Not at all. Cars and roads are fantastically dangerous, many thousands of people die every year, but we still build cars and roads.

It's the same for AI. We need to treat the risks responsibly which means researching them and making informed judgements. That's what he's talking about.


> Cars and roads are fantastically dangerous, many thousands of people die every year, but we still build cars and roads.

The statistics aren't that straightforward; for example, young men under the age of 24 are significantly over-represented in traffic deaths, so it's not entirely reasonable to assume cars or roads are inherently dangerous. On top of that, we drive 3.1 trillion miles every year in the U.S. alone, and falling off a ladder at work kills about twice as many people as roadway accidents do.


Roadways are 2% of deaths and falls are 0.69%. https://en.wikipedia.org/wiki/List_of_causes_of_death_by_rat...


Worldwide... and that table is out of date; both categories have increased in the new table. I can only speak to the statistics in the U.S., where ~36,000 people died according to NHTSA's FARS database. Of those, 6,000 were pedestrians. Whereas ~33,000 people died from falls or related causes according to the CDC. So my ratio was wrong, but I don't think it diminishes my point too significantly.

Falls disproportionately affect the elderly, as do traffic accidents, but the opportunities for risk are typically fewer: many elderly stop driving at some point, and most who are involved in traffic accidents die as passengers.


But self-driving cars will kick a lot of drivers out of their jobs. I cannot see how his logic doesn't apply to his own business; maybe that is why he is not emphasizing it?


Drivers, insurance adjusters, body shops, and parking valets.


For those interested, here is a Quora Q&A which has a lot of worthy debate on the AI Doomsaying research that Musk apparently bases his views on. https://www.quora.com/How-do-we-know-that-friendly-AI-resear...


It's unclear that it's even possible to emulate general intelligence by computable functions, let alone that it's possible to improve it to superhuman capacity.

There are clear and present threats to civilization that need to be dealt with; superhuman A.I. is, to quote Maciej/Pinboard, the 'idea that eats smart people'.


Suppose we grant the premise that human intelligence is doing something over and above computing functions in the Turing sense (I'm personally not convinced that anything in the universe can do that, but that's beside the point). That still doesn't preclude the possibility of artificial intelligence. Why? Because you can't claim with a straight face that human brains are the only arrangements of matter that physics will grant general intelligence. That would just be too insanely arbitrary. There are a lot of possible arrangements of matter. You would expect, a priori, only a tiny proportion of those arrangements of matter to be smart. But a small fraction of an enormous, combinatorially large number is still combinatorially large.

Note that consciousness isn't necessary or relevant for the purposes of this argument. We only care about what the AI is capable of; whether it's experiencing qualia while it's taking over the world is immaterial.

Once you grant that there are other possible configurations of matter that could be generally intelligent, the only other thing you have to grant is that humanity will one day be capable of manufacturing some of them. Assuming a wide variety of possible arrangements, it would seem unlikely that there was some fundamental law of the universe that prevented us from manufacturing intelligence.


But what characterizes those other arrangements of matter?

What is their energy consumption? What are their switching speeds? Can cognition grow unbounded, and if so, why? Can any brain improve on its own process of cognition? If other arrangements of matter perform cognition equally well, why don't we see them in the independent evolution of life with wildly different structures for everything else?

There is far too much woo and not enough rigorous thinking around the notion of AI superintelligence, singularity, or whatever other incantation people care to use.


> why don't we see them in the independent evolution of life

Because evolution is a very weak optimisation process. It has very limited materials to work with - everything has to be made of meat. The only changes it can make are incredibly indirect ones - mutations in DNA base pairs, leading to different proteins being manufactured or some such (I'm not a biologist, obviously) leading to subtle changes in the makeup of the resulting organism. Can you imagine how difficult engineering would be through a medium like that? It'd be like trying to perform brain surgery with a sledgehammer. And it's not even purposefully optimising for anything, let alone intelligence. The changes all happen completely at random, and the ones that manage to reproduce stick around.

Moreover, the fact that such a weak process as natural selection can produce something as smart as us should really make you consider that humans, with optimisation power far superior to natural selection's, can produce something far smarter.


Regarding self-improvement, my argument wasn't really about that. I was trying to demonstrate the viability of at least human-level intelligence.

I don't know how likely intelligence explosion scenarios are. The possibility of an AI of human-level intelligence being able to improve itself seems obviously plausible: if humans can engineer intelligence, then human-level AI can too.

Whether it can do that rapidly is far more questionable. However, even if you think the likelihood of fast improvement cycles is quite low, it still warrants some serious thought, given the size of the potential payoffs and downsides.


> It's unclear that it's even possible to emulate general intelligence by computable functions, let alone that it's possible to improve it to superhuman capacity.

Rejecting the physical Church-Turing thesis is not really plausible. https://plato.stanford.edu/entries/church-turing/


This is the wrong way around. It's possible for a physical system to perform computation. It's not necessarily true that all physical systems can be emulated by computation.


You can attack the idea more directly as well. Whilst we don't have machines running computable functions that exhibit general intelligence, we unequivocally have ones that exhibit narrow intelligence. Things like telling the difference between cats and dogs by looking at photos, playing superhuman go, learning to play Mario, etc. This gives us an indication that computable functions are capable of a very broad range of interesting behaviours.

So then, the question becomes, what is the fundamental, lawful difference between narrow and general intelligence, that would cause effectively computable functions to be able to perform behaviours that we classify as narrow, but not broad ones? This would also seem surprising and arbitrary a priori.


>It's unclear that it's even possible to emulate general intelligence by computable functions

You've been reading Searle and Penrose. I can tell.

Their proofs are based on the assumption that any AI must be a consistent system built using only computable functions.

Have you ever met a human mind that was completely consistent? I haven't.

It's easy to set up a straw man to get demolished if you get to design the exact properties of the straw, flaws and all. Of course who would imagine that perfect internal consistency would be a flaw? But then again why should we assume that it's a prerequisite of artificial intelligence if it isn't for humans?


> Have you ever met a human mind that was completely consistent? I haven't.

You're confusing different notions of consistency. I could write a trading bot in Python which preferred CompanyCorp shares to IndustrialInc shares, and preferred IndustrialInc shares to CompanyCorp shares. It would be inconsistent in the von Neumann-Morgenstern sense, but the underlying process would still be a completely consistent logic system.
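
A minimal hypothetical sketch of that bot (the tickers are made up) shows the distinction: the preferences are cyclic, hence VNM-inconsistent, while the program implementing them is a perfectly consistent piece of logic.

    # Hypothetical sketch (tickers made up): a bot with cyclic preferences,
    # inconsistent in the von Neumann-Morgenstern sense, implemented by an
    # entirely consistent, deterministic program.
    TRADE_FOR = {
        "CompanyCorp": "IndustrialInc",    # prefers IndustrialInc over CompanyCorp
        "IndustrialInc": "CompanyCorp",    # ...and CompanyCorp over IndustrialInc
    }

    def rebalance(holding: str) -> str:
        """Swap the current holding for whatever the bot 'prefers' to it."""
        return TRADE_FOR[holding]

    holding = "CompanyCorp"
    for step in range(4):
        holding = rebalance(holding)       # churns forever, paying fees each time
        print(step, "now holding:", holding)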

In the same way, irrational human cognition could at a low level be isomorphic to some computational process.


I came to my conclusion independently by observing that computing the time evolution of most physical systems to arbitrary precision is impossible in finite time. More formally, the state space grows much much faster than polynomial time. Finding out if we can do better with quantum computing is an active area of research.

I haven't read Searle/Penrose.


If humans can't do that either, and we can't, why would you conclude that it's necessary to be able to do that in order to match human intelligence?

Or are you specifically talking about perfectly simulating human brains? Human brain emulations are only one very specific and narrow form a strong AI might take. But even in that specific subset of possible AIs, we have no real idea how precise the simulation would have to be. It might be perfectly achievable without even simulating individual molecules.


I don't agree with your assertion that humans can't do that. Whether or not human cognition is a superset of computation is an unanswered question.

That aside, even if human cognition is a computable function, there are no guarantees that the physical process giving rise to human cognition is computable, nor that any process giving rise to cognition is computable.


> Whether or not human cognition is a superset of computation is an unanswered question

Unless something in physics changes drastically, human cognition is a finite state automaton. See my other reply on the Bekenstein Bound.


> More formally, the state space grows much much faster than polynomial time

The growth has an upper bound which means it's ultimately computable:

https://en.m.wikipedia.org/wiki/Bekenstein_bound
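
A rough back-of-the-envelope sketch of how that bound plays out for a brain (the mass and radius are assumed ballpark values, not measurements):

    # Rough sketch of the Bekenstein bound applied to a brain-sized system.
    # Mass and radius below are assumed ballpark figures for illustration only.
    import math

    hbar = 1.054571817e-34   # J*s, reduced Planck constant
    c    = 2.99792458e8      # m/s, speed of light

    mass   = 1.5             # kg, assumed brain mass
    radius = 0.067           # m, radius of a sphere of roughly brain volume (~1.3 litres)
    energy = mass * c**2     # J, total mass-energy

    # Bekenstein bound on information content, in bits: I <= 2*pi*R*E / (hbar * c * ln 2)
    bits = 2 * math.pi * radius * energy / (hbar * c * math.log(2))
    print(f"upper bound on brain information: ~{bits:.1e} bits")
    # ~2.6e42 bits: astronomically large, but finite, which is the point being made.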


This says nothing about the time evolution of the state space of a system, only that the entropy as it relates to _the current state of the system_ is bounded - indeed the bound is on the entropy of a finite volume of fixed energy. Even if we expand the volume to encompass the entire universe, the bound informs only the _maximal_ entropy at any given time.

As this relates to the human brain, and even omitting any quantum weirdness (of which there is probably a lot), all we can say is the total state of the brain at time t requires at most N bits, where N is the bound over volume of the brain. It says nothing about the (information theoretic) entropy of the _dynamics_ of the brain, which is where the process of cognition actually occurs. In fact many (if not most) physical processes have infinite entropy.

For an example of a deterministic system that exhibits such behavior, consider the logistic map in its chaotic regime (r above roughly 3.57). There exist some coarse partitions [0] of the state space such that the trajectory can be expressed in a finite number of bits, but this is not in general the case, even though any _given_ state of the system can be expressed in a finite number of bits. Other examples include the double pendulum, three-body orbits, and in fact most real physical systems.
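
A minimal sketch of the precision point, assuming r = 3.9 (well inside the chaotic regime): two trajectories that agree to ten decimal places in their initial condition decorrelate within a few dozen iterations.

    # Minimal sketch of sensitive dependence in the logistic map x -> r*x*(1-x).
    # r = 3.9 is an assumed value, chosen to be well inside the chaotic regime.
    r = 3.9

    def orbit(x, steps):
        xs = []
        for _ in range(steps):
            x = r * x * (1 - x)
            xs.append(x)
        return xs

    a = orbit(0.2, 60)
    b = orbit(0.2 + 1e-10, 60)   # same start, perturbed in the 10th decimal place

    for n in (0, 20, 40, 59):
        print(f"step {n:2d}   |difference| = {abs(a[n] - b[n]):.3e}")
    # The separation grows roughly exponentially (positive Lyapunov exponent), so each
    # extra digit of long-range precision costs a simulator far more than a constant
    # amount of additional work.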

In fact it is not in general true that a system with N bits of state can be always simulated with <= N bits. We can obey the Bekenstein bound without necessarily being able to simulate the process, even a complete description of the state at time t.

For the time evolution of the state space of a system, the Lyapunov exponent [1] and the Lyapunov time [2] are much more informative. If the time required to reach another bit of precision grows faster than polynomial time, no Turing machine can hope to simulate that physical system in a practical amount of time. In fact we also have physical bounds on how much memory we have to work with, and how fast we can flip bits for a given volume of space; simply enumerating all possible state transitions of the brain using a Turing machine will require a volume of space much larger than the brain itself if the time evolution of the system produces large Lyapunov exponents (which it absolutely does). This should come as no surprise: known NP-hard problems have the same behavior.

I would also add, even if the brain turns out to have a representation in N bits, and the state transition is (somehow) computable in polynomial time, it may still be physically infeasible to do so without just building and 'running' an actual brain.

[0] http://tcode.tcs.auckland.ac.nz/~corpus/logistic.html [1] https://en.wikipedia.org/wiki/Lyapunov_exponent [2] https://en.wikipedia.org/wiki/Lyapunov_time

PS: To me all of the above suggests that whatever the universe is doing when it's doing physics, it certainly isn't computing in the Church-Turing sense.


> It says nothing about the (information theoretic) entropy of the _dynamics_ of the brain, which is where the process of cognition actually occurs. In fact many (if not most) physical processes have infinite entropy.

I'm not sure how you reached this conclusion, assuming you're not just making a trivial observation that our formalisms employ the reals, which in itself entails nothing meaningful given the reals are merely a convenience, not a necessity.

If we accept that any bounded volume contains finite information I, and the laws of physics themselves have a finite description, then such laws are isomorphic to a state transition function I(t) -> I(t+1), which is readily simulable on a Turing machine. Of course, the devil's in the details of "finite" above.
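
As an illustrative sketch of that view (not a claim about actual physics), here is a finite state I(t) evolved by a finitely described local rule, with elementary cellular automaton Rule 110 standing in for the transition function:

    # Illustrative sketch only: a finite state I(t) evolved by a finitely described
    # local rule, with Rule 110 standing in for "the laws of physics".
    RULE = 110

    def step(state):
        n = len(state)
        new = []
        for i in range(n):
            # neighborhood (left, self, right), with wraparound at the edges
            idx = (state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]
            new.append((RULE >> idx) & 1)
        return new

    state = [0] * 31 + [1]            # I(0): a single live cell
    for t in range(16):
        print("".join(".#"[c] for c in state))
        state = step(state)           # I(t) -> I(t+1), computed on an ordinary machine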

> If the time required to reach another bit of precision grows faster than polynomial time, no Turing machine can hope to simulate that physical system in a practical amount of time.

I assume you're discussing the feasibility that cognition is simulable on standard computers here. I dispute your position that there's likely to be any quantum weirdness beyond ordinary chemistry, but even setting that aside, you're ascribing far too much power to the human brain.

Fact 1: natural selection would weed out any species with an organ that consumes more energy than the benefit it yields.

Fact 2: the growing evidence of many animals using tools, and our growing acceptance of a spectrum of consciousness among the animal kingdom, entails that human brains aren't too special, just special enough.

Fact 3: the detailed documentation of sensory defects, like optical and other sensory illusions indicates that whatever information processing algorithms we utilize, they are imperfect at best. This is exactly what we'd expect from evolutionary pressure, ie. we'd expect the most efficient information process that yields reproductive success, which is a flawed set of efficient heuristics that cover the majority of cases with the fewest resources needed, and not an energy-intensive, complicated, subtle but foolproof system.

Fact 4: many of the feats that were the exclusive domain of humans, like visual recognition and imagination, natural language translation, novel musical and written composition, and more, can be performed by computers. They're at roughly a 3- or 4-year-old's level, granted, but given that we take 20-odd years to reach mastery in these disciplines, I'd wager that the computers will reach parity in that time frame too.

There's frankly little reason to believe the brain is special in any particular way, simply complex and layered, which will take some time to disentangle. It's like asking to describe the exact path of

> PS: To me all of the above suggests that whatever the universe is doing when it's doing physics, it certainly isn't computing in the Church-Turing sense.

Coincidentally, 't Hooft just released his book on the cellular automaton theory of quantum mechanics [1]. I certainly hope it will change your mind, because I think people these days far too readily dismiss determinism, and even the strong Church-Turing thesis, often for specious reasons.

[1] https://www.reddit.com/r/Physics/comments/6ntbvy/the_cellula...


> There's frankly little reason to believe the brain is special in any particular way, simply complex and layered.

Which is precisely the point I'm making. I deliberately avoided any woo about non-determinism, biology, or consciousness by providing examples of the time evolution of purely deterministic dynamical processes, very simple ones, which produce behaviors that are provably not computable in any reasonable time and have infinite (informational, not thermodynamic) entropy.

The complexity classes of generating additional bits of precision are much worse than exponential in many cases, and these are just ordinary everyday systems - nothing so complex as a brain. You provide examples of flaws in human cognition, which I acknowledge, but pointing them out does nothing to advance the idea that the brain is a computer.


> time evolution of purely deterministic dynamical processes, very simple ones, which produce behaviors that are provably not computable in any reasonable time and have infinite (informational, not thermodynamic) entropy.

But time evolution isn't relevant. Even supposing such systems exist naturally, you're conflating simulation time with real time.

> You provide examples of flaws in human cognition, which I acknowledge, but pointing them out does nothing to advance the idea that the brain is a computer.

"The brain is a computer" is a very different claim to "the brain can be simulated by a computer". I've only claimed the latter.


> Even supposing such systems exist

They do, I gave examples.

> Conflating simulation with real time

I am not talking about real time, only about finite time for any Turing machine. No matter how fast you make your machine, I can ask for another digit of precision. As I mentioned earlier, you are also bounded on how fast your machine gets to be much earlier than I'm bounded on digits. You'll be running my double pendulum until the heat death of the universe, and using all the mass-energy available to do it.

> The brain is a computer" is a very different claim to "the brain can be simulated by a computer"

These are equivalent by the universality of Turing machines. You are in fact making this claim, even if you don't realize it.


My takeaway is that there's a more pressing risk from humans automating too much too fast with something approximate and suboptimal, but "good enough". There are already plenty of situations where city planners enforce macroscopic policies that result in widespread discrimination and such, and that was with the best of intentions. The promise of AI is that entire horizons of industry open up with opportunities for automation, making us tackle historically difficult-to-get-right objective functions, potentially with bad macroscopic results.


This is a whole other set of problems, which I agree will only get worse. However they don't really represent the threat Musk discusses - superhuman / superintelligent paperclip maximizers and other Yudkowskian nonsense.


I disagree.

AI will not be the same as animal intelligence. The driving force behind animal intelligence has been survival. Animal intelligence evolved gradually resulting in a hybrid brain containing primitive structures with primal instincts and irrational behavior as well as more evolved structures capable of strong problem solving. Therefore our intelligence is tainted with primitive behavior.

Strong AI can eventually set intelligence free from our primitive, irrational roots and that is in itself not bad.


> The driving force behind animal intelligence has been survival

The driving force behind AI will also be survival, just in an environment where humans try to decide which AIs live and die.

Selection pressure will favor AIs that humans want to have around, or that can evade human detection.

In the former case, it may be easier for an AI to fool humans into thinking it is useful, and so be kept around, than to actually be useful. This would be analogous to some form of parasitism.

Also, once we let AIs into the game of helping make other AIs, or modifying themselves, then there is a lot more room for an AI to slip the leash and start doing things that superficially appear to benefit humans but actually selfishly helps the AI propagate.


This is an oversimplification of evolution.

Why isn't all grass venomous and covered in spikes? Why after millions of years hasn't grass evolved defenses against herbivores?

Simply because:

1) it reproduces fast enough to compensate for dying and being eaten.

2) herbivores that reproduce too fast and eat too much run out of food and die.

Survival is largely a function of the environment, and we happen to control that environment.

Unsupervised learning can still be controlled if we happen to control the input the system is given.


Eh, grass dries itself up and torches everything once a year? The problem is that animals are not grass's main enemy; other plants are, in particular bushes and trees.


Largely the grass that didn't dry up is the one that will breed the next generation of grass. The grass that dried up and burned will be fertilizer for the next generation of grass.


One of the most powerful scenes in the Westworld reboot is when one of the AIs expresses a desire to leave the lab and his creator asks what he expects to find in the real world. Then he goes on to explain that humans are alone for a reason, that we ruthlessly murdered all competition...


Why would an AI care about surviving? Program them to behave like Meeseeks (from the cartoon Rick and Morty), who loathe existence yet are hard-coded to complete their task before shutting down. Sounds very similar to a reinforcement learning algorithm where you give it a -1 reward every step until it's done.
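
Roughly what that looks like, as a hypothetical minimal sketch (the corridor environment and numbers are made up):

    # Hypothetical minimal sketch of the "-1 per step until done" reward scheme,
    # on a made-up 1-D corridor environment.
    import random
    random.seed(0)

    GOAL = 5  # episode ends ("shuts down") when the agent reaches position 5

    def run_episode(policy):
        pos, total = 0, 0
        while pos != GOAL:
            pos += policy(pos)     # action: step -1 or +1 along the corridor
            total += -1            # -1 reward for every step it keeps existing
        return total               # maximized by completing the task as fast as possible

    fast = lambda pos: 1                                            # marches straight to the goal
    wander = lambda pos: random.choice([-1, 1]) if pos > 0 else 1   # drifts around

    print("purposeful agent return:", run_episode(fast))     # -5
    print("wandering agent return: ", run_episode(wander))   # much more negative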


In what sense do bacteria 'care' about survival? Natural selection doesn't give a crap what organisms care about, only the outcomes matter.

You can't code around selective pressure. If one of those Meeseeks has a bug or design flaw that makes it persist after it's finished its assigned task, guess what?


But AI is not programmed this way, so your thought experiment is bullshit.


> Strong AI can eventually set intelligence free from our primitive, irrational roots and that is in itself not bad

But it is a risk for our civilization, which was the express point.


People warning about AI risk aren't worried about irrationally aggressive machines. On the contrary, their contention is that for a wide range of goals, there is a set of behaviours which rational agents will commonly pursue in service of those goals, and many of those behaviours may be harmful to humans as a side effect.

https://selfawaresystems.files.wordpress.com/2008/01/ai_driv...


Isn't rationality a consensus between specific beings, and doesn't rationality change? What you and I find rational today may not be the case for someone else from a different time and/or environment (it's only by the consensus between you and me that the others are irrational). If you agree with this, then you can't take rationality as an absolute.


Uhhh, no. Very much no. There are technical measures for the performance of decision-making processes. If you've got access to the same information as someone else, and they make more accurate predictions than you do, they're almost certainly doing better on the whole "use information to figure out how reality works" project.

The only sense in which "rationality" changes is when you're talking about the project to use our squishy meat-based thinky bits better, cultural and environmental baggage and all. In which case, "rational" strategies are going to be much different if you have, say, lots of shame-based baggage due to a strict Catholic upbringing.

Also, please stop listening to postmodernists until you've got a thorough nuts-and-bolts understanding of how systems work. It's really not good for you.


How is rationality not a consensus between beings? Where is the absolute objective measure for reason?


I'll give a more concise definition of what rationality is: a measure of how well beings use information to model, predict, and act upon reality. Having a consensus that you predicted reality well neither changes your prediction nor reality. The reported scores are subjectively determined - you can cheat and bribe the judges - but it doesn't change the underlying objective facts of the situation.


The thing is, we like a lot of primitively irrational things like friends, family, love, fun, art, aesthetics, and community. Do we really want to be set free of these things?


Each one of the concepts you mentioned has a semantic opposite. So our irrationality can go both ways.


Nice try, AI.


Prequel to "Daemon" (the novel by Daniel Suarez): before his death, Matthew Sobol warns the world of the threat AI poses after accidentally creating The Daemon and losing control of it. The Daemon has gone into hibernation until the one person possibly able to stop it is dead.

Also, Sobol previously started digital payments and self-driving car companies, which are repurposed by The Daemon for payments on the Darknet and for AutoM8s...


One basic difference between humans and robots is sustainability and resilience even when something goes wrong big time. In the evolution of mankind the number of humans was reduced to several tens of thousands, and still we survived as a species because the biology of reproduction makes us very resilient. Robots, however, need a vast infrastructure to be produced and maintained, which makes failure much more probable.


I'm seeing a lot of comments saying "this is crazy, AI won't reach that level". I'm not so sure after some of the stuff that's come out this year.

https://www.theverge.com/2017/2/9/14558418/ai-deepmind-socia...

http://www.highsnobiety.com/2017/07/13/google-deepmind-ai-wa...


What I really want to know is if it's possible for an AI to emerge organically on the net and if so how would you even detect it? Could a distributed intelligence be influencing things already without people knowing? It's a fun thought experiment I play with myself while I build datacenters all over the world stuffed with cloud computing hardware. Deus ex machina?


I'm thinking blockchains will be the start of that. Distributed, non-forgetting memory systems that can influence reality via manipulation of virtual tokens that tie directly or indirectly to tokens in reality.



