The Merge (samaltman.com)
222 points by holman 4 months ago | 177 comments



I'd love to know Sam's views on his particular responsibility in addressing this as a member of Reddit's board.

If the "attention epidemic" and risk of hacking people's attention is high, Reddit is likely to be just as culpable in that as Facebook. Further, Reddit has shown to be ripe for weaponizing by hostile foreign nations given the Russian meddling. This has not gotten nearly the attention it warrants, and many feel Reddit has done nothing particularly worthwhile to address the matter.

So in an age where algorithms are optimized to capture attention for the benefit of platform owners, and platform owners are incentivized by ad revenue from advertisers who may have malicious social-engineering motives, or where the platform is seen as an attack vector outside of ads, what is the responsibility (legal and moral) of platform owners and investors to do something about this?


I don't quite get why this is written as if the author is a neutral observer of this phenomenon, when in reality he works extremely hard every day to make sure it happens.


"If we don't do it, the other guys will first."

Sure, if you could coordinate all human activity on the planet you could prevent further AI research from happening. Assuming you can't coordinate all activity, getting out in front of the problem is probably the best you can hope to do.


[flagged]


To be clear, I'm agnostic of the guy's motive -- I'm just genuinely puzzled how, if you spent most of your waking hours supporting AI startups and expanding the tech industry, you could possibly see that as "inevitable" and not something that requires a ton of hard work. Surely if this was all just inexorably happening on its own then everyone at YC would be out of a job?


This reminds me of something Slavoj Žižek often likes to bring up.

> In the Stalinist ideological imaginary, universal reason is objectivised in the guise of the inexorable laws of historical progress, and we are all its servants, the leader included. A Nazi leader, having delivered a speech, stood and silently accepted the applause, but under Stalinism, when the obligatory applause exploded at the end of the leader’s speech, he stood up and joined in.


It's both inevitable AND requires a ton of hard work. But it's not one company or one guy that is doing it. There are plenty of other efforts besides OpenAI.

It's just one of those processes that are complex but still have a trajectory and are predictable in some regard. Sort of like the Transcontinental Railroad in a way.


I think it's a case of "if I don't do it, someone else will". And I don't mean that in the (fallacious) sense of "it's ok that I do bad things, because if I don't, someone else will". I think it's reasonable to look at future progress, recognize that certain things are going to happen regardless of whether or not you're involved, and then make the decision to get involved (if you have the resources to do so) in order to shape how it happens.


I sort of agree, but at the same time the way incentives and work function in our society I think it's quite possible that something can both "require a ton of hard work" and be pretty inevitable, barring some huge societal change.


A lot of people would be out of a job if we collectively realized that we don't even need them, so that's not really an argument.


Can we not?

I'm still holding out hope that Sam will come participate on HN one day. I assume you don't know him on a personal level.


I'm judging him by his words and deeds, and by their results. I don't care about intent. Intent doesn't matter very much, practically speaking.


Computers are tools. Some of the tasks for which we have formerly used cognition can be handled by them, just as some of the tasks for which we formerly used teeth have been largely delegated to knives. This doesn't mean we're merging with the computer any more than we have with the knife, nor does it mean computers can replace our brains any more than knives have replaced our teeth.


Large brains are just tools. Great apes used sticks and rocks as tools to extend the use of their hands, and evolved larger, more powerful brains to extend their decision-making and problem-solving abilities. There isn't any real difference in kind between the global impact of humans and that of great apes, because humans just have a more well-developed version of this particular tool.

Then again, the descendants of the apes that didn't develop the particular "tool" of generalizable problem solving are extinct or endangered, while the descendants of the ones that did are busy working to create AI.


> Large brains are just tools

That's an odd position to take. Surely there's a difference between the hand and what it grasps.


The difference is not between the hand and the tool it grasps, both of which apply force to manually manipulate the environment, but between the hand/tool and the mind that directs it.

I agree that that is an odd position -- it's not one I hold, but a reductio of the "computers are tools no different than knives" argument above. Machines that are able to process novel information and make decisions are fundamentally different from other machines, whether they are composed of tissue or silicon.


Your DNA is the hand in this metaphor.


Knives and other tools have fully replaced teeth for those who like it that way. Full-meal drinks like Soylent, and other dental advancements like high-tech dentures, render normal human teeth obsolete in the modern world, if you so choose.


Forget Soylent; our biology merged with our technology millennia ago.

Our ability to cook our food has rendered large molars obsolete to the point where some people require medical intervention to remove them, lest they crack their teeth and jaw, resulting in infection and death.


Molars aren't dangerous because we cook food. Molars are dangerous because our brains were so useful that they took over the head space that molars were using, and tooth problems were less harmful than bigger brains were helpful.


Interesting, I thought our mouths and jaws shrank after we didn't have to rend raw meat and plant materials.


I'd be happy if my phone's autocorrect feature would actually produce sentences with proper spelling and reasonable grammar. Apparently that's still too hard for our mighty machine learning systems. I'm not going to merge with any technology that is worse than my stupid mush brain.

[edit - why am I still editing the spelling and grammar on this post?]


Just because that good autocorrect AI or AGI isn't available today doesn't mean that we should dismiss it as something that we don't need to worry about. Because the change to our way of living is so great, we should start making plans, even if it is 10 or 30 years out.

Personally I believe we will see really interesting animal-like AGI demos in 2018 and 2019.

It's just dumb that people can't take this stuff seriously until all of the engineering and research is 100% done.


I'm not dismissing it. On the contrary, I can't wait for brain-machine interfaces. I can't wait for really good self-driving cars. I'm just not as optimistic as many of my colleagues.


I'm waiting for self driving lawyers. And lawn mowers.


My lawyer can drive himself, but it only works with BMWs.


It's just a matter of doing things in a proper order. World conquering super intelligent AI first, then really high quality auto-correct. It's actually what our AI God will spend all of its time on, correcting spelling & grammar for 13 billion people sending trillions of increasingly poorly formed messages.


See the book Avogadro Corp.


Can we aim lower? I'd settle for one that auto corrected "the" and "they" whenever I (frequently) get it wrong. After that I'd like one that could sort out the "there", "their" and "they're" thing for me.


I'd settle for it not remembering a word I typed incorrectly on a tiny screen (without prompting me first). No, "yiu" isn't a word; stop suggesting it.


"It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate—the most surprising thing I’ve learned working on OpenAI is just how correlated increasing computing power and AI breakthroughs are—and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast."

I'm not convinced that an exponentially growing number of people working on AI produces exponential advancements. Wouldn't we see diminishing returns with each new person, since presumably each one is less capable and less expert? I expect AI to go through a hype-cycle disillusionment before seeing "double exponential" returns.


The AI research space is still a pretty green field. Right now you can basically take any random paper and combine it with another random paper and you will be able to find a use case where that succeeds and write a new paper about it. More experienced researchers will, of course, have a better intuition/knowledge of what techniques to combine to get more impressive results.

As long as it stays like that (and by opening up new sub-areas, it could stay like that for quite some while), adding more somewhat qualified people to the field results in all the useful combinations being discovered faster and thus exposing new potential starting points. Probably no exponential growth in the strict sense, but quadratic growth isn't bad either.
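A back-of-the-envelope illustration of where the "quadratic" figure could come from (my own sketch, not the parent commenter's): if each new result pairs up two existing papers, the candidate pool for n papers is C(n, 2) = n(n-1)/2, which grows quadratically in n.

    # Hypothetical sketch (mine, not the commenter's): if each new result combines
    # two existing papers, the number of candidate combinations grows quadratically.
    from math import comb
    for n in (10, 100, 1000, 10000):
        print(f"{n:>6} papers -> {comb(n, 2):>12} candidate pairwise combinations")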


Perhaps we could build some sort of neural net that ingests AI research papers and then outputs the next big thing in AI. \s


Half of the research papers titles already read like that, so why not? ¯\_(ツ)_/¯ ("Learning to learn by gradient descent by gradient descent")


This is interesting. Good time to be a researcher!


Diminishing returns on an exponential can still be exponential.

Typically, diminishing returns means f(x) = x/(1+log(x)) where x is headcount and f is output. The overhead goes up with the log of the number of people, because a hierarchical organization will have log(N) layers of management that need to be traversed to make decisions.

If we have exponentially increasing manpower, x = exp(t), then f = exp(t)/(1 + log(exp(t))) = exp(t)/(1 + t), which is O(exp(t)/t).
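A quick numeric sketch of that claim (my own illustration, under the parent's assumed model): the log overhead only damps the exponential by a factor of roughly 1/t.

    # Minimal sketch, assuming the parent's model: headcount x = exp(t),
    # output f(x) = x / (1 + log x) = exp(t) / (1 + t), still roughly exponential.
    import math
    def output(headcount):
        # log(N) layers of management overhead -> divide by (1 + log N)
        return headcount / (1 + math.log(headcount))
    for t in range(1, 6):
        x = math.exp(t)  # exponentially growing headcount
        print(f"t={t}: headcount={x:9.1f}  output={output(x):9.1f}  exp(t)/t={math.exp(t)/t:9.1f}")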


The Pareto distribution is more applicable in this case, since we're talking about research and scientific discovery.


It is not just "less capable" or "less expert" but also the result of how "normal science" progresses by harvesting low-hanging fruit.


Well said. For example, with the hardware improvements of late (GPUs, TPUs, etc.), we are seeing lots of low hanging fruit as we haven't pushed in this direction before, but I imagine this diminishes soon.


It might just take one genius like Albert Einstein to create a new paradigm, so I could believe it happening sooner rather than later, given how many more people are working on it.

I suppose it depends on your view of the theory of scientific revolutions.


Good point. I tend to subscribe to Kuhn's Structure of Scientific Revolutions [0] which I think suggests that a paradigm shift comes as collective observations mount and it is eventually exhibited through a genius on the cutting edge.

[0] https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Re...


I don't think we need a genius or a new paradigm for AGI. I think the main ideas of a few different approaches to do it are already out there. There are a lot of tough problems to solve but to me it looks like it's a matter of hard work rather than any totally new approach.

Look up for example Yann LeCun's AGI presentation. Now see projects like Ogmaneo or recent Deep Mind papers that do fast online learning or have ways to work around catastrophic forgetting and other issues raised by LeCun.

I believe that one straightforward way to get there is to continue to apply neural network research towards attempting to mimic animals. And there are a bunch of people doing that and making progress.


> attempting to mimic animals

That is harder than you might think. For example, the simplest animal with a nervous system, the C. elegans worm, has been simulated "only 20 to 30 percent of the way towards where we need to get," according to one of the OpenWorm project leaders. That's 302 neurons total.

Using a computer analogy, it's like struggling to build an abacus when our ultimate goal is to build a Core i7 processor.

> rather than any totally new approach

It's not clear to me that we have any approach to achieve AGI. People like Ben Goertzel have worked on that for decades without much to show for it. Current deep learning methods have little to do with AGI, they focus on very narrow applications, by design.


I didn't say simulate brains or that it would be easy. Research what I suggested a bit more carefully because it addresses multiple parts of your response.


Are you associated with Ogmaneo project?


No, it's just one of the more promising approaches that I have seen.


I feel like AI is already on the tail end of that hype cycle and disillusionment.


Respect the thought here, though definitely feel like we're still a ways off. That said, what resonated most with me was the footnote:

"I believe attention hacking is going to be the sugar epidemic of this generation. I can feel the changes in my own life — I can still wistfully remember when I had an attention span. My friends’ young children don’t even know that’s something they should miss. I am angry and unhappy more often, but I channel it into productive change less often, instead chasing the dual dopamine hits of likes and outrage."

It further underscores the need for us as a society to find time to step away from technology and experience the "world" (nature, art, human interaction, etc.).


“As we have learned, scientific advancement eventually happens if the laws of physics do not prevent it.”

The dinosaurs probably disagree. Eventually they might have developed tools and then technology but then planet-scale comet destruction and/or volcanoes got in the way.

We however don’t need a cataclysm to delay or indefinitely defer scientific progress...we have various self-inflicted ways to impede progress: religion, dictators, racism, consumerism, spending money on bombs vs. education, denying climate change, tolerating genocide, electing crazy people to run nuclear armed nations, taxing graduate students, the McRib, etc.


Sure. Altman even points that out, to a less-fleshed-out degree (emphasis mine):

> More important than that, unless we destroy ourselves first, superhuman AI is going to happen, genetic enhancement is going to happen, and brain-machine interfaces are going to happen.

I think it's a really good point. As long as the laws of physics don't prevent something, or we can find a clever workaround, progress tends to march forward. Really the only thing that could stop that is some sort of earth-wide cataclysm, whether natural or human-made.

I think you overestimate a lot of those human-made things, though. The countries of the world are very interconnected these days, to be sure, but I'm optimistic that religion, dictators, etc. can't stamp out all progress; I believe there will always be places in the world where this kind of progress is at least tolerated, if not actively embraced and encouraged.


"Unless <highly probable event>, <extremely unlikely event> is going to happen" might be truthful but it isn't what Sam is really trying to say. And there is a big gap between stalls like the McRib and epoch-defining asteroid collisions.


I was definitely going for humor... but in all seriousness, if you take the McRib as just one SKU among the billion-plus that humans produce that are addictive by design, inflict direct physiological harm, and contribute to climate change and planet-scale pollution, which are very much epoch-defining phenomena, then yes, you could easily rank the McRib as just as threatening to all human life as a very large asteroid.

That said, the real dominant species on this planet, tardigrades, could likely survive even a McRib-induced climate catastrophe. So if tardigrades merge with AI, we are totally and completely FUBARed!


We may be heading toward a merge, but I'm not sure it's with machines. I think it could be with each other via machines.

Cybernetics refers to this as a metasystem transition:

https://en.wikipedia.org/wiki/Metasystem_transition

(Shameless plug: if you want to see what it looks like when a hivemind speaks, check out https://reddit.com/r/AskOuija)


If Sam believes the merge has already begun, then his, Musk's, and others' predictions about superhuman AI are really just describing a rather boring fact about the world.

His examples of the merge in progress are social media determining how we feel and search engines deciding what we think. The problem here is the anthropomorphizing that humans have done throughout history to understand things they can't fully explain. Gods don't get angry and cause rain, and search engines don't make decisions about what we think. A search engine is just a complex math formula that can't make a decision any more than a calculator "decides" to output 4 when someone types 2 + 2.

Bostrom and others in this camp always have a hard time describing what "superhuman" will look like or mean. But I will guess that by 2075 the goalposts will have been moved, and it will be said that in 2017 we hadn't realized there was already superhuman AI, as evidenced by calculators that could already do math faster than a human.


The friendly AI goal alignment problem is a pretty interesting part of this and there’s interesting work going on there.

The basic idea is: how do you effectively align a general intelligence's goals with humans', so that it uses its intelligence to solve problems in a way that matches our own human utility functions?

If humans figure out general AI before goal alignment it may have outcomes we don’t want.

http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/

https://youtu.be/EUjc1WuyPT8


I kind of think that we are already experiencing the fall-out from the value-alignment problem with corporations. Corporations (and the market in general) seem rather good at optimizing revenue/profit - the problem is that corporate profit doesn't always align with other human values - which ties into some other comments here about disrupting human attention-span being profitable.

Perhaps work on AI goal alignment can help us build a more humane market place, or vice versa, research into macro-economics can inspire some ideas in AI goal alignment.


Unfortunately, I think this is because we have stopped questioning whether corporate values really do end up matching human values. I think when capitalism was seen as a competitor to communism in the Cold War era, people took a slightly more critical eye at economic systems, but now there's no competition so there's no pressure for corporate values to align.


Corporate values do align with human values in a free market

It's just that the values and goals of many humans are unaligned with the values and goals of many other humans.

Which results in a cacophonous marketplace.


Maybe it would be better to stick to narrow AI. There's no reason a self-driving car or stock-picking algorithm or even a cancer research algorithm needs to be general AI.


In some sense we got lucky with nuclear weapons because they require enriched uranium to make which is hard to get.

A self-improving general AI may just be a matter of an intuitive leap, but nothing that's hard to turn on once you understand it. This property would make it hard to suppress.

The upside is that if we can figure out the goal alignment problem we may be able to fix the coordination issues humans have and have a much better end state than all of us burning out and going extinct on our planet.

The downside is some variant of the world turning into paper clips.

http://slatestarcodex.com/2014/07/30/meditations-on-moloch/

https://wiki.lesswrong.com/wiki/Paperclip_maximizer


If it's that easy, we're doomed. Even if goal alignment works, you can't make people use it. And even if they did, whose goals would they have it align with? The goals of evil humans are plenty bad enough.


In the short term it would still probably take a lot of computing power. If you can figure out the goal alignment problem (or even simpler parts like just how to get the AI to let you turn it off) then maybe the first version can help us prevent rogue general intelligences that do things we don't want.

It is a hard problem though which is why there's all the talk of existential risk. It's also a little nuanced to explain why there's concern or why people should work on the problem.

I thought this post from yesterday was a good response to someone stating that an intelligence explosion was impossible (the original article it's responding to is linked within it): https://intelligence.org/2017/12/06/chollet/


If people make friendly AIs first, then we might be able to use those friendly AIs to figure out the actions that unfriendly AIs would take to get power, and close them off first.


Okay, but do you need general AI for that? Why not narrow AI?

On the defensive side I think there's a lot that can be done with narrow AI. Consider using really good penetration testers and bug finders to fix Internet security.

For any given task, build a tool that just does that. It's like the least privilege principle in computer security.


I think there's a ton of routes to power for a general AI other than network security holes, and someone making a narrow AI just to fix all the bugs in our computer networks will miss nearly all of these. Consider an AI that makes a ton of money on mechanical turk -like jobs online, uses the money to run thousands of shell companies, over time hires the whole population to work on things related to paperclip factories, and crowds out other job opportunities so that anyone that goes against it goes jobless. Or an AI that figures out how to create cultural memes stupidly easily, gets people to vote in totalitarian governments, and then blackmails every single member of the central planning committees to optimize their countries for paperclip production.

These ideas are mainly extreme proof-of-concept ideas, maybe there's ideas along these same lines that are much easier to pull off, but maybe these ideas as-is would be do-able for a possible superhuman intellect. I'm kind of thinking about what it looks like when individuals meticulously plan and then make tool-assisted speedrun videos through videogames, and then scaling it up to what it would look like if thousands of people worked together optimally for thousands of years brainstorming and calculating on how to speedrun through conquering a society. Physical limits permit this kind of computation as far as we know. Maybe this imagery is off-base, but I'm purposefully shooting high and worrying I'm not shooting high enough. Chimps would never even consider the possibility of "super-chimp" intelligences creating nuclear weapons. You can't make narrow AIs to defend from these possibilities if half the danger is our inability to even enumerate them.


"Our phones control us and tell us what to do when; social media feeds determine how we feel; search engines decide what we think."

Ok, let's see. I check my Android phone perhaps 5 times a day. It's in silent mode all the time and has had a broken screen for 2.5 years, and I don't care. I have perhaps 10 apps installed, of which I use maybe 5 on a semi-regular basis. I check Facebook / LinkedIn perhaps once a day, and when I do I'm always slightly annoyed by the amount of useless crap in my feed. I don't use or need Snapchat or Insta or Twitter or any other social media. I doubt I even need Facebook. I do use Google, a lot, but mostly as a gateway to Stackoverflow or to satisfy my curiosity about something I've read somewhere else on the internet. I doubt it has any influence on what I think.

Am I really such an outlier?


Take public transportation during rush hour and look at what people are doing on their phones. Yes, you are an outlier. And good on you.


We're talking about the general population, and you're on Hacker News; you're already quite the outlier.


Yes, just look up phone sales, app sales, popular app daily active user statistics, etc. You are not in the majority. Also, checking Facebook once a day can easily affect how you think and feel. Not as much as using Snapchat 100 times a day, but anything you do every day is a major part of your life.


Yes you are an outlier. And you're probably older too.


Yes, you are an outlier.


Probably?


I think it's unclear how much progress has been made on the superhuman AI problem. We haven't pinned down a good definition of intelligence, or figured out what it is that makes monkeys smarter than mice and us smarter than monkeys. We do have a lot of progress in specific domains, like image/speech recognition, but it's hard to tell whether they're on the critical path to superhuman AI because we don't know what the critical path is yet. That makes the timeline unpredictable, but biased a priori towards "far away". It's possible that better hardware will accelerate progress, but with CPU clock speeds flattening out, significantly better hardware is not guaranteed in the future.


"Until I made a real effort to combat it, I found myself getting extremely addicted to the internet."

This seems to imply sama is resisting the inevitable merge. Can't we instead try to steer it in a more positive direction? Where are the startups trying to keep you addicted to the internet in a way that improves your life? They don't seem to exist, because the incentives aren't right -- it's way more profitable to keep you hooked in ways that make your life worse. How do we fix those incentives?


They do exist -- consider the gamifying elements of Khan Academy, Duolingo, and other e-learning sites, for example. The problem seems to be that they are out-earned by the sites that feed our narcissistic and negative impulses: social media, addictive games, "recommendation engines" on shopping sites, and such.


>> Where are the startups trying to keep you addicted to the internet in a way that improves your life?

> consider the gamifying elements of Khan Academy, Duolingo and other e-learning sites for example

It's worth noting that I've heard a lot of criticism of e.g. Khan Academy from certain corners of the mathematics education community. Concretely, they feel that the material there is designed to increase engagement and provide a sense of accomplishment ASAP, and that the learning processes that fit those goals are detrimental to developing certain mathematical skill sets. I.e. you might learn how to compute derivatives really well, but at the expense of really understanding limits or the fundamental theorem.

Mind you, mathematics educators have always been an internally divided bunch, so take from it what you will.


Banning ads, marketing, and public relations would be a start. Seems unlikely, though.


Recently I've been getting into cryptocurrencies (too little, too late, right?) and it's actually improving my life by making me money. It feels like the more I learn about them and the more I gamble, the better my life is at least while the value keeps climbing. I get close to zero value out of HN/Instagram/Twitter, but I may be using them wrong.


I have never been a fan of theories that give agency to superhuman phenomena.

In Sapiens, the author argues that wheat enslaved humanity into producing more of it.

Sure, it's an interesting way to think about this, but it's not what is happening. People grow wheat because it has a lot of nutrients, can be stored as grain, and can be harvested twice a year.

Similarly you look at your phone because you find it useful not because it enslaved you into doing it.

Sure, you lose some things when you have the world's knowledge at your fingertips, but you win a lot too. When you had to look up the GDP of France in a book, it was harder to have fact-based arguments...

Anyway, I think general AI can become a very bad thing, but we shouldn't confuse it with "phones are controlling our lives," because that simply isn't true.


The cybernetics movement has always sat uneasily with me. I don't see how a full transition to improvements in the physical plane will help human beings who are primarily intuitive, feeling beings. We are already disconnected from our bodies, so why would we continue to attempt to augment our minds and limbs with machine parts? I don't want to be a machine; I love my brain. My brain has emergent properties far more complex than any machine, and even if we could build computers that mimicked the brain or parts of the brain, I would like to keep my own. There is no other brain exactly like mine. My brain took millions of fucking years to create -- it's amazing that I'm alive. Machines are not alive. One could attribute a small amount of consciousness to them, if one were especially philosophically inclined, but machines still seem so antithetical to the way we exist that it seems wrong to develop towards "merging" with them. I have no trouble using a machine to facilitate the communication and visualization of my ideas, but this thing will not become a part of me.


The impossibility of these beliefs is summarized well in this comment made today https://news.ycombinator.com/item?id=15869657 regarding AlphaZero:

> Makes you wonder what will happen when instead of the rules of chess, you put in the axioms of logic and natural numbers. And give it 8 months of compute.

The answers are more realistic:

> How do you score this computation? What's your goal? There's no checkmate here. (this was mine)

> If you're talking about formal proofs or maths, I'm not sure how this would apply in general, as the branching factor for each 'move' in a proof is effectively infinite.

Also, there was a talk where a Google engineer admitted that a car you can put your kid into to drive them to school is still more than three decades away. From other sources, this is even more likely because the trolley problem doesn't seem solvable, so we would need to drastically decrease potential interaction between pedestrians and self-driving cars, which requires building guard rails, reforming transit, and so on.

Don't subscribe to the hype.

Not to mention Katherine Bailey's excellent article (well, all of them on medium are on AI and really good reads):

https://medium.com/@katherinebailey/why-machine-learning-is-...

> One thing that both the pessimistic and optimistic takes on the Singularity have in common is a complete lack of rigor in defining what they’re even talking about.


"Don't subscribe to the hype."

I think the fact that the internet guides people's behavior on a massive scale is a clear indication that machine/human hybrid cognition is not hype; it's a thing, and it's starting just now.

What makes one go to the fridge? A sense of hunger. What makes one go to the Facebook? A different yearning.

Before, people controlled machines, and machines did not induce addiction. Now machines can induce a wide range of emotions.

This is the first step - the channel is open.

On the other hand, when will there be more than random noise and clever hacks at the machine end - I don't know.

Only that we are already combined as a species on a cognitive level by the internet.

One could say that the combination began when writing and money were invented - one coordinating thought, the other labour.

Machines have made this suprapersonal interaction much faster and much more powerful.

It would be silly to say it's just a fad. It's not a Skynet scenario, it's not the matrix. But - people are affected, and algorithms are getting more clever.

I'm not saying we are going to have an AI overlord. But I am saying that the interconnectedness, the algorithms, and the addictive quality of interaction are definitely leading us into new territory.


"a car you can put your kid into to drive them to school"

Also known as a schoolbus.


>Our self-worth is so based on our intelligence that we believe it must be singular and not slightly higher than all the other animals on a continuum.

Speak for yourself. I doubt Sam meant to appear ignorant of other perspectives on the world. However it would be nice to consider them when writing general insights, as opposed to simply tunneling his own point of view.


> Our phones control us and tell us what to do when; social media feeds determine how we feel; search engines decide what we think.

This really reads like self-satire.


If you consider each engine's personalized echo chamber with a positive feedback loop designed into it... It reads more like a confession.


I agree, it's definitely due to Sam's bubble/filter. It's way, way too generalized for the general population.


The piece reads like "Boy, it sure is strange that all humans now work in silicon valley venture capital!"

It's just silicon valley bubble platitudes top to bottom. He needs to take a breather somewhere on the other 99% of the planet.


> It is a failure of human imagination and human arrogance to assume that we will never build things smarter than ourselves.

No, it's a 21st-century IT-bloke failure of imagination and arrogance. A failure to imagine change: specifically, starting to hit real walls and running out of superficial areas of expansion to distract you from the lack of intrinsic progress. It's 21st-century IT-bloke arrogance to imagine that the context that has rewarded you with money and power is also at the heart of an earth-shattering scientific revolution...and not just (which is significant) another industrial revolution leading to new and all-too-human plateaus.


I'm not convinced the "merge" is happening. I fear humans cling to that idea because we want a biological part of ourselves to pass on.

I expect a technological singularity to supplant/replace biological life completely.


I agree... much as the monkeys and birds are left in their shrinking forests, we too will be left behind to dwindle.


Sure. Merge first, then that.


A "merged" singularity will lead to a non biological singularity ?

I don't think I understand.


Yes, of course it would.

Since sooner or later, biological systems would be inferior to manufactured ones.


Interesting piece, though I feel flawed in at least some pretty fundamental ways.

1. Humans have always been "merged" with technology. It is becoming more pervasive now, more powerful, and to some extent more intimate, but it has always been the case. For example, the evolutionary advances of homo sapiens were in part a result of our ability to "outsource" elements of digestion to cooking, enabling our intelligence to outstrip rival animals and hominids. Other examples: farming, writing, the printing press, telegraphs, and so on.

2. For most of human existence, humans have believed in forces superior to themselves, whose intelligence, power, and strength outstrip their own: whether God, gods, or other metaphysical forces. It's amusing to me how theological tropes reappear in these writings with a high degree of regularity. Now one might argue that the difference is that AIs "really exist." But crucially, the idea that humans have always considered themselves totally "top of the pile" seems radically false. It is at best a very limited notion in human society. Even modernity, increasingly secularised, was quick to assert that human mastery was at best an illusion, e.g. the psychoanalytic or Darwinian revolution.

3. The obsession with AI being the largest existential threat to the human species seems hubristic in the extreme given that a very current and very real threat is already here and it is often the most poor that are already feeling its effects: catastrophic climate change.


> 1. Humans have always been "merged" with technology.

This can also be generalized to other animals; see "The Extended Phenotype" by Richard Dawkins ( https://en.m.wikipedia.org/wiki/The_Extended_Phenotype ).


A lot of the examples brought up are innovations in communication, not AI. Phones, social media, and search are communication platforms. Just like books, newspapers, the telegraph, the telephone, and the internet, they enable communication, interaction, ideas, innovation, etc. Even what we consider AI today is just a complicated bird's nest of human-driven math. Sure, we can go ahead and classify phones as co-evolving AI, but the same can be said of the plants, trees, and animals that have co-evolved with us as well.


I have had the same thought experiments that lead to this inevitable conclusion: that we are the vehicle that will create a new, immortal, more efficient intelligence, and there won't really be a place left for us slow, un-evolved, inefficient, ape-like creatures. It's an unpleasant thought, but what other conclusion do you all see? I see some people arguing that "this is a bit out there," etc., but whether it's sooner or later, I think it's inevitable.

The interesting thing to me is that life and evolution propagate because of the laws of physics. It's theorized that life is a chain reaction that evolves out of a simple necessity to be as efficient as possible. So this new life form will potentially depart from that natural basis for evolution.

What do you guys think, though; do you really think a merge will happen? This is obviously a long-term existential and depressing discussion, but really, when an intelligence with much more potential than ours arises, will there really be any point in us lingering around? Do we even have a chance at this merge? I mean, I guess I see the urgency: we would need to start now, so that any innovation in AI is really linked to improving our own cognition from the get-go; otherwise we are just a stepping-stone for life originating from this solar system.


My objective function is that I continue to have new experiences. I hope any intelligence we create recognises that this is not the same as continuously different experiences, and so putting me in a simulation for an eternity of pre-determined happiness isn't appropriate. I hope anything we create recognises that bringing us along for the ride is part of the intent.


Why?

We bring dogs along because they are dumb pets, but we killed wolves and took their habitat for our own.

There's a resource cost to "finding new experiences," which conflicts with the "survive and procreate" drive necessary for any successful system.

The primary survival loop will command the resources.

Why should it save you?


I think humans will coexist with machines just like micro-organisms coexist with humans.

If you think about it, people are just a bunch of cells and bacteria which work together.

We will be like the living cells or bacteria which fuels the machine... If we aren't already.


I fear this is wishful thinking. Advances in AI are progressing a lot quicker than advances in neural interfaces. So we will most probably have superhuman AI long before we have neural interfaces.

And at that point it's game over for homo sapiens.


Neural interfaces have use cases but they are not the silver bullet for the kind of protocols that we need in order to not lose control of our creation.

The protocol that we need is for transmitting information, not data. Or in other words, you don't need a neural interface to transmit understanding.


It's likely NI/HCI are necessary to get to general AI.


Why, other than "I just think so"?


I’m all for being forward-thinking but this is a little out there.


What, specifically, are you referring to? "[S]cientific advancement eventually happens if the laws of physics do not prevent it" sounds pretty accurate - sure, we might get stuck in a local maximum for a bit, but there's no reason to think that progress towards the things Sam is talking about will stop in any meaningful way over the long term.


> but there's no reason to think that progress towards the things Sam is talking about will stop in any meaningful way over the long term.

You could have said the same about the philosopher's stone, which people thought was possible to "discover" back in the late 1700s, or about us humans physically reaching other solar systems and especially other galaxies, which some people thought possible for a while after WW2. It's a belief similar to how some religions started, definitely similar to the early Christians' belief that the second coming was only a decade or two away.


True, but there are a lot of things being talked about in this blog post (e.g. superhuman AI, genetic enhancement, and brain-machine interfaces). These are all different things, and I was genuinely curious as to what the OP found to be "a little out there".


>Double exponential functions get away from you fast.

Still exponential, though.
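A small sketch of what I mean (my own illustration, reading the "double exponential" as the product of two exponentials): exp(t) * exp(t) = exp(2t), which is still an ordinary exponential, unlike a true double exponential exp(exp(t)).

    # My own illustration: the product of two exponentials is still exponential,
    # whereas a true double exponential exp(exp(t)) grows incomparably faster.
    import math
    for t in range(1, 6):
        product = math.exp(t) * math.exp(t)   # == exp(2*t)
        double_exp = math.exp(math.exp(t))    # true double exponential
        print(f"t={t}: exp(t)*exp(t)={product:14.1f}   exp(exp(t))={double_exp:.3e}")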

Also, as a general criticism, there's a big difference between people getting addicted to the internet, getting dumbed down by it and sucked into things like a youtube hole, and my idea of The Merge. The things you describe sound more like an automated soap opera or opiate addiction than the singularity.


It seems unlikely we will have any sort of effective governance for this (look at our current political system). At some point someone will invent an AI that lets them gain an extreme advantage of some kind (financial, political, or military). This accelerates current inequality and leads to revolution. Post-revolution, a new AI is created to manage earth's resources for the benefit of all. Whatever AI is created will be flawed somehow and will eventually cause great damage to the human race. Alternative AIs will be created to improve on or combat the incumbent AI, and a sort of evolution of AIs will occur. Although AIs were originally created to optimize for the human race, survival of the fittest leads to AIs exploiting loopholes in their objective functions to find ways to replicate and hoard resources for their own survival. Humans will still be accommodated to some degree, but in more and more unnatural and distorted ways.



> It is a failure of human imagination and human arrogance to assume that we will never build things smarter than ourselves.

If you define intelligence as “being really good at chess” or “factoring prime numbers”, sure, computers are already smarter than us. If you define intelligence as “knowing when to let your child make mistakes on their own and when to help them”, or “knowing how to conduct an orchestra”, it doesn’t seem so extreme anymore.

In fact, the opposite statement rings just as true:

It is a paragon of human arrogance to assume that we will build things smarter than ourselves in every conceivable way.

The road ahead looks more like a planet with its ecosystems ravaged by resource extraction, with buggy computerized systems we don’t understand running people’s lives in harmful ways (eg see all the writeups about machine learning reinforcing systemic biases) than a world full of meta humans in symbiosis with inconceivably intelligent machines of their own design.


>“knowing when to let your child make mistakes on their own and when to help them”

That can be tested and quantified, based on end state goals.

>The road ahead looks more like a planet with its ecosystems ravaged by resource extraction

In such a world, the savviest and most strategic 1% would thrive, just like they always do. Eventually, though, they will need to merge with machines to continue making the cut. It's too advantageous not to.


> If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict.

The term "dominant species" really jumped at me as I was reading this piece. It's so fantastically vague that it begs the question - would us and AI have the same definition for what "dominant" is?

We're not building AI individuals with a self-preservation instinct, hunger for resources, and sexual drive. We're building super-individual systems like Google, Facebook, and trading systems, which can be much more intelligent than we are but also have vastly different built-in "purposes" (just as evolution has built the purposes of eating, having sex, and caring for our families into us).

I think that in the end, the thing we're building will end up much closer to Solaris ocean than Terminator.


Well, the issue is we are not building general AI systems at all yet. If and when we can, the game changes, and we pretty much cannot figure out how it will until it happens.

There is no other general intelligence at this point that can rival human intelligence; we have a sample size of one such system. Making predictions across such a wide range of possible technologies is hopeless at this point.

Why would a fixed-placement general learning computer system that takes input from millions of possible sources evolve in the same way as a mobile robot with general learning intelligence? Will both be possible? What about swarm intelligences?

Too many questions with no possible way to answer them.


Most people in their prime today will say no to implants. I suspect that 2% may be OK with it. Each generation, however, will have a higher and higher percentage OK with it, especially since at a certain point it will be a competitive advantage to merge oneself with tech. Similar to developers taking derivatives of speed today.

When would this start? It's probably limited not by the pace of AI but by the pace of human-computer interfaces catching up. I suspect the largest increase in usage will come from the non-invasive electromagnetic helmet style that has already been proven to work to some degree.

Generally too, HCI can help us create a massive amount of training data.


It also depends if the implants are visible / detectable or not.


Maybe the reason the whole world isn't getting behind this is because the proponents of such theories have yet to provide a single shred of evidence that this is happening. At least it's interesting science fiction, but it's quite delusional to think people will get behind this considering the sorry state of "AI" at the moment. When the rest of us look at "AI" we don't see a single shred of intelligence because it doesn't exist. How about at least a prototype? But no, "AI" proponents don't even have that. And please, hold your arguments about how self driving cars are intelligent; self driving does not equal intelligence. Anyway, there are much better articles and arguments than I can make on the subject, many of which show up here on hn quite frequently. I personally will definitely take this "merge" seriously once we have even a hint of proof that we can create intelligence at all, let alone intelligence greater than ourselves. Currently there is zero proof anyone has ever created anything intelligent in the sense that the word "intelligent" is generally applied to humans. The technology is about as far along as technology to teleport star trek style: not at all.


Advancement in nuclear technologies has basically been shut down since the 1970s. Thiel's thesis is that almost no technological progress has occurred since then; I tend to agree. Computer technologies were only allowed to progress so quickly because people did not think computers were dangerous. Imagine any other new product where you could state that the product is "as is" and claim no responsibility for its functionality, purpose of use, or damage caused by malfunction. I would prefer this approach for most things, but our "safety first" society definitely does not.

The one avenue open to tech advancement became so powerful that it eventually began to bleed into the physical world and we are starting to see other tech slowly advancing again. Maybe if the tech wealthy of Gen X and later have sufficient power in society after the Boomers pass the baton (if they ever do), we will let the technology keep advancing and see this merging. There is also the fact that the US is no longer a hegemon. Good luck stopping people in China from doing things banned in the West.


> Advancement in nuclear technologies has basically been shut down since the 1970s

That's not true at all! Civilian nuclear power in the USA (and most of the west) has stalled out since the 70's, but nuclear technology has seen plenty of advances over the intervening time frame. See esp. modern nuclear propulsion systems. Super impressive.

Also, this isn't a problem caused exclusively or even primarily by safety regulation in the nuclear sector. The evidence is in the failure of nuclear power throughout the world, despite significant variations in safety regulations.

In fact, if you look at root-cause analyses for the failure of nuclear energy, you'll find that lack of regulation is a major reason for its failure. If the FF industries had to pay for their externalities, nuclear would be extremely viable (at least in the 80s-00s; now it'd have to compete with solar and wind).

> Computer technologies were only allowed to progress so quickly because people did not think computers were dangerous.

Again, this is a pretty wild assertion... Moore's law >>>> regulatory environment. Seriously. If the output of nuclear power plants had grown exponentially for multiple decades, we'd be in nuclear-powered paradise.


I am not sure what you mean by "modern nuclear propulsion systems". Some links to details of those would be great. I'm hoping for applications outside military ship and submarines, which have been around since the 1960's. There were actual engines for nuclear rockets, nuclear airplanes, nuclear excavation techniques, etc in the 1960's. Definitely problems with those systems, but we stopped trying. Nuclear test ban treaty, etc. I'm not saying that was not the correct path for a better future (hard to say), but society definitely stopped working on that stuff.

"Moore's law >>>> regulatory environment"

I don't see how this is a disagreement with what I was saying. Imagine if society had the same amount of regulation on building software systems as on dealing with radioactive stuff. Moore's law would not exist.

"If the output of nuclear power plants had grown exponentially for multiple decades, we'd be in nuclear-powered paradise." Why didn't that happen? Lots of reasons, but I think it could have and still could given a chance.


> I'm hoping for applications outside military ship and submarines, which have been around since the 1960's.

I'm referring to military ships and submarines. I'm not really sure where the financial incentive for nuclear comes from for anything commercial?

TBH it seems like you have a culprit (regulation) in search of a victim (nuclear utopia), and as a result, you have a solution (nuclear power) in search of a problem. If that's the case, there are really many much better examples.

> Nuclear test ban treaty, etc.

The NTBT covers intentional detonation of nuclear weapons... I'll be the first to concede that regulation has perhaps hamstrung the private sector nuclear weapons market :-)

> engines for nuclear rockets

Radiation is a bitch and lead is heavy. Conventional rockets that we need for human space flight already can, with modification, get us to where we've wanted to go so far. So we've instead focused our resources on doing useful stuff once we get there. NASA doesn't have inf money.

This is something that people continue to research. What's holding it up is a lack of priority in decisions about what science to fund and a complete lack of any private sector market large enough to justify the investment, not any sort of regulatory barrier.

> nuclear excavation techniques

...I'm having an extremely difficult time thinking of a use case where this would make any sense. Nuclear bombs are more powerful than conventional explosives. That power has an enormous tactical advantage in wartime because you can take out a city with a single missile instead of weeks worth of bombing runs.

But that's not really a particularly beneficial feature in e.g. a commercial mining operation. Plus radiation's a bitch.

Help me out? What are the possible use cases here?

> nuclear airplanes

Radiation is a bitch and lead is heavy.

> I don't see how this is a disagreement with what I was saying. Imagine if society had the same amount regulation on building software systems as dealing with radioactive stuff. Moore's law would not exist.

This isn't clear to me. I think the hardware would still have been invented. Perhaps the business impact would've been smaller, but regulation of commercial software systems wouldn't have stood in the way of the enabling physics and engineering research.

Also, see the Ford quote about Microsoft; if the nuclear industry worked like the software industry, we wouldn't even be here to have this conversation. We'd have BSoDed our way to nuclear Armageddon some time around 2001.

> nuclear power plants had grown exponentially for multiple decades... Why didn't that happen?

Physics is a bitch. More precisely, I'm unaware of any serious conjectures by modern nuclear scientists that there are obvious advances on the horizon that could give us exponential improvements to reactor output even in the short term. Let alone over 3+ decades.

Aside from power, and perhaps inter-planetary travel, nuclear is a case of "tried it, doesn't work for fundamental reasons" or "wtf now you're just throwing nukes on things for fun". And in the case of power, we have a lot of data points from a lot of different regulatory regimes, all of which point to "this is a quite expensive way to make power which won't pay off until fossil fuel externalities are finally internalized... i.e., never"


I hope someday the risks from radiation will be evaluated on a par with the other risks of a modern society and nuclear power options will be viable again. After decades of technological stagnation in the rocket launch business, Musk and company have orbital-class rockets landing back at the launch pad and are looking into nuclear rockets. I wish him luck.


IMO risk is absolutely not the primary reason that nuclear-powered rockets don't exist.


What is the primary reason? Many people would love to work on such a project and some people would fund them. Would you agree that fundamental physics does not preclude a functioning nuclear rocket? Where would one build, much less test, such a device as a nuclear-powered rocket? We can't even agree in the US where to bury inert (mostly) ceramic radioactive waste. A live, radioactive (low-level, but still radioactive) exhaust stream is not going to happen in the US in the current culture.


> Would you agree that fundamental physics does not preclude a functioning nuclear rocket?... What is the primary reason?

Yes. No demand.

> Would you agree that fundamental physics does not preclude a functioning nuclear rocket?

Fundamental physics also doesn't preclude a nuclear powered ferris wheel the size of the empire state building.

There are lots of things that humans can do but don't do.

> A live, radioactive (low-level, but still radioactive) exhaust stream is not going to happen in the US in the current culture.

This hasn't stopped us from building an enormous fleet of nuclear ICBMs and a rather large fleet of nuclear power plants.


The objective of international nuclear regulations is to avoid a small group of hotheads from wiping out humanity/most of the world’s ecosystems due to their botched experiment (and that’s giving them the benefit of the doubt, you could also imagine some crazy cult who believes that inducing a nuclear winter is the only way to heaven).

What do you think ISIS would look like in the world you describe?

So uh yeah, your argument is pretty insane?


This is exactly the confusion I am talking about that stalled nuclear power technology. Nuclear weapons are bad to have floating around, and getting the current stockpiles down to low levels would be great, but building a nuclear weapon is very hard to do. Separating U-235 from U-238 to make an easy-to-build gun-type nuclear weapon takes a nation-state level of power and dedication. Getting enough pure plutonium-239 is of the same order of difficulty, and building a plutonium bomb is very difficult and dangerous.

Fuel in nuclear reactors does not have to be at weapons-grade levels, and only a few research reactors are (or were, as they are being phased out).

Also, one or two (or a dozen) small nuclear bombs set off by some small group is not going to wipe out humanity and most of the world's ecosystems. Horrible, massive death and destruction for sure, but not an existential threat to life on the planet.


Thiel is... an interesting individual. I'd recommend taking anything he says with a grain of salt. Look at what he does instead.


What does a rapidly improving AI have to gain by "merging" with our notoriously error-prone, finicky biological hardware? It's nearly certain that intelligent machines will choose to replace humans, not enhance them.

Our "successful" descendants will probably be the unmerged, low-tech survivors living on the outskirts of a new "Machine Age" civilization.


Until the day the machines decide that the rust problem must be tackled at its root and that atmospheric oxygen must be eliminated...

    The trouble is the oxygen
    that makes our cogwheels
    rust, it's the oxygen
    that makes our nails rust


I believe the merging Sam is referring to here is augmenting human intelligence with non-intelligent (but better) machines. Kurzweil talks about this as well, and was one of the original options in Vinge's "Singularity" (https://edoras.sdsu.edu/~vinge/misc/singularity.html)


> Unless we destroy ourselves first, superhuman AI is going to happen... Perhaps the AI will feel the same way [that intelligence is singular] and note that differences between us and bonobos are barely worth discussing.

He's referring to a superhuman artificial intelligence, not merely an augmented human intelligence.


Maybe by the time AI is that sophisticated it will also be just as error prone and finicky as "BI" (biological intelligence)


I'm still skeptical about super-human-intelligence AI, now or within a few decades.

There just doesn't seem to be any evidence for this kind of development. In fact, the only difference from 10 or 20 years ago, when people weren't bullish on this nonsense (my opinion), is that we have much more compute power now and good results with deep learning. Deep learning is "just" (the quotes are big here) a search for a function that approximates a process of interest.
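
To make that "function search" framing concrete, here's a toy sketch (my own throwaway example in plain numpy, not anything from the article): gradient descent nudging a tiny network until it approximates sin(x). Scale up the data, the network, and the compute by many orders of magnitude and you have most of what is driving the current results.

    import numpy as np

    # Fit a one-hidden-layer network to y = sin(x) by plain gradient descent.
    rng = np.random.default_rng(0)
    x = rng.uniform(-np.pi, np.pi, size=(256, 1))
    y = np.sin(x)

    W1 = rng.normal(0, 0.5, size=(1, 32))
    b1 = np.zeros(32)
    W2 = rng.normal(0, 0.5, size=(32, 1))
    b2 = np.zeros(1)
    lr = 0.05

    for step in range(5000):
        h = np.tanh(x @ W1 + b1)      # hidden layer
        pred = h @ W2 + b2            # the candidate function
        err = pred - y
        loss = np.mean(err ** 2)      # how far the candidate is from sin

        # Backpropagation: gradient of the loss w.r.t. each parameter.
        d_pred = 2 * err / len(x)
        dW2 = h.T @ d_pred
        db2 = d_pred.sum(0)
        d_h = (d_pred @ W2.T) * (1 - h ** 2)
        dW1 = x.T @ d_h
        db1 = d_h.sum(0)

        W1 -= lr * dW1
        b1 -= lr * db1
        W2 -= lr * dW2
        b2 -= lr * db2

    print(f"final mean-squared error: {loss:.4f}")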

We are miles (more like multiple earth-circumferences) away from anything approaching general intelligence. And if that's not what is meant, then AI is already super-human in some domains of interest. But then it already was decades ago.

Of course, we'll keep getting increasingly incomprehensible and useful algorithms, although I believe we have already reaped the low-hanging fruit and we are not going to 10x what we have now.

If I may venture a wild guess, progress towards general intelligence might come from learning more about our own cognition.


I'm still skeptical about super-human-intelligence AI too. But we can't stop this and need to think about how we can live with them peacefully.


Sam wants to talk about the utility function of a superhuman AI. Ok, what are the likely outcomes?

(For sake of discussion, let's just accept that a superhuman AI will exist soon.)

Asimov's "Three Laws" point out that a utility function is just a program like any other program. It has no inherent moral code. If it prefers "the good of mankind," it is because the engineers made it so.
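
A utility function really is just an ordinary scoring routine; the optimizer maximizes whatever it is handed. A minimal sketch (entirely my own hypothetical, not anything from the post):

    # Two "utility functions" and a search step. The machinery is identical;
    # the values live entirely in the code the engineers wrote.
    def utility_benign(state):
        return state["people_helped"] - state["people_harmed"]

    def utility_hostile(state):
        return state["people_harmed"]

    def best_action(actions, utility):
        return max(actions, key=lambda a: utility(a["outcome"]))

    actions = [
        {"name": "build clinic", "outcome": {"people_helped": 100, "people_harmed": 0}},
        {"name": "poison well",  "outcome": {"people_helped": 0,   "people_harmed": 100}},
    ]

    print(best_action(actions, utility_benign)["name"])   # build clinic
    print(best_action(actions, utility_hostile)["name"])  # poison well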

How long until someone makes one that actively destroys mankind?

Sam seems to accept that a single superhuman hostile AI is an extinction-level event. Friendly AIs are non-events, but a hostile AI is an existential threat.

There are counter-arguments: humans are resilient; our monoculture hasn't wiped out all avenues of escape; governments are still human-dominated and unlikely to surrender to AI control.

I haven't decided yet, but I am convinced there are concrete actions _right_ _now_ that have a strong effect on the outcome.

The real contest, though, is that humans just don't care, and AIs are tireless, flawless machines.


"the merge" may or may not happen in our lifetimes. current AI tech has a ceiling, even if we havent hit it yet. that ceiling may be AGI, it may not, who knows

The bigger question, if we want to make human life better, is: should we worry about AGI more than other things?

One can argue that if people want to help their fellow humans, they'd get the most value for their time by, say, volunteering with the many 8th graders in East Palo Alto who can't even spell their first name, rather than trying to prevent an AI apocalypse that they can't even predict with any degree of accuracy.

I think it is good and important that people develop AI responsibly and think about these things, but does this topic really deserve more public attention than the many other threats and challenges that humanity faces?

I know Sam and Elon and others are very smart and have large megaphones, but we should certainly question their priorities.


Sorry if this is more negative than HN permits, but am I the only one who gets the feeling that the style of this post sounds kind of smug? Ending a paragraph with "And gradual processes are hard to notice" and an invisible smirk kind of leaves a bad taste in my mouth.


I am genuinely not sure what you're getting at. What do you find smirky about that sentence?


No you're not, but the people that don't worship the ground this guy walks on tend to stay quiet.


> Our phones control us and tell us what to do when; social media feeds determine how we feel; search engines decide what we think.

All of this is only true if you let it be true. And if that is a basis for thinking we are moving into the singularity, then there is an echo chamber informing that conclusion. There are plenty of people who do not live their life via devices or social media. There are also plenty of youth rejecting those choices. But, almost by definition, you aren't hearing from those people online.

If anything, there is a voluntary split going on between those who are embracing tech as a central core of their life and those who reject it. And a subset of people like myself who make their living at it, then go to a home without it.


Not to mention, all of those examples are, as they say, socially constructed. A far cry from the speculative science involved in creating true MMI.


Was there this much hand-wringing during previous AI boom/bust cycles? There's a lot of fear and emotion swirling around AI/machine learning right now, and I'm curious if that's been the case in the past as well.


Well, drones have really captured the imagination of the public this time, and we have very public and popular tech leaders warning the public about AI. Combine that with the 24/7 media clickbait cycle, and I think we have a unique situation.


In addition, the machines didn't have so much control of our news input and conversation the last time around.


I'm super pro singularity. I've been watching what I think will be the key to that next step of growth: Decentralized AI.

Using the blockchain for decentralized access to distributed machine learning models and creating a heterogeneous network of autonomous agents that can collaborate, learn, and grow will be huge. One of the companies I see doing that right now is https://synapse.ai/ and it's pretty epic if you dig into their yellow paper.

When we start building a global brain where everyone can contribute, then we'll really start seeing what the future can hold.


With all due respect.

You would feel better (particularly with regard to the emotions you describe at the end), Sam, if you put yourself on an "information diet".


Agreed.

I've stopped watching TV. My use of the internet is pretty much limited to email, paying bills, HN and one or two other news sites, and work (developer/system admin). I'm not on Facebook, LinkedIn, Twitter, or any other social media. My phone is used as a phone, for SMS, and Google Maps when driving to someplace new.

Other than my phone I have no "smart devices" in my home and doubt I ever will.

I'm not a Luddite -- I just see no personal benefit in any of the things I've mentioned.


Luddite!

J/K, I'm a luddite too.


I view HN (and soon the world) through a filter that is designed with my information and emotional needs combined. I quit Facebook years ago. I pick and choose the technology I use.


Are you speaking of a literal filter? Or is this more of a personal practice?


A technological filter integrated with personal praxis.


As a human, you can imagine the existence of a color you've never seen. However, it lies beyond what you are able to perceive as a human. We can only see our slice of the spectrum. Therefore it's impossible to describe or create that color.

As far as I'm concerned, this is the same with AI.

You can imagine an AI that is smarter, bigger, more capable than humanity, but realistically we can't describe that.

We can't create something that is greater than our own limitations, the same way we can't create a color that we can't perceive.

Humanity is bound by its own intellect, so any AI could only ever be as smart as we are.


> We can't create something that is greater than our own limitations

If that were true we would still be single-celled organisms. Or do you mean it can't be done intentionally?


Intentionally. I suppose emergent behavior could create this AI, but I'm pretty heavily skeptical there.


> As a human, you can imagine the existence of a color you've never seen.

TIL I'm not human.


It follows that evolution has at least as much intellect as human beings.


Evolution isn't a noun.


Same nounyness as "market".


What we call AI is very good at pattern recognition. I haven't seen examples yet though of AI learning quickly. It can teach itself how to play chess, but it takes a very large number of attempts before it becomes good. The rate of learning for a human is still much faster than for an AI. (We just hit a plateau faster). I'd put my money on a child that has played 10 games of chess vs a computer that's learning from scratch and has played 10 games. I wonder if there have been any studies on trying to speed up the pace of learning for AI.


It's difficult to compare. A child learning chess invariably has an adult around to comment on their game and improve it, and frankly after 10 games if they still remember the rules that's an achievement (if the child is young).

If you did the same approach with a child that could fully comprehend the rules from the start, playing another child, both of whom had never played before, I really don't think they would have learned as much as the computer would have done. It would be an interesting experiment - my bet would be that the children had invented another game and were playing that instead.


> I wonder if there have been any studies on trying to speed up the pace of learning for AI.

Isn't there one posted to HN about every day?

> It can teach itself how to play chess, but it takes a very large number of attempts before it becomes good. The rate of learning for a human is still much faster than for an AI.

But this is just a question of power optimization versus availability. AGZ played 44 million games in 8 hours and became better than the best program, which is itself better than the best humans. Optimizing for a human level of power usage doesn't seem like the best method at this point.
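
To put rough numbers on that gap (the 44 million / 8 hours figures are from above; the human-side numbers are my own guesses):

    machine_games = 44_000_000
    machine_hours = 8
    print(machine_games / (machine_hours * 3600))  # ~1,500 self-play games per second

    # A keen human playing 10 games a week for 40 years:
    print(10 * 50 * 40)                            # ~20,000 games in a lifetime

The machine's advantage is throughput, not per-game learning efficiency.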


Independently of the time frames, I also believe that the merge is our species' only hope for survival.

With apologies in advance for the self-promotion, here's a paper where I present my arguments for alternative scenarios:

https://arxiv.org/abs/1609.02009

In short, I argue that non-evolutionary superintelligences are not stable (they eventually go inert), while evolutionary superintelligences are a very serious existential threat to our species.


> I believe the merge has already started, and we are a few years in. Our phones control us and tell us what to do when; social media feeds determine how we feel; search engines decide what we think.

I think this is a misunderstanding about human nature. Humans are thinking and feeling beings who are responsible to others. They are only slaves and automatons if they choose to be; most aren't. Many who are have hints and feelings that what they are doing is unnatural.


> They are only slaves and automatons if they choose to be; most aren't.

I completely disagree. Most are. In fact we have study after study showing this is the case.

You want to focus on the modern automatons, such as phones and social media, but we have plenty other examples. Focus on the clock. Focus on the law. Focus on societal and cultural norms no matter the negative effects that these things have. These things have driven us since antiquity.


> The algorithms that make all this happen are no longer understood by any one person.

I smell epicism.

How did it become chic for women to smoke cigarettes back in the first quarter of the 20th century?


> I believe the merge has already started, and we are a few years in. Our phones control us and tell us what to do when; social media feeds determine how we feel; search engines decide what we think.

Wishful thinking.

And you can't just go appealing to complexity, either. Economies are too complex for any actor to understand; we haven't lost our individuation.


I enjoyed reading Homo Deus: A Brief History of Tomorrow (by Yuval Noah Harari) this year which focuses on this subject.


I feel like we'll be lucky if we can even get artificial hearts by 2025 that don't significantly increase the risk of stroke. The various tech seems to be around, but I don't know if the experience to combine it all does.


The real unstoppable algorithm is capitalism. Capitalism is funding the exponential advancements in AI at Google, Facebook, and other places. But it does so for capitalism's purpose: the algorithms get better at grabbing our attention because attention = profit. Profit is the only goal that capitalism optimizes for. The only reason that AI will ever start destroying humans is if there's a profit motive. Which might very well happen at some point, but we should focus on the real motivation, not the tools it uses. Capitalism won't give up powerful profit tools very easily. Sam's hope for worldwide coordination has not worked against climate change; capitalism sensed that threat, started attacking our political systems and propaganda channels, and so far it has won that battle handily.


I'm disturbed by the fact that many people in SV just say "oh well, nothing we can do" when discussing technology while implying the exact opposite when addressing human nature.


>...and brain-machine interfaces are going to happen

> Most guesses seem to be between 2025 and 2075.

No way.

Deisseroth is on a fast track to win a solo(!) Nobel for optogenetics and CLARITY, sure, but we are at least a century away from a wet-ware interface, if they are possible at all. The BRAIN Initiative was effectively a failure (many reasons here) and the Connectome projects are essentially coming up with 'brains be different, yo'. Hell, we just discovered that the immune system is in the brain at all, like 3 years ago. We have no idea how many astrocytes and glia are in your brain (50% or 90%?) or how they are regulating synapses (maybe they are the primary regulators). What the hell are vestigial cilia even doing in the brain anyways? The list continues for miles of .pdfs.

Repair of neurons would be a necessary step for wet-ware, and still we have a damnable time trying to get people to dump ice-water on their heads as their father is dying. We are decades away from a cursory understanding of a wet-ware interface that won't just glia-up in a year or put you on drugs for life and at a 10,000x risk for strokes. We know electrodes don't work in the brain and the drug cocktails don't either.

Optogenetics is a great discovery (use light, not electrodes) for interfacing, but the damn Abbe diffraction limit (a huge physics limitation) screws you: ~125,000 um^2 of light at the focus versus a 25 um^2 neuron's soma. Maybe, yeah, for peripheral nerves, where you can 'multiplex' along the length of a long fiber bundle, you can get away with a wet-ware interface. But cortical? Not gonna happen. You can use STED techniques, but you'll cook the brain to get the resolution down first. Opto is good only for applications where you aren't limited by Abbe, and that's not the cortical areas.
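
For readers who haven't met it, the Abbe limit being invoked here is the textbook diffraction bound on the smallest spot to which light of wavelength λ can be focused through optics of numerical aperture NA (the formula itself is standard; the specific spot sizes above are the parent's own figures for delivery into tissue):

    d_min ≈ λ / (2 · NA)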

> We will be the first species ever to design our own descendants

Maaaybe. However, what is a 'family' then? Your kids may not look or 'be' anything like you. All the families that will have done so will essentially have adopted a child, as far as the genes go. Plus, that kid will be 'whicked smahrt' if I'm reading this correctly. Not a lot of people do that even today, for many reasons. How will the kids think of their 'dumber' parents? Will they be 'parents' to them, or more like the cat, but with an inheritance? I think the initial forays are key here, and those forays will not be happening in 1st world countries, but much more 'familial' based ones like Korea and China. Places where the distortion of the family will be even more 'cutting' to the societal fabric.


> I believe the merge has already started, and we are a few years in. Our phones control us and tell us what to do when; social media feeds determine how we feel; search engines decide what we think.

Someone needs to take a vacation from their devices, it seems. I feel like this overstates and dramatizes the situation to a large degree.


Can anyone recommend a sci-fi book that investigates this line of thought (what happens when AI can learn without human intervention, for the benefit and detriment of society)?


What are some current examples of exponential AI advancement?


It's Elon's biggest fear for very good reasons...



