If the "attention epidemic" and risk of hacking people's attention is high, Reddit is likely to be just as culpable in that as Facebook. Further, Reddit has shown to be ripe for weaponizing by hostile foreign nations given the Russian meddling. This has not gotten nearly the attention it warrants, and many feel Reddit has done nothing particularly worthwhile to address the matter.
So in an age where algorithms optimize to capture attention for the benefit of platform owners, and platform owners are incentivized by ad revenue from advertisers who may have malicious social-engineering motives, or where the platform is seen as an attack vector outside of ads, what is the responsibility (legal and moral) of platform owners and investors to do something about this?
Sure, if you could coordinate all human activity on the planet you could prevent further AI research from happening. Assuming you can't coordinate all activity, getting out in front of the problem is probably the best you can hope to do.
> In the Stalinist ideological imaginary, universal reason is objectivised in the guise of the inexorable laws of historical progress, and we are all its servants, the leader included. A Nazi leader, having delivered a speech, stood and silently accepted the applause, but under Stalinism, when the obligatory applause exploded at the end of the leader’s speech, he stood up and joined in.
It's just one of those processes that are complex but still have a trajectory and are predictable in some regard. Sort of like the Transcontinental Railroad in a way.
I'm still holding out hope that Sam will come participate on HN one day. I assume you don't know him on a personal level.
Then again, the descendants of the apes that didn't develop the particular "tool" of generalizable problem solving are extinct or endangered, while the descendants of the ones that did are busy working to create AI.
That's an odd position to take. Surely there's a difference between the hand and what it grasps.
I agree that that is an odd position -- it's not one I hold, but a reductio of the "computers are tools no different than knives" argument above. Machines that are able to process novel information and make decisions are fundamentally different from other machines, whether they are composed of tissue or silicon.
Our ability to cook our food has rendered large molars obsolete to the point where some people require medical intervention to remove them, lest they crack their teeth and jaw, resulting in infection and death.
[edit - why am I still editing the spelling and grammar on this post?]
Personally I believe we will see really interesting animal-like AGI demos in 2018 and 2019.
It's just dumb that people can't take this stuff seriously until all of the engineering and research is 100% done.
I'm not convinced that an exponential number of people working on AI produces exponential advancements. Wouldn't we see diminishing returns with each new person, since presumably each one is less capable and less expert than the last? I see AI experiencing a hype-cycle-like disillusionment before seeing "double exponential" returns.
As long as it stays like that (and by opening up new sub-areas, it could stay like that for quite a while), adding more somewhat-qualified people to the field results in all the useful combinations being discovered faster, thus exposing new potential starting points. Probably no exponential growth in the strict sense, but quadratic growth isn't bad either.
Typically, diminishing returns means f(x) = x/(1+log(x)) where x is headcount and f is output. The overhead goes up with the log of the number of people, because a hierarchical organization will have log(N) layers of management that need to be traversed to make decisions.
If we have exponentially increasing manpower, x = exp(t), then f = exp(t)/(1+log(exp(t))) = exp(t)/(1+t), which is O(exp(t)/t).
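To make that concrete, here's a minimal Python sketch of the model above (the function name and sample range are just illustrative):

    import math

    def output(headcount):
        # Diminishing-returns model from above: f(x) = x / (1 + log(x)),
        # where log(x) stands in for the layers of management overhead.
        return headcount / (1 + math.log(headcount))

    # With exponentially growing headcount x = e^t, output is e^t / (1 + t),
    # i.e. O(exp(t)/t): still exponential, just divided by t.
    for t in range(1, 6):
        x = math.exp(t)
        print(f"t={t}: headcount={x:9.1f}, output={output(x):9.1f}")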
I suppose it depends on your view of the theory of scientific revolutions.
Look up, for example, Yann LeCun's AGI presentation. Then see projects like Ogmaneo or recent DeepMind papers that do fast online learning or have ways to work around catastrophic forgetting and the other issues raised by LeCun.
I believe that one straightforward way to get there is to continue to apply neural network research towards attempting to mimic animals. And there are a bunch of people doing that and making progress.
That is harder than you might think. For example, the simplest animal with a nervous system, the C. elegans worm, has been simulated "only 20 to 30 percent of the way towards where we need to get," according to one of the OpenWorm project leaders. That's 302 neurons total.
Using a computer analogy, it's like struggling to build an abacus when our ultimate goal is to build a Core i7 processor.
> rather than any totally new approach
It's not clear to me that we have any approach to achieve AGI. People like Ben Goertzel have worked on that for decades without much to show for it. Current deep learning methods have little to do with AGI, they focus on very narrow applications, by design.
"I believe attention hacking is going to be the sugar epidemic of this generation. I can feel the changes in my own life — I can still wistfully remember when I had an attention span. My friends’ young children don’t even know that’s something they should miss. I am angry and unhappy more often, but I channel it into productive change less often, instead chasing the dual dopamine hits of likes and outrage."
This further underscores the need for us as a society to find time to step away from technology and experience the "world" (nature, art, human interaction, etc.).
The dinosaurs probably disagree. Eventually they might have developed tools and then technology but then planet-scale comet destruction and/or volcanoes got in the way.
We, however, don't need a cataclysm to delay or indefinitely defer scientific progress... we have various self-inflicted ways to impede it: religion, dictators, racism, consumerism, spending money on bombs vs. education, denying climate change, tolerating genocide, electing crazy people to run nuclear-armed nations, taxing graduate students, the McRib, etc.
> More important than that, unless we destroy ourselves first, superhuman AI is going to happen, genetic enhancement is going to happen, and brain-machine interfaces are going to happen.
I think it's a really good point. As long as the laws of physics don't prevent something, and we can't find a clever workaround, progress tends to march on forward. Really the only thing that could stop that is some sort of earth-wide cataclysm, whether it's natural or human-made.
I think you overestimate a lot of those human-made things, though. The countries of the world are very interconnected these days, to be sure, but I'm optimistic that religion, dictators, etc. can't stamp out all progress; I believe there will always be places in the world where this kind of progress is at least tolerated, if not actively embraced and encouraged.
That said, the real dominant species on this planet, tardigrades, could likely survive even a McRib-induced climate catastrophe. So if tardigrades merge with AI, we are totally and completely fubard!
Cybernetics refers to this as a metasystem transition:
(Shameless plug: if you want to see what it looks like when a hivemind speaks, check out https://reddit.com/r/AskOuija)
His examples of the merge in progress are social media determining how we feel and search engines deciding what we think. The problem here is the anthropomorphizing that humans have done throughout history to understand things they can't fully explain. Gods don't get angry and cause rain, and search engines don't make decisions about what we think. A search engine is just a complex math formula that can't make a decision any more than a calculator "decides" to output 4 when someone types 2 + 2.
Bostrom and others in this camp always have a hard time describing what "superhuman" will look like or mean. But I will guess that in 2075 the goalposts will have been moved, and it will be said that in 2017 we hadn't realized there was already superhuman AI, as evidenced by calculators that could already do math faster than a human.
The basic idea: how do you effectively align a general intelligence's goals with humanity's, so that it uses its intelligence to solve problems in ways that align with our own human utility functions?
If humans figure out general AI before goal alignment it may have outcomes we don’t want.
Perhaps work on AI goal alignment can help us build a more humane marketplace, or vice versa: research into macroeconomics can inspire some ideas in AI goal alignment.
It's just that the values and goals of many humans are unaligned with the values and goals of many other humans.
Which results in a cacophonous marketplace.
A self-improving general AI may just be a matter of an intuitive leap, but nothing that's hard to turn on once you understand it. This property would make it hard to suppress.
The upside is that if we can figure out the goal alignment problem, we may be able to fix the coordination issues humans have and reach a much better end state than all of us burning out and going extinct on our planet.
The downside is some variant of the world turning into paper clips.
It is a hard problem, though, which is why there's all the talk of existential risk. It's also a little nuanced to explain why there's concern or why people should work on the problem.
I thought this post from yesterday was a good response to someone stating that an intelligence explosion was impossible (the original article it's responding to is linked within it): https://intelligence.org/2017/12/06/chollet/
On the defensive side I think there's a lot that can be done with narrow AI. Consider using really good penetration testers and bug finders to fix Internet security.
For any given task, build a tool that just does that. It's like the least privilege principle in computer security.
These ideas are mainly extreme proof-of-concept ideas; maybe there are ideas along these same lines that are much easier to pull off, but maybe these ideas as-is would be doable for a possible superhuman intellect. I'm kind of thinking about what it looks like when individuals meticulously plan and then make tool-assisted speedrun videos through videogames, and then scaling it up to what it would look like if thousands of people worked together optimally for thousands of years, brainstorming and calculating how to speedrun through conquering a society. Physical limits permit this kind of computation as far as we know. Maybe this imagery is off-base, but I'm purposefully shooting high and worrying I'm not shooting high enough. Chimps would never even consider the possibility of "super-chimp" intelligences creating nuclear weapons. You can't make narrow AIs to defend against these possibilities if half the danger is our inability to even enumerate them.
Ok, let's see. I check my Android phone perhaps 5 times a day. It's in silent mode all the time and has had a broken screen for 2.5 years, and I don't care. I have perhaps 10 apps installed, of which I use maybe 5 on a semi-regular basis. I check Facebook / LinkedIn perhaps once a day, and when I do I'm always slightly annoyed by the amount of useless crap in my feed. I don't use or need Snapchat or Insta or Twitter or any other social media. I doubt I even need Facebook. I do use Google, a lot, but mostly as a gateway to Stackoverflow or to satisfy my curiosity about something I've read somewhere else on the internet. I doubt it has any influence on what I think.
Am I really such an outlier?
This seems to imply sama is resisting the inevitable merge. Can't we instead try to steer it in a more positive direction? Where are the startups trying to keep you addicted to the internet in a way that improves your life? They don't seem to exist, because the incentives aren't right -- it's way more profitable to keep you hooked in ways that make your life worse. How do we fix those incentives?
> consider the gamifying elements of Khan Academy, Duolingo and other e-learning sites for example
It's worth noting that I've heard a lot of criticism of e.g. Khan Academy from certain parts of the mathematics education community. Concretely, they feel that the material there is designed to increase engagement and provide a sense of accomplishment ASAP, and that the learning processes that fit those goals are detrimental to developing certain mathematical skill sets. I.e., you might learn how to compute derivatives really well, but at the expense of really understanding limits or the fundamental theorem of calculus.
Mind you, mathematics educators have always been an internally divided bunch, so take from it what you will.
In Sapiens, the author argues that wheat enslaved humanity into producing more of it.
Sure, it's an interesting way to think about this, but it's not what is happening. People grow wheat because it has a lot of nutrients, can be stored as grain, and can be harvested twice a year.
Similarly, you look at your phone because you find it useful, not because it enslaved you into doing it.
Sure, you lose some things when you have the world's knowledge at your fingertips, but you win a lot too. When you had to look up the GDP of France in a book, it was harder to have fact-based arguments...
Anyway, I think General AI can become a very bad thing, but we shouldn't confuse it with "phones are controlling our lives", because that simply isn't true.
> Makes you wonder what will happen when instead of the rules of chess, you put in the axioms of logic and natural numbers. And give it 8 months of compute.
The answers are more realistic:
> How do you score this computation? What's your goal? There's no checkmate here. (this was mine)
> If you're talking about formal proofs or maths, I'm not sure how this would apply in general, as the branching factor for each 'move' in a proof is effectively infinite.
Also, there was a talk where a Google engineer admitted that a car you can put your kid into to drive them to school is still more than three decades away. Other sources make this seem even more likely: the trolley problem doesn't appear solvable, so we would need to drastically decrease potential interaction between pedestrians and self-driving cars, which requires building guard rails, reforming transit, and so on.
Don't subscribe to the hype.
Not to mention Katherine Bailey's excellent article (well, all of her articles on Medium are on AI and are really good reads):
> One thing that both the pessimistic and optimistic takes on the Singularity have in common is a complete lack of rigor in defining what they’re even talking about.
I think the fact that the internet guides people's behavior on a massive scale is a clear indication that machine/human hybrid cognition is not hype; it's a thing, and it's starting just now.
What makes one go to the fridge? A sense of hunger. What makes one go to Facebook? A different yearning.
Before, people controlled machines, and machines did not induce addiction. Now machines can induce a wide range of emotions.
This is the first step - the channel is open.
On the other hand, as for when there will be more than random noise and clever hacks at the machine end - I don't know.
Only that we as a species are already combined on a cognitive level by the internet.
One could say that the combination began when writing and money were invented - one coordinating thought, the other labour.
Machines have made this suprapersonal interaction much faster and much more powerful.
It would be silly to say it's just a fad. It's not a Skynet scenario; it's not the Matrix. But people are affected, and algorithms are getting more clever.
I'm not saying we are going to have an AI overlord. But I am saying the interconnectedness, the algorithms, and the addictive quality of interaction are definitely leading us into new territory.
Also known as a schoolbus.
Speak for yourself. I doubt Sam meant to appear ignorant of other perspectives on the world. However it would be nice to consider them when writing general insights, as opposed to simply tunneling his own point of view.
This really reads like self-satire.
It's just silicon valley bubble platitudes top to bottom. He needs to take a breather somewhere on the other 99% of the planet.
No, it's 21st-century IT-bloke failure of imagination and arrogance. A failure to imagine change: specifically, starting to hit real walls and running out of superficial areas of expansion to distract you from the lack of intrinsic progress. And it's 21st-century IT-bloke arrogance to imagine that the context that has rewarded you with money and power is also at the heart of an earth-shattering scientific revolution... and not just (which is significant) another industrial revolution leading to new and all too human plateaus.
I expect a technological singularity to supplant/replace biological life completely.
I don't think I understand.
Since sooner or later, biological systems would be inferior to manufactured ones.
1. Humans have always been "merged" with technology. It is becoming more pervasive now, more powerful and to some extent more intimate, but it has always been the case. For example, the evolutionary advances of homo sapiens were in part a result of our ability to "outsource" elements of digestion to cooking, enabling our intelligence to outstrip rival animals and hominids. Other examples: farming, writing, the printing press, telegraphs and so on.
2. For most of human existence, humans have believed in forces superior to themselves, whose intelligence, power and strength outstrip their own: whether God, gods or other metaphysical forces. It's amusing to me how theological tropes reappear in these writings with a high degree of regularity. Now one might argue that the difference is that AIs "really exist". But crucially, the idea that humans have always considered themselves totally "top of the pile" seems radically false. It is at best a very limited notion in human society. Even modernity, increasingly secularised, was quick to assert that human mastery was at best an illusion, e.g. the psychoanalytic or Darwinian revolution.
3. The obsession with AI being the largest existential threat to the human species seems hubristic in the extreme given that a very current and very real threat is already here and it is often the most poor that are already feeling its effects: catastrophic climate change.
This can also be generalized to other animals; see "The Extended Phenotype" by Richard Dawkins ( https://en.m.wikipedia.org/wiki/The_Extended_Phenotype ).
The interesting thing to me is that life and evolution propagate because of the laws of physics. It's theorized that life is a chain reaction that evolves out of a simple necessity to be as efficient as possible. So this new life form will potentially depart from that natural basis for evolution.
What do you guys think though, do you really think a merge will happen? This is obviously a long-term, existential, and depressing discussion, but really, when an intelligence with much more potential than ours arises, will there really be any point in us lingering around? Do we even have a chance at this merge? I mean, I guess I see the urgency: we would need to start now, so that any innovation in AI is really linked to improving our own cognition from the get-go; otherwise we are just a stepping-stone for life originating from this solar system.
We bring dogs along because they are dumb pets, but we killed wolves and took their habitat for our own.
There's a resource cost to "finding new experiences", which conflicts with the "survive and procreate" drive necessary for any successful system.
The primary survival loop will command the resources.
Why should it save you?
If you think about it, people are just a bunch of cells and bacteria which work together.
We will be like the living cells or bacteria that fuel the machine... if we aren't already.
And at that point it's game over for homo sapiens.
The protocol that we need is for transmitting information, not data. Or in other words, you don't need a neural interface to transmit understanding.
You could have said the same about the philosopher's stone, which people thought it was possible to "discover" back in the late 1700s, or about us humans physically reaching other solar systems and especially other galaxies, which some people thought possible for a while after WW2. It's a belief similar to how some religions started - definitely similar to the early Christians' belief that the second coming was only a decade or two away.
Still exponential, though.
Also, as a general criticism, there's a big difference between people getting addicted to the internet - getting dumbed down by it and sucked into things like a YouTube hole - and my idea of The Merge. The things you describe sound more like an automated soap opera or opiate addiction than the singularity.
If you define intelligence as “being really good at chess” or “factoring prime numbers”, sure, computers are already smarter than us. If you define intelligence as “knowing when to let your child make mistakes on their own and when to help them”, or “knowing how to conduct an orchestra”, it doesn’t seem so extreme anymore.
In fact, the opposite statement rings just as true:
It is a paragon of human arrogance to assume that we will build things smarter than ourselves in every conceivable way.
The road ahead looks more like a planet with its ecosystems ravaged by resource extraction, with buggy computerized systems we don't understand running people's lives in harmful ways (e.g. see all the writeups about machine learning reinforcing systemic biases), than a world full of meta-humans in symbiosis with inconceivably intelligent machines of their own design.
That can be tested and quantified, based on end state goals.
>The road ahead looks more like a planet with its ecosystems ravaged by resource extraction
In such a world, the savviest and most strategic 1% would thrive, just like they always do. Eventually, though, they will need to merge with machines to continue making the cut. It's too advantageous not to.
The term "dominant species" really jumped at me as I was reading this piece. It's so fantastically vague that it begs the question - would us and AI have the same definition for what "dominant" is?
We're not building AI individuals with a self-preservation instinct, hunger for resources and a sexual drive. We're building super-individual systems like Google, Facebook and trading systems, which can be much more intelligent than we are but also have vastly different built-in "purposes" (just as evolution has built the purposes of eating, having sex and caring for our families into us).
I think that in the end, the thing we're building will end up much closer to Solaris ocean than Terminator.
There is no other general intelligence at this point that can rival human intelligence; we have a sample size of 1. Making predictions across such a wide range of possible technologies is hopeless at this point.
Why would a fixed-placement general-learning computer system that takes input from millions of possible sources evolve in the same way as a mobile robot with general-learning intelligence? Will both be possible? What about swarm intelligences?
Too many questions with no possible way to answer them.
When would this start? It's probably limited not by the pace of AI but by the pace of human-computer interfaces catching up. I suspect the largest increase in usage will be the non-invasive electromagnetic helmet style that has already been proven to work to some degree.
Generally too, HCI can help us create a massive amount of training data.
The one avenue open to tech advancement became so powerful that it eventually began to bleed into the physical world and we are starting to see other tech slowly advancing again. Maybe if the tech wealthy of Gen X and later have sufficient power in society after the Boomers pass the baton (if they ever do), we will let the technology keep advancing and see this merging. There is also the fact that the US is no longer a hegemon. Good luck stopping people in China from doing things banned in the West.
That's not true at all! Civilian nuclear power in the USA (and most of the West) has stalled out since the '70s, but nuclear technology has seen plenty of advances over the intervening time frame. See esp. modern nuclear propulsion systems. Super impressive.
Also, this isn't a problem caused exclusively or even primarily by safety regulation in the nuclear sector. The evidence is in the failure of nuclear power throughout the world, despite significant variations in safety regulations.
In fact, if you look at root-cause analyses for the failure of nuclear energy, you'll find that lack of regulation is a major reason for it. If the fossil fuel industries had to pay for their externalities, nuclear would be extremely viable (at least in the 80s-00s; now it'd have to compete with solar and wind).
> Computer technologies were only allowed to progress so quickly because people did not think computers were dangerous.
Again, this is a pretty wild assertion... Moore's law >>>> regulatory environment. Seriously. If the output of nuclear power plants had grown exponentially for multiple decades, we'd be in nuclear-powered paradise.
"Moore's law >>>> regulatory environment"
I don't see how this is a disagreement with what I was saying. Imagine if society had the same amount of regulation on building software systems as on dealing with radioactive stuff. Moore's law would not exist.
"If the output of nuclear power plants had grown exponentially for multiple decades, we'd be in nuclear-powered paradise." Why didn't that happen? Lots of reasons, but I think it could have and still could given a chance.
I'm referring to military ships and submarines. I'm not really sure where the financial incentive for nuclear comes from for anything commercial?
TBH it seems like you have a culprit (regulation) in search of a victim (nuclear utopia), and as a result, you have a solution (nuclear power) in search of a problem. If that's the case, there are really many much better examples.
> Nuclear test ban treaty, etc.
The NTBT covers intentional detonation of nuclear weapons... I'll be the first to concede that regulation has perhaps hamstrung the private sector nuclear weapons market :-)
> engines for nuclear rockets
Radiation is a bitch and lead is heavy. Conventional rockets that we need for human space flight can already, with modification, get us to where we've wanted to go so far. So we've instead focused our resources on doing useful stuff once we get there. NASA doesn't have infinite money.
This is something that people continue to research. What's holding it up is a lack of priority in decisions about what science to fund and a complete lack of any private sector market large enough to justify the investment, not any sort of regulatory barrier.
> nuclear excavation techniques
...I'm having an extremely difficult time thinking of a use case where this would make any sense. Nuclear bombs are more powerful than conventional explosives. That power has an enormous tactical advantage in wartime because you can take out a city with a single missile instead of weeks' worth of bombing runs.
But that's not really a particularly beneficial feature in e.g. a commercial mining operation. Plus radiation's a bitch.
Help me out? What are the possible use cases here?
> nuclear airplanes
Radiation is a bitch and lead is heavy.
> I don't see how this is a disagreement with what I was saying. Imagine if society had the same amount of regulation on building software systems as on dealing with radioactive stuff. Moore's law would not exist.
This isn't clear to me. I think the hardware would still have been invented. Perhaps the business impact would've been smaller, but regulation of commercial software systems wouldn't have stood in the way of the enabling physics and engineering research.
Also, see the Ford quote about Microsoft; if the nuclear industry worked like the software industry, we wouldn't even be here to have this conversation. We'd have BSoDed our way to nuclear Armageddon some time around 2001.
> nuclear power plants had grown exponentially for multiple decades... Why didn't that happen?
Physics is a bitch. More precisely, I'm unaware of any serious conjectures by modern nuclear scientists that there are obvious advances on the horizon that could give us exponential improvements to reactor output even in the short term. Let alone over 3+ decades.
Aside from power, and perhaps inter-planetary travel, nuclear is a case of "tried it, doesn't work for fundamental reasons" or "wtf now you're just throwing nukes on things for fun". And in the case of power, we have a lot of data points from a lot of different regulatory regimes, all of which point to "this is a quite expensive way to make power which won't pay off until fossil fuel externalities are finally internalized... i.e., never"
Yes. No demand.
> Would you agree that fundamental physics does not preclude a functioning nuclear rocket?
Fundamental physics also doesn't preclude a nuclear powered ferris wheel the size of the empire state building.
There are lots of things that humans can do but don't do.
> A live, radioactive (low-level, but still radioactive) exhaust stream is not going to happen in the US in the current culture.
This hasn't stopped us from building an enormous fleet of nuclear ICBMs and a rather large fleet of nuclear power plants.
What do you think ISIS would look like in the world you describe?
So uh yeah, your argument is pretty insane?
Fuel in nuclear reactors does not have to be at weapons-grade levels, and only a few research reactors are (or were, as they are being phased out).
Also, one or two (or a dozen) small nuclear bombs set off by some small group is not going to wipe out humanity and most of the world's ecosystems. Horrible, massive death and destruction for sure, but not an existential threat to life on the planet.
Our "successful" descendants will probably be the unmerged, low-tech survivors living on the outskirts of a new "Machine Age" civilization.
Boredom is the oxygen
that rusts our toothed
wheels, it's the oxygen
that rusts our nails
He's referring to a superhuman artificial intelligence, not merely an augmented human intelligence.
There just doesn't seem to be any evidence for this kind of development. In fact, the only differences from 10 or 20 years ago, when people weren't bullish on this nonsense (my opinion), are that we have much more compute power now and good results with deep learning. Deep learning is "just" (the quotes are big here) a search for a function that approximates a process of interest.
We are miles (more like multiple earth circumferences) away from anything approaching general intelligence. And if that's not what is meant, then AI is already superhuman in some domains of interest. But then it already was decades ago.
Of course, we'll keep getting increasingly incomprehensible and useful algorithms, although I believe we have already reaped the low-hanging fruit and we are not going to 10x what we have now.
If I may venture a wild guess, progress towards general intelligence might come from learning more about our own cognition.
(For the sake of discussion, let's just accept that a superhuman AI will exist soon.)
Asimov's "Three Laws" point out that a utility function is just a program like any other program. It has no inherent moral code. If it prefers "the good of mankind," it is because the engineers made it so.
How long until someone makes one that actively destroys mankind?
Sam seems to accept that a single superhuman hostile AI is an extinction-level event. Friendly AIs are non-events, but a hostile AI is an existential threat.
There are counter-arguments: humans are resilient; our monoculture hasn't wiped out all avenues of escape; governments are still human-dominated and unlikely to surrender to AI control.
I haven't decided yet, but I am convinced there are concrete actions _right_ _now_ that have a strong effect on the outcome.
The real contest, though, is between humans who just don't care and AIs that are tireless, flawless machines.
The bigger question, if we want to make human life better, is: should we worry about AGI more than other things?
One can argue that if people want to help their fellow humans, they'd get the most value for their time by, say, volunteering with the many 8th graders in East Palo Alto who can't even spell their first name, rather than trying to prevent an AI apocalypse that they can't even predict with any degree of accuracy.
I think it is good and important that people develop AI responsibly and think about these things, but does this topic really deserve more public attention than the many other threats and challenges that humanity faces?
I know Sam and Elon and others are very smart and have large megaphones, but we should certainly question their priorities.
All of this is only true if you let it be true. And if that is the basis for thinking we are moving into the singularity, then there is an echo chamber informing that conclusion. There are plenty of people who do not live their lives via devices or social media. There are also plenty of youth rejecting those choices. But, almost by definition, you aren't hearing from those people online.
If anything, there is a voluntary split going on between those who are embracing tech as a central core of their life and those who reject it. And a subset of people like myself who make their living at it, then go to a home without it.
Using the blockchain for decentralized access to distributed machine learning models and creating a heterogeneous network of autonomous agents that can collaborate, learn, and grow will be huge. One of the companies I see doing that right now is https://synapse.ai/ and it's pretty epic if you dig into their yellow paper.
When we start building a global brain where everyone can contribute, then we'll really start seeing what the future can hold.
You would feel better (particularly in regards to the emotions you describe at the end), Sam, if you put yourself on an "information diet".
I've stopped watching TV. My use of the internet is pretty much limited to email, paying bills, HN and one or two other news sites, and work (developer/system admin). I'm not on Facebook, LinkedIn, Twitter, or any other social media. My phone is used as a phone, for SMS, and Google Maps when driving to someplace new.
Other than my phone I have no "smart devices" in my home and doubt I ever will.
I'm not a Luddite -- I just see no personal benefit in any of the things I've mentioned.
J/K, I'm a luddite too.
As far as I'm concerned, this is the same with AI.
You can imagine an AI that is smarter, bigger, more capable than humanity, but realistically we can't describe that.
We can't create something that is greater than our own limitations, the same way we can't create a color that we can't perceive.
Humanity is bound by its own intellect, so any AI could only ever be as smart as we are.
If that were true we would still be single-celled organisms. Or do you mean it can't be done intentionally?
TIL I'm not human.
If you took the same approach with a child that could fully comprehend the rules from the start, playing another child, neither of whom had ever played before, I really don't think they would have learned as much as the computer did. It would be an interesting experiment - my bet would be that the children would have invented another game and been playing that instead.
Isn't there one posted to HN about every day?
> It can teach itself how to play chess, but it takes a very large number of attempts before it becomes good. The rate of learning for a human is still much faster than for an AI.
But this is just a power-optimization-versus-availability problem. AGZ played 44 million games in 8 hours and became better than the best program, which is itself better than the best humans. Optimizing for a human level of power usage doesn't seem like the best method at this point.
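For scale, simple arithmetic on the numbers quoted above:

    44,000,000 games / (8 * 3600 s) ~= 1,528 games per second

No human-style learner comes anywhere near that availability.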
With apologies in advance for the self-promotion, here's a paper where I present my arguments for alternative scenarios:
In short, I argue that non-evolutionary superintelligences are not stable (they eventually go inert), while evolutionary superintelligences are a very serious existential threat to our species.
I think this is a misunderstanding of human nature. Humans are thinking and feeling beings who are responsible to others. They are only slaves and automatons if they choose to be; most aren't. Many who are have hints and feelings that what they are doing is unnatural.
I completely disagree. Most are. In fact, we have study after study showing this is the case.
You want to focus on the modern automatons, such as phones and social media, but we have plenty of other examples. Focus on the clock. Focus on the law. Focus on societal and cultural norms, no matter the negative effects that these things have. These things have driven us since antiquity.
I smell epicism.
How did it become chic for women to smoke cigarettes back in the first quarter of the 20th century?
And you can't just go appealing to complexity, either. Economies are too complex for any actor to understand; we haven't lost our individuation.
> Most guesses seem to be between 2025 and 2075.
Deisseroth is on a fast track to win a solo(!) Nobel for optogenetics and CLARITY, sure, but we are at least a century away from a wet-ware interface, if one is possible at all. The BRAIN Initiative was effectively a failure (many reasons here) and the Connectome projects are essentially coming up with 'brains be different, yo'. Hell, we discovered that the immune system is in the brain at all only like 3 years ago. We have no idea how many astrocytes and glia are in your brain (50% or 90%?) or how they regulate synapses (maybe they are the primary regulators). What the hell are vestigial cilia even doing in the brain anyways? The list continues for miles of .pdfs.
Repair of neurons would be a necessary step for wet-ware, and still we have a damnable time trying to get people to dump ice-water on their heads as their father is dying. We are decades away from a cursory understanding of a wet-ware interface that won't just glia-up in a year or put you on drugs for life and at a 10,000x risk for strokes. We know electrodes don't work in the brain and the drug cocktails don't either.
Optogenetics is a great discovery (use light, not electrodes) for interfacing, but the damn Abbe diffraction limit (a huge physics limitation) screws you: ~125,000 um^2 of light at the focus versus a 25 um^2 neuron soma. Maybe, yeah, for peripheral nerves, where you can 'multiplex' along the length of a long fiber bundle, you can get away with a wet-ware interface. But cortical? Not gonna happen. You can use STED techniques, but you'll cook the brain getting the resolution down first. Opto is good only for applications where you aren't limited by Abbe, and that's not the cortical areas.
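To put a number on that mismatch (simple arithmetic on the figures above; the Abbe limit itself is the standard d ~ lambda / (2 * NA)):

    125,000 um^2 / 25 um^2 = 5,000x

i.e. the smallest focal spot covers the area of roughly five thousand somata.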
> We will be the first species ever to design our own descendants
Maaaybe. However, what is a 'family' then? Your kids may not look or 'be' anything like you. All the families that do this will essentially have adopted a child, as far as the genes go. Plus, that kid will be 'whicked smahrt' if I'm reading this correctly. Not a lot of people do that even today, for many reasons. How will the kids think of their 'dumber' parents? Will they be 'parents' to them, or more like the cat, but with an inheritance? I think the initial forays are key here, and those forays will not be happening in 1st-world countries, but in much more 'familial'-based ones like Korea and China - places where the distortion of the family will be even more 'cutting' to the societal fabric.
Someone needs to take a vacation from their devices, it seems. I feel like this overstates and dramatizes the situation to a large degree.