I know HN hates techno groups, but this kind of thing is not new at all to folks who consider themselves "Transhumanists."
How is this not an obvious eventuality for everyone here? It seems pretty clear that the whole vector of humanity is to functionally merge with our engineered system in a symbiotic way and then probably see the extinction of the human species in (relatively) short order.
We're gonna go extinct anyway, so what's the alternative?
edit: Used a slightly different time horizon on suggestion.
Compare 1817. No useful railroads. A few steam engines here and there. No electricity. Running water and indoor plumbing were rare. Steel was as rare as titanium is now. Nothing moved faster than a horse.
The information age or post-industrial society we live in today is an inverse process: a process of decentralization and customization, where consumers want personalized solutions rather than adapting themselves to an average/mass solution.
Now, the world isn't uniform with respect to progress. There are still pre-industrial societies where most jobs are concentrated around agriculture, and industrial societies where all jobs concentrate in manufacturing.
Of course things will change, but many things stay largely the same. Not only will most things stay the same, but the things that do change are unpredictable.
Where do I see humanity in 200 years? It's totally unfathomable to me (although I'd bet on drinking glasses and chairs still being recognizable as similar to what our ancestors used).
I don't see why despite it being unfathomable to people 200 years ago, the future 200 years from now is not only fathomable, but predictable and even inevitable to you. That's some pretty extreme arrogance.
While some things are the same, some things have drastically changed. It's all too easy to pick out the things that stay the same and declare "Look, fundamentally we haven't changed!" because the things that have fundamentally changed we are already used to.
That's not an argument against, but it justifies significant skepticism, and helps explain why your "but it's obvious" argument isn't going to convince anyone to take you seriously.
Poll ten transhumanists, get ten mutually incompatible futures, all "obvious".
They didn't necessarily advocate or want it, but they saw it as an eventuality.
"functionally merge with our engineered system in a symbiotic way and then probably see the extinction of the human species"
You can probably find quotes that will confirm your belief, but from a neutral viewpoint there is no such consensus between these people about the future.
Secondly, even if there was, we have 60+ years more experience than they did with these systems, and we should be able to make better predictions than they did.
There are many potential future outcomes, and it's reasonable to think about them and to try to mitigate certain possibilities. It's not reasonable to say that a certain outcome is inevitable and you know what's going to happen. We just don't.
Let me rephrase then:
If you evaluate the written history of humanity, there is the unavoidable trend that humans will create tools which replicate and obviate human actions in a more optimized manner.
Stone tools > controlled fire > shelter > etc... to the point that the ultimate boundaries of human capability, including things like creativity - which is one of the few remaining distinguishing features of humanity, by the way (generative adversarial networks are the frontier right now) - are being mechanized.
Eventually they all will be, because there are people like me trying to make it happen.
So while you are right that it's not inevitable, you can rest assured that we're working hard on it and the trends are in our favor.
The trend you identify is that there will be a breakthrough and everything will change, but we have no way of knowing what it will be. When it comes, it might make your field irrelevant. Arguing that the breakthrough will come from the field you happen to be working in is understandable but it doesn't carry much weight with anyone outside your field.
If you solve AGI, you solve all of those problems. That's the whole point here. It's not a standalone technology; it's something that changes every industry.
It is of course true that solving AGI solves most other problems (while creating new ones that we are by no means ready to handle) but that doesn't mean it's likely to happen. Hopefully it doesn't anytime soon.
There are other possible breakthroughs. Just as a random example, humanoid robots with effective IQ 60 that could be trained to do all our current manual labor. That would completely transform society and we would be sorting it out for decades. Or it might become possible for someone in a garage to synthesize infectious biological agents and intentionally or accidentally wipe us all out. Or we could get cheap energy beamed down as microwaves from solar-powered satellites. While it's true that AGI changes everything, it's not the only thing that can change everything, and you can't predict what will happen once everything changes.
This will be used for control. Fifty senators already want to sell your online thoughts (your internet history). And yet, you are willing to live in an alternate reality where this doesn't happen and it's all for good.
Read the full original paper, summaries fall short. But briefly, due to always finding use for more power/computation, primary gains coming from increasing density, and the speed of light limiting galactic colonization, intelligent civilizations that don't go extinct eventually find themselves living in a computer in a black hole leaching energy directly from a companion star or on a course to merge with other black holes.
There's a lot more to it though, hence my suggestion to read the full paper.
I find the idea of black-hole as hypercomputing environment interesting for sure.
edit: I am thinking of something different that I read in the 90's that was similar. I will have to look at this more in depth as it is different.
I am not saying that new technologies will not emerge — something new will rule its day, for a while. What is currently fragile will be replaced by something else, of course. But this “something else” is unpredictable. In all likelihood, the technologies you have in your mind are not the ones that will make it, no matter your perception of their fitness and applicability — with all due respect to your imagination.
Tonight I will be meeting friends in a restaurant (tavernas have existed for at least 25 centuries). I will be walking there wearing shoes hardly different from those worn 5,300 years ago by the mummified man discovered in a glacier in the Austrian Alps. At the restaurant, I will be using silverware, a Mesopotamian technology, which qualifies as a “killer application” given what it allows me to do to the leg of lamb, such as tear it apart while sparing my fingers from burns. I will be drinking wine, a liquid that has been in use for at least six millennia. The wine will be poured into glasses, an innovation claimed by my Lebanese compatriots to come from their Phoenician ancestors, and if you disagree about the source, we can say that glass objects have been sold by them as trinkets for at least twenty-nine hundred years. After the main course, I will have a somewhat younger technology, artisanal cheese, paying higher prices for those that have not changed in their preparation for several centuries.
Restaurants are completely different today than they were even 10 years ago - from the POS to the Bluetooth tracking, menu printing (or maybe even an e-menu on an iPad like in airports), logistics for raw food delivery, availability of food, cost, sous vide in the kitchen... I can go on.
Shoes are completely different - again, the cost, supply chain differences so you can get exotic materials, who made them and how.
Silverware metallurgy is totally different; availability is totally different.
Point is, it is a totally myopic argument that misses what 99% of innovation is about.
It's like saying - hey, we still use fire, and eat with our original teeth and see with our original eyes, so I guess pretty much everything is like it was 2 million years ago.
What Nassim describes can be considered a luxury event similar to going to a Renaissance fair to some. Your daily life is far more amazing than it was even 20 years ago. Here is my day:
Wake up to the sound of an alarm synchronized to a global satellite network to within ~100 nanoseconds of GMT. Turn on a light source powered by nuclear energy produced hundreds of miles away. Put on clothing made of synthetic fibers that keep me warm and dry on a winter morning. Heat up a food source, processed and irradiated to kill bacteria and enriched with extra nutrients and vitamins, using microwave energy. Sit in front of a bank of liquid crystal displays showing 18+ million colors at 4K resolution. Read 20 different news articles from around the world while drinking my coffee, which was shipped and processed in a global trade market and made using water that was pumped to my house through ~10 different filters to remove toxins.
Now that I'm done with my coffee, I'm preparing for a weekly HD video conference with 20+ people around the world to discuss new technologies we have to prepare the network infrastructure for. I'll then spend time analyzing terabytes of data stored in a cloud database that is updated in real time. The meeting was about how that dataset will be petabytes by next year and exabytes soon after.
If predicting the future 200 years from now is arrogance, then saying nothing has changed is hubris. We are still trying to convince some people that evolution is real. That makes it hard to even consider the idea that we can predict the path evolution and technology will take.
Prediction and evolution rely on one thing: probability. Someone else in this thread brought up the Fermi paradox, the ultimate 50/50 split. After accepting that, you are either an optimist or a pessimist. A pessimist will never accept odds better than 50/50 because failure happens. So be it.
The probability that your, or ANY individual's, prediction of technological evolution is correct is very low. If you are an optimist though, the probability of SOMEONE's vision of the future out of EVERYONE's ideas being a winner is near 100%. THAT is the power of survival of the fittest. THAT is why transhumanists call this inevitable. We are optimistic about our chances. But we have to be open to new perspectives and adapt.
I will embrace these technologies when they are available to me. I will even attempt to contribute.
But I will also still enjoy drinking from Phoenician meal technology with friends. And paddling across a lake in a Native American river craft. And chopping wood for a campfire in the forest. I like camping during my vacation. My daily life will continue advancing forward. You don't have to forget the past to live in the future.
This really hasn't changed much since 2000 years ago. The Romans had a complex society with lawyers, education, citizenship, money and finances, etc. Yes, big things happened: we became more mobile and information now travels much faster. But it still didn't change the most basic flow of life that much; it's some change, but quantity still hasn't materialized into quality. And we're now on the threshold where those things can really change. It's hard to predict how society as a whole will change because of this.
I remember immersing myself in the Guggenheim's exhibition of Italian Futurism, 1909-1944. It was a rich display of the future course of humanity, per Italy's interwar and war-era fascists. F.T. Marinetti's Electric War: A Futurist Visionary Hypothesis, which I read in the Guggenheim's upstairs reading area, comes to mind:
"Up there, in their monoplanes, using cordless phones, they control the breathtaking speed of the seed trains which, two or three times a year, cross the plains for a frenetic sowing. ––Each wagon has a huge iron arm on its roof which swings horizontally, spreading the fertile seeds everywhere."
I remember the Aeropittura paintings, putting the fascists' obsession with planes front and centre.
What still stands out is how certain they were in their airplane-based civilization and "abnormal growth of plants, as a direct result of artificial, high-voltage electric power" future. Spending a few hours in that reading room, I could suspend disbelief and start convincing myself that this future was not only possible, it was inevitable.
They were wrong. Linear extrapolations are a good starting point. They're bad for long-term predictions. Don't put too much faith in what you think is obvious in the long run.
I'd be interested to see some of those as I haven't yet and I've been in the community for a while.
What I will grant is that, there are those who speak in the same structure as religious people. "In the future there will be no poverty because machines will make everything we need for free" sounds a little too close to "In heaven, you can eat all you want and never get full!"
However similar they sound though, they have radically different theoretical roots.
In fact though there is plenty of hard evidence for progress on all of the accounts you mentioned:
mind uploading: Not sure about Whole Brain Emulation progress right off the top of my head.
genetic enhancement: http://www.sciencealert.com/scientists-reverse-sickle-cell-d...
Are they solved? Not by a long shot. Do we know when they will be? Of course not. There is progress though.
As far as I know, nobody is making hard progress on when Jesus is coming back.
> Namely: Doctrine, rituals, totems, prayer, and above all it would require that there is some credulity to a higher power/state such as heaven/nirvana that will never be seen
The Rituals: Writing talks about the Singularity.
Totems: Arguably this is the lacking part.
The Prayers: Their talks about the Singularity.
So presumably that makes me a secular-religious nut by your estimation, but I don't observe any of the religious customs you specify above. Perhaps the first, if you use a wide definition.
It would be more interesting if the criticism of transhumanist ideas explained why these ideas are unlikely or impossible, rather than dismiss them with the quasi-ad hominem of comparing them to religious ideas that are either outright wrong, or completely outdated in their real-world impact.
It doesn't have those, but it does need lots of money, just like religion.
Not that I'm against the idea.
Never been to Less Wrong, eh?
Calling LW a cult is a stupid meme based on ingroup/outgroup thinking and a few cases where they may have gotten carried away with considering the logical implications of the theories they were developing.
The things transhumanists believe are possible are engineering problems. Historically we managed to solve those when we put our minds to it.
All of these are far from being engineering problems yet. We're still solidly in the basic science phase.
ps: btw, the "good life" hasn't changed for me. Who wants to go to the countryside, walk in the forest, sit down by a fire with friends, look at the sun, the sky, the animals, enjoy a disconnected cabin? To me the basics of happiness didn't change, and tech doesn't really improve that either; in a way, tech is often pornographic in substance, an excitation in "more" (more capabilities, more bandwidth, more speed).
Shouldn't that be one of our main goal in life, as 'hackers'?
We can do better than blindly compare hacking a brain to hacking a piece of technology made in Taiwan (with all my respects to Taiwan).
Also it can't stop someone from bricking your heart.
The brain has so many moving parts, no security, and is just far too trusting of what's in it for us to start injecting thoughts and information into it.
As a transhumanist myself, I don't think that either of these things is obvious.
Your focus on the "vector of humanity" is where you err. What we can see when we view the history of life - complexity - on Earth is the evolution not of humanity but of intelligence.
This entails two questions:
Are humans the endpoint of that evolutionary process? There is no rational reason why the answer should be yes.
What, then, would be the next evolutionary step in intelligence? Two answers - genetically enhanced humans, and machine-based intelligent actors.
The fascinating - and distinguishing - thing about humans is that we will be the first organism in the history of the process (at least on Earth) to consciously create our evolutionary successors.
One could also argue that the medical advancements of the last 100 years or so have greatly reduced the evolutionary pressure created by "bodily weakness", and I'd imagine similar arguments could be made in the opposite direction when it comes to other factors (like the norms of a society and its influence on the advantage of mental capabilities of some kind).
It would be very interesting to see in what ways societies "skew" the evolutionary process. Does anyone know of good research/data in this direction?
But there's a downside. Why will augmentation be any more egalitarian than wealth is now? In Hannu Rajaniemi's vision, the Sobornost are the ultimate 1%.
- Tesla Model 3 production line (First deliveries late 2017. He said he was going to live at the factory to get the cars out the door.)
- Brownsville TX launch facility (first launch scheduled for 2018, not much construction started yet)
- Manned Dragon spacecraft (as of 2015, first crewed launch scheduled for late 2017)
- Falcon Heavy (as of Q3 2016, supposed to launch Q1 2017. Now Q3 2017. Maybe.)
AFAIK, it's the most promising technology for actually making the neural lace.
TL;DR: If you put electrodes on an angled plastic mesh, you can roll it up, inject it, and it'll safely unroll. Also, brain cells like to grow into it.
IMHO, future AI should be used to enhance human cognition in a noninvasive way. Never in such a dystopian way as in the movie "The Matrix".
1. Many humans today are already "monsters" compared to what was considered normal a few hundred years ago. You could spin a simple hip replacement or bone marrow transfer as "Frankenstein-ian" if you wanted to.
2. If AI becomes vastly more intelligent than non-"monsterified" humans, then the question may become: "Do you really want the human race to be enslaved / extinct instead of creating a monster human species?"
Half my mind is already in cyberspace, why not make it half my brain, too?
Edit: Dystopias don't come from technologies, they come from people being shitty.
What happens with the equivalent of drive-by ransomware on your brain? Send bitcoin to this address or we permanently give you a migraine?
(OTOH we're talking about a technology that could potentially directly modify human psyche, so even that isn't clear anymore :))
Since this is the primary reason that all of the next generation of technologies are incredibly dangerous existential risks, and it's related to why other mitigations for existential risks are underfunded compared to the grand projects of being shitty to each other, it seems to me that fixing this should be the main priority of human research.
It involves coming up with at least workable answers to a lot of difficult questions and is a bit of an ethical minefield, but if you believe, as I am inclined to, that the alternative is extinction, it seems pretty important to at least make a stab at it.
It will happen, whatever your personal point of view (or fears) on the subject. In 20 years or 200 years (or maybe later), but it will happen. Technology moves forward no matter what you think and how you vote, because there will always be the kind of people who can't stand living a 'standard' life. Go check the Myers–Briggs personality types: not everybody is a mainstream xFxJ ('earnest traditionalists who enjoy keeping their lives and environments well-regulated', as Wikipedia says).
The real questions are: what percentage of humanity will go that way, and how well will humans/neohumans cohabitate.
Right now computational neuroscience is using a lot of blunt instruments such as electrode arrays which are implanted into mice for the duration of a few experiments before the animal is put down for ethical reasons. Safe, minimally invasive brain implants would make long term experiments a lot simpler and more ethical.
I'm not quite sure about all of this, so maybe someone with up to date information on the technology can help me out here?
The most popular implant is probably Blackrock Microsystems' "Utah Array", which has 96 electrodes arranged in a 10x10 grid (minus the corners). It looks like this: http://aerobe.com/wp-content/uploads/2016/11/utah-3.jpg For scale, the entire electrode grid is about 4mm on a side and the electrodes are between 0.5 and 1.5mm long (depending on the model).
There are a few other models (and similar stuff from other companies), but I'd be surprised if anything with thousands of contacts is in regular in vivo use. There are some in vitro (i.e., cells or tissue slices in a dish) systems with more contacts, but the signal quality isn't nearly as good.
We can read out the activity of single neurons--people have been doing it for single electrodes since the 1960s. It's slightly easier with a single (movable) electrode since you can creep up on the cell until its action potentials are fairly large and well-isolated from the background noise (here, large means about ±150 µV). You can't move the array or its individual electrodes, so you're stuck hoping that the individual shanks end up in good positions. Then, data is recorded at a fairly high sampling rate (say, 30 kHz) and the "spikes" are clustered based on their shapes to get individual neurons' responses.
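To make that concrete, here is a rough single-channel sketch in Python of the threshold-crossing step. The 30 kHz rate and the ~150 µV spike size come from the description above, but the threshold rule (a multiple of a robust noise estimate), the window length, and all the names are just illustrative, not any particular lab's pipeline; the shape-based clustering into individual neurons is left out entirely.

    import numpy as np

    def detect_spikes(signal_uv, fs=30_000, k=4.5, window_ms=1.6):
        """Crude single-channel spike detection by negative threshold crossing."""
        # Robust noise estimate (median-based, so the spikes themselves bias it less)
        noise = np.median(np.abs(signal_uv)) / 0.6745
        threshold = -k * noise  # extracellular spikes are mostly negative-going
        crossings = np.flatnonzero((signal_uv[1:] < threshold) &
                                   (signal_uv[:-1] >= threshold)) + 1
        half = int(window_ms / 2 / 1000 * fs)  # samples kept on each side of a crossing
        spike_times, waveforms = [], []
        for idx in crossings:
            if idx - half < 0 or idx + half >= len(signal_uv):
                continue
            if spike_times and idx - spike_times[-1] < half:  # crude dedup/refractory check
                continue
            spike_times.append(idx)
            waveforms.append(signal_uv[idx - half:idx + half])
        return np.array(spike_times), np.array(waveforms)

    # Tiny demo on synthetic data: ~20 uV noise with three fake -150 uV events dropped in
    rng = np.random.default_rng(0)
    trace = rng.normal(0, 20, size=30_000 * 10)
    trace[[50_000, 120_000, 200_000]] -= 150
    times, waves = detect_spikes(trace)
    print(len(times), "spikes detected")

In a real pipeline the extracted waveforms would then be clustered by shape ("spike sorting") to assign them to putative single neurons.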
The ADCs aren't directly at the contacts, but you want the amplifiers and ADCs as close to the electrodes as possible to avoid all sorts of weird EMI from the mains, other equipment, etc. Getting the grounding and shielding right is a bit of a black art and eats up tons of researcher time. (You'd think "throw it all in a Faraday cage" would work, but...it doesn't).
What else do you want to know? :-)
Unfortunately, less invasive recording techniques will never give you the ability to record from single units.
edit: pulled your google scholar and boy am I preaching to the choir...
I did single-electrode experiments for my PhD and those definitely mess up the brain after a while. The Utah array stuff strikes me as "less bad" in that there's only one big insult to the brain, but it is a pretty bad one: the arrays are inserted with a pneumatic "gun".
I think you're right that non-invasive techniques will never give us single unit data, though I hope we can get some longer-lasting implantable electrodes soon.
How does the brain adapt to having the electrodes in there, how long before the probes are accepted as being "part of" the brain?
Do you envision we need lots more sensors than in your example above, or is said number enough for precision input (say, text/words, or navigation in a 3 dimensional position & rotation plus a temporal dimension interface)? I guess the brain would work around the rough edges (or lack of sensor resolution) just like it already does with keyboards, mice, bodies, and language.
For most electrodes, the brain doesn't really incorporate the implant. When a single electrode is inserted, you can start recording as soon as the contacts are inside the brain (in practice, you wait a few minutes since the brain is slightly elastic and stuff moves around). In humans, this is how deep brain stimulation is done--the surgeons use the neural activity to figure out when the electrode is in the right place. For larger implants like the Utah arrays, the insertion is a bit more traumatic. Allegedly, you get a pretty good signal right away, then inflammation makes it degrade for a while, and after ~12 hrs, the signal returns. However, the animal/patient is usually recovering during this time, so it's moot.
These electrodes are usually silicon and metal, usually tungsten, platinum/platinum iridium, or iridium oxide, so the brain doesn't really "accept" them. In fact, it tries to encapsulate and reject them, which limits the lifespan of the electrodes. In my experience, a two-week-old array might have nice, well-isolated neurons on more than half of the channels; after two years, you'd be lucky to get single units on more than a handful of the 96 channels.
However, there's a lot of interest in developing coatings that inhibit this immune response or actually encourage neurons to grow into the array. There's a lot of promising research on this, but nothing (as far as I know) that's commercially available.
As for the number of sensors...it also depends. You can do a lot with a 96 channel array implanted in the right spot, including spelling (https://elifesciences.org/content/6/e18554#bib27) and control of a robotic arm (http://www.jhuapl.edu/prosthetics/scientists/neural.asp) though neither of these is anywhere near "native" performance yet. More electrodes might help, but there's also probably some low-hanging fruit in figuring out the right control paradigms, decoding algorithms, and even where the electrodes are placed.
For research though, more and better arrays would be great. Many brain areas have a spatial structure. In visual areas, for example, cells representing neighboring spots in the visual world are also near each other. Motor and sensory cortex also have a foot-to-face progression. Bigger arrays might let us sample from a more diverse population of neurons at the same time, which could be scientifically interesting and useful for BCI. Denser arrays might also allow for better recordings from single neurons. If you have a sufficiently dense array, you can record the activity of one neuron from multiple sites--this lets you isolate its responses better (this trick is commonly used with bundles of four wires, called tetrodes).
I would also love to get my hands on arrays made from multiple materials. Platinum is great for recording the activity of single units, but lousy for stimulation; its low charge injection capacity means that high stimulation currents damage the electrodes and/or nearby cells. Iridium oxide has a much higher charge injection capacity, but lower impedance and thus, fewer well-isolated cells. A "checkerboard" pattern of Pt and IrOx electrodes would be awesome, but is apparently difficult to make (no chance you run a fab, is there?)
The flip side of all of this is that amplifiers and ADCs are expensive (though much cheaper than they used to be), and adding channels rapidly increases the data files' size too. My experiments generate about 1 GB/minute, and we record 5-8 hours a day, 6-7 days a week.
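For anyone who wants to see what that adds up to, a quick back-of-the-envelope calculation from those figures (the exact session lengths obviously vary week to week):

    # ~1 GB/min, 5-8 hours/day, 6-7 days/week, using the numbers above
    gb_per_min = 1
    low = gb_per_min * 60 * 5 * 6    # lighter week
    high = gb_per_min * 60 * 8 * 7   # heavier week
    print(f"{low / 1000:.1f}-{high / 1000:.1f} TB per week")  # -> 1.8-3.4 TB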
What else? :-)
The organization is really "retinotopic", meaning that cells receiving inputs from adjacent parts of the retina are near each other in the brain.
I'm thinking about the immune response from the brain, and whether implanting this array of sensors can be done somewhere other than the actual brain, connecting to nerves instead of neurons, essentially creating a virtual limb. I guess it kind of misses the point of this whole thing, and can't reach as far as brain implementations, but it has the advantage of being more feasible as a solution. I think what I'm getting at is whether we need lots and lots of sensors with very high-resolution data, or if we instead can ensure the computer interface is consistent enough that a "muscle memory" can be formed for controlling the virtual limb. I guess I don't have a question really haha, thanks for your time and replies!
Also, the main issue with their work is glial scarring inside the central nervous system; the body ensures the implants are time-limited.
It is very interesting to watch some of the emotional and social implications this kind of technology will bring.
Also, are you referring to a specific season? The camera work? The ideas/stories themselves?
Improving UIs will provide much more bang for the buck for many many decades to come. A neural lace is the equivalent of trying to increase the yield per square inch of the herbs in your window box when you have 100 acres of empty land around you.
The fact that you have control over your attention... you can direct it to arbitrary things you could do or feel with an orange... that gives you the subjective impression of seeing the whole thing all at once, but that's just a lie your brain tells you. You only actually experience tiny vignettes.
You imagine your experience of "orange" is this massive huge bandwidth cognitive experience, but you only really need a handful of neurons to maintain a weak signal signifying orange.
A crude drawing of an orange in an app can trigger exactly those same neurons. As long as your mind is busy with other things, you won't even notice the difference. This is why novels work. And it's why comics work. "Understanding Comics" by Scott McCloud does a great job of showing how abstract representations of things can provide richer experiences than realistic ones.
Of course, if you direct your attention to the differences between the crude drawing and a "real" orange, you can interrogate those differences. But the fact that you can explore a rich representation of an orange in your brain doesn't mean that you do when you experience one in daily use.
But if you're going to be interfacing with the brain, then there's a lot we can do probabilistically - i.e. deliberately inducing different parts of the memory centers of the brain to promote recall, targeted to the patterns of activation at previous times.
Modifying the production of neurotransmitters or being able to deliberately dampen some would also lead to some interesting possibilities.
Despite being a massively parallel processing unit, it appears that there are only a few queues for data input.
Applying the awesome pattern matching power of the brain to a lot of data simultaneously (in parallel) is not something our brains appear to be capable of.
Perhaps it would require too much energy and get too hot?
Consciously. Unconsciously, your mind is always looking for threats - big movements - in the periphery of your view. It's a co-processor which runs through a lot of data in order to direct your conscious mind.
> Perhaps it would require too much energy
I think that the problem of energy could be overcome. I think that instead of getting too hot, it would simply produce too much waste, and since the waste evacuation system for your brain only really operates when you sleep (at least, last I heard), this would be the main limiting factor.
I'd love to see some research into just how much data can humans really process if humans are properly trained and the data is cleverly presented.
Another non-paywalled news article:
No iBrain device is touching my brain.
My mum was super anti Facebook when it first came out. Now she has one.
I'm not saying you will have an iBrain, but myself for instance, hope that I will not have one. Maybe though, I won't really have a choice in a world where nearly everyone has an iBrain. Kinda like that show where everyone has smart contact lenses and only some people go against the movement and remove them.
Stallman knew these issues were and are important, he was simply a man so far ahead of his time most people fail to understand the level of importance of his arguments.
We're probably in for another 1880s labor movement as automation keeps eating jobs. We either decide to benefit from it via strict regulation, or we somehow try to compete with it, which will greatly lower the value of our labor and only enrich the owners of automation.
I suspect, though I'm not 100% sure, that when Musk speaks about AI "competition" there's an implication of existential danger. If true, then from his POV it would not be so much about "I want to min-max my CEO experience" but "I'd rather we not go extinct within my children's lifetime".
Seems that Musk wants to be a superhero that saves humanity, but villains that actually exist aren't cool enough for him.
The distinction is not leisure vs work. It's freedom vs. constraint.
'happy' is hard, it could very well be that suffering is an important part of happiness... who knows, but it's a valid line of argument.
Not for me, as long as I get to keep all the toys in my garage. The list of projects I want to complete is probably an order of magnitude too big to finish in one lifetime as long as I have a full time job.
You don't have an option.
Said another way, in the long run you either merge with AGI or go extinct.
If anything, dissuading people from standing up for their liberty and basic humanity seems to be the best route toward extinction to me.
I'd also like to point out that although we didn't "become" the assembly line, employees certainly have become more replaceable in many of the jobs impacted by the assembly line. Depending on the job, some employees have become the metaphorical "cog in the machine", and can be replaced by other employees with minimal training, as compared to the artisan-based system that predated mass production.
I certainly see your point, however, and I truly hope that we can maintain our liberty and humanity as technology increases, instead of regressing into a kind of dystopian neo-feudalism.
Returning to your post, I think that human-AI hybrids will be perfectly able to communicate or explain their plans to baseline humans (merging with AI should enhance communication capabilities, not hamper them).
They could have problems explaining their feelings, thought processes, perceptions, as those things may not correspond to anything baseline human experiences. They could have no reasons to talk to baseline humans at all. But it is not inability to communicate.
"No, I want it to create value that will be given back to me via television."
He is the first to admit that his plans will probably fail, but he actually has an incredible track record over 10+ years.
But Elon Musk I trust. He's shown time and again that he honestly wants to realize those ideas, and not pursue them for the money. He also has a pretty good track record there. I'm hoping there will be more people like Musk though; I think we desperately need them.
The rest we will never know about, due to them being born under repressive governments and/or into extreme poverty.
Once we reach this point, I think humans will eventually replace their brains with artificial ones, either gradually (Moravec transfer), or in one-go. There will be various motivations : immortality, mind-performance improvements, etc. The end-result will be the same : we will turn into machines. It won't be a merger, it'll be a plain replacement. The scenario where AI robots kill us all will only be different from a subjective point of view.
Most importantly, the idea that we can understand the human brain and then manipulate it must be false from our everyday experience. Simply put, if we can understand our brains, then we can understand women, something we all know to be forever impossible.
A Master Programmer passed a novice programmer one day.
The Master noted the novice's preoccupation with a hand-held computer game.
"Excuse me," he said, "may I examine it?"
The novice bolted to attention and handed the device to the Master. "I see that the device claims to have three levels of play: Easy, Medium, and Hard," said the Master. "Yet every such device has another level of play, where the device seeks not to conquer the human, nor to be conquered by the human."
"Pray, Great Master," implored the novice, "how does one find this mysterious setting?"
The Master dropped the device to the ground and crushed it with his heel. Suddenly the novice was enlightened.
If you want to compete with AI, don't make humans easier to hack.
 - https://en.wikipedia.org/wiki/Optical_illusion
 - https://en.wikipedia.org/wiki/Subliminal_stimuli
For example, the existence of optical illusions and stage magic are a necessary consequence of particular limitations of our visual system and attention. One could predict the existence of new, never before seen optical illusions purely from knowledge of the way the brain processes visual stimuli. For more information on this, see the works of Roger Shepard who did a lot of research into the psychology of perception and mental representations.
This has ramifications for not just human psychology but artificial intelligence. If we want to build a computer system or robot that can process visual information quickly like humans and animals do, then we may very well have to program them with the same simplifying assumptions that humans and animals use to make rapid visual processing tractable. A consequence of this may be that these computer systems will be subject to the same optical illusions as humans as a necessary consequence of limited attention and computational resources.

Furthermore, the misperceptions that make stage magic possible may be possible to induce in any system that can only pay attention to a subset of the visual stimuli given and that must make assumptions about the intentions of the subject being viewed. These assumptions and inferences are usually accurate under ordinary circumstances when the subject is not trying to deceive the observer, but a clever subject could engineer circumstances where the observer has no choice but to be either deceived or accept that their perceptions have no logical explanation -- hence the woman appears to be sawed in half even though we know this is unlikely; there is no visual information to disprove that she was (the lack of blood is evidence that she wasn't sawed in half, but this is only evidence from our past experience with people being cut by blades).
We are susceptible to influence by fake news and celebrity product endorsement due to our evolved preference for information coming from "authority figures" and sources that agree with our existing views. Now, normally one may hesitate to call exploiting these systems "hacking" because the exploiter often doesn't know that that is what they are doing just as someone may stumble upon a new computer exploit without knowing exactly why it works.
Again, I would argue that it may be to our advantage to think of the targeted exploitation of these innate tendencies as a type of "hacking" if only to make it more likely that we can avoid being influenced to beliefs and behaviors that may not be in our long term best interest.
 - http://im-possible.info/english/art/classic/shepard.html
 - http://rumelhartprize.org/?page_id=110
 - http://ilab.usc.edu/publications/doc/Miau_etal01spie.pdf
 - http://www.sciencedirect.com/science/article/pii/S0004370202...
 - https://en.wikipedia.org/wiki/Authority_bias
- see also https://en.wikipedia.org/wiki/List_of_cognitive_biases
But I think there'll be significant problems in interfacing with the brain in a meaningful way, if you want to bypass the existing interfaces (hearing, sight, touch etc).
I think it will turn out that the internal implementation is a lot stranger than the external interface, and it could vary significantly between persons. Perhaps falling into major categories for some aspects, analogous to blood types, but then varying in the details as much as our fingerprints - note that even genetically identical twins have unique fingerprints, as they are highly developmentally affected, like the branches of a specific tree. Similar for retina prints, an example closer to (some say part of) the brain.
I think interoperation with the internal implementation could require actual understanding of the brain - enough to build strong AI. We might even have strong AI before.
It seems very difficult, and we haven't the faintest clue at this point. We may need new fields of mathematics. It could take even more than 20 years.
OTOH, if something does come out of this, it could have extremely far-reaching consequences. Forget interfacing with AI. Think - locked-in syndrome, or people with various disabilities (blindness, etc).
This is the kind of moonshot that ought to be encouraged.
We had tried external BCI for locked-in patients based on visually evoked potentials -- thought being that even if you lose the ability to move your eyes you can still steer an on-screen keyboard. EEG is so noisy, though, that it doesn't work unless the patient's looking at the target directly; ie, they can move their eyes. In which case, you should use an eye-tracker because it's $90 and 10 times faster.
The human brain has on the order of 1e11 neurons. The galaxy has on the order of 1e11 stars.
Now, would we confidently predict that we could "solve" the galaxy in less than 100 to 500 years, say? And yet we make very, very aggressive estimates of how quickly we might unravel the secrets of the brain.
Biology is just monstrously, monstrously complex!
The hope is that it may actually be easier to build a useful thinking machine than to fully reverse-engineer the brain.
If we can really build a neural lace in the sense that Iain Banks originated the term, then we've probably also solved cancer and the rest of biology. Imagine the fine-grained control you'd need to manufacture a molecular-scale structure in-situ.
I'm intrigued by the difference between "understanding" a population, in aggregate, and having a fine-grained view of each individual while also knowing how each individual relates to the whole.
So, what might we learn about the galaxy if we knew what / where / when every single bright spot of star-ish size is, that we would not learn from an aggregated simulation? Rhetorical question, obviously.
I don't even know how many people live within 5km of me. How many are human-trafficked, held against their will? etc. We can make approximations, but there's a lot of interesting information at the margins.
While you don't know how many humans are within 5 km of you, that's certainly easier to verify with confidence N than it is to verify how many humans are within 5 light years.
1) How soon do you think we'll be able to tap every neuron?
2) How soon do you think we'll understand the galaxy in terms of each and every individual constituent star (not aggregates).
We can hand-wave estimates for the number of constituents in each of these systems. But it's interesting that we don't really know how many neurons there are in your individual brain, or in mine, etc. We can't yet look across the scales required to e.g. count and localize every single cell in your brain. Also, at what time? It changes. Just interesting questions when you get to a fine scale...
If it's doable but the computer can't tell the difference between my attempts to command it and my daydreams, maybe still no.
But correlation != causation; maybe he just likes founding stuff.
My point is most of society is self-limiting for whatever reason (obsessed with profit, obsessed with status quo, etc). Luckily we've got folks like Musk who don't seem to give a shit and do what they think is right for humans. I'd love some other examples of people like this.
Another example: Zuckerberg took flak for his Internet.org idea, because clearly he's an evil Capitalist that just wants to profit off the advertising clicks of Indian goat farmers.
With today's assumptions, sure. Who's to say some materials scientist with a trickle of funding doesn't find a way to mix some cheap leftover metal into an asphalt mix and, wouldyalookathat, for some reason you can now plug a USB cable into the road and it'll charge your phone. I'm being silly, but my point is exactly that: we just assume it won't work because yea duh, it won't work if we don't investigate it beyond "huh, if you put a car on a solar panel, it breaks. This is impossible, clearly."
I'm not saying that he won't eventually turn into yet another selfish business baron. That would be a very sad day, though. We need more people with noble intentions and means to execute on them - and such people need to be encouraged, not constantly doubted.
Also, I've been thinking that we will need a new interface for our cell phones. The screens can't get much bigger without being uncomfortable, yet we need to interact with them more and more. I was thinking that a contact lens display would probably be within reach soon but maybe we'll just skip the physical stuff and go directly to injecting signals into the brain stem.
Trying to read brain signals from a skin-based EEG device is like trying to listen in on a conversation being had inside a packed sports stadium while you fly over it in a helicopter.
Anybody know if it works? Could be cool if I could meditate "good" or "not good" in order to turn a light on and off, or turn on my cat feeder or whatever lol
However, the cool thing about Muse, IMO, is that it's cheap, comfortable (moreso than the Emotiv), gives reasonable signal quality, and has a free SDK for developers. Brain-connecting devices even if they just use EEG open up a lot of doors for interesting products. Assisting in meditation is a great idea, but there's got to be a lot of good BCI ideas that we probably just haven't thought of yet.
It's a very bare-bones EEG system: the Muse has 4 channels and a few references; EMOTIV has either 5 or 14. The slick part is that they use "dry" electrodes, which do not need to be filled with a conductive gel before use. The signal from wet electrodes is still a bit better (less noise, etc), but the gel is fairly gross to have in your hair.
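To sketch what the "meditate to toggle a light" idea from the question above might look like with any small multi-channel EEG stream: threshold the relative alpha-band power (8-12 Hz), which tends to rise in relaxed, eyes-closed states. This is only a rough outline under that assumption; read_epoch() and set_light() are hypothetical placeholders you would have to supply, not part of the Muse or EMOTIV SDKs, and the threshold would need per-user calibration.

    import numpy as np
    from scipy.signal import welch

    FS = 256  # assumed sampling rate in Hz; check the headset's actual spec

    def relative_alpha(epoch, fs=FS):
        """Fraction of 1-40 Hz EEG power falling in the alpha band (8-12 Hz).
        epoch: array of shape (n_channels, n_samples), a few seconds of data."""
        freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2, axis=-1)
        band = lambda lo, hi: psd[..., (freqs >= lo) & (freqs < hi)].sum(axis=-1)
        return float(np.mean(band(8, 12) / band(1, 40)))

    def run(read_epoch, set_light, threshold=0.35):
        """read_epoch() and set_light(on) are hypothetical callables: one returns
        the latest EEG epoch, the other drives the lamp / cat feeder / whatever."""
        while True:
            set_light(relative_alpha(read_epoch()) > threshold)

    # Sanity check on synthetic white noise (should give a low relative alpha, ~0.1)
    rng = np.random.default_rng(0)
    print(round(relative_alpha(rng.normal(size=(4, FS * 4))), 2))

Whether a cheap headset gives a clean enough signal for this to feel responsive is exactly the open question the comment above raises.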
Edit: I'm not condemning Elon Musk or downplaying his efforts. This was just an honest question I was hoping someone more knowledgeable than me might comment on.
I don't doubt climate change, but I'm very skeptical of studies positing a slippery slope of catastrophic proportions. It's true we don't know the cascading effects of increased CO2 and methane emissions, but that doesn't mean the unknown is apocalyptic.
Unfortunately we'll take a lot of innocent flora/fauna down as well.
As for other grand efforts - I haven't heard of them either, and I'd damn love to hear about them. We need to praise and support people who're doing good work.
It's amazing how fast we can learn now, fuelled by information-sharing over the internet. I'm literally forgetting the names of everyday acquaintances because I'm learning and retaining so much new stuff.
The problem, from an individual's perspective is that, while you can stand on the shoulders of giants, you can't easily commandeer all that brain-power.
I'm imagining getting some time-slices of Terry Tao, Geoff Hinton, <insert other big brain names> 's cognition to devote to my own projects. What would that even look like?
On a different note, if we really could mind-meld, could we ever truly hate or kill each other?
Imagine that at the moment you fully understand your adversary you also fully understand others like yourself. The hurt that lead to the desire to never be harmed again. The harm that this kind of mentality inflicts. All the victims of your proposed victory. ...
I mean, it's a hell of a quote, but at a certain point he just stops the recursion for bad-ass conclusion.
If you accept the notion that the internet is our backbone, then we're already a super organism with shared thoughts.
Or to reduce it to a catch phrase: If you can't beat [it], join [it].
But why didn't the machines just build geothermal generators? I guess it's not much of a story then?
An entrepreneur doesn't have to do the day to day heavy lifting. An entrepreneur hires and works with people who do the heavy lifting.
But give it 15 years and Neuralink is the only product on the market that allows humans to be relevant in the face of A.I.... Smells like the kind of profits one would have if, for example, they dragged an asteroid to Mars and monopolized water, or weaponized Saturn's orbit and controlled the only port for deep space travel, etc. Seems silly now, but guaranteed there's gonna be people wealthy on a scale we've never seen when these sorts of things come through.
Or we'll all be dead. Whatever.
Does he? He seems to be highly leveraged personally and within his companies.
"Ah, but a man's reach should exceed his grasp,
Or what's a heaven for?" 
> Mr. Musk has taken an active role setting up the California-based company and may play a significant leadership role, according to people briefed on Neuralink’s plans, a bold step for a father of five who already runs two technologically complex businesses.
You don't let opportunities slide with those boxes ticked.