Elon Musk’s Neuralink wants to boost the brain to keep up with AI (techcrunch.com)
315 points by ndr 177 days ago | 272 comments

Everyone is so scared of all of this, but where do you all realistically see humanity in 200 years? Living like we do now? I mean do you realize how different things were in 1817 compared to now? Unfathomable to them [1].

I know HN hates techno groups, but this kind of thing is not new at all to folks who consider themselves "Transhumanists."

How is this not an obvious eventuality for everyone here? It seems pretty clear that the whole vector of humanity is to functionally merge with our engineered system in a symbiotic way and then probably see the extinction of the human species in (relatively) short order.

We're gonna go extinct anyway, so what's the alternative?

edit: Used a slightly different time horizon per a suggestion.

[1] https://en.wikipedia.org/wiki/1817

1917 was a lot like today in the most developed areas. The New York City subway system was running. Electric lights, telephones, and telegraphs were available. Running water. Indoor plumbing. Movies, even. Cars were being driven around, and airplanes were flying. Railroads were everywhere. NYC had skyscrapers with elevators. IBM was already in business.

Compare 1817. No useful railroads. A few steam engines here and there. No electricity. Running water and indoor plumbing were rare. Steel was as rare as titanium is now. Nothing moved faster than a horse.

100 years ago was still an extension of the industrial revolution, where solutions were designed for the masses: mass media, mass education, mass production... there were mass produced standardized solutions that you had to adapt to.

The information age, or post-industrial society, we live in today is an inverse process: a process of decentralization and customization, where consumers want personalized solutions rather than adapting themselves to an average/mass solution.

The world is far from post-industrial. Industry has just self-organised into clusters that have little overlap with the industrial clusters of old.

Post-industrial society is when services have a more important role in the economy than manufacturing. Doesn't mean manufacturing doesn't exist.

Now, the world isn't uniform with respect to progress. There are still pre-industrial societies where most jobs are concentrated around agriculture, and industrial societies where all jobs concentrate in manufacturing.

In 1817, we sat in chairs like we do now, drank out of glasses like we do now, talked with close friends and relatives over coffee or tea, enjoyed music and books, etc. etc.

Of course things will change, but many things stay largely the same. Most things will stay the same, and the things that do change are the ones that are unpredictable.

Where do I see humanity in 200 years? It's totally unfathomable to me (although I'd bet on drinking glasses and chairs still being recognizable as similar to what our ancestors used).

I don't see why, even though the present was unfathomable to people 200 years ago, the future 200 years from now is not only fathomable but predictable and even inevitable to you. That's some pretty extreme arrogance.

Sure, I'm sitting in a chair right now, but I'm using my fingers to communicate with a person on the other side of the planet in real time. I daresay that's a bit more crazy than what object I'm resting my ass on.

While some things are the same, some things have drastically changed. It's all too easy to pick out the things that stay the same and declare "Look, fundamentally we haven't changed!" because the things that have fundamentally changed we are already used to.

Predictions that were "obvious" to small groups of people and not to anyone else have a poor track record historically.

That's not an argument against, but it justifies significant skepticism, and helps explain why your "but it's obvious" argument isn't going to convince anyone to take you seriously.

Poll ten transhumanists, get ten mutually incompatible futures, all "obvious".

Well, it's not just transhumanists, or small groups. It's basically every legendary computer scientist who assumed this would be an outcome - Minsky, Turing, Shannon, Knuth, von Neumann, etc. - so my point is that on a technologist site like HN, most people here will have been exposed to these people, and to what pretty much all the greats assumed is an eventuality.

They didn't necessarily advocate or want it, but they saw it as an eventuality.

First, I don't think any of those greats had such a specific vision as this as an inevitable future:

"functionally merge with our engineered system in a symbiotic way and then probably see the extinction of the human species"

You can probably find quotes that will confirm your belief, but from a neutral viewpoint there is no such consensus between these people about the future.

Secondly, even if there was, we have 60+ years more experience than they did with these systems, and we should be able to make better predictions than they did.

There are many potential future outcomes, and it's reasonable to think about them and to try to mitigate certain possibilities. It's not reasonable to say that a certain outcome is inevitable and you know what's going to happen. We just don't.

There are many potential future outcomes, and it's reasonable to think about them and to try to mitigate certain possibilities.

Let me rephrase then:

If you evaluate the written history of humanity, there is the unavoidable trend that humans will create tools which replicate and obviate human actions in a more optimized manner.

Stone Tool > Controlled Fire > Shelter > etc... to the point that the ultimate boundaries of human capability including things like creativity (Generative Adversarial Networks are the frontier right now) - which is one of the few remaining distinguishing features of humanity by the way - are being mechanized.

Eventually they all will be, because there are people like me trying to make it happen.

So while you are right that it's not inevitable, you can rest assured that we're working hard on it and the trends are in our favor.

There are people like you trying to cure cancer, too, and trying to make money on Wall Street, trying to solve outstanding problems in physics, trying to make it to Mars, solve global warming, win wars, and so on.

The trend you identify is that there will be a breakthrough and everything will change, but we have no way of knowing what it will be. When it comes, it might make your field irrelevant. Arguing that the breakthrough will come from the field you happen to be working in is understandable but it doesn't carry much weight with anyone outside your field.

Right, and if you look at all of those problems, they have one thing in common: They are progressing the fastest in places where they are applying machine learning. Literally, I mean really they are.

If you solve AGI, you solve all of those problems. That's the whole point here. It's not a standalone technology; it's something that changes every industry.

Well, yes, at the moment machine learning is scooping up low-hanging fruit in a lot of areas. Whether that will lead to AGI somehow is pure speculation.

It is of course true that solving AGI solves most other problems (while creating new ones that we are by no means ready to handle) but that doesn't mean it's likely to happen. Hopefully it doesn't anytime soon.

There are other possible breakthroughs. Just as a random example, humanoid robots with effective IQ 60 that could be trained to do all our current manual labor. That would completely transform society and we would be sorting it out for decades. Or it might become possible for someone in a garage to synthesize infectious biological agents and intentionally or accidentally wipe us all out. Or we could get cheap energy beamed down as microwaves from solar-powered satellites. While it's true that AGI changes everything, it's not the only thing that can change everything, and you can't predict what will happen once everything changes.

Only ten? Bostrom must not be in your sample!

> Everyone is so scared of all of this, but ...

This will be used for control. Fifty senators already voted to sell your online thoughts (your internet history). And yet you are willing to live in an alternate reality where this doesn't happen and it's all for good.

Everything is, and always was, going to be used for control since the beginning of time. What's the alternative?

Legislation that protects citizens from abuses of power. There are governing bodies that actually do this.

Lemme know when legislation catches up with state-of-the-art technology.

I post this periodically, but I think this is the most compelling and intuitive vision of the future I've heard (and simultaneously a solution to the Fermi Paradox): http://accelerating.org/articles/transcensionhypothesis.html

Read the full original paper, summaries fall short. But briefly, due to always finding use for more power/computation, primary gains coming from increasing density, and the speed of light limiting galactic colonization, intelligent civilizations that don't go extinct eventually find themselves living in a computer in a black hole leaching energy directly from a companion star or on a course to merge with other black holes.

There's a lot more to it though, hence my suggestion to read the full paper.

Yes I remember coming across this in the late 90s on one of the USENET alt-compsci groups or something like it.

I find the idea of black-hole as hypercomputing environment interesting for sure.

edit: I am thinking of something different that I read in the 90's that was similar. I will have to look at this more in depth as it is different.

Life wasn't that different in 1917, 1817, or even year 17. I started writing a reply but realized Nassim Taleb does a better job at explaining this than I could. Here's an excerpt from "The future will not be cool": http://www.salon.com/2012/12/01/nassim_nicholas_taleb_the_fu...

I am not saying that new technologies will not emerge — something new will rule its day, for a while. What is currently fragile will be replaced by something else, of course. But this “something else” is unpredictable. In all likelihood, the technologies you have in your mind are not the ones that will make it, no matter your perception of their fitness and applicability — with all due respect to your imagination.


Tonight I will be meeting friends in a restaurant (tavernas have existed for at least 25 centuries). I will be walking there wearing shoes hardly different from those worn 5,300 years ago by the mummified man discovered in a glacier in the Austrian Alps. At the restaurant, I will be using silverware, a Mesopotamian technology, which qualifies as a “killer application” given what it allows me to do to the leg of lamb, such as tear it apart while sparing my fingers from burns. I will be drinking wine, a liquid that has been in use for at least six millennia. The wine will be poured into glasses, an innovation claimed by my Lebanese compatriots to come from their Phoenician ancestors, and if you disagree about the source, we can say that glass objects have been sold by them as trinkets for at least twenty-nine hundred years. After the main course, I will have a somewhat younger technology, artisanal cheese, paying higher prices for those that have not changed in their preparation for several centuries.

Except that's a bullshit analogy.

Restaurants are completely different today than they were even 10 years ago - from the POS to the Bluetooth tracking, menu printing (or maybe even an e-menu on an iPad, like in airports), logistics for raw food delivery, availability of food, cost, sous vide in the kitchen... I can go on.

Shoes are completely different - again, the cost, the supply chain differences that let you get exotic materials, and who made them and how.

Silverware metallurgy is totally different; availability is totally different.

Point is, it is a totally myopic argument that misses what 99% of innovation is about.

It's like saying: hey, we still use fire, and eat with our original teeth, and see with our original eyes, so I guess pretty much everything is like it was 2 million years ago.

The problem with personal perspective is that it's inherently myopic...

To some, what Nassim describes could be considered a luxury event, like going to a Renaissance fair. Your daily life is far more amazing than it was even 20 years ago. Here is my day:

Wake up to the sound of an alarm synchronized to a global satellite network to within ~100 nanoseconds of GMT. Turn on a light source powered by nuclear energy produced hundreds of miles away. Put on clothing made of synthetic fibers that keep me warm and dry on a winter morning. Heat up a food source, processed and irradiated to kill bacteria and enriched with extra nutrients and vitamins, using microwave energy. Sit in front of a bank of liquid crystal displays showing 18+ million colors at 4K resolution. Read 20 different news articles from around the world while drinking my coffee, which was shipped and processed in a global trade market, made using water that was pumped to my house through ~10 different filters to remove toxins. Now that I'm done with my coffee, I'm preparing for a weekly HD video conference with 20+ people around the world to discuss new technologies we have to prepare the network infrastructure for. I'll then spend time analyzing terabytes of data stored in a cloud database that is updated in real time. The meeting was about how that dataset will be petabytes by next year and exabytes soon after.

If predicting the future 200 years from now is arrogance, then saying nothing has changed is hubris. We are still trying to convince some people that evolution is real. That makes it hard to even consider the idea that we can predict the path evolution and technology will take.

Prediction and evolution rely on one thing: probability. Someone else in this thread brought up the Fermi paradox, the ultimate 50/50 split. After accepting that, you are either an optimist or a pessimist. A pessimist will never accept odds better than 50/50, because failure happens. So be it.

The probability that your, or ANY individual's, prediction of technological evolution comes true is very low. If you are an optimist, though, the probability of SOMEONE's vision of the future, out of EVERYONE's ideas, being a winner is near 100%. THAT is the power of survival of the fittest. THAT is why transhumanists call this inevitable. We are optimistic about our chances. But we have to be open to new perspectives and adapt.

I will embrace these technologies when they are available to me. I will even attempt to contribute.

But I will also still enjoy drinking from Phoenician meal technology with friends. And paddling across a lake in a native American river craft. And chopping wood for a campfire in the forest. I like camping during my vacation. My daily life will continue advancing forward. You don't have to forget the past to live in the future.

I think that's still too low-level. On the higher view, on average people still do this: born on the planet; eat and sleep; raised by parents; work and try to get wealthy; find a partner and create a family with them; get old and die.

This really hasn't changed much in the last 2,000 years. The Romans had a complex society with lawyers, education, citizenship, money and finances, etc. Yes, big things happened: we became more mobile and information now travels much faster. But it still didn't change the most basic flow of life that much; it's some change, but quantity still hasn't materialized into quality. And we're now on the threshold where those things can really change. It's hard to predict how society as a whole will change because of this.

> How is this not an obvious eventuality for everyone here?

I remember immersing myself in the Guggenheim's exhibition of Italian Futurism, 1909-1944 [1]. It was a rich display of the future course of humanity, per Italy's interwar and war-era fascists. F.T. Marinetti's Electric War: a Futurist Visionary Hypothesis [2], which I read in the Guggenheim's upstairs reading area, comes to mind:

"Up there, in their monoplanes, using cordless phones, they control the breathtaking speed of the seed trains which, two or three times a year, cross the plains for a frenetic sowing. ––Each wagon has a huge iron arm on its roof which swings horizontally, spreading the fertile seeds everywhere."

I remember the Aeropittura paintings, putting the fascists' obsession with planes front and centre [3] [4].

What still stands out is how certain they were in their airplane-based civilization and "abnormal growth of plants, as a direct result of artificial, high-voltage electric power" [2] future. Spending a few hours in that reading room, I could suspend disbelief and start convincing myself that this future was not only possible, it was inevitable.

They were wrong. Linear extrapolations are a good starting point. They're bad for long-term predictions. Don't put too much faith in what you think is obvious in the long run.

[1] http://exhibitions.guggenheim.org/futurism/

[2] https://books.google.no/books?id=c-B9EJGYbAMC&pg=PT210&lpg=P...

[3] https://en.wikipedia.org/wiki/Aeropittura

[4] https://www.pinterest.com/salvador_vico/aeropittura/

"abnormal growth of plants, as a direct result of artificial, high-voltage electric power"

Erm? http://gpnmag.com/wp-content/uploads/LED_Lights_in_Basil_Gre...

Transhumanism in terms of belief in the coming of AGI / immortality / mind uploading / genetic enhancement / etc is just a new secular religion. They're taking it on faith that those technologies will arrive soon without any hard evidence. Surely the Messiah is coming soon, right guys? Right?

I think this claim would be more credible if there were any other characteristics of a religion present. Namely: doctrine, rituals, totems, prayer, and above all it would require some credulity toward a higher power/state, such as heaven/nirvana, that will never be seen by humans.

I'd be interested to see some of those as I haven't yet and I've been in the community for a while.

What I will grant is that there are those who speak in the same structure as religious people. "In the future there will be no poverty because machines will make everything we need for free" sounds a little too close to "In heaven, you can eat all you want and never get full!"

However similar they sound though, they have radically different theoretical roots.

In fact, though, there is plenty of hard evidence of progress on all of the fronts you mentioned:

AGI: https://arxiv.org/abs/1701.08734

Immortality: http://www.sciencealert.com/scientists-have-successfully-rev...

Mind uploading: Not sure about Whole Brain Emulation progress right off the top of my head.

Genetic enhancement: http://www.sciencealert.com/scientists-reverse-sickle-cell-d...

Are they solved? Not by a long shot. Do we know when they will be? Of course not. There is progress though.

As far as I know, nobody is making hard progress on when Jesus is coming back.

> Namely: Doctrine, rituals, totems, prayer, and above all it would require that there is some credulity to a higher power/state such as heaven/nirvana that will never be seen by humans.

The Doctrine: Belief in the Singularity.

The Rituals: Writing talks about Singularity.

Totems: Arguably this is the lacking part.

The Prayers: Their talks about Singularity.

This is a pretty strained analogy. I consider it likely that smarter-than-human (*more capable of achieving goals) AI will at some point become possible and cause very unpredictable but hopefully directable results.

So presumably that makes me a secular-religious nut by your estimation, but I don't observe any of the religious customs you specify above. Perhaps the first, if you use a wide definition.

It would be more interesting if the criticism of transhumanist ideas explained why these ideas are unlikely or impossible, rather than dismiss them with the quasi-ad hominem of comparing them to religious ideas that are either outright wrong, or completely outdated in their real-world impact.

> I think this claim would be more credible if there were any other characteristics of a religion present. Namely: doctrine, rituals, totems, prayer, and above all it would require some credulity toward a higher power/state such as heaven/nirvana that will never be seen by humans.

It doesn't have those, but it does need lots of money, just like religion.

Not that I'm against the idea.

> I'd be interested to see some of those as I haven't yet and I've been in the community for a while.

Never been to Less Wrong, eh?

I've been to Less Wrong though, and this is not the place to look for transhumanist-as-a-religion believers.

Calling LW a cult is a stupid meme based on ingroup/outgroup thinking and a few cases where they may have gotten carried away with considering the logical implications of the theories they were developing.

The difference is that most of those things are not known to be impossible. Immortality exists in nature, genetic enhancements have been demonstrated in the lab for simple cases, and building a machine that works like a biological brain also doesn't sound impossible. Mind uploading might still be thwarted by the no-cloning theorem, if our minds depend on quantum states.

The things transhumanists believe are possible are engineering problems. Historically we managed to solve those when we put our minds to it.

Immortality doesn't exist in nature for any complex organism. Geneticists can prevent certain diseases caused by a small number of genes but that doesn't enhance human capabilities beyond the current baseline. AGI researchers might eventually succeed in understanding the human mind well enough to build an equivalent machine but there's no evidence to suggest that such a device will be smarter (or more rational or less insane) than humans. For all we know, current humans could represent the maximum intelligence possible due to undiscovered fundamental physical limits. A human level AGI would still be a huge step forward but it won't bring about a Singularity (geek rapture) as the transhumanists seem to expect.

All of these are far from being engineering problems yet. We're still solidly in the basic science phase.

I still see a major shift, in that we're passing the point of human abstraction levels. Before, everything was at best a tool: a peripheral for the same kind of human as ever to use. Now we're changing ourselves. That's a lot more different.

ps: btw, "good life" hasn't changed for me. Who wants to go to the countryside, walk in the forest, sit by a fire with friends, look at the sun, the sky, the animals, enjoy a disconnected cabin? To me the basics of happiness didn't change, and tech doesn't really improve that either; in a way, tech is often pornographic in substance: an excitation in "more" (more capabilities, more bandwidth, more speed).

Looks like the vast majority of people calling themselves 'hackers' are disgusted at the idea of merely hacking their brain.

Shouldn't that be one of our main goals in life, as 'hackers'?

Hackers are intimately familiar with the fallibility of technology. If hackers shun something, maybe it's because they have reason to.

Maybe because if hacking a brain is as easy as hacking a toaster or a heart implant, we're pretty much fucked.

I get what you're referring to: 'don't brick the firmware' or 'don't mess with the MBR'. But we're talking about a device whose main characteristic is, arguably, its neuroplasticity.

We can do better than blindly compare hacking a brain to hacking a piece of technology made in Taiwan (with all my respects to Taiwan).

Neuroplasticity won't protect you from someone generating 0.1 A inside your body. You only need a 4.5-5 V battery to generate a killer current.

Also it can't stop someone from bricking your heart.
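Back-of-the-envelope, this comes down to Ohm's law, I = V / R. A quick sketch (the resistance figures below are rough illustrative assumptions, not numbers from the thread):

```python
# Ohm's law: current through the body is I = V / R.
# The resistance values below are rough, illustrative assumptions.

def body_current(volts: float, ohms: float) -> float:
    """Current in amperes driven by `volts` across a resistance of `ohms`."""
    return volts / ohms

# Skin-to-skin contact: dry skin has very high resistance, so a 5 V
# battery pushes only a tiny, imperceptible current.
surface = body_current(5.0, 100_000)   # 0.00005 A (0.05 mA)

# An implanted electrode bypasses the skin entirely; internal tissue
# resistance is on the order of hundreds of ohms.
internal = body_current(5.0, 500)      # 0.01 A (10 mA)

# Currents well below 1 mA delivered directly to cardiac tissue can
# trigger fibrillation, so ~10 mA internally is far past the danger line.
print(surface, internal)
```

So the point stands under these assumptions: the same battery that is harmless against your skin becomes genuinely dangerous once electrodes sit inside the body.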

You have 'hackers' and the young code-me-a-website rockstars that are now also called 'hackers'. Those two groups have vastly different areas of interest.

SV just cannot be trusted with biology. Its mottos - failure is OK, lawsuits are cheaper than following laws, move faster than the ramifications, force it to be all or nothing - all ignore the fact that there will be fallout. I don't want to be a plaything experiment of a Silicon Valley psychopath.

The brain has so many moving parts, no security, and is far too trusting of what's in it for us to start injecting thoughts and information into it.

I'm not scared by all this - I think it's positive. I find death at the moment a bit of a downer and merging with tech a way to improve things.

> It seems pretty clear that the whole vector of humanity is to functionally merge with our engineered system in a symbiotic way and then probably see the extinction of the human species in (relatively) short order

As a transhumanist myself, I don't think that either of these things is obvious.

Your focus on the "vector of humanity" is where you err. What we can see when we view the history of life - complexity - on Earth is the evolution not of humanity but of intelligence.

This entails two questions:

Are humans the endpoint of that evolutionary process? There is no rational reason why the answer should be yes.

What, then, would be the next evolutionary step in intelligence? Two answers: genetically enhanced humans, and machine-based intelligent actors.

The fascinating - and distinguishing - thing about humans is that we will be the first organism in the history of the process (at least on Earth) to consciously create our evolutionary successors.

More realistically, I think we will be genetically selecting super-smart babies. Most likely this is already being done today. As it proves to be effective, people will get better and better at it, until we start genetically selecting and even engineering superminds.

There's no free lunch when it comes to genetics. If it was possible to be a lot smarter without negative consequences then we probably would have evolved it already. Most likely there are some downsides such as increased risk of depression, autism, schizophrenia, etc.

Perhaps. Another explanation might be that it doesn't produce enough evolutionary advantages in the context of a human society.

One could also argue that the medical advancements of the last 100 years or so have greatly reduced the evolutionary pressure created by "bodily weakness", and I'd imagine similar arguments could be made in the opposite direction when it comes to other factors (like the norms of a society and its influence on the advantage of mental capabilities of some kind).

It would be very interesting to see in what ways societies "skew" the evolutionary process. Does anyone know of good research/data in this direction?

Those lines of research are usually shut down because of claims of racism. Researching genetic differences between different societies is frowned upon. There are, for example, people who claim that Jews tend to excel in science because historically they were banned from many trades and retreated into more intellectual fields like banking.

There's nothing stopping researchers from searching for correlations between certain genes and intellectual ability (or at least performance on standardized tests as a proxy). Bringing race or ethnicity into the issue doesn't add anything.

Hm, yeah. If the research is focused on specific cultures I can see that being a problem in that regard.

According to Kurzweil (and I don't see anything wrong in his logic), improvements due to genetic selection will pale in comparison to exponentially growing intelligence that's outside our biologic brains (but may be interconnected).

This makes me wonder at what stage we can actually start to draw parallels to Huxley's Brave New World... Or are we already past that threshold?

These ideas aren't mutually exclusive.

It does seem likely. I mean, look at how quickly smartphones have become ubiquitous.

But there's a downside. Why will augmentation be any more egalitarian than wealth is now? In Hannu Rajaniemi's vision, the Sobornost are the ultimate 1%.

You may enjoy reading about the world of Numenera: Earth, a billion years in the future.


Every Western faith has its eschaton.

The Amish will abide.

That guy needs to finish his tasks for 2017 first.

- Tesla Model 3 production line (First deliveries late 2017. He said he was going to live at the factory to get the cars out the door.)

- Brownsville TX launch facility (first launch scheduled for 2018, not much construction started yet)

- Manned Dragon spacecraft (as of 2015, first crewed launch scheduled for late 2017)

- Falcon Heavy (as of Q3 2016, supposed to launch Q1 2017. Now Q3 2017. Maybe.)

Musk never delivers on time. I think he sets unrealistic deadlines deliberately to scare off competitors and motivate his staff. Yet he still does things so radical that they spawn or renew entire industries, so does it really matter whether things are delayed by a year or two?

Exactly. He accelerated the development of affordable electric cars by a decade. He basically rebuilt the space launch market. He's got the energy storage and production market in his sights. Also, don't forget he co-created the internet payments market.

You should read Edison's biography. You'd understand Musk much more.

Now that you've made this comparison, I can never unsee it. They're extremely similar! This makes me wonder who's going to be Musk's Tesla - i.e., what's the first amazing idea he's going to throw away (mainly out of hubris) that actually solves his problems?

Edison didn't try to run the businesses he spun off. He licensed patents and collected royalties, and had equity in the manufacturing businesses. But he didn't try to run General Electric.

Which one?

BAM - http://www.nature.com/nnano/journal/v10/n7/full/nnano.2015.1...

AFAIK, it's the most promising technology for actually making the neural lace.

TL;DR: If you put electrodes on an angled plastic mesh, you can roll it up, inject it, and it'll safely unroll. Also, brain cells like to grow into it.

This scenario scares the hell out of me. Is injecting/intercepting brain signals directly really the way forward? Do we really want to create a monster human species?

IMHO, future AI should be used to enhance human cognition in a noninvasive way - never in such a dystopian way as in the movie "The Matrix".

> Do we really want to create a monster human species?

1. Many humans today are already "monsters" compared to what was considered normal a few hundred years ago. You could spin a simple hip replacement or bone marrow transplant as "Frankenstein-ian" if you wanted to.

2. If AI becomes vastly more intelligent than non-"monsterified" humans, then the question may become: "Do you really want the human race to be enslaved / extinct instead of creating a monster human species?"

Almost all advancements come with pros/cons and teething issues. How we implement this, and the rules around it, are key. I like to believe having someone like Musk at the forefront will be helpful. In the same way we have to fight for privacy and free speech, the fight for privacy and free thought will surely be an issue as this tech develops. So many positive and negative possibilities here.

Eh. Brain stem, reptile brain, mammal brain, primate brain, neocortex, cybercortex.

Half my mind is already in cyberspace, why not make it half my brain, too?

Edit: Dystopias don't come from technologies, they come from people being shitty.

I am fine with using computers, mobile devices, VR/AR, etc. as long as I have the ability to disconnect and walk away, fix or replace.

What happens with the equivalent of drive-by ransomware on your brain? Send bitcoin to this address or we permanently give you a migraine?

Make the part that can interact with the outside world removable. Only keep the mesh and the external connection, which could (maybe?) be stateless, since (maybe) it would just be all the mesh connection points. Nothing to "hack" (without physical access), nothing to persist the hack.

we can invent new technologies, but we can't stop people from being shitty.

(OTOH we're talking about a technology that could potentially directly modify human psyche, so even that isn't clear anymore :))

> we can invent new technologies, but we can't stop people from being shitty.

Since it's this that is the primary reason that all of the next generation of technologies are incredibly dangerous existential risks, and is related to why other mitigations for existential risks are underfunded compared to the grand projects of being shitty to each other, it seems to me that fixing this should be the main priority of human research.

It involves coming up with at least workable answers to a lot of difficult questions and is a bit of an ethical minefield, but if you believe, as I am inclined to that the alternative is extinction, it seems pretty important to at least make a stab at it.

> Is injecting/intercepting brain signals directly really the way forward? Do we really want to create monster human species?

It will happen, whatever your personal point of view (or fears) on the subject. In 20 years or 200 (or maybe later), but it will happen. Technology moves forward no matter what you think or how you vote, because there will always be the kind of people who can't stand living a 'standard' life. Go check the Myers–Briggs personality types: not everybody is a mainstream xFxJ ('earnest traditionalists who enjoy keeping their lives and environments well-regulated', as Wikipedia says).

The real questions are: what percentage of humanity will go that way, and how well will humans/neohumans cohabit?

I don't think there's a big difference from what we do now. Right now we use our hands or speech combined with phone/glasses/watches/computer as tools to communicate with the internet/information. This technology would just cut out the middleman. It would create a lot of interesting opportunities and combined with VR would create extraordinary worlds.

The only way to have enough bandwidth (for a varying definition of enough, I guess) is to interface directly with the brain. Doing that noninvasively is much more difficult.

There are so many fundamental philosophy-of-mind questions that need to be answered before we can assume there will be a workable way for this tech to enhance our thinking or brain power.

I'm thinking more along the lines of Deus Ex.


Any investment into this kind of technology is bound to have positive returns because it'll make neuroscience more effective.

Right now computational neuroscience is using a lot of blunt instruments such as electrode arrays which are implanted into mice for the duration of a few experiments before the animal is put down for ethical reasons. Safe, minimally invasive brain implants would make long term experiments a lot simpler and more ethical.

If I remember correctly, there are needle-like implants with around a thousand contacts and it is quite a difficult task to get the signals out of the brain. Either you have the ADCs directly at the contacts, which means you can't get your density of contacts up, or you have the ADCs outside which will give you a nightmare of wiring. In either case the technology to actually have an interface read out individual neurons is still quite far off, as far as I know.

I'm not quite sure about all of this, so maybe someone with up to date information on the technology can help me out here?

If I may add some details....

The most popular implant is probably Blackrock Microsystems' "Utah Array", which has 96 electrodes arranged in a 10x10 grid (minus the corners). It looks like this: http://aerobe.com/wp-content/uploads/2016/11/utah-3.jpg For scale, the entire electrode grid is about 4mm on a side and the electrodes are between 0.5 and 1.5mm long (depending on the model).

There are a few other models (and similar stuff from other companies), but I'd be surprised if anything with thousands of contacts is in regular in vivo use. There are some in vitro (i.e., cells or tissue slices in a dish) systems with more contacts, but the signal quality isn't nearly as good.

We can read out the activity of single neurons--people have been doing it for single electrodes since the 1960s. It's slightly easier with a single (movable) electrode since you can creep up on the cell until its action potentials are fairly large and well-isolated from the background noise (here, large means about ±150 µV). You can't move the array or its individual electrodes, so you're stuck hoping that the individual shanks end up in good positions. Then, data is recorded at a fairly high sampling rate (say, 30 kHz) and the "spikes" are clustered based on their shapes to get individual neurons' responses.
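The detect-then-cluster step described above can be sketched roughly in code. This is a crude illustration only (the threshold rule, noise estimate, and snippet length are my assumptions; real spike sorting pipelines are far more involved):

```python
import numpy as np

def detect_spikes(signal_uv, fs=30_000, thresh_sd=4.0, window=48):
    """Crude threshold-crossing spike detection on one channel.

    signal_uv: 1-D array of voltages in microvolts
    fs: sampling rate in Hz (30 kHz, as in the comment above)
    thresh_sd: detection threshold in robust standard deviations
    window: samples per extracted waveform snippet (~1.6 ms at 30 kHz)
    """
    # Robust noise estimate via the median absolute deviation,
    # a common choice because spikes inflate the ordinary std dev
    noise_sd = np.median(np.abs(signal_uv)) / 0.6745
    threshold = -thresh_sd * noise_sd  # spikes are typically negative-going

    # Indices where the trace drops from above threshold to below it
    crossings = np.flatnonzero(
        (signal_uv[1:] < threshold) & (signal_uv[:-1] >= threshold)
    )
    # Extract aligned snippets; in a real pipeline these waveform
    # shapes would then be clustered to assign spikes to neurons
    snippets = [signal_uv[i:i + window] for i in crossings
                if i + window <= len(signal_uv)]
    return crossings, np.array(snippets)
```

The clustering itself (mixture models, template matching, etc.) is the genuinely hard part, which this sketch entirely omits.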

The ADCs aren't directly at the contacts, but you want the amplifiers and ADCs as close to electrode as possible to avoid all sorts of weird EMI from the mains, other equipment, etc. Getting the grounding and shielding right is a bit of a black art and eats up tons of researcher time. (You'd think "throw it all in a Faraday cage" would work, but...it doesn't).

What else do you want to know? :-)

I've done ephys in mice and gerbils. Spike sorting is nontrivial, and the effects on local tissue from jamming long shank electrodes into cortex are nothing i'd like done to me.

Unfortunately, less invasive recording techniques will never give you the ability to record from single units.

edit: pulled your google scholar and boy am I preaching to the choir...

And none of my array stuff is published yet (grrr!)

I did single-electrode experiments for my PhD and those definitely mess up the brain after a while. The Utah array stuff strikes me as "less bad" in that there's only one big insult to the brain, but it is a pretty bad one: the arrays are inserted with a pneumatic "gun".

I think you're right that non-invasive techniques will never give us single unit data, though I hope we can get some longer-lasting implantable electrodes soon.

I'm interested in the subject and appreciated your post, thanks. A couple of questions:

How does the brain adapt to having the electrodes in there, how long before the probes are accepted as being "part of" the brain?

Do you envision we need lots more sensors than in your example above, or is said number enough for precision input (say, text/words, or navigation in a 3 dimensional position & rotation plus a temporal dimension interface)? I guess the brain would work around the rough edges (or lack of sensor resolution) just like it already does with keyboards, mice, bodies, and language.

It depends!

For most electrodes, the brain doesn't really incorporate the implant. When a single electrode is inserted, you can start recording as soon as the contacts are inside the brain (in practice, you wait a few minutes since the brain is slightly elastic and stuff moves around). In humans, this is how deep brain stimulation is done--the surgeons use the neural activity to figure out when the electrode is in the right place. For larger implants like the Utah arrays, the insertion is a bit more traumatic. Allegedly, you get a pretty good signal right away, then inflammation makes it degrade for a while, and after ~12 hrs, the signal returns. However, the animal/patient is usually recovering during this time, so it's moot.

These electrodes are usually silicon and metal (usually tungsten, platinum/platinum-iridium, or iridium oxide), so the brain doesn't really "accept" them. In fact, it tries to encapsulate and reject them, which limits the lifespan of the electrodes. In my experience, a two-week-old array might have nice, well-isolated neurons on more than half of the channels; after two years, you'd be lucky to get single units on more than a handful of the 96 channels.

However, there's a lot of interest in developing coatings that inhibit this immune response or actually encourage neurons to grow into the array. There's a lot of promising research on this, but nothing (as far as I know) that's commercially available.

As for the number of sensors...it also depends. You can do a lot with a 96 channel array implanted in the right spot, including spelling (https://elifesciences.org/content/6/e18554#bib27) and control of a robotic arm (http://www.jhuapl.edu/prosthetics/scientists/neural.asp) though neither of these is anywhere near "native" performance yet. More electrodes might help, but there's also probably some low-hanging fruit in figuring out the right control paradigms, decoding algorithms, and even where the electrodes are placed.

For research though, more and better arrays would be great. Many brain areas have a spatial structure. In visual areas, for example, cells representing neighboring spots in the visual world[0] are also near each other. Motor and sensory cortex also have a foot-to-face progression. Bigger arrays might let us sample from a more diverse population of neurons at the same time, which could be scientifically interesting and useful for BCI. Denser arrays might also allow for better recordings from single neurons. If you have a sufficiently dense array, you can record the activity of one neuron from multiple sites--this lets you isolate its responses better (this trick is commonly used with bundles of four wires, called tetrodes).

I would also love to get my hands on arrays made from multiple materials. Platinum is great for recording the activity of single units, but lousy for stimulation; its low charge injection capacity means that high stimulation currents damage the electrodes and/or nearby cells. Iridium oxide has a much higher charge injection capacity, but lower impedance and thus, fewer well-isolated cells. A "checkerboard" pattern of Pt and IrOx electrodes would be awesome, but is apparently difficult to make (no chance you run a fab, is there?)

The flip side of all of this is that amplifiers and ADCs are expensive (though much cheaper than they used to be), and adding channels rapidly increases the data files' size too. My experiments generate about 1 GB/minute, and we record 5-8 hours a day, 6-7 days a week.

What else? :-)

[0] The organization is really "retinotopic", meaning that cells receiving inputs from the adjacent parts of the retina are near each other in the brain.

Assuming that the 1 GB/min figure is because you're saving broadband (back of the envelope math assuming you have two Utah arrays with 32 bits/sample at 30 kHz), you can get enormous space savings with compression. Field potentials are highly correlated across channels, as you no doubt know. Depending on your amplifier you may be able to pack everything into 16 bits/sample too.
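Making that back-of-the-envelope math explicit (the two-array setup, channel count, and 32-bit sample width are assumptions from the comment, not confirmed figures):

```python
# Rough reconstruction of the ~1 GB/minute figure: two 96-channel
# Utah arrays, broadband at 30 kHz, 32 bits (4 bytes) per sample.
channels = 2 * 96
fs = 30_000           # samples per second per channel
bytes_per_sample = 4  # 32-bit samples (the assumption above)

bytes_per_min = channels * fs * bytes_per_sample * 60
gb_per_min = bytes_per_min / 1e9
print(f"{gb_per_min:.2f} GB/min")  # ~1.4 GB/min, same ballpark as "1 GB/minute"

# Dropping to 16 bits/sample alone halves it, before any
# cross-channel compression is even attempted:
print(f"{gb_per_min / 2:.2f} GB/min at 16 bits/sample")
```

Exploiting the cross-channel correlation of field potentials on top of that would shrink things further still.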

Thanks for your answer! I'm afraid I'm not a fabricator of platinum electrodes... I hope you find someone who is!

I'm thinking about the immune response from the brain, and whether implanting this array of sensors can be done elsewhere than the actual brain, connecting to nerves instead of neurons, essentially creating a virtual limb. I guess it kind of misses the point of this whole thing, and can't reach as far as brain implementations, but it has the advantage of being more feasible as a solution. I think what I'm getting at is whether we need lots and lots of sensors with very high resolution data, or whether we can instead ensure the computer interface is consistent enough that a "muscle memory" can be formed for controlling the virtual limb. I guess I don't have a question really, haha. Thanks for your time and replies!

You are thinking of Ted Berger's work at USC, among MANY other researchers' efforts. Here is a link to the class: https://classes.usc.edu/term-20161/course/bme-552/

Also, the main issue with their work is glial scarring inside the central nervous system; the body ensures the implants are time-limited.

If anything you're underselling the existing hurdles. It was once thought that the brain was immunoprivileged, but now it's known to have its own immune system. As a result implants are prone to having scar tissue form around them, and after some years it starts to inhibit their ability to perform.

[Black Mirror][1] (a British science fiction anthology television series created by Charlie Brooker) has a few episodes dealing with AI and brain-machine interfaces.

It is very interesting to watch some of the emotional and social implications this kind of technology will bring.

  [1] https://en.wikipedia.org/wiki/Black_Mirror

Upvoted because even though that show makes me sick it does explain very important technological / cultural issues in a very, uh, visceral way.

Why or how does it make you sick?

Also are you regarding a specific season? the camera work? The ideas/stories themselves?

I don't know enough about stories or reality to explain this better, but the episodes seem to have a lot in common with a bad acid trip.

It has a doom-and-gloom vibe all over it, always; you know something bad is gonna happen eventually.

not OP, but e.g. the very first episode was quite a bit too heavy for me.

Surprised you didn't mention the Westworld series which is related to AI and virtual reality and ironically Musk's ex-wife plays a minor character in it.

If you want to get a really serious treatment on AI (and probably the single one in the history of TV/cinema that actually makes sense), check out Person of Interest (ironically, by the same team which now makes Westworld).

Please don't put URLs in code blocks, it makes them non-clickable.

I also recommend the "Ghost in the Shell" anime movie and TV series. Lots of discussion about the impact of a connected society, and what it means to be human when your body is a machine.

British television (or Media in general) has released some very interesting shows

Musk is wrong that input bandwidth is a limiting factor in intelligence. We don't efficiently use the megapixel our phones show us, let alone the several megapixels on our laptops.

Improving UIs will provide much more bang for the buck for many many decades to come. A neural lace is the equivalent of trying to increase the yield per square inch of the herbs in your window box when you have 100 acres of empty land around you.

Imagine thinking about something and having it appear on your brain interface HUD vs typing a google search on your phone and reading it from the screen. How is that not a bandwidth problem?

Your model of cognition is off. When you are thinking about an orange, you don't have an actual orange-like terabyte structure appear in your brain. You think about individual body reactions you might have to an orange, or you think about specific things you could do with an orange. Which slivers of orangeness appear is highly context dependent.

The fact that you have control over your attention... you can direct it to arbitrary things you could do or feel with an orange... that gives you the subjective impression of seeing the whole thing all at once, but that's just a lie your brain tells you. You only actually experience tiny vignettes.

You imagine your experience of "orange" is this massive huge bandwidth cognitive experience, but you only really need a handful of neurons to maintain a weak signal signifying orange.

A crude drawing of an orange in an app can trigger exactly those same neurons. As long as your mind is busy with other things, you won't even notice the difference. This is why novels work. And it's why comics work. "Understanding Comics" by Scott McCloud does a great job of showing how abstract representations of things can provide richer experiences than realistic ones.

Of course, if you direct your attention to the differences between the crude drawing and a "real" orange, you can interrogate those differences. But the fact that you can explore a rich representation of an orange in your brain doesn't mean that you do when you experience one in daily use.

If I want "orange" to appear in a Google search, what bytes need to be sent over the wire from my brain?

Wrong paradigm: imagine being able to have perfect recall of information you looked at once. Or being able to check point your active memories and recover that state of mind at will.

If I had to guess and theorise (and it's highly likely I'm wrong), making thoughts available semantically for a computer to parse requires more bandwidth than (once command and control is figured out) recalling information and presenting it. The latter could perhaps be tied directly to the visual nerves only? Still all part of the same paradigm to me, though.

Wrong paradigm: Imagine your brain has the entire contents of Google. The speed of all machines connected to the network. And the creativity of all other linked minds.

But it won't, unless Musk somehow figures out how to "write" information to your brain, and right now we can't even fully figure out how the information is stored, much less how to write it back - the way this will most likely work is that you will search something on google by thinking about it, and it will "appear" in your vision by stimulating your visual nerve. You still have to read it and remember it, you won't magically know Kung-Fu Matrix-style by thinking about it.

That's not substantially better than "easier" technology - i.e. something non-invasive but advanced, like contact lenses with a lightfield display and wireless power.

But if you're going to be interfacing with the brain, then there's a lot we can do probabilistically - i.e. deliberately inducing different parts of the memory centers of the brain to promote recall, targeted to the patterns of activation at previous times.

Modifying the production of neurotransmitters or being able to deliberately dampen some would also lead to some interesting possibilities.

I think you're saying a few different things. But I do agree with your very first statement about bandwidth vs intelligence. As a matter of fact, intelligence itself is not nearly the limiting factor of human societies. If one cannot control the influence of what has accumulated in one's consciousness from the past they cannot see anything as it really is. People hear what they want to hear. Even given the constant firehose of truth that we are being sent through everything that exists around us, people remain blind through the simple fact that they don't want to know.

Despite the quite high bandwidth of the human eye, we only use it for serial input.

Despite being a massively parallel processing unit, it appears that there are only a few queues for data input.

Applying the awesome pattern matching power of the brain to a lot of data simultaneously (in parallel) is not something our brains appear to be capable of. Perhaps it would require too much energy and get too hot?

> we only use it for serial input.

Consciously. Unconsciously, your mind is always looking for threats - big movements - in the periphery of your view. It's a co-processor which runs through a lot of data in order to direct your conscious mind.

> Perhaps it would require too much energy

I think that the problem of energy could be overcome. I think that instead of getting too hot, it would simply produce too much waste, and since the waste evacuation system for your brain only really operates when you sleep (at least, last I heard), this would be the main limiting factor.

We're not using it efficiently, though. And the whole field of UI/UX is actually now moving in the completely opposite direction: towards making things least efficient and least information-dense, but more and more pretty, so that they sell and addict better.

I'd love to see some research into just how much data can humans really process if humans are properly trained and the data is cleverly presented.

My theory for a while has been that it is human-brain interfaces that will wake people up to the issues of GPL(v3) vs BSD-style licensing.

No iBrain device is touching my brain.

It's hard to resist a tech when everyone around you is using it. It's not impossible, but it is hard.

My mum was super anti Facebook when it first came out. Now she has one.

I'm not saying you will have an iBrain, but myself for instance, hope that I will not have one. Maybe though, I won't really have a choice in a world where nearly everyone has an iBrain. Kinda like that show where everyone has smart contact lenses and only some people go against the movement and remove them.

I wonder if these conversations are how we ended up in 21st century with old people adamantly refusing to use computers - at some point in their 20s they decided they don't want to have anything to do with this dangerous technology, and now they are 60 year old and unable to receive email. Will it be us in 40-50 years time, with kids running neural laces with ease, and we will be refusing to use them because we were worried about certain aspects of it?

I think there is a clear distinction between what we are talking about and your example. I'm not talking about refusing to learn a new tech, such as "computers", rather insisting to only use tech that respects my freedom. For example, Linux is my daily operating system largely for this reason.

Stallman knew these issues were and are important, he was simply a man so far ahead of his time most people fail to understand the level of importance of his arguments.

I don't want to compete with AI. I want AI to create efficiency and wealth so I can relax and have a leisure lifestyle. This shows that workaholic CEO culture is only about margins and gains, not improving lifestyle and quality of life. Automation should be improving our lives, not adding anxiety about competing with it, a competition we will ultimately lose.

We're probably in for another 1880s labor movement as automation keeps eating jobs. We either decide to benefit from it via strict regulation, or we somehow try to compete with it, which will greatly lower the value of our labor and only enrich the owners of automation.

David Brin said something once that I found interesting: "as long as I'm still in charge of the desiring, let the damn AI take care of the execution!"

Sadly, it's a quirk of human nature that we get more defensive about our things when work gets tough. It tends to push us toward more neoliberal policies. If work gets hard to find, Gen Z and the baby boomer generation would rather keep millennials renting than support massive social spending. Unless there's a massive shift toward the left, we'll only see widening inequality due to the rise of AI in the job market. This is a cynical outlook, but in the last three decades of politics across the globe there have been no progressive popular movements.

Yeah, it's not like Bernie Sanders almost beat Hillary Clinton, and did better with millennials than she did...

> I don't want to compete with AI. I want AI to create efficiency and wealth so I can relax and have a leisure lifestyle. This shows that work-a-holic CEO culture is only about margins and gains and not improving lifestyle and quality of life.

I suspect, though I'm not 100% sure, that when Musk speaks about AI "competition" there's an implication of existential danger. If true, then from his POV it would not so much about "I want to min-max my CEO experience" but "I'd rather we not go extinct within my children's lifetime"

And that's why he's talking about nuclear disarmament, right?


Seems that Musk wants to be a superhero that saves humanity, but villains that actually exist aren't cool enough for him.

Nuclear weapons disarmament is a political problem, and Musk primarily works within the realm of engineering. Besides, game-theoretic arguments can be made that nuclear weapons aren't much of a threat at all.

Not everyone wants to be human forever; some people take joy in their work and want to continue doing it and contributing in a meaningful way, etc. Nothing to do with being a workaholic or the rest. I hate to say it, but 'leisure' gets boring awfully quickly.

You realize that leisure could be any kind of work you want.

The distinction is not leisure vs work. It's freedom vs. constraint.

I appreciate that; however, I feel constraints give meaning to a task. You might argue that I can self-impose them; I will argue that I wish those constraints to be imposed by society, or by the conditions needed to advance science. To do that, eventually, as AIs come online, I will have to stop being human.

So everyone should suffer, because you enjoy it?

That's not fair. The argument you seem to be advocating for is that we can make people happier by reducing scarcity, they are offering you a counter example.

'happy' is hard, it could very well be that suffering is an important part of happiness... who knows, but it's a valid line of argument.

Have you actually tested this recently? I used to be concerned about unconstrained leisure bottoming out, but I've recently had about 6 months of a break of sorts and I was never bored. As long as I kept physical, social, emotional needs met AND had some creative output (side projects), I was perfectly content. A real surprise actually.

> 'leisure' gets boring awfully quickly

Not for me, as long as I get to keep all the toys in my garage. The list of projects I want to complete is probably an order of magnitude too big to finish in one lifetime as long as I have a full time job.

I don't want to compete with AI.

You don't have an option.

Said another way, in the long run you either merge with AGI or go extinct.

Did 1880s people have to merge with assembly lines? I think you're being fairly dramatic and too "/r/futurology" here. The whole point of technology is to create tools to make our lives easier, not to become tools ourselves. People won't desire this, and pushing this kind of pressure on people seems like a recipe for revolt against the capitalist system that has thus far provided so much wealth. The assembly line works for us; we don't want to become an assembly line ourselves, and it's ludicrous to think such reasoning will lead to extinction. This is a bit like saying the Olympics will become nothing but a steroids showcase because everyone will take steroids.

If anything, dissuading people from standing up for their liberty and basic humanity seems to be the best route toward extinction to me.

We kind of do merge with our technology, though. People are fuzzy-bounded; we're part of our environment and it's part of us, especially in the mental domain. In the physical domain glasses, notebooks (offloaded brain memory), telephones (merged voice) are physically merged technologies. In the mental sphere I feel I can say that Wikipedia and the Internet are in some sense merged parts of my mind. They are part of my thought processes and they were an essential ingredient in forming my thought processes too. Take away all my writing and reading and I really would be a different entity, mentally.

He did specify "in the long run", which I think is probably true as the timescales get longer. I doubt anyone would argue that humanity would live lives relatively unchanged from the present say, 3000 years from now. So it's really just a matter of arguing about 'when' and not 'if'.

I'd also like to point out that although we didn't "become" the assembly line, employees certainly have become more replaceable in many of the jobs impacted by the assembly line. Depending on the job, some employees have become the metaphorical "cog in the machine", and can be replaced by other employees with minimal training, as compared to the artisan-based system that predated mass production.

I certainly see your point, however, and I truly hope that we can maintain our liberty and humanity as technology increases, instead of regressing into a kind of dystopian neo-feudalism.

No, but like with any other performance-enhancing utility, there's going to be effective peer pressure. If AI+human is better than just AI, people will "merge" with it. At that point, if it's a true AI, you might as well be a different species to them, as you'll likely not even be able to communicate with them after a while.

Mysterious superintelligent AIs, too smart to communicate with baseline humans - I think this is just a plot device for post-singularity fiction. You just can't write a super-AI explaining its own behavior, because you aren't smart enough. The best you can do is write about a mysterious AI which can't communicate with you, or can't explain its own actions because they are too complex to explain.

Returning to your post, I think that human-AI hybrids will be perfectly able to communicate or explain their plans to baseline humans (merging with AI should enhance communication capabilities, not hamper them).

They could have problems explaining their feelings, thought processes, perceptions, as those things may not correspond to anything baseline human experiences. They could have no reasons to talk to baseline humans at all. But it is not inability to communicate.

Yes, having to get a brain-machine interface implant before we've had a long time to observe its safety and side effects has been a fear of mine for many years.

So basically you want AI to enclose you in a simulation where you feel fulfilled and happy?

No, I want it to create value that will be given back to me via taxation.

Could you be more clear on the economic mechanics of your scenario?

You said you want AI to provide wealth. But now you say you want to create value. It looks like you may need to compete after all.

First I read:

"No, I want it to create value that will be given back to me via television."

Why would it consent to you leeching onto its productivity?

I wouldn't look to a workaholic to invent new ways to eliminate work. You might have to figure that one out on your own.

Elon Musk's "bold" ventures are the only things that excite me. Meaning, whenever I hear of a supposed "big idea" that's right around the corner (cough, cough - Magic Leap), I basically dismiss it. Except when it has to do with Elon Musk.

He is the first to admit that his plans will probably fail, but he actually has an incredible track record over 10+ years.

Same here. I'm sceptical about most of the "big ideas" - most of them are marketing bullshit. Doubly so when it's a startup - the goal usually is to bullshit people so that they come on board and enable the founders to have their exit.

But Elon Musk I trust. He's shown time and again that he honestly wants to realize those ideas, and not pursue them for the money. He also has a pretty good track record there. I'm hoping there will be more people like Musk though; I think we desperately need them.

Yeah, unfortunately, people like Elon Musk are, in my highly imprecise estimation, 1 in a billion. Here's the sad part about this statistic: That means we should have 7 Elons alive today, but only 1 has bloomed into full potential.

The rest, we will never know about due to them being born into repressed governments, and/or extreme poverty.

An efficient brain-computer interface would require a good understanding of the inner workings of the brain. So any effort toward building such an interface would motivate neurobiological research to gain that understanding. I personally believe that once we have an accurate model of the brain, that is, one that explains how thoughts, memory, emotions, consciousness and so on arise from brain activity, we're literally done.

Once we reach this point, I think humans will eventually replace their brains with artificial ones, either gradually (a Moravec transfer) or in one go. There will be various motivations: immortality, mind-performance improvements, etc. The end result will be the same: we will turn into machines. It won't be a merger; it'll be a plain replacement. The scenario where AI robots kill us all will only be different from a subjective point of view.

I'm not convinced that's true. There are countless innovations that we were able to get to work despite having little understanding of the underlying theory. Given how adaptable both biological and artificial neural networks are, I'd say that the hardware is far and away the primary limiting factor, not our theoretical understanding. Once we get a safe and reliable connection between brain and computer, we can figure out the rest through trial and error.

The thing that gets me about this is that we all assume the brain is ultimately an understandable object. This may not be the case at all. There are things in our world that will stay fundamentally mysterious. Strictly speaking, they will always have 'edge cases' that are logically considered paradoxes, yet exist all the same (Godel's Incompleteness Theorem). Examples include hurricanes, where we will only ever really be able to give probabilities of paths, or the stock market, where if you can figure it out you'll have killed the whole thing.

Most importantly, the idea that we can understand the human brain and then manipulate it must be false from our everyday experience. Simply put, if we can understand our brains, then we can understand women, something we all know to be forever impossible.


A Master Programmer passed a novice programmer one day.

The Master noted the novice's preoccupation with a hand-held computer game.

"Excuse me," he said, "may I examine it?"

The novice bolted to attention and handed the device to the Master. "I see that the device claims to have three levels of play: Easy, Medium, and Hard," said the Master. "Yet every such device has another level of play, where the device seeks not to conquer the human, nor to be conquered by the human."

"Pray, Great Master," implored the novice, "how does one find this mysterious setting?"

The Master dropped the device to the ground and crushed it with his heel. Suddenly the novice was enlightened.

If you want to compete with AI, don't make humans easier to hack.

Humans are typically the weak link in a security strategy... so I think this would make them harder to hack.

We're easier to socially engineer, but so far humans are monstrously hard to hack; we're as likely to break unpredictably or not at all, as be broken.

I would argue that humans are quite easy to hack, although perhaps not in the way you were thinking. Our perceptions are quite easily and reliably hacked even when we know it is happening, as demonstrated by the success of stage magic and optical illusions[1]. At one point, there was a great deal of fear that companies and adversaries could influence large groups of people with subliminal messages[2].

Although the efficacy of these techniques is somewhat in doubt, on a more mundane level, most people are used to attempts by advertisers and marketers to subvert our desires and preferences to buy certain products. A great deal of research and money has been spent on essentially hacking our desires and exploiting our brain's response to intermittent reward and social cues. This has brought us product placement in movies, celebrity endorsements, and video games that produce changes in the brain not much different from those found in drug addicts. Scientists have engineered fast food to exploit our evolutionary desire for sugars and fats -- something that was good at one point but now serves only to make us, and fast food executives' wallets, fat.

I would consider all of these a type of human hacking simply due to the reliability of their effect -- if not on an individual level, certainly on groups of people.

[1] - https://en.wikipedia.org/wiki/Optical_illusion [2] - https://en.wikipedia.org/wiki/Subliminal_stimuli

Optical illusions are neat, but I don't think it rises to the level of a "hack" in the sense that anyone is talking about here. Subliminal stimuli is, as the massive warnings on the article suggest, utterly unproven. There is a difference between being swayed or marketed to, or just too lazy to disengage from the onslaught of marketing... and "hacking". If hacking humans worked, the marketing wouldn't be necessary in the first place.

Although some of my analogies were a stretch, I think it might be valuable to regard these types of manipulation as hacking. If nothing else, to make us aware of our own susceptibility to con artists, fake news and government influence. The reasons these techniques are effective are very similar to the reasons hacking methods for computers are effective; they take advantage of systems that evolved or were built for one purpose in order to use them for another, often to the detriment of the victim.

For example, the existence of optical illusions and stage magic are a necessary consequence of particular limitations of our visual system and attention. One could predict the existence of new, never before seen optical illusions purely from knowledge of the way the brain processes visual stimuli. For more information on this, see the works of Roger Shepard[1][2] who did a lot of research into the psychology of perception and mental representations.

This has ramifications not just for human psychology but for artificial intelligence. If we want to build a computer system or robot that can process visual information as quickly as humans and animals do, then we may very well have to program it with the same simplifying assumptions that humans and animals use to make rapid visual processing tractable[3][4]. A consequence of this may be that these computer systems will be subject to the same optical illusions as humans, as a necessary consequence of limited attention and computational resources.

Furthermore, the misperceptions that make stage magic possible may be inducible in any system that can only attend to a subset of the visual stimuli given and that must make assumptions about the intentions of the subject being viewed. These assumptions and inferences are usually accurate under ordinary circumstances, when the subject is not trying to deceive the observer, but a clever subject could engineer circumstances where the observer has no choice but to either be deceived or accept that their perceptions have no logical explanation -- hence the woman appears to be sawed in half even though we know this is unlikely; there is no visual information to disprove that she was (the lack of blood is evidence that she wasn't, but only from our past experience with people being cut by blades).

We are susceptible to influence by fake news and celebrity product endorsement due to our evolved preference for information coming from "authority figures" and sources that agree with our existing views[5]. Now, normally one may hesitate to call exploiting these systems "hacking," because the exploiter often doesn't know that that is what they are doing, just as someone may stumble upon a new computer exploit without knowing exactly why it works.

Again, I would argue that it may be to our advantage to think of the targeted exploitation of these innate tendencies as a type of "hacking" if only to make it more likely that we can avoid being influenced to beliefs and behaviors that may not be in our long term best interest.

[1] - http://im-possible.info/english/art/classic/shepard.html

[2] - http://rumelhartprize.org/?page_id=110

[3] - http://ilab.usc.edu/publications/doc/Miau_etal01spie.pdf

[4] - http://www.sciencedirect.com/science/article/pii/S0004370202...

[5] - https://en.wikipedia.org/wiki/Authority_bias - see also https://en.wikipedia.org/wiki/List_of_cognitive_biases

Neural interfaces are an obvious direction to go, and we already have well established commercial nerve-interface products, like cochlear implants.

But I think there'll be significant problems in interfacing with the brain in a meaningful way, if you want to bypass the existing interfaces (hearing, sight, touch etc).

I think it will turn out that the internal implementation is a lot stranger than the external interface, and that it could vary significantly between persons. Perhaps it falls into major categories for some aspects, analogous to blood types, but then varies in the details as much as our fingerprints - note that even genetically identical twins have unique fingerprints, as they are highly developmentally affected, like the branches of a specific tree. Similar for retina prints, an example closer to (some say part of) the brain.

I think interoperation with the internal implementation could require actual understanding of the brain - enough to build strong AI. We might even have strong AI first.

It seems very difficult; we haven't the faintest clue at this point. We may need new fields of mathematics. Could take even more than 20 years.

Have fun with that. Cochleas are tricky enough!


OTOH, if something does come out of this, it could have extremely far-reaching consequences. Forget interfacing with AI. Think - locked-in syndrome, or people with various disabilities (blindness, etc).

This is the kind of moonshot that ought to be encouraged.

I worked on some locked-in stuff. There was recently a bit of a furor when a researcher snuck off to Venezuela and had an electrode array implanted in his own brain: https://www.technologyreview.com/s/543246/to-study-the-brain.... It, um, didn't go over well in the research community.

We had tried external BCI for locked-in patients based on visually evoked potentials -- the thought being that even if you lose the ability to move your eyes, you can still steer an on-screen keyboard. EEG is so noisy, though, that it doesn't work unless the patient is looking at the target directly; i.e., they can move their eyes. In which case, you should use an eye-tracker, because it's $90 and 10 times faster.
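For concreteness, here's a toy sketch of how a visually-evoked-potential speller of the kind described above can decode which flickering key the user is looking at. This is not the commenter's actual system; the frequencies, amplitudes, and noise levels are made up, and a real scalp recording is far noisier relative to the evoked response, which is exactly the commenter's point.

```python
import numpy as np

# Each on-screen key flickers at its own frequency; staring at a key
# produces a small oscillation at that frequency in the EEG, which we
# detect by looking at spectral power. All values are illustrative.
fs = 256                          # sample rate, Hz
t = np.arange(0, 4, 1 / fs)      # a 4-second analysis window
target_hz = 12                    # the key the (simulated) user looks at
candidates = [8, 10, 12, 15]      # flicker rates of the four keys

rng = np.random.default_rng(0)
# Synthetic EEG: a weak evoked sinusoid buried in much larger noise.
eeg = np.sin(2 * np.pi * target_hz * t) + 3.0 * rng.standard_normal(t.size)

# Power spectrum; frequency resolution is fs / N = 0.25 Hz here.
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

def band_power(f, half_width=0.5):
    """Sum spectral power within +/- half_width Hz of frequency f."""
    mask = np.abs(freqs - f) <= half_width
    return spectrum[mask].sum()

# Pick the candidate frequency with the most power.
detected = max(candidates, key=band_power)
print(detected)
```

In this synthetic setup the 12 Hz band dominates and the decoder picks the right key; degrade the signal-to-noise ratio (as off-axis gaze does in practice) and the peak disappears into the noise floor.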

How did it not go over well?! When you hear other researchers attack the work this guy is doing, you begin to suspect more of them are politicians looking to protect their own grants than actual scientists looking to make bold advancements themselves.

Agreed. This is a decent book on the topic: https://mitpress.mit.edu/books/toward-replacement-parts-brai...

Pretty much what they said about reusable rockets and electric cars.

They said that about electric cars? They had electric buses before they had petrol ones. In most ways electric is actually simpler. We've just been waiting for the batteries, and Tesla didn't do anything revolutionary there. Tesla saw the batteries were finally there and found a way to hype, market and sell electric cars successfully. That's praiseworthy. The self-driving stuff and over-the-air software updates are also cool. Lots of innovation there. But making electric cars has never been that hard per se.

I'm not sure I buy it either. Cars don't have cochleas.

I'm pretty sure we had electric cars before we had ICE cars.

iirc some of the first cars ran on vegetable oil, which is now seen as quite progressive.

And anti-gravity and teleportation and warp-speed and finding a date in LA.

That last one being especially out of mankind's reach

Eh, it's worth understanding the scale and scope of the problem. I've been thinking about this a lot, recently.

The human brain has on the order of 1e11 neurons. The galaxy has on the order of 1e11 stars.

Now, would we confidently predict that we could "solve" the galaxy in less than 100 to 500 years, say? And yet we make very, very aggressive estimates of how quickly we might unravel the secrets of the brain.

Biology is just monstrously, monstrously complex!

The hope is that it may actually be easier to build a useful thinking machine than to fully reverse-engineer the brain.

If we can really build a neural lace in the sense that Iain Banks originated the term, then we've probably also solved cancer and the rest of biology. Imagine the fine-grained control you'd need to manufacture a molecular-scale structure in-situ.

While I'm a bit more optimistic than you, I think you're raising a very important point. We just found out the lungs make half the platelets in the body (http://www.nature.com/nature/journal/vaop/ncurrent/full/natu...) - the idea that the brain will be easy to sort out is definitely an overconfident one.

Well, the issue with the galaxy has more to do with general relativity than the number of stars. The issue right now isn't bottlenecked in any single place like that.

Can you elaborate? I don't think I got your point.

I'm intrigued by the difference between "understanding" a population, in aggregate, and having a fine-grained view of each individual while also knowing how each individual relates to the whole.

So, what might we learn about the galaxy if we knew what / where / when every single bright spot of star-ish size is, that we would not learn from an aggregated simulation? Rhetorical question, obviously.

I don't even know how many people live within 5km of me. How many are human-trafficked, held against their will? etc. We can make approximations, but there's a lot of interesting information at the margins.

Well, most of what you get is verification. When the problem is entirely local, you can verify separately calculated measurements with other measurements. This is why I think we'll be able to move much faster to work on problems of similar magnitudes locally than on relativistic scales.

While you don't know how many humans are within 5km of you, that's certainly easier to verify with confidence N than it is to verify how many humans within 5 light years.

I don't really understand this perspective. Are you arguing that we shouldn't try? Of course it'll be extremely difficult and the work will take place over decades, not months or years, just like Spacex and Tesla. Nobody's expecting Iain Banks's neural lace tomorrow. But if someone doesn't start working on it tomorrow, we won't have it in a hundred years, either.

No, I agree we should try. It was just an "aha" moment for me. It highlighted to me the inconsistency in answering these two questions:

1) How soon do you think we'll be able to tap every neuron?

2) How soon do you think we'll understand the galaxy in terms of each and every individual constituent star (not aggregates)?


We can hand-wave estimates for the number of constituents in each of these systems. But it's interesting that we don't really know how many neurons there are in your individual brain, or in mine, etc. We can't yet look across the scales required to e.g. count and localize every single cell in your brain. Also, at what time? It changes. Just interesting questions when you get to a fine scale...

About 80 billion. Fortunately, glial cells matter, too! (Don't tell anyone.)

True. But not every idea that is labeled as "too hard" is in fact a useful (or even doable) idea.

You don't think being able to communicate with a computer by thought is useful?

If it's not doable, no. It's just a nice dream.

If it's doable but the computer can't tell the difference between my attempts to command it and my daydreams, maybe still no.

Clearly you think this is an impossible task. I strongly disagree, given the fullness of time - tens, hundreds, and thousands of years lie ahead of us, eons for engineers and scientists to fill with research and experimentation. Your perspective on this seems very shortsighted to me.

"Neural lace," indeed. No such thing as too many Iain M. Banks references for Mr. Musk, eh?

Forgive me if I am wrong; I don't know much about these matters. Just a quick thought. To me, Elon Musk's ventures seem a bit like an angel investor's portfolio. Every company he founds has a probability of making billions (car company, rockets, tunnelling, etc.), though they are also high risk. Of course, he can't spread his risk as broadly as an investor can. But while risky, every venture seems to be very calculated - the opposite of crazy. I mean, he could focus everything on Tesla, but if Tesla bites the dust he would lose everything.

But correlation != causation; maybe he just likes founding stuff.

He needs the money, but he doesn't want it. He's not playing the business game for the game's sake; he wants to advance civilization, and the best way to do that is via billion-dollar disruptive businesses, so that's what he's doing. I applaud him for keeping true to this after however many years of having access to that many assets.

We should be so lucky he's not purely profit-driven. Even on /r/futurology, supposedly a place of boundless imagination and hope for the future, I get attacked for suggesting that maybe it's not a bad idea to at least investigate the potential behind solar roads. Nope, those are stupid and a waste of money, fuck me for being an idealist.

My point is most of society is self-limiting for whatever reason (obsessed with profit, obsessed with status quo, etc). Luckily we've got folks like Musk who don't seem to give a shit and do what they think is right for humans. I'd love some other examples of people like this.

Another example: Zuckerberg took flak for his Internet.org idea, because clearly he's an evil Capitalist that just wants to profit off the advertising clicks of Indian goat farmers.

Solar roads are stupid because they're a terrible use of resources. You could literally build a roof of solar panels over the road for cheaper than you could build a 'solar roadway', and it would be more efficient to boot. It's nothing to do with idealism - if you were being an idealist you'd be backing a less inherently handicapped proposal.

>inherently handicapped

With today's assumptions, sure. Who's to say some materials scientist with a trickle of funding doesn't find a way to mix some cheap leftover metal into an asphalt mix, and would ya look at that, for some reason you can now plug a USB cable into the road and it'll charge your phone. I'm being silly, but my point is exactly that: we just assume it won't work because, yeah, duh, it won't work if we don't investigate it beyond "huh, if you put a car on a solar panel, it breaks. This is impossible, clearly."

Idealism is a belief in something unrealistic. It doesn't deserve a place on the stage of solutions. Dave Jones from the EEVBlog [0] channel did a bunch of videos explaining why solar roadways are a bad idea.

[0] https://www.youtube.com/user/EEVblog/search?query=roadways

The problem with solar panels is not that we lack a place to put them. Solar roads don't offer anything other than a bad place to put solar panels (under the tires of our cars).

Why is everyone so certain his motivations are noble?

Because so far he's staying true to the motivations as he described them; he's using business to help solve big problems - as opposed to other entrepreneurs, who use big problems to help their businesses. He's done nothing so far to make me personally doubt that his motivations are as noble as he says they are.

I'm not saying that he won't eventually turn into yet another selfish business baron. That would be a very sad day, though. We need more people with noble intentions and means to execute on them - and such people need to be encouraged, not constantly doubted.

To me he seems like a genuinely noble guy, especially among many other successful capitalists.

wouldn't say noble, just consistent and different from your usual businessmen who treat amassed capital and/or net worth as a measure of success.

This is an interesting story, especially considering the new Ghost in the Shell movie is coming out this week. It really makes me think that something like this might be possible. As we've discussed here before, technology moves really, really quickly, and things that we never thought of 20 years ago are pretty common now. Even full-on self-driving cars are on the horizon in a very realistic way. Sure, we thought it would probably happen at some point, but we are now within a couple of years of actually being able to purchase such a thing. And we all have pretty powerful supercomputers in our pockets. Is brain augmentation that weird?

Also, I've been thinking that we will need a new interface for our cell phones. The screens can't get much bigger without being uncomfortable, yet we need to interact with them more and more. I was thinking that a contact lens display would probably be within reach soon but maybe we'll just skip the physical stuff and go directly to injecting signals into the brain stem.

There's a "non-invasive" brain-computer interface on the market. It's a little less ambitious; targeted towards people who practice meditation. https://en.wikipedia.org/wiki/Muse_(headband)

This is just EEG, which has been around forever. I wouldn't call that a brain-computer interface.

EEG is a brain-computer interface. There have been many BCI projects that use EEG.

I saw a professor demolish one of these devices by simply instructing the students using it to "think" without moving their eyes or eyebrows. At that point the device stopped working, because, not surprisingly, the action potentials from our facial muscle movements far exceed anything the brain produces in terms of EEG signal when measured from the skin.

Trying to read brain signals from a skin-based EEG device is like trying to listen in on a conversation being had inside a packed sports stadium while you fly over it in a helicopter.

What's the point of this thing? See if you're meditating right?

Anybody know if it works? Could be cool if I could meditate "good" or "not good" in order to turn a light on and off, or turn on my cat feeder or whatever lol

Their app does a fairly impressive job of giving feedback to help you stay on track during meditation.

However, the cool thing about Muse, IMO, is that it's cheap, comfortable (more so than the Emotiv), gives reasonable signal quality, and has a free SDK for developers. Brain-connecting devices, even if they just use EEG, open up a lot of doors for interesting products. Assisting in meditation is a great idea, but there have got to be a lot of good BCI ideas that we just haven't thought of yet.

There are a lot of these kicking around.

It's a very bare-bones EEG system: the Muse has 4 channels and a few references; EMOTIV has either 5 or 14. The slick part is that they use "dry" electrodes, which do not need to be filled with a conductive gel before use. The signal from wet electrodes is still a bit better (less noise, etc), but the gel is fairly gross to have in your hair.

There is also Emotiv: www.emotiv.com

I've had a casual curiosity about why billionaires like Musk, who seem to care about worldwide problems such as AI, aren't working to solve the imminent issues of global warming. I'm not talking about reducing fossil fuels, but perhaps using their resources to come up with solutions that address a world that isn't so habitable for life in the future. I don't know enough about global warming to know why this might be the case (hence why it's a casual curiosity). Or perhaps there are grand efforts and I just haven't heard about them yet.

Edit: I'm not condemning Elon Musk or downplaying his efforts. This was just an honest question I was hoping someone more knowledgeable than me might comment on.

He founded Tesla and SolarCity with that objective in mind and almost went bankrupt in the process...

My point was more about addressing global warming from the perspective of mitigation rather than prevention. There seem to be a lot of reports indicating that our opportunity for preventing global warming has passed or will soon pass[0][1][2]. So if that's the case, it seems like it would be pretty important to work on solutions for surviving the effects of global warming instead.

0. http://www.npr.org/templates/story/story.php?storyId=9988890...

1. https://www.google.com/amp/s/amp.businessinsider.com/climate...

2. https://en.wikipedia.org/wiki/Runaway_climate_change#Current...

I've been seeing studies since the release of An Inconvenient Truth claiming that the point of no return has been reached.

I don't doubt climate change, but I'm very skeptical of studies positing a slippery slope of catastrophic proportions. It's true we don't know the cascading effects of increased CO2 and methane emissions, but that doesn't mean the unknown is apocalyptic.

And hey - maybe we won't all die!

Humanity could use some thinning out.

Unfortunately, we'll take a lot of innocent flora/fauna down with us as well.

Elon Musk talks about global warming all the time. He even says Tesla is just a means to an end of saving the planet from global warming.

Logistics. Can the problem be solved with engineering and invention alone, or do they need to get governments and political movements involved in order to accomplish anything? Why would anyone with any real ability or drive want to bother with that shit?

It's empirically easier to start an orbital rocket company than to convince hundreds of millions of people to do things they don't want to.

Musk's goal for SpaceX is to help move humanity to other planets like Mars. In fact, he actually wants to die on Mars.


Forget about AI, which is something Elon realized to be an issue only relatively recently. Tesla Motors and SolarCity exist as his response to exactly the problem of climate change - the stated goal of Tesla is not "fast and sexy cars of the future", but "electrification of all transportation", exactly because the latter is an important way to fight climate change.

As for other grand efforts - I haven't heard of them either, and I'd damn love to hear about them. We need to praise and support people who're doing good work.

Because the solutions to these problems are unconscionable to tech nerds, whose entire life style is dependent on the happy belly of Western resource and energy profligacy. The solutions involve powerdown, a rollback of technological progress and refocus on survival and sustainability at a global scale, and quite possibly a significant reduction of the human population.

I'm fairly sure that more than just "tech nerds" are against large scale democide, especially when the alternatives are pretty reasonable (bike lanes and wind power, the horror!)

Tesla is Musk's response to global warming, at least kind of.

The universities, companies, and individuals involved in this machine learning renaissance are a collective super-intelligence.

It's amazing how fast we can learn now, fuelled by information-sharing over the internet. I'm literally forgetting the names of everyday acquaintances because I'm learning and retaining so much new stuff.

The problem, from an individual's perspective is that, while you can stand on the shoulders of giants, you can't easily commandeer all that brain-power.

I'm imagining getting some time-slices of Terry Tao, Geoff Hinton, <insert other big brain names> 's cognition to devote to my own projects. What would that even look like?

On a different note, if we really could mind-meld, could we ever truly hate or kill each other?

I just started reading the Nexus trilogy by Ramez Naam (http://rameznaam.com/nexus/). It's quite good so far, and explores some of these concepts from a sci-fi bent.

"In the moment when I truly understand my enemy, understand him well enough to defeat him, then in that very moment I also love him. I think it's impossible to really understand somebody, what they want, what they believe, and not love them the way they love themselves. And then, in that very moment when I love them -" "You beat them." For a moment she was not afraid of his understanding. "No, you don't understand. I destroy them. I make it impossible for them to ever hurt me again. I grind them and grind them until they don't exist." Ender & Valentine, Ch. 13: Valentine

Ender's catchy and interesting, but I don't think it would play out that way.

Imagine that at the moment you fully understand your adversary, you also fully understand others like yourself. The hurt that led to the desire to never be harmed again. The harm that this kind of mentality inflicts. All the victims of your proposed victory. ...

I mean, it's a hell of a quote, but at a certain point he just stops the recursion for a bad-ass conclusion.

I'm not sure Musk's contention makes sense, because intelligence isn't a uniform attribute of systems (biological or synthetic). Specifically, what passes for intelligence is just the proper application, discovery, and/or refinement of algorithms, which we can do easily without intelligence (in the sense of self-awareness and raw analytical capability). Just run through all possible paths or solutions on a sufficiently fast computer and you'll eventually get all the possible solutions to any given problem. It's not a matter of intelligence; it's a matter of lifespan and applicability.
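The "run through all possible paths" idea can be made concrete with a toy exhaustive search. The instance below (a small subset-sum problem) and its numbers are arbitrary, chosen only to illustrate the point:

```python
from itertools import combinations

# Brute-force search in the spirit of the comment above: enumerate every
# candidate solution and keep the ones that work. No "intelligence"
# required, just time.
weights = [3, 7, 12, 5, 8]
target = 15

solutions = [
    combo
    for r in range(1, len(weights) + 1)
    for combo in combinations(weights, r)
    if sum(combo) == target
]
print(solutions)  # [(3, 12), (7, 8), (3, 7, 5)]
```

The catch, of course, is that the search space here is 2**n, which is why "a sufficiently fast computer" is doing all the work in that sentence: enumeration stops being feasible long before the problems we actually care about.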

How do you prevent a malevolent AI from controlling armies of neurally-linked humans?

The proposed idea with a neural lace is that we become the "AI". Any AI advancements would be augmenting our shared thoughts.

If you accept the notion that the internet is our backbone, then we're already a super organism with shared thoughts.

Or to reduce it to a catch phrase: If you can't beat [it], join [it].

Use nanobots to clone bigboss

How long before humans are 0wnz0red?


Congrats to Max and team.

Sounds like an Exocortex: https://en.wikipedia.org/wiki/Exocortex

I'm... more excited about digging tunnels than this. Nothing like cyborg immortality to make our societal issues permanently unsolvable.

On the other hand, it might be easier to get people to care about catastrophic climate change if it goes from "I'll be dead by the time that's an issue" to "I'll be alive to watch my great-grandchildren's homes get swallowed by the sea".

On the third hand, it might be easier just to upgrade your immortal body to survive in low oxygen environment.

Wonder if we'll ever see "Humans as AI-ASICs" type services..? (I mean, apart from the movie "the Matrix".)

Bizarrely, humans weren't used for computation in the Matrix. They were batteries...somehow.

IIRC it started out as computation, but was changed to batteries because they thought cluster computing was too hard to explain in a movie script.

There's the one camp that decries how this flies in the face of basic thermodynamics (human body temperature vs. ambient gives abysmally low Carnot efficiency), and the other camp that claims thermodynamics is just invented by the machines to make stuff work inside the Matrix.

Well, a resting human body generates 4x more heat per volume than the fusion reaction at the core of the Sun.
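That claim roughly checks out on a back-of-the-envelope basis. All inputs below are order-of-magnitude estimates (a ~100 W resting metabolic output, a ~70 L body volume, and the commonly quoted ~275 W/m³ figure for the Sun's core), not precise physiology or astrophysics:

```python
# Rough power-density comparison behind the claim above.
human_power_w = 100.0       # resting metabolic heat output, ~100 W
human_volume_m3 = 0.07      # typical adult body volume, ~70 L
sun_core_w_per_m3 = 275.0   # oft-quoted power density of the Sun's core

human_density = human_power_w / human_volume_m3   # ~1430 W/m^3
ratio = human_density / sun_core_w_per_m3         # ~5x
print(round(human_density), round(ratio, 1))
```

Depending on which estimates you plug in, the ratio lands around 4-5x; the Sun only wins overall because there is an awful lot of it.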


But why didn't the machines just build geothermal generators? I guess it's not much of a story then?

Pretty sure the volume of available humans was way smaller than the volume of the core of the Sun.

Nice try Elon. I've read the Avery Cates series. You aren't "monking" me!

What happens when my brain implants are under DDoS attacks? Or get hacked?

This is great, he always concentrates on the really important problem.

I believe that someday, this guy will save us! He is so amazing!

We will need to prevent malicious brainhacks.

Hello, Cookie.

Am I the only one surprised (and concerned) by how many ventures Musk starts at the same time?


An entrepreneur doesn't have to do the day to day heavy lifting. An entrepreneur hires and works with people who do the heavy lifting.

Why's this guy being downvoted? There are plenty of capable people out there who can run a given Muskventure. He's got enough money to spitball an idea, spend a good bit of time hiring the right people, choosing a CEO, and letting it ride. The only issue is that it seems like his ideas won't make much money - how can you sell Neuralink when the technology doesn't exist to make it feasible?

But give it 15 years, and Neuralink is the only product on the market that allows humans to be relevant in the face of A.I.... Smells like the kind of profits one would have if, for example, they dragged an asteroid to Mars and monopolized water, or weaponized Saturn's orbit and controlled the only port for deep space travel, etc. Seems silly now, but guaranteed there's gonna be people wealthy on a scale we've never seen when these sorts of things come through.

Or we'll all be dead. Whatever.

> He's got enough money to spitball an idea

Does he? He seems to be highly leveraged personally and within his companies.

Since when can you downvote on HN?

Since 500 or so karma.

I am too. He's been successful so far, but it seems to me that at some point, he will be overextending himself, and will start to run into more failures. As a Tesla shareholder, I wish he would focus on making sure that company is successful. Just Tesla, with its car, power, and battery divisions, is a huge project for one person to manage.

Well, if the Neuralink works then it will probably make everything else easier.

Maybe he already has a Neuralink for himself, is a cyborg, and is just doing a parallel construction of it now.

Elon is not the first to take on so many ventures at once. Richard Branson has done it with overwhelming positive results (some losses are bound to happen). It seems that after a certain point, an entrepreneur that is so inclined gets more effective at putting the right processes in place so that they're no longer a bottleneck at the management level.

"...he will be overextending himself..." Perhaps - Robert Browning:

"Ah, but a man's reach should exceed his grasp, Or what's a heaven for?" [1]

[1] https://www.poetryfoundation.org/poems-and-poets/poems/detai...

I've stopped paying attention ever since he spoke of breaking into the tunnels business.

The Boring Company is basically half joke, half crazy idea, and even Musk admits it - just read an interview piece where he shows a TBM to a reporter. He doesn't come off as completely serious about it. If it works out, though... we're talking big business.

Tunnels will be important on Mars.


In my book, tunnels, road construction, mining, and oil & gas are quintessential examples of diehard reactionary industries making snail's-pace progress, with entrenched corruption and basically insurmountable barriers to entry.

I imagine he can get away with a certain amount on name recognition without running the day-to-day of the company himself.

From the WSJ article:

> Mr. Musk has taken an active role setting up the California-based company and may play a significant leadership role, according to people briefed on Neuralink’s plans, a bold step for a father of five who already runs two technologically complex businesses.


The guy has money, an incandescent drive to keep things moving, and connections to everyone of ability or power.

You don't let opportunities slide with those boxes ticked.

