
So back in the 60's, people looked back at the progress over the previous decades, imagined the future, and thought about space. We'd have commercial space flight any day now. The most poignant scene I can think of: in 2001: A Space Odyssey, the character flies on a Pan-Am spaceship to the moon, and then goes into a phone booth to make a phone call.

Fast forward and it turns out that we had been near the top of an s-curve when it came to space tech, but near the bottom of the s-curve of computers, and few people back then were imagining (could imagine?) how different the world would be 50 years later with everyone carrying around internet-connected supercomputers in their pocket.

I think we may be in the same situation today, where people imagine the future and think AI revolution and computational everything, but are mostly missing that we're at the bottom of a biotech s-curve that is going to blow "computer" progress out of the water over the next 50-60 years.

My guess is that in 60 years our computer technology will be largely similar to today, just faster and nicer. But in the same way that the mature industrial revolution made high-precision manufacturing possible which made incredible computers possible, our mature computer technology is now enabling incredible progress in biotech. And the explosion of biotech will lead to mind-blowing changes that are difficult to even imagine.

From this article: no more keyboards / mice? No typing, you can "think" to write. What about recording your own thoughts and then playing them back to yourself later? How much further can that tech go? And there is so much more beyond BCI, we are just understanding the basic building blocks in many areas, but making amazing progress.

I'm excited about it.




Imagine if we could connect your brain directly to a computer. Imagine if you could do things like instantly and precisely recall any Wikipedia article, any news story, any mathematical formula. Imagine if arithmetic goes from a skill you learn to a thing your brain does with 100% accuracy.

Now imagine if your need for speech goes away: why bother using it when you can just “text” from your brain directly to mine and I instantly know what you said without me having to “read” anything. Instant communication. Instant connection to anyone. Instant ads beamed directly into your brain by Google and Facebook.

Now take it a step further: your mind is now a part of a collective globally connected network. The boundary of where “you” exist and where the rest of the world exists is erased. You can feel what other people feel. You can see through the eyes of an Oscar winner, a surgeon, a head of state, a porn star. Police body cams become police mind cams: what was the cop thinking when they took any given action? What we currently have as YouTube celebrities and Instagram influencers become Mindgram stars. You can see and perceive as them.

Now take it a step further. Death isn’t death. Like the paradox of rebuilding a ship one plank at a time, your mind stops existing in your body and occupies a collection of other bodies. Artificial intelligence mixed in with real intelligence mixed in with remnant intelligence. We can’t imagine what this feels like but we are marching towards it getting ever closer every year.

Now take it a step further. People want to get away from this hive mind concept. They disconnect. They play games. They make games where all NPCs are now simulated to the point where they believe they are real. They are here for the benefit of the players but even the players can’t tell the difference when they are in the game.

Now take it a step further. Inside the simulation someone introduces Hard Seltzer. The in game year is 2021 and a player just read that some NPC somewhere had just created a brain/computer interface. He rips off his headset and goes to unplug the computer because fuck this game, all the DLC clearly ruined it.


> Imagine if we could connect your brain directly to a computer. Imagine if you could do things like instantly and precisely recall any Wikipedia article, any news story, any mathematical formula. Imagine if arithmetic goes from a skill you learn to a thing your brain does with 100% accuracy.

You're talking about transferring information FROM computer TO brain. We have no idea how to do it.

Transferring information FROM brain TO computer is achievable with modern tech (and that's what this link shows), but not vice versa.


The awesome thing is that our brain is great at learning how to use and make sense of new inputs. The big one is literacy: you never think about it, but you learned to interpret strange little patterns first as sounds that you hear in your head and then as entire concepts like “cat” or “happy”. The same can be said for spoken language or mathematics or musical notation. I don’t doubt that the human brain will have little trouble learning that X pattern of electrical inputs to a group of neurons means “cat” or the sound of A or even an image of a bird. It won’t come instantly and it won’t be identical to the thing it represents without wiring directly into the visual or auditory regions, but it will give us a new sense and a new language.


It’s not really all that good at taking in new inputs, at least if our brains are anything like other mammals. There have been experiments where animals had certain senses deprived (eyes sewn shut, for example) early in life, and then restored after their brain had finished developing. Their brains never «learned» to take in the new sensory signals afterwards. At least not to the extent of a regular individual with all their senses since birth.


I’d be more interested in studies on people who have been deaf from birth and received cochlear implants later in life.


Yep, famously done with two kittens, one with agency to move and the other with its paws dragging on the ground - the latter was effectively blind and did not react to stimuli because it did not create meaning.


This reminds me of those body modifications where people insert small magnets under the skin of a finger.

After a while a new feeling arises...


At least half the value of learning arithmetic is that it shapes one's neural network in a way that makes it better at certain types of thought. Skipping that learning process presumably skips those physical changes as well.


We could train our "soft" neural networks very efficiently with a computer interface. Maybe not as fast as dedicated software neural networks, but the human mind responds very quickly to feedback loops (sometimes destructively).

Which makes me wonder, what will an overtrained brain look like? What kinds of illnesses are we unleashing on the world by attaching an interface like that directly to the brain?


My pet theory is that anxiety is an overtrained brain reaction.


In a lot of recovery circles there's an underlying concept of "getting out of your head" where the methodology that arises in each circle attempts to get a person to leave the circular thoughts in their heads and do/think something else.

I think this is why psilocybin is so effective for depression: it induces a state of plasticity in the brain that gives someone an opportunity to fill in the ruts they had been mentally pacing in.


Transferring information TO BRAIN is sort of the raison d'être of computers.

This comment FROM brain TO computer, FROM computer TO computer, FROM computer TO your brain.

It's awesome.


I think the point is that this brain computer interface speeds up the from-brain-to-computer part of the interface. The from-computer-to-brain transfer is the same old fashioned way.


We have perfectly good analog inputs, I'd rather we start with improving those rather than open the digital 6th sense box.


How much can you improve the human ear? More importantly, how much can you improve the speed with which you perceive with the human ear and actually understand and retain information from it? You can probably double its efficiency. But can you make it take in information with perfect clarity at 10x the rate? 1000x? A direct interface into the brain could hypothetically bypass the ear entirely.

And there is precedent for this already: we went from pointing and grunting, to speech, to writing, to digital writing, to the web. Imagine what it might have been like for me to convey this message to you if we lived in a hunter-gatherer society before human speech was a thing. Now flip that forward: what specialized tools could we use to speed up communication more?

About the only things we have left are real-time translation devices and AR capable of augmenting what we are looking at with relevant labels and articles. Beyond that we have no place to improve without inventing a radically new way to interface, and in nature you either improve or you die. Nothing stands still and neither will this.


Let's start with some nematodes and work our way up. Would probably need to own a Cyber-Scooby before having some digital inputs implanted.


Yes, that’s true. But I think two-way communication is a goal for a lot of research, and it's whole orders of magnitude more difficult, though likely not impossible.


But much more speculative. Just because we can imagine something doesn't mean science can/will achieve it. Brain-to-computer communication seems much more straightforward to achieve from a technical perspective, and has enough potential to revolutionize aspects of our lives on its own.


Actually, and fortunately, it is impossible. This sort of brain model was preemptively debunked by Kant in Critique of Pure Reason and other works.


I think it's pretty hard to argue that any philosophical work, regardless of how important or impactful or insightful it is, can "debunk" anything.


> Imagine if we could connect your brain directly to a computer.

Please, no. I’d just get even more frustrated at how slow the damn thing is.


I'd argue we already connected ourselves to computers, and we're just using the safest but slowest adapters available right now.


We could argue about "safest", but it's definitely the compromise between speed, ease of use and safety that I'm the most comfortable with.


I'm curious, what would a solution look like that's even safer than a touch interface (mouse, keyboard, screen) or voice?


Well, removing audio and images, leaving only text, would make it safer.

I, and probably many others, wouldn’t have stumbled upon some of the things I have. They thankfully are now only blurry memories to me, even though merely evoking them still is nauseating.

It would also dramatically reduce the impact of much of the bullshit content out there, since words appear to have much less emotional impact than images (and appear to be much less appealing), and thus be safer to society as a whole.

A text-only interface would also be much less useful and much more annoying to use.


> since words appear to have much less emotional impact than images

To some. My imagination is quite good and as a child I consumed vast quantities of print media. A lot of it wasn't appropriate for a child to consume.


Sure. It can already be a lot. Some of my worst nightmares come from my own imagination.

But we’re talking safer, not absolutely, perfectly safest.

Also, how would you compare that to the close-up picture of an Iraqi soldier burned to a crisp by a Hellfire missile, hanging out of the carcass of his scorched vehicle, staring straight into your eyes with his charred, empty eye sockets?

Or suddenly being confronted, at 12 years old, by dozens of neatly aligned child pornography thumbnails after clicking on some unclearly named link when exploring I2P, FreeNet and TOR because "decentralised", "anonymous" and "censorship-resistant" networks seemed "cool"?

There’s a reason crap people’s magazines are full of pictures, paparazzi photos can be worth what they are, and websites with a photo of a smiling human (and a clear CTA) convert better than those without.

¯\_(ツ)_/¯


Same here, as evidenced one rainy Saturday afternoon when the family were all together after lunch: grandparents drinking tea, me cross-legged on the floor reading, when I innocently asked the room “hey, what does cunt mean?”

Grandma turned puce and dad snatched the book from me, wtf are you reading?

James Herbert’s Rats trilogy. Aged 8.

Warped ever since.

Text is plenty.


I couldn't understand why my parents didn't like me calling my brother an orgasm after I had read the term in a book and liked the sound of it :)


There is so much about it that I think would be wrong, difficult, bad and we as a species can’t even imagine what it would be like. How do you install an ad blocker on an interface like that?


Very good question.

Regarding the ad blocker, one thing I definitely can’t wait for are true AR glasses that could act as a real-life ad blocker.

Being out in the street doesn’t mean I have in any way agreed to being constantly drowned in and have my attention stolen away by all this bloody noise.


The notion of a neurally connected Facebook or Google scares the heck out of me. Apart from the countless petabytes of data they would collect, just imagine a world where the powers that be can actually tap into your thoughts, perhaps even implant ideas that they think you should have. At a certain point, we lose our individuality and become subsumed by the global AI, game over.

But before that distant dystopian point is reached, I do hope we develop ways for paralyzed people to regain sensory control and live normal lives.


> At a certain point, we lose our individuality and become subsumed by the global AI, game over.

In a way, we already have. Each and every one of us is constantly influenced by and influencing untold numbers of people, and most beliefs and knowledge are more or less "standardised".

Most people follow the school -> (college ->) 9 to 5 -> retire consensus, and even those who believe themselves to be outliers actually behave how outliers are expected to behave, all of us furthering the goals set by others, some of whom have even died long ago.

Actual individuality is quite rare and usually expressed at a very small scale.


I think your identification of individuality with a measure of uniqueness is a mistake.

We're not individuals apart from others, the others are presupposed. The I is an abstraction in the sense that it presupposes social terms to understand itself. You need a reference, like culture, to be correctly understood as "alternative" (although 'peripheral' in terms of some specific aspects is more correct).

If you're not in a community at all, you're not going to reproduce.

Actual individuality is merely recognizing the exercised autonomy by an agent. You are still an individual even when you behave according to existing mores you did not create.

(The extra social esteem bestowed to relative difference is a cultural trend and a historical phenomenon. It does not determine our species, only our current conditions and predicaments.)


I fail to see how your arguments contradict my words.

Each ant constantly exercises it’s autonomy. Would you nonetheless argue that it has any individuality in the way the gp intended the word?


I make two points that "contradict" here:

Primarily, I argue that we're a social species and cannot employ individuality without a culture and community. This contradicts any notion of being artificially programmed by a deus ex machina. On the contrary, we program and reprogram each other reciprocally and continuously. A strong AI shaping of human kind would be parasitic to existing practice, not a new thing. Kind of like how propaganda presupposes a practice of telling the truth.

Second, and perhaps wrongly so, I address the seeming notion of difference being a moral good; a sentiment I interpret in your final sentence. I might very well have misunderstood it.

Being different is not good, in and of itself. Striving to be different seems to be a recipe for unhappiness, because the internal resources cannot explode the context without a major risk of ending up in social vacuity, where your actions are systemically misinterpreted and the individual misinterprets him- or herself on the rebound.

The so-called historical individual in Hegel, Nietzsche's grand, most-free-of-all superman, relies on contingent reality to succeed; in other words, the successful "being different" is a harmony of the individual's actions and intentions with existing but as of yet unexpressed social trends. Again, individuality presupposes sociality, completely opposing the view that "true individuality" is so rare.

It's unfair to the ant to talk about them in human terms, and I am not an expert in ants. But a human being expressing her taste is an expression of her individuality, even though it's exactly the same thing "everyone else" think as well. What matters is that it expresses her as an individual, not that it differs from the majority.


How would individuality be expressed at large scale, anyhow? Funny thought.


By figuring and trying out new ways of being instead of seeking to conform to archetypes.

By trying new ways out of recurring situations and creating new behaviours, instead of either repeating the same habits or trying to adopt someone else’s response to them.

At an individual’s scale, those would be large and have big impacts.

Much more so than the colour of my living room’s wall, my type of car or defining myself by wearing either shirts or T-shirts and trying to impose on everyone else what I consider to be professional or unprofessional.


What's so dystopian about that?


>Imagine if you could do things like instantly and precisely recall any Wikipedia article, any news story, any mathematical formula. Imagine if arithmetic goes from a skill you learn to a thing your brain does with 100% accuracy.

I don't think this will be that big of an advantage. I know the first 20 digits of PI off the top of my head, but if you ask me to do something with this information it will still take some time. Having to look up the first 20 digits online would probably be a small portion of that time.

You can have instant access to information, but processing can still take time. I think the arithmetic part is actually more interesting. Being able to look at a table of numbers and running a statistical analysis on it would be very useful.

At the same time though, I think all of this is way harder than we think. Look at programming via voice. The current best UI for it is still pretty lackluster.


> You can have instant access to information, but processing can still take time.

You likely have plenty of processing power in your pocket already. With efficient brain-to-computer and computer-to-brain interfaces, the processing part could be taken care of too; just externalized.

We would probably end up being a lot _worse_ at things like arithmetic though, since we'd use external processing for even more things (I do believe we're already losing ability in things like memory and simple processing by having access to a smartphone at all times).


> He rips off his headset and goes to unplug the computer because fuck this game, all the DLC clearly ruined it.

> The most poignant scene I can think of: in 2001 A Space Odyssey, the character flies on a Pan-Am spaceship to the moon, and then goes into a phone booth to make a phone call.

Interesting that you commit the same fallacy as the parent talks about: you talk about all this complexity in biotech but then assume that there's going to be a headset with a computer in order to connect to the simulation, rather than it being directly implanted into one's brain.


I added that for a bit of color :)


eXistenZ (1999 film) was a mindfuck when I saw it.

But yeah, I think we're getting closer and closer to a true hivemind. It would have to suppress the individual personality, otherwise a lot of people will likely go insane.

Of course, it could be that's acceptable losses or they're cut off from the "advanced civilization" and left to live somewhere far from the cities.

That is, of course, if half the planet isn't flooded and turned into desert by then.


Stephen Baxter's book Coalescent shows how a human hive mind could form via a set of circumstances that could be enforced or simply arise due to the environment. His "eusocial evolution" is a pretty damn creepy idea!

https://en.wikipedia.org/wiki/Coalescent


> Now take it a step further. Inside the simulation someone introduces Hard Seltzer. The in game year is 2021 and a player just read that some NPC somewhere had just created a brain/computer interface. He rips off his headset and goes to unplug the computer because fuck this game, all the DLC clearly ruined it.

Lmao. Hard seltzer isn’t that bad


It’s proof we are in a simulation. What else but a random item generator could have come up with it?


Other countries had it for years. It's called alcopop in the UK and chuhai in Japan (except theirs goes up to 9%.)

"Alcopop" always sounded like a fake word to me but they also insist on calling vaccines "jabs".


This reads like an Asimov story! He really did have some very well informed predictions that seem accurate even nowadays.


> Now take it a step further. Death isn’t death. Like the paradox of rebuilding a ship one plank at a time, your mind stops existing in your body and occupies a collection of other bodies.

Douglas Hofstadter talks about this in "I am a Strange Loop" [1], but he argues that our 'soul fragments', as he calls them, are a representation of ourselves in others. Depending on how large a fragment of them we hold in our brain, we can perceive the world as they do, and think as the other person. They get to experience the world through us, in a sense, given that we 'allow them to'.

It is an interesting idea, and helps reconcile the death of our loved ones.

[1] https://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp...


Great post, I enjoyed reading it. To take these ideas further, Permutation City, by Greg Egan.

https://en.m.wikipedia.org/wiki/Permutation_City


I read Permutation City recently and found it in line with what we talk about now, which is so impressive for its time (1994). From OP:

> Now take it a step further: your mind is now a part of a collective globally connected network. The boundary of where “you” exist and where the rest of the world exists is erased. You can feel what other people feel.

This reminded me more of a book I've read online [1]. The synopsis doesn't do it justice, but shows the concept:

> All minds are networked together. Everyone collectively votes to punish or promote anyone for the slightest thought. Cater to your orbiters, and they'll vote to give you candy or slaves or even governorship of a metropolis. Challenge or defy your orbiters and ... well, good luck with that. You'll be down-voted until you die.

[1] https://www.wattpad.com/story/84753330-city-of-slaves-sff-co...


> Now take it a step further. People want to get away from this hive mind concept.

Why would they?

All the preceding paragraphs sound like Borg collective, but hey, if it's voluntary, it actually doesn't sound bad. As long as we can keep adtech away.


Yeah... if it's voluntary.

Working is voluntary.

Sure, you want to move to a farm far away from all the madness or you want to sit in a workshop designing robots all day, but you need money, so you "volunteer" to work some job you barely tolerate in or near a city for all of your best years.

The ads are just constant slaps in the face.


You should check out the Nexus series :)


Or watch Forbidden Planet - "Monsters from the Id"


> Now take it a step further: your mind is now a part of a collective globally connected network.

I think you just described the Borg from Star Trek


> we had been near the top of an s-curve when it came to space tech

Near the top of an s-curve in getting-to-space (“heavy lift”) tech, more like.

I’d say the field of actual in-space tech (i.e. technology that takes advantage of low-gravity / low-pressure / low-oxygen environments) is still pretty nascent. We still treat space as “Earth, minus some guarantees” (i.e. something we have to harden for) rather than doing much with the unique benefits of space.

It’ll probably take having a long-term industrial base operating from space to see much change there, though.

Imagine, for example, living on a space station, and having your food cooked using cooking techniques that assume cheaply-available vacuum and/or anoxic environments. :)


Yeah I agree, there are "generations" of technology, and I think that people in the 60's looked at the progress of transportation tech from 1910 to 1960 and thought "at this pace we'll all be zipping around the solar system like it's nothing by the 2000s." It was not easy to form an intuition of why the first generation of space tech was going to hit physics-imposed limits that would "slow that progress."

To be fair we still made lots of progress with space tech after the 60's, and I think via SpaceX and others we are hopefully now starting a new S-curve unlocked by cheaper access to LEO.


60s space tech also hit a lot of limitations of computers of the time. SpaceX's Super Heavy is in some ways a reimagining of N1's first stage, but with control software that makes it viable to deal with engine failures in flight.

But a big problem is also that the progress in space tech wasn't organic. There was no economic incentive; it was driven purely by propaganda, national pride and political goals. Once that fell away, it took half a century for economic use cases to catch up to a point where private investment was viable.


An interesting thing to think about is how people used to think of going somewhere. Now we think of things coming to us. In a way, we did achieve the "zipping around", it's just that we did it via the internet and wireless communication. Of course, it is not the same, but it is similar.


Space tech might be considerably further along today, had we in the U.S. not limited our R&D after 1969. A reusable shuttle that cost over $1B per flight was interesting innovation in 1981, but it actually represented a dead end for the U.S. rather than the beginning of a new era of exploration.


Once the moon mission was accomplished, we lacked a clear target on which to stay focused. Build cool things, but what for?

Researchers could concoct all sorts of narratives, but it had lost the spark that held the layman's attention and permitted the spending of political capital.


From what I gather from the era the next goals were pretty clear, even to the general public: "go to Mars" (or in the Soviet case "go to Venus"), and then go to Alpha Centauri.

If the Soviets had won the race to the moon, this might even have happened. But instead the Soviets decided to focus on space stations, and the US declared themselves the winner and did largely nothing (by rejecting NASA's Space Transportation System proposal, which was also about a space station and a way to get there cheaply).


The space shuttle was a bad system for a number of political reasons. Its per-launch cost was comparable to developing new heavy-lift vehicles or new missions to the outer planets.

Effectively NASA was spending their R&D money on opex and not getting any political capital back for it. Tragically, the program had been set up with the promise of a space truck; if NASA had admitted they didn’t deliver a space truck, Congress would have been unlikely to fund a new program. Had the budget been allocated entirely to either technological development or novel exploration, America’s willingness to fund NASA could have been substantially different.


"food cooked using cooking techniques that assume cheaply-available vacuum " - what would those be?

Vacuum isn't something that's hard to get if you need it; all you need is a motor driving a pump. So if any industrial food process or one of the fancy restaurant chefs had a good use for it, they would already be using vacuum in cooking. A kitchen vacuum sealer is <$100 (I'm assuming that would count as "cheaply available"), and it's not particularly useful for most other cooking purposes that come to mind.


> cooking techniques that assume cheaply-available vacuum

That also assumes that the atmosphere you're venting for that "cheap vacuum" is cheap as well.


You don't have to vent. You can just compress into storage.


So to get your cheap vacuum you first have to make a vacuum in a chamber by sucking the air out before opening it to space? You can just skip the open-it-to-space part and do it on Earth!


While I agree that we will have mind blowing biotech improvements in the next 50 to 60 years, I don’t believe it’s physically possible for biotech progress to be as mind-blowing as what happened in computer tech.

In the last 60 years, computers have gone from $160 billion/GFLOP to $0.03/GFLOP; transistors are now smaller and faster than synapses by the same factors that wolves are smaller and faster than hills, and the sort of computing tech that was literally SciFi in the 60s — self-teaching universal translators, voice interfaces, facial recognition — is now fairly standard.

60 years of biotech? If the next time I wake is after 60 years in a cryonics chamber[0] and I was told every disease was now cured, that every organ could be printed on demand, that adult genetic modification was fine and furries could get whatever gene-modded body they wanted up to and including elephant-mass fire-breathing flying dragon, and that full brain uploading and/or neural laces a-la The Culture were standard, I would believe it. But if they told me biological immortality was solved (as opposed to mind upload followed by download into a freshly bioprinted body with a blank brain) I’d doubt the year was really only 2081 — not all research can be done in silicon, some has to be done in-vivo, and immortality would be one of them.

[0] this would be very surprising as I’ve not signed up for it, but for the sake of example


If we have full-on adult genetic modification capable of the, ah, dramatic example you provide, we’ve certainly figured out a way to get around in-vivo test difficulties. For better or worse, any biomedical advance comes up against that problem sooner or later.

Therapies to slow aging have the particular problem that it could intrinsically take decades to show an effect, sure- but that’s simply reason to be a bit more ambitious and aim for therapies to reverse aging, which could be tested rapidly in already-old patients. :)


> But if they told me biological immortality was solved (as opposed to mind upload followed by download into a freshly bioprinted body with a blank brain) I’d doubt the year was really only 2081 — not all research can be done in silicon, some has to be done in-vivo, and immortality would be one of them.

But why? Biological immortality is already here for some animals, like certain jellyfish. Honestly it seems closer to me than uploading.


Because 60 years isn’t enough time to tell if you were completely correct, or if there was something you missed.


Depends on if the tech is something you do to an embryo and have to wait for, or if it's something you can do to someone who is already alive.

If you apply the process to someone who is already 100 and they hit 160 with no further degradation, it would be, at the very least, a pretty good indicator your process had worked.


I thought so too, until I started my PhD back in 2017. The topic was the application of machine learning to neurosignal decoding.

We are absolutely at the top of the s curve in this field. There has been no real progress in two decades. The AI boom has just made things worse by redirecting all of the grant money away from better sensors (which could get results in certain niche applications) and towards new algorithms, which have hit an accuracy ceiling because the data their human operators feed into them is complete shit. The field is so beyond saving that an algorithm that achieves slightly better performance on a single old dataset due to sheer chance is presented as a breakthrough.

Data rates would be in the tens of bits per minute with error rates of 20% on a good day and no feasible plan to improve either of these woeful specs.

I'll be dead before we have a BCI with enough bandwidth that anyone but a dying MND patient would want to get one installed.
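
To put "tens of bits per minute" in context: the standard way the field quantifies this is Wolpaw's information transfer rate, i.e. bits conveyed per selection given N possible targets and an accuracy p. A rough sketch with purely illustrative numbers (a 26-target speller, not any particular dataset):

    -- Wolpaw information transfer rate: bits conveyed per selection
    -- for a BCI with n possible targets and accuracy p.
    itr :: Double -> Double -> Double
    itr n p = logBase 2 n
            + p * logBase 2 p
            + (1 - p) * logBase 2 ((1 - p) / (n - 1))

    main :: IO ()
    main = do
      let bitsPerSelection = itr 26 0.8   -- 26 targets, 80% accuracy
      print bitsPerSelection              -- roughly 3 bits per selection
      print (bitsPerSelection * 12)       -- one selection every 5 s: ~37 bits/minute

Even with fairly generous assumptions, a setup like that lands in the tens of bits per minute, which is the point.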


I agree with you, and I'd say that we are perhaps in the bottom, or the middle of the S curve in software. Despite all the technological progress in hardware, our software is very slow and buggy. We end up increasing complexity in the name of 'productivity' and going up in abstractions, but we end up lacking fine-grained control and performance in modern systems.

I hope that we see a paradigm shift back towards writing robust and performant systems instead of stacking abstractions. Sure, Monads and Transformers are all fun to use, make code concise and are very satisfying when they compose well, but what's the hidden cost, and is it worth it?

As a user who encounters bugs at a disproportionately high rate, I'd say no. The gains from increasing abstraction are not worth the trade-off.


I think this has more to do with management timeline expectations and income valuation than with software development. I do understand that they're almost inseparable, as software has to make money. But timelines need to take into account the "craft" of software creation and not just the desk hours, for lack of better terms =/ Short timelines and quick turnarounds don't leave time for refactoring and quality code creation. First passes tend to be the final draft, more often than not.


This issue, however, exists in more than just commercial software. Personal example: OSes feel bloated. I switched from Big Sur to Mojave on my 2018 MBP and saw significant improvements in responsiveness with virtually no change in usage behavior. Even software whose supposed features are stability and speed feels bloated, e.g. weird bugs in Unity, and Photoshop is sluggish without any significant increase in visible features.


It makes sense though: the cost of hardware per unit of performance has nosedived, whereas the cost of human labor has probably gone up. Why pay devs to write fast code when you can buy faster computers, faster and cheaper?

I wonder when the trend reverses. It must, at some point, mustn't it?


> Why pay devs to write fast code when you can buy faster computers faster and cheaper?

Because you can do more in the same time. Speed and responsiveness are features, the issue is, the general population has come to accept that bugs are not only acceptable, but just that, 'bugs' that you can shoo away by restarting the machine/program.

> I wonder when the trend reverses. It must, at some point, mustn't it?

I hope so. As we have seen with Spectre and now AMD's equivalent, speculative execution is risky and very complex to get right. We can't rely on ever-increasing complexity in CPUs and fabrication processes; at some point quantum mechanics will bite back.


Your experience of using Haskell is that it causes more bugs than less abstract languages?


No, on the contrary, my experience with Haskell is that my code is mostly bug-free, but ends up less performant because you can accidentally create huge thunks in the heap, and it consumes too much mental stamina.

However, there exists an intermediate plane of abstraction over C and under Haskell that is absolutely horrendous and results in all sorts of weird bugs and unpredictable situations.
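
For anyone wondering what I mean by huge thunks: the classic example is lazy foldl, which builds a long chain of unevaluated (+) applications, versus the strict foldl', which forces the accumulator at each step. A minimal sketch:

    import Data.List (foldl')

    -- Lazy foldl piles up unevaluated (+) thunks before anything is
    -- forced; on a large list this can blow up the heap.
    lazySum :: [Int] -> Int
    lazySum = foldl (+) 0

    -- foldl' forces the accumulator at every step, so it runs in
    -- constant space.
    strictSum :: [Int] -> Int
    strictSum = foldl' (+) 0

    main :: IO ()
    main = print (strictSum [1 .. 10000000])

Nothing in the types warns you which one you wrote, which is exactly the mental-stamina tax I mean.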


I've heard people say that Haskell would be better with eager rather than lazy evaluation, because of the mental burden that it causes. IMO that doesn't seem like a hard problem to solve. We can design pure functional languages with eager evaluation.


Haskell would be better if it used polarity and focusing to make both strict and lazy evaluation first-class citizens in the language. A strictly-evaluated counterpart to Haskell is just ML, which we've had since the 1970s.


I've coded in OCaml, which wasn't pure immutable, but rather immutable by default. Because it has mutability, that removes the focus on immutable data structures, making it a very different language.


You can make any module strict by using the Strict language pragma (-XStrict) in Haskell.
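
Something like this, if I remember the extension right (GHC 8.0+):

    {-# LANGUAGE Strict #-}
    -- With the Strict pragma, bindings and pattern matches in this
    -- module are strict by default, so the accumulator below is forced
    -- on every iteration instead of piling up thunks.
    module StrictSum where

    sumTo :: Int -> Int
    sumTo n = go 0 1
      where
        go acc i
          | i > n     = acc
          | otherwise = go (acc + i) (i + 1)

There's also StrictData if you only want strict fields on your datatypes without changing how the rest of the module evaluates.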


What about Rust?


It solves memory problems and certainly doesn't shoot your foot with destructors, weird moves and so on like C++ does; it lacks a garbage collector, which imho is a big plus (though, as C# and Java show, it is not necessarily a big problem); and I prefer the trait system over C++'s virtual classes/interfaces. I also like that the compiler is very helpful. It introduces some friction while writing, but I suppose that is my lack of expertise.


With the track records we have of power and technological abuses, I'm not sure I'm excited about having a direct interface to my head.

In fact, I would be more excited about IRL laws tuning down what some are doing with the current indirect interfaces to my head, such as fake news, propaganda, advertising and manipulations of all sorts.


Pretty much. I'm less excited. Because I was super excited 50 years ago when we were at the bottom of the S-curve on computers and I had no idea they would eventually be commandeered to rip off my parents. I think of it as a "wider" view of the impacts vs a "narrow" view of the benefits.

For me, the nagging question is what happens when biotech has figured out biological systems to the point that everyone stops aging/dying[1]. Do those 10 billion people, give or take, become "the humans" for the rest of time?

[1] Lots of evidence that there is no "reason" for cell senescence, it's just an evolutionary afterthought (you succeeded in reproducing, now go die) and like other things can be "fixed."


I wish I could find the source, but when I've looked into this in the past I was reasonably convinced that the population would go up a bit, but not catastrophically, with the basic idea being that people die all the time for lots of reasons, only one of which is "aging" (aging is multiple things, blah blah). So people's lifespans would be much longer on average, but not infinitely long. Something more like 300 or 400 years, with a pretty big standard deviation.


Is not dying even a good idea? It might be the best way to create a stable system at society level - crash-only computing works pretty well in practice compared to never-crashing systems like Lisps.

Main problem is we take too long to build a replacement person.


> but near the bottom of the s-curve of computers

I think the most interesting exemplar of this is Star Trek. As everyone knows, Star Trek is based on 19th century naval warfare. The battle scenes are hilarious-- a captain calling out orders at human speeds to his crew that executes them.

It's been obvious for 40 years that computers would do the fighting, but in 1966 it wasn't obvious, so the paradigm was Horatio Hornblower.


Star Trek is not heavy on fighting anyhow, so these battles are plot devices... how would a computer executing and finishing a battle even before humans figure out that something is happening help with the plot...

In the same vein, why would strikes on a ship result in sparkles on the bridge...


We were also at the bottom of the s-curve of social malware and adtech, which turned out to the real outcome of commoditised computing.

The rhetoric of personal creativity and freedom was flattened by something far less interesting and more toxic.

I wouldn't be so keen to rush headlong into a bioware-connected world until that problem is solved.


This indeed is also my main fear. All the nefarious, intrusive, methods of advertising will get a hundred times worse, but now be directly beamed into our brains. Our hypercapitalist economy will only accelerate and exacerbate this.


> What about recording your own thoughts and then playing them back to yourself later?

More realistic is that your thoughts will be listened to by police in real time, plus you could transmit them to other people through Instagram. Amazon will be doing food delivery from app to mouth, and people will largely not be moving much, as food will be wholly automated end to end and any hobbies will be done via VR goggles. There will also be contraceptives controlled by the ministry of reproduction, which will work or not based on the couple's social score. edit: I forgot - exposure is going to be a real currency.


I guess I don’t see what is supposed to drive the S curve in biotechnology over the next 60 years. Advanced information technology is a necessary but not a sufficient condition for biotech mastery, and the other conditions are just not in place.


Just to name one thing, I think CRISPR is likely to be seen as fundamental a technological building block as the transistor.


Yeah CRISPR-Cas9 is an incredible breakthrough. I’m just not sure the regulatory environment is geared up for exponential growth, and my assertion is that reverse engineering biology will be a much steeper climb than the history of transistors or rocketry.

I guess my question to you would be, if we were to take a bet on it, what metrics would we use to know who was right? Biotech patents? In the case of rockets and chips we had thrust and transistor density respectively. I’m not sure how easy it would be to quantify progress, though I have a sense of what some of the tangible applications will be when they show up. Maybe mortality rate would be a good measure? Or human population size (a proxy for mortality + food production)?


> What about recording your own thoughts and then playing them back to yourself later?

Wet dream of surveillance tech.

50 years ago, if you had suggested that someday all your social connections would be instantly available to all serious secret services as well as big corps, people would have laughed at you. You'd be the negative guy, against progress. 1984 fans were the paranoid tinfoil hat guys. Today it's reality. And only a few actually care.

You don't want tech to automatically transcribe your thoughts. That's the only domain today where real privacy still exists.

There is a nice Black Mirror episode about this.


I think the big revolutions are going to be in biology - fast supercomputers allow us to make advances we couldn't have made any other way.

Computers are an enabling technology for basically every other advancement - in fact it's hard to imagine breakthroughs at this point that don't involve computers in some way - even 'just' as a tool for collaboration.


'Computers' are more like, industrial consciousness, though. What happens when we can grow brain cells on demand? (and support their function). Imagine upgrading yourself like you upgrade your computer. More brain. Less sleep. More hands. Armored skin. Photosynthesis.

Imagine you're a shapeshifter. You can copy any aspect of any living organism on Earth, integrate tech directly into your body, and the smartest people are coming up with new useful things to add.


Aaaand it's all banned. Enhancing your performance? Heresy!

That's assuming such research is allowed in the first place. Humans are innately repulsed by biology, by organisms. Imagine an arm with its skin ripped open to the bone, gushing blood everywhere. Imagine your guts hanging out from your belly.

And the ethical/moral rules we built over thousands of years will not allow the majority to sit by and watch some atrocious (from their POV) experiments.


So, humanity becomes the borg?


Also, taking economics to a new level: once we have got past the scarce-resource/pollution-based short-term thinking, and it is no longer viable to have a volatile, profit-structured economy, there is no longer a need for the profit motive. Supercomputing could easily take up the task of organising and fairly structuring the economy.


> So back in the 60's, people looked back at the progress over the previous decades, imagined the future, and thought about space. We'd have commercial space flight any day now.

We do have commercial space flight. Commercial space flight has exploded over the past few decades. The sky is full of communication satellites, imaging satellites, sensor satellites, even the occasional vanity satellite.

Everything worth doing in space, we're doing.

What we don't have is things that aren't worth doing in space, like Pan-am flights to the moon.


"What about recording your own thoughts and then playing them back to yourself later?"

Oh hell nah, there's enough craziness in real time, I sure as hell don't need to replay that.


> Fast forward and it turns out that we had been near the top of an s-curve when it came to space tech

It’s true that the types of rocket engines we have today are not that much more advanced than the ones we made 50 years ago. But rocket motors have never been mass produced, and we haven’t poured billions of dollars into research (let alone commercial production). We haven’t had the decades of improvement that came with integrated circuits, but we haven’t done the decades worth of R&D either.


That's actually a good prediction. Biotech is an untapped field for moonshot-scale breakthroughs; even the current mRNA vaccines are only a small part of what will be possible in 10-15 years.


I think this is why the grand-parent is perhaps making an error in thinking that e.g. space travel or computing are silos unto themselves.

Having worked in wet labs and deep learning labs, I think we've a lot to gain from increasing our ability to simulate experiments in silico and automate biological processes.

A lot of the room for improvement has been carved out by improvements in machine learning.


I agree with you :)

> But in the same way that the mature industrial revolution made high-precision manufacturing possible which made incredible computers possible, our mature computer technology is now enabling incredible progress in biotech.


True, I missed that!


There’s a startup called MindPortal (YC W21) that seems focused on using tech to review thoughts and control things:

https://www.ycombinator.com/companies/mindportal


I agree.

Your comment made me think of Natalie Wood's last movie: "Brainstorm."

https://en.wikipedia.org/wiki/Brainstorm_(1983_film)


> From this article: no more keyboards / mice? No typing, you can "think" to write. What about recording your own thoughts and then playing them back to yourself later? How much further can that tech go? And there is so much more beyond BCI, we are just understanding the basic building blocks in many areas, but making amazing progress.

While this itself is certainly an interesting concept, I'm worried at its consequences when implemented in our hypercapitalist economy: We'll almost certainly, along with this incredible interaction technology, have advertising beamed directly into our consciousness or something similarly intrusive. It's honestly terrifying how much worse intrusive tracking and advertising would get with this technology.


The Mood Organ from Do Androids Dream of Electric Sheep


We have something today that's 3-4x faster than pecking on a smartphone keyboard: voice-to-text.


I extremely doubt that; I type faster with predictive text than I speak.


Entirely depends on the individual. With a virtual on-screen keyboard, I can rarely type even one word without error. It's like my fingertips are just too big to hit the keys accurately. Swipe-keying is somewhat better/faster but I'm much better with real physical keys. Speech-to-text used to be pretty bad but with my current phone it's better than typing, for me. The downside is I hate talking to computers.


Have you used SwiftKey? I find it corrects 99% of my errors, to the point where I just press keys in the vicinity of what I want to type and it comes out correct.


I've used talon voice for software development as a replacement for typing a while back due to an arm injury. It ain't faster than typing.


But it's only 90% accurate, and that remaining 10% makes it not accurate enough.


but it is improving, and will be good enough long before there's a viable brain-computer interface.


I see it the other way:

We’ve had a revolutionary S-curve with computing/artificial reasoning in inventing transistors — but we know we’re still at the bottom of two related S-curves, quantum computing (an exponential increase in many problems of interest) and IOT/smart systems where our automated reasoning is embodied in something. We know somewhere up those curves lies the ability to make new kinds of minds.

I think both of those will prove to be bigger than bio-science... and moreover, bio-science will require them to a) do the experiments and b) find uses for the technologies.

I think human augmentation will turn out to be like spaceflight: humans are near the top of their S-curve already.

Instead, I think biology research won’t come into its own until AGI research does and we have an idea of how to make new biological systems.

Of course, that might kill us all. Horribly.




