Fast forward and it turns out that we had been near the top of an s-curve when it came to space tech, but near the bottom of the s-curve of computers, and few people back then were imagining (could imagine?) how different the world would be 50 years later with everyone carrying around internet-connected supercomputers in their pocket.
I think we may be in the same situation today, where people imagine the future and think AI revolution and computational everything, but are mostly missing that we're at the bottom of a biotech s-curve that is going to blow "computer" progress out of the water over the next 50-60 years.
My guess is that in 60 years our computer technology will be largely similar to today, just faster and nicer. But in the same way that the mature industrial revolution made high-precision manufacturing possible which made incredible computers possible, our mature computer technology is now enabling incredible progress in biotech. And the explosion of biotech will lead to mind-blowing changes that are difficult to even imagine.
From this article: no more keyboards / mice? No typing, you can "think" to write. What about recording your own thoughts and then playing them back to yourself later? How much further can that tech go? And there is so much more beyond BCI, we are just understanding the basic building blocks in many areas, but making amazing progress.
I'm excited about it.
Now imagine if your need for speech goes away: why bother using it when you can just “text” from your brain directly to mine and I instantly know what you said without me having to “read” anything. Instant communication. Instant connection to anyone. Instant ads beamed directly into your brain by Google and Facebook.
Now take it a step further: your mind is now a part of a collective globally connected network. The boundary of where “you” exist and where the rest of the world exists is erased. You can feel what other people feel. You can see through the eyes of an Oscar winner, a surgeon, a head of state, a porn star. Police body cams become police mind cams: what was the cop thinking when they took any given action? What we currently have as YouTube celebrities and Instagram influencers become Mindgram stars. You can see and perceive as them.
Now take it a step further. Death isn’t death. Like the paradox of rebuilding a ship one plank at a time, your mind stops existing in your body and occupies a collection of other bodies. Artificial intelligence mixed in with real intelligence mixed in with remnant intelligence. We can’t imagine what this feels like but we are marching towards it getting ever closer every year.
Now take it a step further. People want to get away from this hive mind concept. They disconnect. They play games. They make games where all NPCs are now simulated to the point where they believe they are real. They are here for the benefit of the players but even the players can’t tell the difference when they are in the game.
Now take it a step further. Inside the simulation someone introduces Hard Seltzer. The in game year is 2021 and a player just read that some NPC somewhere had just created a brain/computer interface. He rips off his headset and goes to unplug the computer because fuck this game, all the DLC clearly ruined it.
You're talking about transferring information FROM computer TO brain. We have no idea how to do it.
Transferring information FROM brain TO computer is achievable with modern tech (and that's what this link shows), but not vice versa.
After a while a new feeling arises...
Which makes me wonder, what will an overtrained brain look like? What kinds of illnesses are we unleashing on the world by attaching an interface like that directly to the brain?
I think this is why psilocybin is so effective for depression: it induces a state of plasticity in the brain that gives someone an opportunity to fill in the ruts they had been mentally pacing in.
This comment FROM brain TO computer, FROM computer TO computer, FROM computer TO your brain.
Please, no. I’d just get even more frustrated at how slow the damn thing is.
I, and probably many others, wouldn’t have stumbled upon some of the things I have. They thankfully are now only blurry memories to me, even though merely evoking them still is nauseating.
It would also dramatically reduce the impact of much of the bullshit content out there, since words appear to have much less emotional impact than images (and appear to be much less appealing), and thus be safer to society as a whole.
A text-only interface would also be much less useful and much more annoying to use.
To some. My imagination is quite good and as a child I consumed vast quantities of print media. A lot of it wasn't appropriate for a child to consume.
But we’re talking safer, not absolutely, perfectly safest.
Also, how would you compare that to the close-up picture of an Iraqi soldier burned to a crisp by a Hellfire missile, hanging out of the carcass of his scorched vehicle, staring straight into your eyes with his charred, empty eye sockets?
Or suddenly being confronted, at 12 years old, by dozens of neatly aligned child pornography thumbnails after clicking on some unclearly named link when exploring I2P, FreeNet and TOR because "decentralised", "anonymous" and "censorship-resistant" networks seemed "cool"?
There’s a reason crap people’s magazines are full of pictures, paparazzi photos can be worth what they are, and websites with a photo of a smiling human (and a clear CTA) convert better than those without.
Grandma turned puce and dad snatched the book from me, wtf are you reading?
James Herbert’s Rats trilogy. Aged 8.
Warped ever since.
Text is plenty.
Regarding the ad blocker, one thing I definitely can’t wait for is true AR glasses that could act as a real-life ad blocker.
Being out in the street doesn’t mean I have in any way agreed to be constantly drowned in all this bloody noise and have my attention stolen away by it.
But before that distant dystopian point is reached, I do hope we develop ways for paralyzed people to regain sensory control and live normal lives.
In a way, we already have. Each and every one of us is constantly influenced by and influencing untold numbers of people, and most beliefs and knowledge are more or less "standardised".
Most people follow the school -> (college ->) 9 to 5 -> retire consensus, and even those who believe themselves to be outliers actually behave how outliers are expected to behave, all of us furthering goals set by others, some of whom died long ago.
Actual individuality is quite rare and usually expressed at a very small scale.
We're not individuals apart from others, the others are presupposed. The I is an abstraction in the sense that it presupposes social terms to understand itself. You need a reference, like culture, to be correctly understood as "alternative" (although 'peripheral' in terms of some specific aspects is more correct).
If you're not in a community at all, you're not going to reproduce.
Actual individuality is merely recognizing the autonomy exercised by an agent. You are still an individual even when you behave according to existing mores you did not create.
(The extra social esteem bestowed to relative difference is a cultural trend and a historical phenomenon. It does not determine our species, only our current conditions and predicaments.)
Each ant constantly exercises its autonomy. Would you nonetheless argue that it has any individuality in the way the GP intended the word?
Primarily, I argue that we're a social species and cannot exercise individuality without a culture and community. This contradicts any notion of being artificially programmed by a deus ex machina. On the contrary, we program and reprogram each other reciprocally and continuously. A strong AI shaping of humankind would be parasitic on existing practice, not a new thing. Kind of like how propaganda presupposes a practice of telling the truth.
Second, and perhaps wrongly so, I address the seeming notion of difference being a moral good; a sentiment I interpret in your final sentence. I might very well have misunderstood it.
Being different is not good in and of itself. Striving to be different seems to be a recipe for unhappiness, because internal resources cannot explode the context without a major risk of ending up in social vacuity, where your actions are systematically misinterpreted and the individual misinterprets himself or herself on the rebound.
The so-called historical individual in Hegel, Nietzsche's grand, freest-of-all superman, relies on contingent reality to succeed; in other words, successful "being different" is a harmony of the individual's actions and intentions with existing but as-yet-unexpressed social trends. Again, individuality presupposes sociality, completely opposing the view that "true individuality" is so rare.
It's unfair to the ant to talk about them in human terms, and I am not an expert in ants. But a human being expressing her taste is an expression of her individuality, even though it's exactly the same thing "everyone else" think as well. What matters is that it expresses her as an individual, not that it differs from the majority.
By trying out new ways out of repeating situations and creating new behaviours instead of either repeating the same habits or trying to adopt someone else’s response to them.
At an individual’s scale, those would be large and have big impacts.
Much more so than the colour of my living room’s wall, my type of car or defining myself by wearing either shirts or T-shirts and trying to impose on everyone else what I consider to be professional or unprofessional.
I don't think this will be that big of an advantage. I know the first 20 digits of pi off the top of my head, but if you ask me to do something with this information, it will still take some time. Having to look up the first 20 digits online would probably be a small portion of that time.
You can have instant access to information, but processing can still take time. I think the arithmetic part is actually more interesting. Being able to look at a table of numbers and running a statistical analysis on it would be very useful.
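As a toy illustration of that last point, here is roughly what "looking at a table of numbers and running a statistical analysis" amounts to computationally; the column names and figures below are made up for the example:

```python
# Made-up table: hours slept vs. reaction time, purely for illustration.
from statistics import mean, stdev

table = {
    "hours_slept": [7.5, 6.0, 8.0, 5.5, 7.0, 6.5],
    "reaction_ms": [210, 260, 200, 290, 230, 250],
}

for name, column in table.items():
    print(f"{name}: mean={mean(column):.1f}, stdev={stdev(column):.1f}")

def pearson(xs, ys):
    """Pearson correlation, written out so it only needs mean/stdev."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

r = pearson(table["hours_slept"], table["reaction_ms"])
print(f"correlation: {r:.2f}")
```

Even this trivial analysis is a few dozen arithmetic operations; the interesting question is whether an interface could stream the result back faster than you could reach for a phone.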
At the same time though, I think all of this is way harder than we think. Look at programming via voice. The current best UI for it is still pretty lackluster.
You likely have processing power in your pocket. With efficient brain-to-computer and computer-to-brain interfaces, the processing part could be taken care of too; just externalized.
We would probably end up being a lot _worse_ at things like arithmetic though, since we'd use external processing for even more things (I do believe we're already losing ground in things like memory and simple processing by having access to a smartphone at all times).
> The most poignant scene I can think of: in 2001 A Space Odyssey, the character flies on a Pan-Am spaceship to the moon, and then goes into a phone booth to make a phone call.
Interesting that you commit the same fallacy as the parent talks about: you talk about all this complexity in biotech but then assume that there's going to be a headset with a computer in order to connect to the simulation, rather than it being directly implanted into one's brain.
But yeah, I think we're getting closer and closer to a true hivemind. It would have to suppress the individual personality, otherwise a lot of people will likely go insane.
Of course, it could be that's acceptable losses or they're cut off from the "advanced civilization" and left to live somewhere far from the cities.
That is, of course, if half the planet isn't flooded and turned into desert by then.
Lmao. Hard seltzer isn’t that bad
"Alcopop" always sounded like a fake word to me but they also insist on calling vaccines "jabs".
Douglas Hofstadter talks about this in "I Am a Strange Loop", but he argues that our 'soul fragments', as he calls them, are a representation of ourselves in others. Depending on how large a fragment they hold in our brain, we can perceive the world as they do, and think as the other person. They get to experience the world through us, in a sense, given that we 'allow them to'.
It is an interesting idea, and helps reconcile the death of our loved ones.
> Now take it a step further: your mind is now a part of a collective globally connected network. The boundary of where “you” exist and where the rest of the world exists is erased. You can feel what other people feel.
This reminded me more of a book I've read online. The synopsis doesn't do it justice, but it shows the concept:
> All minds are networked together. Everyone collectively votes to punish or promote anyone for the slightest thought. Cater to your orbiters, and they'll vote to give you candy or slaves or even governorship of a metropolis. Challenge or defy your orbiters and ... well, good luck with that. You'll be down-voted until you die.
Why would they?
All the preceding paragraphs sound like Borg collective, but hey, if it's voluntary, it actually doesn't sound bad. As long as we can keep adtech away.
Working is voluntary.
Sure, you want to move to a farm far away from all the madness or you want to sit in a workshop designing robots all day, but you need money, so you "volunteer" to work some job you barely tolerate in or near a city for all of your best years.
The ads are just constant slaps in the face.
I think you just described the Borg from Star Trek
Near the top of an s-curve in getting-to-space (“heavy lift”) tech, more like.
I’d say the field of actual in-space tech (i.e. technology that takes advantage of low-gravity / low-pressure / low-oxygen environments) is still pretty nascent. We still treat space as “Earth, minus some guarantees” (i.e. something we have to harden for) rather than doing much with the unique benefits of space.
It’ll probably take having a long-term industrial base operating from space to see much change there, though.
Imagine, for example, living on a space station, and having your food cooked using cooking techniques that assume cheaply-available vacuum and/or anoxic environments. :)
To be fair, we still made lots of progress with space tech after the '60s, and I think via SpaceX and others we are hopefully now starting a new S-curve unlocked by cheaper access to LEO.
But a big problem is also that the progress in space tech wasn't organic. There was no economic incentive; it was driven purely by propaganda, national pride and political goals. Once that fell away, it took half a century for economic use cases to catch up to the point where private investment was viable.
Researchers could concoct all sorts of narratives, but it'd lost the spark that held the layman's attention and permitted the spend of political capital.
If the Soviets had won the race to the moon, this might even have happened. But instead the Soviets decided to focus on space stations, and the US declared itself the winner and did largely nothing (rejecting NASA's Space Transportation System proposal, which was also about a space station and a way to get there cheaply).
Effectively, NASA was spending its R&D money on opex and not getting any political capital back for it. Tragically, the program had been set up with the promise of a space truck; if NASA admitted it didn’t deliver a space truck, Congress would have been unlikely to fund a new program. Had the budget been allocated entirely to either technological development or novel exploration, America’s willingness to fund NASA could have been substantially different.
Vacuum isn't hard to get if you need it: all you need is a motor driving a pump. So if any industrial food process or fancy restaurant chef had a good use for it, they would already be using vacuum in cooking. A kitchen vacuum sealer is under $100 (I'm assuming that counts as "cheaply available"), and it's not particularly useful for most other cooking purposes that come to mind.
That also assumes that the atmosphere you're venting for that "cheap vacuum" is cheap as well.
In the last 60 years, computers have gone from $160 billion/GFLOP to $0.03/GFLOP; transistors are now smaller and faster than synapses by the same factors that wolves are smaller and faster than hills, and the sort of computing tech that was literally SciFi in the 60s — self-teaching universal translators, voice interfaces, facial recognition — is now fairly standard.
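A quick sanity check on those cost figures (taking the 60-year window as the compounding period, which is an assumption for the annualized rate):

```python
# $160 billion per GFLOP then vs. $0.03 per GFLOP now.
cost_then = 160e9  # dollars per GFLOP, early 1960s
cost_now = 0.03    # dollars per GFLOP today

improvement = cost_then / cost_now   # total cost-reduction factor
annual = improvement ** (1 / 60)     # implied average yearly factor

print(f"total improvement: {improvement:.1e}x")
print(f"implied annual rate: {annual:.2f}x per year")
```

That works out to roughly a five-trillion-fold cost reduction, i.e. about a 1.6x improvement every single year, compounded for six decades.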
60 years of biotech? If the next time I woke was after 60 years in a cryonics chamber and I was told every disease was now cured, that every organ could be printed on demand, that adult genetic modification was fine and furries could get whatever gene-modded body they wanted up to and including an elephant-mass fire-breathing flying dragon, and that full brain uploading and/or neural laces a la The Culture were standard, I would believe it. But if they told me biological immortality was solved (as opposed to mind upload followed by download into a freshly bioprinted body with a blank brain) I’d doubt the year was really only 2081: not all research can be done in silico; some has to be done in vivo, and immortality would be one of them.
(This would be very surprising, as I’ve not signed up for it, but for the sake of example.)
Therapies to slow aging have the particular problem that it could intrinsically take decades to show an effect, sure- but that’s simply reason to be a bit more ambitious and aim for therapies to reverse aging, which could be tested rapidly in already-old patients. :)
But why, biological immortality is already here for many animals like jellyfish. Honestly seems closer to me than uploading.
If you apply the process to someone who is already 100 and they hit 160 with no further degradation, it would be, at the very least, a pretty good indicator your process had worked.
We are absolutely at the top of the s curve in this field. There has been no real progress in two decades. The AI boom has just made things worse by redirecting all of the grant money away from better sensors (which could get results in certain niche applications) and towards new algorithms, which have hit an accuracy ceiling because the data their human operators feed into them is complete shit. The field is so beyond saving that an algorithm that achieves slightly better performance on a single old dataset due to sheer chance is presented as a breakthrough.
Data rates would be in the tens of bits per minute with error rates of 20% on a good day and no feasible plan to improve either of these woeful specs.
I'll be dead before we have a BCI with enough bandwidth that anyone but a dying MND patient would want to get one installed.
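To make the pessimism concrete: if you model such a BCI as a binary symmetric channel (a simplifying assumption) with the 20% error rate mentioned above, information theory puts a hard ceiling on the usable rate. The 30 raw bits/minute figure below is illustrative, standing in for "tens of bits per minute":

```python
from math import log2

def bsc_capacity(p):
    """Capacity (bits per raw bit) of a binary symmetric channel
    with bit-error probability p."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * log2(p) - (1 - p) * log2(1 - p)  # binary entropy H(p)
    return 1.0 - h

raw_bits_per_min = 30  # illustrative "tens of bits per minute"
usable = raw_bits_per_min * bsc_capacity(0.2)
print(f"usable information: {usable:.1f} bits/min")
```

At a 20% error rate, only about 28% of each raw bit is recoverable even with perfect error-correcting codes, so 30 raw bits/minute carries well under 10 bits of actual information.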
I hope that we see a paradigm shift back towards writing robust and performant systems instead of stacking abstractions. Sure, Monads and Transformers are fun to use, make code concise and are very satisfying when they compose well, but what's the hidden cost, and is it worth it?
As a user who encounters bugs at a disproportionately high rate, I'd say no. The gains from ever-increasing abstraction are not worth the trade-off.
I wonder when the trend reverses. It must, at some point, mustn't it?
Because you can do more in the same time. Speed and responsiveness are features. The issue is, the general population has come to accept that bugs are not only acceptable, but just that: 'bugs' you can shoo away by restarting the machine or program.
> I wonder when the trend reverses. It must, at some point, mustn't it?
I hope so. As we have seen with Spectre and now AMD's equivalent, speculative execution is risky and very complex to get right. We can't rely on ever-increasing complexity in CPUs and fabrication processes; at some point quantum mechanics will bite back.
However, there exists an intermediate plane of abstraction over C and under Haskell that is absolutely horrendous and results in all sorts of weird bugs and unpredictable situations.
In fact, I would be more excited about IRL laws tuning down what some are doing with the current indirect interfaces to my head, such as fake news, propaganda, advertising and manipulations of all sorts.
For me, the nagging question is what happens when biotech has figured out biological systems to the point that everyone stops aging/dying. Does that 10 billion people, give or take, become "the humans" for the rest of time?
There's lots of evidence that there is no "reason" for cell senescence; it's just an evolutionary afterthought (you succeeded in reproducing, now go die) and, like other things, can be "fixed."
Main problem is we take too long to build a replacement person.
I think the most interesting exemplar of this is Star Trek. As everyone knows, Star Trek is based on 19th-century naval warfare. The battle scenes are hilarious: a captain calling out orders at human speeds to a crew that executes them.
It's been obvious for 40 years that computers would do the fighting, but in 1966 it wasn't obvious, so the paradigm was Horatio Hornblower.
In the same vein, why would strikes on a ship result in sparks flying on the bridge?
The rhetoric of personal creativity and freedom was flattened by something far less interesting and more toxic.
I wouldn't be so keen to rush headlong into a bioware-connected world until that problem is solved.
More realistic is that your thoughts will be listened to by the police in real time, plus you could transmit them to other people through Instagram. Amazon will be doing food delivery from app to mouth, and people largely won't be moving much, as food will be wholly automated end to end and people will pursue any hobbies via VR goggles. There will also be contraceptives controlled by a ministry of reproduction, which will work or not based on the couple's social score. Edit: I forgot, exposure is going to be a real currency.
I guess my question to you would be, if we were to take a bet on it, what metrics would we use to know who was right? Biotech patents? In the case of rockets and chips we had thrust and transistor density respectively. I’m not sure how easy it would be to quantify progress, though I have a sense of what some of the tangible applications will be when they show up. Maybe mortality rate would be a good measure? Or human population size (a proxy for mortality + food production)?
Wet dream of surveillance tech.
50 years ago, if you had suggested that someday all your social connections would be instantly available to every serious secret service as well as big corps, people would have laughed at you. You'd have been the negative guy, against progress. 1984 fans were the paranoid tinfoil-hat guys. Today it's reality. And only a few actually care.
You don't want tech to automatically transcribe your thoughts. That's the only domain today where real privacy still exists.
There is a nice Black Mirror episode about this.
Computers are an enabling technology for basically every other advancement - in fact it's hard to imagine breakthroughs at this point that don't involve computers in some way - even 'just' as a tool for collaboration.
Imagine you're a shapeshifter. You can copy any aspect of any living organism on Earth, integrate tech directly into your body, and the smartest people are coming up with new useful things to add.
That's assuming such research is allowed in the first place. Humans are innately repulsed by biology, by organisms. Imagine an arm with its skin ripped open to the bone, gushing blood everywhere. Imagine your guts hanging out from your belly.
And the ethical/moral rules we built over thousands of years will not allow the majority to sit by and watch some atrocious (from their POV) experiments.
We do have commercial space flight. Commercial space flight has exploded over the past few decades. The sky is full of communication satellites, imaging satellites, sensor satellites, even the occasional vanity satellites.
Everything worth doing in space, we're doing.
What we don't have is things that aren't worth doing in space, like Pan-am flights to the moon.
Oh hell nah, it's enough craziness in real time, I sure as hell don't need to replay that.
It’s true that the types of rocket engines we have today are not that much more advanced than the ones we made 50 years ago. But rocket motors have never been mass produced, and we haven’t poured billions of dollars into research (let alone commercial production). We haven’t had the decades of improvement that came with integrated circuits, but we haven’t done the decades worth of R&D either.
Having worked in wet labs and deep learning labs, I think we've a lot to gain from increasing our ability to simulate experiments in silico and automate biological processes.
A lot of the room for improvement has been carved out by improvements in machine learning.
> But in the same way that the mature industrial revolution made high-precision manufacturing possible which made incredible computers possible, our mature computer technology is now enabling incredible progress in biotech.
Your comment made me think of Natalie Wood's last movie: "Brainstorm."
While this itself is certainly an interesting concept, I'm worried at its consequences when implemented in our hypercapitalist economy:
We'll almost certainly, along with this incredible interaction technology, have advertising beamed directly into our consciousness or something similarly intrusive. It's honestly terrifying how much worse intrusive tracking and advertising would get with this technology.
We’ve had a revolutionary S-curve with computing/artificial reasoning from inventing transistors, but we know we’re still at the bottom of two related S-curves: quantum computing (an exponential speedup for many problems of interest) and IoT/smart systems, where our automated reasoning is embodied in something. We know somewhere up those curves lies the ability to make new kinds of minds.
I think both of those will prove to be bigger than bio-science... and moreover, bio-science will require them to a) do the experiments and b) find uses for the technologies.
I think human augmentation will turn out to be like spaceflight: humans are near the top of their S-curve already.
Instead, I think biology research won’t come into its own until AGI research does and we have an idea of how to make new biological systems.
Of course, that might kill us all. Horribly.