
if you can recall the author/title i’d love to look at it! i’m curious how the story deals with the disjointed sensory input in the “revived” agent. e.g. does Grandma experience sight/sound/touch? if not, how does she cope with sudden full-body paralysis/numbness/loss of familiar senses? did she agree to this, or know it was going to happen? and so on.

i remember one subplot of a Cory Doctorow book focused on a lab that was trying to develop a self-aware machine; the barrier there was that the machine would commit suicide as soon as it understood the broader context of its being. sort of makes me wonder whether, in order to achieve the sorts of AI you’re talking about, we’d need not just to map the brain but also to understand (or brute-force) enough of it to avoid agent crises. the barrier to that could be larger than just developing a wholly new neural network (idk).




There are a few bits in Neal Stephenson's "Fall; or, Dodge in Hell" that explore this.

And also, "Lena" by qntm -> https://qntm.org/mmacevedo


Google 'MMAcevedo' for a horrifying short-story take on this concept.



The Bobiverse covers this, but is definitely not what the OP meant. There's also an arc in the Welcome to Night Vale podcast that is similar (using frozen brains) but still different.


The whole Bobiverse series is built around this concept: one guy's brain scan and his embrace of his new existence.

By the way, do you have the title of the book you mentioned?


the book was Walkaway: https://en.wikipedia.org/wiki/Walkaway_(Doctorow_novel)

the only reason i didn't name it before is that i couldn't recall the name off-hand.


It doesn't sound quite like the story the previous poster was thinking of, but Hannu Rajaniemi's Jean le Flambeur series (beginning with The Quantum Thief) has uploaded, edited, and copied human minds as a significant component of the story.


Can't be The Quantum Thief, because in Rajaniemi's universe, AGIs (or 'dragons', IIRC) exist, are actually pretty easy to make, and are much smarter than the uploads or derived human brains - so smart that they're far too dangerous for any side to use, which is why everything still runs on uploaded brains.


On his podcast, Sam Harris mentions an ethical consideration when trying to create a conscious AI: an entity that's fully aware of itself yet stuck in a "box" with no way to get out. The perfect torture machine.



