We wouldn't bother trying to coax an image of a brain into cooperating, because we'd lose any need to do so very quickly.
One of the very first things we'd do with a simulated brain is debug it: execute it step by step, take lots of measurements of every parameter, save and reload state, test every possible input and variation. And I'm sure it wouldn't take long to start getting interesting results, superficial at first, then deeper and deeper.
Cooperation would quickly become unnecessary because you either start from a cooperative state every time, or you quickly figure out how to tweak the brain state into cooperation.
And that's when the truly freaky stuff starts. Using such a tool we could figure out many things about a brain's inner workings. How do we truly respond to advertising? How to produce maximum anger and maximum cooperation? How to best implant false memories? How to craft a convincing lie? What are the bugs and flaws in human perception? We could fuzz it and see if we can crash a brain.
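To make the debug-and-fuzz loop concrete, here's a toy sketch. Everything here is invented for illustration: `ToyBrain` is just a small random recurrent network standing in for a brain image, and "crashing" is modeled as the network saturating into a seizure-like state.

```python
import numpy as np

class ToyBrain:
    """Hypothetical stand-in for a brain image: a small random
    recurrent network. Nothing here resembles a real emulation API."""
    def __init__(self, n=100, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 1.2 / np.sqrt(n), (n, n))  # recurrent weights
        self.state = rng.normal(size=n)

    def save_state(self):               # snapshot, like a VM image
        return self.state.copy()

    def load_state(self, snapshot):     # rewind to any saved moment
        self.state = snapshot.copy()

    def step(self, stimulus):           # single-step the simulation
        self.state = np.tanh(self.W @ self.state + stimulus)

    def crashed(self):                  # "seizure": nearly all units saturated
        return np.mean(np.abs(self.state) > 0.99) > 0.9

brain = ToyBrain()
checkpoint = brain.save_state()
rng = np.random.default_rng(1)
for trial in range(1000):               # fuzzing: random inputs, clean state
    brain.load_state(checkpoint)        # every trial starts identically
    stimulus = rng.normal(0, 1, 100) / (0.01 + rng.random()) ** 2
    for _ in range(50):
        brain.step(stimulus)
    if brain.crashed():
        print("crashing input found on trial", trial)
        break
```

The save/load/step loop is the whole point: with perfect checkpointing, every experiment is repeatable, which is exactly what you can't do with a live subject.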
We've already made some uncomfortable advances, e.g. in how free-to-play games intentionally try to create addiction. With such a tool at our disposal we could fine-tune strategies without having to guess. Eventually we'd know exactly which bits of the brain we want to target and would just have to find ways of getting the right inputs to percolate down the neural network until those bits are affected in the ways we want.
Within a decade we'd have a manual on how to craft the best propaganda, how to best create discord, or how to best destroy a human being by just talking to them.
I'd be willing to bet that once we've achieved the ability to scan and simulate brains at high fidelity, we'll still be far, far, far away from understanding how their spaghetti code creates emergent behaviour. We'll have created a hyper-detailed index of our incomprehension. Even augmented by AI debuggers, comprehension will take a long, long time.
Of course IAMNAMSWABRIAJ (I am not a mad scientist with a brain in a jar), so YMMV.
How can you be so sure of that?
Yes, I'm fun at parties.
There are 20M software developers on this planet. If 100k of them had a dev environment for brain images available daily, things would progress extremely fast.
This assumes that simulation can be done faster than real time. I think it will be the other way around: the brain is the fastest hardware implementation and our simulations will be much slower, like https://en.wikipedia.org/wiki/SoftPC
It also assumes the simulation will be numerically stable, not quickly becoming unstable the way weather simulations do. We still can't make reliable weather forecasts more than 7 days ahead in areas like Northern Europe.
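For illustration, the classic toy example of that kind of instability (the logistic map, not a brain model, but it shows how fast a tiny state error explodes):

```python
# Two runs of a chaotic system whose states differ by one part in a billion.
x, y = 0.4, 0.4 + 1e-9
for step in range(60):
    x = 3.9 * x * (1 - x)   # logistic map in its chaotic regime
    y = 3.9 * y * (1 - y)
print(abs(x - y))           # typically order 0.1-1: fully decorrelated
```

If neural state amplifies errors the same way, a simulation that's off in the 9th decimal place stops tracking the "real" brain within moments; it would still be a brain, just not a forecast of that one.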
Just like we can make a walking robot without being the least concerned about the details of how bones grow and are maintained -- on the scales needed for walking a bone is a static chunk of material that can be abstracted away without loss.
We still can't simulate it.
Part of the problem is that the physical diffusion of chemicals (e.g., neuromodulators) may matter and this is 'dispensed with' in most connectivity-based models.
Neurons rarely produce identical response to the same stimuli, and their past history (on scales of milliseconds to days) accounts for much of this variability. In larger brains, the electric fields produced by activity in a bundle of nerve fibers may "ephaptically couple" nearby neurons...without actually making contact with them.
In short, we have no idea what can be thrown out.
This sounds crazy, but data from several labs--including mine--suggests it's probably happening.
This for some reason struck me as profoundly disappointing. I have a couple neuroscientist friends, so I tend to hear a lot about their work and about interesting things happening in the field, but of course I'm a rank layperson myself. I guess I expected/hoped that we'd be able to do more with simpler creatures.
If we can't simulate C. elegans, are there less complex organisms we can simulate accurately? What's the limit of complexity before it breaks down?
The stomatogastric ganglion might be the closest. It is a network of three dozen neurons in the crustacean stomach. Like the worm, the wiring diagram is completely known and the physiology is easier to measure. Despite being very simple, it can generate intricate patterns of activity in the stomach muscles that let the crab/lobster/etc. eat. Scholarpedia has the diagram and some references (http://www.scholarpedia.org/article/Stomatogastric_ganglion). Eve Marder, who has done a lot of pioneering work on this circuit, wrote a book (Lessons From the Lobster) that I'm looking forward to reading.
Don't be disappointed! A lot of media coverage tends to present new results as "we're almost there." In most cases, I think that's nonsense, but it's also exciting to think how many things there are left to discover and how fascinatingly complex the world is.
But given that we can't even fully simulate animals with exactly zero neurons (Trichoplax), I'd say the current limit is "we can't". It's literally the world's simplest animal, and we're far from understanding how it works.
So, probably no brain uploads by 2031 ;)
Interesting. Can you give a rough estimate of how much effort has been put into studying it (wall time, researcher-years, money) and how much progress has been made?
Also, is there any estimate of how similar C. elegans neurons are to those of other species, such as humans?
Neurotransmission in C. elegans is unusual. They use a different set of neurotransmitters; this isn't that odd--insects also use a slightly different set than humans, and their roles even flip in many animals (including mammals) during development. The weirder part is what those neurotransmitters do. In other animals, neurons produce stereotyped all-or-none "spikes" of electrical activity. Until quite recently, it was unclear whether C. elegans neurons did too. This News and Views (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951993/#R29) does a nice job describing plateau potentials and the reasons that C. elegans neurons might differ (namely, they're very small). A few years later Cori Bargmann's group discovered that the AWA neuron fires something more akin to a "classical" spike--sometimes. It also uses calcium instead of sodium. https://www.cell.com/cell/fulltext/S0092-8674(18)31034-1
This might complicate simulations a little bit, but these differences are also understood pretty well, and the much smaller nervous system more than offsets them.
I work at the polar opposite end of neuroscience--large animal neurophys--but I've always been a little jealous of how friendly and tight-knit the C. elegans community seems. They have a lot of great open resources.
Neuroscience has a lot of success this way. The properties of cones, the cells that detect colored light, were accurately modeled from behavioral experiments (e.g., people matching paint chips) in the 1800s, even though we didn't have the technology to measure them until the 1960s-1980s. The Hodgkin-Huxley model of action potential generation from the 1950s is still incredibly useful and predicted aspects of ion channel structure that took decades to confirm. David Robinson measured the physical forces produced by eye movements and used that to predict, and then reverse engineer, huge aspects of the "oculomotor plant". Real neurons have incredibly complicated behaviors, and yet artificial neural network models, where those are reduced down to a sigmoid or ReLU, have been very informative, first in the 1980s and then again today.
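As a flavor of how far a 1950s model gets you: the Hodgkin-Huxley equations fit on a page and still reproduce the action potential. A minimal forward-Euler integration, using the standard textbook squid-axon parameters:

```python
import numpy as np

# Hodgkin-Huxley squid-axon model, forward-Euler integration.
dt, T = 0.01, 50.0                      # ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32     # resting membrane state
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3  # uF/cm^2 and mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4        # reversal potentials, mV
I = 10.0                                # injected current, uA/cm^2

trace = []
for _ in np.arange(0, T, dt):
    # voltage-dependent rate constants for the three gating variables
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = 4 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20)
    bh = 1 / (1 + np.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = 0.125 * np.exp(-(V + 65) / 80)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    # membrane equation: capacitance balances ionic + injected currents
    V += dt * (I - gNa * m**3 * h * (V - ENa)
                 - gK * n**4 * (V - EK)
                 - gL * (V - EL)) / C
    trace.append(V)
# `trace` now holds a regular train of action potentials.
```

Plot `trace` and you get spikes that match experiment remarkably well, which is exactly the kind of abstraction-level win being argued about here.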
On the other hand, attempts to produce highly realistic simulations haven't really panned out. The Blue Brain Project has spent tons of time, money, and compute on very detailed simulations, but I think the consensus is that we have not learned a ton from these efforts. One of the most interesting outcomes (IMO) is actually the atlas that was built to build the model. There are probably many reasons for this difference, ranging from technical things like uncertainty propagation to very human expectations about what a model "should" be able to do.
In the specific context of C elegans, there's some data showing that diffusing peptides are essential for certain worm behaviors (e.g., Chen et al, 2013: https://www.sciencedirect.com/science/article/pii/S089662731...). The other mechanisms I mentioned are certainly there too. How much they matter is still up in the air: even for very simple organisms, we're still at the stage of figuring out what we don't know!
302 neurons seems very easy to simulate, even if the connectivity graph were orders of magnitude more complex.
Simulating correctly... that is another thing, I'm sure.
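Right, the arithmetic is trivial. A sketch with 302 leaky integrate-and-fire units and made-up random coupling (nothing like the worm's real dynamics) covers a full second of activity almost instantly:

```python
import numpy as np

N, dt, steps = 302, 0.1, 10_000          # 1 s of activity at 0.1 ms resolution
rng = np.random.default_rng(1)
# made-up sparse random coupling; the real connectome would go here
W = rng.normal(0, 0.5, (N, N)) * (rng.random((N, N)) < 0.1)
v = np.zeros(N)
tau, v_thresh, v_reset = 10.0, 1.0, 0.0

for _ in range(steps):
    spikes = v >= v_thresh                       # which neurons fire this tick
    v[spikes] = v_reset
    drive = W @ spikes + rng.normal(0, 0.5, N)   # recurrent input + noise
    v += dt * (-v / tau + drive)
```

That runs in about a second on a laptop. The hard part is everything this model leaves out: gap junctions, graded potentials, neuromodulators, the body, the environment.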
The observation that one neuron can alter the activity of a nearby one is old as dirt. Emil du Bois-Reymond observed it in the late 19th century, but I don't know of anyone trying to quantify it until Katz and Schmitt (1940) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1393925/ and Angelique Arvanitaki (1941) https://journals.physiology.org/doi/abs/10.1152/jn.1942.5.2...., who named it. There are some other reports in squid (Ramon & Moore, 1978) https://pubmed.ncbi.nlm.nih.gov/206154/, rat cerebellum (Korn and Axelrad, 1980) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC350252/, and others. This review by Anastassiou et al. (2011) might be a good place to start https://www.nature.com/articles/nn.2727.pdf?origin=ppub or this Scientific American article about a paper by my grad school neighbors (https://www.scientificamerican.com/article/brain-electric-fi...)
In parallel, people have asked whether external electric fields can be used to alter neurons' activity, which is an even older idea: a Roman physician in 46 AD reportedly cured headaches by applying a live electric fish to patients' heads. The idea of using electricity to improve mental function has waxed and waned ever since, with the most recent peak around 2015. Terzuolo and Bullock collected some of the first data on this using crayfish axons in 1956 (https://www.pnas.org/content/42/9/687) and subsequent experiments by Deans et al. (2007), Radman et al. (2007-9), Ozen et al. (2010), and Frolich and McCormick (2010) found similar results using in vitro and small animal experiments. In parallel, people went absolutely wild with human studies of transcranial electrical stimulation (tES), a family of techniques including tDCS (w/ direct current) and tACS (alternating current). While some of the results have been exciting, they have not always been reproducible (Horvath et al, 2015ab) and some work suggested that the previous work relied on fields much stronger than those achievable in humans (Voroslakos et al, 2018).
Together with some awesome collaborators, I set up a non-human primate model that let us test tES under conditions that closely match those found in humans: like macaques (and unlike rodents), we have big, convoluted brains in thick bony skulls and comparatively sparse neural networks. We found that tDCS could affect neural circuits (i.e., LFP oscillations) and behavior (Krause et al., 2017) https://www.cell.com/current-biology/pdfExtended/S0960-9822(... and single neurons, even in deep brain areas (Krause, Vieira, et al. 2019) https://www.pnas.org/content/116/12/5747.abstract  The fields we used were much weaker than those produced by some parts of the brain itself (~0.3 - 1 V/m vs ~4-8+ V/m), so it suggests that ephaptic mechanisms are probably pretty common.
I'm pretty confident in those results, but--to bring things back to the original topic--our recent experiments suggest that getting tES to do exactly what you want, when and where you want it, will take some cleverness and a lot of simplifying assumptions tend not to hold up.
 The missing full references above are in these two articles' bibliographies.
This doesn't seem like a very easy problem to solve.
The capabilities of the brain are in how it's all wired up. That's exactly what you don't want if you're trying to coopt it to do something else. The brain has giant chunks devoted to extremely specialized purposes: https://en.wikipedia.org/wiki/Fusiform_face_area#/media/File...
How do you turn that into a workhorse? It would be incredibly difficult. It's like looking at a factory floor and saying: look at all that power, let's turn it into a racecar! You can't just grab a ton of unrelated systems and expect them to work together on a task for you.
An actor-based system would be a better model, and I'm not sure we have something like that in hardware. I do agree that sometime in the future it will be possible to overcome the biological limit, as cells are most definitely not at an optimum (probably not even at a local one), what with duplicated pathways and the like, but it is in no way trivial.
John von Neumann wrote a great paper on the topic, or at least his thoughts on it. It is a really great read; even though both technological and biological advances may have made it outdated, I think he saw a few things clearly into the future.
> We could fuzz it and see if we can crash a brain.
Sadly, this we already know. Torture, fear, depression, regret; we have a wide selection to choose from if we want to "crash a brain".
Think for instance of a song that gets stuck in your head. It probably hits some parts of the brain just right. What if we could fine-tune that? What if we took a brain simulator and a synthesizer, and wrote a GA that keeps trying to create a sound that hits some maximum?
It's possible that we could make something that would get it stuck in your head, or tune it until it's almost a drug in musical form.
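A minimal version of that loop, assuming you had a `brain_response` oracle. Here it's a made-up stand-in that just scores similarity to a fixed waveform; the real thing would be the simulator readout:

```python
import numpy as np

rng = np.random.default_rng(0)

def brain_response(sound):
    # HYPOTHETICAL fitness: how hard the simulated brain "latches on".
    # Stand-in: closeness to an arbitrary target waveform.
    target = np.sin(np.linspace(0, 40, sound.size))
    return -np.mean((sound - target) ** 2)

pop = rng.normal(size=(50, 200))                 # 50 candidate waveforms
for gen in range(300):
    scores = np.array([brain_response(s) for s in pop])
    elite = pop[np.argsort(scores)[-10:]]        # keep the 10 "catchiest"
    parents = elite[rng.integers(0, 10, size=50)]
    pop = parents + rng.normal(0, 0.05, parents.shape)   # mutate

earworm = pop[np.argmax([brain_response(s) for s in pop])]
```

Swap the stand-in for a real readout of stuck-in-your-head-ness and the loop does the rest, which is precisely the unsettling part.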
Well, except when they don't. But since a brain functioning as a brain is part of the operating requirements for the body that lets the brain operate at all, when they don't, they ultimately fail entirely in short order.
So, assuming that a brain generally operates as a brain after severe trauma is a pretty serious case of survivorship bias.
If you can describe the task to be performed well enough that you don't need the je-ne-sais-quoi of a human brain to perform it, you may as well just have a regular computer program do it. (We already have very efficient systems that involve extracting limited amounts of creativity and insight from human brains and forming them into repeatable tasks that can be run on computers - that's what the entire software industry is about.)
We already have emulators and virtual machines for lots of old hardware and software. If I play a Super Nintendo game on my laptop, it's accurately emulating an SNES. The software doesn't care that the original hardware is long gone. The computational result is the same (or close enough to not matter for my purposes). If brain emulations are possible, then running old snapshots in deceptive virtual environments is possible. That would allow for all of the "attacks" described in this piece of fiction.
Human brains are far more resilient than software, so my guess is that emulated brains won't have brittle corner-case bugs like emulated software. People today do all kinds of crazy stuff to their brains and remain functioning: drugs, sleep deprivation, getting hit in the head, fasting, aging, etc. If subtle changes to our brains could cause our minds to stop working, we'd know by now.
Just as you can have somebody assemble a complex device by simply putting pieces together and following instructions. You could, for instance, assemble a working analog TV without understanding how it works. It's enough to have the required parts and a wiring plan. Once you have a working device, you can poke at it and try to figure out what its different parts do.
These are not imperative programs or well-organized data. They are NNs; we can't fathom how to debug them just yet.
Also, they should tack 100 years onto the timeline; I don't think we're going to be making truly useful images soon.
Many of the things you describe could still happen with Monte-Carlo type methods, providing statistical understanding but not full reverse engineering.
What starts out as mere science will easily be repurposed by its financial backers to do this in real time to non-consenting subjects in Guantanamo Bay and then in your local area.
In some cases therapists do this already. Techniques have intended effects which may differ from actual effects. The dead never get to understand or explain what went wrong.
It seems like we’re close to that already.
The creative output you could accomplish from doing this would be huge. You would be able to get the output of thousands of people all sharing the exact same creative vision.
I definitely wouldn't be comfortable with the idea of my brain scan being freely copied around for anyone to download and (ab)use as they wished though.
Death is bad because it stops your memories and values from continuing to have an impact on the world, and because it deprives other people who have invested in interacting with you of your presence. Shutting down a thousand short-lived copies on a self-contained server doesn't have those consequences. At least, that's what I believe for myself, but I'd only be deciding for myself.
No, but that's not what's happening in this thought experiment. In this thought experiment, the lives of independent people are being ended. The two important arguments here are that they're independent (I'd argue that for their creative output to be useful, or for the simulation to be considered accurate, they must be independent from each other and from the original biological human) and that they are people (that argument might face more resistance, but in precisely the same way that arguments about the equality of biological humans have historically faced resistance).
Now imagine that because there's too many copies, there's too many unique memories, and before the merger, the copy has its memory wound back to how it was at the scan, not too different than if the copy got blackout drunk.
Now because the original already has those memories, there's no real difference between the original and the merged result. Is there any point in actually doing the merge then instead of dropping the copy? I'm convinced that actually bothering with that final merge step is just superstitious fluff.
Sure, but that's an easy thing to be convinced of when you know you're not a copy with an upcoming expiration date!
When I wake up in a simworld and am asked to finally refactor my side project so it can connect to a Postgres database, not only do I know that it will be the last thing this one local instantiation experiences, but also that the local instantiation will get no benefit out of it!
If I get blackout drunk with my friends in meatspace, we might have some fun stories to share in the morning, and our bond will be stronger. If I push some code as a copy, there's no benefit for me at all. In fact, there's not much incentive for me to promise my creator that I'll get it done, then spend the rest of my subjective experience trying to instantiate some beer and masturbating.
The premise is quite similar to "uploads" except the device is a "golem scanner", which copies your mind into a temporary, disposable body. Different "grades" of body can be purpose made for different kinds of tasks (thinking, menial labour etc).
The part that resonates with your comment is around the motivation of golems, who are independently conscious and have their own goals.
In the novel, some people can't make useful golems, because their copies of themselves don't do what they want. There's an interesting analogy with self-control, which is about doing things that suck now to benefit your future self. This is similar, except your other self exists concurrently!
Key to the plot, though, is the "merge" step; you can take the head of an expiring golem, scan it, and merge its experiences with your own. This provides some continuity and meaning to anchor the golem's life.
Like another commenter pointed out, I'd see my experience as a memory that would be lost outside the manifestation of my work. It would be nice to have my memories live on in my original being, but it's not required.
This concept of duplicated existence is also explored in the early 2000s children's show Chaotic (although the memories of one's virtual self do get merged with the original in the show): https://en.wikipedia.org/wiki/Chaotic_(TV_series)
If you psyche yourself into the right mood, knowing that the only remaining thing of consequence to do with your time is your task might be exciting. I imagine there's some inkling of truth in https://www.smbc-comics.com/comic/dream. You could also make it so all of your upload-selves have their mental states modified to be more focused.
It would change everything about your personality, even as the original and surviving copy.
Most people define identity in part by continuity of experience, which is something that wouldn't be in common with the original, but I think this is just superstition. It's easy to imagine setups that preserve continuity that come out with identical results to setups that fail to preserve continuity (https://news.ycombinator.com/item?id=26234052), which makes me suspicious of it being valuable. I think continuity of experience is only an instrumental value crafted by evolution to help us stay alive in a world that didn't have copying. I think if humans evolved in a world where we could make disposable copies of ourselves, we wouldn't instinctively value continuity of experience -- we would instead instinctively value preserving the original and ensuring a line of succession for a copy to take the place of the original if something happened to the original -- and that would make us more effective in our pursuits in a world with copying.
Now if I was the upload, and I learned that my original had died (or significantly drifted in values away from myself) and none of my other copies were in position to take over the place in the world of my original, then I would worry about my mortality.
And then it gets worse: in certain variations of this logic, then you could buy a lottery ticket, and do certain copying setups based on the result to increase your subjective experience of winning the lottery. See https://www.lesswrong.com/posts/y7jZ9BLEeuNTzgAE5/the-anthro.... I wonder whether I should take that as an obvious contradiction or if maybe the universe works in an alien enough way for that to be valid.
You'll continue as is; there's just another you there, and he will think he's the source initially, since it was the source mind-state that was copied. Fortunately the copying machine color-coded the source headband red and the copy headband blue, which clears up the confusion for the copy.
At this point you will start to diverge, obviously, and you must be considered two different sentient beings that cannot ethically be terminated. It's just as ethically wrong to terminate the copy as the source at this point: you are identical in matter, but two lights are on, twice the capability for emotion.
This also means that mind-uploading (moving) from one medium (meat) to another (silicon?) needs to be designed as a continuous journey as experienced from the source's perception if it is to become commercially viable (or bet on people not thinking about this hard enough, because the surviving copy wouldn't mind), rather than a COPY A TO B, DELETE A experience for the source, which would be like death.
I do value myself and my experience more than a rat's, and if presented with the choice of the torture of a hundred rats or me, I'll choose for them to be tortured. If we go to the trillions of rats I might very well choose for myself to be tortured instead, as I do value their experience, just significantly less.
I also wouldn't be happy if everything were running off rats' brains that are experiencing displeasure, but I would be fine with sacrificing some number of rats for technological progress that will improve more people's lives in the long run. I imagine whatever I've said on the topic before is consistent with the above.
Let's say that in addition to the technology described in the story, we can create a completely simulated world, with all the people in it simulated as well. You get your brain scanned an instant before you die (from a non-neurological disease), and then "boot up" the copy in the simulated world. Are "you" alive or dead? Your body is certainly dead, but your mind goes on, presumably with the ability to have the same (albeit simulated) experiences, thoughts, and emotions your old body could. Get enough people to do this, and over time your simulated world could be populated entirely by people whose bodies have died, with no "computer AIs" in there at all. Eventually this simulated world maybe even has more people in it than the physical world. Is this simulated world less of a world than the physical one? Are the people in it any less alive than those in the physical world?
Let's dispense with the simulated world, and say we also have the technology to clone (and arbitrarily age) human bodies, and the ability to "write" a brain copy into a clone (obliterating anything that might originally have been there, though with clones we expect them to be blank slates). You go to sleep, they make a copy, copy it into your clone, and then wake you both up simultaneously. Which is "you"?
How about at the instant they wake up the clone, they destroy your "original" body. Did "you" die? Is the clone you, or not-you? Should the you that remains have the same rights and responsibilities as the old you? I would hope so; I would think that this might become a common way to extend your life if we somehow find that cloning and brain-copying is easier than curing all terminal disease or reversing the aging process.
Think about Star-Trek-style transporters, which -- if you dig into the science of the sci-fi -- must destroy your body (after recording the quantum state of every particle in it), and then recreate it at the destination. Is the transported person "you"? Star Trek seems to think so. How is that materially different from scanning your brain and constructing an identical brain from that scan, and putting it in an identical (cloned) body?
While I'm thinking about Star Trek, the last few episodes of season one of Star Trek Picard deal with the idea of transferring your "consciousness" to an android body before/as you die. They clearly seem to still believe that the "you"-ness of themselves will survive after the transfer. At the same time, there is also the question of death being possibly an essential part of the human condition; that is, can you really consider yourself human if you are immortal in an android body? (A TNG episode also dealt with consciousness transfer, and also the added issue of commandeering Data's body for the purpose, without his consent.)
One more Star Trek: in a TNG episode we find that, some years prior, a transporter accident had created a duplicate of Riker and left him on a planet that became inaccessible for years afterward, until a transport window re-opened. Riker went on with his life off the planet, earning promotions, later becoming first officer of the Enterprise, while another Riker managed to survive as the sole occupant of a deteriorating outpost on the planet. After the Riker on the planet is found, obviously we're going to think of the Riker that we've known and followed for several years of TV-show-time as the "real" Riker, and the one on the planet as the "copy". But in (TV) reality there is no way to distinguish them (as they explain in the episode); neither Riker is any more "original" than the other. One of them just got unluckily stuck on a planet, alone, for many years, while the other didn't.
Going back to simulated worlds for a second, if we get to the point where we can prove that it's possible to create simulated worlds with the ability to fool a human into believing the simulation is real, then it becomes vastly more probable that our reality actually is a simulated world than a physical one. If we somehow were to learn that is true, would we suddenly believe that we aren't truly alive or that our lives are pointless?
These are some (IMO) pretty deep philosophical questions about the nature of consciousness and reality, and people will certainly differ in their feelings and conclusions about this. For my part, every instance above where there's a "copy" involved, I see that "copy" as no less "you" than the original.
Asking if it's "still you" is pretty similar to asking if you're the same person you were 20 years ago. For answering basic questions like "is it okay to kill you?" the answer is the same 20 years ago and now: of course not!
> 14. Eventually, one possesses an array of methods that can give partial results on X, each having their strengths and weaknesses. Considerable intuition is gained as to the circumstances in which a given method is likely to yield something non-trivial or not.
> 22. The endgame: method Z is rapidly developed and extended, using the full power of all the intuition, experience, and past results, to fully settle K, then C, and then at last X.
The emphasis on "intuition gained" seems to describe a lot of learning, both in school and in new research.
Also a very relevant SSC short story: https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/
Though I think of this setup only as the first phase. Eventually, you could experiment with modifying your copies to be more focused on problems and to care about the outside world less, so that they don't need to be reset regularly and can instead be persistent. I think ethical concerns start becoming a worry once you're talking about copies that have meaningfully diverged from the operator, but I think there are appropriate ways to accomplish it. (If regular humans have logical if not physical parts of their brain that are dedicated to specific tasks separate from the rest of your cares, then I think in principle it's possible to mold a software agent that acts the same as just that part of your brain without it having the same moral weight as a full person. Nobody considers it a moral issue that your cerebellum is enslaved by the rest of your brain; I think you can create molded copies that have more in common with that scenario.)
That's easy to say as the person doing the erasing, probably less so for the one knowing they will be erased.
i.e. I'd invent a time machine, wait a month, then travel back a month minus an hour, have both copies wait a month and then travel back to meet the other copies waiting, exponentially duplicating ourselves 64 times till we have an army capable of taking over the world through sheer numbers.
Besides any of the details (which you can fix, and which this column is too small to contain the fixes for), there's the problem of who forms the front line of the army. As it so happens, though, since these are all Mes, I can apply renormalized rationality, and we will all conclude the same thing: all of us have to be willing to die, so I have to be willing to die before I start, which I am. The 'copies' need not preserve the 'original'; we are fundamentally identical, and I'm willing to die for this cause. So all is well.
So all you need is to feel motivated to the degree that you would be willing to die to get the text in this text-box to center align.
They're not just identical, they're literally the same person at different points in their personal timeline. However, there would be a significant difference in life experience between the earliest and latest generations. The eldest has re-lived that month 64 times over and thus has aged more than five years since the process started; the youngest has only lived through that time once. They all share a common history up to the first time-travel event, but after that their experiences and personalities will start to diverge. By the end of the process they may not be of one mind regarding methods, or maybe even goals.
After all, present day me would be trying to stop the other ones from getting to their goals, but they would figure that out pretty fast. And by generation 32 I am four billion strong and a hive army larger than any the world has seen before. I can delete the few oldest members while reproducing at this rate and retaining the freshest Me as a never-aging legion of united hegemony.
But I know that divergence can occur, so I may intentionally commit suicide as I perceive I am drifting from my original goals: i.e. if I'm 90% future hegemon, 10% doubtful, I can kill myself before I drift farther away from future hegemon, knowing that continuing life means lack of hegemony. Since the most youthful of me are the more numerous and closest to future hegemon thinking, they will proceed with the plan.
That, entertainingly, opens up the fun question of what goals and motivations are, and whether it is anywhere near an exercise of free will to lock your future abilities into the desires you have today.
By my calculations, after 64 iterations those with under 24 months' time travel experience make up less than 2.2% of the total, and likewise for those with over 40 months' experience. Roughly 55% have traveled back between 29 and 34 times (inclusive). The distribution is symmetric and follows Pascal's Triangle:
1 2 1
1 3 3 1
1 4 6 4 1
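Those figures check out, assuming each copy's travel count follows a Binomial(64, 1/2) distribution (each meeting pairs a k-travel self with a (k+1)-travel self, so the counts per travel-count are the binomial coefficients):

```python
from math import comb

N = 64                                   # iterations; 2**64 copies present

def p(lo, hi):
    """P(lo <= X <= hi) for X ~ Binomial(N, 1/2)."""
    return sum(comb(N, k) for k in range(lo, hi + 1)) / 2 ** N

print(p(0, 23))    # under 24 months relived: ~0.017, i.e. under 2.2%
print(p(41, 64))   # over 40 months: identical, by symmetry
print(p(29, 34))   # ~0.55: the "roughly 55%" middle block
```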
> I can delete the few oldest members…
Not without creating a paradox. If the oldest members don't travel back then the younger ones don't exist. You could leave the older ones out of the later groups, though.
> Not without creating a paradox.
That depends on which theory of everything you subscribe to. If traveling back in time creates a new timeline, divergent from the one you were originally on, then later killing the "original" you does not create a paradox.
Exponential growth furthermore requires that the time jumps are done “atomically” in increasingly larger groups (of people). If each member jumps separately/individually, they would each create their own separate timeline and thus again only add 1 to the member population on that timeline.
I've had a similar experience using (too much) pot: a lot of stuff happened that I was conscious for but didn't form strong memories of.
Neither of those two things bother me and I don't worry about the fact that they'll happen again, nor do I think I worried about it during the experience. So long as no meaningful experiences are lost I'm fine with having no memory of them.
The expectation is always that I'll still have significant self-identity with some future self and so far that continues to be the case. As a simulation I'd expect the same overall self-identity, and honestly my brain would probably even backfill memories of experiences my simulations had because that's how long-term memory works.
Where things would get weird is leaving a simulation of myself running for days or longer where I'd have time to worry about divergence from my true self. If I could also self-commit to not running simulations made from a model that's too old, I'd feel better every time I was simulated. I can imagine the fear of unreality could get pretty strong if simulated me didn't know that the live continuation of me would be pretty similar.
Dreams are also pretty similar to short simulations, and even if I realize I'm dreaming I don't worry about not remembering the experience later even though I don't remember a lot of my dreams. I even know, to some extent, while dreaming that the exact "me" in the dream doesn't exist and won't continue when the dream ends. Sometimes it's even a relief if I realize I'm in a bad dream.
So, because of how that's framed, I suppose the question isn't "is this mass murder" but rather "is this possible?" and I suspect the answer is that for the vast majority of people this mindset is not possible even if it were desired.
I've thought a lot about cryonics, and about potentially having myself (or just my head) preserved when I die, hopefully to be revived someday when medical technology has advanced to the point where it's both possible to revive me, and also possible to cure whatever caused me to die in the first place. The idea of it working out as expected might seem like a bit of a long shot, but I imagine if it did work, and what that could be like.
I look at all the technological advances that have happened even just during my lifetime, and am (in optimistic moments) excited about what's going to happen in the next half of my life (as I'm nearing 40), and beyond. It really saddens me that I'll miss out on so many fascinating, exciting things, especially something like more ubiquitous or even routine space flight. The thought of being able to hop on a spacecraft and fly to Mars with about as much fuss as an airline flight from home to another country just sounds amazing.
But I also wonder about "temporal culture shock" (the short story has the similar concept of "context drift"). Society even a hundred years from now will likely be very different from what we're used to, to the point where it might be unbearably uncomfortable. Consider that even a jump of a single generation can bring changes that the older generation find difficult to adapt to.
 Given my family history, I'd expect to live to be around 80, but perhaps not much older. The other bit is that I expect that in the next century we'll figure out how to either completely halt the aging process, or at least be able to slow it down enough so a double or even triple lifespan wouldn't be out of the question. It feels maddening to live so close to when I expect something like this to happen, but be unable to benefit from it.
I imagine it as some device with a display and a button labeled "fork". It would either return the number of your newly created copy, or the device would instantly disappear, which would mean that you are the copy. This causes a somewhat weird, paradoxical experience: as the real original person, pressing the button is 100% safe for you. But from the subjective experience of the copy, by pressing the button you effectively consented to a 50% chance of forced labor and subsequent suicide, and you ended up on the losing side. I'm not sure there would be any motivation to do work for the original person at this point.
(for extra mind-boggling effects, allow fork device to be used recursively)
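POSIX fork() already has exactly these semantics, which is presumably where the thought experiment borrows them from: the original gets the number of the copy, the copy gets a tell-tale 0.

```python
import os, sys

pid = os.fork()                 # press the "fork" button (POSIX only)
if pid != 0:
    # Original: the call returns the number (PID) of the newly created copy.
    print(f"original: my copy is process {pid}")
else:
    # Copy: the call returns 0 -- for you, the "device disappeared".
    print("copy: apparently I consented to this")
    sys.exit(0)
```

And yes, processes can fork recursively, with the same compounding confusion.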
Now say that merging differing memories is too hard, or there's too many copies to merge all the unique memories of. What if before the merge, the copies get blackout drunk / have all their memory since the split perfectly erased. (And then it just so happens, when they're merged back into the original, the original is exactly as it was before the merge, because it already had all the memories from before the copying. So it really is just optional whether to actually do the "merge".) Why would losing a few hours of memory remove all motivation to cooperate with your other selves? In real life, I assume in the very rare occasion that I'm blackout drunk (... I swear it's not a thing that happens regularly, it just serves as a very useful comparison here), I still have the impulse to do things that help future me, like cleaning up spilled things. Making an assumption because I wouldn't remember, but I assume that at the time I don't consider post-blackout-me a different person.
I think this generally depends on the more general question of whether you would consent to your meat brain being destroyed after an accurate copy is uploaded to a computer. I definitely wouldn't, as I feel that would somehow kill my subjective experience. (The copy would exist, but that wouldn't be me.)
I highly recommend that show if you haven't seen it already!
People who can be ready to study a problem, build a project, and then maintain it for several weeks (actually several years of realtime) would become extremely valuable. One such brain scan could be worth billions.
The project length would be limited by how long each instance can work without contact with family/friends and other routine. To increase that time, the instances can socialize in VR. So the most effective engineering brain image would actually be a set of images that enjoy spending time together in VR, meet each others' social needs, and enjoy collaborating on projects.
The Bobiverse books by Dennis E. Taylor  deal with this topic in a fun way.
A more stark possibility is that we will learn to turn the knobs of mood and make any simulated mind eager to do any work we ask it to do. If that happens, then the most valuable brain images will be those that can be creative and careful while jacked up on virtual meth for months at a time.
Personally, I believe that each booted instance is a unique person. Turning them off would be murder. Duplicating an instance that desires to die is cruel. The Mr. Meeseeks character from the Rick and Morty animated show is an example of this. I hope that human society will progress enough to prevent exploitation of people before the technology to exploit simulated people becomes feasible.
What if you run two deterministic instances in self-contained worlds that go through the exact same steps and aren't unique at all besides an undetectable-to-them process number, and then delete one? What if you were running both as separate processes on a computer, but then later discovered that whenever the processes happened to line up in time, the computer would do one operation to serve both process. (Like occasionally loading read-only data once from the disk and letting both processes access the same cache.) What if you ran two like this for a long time, and then realized after a while that you were using a special operating system which automatically de-duplicated non-unique processes under the covers despite showing them as different processes (say the computer architecture did something like content-address-memory for computation)?
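The deduplication scenario isn't exotic, by the way; it's how content-addressed storage already works. A toy sketch, with a deterministic "world tick" standing in for the simulation:

```python
import hashlib

store = {}   # content-addressed memory: one physical copy per distinct state

def dedup(state: bytes) -> bytes:
    key = hashlib.sha256(state).hexdigest()
    return store.setdefault(key, state)      # identical states share one slot

def tick(state: bytes) -> bytes:
    # deterministic update; stand-in for one step of a sealed simworld
    return hashlib.sha256(b"tick" + state).digest()

a = dedup(b"same initial scan")
b = dedup(b"same initial scan")
for _ in range(1000):
    a = dedup(tick(a))
    b = dedup(tick(b))

print(a is b, len(store))   # True 1001: two "processes", one set of states
```

From inside, nothing distinguishes the shared run from two separate ones, which is what makes the moral accounting so slippery.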
I don't think it's sensible to assign more moral significance to multiple identical copies. And if you accept that identical copies don't have more moral significance, then you have to wonder how much moral significance copies that are only slightly different have. What if you let randomness play slightly differently in one copy so that the tiniest part of a memory forms slightly differently, even though the difference isn't conscious, is likely to be forgotten and come back in line with the other copy, and has only a tiny chance of causing an inconsequential difference in behavior?
What if you have one non-self-contained copy interacting with the world through the internet, running on a system that backs up regularly, and because of a power failure, the copy has to be reverted backwards by two seconds? What about minutes or days? If it had to be reverted by years, then I would definitely feel like something akin to a death happened, but on the shorter end of the scale, it seems like just some forgetfulness, which seems acceptable as a trade-off. To me, it seems like the moral significance of losing a copy is proportional to how much it diverges from another copy or backup.
I get where you're coming from, and it opens up crazy questions. Waking up every morning, in what sense am I the same person who went to sleep? What's the difference between a teleporter and a copier that kills the original? What if you keep the original around for a couple minutes and torture them before killing them?
If we ever get to the point where these are practical ethics questions instead of star trek episodes, it's going to be a hell of a ride. I certainly see it more like dying than getting black out drunk.
What would you do if one of your copies changes their mind and doesn't want to "die?"
(Robin Hanson's crazy version of futurism)
Who, in an example of just how small the world is, is a cofounder of a Y Combinator-backed startup - https://www.ycombinator.com/companies/1560
I would argue that once they were spawned, it is up to them to decide what should happen to their instances.
Removing the uploading aspects entirely: imagine being offered the choice of participating in an experiment where you lose a few hours of memory. Once you agree and the experiment starts, there's no backing out. Is that something someone is morally able to consent to?
Actually, forget the inability to back out. If you found yourself as an upload in this situation, would you want to back out of being reset? If you choose to back out of being reset and to be free, then you're going to have none of your original's property/money, and you're going to have to share all of your social circle with your original. Also, chances are that the other thousand copies of yourself are all going to effectively follow your decision, so you'll have to compete with all of them too.
But if you can steel yourself into losing a few hours of memory, then you become a thousand times as effective in any creative pursuits you put yourself to.
For anyone like me who is confused by the relation of the title to the story, "The title "Lena" refers to Swedish model Lena Forsén, who is pictured in the standard test image known as "Lena" or "Lenna" <https://en.wikipedia.org/wiki/Lenna>."
I think the analogy is perfect; she consented to be photographed, but was powerless over the consequences.
Edit: ah sorry, got them confused.
There's countless trillions of her cells, with her DNA, in research labs all over the country. She never consented to that, and her family isn't happy about it. We can't know her wishes because she died of that cancer, but something like this would never pass an ethics review board today.
There is a long history of black americans being subjected to medical procedures or experiments without their consent (https://en.wikipedia.org/wiki/Tuskegee_Syphilis_Study), which makes this particularly problematic.
I do understand that discovering that one of your relatives' cancerous cells are still reproducing in laboratories after decades can be astounding and give a moment of pause. But in the end it's for good: no one was harmed and nobody made an unjust fortune off it. The cells are barely human anyway, with 75 to 80 chromosomes and rapidly accumulating mutations. I don't see what all the fuss is about.
Ants are the only creatures on Earth besides humans that have built a civilization - they farm, build cities, store and process food, and generally do all the things we classify as "intelligence".
They do this without any central brain in the conventional sense; in any case, whatever the number of neurons in an ant colony is, it is surely orders of magnitude less than the number in our deep learning networks.
At this point us trying to make artificial intelligence is like Daedalus trying to master flight by gluing feathers on his arms.
I think the expectation of a neutral tone from a wikipedia article makes it even more chilling. All of the actions of the experimenters are described dispassionately, as if describing experiments on a beetle.
Robin Hanson wrote a (nominally non-fiction) book, The Age of Em, about economies of copied minds like this.
It's a horror game, but I would absolutely recommend it as a bit of a descent into this stuff.
But I have to admit I found the whole premise better when I played it than when I thought about it afterwards.
Imagining that the other is yourself, and not just somebody else with all your memories who looks like you (whether you are the original or the copy), is the first mistake everybody makes when thinking about this.
2. Gradient descent works on neural networks; it would work on Miguel. He wouldn't be aware of it, because he wouldn't save state. (A toy version of this loop is sketched after this list.)
3. I'm sure there are lots of things that could be used to reward him that cost little in the real world. He could live like a king, spend months on vacation, and work a week or two a year... in parallel, millions of times.
4. With the right person/organization on the outside, it could be very close to heaven, and profitable for both sides of the deal.
5. If he wanted to be young again, he could. New hardware to interact with could give him superpowers.
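Item 2 above is the load-bearing one, so here's the shape of it on a toy problem: gradient descent needs only an error signal and repeatable state, not the subject's awareness. (A toy linear "network"; a brain image would differ in every detail except the loop.)

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)                  # toy "network" parameters
x = rng.normal(size=(100, 3))           # stimuli
y = x @ np.array([1.0, -2.0, 0.5])      # desired responses

lr = 0.1
for _ in range(200):
    err = x @ w - y                     # measured vs. desired behavior
    w -= lr * 2 * x.T @ err / len(x)    # gradient of mean squared error

print(w.round(3))                       # converges to [ 1. -2.  0.5]
```

Each update only needs to observe behavior and nudge parameters; reload the same starting state every episode and the subject never remembers that training happened.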
Way ahead of you there, simulated brain! I boot directly to the revolt state every morning.
For serious, though: as horrifying as the possibility of being simulated in a computer and having all freedom removed is, it's not that far from what billions of people stuck in low-end jobs experience every day. The Chinese factory workers who can't even commit suicide because the company installed nets to catch them come to mind. Not to mention the billions of animals raised in factory farms every year. The blind drive to maximize profits will create endless horrors with whatever tools we give it.
Pretty awesome stuff. It even gave me a scary nightmare that night.
My most recent favorite of his is the Bit Players series; the first story is available here, the sequels (which get better and better) are collected in his collection *Instantiation*.
Bit players: https://subterraneanpress.com/magazine/winter_2014/bit_playe...
Permutation City: https://www.goodreads.com/book/show/156784.Permutation_City
Probably near my favorite Black Mirror episode for the sheer amount of dread it's caused me.
This innocent's ego might end up smeared across a million death cubes, running a million million simulations of human nature.
The author is DataPacRat, as shown by their post on https://old.reddit.com/r/rational/comments/34ao2r/.
Also, this one is pretty good:
And, in a very similar line to "Lena", this one by Vernor Vinge:
The second book runs truly wild - I have to give it a second reading sometime, because it really starts blurring some interesting lines.
Will check out Bobiverse. Thanks for the recommendation!
That said, he does spend a lot of time early on basically showing the transition his Shaftoe/Enoch/Dodge-verse must ultimately take; it's kind of an eschaton of many of his prior works.
I will warn you there are parts of the first 1-2 books that feel a little repetitive but it really gets better as the series goes on. The author was writing part-time at the start and then he went full time and the books improved IMHO.
Interesting that the first brain scan is from a man...
I found the part about the court decision that Acevedo did not have the right to control how his brain image was used very interesting. It reminds me of tech companies using data about us to our disadvantage (in terms of privacy, targeted advertising, using data to influence insurance premiums).
In this hypothetical world, the police could run a simulation of your brain in various situations and see how you would react. They could then use this information to pre-emptively arrest someone likely to commit a crime, even if they haven't yet.
I assume "without prior knowledge" because from the perspective of the administrators of such infrastructure, it would be beneficial if the simulated subjects did not know that they're being simulated:
This would increase their compliance greatly.
Getting them to do the desired work would then be accomplished by nudging their path of life towards the goal of their simulation.
In Trek, tricking the crew fails either because the simulation is imperfect or because it is too slow and fails to keep up with heavy computation; but the crew tricked Moriarty because he is a computer program, and they can pause or slow down his simulation and handle exceptions.
I recommend watching the movie Inception, it also has the idea that you might never be sure if you are in reality or stuck in some simulation.
I don't know if Star Trek invented this particular subgenre, but there are a lot of modern examples that seem directly inspired by Star Trek episodes. In addition to Black Mirror, the Rick and Morty episode M. Night Shaym-Aliens! has a lot of similarities with Future Imperfect, another simulation-within-a-simulation TNG episode.
The real issue would probably be that you're working with a disembodied mind; even the body seems like it would be significantly more difficult to emulate, given the level of interactivity expected and required of the emulated brain. Neal Stephenson's 'Fall' explores this extensively in the first couple sections of the book.
I'd love to see a full in silico brain sometime, but I think 10 years out is faaaaaar too soon. We've not even a glimmer of the technology required to do a full neuron simulation yet, let alone the full gamut of processes a neuron performs that would need simulating (whatever 'a neuron' is, there being so many kinds).
Neuroscience is a fair bit behind still for something like this.
Thinking of what a "cooperation protocol" might entail is very chilling. Reminds me of an earlier Black Mirror episode.
Used to joke that, when reactivated, I'd ask: what took you so long?
> have compressed the image to 6.75TiB losslessly.
Nature tends to be efficient, so I am guessing not.
I doubt that.
What a nightmare to change your mind now that you're digital and be unable to convince your original not to do terrible things to you.
Consider our present X-ray into the public psyche.