I often find that when I experience deja vu, I think I'm remembering something I did just the day before as if it also happened long ago. It feels like I have two memories: one of a recent event, and one a vague recollection of something from long ago. That's exactly what you'd expect if two copies of a memory are recorded, one suppressed for a few days and one forgotten within a few days, but the suppression fails.
Does anyone else have Deja Vu like this?
I think the theory is that occasionally there are slight issues with the timing/synchronization of the memory being stored in certain parts of the brain. Since the research shows that memories are written twice, if such a desync occurred, maybe deja vu is some perception of disagreement between short and long term memories?
For some reason, I find the experience of deja vu somehow quite ... pleasing. I am not sure if this is true for everyone.
I would practice by doing the same thing I did the day before or a week before, then try to focus on the feeling and enhance it as soon as it started to show up, then kind of "let myself live in deja vu" for longer and longer periods.
Now I can give myself that feeling anytime. First I relax and look around, taking in as much as I can. Then I go back to doing something else, then relax again and pretend I have already been where I am.
Such an odd sensation when your brain is telling you you're experiencing a repeated situation, but your mind is telling you it's impossible.
Next time you get deja vu, try to "remember" what happens next. It's scarily accurate after you willingly try it once or twice.
My overall hypothesis is somewhat in line with yours (although I don't have nearly the expertise to properly examine these things): somehow the brain 'skips' filing the memory properly, and the experience, recalled milliseconds later as "now", is mis-remembered as "past".
Very cool machine our brain.
Is this because the clearer "short term" version is completely lost?
"It is immature or silent for the first several days after formation," Prof Tonegawa said.
What's going on during those first several days? It's probably re-arranging its model of the world to account for those new memories.
"The idea you need the cortex for memories I'm comfortable with, but the fact it's so early is a surprise."
The neocortex is constantly making predictions about the future, so it makes sense that it has some short term memory of its own to make those predictions from.
Insects and plants are closely tied together, with insects being essential for the flowering of certain plants.
Are mammals so closely tied to any species outside of the class?
It instead ushered in a new age of surgical intervention that saved a lot of lives and prolonged life in general. I know I'm not alone in having a relative that would have been dead twenty years ago if not for the ability to radically alter the mechanics of a malfunctioning heart.
We shouldn't cease to try to understand the world because of the risk new knowledge brings. Rather, we should also understand the risk and act to intercept and mitigate it. But don't condemn Alzheimer's patients to a slow twilight death because of the risk of strong AI; that's not an acceptable tradeoff.
This depends on the estimated probabilities of worst case scenario. I don't think you can just handwave it as "always worth it".
I have yet to hear a realistic strong AI nightmare scenario (i.e. one that doesn't presume a magically-smarter-than-all-of-us-combined AI as a McGuffin with no solid functional grounding) with a probability anywhere near offsetting that cost. Besides, stacking the constant chronic cost of a few tens of thousands of deaths against a nonzero-probability species-ending scenario is nearly an apples-to-oranges comparison---by the logic of simple probability-to-risk modeling, we should be stopping all disease research right now and pouring 100% of that money into fast asteroid detection and mitigation solutions.
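The probability-to-risk logic being criticized above can be sketched with a toy expected-value calculation. Every number here is an illustrative placeholder, not an actual estimate; the point is that the conclusion flips entirely depending on an input nobody can pin down:

```python
# Toy expected-value comparison behind simple probability-to-risk modeling.
# All figures below are made-up placeholders, not real estimates.

chronic_deaths_per_year = 100_000      # certain, ongoing cost (e.g. a disease)
p_catastrophe_per_year = 1e-6          # assumed annual chance of a species-ending event
population = 8_000_000_000             # lives lost if the catastrophe occurs

ev_chronic = 1.0 * chronic_deaths_per_year            # certain cost, probability 1
ev_catastrophe = p_catastrophe_per_year * population  # tiny probability, huge loss

print(f"chronic:     {ev_chronic:,.0f} expected deaths/year")
print(f"catastrophe: {ev_catastrophe:,.0f} expected deaths/year")

# With these placeholders the chronic cost dominates by more than 10x, but
# raising the assumed probability to 1e-4 reverses the ranking. The answer is
# driven entirely by an unknowable input, which is the apples-to-oranges issue.
```

Under this framing, any sufficiently speculative catastrophe can be made to outweigh any concrete ongoing harm just by nudging the assumed probability, which is the reductio the asteroid-detection example is pointing at.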
Practically speaking, I think we have plenty of time while we are doing the research to cure a current and real disease to puzzle through the detection and mitigation strategies for a strong AI threat that is---at best---decades out (and, some would argue, will come to pass inevitably with or without our Alzheimer's research, yes?).
This also explains deja vu, though possibly not in the way many on this thread have tried to explain it.
I wrote about this a couple of years ago: the leading theory of deja vu is that your brain processes and stores an experience in long-term memory before it's had a chance to properly store it in short-term memory. By the time short-term memory has caught up, the event already feels lived, because it has already been processed and stored in long-term memory.
At least, that's one theory.
So can this be extended to movie-plot-like scenarios where memories are wiped, or false memories implanted? How did they identify the memories to be targeted, and where they were stored?
There's lots of exciting work to reverse-engineer and extract rules from software neural nets. Can the same be possible in hardware nets too, or will attempting to measure it interfere and distort it?
The same group has done all that stuff with optogenetics and molecular techniques:
This is an old article and may be superseded by more recent research, but false memories may not have to be implanted; in some sense we do it to ourselves every time we recall a memory.
There's truth to your sentiment, that the current level of knowledge is always somewhat arbitrary, but to say there's "nothing" that suggests a connection between the two is far too dismissive of an interesting question posed by the original comment.
As a tangent, this seems to me to be a problem with Daniel Dennett's ideas and why, in the end, David Chalmers seems to be gaining ground with every passing year.
Also this sentiment casually dismisses the fact that behind computing there's a whole lot of theoretical work that stems not from technology itself, but from how reality works (i.e. math).
What do you mean by "we" and "nature"?
Line of reasoning? That evolution is an optimization process, and that human intelligence is an optimization process - hence both are likely to reach good solutions for intelligence eventually, and if there's a strong optimum in the design space of those, then both are likely to converge at least in some areas.
What's "we"? Human technological civilization.
What's "nature"? Evolutionary process that already produced working brains.
I'm not entirely sure that "optimal" has an agreed upon definition. At best, "optimal" is relative to the system within which it is being applied. "Optimal" in a Trump world is very different from "optimal" in a Bernie Sanders world. Optimization seems to require some objective. In a practical sense, you cannot optimize a piece of software if you don't know what you are optimizing for.
It is a bold premise that the evolutionary process and human technological civilization have the same optimization goals.
We have similar goals when we optimize for "what works" (instead of, e.g., "what sells").
The only existing instance of what we're trying to build - intelligence - is something a dumb, random, incremental optimization process following simple rules managed to somehow stumble upon. Now, if there's a strong local optimum in the design space of intelligent machines, then it seems plausible that evolution ended up there, and that we may stumble upon it too, thus converging with the evolutionary solution somehow.
Now I'm not saying our solution will be identical to biological brains. We have different goals (hell, we have goals, nature does not). But we're likely to end up doing many aspects of it in a way that resembles biology.
The core observation here is that it's the structure of reality (implications of laws of physics) that shape the search space we're traversing. Compare flight. Yes, human planes are very different from birds - but that's because they have less efficient energy sources, and also because we want them to go faster (have you ever seen a supersonic bird?). Still, both share some aspects, like the airfoil. Both we and nature "discovered" those because airfoils are dictated by the laws of physics - that's how you do flight in gases.
Basically, what I'm saying is that humans always describe things in terms of the technology of their age, but that doesn't mean it's wrong. Better technology means better description. Birds are unlike planes, but analyzing them using the model of airfoil we developed is a good idea and leads to more and better understanding.
> Optimization seems to require some objective. In a practical sense, you cannot optimize a piece of software if you don't know what you are optimizing for.
Yes, optimization always has an objective - that's how we define it, in contrast to complete randomness. But the objective can be implicit or explicit. Explicit goals require a mind to be involved. Evolution has only implicit objectives; human-driven processes have both (because we suck at knowing what we actually want).
But the second important part of an optimization process is the shape of the optimization space. This here is defined by laws of physics. And insofar as evolution's implicit objective is in some aspects similar to our objective, both get similarly influenced by the shape of the optimization space :).
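The convergence argument above can be made concrete with a toy sketch. Here a single fitness landscape (standing in for the constraints physics imposes) is searched by two very different processes: blind mutate-and-keep-what-works (evolution-like, no explicit objective in mind) and explicit gradient ascent (engineering-like). The landscape and all parameters are made up for illustration:

```python
import random

# Toy 1-D landscape standing in for "the shape of the optimization space":
# one strong optimum at x = 3. Entirely made up for illustration.
def fitness(x):
    return -(x - 3.0) ** 2

# Evolution-like search: blind random mutation, keep whatever works better.
def hill_climb(x, steps=5000, rng=None):
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    for _ in range(steps):
        candidate = x + rng.gauss(0, 0.1)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

# Engineering-like search: explicit objective, follow the gradient.
def gradient_ascent(x, steps=5000, lr=0.01):
    for _ in range(steps):
        grad = -2.0 * (x - 3.0)  # d(fitness)/dx
        x += lr * grad
    return x

# Different processes, different kinds of objective, same landscape:
# both end up near the same optimum at x = 3.
print(round(hill_climb(-5.0), 2), round(gradient_ascent(-5.0), 2))
```

This is of course a cartoon: real design spaces have many local optima, and nothing guarantees evolution and engineering start in the same basin. But it illustrates the claim that when the landscape has a strong optimum, very different search processes can converge on it.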
Totally. "Better technology means better description" is a great idea/concept here.
Since reading Michel Foucault while I was studying in the UK (someone I feel like I never even heard mentioned at a US university), I've rethought the "what/essence" of the things we do and build. Just like the supposed lesson learned (or potentially still to be learned) in the finance world after the financial crisis: models are models, not reality. We develop a culture around the descriptions that we use, but in the end we aren't truly describing the essence of the "what" (i.e. the thing described). We are layering various descriptions, cultural ideas, and preconceptions on top of the thing itself in order to better communicate some aspect of it to other people. In a sense, different technologies give us a shared set of concepts with which we can communicate with each other.
Your point is great. Just because they are "descriptions" doesn't make them wrong or right. They are a shared language that we use to communicate complex ideas with each other and, hence, better understand previously wishy-washy concepts.
I look forward to human trials.
We advise you to avoid calendars for a few weeks, until you adjust.
(How does he remember what "remember Sammy Jankis" means? How does he remember he has short-term memory loss?)
Yeah, it's a slight mental leap, but "remember you have Sammy Jankis' medical condition" doesn't quite have the same ring to it. :)
I have no good knowledge of neuroscience, so I'll be happy to read other thoughts on this.
Personally, it seems to me that this is an evolved technique to direct us to:
- act on impulses in the moment of an event, instead of contemplating rational choices that may take too much time to arrive at (thus limiting our chances of survival);
- pursue rational choices in calmer times, trusting us to draw on stored memories of similar events to help make better decisions (the purpose of deja vu?).
As for the link between the hippocampus and the cortex being required to retrieve memories: maybe those memories are useless without their emotional context, and the hippocampus is needed to relive those emotions for a more accurate recollection of the memories.
It might not make sense (I know almost nothing about neuroscience), but it's a thought.