When you get L5 memory at nearly the same speed and price as L4 (RAM), the whole architecture will have to change.
Forget contemporary memory buses, forget contemporary R/W mechanisms, forget contemporary databases.
Buses et al. would be redesigned, sure, but contemporary databases, software, virtual memory techniques, etc., not so much. You can just tune their "magic numbers" accordingly.
The thing is that architecturally nothing has really changed since the '70s. And even if it had, Oracle cannot just go and tweak some "magic numbers" (what would those be?), since the idea of spinning platters at the end of a long and perilous pipeline is at the core of any contemporary DBMS. To change this you can't just assign a junior developer for a week or two to "optimize" for SSD.
It is a monumental effort requiring that you throw away and/or revisit every assumption made in the 30-40 years it took to build these DBMSs.
It is similar to the introduction of the internal combustion engine to the world of the horse and carriage.
At first glance, just mount the engine on the carriage and voila! It turns out the carriage had to be fundamentally redesigned to accommodate the internal combustion engine, to the point that, apart from having four wheels, a modern ICE-powered carriage looks nothing like the old horse-drawn one.
Yes. What about the remaining 99.999% of the industry?
> The thing is that architecturally nothing has really changed since the '70s.
Which is exactly my point. Despite going from KB of main memory to TB, and from a few MB of hard disk to PB, "nothing has changed". What makes you think this time will be different?
> And even if it had, Oracle cannot just go and tweak some "magic numbers" (what would those be?), since the idea of spinning platters at the end of a long and perilous pipeline is at the core of any contemporary DBMS.
I was talking about the OS level, not Oracle's: tuning virtual memory and the related pipelines.
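Something like these knobs, I mean (a minimal sketch assuming a Linux box; the knobs are real sysctl entries, the idea of simply dialing them down is just my guess at what the tuning would look like):

    # Read a few of the OS-level "magic numbers" that govern how aggressively
    # the kernel caches and flushes to "slow" storage (Linux /proc/sys/vm).
    for knob in ("swappiness", "dirty_ratio", "dirty_background_ratio"):
        with open("/proc/sys/vm/" + knob) as f:
            print(knob, "=", f.read().strip())

With storage nearly as fast as RAM you would mostly turn the write-back knobs way down rather than rewrite the database sitting on top of them.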
And, no, the "idea of spinning plates" it's not "at the core" of DBs. Don't even know what you imply by this. That, for some reason, Oracle say wouldn't take advantage and run orders of magnitude faster on pure dynamic memory? Oracle --and all DBs-- already runs just fine on non spinning SSDs. And most DBs already have special tunings to keep the working set or even everything in the main memory, and never touch the disk. That the underlying storage is a spinning platter of some dynamic memory 100 or 1000 times faster will not matter much.
I'll admit I've not followed this sort of hardware closely, but my understanding was that SATA sits on its own track, apart from the other buses like PCI and USB.
I'm sure people will make SATA SSDs.
There is an architecture revolution to be had, but it isn't as easy as carving off a high-speed bus from the CPU and turning your database speed up by a factor of 20.
If the write lifetime is a million writes, at 10 ns/write you can burn that up in about ten milliseconds.
Maybe you trust your server hardware to last for three years. That gives you about 38 writes per hour per cell. I hope you aren't syncing a critical chunk of data once a minute.
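Back of the envelope (same assumed numbers as above):

    WRITE_LIFETIME = 1_000_000          # assumed endurance: one million writes per cell
    WRITE_TIME_S   = 10e-9              # 10 ns per write
    HOURS_3_YEARS  = 3 * 365 * 24       # ~26,280 hours

    print(WRITE_LIFETIME * WRITE_TIME_S)      # 0.01 s: a hammered cell is dead in ~10 ms
    print(WRITE_LIFETIME / HOURS_3_YEARS)     # ~38 writes per hour per cell over 3 years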
So you have an insanely fast, nonvolatile store, but you can't just write to it willy-nilly. Let the engineering begin! (I'm sure it has for some people.)
Malware can be expensive in a hurry. A couple of quick loops and you can ruin hundreds of blocks a second.
If you have fewer than a few dozen servers you end up playing quality roulette. You have no idea which models from which vendors are going to hold up, so you make a guess based on reputation and price. Sometimes you are right, sometimes you are wrong; sometimes you have 50% mortality in 6 months. And when you do get a model that works well, by the time you know it lasts you can't buy it any more. My strategy was always to buy a batch, run them in non-critical positions for 6 months to screen for early failures, move the critical functions onto them if they held up, and then get off them before the age-related failures began. Three years was about my limit.
Device controllers used to be divided between a northbridge (memory, video) and a southbridge (PCIe, SATA, USB, etc.), but in the latest generation the northbridge components are integrated into the CPU.
This is literally like using transistors for the first time in the late 1950s.
Load/store architectures may go away due to this. Imagine 32Gb of CPU registers.
Sounds like the most expensive context switch ever.
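Back of the envelope on that (my assumptions: reading 32Gb as 32 GB of state, and ~25 GB/s of sustained memory bandwidth):

    STATE_BYTES = 32 * 2**30      # 32 GB of "register" state to save and restore
    BANDWIDTH   = 25 * 2**30      # assumed ~25 GB/s sustained memory bandwidth

    # Swap one task's state out and the next one's in.
    print(2 * STATE_BYTES / BANDWIDTH, "seconds per context switch")   # ~2.6 s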
Anyway, I fail to see the point of this discussion: TFA states that MRAM attains speeds comparable to DRAM, which is much slower than CPU cache (at least one or two orders of magnitude slower), so caches won't be going away just yet.
Also, the article speaks of "write speeds" (whatever that means) of tenths of nanoseconds but says nothing about latency. I suppose there are no refresh periods, which might improve on DRAM a little. It all seems very vague so far; I'm looking forward to some more technical and all-encompassing performance numbers.
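For reference on the cache-vs-DRAM gap, some rough, commonly cited load-to-use latencies (my numbers, not the article's):

    latency_ns = {"L1 cache": 1, "L2 cache": 4, "L3 cache": 30, "DRAM": 80}
    for level, ns in latency_ns.items():
        print("%-8s %3d ns  (%3.0fx the L1 latency)" % (level, ns, ns / latency_ns["L1 cache"]))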
Amazon is a big player with AWS, and this is the perfect opportunity for someone to come in, eat their lunch, and change everything.
The good news is we won't need hard disks any more. This will indeed change a lot of what we know about operating systems.
My point is to remove the distinction between the register file and main memory, so that the entire CPU's working set is flat and no copies are required, thereby drastically increasing speed.
When you do this, you lose all the cache control latency and context switch overhead, resulting in a much smaller and faster core, leaving plenty of space for 32Gb on die :)
No existing architectures will do this, as they rely on the memory hierarchy. I'm talking about a new architecture.
That has been tried several times before. As long as the small memory is enough faster, small-and-fast plus large plus the overhead of managing them beats large alone. (In really fast processors, active register values live in lots of places, so they don't even access the register file except for values that haven't been used for a while.)
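The textbook back-of-the-envelope for why the hierarchy keeps winning is average memory access time, with illustrative numbers:

    HIT_TIME     = 1     # ns, the small fast memory (cache / register file)
    MISS_PENALTY = 80    # ns, the big memory behind it
    MISS_RATE    = 0.05  # 95% of accesses hit the small memory

    amat_hierarchy = HIT_TIME + MISS_RATE * MISS_PENALTY   # small&fast + large + overhead
    amat_flat      = MISS_PENALTY                          # every access pays the full price
    print(amat_hierarchy, "ns vs", amat_flat, "ns")        # 5.0 ns vs 80 ns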
> When you do this, you lose all the cache control latency and context switch overhead, resulting in a much smaller and faster core,
Huh? Context switch overhead is time, not space. Cache control is negligible space.
> leaving plenty of space for 32Gb on die :)
Not yet you don't. None of this stuff is as dense as DRAM, and DRAM is just now hitting 4 Gbit. Since fast processors do take some space....
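Quick sanity check on the density gap, using the 4 Gbit figure and taking both readings of "32Gb":

    DRAM_DIE_BITS = 4e9                   # 4 Gbit, the densest DRAM die mentioned above
    print(32e9 / DRAM_DIE_BITS)           # 8 full DRAM dies' worth if 32Gb means gigabits
    print(32 * 8e9 / DRAM_DIE_BITS)       # 64 dies' worth if it means gigabytes

And since none of this stuff is as dense as DRAM, the real multiple is even bigger, before you've spent a single transistor on the processor itself.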
IMHO the impact on software architecture is likely to be much less pronounced at first, because software ecosystems -- OS kernels, libraries, utilities, applications, etc. -- can evolve only gradually over a period of many years. Consider that Ken Thompson and Dennis Ritchie reportedly created the /usr directory because they ran out of space on a 1.5MB hard disk (!) more than 40 years ago (!), yet we're still living with this directory (see http://lists.busybox.net/pipermail/busybox/2010-December/074... ).
I guess you could keep at least some volatile memory for storing sensitive information such as encryption keys. Of course, unless the rest of the MRAM is encrypted, you may indeed leak potentially sensitive data.
Maybe dedicated hardware could encrypt/decrypt the RAM contents on the fly as the CPU or the devices access it, but that sounds costly.
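Something like this in spirit, just done in hardware at line rate (a toy software sketch; the hash-based keystream is purely illustrative, not a real memory-encryption scheme):

    import hashlib

    BLOCK = 32  # bytes per encrypted block (think: a cache line)

    def keystream(key, addr):
        # Per-block keystream derived from the key and the block's address,
        # so any block can be encrypted or decrypted independently on access.
        return hashlib.sha256(key + addr.to_bytes(8, "big")).digest()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    class EncryptedRAM:
        def __init__(self, size, key):
            self.key = key                     # lives only in volatile CPU registers/SRAM
            self.cells = bytearray(size)       # stands in for the persistent MRAM

        def store(self, addr, block):          # CPU -> memory: encrypt on the way out
            self.cells[addr:addr + BLOCK] = xor(block, keystream(self.key, addr))

        def load(self, addr):                  # memory -> CPU: decrypt on the way in
            return xor(bytes(self.cells[addr:addr + BLOCK]), keystream(self.key, addr))

    ram = EncryptedRAM(1024, key=b"key kept out of the MRAM")
    ram.store(0, b"secret secret secret secret 1234")
    print(ram.load(0))               # plaintext, as the CPU sees it
    print(bytes(ram.cells[:BLOCK]))  # ciphertext, as it sits in the nonvolatile cells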
Liked the article, but that's a self-referencing statement