
Ok, a couple of things. First, I found it really amazing how convolutedly Intel plotted their numbers on that latency graph so that it would go "up and to the right," which everyone "knows" is good.

The second thing is that Intel has been claiming multiple GB/second of throughput as well. They really do believe this will be a replacement for DRAM on some platforms. As in, you read into your L3 cache from this stuff and you flush to it when you write out dirty cache lines. And while that will make the overall system slower, it gives you literally instant stop/start capability if the key parts of your architecture are static (can retain data at 0 clock). What that means is a laptop that can turn itself off between waiting for sectors to read in from the disk, or packets to come off the network, or keys to be pressed by the user. Non-illuminated run times measured in days on a battery rather than hours.
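To make the "flush dirty cache lines" part concrete, here's a minimal sketch of what a durable store could look like when the persistent memory sits on the memory bus. It assumes an x86 part with the CLWB instruction and a pointer that already maps the persistent region (both are my assumptions, not anything Intel has stated):

```c
/* Minimal sketch, assuming x86 with CLWB support (compile with -mclwb)
 * and a pointer `pmem` that already maps a persistent-memory region.
 * The point: "flushing a dirty cache line" to this kind of memory is a
 * cache write-back plus a fence, not a block-device I/O. */
#include <immintrin.h>
#include <stdint.h>

void persist_counter(volatile uint64_t *pmem, uint64_t value)
{
    *pmem = value;           /* ordinary store, lands in the cache          */
    _mm_clwb((void *)pmem);  /* write the dirty line back to the media      */
    _mm_sfence();            /* order the write-back before later stores    */
}
```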

Imagine 1.2TB of this stuff on the motherboard substituting for DRAM. You've got every application and all the data for your applications already "in memory" as far as the chip is concerned. App switching? Instant. App data availability? Instant. Quite a different experience than what we have today.
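As a rough illustration of what "already in memory" could look like to software, here's a sketch that memory-maps application state from a hypothetical DAX-mounted persistent-memory file (the path and file name are made up), so the data is reached with plain loads and stores instead of read()/write() and deserialization:

```c
/* Minimal sketch, assuming a Linux box where the persistent memory is
 * exposed as a DAX-capable filesystem mounted at /mnt/pmem (path and
 * file name are hypothetical). mmap() then gives the application
 * direct load/store access to its state. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/pmem/app-state.bin", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 1 << 20;  /* assume a 1 MiB state file for the example */
    char *state = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (state == MAP_FAILED) { perror("mmap"); return 1; }

    state[0] ^= 1;          /* plain loads/stores, no read()/write() */

    munmap(state, len);
    close(fd);
    return 0;
}
```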




> What that means is a laptop that can turn itself off between waiting for sectors to read in from the disk, or packets to come off the network, or keys to be pressed by the user.

Yeah, no. It might be able to turn off the CPU, but everything else would have to keep running. Graphics hardware does not deal well with frequent state changes. Network hardware would still need to run to keep its clock in sync with the signal.

So, you get to turn off a handful of components, which is where we already are today - CPUs spend most of their time in deep sleep states. USB devices and chipsets implement sleep states. Graphics hardware clocks down significantly.


> Quite a different experience than what we have today.

For example, I only have to wait at most one second for an app to load, if it's not already open and minimized. It's comparable to the time it takes me to click a button. And my computer is rebooted only once every few months, so it's always up.

This improvement will be more meaningful for random access database operations.


Is this technology costly?

EDIT: Does not seem to be.

“You could put the cost somewhere between NAND and DRAM. Cost per bit, it’s likely to be in between them somewhere. But actual cost will result from the products we bring to the marketplace.”

http://hothardware.com/news/intel-and-micron-jointly-drop-di...



