
Probe Memory Packs 138 Terabytes per Square Inch - rbanffy
https://spectrum.ieee.org/nanoclast/semiconductors/nanotechnology/new-approach-to-stmenabled-memory-promises-thousand-times-more-data-storage
======
nine_k
In short: a scanning tunneling microscope moves individual hydrogen atoms,
forming a readable binary code.

New and cool: fully automated, and works well above room temperature (unlike
more traditional cryogenic designs).

Expectedly impractical: very, very slow write speed, and probably a pretty slow
read speed, because everything is mechanical. Only a couple dozen bytes were
actually written in the experiment.

~~~
sophistication
Is there hope the technique can be accelerated to practical speeds at all?

~~~
chii
Or that a stray cosmic ray won't disrupt the molecules and cause read
corruptions?

~~~
fooker
That's not a big deal.

With this density, you can just duplicate (or more) the data and still have a
useful capacity. Parity-based methods would do better, I guess.

~~~
rbanffy
Duplication halves the write speed (and only tells you the data is bad, at half
the read speed). ECC would still reduce R/W speed somewhat, but at least it
would allow one to correct the data and rewrite it.

~~~
eximius
Duplication is a form of ECC (admittedly a bad one). If I write everything 3
times, I can correct any single error, but I will erroneously correct on two
errors. If you're limited to plain replication, you'd want four copies for
single-error correction and double-error detection.
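
A toy sketch of that majority-vote behavior (illustration only, nothing from
the article):

    # Three-copy repetition code: the per-bit majority vote corrects any single
    # flipped copy, but two flips in the same position silently outvote the original.
    def write3(bits):
        return [bits[:] for _ in range(3)]           # store three copies

    def read3(copies):
        return [1 if sum(c[i] for c in copies) >= 2 else 0
                for i in range(len(copies[0]))]      # per-bit majority vote

    copies = write3([1, 0, 1, 1])
    copies[0][2] ^= 1                                # one error: corrected
    print(read3(copies))                             # [1, 0, 1, 1]
    copies[1][2] ^= 1                                # second error, same bit:
    print(read3(copies))                             # [1, 0, 0, 1] -- silently wrong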

~~~
rbanffy
> 3 times

That would be triplication. ;-) Duplication is only single error detection.

------
duozerk
I find it impressive that such advanced atomic storage is _only_ 100x better
than our current storage tech.

~~~
Cthulhu_
The article mentions 1000, but yeah, only three orders of magnitude if I'm not
wrong on those terms, and we've already gone up eight orders of magnitude since
the '80s ([https://ourworldindata.org/wp-content/uploads/2013/05/Increa...](https://ourworldindata.org/wp-content/uploads/2013/05/Increasing-Hard-Drive-Capacity-from-1980-till-2011-Wikipedia.png)).

------
jfoutz
It's not hard to imagine using the hydrogen position as a state, rather than
presence: if it's in the left cell, on; if it's in the right cell, off. If
you've got that, then it's not hard to imagine, rather than one single probe
pointing down, v, a whole array, vvvvvvvv. If the array of probes can be
fabricated on a wafer, you can have massive parallelism.

The memory would have cycles: a read-0 phase, in which any bit that needed to
be flipped to one would be picked up, the array shifts to the left, and any
probe holding a hydrogen would then write; then a read-1 phase, which prepares
for the zero writes.

Obviously this depends critically on arrays of probes, which might not be
possible. If it is, there's no reason to think this can't be massively
parallel.
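
A rough toy simulation of that two-phase cycle (my reading of the idea, not
anything from the article; 'L'/'R' stand for which side of the cell the atom
sits on, with 'L' meaning 1):

    # One probe per cell; a full write happens in two passes over the array.
    def parallel_write(cells, target):
        # Phase 1: probes over cells that must go 0 -> 1 pick up the atom from
        # the right-hand site and drop it on the left-hand site.
        for i in range(len(cells)):
            if cells[i] == 'R' and target[i] == 1:
                cells[i] = 'L'
        # Phase 2: the mirror image, for cells that must go 1 -> 0.
        for i in range(len(cells)):
            if cells[i] == 'L' and target[i] == 0:
                cells[i] = 'R'
        return cells

    print(parallel_write(list("LRLR"), [0, 1, 1, 0]))   # ['R', 'L', 'L', 'R']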

~~~
jfoutz
Now that I think about it, if you can make the array of probes, you don't need
any moving parts.

    
    
       vvvvvvvvvvv
       '',,','''''
       ^^^^^^^^^^^
    

Build two in opposition and pull up or down for each state.

Far, far easier said than done, I'm sure. :D

------
crispyambulance
It makes me wonder, is there a hard physical limit on what is possible for
data storage density?

Hard drives are roughly 1 Tb/in^2 in areal density right now. SSDs are a
little better.

This one using hydrogen atoms is ~100x better.

Hydrogen atoms are pretty damn small. Aside from improvements with 2D packing
factor, is this pretty much the limit?

~~~
biggerfisch
There is such a limit to areal density. The math involved dives into entropy
and the Planck length, but the short answer is that any information at a
quantum level must be quantized in some form, and the smallest possible
quantization is the Planck length squared, called the "Planck area".

reference:
[https://physics.stackexchange.com/a/2283](https://physics.stackexchange.com/a/2283)
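
For a sense of scale, a quick back-of-envelope number (assuming, as in the
Bekenstein-Hawking figure the linked answer relies on, roughly one bit per
4·ln(2) Planck areas):

    import math

    PLANCK_LENGTH = 1.616e-35            # metres
    PLANCK_AREA = PLANCK_LENGTH ** 2     # ~2.6e-70 m^2
    SQUARE_INCH = 0.0254 ** 2            # ~6.45e-4 m^2

    max_bits = SQUARE_INCH / (4 * math.log(2) * PLANCK_AREA)
    print(f"~{max_bits:.1e} bits per square inch")   # on the order of 1e66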

~~~
bsamuels
This is probably a silly question, but why isn't the smallest possible
quantization just the Planck length, or the Planck length cubed?

~~~
spiralx
The holographic principle says that all of the information required to
represent a 3-D volume is encoded on the 2-D surface of its boundary, i.e. in
one sense there is no difference between, e.g., a black hole and its surface.

[https://en.wikipedia.org/wiki/Holographic_principle](https://en.wikipedia.org/wiki/Holographic_principle)

------
acd
Incorrect bit reads or removals should be fixable with error-correcting codes
such as Reed-Solomon:
[https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_cor...](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction)
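
A minimal example with the third-party `reedsolo` package (assuming it is
installed; the exact return shape of decode() varies between library versions):

    from reedsolo import RSCodec

    rsc = RSCodec(10)                    # 10 ECC bytes -> corrects up to 5 byte errors
    encoded = rsc.encode(b"hydrogen bits")

    corrupted = bytearray(encoded)
    corrupted[0] ^= 0xFF                 # flip a couple of bytes, cosmic-ray style
    corrupted[5] ^= 0xFF

    # Recent reedsolo versions return (message, message+ecc, errata positions).
    decoded = rsc.decode(bytes(corrupted))[0]
    print(bytes(decoded))                # b'hydrogen bits'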

------
dghughes
Those images almost make it look like a small abacus. Could more information
be written if it were written as an abacus rather than in binary? Stupid
question, I know, since I don't know how an abacus works, but I assumed more
information could be represented in the same area.

------
debt
I do wonder what will happen when storage capacity greatly exceeds network
capacity, that is, when it will only be possible, and cheaper, to send large
amounts of data by physically moving it rather than over a digital network.

~~~
haimau
That is already being done for petabyte- and exabyte-scale data. Check out
[https://aws.amazon.com/snowball/](https://aws.amazon.com/snowball/) for
example.
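
Back-of-envelope: even a slow truck has respectable bandwidth. (The 80 TB
capacity and 48-hour transit time below are illustrative assumptions, not
AWS-published figures.)

    capacity_bits = 80e12 * 8            # 80 TB expressed in bits
    transit_seconds = 48 * 3600          # two days door to door

    effective_gbps = capacity_bits / transit_seconds / 1e9
    print(f"~{effective_gbps:.1f} Gbit/s effective throughput")   # ~3.7 Gbit/s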

------
ashleyn
Now comes the challenge of mass producing a reliable product.

~~~
rbanffy
I don't even think about reliable at this point. Usable would be a good start.
Bandwidth must be terrible.

~~~
bitminer
About 0.2 bits/sec. Not usable unless parallelized millions of times.
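
For scale (the 0.2 bit/s figure is from above; the 1 Gbit/s target is just an
assumed hard-drive-ish rate, not anything from the article):

    single_probe_bps = 0.2
    target_bps = 1e9                     # roughly a modern drive's sequential rate

    print(f"~{target_bps / single_probe_bps:.0e} probes needed in parallel")   # ~5e+09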

------
fooker
I suspect the speed is going to kill this idea. But even at 20-year-old
mechanical hard drive speeds, there should be some use for this as backup.

~~~
smueller1234
Actually, that's highly doubtful. A backup solution has to be able to restore
a meaningful fraction of the stored data to be useful. Worse yet, if seeking is
expensive, even smallish but scattered amounts of data can be problematic.

Tapes suffer from that big time. They've grown in storage space comparably to
hard drives, but since it's normal to have libraries with many tapes per drive,
there are severe practical limits on their utility even for backups.

------
madeuptempacct
How does the latest "wide-market, in production" storage work now?

------
slx26
Why do we need such high-density slow storage anyway? Would it lead to more
energy-efficient systems, even if just through space reduction? I mean, for
regular data (video, music, images, text), we already have better resolutions
than we can perceive (going much further would mostly be wasteful, and the
bottleneck seems to be _processing_ that data), and big services that host this
kind of content, like YouTube and others, well... can't be expected, in the
long term, to _not_ forget most of their content. So besides a few research
applications... genuinely, is there any really important application that we
have big trouble handling right now? I mean, the world's storage requirements
won't keep increasing forever, no?

