He then said that he could not explain how the new 1/20th of a wavelength drawing works because new physics had been learned, and it had not been in his physics book.
> new physics had been learned, and it had not been in his physics book.
Again, there is a difference between learning new techniques and phenomena running on top of known physical law, and learning new fundamental laws. When engineers say "new physics", they refer to the former, but when physicists say "new physics", they refer to the latter. And there hasn't been any new physics, more or less, since the 60's.
The computational limits are thought to be encoded in the fundamental laws, in a way very analogous to the limiting speed c.
I think the keyword here is "current".
While the development of "new physics" may be unlikely, such developments have never failed to surprise us before. We have ever-increasing capabilities to explore and observe the world around us. I find it highly unlikely that we have cracked all there is to physics.
While all this discussion about the actual laws of physics is very important, I would like to underline that the other half of the reasoning about whether 128 bits are enough rests on the fact that we are, of course, not able to handle that amount of energy; it seems so obvious to all of us that the argument is used as a punchline.
While I cannot imagine how it could be done (or why), this is the kind of thing that doesn't involve changing our knowledge of basic physical laws.
If you need smaller wavelengths still, you could use gamma lasers - though so far these are only possible in theory. There is some speculation that positron annihilation could be used to drive such a laser.
edit: looks like I was wrong - while ultraviolet lasers exist, the currently available ones still don't have a small enough wavelength for, say, a 45 nm process. Other tricks like multiple patterning and immersion are used instead.
The paper describes the "ultimate laptop" as being a computer that can use one atom to store 1 bit of information. Why is an atom the smallest unit of matter a computer can store a bit in?
(It does seem implausible for 1 bit per atom to be right. I mean, atoms themselves vary in hundreds of ways - # of protons, # of electrons, etc. Just using each element to represent a byte would seem to get you more than 1 bit per atom.)
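(A quick back-of-envelope, assuming roughly 118 distinguishable elements and ignoring isotopes and ionization states, just to see how much information that encoding would carry per atom:)

    import math

    # If each atom could be reliably identified as one of ~118 elements,
    # it could carry log2 of the number of distinguishable states.
    elements = 118
    bits_per_atom = math.log2(elements)
    print(f"{bits_per_atom:.1f} bits per atom")  # ~6.9 bits: short of a full byte, but well over 1 bit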
However, the theoretical limits turn out not to be entirely useless even if we can never attain them in practice; they offer some insight into the nature of the universe, and it may yet be thinking about how the universe processes information, rather than what you might think of as conventional physics, that cracks open the problem of what happens in black holes. There's been a lot of interesting work done in that area; for instance, read http://en.wikipedia.org/wiki/Holographic_principle and observe how many times the word "information" comes up.
All asset quantities are signed 128 bit integers under the hood. However, when the wallet interface displays a quantity, it uses a scale factor to place the decimal point. An asset type denominated in a fiat currency like USD or EUR typically uses a scale of 2. An asset type denominated in a commodity currency like GAU (grams of gold) typically uses a scale of 7, allowing a number like 31.1034768, which is the precise number of grams in a troy ounce. If you want to see it in kilograms, you can use scale 10.
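(A minimal sketch of how such a scale factor might work on display; the format_quantity helper is hypothetical, not the wallet's actual API:)

    # Quantities stored as plain signed integers, with a per-asset "scale"
    # saying where the decimal point goes when displayed.
    def format_quantity(raw: int, scale: int) -> str:
        sign = "-" if raw < 0 else ""
        digits = str(abs(raw)).rjust(scale + 1, "0")
        return f"{sign}{digits[:-scale]}.{digits[-scale:]}" if scale else f"{sign}{digits}"

    print(format_quantity(1999, 2))        # USD, scale 2  -> "19.99"
    print(format_quantity(311034768, 7))   # GAU, scale 7  -> "31.1034768" (grams in a troy ounce)
    print(format_quantity(311034768, 10))  # same raw value, scale 10 -> "0.0311034768" (kilograms)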
I think storage will eventually move to the Venti model of hash addressing. It just makes much more sense on a network and with current archive-everything practices (there are some really big wins to be gained from compression and dedup there). BitTorrent is doing this already.
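(Roughly, the idea is that a block's address is the hash of its contents, so identical blocks written twice land at the same address and the dedup comes for free. A toy sketch of the idea, not Venti's actual on-disk format:)

    import hashlib

    # Toy content-addressed store: the "address" of a block is the hash of its bytes.
    store: dict[str, bytes] = {}

    def put(block: bytes) -> str:
        addr = hashlib.sha256(block).hexdigest()   # Venti itself used SHA-1
        store.setdefault(addr, block)
        return addr

    def get(addr: str) -> bytes:
        return store[addr]

    a = put(b"hello world")
    b = put(b"hello world")     # same content -> same address, stored only once
    assert a == b and len(store) == 1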
From a wikipedia entry... "[The computer] Deep Thought does not know the ultimate question to Life, the Universe and Everything, but offers to design an even more powerful computer, Earth, to calculate it. After ten million years of calculation, the Earth is destroyed by Vogons five minutes before the computation is complete."
The network latencies are terrible, however.
The usual limit that is mentioned for storage is Landauer's principle (http://en.wikipedia.org/wiki/Landauer%27s_principle), which states that we need to use at least ln(2)kT Joules of energy to write one bit, where k is Boltzmann's constant and T is the temperature of the storage system.
Assuming the best case T=Cosmic Background Temperature=2.7K (although this is optimistic since we will also be heated up by the sun and by the Milky way), this give 8.8e15 J to write 2^128 bits, or 3.6e19 J to write 2^128 512-byte blocks.
That's much less than what Bonwick calculates, and less than the energy to boil the oceans!
Then storing 2^64 512-byte disk blocks requires 1.5 grammes of diamond (7.5 carats), while storing 2^128 blocks requires 3e16 kg (a ball of diamond with a radius of 12 km).
2^128 is a big number, but perhaps not big enough to last us through the singularity.
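(For anyone who wants to check the arithmetic, here's the back-of-envelope version of both figures, assuming one carbon-12 atom per bit for the mass estimate:)

    import math

    k = 1.380649e-23           # Boltzmann constant, J/K
    T = 2.7                    # cosmic microwave background temperature, K
    bit_energy = math.log(2) * k * T             # Landauer limit per bit, ~2.6e-23 J

    bits_per_block = 512 * 8
    print(f"{bit_energy * 2**128:.1e} J for 2^128 bits")                     # ~8.8e15 J
    print(f"{bit_energy * 2**128 * bits_per_block:.1e} J for 2^128 blocks")  # ~3.6e19 J

    # Mass, assuming one carbon-12 atom stores one bit:
    avogadro = 6.022e23
    grams_per_bit = 12 / avogadro
    print(f"{grams_per_bit * 2**64 * bits_per_block:.1f} g for 2^64 blocks")            # ~1.5 g
    print(f"{grams_per_bit * 2**128 * bits_per_block / 1000:.1e} kg for 2^128 blocks")  # ~3e16 kg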
For example, you might split the address space into smaller sections (executable, read only, read/write, shared, etc) which immediately cuts down the space you have. Or you might decide that your method for allocating space is to pick a random part of the address space, and check if it is taken (or perhaps something slightly cleverer). In this case, you will get better performance if there aren't as many collisions.
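(A toy version of that random-probe allocator, just to show where collisions enter the picture; the 48-bit space and the retry loop are made up for illustration:)

    import random

    # Pick a random address, retry if it's already taken. The fuller the space,
    # the more retries you expect, which is why a sparsely used address space
    # makes this scheme perform better.
    ADDRESS_BITS = 48                      # made-up size for the example
    allocated: set[int] = set()

    def allocate() -> int:
        while True:
            addr = random.getrandbits(ADDRESS_BITS)
            if addr not in allocated:      # collision check
                allocated.add(addr)
                return addr

    regions = [allocate() for _ in range(1000)]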
> I've had people tell me every year, for years, that Moore's Law was
> about to end. I've said the opposite, and have yet to lose the bet.
> Limits on spot density are fundamentally arguments about 2D storage.
> Once we move into 3D -- and this work is already underway -- we will
> get many more orders of magnitude to play with.
I doubt he was thinking flash memory at the time, but 3D storage is already in use. One of the bigger limitations with this is heat, so we have people writing papers like:
Unless I'm reading those links wrong, they are just stacking flash memory on top of each other. How is this different than finding a way to pack hard drive platters closer together? You're just taking several 2D storage mechanisms and stacking them on top of each other. You're not using the third dimension for anything.
(It's actually about 10^38.5.)
A neat calculation for the wow-factor, but not really very informative.
"Let's start with the easy one: how do we know it's necessary?
Some customers already have datasets on the order of a petabyte, or 2^50 bytes. Thus the 64-bit capacity limit of 2^64 bytes is only 14 doublings away."
It raises a huge red flag from a business point of view. In 2004 "some customers" had datasets that would cause issues sometime around 2015 - a problem that ZFS purported to solve (among admittedly much more relevant features). In planning for a future that was possibly ten years away for "some customers" instead of focusing on what was relevant to the larger customer base, Sun managed to continue their slow profitability death march until Oracle finally snatched them up this year.
Something to keep in mind when you decide to add "cool theoretical feature X" or "unparalleled scalability" to your four month old startup...
a) That the customers Sun served in 2004 didn't care about anything ten years away. That their time scale for data on disk was less than a decade. And that marketing to them about how ZFS planned for the future was not effective.
b) That building a 128-bit filesystem (as opposed to a 64-bit filesystem) substantially impacted the amount of time it took to engineer the filesystem or impacted the adoption rate by customers. Clearly those are not facts in evidence. Since we know that an integer behaves pretty much the same regardless of whether it's 32, 64, or 128 bits, it's probably safer to assume the opposite.
I take the opposite lesson: I think when planning your startup, a little thought into "are we representing data in a way which we can extend into the future?" is not a terrible idea, especially when the choice is between a 32, 64, or 128 bit integer.
"ZFS is the most amazing filesystem I’ve ever come across. Integrated volume management. Copy-on-write. Transactional. End-to-end data integrity. On-the-fly corruption detection and repair. Robust checksums. No RAID-5 write hole. Snapshots. Clones (writable snapshots). Dynamic striping. Open source software."
I never disputed ZFS being a good file system. I claimed that 128 bit support wasn't a key feature and was in fact a potential waste of time.