Xilinx is already doing something like this: https://www.semiwiki.com/forum/showwiki.php?title=Semi+Wiki:...
Eventually this might even converge on something like a spherical shape: that's when the total length of the connections is minimal (the surface area of the sphere is minimal too, and that again is a heat-removal problem).
The paper quoted:
"To minimize memory latency these 4B machines will likely be smoking-hairy-golf-balls[3].
The processor will be one large chip wrapped in a memory package about the size of a golf
ball. The surface of the golf ball will be hot and hairy: hot because of the heat dissipation,
and hairy because the machine will need many wires to connect it to the outside world."
With a footnote:
"Frank Worrell used this metaphor in 1985."
The paper is here:
It's very interesting reading, especially when you look at how much he got right.
We don't quite have those smoking hairy golfballs just yet but we're slowly getting there.
Ideally for volumes, yes, one would go for spheres. But in reality more often than not that translates to a cylinder because of technology trade-offs, with perhaps some endcaps. And not surprisingly, computing has already been there and done that.
If you take a bunch of compute nodes, roll the rack row up to minimize cable length, you end up with ...a classic Cray design from the '80s. And yes, the wiring in the middle of a cylindrical Cray was as 3D as it could physically get...
Looking at the picture, the technology of the era was an obvious dead-end:
and the trend is clear, just as it was back then with VLSI. Now the race is on to integrate those wires at a very large scale... (Cray also had a classic, visually appealing design approach to dump the heat, not unlike the enthusiast PC market today with commercial liquid coolers and some high-end HPCs)
Optical would probably be even better than RF (and that too has its problems).
60 GHz CMOS is already a thing http://www.seas.ucla.edu/brweb/papers/Journals/RJan06.pdf
Watch out for the "Hazards of Prophecy"
I recall when I was there in '99 talking with people about stacking cores. Then a few years later some friends there were doing 64 core research in ~2003-4-ish...
I'm sure they have evolved the voxels since then.
Miniaturization happens not only on the surface (length and width) but also in depth. This is where silicon oxides come into play. Flash tech relies either on charge trapping in the oxide itself or in a floating gate. Both become sensitive to effects like tunneling when you scale them down to the 10 nm scale.
The Bleak Future of NAND Flash Memory:
SSDs need additional circuitry, chips and memory, but MicroSD cards aren't 100% flash either. This shows that the ceiling is very high even with current lithography.
My guess is that there's just not much market for a $200k+, 50 TB 2.5" drive, but it would be possible to make in high volume production.
If you need storage capacity, traditional hard drives will easily beat it in terms of TCO.
If you need IOPS, it's much better to use an array of smaller high performance drives.
I could picture it being useful in some military and space exploration applications but that's about all.
There's little reason to pay a massive premium for huge SSDs if your controller limits you to a small percentage of the potential capacity of the drive. So instead you get things like the PCIe cards with 4+ controllers that get RAID'ed together by the drivers.
As the controller bandwidth catches up, I'm sure that'll change.
Doubtful, since they are not radiation hardened.
I could rather see a market for smaller size SSDs which contain lots and lots of chips and thereby have their own RAID subsystems.
One can get nearly the same performance much cheaper by using consumer electronics with redundant systems and a voting mechanism between them.
I, on the other hand, am running out of free space on my 4TB capacity desktop, and will be purchasing larger storage devices soon.
In my case it's my secondary computer, so I don't see a need to store much in the way of media (Movie, Music) etc on it, which is about the only way I imagine I would use that much space.
The OS and Applications takes up a decent chunk, as does VirtualBox, but other than that it's just documents and source code.
Oh, and I have an external 3TB drive where I house all of my backups and captures of old desktops. It's completely full, and I will be purchasing another ~3TB drive in the near future for backup purposes.
Making full disk backups and archiving huge collections of media assets you'll only ever use 1% of is just a huge burden of mind and a huge loss of portability and flexibility. Not to mention a waste of time and money.
I know you say you're an amateur as if professionals would have even more, but in reality, they've probably realized what a time sink it is and focused on just the small set of stuff that actually matters.
"You couldn't even sift through that shit in a lifetime, let alone use it all." "media assets you'll only ever use 1% of"
Assume someone's daily routine involves 3 hours (below average) of TV/movies, in HD. 5-6GB a day, 3TB is reached in under two years. 1% of 3TB is reached in a single week. Even hanging on to a small fraction of the media they watch for future viewing/sharing would fill up 3TB in a handful of years.
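A quick sketch of the arithmetic above (the 5.5 GB/day figure is an assumption splitting the "5-6GB a day" estimate):

```python
# Napkin math: how fast does daily HD viewing fill a 3 TB drive?
# Assumes ~5.5 GB/day (3 hours of HD video, roughly 1.8 GB/hour).
GB_PER_DAY = 5.5
DRIVE_GB = 3000

days_to_fill = DRIVE_GB / GB_PER_DAY                  # ~545 days, under two years
days_to_one_percent = DRIVE_GB * 0.01 / GB_PER_DAY    # ~5.5 days, about a week
```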
"Making full disk backups and archiving huge collections of media assets you'll only ever use 1% of is just a huge burden of mind and a huge loss of portability and flexibility."
Managing full disk backups means buying a disk and pointing an automated program at the folders you want. Average time per week: 0 minutes.
Managing huge collections of media assets: This means navigating services to click on movies and shows to download, and occasionally moving a season into a different folder. Average time per week: 5 minutes.
Burden of mind: Remember to buy a drive every several years, otherwise none.
Loss of portability: Keep it on an external drive if you want portability? Not sure what is meant here.
"Not to mention a waste of time and money."
Tell that to anyone with basic cable, as far as watching the content goes. Storing it after watching has negligible additional time/money cost.
Now do you see what the problem is with that comment? While it's certainly possible to be a digital hoarder, having two shelves' worth of movies on a hard drive is extremely weak evidence toward that.
75GB of TV
300GB of Movies
80GB of Music
84GB OS Images
64GB Installers (I rip game cds so I don't have to keep the discs around)
4GB Misc Audio
14GB Misc Video
240GB of cross-device backups
This stuff adds up. If I actually ripped my entire music, tv, and movie collection I'd easily fill up the rest of the disk. Mostly I'm just impatient about only 20MB/s reads off old cds and dvds.
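For what it's worth, the categories listed above tally up like this (labels are mine):

```python
# Summing the listed category sizes (all in GB).
usage = {
    "TV": 75, "Movies": 300, "Music": 80, "OS Images": 84,
    "Installers": 64, "Misc Audio": 4, "Misc Video": 14, "Backups": 240,
}
total_gb = sum(usage.values())  # 861 GB before ripping the rest of the collection
```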
I don't back it up, but it is RAIDed with 1-disk fault tolerance, and that's pretty much good enough for me. I mostly only ever add new content to the NAS or read existing ones, so I'm not worried about versioning.
Best of both worlds imo.
I deleted a few hundred GB of movies a while back and realized I didn't even need a multi-terabyte drive at all. Feels good man.
There are cloud services that solve this but they have their own set of issues. Many are expensive compared to basic direct/network attached storage, others require photos to be public, others come with dubious guarantees that they will still exist next year.
I get a call from my sister with a 128GB MacBook Air every month or two asking me to remind her how to move her baby photos/videos to her external drive. Every time we do it I am surprised how complicated and error-prone the process is (we've managed to accidentally permanently delete photos). If anyone has a good recommendation for an alternative to iPhoto (and not cloud based) that understands the concept of archiving photos to a network/USB disk that is not always present, I would love to hear it.
It supports both network and impermanently attached usb disks.
Instead, she should be using a private YouTube channel for her videos, and Flickr for photos. The rest should fit in a Dropbox account. Not only will her data have a team of professionals trying to prevent the inevitable fuck-up, and she'll enjoy 'cloud' benefits, but she'll also have a chance to curate her collection during upload, because no one has enough time in their life to look at 200GB of home movies and photos every 6 months (it's turning into a landfill).
It is much thanks to me that she does not put it in the cloud. I don't trust cloud services, so that's what I suggest to her -- I'm happy to sysadmin her devices whenever she needs me to, though.
I hope in the future there are truly Easy-to-use-for-moms storage devices, along with the software to go with them, that gives them all the benefits cloud may have.
However, I do have something like 6TB of active desktop storage (not including NAS) at capacity because of software, audio samples, disk images, virtual machines, backups, documentaries and shows, etc.
Everyone just uses computers differently. My mom's storage is something like 2TB and I think about 70% of it is home movies and photos. The remaining 30% are duplicates of some of the same files, likely on the same disk in different directories.
If I'm making a home movie, I'll just copy the files to the SSD temporarily. I have 30GB+ free most of the time.
The entire point is that pretty much any other (read: non-Apple) laptop/desktop you buy nowadays is going to come with at least 500 GB, and generally 1 TB.
No need to get up in arms and whine that Apple is doing everything just right for your tastes.
edit: I obviously wasn't referring to 1 TB SSDs. The parent's comment said that 128 GB is pretty paltry for storage capacity nowadays, as the thread is concerning the rate of growth of SSDs.
Is that the case? Most machines I've seen around the cost of the Macbook Air still come with hard disks or 128G SSDs - maybe a couple might push that to 256.
Indeed, I don't think I've even seen a laptop with 1TB of SSD, aside from the very-top-end-I-upgraded-it-especially Retina Macbook.
Which laptops are these which come with 500GB SSDs in the base models? Or are you suggesting that Apple should make a laptop with a 500GB spinning disk in the base model? Because funnily enough, they have one; the non-retina 13" MBP. Of course, no-one buys that, because it's worse than the cheaper Airs by all metrics but storage capacity, but it's there if you want it.
I'll take that any day over extra storage I don't need.
That's for non-SSD drives. So every other laptop/desktop you're gonna get with those storage sizes also has rotating rust disks, which are more fragile due to moving parts, have much worse read/write speeds and are noisy.
Yep, clarified in my edit.
>So every other laptop/desktop you're gonna get with those storage sizes also has rotating rust disks, which are more fragile due to moving parts, have much worse read/write speeds and are noisy.
That's cute. So now hard disks are so obtrusive, fallible, and out-dated that they're literally not an option anymore?
I think some people have such a love affair with the acronym SSD that they forget disk access is still hundreds of thousands of clock cycles regardless.
SSD benchmarks are clearly better than HDD all around, but not by the orders of magnitude that you seem to believe.
A factor-of-N improvement is still better than no or marginal improvement, even if it's not 2 orders of magnitude better.
It makes all the difference between waiting for 5 minutes for some IO process to finish and waiting for 1 minute. 1 second would have been nice, but 1 minute is already a game changer.
Are you being sarcastic?
If they really wanted to keep the price down, they wouldn't strive for 2x or higher margins. MacBooks (any Apple hardware) use a component set that could easily be sold at half the price. You are simply paying for the Apple logo.
Almost as if HP, Acer, Asus, Fujitsu, Toshiba, Sony and co. don't want an easy few hundred million dollars. Or as if you're wrong and there's more than a logo involved.
You have absolutely no clue what you are talking about.
I'm paying for a high quality machine with many features already built in, integrated with a well designed, power-sipping, user security and user privacy optimized operating system that uses modern best practices under the hood that scale to current and future multicore processors, with a solid UNIX at its core with hardware drivers that just work. And amazing support like free replacement motherboards, hard disks, batteries, chargers, keyboards, screens, etc. under AppleCare if anything goes wrong.
Apple wants to keep the price down so they can keep their margins high
For enterprise cache-like applications this makes sense, but with DRAM prices not that far off (only a few times), I wonder if battery-backed DRAM might actually offer better value (and theoretically could be far higher performing) than having to replace worn-out SSDs periodically.
15 years? I'd say no longer than 25 years -- if we're still carrying around computing devices by then (and it hasn't all just been subsumed into the cloud)
van der Waals' radius of silicon is 0.21 nm http://en.wikipedia.org/wiki/Silicon so current 19nm process nodes are already 45 atoms wide, there simply isn't much room for improvement left.
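The "45 atoms wide" figure follows from using the van der Waals diameter (twice the 0.21 nm radius):

```python
# Rough check: how many silicon atoms fit across a 19 nm feature?
# Uses the van der Waals radius of 0.21 nm, so a diameter of 0.42 nm.
VDW_DIAMETER_NM = 2 * 0.21
FEATURE_NM = 19

atoms_across = FEATURE_NM / VDW_DIAMETER_NM  # ~45 atoms
```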
and it looks even deader for Flash than it does for CPUs, as ever finer processes are already producing ever slower and less reliable Flash cells. Those can be, and certainly are, mitigated by ever more extensive read/write parallelization and error correction in the controller, but only so much. "The drive is aimed at read-intensive applications" is a nice way to say that write performance sucks, relatively speaking.
So, if the semiconductor industry won't come up with something unheard of (which it just might, given the scale of the stakes involved) this might be one of the last radical upgrades available.
By 2030 seems like a fairly safe bet (barring major disaster before then).
[Edit: Yes, thanks, I meant TB, not GB. Fixed.]
Sounds reasonable, considering that we were at 1GB (HDD in laptops) 20 years ago.
Capacity will grow until it hits one of two types of limits: physics or the market. If physics doesn't allow for 1PB SSD in a notebook, you won't get it. If the market doesn't see the value in 1PB notebook disks (and I'd claim they won't see it, even after 15 years) you won't see notebooks with 1PB SSD.
"The drive is aimed at read-intensive applications, such as data warehousing, media streaming and web servers. The typical workload envisioned for the 4TB drive is 90% read and 10% write, SanDisk stated."
I don't think you're asking the right question. The more iops, the less space and the less cost. I'd say Netflix is always optimizing for both.
The right question to ask is what is the break-even point between space and cost where these drives make sense, and that would depend mostly on how popular the most popular content is. If everyone were watching the same few things, but enough that it can't just all fit in RAM, it would make a lot of sense to have a lot of these drives. But if the watching is spread out across a lot of content, then not so much.
According to instantwatcher.com, there are 6843 movies on Netflix streaming currently. If we conservatively say that the 1080p stream is 2 GB, and the other formats add up to another 2 GB, that's a total of ~27 TB of space required.
There are 3657 seasons of TV; let's say an average of 2 GB per episode, and an average of 14 episodes a season. That's ~102 TB.
So napkin math suggests ~129TB of storage to hold all of Netflix, or 33 of these drives.
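The napkin math above, spelled out (all per-title sizes are rough assumptions from the comment):

```python
# Estimate total storage for the Netflix streaming catalog.
movies = 6843
gb_per_movie = 4                 # 2 GB for 1080p + ~2 GB for other formats
tv_seasons = 3657
episodes_per_season = 14
gb_per_episode = 2

movie_tb = movies * gb_per_movie / 1000                            # ~27 TB
tv_tb = tv_seasons * episodes_per_season * gb_per_episode / 1000   # ~102 TB
total_tb = movie_tb + tv_tb                                        # ~129 TB
drives = -(-total_tb // 4)       # ceiling division: 4 TB drives needed -> 33
```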
Since they need redundancy and to have servers distributed around the country for peering, and these drives will probably cost ~$10k each (5x premium on consumer), I doubt they'll be putting their long tail content on them.
How many people in Boston are currently watching Mission Impossible, the TV series from 1966? Or content like it? A standard hard drive can serve 500 people HD streams so long as you cache ahead into memory intelligently in big enough chunks.
Within a few generations, SSDs will probably be cheap enough to not even bother with mechanical, for this and many other use cases.
One drive can fit 2000 movies. 75K iops/drive / 183 iops/client = 400 clients/drive (this is 900 Mbps, but the drive says up to 400 MB/s, so not bandwidth limited).
Put 16 in a 2U server for 6400 clients. This is 14 Gbps, so you need two 10 G NICs to complete our Netflix appliance.
But you are right, if caching works we don't need all these drives. With 128 GB of RAM we can fit 64 movies in RAM. I could easily believe that most people only watch the popular movies.
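The appliance sizing in the two comments above works out like this (the ~2.25 Mbps per-stream bitrate is my assumption, chosen to match the stated 900 Mbps per drive):

```python
# Per-drive and per-server capacity for a hypothetical streaming appliance.
drive_iops = 75_000
client_iops = 183
clients_per_drive = drive_iops // client_iops        # ~409, call it 400

stream_mbps = 2.25                                   # assumed HD stream bitrate
drive_mbps = 400 * stream_mbps                       # 900 Mbps, under the drive's 400 MB/s

drives_per_server = 16
clients_per_server = 400 * drives_per_server         # 6400 clients per 2U server
server_gbps = clients_per_server * stream_mbps / 1000  # ~14.4 Gbps -> two 10G NICs
```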
Do you have something to back that assertion up? Seriously, I'm interested, so if you have links to further information, I'm sure everyone would appreciate it.
If I find a good system-level paper, I'll post it.
Anyway, SSD drive wear tends to be rated like this: 5 years for x number of full drive writes per day. For good drives, x is 1. For really good SLC NAND based drives x could be as much as 10, but they are very expensive. For cheap consumer drives x could be 1/10. Based on vendor sales meetings, the aggressively large drives also have low write endurance (but then they say they are not optimized for heavy writes).
Better drives should have a built-in write counter so you can track the remaining life.
Another interesting parameter is their power-off "data retention" time. At end of life, this tends to be specified to be just 90 days.
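The "x full drive writes per day over 5 years" rating above implies a total-bytes-written budget; a minimal sketch (the `tbw` helper and its figures are illustrative, not from any datasheet):

```python
# Total bytes written (TBW) implied by a drive-writes-per-day (DWPD) rating:
# TBW = capacity * DWPD * 365 days * warranty years.
def tbw(capacity_tb, dwpd, years=5):
    return capacity_tb * dwpd * 365 * years

good_drive = tbw(1, 1)      # 1 TB drive at 1 DWPD: 1825 TB over 5 years
cheap_drive = tbw(1, 0.1)   # consumer-grade 0.1 DWPD: ~182.5 TB
```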
They were still going after 600 TB of writes, though with some degradation:
Netflix doesn't do their own hardware, they use AWS. One can also assume that the cost of the content far outweighs the cost of delivery.
I've heard certain details I'm unwilling to share, but some of what they do is public (https://www.netflix.com/openconnect/hardware).
Also, they pay developers to work on optimizing network performance of FreeBSD for this custom hardware.
Given AWS is one big Tetris game, jamming instances in where they fit, this has worked out well for both parties. Until now, with 'now' being defined as the dawn of distributed commodity compute.
If we start with a HDD/SSD price ratio of 1:7 (based on a quick check with Amazon) and hope for SSDs to get cheaper/TB by 50% each coming 18 months (?) we'd have a strong incentive to switch by year 2019.
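The 2019 estimate follows from asking how many 18-month halvings close a 7:1 price gap (assuming HDD $/TB stays flat, which is itself generous to SSDs):

```python
import math

# A 7:1 SSD/HDD price-per-TB gap closes after log2(7) halvings,
# at one halving every 18 months.
ratio = 7
halvings = math.log2(ratio)      # ~2.8 halvings
years = halvings * 1.5           # ~4.2 years, i.e. around 2019 from 2014-15
```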
Inexpensive, large external HDDs will fill the gap on media storage for consumers long after the main drive has been switched to SSD for speed.
Consumers aren't going to wait for SSD to catch up to 5 TB HDD externals that you'll be able to get for $129. They'll simply buy a system with 512 GB to 1 TB of SSD storage, and pick up the external if they need it.
I think you are being a little premature. All iMacs have hard drives by default.
HP/Lenovo have the top two spots, they're selling tens of millions per quarter. Apple barely breaks 5 million.
Apple doesn't have great business nor governmental support, which eliminates them from being considered at thousands of companies.
The revenue Dell pulled from their consumer division (which includes desktop/laptops/thin-clients/mobile devices and so on) is $8.9 billion.
> End User Computing revenue was $8.9 billion in the quarter, a 9 percent decrease. Operating income for the quarter was $224 million, a 65 percent decrease. Dell desktop and thin-client revenue declined 2 percent, mobility revenue declined 16 percent, and software from third parties and peripherals revenue declined 6 percent.
In Apple's '14 Q1 report, their Mac revenue was $6.4 billion. iPad: $11.4B and iPhone $32B for the total of ~$50B.
I did find articles from early 2012 calling Apple the biggest but that was of course based on iPads almost entirely. For example  has some figures from then, 5.2 million macs to over 10 million units each from HP, Lenovo and Dell.
I'm surprised by how crap the power usage is on the 730 though - it seems to reflect how the processor architecture in it originally came from the DC line, rather than the other way around.
Who would you say is competing with Intel in that space? Samsung doesn't make nearly the endurance guarantees, Crucial kept having hilarious firmware bugs, and I can't think of many other ubiquitous vendors with durability guarantees...
They also have a VERY limited write count per sector (probably in the hundreds) before they will start to fail; this has been handled by another PR statement: "The drive is aimed at read-intensive applications".
Enterprise pays for SSDs that are thoroughly tested at the firmware, controller, and memory levels. They're not paying for the fastest, just the most reliable.
Many of the consumer SSDs are not tested to this extent, that's why the prices are so cheap. Look at OCZ as an example.
Would you as a business owner pay $100 for 1000 SSDs that may glitch out early and have firmware bugs, or would you pay $300 for SSDs that are solid, without having to go through firmware updates, glitches, and so on?
Latter looks pretty good to me, particularly if you're used to spinning rust.
For the typical consumer (not necessarily someone deciding what to stuff into an HP 980), is it now clear that cost per byte is evening out, and that soon the typical array of laptops in your local Best Buy, including the cheaper of the lot, will have SSDs?
Tl;dr, are hard drives being completely phased out faster and faster?
I wonder if we will ever get NAND storage below mechanical in price per GB. Even in 2022 I can imagine 8-10 TB mechanical disks still being only $100, while the 2 TB SSDs cost about that. Because by then we are probably hitting some physical limits.
1. MLC at 512GB-1TB is all that's needed for the boot/app drives for the next 5 years. They're already at decent prices; 3D stacked chips and another smaller process should be possible within 5 years to bring 1TB to about $200.
2. TLC at 4TB+ for media/archival storage. I'd foresee a 2TB TLC drive at $100 within 5 years.