256TB SSD from Samsung (samsung.com)
94 points by chukfinley2 10 months ago | 144 comments



I was looking for some 8TB M.2 drives recently and they are still surprisingly expensive, considering you can get 2TB ones on sale for less than $100. And there are barely any options either; it's still a niche market: https://pcpartpicker.com/products/internal-hard-drive/#A=600...


Aside from lack of demand, there are some significant technical reasons that contribute to 8TB M.2 drives being expensive niche products, both stemming from the fact that 8TB of flash is still a lot of chips: PC OEMs prefer M.2 SSDs that are single-sided because many thin laptop designs require that, and many low-end SSD controllers literally don't have the pin count necessary to interface with that many flash chips.


The lack of demand vanishes once prices come down to something reasonable.


I think this might be an appropriate time to make the economist's distinction between the quantity demanded at current prices vs the overall shape and location of the demand curve. Lower prices would shift the market equilibrium to a different point along the demand curve, but there would still be far less demand for 8TB drives than for 1TB drives. Shifting the demand curve itself upward would require something like a big change in usage patterns causing more consumers to need 8TB of local storage. Without such a shift, consumers will mostly be happy to save money on 1-2TB drives rather than move up to higher capacities they don't need.


Those $100 2TB drives are probably because of this: https://www.theverge.com/22291828/sandisk-extreme-pro-portab...


Btw, since the thread is about both Samsung and SanDisk now, I can share a personal experience with both regarding the same kind of issue.

I had the exact same problem with SanDisk as explained in The Verge article. I had it in January, before it became a known major issue, so I guess they were not prepared for that. However, the process was smooth enough: I just filed a ticket with details, received a couple of questions, sent it to their service center and received a new one later. It works fine now.

But I had a similar situation with a Samsung SSD too; it died after a couple of months. But they made it impossible to replace it. The process is so bad that I was never able to get through to any support for that case, and never received any meaningful response. So I just gave up eventually, and I guess I just donated a few hundred dollars to Samsung.


Not sure. Samsung, WD, SK hynix 2TB drives are regularly on sale around ~$100

https://www.reddit.com/r/buildapcsales/search?sort=new&restr...


I also had a problem with a SanDisk 2TB portable SSD. But it dies for a few days and then comes back. It's weird. I will not keep anything important on that drive after this incident.


Have you checked if there are available firmware updates for your model?


I had that SanDisk 3 years ago. Same thing happened to me. Lost a lot of data. Now I've banned anything by SanDisk and even evangelized anything but SanDisk - probably a couple tens of millions of SanDisk revenue averted. I'm surprised that, 3 years later, the SanDisk Extreme SSD STILL has the same problem. Now it's probably easier to say "I told you so".


Sounds like someone is salty over not following 3-2-1 backups.


I see Samsung 2TB drives on sale for around $100 periodically, are they subject to the same problem?


Samsung drives have been fairly bulletproof, even among their budget offerings.

I've recently had an unexpected run of failing WD Blue SSDs and just had a new Sandisk drive (not one from gparent) fail, right after I backed another drive up to it.


I think with the rise of streaming and cloud storage, the need for large storage devices on a personal level is really declining for most people.


I think with the pricing of cloud storage, and photos, video and assets taking more and more space, the opposite is true.

For example, Apple requires larger storage to even use ProRes:

https://support.apple.com/en-us/HT212832

Or Baldur's Gate 3 takes 123GB of storage for the install.


People swallow the cost because it is easier than migrating. My partner has made a lot of albums, tagging in Google Photos extensively. When her account reached the free limit, I offered to use Google Takeout and host her pictures on a local Nextcloud instance, but told her I didn't have immediate time to dig into whether getting the tags and albums back from Google Takeout's metadata was possible, and she just grabbed her card details and subscribed to the drive expansion.

Gamers are an exception, but gaming is a niche market. Most people using computers are using laptops with integrated Intel GPUs and never install any games.


> For example, Apple requires larger storage to even use ProRes:

I don't think this is about capacity, but rather it's about performance: small SSDs are slower because they have fewer flash chips to write to in parallel. But that effect goes away as you move up to higher capacities and hit other bottlenecks; even high-end PCIe gen5 SSDs that are too power-hungry for anything other than desktops and servers reach full speed by 2TB.
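To put rough numbers on the parallelism argument, here's a back-of-the-envelope sketch; the per-die throughput, die capacity and controller ceiling are made-up placeholder figures, not specs of any real drive:

    # Sequential write speed scales with the number of NAND dies written in
    # parallel, until the controller/interface becomes the bottleneck.
    # All numbers are illustrative placeholders, not specs of a real drive.
    PER_DIE_WRITE_MBPS = 450      # assumed per-die program throughput
    DIE_CAPACITY_GB = 128         # assumed capacity of one NAND die
    CONTROLLER_LIMIT_MBPS = 7000  # assumed controller/interface ceiling
    def estimated_write_mbps(capacity_tb):
        dies = capacity_tb * 1000 / DIE_CAPACITY_GB
        return min(dies * PER_DIE_WRITE_MBPS, CONTROLLER_LIMIT_MBPS)
    for cap in (0.5, 1, 2, 4, 8):
        print(f"{cap:>4} TB -> ~{estimated_write_mbps(cap):.0f} MB/s")
    # Past ~2TB the ceiling dominates, so bigger drives stop getting faster.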


Well, you wouldn't normally try to put a 123GB game on Google drive.


Alternatively, with the ever-increasing exploitation of online providers, and the ongoing government-led attacks on secure encryption, maybe personal/local storage will turn out to be the smarter option after all. ;)


Don't get lulled into a false sense of privacy; governments can and will hack anything connected to the Internet, sometimes illegally, for trivial or whimsical reasons.


True, and probably why Apple sells laptops with such ridiculously low storage of 256GB, or charges insane amounts for reasonable amounts - because they want to force you into reliance on their services. But this penny-pinching policy does cause major problems, e.g. my daughter's system files have taken up 90% of her SSD - so now she can no longer upgrade or install new software.


Why the downvotes? This is observably true “for most people”


This is bizarre to me. Cloud storage kills the need for small drives. So for the individual with modest storage needs, the cloud is relatively inexpensive and thought free. But the moment you get over 2TB the cloud gets extremely expensive. So if you're a person who needs to store a lot, large drives make a lot of sense.

This, to me, is the opposite of observably true. You would think large drives would be most in demand for personal use.


> Cloud storage kills the need for small drives.

I'm not sure that cloud storage kills anything. There's reasonable mistrust of it, and much of the world lives with unreliable infrastructure.


Most people aren't able to self-host and maintain a more reliable infra for various reasons: lack of knowledge, lack of free time, they don't necessarily have 2 different houses, laziness, etc.


are you only asking/observing yourself and your tech-savvy friends?

cloud storage doesn’t make local storage irrelevant. just less relevant. drives still fail, new machines are built, etc, etc. and when that happens to the average consumer, they don’t seek out the biggest drive available. just whatever is down the street.

and just down the street doesn’t want to stock dozens of massive drives, lest the average consumers balk at the prices.


The cloud isn't supplying storage, the cloud is providing reasons not to need storage in the first place (piracy vs streaming).


Cloud services are providing both things.

Most of the people in my life rely on cloud storage for their digital lives - mostly Google Drive or iCloud. It’s the primary destination for photos and videos for many people, and services like Google docs both store files and remove the need for traditional local storage.


Most people never stored their TV on their personal devices in the first place. People shifted from rental to streaming / DVD & Blu-Ray to streaming.

Every year I have more pictures and more documents and more stuff in general to store. That or I have to fuss with pruning things, which is a PITA and I'm reluctant to do.


Naive question: Does the decline of smaller local storage undermine economies of scale that benefited larger local storage?


I only use cloud storage as a convenience / easy backup, not as "permanent" storage. My need for large storage devices hasn't really declined, it's pretty much crept up year by year.

Also - it starts getting very spendy for cloud storage after a few TB.


It's QLC NAND (~1000 program/erase cycles from https://www.purestorage.com/knowledge/what-is-qlc-flash.html).

eesh.

[Update] from https://www.anandtech.com/show/20007/samsung-teases-256-tb-s...:

"Samsung's 256 TB SSD is based on 3D QLC NAND memory and probably uses innovative packaging to cram multiple 3D QLC NAND devices into stacks."

3D QLC should have higher endurance than regular QLC. We'll have to wait and see for the spec sheet.


• 4-bit per cell = QLC ······ 1'000 write cycles per cell

• 3-bit per cell = TLC ······· 3'000 write cycles per cell

• 2-bit per cell = MLC ····· 10'000 write cycles per cell

• 2-bit per cell = eMLC ··· 30'000 write cycles per cell

• 1-bit per cell = SLC ··· 100'000 write cycles per cell

The above degradation cycles are optimistic. It looks like SSD technology is not evolving but counter-evolving in terms of data integrity; there are no MLC, eMLC, or SLC disks nowadays (seven years ago they were widely available).
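To make those cycle counts concrete, here's a rough sketch of total writes before wear-out; it just multiplies capacity by the P/E figures from the list above and ignores write amplification and over-provisioning, so treat it as an upper bound:

    # Back-of-the-envelope endurance: total writes ~= capacity x P/E cycles.
    # Ignores write amplification and over-provisioning.
    PE_CYCLES = {"QLC": 1_000, "TLC": 3_000, "MLC": 10_000, "eMLC": 30_000, "SLC": 100_000}
    def total_writes_pb(capacity_tb, cell_type):
        return capacity_tb * PE_CYCLES[cell_type] / 1_000  # TB * cycles -> PB
    for cell in PE_CYCLES:
        print(f"2 TB {cell}: ~{total_writes_pb(2, cell):,.0f} PB of writes")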


> It looks like SSD technology is not evolving but counter-evolving in terms of data integrity; there are no MLC, eMLC, or SLC disks nowadays (seven years ago they were widely available).

Some of this is from more advanced SSD controllers with more robust error correction, enabling drives to get better uncorrectable bit error rates out of flash with a higher raw bit error rate. But most of this is because SLC and MLC simply had more write endurance than people needed, and sacrificing endurance to get higher capacity made for more useful drives. And as larger SSDs have become more affordable, SSDs have been taking over more use cases from hard drives, so unlike a decade ago SSDs aren't just for frequently-modified hot data any more—you don't need as many P/E cycles or DWPD when most of your data isn't changing very often.


I think they've crossed the limit of what is an understandable sacrifice, given the endurance of the materials currently used in NAND.

Cell modes work by charging a cell to a level on write and measuring the voltage on read. The more bits per cell, the more voltage divisions, and the sooner those voltage divisions become unreadable as the material degrades. Technically it's up to the manufacturer's firmware to decide which cell mode the NAND will use, assuming the hardware allows it (ADC/DAC, etc.).
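As a toy illustration of the shrinking margins (the 3.0 V window is an arbitrary number; this is just the arithmetic, not a model of real NAND physics):

    # Each extra bit per cell doubles the number of charge states to distinguish,
    # which halves the voltage margin between adjacent states.
    VOLTAGE_WINDOW = 3.0  # volts, made up for illustration
    for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC")]:
        states = 2 ** bits
        margin_mv = VOLTAGE_WINDOW / states * 1000
        print(f"{name}: {states} states, ~{margin_mv:.0f} mV between adjacent levels")
    # Smaller margins mean charge leakage and wear make cells unreadable sooner.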

But even if the data is not changing very often, the disk's controller needs to do internal writes to refresh the cells, or even move the data to other cells, in order to avoid data loss.

So IMHO, they are not really giving more capacity, because that capacity is going to be consumed trying to keep the data alive. If one fills the QLC disk from the article for long-term storage, bad things will happen, as I understand it.

Maybe it should be the user who decides which bit mode the disk must use.

But what is happening, I think, is that over the years they have been stretching NAND like chewing gum by incrementing the bits per cell, hoping people adopt the reduced endurance as normal, priced per data stored rather than per NAND chips used, while they limit their chip production.


> So IMHO, they are not really giving more capacity, because that capacity is going to be consumed trying to keep the data alive. If one fills the QLC disk from the article for long-term storage, bad things will happen, as I understand it.

Your opinion aside, QLC drives really do offer more capacity for your dollar, and the extra write cycles necessary to stave off data degradation do not render the drives useless. You absolutely can fill a QLC drive and leave the data on there. You may need to perform scrubs a bit more often than you would for an equivalent TLC drive if you're not writing enough new data to keep things fresh, but that's hardly a showstopper.

> But what is happening, I think, is that over the years they have been stretching NAND like chewing gum by incrementing the bits per cell, hoping people adopt the reduced endurance as normal, priced per data stored rather than per NAND chips used

People have been predicting a catastrophe of low write endurance for many years, and it keeps not happening. There's some plausibility to the notion that consumers may end up getting screwed over by cheap drives that cannot hold up under reasonable usage, but consumer QLC drives have now been on the market for five years: the time limit on those drive warranties is now expiring, with no sign of an epidemic of premature drive failures. And in the enterprise SSD space, it is foolhardy to presume the big customers that would consider buying 256TB SSDs do not understand their workloads and endurance requirements well enough to correctly determine whether QLC is suitable for their needs.


> but consumer QLC drives have now been on the market for five years: the time limit on those drive warranties is now expiring, with no sign of an epidemic of premature drive failures

The failure doesn't show up as an epidemic; it shows up first as degradation: in forum questions about why the disk became so slow, where the accepted answer is to enable RAPID mode (i.e. use a RAM cache), in other posts as regrets about the purchase, and things like that.

Those who know what kind of thing they are buying with QLC store disposable data on it, and don't fill the disk.


I really don't see how conflating performance problems with data retention problems helps illustrate your point.


First the performance problem arises; if one keeps using the drive, progressive data loss follows. Those are the people on forums talking about high trim data errors.


[2018] doi.org/10.1145/3224432 https://people.inf.ethz.ch/omutlu/pub/3D-NAND-flash-lifetime...

[2021] doi.org/10.1145/3445814.3446733 (use sci-hub)

" 3D NAND density-increasing techniques, such as extensive stacking of cell layers, can amplify read disturbances and shorten SSD lifetime. From our lifetime-impact characterization on 8 state-of-the-art SSDs, we observe that the 3D TLC/QLC SSDs can be worn-out by low read-only workloads within their warranty period since a huge amount of read disturbance-induced rewrites are performed in the background. "

[..]

" the SSDs entered an era where one can wear out an SSD by simply reading it. "


> The PM9D3a achieves a mean time between failure (MTBF) of 2.5Mhr, a 25% improvement over the previous generation, allowing customers to operate more stable services.

This number doesn't reflect that, but maybe the MTBF is just a distraction.


Yes, MTBF as reported on the spec sheets is purely a distraction when it comes to SSD reliability.


It is very eesh. But tbf, this is precisely what I would use for single-drive monthly cold-storage backups of a large RAID NAS, for example.


NAND doesn't work for cold storage.

https://www.ibm.com/support/pages/node/6382908

> The JEDEC spec for Enterprise SSD drives requires that the drives retain data for a minimum of 3 months at 40C

>A system (and its enclosed drives) should be powered up at least 2 weeks after 2 months of system power off.


The bit that everyone likes to leave out about the 3 months at 40C retention standard is that it applies to drives at end of life, with all their rated write endurance used up. Drives that are not so worn out have much longer retention—and a drive used only for cold storage of infrequent backups would hardly use any of its write endurance.

The recommendation to have the drive be powered up for two weeks after taking it out of cold storage is only so that the drive can do its low-priority background checks of data integrity. If you do a full scrub of your data from the host system, you don't need to worry about that, because you'll be giving the drive all the same opportunities to discover bitrot.
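A host-side scrub can be as simple as re-hashing everything and comparing against a saved manifest; a minimal sketch (the manifest filename and hash choice are arbitrary, and filesystems like ZFS/Btrfs have proper built-in scrubs):

    # Minimal host-side scrub: hash every file under a directory and compare
    # against a previously saved manifest. Manifest path and hash are arbitrary.
    import hashlib, json, pathlib, sys
    MANIFEST = pathlib.Path("scrub_manifest.json")
    def hash_file(path):
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest()
    def scrub(root):
        old = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
        new = {str(p): hash_file(p) for p in pathlib.Path(root).rglob("*")
               if p.is_file() and p.resolve() != MANIFEST.resolve()}
        for path, digest in new.items():
            if path in old and old[path] != digest:
                print(f"BITROT? {path}", file=sys.stderr)
        MANIFEST.write_text(json.dumps(new, indent=2))
    if __name__ == "__main__":
        scrub(sys.argv[1] if len(sys.argv) > 1 else ".")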

None of the above changes the fact that SSDs are still quite expensive for cold storage purposes.


I'm curious what form factor the drive is. I've wondered if you could squeeze a bunch of m.2 SSD's into a 3.5 HDD case, how much capacity you would have. I wouldn't be surprised if this SSD ends up fitting the 3.5in standard form


I think the current replacement for 3.5" style drives is done with the U.2 connector / form factor (?). https://en.wikipedia.org/wiki/U.2

Not sure if anyone has maximized the space like you're thinking however.


I think all the record-setting high capacity drives have moved over to EDSFF E1.L "Ruler" form factor, though I'd also expect 64+TB drives to be offered in EDSFF E3 sizes that are the more direct successors to U.2/U.3 2.5" form factors.


TIL! Maybe I'll have a "EDSFF E1.L" in my homelab someday :)


Heat's going to be an issue, but I guess if the 3.5 enclosure was skeletal or otherwise provided adequate heatsinking, it'd be okay.


I'd worry about heat issues doing that.


Would love 2 of these in my NAS. My spinning drives are going on 8 years old now and starting to get noisy. Not to mention the entire thing is heavy.


How often are you moving your NAS?


I carry it with me wherever I go


My name is Nate. Nate Attached Storage?


Seems legit...


Hah! Not often, but I do have to shift it around to clean the fans every few months. It's in a big ole tower case. Have 8 drives. It's a bear.


How long will it take to resilver an array of these?


You'd be using a different strategy, such as erasure coding or a variant/evolution, where you don't need to reinstate the contents on a new device when it's inserted into the array.
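As a toy illustration of the erasure-coding idea (single XOR parity, roughly RAID-5; real systems use Reed-Solomon-style codes that survive multiple lost devices):

    # Toy single-parity erasure code: any ONE lost data shard can be rebuilt
    # by XOR-ing the parity shard with the surviving data shards.
    from functools import reduce
    def xor_shards(shards):
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shards)
    data = [b"aaaa", b"bbbb", b"cccc"]   # three equal-length data shards
    parity = xor_shards(data)
    lost = data.pop(1)                   # pretend the drive holding shard 1 died
    recovered = xor_shards(data + [parity])
    assert recovered == lost             # XOR of survivors + parity restores it
    print("rebuilt:", recovered)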


I remember there was a lot of stigma around QLC drives, but will these eventually get developed to the point where speed/reliability > MLC/TLC?


No, because MLC/TLC speed/reliability is a moving target.

QLC is taking over more and more of the market every year, though, and will continue to do so, because among other things it's pretty much perfect for consumer devices: good enough performance, especially with a fat cache in front of it, combined with lower prices.


Are there actual situations where you would even want this much data on a single drive?


I am 100.0% sure this is what someone has said for 256GB drives, 256MB drives, 256KB drives, and the same person will be saying it for 256YB drives.


I clearly remember getting an 80gb drive, circa 2001-2002 I think, and talking with my friends about how impossible it would be to ever fill.


The problem is that the size of media has been growing exponentially.

Whenever I wonder how my phone is running out of space, every time it's images / videos. Even when you look at an app that is like 400 MB, it's not 400 MB of code; it's like 350 MB of images and 50 MB of code.


I’d argue media storage usage is starting to level off somewhat because we’re approaching the limits of human perception. For movie content people with average to good eyesight can’t tell the difference between 4K and 8K.

Environmental regulations also bite, an 8K tv that’s “green” is going to have to use very aggressive auto dimming. Storage capacity growth to me looks like it’s outstripping media size growth pretty handily.

Now this isn’t to say I can’t think of a few ways to use a few yottabytes of data but I don’t think there is a real need for this for media consumption. You might see media sizes increase anyways because why not store your movies as 16K RAW files if you have the storage, but such things will become increasingly frivolous.


I would agree with you; but as technology improves we move the goalposts.

iPhones for example capture a small collection of images at the same time which are able to be replayed as a small animation (or loop) called “Live photos”.

I am certain of what the future holds for us: video which allows us to pan left and right.

These both require more space.


Interestingly enough I've been messing around with ffmpeg recently and the newest high end codecs (VVC / h266) drop HD video size by 30% or more, it's pretty crazy.

It'll be very interesting where AVIF and similar next generation image formats go in the near future, hopefully we'll get some reduction from the exponential growth.


I laughed today when it was announced Baldur's Gate 3 is 120GB.


Even with all these advancements, 120GB for a game is still and always will be a lot!


Having many very high resolution textures adds up. I’m sure we’ll see it rise, especially in the age of generated textures and materials.


>I’m sure we’ll see it rise, especially in the age of generated textures and materials.

if generative AIs get good enough then I suppose at some point the data transmitted for games and media could be significantly less than now -- you'd 'just' need to transmit the data required for the prompt to generate something within some bounded tolerance.

Imagine a game shipping no textures, generating what was needed on the fly and in real time to fulfill some shipped set of priorities/prompts/flavors.

we're not there yet but it seems like on-the-fly squashing of concepts into 'AI language' is going to be a trend in lossy compression at some point.


There are actually a lot of procedural games out there, I think No Man's Sky uses some of those techniques, but they definitely have been around since the 80s. The thing now is that the fidelity can be much higher, for sure.


Even 50MB of code is insane.


I remember being a kid at Babbage's at the mall in the 90s, and some guy told my friend and me that he had just built a system with 8 gigs of storage, and my friend and I talked about it endlessly as the coolest thing ever.


If it helps, circa 1993 I bought an Apple PowerBook (for, at the time, an awful amount of money) running System 7 that came with a 40, 80 or 120 MB disk:

https://en.wikipedia.org/wiki/Powerbook_160

I chose the 80 MB version as the 40 was too little, and the 120 was way too much for non-professional use (impossible to ever fill).


While I agree, it's been hard filling up the 2TB drive in my laptop.

My home server has a couple dozen terabytes (on spinning metal) and, with current fill rate, it's predicted it'll need an increase in space only after two of the drives reach retirement according to SMART. It hosts multiple development VMs and stores backups for all computers in the house.

Another aspect is that the total write lifetime is a multiple of the drive capacity. You can treat a 256TB drive as a very durable 16TB drive, able to last 16 times more writes than the 16TB one.


>While I agree, it's been hard filling up the 2TB drive in my laptop.

Then you're definitely not torrenting as much "definitely legit" content as I am. Once you sail the dark seas it piles up quick. Or maybe I have ADHD.


Don't even have to set sail; this landlubber likes to shoot videos with a smartphone, and these days, recording a few minutes of a family event, or even your plane taking off, in decent quality, will easily give you a multi-gigabyte video file. And that's for normal videos; $deity help you if you enable HDR.

And yes, this is the universal answer to "how much storage is enough" - use cases will grow to consume generally-available computing resources. Today it's 4k UHD + HDR; tomorrow it'll be 8k UHD + HDR, few years later it will be 120 FPS lightfield recording with separate high-resolution radar depth map channel. And as long as progress in display tech keeps pace, the benefits will be apparent, and adoption will be swift.


I'll be curious to see the file sizes for Apple's version of 3D video capture in their Vision goggles. After one, two or three generations, I'm sure the first gen files will look small and lacking.


Of course. It won't encode touch and smell.


I've actually found my videos are not increasing as rapidly as I would expect. I've been reencoding in x265 and the file size difference is shocking. Right now I'm not ditching the existing original files but I may do that at some point, or just offload to a cloud service like Glacier


I’m right up next to a limit on live (easily-accessible, always visible in photo apps) cloud storage, with years of family photos and video taking about 95% of that.

I definitely don’t want to delete any of it, so I have been just hoping for bigger storage to be offered soon, but…

I hadn’t considered that re-encoding could be an option. I take standalone snapshots of everything every few months so if re-encoding would make a significant difference I might have to try this.

Do you have any tips on tools, parameters etc. that work well for you, please?


I use a shell script with ffmpeg. I encourage you to check out what works best for you but honestly the quality is pretty stellar with just a really simple one like

    # re-encode every .mp4 in the current directory into ./reencoded/
    mkdir -p reencoded

    for f in *.mp4; do
      ffmpeg -i "$f" -c:v libx265 -crf 26 -preset fast -c:a aac -b:a 128k "reencoded/$f"
    done
That's a fast single-pass constant quality encode - a two-pass encode would be better quality for the size but I find that very acceptable. It knocks down what would be a ~2gb file all the way to between 800mb - 1200mb with very reasonable quality, sometimes even more - I've seen a 5gb file become a 400mb file (!!). You can experiment with the -crf 26 parameter to get the quality/size tradeoff you like. I run that over every video in the directory as a cron job basically.


I think, for me, it satisfies some kind of hoarding instinct. I have a hard time keeping 'random junk' laying around my apartment, but I have absolutely no problem keeping a copy of a DVD I ripped 15 years ago that I will probably never watch again, and would probably be upset if it disappeared for some reason.


Or just download a few modern games.

No torrents here, absolutely none.


Blu-rays can take up 25gb each, so just a decent collection of those could easily consume most of one of these drives. If you want to do basic model tuning in stable diffusion, each model variation can take 7gb. This level of storage would mean you could almost setup a versioning system for those. And finally, any work with uncompressed data, which can just be easier in general, could benefit from it.


256TB is 10,000 25GB BD movies.

Even with brand new 25TB 3.5" drives, it's 10 of them, each holding 1,000 movies, for a total of 20,000 hours of entertainment or, roughly, 2 years of uninterrupted watching.

That's a lot.


Oh look at Mr. “I pay legitimate streaming services for all my tv shows and movies” over there. (=

I have a 12 TB NAS that is 99% full at the moment. Should I delete movies I may want to watch later, knowing full well they aren’t easily available on the streaming services I pay for? Ha.

It fills fast!


The people streaming their data are just using someone else's SSD... at the end of the day all this data we generate and consume sits somewhere


I'm also deduplicating that data by not storing it locally.


They have 16TB NAS drives for $300 now so if you have decent disposable income just upgrading the drives one by one is probably a decent strategy.


Sounds like you need a bigger NAS. I throw a fresh 10TB drive in mine every 3-4 months.


Start. Smart to cycle drives out after 3 years too.


I'm thrifty, I buy drives that businesses already cycled out after 3-5 years.


You laugh in the face of danger; a real risk taker! (=

I love all the Backblaze drive update posts about the lifespans of storage media.


It's interesting to think that, as flash densities surpass hard disks, it'll become cheaper to store data on flash than on spinning metal once you factor in rack space and power consumption.

Won't take that long.
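One way to frame that crossover is a toy total-cost-of-ownership comparison; every figure below (drive prices, wattage, power and rack costs) is a placeholder assumption, not current market data:

    # Toy 5-year TCO for 1 PB: drive cost + electricity + rack space.
    # Every number here is an illustrative assumption, not real market data.
    YEARS = 5
    POWER_COST_PER_KWH = 0.15         # assumed $/kWh
    RACK_COST_PER_U_PER_YEAR = 300.0  # assumed $ per rack unit per year
    def tco(drive_tb, drive_price, watts, drives_per_u, petabytes=1):
        drives = petabytes * 1000 / drive_tb
        capex = drives * drive_price
        energy_kwh = drives * watts * 24 * 365 * YEARS / 1000
        rack_u = drives / drives_per_u
        return capex + energy_kwh * POWER_COST_PER_KWH + rack_u * RACK_COST_PER_U_PER_YEAR * YEARS
    print(f"HDD (20TB): ${tco(20, 400, 8, 12):,.0f}")
    print(f"SSD (32TB): ${tco(32, 2500, 12, 24):,.0f}")
    # As flash $/TB keeps falling, the power and rack-space terms increasingly favour SSDs.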


Usenet is my backup. I've tried to make Backblaze my backup a couple times but the ETA on completing the first pass is always right about never.


For the kind of usage a streaming device has, an SSD is overkill. For that, spinning metal is probably a better choice. OTOH, 256TB of spinning metal take up space and is quite noisy.


Once you start saving media or playing with AI models, space goes quickly.


That's what the server is for.


And there are many reasons for one to prefer having their workstation be their server.


In fairness, it has a screen and keyboard connected, but no mouse. Adding a mouse would be trivial.

Anyway, 20TB takes a 3.5" bay, something my laptop lacks.


>256YB drives

Ah yes, Yagnibytes.


Y'otta look that one up.


"640K ought to be enough for anybody"


Was this the same person who said that 640KB ought to be enough for anybody?


They are probably still right. How much of the computing resources we all now have access to do we actually need?


Many novels are more than 640KB of ASCII text.


They are, but ought they?


Makes you wonder how people read long novels before they had enough RAM!


I vividly remember seeing a 5TB drive at Fry's Electronics sometime around 2010-2013 and thinking to myself, "Who in God's name would ever need that much space?"

I now have 24 terabytes in my NAS


But practically don’t you reach a threshold where storing that much data on one drive makes it a bottleneck and safety risk until the speed of the surrounding systems catch up?


I do digital histology, and our (research) lab currently has 204TB of image files. They live in a data center, of course, but if my institution decided to spin us off as a company or something and we needed to move the data, it'd be way faster to download it to disk and upload it in the destination center. I'm not really sure we'd do it with just one giant drive instead of a whole lot of 1TB ones, but who knows.

(I'm currently working on sending 100TB of images to some colleagues at the NIH for a study, we're doing it about 500GB a night for the next year or however long it'll take just because there's no hurry on the data, so it's not just some academic thought exercise!)


Exactly, we just gobble up all the storage there is. In diagnostics it's easily 250GB per patient just for HEs. And if stuff like CODEX or light-sheet (or some other 3D) microscopy become common place even these drives won't be enough.


That's just five 20TB drives. Some cloud providers will copy it onto drives if you send them in, and it's usually cheaper than the bandwidth cost. Same for ingress.


A drive these days is a CPU, memory, and some flash chips. If the CPU and memory are swappable (isn't in consumer SSDs, no idea about enterprise), then one drive today is really many independent pieces of storage media. Thus, you'd imagine the failure case to be more like the failure case for one entire storage server (pipe drips on it, tornado sucks it up) rather than worrying about the failure case for mechanically-linked hermetically-sealed platters spinning at high speed.

There is always the chance that all the flash chips fail at the same time because of a manufacturing defect. That has always been the gamble over multiple drives as well; many documented cases of all the drives in a RAID array failing at the same time. (This happened to me once! Terrible shipping from NewEgg damaged all 3 drives in my 3x RAID-1 array. I manually repaired them by RMA-ing the disks one at a time; fortunately different blocks failed on different drives and with 3 drives you could do a "best of 2".)

No matter how many independent drives you have, you will always need your data stored in multiple data centers to survive natural disasters. So I don't see 256TB in one device any differently than putting 32 8TB SSDs in one server. If you need that much storage, you spend less of your day plugging it into your computer. Savings!


This is why:

>Compared to stacking eight 32TB SSDs, one 256TB SSD consumes approximately seven times less power, despite storing the same amount of data.

This is aimed for servers, where the electricity costs are higher because they are running 24/7 and space is limited. Also less power means less heat.


I bought my first HDD in the mid-90s: 850MB. There was a 1.2GB model, but I thought, "Why would I ever need that much space?" This was before videos, before mp3s, and images were all low quality jpegs.


3D scans are the future.

Imagine how much data you could generate daily!

Those aside, a FHD movie in good quality still takes 10-30GB.

We are still limited by size and internet speed and need to compress data.


Back in the summer of 1989, I did an internship at Imprimis (later bought by Seagate). The big thing they launched that summer was the Wren VII hard drive, which was the first consumer hard drive with 1GB of storage. It was massive!

I learned a lot about Statistical Quality Control that summer, and built them some tools for improving their SQC across all their models.


Yes, there are big data applications where direct-attached storage density has a large impact on the economics of working with and analyzing that data. This is mostly due to bandwidth constraints and the fact that many analytical workloads can run efficiently with limited compute/memory relative to storage. Using a vast number of servers when they are not actually required introduces its own challenges. Sensing and telemetry data models sometimes have this property.


Yeah, when you're trying to pack as much storage into a small space, such as data centers with expensive real estate. Or where distance is constrained due to latency, such as high end machine learning workloads.


npm install **/*


It's likely that `size("npm install */*")` grows faster than drive capacities.


OLAP/data analysis.

If I have a century’s worth of point-in-time data, and I want to quickly run various tests against it: data marshaling will kill my soul if I’m using an HDD or various incarnations of hot-cold disk schemes.

Granted, I’ll have to read it in 1TB “pages” since motherboard and RAM engineering haven’t gotten us very far.


Some kind of scaled-up database server.

They're built now with many smaller SSDs attached, and maybe with this large one all the maintenance processes will be encapsulated in one component, so you can think about it as a NAS placed directly inside the server.


Datacenters where you are paying a huge premium on every square inch of physical space used. Also currently every other common way to store this much data eats a lot more power and generates more heat.


I think any data center would want some of these as long as the speeds are enough to reconstruct a disk after a failure in a reasonable time in a RAID configuration.

Data density is a hell of a drug.


Yes, there are scientific computing problems that are significantly easier if you can store the entire dataset together.


someday those 60GB bluray rips I see on trackers might be viable for a local media server.


I’d argue they already are, it’s not infeasible to get ~320tb into an overstuffed NAS (two 10 drive arrays with two drives as parity) currently with a few drives being overly hot and we’re seeing HAMR hit the market now and should probably easily see density double before too long. We’re at the point you can home movie library the size of a streaming site’s library for less than 10k without needing a rack. If you drop density a bit and use used enterprise SAS drives you could probably get things done for 5k + a decent power bill. Still inaccessible to most but plenty accessible to an enthusiast with some disposable income.

QLC would be nice for such a home application over noise/space/power usage concerns but the cost is still extremely high.


it's wild because the 4gb h265 ones are great

I have 4k screens and an okay sound system, who are these whatever-philes that think these 60gb rips change the experience meaningfully

like, how many audio tracks you need and the other 50gb so you can stare at black pixels and say “wow these blacks are so impressive”


I can absolutely tell the difference and it’s frankly annoying to collect something which will become obsolete not just within your lifespan but when you’ll likely have a lot of lifespan left to go. 4K blurays to me hold up compared to 5k/6k/8k raw footage to the point I wouldn’t be too troubled if I had an old format on my NAS when an h.267 8k version came out.

You can have a great experience by streaming videos directly off shady streaming sites and using a cheap Hisense TV but there will always be a market for people dropping tens to hundreds of grand on some elite home theatre with inky blacks and perfect sound and maybe a datacenter storage array in their garage. It’s practically a hobby.


you know, I’ve never actually bought a bluray video before, maybe I’ll pop one in my ps5 and see if I’m blown away, or more likely if the other sources start having things I cant unsee

I did recently get back into piracy, and did a couple comparisons since I figured 4gb h265 is so last decade, and I really wasnt amused by the larger bluray rips. I have 20/10 vision acuity and can also see a broad vivid colorspace. I have great monitors and screens chosen for that color space and quality too.


Honestly if we compare a 10gb compressed re-encode to a 4K bluray rip or remux I would have trouble telling the difference but 4gb is pretty small especially for certain kinds of movies.

There’s certain scenes where I can definitely see compression artifacts or whatever and others where quality differences cannot be noticed.


My guess: this probably solves the issue with compression artifacts creating annoying blocks of color in dark/black areas of the video, which is increasingly important as the past decade had movie and TV show makers all switch to shooting everything ridiculously dark.


How far away are you sitting from your TV, and how big is it?


Compact and shock-proof storage for video data? Sure.

Other cases?

It can be a 1T in some use-cases, but DWPD is probably low.


Are you referring to the relationship of network to fill it?


Any bets on the price for this?


It's probably one of those 'if you have to ask, it's out of your budget' type things.


A couple teradollars ;-)


If they actually brought it to market now as a high-volume product, it would probably need to be closer to $20k than $25k to be successful; I haven't tried to estimate how much of a premium it could get away with on account of the high density.




