To parallel an old saying: "big, fast, reliable - pick two."
The terms "triple-level cell" (TLC) and "quad-level cell" (QLC) are entirely misleading, since they imply the number of distinct voltage levels in each cell only grows in step with the bit count. In reality a TLC cell must distinguish 2^3 = 8 levels and a QLC cell 2^4 = 16. One wonders if the naming deliberately downplays the associated reliability issues by making the growth seem multiplicative instead of exponential. These should really be called triple-bit/quad-bit or eight-level/sixteen-level cells.
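To put rough numbers on that, assuming a fixed sensing window (the 5 V figure below is just an assumed placeholder, not a datasheet value), the spacing between adjacent voltage levels shrinks like this:

    WINDOW_V = 5.0  # assumed usable threshold-voltage window (placeholder)

    for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
        levels = 2 ** bits                 # levels grow exponentially with bits
        margin = WINDOW_V / (levels - 1)   # spacing between adjacent levels
        print(f"{name}: {bits} bit(s) -> {levels} levels, ~{margin * 1000:.0f} mV apart")

Going from SLC to QLC cuts the level spacing by a factor of 15, which is where the reliability pain comes from.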
 Remember when flash disks had real binary capacities? I have a 64MB USB drive that really contains 131,072 user-accessible 512-byte sectors --- and it's still working, because it's SLC flash.
I've worked with NAND flash over the years and noticed something interesting: you can very easily find datasheets for SLC and MLC (2-bit) flash, which give the endurance/retention figures. But there is very little on TLC (3-bit) flash --- almost all of it is NDA'd leaks, which were already rare a few years ago and seem to have mostly disappeared now --- and basically nothing on QLC (4-bit). Why the secrecy? It makes one wonder if there's something inconvenient about this high-density flash that they don't want people to know...
In this case, perhaps they decided to pick "big" twice.
Eventually error correction will take more bits than it saves and the number of levels per cell will stall.
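As a crude illustration of where that crossover might sit (illustrative figures only, assuming a BCH-style code over a 2 KiB codeword, where each extra correctable bit costs roughly log2 of the codeword length in parity bits):

    import math

    data_bits = 2048 * 8                      # 2 KiB of user data per codeword
    m = math.ceil(math.log2(data_bits)) + 1   # ~15 parity bits per corrected error

    for t in (8, 40, 120, 400):               # correctable errors per codeword
        parity = t * m
        overhead = parity / (data_bits + parity)
        print(f"t = {t:3d}: {parity:5d} parity bits, {overhead:.1%} overhead")

Going from 3 to 4 bits per cell only buys ~33% more raw capacity, so once the extra parity needed approaches that, the additional bit stops paying for itself.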
I would say our SSDs are actually much more resistant to errors than hard drives were, because the error correction in spinning disks is generally crap.
With flash, manufacturers have been forced to include extremely good ECC schemes because cells die all the time. Since errors are generally random and happen in large numbers, the life of the drive becomes predictable instead of the death cliff we used to see with spinning disks.
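A toy simulation of that effect (all numbers below are made-up assumptions, not vendor data): if the raw bit error rate climbs smoothly with program/erase cycles and a sector only becomes unreadable once its random errors exceed the ECC limit, the failure points of thousands of sectors cluster tightly instead of scattering:

    import math, random

    SECTOR_BITS = 4096 * 8   # 4 KiB sectors
    ECC_LIMIT = 60           # correctable bit errors per sector (assumed)
    SECTORS = 2000
    STEP = 100               # re-check every 100 P/E cycles

    def rber(cycles):
        # assumed smooth wear curve: raw bit error rate grows with cycling
        return 1e-6 * (cycles / 100) ** 2

    def poisson(lam, rng):
        # Knuth's method; fine for the small means used here
        limit, k, p = math.exp(-lam), 0, 1.0
        while p > limit:
            k += 1
            p *= rng.random()
        return k - 1

    def failure_cycle(rng):
        cycles = 0
        while True:
            cycles += STEP
            if poisson(SECTOR_BITS * rber(cycles), rng) > ECC_LIMIT:
                return cycles

    rng = random.Random(1)
    fails = sorted(failure_cycle(rng) for _ in range(SECTORS))
    print("earliest sector death:", fails[0], "cycles")
    print("median sector death  :", fails[SECTORS // 2], "cycles")
    print("latest sector death  :", fails[-1], "cycles")

The earliest, median, and latest failures land within a narrow band of cycle counts, which is exactly the "predictable wear-out" behaviour, as opposed to one sudden mechanical death.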
Part of it is also SandForce. The flash to make SSDs existed a few years before they came on the scene, but they were the catalyst: they created a sophisticated controller that could do very good error correction and wear leveling, allowing the first practical SSDs to be built.
Doesn't this mean there isn't much room for improvement?
Edit: I'm dumb. Big storage. Duhhh!
By the way, unless there is some technology I don't know about, microSD cards have consistently been the highest density storage devices for many years running.
That's because they're not much bigger than the raw NAND flash die itself, encapsulated in plastic: http://bunniestudios.com/blog/images/microsd_lineup.jpg
Of course, biological data storage (DNA, proteins, etc.) has the potential to be several orders of magnitude denser ( https://news.ycombinator.com/item?id=4396931 ), but the technology is still in development and highly error-prone.
Now the same storage is the size of a thimble.
Of course the competition was 74 KByte floppies at the time.
which apparently has been moved out of the basement of Gates Hall but is still on display elsewhere at Stanford.
(Note that this is from 1996, so it's apparently 40 GB rather than 1 TB.)
Unfortunately, almost all phones continue to support microSD cards "up to 32 GB". Some will (reportedly) work with 64 GB cards, but if I understand correctly, others need hacks/tweaks to access >32 GB cards ...
Looks like you're in luck.
Nevertheless, in order to be fully compliant with the SDXC card specification, many SDXC-capable host devices are firmware-programmed to expect exFAT on cards larger than 32 GB. Consequently, they may not accept SDXC cards reformatted as FAT32, even if the device supports FAT32 on smaller cards (for SDHC compatibility).
That seems like a gross "abstraction layer violation" to me --- like making SATA controllers which work with SATA6 HDDs only when formatted with NTFS. The filesystem should have nothing to do with SD, which simply implements a block device abstraction. No doubt Microsoft was involved in this ridiculousness...
There is a point to this: SD cards are used in all sorts of low-level embedded applications, with firmware that uses very inflexible (usually assembler) code to address the card. The assumptions about the SD wire protocol + the FAT32 filesystem + even the particular location of the FAT table on disk are all knotted together in these devices.
Because of this, it's not even just that SDHC requires FAT32; SDHC requires FAT32 done with a specific formatting tool released by the SD standards group. Because your OS might put the FAT anywhere, but the tool puts it in exactly one place, and that's the place that dumb embedded devices expect to find it.
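For anyone curious what "the particular location of the FAT" actually means: on a properly parsed FAT32 volume you'd read it out of the boot sector, roughly as in the sketch below (field offsets per the FAT32 BPB layout; this is not a full FAT driver, and the device path is just an example). The dumb embedded devices in question skip this parsing and bake the resulting offsets in as constants, which only works if every card is formatted exactly the same way.

    import struct

    def fat_offsets(boot_sector: bytes):
        # FAT32 BIOS Parameter Block fields, little-endian
        bytes_per_sector, = struct.unpack_from("<H", boot_sector, 11)
        reserved_sectors, = struct.unpack_from("<H", boot_sector, 14)
        num_fats,         = struct.unpack_from("<B", boot_sector, 16)
        sectors_per_fat,  = struct.unpack_from("<I", boot_sector, 36)  # FAT32 only
        fat_start = reserved_sectors * bytes_per_sector
        data_start = fat_start + num_fats * sectors_per_fat * bytes_per_sector
        return fat_start, data_start

    # Usage sketch (needs read access to the raw partition):
    # with open("/dev/mmcblk0p1", "rb") as dev:
    #     print(fat_offsets(dev.read(512)))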
Which is the same reason that there is no implementation in the kernel, only FUSE. Ugh.
From experience, the Galaxy S7 supports at least 128GB (a quick search returns a 256GB max). S6 didn't support microSD, S5 supports 128GB, S4 supports 64GB, S3 supports 64GB.
Some of us are already living in that future by using an mp3 player with two uSD slots :)
In past experiments I was able to shrink 80GB down to about 15GB.
So if anyone has a huge music collection, you can still probably fit it on your phone without going crazy on the SD card. Just checked: my collection is about 2 months of uninterrupted runtime right now.
Also my TRS-80 Color Computers, my Apple IIgs systems - I need my own personal computer museum for all my old systems...
Reliability depends on the particular card and vendor. I'm using a JetDrive (an SD card specially designed to snugly fit the MacBook Air's SD card slot) and have had zero problems.
IIRC you can even get a Mac to boot off an SD card (e.g. to run Debian occasionally), but the performance is better with a USB3-connected SSD.
It would be nice if I could ditch the bulky spinning hard drive (or even the SSD) and use a microSDXC card. I've tried that before, and while it did work, the cards quickly (within maybe a month or two) wore out and started throwing errors. I can usually get at least 3 or 4 years out of a traditional spinning-platter drive.
As impressive as this feat is, and it really boggles the mind, I've had a large number of SanDisk MSDs fail on me before I switched to Samsung. I'm not very likely to switch back.
I'm doing a project soon involving a Raspberry Pi. I've used RPis for various things before, but the difference this time is that it's going to be installed at a client's location. I am quite nervous that the microSD card will fail. For my own stuff I can easily write the image to a new card and it doesn't matter, but with a client who expects the thing I'm making to work, it would be painful to have it fail.
Is there some ready-made Linux distro for the Raspberry Pi that is extremely small and is designed to load itself entirely into memory and never write back to the SD card? It'd be extremely helpful to know if any such distro exists. Preferably it should be based on Raspbian, just slimmed down and with logging etc. removed. That way the kernel itself would be more reliable, so that it won't crash randomly for reasons that have otherwise been fixed in Raspbian but might not be in an independently maintained kernel.
You could also use network boot:
To answer your question directly, yes there are Linux distros that minimise/eliminate SD card usage. Aside from being able to mount the SD storage as a read-only device (like you can with live Linux CDs), there are some minimal distros that boot to RAM. One example is Tiny Core Linux:
The latest RPi build was released only a few months ago:
For Pi Zero:
For Pi 3:
As a last bit of advice, there are quite a few different storage options available for the Raspberry Pi. Here's one that you may be interested in (you can buy the hard drive separately if you didn't want to use the Raspberry Pi Zero):
Obviously SD cards are even worse than SSDs when it comes to reliability.
Now, it's more like ~1K for (good) TLC and closer to 100 for QLC. There are no official numbers I can find on TLC/QLC endurance or retention (besides a lot of handwavy and weasel-worded marketing saying it's "good enough"), but the degradation is exponential with the number of bits, not multiplicative, so one can make an educated guess.
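To see what those cycle counts mean in practice, here's a crude lifetime estimate (the endurance figures, write amplification, and workload below are assumptions for illustration, not datasheet values):

    CAPACITY_GB = 400
    WRITE_AMPLIFICATION = 3    # assumed controller overhead
    DAILY_WRITES_GB = 20       # assumed workload

    for name, pe_cycles in [("MLC", 3000), ("TLC", 1000), ("QLC", 100)]:
        tbw_gb = CAPACITY_GB * pe_cycles / WRITE_AMPLIFICATION  # total writes the NAND can absorb
        years = tbw_gb / DAILY_WRITES_GB / 365
        print(f"{name}: ~{tbw_gb / 1000:.0f} TB written, ~{years:.1f} years at {DAILY_WRITES_GB} GB/day")

Even with fairly generous assumptions, the QLC case shrinks to a couple of years under a light workload.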
 http://www.farnell.com/datasheets/1382714.pdf note date, and see page 3.
 http://pdf.dzsc.com/88888/2008422102742719.pdf also note date, and look at 3rd item in the revision history
I was going off of the datasheets for flash chips I've been using, but I guess things are different for really high-density ones.
In 2028, maybe a 40TB microSD might cost $300? (@10 years of mp4?) And by 2040, 4 petabytes? (@1000 years of mp4?) Who's gonna have time to watch all that?
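A quick sanity check of those parentheticals, assuming "years of mp4" means continuous playback at a constant bitrate:

    def implied_mbps(capacity_bytes, years):
        seconds = years * 365.25 * 24 * 3600
        return capacity_bytes * 8 / seconds / 1e6

    print(f"40 TB over 10 years  : ~{implied_mbps(40e12, 10):.1f} Mbit/s")
    print(f"4 PB over 1000 years : ~{implied_mbps(4e15, 1000):.1f} Mbit/s")

Both work out to roughly 1 Mbit/s, i.e. fairly low-bitrate video, so the figures are at least internally consistent.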
So much local client storage is coming. With less network dependency and more stored locally, will our apps process our memories faster?
What if it had an 8k camera recording in every cardinal direction as well as facing upwards and below for a 'VR' like experience?
What if it had 16 of those to page between different perspectives in that scene?
4U rack volume: 450.85 mm x (4 x 44.45 mm) x 800 mm = 64.128904e-3 m³
At a 100% usable volume ratio that would be 555228 microSD cards, or 222 petabytes, and cost $55,000,000,000 (assembly and shipping not included).
The storage density of the cards is 2.77e19 bit/m³.
Assuming a saturated gigabit (100 MB/s) connection, uploading your backup would take 70.4 years.
555228 x $250 = $139M, not $55B.
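Reproducing the back-of-the-envelope numbers above, with the corrected cost (assumptions: 15 x 11 x 0.7 mm per card, 400 GB and $250 per card, 100 MB/s sustained upload):

    rack_mm3 = 450.85 * (4 * 44.45) * 800    # 4U enclosure volume in mm^3
    card_mm3 = 15 * 11 * 0.7                 # microSD card volume in mm^3
    cards = int(rack_mm3 / card_mm3)
    capacity_pb = cards * 400e9 / 1e15
    cost_musd = cards * 250 / 1e6
    upload_years = cards * 400e9 / 100e6 / (365.25 * 24 * 3600)

    print(f"{cards} cards, {capacity_pb:.0f} PB, ${cost_musd:.0f}M, "
          f"{upload_years:.1f} years to upload at 100 MB/s")

That gives 555228 cards, 222 PB, $139M, and 70.4 years, matching the thread's numbers.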
You could use smaller cards, but then you have the nightmare of trying to find a stable supplier that doesn't have a ton of fakes in their supply chain.
Even then, your competition is 10TB HDDs, and you need 25 cards + card readers + cooling to match one drive. Performance is better, but probably not enough to justify the expense.
PS: Your lack of faith disturbs me.
A small SSD for the main OS - then a ton of space RAIDed on these.
Now I can place all my music files inside my player!
I think the latest iterations of these cards (at least the brand-name ones, anyway) are a lot more reliable than SD cards of yore. Those were meant pretty much strictly as FAT32/exFAT camera cards that you only write to, read all the files off, then reformat to reuse; they were not really meant as random-access devices where an OS organizes, deletes, and modifies individual files.
I do highly recommend running f3 tests on whatever you buy, f3write followed by f3read, to check for fake flash. There's a lot of fake flash out there.
I test all my flash / hard disks with it and have sent plenty back over the years which failed!
It would certainly be an interesting mail archive system as well.
I am less worried about reliability and more about "oh crap, I lost another one" - but with 400GB, that's the risk of losing a lot of data.
Easily 100 to 200 hours of 1080p material, if we go with a data rate in the range of 2 to 4 gigabytes per hour.