SanDisk crams 400GB into a microSD card (engadget.com)
139 points by jonbaer on Aug 31, 2017 | 120 comments



This is almost certainly going to be 3 or 4 bits per cell flash, which has exponentially worse retention and endurance than 1 (SLC) or 2 (MLC) bits per cell for only a multiplicative increase in density[1], and requires even more (fragile) algorithms for error correction/avoidance/longevity. I wouldn't be surprised if this "400GB" was actually 512GB (2^39 bytes, or "512 real gigabytes"[2]) of raw capacity, with 112GB of spare area. OK for short-term "transfer" storage, like the camera applications alluded to in the article, but definitely not for long-term archival or maybe even medium-term. Perhaps there should be a separate category for devices like this: "pseudononvolatile".[3]

To parallel an old saying: "big, fast, reliable - pick two."

[1] The terms "triple-level cell(TLC)/quad-level cell(QLC)" are entirely misleading, since they imply only a multiplicative increase in the actual number of distinct voltage levels in each cell. One wonders if they are deliberately downplaying the associated reliability issues by making it seem multiplicative instead of exponential. These should really be called triple-bit/quad-bit or eight-level/sixteen-level cells.

[2] Remember when flash disks had real binary capacities? I have a 64MB USB drive that really contains 131,072 user-accessible 512-byte sectors --- and it's still working, because it's SLC flash.

[3] I've worked with NAND flash over the years and noticed something interesting: you can very easily find datasheets for SLC and MLC (2-bit) flash, which give the endurance/retention. But there is very little on TLC (3-bit) flash --- they're almost all NDA'd leaks, which a few years ago were already rare, but seem to have mostly disappeared now --- and basically nothing on QLC/4-bit. Why the secrecy? It makes one wonder if there's something inconvenient about this high-density flash, that they don't want people to know...
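
To make footnote [1]'s point concrete, the arithmetic looks like this (a rough sketch; real cells also lose absolute voltage window as geometries shrink):

    for bits in (1, 2, 3, 4):        # SLC, MLC, TLC, QLC
        levels = 2 ** bits           # distinct voltage levels per cell
        margin = 1 / (levels - 1)    # relative spacing between adjacent levels
        print(bits, levels, round(margin, 3))
    # QLC stores 4x the data of SLC, but must distinguish 16 levels in the
    # same window, leaving roughly 1/15th of the margin for drift and noise.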


> To parallel an old saying: "big, fast, reliable - pick two."

In this case, perhaps they decided to pick "big" twice.


As bunnie famously remarked: "You are not storing your data — you are storing a probabilistic approximation of your data."


I think this is a short-term issue just because error correction is so good. We have many error correction algorithms that approach the theoretical limit for efficiency.

Eventually error correction will take more bits than it saves and the number of levels per cell will stall.
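
Rough intuition for "approach the theoretical limit", under the simplifying assumption of a binary symmetric channel with raw bit error rate p: Shannon says the best achievable code rate is 1 - H(p), so ECC overhead has a hard floor.

    from math import log2

    def binary_entropy(p):
        return -p * log2(p) - (1 - p) * log2(1 - p)

    p = 1e-2                          # assumed raw bit error rate
    print(1 - binary_entropy(p))      # ~0.919: at least ~8% of raw bits must go to ECC

As p climbs with density, that floor rises, which is exactly the "takes more bits than it saves" endgame.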

I would say our SSDs are actually much more resistant to errors than hard drives were, because the error correction in spinning disks is generally crap.

With flash, manufacturers have been forced to include extremely good ECC schemes because cells die all the time. Since errors are generally random and happen in large numbers, the life of the drive becomes predictable instead of the death cliff we used to see with spinning disks.


What do error correction algorithms have to do with whether the disk spins or is solid state?


It just happens that hard drives still have crap error correction. Probably because most of their failure modes are catastrophic anyways.

Part of it is also SandForce. The flash to make SSDs existed a few years before they came on the scene, but they were the catalyst: they created a sophisticated controller that could do very good error correction and wear leveling, allowing the first practical SSDs to be built.


> I think this is a short-term issue just because error correction is so good. We have many error correction algorithms that approach the theoretical limit for efficiency.

Doesn't this mean there isn't much room for improvement?


> To parallel an old saying: "big, fast, reliable - pick two."

Small, surely?

Edit: I'm dumb. Big storage. Duhhh!


By "big", I assume he means biggest storage, not smallest form factor.

By the way, unless there is some technology I don't know about, microSD cards have consistently been the highest density storage devices for many years running.


> By the way, unless there is some technology I don't know about, microSD cards have consistently been the highest density storage devices for many years running.

That's because they're not much bigger than the raw NAND flash die itself, encapsulated in plastic: http://bunniestudios.com/blog/images/microsd_lineup.jpg

Of course, biological data storage (DNA, proteins, etc.) has the potential to be several orders of magnitude denser ( https://news.ycombinator.com/item?id=4396931 ), but the technology is still in development and highly error-prone.


Got it. Thanks!


My first day working at Google back in 2000 they had just inked an exclusive deal with a hard drive manufacturer for their brand new high-capacity 40 GB drives. They had a 4U rack stuffed full of 25 of them, and everyone was standing around oohing and ahing and saying, "Wow, that's a TERABYTE!"

Now the same storage is the size of a thimble.


I remember buying a 4.7GB Seagate hard drive for $250 in 1997.


I remember my Dad buying a 20MB drive in 1993 for around double that.


I started a company, Corvus Systems, that sold 5 MByte hard disk drives for $4000. We sold a zillion.

Of course the competition was 74 KByte floppies at the time.


At the time I copied games for the ZX80 onto a tape recorder. One 90-minute cassette held 260KB of storage. And now IBM makes a 330TB one.


Back in my day we only had stone tablets which had space for just one dot. A whopping 1 bit of storage! /s


That reminds me of the racks of 18GB Seagate drives TerraServer used for their maps storage. It was a nice double entendre of terra (earth) and tera (10^12).


Are there archived photos from those days? I think that would be interesting to see


If you get a chance to go to the Google visitor office in Mountain View, they have old racks on display.


See also

http://infolab.stanford.edu/pub/voy/museum/pictures/display/...

which apparently has been moved out of the basement of Gates Hall but is still on display elsewhere at Stanford.

http://infolab.stanford.edu/pub/voy/museum/pictures/display/...

(Note that this is from 1996, so it's apparently 40 GB rather than 1 TB.)


Yep, that server is indeed still at Stanford. It is now in the basement of the Huang building in the engineering quad.


Oh that's awesome, thanks! History is an indulgence for me these days, but I think it serves me as well.


Didn't know they had one. I'm a little far away, but we may visit my girlfriend's family there around Christmas, so it's on the list if I can make it. Thanks!


And Google won't let you use them in their devices... because the cloud...


IMHO, the most likely use case outside of people who shoot a lot of RAW photos is for people building appliances out of stuff like Raspberry Pis that needs a lot of storage. Things like MAME cabinets, DVRs, security systems, and the like. The $250 price point on these is obviously a barrier for now, but it should come down reasonably quickly if history is any guide.


As with previous capacities, another use case is to have your entire music library available in your phone. I use one of the previous 200 GB uSD cards with my whole music library. It's nice to have everything available everywhere I go. Now even those with >200 but <400 GB of music are able to do the same.


"another use case is to have your entire music library available in your phone."

Unfortunately, almost all phones continue to support microSD cards "up to 32 GB". Some will (reportedly) work with 64 GB cards, but if I understand correctly, others need hacks/tweaks to access >32 GB cards ...


My 2 year old Sony Xperia works quite happily with a 128 GB card. A quick survey of recent (2017) models of Samsung, Sony, Motorola, Huawei, and Xiaomi on GSM Arena shows that they all take up to 256 GB (with Motorola being the exception at 128GB). This also seems to be the case for older Samsung phones I looked at.

Looks like you're in luck.


My phone and tablet work fine with 128GB cards. The phone is a Samsung Galaxy S4 running Android 7.1.1; the tablet is an Amazon Fire HD 5th gen. Any device that supports SDXC cards must support up to 2TB. The exFAT filesystem is required for these, so you can't use FAT32 anymore. SDXC host devices started coming out in 2010, and pretty much everything that uses SD cards these days supports them.


FAT32 has an individual filesize limit of 4GB, but otherwise works fine for a 2TB partition:

http://www.cdrlabs.com/images/stories/reviews/silicon-power_...
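
Both limits fall straight out of the on-disk format (assuming the standard 512-byte sectors):

    max_file   = 2**32 - 1       # 32-bit file size field -> just under 4 GiB
    max_volume = 2**32 * 512     # 32-bit sector count x 512 B -> 2 TiB
    print(max_file / 2**30, max_volume / 2**40)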


Yes, but SDXC drivers don't necessarily support FAT32 on SDXC cards. The spec only requires exFAT. SDHC used FAT32.


That's interesting. It seems the SD specs (which include SDXC) are mandating a specific filesystem. According to https://en.wikipedia.org/wiki/Secure_Digital#SDXC

Nevertheless, in order to be fully compliant with the SDXC card specification, many SDXC-capable host devices are firmware-programmed to expect exFAT on cards larger than 32 GB. Consequently, they may not accept SDXC cards reformatted as FAT32, even if the device supports FAT32 on smaller cards (for SDHC compatibility).

That seems like a gross "abstraction layer violation" to me --- like making SATA controllers which work with SATA6 HDDs only when formatted with NTFS. The filesystem should have nothing to do with SD, which simply implements a block device abstraction. No doubt Microsoft was involved in this ridiculousness...


"SD" is basically the name of a standard stack, not a particular layer. You can reformat an SDHC card to not be FAT32, but then it won't be an SDHC card any more.

There is a point to this: SD cards are used in all sorts of low-level embedded applications, with firmware that uses very inflexible (usually assembler) code to address the card. The assumptions about the SD wire protocol + the FAT32 filesystem + even the particular location of the FAT table on disk are all knotted together in these devices.

Because of this, it's not even just that SDHC requires FAT32; SDHC requires FAT32 done with a specific formatting tool released by the SD standards group. Because your OS might put the FAT anywhere, but the tool puts it in exactly one place, and that's the place that dumb embedded devices expect to find it.


Since they receive patent royalties on exFAT, you can bet on that! :)

Which is the same reason there is no implementation in the kernel, only FUSE. Ugh.


And exFAT is various sorts of patent encumbered, as I understand it.


I didn't realize this was an issue. Does this include flagship phones or is it mostly lower- and mid-range?

From experience, the Galaxy S7 supports at least 128GB (a quick search returns a 256GB max). S6 didn't support microSD, S5 supports 128GB, S4 supports 64GB, S3 supports 64GB.


SDXC has been in the spec since 2010. Anything that supports over 32GB (SDXC) can support up to 2TB with no hardware changes. I still have an S4 and use a 128GB card, but I've got a custom ROM. Most of the "up to" values for support are simply the biggest card that was available when the device was released, not the biggest the device can read. That's almost always going to be 2TB, the maximum that the SDXC spec allows.


My LG G4 has no issue with a 200GB microSD card.


Nowadays phones are dropping SD card support because they want everyone to touch the 'cloud'.


Are there any battery cases for iPhones that could use this? I know it's possible to have SD card readers over lightning, and it would be pretty cool to have tons of extra space for photos, music, or video. iOS would lock you out of certain use cases, but there are probably some that would make this worthwhile. So many people have big battery cases, and adding this in wouldn't take much (and would be a killer differentiator).


"Now even those with >200 but <400 GB of music are able to do the same."

Some of us are already living in that future by using an mp3 player with two uSD slots :)

https://www.head-fi.org/threads/xduoo-x3-dsd-24bit-192khz-cs...


I have about 175GB of music, but a lot of that is flac. If I were putting it on my phone, considering its DAC and the fact I'm probably using bluetooth headphones I wouldn't hesitate to make all that 96kbit Opus instead and have no recognizable quality loss.

In my experiments in the past I was shrinking 80GB to about 15GB.

So if anyone has a huge music collection, you can still probably fit it on your phone without going crazy on the SD card. Just checked: my collection is 2 months of uninterrupted runtime right now.
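
Back-of-envelope for the curious, assuming a constant 96 kbps:

    seconds = 60 * 24 * 3600              # two months of continuous audio
    size_gb = seconds * 96_000 / 8 / 1e9
    print(size_gb)                        # ~62 GB: fits comfortably on a 128 GB card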


Opus is amazing. Also, your DAC and phone sound processing don't matter when using Bluetooth; it's digital straight to the headphones.


Phones support Opus now? Or is the decode entirely in the CPU? Phones have more than enough oomph to do it, but you'll probably lose some battery life that way.


Trying to find any info on this is next to impossible. I'd be really surprised if modern SoCs were still shipping hardware mp3 decoders, though, and if they do, they probably support Opus since almost all VoIP data uses it now.


Seconded, Opus is amazing. Hell, 96kbps is probably overkill for a phone, the codec is that good.


I personally use an SD card as an external hard drive for my MacBook. I get an extra 250GB for much less than a new MacBook with a larger internal drive.


I use a 4GB SD card as a massive HDD on my ... Amiga.


I keep wishing I had the room to set my Amiga 1200 up properly (my 2000 too); I'd definitely upgrade the hard drive in it, which, iirc, is a 120 MB drive.

Also my TRS-80 Color Computers, my Apple IIgs systems - I need my own personal computer museum for all my old systems...


The Surface Pro devices have a fairly discreet microSD slot underneath the rear stand. Given that upgrading the internal storage isn't really feasible, $250 to quadruple the storage of your $1200 device isn't all that unrealistic.


Yeah but if I know my Windows, installing stuff on D: may be fraught with problems depending on the program.


can you launch apps from there? is it reliable?


Not the person you're responding to, but yes, you can use an SD card in a MacBook Air/Pro to store apps and you can launch them from there, whether it's formatted as FAT32, exFAT or HFS+.

Reliability depends on the particular card and vendor. I'm using a JetDrive (an SD card specially designed to snugly fit the MacBook Air's SD card slot) and have had zero problems.


If you already have a TF card you want to use, you can buy an SD adapter that will fit flush. There are a couple of different sizes, but I think you can get them for most/all models, e.g.:

https://www.banggood.com/Micro-SD-TF-to-MiniDrive-SD-Adapter...

IIRC you can even get a Mac to boot off an SD card (e.g. to run Debian occasionally), but the performance is better with a USB3-connected SSD.


ZFS is also possible.


For a while, I kept my World of Warcraft install on one, and it worked fine.


I used mine for Final Fantasy XIV for a full two years with zero problems, as well as various anime and other games.


I just rebuilt an old iPod classic and it uses microSD cards for storage, instead of the old hard-drive. It would be awesome to load it up with 4 of these; for a 1.5TB+ iPod. Stupidly expensive though.


I'll happily put one of these in my Nintendo Switch for all the downloadable games.


I wonder how their reliability is compared to a traditional spinning platter drive, for the typical workload of a desktop.

It would be nice if I could ditch the bulky spinning hard drive (or even an SSD) and use a microSDXC card. I've tried that before, and while it did work, the cards quickly (within maybe a month or two) wore out and got errors on them. I can usually get at least 3 or 4 years out of a traditional spinning platter drive.


The M.2 form factor gets you most of the way there. They're significantly smaller than normal HDD/SSD, and can cope with being used hard, unlike mickey mouse SD card stuff.


For anyone else wondering what the M.2 format actually is and what compatibility issues there are with it:

https://www.youtube.com/watch?v=opwON-7J_wI


I was going to say something about reliability.

As impressive as this feat is, and it really boggles the mind, I've had a large number of SanDisk MSDs fail on me before I switched to Samsung. I'm not very likely to switch back.


Conversely the only Samsung microSD card I ever had failed on me. I've also had some trouble with SanDisk cards from time to time. My general experience is that microSD cards are okay but I'm not at all happy about them.

I'm doing a project soon involving a Raspberry Pi. I've used RPis for various things before, but the difference this time is that it's going to be installed at a client's location. I am quite nervous that the microSD card will fail. For my own stuff I can easily write the image to a new card and it doesn't matter, but with a client who expects the thing I'm making to work, it would be painful to have it fail.

Is there some ready-made Linux distro for Raspberry Pi that is extremely small and made to load its whole self into memory, never writing back to the SD card? It'd be extremely helpful to know if any such distro exists. Preferably it should be based on Raspbian, just slimmed down and with logging etc. removed. That way the kernel itself would be more reliable, so that it won't crash randomly for reasons that have been fixed in Raspbian but might not be in an independently maintained kernel.


First of all, it should be noted that it's now possible to boot from USB, bypassing the need for a microSD card, though the feature is classed as experimental:

https://www.raspberrypi.org/documentation/hardware/raspberry...

You could also use network boot:

https://www.raspberrypi.org/documentation/hardware/raspberry...

To answer your question directly, yes there are Linux distros that minimise/eliminate SD card usage. Aside from being able to mount the SD storage as a read-only device (like you can with live Linux CDs), there are some minimal distros that boot to RAM. One example is Tiny Core Linux:

http://tinycorelinux.net/welcome.html

The latest RPi build was released only a few months ago:

For Pi Zero: http://tinycorelinux.net/9.x/armv6/releases/RPi/

For Pi 3: http://tinycorelinux.net/9.x/armv7/releases/RPi/

As a last bit of advice, there are quite a few different storage options available for the Raspberry Pi. Here's one you may be interested in (you can buy the hard drive separately if you don't want to use the Raspberry Pi Zero):

http://wdlabs.wd.com/products/pidrive-node-zero/


I don't know what your requirements are, but look at making the rootfs read-only if possible. This is how a lot of consumer devices running Linux work.

https://hallard.me/raspberry-pi-read-only/
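
A minimal sketch of what the fstab ends up looking like (partition layout assumes a stock Raspbian image; adjust for your setup):

    # /etc/fstab: root and boot read-only, volatile dirs in RAM
    /dev/mmcblk0p1  /boot     vfat   defaults,ro          0  2
    /dev/mmcblk0p2  /         ext4   defaults,ro,noatime  0  1
    tmpfs           /tmp      tmpfs  nosuid,nodev         0  0
    tmpfs           /var/log  tmpfs  nosuid,nodev         0  0

Anything else that writes (DHCP leases and the like) also needs redirecting to tmpfs, which is what guides like the one above walk through.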


Not Raspbian based, but Tiny Core[1] has a Raspberry Pi port; the whole OS is loaded into RAM at boot.

[1]http://distro.ibiblio.org/tinycorelinux/ports.html


If you want reliability, don't get a Pi. Look at the selection of Odroids and use one of their emmc modules.


With the higher density flash cells on the same surface area, one wonders about the heat dissipation (especially in mobile phones). This and the associated workload (read vs write) ultimately determine the lifespan of these cards.


Heat levels are indeed something these cards must ensure. I use the Samsung Evos in Blackvue dash cameras which will get up to 80-90c in the sun in the summer.


endure..


One of the issues is that the protocol used by an SD card isn't as efficient or battle-tested as SATA. It might not be much of a problem at relatively low capacities in the handfuls of GBs, but it could cause problems as you go beyond that.


Flash memory is usually rated to what, on the order of 100,000 write cycles? But it has much better endurance on reads. If the cards aren't used as something like a pagefile, they can usually last a while. But I guess it depends on the use case, and I'm assuming this will be similar to what exists now.


Maybe if you have an expensive high-end enterprise SLC SSD. Consumers use the cheapest SSDs available, and that's usually TLC with 5000 or fewer write cycles; soon we will be on QLC, which will only have up to 1000 cycles. The more bits a cell stores and the smaller the manufacturing process, the lower the number of write cycles. 2D NAND hit a scaling brick wall, and the only way to scale NAND now is to stack layers on top of each other.

Obviously SD cards are even worse than SSDs when it comes to reliability.


5K~10K was MLC 8 years ago[1], and there's evidence that some cheaper/smaller-geometry flash didn't even make it that far[2].

Now, it's more like ~1K for (good) TLC and closer to 100 for QLC. There are no official numbers I can find on TLC/QLC endurance or retention (besides a lot of handwavy and weasel-worded marketing saying it's "good enough"), but the degradation is exponential with the number of bits, not multiplicative, so one can make an educated guess.

[1] http://www.farnell.com/datasheets/1382714.pdf note date, and see page 3.

[2] http://pdf.dzsc.com/88888/2008422102742719.pdf also note date, and look at 3rd item in the revision history
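
For scale, total-bytes-written on a 400 GB card at those cycle counts, assuming perfect wear leveling and no write amplification (both generous):

    print(400e9 * 1000 / 1e12)    # TLC at ~1K cycles: ~400 TB written
    print(400e9 * 100 / 1e12)     # QLC at ~100 cycles: ~40 TB written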


Toshiba says their upcoming 3D QLC NAND will be good for about 1k P/E cycles. But like most recent P/E ratings, that's almost certainly given under the assumption that you're using LDPC error correction or something equally robust, so you can't compare directly to the ratings for older SLC NAND.


Huh, wow.

I was going off of the datasheets for flash chips I've been using, but I guess things are different for really high-density ones.


This modern uSD tech is astonishing, beyond any science fiction. I no longer pay attention to the transistor sizes or the CPU clock rates, but the advances in the data storage technology look amazing to me.


Same here. Clearly, it's the work of the devil.


IF 'SanDisk made a 4 GB microSD card on July 2006, at first costing $99 (USD)' [1] and in 2017 'SanDisk crams 400GB into a microSD card' (costing $250 USD) THEN

In 2028, maybe a 40TB microSD might cost $300? (@10 years of mp4?) And by 2040, 4 PB? (@1000 years of mp4?) Who's gonna have time to watch all that?

So much local client storage is coming. With more local storage and less network dependency, will our apps process data faster?

[1] https://simple.wikipedia.org/wiki/MicroSD#History
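
The implied growth rate, for whatever a naive extrapolation is worth:

    growth = (400 / 4) ** (1 / 11)      # 2006 -> 2017: ~1.52x per year
    print(400 * growth ** 11 / 1000)    # ~40 TB by 2028, if the trend held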


But as time goes on, video etc. gets larger, e.g. 1080p to 4K to 6K to 8K. Usually the extra space we get is used up soon.


This might be another "640K of memory should be enough for anybody" prediction, but will we ever need to go past 8K resolutions? Even 4K is crazy high resolution, and anything higher feels unnecessary and pointless if our eyes can't resolve the difference.


If 360 video takes off, we might see resolutions much higher than 8K to ensure the viewport is always 8K. 10-bit colour could also get wider adoption. And I'd assume bitrate will creep up with network speeds and storage prices, to reduce compression artifacts.


I can see 8K res coming to 27" computer screens; I've got an iMac 5K 27" and there is room for improvement in pixel density when it comes to video/photo editing. But yeah, I don't see it going much higher for displays, maybe for large cinemas. I'm sure there will be other video/photo technologies that use large amounts of data; people just haven't invented them yet due to storage constraints.


There are probably infinite dimensions that can be added to a visually recorded piece of media.

What if it had an 8k camera recording in every cardinal direction as well as facing upwards and below for a 'VR' like experience?

What if it had 16 of those to page between different perspectives in that scene?


If we put the economic part aside, what kind of storage density could we reach with these in a 4U case? I mean something like a custom build Backblaze Storage Pod [1] but for microSD cards.

[1] https://www.backuppods.com/


Card volume: 11 × 15 × 0.7 mm³ = 1.155e-7 m³

4U rack volume: 450.85 × (4 × 44.45) × 800 mm³ = 64.128904e-3 m³

At 100% usable volume ratio that would be 555228 Micro-SD-Cards or 222 Petabyte and cost $55,000,000,000 (assembly and shipping not included).

The storage density of the cards is 2.77e19 bit/m³.

Assuming a saturated gigabit (100 MB/s) connection, uploading your backup would take 70.4 years.


"555228 Micro-SD-Cards or 222 Petabyte and cost $55,000,000,000"

555228 x $250 = $139M, not $55B.
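
The rest of the parent's figures do check out, though (quick sanity check):

    cards = 64.128904e-3 / 1.155e-7       # rack volume / card volume
    print(cards)                           # ~555,228 cards
    print(cards * 400e9 / 1e15)            # ~222 PB
    print(cards * 250 / 1e6)               # ~$139M
    print(222e15 / 100e6 / 86400 / 365)    # ~70.4 years at 100 MB/s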


You are right. My math was off!


Power and cooling requirements would likely prevent you from achieving absolutely insane density I think. Plus, you would go broke trying to fill the thing with $250 cards.

You could use smaller cards, but then you have the nightmare of trying to find a stable supplier that doesn't have a ton of fakes in their supply chain.

Even then, your competition is 10TB HDDs, and you need 25 cards + card readers + cooling to match one drive. Performance is better, but probably not enough to justify the expense.


Your 10TB HDDs have a density of only about 2e17 bits/m³ (8e13 bits in roughly 3.9e-4 m³ of 3.5" drive), which is far inferior to microSD cards.

PS: Your lack of faith disturbs me.


Unless you have a fully automated SD card library (a tape library for SD cards), you have to at least consider the volume of the smallest possible SD card reader instead of taking just the SD card's volume.


You could possibly connect them all with through holes or very thin PCBs. No need for a library or a full reader.


Wouldn't you work directly with the manufacturer at any volume beyond 10k? Are fake SDs mixed in at distributors like Synnex? I always presumed fakes were a retailer thing: instead of ordering from Computer2000 or Synnex, "let's have 1000 of these from Alibaba".


Oh, and, you know, the fact you have to plug the cards into something to make them useful.


hmm... an array of µsd in RAID


This was a common thing in the early days of OS X and Macs that had lots of USB ports. People would gather up all their miscellaneous flash storage devices (CF, USB, MS, etc.) and mount them all at once. OS X had a function in Disk Utility that would take all those pieces of disparate memory and turn them into a single logical disk. It was a nice way to make use of leftover old tech instead of throwing it in the garbage.


Actually, imagining a machine with an array of microSD slots inside, filled with a bunch of these (a mini version of the Backblaze pod mentioned above, but in your laptop) -- that would be really interesting.

A small SSD for the main OS - then a ton of space RAIDed on these.


We should call it a Beowulf


Found the old guy


From 200GB to 400GB in two years, so Moore's law is still rocking.

Now I can place all my music files inside my player!


Rather than the microSD card with the most capacity, what is the microSD card with the best reliability for use with, e.g., a Raspberry Pi?


The Lexar high endurance. If you want to find SD card reliability info, check websites and videos that do reviews for dashcam SD cards. Their primary focus is on reliability because they need the cards to be able to be written on constantly without failing for a long time.


I run Lexar high endurance in my dashcam; I had issues with other cards but not with the Lexar ones. It's different technology (MLC), so it doesn't have the capacity of other cards but gains in durability. If I remember correctly, it stores 2 bits per cell instead of 3.


Another aspect is the random read & write speed. Typical microSD cards are abysmally slow at those.


This is getting crazy! One thing I love about the microSD market is that it (pretty much) eliminates the need to upgrade your iPhone to the 128/256/512 version and pay $xxx extra, when you can just keep one of these puppies, take it with you to your next upgrade, or swap to Android, etc.


Maybe a dumb question, but how are you using microSD with your iPhone?


There are cases with an integrated lightning -> SD adapter.


You can only use the storage with the app that came with the adapter.


I'm lost, as I thought iPhones didn't have SD slots.


On reliability: I've got a much smaller 32GB Samsung EVO microSDXC in an adapter in an Intel NUC, formatted Btrfs and used as the rootfs for Fedora Server, for 6 months with zero errors of any kind, including regular scrubbing.

I think the latest iterations of these cards (at least the brand-name ones) are a lot more reliable than SD cards of yore. Those were meant pretty much strictly as FAT32/exFAT camera cards that you only write to, read all files off of, then reformat to reuse; they were not really meant as random-access devices for organizing, deleting, and modifying individual files the way an OS does.

I do highly recommend running f3 tests on whatever you buy, f3write followed by f3read, to check for fake flash. There's a lot of fake flash out there.
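
Usage is just a write pass followed by a read pass against the mounted card:

    $ f3write /media/sdcard    # fills the free space with test files
    $ f3read /media/sdcard     # reads them back, reports corrupted/overwritten sectors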


If you like testing your media then you'll probably enjoy https://github.com/ncw/stressdisk - my project to test media.

I test all my flash / hard disks with it and have sent plenty back over the years which failed!


One of the things I was thinking about building was a write-once logging system. Even at the high levels of logging that I run on various servers, it is only tens of megabytes per day, which this kind of thing could absorb for years. And as a write-once system it would never erase (it would not even have the capability of erasing), allowing for "walking backwards" in the logs in the event of intrusion or malicious behavior.

It would certainly be an interesting mail archive system as well.
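
A minimal sketch of the idea (my illustration, not a full design): open the log with O_APPEND so every write can only extend history, never rewrite it.

    import os

    # Append-only log: O_APPEND forces each write to the current end of file,
    # so application code has no path that overwrites earlier records.
    fd = os.open("audit.log", os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)

    def log(line: str) -> None:
        os.write(fd, (line + "\n").encode())

    log("sshd: accepted publickey for admin from 10.0.0.5")

On Linux you can go further with chattr +a, which enforces append-only at the filesystem level (though root can still clear the attribute).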


do you have any idea how many microSD cards I have lost in my life...

I am less worried about reliability and more about "oh crap, I lost another one" - but with 400GB, that's the risk of losing a lot of data.


I've been afraid of the same, but have yet to lose one. Maybe I've just been lucky.


And they're too small to have a useful area for writing on.


Can't you just write to the MBR!


> That means it can hold up to 40 hours of full HD video, in case you were wondering.

Easily 100 to 200 hours of 1080p material, if we go with a bit rate in the range of 2 to 4 gigabytes per hour.
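
(For reference: 2 GB/h works out to about 4.4 Mbit/s, a typical streaming rate, while the article's 40-hour figure implies ~10 GB/h, i.e. ~22 Mbit/s, closer to camera-native bitrates.)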


This raises the inevitable question: what's the bandwidth of a 747 full of these cards?
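
Fermi estimate, with every input assumed: ~134 t of payload, ~0.4 g per card, a 10-hour flight. Weight, not volume, is the binding constraint.

    cards = 134e6 / 0.4                   # ~335M cards
    total = cards * 400e9                 # ~1.3e20 bytes, ~134 EB
    print(total / (10 * 3600) / 1e12)     # ~3,700 TB/s

The station wagon full of tapes, upgraded.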



