It bugs me that unmounting SD cards / USB drives and physically removing them are two separate actions. It's a regression in usability after floppies and CD-ROMs.
To be fair, this was always a problem, even with old computers and OSes. There was no link between pressing eject and magical things happening in the machine to commit data. The data was simply already committed, so there was no need for any additional safety layer.
The difference between then and now is that new computers have enormous amounts of memory and use it to provide write-back caching. This means the OS can tell applications that data is committed when it's not. This is an OS level decision which is done for performance and has nothing to do with the physical eject-buttons on your devices.
You may call this a small nit, but I think it's an important one. The eject button on old machines would suffer the exact same problems if it were paired with an OS that offered a write-back cache.
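To make the write-back point concrete, a minimal sketch on a Linux box (the file name and mount point are made up):

    cp big-video.mp4 /media/$USER/USBSTICK/   # returns as soon as the data is in the page cache
    sync                                      # blocks until the dirty pages are actually written out

Until that sync returns, the copy only "exists" in RAM, which is exactly the gap between what the OS reports and what the media actually holds.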
You could say old Macs didn't have this problem, which is true. But they didn't have an eject button either; they relied on the OS to do the ejecting. And instead you had the problem of people not being able to get their disks out of the machine if the OS for some reason refused to release them (or of users not realizing they had to drag the disk icon to the trash).
My first time on a Mac, about 1990, I felt pretty confident having plenty of early PC experience. A graphic designer I shared a house with had left me alone with (and permission to use) her Mac.
I quickly discerned there was no software of interest actually installed, but found a floppy that looked something like a game. It wasn't, and I went to hit the ... wait, there's no eject button?
After a fruitless few minutes during which I failed to notice the eject key on the keyboard (of all places!) I eventually opted for tweezers, and gently teased the floppy out of the drive.
Imagine my surprise when I restarted the Mac to find it was unable to start, having apparently completely forgotten about its own HD! The thing had to go back for servicing before it would even boot properly, iirc.
My reputation as the house's resident computer expert was severely impacted, needless to say.
Still, seeing how many people in my office will just pull USB flash drives without ejecting them I almost see the appeal now.
* Drag the disc icon to the trash can to eject the disc.
* Pull the power cord, since there was no other way they knew of to turn it off.
Mac's reputation as a usable computer was severely impacted, needless to say :).
I didn't quite like the way Linux handled this though, not allowing the disk to be ejected as long as any file descriptors were still open, which meant you had to hunt down the processes keeping them open.
But it would be extremely helpful for writable media like flash to be able to do full write-back caching and only flush the cache when the eject button is pressed, then return EBADF for any further reads or writes on open file descriptors.
Also useful when something is hogging the audio device in /dev.
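For reference, a sketch of that hunt (the mount point is made up); fuser or lsof will name whatever is keeping the mount busy:

    fuser -vm /media/$USER/USBSTICK    # processes with open files on that filesystem
    lsof /media/$USER/USBSTICK         # same idea, listing the open files themselves

Then you can close or kill the offenders and the umount goes through.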
For those completely oblivious to this: it was actually not about getting data written to disk, but about physically protecting the drive from its own mechanical construction. The "park" command would cause the hard drive to move its magnetic read/write heads away from the internal platters, so that when it was powered down and the disks stopped spinning, the platters would not be damaged or suffer needless wear and tear.
Yeah, I'm glad we no longer have to do that exercise.
(This used to be how you'd tell the tape-spooler to rewind..)
Always, always sync -r after writing to a zip drive on Windows. Even minutes after you think it is finished writing, do not trust the OS to have finished flushing to disk.
Also, not syncing to disk immediately can improve the lifetime of flash media, which wasn't a concern in the old days.
However, I'm somewhat disappointed by current filesystems not using journaling to make it possible to pull out a USB drive without harm (other than losing the latest writes).
As for Macs not having the problem, this doesn't require qualification. Not only did Macs not have the problem, they treated disks as logical _volumes_ rather than physical devices. If you ejected a disk mid-save on a Mac it would complain, and if you inserted the wrong disk it would reject it and ask for the right one. If you did the same thing on a DOS/Windows machine it would corrupt the new disk (and leave the old disk corrupted too).
Similarly, you could copy files from one disk to another in the Finder without having two disk drives (it was a pain in the neck, swapping the disks, but it worked). Other computers needed special software and relied on the user to insert the correct disk.
This advantage continues to this day. Windows still refers to devices by physical location.
Nope, not since NT. I learned that the hard way when moving an HDD containing Win2k around. Even though it was hooked up as IDE master it would come up as D:, as that was what it was assigned after it got formatted with NTFS.
You can go into the disk manager and assign a partition any letter that is not already in use.
You can even mount a partition into an existing directory hierarchy, much like on Unix, but I can't say I have seen it used often.
Basically the letters are there for backwards compatibility.
There was on the Apple Lisa. Or rather, ejecting a disk was something you did in the OS, which made sure everything was written/closed before allowing the disk out.
It was definitely confusing to new users: inserting a disk was something anyone could do, since it only involved the simple movement of a physical object, but ejecting it required you to know and understand the OS.
This was quite an "impedance mismatch" between what should ideally be two symmetrical actions, and I remember I had to help out my mother quite a few times :)
Fortunately the paperclip trick was in there.
The media itself is fully passive, while an SD card or thumb drive has logic sitting between the USB interface and the raw flash. Thus optical media can be "rescued" by simply getting a new drive.
And you really need to mistreat an optical disc to make it completely unreadable.
Or perhaps someone could buy the MiniDisc tech off Sony and resurrect magneto-optical storage?
Windows used to require you to 'eject' but doesn't any more, which I think is due to the above. Please correct me if I'm wrong.
No, it's always required, because applications may be writing to the drive.
I recall some distros tried to mount USB drives with the sync option back in the early days of USB media.
Problem was that this, in combo with FAT, would basically kill cheap drives lacking wear leveling.
This is because every write action was accompanied by an allocation table update, and thus whatever flash cells housed that table would see a massive number of writes.
Actually I think the mount man page still has a warning about this in the sync option entry.
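If I remember right, vfat later grew a middle-ground "flush" mount option for exactly this case; a sketch (device node and mount point are made up):

    mount -o sync  /dev/sdb1 /mnt/usb   # every write updates the FAT immediately, hammering the same cells
    mount -o flush /dev/sdb1 /mnt/usb   # vfat only: cache writes, but flush shortly after they stop

That keeps the "safe to yank soon after the copy finishes" property without rewriting the allocation table on every single block.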
I am so paranoid now about removing USB drives, after I accidentally removed one while it was still writing (which I had no way of knowing, since there was no LED on the drive) and it bricked the entire device.
I generally wait until my computer has shut down completely now before removing any important drives, just to be sure.
I don't even bother to unmount it; what would unmounting add, if the OS can recognize that the device is gone and clean up its mount tables? Is there some filesystem that waits until unmount to recalculate indexes or something?
(Okay, sure, the OS could tell me the disk is in use because I've got an overlay filesystem mounted on top of it, and would I kindly wait forever for that to stop being the case. But screw you, OS, I need this USB. Just send the FUSE server or SMB daemon or whatever a SIGPIPE.)
Also, unmounting helps you to be confident that nothing's started doing more IO since you ran sync.
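For what it's worth, a sketch of a safe-removal sequence (the mount point and device node are made up; udisksctl assumes udisks2 is installed):

    sync                             # flush anything still sitting in the page cache
    umount /media/$USER/USBSTICK     # refuses if something still has files open there
    udisksctl power-off -b /dev/sdb  # optionally tell the device it can power down

The umount failing is the signal that something sneaked in more IO after the sync.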
# Reconstructed sketch: unmount the volumes under /media/$USER (the loop body
# is a guess; the original fragment only had the "for" line), then report
# what is still mounted, excluding CD-ROMs.
for i in /media/"$USER"/*; do umount "$i" 2>/dev/null; done
ls /media/"$USER" | grep -v cdrom0 | grep -v cdrom > /tmp/disk_status
if [ ! -s /tmp/disk_status ]; then
    kdialog --passivepopup "Nothing Mounted" 1
else
    kdialog --passivepopup "Mounted devices: $(cat /tmp/disk_status)" 2
fi
A more accurate title would be "putting an SD card in a floppy disk case"
It's slightly amusing to me that floppy disks were once called "HD", even though the abbreviation stood for high density rather than high definition.
It worked by combining two completely separate techniques; the first, using more tracks, is pretty self-explanatory --- a lot of floppy drives, despite officially supporting 80 tracks, could actually move the head to the 81st and 82nd tracks (rarely, even higher), and since the media was a uniformly magnetisable disk, that let an extra two tracks' worth of data be accessed.
The second technique was to reduce the number of sectors per track and the gaps between them, by making each sector bigger. To ensure interoperability with different drives rotating at different speeds, there's actually quite a lot of "padding" between each sector, as well as ID/CRC data for each one. If I remember correctly, in the highest capacity mode, 2M wrote only one huge sector per track instead of over a dozen in the usual format. This is similar to how hard disks have transitioned to using 4KB sectors internally, instead of the traditional 512B.
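For reference, the standard "HD" layout that these tricks squeeze past (standard figures):

    2 sides x 80 tracks x 18 sectors x 512 bytes = 1,474,560 bytes = 1440 KB

which got marketed as "1.44 MB" only by mixing decimal and binary prefixes. Fewer, fatter sectors per track reclaim the gap and header overhead, which is where the extra capacity of formats like 2M comes from.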
Everything is under the control of the drive, not the media.
They advertise their size, as in: they tell the drive what to expect. But the drive is in full control of where to write on the disk, and how densely.
As such, you could actually teach an old drive new tricks by coming up with a firmware that used a better encoding scheme, and then encoding the rest of your disk with that scheme.
† (though it ran on the CPU, rather than some microcontroller in the drive. "DOS" stands for Disk Operating System for a reason: a large part of it is a set of routines programs can call to make the CPU micromanage the read-head and stepping motor of the dumb disk-drive controller, with concepts like "files" being a mostly-optional thin wrapper above that, rather than some impenetrable abstraction layer.)
My first experience installing Linux was using a bunch of disks which were unable to handle even 1.44MB (even though they were sold as 2HD). I couldn't tell, of course, and I had no idea why my install kept on failing or generating errors part way through the installation process :(
Oh, that exists! From 1998: https://en.wikipedia.org/wiki/FlashPath
A floppy disk drive-compatible SD card adapter. Needs special software, though.
But in the end it's all about the density of the disk so I'm not sure if you could use a driver to store more data.
Also you will need very accurate positioning of the head. So the drive is also a limiting factor.
I think the largest floppies were 32MB.
edit: Didn't realize they even made MemoryStick and SD Card floppy adapters. So this already existed, sort of! https://en.wikipedia.org/wiki/FlashPath
Note that I think you need to use special files that are read by the drive.
I think they are meant as a life extension for older CNC machines and industrial looms.
The second. With a better read/write head you could store more data on them (for example, 2.88MB drives were available).
But the physical media of the disk may or may not have the necessary resolution.
They could replace Blu-ray with this new format, so all those sci-fi movies from the '80s where films are stored on floppies will turn out to be right.
I love that we can fit tens of GB in a space the size of a fingernail, but it does get rather fiddly to pass around. USB thumbdrives are nice, but are thick enough that they don't really stack up well, or fit into folders and binders nicely.
The 3.5" form factor really is kind of a sweet spot for more than just nostalgia, IMO.
What is it that makes this the OC's fault? Or am I not getting something here?
What is it about GP's comment that makes you think they assign blame?
So if you want to say that things were not what one expected, one must say so explicitly, to make it clear that this is not generalizable.
Maybe that does make it more clear?
It would never have occurred to me to hack something like this together. I really love it when someone comes up with creative ideas to reuse old stuff, mixing in some new things as well.
This was magneto-optical based though, so it involved a laser heating the media before the magnetic head could flip the bit.
That said, I think there was a version of the original MD format sold as a data drive in Japan. I think Sony had a model or two of their Vaio laptops with it built in.
That said, what basically sank the format was the slow write speed, and the fact that the Sony Music tail was basically wagging the Sony Electronics dog. The MD was laden with DRM from day one, and the HiMD took it even further (want to put those MP3s on there? You had to convert them to ATRAC via some crazy program, and the only way to get them back out was via the analog hole).
This was just as large-capacity flash was coming to market, where you could dump any random MP3 onto the device with a simple drag-and-drop operation.
> Had to format convert to Atrac via some crazy program
The HiMD machines did not make nice mp3 players because they really weren't meant to be that. Only the final HiMD decks had an mp3 codec.
I never converted from mp3 to atrac because atrac was a better codec. I didn't like feeling like I was listening to computer files anyway, and usually dubbed from a cd player.
> the only way to get them back out was via the analog hole
The last SonicStage update lets you download audio. This is important because these devices were used to record original content.
As for what was better or worse, I will not get into that. MP3 was already turning into the de facto standard, and thus Sony did themselves no favors by having people go the long way round.
I guess I missed the news about the last SonicStage release, as by that time I had given up on how Sony was handling things.