Chrome, on my system, is even more abusive. Watch the size of the .config/google-chrome directory and you'll see the profile grow to multiple GB.
There is a Linux utility that takes care of all browsers' abuse of your SSD, called profile-sync-daemon (PSD). It's available in the Debian repo, or [1] for Ubuntu, or [2] for source. It uses the `overlay` filesystem to direct all writes to RAM and only syncs the deltas back to disk every n minutes using rsync. I've been using this for years. You can also manually alleviate some of this by setting up a tmpfs and symlinking .cache to it.
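The manual tmpfs trick mentioned at the end can be sketched like this (the `/dev/shm` location and directory names are illustrative; `/dev/shm` is a RAM-backed tmpfs on most Linux systems):

```shell
# Point ~/.cache at a RAM-backed directory so browser cache writes never
# hit the SSD. Takes home dir and RAM dir as arguments for clarity.
redirect_cache() {
    home_dir=$1
    ram_dir=$2
    mkdir -p "$ram_dir"
    # keep any existing on-disk cache around instead of deleting it
    if [ -d "$home_dir/.cache" ] && [ ! -L "$home_dir/.cache" ]; then
        mv "$home_dir/.cache" "$home_dir/.cache.disk"
    fi
    ln -sfn "$ram_dir" "$home_dir/.cache"
}
# e.g. redirect_cache "$HOME" "/dev/shm/$USER-cache"
```

Unlike psd, nothing syncs this back to disk, so the contents are lost on reboot (which is usually fine for .cache).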
Launchpad is such a shitty website, aimed at Ubuntu and only Ubuntu, links to source code or more information are nowhere to be found...
Searching on Github, this seems to be it. Turns out, there's releases for Arch, Debian, etc. and it's even in the repositories. No need to add a ppa. https://github.com/graysky2/profile-sync-daemon
For Debian and co:
$ apt-cache show profile-sync-daemon
$ sudo apt-get install profile-sync-daemon
I've long gotten used to "bird" doing its thing (something with the cloud, I guess). But how can a Twitter client write 3 GB (while I'm not even actively using it)?
The OP is complaining about excessive write traffic, which wastes power, steals disk-time from other applications, and may wear out SSDs prematurely.
A large profile file does not imply excessive write traffic, so it's not clear from your report that Chrome is hitting the same problem at all. Definitely worth watching, but big-file != excessive-writes.
I've been using this on my system ever since I started running everything from my USB drive... to prevent excessive freezing twice a minute. On NixOS, add the following two lines to your configuration.nix:
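The two lines themselves aren't quoted above; a sketch of what they plausibly are, assuming the `services.psd` module shipped in nixpkgs (option names may vary between NixOS releases):

```nix
{
  # Assumed: the services.psd module from nixpkgs
  services.psd.enable = true;
  services.psd.resyncTimer = "15min";  # how often to rsync back to disk
}
```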
Beware! profile-sync-daemon erased my browser profiles on first run under Ubuntu Xenial, due to an apparent bug in its eCryptfs support (and how /home was mounted). A report was submitted to the developer.
`mount -l` shows a filesystem of type `overlay` for each browser (in my case chromium, chrome, and firefox), ranging from 50 MB to 200 MB. I have it set to sync to disk every 15 min, so I reckon it empties the RAM whenever it syncs. I've never had a problem (8 GB of RAM on the machine I initially installed it on), so I haven't watched to see what the max size is. The developer [1] is great - issues are resolved very quickly.
Because not losing data is infinitely better than extending your SSD's life by a couple of years. We've had SSDs for a long time now and I'm still waiting for one to die on me. Meanwhile, I'm on my 27th dead spinning disk.
Some people are unjustifiably worried about SSD endurance.
The reality is that even with fairly heavy use, most of them will far outlive the computers they're in. And their owners for that matter!
I purchased my current SSD in March 2015 - it's a Samsung 850 EVO 512GB. It's done 7332 Power_On_Hours, and 36486573259 Total_LBAs_Written, which works out as just under 17 TB written, or about 31.6 GB/day.
That kind of sounds like a lot. But let's put it in context. Even the previous generation of TLC NAND SSDs were recorded in endurance tests doing around 1 PETABYTE of writes before failing. The 850 EVO, with its 3D V-NAND, should be capable of at least double that.
For argument's sake, let's assume it will last for 1 petabyte. I've written 17 TB in 1.5 years, or 11.3 TB per year. At that rate, this drive is still going to last at least another 90 YEARS.
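The arithmetic above can be checked in a few lines. The 512-byte sector size behind Total_LBAs_Written and the ~550-day service window are assumptions; the 1 PiB endurance budget is the hedged figure from the torture tests, not a spec:

```python
# Back-of-envelope endurance math for the SMART figures quoted above.
# Assumptions: Total_LBAs_Written counts 512-byte sectors, and the
# drive has been in service roughly 550 days (March 2015 -> late 2016).
SECTOR_BYTES = 512
lbas_written = 36_486_573_259

bytes_written = lbas_written * SECTOR_BYTES
tib_written = bytes_written / 2**40            # ~17 TiB
gib_per_day = bytes_written / 2**30 / 550      # ~31.6 GiB/day

# Hedged 1 PiB endurance budget (what last-gen TLC drives managed in tests)
bytes_per_year = bytes_written / 1.5
years_left = (2**50 - bytes_written) / bytes_per_year

print(f"{tib_written:.1f} TiB written, {gib_per_day:.1f} GiB/day, "
      f"~{years_left:.0f} more years to reach 1 PiB")
```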
>The reality is that even with fairly heavy use, most of them will far outlive the computers they're in.
I'm not sure what the lifetime of a computer is. In my family, we are still occasionally running machines from circa 1995. They work fine. When should I stop using my MacBook? The only reason I'd stop is if it died in a way that can't be cheaply fixed.
I'm curious what useful work such a computer can perform, given that a typical 1995 computer had a 33 MHz processor, 8 megs of RAM and 1 gigabyte of hard disk space.
1995 is a stretch for anything modern, but I have kids playing Minecraft on Pentium 4s (with decent graphics cards though - that is the critical part). These boxes would have been new around 2003 or so.
Most of the SSDs failed before a petabyte... and started encountering lots of errors before that (leaving aside the Samsung Pro series, which was the only one that survived to its multi-petabyte end).
The warranty on your EVO is just 150 TBW. The older generations typically had better endurance, the newer generations are more dense (cheaper to produce) but less durable.
If you keep writing 17 TB every year and a half, your SSD should be OK for at least the next 11 years.
Yes, but endurance tests almost invariably show that the warranty ratings are extremely conservative. The 1TB 850 PRO (which uses exactly the same NAND chips as the 850 EVO, just a different controller) has been endurance tested to more than 7 Petabytes. See: http://packet.company/blog/
The older generations typically had better endurance, the newer generations are more dense (cheaper to produce) but less durable.
That was true when comparing MLC flash to TLC. But the 850 EVO and PRO use 3D V-NAND, which has significantly more endurance than previous-generation TLC. This seems to be confirmed by endurance tests.
your SSD should be OK for at least the next 11 years
The warranty runs out after 5 years, anyway. But that does not mean it will suddenly stop working on that date!
No, you're wrong. The 850 PRO uses MLC while the EVO uses TLC. But given that they're both constructed on Samsung's 3D-VNAND technology, their endurance is still much higher than the competitors out there.
Yes, you're right. I don't expect an EVO to last for a full 7 PB of writes like the PRO did, but even if it lasts a fraction of that (say, roughly 1 PB, which the previous-generation 840 EVO achieved), that's still 90 years of life at current usage levels! A (1 TB) 850 PRO would last over 600 years!
In all likelihood the drive will never get anywhere near any of these values - it'll be replaced with newer, better tech at some point. It may then get used for a backups or secondary storage, but in those scenarios the daily writes will drop enormously.
It's not "much" higher, actually; 3D MLC as implemented by Samsung is just "up to twice" as good as planar MLC, given the same die area and capacity:
"Samsung V-NAND provides up to twice the endurance of planar NAND."
But if the size of the cell drops, the number of P/E cycles drops. Samsung's endurance declarations are real and to be believed: they initially used bigger cells than some of their other chips (or competitors'), and Samsung's engineers know what they're doing. That "some tests" achieved "much more" can be either an accident or due to errors in the test methodology (I very much suspect the latter, because it also allows accidents to be counted as "successes"). When these SSDs appeared, Samsung declared half the TBW they do now, so the current declared endurance is surely not too pessimistic, but based on real knowledge of what's inside.
Do you ever miss the upside of all that persistence? I find my history useful, I find continually re-logging in a pain (obviously excepting the kind of sites where I don't want my login to persist i.e. financial etc).
However I also only close my browser once every couple of weeks as well. System updates etc.
That is the beauty of it: you can disable it. Be glad you have Firefox and not only IE or Chrome (where you can't disable all the things).
But 99% of users want it.
Heck, anecdote for anecdote: the very first thing I do on new machines is open up Firefox, go to settings, and set the browser to start up with my previous session's tabs.
Does it? Consider this scheme: maintain two physical directories, one of which is current (and is named by a symlink) and the other of which is old. When syncing, you rsync the newest data into the old directory, then update the symlink. It'll probably result in somewhere between 1x and 2x the writes of the non-atomic single-directory scheme, depending on how well adjacent diffs combine. It will also use twice the space (barring some very clever filesystem).
This will copy /path/source to /path/dest/$NEW_BACKUP (a timestamped folder), taking the previous backup into account: if a file hasn't changed, it creates a hard link; if it has changed, it copies the whole file.
And that's it.
Since the name is a timestamp, when you need to restore, just read ./prev_backup - or list the directory contents, sort them, and take the last entry.
This isn't really the sync daemon's fault; it's Linux's (or rather, ext4's, and the Linux VFS ABI's) for not supporting multi-inode filesystem transactions. NTFS has them; APFS will have them. Linux should add them too.
[1] https://launchpad.net/~graysky/+archive/ubuntu/utils [2] https://github.com/graysky2/profile-sync-daemon
EDIT: Add link, grammar
EDIT2: Add link to source