So then "without rebooting" in the title is inaccurate, no?
I haven't studied TFA carefully. But reinstalling running Linux systems via SSH is pretty routine, no? Using debian-installer with network-console, I mean.
I occasionally reinstall remote servers with LUKS via SSH. I just log in, build the installer, and reboot into it. Then I SSH to the installer and almost complete it. Just before rebooting, I go to single-user mode and set up dropbear in the initramfs. Then I reboot.
So OK, that's two reboots, not one. And if it fails, I just reboot using the control panel. If I've really fucked up, I reinstall and try again.
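For the curious, the "build the installer and reboot into it" step looks roughly like this. Everything here is a sketch under assumptions (amd64 Debian stable, GRUB, a /boot-relative path, a placeholder installer password); it dry-runs by default so it prints the commands instead of touching the box.

```shell
#!/bin/sh
# Sketch: stage the debian-installer netboot kernel/initrd so GRUB can boot
# into it once, with network-console so you can SSH into the installer.
# Dry-run by default: prints commands instead of running them (DRY_RUN=0 arms it).
set -eu
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "+ $*" || "$@"; }

# Mirror path is an example; adjust release/architecture to taste.
DI_URL=http://deb.debian.org/debian/dists/stable/main/installer-amd64/current/images/netboot/debian-installer/amd64

append_grub_entry() {
    # Path /d-i/... assumes a separate /boot partition; password is a placeholder.
    cat <<'EOF' >> /etc/grub.d/40_custom
menuentry "d-i reinstall (network-console)" {
    linux  /d-i/linux auto=true priority=critical anna/choose_modules=network-console network-console/password=changeme
    initrd /d-i/initrd.gz
}
EOF
}

stage_installer() {
    run mkdir -p /boot/d-i
    run wget -O /boot/d-i/linux     "$DI_URL/linux"
    run wget -O /boot/d-i/initrd.gz "$DI_URL/initrd.gz"
    run append_grub_entry
    run update-grub
    # Boot the installer exactly once, then fall back to the normal default:
    run grub-reboot "d-i reinstall (network-console)"
}

stage_installer
```

After the reboot you SSH to the installer's network-console and carry on from there.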
I investigated this many years ago to see if it was feasible, but I found scant information on the compatibility requirements for using kexec to hand over execution to an arbitrary kernel.
The problem seems really hard though. One issue that stands out to me is that even if you properly shut down the old kernel, will all system devices be in a 'good enough' state to be reinitialized properly by the new one? Or do some devices require a reboot for some reason?
When I've built custom kernels, I'm pretty sure that the new kernel wasn't active until I rebooted. But rebooting after even dist-upgrade has just become automatic.
For unattended upgrades, you can disable automatic reboot. But then, I think there's risk that some upgrades won't take effect.
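For reference, on Debian/Ubuntu that knob lives in the unattended-upgrades apt configuration (file path is the usual default, but check your install):

```
// /etc/apt/apt.conf.d/50unattended-upgrades
// Don't reboot automatically even when an upgrade (e.g. a new kernel)
// requests it; you then have to reboot manually for it to take effect.
Unattended-Upgrade::Automatic-Reboot "false";
```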
I'm genuinely curious, is all. At the time I was pursuing this, I decided it was going to get too complicated and that I had to live with a reboot.
1. Most firmware is sufficiently broken that Linux drivers are already hardened against devices being brought up in arbitrary states.
2. kexec walks the device tree to shut down all devices before starting the new kernel. This usually gets devices closer to a startup state, or at least a smaller number of known shutdown states.
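The kexec handover itself is only a couple of commands; whether the drivers cope afterwards is the hardware-dependent part. A minimal sketch (dry-run wrapper is mine, kernel paths assume a Debian-style /boot layout):

```shell
#!/bin/sh
# Sketch: load a target kernel with kexec and switch to it without going
# through firmware. Dry-run by default (DRY_RUN=0 to actually do it).
set -eu
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "+ $*" || "$@"; }

KVER=$(uname -r)   # or the freshly installed kernel's version string

kexec_switch() {
    # Stage the target kernel and initrd, reusing the current command line:
    run kexec -l "/boot/vmlinuz-$KVER" \
        --initrd="/boot/initrd.img-$KVER" --reuse-cmdline
    # Let systemd stop services and quiesce devices (the shutdown walk
    # described above), then jump. A bare `kexec -e` would skip that.
    run systemctl kexec
}

kexec_switch
```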
select volume C:
for i in dev proc sys run; do mount --move /oldroot/$i /$i; done
A lot of such guides have a fair amount of "then use tool X to do important part Y" without explaining any of it, making it horrible if you don't know X or the details of Y.
Source: I once tried to fix and resize half-broken LVM volumes, and if the Synology guys hadn't taken the time to help me, I would probably have just ended up wiping it all and recovering all 16 TB from backups, because internet self-help is still a harsh place on some Linux filesystem matters.
That would be amazing support.
(My Synology just works so I don't know what the support is like.)
Was as a personal customer with no paid support (besides having bought the product, of course), within the second year of my purchase. Overall I've had a fair amount of support requests with them for my personal NASes and the couple dozen I manage for professional purposes, and I'm very happy with that relationship.
PS: my original support request was very detailed though; I did not just go and ask "doesn't work, fix it!"
btrfs filesystem resize 4g /
I certainly wouldn't shrink mounted filesystems even on those that support it because storage systems are quite fickle and the threat of silent corruption is real.
For most filesystems, what you normally do is a full dump and restore. Closing all file handles and live-migrating to a moved root filesystem is someone's idea of showing off; it is absolutely not something anyone would do in production.
P.S. I realize an existence proof makes us mathematically happy, but it's a little disingenuous to suggest file system experience on Linux and Windows is comparable merely because of the existence of some uncommon file system with similar capabilities. The reality is most Linux users are on ext4 rather than on btrfs or zfs, and most Windows users are on NTFS rather than ReFS or FAT32, so their experience with shrinking file systems is not going to be remotely comparable.
yes, it does.
btrfs is already the default in SUSE derivatives. It remains to be seen how the other distributions will handle it within the next ~3 years.
It's only just become stable enough to be usable in production.
In the specific example of btrfs there is an extra layer of indirection just like there is with ZFS. Filesystems live inside pools of devices, and when one is shrunk that leaves room for another one to grow. You wouldn't want to resize the physical volume or partition unless the device is shared with other types of filesystems.
(This is why ZFS and btrfs sometimes are referred to as "layering violations". Other filesystems expect the logical volume manager to pool devices into logical block devices.)
So resizing a btrfs filesystem absolutely makes sense even in isolation.
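Concretely, that filesystem-vs-device split looks something like this (mount point and sizes are examples; both operations work on a mounted btrfs):

```shell
#!/bin/sh
# Sketch: shrink a mounted btrfs filesystem inside its pool, online.
# Dry-run by default: prints commands instead of running them.
set -eu
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "+ $*" || "$@"; }

shrink_btrfs() {
    # Shrink the filesystem by 10 GiB while it's mounted at /:
    run btrfs filesystem resize -10g /
    # See how much of each underlying device the pool still occupies:
    run btrfs filesystem usage /
    # Only if the device is shared with other filesystems would you then
    # go on to shrink the partition itself (offline, with parted or similar).
}

shrink_btrfs
```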
You can do that just fine on Windows, hell you can even do it with a nice GUI using computer management.
Do end-users really mess with partitioning usually (outside of formatting brand new disks I suppose)? I'm not asking rhetorically, I suppose there must be a use case if MS implemented this (tricky) feature but I can't really imagine any of my non-techies friends and relative decide to shrink a partition (actually most of them probably aren't aware of the concept of partition in the first place).
Now personally I find it weird; I would rather use the excuse to wipe and start clean if it's the root partition, and for non-root, just making the partition you want and copying the content over feels cleaner. But it is fairly common nonetheless.
They indeed don't know the concept of partitions, but they google "replace my hard drive with a larger one" and usually follow a guide (and such guides contain a link to a specific duplication tool they can buy, of course).
More generally, I was trying to approximate "the set of users who aren't solely sysadmins of remote systems". Substitute for it whatever word you see fit.
And I never suggested a developer would be incapable of following these. Seems like you got so sidetracked in arguing that you forgot what the discussion was actually about. See my original comment: https://news.ycombinator.com/item?id=19357672
If this is still the case then it's not really a fair comparison to Windows, because you could then do the same thing from boot media on Linux (i.e. boot into it, resize, and you're done). In fact you could probably also do that via a GUI (maybe gparted?).
In any case I do agree that Linux does still leave a lot to be desired when it comes to making some of the more advanced file system operations far more complicated for end users than they need to be.
Not if all you're doing is shrinking from the end. When was the last time you tried? I'm guessing the XP days?
To be fair, it might have been. I'm not a heavy Windows user but the fact I can't recall the last time I resized the system volume probably says more about how long ago it was.
But about Windows: last year I had to resize the partitions on my work desktop, and there are enough restrictions around changing things on the disk that Windows is installed on that it became a week-long research task for our IT support people.
You're suggesting the research phase for partition management is somehow easier on Linux than on Windows?
The question was explicitly about shrinking the root filesystem without booting a livecd or any other OS.
Gparted cannot do that directly because the root filesystem cannot be unmounted, and the appropriate answer for nearly everyone else (boot from a livecd and use gparted) doesn't apply because the question explicitly bars this option.
However, you should avoid being in that situation entirely, which is nowadays very feasible: all modern servers include remote management facilities, and all sensible virtual server providers give you some way or another to boot from a network image (and get a console session through VNC).
If your provider doesn't give you these options you should definitely switch providers. Not having this option means that you are always one mistake away from total doom (server won't boot -> you will never ever be able to access it again).
I beg to differ: Computer Management > Disk Management has let you do that in a visual and safe way since at least Windows 7 in 2009. God knows how long the underlying CLI commands have been available.
Of course if you want to shrink the rootfs without rebooting, you'll have to do it while it's mounted, and I'm not sure that's supported by any Linux FS out there (outside of NFS, I suppose). That being said, I think that's understandable: implementing resizing of a live FS seems very tricky to get right and not extremely useful, IMO.
Can Windows really let you shrink NTFS while mounted? That's a pretty impressive feat if that's true, I wonder what motivated that.
Yes, I have done it, even on the running system partition, and no reboot required. Just right click the partition in Disk Management and select Shrink. It will calculate the smallest size the partition can shrink to, and you can use that or any larger size.
This article is from a vendor of such a utility, but it also describes how to unlock some of the unmovable files within Windows:
This is a tool that is part of most Linux installers and tested by huge numbers of people, and yet things still went wrong. Shrinking filesystems is hard, and this was offline. Shrinking filesystems online is much harder.
presumably because it couldn't make the filesystem as small as I asked for, for whatever reason
I forked it to try to convert Debian installs from ext4 to btrfs. Unfortunately, while it does work, that's a very bad idea, since btrfs-convert produces a fs that fails to operate properly in the long run (IIRC there is - was? - some ever-increasing random space usage that can never be reclaimed).
BTW, the process of converting to btrfs is excellent: it only creates btrfs metadata in unallocated ext3 space, writes the btrfs header at the last minute, and uses subvolumes, allowing you to keep the ext3 metadata around as long as you want in order to roll back (obviously losing any subsequent modifications), because barring the header the whole ext filesystem and its data is untouched.
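The convert-with-an-escape-hatch dance described above boils down to a few commands on an unmounted partition (device name is a placeholder; dry-run by default):

```shell
#!/bin/sh
# Sketch: convert an unmounted ext4 partition to btrfs, keeping the old
# filesystem image around as a rollback option. Dry-run by default.
set -eu
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "+ $*" || "$@"; }

DEV=/dev/sdb1   # placeholder device; must be unmounted

convert_with_rollback() {
    run fsck.ext4 -f "$DEV"     # btrfs-convert wants a clean filesystem
    run btrfs-convert "$DEV"    # writes btrfs metadata into unallocated space;
                                # the old ext4 image survives as a subvolume
    # If things go wrong later (and you haven't deleted the saved image),
    # this discards all btrfs-era changes and restores the original ext4:
    run btrfs-convert -r "$DEV"
}

convert_with_rollback
```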
I suppose Apple did something similar to convert from HFS+ to APFS so swiftly and so reliably.
The recovery mode could even be initiated remotely, so you could re-flash a device without ever touching it. Of course you have to be careful, if the re-flash failed you could be SOL :) Apparently I need to go back and improve it so we can re-flash without rebooting!
These days you can use things like containers (Balena also looks very cool) to achieve a similar goal in possibly a "safer" way. But the idea of being able to re-flash the entire system while running it felt sort of like changing the engine of a car while driving it down the freeway!
At first, it surprised me there wasn't more standard tooling out there for this kind of thing, but as I got more into it, I realised how specific to our particular needs my solution had become, and I could see how it would be hard to offer something generic that would be a good fit for a wide range of use-cases without being super-bloated.
Recently we actually ran into another use-case for this in production: we needed to wipe a lot of servers in our datacenter remotely, and we figured one of the options would be to install some OS in memory with the relevant wiping tools, pivot_root to that, unmount all disks, and then perform the wipe. In the end we went a different route and opted for a custom PXE-boot image that the servers would boot into, which scripted the whole thing.
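For completeness, the pivot-to-RAM route we considered would look roughly like this (directory layout, tmpfs size, and device names are illustrative; dry-run by default so nothing destructive happens as written):

```shell
#!/bin/sh
# Sketch: copy a minimal userland into tmpfs, pivot_root into it, release
# the disks, then wipe them. Dry-run by default (DRY_RUN=0 to arm).
set -eu
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "+ $*" || "$@"; }

pivot_and_wipe() {
    run mkdir -p /newroot
    run mount -t tmpfs -o size=1g tmpfs /newroot
    run cp -a /bin /sbin /lib /etc /newroot/     # minimal in-RAM userland
    run mkdir -p /newroot/oldroot /newroot/dev /newroot/proc /newroot/sys /newroot/run
    run pivot_root /newroot /newroot/oldroot
    # Re-home the API filesystems -- the same mount --move trick quoted above:
    for i in dev proc sys run; do run mount --move "/oldroot/$i" "/$i"; done
    run umount -R /oldroot        # nothing holds the disks any more
    run wipefs -a /dev/sda        # or shred/blkdiscard, per device
}

pivot_and_wipe
```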
The first step is to update the kernel from an i386 one, so that it could run both i386 and amd64 binaries, but then you essentially overwrite every package with the version from the new architecture, and hope like hell it doesn't mess up.
At the time I had a pair of servers, a mail-host, and a web-host, and I managed to successfully upgrade both, although it was a little scary. At least I had console access if things did get horribly screwed up.
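With modern dpkg multiarch the "overwrite every package and hope" part is at least expressible in a few commands. A very rough sketch, definitely not a recipe (package names assume Debian; run it in two sessions around the reboot; dry-run by default):

```shell
#!/bin/sh
# Sketch of an i386 -> amd64 crossgrade on a running Debian system.
# Keep console access handy; this is exactly the "hope like hell" step.
# Dry-run by default: prints commands instead of running them.
set -eu
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "+ $*" || "$@"; }

crossgrade() {
    # Step 1: install a 64-bit kernel that can still run the i386 userland.
    run apt-get install -y linux-image-amd64
    run reboot    # (session break: come back up on the amd64 kernel)
    # Step 2: teach dpkg about the new architecture and flip the core over.
    run dpkg --add-architecture amd64
    run apt-get update
    run apt-get install -y dpkg:amd64 apt:amd64
    # Step 3: replace everything else, package by package, praying throughout.
}

crossgrade
```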