I wouldn't technically call this "boot" since the kernel has already booted...
If you get google-drive "mounting" support into GRUB, then I'll concede.
This just places the rootfs in some strange place.
btw, I have a project in my drawer, to place rootfs of my NixOS on IPFS.
The problem with booting Linux off very high latency devices is that the kernel tends to time out I/O requests after too short a time (60 seconds, I think), so you have to adjust those timeouts upwards.
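For what it's worth, the knob involved is usually the per-device block timeout in sysfs; a minimal sketch, assuming a SCSI-class device (the device name and value here are just examples):

# Default is 30s on most SCSI-class block devices; raise it for a high-latency backend
echo 300 | sudo tee /sys/block/sda/device/timeout
# (Persisting this across reboots usually means a udev rule or a boot script.)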
It's possible to require that any images used be signed using a specific key that is configured in the hardware ahead of time. Even if you don't do that, the same setup can be helpful for provisioning a bunch of machines without accessing any external network. You can configure a small box to act just as a DHCP server and to serve a machine image for network boot. Then you can have all the machines on this subnet automatically load that image as it is updated without the need for any further configuration on each device.
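For the curious, this is roughly what the signed-image requirement looks like with iPXE, assuming the signing certificate was built into the iPXE binary; hostnames and paths are placeholders, treat it as an untested sketch:

#!ipxe
# Refuse to boot anything that doesn't carry a valid detached signature
imgtrust --permanent
kernel http://boot.example.lan/vmlinuz
imgverify vmlinuz http://boot.example.lan/vmlinuz.sig
initrd http://boot.example.lan/initrd.img
imgverify initrd.img http://boot.example.lan/initrd.img.sig
boot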
I've seen organizations do something similar to this for trade shows when they want a bunch of machines that visitors can interact with and don't want to have to keep them updated individually. Just update the image once and reboot each machine.
I dunno, I actually think a public key is better than a hash, because it lets you sign updated images without having to update things on the client. Obviously it should be user-controlled, but this feels like a legitimate use.
I don't really see it being that much more complicated. Signing the image is just one extra step when you publish, but it also means that you never need to update client machines unless the key is compromised.
Well, it kind of does. Normally, PXE network booting will use DHCP (or BOOTP or whatever) to fetch the boot image location, and then it will fetch that boot image. Historically, that has worked this way:
1. BOOTP says the boot image is at <ip address>/path/to/img
2. PXE network stack fetches that image via TFTP (which is awful)
3. PXE network stack boots that image
In most cases, the boot image would be a chainloader like pxelinux, which would fetch a config file telling it the kernel path, the initrd path, and the command line; the user could then choose an image to boot, and pxelinux would fetch the files via TFTP (which is still awful) and boot them.
In this new, HTTP-based case, we replace each instance of "TFTP" with "HTTP", which we can authenticate (ish), which we can easily firewall, which doesn't have weird compatibility issues, and so on.
Note that, before now, you could replace pxelinux with iPXE, and iPXE could fetch files via HTTP (which is awesome), but you still had to fetch iPXE and its config file via TFTP.
Note that TFTP is an unauthenticated, UDP-based, extremely limited protocol which has almost no support for anything but the most basic "get this file" or "take this file" functionality. Being able to replace it is a joy and a wonder.
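To make that concrete, the usual dnsmasq setup for the "TFTP just to get iPXE, then HTTP for everything else" dance looks something like this (filenames, tag names, and the URL are placeholders; a sketch, not a drop-in config):

# /etc/dnsmasq.d/netboot.conf
enable-tftp
tftp-root=/srv/tftp
# iPXE requests carry DHCP option 175, so tag them
dhcp-match=set:ipxe,175
# Plain PXE firmware gets iPXE over TFTP; iPXE itself gets an HTTP script
dhcp-boot=tag:!ipxe,undionly.kpxe
dhcp-boot=tag:ipxe,http://boot.example.lan/boot.ipxe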
I'm wondering if this is how we did a net install of a custom Distro back in a former job, but I don't recall. I just remember it being insanely easy to install the distro over the network, even on a VM.
If it was a decade ago, PXE/TFTP booting was pretty common. (At MetaCarta we shipped Dell 2650/6650 servers around then, and while field upgrades were from DVD, the QA lab had a "synthesize keystrokes through a KVM to select netbooting" setup plus a TFTP server that held the image you wanted to install under a MAC-address-specific filename, so each machine picked up the intended image. We got the idea from another Boston-area startup (Vanu Inc) that put similar Dell servers in software-configurable cellphone towers, iirc.)
PXE is still the king in large DCs. I can install ~250 servers in 15 minutes with a single xCAT node over traditional gigabit Ethernet. Give another 5 minutes for post-install provisioning and presto!
Once you have PXE you can do all the things -- NFS boot, HTTP boot, iSCSI boot, and so on. There are several open source projects that support this. I think the most recent iteration is iPXE.
That's true, though I always have felt that if I needed PXE+TFTP to boot the bootloader I might as well just load a kernel+initrd from the same place and be done with it; I couldn't remove the TFTP requirement so anything else would just be extra things to configure. If UEFI can really do pure HTTP (as discussed upthread) then I may need to reevaluate. (Well, for Raspberry Pis I'll have to keep TFTP, but maybe in other contexts I can drop it)
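If it helps the reevaluation: as I understand it, pure UEFI HTTP boot just needs the DHCP server to match the client's "HTTPClient" vendor class and hand back a URL, e.g. with dnsmasq (untested sketch; names and URL are placeholders):

# Clients doing UEFI HTTP boot announce vendor class "HTTPClient"
dhcp-vendorclass=set:efi-http,HTTPClient
# Echo the vendor class back and point the client at an HTTP-served bootloader
dhcp-option-force=tag:efi-http,60,HTTPClient
dhcp-boot=tag:efi-http,http://boot.example.lan/bootx64.efi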
> While standard PXE clients use only TFTP to load parameters and programs from the server, iPXE client software can use additional protocols, including HTTP, iSCSI, ATA over Ethernet (AoE), and Fibre Channel over Ethernet (FCoE). Also, on certain hardware, iPXE client software can use a Wi-Fi link, as opposed to the wired connection required by the PXE standard.
Does iPXE have a ca-certificates bundle built in? Is there PKI with which to validate kernels and initrds retrieved over the network at boot time? How does Secure Boot work with iPXE?
There was an article recently from somebody doing it on an Orange Pi [1]. IIUC, you can have one RasPi with an SD card (I use USB drives but w/e) be the PXE server, and then the rest can all network boot.
Welcome back, diskless workstations! We've missed you... oh, wait, no, we really haven't.
This is technically neat, but... How often does the memory card break on a Raspberry? How often does the network break (either Raspberry hardware or upstream)? There are fewer things to break when you run from local hardware.
You are thinking about this wrong. Imagine having a single disk image for 100 Pis. Now imagine having to burn that image to a hundred MicroSD cards, now suddenly you want to update the disk image.
As others have said, you can also use PXE, but http is a bit easier to deal with.
There is a hosting company with something like 44k Raspberry Pis. Are you going to be the guy to update them?
That's one improvement, but network booting can also help us home-gamers who don't have a hundred Raspberry Pis that are all doing the same thing.
Many of us have a handful of Pis at home doing whatever they do, each with their own unique MicroSD card. In this configuration, every time the number of Pis doubles, the overall MTBF of their collective storage halves. Backups are a pain since each Pi is a unique and special snowflake, and are thus somewhat unlikely to actually get accomplished. When a MicroSD card does die, that Pi's configuration and all of the work that went into making it do whatever it does likely disappears with it.
However, when booting over the network:
A handful of Pis are at home doing whatever they do, booting from a reasonably resilient NAS (eg a ZFS RAIDZ2 box) somewhere in the house (which is a great idea to have around for all kinds of other reasons, too). Adding more Pis does not decrease storage MTBF at all, since there is no MicroSD card to die. Backups become simple, since ZFS snapshots make that kind of thing easy even if each Pi's disk image is a unique and special snowflake. Space-efficient periodic snapshots become achievable, making it easy to unfuck a botched change -- just roll back to an hour ago or yesterday or whenever things last worked, by using that snapshot instead. Undetected bitrot stops being a thing, since ZFS checksums everything. Speeds (for many workloads) might even increase, since at least the Pi 4 can handle wire-speed network traffic without any real sweat.
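The snapshot/rollback part is genuinely low-effort; roughly this (dataset and snapshot names are made up):

# Hourly snapshot of the dataset holding the Pi images, e.g. from cron
zfs snapshot tank/pi-images@$(date +%Y%m%d-%H%M)
# List what you have, then roll back to the last known-good state
zfs list -t snapshot -r tank/pi-images
zfs rollback tank/pi-images@20240101-0300
# (Rolling back past newer snapshots needs -r, which destroys them.)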
It's not a great fit for everyone, but it may result in a net long-term time savings for some of us folks here who tinker with stuff at home if enough steps are automated, and it seems likely to result in fewer frustrating surprises.
> How often does the memory card break on a Raspberry?
I have no data on this, only anecdata - since I started being interested in the Home Assistant project, I've seen countless problems with people who've done an upgrade, rebooted their HA Pi, and had some kind of disk I/O issue because the card died.
As I understand it, it's the constant logging to disk for logfiles and databases that ends up killing MicroSD cards. It seems to be particularly bad for clones and cheap ones off eBay/Amazon. It's still apparently a problem even for high quality "endurance" MicroSD cards.
I always put the rootfs in the kernel. It mounts on mfs or tmpfs. SD card is read-only. After boot, I can pull out the card. No need to boot over HTTP.
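On Linux, the "rootfs inside the kernel" trick is just a couple of config options, something like this (the source path is an example):

# Build the root filesystem into the kernel image as an initramfs
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="/path/to/rootfs-dir"
CONFIG_INITRAMFS_COMPRESSION_GZIP=y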
If it is pulling a filesystem from tape into memory and booting from that, it could be pretty quick. Reading sequentially from tape, if you are already at the right location (which is easy if that location is the start of the tape), isn't particularly slow at all – non-sequential access is where tape storage becomes very slow, due to massive latency in the physical mechanisms.
Not really. Chromebooks don't use the LAN. They can run code locally, or on a server in a different timezone. However, with Sun, if you needed more CPU you could log into all the machines on your local network - all machines shared the same filesystem (NFS) and passwd (I forget what this was called), so using all the CPUs in the building was easy. It was unencrypted, but generally good enough until the Morris worm.
Of course modern servers have far more CPU power than even the largest LANs back in 1986. Still, those of us who remember when Sun was a big deal miss the power of the network.
> all machines shared the same filesystem (NFS) and passwd (I forget what this was called), so using all the CPUs in the building was easy.
Sun did this through NIS, originally Yellow Pages/YP, but the name was changed for trademark reasons.
When I worked at Yahoo, corp machines typically participated in an automounter config so your home would follow you around, it was super convenient (well, except when the NFS server, which might be your personal corp dev machine under your desk, went away, and there was no timeout for NFS operations... retry until the server comes back or heat death of the universe). They used a sync script to push passwords out, rather than NIS though --- a properly driven sync script works almost as fast, but has much better availability, as long as you don't hit an edge case (I recall someone having difficulty because they left the company and came back, and were still listed as a former employee in some database, so production access would be removed automatically)
What people really want is sub-second booting, especially in embedded. It is a hard problem but somehow nobody seems interested in doing the hard CS research to solve it.
There's tons of work on millisecond boot times going on, in kata-containers, confidential computing, and various "serverless" implementations. I wrote a paper about it nearly a decade ago too[1].
And I still can't boot my Linux system in a reasonable time. Perhaps the true problem that needs to be solved is that everybody is somehow (forced into) reinventing the wheel every time.
The real problem is that Linux is just a kernel - it cannot force you to have good hardware. If you want fast boot you need to start with the hardware: a lot of hardware has a long init sequence, so there is no way the kernel can boot fast, since it cannot finish booting until that hardware is initialized. Then you can look at the kernel; step one is to strip out all the drivers for slow-to-init hardware you don't have (since those drivers have to insert waits into the boot while they check for hardware you don't have). If you do this you can save a lot of boot time.
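If you just want to shed drivers you never load, the kernel build system can already do most of it for you, assuming you can capture a representative set of loaded modules on the target first (a sketch, not a full recipe; this only trims modules, not built-ins):

# On the running target, record what's actually loaded...
lsmod > /tmp/lsmod.txt
# ...then trim the kernel config to just the modules in that list
make LSMOD=/tmp/lsmod.txt localmodconfig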
Of course in the real world the people who select your hardware don't talk to the people who care about software. So you are stuck with slow boots just because it is too late to go back and do board re-spins, at a million dollars each, now that we know our boot times are too slow.
It gets worse, even if you select fast init hardware that doesn't mean it really is fast. I've seen hardware that claims to not need long inits, but if you don't insert waits in the boot there are bugs.
I haven't kept up with modern Linux - is there a tool that automates that?
e.g., one that records which drivers have been used over some number of boots, and then offers to disable all the drivers that haven't been used.
I don't think this is related to slow hardware, maybe bad drivers, but not slow hardware.
I consistently get a faster boot on both Windows and macOS with noticeably lower specs than my Linux desktop.
The Linux boot is fast, some 5 seconds maximum. But Windows is almost instant; of course it uses the notorious Fast Startup, but even so I expected more from Linux, being as lightweight as it is.
At least on my completely unoptimized desktop, the majority of boot time is already spent in UEFI firmware, not in kernel or userspace startup. So realistically there is limited opportunity to optimize the boot time.
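On a systemd-based distro you can see that split directly; the output below is illustrative, not from any particular machine:

systemd-analyze
# Startup finished in 8.4s (firmware) + 2.0s (loader) + 1.8s (kernel) + 2.4s (userspace)
systemd-analyze blame | head     # slowest units first
systemd-analyze critical-chain   # what actually gated reaching default.target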
That's not what I'm doing. I'm saying if your distro, your hardware, or your setup takes significantly more time, examine why. The hardware part is tough because we're effectively locked into whatever is cheap on the market, everything else is 100% fixable. Fast booting is not really a hard problem, especially in embedded where you know & control the hardware.
Yet, just about any system I've used boots slowly. Your argument is like saying that software bugs are not a real problem because you can simply find and fix them if you look hard enough.
My Framework laptop took 2.423s after starting userspace to be "done", without me making any effort on that. (Measurements for the part before that aren't useful on this setup because my initrd waits for user input passphrase.)
2.423 seconds is on the fast end of what I've seen, congratulations. For most systems I've seen it would be at least 5 seconds and when comparing that to loading a webpage, I would consider closing the tab.
I can only guess here. But remember that software package management was a pain too, and it took someone doing a Ph.D. on the topic to give us Nix (and it still isn't perfect).
Ah I see where you're coming from. I don't see any reason to expect that's the case here though. Package management has some fairly obvious tough CS problems inherent in it -- dependency resolution with version upgrades inherently feels NP-hard, for example. Whereas booting is about making hardware that initializes quickly and then making software that abstracts over a variety of hardware well... within the development budget you have. And then you're stuck with backward compatibility as everything changes. I could be wrong here but it feels like a costly engineering problem more than anything else.
(Note I'm not saying you can't do a PhD in it and improve the situation -- you could probably do that for any problem, honestly. Just saying that I think you could get most of the way there by just paying the engineering cost.)
Dependency resolution with versions is indeed NP-hard, if versions "conflict" (2 versions of the same package can't be installed at the same time). What if they don't conflict, and you just wanna install the fewest possible package versions to satisfy all dependencies? That's NP-hard too.
I'm just observing that this is a problem that has lingered forever, and I think that if only engineering costs were involved it would have been solved by now.
At that point you might as well run Ceph and give your diskless workstations a writable block device via RBD. The overhead of an S3 operation per file is quite high.
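The RBD side of that is pleasantly small, assuming a working Ceph cluster and client keyring (pool and image names below are invented):

# One image per workstation, 32 GiB (rbd sizes are in MiB by default)
rbd create --size 32768 rbd/ws01-root
# Map it on the client; it shows up as /dev/rbd0 (or similar)
sudo rbd map rbd/ws01-root
sudo mkfs.ext4 /dev/rbd0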
There are some easier solutions for just S3, like MinIO, which I imagine is likely much easier to set up than Ceph (though Ceph is not that hard with cephadm).
I read the “How to shrink a file system without a live CD” part. So here's my one: how to shrink a file system without a live CD, as part of a single-command install script for a program.
My sbts-aru sound localizing recorder program does that on the pi.
I’m willing to bet that no other project on the Internet does this, but I’d love to be surprised. Let me know.
It installs the majority of the code, then reboots and shrinks the file system. It creates additional partitions, labels them, and installs file systems on them. Then it finishes the install and comes up running.
Speaking of booting Linux from places, what I would like to be able to do is carry a Linux image around with me on my (Android) smartphone, plug the phone into a USB port on a laptop and boot the Linux image from there on the laptop. Does such a thing exist?
This really is nice to have and a sibling comment has already linked to DriveDroid, the solution I'm using for this.
Back in the CyanogenMod days, I had an even better setup: there was an app that also let you emulate a USB keyboard and mouse, so I could, with some command-line trickery, boot a computer from an ISO on my phone, then use that same phone as a keyboard and mouse/trackpad, including in the BIOS.
It's not about `ConfigFS` as a whole, but specifically `CONFIG_USB_CONFIGFS_MASS_STORAGE`, that is left disabled, while lots of other `CONFIG_USB_CONFIGFS_BLA` are enabled.
This and more can be seen in the `device info` screen of the App mentioned above
Should have said *proper ConfigFS support. Anyway, had no prior interest in this kernel feature until you mentioned the anomaly that is specific to certain vendors.
You can also do `zcat /proc/config.gz | grep CONFIGFS_` in a root shell (su) inside termux to get what options are set by the default kernel.
Android stopped exposing USB Mass Storage because it's problematic for the core use case of letting you grab pictures and whatnot from your phone: it requires exclusive access to a filesystem. That wouldn't be a big deal for you, I don't think - you probably just want to create a large file and expose that as a device - but the implications of exposing the SD card (or the virtual SD card) as mass storage are why it went out of style.
What I'd like to know is why my 2023 phone is still every bit as flaky as my 2018 phone was. For a while I was blaming my Linux solution but every time I try to use it on Windows it's just as flaky.
Fundamentally, accessing files on a live filesystem is a solved problem, and has been since before smart phones. I don't even know how they made such a broken setup.
(I believe the problem with USB mass storage is that it's closer to an IDE/SCSI protocol than a filesystem protocol. You can't have one bit of the system running around "accessing files" while you've got something else "moving the simulated drive head and writing this sector". In principle you could put the work in to make it all work out, but then it would be as flaky as the media access is now, only for a good reason rather than laziness/lockin.)
The situation with USB is a lot more complicated than it appears. See things like this for instance: https://unix.stackexchange.com/questions/120368/make-a-compu... The controllers that sit between the port and the computer can create significant limitations versus what is theoretically possible if you were directly bit-banging the cords.
You can do this with an ethernet cable, if you have one and an ethernet port on both ends. You can manually set up a network on just that cable and transfer at full speed. (AFAIK all modern ethernet ports are capable of figuring out that they need to crossover in this situation and you haven't needed a special crossover cable in a long time.)
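Roughly, with any unused private subnet and whatever your interfaces are actually called (addresses and names are just examples):

# Machine A
sudo ip addr add 192.168.50.1/24 dev eth0 && sudo ip link set eth0 up
# Machine B
sudo ip addr add 192.168.50.2/24 dev eth0 && sudo ip link set eth0 up
# Then transfer at line rate, e.g.
rsync -av --progress bigfile 192.168.50.2:/tmp/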
If the sd card is mounted by your computer, you can't run any apps on the phone that need to use the sd card. That means, apps you moved to the SD card for space reasons, or apps that might save photos to the SD card (such as messengers).
If your computer messes up the filesystem, then you're in a world of hurt.
If multiple apps can access the filesystem at the same time, why couldn't some app (a background system process) also read from / write to the filesystem in an Android multi-access-compatible way, while serving the mass storage device API on the other side?
That's possible, but really challenging to do, because mass storage is block oriented, the host (desktop) is likely to do read and write caching, and there's no mechanism for the host to say "I finished writing this file", nor for the device (phone) to say "these blocks changed from under you".
The paradigm for block-oriented filesystem access is exclusive access, and filesystem code is built around that. There are some niche filesystems for multiple simultaneous access to block devices, but I don't know if any are open source; mostly people don't set up SCSI/SAS/DAS disk arrays with two hosts anymore, and when they do, they're much more likely to have exclusive access with failover than simultaneous access.
If you had a team of really detail oriented developers capable of getting this done for Android and desktop platforms, wouldn't you rather they work on something else?
Another approach might be to build a virtual filesystem to export as a block device on usb connection that's essentially a snapshot of the current one, and then you sync any changes that were written on usb disconnect, but then you need to manage divergent changes and that's unfun too.
SMB over USB would be terrible in many ways, but probably handle this use case much better.
> If you had a team of really detail oriented developers capable of getting this done for Android and desktop platforms, wouldn't you rather they work on something else?
If you can improve the world (mass storage device is really wide spread) this way, why not?
For better or worse, most people's photos never get transferred to a computer. Heck, there's a ton of people who don't have a non-phone computing device. Pushing their photos to the cloud so they see them after their current device dies is better than working on a new filesystem that allows for multiple host simultaneous access and porting that to everything. I could make a huge list of more tractable things that this hypothetical team could work on to make Android better for way more people.
Top on my list would be getting it so the touch screen just works every time. I can't count the number of times I've had to turn the screen off and on, because the touch screen came up in a way that I can't swipe up to get the code entry because swiping from the bottom to the top of the screen doesn't move it enough. I've had this happen on pretty much all my androids.
Things like booting faster would actually be nice. Especially since sometimes phones reboot in pockets. Setting up applications for faster starting would be amazing. It's not all in Google's court, but a basic hello world with the Android IDE starts up rather slow, even if you've noticed you need to compile a release build.
It doesn't, but consider that the vast majority of us already carry our phones everywhere.
Would carrying an extra USB stick be that big of a hassle? No, but I can see the need for booting up a ready Linux image being extremely situational so the vast majority of time you're just carrying dead weight.
You can have a stick with one boot partition and one commonly formatted (FAT32/exFAT/ext) partition, with the Linux image stored in the latter. Then it's like a normal stick that can also be used to boot Linux. Ventoy automates this process, allowing you to throw any ISO into a specific directory and boot it.
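Setting Ventoy up is basically a one-liner against the stick (the device name is an example, and this wipes it, so double-check):

# From the extracted Ventoy release tarball
sudo sh Ventoy2Disk.sh -i /dev/sdX
# Afterwards, just copy ISOs onto the stick's data partition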
What do you do about the USB cable though? A flash drive you can plug in directly, it's guaranteed to work. A phone requires you to either carry around an extra cable (arguably more annoying to carry than a flash drive) or take the risk that you won't have the right cable available nearby when you're trying to (say) boot a laptop.
I have a few Verbatim "Tuff and Tiny" USB drives. Like this but without the plastic clip part. I can fit them in my wallet because it's about the thickness of 2 credit cards, which are also in my wallet.
Reminds me of the credit card sized (literally [1]) USB stick I still have somewhere but it was too annoying to carry around and hope that next time that cheap stick still works...
Using the phone directly still seems the cleanest and most reliable way. Or maybe a combination of both, like those magnetic plugs [2] but with an integrated USB stick. Bonus points if you don't have to take it out at all (until needed) by either connecting the other magnetic part for data transfer and charging or data through USB OTG and wireless charging. One can dream... but the technology will shrink even further so who knows.
USB sticks attached to keychains are already widespread in some communities (DJs for example), I'm sure us software people could do it too if we wanted to :)
While not booted from, wimlib's support for pipable WIMs means through some shenanigans, you can install modern Windows from tape. I had a bootstrap ISO that would fire up Windows PE, mount my USB DAT tape drive, rewind it, prep the onboard storage, then image direct from tape to disk and make it bootable.
I posit that because wimlib supports pipable WIMs that we could pipe an endless stream of QR codes to it (thus making the "installing Windows from QR codes" possible)...
My first IT job involved installing a lot of Windows 95 from floppy disk. Luckily each PC I bought came with a set, so I'd build up some "good sets" over time after discarding all the disks that had read errors.
The first time I installed SLS Linux (pre-Slackware), it took some 25 1.44MB floppies and I owned ~20 empty ones. I left the installer running overnight and downloaded more floppies the next day at school. It took an extra day because some floppies had bad sectors and had to be re-downloaded.
Somewhere in my parents' house there is a massive box with floppies for office 95 (or whatever it was called back then). Not 40 floppies massive, but still a large number.
I think we managed to only ever install it once successfully without error.
Also, fun semi-related fact: In my country we called 8" and 5.25" floppies "floppies", and the smaller 3.5" ones were called "stiffies" - because the larger ones were floppy, and the smaller were, well, stiffer. Do with this information as you please.
Installation took more than overnight. Once it was up and running, it was remarkably responsive in the short run, but it would invariably need to do a lot of seeking to launch any app. The sort of thing where, if you were sitting nearby, you could give it some input every couple of minutes.
For some fringe use cases one could drop a readily installed (and defragmented!) OS image to the tape and boot it up. I've only had some floppy tape drives and parallel-port attached Ditto. They didn't support random access, or at least I never had a driver that could do that.
I seem to recall some vendor (HP?) selling external tape drives at some point that supported bootable, bare metal Windows restore from tape.
I believe it worked by supplying the recovery software as a bootable ISO image in ROM on the drive and emulating a bootable (USB? SCSI?) CD-ROM drive at boot.
If your intended system volume was going to require drivers that weren't built into WinNT, you needed to press F6 at a specific point during installation. This would allow you to load a driver that makes the volume visible / usable.
This process was specific to installing storage drivers needed for the system volume. All other driver installation happened elsewhere.
My memory says there was actually a "Press F6 to load system storage drivers" prompt or something displayed by the installer, but it wasn't displayed for all that long a time and I imagine it was effectively invisible for many people. I recall spamming F6 to make sure I wouldn't miss the prompt.
Actually there were two separate times during the installation process that you could press F6 to provide storage drivers. The first had no visible prompt! The second has the prompt you remember.
Here's how I remember it: The Windows CD itself had drivers built into the installer so that it could discover hardware. However, if you had a brand new storage controller, you might find that even Windows NT CD's installer wouldn't recognise it, so it would tell you that there were no storage devices found. To get around this you had to press F6 right at the start of the CD boot, before the Windows logo appeared. After a few seconds you could provide your storage drivers on a floppy disk, and the Windows installer program would continue to load. This time, the installer would recognise your disks. Then during the installation you would get a visible F6 prompt to provide your storage drivers. This allowed you to provide extra storage drivers that would be bundled with the installed OS.
Most people didn't know about the first F6, because I think NT installer had some sort of very basic, generic storage drivers that would work in most cases. If you had some very recent array controller, you would likely need to know about the "invisible F6 prompt".
Any current or future OS should have its filesystem completely decoupled from the OS itself -- thus allowing booting/running the OS off of any kind of plain or esoteric storage device, local or network, present or remote, physical or cloud-based, interrupt/DMA based or API/protocol based, block-based or file-based, real or virtualized, encrypted or not encrypted, tunnelled or not tunnelled, over another protocol or not over another protocol, using TCP/IP or UDP or even just raw 1's and 0's over whatever electronic communication channel someone invents next, etc., etc.
Old time OS programmers typically didn't need to think about these things...
Current and future OS designers might wish to consider these things in their designs, if they desire maximum flexibility in their current or future OS...
This inspired me to study the possibility of booting into one Linux and then chrooting into another. The reason is that I cannot update the first one, it being too old, but it has important janitorial purposes. With the help of ChatGPT I made this script, where everything seems to work, including X (graphical) programs.
# Mount the other system's root partition (adjust the device to yours)
sudo mount /dev/sdb2 /mnt
# Allow local clients to use the X server, and share the X socket and auth cookie
sudo xhost +local:
sudo mount --bind /tmp/.X11-unix /mnt/tmp/.X11-unix
sudo cp ~/.Xauthority /mnt/root/.Xauthority
# Bind the host's virtual filesystems into the chroot
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo mount --bind /dev/pts /mnt/dev/pts
# Enter the other system as a login shell (the username is mine)
sudo unshare --uts chroot /mnt su -l timonoko
# Tear everything down again after leaving the chroot
sudo umount /mnt/proc
sudo umount /mnt/sys
sudo umount /mnt/dev/pts
sudo umount -l /mnt/dev
sudo umount -l /mnt/tmp/.X11-unix
sudo umount -l /mnt
I remember first getting my cable modem at the house, and I was able to install BSD over the network using a boot floppy.
That was an "amazing" thing to me back in the day. I had the bandwidth to do it, a simple floppy to start the whole process and...there it was! BSD on my machine.
I'm not sure if you can still do that today. Pretty sure the files were FTP hosted somewhere (or even TFTP). I think today it's all ISOs.
> On the brink of insanity, my tattered mind unable to comprehend the twisted interplay of millennia of arcane programmer-time and the ragged screech of madness, I reached into the Mass and steeled myself to the ground lest I be pulled in, and found my magnum opus.
> Competitiveness is a vice of mine. When I heard that a friend got Linux to boot off of NFS, I had to one-up her. I had to prove that I could create something harder, something better, faster, stronger.
sounds like they're well aware of the traditional way to do it, and are deliberately going out of their way to do something different and weird.
Considering how slow and buggy it is to use as a rootfs, you could instead put an initrd on Google Drive and just boot that. You'll need to build it by hand to get it down to a reasonably small size, so picking up a copy of Linux From Scratch and using musl or uClibc along with BusyBox will go a long way towards a functional system in a small size.
If you want a fuller system you could try 1) converting the filesystem to tmpfs after boot and installing packages to RAM, or 2) mounting a remote disk image as your rootfs rather than keeping individual files remote. The former will be blazing fast but you're limited by your RAM. The latter will be faster than FUSE, benefit from I/O caching, and not have the bugs mentioned.
UEFI provides a pretty complete environment; it would probably not be too hard to write a .efi program that connects to the network and downloads whatever you want from Google Drive (or anywhere else) into RAM and runs it. For that matter, IIRC Linux can already build a combined kernel+initrd into a .efi, so you could make this semi-generic by writing a gdrive.efi that downloads an arbitrary .efi from gdrive and boots it.
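The combined kernel+initrd .efi bit is already packaged up nicely if you have a recent systemd; a sketch with made-up paths and cmdline:

# Build a unified kernel image (EFI stub + kernel + initrd + cmdline)
ukify build \
  --linux=/boot/vmlinuz \
  --initrd=/boot/initrd.img \
  --cmdline="root=/dev/ram0 quiet" \
  --output=BOOTX64.EFI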