Moving PCs to a more mobile-like environment, where apps are always sandboxed and all permissions are explicit, sounds so tempting. But then you realize that manufacturers will always cross the line: MS and Apple apps get special privileges, the bootloader will be permanently locked, and you'll eat your DRM because you have no other choice.
I prefer to curate my own apps and if they misbehave, they're uninstalled immediately. The downside is that this philosophy feels a lot like, "I don't need a virus scanner because I'm too good to get stung by a virus". Still, it seems better than the alternative hell.
> Moving PCs to a more mobile-like environment, where apps are always sandboxed and all permissions are explicit, sounds so tempting. But then you realize that manufacturers will always cross the line: MS and Apple apps get special privileges, the bootloader will be permanently locked, and you'll eat your DRM because you have no other choice.
It's always worth emphasizing, imo, that technological prophylactics like mobile-style sandboxing are redundant on operating system distributions whose social processes of development take responsibility for things like this (auto-updating, startup behavior, default app associations, file type associations, etc.) away from individual apps and centralize it under the user's control via the OS's built-in config tools.
To be more explicit: free software distros generally lack software that behaves this way, and often actively patch such behavior out of ill-behaved cross-platform applications (which sometimes omit it themselves in builds targeting free operating systems). The problem is, as you note in your second paragraph, just as much that we're getting our software from untrustworthy distributors as it is that our desktop operating systems do too little to protect us from untrustworthy software. In some cases, the problem is rather that our software distributors have too little capacity to intervene, both for technical reasons (e.g., they don't have the damn source code) and social/economic ones (e.g., the model is to accept as much software as possible rather than to include a piece of software only after it is clear that there is capacity to scrutinize it, sanitize it, and maintain a downstream package of it).
You can avoid 99.999% of this kind of crap just by running a free software base system and avoiding proprietary software. Better app sandboxing is great! We should demand it and pursue it in whatever ways we can. But installing software directly from unscrupulous sources and counting on sandboxing to prevent abuse is also an intervention at the least effective stage. Relying on a safer social process of distributing software goes way further, as does keeping shitty software off our systems in the first place! These should be the approaches of first resort.
The problem is that reliably catching programs misbehaving is non-trivial even for the technically capable, let alone everyone else, which easily results in a lot of damage being done by the time they're caught, if they're caught at all.
I don’t think there’s a practical way forward for desktop OSes that don’t embrace sandboxing and tight permission controls, at least for the masses. Even for myself, I’d be more comfortable using an OS like that — if trust of the corporations making the OSes becomes an issue, the solution that’s apparent to me is to run a Linux distribution with deeply integrated permissions and sandboxing, not to run without either.
I think that the way forward is to design an entirely new operating system, and I have some ideas for how to do it (but not a name, yet). It will be FOSS, and the user will be able to easily reprogram everything in the entire system. It uses capability-based security with proxy capabilities; no I/O is possible (including reading the current date/time) without using a capability. Better interaction of data between the command line and the GUI will also be possible.
Linux and other systems tend to be overly complicated; they need to keep adding extra functions to the kernel because of problems with the initial design, but I think it can be done better and more simply.
Webapps are also too complicated, and they don't really solve the permissions issue properly (the set of permissions doesn't (and can't) include everything, overriding by the user is difficult and inefficient, etc.), and they aren't very well designed either.
There are sandboxing systems available on Linux, but they have their own problems; e.g., many have no way to specify the use of popen with user-specified programs, or what needs to be accessed according to user configuration files, command-line arguments, or user-defined plugins, or they cannot properly support character encoding in file names (due to issues with D-Bus), etc. (Using D-Bus for this is a mistake, I think, and the parts that don't involve D-Bus don't handle it very well either.) There is also the issue that it is unknown what permissions will be needed before the program is installed, especially when using libraries that can be set up to use multiple kinds of I/O.
Yes. My ideas have some similarities with Genode but also many significant differences. For example:
- The design is separate from the implementation. The kernel of the system will be specified and simple enough that multiple implementations are possible; the other parts of the system can be specified in the same way, and implementations from different sources can be used together. (Components can also be replaced.)
- The ABI will be defined for each instruction set, and it will be the same for any implementation that uses that instruction set; the system calls will be the same, etc.
- The design is not intended for use with C++ (although C++ can be used). It is intended for use with C, and with its own programming language called "Command, Automation, and Query Language". Assembly language, Ada, etc. are also possible; the core portable design targets C, but the abstract system interfaces are defined in a way that is not specific to any programming language.
- All I/O (including reading the current date/time) must be done using capabilities. A process that has no capabilities is automatically terminated, since it cannot perform any I/O (including in the future, since it can only receive new capabilities by receiving a message from an existing capability). (An uninterruptible wait on an empty set of capabilities also terminates a process, and is the usual way to do so.)
- It does not use or resemble POSIX. (However, a POSIX compatibility library can be made, in order to compile and run POSIX-based programs.)
- It does not use XML, JSON, etc. It has its own "common data format" (a binary format), used for most files, for the command-line interface, for some GUI components, etc.
- The character set is Extended TRON Code. The common data format, keyboard manager, etc all support the Extended TRON character set; specialized 8-bit sets are also possible for specialized uses. (This is not a feature at the kernel level though; the kernel level doesn't care about character encoding at all.)
- Objects don't have "methods" at the kernel level, and messages do not have types. A message consists of a sequence of bytes and/or capabilities, and has a target capability to send the message to.
- Similar to the "actor model", programs can create new objects, send their addresses in messages through other capabilities, and wait for capabilities and receive messages from them. (A "capability" is effectively an opaque address of an object, similar to a file descriptor, but with fewer operations available than POSIX file descriptors allow.) Creating new objects works somewhat like the socketpair function in POSIX, with SCM_RIGHTS used to send access to other objects (see the sketch after this list).
- A new process will receive an "initial message", which contains bytes and/or capabilities; it should include at least one capability, since otherwise the process cannot perform any I/O.
- There is no "component tree".
- Emulation is possible (this can be done by programs separate from the kernel). In this way, programs designed for this operating system but built for x86 computers can also run on RISC-V computers and vice versa, and other combinations (including supporting instructions that are only available in some versions of instruction sets; e.g., programs using BMI2 instructions work even on a computer that doesn't support those instructions). Of course, this makes the program less efficient, so native code is preferable, but emulation makes it possible to run programs that don't have a native build.
- Due to emulation, network transparency, etc, a common set of conventions for message formats will be made so that they can use the same endianness, integer size, etc on all computer types. This will allow programs on different types of computers (or on the same computer but emulated) to communicate with each other.
- A process can wait for one or more capabilities, as well as send/receive messages through them. You can wait for any objects that you have access to.
- The file system does not use directory structures, file names, etc. It uses a hypertext file system. A file can have multiple streams (identified by 32-bit numbers), and each stream can contain bytes as well as links to other files.
- Transactions/locks that involve multiple objects at once should be possible. In this way, a process reading one or more files can avoid interference from writers.
- Better interaction with objects between the command line and the GUI than in most existing systems.
- Proxy capabilities can be defined (implemented in C or other programming languages, including the interpreted "Command, Automation, and Query Language"). This is useful for many purposes, including network transparency, fault simulation, etc. If a program requires permission to access something, you can program a proxy to modify the data being given, to log accesses, to make the capability revocable, etc. (For example, if a program expects audio input, the user can provide a capability for the microphone or for an existing audio file, etc.)
- There are "window indicators", which can be used for e.g. audio volume, network, permissions, data transfer between programs, etc.
- The default user interface is not designed to use fancy graphics (a visual style like Microsoft Windows 1.0, or like X Athena widgets, is good enough).
- USB is no good. (This does not mean that you cannot add drivers to support USB (and other hardware), but the system is not designed to depend on USB, so if the computer hardware allows it, USB can be avoided without any loss of software functionality.)
- System resources are not handled the same way as in Sculpt; they are set up differently, because I think that many things would be better done differently.
- As much as possible, everything in the system is operable by keyboard. A mouse is also useful for many things, but every function is usable from the keyboard; a mouse is optional (but recommended).
- There are many other significant differences, too.
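To make the socketpair/SCM_RIGHTS analogy above concrete, here is a minimal sketch of passing a file descriptor between the two ends of a socket pair, shown with Go's syscall package on Linux purely for readability. This only illustrates the existing POSIX mechanism being referenced; it is not the API of the proposed system.

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        // Two connected endpoints, loosely analogous to the two ends of a capability.
        fds, err := syscall.Socketpair(syscall.AF_UNIX, syscall.SOCK_STREAM, 0)
        if err != nil {
            panic(err)
        }

        // Open a file whose descriptor will be handed to the other endpoint.
        f, err := os.Open("/etc/hostname")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Encode the descriptor as SCM_RIGHTS ancillary data and send it.
        oob := syscall.UnixRights(int(f.Fd()))
        if err := syscall.Sendmsg(fds[0], []byte("msg"), oob, nil, 0); err != nil {
            panic(err)
        }

        // Receive the message plus ancillary data on the other endpoint.
        buf := make([]byte, 16)
        oobBuf := make([]byte, syscall.CmsgSpace(4))
        _, oobn, _, _, err := syscall.Recvmsg(fds[1], buf, oobBuf, 0)
        if err != nil {
            panic(err)
        }

        // Decode the received descriptor; it refers to the same open file.
        msgs, err := syscall.ParseSocketControlMessage(oobBuf[:oobn])
        if err != nil {
            panic(err)
        }
        passed, err := syscall.ParseUnixRights(&msgs[0])
        if err != nil {
            panic(err)
        }
        fmt.Println("received descriptor:", passed[0])
    }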
> - Due to emulation, network transparency, etc, a common set of conventions for message formats will be made so that they can use the same endianness, integer size, etc on all computer types. This will allow programs on different types of computers (or on the same computer but emulated) to communicate with each other.
If you're going there, you could consider just going to wasm as the binary format on all architectures.
> - There are "window indicators", which can be used for e.g. audio volume, network, permissions, data transfer between programs, etc.
Kind of like qubes? More so, obviously, but it reminds me of that.
> - USB is no good.
What? USB is extremely high utility; this alone makes me think you'll never get traction. By all means lock down what can talk to devices, do something like https://usbguard.github.io/ or whatever, but not supporting USB is going to outweigh almost any benefits you might offer to most users.
(Also on the note of things that will impede uptake, throwing out POSIX and a conventional filesystem are understandable but that's going to make it a lot harder to get software and users.)
> If you're going there, you could consider just going to wasm as the binary format on all architectures.
There are several reasons why I do not want to use wasm as the binary format on all architectures, although the possibility of emulation means that it is nevertheless possible to add such a thing if you wanted it.
> Kind of like qubes?
Similar in some ways.
> What? USB is extremely high utility; just this makes me think you'll never get traction. By all means lock down what can talk to devices
I have clarified my message: "USB is no good" does not mean that it cannot be used by adding suitable device drivers. It means that the rest of the system does not use or care about USB; it cares about "keyboard", "mouse", etc., whether they are provided by PS/2, USB, IMIDI, or something else. However, USB has problems with the security of such devices, especially if the hardware cannot identify which physical port they are connected to, which makes things more complicated. Identifying devices by the physical ports they are connected to is much superior to USB, for security, for user device selection, and for other purposes; so, if that is not available, it must be emulated.
For distributions that do have a USB driver, something like USBGuard could perhaps be used to configure it. However, USBGuard seems to only allow or disallow a device, not specify how it is to be made accessible to the rest of the system (although that will be system-dependent anyway). (For example, if a device is connected to physical port 1, and a program has permission to access physical port 1, then it accesses whatever device is connected there, translated to the protocol expected by the virtual port type associated with that physical port.)
Even so, the system will have to support non-USB devices just as easily (and to prefer non-USB devices).
> Also on the note of things that will impede uptake, throwing out POSIX and a conventional filesystem are understandable but that's going to make it a lot harder to get software and users.
As I mention, a POSIX compatibility library in C would be possible, and it can also be used to emulate POSIX-like file systems (e.g., by storing a key/value list in a file, with file names as the keys and links to files as the values). Emulation of DOS, Uxn/Varvara, NES/Famicom, etc. is also possible, of course.
However, making it new does make it possible to design better software specifically for this system. Since the C programming language is still usable, porting existing software (if FOSS) should be possible, too.
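To make the "file names as keys, links as values" idea concrete, here is a purely illustrative sketch (in Go, just for readability; none of these types or functions exist in the proposed system) of resolving a POSIX-style path against files that only carry data and named links:

    package main

    import (
        "errors"
        "fmt"
        "strings"
    )

    // file is a hypothetical stand-in for a file on the hypertext file
    // system: some bytes plus named links to other files.
    type file struct {
        data  []byte
        links map[string]*file
    }

    // resolve emulates POSIX path lookup by treating each path component
    // as a key in the current file's link table.
    func resolve(root *file, path string) (*file, error) {
        cur := root
        for _, part := range strings.Split(strings.Trim(path, "/"), "/") {
            if part == "" {
                continue
            }
            next, ok := cur.links[part]
            if !ok {
                return nil, errors.New("no such file: " + part)
            }
            cur = next
        }
        return cur, nil
    }

    func main() {
        motd := &file{data: []byte("hello\n")}
        etc := &file{links: map[string]*file{"motd": motd}}
        root := &file{links: map[string]*file{"etc": etc}}

        f, err := resolve(root, "/etc/motd")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(f.data))
    }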
I guess I wasn't thinking of my primary line of defense: webapps! Naturally sandboxed and least-privileged. And many of them are locally self-hosted in containers too.
Native apps for me tend to fall into a few narrow categories:
You may be interested in "immutable" distros like OpenSUSE's Aeon, Fedora's Silverblue or the kind-of-Debian Vanilla OS. If you go and try Vanilla, by all means try the beta.
I feel the same way about e-bikes: expensive, proprietary parts and form factors everywhere. Oh, your battery is worn out? You need one that's custom molded to your downtube? That's too bad.
I think that at least some of this comes about because it's still relatively early days for the form factor. As the industry matures and becomes more cutthroat, everything will become more commodified and therefore standardised.
Look at some of the cars of (say) the late 19th century, where not even the steering wheel was standard. So while e-bikes are probably not at quite that early a stage right now, they've not advanced terribly far from the plain vanilla bicycle yet.
There are thousands of e-bike manufacturers. Many use the so-called "dolphin" battery pack which is fairly standard and always removable. The dolphin doesn't look as sleek as an in-frame battery but it's replaceable and it will usually provide longer range.
I've been expecting an e-bike that could take the tool-ecosystem batteries: DeWalt 60V, Eco 58V, Milwaukee 18V. It would probably need to dock several of them, with the exception of the Eco.
Unless you find out it's potted, the BMS needs to be reprogrammed, and there's a custom mesh or holder that doesn't work with some standard cells because of tolerances...
I'm pretty sure you can find the same radio hardware platform but FCC certified for GMRS (or so the label says anyway). Maybe they added filtering to get it to pass? That means a $35 GMRS radio with USB-C charging, swappable antennas, and higher transmit power.
He's already seen Meshtastic, which is something I definitely want to play with for his exact use-case: coordinating with friends while skiing.
Sorry this isn't a helpful answer, but over in Android-land, Syncthing does exactly this for me right now. I paired Syncthing with a script that pushes any new photos to a self-hosted gallery. It's as fast as, if not faster than, Google Photos and totally independent of any Google ecosystem. Add another offsite Syncthing machine and now you have a magical offsite backup.
This is something I really want, but I've never been quite sure how to set it up properly. Ideally I'd want to run it in the cloud so I don't have to be on my home network (and don't have to expose my home network in that way). I have a VPS that I use for a variety of things, but it doesn't have enough space for my photos. Syncthing doesn't seem to support S3 as storage.
I suppose I could put it on a machine at home, and expose it to the internet (perhaps using Wireguard), but I have very limited upload bandwidth (25Mbps), and would still want to sync the files to S3 (say with a script that runs nightly). I guess the initial sync would take forever, and then new photos would be relatively quick.
I guess I could also put it on my VPS and use something like Amazon's NFS service as the backing store. But I expect that would be quite a bit more expensive than the lower-cost S3/Glacier tiers I'd prefer to use.
With that kind of upload speed, I can see why you'd want cloud hosting. I'm paranoid enough to want a local copy so my first instinct is to still sync to home with an inotify script to trickle-push everything to S3 (quicker to start than a nightly script).
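As a rough sketch of that trickle-push idea (assuming the aws CLI is installed and configured; the folder path and bucket name are placeholders, and a simple polling loop stands in for inotify to keep it dependency-free):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
        "time"
    )

    func main() {
        dir := "/srv/syncthing/photos"           // placeholder: Syncthing folder
        bucket := "s3://my-backup-bucket/photos" // placeholder: S3 destination

        seen := map[string]bool{}
        for {
            entries, err := os.ReadDir(dir)
            if err != nil {
                log.Fatal(err)
            }
            for _, e := range entries {
                // Skip directories, already-uploaded files, and Syncthing's
                // in-progress temp files (named ".syncthing.<name>.tmp").
                if e.IsDir() || seen[e.Name()] || strings.HasPrefix(e.Name(), ".") {
                    continue
                }
                src := filepath.Join(dir, e.Name())
                // Shell out to the AWS CLI; a nightly "aws s3 sync" would work too.
                out, err := exec.Command("aws", "s3", "cp", src, bucket+"/"+e.Name()).CombinedOutput()
                if err != nil {
                    log.Printf("upload of %s failed: %v: %s", e.Name(), err, out)
                    continue
                }
                seen[e.Name()] = true
            }
            time.Sleep(30 * time.Second)
        }
    }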
Oh my goodness, the irony of your username while arguing against precisely defining the sun's position in the sky!
But you're still correct: When most people are using time in conversation, they need it to be meaningful with respect to solar time. If your replacement method can't do that, no one will adopt it.
Don't serialize to raw integers unless you absolutely have to. Serialize to a string value: it's future/oopsie proof and helps with debugging. The nature of iota is pushing people away from bad habits.
That said, getting warnings about missing enum cases in switch statements is very handy. But Golang's type system never aspired to be as rigid and encompassing as C++, Haskell, Rust, etc.
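For concreteness, this is roughly what the "serialize to a string" advice looks like with an iota enum (the type and constant names are made up for illustration):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Color is a made-up enum for illustration.
    type Color int

    const (
        Red Color = iota
        Green
        Blue
    )

    // String gives each value a stable name, so logs and serialized data
    // stay meaningful even if the iota order changes later.
    func (c Color) String() string {
        switch c {
        case Red:
            return "red"
        case Green:
            return "green"
        case Blue:
            return "blue"
        }
        return "unknown"
    }

    // MarshalJSON serializes the name rather than the raw integer.
    func (c Color) MarshalJSON() ([]byte, error) {
        return json.Marshal(c.String())
    }

    func main() {
        b, _ := json.Marshal(Blue)
        fmt.Println(string(b)) // "blue"
    }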
>But Golang's type system never aspired to be as rigid and encompassing as C++, Haskell, Rust, etc.
Well, it didn't have to aspire to all that to at least make an effort to be more helpful, especially in trivial aspects like having an actual enumerated type, or an Optional/Error type...
I don't think you understand how minimalist Golang is... the std lib only has room for anything you would ever need for a web service including a full web server, builtins like hashmaps, a bunch of things for concurrent programming like channels, go routines, etc. There are also language features like returning tuples and destructuring them, which is actually more advanced than its peers.
To include an optional type would be against Go's identity.
> I don't think you understand how minimalist Golang is... the std lib only has room for anything you would ever need for a web service including a full web server, builtins like hashmaps, a bunch of things for concurrent programming like channels, go routines, etc.
I can't tell if this is sarcasm.
If it isn't, I don't see how what you've mentioned is minimalistic or how adding an option type would be against Go's identity.
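For reference, the kind of option type being discussed is expressible with Go 1.18+ generics; a minimal sketch (the names are illustrative, not from any standard library):

    package main

    import "fmt"

    // Option is a minimal illustrative optional type.
    type Option[T any] struct {
        value T
        ok    bool
    }

    func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }
    func None[T any]() Option[T]    { return Option[T]{} }

    // Get returns the value and whether it is present, mirroring Go's
    // familiar "comma ok" idiom.
    func (o Option[T]) Get() (T, bool) { return o.value, o.ok }

    func main() {
        o := Some(42)
        if v, ok := o.Get(); ok {
            fmt.Println("value:", v)
        }

        var empty Option[string]
        _, ok := empty.Get()
        fmt.Println("present:", ok) // false
    }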
Yes? All the time? "Stay out of trouble" is definitely not universal among Linux users. Not too long ago, the idea would have been kind of a fairy tale.