Each of those three operating systems has weaknesses that make me jump between them quite often. Things like: forced updates in Windows 10, a certain lack of hardware, application, and game support in Linux, and macOS "requiring" Apple hardware (I know about Hackintoshes, built some in my day, but they always felt like a house of cards).
There are plenty of other strengths and weaknesses to each of them; I just feel as if I'm always giving up something awesome and gaining something painful when switching. One would think by now we would have more competitors (and I consider the various distributions/desktop environments of Linux a single competitor to Windows and macOS) trying out new kinds of user experience, and/or trying to bring together what makes each of them great in a single offering. Of course, so much is driven by ROI and developer buy-in, so I understand the reality and complexity.
The above is my dream :)
I'm pretty sure you've discovered a simplex (something like the CAP triangle) here! These all trade off against one another:
1. highly-integrated first-class hardware support
2. support for all third-party components under the sun
3. low security-vulnerability surface (and thus a relaxed security-critical update cadence)
Windows picks 1 and 2 at the expense of 3. Linux picks 2 and 3 at the expense of 1. Apple picks 1 and 3 at the expense of 2. (Hackintoshes trade some of 1 for some of 2, ending up fully satisfying neither.)
I'm pretty sure you can't have 100% of all three, no matter your resources. They're trade-offs, not in time/effort, but in design-space.
Linux supports much more consumer hardware out of the box than it did a decade ago, but if you ever do encounter a driver issue, you'll need to know your way around the terminal and be able to google-fu your way around some forums. The people on this site might be able to get through it without too much pain, but we're in the minority.
Windows definitely supports more hardware out of the box, and even when something doesn't work, it gives you some nice GUIs to manage the drivers. At the end of the day, writing drivers sucks, and it's tough to provide support for the virtually endless supply of components and peripherals without paying people to write those drivers.
You rarely have to install any drivers. The kernel comes with drivers for almost every device under the sun.
On the other hand, there are a lot of CVEs for Linux, and while they do get fixed quickly, the security-vulnerability surface area is quite large. Especially considering the sheer size of the kitchen-sink kernel that it is.
In all these cases, a third-party driver is essentially being given the reins to the system, intentionally, as part of the user's expected workflow. Windows and macOS (and, I would suspect, any OS solving for enterprise requirements) both have places where they allow third-party drivers to participate in these elevated APIs, while Linux doesn't really (Linux only has third-party driver blobs that fit into specific, isolated in-kernel sandboxes).
My thought re: #3 was that:
• with Linux (and disregarding the few large corporate binary blobs for wi-fi and GPU), mostly corporate-sponsored FOSS devs write PRs for the kernel, and then the kernel maintainers “take receipt” of that code by merging it, taking all further responsibility for that merged code, making it essentially equivalent to code they wrote themselves (and at that point, the code is now owned by the kernel project, such that new development should not occur out-of-tree against the original development, but as in-tree patches against the merged code);
• with Windows, corporate third-parties (but often fly-by-night Shenzhen ones, if that’s the hardware you buy) maintain most drivers, and Microsoft just certifies them, like apps in an app-store (which ensures mostly that they don’t crash Windows, and isn’t a security audit);
• with macOS, Apple writes most drivers. Of the drivers third-parties do write, Apple takes ultimate control and responsibility for QAing that code and customizing it for macOS—Apple does their own builds, hardware-matrix testing, packaging/deployment, etc., essentially making each particular point-release of a driver into a part of the base OS. However, they don’t take ownership of this code; the driver will rot unless the third-party sends them a new version to start integration on all over again.
• with Hackintosh’ed macOS, random FOSS hobbyist developers write the (extra required-for-your-build) drivers, and nobody is guaranteed to be maintaining or QAing them.
A security-vulnerability surface isn’t just about surface area, but also about how much control and visibility—essentially, trust—the OS maintainer has into that surface area.
Windows forces updates as often as it does, because Microsoft doesn’t have good visibility into which third-party drivers will be affected by which updates (which they could use to bunch together all driver updates addressing a particular CVE); nor do they have control of those third-party drivers enough to structure them such that their updates can be woven into a seamless, non-restarting update process. Security-critical updates just... show up, on their doorstep, coded in arbitrary silly ways (to deal with the arbitrary silliness of the original background services and/or tray utilities that talk to the driver) and they have to deal with that.
Linux—at least where the base OS is concerned—doesn’t have any such “opaque-to-the-maintainers” surface area, so Linux can get away with fewer “restart required” conditions. They have both visibility and control.
macOS—because Apple demands ultimate control of all macOS drivers (to the chagrin of companies like Nvidia)—can demand that drivers take a particular form (i.e. microkernel daemon servers with activation ports) where Apple are easily able to just in-place upgrade many drivers as part of a package installation, without restarting the OS. The fact that they’re already in relationships with these driver developers also gives them visibility into the progress of addressing CVEs.
(macOS does want to restart for point-release updates pretty often, but that’s down more to Apple’s BSDish philosophy of “the base OS is a whole that should be replaced with a new whole, rather than getting into a state where both old and new components are running.”)
I might have been using linux wrong, because as a daily-driver media-consumption device, it has been anything but stable. I suspect it's because it was a laptop with a discrete graphics card, which for some reason completely trips linux up.
> Forced updates in Windows 10
I think this has been solved in a recent update
That is where Linux's reputation for stability most likely comes from.
However as a desktop machine mainly used for media consumption I would take windows over linux any day. No doubt due to poor graphics and audio driver support.
No question. I love my linux VMs. They are rock solid.
> for media consumption I would take windows over linux any day. No doubt due to poor graphics and audio driver support.
This is me right now. You boiled it down to the essence of it.
I have a desktop with a discrete Nvidia card, and on Linux I'm easily able to turn on my second monitor after the system is already booted and have it be recognized. I'm super curious to hear what issues you ran into with this sort of thing, because it all works without hassle for me.
Dell Precision with Intel, Ubuntu Mate, rock solid. VLC, 4k monitor, great media tools. No telemetry and UI shenanigans.
I have been running Linux daily at home because it's been rock solid for years on my home laptops. And that has involved plenty of media consumption.
I used to dual boot, but on the latest I couldn't untangle Windows from 100% disk usage and failed Windows Updates -- this on a laptop that was literally a few weeks old. So long, Windows.
Perhaps I am not a typical user, but nonetheless, I rarely have trouble with Linux. Everything just works.
I would love such an OS too! Sadly I'm not aware of anything that ticks all three boxes, but at least ElementaryOS attempts to tackle the last two. I enjoyed it three years ago, though I had to move to MacOS for work, so I'm unsure how it is now.
As that page describes, the "query" command (or its equivalent GUI) can be used to write filesystem queries, e.g.:
query ((MAIL:from=="*joe*") && (MAIL:when>=%2 months%))
Or AS/400, which is more or less a SQL database.
This made a lot of sense for a system intended for high reliability transaction servers and nothing else. Banks loved Tandems.
Attempts were made to use it as a database, but stuff was always getting corrupted.
The original MacOS suffered badly from being a cram job into 128KB with no hard drive or MMU. No CPU dispatcher, no memory protection, brittle ResEdit and TextEdit. The trouble was, that architecture persisted for 17 years, long after the hardware improved.
The Mac grabbed most of the good UI aspects of the Lisa, but stripped out all the decent OS concepts. It worked to sell the first Macintoshes, but didn't scale out later, and by that time they'd killed off the Lisa.
People keep saying that, but how true is it really? Many operating systems come with bundled relational databases; e.g., most Linux distributions come with more than one relational database implementation bundled. Does that make Linux "more or less a SQL database"? How deeply integrated is DB2/400 into the OS/400 kernel (or the equivalent term, such as "System Licensed Internal Code")? There is a paucity of public information about the actual nature or depth of this integration.
I think an OS with a truly deeply integrated SQL database would expose things like a table/view of all files in the filesystem, a view of running OS processes, etc. As far as I am aware, OS/400 (or "IBM i" as they are calling it nowadays), doesn't have any such features.
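To make that concrete, here is a sketch of what it could look like if an OS exposed its state through an SQLite-style C API. Everything below is hypothetical: the /dev/osdb path and the processes view are inventions for illustration, not features of OS/400 or any other real system.

    #include <stdio.h>
    #include <sqlite3.h>

    /* Hypothetical: open the OS's live state as a database and list
       busy processes. Neither /dev/osdb nor a `processes` view exists
       anywhere; this only illustrates the interface described above. */
    int main(void) {
        sqlite3 *os;
        sqlite3_stmt *stmt;

        if (sqlite3_open_v2("/dev/osdb", &os, SQLITE_OPEN_READONLY, NULL) != SQLITE_OK)
            return 1;

        sqlite3_prepare_v2(os,
            "SELECT pid, name FROM processes WHERE cpu_percent > 50.0",
            -1, &stmt, NULL);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%d\t%s\n", sqlite3_column_int(stmt, 0),
                   (const char *)sqlite3_column_text(stmt, 1));

        sqlite3_finalize(stmt);
        sqlite3_close(os);
        return 0;
    }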
To add to this confusion, IBM ported DB2 to OS/400 atop the native OS/400 data-object model. You can use SQL, but that just gets compiled to native operations against what are known as "physical files" (confusing name, yes) and "logical files". Physical files are fixed-length record files with data in them. Logical files are indexes atop physical files. A PC analog to this would be FoxPro, Dbase, Paradox, Alpha, etc.
Many OS/400 programs access these files using the native model, not SQL. Tandem and HP NonStop work in very similar ways.
I get the impression that S/38 and OS/400 have some really interesting concepts at the core of the OS, but it is questionable how well the higher levels layered on top leverage those concepts.
"Everything is an object" would be a lot more powerful if IBM let customers/ISVs define their own object types, when as far as I am aware they don't. Yet IBM will define dozens upon dozens of object types for all kinds of obscure requirements, many of which are no longer even relevant today .
The thinking at the time was that disk drives would become obsolete, and would be replaced entirely with some form of solid state memory.
What were they smoking? Bubble Memory?
Windows has it, though user-level tools are lacking. https://docs.microsoft.com/en-us/windows/win32/extensible-st...
Database servers are already mini operating systems in their own right. They:
- are composed of multiple processes; can include process-management code
- can access hardware directly (for efficient arrangement, caching and retrieval of data)
- include support for multiple users, concurrent sessions
- stored procedures require compiler tooling and runtime (VM)
- can be platform-independent, with an abstraction layer connecting them to the underlying OS
It obviously wouldn't be worth the effort, but how many times faster would our computers be?
Also, I don't think it makes things easier. What if I want to delete a process but accidentally end up deleting a file instead, because I mistyped the name of the process, or the file has the same name as the process? Which one should be deleted?
I think we need separate commands because the user needs to be in a different mindset when doing these operations.
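In database terms, "separate commands" would at minimum mean separate tables per resource type, so the statement itself says what kind of thing you are operating on. A hedged sketch, with invented table names, against the kind of OS database discussed above:

    #include <sqlite3.h>

    /* With each resource type in its own table, the delete target is
       unambiguous even when a process and a file share a name. `os` is
       a handle to the imagined OS database; the schema is invented. */
    void kill_job(sqlite3 *os) {
        sqlite3_exec(os, "DELETE FROM processes WHERE name = 'myjob'", 0, 0, 0);
    }
    void remove_scratch(sqlite3 *os) {
        sqlite3_exec(os, "DELETE FROM files WHERE path = '/tmp/myjob'", 0, 0, 0);
    }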
I'm actually not a huge fan of the Unix philosophy for that reason: you end up with a lot of general-purpose commands which work together in theory, and you can combine them in an infinite number of ways... but in practice they are too general, and combining them becomes too slow for a lot of scenarios.
If commands are too small and too general, you will always end up having to write long sequences of multi-line commands chained together to do anything useful, and performance will be bad; you might as well just write C/C++ code.
For querying information, I think the idea of representing everything as the same data structure really makes sense. The Unix philosophy or a relational view is perfect for this.
However, for more semantically complex things that have possibly surprising side effects, such as starting a service, setting up a new user account, connecting a drive, or installing a printer, I think we need separate commands or procedures to encapsulate all the complex behavior.
As for the UNIX philosophy of stringing together lots of general-purpose commands to get the desired result: the nice thing about that is you don't need to know programming, and it's very convenient for one-off scripting.
I beg to differ. It's a workflow by and for programmers, or at least people who think like programmers.
It's not clear whether the author is arguing that all existing OS data structures should be replaced with a single database system, or if they simply want all OS data to be capable of being queried/updated via a standard database style interface.
The former is simply hubris ("I know the perfect data structure for EVERYTHING!") while the latter is pretty much implementable as an interface layer on top of an existing OS.
For more info check https://www.researchgate.net/publication/2364452_Data_Struct...
Scalars, lists, tables (relations), trees/graphs.
You can build generalized operations on them, and they will work even if the internal structures differ (i.e., a list can be built on top of vectors, linked lists, tables, or trees).
This is what I'm working on in my own little language (http://tablam.org). It certainly requires specializing per structure, and some stuff is hard to generalize, but the idea holds.
The author mentions this removes the need for ids, but really it would just be hiding them in a non-serializable way.
It's all about the user experience. The system should be friendly and still enable the user to get progressively more out of the computer. It also needs to be extremely responsive.
AmigaOS got a lot right back in the 80s. BeOS (and now Haiku as spiritual successor) took a lot from that (system kits vs amiga's default set of shared libraries, and the concept of datatypes) and added some concepts of its own.
Meantime, the mainstream systems have only gone backwards.
My dream of a general-purpose OS is definitely an open-source microkernel, multiserver RTOS with capabilities and a user experience that builds on those two systems. It is important for it to be an RTOS, as unbounded response times would make the system fail as a general-purpose system and as a personal computer system.
The closest existing system would be Genode. It currently only meets the technical side unfortunately, but it could meet the user experience requirements with some work.
Ignoring drivers for the moment, your typical desktop application does not have a need for a bounded response time. Desktop applications typically do not fail due to a dialog box being popped up a few milliseconds late.
The underlying drivers, on the other hand, do tend to need bounded-response time, but this is typically performed by the use of the interrupt handling subsystem, whether directly handled or split into upper/lower.
Even non-RTOSs should be able to provide real-time response to drivers in this way and the trade-offs taken by an RTOS scheduler are not always appropriate or optimal for a desktop, user-facing system.
A great many people work with audio or want to work with audio, and Windows and OSX are lightyears ahead of Linux in terms of ease of use and overall quality.
You'd think the PulseAudio fiasco would have improved the situation, but I believe the developers thought as you did, and so built a system that's woefully encumbered with latency. Jack improves the situation WRT latency, but it comes at the cost of an altogether brittle and broken integration with PA, and thus with all major Linux desktops.
Bounded response times may not be needed for your word processor, but there's plenty of folks out there who would be better served by it. Hell, I'd love a snappier editor on account of predictable response times to keypresses.
PA is in rather good shape now & it has capabilities its predecessors didn't. But it's very hard to get people to update their notions about something once it sets in.
No matter how much Microsoft tries, Windows will always be 'insecure, virus-ridden', macOS will always be super stable, beautiful, no matter how much Apple screws up & Linux will always be unusable on the desktop no matter how many decades I'll spend enjoying it on the desktop at home and at work.
Sadly, it seems a person's mind tends to naturally lean conservative and it takes a herculean effort to regularly update one's preconceived notions.
It's an exercise in self-delusion to pretend PA can replace ALSA as long as it has this blatant issue.
Patch sets like -ck simply don't help with this.
Try running cyclictest from rt-tests for a while to see the true horror of Linux behavior.
No they won't fail but a responsive user interface is critical. A process running on an RTOS won't complete any faster (probably the opposite), but displaying some sort of acknowledgement that a key has been pressed or a button has been clicked without any human perceptible delay is really nice. I'd love for the user interface portion of an OS to run with real time constraints.
That's just the easy way to achieve hard realtime.
It is, fortunately, not the only way.
I remember people who used the QNX demo disk on old machines describing how they could do big compiles in the background with the system still responsive to input. I'm guessing the parts interacting with the user had higher priority.
Real-time might help in another way: mitigating covert channels. You've got to know the timing of secret-handling code; then you make the observed timing fixed-length or random. You don't always need real-time guarantees to do that, though. For instance, separation kernels just need timing for the partitions, with the whole thing stopping regardless of where the code is at.
One last benefit might be easier performance monitoring and diagnostics. If it's predictable, one might be able to assign a range or profile to it. Then, raise alarm if anything goes out of profile. Just speculating here: never built one.
Not a few milliseconds, but a few dozen seconds might be a different story. While it might be hard to predict or measure an exact number, your deadline is however long it takes for the user to equate it to "forever" and kill the process / leave the website / uninstall the app / etc..
In general, being able to make these kind of promises means that the OS has to be pretty conservative about the code it allows to run, and a 'best effort' OS or some 'soft' RTOS is probably going to be faster or more efficient.
In terms of UI responsiveness, I'd bet that crappy software running on a 'hard' RTOS can still manage to feel sluggish, because the UI delays probably aren't actually in the OS itself.
Or, in other words, the operating system is not general purpose, because it is not suitable for these uses.
Do note these uses do include the likes of audio. If there's no way to guarantee latency, then a spike will surely eventually cause an overrun, which will be perceived as an audio hiccup. Sometimes, this is just an annoyance. On a live performance or in pro audio work, it can be fatal.
A good microkernel will allow mixed-criticality tasks to share the system. A lot of recent research in the topic has brought seL4 to this point.
This sounds more like a complete version of the Fuchsia operating system, which both has the technical side and is more of a user-facing OS. That probably has more of a chance of meeting these requirements at the rate it is being developed.
Windows sorta does that with the Registry but A) the registry isn't the filesystem, B) the registry is a filesystem on top of NTFS and C) the registry sucks.
If the Registry had the power of, say, PostgreSQL to back it up, it would suck a lot less. Of course, for large files you should still be able to rely on more conventional filesystems, as well as for backwards compat.
It has no schema. Imagine if it had some sort of schema, declaring what sub-keys and values are allowed under each key. Imagine if the schema was self-documenting, with each key/value declaration in the schema had an associated description explaining what it was for.
Imagine if it had richer data types. For example, a "link" type, in which a value actually has the name of another key. If you try to delete the target, either it doesn't let you, or it sets the value to some sort of null value. (Basically, something like foreign keys in a relational database.) And an index to quickly find "back-references" (show me everything that points to this key.)
The registry should have included a database of installed packages/applications, and every key should have been marked with what package/application owns it, along with some sort of indexing to make it quick to find everything a package/application owns. Application uninstalls would have been a lot cleaner, and issues with apps leaving behind junk in the registry avoided.
Transactions: This was added in Windows Vista. But it could have been there from the beginning.
OTOH, remembering the registry was originally implemented in Windows 3.1, which had minimum system requirements of a 286 with 1MB of memory, maybe my suggestions above just wouldn't have been feasible.
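For what it's worth, here is roughly what the link type and package ownership could look like as an SQLite-style schema. All names are invented; this sketches the idea, not any real Windows structure (and the ON DELETE behavior assumes foreign-key enforcement is on, i.e. PRAGMA foreign_keys=ON):

    /* Hypothetical registry-as-relational-schema, in SQLite SQL. */
    static const char *REGISTRY_SCHEMA =
        "CREATE TABLE packages ("
        "  id   INTEGER PRIMARY KEY,"
        "  name TEXT NOT NULL UNIQUE"
        ");"
        "CREATE TABLE keys ("
        "  id      INTEGER PRIMARY KEY,"
        "  parent  INTEGER REFERENCES keys(id),"
        "  name    TEXT NOT NULL,"
        "  owner   INTEGER NOT NULL REFERENCES packages(id),"
        /* The "link" type: nulls out when its target is deleted. */
        "  link_to INTEGER REFERENCES keys(id) ON DELETE SET NULL"
        ");"
        /* Back-references: find everything that points at a key. */
        "CREATE INDEX keys_by_link  ON keys(link_to);"
        /* Uninstall becomes one indexed query, not a registry crawl. */
        "CREATE INDEX keys_by_owner ON keys(owner);";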
And we have to do that because we don't want all our software to have to provide the basic services, but we don't really get that: we still have to provide all the libraries it needs, some of which emulate an OS anyway by providing a big, complex runtime.
So I don't think this is an OS problem, it's a build problem. If I can ask a machine what services it has, and ask the software what services it needs, it should be possible to find a minimal solution.
I think NixOS gets very close to this ideal, even though it's obviously built on a traditional OS. But if we're going to talk about the "ultimate OS" I think it's worth it to ask, "how much can we take away from this?"
And the reason to want that is simply driving towards the goal of making software components that are able to work together reliably rather than building these monolithic devices that only a handful of veterans can understand.
I'll have to give it a spin, not sure when that will be, but I'd love to explain to a security audit that we don't patch an instance because there's nothing on it but the application.
> The hierarchical organization of file systems manifests itself in nesting of directories (folders). On the other hand, directories are merely named views of a certain subset of files, selected according to some criteria. Thus, a directory can be thought of as a "database view," a named database query. It follows immediately that a file may appear in as many "directories" (views) as one wishes to. For example, one "folder" may show all files tagged as "sales reports", while another directory contains files modified within five days. Searching a file system and creating and populating a folder becomes therefore the same activity. Since saved views are database objects themselves, one can reference views within views if one so wishes. There is no required hierarchy however: one may create two views that refer to each other, or any other network of views that best suits the problem.
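In SQL terms, the "folder as saved query" above is literally a view. A minimal sketch, assuming an invented files table with path, tags, and mtime (Unix-time) columns:

    /* Hypothetical "folders are views", in SQLite SQL. A file can sit
       in both "folders" at once, since each is just a saved query. */
    static const char *FOLDER_VIEWS =
        "CREATE VIEW sales_reports AS"
        "  SELECT path FROM files WHERE tags LIKE '%sales reports%';"
        "CREATE VIEW recent AS"
        "  SELECT path FROM files"
        "  WHERE mtime >= strftime('%s','now') - 5*86400;";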
I built koios to solve this problem, but I didn't take the fuse approach because it seemed like a lot of extra work for the user, and puts databases first rather than files, which is un-UNIX.
It can be really nice if you have the time to look up the documentation. It doesn't have the WYSIWYG simplicity of files, though.
Every single access of every single piece of data, for anything, will incur all of those overheads every time. There is no way to opt out or choose a different balance of guarantees or trade-offs without implementing them on top of the OS provided ones.
In a database, preparing a query compiles the access plan (ie all the above) on the server side. There's no reason it would be any less efficient than an OS call today.
Could in fact be more efficient, since you would probably be able to leverage set-based operations on the server (ie OS) side rather than iterating everything on the caller.
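That's the prepare-once, execute-many pattern every SQL client library already supports. A minimal illustration with SQLite's C API (the files table is made up; the point is that the plan is compiled a single time):

    #include <sqlite3.h>

    /* Prepare once: SQLite compiles the SQL into an access plan a
       single time. Each later lookup just binds a parameter and runs
       the cached plan -- not far in spirit from a cheap system call. */
    int lookup_size(sqlite3 *db, sqlite3_stmt **stmt, const char *path,
                    sqlite3_int64 *size_out) {
        if (*stmt == NULL &&
            sqlite3_prepare_v2(db, "SELECT size FROM files WHERE path = ?",
                               -1, stmt, NULL) != SQLITE_OK)
            return -1;

        sqlite3_bind_text(*stmt, 1, path, -1, SQLITE_TRANSIENT);
        int rc = sqlite3_step(*stmt);
        if (rc == SQLITE_ROW)
            *size_out = sqlite3_column_int64(*stmt, 0);
        sqlite3_reset(*stmt);        /* keep the compiled plan for reuse */
        return rc == SQLITE_ROW ? 0 : -1;
    }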
'On the server side' isn't magic. You don't get server side operations for free and these 'server side' operations would be occurring on the same computer.
Personally, I'd like to see version control and file compression integrated in my filesystem. I do not want my operating system to be distributed or rely on network access.
I also think a separation between administrative access to the full system and views of applications may be a good idea. It seems to me that the ideal OS would heavily restrict the file and network access of applications, and there would be a separate administrative domain with full access but limited complexity. The operating system then manages the sharing of needed data between applications and whatever network access the user allows.
I think having the system parse a standard configuration-file format for applications would make sense in this model, but that configuration file can still just be text. Also, a sub-user permissions structure similar to the overall system users seems like a good idea to me. Many applications that users run are hostile, and dealing with that requires major changes vs. current desktop operating systems. Database vs. filesystem is a minor issue in comparison, and personally I don't see the appeal of using a database.
Shame Google didn't use it instead of Android; they had to invent Go to get back some of the AOS advantages.
For such a project you don't really want to just adopt existing ideas and implement them. I don't really understand why Google is doing Fuchsia for this reason: architecturally it's nothing special. Is the GPL'd Linux so bad? It took them this far.
If I were to do a new OS I'd explore ideas like all software running on a single language-level VM with a unified compiler (e.g. a JVM), a new take on software distribution ... maybe fully P2P, rethinking filesystems and the shell, making it radically simpler to administer than Linux, etc. You'd need to pick a whole lot of fights and make a lot of controversial decisions to justify such a lot of work: you'd need to be right about things other people are wrong about, not just once but multiple times. Then maybe if people see it working well they'd join you and port existing software, albeit, the porting process would inevitably involve some rewriting or even deeper changes, as if you can just run existing software and get all the benefits you probably didn't change anything important.
An operating environment means you don't have to worry about stuff like device drivers. It can run inside a Docker container. (At most companies, try suggesting "let's run a brand-new OS that nobody has heard of", and you'll get an emphatic "no"... say "here's this app we want to run, it is packaged as a Docker container", and often nobody will even ask what is inside that Docker container, even if it contains your operating environment with the app running on top.)
You can start out using facilities of the host OS like the filesystem and networking stack. Later, if you want to, you can implement your own filesystem (create a 100GB file on the host filesystem, pretend that is a block device and your custom filesystem exists within it). Or your own networking stack (which can run in user space using facilities like Linux tun/tap drivers.)
An operating environment can always evolve into a standalone operating system at a later date.
I do agree our file-systems need to be more RDBMS-like. I'd like to see "Dynamic Relational" experimented with more. You don't necessarily need to pre-wire columns: add them as you need, unless not permitted for a given table.
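SQLite already gives a small taste of this: a column can be added to a live table without rebuilding it. A tiny sketch of "add them as you need" (table and column names invented):

    #include <sqlite3.h>

    /* "Dynamic Relational" in miniature: grow the schema on demand.
       ALTER TABLE ... ADD COLUMN is cheap in SQLite (no row rewrite);
       the call fails harmlessly if the column already exists. */
    void ensure_priority_column(sqlite3 *db) {
        sqlite3_exec(db, "ALTER TABLE notes ADD COLUMN priority INTEGER",
                     0, 0, 0);
    }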
The layers that sit directly above the hardware need to be simple and efficient (think OSI layers 1 and 2). The actual hardware beneath this needs to be even simpler, so it can focus on what good hardware should be: high performance. For example, having hard drives implement database-like concepts in hardware is bad, because then you have to change hardware when your concepts evolve, which is expensive. So let the hard drive do what it does best, which is get data off of a platter or NAND, and let a layer of the OS abstract that for higher-level layers.
The UNIX API is the best we got so far I think, for OSes that are actually useable on a wide variety of hardware platforms. There's a reason why files are byte streams and "type" information is not part of a file - it's not a storage device's job to do anything but store and retrieve data fast and reliably. And it's not the job of the immediate lower layers of an OS to do anything but facilitate that and interface with a higher layer, like an SQL daemon.
The user facing layers high up, like the shell, are technically not OS facilities, they are "default applications" that, in a perfect world, would work under any OS.
Even directories, file metadata and caching are too high level and not part of an operating system if you want to go that far.
Prediction: the Linux kernel will be replaced in the data center, but its replacement will be developed by the cloud providers. Eventually they'll need something custom-fit to running isolated Node, JVM, and other runtimes as efficiently as possible for serverless.
The important hardware in this age is the 'collection of datacentres'. Things like Kubernetes and whatever Amazon calls their thing these days are the proto 'operating systems' for this hardware. What we currently think of as Operating Systems are more like threads.
Depending on what "thing" you're talking about, it might also be Kubernetes. See: EKS.
I don't care how it's built, as long as we have the above.
Bring back the Windows XP GUI (with GPU accelerated window theming and a few select other things) and I will be a happy bunny.
I like the concept of some Linux OSes, but the ecosystem has gotten so huge it takes an expert to truly understand it. For a server, I really only need network drivers, an SSH client, and the server software. I may be nostalgic, but I miss the good old days.
But if you want something simple, try out the base OpenBSD install. No magic, no complexity, just simplicity and elegance.
conf.d directories to represent a list of entries as files, for instance.
But let’s say we sit down and write a new OS: should the API for sockets be the same as for local files? Should we be able to write and read from processes by catting to and from some synthetic process file on disk? Should I be able to mount the internet on a directory and interact with sites be ls’ing their directory? Maybe I should be able to mount remote cpus and pin tasks to them? We can even take this further: registers as files, memory addresses as files, pixels and windows as files, etc etc.
All of these things sound super nice, and in many ways, they are. The everything is a file concept can be taken further than even Plan9. And fundamentally, this is what the post is arguing for. Except instead of everything is a file on a file system, everything is a table on a database.
The advantages of this approach are pretty obvious, we provide a simple and consistent interface: read/write for files, or select/insert/delete for tables. This allows a development surface that appears very simple and straightforward.
The problem is that this simplicity is an illusion, a black-box abstraction over what's really going on. In many ways it actually makes things more complicated, as read/write become incredibly polymorphic. Maybe the API (the syntax) stays the same for everything, but the actual semantics remain complex, maybe even more complicated than with distinct APIs.
Even when the complexity of the semantics is similar between the two approaches, there are other problems. There are arguments to be made that heterogeneous resources should not all be provided by a homogeneous interface. For example, a call to a data structure that is O(n) probably shouldn't have the same interface as one that is O(n^2); it makes it very easy for a developer to write incredibly inefficient code.
At its core, this dichotomy is best epitomized by Richard Gabriel’s “Worse is Better.” In the essay he talks about the difference between the New Jersey school and the MIT School. One of the differences is how the two schools think about APIs. The MIT approach is to design opaque and complete interfaces that solve the problem correctly at the expense of underlying complexity. The New Jersey or Unix approach is to value simplicity of the system at the expense of a more complicated API.
You can see an example of this in the read() system call. read() in Unix is hard to use and annoying as hell. Many bugs stem from its misuse. The system call can (and does!) return less data than asked for, even if it didn't hit an EOF. Making read() always fill the buffer except at EOF is a very hard problem and would have created a lot of extra complexity in the system. The MIT approach would be to implement this complexity, as a simple interface is more important.
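For the unfamiliar, this is the boilerplate the Unix choice pushes onto every caller: a loop that keeps issuing read() until the buffer is full, EOF, or a real error.

    #include <errno.h>
    #include <unistd.h>

    /* The classic "full read" loop: read() may legitimately return
       fewer bytes than requested, so keep calling until we have them
       all, hit EOF, or see a real error. Returns bytes read, or -1. */
    ssize_t read_fully(int fd, void *buf, size_t count) {
        size_t done = 0;
        while (done < count) {
            ssize_t n = read(fd, (char *)buf + done, count - done);
            if (n == 0)              /* EOF: caller gets a short count */
                break;
            if (n < 0) {
                if (errno == EINTR)  /* interrupted by a signal: retry */
                    continue;
                return -1;
            }
            done += (size_t)n;
        }
        return (ssize_t)done;
    }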
As you can probably imagine, there’s pros and cons to both approaches. Maybe the MIT way is better because many more people are going to be using read() than actually hacking on the OS. Or maybe the Unix way is better because the underlying simplicity of a system allows developers to attain a mental model of what’s going on.
If someone is interested, look at the OpenGenera source code (MIT school) vs., say, Plan9. OpenGenera is undoubtedly a sublime and beautiful system, but the code required to do it is just absurd. Plan9 is maybe less sublime, but the code itself is dead simple. You could also compare the GNU userland vs. the OpenBSD one.
tl;dr: There are costs to homogeneous APIs for heterogeneous things: complexity, difficulty forming a mental model, and ease of unintentional misuse by programmers. There are benefits too: consistency, beauty, easy programming.
Personally I like to take a balanced approach and decide on a case by case basis on how to trade off simplicity and correctness. Being a dogmatic programmer isn’t a good thing and it certainly doesn’t help your employers.
I can see a database interface being useful if you constrained the scope of what an entry is and don't allow those operations to actually modify the objects.
The extent of the scope could start at what are traditionally considered OS resources, not random blocks of texts, newsgroups, or pixels, but things that would generally be more useful to query.
Not being allowed to actually modify an entry through 'delete' presents a problem. The database and the code that is allowed to modify its state would have to be synchronized/notified on an update.
In this way, the database is a front-end to the OS.
not if you look at average consumer patterns (sadly)
If that danger is avoided, the broad concept itself might have merit, most especially in replacing the way we deal with filesystems which should have changed decades ago. I believe that storage should be separated into mutable and immutable categories. Immutable data (software packages and the like) should be managed by a universal networked content-addressable versioned immutable data store. Something like Perkeep mixed with the Internet. If you did something like 'apt-get install firefox', it would lookup the current versions GUID from a DNS-esque service and check the local store, then the local network store, and then hit the Internet store of data if necessary to pull down the content. It could be thought of like extending storage from registers, L1 cache, L2 cache, L3 cache, main memory, disk outward to the network, with data traversing each level only when necessary as a kind of overgrown cache control. Very storage-constrained contexts could purge anything immutable with confidence that it would be able to pull it back in when necessary. Call it 'swap to the cloud' if you're a marketer who doesn't realize how horrific that sounds to technical folks.
Mutable storage would be handled as a local database, it would be all of your personal content and similar, stored and indexed with ways that are efficient for data which can change rapidly and which must be preserved as it may be the only copy. Individual disks should, ideally, be presented to the user as a unified whole with reliability information baked in and managed by the OS in the background. Whether and which data is replicated, striped, etc should just be something people set with their hardware providing maximums (if you have 3 drives, you can choose 300% reliability as its max and based on the drive reliability, age, SMART status, etc, the system will rebalance what is stored where in order to best preserve the users desires, only bothering the user when things cross a certain threshold of uncertainty). Data is important. Not just for corporations, but for everyone.
Software Transactional Memory should be built into the hardware level and supported at the OS level, and can be supported at the OS level before hardware has a chance to catch up and accelerate it.
The OS should present itself sort of like Smalltalk "images" operate to the user, albeit with most of the moving pieces covered by a thin veil unless you launch a 'system edit' sort of mode which would make the running objects editable and present live development tools. Code editors should not simply be text editors, but tools which know they are operating on code and treat it as such, allowing one to operate on an AST or syntax tree view for certain operations.
Vector graphics and the concept of graphics 'surfaces' rather than a simple framebuffer should be the 'step 0' of supporting moving to a more modern system. Take advantage of modern pixel densities on high-resolution screens.
<META NAME="Date-Revision-yyyymmdd" CONTENT="19971128">
<META NAME="Date-Creation-yyyymmdd" CONTENT="19950521">
It uses soup unions to manage removable storage, and worked really well for the 512k memory available.
These days I'd recommend an embedded SQLite database for program configuration and data - somewhat standard, easily recoverable, backups are easy, uniform access from a program's API etc.
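For instance, a minimal sketch of the whole configuration story in one file (the key/value layout is just one reasonable choice):

    #include <stdio.h>
    #include <sqlite3.h>

    /* Program configuration as a single SQLite file: trivially backed
       up, inspectable with the sqlite3 CLI, uniform from any binding. */
    int main(void) {
        sqlite3 *db;
        sqlite3_stmt *stmt;

        if (sqlite3_open("config.db", &db) != SQLITE_OK) return 1;

        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS config (key TEXT PRIMARY KEY, value TEXT);"
            "INSERT OR REPLACE INTO config VALUES ('theme', 'dark');",
            0, 0, 0);

        sqlite3_prepare_v2(db, "SELECT value FROM config WHERE key = 'theme'",
                           -1, &stmt, NULL);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            printf("theme = %s\n", (const char *)sqlite3_column_text(stmt, 0));

        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }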
Lastly, I'd say the ultimate OS would have a bit more than this, surely: a minimal formally verified TCB, Arrow data structures, managed (but not GC'd) memory, a native scripting language that blurs the distinction between users and developers, and a fully scalable, auto-generative but skinnable UI, just to name a few.
I guess the trick with all of this is a suitable language which forms the OS base (think LISP OSes, Micropython, Squeak, AOS, NewtonOS), you get that correct and the rest can be layered on top.
There's no sense storing videos, baby photos and "quarterly report new (2) backup newest 08.2019_john.xlsx" in a database.
I was looking forward to seeing it in action based on reports of what it was.
WinFS imposed all of those things and the associated overheads and much, much more every time you accessed anything.
Ever heard of Oracle Database Filesystem (DBFS)?
With that being said, my ultimate distro is Linux with a Desktop Environment that is less braindead than Gnome.
Nah man, sadly most of the really useful software is quite unstable under wine, if it even runs :(
I'm currently trying to figure out how to properly drive a 4K monitor with my laptop and not have it end up lagging. Supposedly hardware acceleration is enabled in Firefox, but trying to play even a 1080p YouTube video drops frames. On Windows it is buttery smooth, so it's not a hardware issue. Don't even get me started about scaling...
(Currently using GNOME on Wayland)
I want linux with:
— KDE and only KDE
— Development SDK based on Qt (other toolkits strongly restricted; gtk and gnome have gone the wrong way)
— Packaging based on something like APK from Android, or like DMG (no repos by default, except maybe for the base system, but other methods of installing not restricted).
Snaps and Flatpak are trash because their goal is so-called security and isolation, when a normal human being just needs convenience (I hate repos on desktop).
Yes, it's macOS model, but with linux you can have different filesystems.
You could certainly make a distribution in which using anything but Qt would be very very difficult. But who would want to use such a thing? Besides you, of course.
Moreover, your comment isn’t a cogent response to the post. Did you even read it? He’s talking about a new paradigm of OS, something fundamentally different. What you’re talking about is the same old same old, but worse.