Just to clarify a few things.
I just joined Mozilla Devrel. None of this article has anything to do with Mozilla.
I know that none of the ideas in this article are new. I am a UX expert and have 25 years experience writing professional software. I personally used BeOS, Oberon, Plan 9, Amiga, and many others. I read research papers for fun. My whole point is that all of this has been done before, but not integrated into a nice coherent whole.
I know that a modern Linux can do most of these things with Wayland, custom window managers, DBus, search indexes, hard links, etc. My point is that the technology isn't that hard. What we need is to put all of these things into a nice coherent whole.
I know that creating a new mainstream desktop operating system is hopeless. I don't seriously propose doing this. However, I do think creating a working prototype on a single set of hardware (RPi3?) would be very useful. It would give us a fertile playground to experiment with ideas that could be ported to mainstream OSes.
And thank you to the nearly 50 people who have signed up to the discussion list. What I most wanted out of this article was to find like-minded people to discuss ideas with.
History has yet to concede open desktop operating systems in favour of smartphone-era silo platforms.
What I'd really like to see is some data viz and machine learning tools to analyze the dependencies of open source software, and then intelligently cut extra strings. Fewer deps makes more reliable software.
You raise some interesting points in your article. I wonder how to comment or discuss this the most efficient way (here or elsewhere?)
Have you looked at Haiku?
What you describe as modules is, I think, what Alan Kay calls "objects" in the Smalltalk/Xerox tradition.
Have you looked at his research project: STEPS reinventing programming?
Some bits and parts of this are open source.
Imagine a full system running on 10,000 LoC; I think this could be a step forward.
Also, this blurs, if not throws away, the distinction between "desktop" and "web (remote)" applications: if integration of remote objects is sandboxed but still transparent, you get improved usability.
Also, I think you don't go far enough. Databases for the file system are fine, but I think the idea of widgets or UI libraries altogether is no longer feasible.
The system has to adapt to the individual level; people have different needs and workflows.
Highly adaptable and conversational interfaces are needed.
There's no silver bullets here, but we might be able to silver plate a few.
I entirely agree.
> Permissions at install time are a good idea.
I'm actually not so sure about this. I think the "iOS model" of asking for permissions when necessary is much better than the "Android model"* of a big list of permissions on install, preventing use of the app if any of them aren't granted (leading to users giving apps less scrutiny over their permissions than they would under the iOS model).
* I believe some recent versions of Android (M?) may support the "iOS model" in some form.
Now, the real problem is that permissions are too general. The "access/read/write files" permission is all grouped up in one place, so you end up with tons of directories for every app in your root directory (which don't get deleted when uninstalling the apps that generated them), and you allow unnecessary access to other files as well. Or the network permission, which could lead to all sorts of traffic, while many developers just need it for ads.
Maybe what's needed is more of a trust model. Users could ask "what would Bruce Schneier do", for example. If Bruce [substitute trusted person of your choice] would install this app, then I'm happy to do it as well.
Of course, you have a firewall if you're rooted but I'm not rooted when on a Nexus device.
Actually it is the Symbian model.
E.g. OSes have an `open()` system call, which can potentially access any part of the filesystem, and then they layer an ever growing permission system to restrict this.
Can we design a system where there is no `open()` call at all? Instead, the program declares "I accept an image and output an image", and when it is invoked, the OS asks the user for an image, making it clear which program is asking for it. Then the OS calls the program, passing in the selected file.
This model has other advantages such as introspection, composability, validation and isolation (e.g. a program that declares it takes an image and outputs an image has no access to the network anyway and cannot send your image over the network).
Alternatively, there could be a standard way to pass an image (or any other input) to a program - similar to a command line arg in current systems, for instance.
Yes you cannot store filenames, but you could store some other serialized token generated from a file and the token could be used to recreate the file object. Alternatively, if you have an image based system, you don't have to convert the file object to a token explicitly - you just hold the references to the file objects and they're automatically persisted and loaded.
The point of such a system would be that programs cannot explore or read the filesystem as there is no filesystem API. But programs can operate on files explicitly given to them. So exploring the filesystem is restricted to some primitives that have to be used explicitly. The guarantee then is if I invoke a program without giving it a file or folder, I know it absolutely cannot access any file.
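A tiny sketch of what such an open()-less contract could look like, in Python. `Program` and `invoke` are invented names, and the byte reversal stands in for real image processing; this is an illustration of the idea, not a real OS API.

```python
# Hypothetical open()-less program: it declares an I/O contract and
# receives data only through it. No filesystem API exists in its world.

class Program:
    input_type = "image"
    output_type = "image"

    def run(self, data: bytes) -> bytes:
        # Pure transformation: no open(), no sockets in scope here.
        return data[::-1]

def invoke(program: Program, granted_bytes: bytes) -> bytes:
    # The OS (not the program) resolves the user's file choice and
    # hands over only the bytes the user explicitly granted.
    return program.run(granted_bytes)

result = invoke(Program(), b"pixels")
```

If `invoke` is never given a file, the program provably cannot touch one, which is the whole guarantee described above.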
But if I can do those things (especially the second), then that seems to open at least some attack vectors (that would obviously depend on the actual rules).
- The programs behave somewhat like classes - they define input and output 'slots' (akin to instance attributes). But they don't have access to a filesystem API (or potentially even other services, such as network). Programs can have multiple input and output slots.
- You can instantiate multiple instances of the program (just like multiple running processes for the same executable). Unlike running unix processes, instantiated programs can be persisted (and are by default) - it basically persists a reference to the program and references to the values for the input slots.
- When data is provided to the input slot of an instantiated program (let's call this data binding), the program produces output data in the output slot.
- You can build pipelines of programs by connecting the output slot of one program to the input slot of another. This is how you compose larger programs from smaller programs. This could even contain control and routing nodes so you can construct data flow graphs.
- Separately, there are some data stores; these could be filesystem-style, relational, or key/value.
The shell isn't a typical shell - it has the capability to compose programs and bind data. It also doesn't execute scripts at all - it can only be used interactively to compose and invoke the program graphs. A shell is bound to a data store - so it has access to the entire data store, but is only used interactively by an authenticated user.
So interactive invocation of a program may look something like this:
> /path/to/file1 | some_program | /path/to/file2
# this invokes some_program, attaches file1 to the input slot, saves the output slot contents to file2.
> some_program_for_file1 = [/path/to/file1 | some_program]
> some_program_for_file1 | /path/to/file3 # runs some_program on existing contents
(update file1 here...)
> some_program_for_file1 | /path/to/file4 # runs some_program on new contents
> /path/to/folder | filter_program(age>10d, size<1M) | some_program | /path/to/output_folder
> interesting_files = [/path/to/folder | filter_program(age<1d)]
> interesting_files | program_one
> interesting_files | program_two
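The slot-and-binding semantics of the session above can be roughly modelled in Python; `ProgramInstance`, `bind` and the slot names are all invented for illustration.

```python
# Rough model of instantiated programs with persisted input references.

class ProgramInstance:
    def __init__(self, fn):
        self.fn = fn
        self.input_slot = None      # a persisted *reference*, not a copy

    def bind(self, source):
        self.input_slot = source
        return self

    @property
    def output_slot(self):
        # Re-evaluates against the current contents of the bound
        # source, like some_program_for_file1 after file1 changes.
        return self.fn(self.input_slot())

store = {"file1": "hello"}          # stand-in for a data store

prog = ProgramInstance(str.upper).bind(lambda: store["file1"])
first = prog.output_slot            # runs on existing contents

store["file1"] = "changed"          # (update file1 here...)
second = prog.output_slot           # runs on new contents
```

Because the instance holds a reference rather than a snapshot, re-invoking it after the store changes yields new output, matching the file3/file4 example above.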
A lot of your requirements for a "modern" OS are pie-in-the-sky or just seem very particular to your taste. I didn't see much here that you want that I'd prefer, so outside of the bloat (especially Windows and Ubuntu requiring GPUs to process 3-D effects), I see more disadvantages with your changes than otherwise.
Abstractions are a correct way to do things when we don't know what "correct" is or need to deal with lots of unknowns in a general way. And then when you need to aggregate lots of different abstractions together, it's often easier to sit yet another abstraction on top of that.
However, in many cases we have enough experience to know what we really need. There's no shame in capturing this knowledge and then "specializing" again to optimize things.
In the grand ol' days, this also meant that the hardware was a true partner of the software rather than an IP-restricted piece of black magic sealed behind 20 layers of firewalled software. (At first this wasn't entirely true; some vendors like Atari were almost allergic to letting developers know what was going on, but the trend reversed for a while.) Did you want to write to a pixel on the screen? Just copy a few bytes to the RAM addresses that contained the screen data, and the next go-around on drawing the screen it would show up.
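As a toy model of that directness: a linear framebuffer is just a flat run of bytes, with pixel (x, y) at offset y * pitch + x. The dimensions assume an 8-bit, one-byte-per-pixel mode; nothing here is a real video driver.

```python
# "Writing to a pixel" as a single byte store into screen memory.
WIDTH, HEIGHT = 320, 200
PITCH = WIDTH                      # bytes per scanline in this mode
framebuffer = bytearray(WIDTH * HEIGHT)

def put_pixel(x, y, color):
    """Plot one pixel: literally one byte copied to one address."""
    framebuffer[y * PITCH + x] = color

put_pixel(10, 5, 255)
```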
Sometime in the late '90s the pendulum started to swing back, and now it feels like we're almost at full tilt the wrong way again, despite all the demonstrations to the contrary that it was the wrong way to do things. Paradoxically, this seemed to happen after the open source revolution transformed software.
In the meanwhile, layers and layers and layers of stuff ended up being built, and now the bulk of software that runs is some kind of weird middleware that nobody even remotely understands. We're sitting on towers and towers of this stuff.
Here's a demo of an entire GUI OS with web browser that could fit in and boot off of a 1.4MB floppy disk and run on a 386 with 8MB of RAM. https://www.youtube.com/watch?v=K_VlI6IBEJ0
I would bet that most people using this site would be pretty happy today if something not much better than this was their work environment.
People are surprised when somebody demonstrates that halfway decent consumer hardware can outperform multi-node distributed compute clusters on many tasks and all it took was somebody bothering to write decent code for it. Hell, we even already have the tools to do this well today:
There's an argument that developer time is worth more than machine time, but what about user time? If I write something that's used or impacts a million people, maybe spending an extra month writing some good low-level code is worth it.
Thankfully, and for whatever reasons, we're starting to hear some lone voices of sanity. We've largely stopped jamming up network pipes with overly verbose data interchange languages; the absurdity of text editors consuming multi-core, multi-GB system resources is being noticed; machines capable of trillions of operations per second taking seconds to do simple tasks, and so on... it's being noticed.
Here's an old post where I wrote on this some more; keep in mind I'm a lousy programmer with limited skills at code optimization, and the list of anecdotes at the end of that post has grown a bit since then.
and another discussion
I disagree - the technology is extremely hard. You're talking centuries of staff-hours to make your OS, if you want it to be a robust general-purpose OS and not a toy. Just the bit where you say you want the computer to scan what's in your hands and identify it? That in itself is extraordinarily difficult. You mischaracterise the task at hand by pretending it's simple.
For example, a Kinect would be a lot more useful in an ideal OS: you could bind gestures to window manager commands.
See: every high-performance inter-process system ever...
Could we cover a number of cases with copy-on-write semantics and system transactional memory? Sure, but the tech isn't broadly available yet, and it wouldn't cover everything.
Sometimes you just need to share a memory mapping and atomically flip a pointer...
I've created and lived with multiple inter-process high-performance multimedia/data systems (e.g. video special effects and editing, real-time audio processing, bioinformatics), and I've yet to encounter a message passing semantic that could match the performance of manually managed systems for the broad range of non-pathological use-cases, not to speak of the broader range of theoretically possible use-cases.
If something's out there, I'd love to see it. So far as I know, nobody has cracked that nut yet.
That's how you make another AmigaOS, or Be. I'm sure Atari still has a group of a dozen folks playing with it, too.
The OSes of the past 20 years haven't shown much advancement because the advancement is happening higher up the stack. You CAN'T throw out the OS and still have ARKit. A big, bloated, mature, Moore's-Law-needing OS is also stable, has hooks out the wazoo, AND A POPULATION USING IT.
4 guys coding in the dark on the bare metal just can't build an OS anymore, it won't have GPU access, it won't have a solid TCP/IP stack, it won't have good USB support, or caching, or a dependable file system.
All of these things take a ton of time, and people, and money, and support (if you don't have money, you need the volunteers).
Go build the next modern OS, I'll see you in a couple of years.
I don't WANT this to sound harsh, I'm just bitter that I saw a TON of awesome, fledgling, fresh Operating systems fall by the wayside...I used BeOS, I WANTED to use BeOS, I'da LOVED it if they'd won out over NeXT (another awesome operating system...at least that survived.)
At a certain level, perhaps what he wants is to leverage ChromeOS...it's 'lightweight'...but by the time it has all the tchotchkes, it'll be fat and bloated, too.
The post contains many idealistic proposals, but most of them boil down to lawyer stuff and money, not technical problems. You can't have nice GPU access because GPU's are secret. You can't have things work together because nobody wants to share their secret sauce. Everyone is trying to 'be the best' and get an edge on the rest, but in a way that nobody really profits from it from a technical standpoint.
Aside from the shit-ton of reverse-engineering and some cleanroom design, there is very little that can be done to improve this, and no company is going to help, and thus no big pile of resources is coming to save the day.
This of course goes not only for GPUs, but for CPUs and their BSPs and secret management controllers as well, and the dozen or so secret binary blobs you need to get all the hardware to work at all.
Fixing this from the ground up, i.e. for x86, would mean something like getting coreboot working on the recent generations of CPU's, and that's not happening at the moment due to lack of information and secret code signing keys needed to actually get a system to work.
My first thought was actually "what about data formats?" These days, most data formats are at least nominally open, but you still need to write code to work with those formats, and most of the existing code is still in C or C++ libraries. The IdealOS will fail instantly as soon as a user receives a DOCX or XLSX file and it displays the document wrong. It can't just launch LibreOffice and use that. Even LibreOffice can't always parse random MS Office documents correctly, and LO represents decades of coder-years.
I mean, you could also probably come up with some solution to have a guest over who brings their pet cow without doing too much damage to your nicely decorated apartment. Doesn't mean that's an ideal situation, and it most definitely is no reason to not dream about living somewhere nicer than a stable, and what that would look like to you.
The latter part, about dreaming how NICE your house could look if you did not have to accommodate guests with cows barging into your living room all the time: that is what the article is actually about. He's pretty clear about his awareness that technical possibility is very different from the availability of a realistic road to transition from where we are now to the possibilities he sketches.
It's also a very important matter of combating learned helplessness. If you dismiss dreaming about an IdealOS beforehand because there's no way (that you can see now) to get there from where desktop OS's are today, then you most assuredly will miss the opportunity to attain even some of these improvements, were they to come within reach through some circumstance in the future.
Also, I remember programming on a 386. And on the one hand it amazes me that the thing in my pocket today is so much more powerful than that old machine, let alone my current desktop. And on the other hand, it infuriates me that some tasks on my desktop today are quite slow when really they have no right to be, and some of these tasks are even things that my 386 used to have no problems with whatsoever (but then, TPX was a ridiculously fast compiler, a true gem).
We should not let that slip out of sight, demand better and keep dreaming.
I feel like enterprise customers could provide some demand for IdealOS for this reason. BigCorps have lots of data and application silos, as well as lots of knowledge workers who are expected to synthesize all that data. There are a lot of smart people who are power users but not devs (i.e., macro jockeys). Something like IdealOS could really increase productivity in these places.
Of course you have to deal with all the usual enterprise headaches, mostly security and backwards compatibility. But then they'd pay a premium.
Make a device with enough RAM, Bluetooth for a mouse, USB ports, and one or two HDMI ports. A stick computer might be a good starting point.
Then build your OS for that device. Enable cloud management, integrate with Active Directory, focus on an amazing out of the box web browser experience and expand with an app store for well-thought-out, well designed open and commercial apps.
Now give ten to every company with a DUNS number.
Sell more with a subscription including more advanced management and enable pushing modified Windows group policies to them.
Make it good enough for a casual knowledge worker to use.
But exhibit A: SAP.
Maybe. The fact that Moore's Law finally broke may paradoxically help that.
When you can get 2x the (cost, performance, features) simply by doing nothing, there is no incentive to optimize anything.
Now that you can't simply "do nothing", people will start looking at alternatives.
And it's definitely possible, just look at the C64 and Amiga demoscene. Those machines haven't evolved for ages, but they've been making them do one (thought to be) "impossible" thing after another for a very long time after the platforms were essentially considered dead. I've seen things at demoparties around 2000 where C64 demos showed stuff that was thought to be impossible to do on these machines (or so I was told, I'm not an expert on the C64's capabilities, but the thing runs in the single megahertzes and doesn't have a divide instruction, so yeah). One I remember had a part with a veeery low resolution realtime raytracer, about 10fps I think, the scene consisting of just a plane and a sphere (IIRC) ... but it was done on a C64.
I wonder how long it will take for PCs though. Moore's Law broke already a few years ago didn't it? But it's not really happening, so far. Or maybe it is. I haven't been keeping up with what's happening in the PC demoscene lately. They used to be way ahead of the curve compared to PC videogames, this changed somewhere in the 200Xs, probably because around that time videogames started getting Hollywood-size budgets.
Their complaint on the filesystem, for example, falls flat for me, but partially because I think I don't understand what they want or how BeOS did it. Maybe the author has a special meaning for "...sort by tags and metadata", but this looks to be baked right into Finder at the moment; I can add in a bunch of columns to sort by, tag my items with custom tags (and search said tags), add comments, and so on. Spotlight also has rendered a lot of organization moot as you just punch in a few bits of something about what you're looking for (tags, text within the document, document name, file type, date modified by, etc.) and you'll find it. I don't know exactly what is missing from modern OSes (Windows search isn't too bad either) that the author isn't contented with.
The idea of multiple forms of interaction with the computer is okay, but quite frankly it starts to get into eerie territory for me, where I'd rather have to take a lot of steps to set up such monitoring as opposed to it being baked into the OS. I realize that I'm squarely in luddite territory given the popularity of home assistants (Echo, Apple HomeKit, Google Home), but to me these seem like very intentional choices on the part of a customer; you have to go out of your way to get the hardware for it, and disabling it is as simple as pulling the plug. Look at the nonsense we're having to deal with in regards to Windows Telemetry: to me, this is what happens when such items get baked into the OS instead of being an additional application; you end up with a function you can no longer control, and for no other reason than to satisfy the complaint of "I have to download something? No thank you!"
I could go on, but the author's rant just doesn't seem consistent and instead seems to just want some small features that they liked from older OSes to be returned to modern OSes. There is a huge issue with bloat and cruft and some real legacy stuff in Windows and macOS, and desktop OSes aren't getting the attention they should be, but these suggestions aren't what desktop OSes are missing or what they need.
AFAIK, this approach is contrary to the BeOS approach, where applications write the metadata directly. Spotlight's approach does have a few benefits, though, such as being able to provide metadata for files on network drives, or on removable disks that might not use a filesystem that supports metadata.
OS X took a lot of getting used to for me as a kid, as I had an old Mac clone and an iMac with 10.1 side by side in my living room, and I loved my little Mac clone. OS X didn't immediately win me over because I was just too used to OS 9 and had everything I needed on my offline Mac clone. But I distinctly remember Spotlight being what really sold me on OS X, because from the get-go it worked basically as intended, and man, was it magnificent. If the author of Spotlight is on APFS, I have a lot of faith in it then.
This one in particular:
>Why can I dock and undock tabs in my web browser or in my file manager, but I can't dock a tab between the two apps?
I mean, you can. It's called the taskbar.
I don't think we'll see a change from Linux and Windows (edit: and IOS) until there's another compelling reason to switch; some feature that can't or won't be available in the other two operating systems and their surrounding ecosystems of software.
When was the last time we really had a "VisiCalc sold more Apples than Apple sold VisiCalc" moment? I can't think of one after Linux wafflestomped all the proprietary hardware-and-OS Unix vendors, or, to give Apple their due, when they released the iPhone.
Edit: duh, of course cloud taking over for bespoke hardware and software defined storage pushing out EMC and the like are two recent examples of industry game changers, but on the other hand both still rely primarily on Linux so my assertion about operating systems still stands.
Well, they wouldn't need to any more. They can adopt drivers from Linux or any other free operating system. The inner workings of a driver might be arcane, but the interface to an operating system is generally well defined. Adapting an existing driver is definitely doable.
Yes, it's complexity atop complexity atop complexity all the way down.
But the solution is NOT to throw out a bunch of those old layers and replace them with new layers!!!
Quoting Joel Spolsky:
"There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming: It’s harder to read code than to write it. ... The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. ... When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work."
The author of the article recognizes there's a problem, but is less clear on how to go about solving it. A clue is in this article by Erik Naggum: http://naggum.no/erik/complexity.html
Dan Ingalls once wrote: "Operating System: An operating system is a collection of things that don't fit into a language. There shouldn't be one." What he meant is we should migrate the functionality of the operating system into the programming language. This is possible if there's a REPL or something similar, so no need for the shell or command line. The language should be image-based, so no need for a file system. So, a bit like Squeak, or a Lisp with a structure editor.
There's still a gap between the processor and the language, which should be eliminated by making the processor run the language directly. This was done in Burroughs mainframes and Lisp machines.
Further up the "stack", software such as word processors and web browsers are at present written entirely separately but have much in common and could share much of their code.
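The image-based idea above can be crudely approximated in Python by snapshotting the live object graph instead of writing files; `pickle` here is only a stand-in for a real image format like Squeak's, and `Workspace` is an invented name.

```python
# Crude approximation of image-based persistence: no files, just a
# snapshot of live objects that can be resumed later.
import pickle

class Workspace:
    """Everything the user is working on, held as live objects."""
    def __init__(self):
        self.documents = []

ws = Workspace()
ws.documents.append({"title": "notes", "body": "hello"})

snapshot = pickle.dumps(ws)        # "save the image"
resumed = pickle.loads(snapshot)   # "boot the image" later
```

In a real image-based system the snapshot would include code and execution state as well as data, which is exactly what makes a separate filesystem unnecessary.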
I like the idea of an image based system, eliminating the need for the filesystem itself. I think the 'filesystem' and 'executable-process' ideas are so prevalent that they frame our thinking, and any new OSes tend to adopt these right away. But more interesting and powerful systems might emerge if we find a new pattern of operation and composition. Are you aware of any image based full stack systems that are in active development?
But, sometimes, that's exactly what you should do. It brings to mind OpenSSL after Heartbleed. I remember reading that the LibreSSL people were ripping out all kinds of stuff (like kludges and workarounds to support OpenVMS), and rightly so. You might call it "knowledge [and] collected bug fixes," but sometimes the crap is just crap.
Key word here is "Sometimes".
Reviewing code means you get to find out the reason these hacks were written in the first place, and then decide whether to keep, rework or delete them.
Starting from scratch means you get rid of the worthless crap, yes, but you also lose all the valuable crap.
So, they weren't starting over. They forked, refactored and removed things no longer needed. Completely different thing.
It's three years later. What general-purpose OS other than OpenBSD is using LibreSSL?
Neither is keeping everything as it is and pretending it's fine!
> When you throw away code and start from scratch, you are throwing away all that knowledge.
I disagree with Joel here. There's lots to be learned from throwing everything away and starting from scratch, and if anything those innovations could make their way into the current infrastructure, as happened with Midori and Windows.
Related: "legacy code is code that doesn't have tests". Not sure who said this, but it's also very true IMO.
Michael Feathers, in "Working Effectively with Legacy Code" (a book I can highly recommend).
In other words, the point is that you always have to do the cost-benefit analysis for any such endeavor, and history tells us that rewriting is intrinsically very expensive.
that's absolutely possible on linux with i3wm for instance
> I'd like to pipe my Skype call to a video analysis service while I'm chatting, but I can't really run a video stream through awk or sed.
awk and sed, no, but there are many CLI tools that accept video streams through a pipe, e.g. ffmpeg. You wouldn't open your video with a GUI text editor, so why would you with CLI text editors?
> Window Managers on traditional desktops are not context or content aware, and they are not controllable by other programs.
Sure they are, on linux: https://linux.die.net/man/1/wmctrl
Fifteen years ago people were already controlling their WM through dbus: http://wiki.compiz.org/Plugins/Dbus#Combined_with_xdotool
The thing is, no one really cares about this in practice.
There's no reason why a desktop window application could not supply audio and video to, or receive audio and video from, ffmpeg or even a chained command that might just include ffmpeg at some step.
And there's already the Xembed protocol for embedding windows in other windows, so it's technically possible to even move tabs from one application into another, with a coordinating dance. None of the changes that he wants really needs changes to X11 (although, as far as I know, it would be totally impossible under Wayland.)
It just needs someone to change applications to support it. I'd be interested to see an attempt.
I'm not sure we'll see it, though, as for the most part the applications are developed fairly separately, and we almost certainly won't see it working well for even most apps people use on an open operating system. Short of maybe Google releasing a filesystem browser extension and terminal extension, which may be entirely possible.
Sure, many file browsers (Thunar, PCManFM, ...) can tab file-browser views, but I don't see the need for web tabs in a non-web-browser. Firefox, likewise, can show folder contents for file:///, but it isn't feature-complete compared to a file browser.
Having a database with standardized interfaces for documents replace a filesystem is a really important feature mentioned in the article. It would allow the development of many useful apps, like the iTunes or email examples. Also, this is not specific to any OS: it can be standardized independently and implemented on the OSes we use today, via an extension that stores metadata along with a file.
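One way such an extension might be prototyped on a current OS is a metadata database keyed by file path; the SQLite schema and the `tag` helper below are purely illustrative, not an existing API.

```python
# Sketch: a shared metadata store layered over today's filesystems.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE meta (path TEXT, key TEXT, value TEXT)")

def tag(path, key, value):
    """Attach a key/value metadata pair to a file path."""
    db.execute("INSERT INTO meta VALUES (?, ?, ?)", (path, key, value))

tag("/home/me/trip.jpg", "album", "vacation")
tag("/home/me/beach.jpg", "album", "vacation")
tag("/home/me/invoice.pdf", "album", "work")

# Any app -- photo viewer, mail client, backup tool -- could run the
# same query instead of re-implementing its own library format.
vacation = db.execute(
    "SELECT path FROM meta WHERE key = ? AND value = ?",
    ("album", "vacation"),
).fetchall()
```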
Ignoring the reasons why I don't trust Google, being able to trust your tools, especially your desktop, is the most important thing for me. I would love to have my emails delivered to a global document store so many smaller apps could take advantage of them, but only so long as I could guarantee there is no special 'google play services' app needing to run in the background doing who-knows-what with root.
I do think it’s important that we create open hardware and open software. I’m realizing Richard Stallman was right all along.
I took my hat out of the desktop race a long, long time ago. The only thing that has really affected me recently was MATE switching entirely to GTK3; my text editor now does all sorts of things the maintainer of the editor can't change, like smooth-scrolling when using the find dialog.
I really want to get behind this effort for an improved desktop, even if it means breaking everything. But I have to be able to trust each of the components.
I think anybody who really thinks about it would have to agree modern OSes are a disgusting mess.
-- Why does an 8 core mac have moments that it is so busy I can't even click anything but only see a pinwheel? It's not the hardware. No app should have the capability, even if it tried, to slow down the OS/UI (without root access).
-- Yes, it should be a database design, with permissions.
-- Yes, by making it a database design, all applications get the ability to share their content (i.e. make files) in a performant searchable way.
-- Yes, permissions are a huge issue. If every app were confined to a single directory (Docker-like) then backing up an app, deleting an app, or terminating an app would be a million times easier. Our OSes will never be secure until they're rebuilt from the ground up.
[Right now Windows lets apps store garbage in the registry, and Linux stores your apps' data strewn throughout /var/etc, /var/log, /app/init, .... These should all be materialized views (i.e. symlinks).]
-- Mac Finder is cancer. If the OS were modularizable it'd be trivial for me, a software engineer, to drop-in a replacement (like you can with car parts).
-- An event-driven architecture gives me exact tracking of when events happened. I'd like a full record of every time a certain file changes; if file changes can't happen without an event, and all events are indexed in the DB, then I have perfect auditability.
-- I could also assign permission events (throttle browser CPU to 20% max, pipe all audio from spotify to removeAds.exe, pipe all UI notifications from javaUpdater to /dev/null)
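That auditability claim is easy to prototype: if every mutation has to pass through one recording hook backed by an indexed store, history is just a query. A minimal sketch with invented names (`file_events`, `record`, `history`); nothing here is a real OS API:

```python
import sqlite3
import time

# Toy model of a bus-mediated filesystem: every mutation goes through record(),
# so the event table is a complete, queryable audit trail.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE file_events (ts REAL, path TEXT, action TEXT)")

def record(path, action):
    """Called by the bus for every mutation; nothing bypasses it."""
    db.execute("INSERT INTO file_events VALUES (?, ?, ?)", (time.time(), path, action))

def history(path):
    """Full, ordered audit trail for one file."""
    return [row[0] for row in db.execute(
        "SELECT action FROM file_events WHERE path = ? ORDER BY rowid", (path,))]

record("/docs/report.txt", "create")
record("/docs/report.txt", "write")
record("/other.txt", "create")
record("/docs/report.txt", "rename")

print(history("/docs/report.txt"))  # ['create', 'write', 'rename']
```

The hard part in a real OS isn't this table; it's guaranteeing that no code path can touch a file without emitting the event.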
I understand the "Well, who's gonna use it?" question, but it's circular reasoning. "Let's not get excited about this, because nobody will use it, because it won't catch on, because nobody got excited about it." If you get an industry giant behind it (Linus, Google, Carmack) you can absolutely reinvent a better wheel (e.g. Git, Chrome) and displace a huge market share in months.
Back in 1999 I saw a demo of Nemesis at the Cambridge Computer Lab: a multithreaded OS that was designed to resist this kind of thing. Their demo was opening up various applications with a video playing in the corner and pointing out that it never missed a frame.
Even back then I understood that this was never going to make it to the mainstream.
> If the OS were modularizable it'd be trivial for me, a software engineer, to drop-in a replacement
You can do shell replacements and shell extensions on Windows. You can replace whatever you want on Linux. Non-customisability of MacOS is a Jobsian deliberate choice.
> event-driven architecture
Windows is actually rather good at this.
> all applications get the ability to share their content
> every app were confined to a single directory
Solving this conflict is extremely hard.
Yes, but Nemesis was a proper real time OS. The video-playing application had asked the OS for a guarantee that it would get X MB/s of disc bandwidth, and that it would have Y ms of CPU time every Z ms. The scheduler then gave that application absolute priority over everything else running while inside those limits, in order to make that happen.
This isn't hard. However, it conflicts with the notion of fair access to resources for all. The OS can only give a real-time guarantee to a limited number of processes, and it cannot rescind that guarantee. Why should one application get favourable access to resources just because it was the first one to reserve them all? How does the OS tell a genuine video-playing application from an application that wants to slow down the OS/UI?
This is why applications need special privileges (i.e. usually root) in order to request real-time scheduling on Linux. It's complicated.
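The reservation-and-admission idea above can be sketched in a few lines. This is an invented toy model (class and method names are made up), not how Nemesis or Linux actually implements it:

```python
# Toy admission control: grant "slice_ms of CPU every period_ms" only while
# the total reserved utilization stays under a cap, keeping the rest for
# best-effort processes. Once admitted, a reservation is never rescinded.
class Scheduler:
    def __init__(self, cap=0.8):   # reserve at most 80% of the CPU
        self.cap = cap
        self.reservations = {}     # app -> fraction of CPU reserved

    def request(self, app, slice_ms, period_ms):
        frac = slice_ms / period_ms
        if sum(self.reservations.values()) + frac > self.cap:
            return False           # admission denied: guarantee can't be kept
        self.reservations[app] = frac
        return True

s = Scheduler()
print(s.request("video", 10, 40))    # True  (25% reserved)
print(s.request("audio", 5, 50))     # True  (35% total)
print(s.request("greedy", 90, 100))  # False (would blow past the 80% cap)
```

Note how this makes the fairness conflict concrete: the "greedy" app is rejected not because it's malicious but because it asked after the others, which is exactly the first-come-first-served problem the comment raises.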
Nemesis also did some nifty stuff with the GUI - individual applications were given responsibility to manage a set of pixels on the screen, and would update those pixels directly. This was specifically to avoid the problems inherent in the X-windows approach of high-priority tasks being funnelled through a lower-priority X server.
Back in 1998 Apple demoed the then-beta versions of OS X doing exactly that: multiple video streams playing without missing frames, being resized into the dock while still playing (a feature that is not present any more), and even resisting hijacking by a bomb app. It all worked back then and it still works today.
> Non-customisability of MacOS is a Jobsian deliberate choice.
Also, there are multiple Finder replacements apps for the Mac, the thing is, nobody cares because the Finder is good enough for most people.
I can't find the link, but it's in one of Jobs's keynotes from the 2000s.
Take a moment to consider your expectations of that operating system, and the expectations upon software used thirty years ago.
Thirty years ago it was uncommon for users to expect more than one application to be running at once, so scheduling resource use wasn't an issue. Now, your PC has all manner of processes doing work while you go about your business: applications are polling for updates from remote servers, media players are piping streams to a central mixer and attempting to play them simultaneously and seamlessly, your display is drawn with all manner of tricks to improve visual appeal and the _feeling_ of responsiveness, and your browser is doing all this over again in the dozens of tabs you have open simultaneously.
So once in a while a resource lock is held for a little too long, or an interrupt just happens to cause a cascade of locks that block your system for a period, or you stumble across a corner-case that wasn't accounted for by the developers.
Frankly, it's nothing short of a miracle that PCs are able to operate as well as they do, despite our best efforts to overload them with unnecessary work.
And yes, I too hate Electron, but in all my decades of working on PCs I can't really recall a time that was as... Actually, BeOS was pretty f'n great.
How about 15 years ago? I was doing everything you describe as part of my daily computer use (a few browser windows open, a text editor running, a multimedia player, an IM and mail client running, etc) and had the same performance and usability frustrations.
The main difference is that if I render a video now, I'll do it in 4K instead of 640x480, and that if I download a game, it's 50GB instead of 500MB. But scaling in that direction is expected; my machine isn't any more stable, nor is anything better along the lines of the examples described in the article.
If I showed my mobile phone to 2002 me, they'd be extremely impressed. If I showed the form factor and specs of my laptop, they'd be extremely impressed. But if I showed them how I use my desktop OS? The only cool thing would be Dropbox, I think.
This is happening in the mobile space, as you noted; it will happen in AR next.
Deep down the computer is still linear.
Yes we do all kinds of tricks with context switching to make it seem like it is doing a whole bunch of things at the same time.
But if we visualized the activity in human terms, it would be an assembly line that is constantly switching between cars, trucks, scooters and whatnot at a simply astonishing rate.
Multicore processors literally run several things at the same time. Even a single core can literally run several instructions at the same time thanks to instruction level parallelism, in addition to reordering instructions, predicting and speculatively executing branches, etc. The processor also has a cache subsystem which is interacting with the memory subsystem on behalf of the code -- but this all works in parallel with the code. Memory operations are executed as asynchronously as possible in order to maximize performance.
What's more, outside a processor, what we call "a computer" is actually a collection of many interconnected systems all working in parallel. The northbridge and southbridge chips coordinate with the instructions running on the CPU, but they're not synchronously controlled by the CPU, which means they are legitimately doing other things at the same time as the CPU.
When you read something off disk, your CPU sends a command to an IO controller, which sends a command to a controller in the disk, which sends a command to a motor or to some flash chips. Eventually the disk controller gets the data you requested and the process goes back the other way. Disks have contained independent coprocessors for ages; "IDE" stands for "Integrated Drive Electronics", and logical block addressing (which requires on-disk smarts) has been standard since about 1996.
Some part of your graphics card is always busy outputting a video signal at (say) 60 FPS, even while some other part of your graphics card is working through a command queue to draw the next frame. Audio, Ethernet, wifi, Bluetooth, all likewise happen simultaneously, with their own specialized processors, configuration registers, and signaling mechanisms.
Computers do lots of things simultaneously. It's not an illusion caused by rapid context switching. Frankly, the illusion is that anything in the computer is linear :-)
And if you think about it just a bit longer, you conclude that if all modern operating systems are a disgusting mess, then being a disgusting mess optimizes for survival somehow. And until you figure that piece out, you're never going to design something that's viable.
Windows is frankly terrible. I've also tried Windows 8, and I have Windows 10 in a virtual machine (with plenty of resources). It's true that it has too many layers, too many services, too much in the way of doing everything. It cannot run for 15 minutes without some service crashing or some window becoming unresponsive.
Truth be told, the same thing happened, to a lesser degree, when I used Ubuntu (the first distribution I tried). The experience was more pleasant overall, but the OS still felt too bloated.
My journey among Linux distributions led me to Arch Linux. I've been using it for a few years now, and all I can say is that it's been exceptional. 99% of the time the package upgrades just work (and don't take 2 hours like on Ubuntu), I've yet to experience an interface freeze, and I'm extremely productive with the workflow I came up with. My environment is extremely lean: the first thing I did was to replace desktop environments, which just slow you down, with window managers (at the moment I'm using bspwm and it's the best I've tried so far, even better than i3wm). Granted, the downside of this is that you have to be somewhat well-versed in the art of Unix and Linux, but I would say that in most cases it's just one more skill added to the skill set.
All of this to say that, in my opinion, un-bloated OSes are already here. The messiest component on my system is undoubtedly the kernel, but what can you do? Surely you cannot expect to have a kernel tailored to the computers released in the last year.
Not that my system is perfect, obviously, but bloat on your system that's not due to something you're explicitly running is a concern I just can't relate to at all, and frankly, I don't know why people put up with it.
... because the answer to a lot of the posed questions are those implementation details.
It's a bit like saying "There should be peace in the middle east. The details of the politics there are largely irrelevant. They should just make peace there, then it would be better".
He contradicts his core assertion (OS models are too complex and layered) with his first "new" feature.
Nearly everything on this manifesto has been done before, done well, and many of his gripes are completely possible in most modern OS's. The article just ignores all of the corner cases and conflicts and trade-offs.
Truly understanding the technology is required to develop useful and usable interfaces.
I've witnessed hundreds of times as designers hand off beautiful patterns and workflows that can't ever be implemented as designed. The devil is in the details.
One of the reasons Windows succeeded for so long is that it enabled people to do a common set of activities with minimal training and maximizing reuse of as few common patterns as possible.
Having worked in and on Visual Studio, it's a great example of what happens when you build an interface that allows the user to do anything, and the developer to add anything. Immensely powerful, but 95% of the functionality is rarely if ever used, training is difficult because of the breadth and pace of change, and discovery is impossible.
And ironically, one of the reasons why Windows was successful in developing these patterns for office applications is that much of the work was done by IBM.
The UI in Windows 3 was functionally almost identical to the Presentation Manager interface that had been designed for OS/2, the IBM-Microsoft collaboration. The design implemented an IBM standard called CUA.
CUA is not an exciting UI, but it did a good job of consolidating existing desktop software patterns under a consistent set of commands and interactions. The focus on enabling keyboard interaction was crucial for business apps, and a strong contrast to the mouse-centric Mac (which didn't even have arrow keys originally).
The kind of extensively data-driven UI system development that CUA represented is totally out of fashion nowadays, though. Making office workers' lives easier is terribly boring compared to designing quirky button animations and laying out text in giant type.
The key to efficient IPC is that the scheduler and the interprocess communications system have to be tightly coupled. Otherwise you have requests going to the end of the line for CPU time on each call, too many trips through the scheduler, and work switching from one CPU to another and causing heavy cache misses. QNX got this right.
(Then they were bought by a car audio company, Harman, and it was all downhill from there.)
QNX messaging isn't a "bus" system, and it has terrible discovery. Once communications are set up, it's great, but finding an endpoint to call is not well supported. The designers of QNX were thinking in terms of dedicated high-reliability real-time systems. It needs some kind of endpoint directory service. That doesn't need to be in the kernel, of course.
QNX is a microkernel, with about 60KB (not MB) of code in the kernel, and it offers a full POSIX interface. (There used to be a whole desktop GUI for it, Photon, good enough to run early versions of Firefox, but Blackberry blew off the real-time market and dropped that.) File systems, networking, and drivers are all in user processes, and optional. L4 is more minimal, probably too minimal - people usually run Linux on top of it, which doesn't result in a simpler system.
I still have a soft spot for QNX though; I hope they'll survive RIM.
60KB was just the kernel, not the additional processes that run in user space. The great thing about such a tiny kernel was that it could be fully debugged. The kernel didn't change much from year to year back in QNX 6.
Many embedded systems put the kernel in a boot ROM, so the system came up initially in QNX, without running some boot loader first. You built a boot image with the kernel, the essential "proc" process, and whatever user space drivers you absolutely had to have to get started.
QNX went open source for a while, starting September 2007, and there had been a free version for years. After the RIM acquisition, they went closed source overnight and took all the sources offline before people could copy them. That was the moment when they totally lost the support of the open source community.
> Having worked in and on Visual Studio, it's a great example of what happens when you build an interface that allows the user to do anything, and the developer to add anything. Immensely powerful, but 95% of the functionality is rarely if ever used, training is difficult because of the breadth and pace of change, and discovery is impossible.
+100. This is something I have advised any designer who would listen: you must have at least a basic understanding of the technology in order to understand the set of affordances with which you design your flows.
Finally, one thing I particularly dislike about VS Code is that this whole ease of discoverability was thrown out the window, and almost any complex task can't be completed without looking at the documentation.
Ctrl+Shift+P (on Windows; it might differ depending on OS) brings up the command palette, or whatever it is called.
Once it has narrowed down to the command you need, make a mental note of the shortcut next to it, then hit Esc and use that instead.
"Oh, but why should we allocate resources to something the majority of users won't use?" - everyone on HN
No need to get riled up, just take the good and ignore the rest :)
It's hard but not that hard; tons of experimental OS-like objects have been made that meet these goals. Nobody uses them.
What's hard is getting everyone on board enough for critical inertia to drive the project. Otherwise it succumbs to the chicken-and-egg problem, and we continue to use what we have because it's "good enough" for what we're trying to do right now.
I suspect the next better OS will come out of some big company that has the clout and marketing to encourage adoption.
But I think this "reinvent the world" concept has a deeper flaw: in all the discussion I didn't see any mention of how you make it performant, despite that being an identified problem. If everything's message passing... how much memcpy'ing is going on in the background? What does it mean to pipe a 4 GB video file to something if it's going onto a message bus as... 4 KB chunks? 1 MB?
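For a rough sense of scale, a back-of-envelope count of how many messages that 4 GB file becomes at a few chunk sizes (this ignores framing overhead and assumes fixed-size messages):

```python
# How many bus messages does a 4 GB file turn into at different chunk sizes?
GB = 1 << 30
KB = 1 << 10
file_size = 4 * GB

for chunk in (4 * KB, 64 * KB, 1 << 20):
    n = file_size // chunk
    print(f"{chunk // KB:>5} KB chunks -> {n:,} messages")
```

At 4 KB chunks that's over a million messages for a single file copy, which is why zero-copy mechanisms (shared buffers, handle passing) rather than literal memcpy'd messages would be mandatory in such a design.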
Remember this is a proposal to rebuild the entire personal computing experience, so "good enough" isn't good enough - it needs to absolutely support a lot of use cases which is why we have so many other mechanisms. And it also (due to the porting requirement) should have a sensible way to degrade back to supporting old interfaces.
Microsoft owns the desktop partly because they were absolutely dedicated to backwards compatibility. If you want to make progress, you need a plan for the same.
If UWP had been there in Windows 8, with something like .NET Standard already in place, the app story would be much different.
So good HTML 5.0 support is key, but there are a lot of layers between that and bare metal.
1. Application devs aren't trained to architect new software. They will port old shitty software patterns from familiar systems because there's no time to sit down and rewrite photoshop for Android. It's sad but true.
2. People abuse the hell out of it. Give someone a nice thing and someone else will ruin it whether they're trying to or not. A universal message bus has security and performance implications. Maybe if Android was a desktop os not bound by limited resources it wouldn't have pulled out all the useful intents and neutered services, but then again the author's point is we should remove these complex layers and clearly the having them was too complex/powerful/hungry for android.
I do think there's a point to be made that we're very mouse and keyboard centric at the primitive IO level and in UI design. I always wondered what the "command line" would look like if it was more complex than 128 ascii characters in a 1 dimensional array. But it probably wouldn't be as intuitive for humans to interface with unless you could speak and gesture to it as the author suggests.
I always thought LED keyboards were stupid because they are useless, but if they could map to hotkeys in video players and such, that could be very useful, assuming you can turn off the LEDs.
His idea for centralized application configs and keybindings isn't bad if we could standardize on something like TOML. The Options Framework for WordPress plugins is an example of this kind of thing, and it does help. It won't be possible to get all the semantics agreed upon, of course, but maybe 80% is enough.
Resurrecting WinFS isn't so important, and I feel like there'd be no way to get everyone to agree on a single database unless every app were developed by one team. I actually prefer heterogeneity in the software ecosystem, to promote competition. We mainly need proper journalling filesystems with all the modern features. I liked the vision of Lennart Poettering in his blog post about stateless systems.
The structured command line linked to a unified message bus, allowing for simple task automation sounds really neat, but has a similar problem as WinFS. But I don't object to either, if you can pull it off.
Having a homogenous base system with generic apps that all work in this way, with custom apps built by other teams is probably the compromise solution and the way things have trended anyways. As long as the base system doesn't force the semantics on the developers, it is fine.
Do you have a link to that?
If you are a Windows user you'll notice the problems it introduces in terms of security and maintenance.
Files are much better in both aspects.
To me the underlying issue is not centralization into a single database, but the usability of advanced configuration. Every OS has seen multiple attempts to solve that problem, which ended in more fragmentation for end users (i.e. macOS plists / the Windows registry / rc files / etc.).
While I agree with the author that more innovation is needed on the desktop, I think the essay is quite misinformed.
For example, Squeak can be seen as an OS with very few layers: everything is an object, and sys calls are primitives. As a user you can play with all the layers and re-arrange the UI as you want.
So why didn't the idea take off? I don't know exactly (though I have my hypotheses). There are many factors to balance, and those many factors are what make design hard.
One of those factors is that people tend to put innovation priorities in the wrong places. A good example is what the author lists as his priorities: none of the items mention fundamental problems that computer users face today (from my perspective, of course).
> I know I said we would get rid of the commandline before, but I take that back. I really like the commandline as an interface sometimes, it's the pure text nature that bothers me. Instead of chaining CLI apps together with text streams we need something richer [...]
I can't agree with that, it is the plain text nature of the command line that makes it so useful and simple once you know a basic set of commands (ls,cd,find,sed,grep + whatever your specific task needs). Plain text is easy to understand and manipulate to perform whatever task you need to do. The moment you learn to chain commands and save them to a script for future use, the sky is the limit. I do agree with using voice to chain commands, but I would not complain about the plain text nature and try to bring buttons or other forms of unneeded complexity to command-line.
I don't know what he means by "traditional", but Linux native filesystems can store all the metadata you'd want.
> Why can't I have a file in two places at once on my filesystem?
POSIX compatible filesystems have supported that for a long time already.
It seems to me that all the things he wants are achievable through Plan9 with its existing API. The only thing missing is the ton of elbow grease to build such apps.
> Document Database
This is what Akonadi was when it came out for KDE 4.x. Nepomuk was the semantic search framework, so you could rate/tag/comment on files and search by them. They had some performance problems and were not very well received.
Nepomuk has been superseded by Baloo, so you can still tag/rate/comment files now.
Most KDE apps also use KIO slaves.
> System Side Semantic Keybindings
Plasma 4 used to have compositor-powered tabs for any app. Can't say whether they will be coming back in Plasma 5.
Automatic app-specific colors (and other rules) are possible now.
> Smart copy and paste
The clipboard plasmoid in the system tray has multiple items, automatic actions for what to do with different types of content and can be pinned, to remain visible.
> Working Sets
These are very similar to how Activities work. Don't seem to be very popular.
(It's painfully naive, poorly reasoned, has inaccurate facts, and is largely incoherent. Even bad articles can serve as a nice prompt for discussion, but I don't think this one is even good for that. I don't think we'd ever get past arguing about what it is most wrong about.)
Then he wants to completely redesign a GUI to manage it all, which sounds a lot like Firefox OS with aware desktop apps, but with the added bonus that most things that require privileges on desktop OSes no longer need them with Guix. Software drivers are implemented in user space as servers with GNU Hurd, so you can now access these things and all the functionality that comes with them, exactly what the author wants.
> Bloated stack.
True, but there are options the author hasn't discussed.
> A new filesystem and a new video encoding format.
Apple created a new FS and a new video format. These are far too fundamental changes to be glossed over as trivial in a single line.
> CMD.exe, the terminal program which essentially still lets you run DOS apps was only replaced in 2016. And the biggest new feature of the latest Windows 10 release? They added a Linux subsystem. More layers piled on top.
The Linux subsystem is a great feature of Windows: the ability to run Bash on Windows natively. What's the author complaining about?
> but how about a system wide clipboard that holds more than one item at a time? That hasn't changed since the 80s!
Have you heard of Klipper and similar apps in KDE 5/Plasma? It's been there for ages and keeps text, images and file paths in the clipboard.
> Why can't I have a file in two places at once on my filesystem?
Hard links and soft links??
> Filesystem tags
What I feel about the article is: OSes have had these capabilities for a long time, so where are the killer applications written for them?
> In fact, many common applications are just text editors combined with data queries. Consider iTunes, Address Book, Calendar, Alarms, Messaging, Evernote, Todo list, Bookmarks, Browser History, Password Database, and Photo manager. All of these are backed by their own unique datastore. Such wasted effort, and a block to interoperability.
The ability to operate on my browser history or emails as a table would be awesome! And this solves so many issues about losing weird files when trying to back up.
However, I would worry a lot about schema design. Surely most apps would want custom fields in addition to whatever the OS designer decided constitutes an "email". This would throw interoperability out the window, and keeping it fast becomes a non-trivial DB design problem.
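One hedged way around that custom-fields worry is a hybrid schema: a few OS-defined columns that every app can query, plus an opaque JSON blob per document for app-specific extras. A sketch (the table and field names are invented, not any real OS's document store):

```python
import json
import sqlite3

# Hybrid document store: standard columns the "OS" defines, plus a JSON
# `extra` column so apps can attach fields the designers didn't anticipate.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE documents (
    id INTEGER PRIMARY KEY, kind TEXT, title TEXT, body TEXT, extra TEXT)""")

def put(kind, title, body, **extra):
    db.execute("INSERT INTO documents (kind, title, body, extra) VALUES (?, ?, ?, ?)",
               (kind, title, body, json.dumps(extra)))

put("email", "Meeting notes", "See attached...", thread_id="abc123", folder="inbox")
put("note", "Groceries", "milk, eggs")

# Any app can query across kinds with plain SQL on the shared columns...
titles = [r[0] for r in db.execute("SELECT title FROM documents WHERE kind = 'email'")]
print(titles)  # ['Meeting notes']

# ...and the owning app can still recover its custom fields.
extra = json.loads(db.execute(
    "SELECT extra FROM documents WHERE title = 'Meeting notes'").fetchone()[0])
print(extra["folder"])  # inbox
```

The trade-off is exactly the one raised above: the shared columns stay interoperable and indexable, while anything pushed into `extra` is fast to store but invisible to generic queries unless the engine can index into the JSON.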
Anyone have more insights on the BeOS database or other attempts since?
(afterthought: like a lot of ideas in this post, this could be implemented in userspace on top of an existing OS)
Edit: I believe the state of the art in this area is the UI Automation API for Windows. In case the author is reading this thread, that would be a good place to continue your research.
Just to rant on file systems for a sec, I learned from working on the Meteor build tool that they are slow, flaky things.
For example, there's no way on any desktop operating system to read the file tree rooted at a directory and then subscribe to changes to that tree, such that the snapshot combined with the changes gives you an accurate updated snapshot. At best, an API like FSEvents on OS X will reliably (or 99% reliably) tell you when it's time to go and re-read the tree or part of the tree, subject to inefficiency and race conditions.
"Statting" 10,000 files that you just read a second ago should be fast, right? It'll just hit disk cache in RAM. Sometimes it is. Sometimes it isn't. You might end up waiting a second or two.
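The "go and re-read the tree" fallback described above can be sketched in a few lines: take a {path: mtime} snapshot, rescan later, and diff. This is an illustration of the pattern, not Meteor's actual implementation, and it inherits all the race conditions the comment mentions:

```python
import os
import tempfile

def snapshot(root):
    """Walk the tree and record each file's mtime (one stat per file)."""
    snap = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                snap[path] = os.stat(path).st_mtime_ns
            except FileNotFoundError:
                pass  # raced with a delete between walk and stat; skip
    return snap

def diff(old, new):
    """Compare two snapshots into (created, deleted, modified) path lists."""
    created = sorted(set(new) - set(old))
    deleted = sorted(set(old) - set(new))
    modified = sorted(p for p in old.keys() & new.keys() if old[p] != new[p])
    return created, deleted, modified

root = tempfile.mkdtemp()
open(os.path.join(root, "a.txt"), "w").close()
before = snapshot(root)
open(os.path.join(root, "b.txt"), "w").close()
created, deleted, modified = diff(before, snapshot(root))
print([os.path.basename(p) for p in created])  # ['b.txt']
```

Every call to `diff` pays the full re-stat cost for the whole tree, which is exactly why this approach falls over at 10,000 files when the stat cache is cold.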
And don't get me started on Windows, where simply deleting or renaming a file, synchronously and atomically, are complex topics you could spend a couple hours reading up on so that you can avoid the common pitfalls.
Current file systems will make even less sense in the future, when non-volatile RAM is cheap enough to use in consumer devices, meaning that "disk" or flash has the same performance characteristics and addressability as RAM. Then we won't be able to say that persisting data to a disk is hard, so of course we need these hairy file system things.
Putting aside how my data is physically persisted inside my computer, it's easy to think of better base layers for applications to store, share, and sync data. A service like Dropbox or BackBlaze would be trivial to implement if not for the legacy cruft of file systems. There's no reason my spreadsheets can't be stored in something like a git repo, with real-time sync, provided by the OS, designed to store structured data.
Actually, that's a main selling point for Powershell. Commandlets take and return objects, which means common operations such as filtering, sorting and formatting are quite easy.
If the file system operated in an event-sourcing model, you'd be able to listen to a stream of events from the OS and reconstruct the state of the file system from them. If it acted like a database, you'd be able to do consistent reads, or consistent writes (transactions! holy cow).
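A toy illustration of that event-sourcing idea, with an invented event schema: the current tree is just a fold over the log, and any past state is a fold over a prefix of it.

```python
# Event-sourced "filesystem": state is derived by replaying an ordered log,
# so consistent reads of any historical state come for free.
def replay(events):
    state = {}
    for event in events:
        if event["op"] == "write":
            state[event["path"]] = event["data"]
        elif event["op"] == "delete":
            state.pop(event["path"], None)
        elif event["op"] == "rename":
            state[event["to"]] = state.pop(event["path"])
    return state

log = [
    {"op": "write", "path": "/a.txt", "data": "v1"},
    {"op": "write", "path": "/a.txt", "data": "v2"},
    {"op": "rename", "path": "/a.txt", "to": "/b.txt"},
]
print(replay(log))      # {'/b.txt': 'v2'}
print(replay(log[:1]))  # {'/a.txt': 'v1'}  (a consistent read of a past state)
```

A real implementation would need snapshots/compaction so reads don't replay the whole history, but the model is what gives you transactions: a batch of events either all lands in the log or none of it does.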
You can! Use hardlinks.
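For example, on a POSIX filesystem (a quick sketch using Python's `os.link`): one file, two directory entries, and a write through either name is visible through the other because both point at the same inode.

```python
import os
import tempfile

# Demonstrate a hard link: "one file in two places at once".
d = tempfile.mkdtemp()
a = os.path.join(d, "original.txt")
b = os.path.join(d, "linked.txt")

with open(a, "w") as f:
    f.write("hello")
os.link(a, b)  # create a second name for the same file

with open(a, "a") as f:
    f.write(" world")

print(open(b).read())                          # hello world
print(os.stat(a).st_ino == os.stat(b).st_ino)  # True: same inode
```

The usual caveats apply: hard links can't span filesystems and (on most systems) can't point at directories, which is part of why people reach for symlinks instead.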
There are well established standards for controlling window managers from programs, what on earth are you talking about?
> Applications would do their drawing by requesting a graphics surface from the compositor. When they finish their drawing and are ready to update they just send a message saying: please repaint me. In practice we'd probably have a few types of surfaces for 2d and 3d graphics, and possibly raw framebuffers. The important thing is that at the end of the day it is the compositor which controls what ends up on the real screen, and when. If one app goes crazy the compositor can throttle it's repaints to ensure the rest of the system stays live.
Just like Wayland!
> All applications become small modules that communicate through the message bus for everything. Everything. No more file system access. No hardware access. Everything is a message.
Just like flatpak!
This is entirely feasible with the current infrastructure.
> Could we actually build this? I suspect not. No one has done it because, quite honestly, there is no money in it. And without money there simply aren't enough resources to build it.
Some of this is already built, and most of it is entirely feasible with existing systems. It's probably not even that much work.
On top of this, the trade-off of creating an entirely new OS is enormous. Sure, you can make an OS with no apps because it's not compatible with anything that's been created before, and then you can add your own editor and your own web browser and whatever. And people who only need those things will love it. But if you need something that the OS developer didn't implement, you're screwed. You want to play a game? Sorry. You want to run the software that your school or business requires? Sorry. Seriously, don't throw out every damn thing ever made just to make a better suite of default apps.
Autodesk Inventor and Blender are at opposite ends of the "use the keyboard" range. In Inventor, you can do almost everything with the mouse except enter numbers and filenames. Blender has a 10-page list of "hotkeys". It's worth looking at how Inventor does input. You can change point of view while in the middle of selecting something. This is essential when working on detailed objects.
3D XPoint memory is coming. This is about 10x slower than DRAM but persistent and at a fraction of the cost. At 10x slower you can integrate it into NUMA systems and treat it as basically the same as RAM. One of the first features prototyped with it is "run a JVM with the heap made entirely persistent".
I agree that there's a lot of scope for innovation in desktop operating systems but it probably won't come from UI design or UI paradigms at this point. To justify a new OS would require a radical step forward in the underlying technology we use to build OS' themselves.
Well, it is hard, but this is not the main source of issues. The obstacle to having nice things on the desktop is this constant competition and wheel reinvention, the lack of cooperation.
The article makes some very good points, but just think of this simple fact: it's 2017, and the ONLY filesystem that will seamlessly work with macOS, Windows and Linux at the same time is FAT, a filesystem that is almost 40 years old. And it is not because such a filesystem is so hard to make. Not at all.
Now this is at the core of reasons why we can't have nice things :)
Universal Disk Format? 
ExFAT can also be used on all currently supported versions of Windows & macOS and added to Linux very easily via a package manager.
You could argue there isn't any need for a cross-platform filesystem these days. It's often easier to simply transfer files over Ethernet, Wi-Fi or even the Internet.
To your last comment, I will reply with the old adage to "never underestimate the raw bandwidth of a station wagon loaded with tapes/drives barreling down the highway."
Yes, you can kinda hack it into usage, but programs like GParted would not let me create a UDF partition last I checked (Windows sorta can, under the live-drive moniker, IIRC).
Maybe by starting to build command-line apps and seeing how well the idea works (cross-platform would be nice). I guess the resulting system would have some similarities with RxJava, which lets you compose things together (get A and B asynchronously, then build C and send it to D if it does not contain Foo).
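A minimal sketch of that composition style, written here with Python's asyncio rather than RxJava; the fetchers and the sink are hypothetical stand-ins for real services:

```python
import asyncio

# Hypothetical async sources and sink, standing in for real services.
async def fetch_a() -> str:
    return "alpha"

async def fetch_b() -> str:
    return "beta"

async def send_to_d(c: str) -> None:
    print("sent to D:", c)

async def pipeline() -> str:
    # Get A and B concurrently (like zipping two observables in RxJava).
    a, b = await asyncio.gather(fetch_a(), fetch_b())
    c = f"{a}+{b}"            # build C out of A and B
    if "Foo" not in c:        # forward only if C does not contain "Foo"
        await send_to_d(c)
    return c

result = asyncio.run(pipeline())
```

The point is not the library but the shape: small composable stages, so a command-line prototype could wire the same pipeline out of separate processes.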
If an app talked to a data-service, it would no longer have to know where the data is coming from or how it got there. This would allow building a whole new kind of abstraction: data could be stored in the cloud and only downloaded to a local cache when frequently used, then later synced back to the cloud transparently (maybe even ahead of time, because a local AI learned your usage patterns). I know you can have such sync things today; they are just complicated to set up, or cost a lot of money, or work only for specific things/applications, and they are often not accessible to normal users.
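As a sketch of what such a data-service abstraction might look like, here is a toy cache-through store; `RemoteStore` stands in for "the cloud" and every name is invented for illustration:

```python
class RemoteStore:
    """Hypothetical stand-in for cloud storage."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value


class DataService:
    """Apps ask for data by key; where the bytes live is the service's problem."""
    def __init__(self, remote):
        self._remote = remote
        self._cache = {}   # local cache of frequently used items

    def read(self, key):
        if key not in self._cache:          # cache miss: pull from remote
            self._cache[key] = self._remote.get(key)
        return self._cache[key]

    def write(self, key, value):
        self._cache[key] = value            # write locally first...
        self._remote.put(key, value)        # ...then sync back transparently


cloud = RemoteStore()
svc = DataService(cloud)
svc.write("notes.txt", "hello")
doc = svc.read("notes.txt")
```

A real version would sync asynchronously and evict the cache by usage patterns, but the app-facing API would stay this small.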
Knowing how to interact with the command-line gives advanced users superpowers. I think it is time to give those superpowers to normal users too. And no, learning how to use the command-line is not the way to go ;-)
A capability-services based OS could even come with quite an interesting monetization strategy: selling extra capabilities, like storage, async computation or AI services, besides selling applications.
Well. Then you get Spotlight (on OSX, at least) - system-wide file/metadata/content search.
It's great! It's also quite slow at times. Slow (and costly) to index, slow to query (initial / common / by-name searches are fast, but content searches can take a second or two to find anything - this would be unacceptable in many applications), etc.
I like databases, but building a single well-performing one for all usages is quite literally impossible. Forcing everyone into a single system doesn't tend to add up to a positive thing.
And why bash the Linux subsystem, which is surely not even developed by the UX team (so no waste of resources) and is a much-needed feature for developers?
BTW, there is a really simple reason why mainstream OSes have a rather conservative design: the vast majority of people just don't care, and may even get angry when you change the interaction flow. Many of the ideas presented in the post are either developer-oriented or require significant training to be used proficiently.
Virtual desktops have been part of Windows since at least Windows XP. The necessary architecture was already in place, Microsoft just didn't include a virtual desktop manager. There were/are several available.
I would love a few more options, like pinning one window and having two others share the remaining space (like a video player in a corner).
You can, at least with Windows 10, have the screen split into 4. But once you go beyond a left/right split, the other windows will not resize to maintain their areas if you resize one of them.
The network settings menu in the status bar is much better. I can turn wifi on and off easily.
I like the new notification panel, and setting reminders in Cortana.
The new Mail app is great. The Money app is great. The News app is great. The Calendar app is great. The Weather app is great. Very simple to use.
You can set dark color schemes nearly system-wide.
The lock screen is cool.
Edge doesn't suck.
Heck, you could probably run stuff from the 3.x era, if you installed Win10 as a 32-bit OS.
BTW, this is related to how x86 CPUs handle 64-bit mode, not to the OS itself.
A lot of the other things the author talks about keep the ecosystems going. The ecosystems, especially key apps, are why many people use these desktop OSes. Those apps and ecosystems take too much labor to rebuild from a clean slate. So the new OSes tend not to have them at all, or use knock-offs that don't work well enough. Users decide they're useless and leave after the demo.
Market effects usually trump technical criteria. That's why the author's recommendations will fail as a whole. "Worse Really is Better", per Richard Gabriel.
Personally, I do find compelling the idea of an operating system composed of services and applications that all share the same messaging system.
If this happens it's only going to happen with a top-down design from an industry giant. Android and Fuchsia are examples of how it might happen. Will it? It seems these days nobody cares as long as the browser renders quickly.
To complement it a bit. There's the problem of bootstrapping. Once all that new city infrastructure and beautiful planning is complete, who wants to move into that new city that has no markets, stores, bars, restaurants, etc?
The desktop is full of old cruft because people use old crufty software today. They must be able to keep using that old crufty software until better alternatives exist; but they use many pieces of old crufty software, and better alternatives come slowly.
Desktop builds the new city adjacent to the old one and makes the grass greener there, but it takes quite a while for the old city to get empty.
It is very tempting to see all the complexity of an open system and wish it were more straightforward, more like a closed system. But this is a dangerous thing to advocate. If we all only had access to closed systems, who would we be ceding control to? Do we really want our desktop operating systems to be just another fundamentally closed-off walled garden?
Like, for example, the WWW. Why is it that desktops have no native support for the user to organize web applications, and everything is handled through a single app, the browser?
It gives an immediate answer to "do I need to read this?", and if so, what key arguments should I pay attention to?
Let me finish by expressing my thanks to the author for including a tl;dr.
I'm happy to answer your questions.
For example, as in shift my workspace from my upstairs office to my downstairs work area just by signing in on the different console setup downstairs. All of my in-process work comes right back up. Right now I do this (kind of) using VMs, but they are limited when addressing hardware, and now I am multiplying that hardware.
Same thing with my streams: switch my audio or video to the next room/zone I want to move to. Start researching how to correctly adjust my weed whip's carburetor, then go out to the garage and pull up my console there, where my workbench and the dismantled tool are.
Eventually my system would track my whereabouts, with the ability (optionally turned on) to automatically shift that IO to the closest hardware setup to me as I move around the structure/property.
And do something like this for each person? So my wife has her streams? Separate back end instance, same mobility to front-end UI hardware?
Can this new Desktop Operating System be designed with that hardware abstraction in mind?
Where some people might not want certain defaults, most people have no clue how to get access to software and will take whatever is already there. This is part of the reason all Windows devices come preinstalled with 50% Windows and 50% OEM bloat: the OEM gets paid, and the customer might 'use what is already there' and, the bloatware vendors hope, purchase a full version or subscription.
What you want and what other people want most likely don't line up and never will. This is because there is no universal configuration for everyone, and because the median is not going to work for anyone at all (e.g. install GarageBand but not a browser, or install Numbers but not Pages).
Having used the newest iPad Pro 10.5 (along with the iOS 11 beta), the first few hours were pure joy; after that, frustration and anger came flooding in. Because what I realized is that this tiny little tablet, costing only half a MacBook Pro or even an iMac, limited by a fanless design with lower TDP, 4GB of memory, no dedicated GPU, and a likely much slower SSD, provides a MUCH better user experience than any Mac or Windows PC I have ever used, including the latest MacBook Pro.
Everything is fast and buttery smooth; even the web browsing experience is better. The only downside is that you are limited to the touch screen and keyboard. A number of times I have wondered if I could attach a separate monitor and use it like the Samsung desktop dock.
There are far too many backward-compatibility concerns to care for with both Windows and Mac. And this is similar to the discussion in the previous Software off Rails. People are less likely to spend time optimizing when things work well enough out of the box.
Quite true. I'm genuinely surprised how much progress Apple has made with iOS 11. The fact that they are giving users a file management app means they are finally ready to handle real work. With a really good Bluetooth keyboard....
Maybe you prefer whalemail, yousendit, or one of the other sign-up-free and get-ads-forever services for large attachments, and that's fine, they're not going away.
Neither is DropBox, for the time being. I'm both worried and excited about DropBox's new offerings and I'm all for it as long as they don't become Evernote and start selling backpacks and rebranded Fujitsu scanners. :(
Talking about how iOS is great for consumers but doesn't have a good keyboard is a bit tone deaf.
But if a non-consumer, workstation OS is what we want, then I value backward compatibility over everything else. Which means everything he wanted to remove is here to stay.