We already have so many different desktop OSes. There's hardly a need for a _revolution_; strong stability and support are needed, but there's just no paying market for it by the looks of it.
The four biggest desktop environments come from mega-corporations (Apple, Microsoft) and big open-source collectives (GNOME, KDE). But is that it?
No, there's also a bunch of tiling window managers with their own desktop philosophies, and there are entire terminal drawing frameworks with full GUI support.
Desktop is awesome. If anything, mobile is the one that needs to catch up. iOS and Android are awful, awful environments outside of casual usage. Just try writing up a document on your phone!
Just imagine what could be if they were on the level of _desktops_.
Real question: do we continue to dumb down software to exclusively casual usage for house-moms, or do we try to move our society to be more tech savvy, given that we are clearly moving toward a more tech-reliant world? Why shouldn't everyone know how to code?
To paraphrase Mr. Engelbart: it's a failed tool if you use it exactly the same way a year after you bought it.
Why shouldn't everyone churn their own butter? Why shouldn't everyone sew their own clothes? Why shouldn't everyone build their own homes? Should we continue to dumb down feeding, clothing, and sheltering ourselves for exclusive casual usage of <insert offensive stereotype>?
There are only so many hours in the day and time in our lives; why hide the benefits of technology behind arbitrary gatekeeping?
We don't need to know how to sew our own clothes, but knowing how to mend them is useful. Do we really want to live in a world where we toss out a shirt simply because a button falls off?
We don't need to know how to build a home, but knowing how to fix simple problems is useful. Do we really want to live in a world where we have to call in an electrician every time we trigger a circuit breaker?
We don't need to know how to churn butter, but knowing how to cook is useful. Do we really want to live in a world where we depend upon someone else to decide what goes into every meal we eat?
Yes, computers are there to do stuff for us and to save time. We should be exploiting that. On the other hand, we should not be reliant upon it to the point where it interferes with control over our own destinies or the creative process.
Related to this is the phenomenon of turning everything into a service. Why would you own things and accept responsibility for maintaining them, if you could just throw money at a service provider and have the thing be present when it's needed? Of course the thing will be extremely limited in what you can do with it, subject to Terms & Conditions, but why would you want to do anything non-standard with your stuff? There's always another service you can throw money at to solve the same problem.
What's the end-game here? That we specialize into sub-species of humans, forever stuck in one role, with zero autonomy? No longer building wealth, we'll only be allocating the flow of money - from what Society gives us in reward for our work, straight to Services of said Society? Will we become specialized cells of the meta-multicellular organism Society becomes?
I can see how we're on the path towards that reality, and I absolutely hate it.
A couple of million years without advancing is hyperbole to the extreme; the specialization you are touting has only existed for 100-150 years or so (see Foucault, the Frankfurt School, etc.).
/Collaboration/ has been a hallmark of civilization, but it is revisionist to say that we have always specialized to this extreme or seen it as a necessary goal. For example, the blacksmith of feudal society didn’t only make spoons, nor did the farmer grow a single crop. If we want to critique previous civilizations, we need to be wary of the fact that such systems were determined primarily through morality, such as /Plato’s Republic/ or /Confucianism/, rather than any ideal of prosperity. The farmer was a farmer because that was his/her place, not because they were better or worse at it.
Even if I’m being charitable and equate division-of-labour with specialization (which, I think, is a /huge/ leap), it does not counter the original point: there is little autonomy in what you choose to do for a living in the logical conclusion of a system where you must specialize to compete.
1) Specialization leads to more autonomy
2) Extreme specialization is as "old as civilization"
3) Extreme specialization leading directly to prosperity as an idea is older than the 18th century
On 1) I don’t see any further arguments on your side; so I assume that you have no qualms with such a correction. On 2) I believe that I’ve adequately addressed your concerns — I acknowledge /Plato/ specifically in my reply as shown in your link and my critique on "Ancient theories" is covered in a previous post. The last point overlaps with the 2nd point and I have found no criticisms to the contrary in your answers.
In semantics and pedantry over terms I’m not interested; we could debate all day, and I could argue that the very term "Division of Labour" originated in Adam Smith’s work and therefore isn’t the same as specialization. Such a debate would be neither useful nor productive.
Exactly. That's why it makes a lot of sense to study/learn things outside your area of expertise, and/or if you want to lean on the more extreme side, study/learn basic survival skills (that everyone used to know before the agricultural revolution).
I think it's crazy that some people don't cook. Like, every house and apartment comes with a kitchen; what do you mean you don't use it? "It's so much cheaper," I say, ignoring the fact that I'm a cooking and baking enthusiast, so I'm not counting that if I valued my time at any reasonable rate, it's really not.
It's the same thing in the woodworking community -- "you'll save so much money, anything you see in the store I can make half as good, for twice the price!"
Traditionally in urban areas most people actually purchased food instead of making it, because most people did not have kitchens.
You can kind of still see that today; many old New York tenements have been converted into apartments with "kitchens", but really this usually describes something about the size of an airplane bathroom, with a stove, a fridge, and one counter.
There is much more to sewing clothes than threading a needle and guiding it through a button hole, at least if you want to make something that will fit and will last. The complexity of the product also plays a strong role.
Much the same can be said of coding. Being able to issue a command in a shell or compose a function in a spreadsheet is probably the closest analog to sewing on a button, but how many people can even do those things?
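For a sense of scale, the "sewing on a button" level of coding might look something like this sketch in Python (the filenames and the year prefix are made up for illustration): a loop, a condition, a rename.

```python
import os

def add_year_prefix(directory, year="2024"):
    """Prefix every .txt file in `directory` with a year, once.

    About the effort level of sewing on a button: no frameworks,
    no build system, just a small, concrete chore automated.
    """
    renamed = []
    for name in sorted(os.listdir(directory)):
        if name.endswith(".txt") and not name.startswith(year + "-"):
            new_name = year + "-" + name
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, new_name))
            renamed.append(new_name)
    return renamed
```

Tiny as it is, this is already beyond what most consumer platforms invite users to attempt.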
We live in a world where our phones' calculators are typically as powerful as a four-function calculator from decades ago, perhaps with a subset of the functions found on a scientific calculator. How do we expand our minds beyond that limited scope if vendors are afraid of creating software that allows us to compose anything more complex? With the status quo, we have to seek out options, and those options are mostly targeted at professionals.
Similar things can be said for other domains. While writing about coding, I was actually thinking of graphic design and word processing and databases and the many other domains that have been oversimplified by modern consumer applications. For the most part, their functionality has been reduced to the point where you can perform a very narrow range of tasks with very little scope for imagination. For example: the database that backs your address book cannot be adapted to catalogue your books, and the online word processor that is fine for writing reports is poorly suited for preparing a book for publication. Sure, there are professional alternatives out there. On the other hand, it seems as though people had a lot more flexibility with the software of the 1990s than the 2010s.
To get a driver's license you need some basic knowledge of car internals. I'd argue that computers are infinitely more important in our society than cars, yet the majority of people have absolutely no idea how computers work and are expected not only to live in this world but to ace it too.
The car is a relatively simple machine compared to a desktop computer, and unlike with computers, most basic knowledge applies to all cars regardless of brand and model. Our computers are patched together with gaffer tape; they're not some timeless universal design the way the clutch in a car is.
In fact, in a way I become worse at using them because I know more about them - think about "just turn it off and on again" vs wanting to debug it and understand the root cause of whatever issue I'm having.
The computer equivalent of what most people know how to do around the house is clearing the browser cache, restarting a service, running something at startup, and other troubleshooting steps like this. Stuff that you learn in under a day just like you would when learning basic clothes mending, replacing a faucet, changing a tire or your oil, or cooking a meal.
Any reasonable definition of "coding" is creating something. Like building your own electronic circuit, mechanical part, simple clothes, etc. This is beyond what a normal person is expected to know about their stuff as general knowledge in life. Everyone should just understand the principles of the tools they're using and basic "under the hood" stuff to assist with basic troubleshooting.
In reality, in the parts of the world with higher standards of living (where people can afford stuff), this piling up of expectations just leads people to give up and pay for services rather than learn all that. And for good reason: modern society has this bad habit of taking every shred of free time and complicating your life, with unfortunate consequences.
> this piling up of expectations just lead people to give up and pay for services rather than learn all that
A small bit of knowledge can save you a lot of money in services. I get why the wealthy wouldn't care, but most people aren't wealthy. Also, I feel there's more pressure from the sales & marketing departments of services than from modern society's demands on free time.
Where do you live that this is true?
Programming is not just like being a car mechanic or factory-floor engineer; it's like being an expert driver at the same time.
When I use my desktop, it often occurs to me that I could write something to speed up a task, if only the application was accessible in a similar way to Emacs, or had Amiga-style ARexx ports that I could talk to in a script. From this perspective, programming is the most fine-grained GUI affordance within a computer system. By making it accessible along a continuum with simpler GUI tools, we greatly increase the ability of the user to do magic, or to learn to do it.
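A minimal sketch of what such a script port could look like, using a toy in-process stand-in for a real application (the application, its command vocabulary, and the buffer contents are all hypothetical):

```python
# Sketch of an ARexx-style "script port": an application exposes a small
# command vocabulary that any external script can drive, instead of the
# user clicking through the GUI. All names here are hypothetical.

class EditorPort:
    """Pretend application that registers scriptable commands."""

    def __init__(self):
        self.buffer = []
        self.commands = {
            "append": lambda text: self.buffer.append(text),
            "count":  lambda: len(self.buffer),
            "get":    lambda i: self.buffer[int(i)],
        }

    def send(self, command, *args):
        """What a user script would invoke instead of a mouse gesture."""
        return self.commands[command](*args)

# A "user script" automating a repetitive task through the port:
port = EditorPort()
port.send("append", "TODO: review chapter 2")
port.send("append", "TODO: fix figure captions")
```

In a real system the `send` calls would cross a process boundary (a socket, a message port), but the affordance is the same: every GUI action is also a callable verb.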
I would really like to see the development of an ergonomic expert-oriented desktop that lets me use my programming skills in a high-level and bureaucracy-free manner, to augment my use of an attractive and well-integrated GUI. There's no reason why such features should impinge on ordinary non-programmer use.
Arguably, programming is what computers are for. If you're not programming it in some way, then it is more like an appliance that just happens to contain a computer. Personal computers of the 80s booted directly into a programming environment.
In other words, I'd like there to be a concept of OS-level "system features" available to high-level languages through something a little more friendly and robust than interfacing with a C library. I don't know how I'd implement it :) but it's how I'd like things to be.
Macs have AppleScript, and while application support can be hit-or-miss, I’ve got several little things I’ve done with it that have made my life a lot easier. There’s a couple features I wanted in Illustrator that I’ve been able to work around with scripts, I have a script hanging around to help with a tedious part of turning a big pile of individual page files into a comic, I have a hotkey that rotates my display and my Wacom tablet with one keystroke when I want to work with my monitor in portrait orientation for a while, and a few other things I’m not thinking of. Some of these I use once every few years, some I use multiple times a day.
For some reason, UIs are now designed for an "average user", but as the air force found out long ago: average users don't exist in the real world; they're an entirely made-up concept.
The solution to this is customizability. Create an OS that's easy to use in the default configuration but let me tweak it to my own needs just like I can adjust the seat in my car.
General problem-solving ability seems like a synonym for fluid intelligence, which is not very malleable. Learning to code, on the other hand, is possible with effort. I learned to program in the 4th grade, with videos and books I bought myself, without having internet access. (I could use dial-up if I really needed it, but it was expensive and slow, so I used it very sparingly; I don't remember how much, but around 5 hours per month seems an upper bound.) I had no support whatsoever from anyone (except that my dad paid for the books and videos), my mom only let me use my computer for about 3 hours a week (shared between gaming and everything else), my computer was old and slow, and so on. Now, I sure have a high IQ, but I doubt that we couldn't have 20% of the urban population reach some basic computer literacy by the time they are 24. Heck, calculus is known by more people than coding. Most non-poor people waste 16+ years of their lives in K-12 and undergrad, and learn very few useful skills. Imagine what would happen if we taught people a curriculum that did something other than pure signalling.
Selling pre-made butter or clothes doesn't prevent someone from making their own if the pre-made one doesn’t fit their needs.
In technology, making your own is often outright impossible due to proprietary APIs.
In a lot of cases, the inefficiency of the official implementation is a feature for the developer: they definitely do not want people to build more efficient clients (examples: no ads/irrelevant content, defaulting to a chronological feed instead of an algorithmic one, etc.), and they use technical (and sometimes legal, like abusing copyright law) workarounds to make the process as difficult as possible.
The aim should be to make programming/scripting/automation easier and more accessible, not to hide it away to prevent people from ever using it.
> Why shouldn't everyone churn their own butter? Why shouldn't everyone sew their own clothes?
Why learn history in school? Why learn math? Why learn about philosophy? Should we stop teaching that in school because the <stereotype> will never use it anyways? Or is the opposite true and not teaching that would be the actual gatekeeping?
TLDR: You suggest that this would hide <useful stuff> behind programming and thus be gatekeeping. I think you are hiding useful stuff==programming and are thus the gatekeeper.
Why shouldn't everyone read? Why shouldn't everyone write? Why shouldn't everyone do the basic math?
Being able to use computers efficiently is knowledge, not chores. To be able to use them as a bicycle for the mind goes way beyond pressing colored buttons according to emotion.
I actually think some game user interface ideas should make it to the desktop. Game interfaces are always customizable up-front.
To quote a commonly-used Web meme: "Why not both?"
In my opinion, why should a software tool have only one interface? What if there were many possible interfaces available, from very simple interfaces with reasonable defaults for casual users, to more option-rich interfaces for power users, to an API for programmers? What if we could take advantage of today's AI technology to automatically construct GUIs tailored to a user's experience level? What if users could customize the GUIs to make them fit their needs better?
What if the system supported a variety of languages, not only common languages such as Python that many programmers are familiar with, but also beginner-friendly languages? Users are willing to program provided it's not too difficult: AppleScript from the 1990s was a step in the right direction, and Excel's macro language is probably the most widely-used programming language in the world. With today's AI/NLP technology, we could go further by developing ways for users to describe repetitive, routine tasks using natural language.
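The "many interfaces to one tool" idea can be sketched in a few lines. Here the tool is a hypothetical image-resize operation, reduced to pure arithmetic for brevity; the point is that the casual, power-user, and programmer interfaces all wrap the same core:

```python
def resize_core(width, height, scale, keep_aspect=True, round_to=1):
    """Full-control API: every knob exposed, for programmers."""
    new_w = int(width * scale) // round_to * round_to
    if keep_aspect:
        new_h = int(height * scale) // round_to * round_to
    else:
        new_h = height
    return new_w, new_h

def resize_simple(width, height):
    """Casual-user interface: one sensible default, no options."""
    return resize_core(width, height, scale=0.5)

def resize_power(width, height, scale, round_to):
    """Power-user interface: more knobs, still no code required."""
    return resize_core(width, height, scale, keep_aspect=True,
                       round_to=round_to)
```

The GUIs described above would just be different front-ends bound to these layers, so growing from the simple interface to the API never means switching tools.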
I think there's still a lot of room for innovation on the desktop. But you highlight a very big problem: where is the market? Who is going to pay for this innovation? Outside of open-source projects, the major commercial desktop environments are platforms controlled by multi-billion dollar corporations. Building a new desktop environment that is capable of competing against the commercial giants will take a lot of time and capital. The last company to give this a try was Be, Inc. in the mid-1990s, and they had a hard time competing against Microsoft's OEM strategy. I wrote more about this at http://mmcthrow-musings.blogspot.com/2020/10/where-did-perso....
System functionality could be expanded through various means, but most often devs used Extensions. And if a software issue arose, it was easy to disable all Extensions by holding the SHIFT key on start-up. Also, on start-up you'd visually see which Extensions were being loaded. So you were always aware of what you had installed.
In current macOS it's very hard to keep track of what I've installed. I install a lot of stuff using tools like Homebrew. Some software might install some system level hooks, etc... From my perspective it's kinda hard to keep the system "clean". And it's probably a good idea to do a clean install of my computers maybe once every year or so, since I might have installed stuff I don't really use anymore.
Also, there was the System Folder, and that directory contained the Extensions, Preferences, and Control Panels directories, etc. So you could also manage your System Folder at the file-system level. You could just delete an Extension manually from the Extensions directory to uninstall it. You didn't need any "uninstall" software most of the time.
A classic macOS-like environment with a few more modern features (maybe a WindowMaker-like UI, multi-user support, and real multi-tasking) would be pretty neat.
It's very, very difficult to beat a keyboard. Tablets and even phones become night-and-day more usable if you plug in a keyboard, even ignoring everything else that still sucks about them.
I find screen size to be a bigger advantage to desktop.
Mind that I mention it was specifically index finger swiping. Maybe it wasn't actually faster than physical keyboard, but it was certainly an order of magnitude faster than discrete thumbs phone typing.
GNU/Linux has the necessary tooling for making it as well, but thanks to the fragmentation and some communities' hatred of GNOME/KDE, it will never happen.
This is what a modern desktop OS should look like,
"Eric Bier Demonstrates Cedar"
Including the part about being written in a memory-safe systems programming language.
I’m not looking to debate - just that it’s not obvious and I suspect the specific reasons why would be interesting to know about.
Mesa/Cedar also shares some ideas with the other workstation variants from Xerox PARC, namely Interlisp-D and Smalltalk, but is based on a strongly typed language for systems programming, with reference counting and a cycle collector.
The language itself compiles to bytecode, because Xerox PARC machines used microcoded CPUs whose bytecode interpreter was loaded on boot. So in a sense it was still native, somehow.
The full OS was written in Mesa/Cedar, and everything was kind of exposed to the developers.
The shell is more like a REPL, where you can access all that functionality, meaning the public functions/procedures from dynamically loaded modules, interact with the text selection from any application window, or execute actions on a selected window. And as a REPL, it worked on structured data.
Basically similar to what PowerShell offers, with its structured data and the ability to call any COM/UWP, .NET, or plain DLL libraries.
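The difference between a text pipeline and a structured one can be sketched in Python (the process records below are made up): each stage passes objects with named fields, so downstream steps filter and sort on fields instead of re-parsing columns of text.

```python
# Hypothetical process listing, as structured records rather than text rows.
processes = [
    {"name": "editor",  "cpu": 12.5},
    {"name": "browser", "cpu": 48.0},
    {"name": "shell",   "cpu": 0.3},
]

# The structured analogue of a `ps | grep | sort` text pipeline:
# filter on a field, sort on a field, no column counting or regexes.
hot = sorted(
    (p for p in processes if p["cpu"] > 1.0),
    key=lambda p: p["cpu"],
    reverse=True,
)
```

This is the property the Cedar shell and PowerShell share: the "output" of one step is still data, not a string that the next step has to guess the shape of.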
Then you could embed objects into other objects, and this is the basis of the compound document architecture, basically the genesis for OLE in Windows and COM (COM is just the basic feature set upon which OLE is built, although more OLE 2.0; the 1.0 version was more complicated still).
The way Office works between applications, and its inline editing of OLE documents, can be found in the Xerox PARC workstations, as Charles Simonyi, one of Bravo's creators, brought Bravo's ideas into Word.
Since Windows Vista, most new APIs are actually a mix of .NET and COM (now UWP), which expose a similar high level set of OS API (bare bones Win32 has hardly changed since XP days).
Now, many of these concepts can also be found in GNOME and KDE, however due to the way distributions get put together, it is hard to really provide such integrated developer experience across the whole stack.
And while REPL like shells do exist for UNIX clones, their adoption is a tiny blip when compared against traditional UNIX shells.
I take this as a joke. There is nothing modern-looking about it. Geeky, yes. Not designed for touch interface. Resembles Oberon, which I would not call modern, either. Maybe, we are not ready for it yet. Belongs in the future, then. (Or, more likely, in the past.)
To properly understand a book, one has to read more than just the back-cover overview.
- not thinking about files: I can open Notes/Drafts on my phone and get a textbox. I kinda get this with Joplin, barely.
- Real sandboxing, with a nice permission layer
- Extremely easy sharing of data between apps. Of course files are theoretically a great sharing mechanism, but the sharing mechanisms in mobile OSes are the logical conclusion of the clipboard
- URIs that go deep into other apps. Lets you easily say "go over here to see details" from a completely separate system
The fact that lots of stuff is webapps lets you get pretty far on desktop too, but I think these metaphors are power-user features that the desktop could learn from.
Case in point: tried sharing a vpn config with an Android user over Signal. They couldn’t do anything with the file, just yielding an error message saying it was unsupported. Sending the exact same file with a .pdf extension allowed them to download it and import it in their VPN app (only after downloading and installing a generic file manager app, though).
Every now and then I struggle with some file that I can’t figure out how to move between apps. Something as seemingly simple as downloading an mp3 file from a browser and importing it to a music player app is quite an ordeal on iOS.
Sadly, the developers never managed to go beyond a few core libraries and a nice theme (the GUI used GNUstep under the hood). I followed the development with great interest till they stopped updating the site; I believe that the effort would have required many more developers. What a pity!
And I hate app-data not being files, not being portable. My data may be stuck inside a SaaS and only accessible through a closed-source app, and I have nothing to say about it.
> Real sandboxing, with a nice permission layer
Sandboxing apps is a good thing. Depending on your use-case, Snap/Flatpak or containers kinda solve this, but they are not the default way of running apps for now.
What mobile does wrong here, though, is that it also sandboxes the user: it neither gives the user full access to their own device, nor lets them grant their apps that access.
This is user-hostile, on all current major mobile-platforms.
> Extremely easy sharing of data between apps
I would rephrase this as barely functional sharing, for only the limited subset of data the application has decided to implement sharing for, and only in ways the application-developers have considered inter-app sharing.
On a desktop, I as the user, have the power to decide how I want to share data and invent new ways data can be shared and utilized.
> URIs that go deep into other apps
While that is certainly a neat feature, it's an app-centric feature. How do you know which kinds of apps I have installed? How do you know which of the apps within that niche I have installed?
And if you're not going to make it app-centric, you have to make it file/data-centric anyway... which leaves us with Android. Android does this better than iOS by having an intent system that lets apps register the ability to handle files, URLs, and subsets of those, and other apps can query which apps support which file/URL intents. So basically just a minor addition to the system we have already had on all desktop OSes for decades now.
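A rough sketch of such an intent registry, reduced to its core (the app names and MIME types below are hypothetical):

```python
# Minimal intent-style registry: apps declare which MIME types they
# handle, and the system answers queries about who can open what.
registry = {}

def declare_handler(app, mime_type):
    """What an app's manifest would state at install time."""
    registry.setdefault(mime_type, []).append(app)

def query_handlers(mime_type):
    """What the share sheet asks: which installed apps take this data?"""
    return registry.get(mime_type, [])

# Hypothetical installed apps registering their capabilities:
declare_handler("MusicPlayer", "audio/mpeg")
declare_handler("FileManager", "audio/mpeg")
declare_handler("PdfViewer",   "application/pdf")
```

Note how close this is to the desktop's existing file-association tables; the mobile version mostly adds the query step and the share-sheet UI on top.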
And again: that's a system which already works really well when files are first-class concepts which everything else builds on.
So it's all back to files. If you want to empower the user, you must have files.
If you have a contacts program, are you going to make each contact a file? What about performance or bulk editing? From that program's perspective, the ideal is probably to have a single file (SQLite DB for example).
But now you don't have granular sharing mechanisms except through a clumsy export which requires you to give a name to the thing and put it somewhere and open it in the other program.
Meanwhile, I hit the share sheet on my phone for the contact. Some apps know about this form of data and can ingest it. Others fall back to text. It's the clipboard model, not the files model.
Files are _fine_, and it lets you do stuff like reverse engineer the format and do cool stuff. But it's clunky as hell when you have something relatively ephemeral.
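For what it's worth, the single-file design described above might be sketched like this (the schema and the vCard-ish export are simplified assumptions, not any real contacts app):

```python
import sqlite3

def make_db():
    """One database file holds all contacts: fast queries, easy bulk edits."""
    db = sqlite3.connect(":memory:")  # on disk this would be e.g. contacts.db
    db.execute(
        "CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT, phone TEXT)"
    )
    return db

def export_vcard(db, contact_id):
    """The 'clumsy export' step: turn one row into a shareable blob."""
    name, phone = db.execute(
        "SELECT name, phone FROM contacts WHERE id = ?", (contact_id,)
    ).fetchone()
    return f"BEGIN:VCARD\nFN:{name}\nTEL:{phone}\nEND:VCARD"

db = make_db()
db.execute("INSERT INTO contacts (name, phone) VALUES (?, ?)",
           ("Ada", "555-0100"))
```

The share-sheet model essentially automates that `export_vcard` step: the app produces the blob on demand, and the receiving app ingests it without anyone naming a file.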
Many (Windows) apps support OLE/COM-based objects which can be copied, mixed and processed in between applications. This allows the clipboard to hold rich objects and not just text-based contents. Things like tables, images, rich text, contacts... and even files, or folders of files!
This allows for a much richer (and more empowering!) way to share data than is currently possible on popular mobile platforms.
This was already implemented back in Windows 95 or something. It's really old tech. Not sure how well this concept is implemented (or at all) on other desktop operating systems though, so it might not be a "universal" desktop solution for everyone.
That said, it can clearly be done better than on mobile, because on Windows it already has been for two and a half decades.
>Extremely easy sharing of data between apps.
Hah, no. Try sharing large batches of files on iOS. With plenty of apps there's no way to do this (and often, when you can, the only way is with the Files app).
I remember when video games came with elaborate manuals; this discussion reminds me of that and how it stopped happening (and just now, the smell of opening the box for a new game: I don't think I'll ever experience that again).
It's the assumption that people won't learn that's a problem. Minimizing unnecessary complexity is a good thing, but removing capabilities for sake of further UI simplification is taking things overboard.
(I'm forming a new hypothesis that tries to explain why this happens: it's because SaaS products are trying to turn a workflow into a service. So anything that deviates from their perfect workflow, including any flexibility, integration points, or general ability for self-help, is ruthlessly pruned. The users must follow the prescribed workflow.)
But considering the "natural selection" that happens here, it may be the way that is because only technical people care about this kind of thing… Idk…
This is what I've been telling non-technical friends for years now: as they spend more and more time with computers and the internet, the investment in learning what's under the hood and having more efficient interactions with better tools becomes more and more worthwhile. You can't say anymore that you don't fancy or care about IT when you spend hours each day on a computer.
Well, the way of progress has always been simplifying operations.
Do you know how to fix your car and do you make your own clothes, cheese and bread, in today's "bread and cheese eating", clothes wearing, car driving world?