We already have so many different desktop "OSes". There's hardly a need for a _revolution_; strong stability and support are needed, but by the looks of it there's just no paying market for that.
The four biggest desktop environments are made by mega corporations (Apple, Microsoft) and by big open-source collectives (GNOME, KDE). But is that it?
No, there's also a bunch of tiling window managers with their own desktop philosophies, and there are entire terminal drawing frameworks with full GUI support.
The desktop is awesome. If anything, mobile is the one that needs to catch up. iOS and Android are awful, awful environments outside of casual use. Just try writing up a document on your phone!
Just imagine what could be if they were on the level of _desktops_!
Real question: do we continue to dumb down software for exclusively casual usage by house-moms, or do we try to move our society to be more tech savvy, as we are clearly moving toward a more tech-reliant world? Why shouldn't everyone know how to code?
To paraphrase Mr. Engelbart: it's a failed tool if you use it exactly the same way the day you bought it and a year later.
> Real question: do we continue to dumb down software for exclusively casual usage by house-moms, or do we try to move our society to be more tech savvy, as we are clearly moving toward a more tech-reliant world? Why shouldn't everyone know how to code?
Why shouldn't everyone churn their own butter? Why shouldn't everyone sew their own clothes? Why shouldn't everyone build their own homes? Should we continue to dumb down feeding, clothing, and sheltering ourselves for the exclusively casual usage of <insert offensive stereotype>?
There are only so many hours in the day and time in our lives; why hide the benefits of technology behind arbitrary gatekeeping?
The problem is that we usually discuss things in extremes.
We don't need to know how to sew our own clothes, but knowing how to mend them is useful. Do we really want to live in a world where we toss out a shirt simply because a button falls off?
We don't need to know how to build a home, but knowing how to fix simple problems is useful. Do we really want to live in a world where we have to call in an electrician every time we trigger a circuit breaker?
We don't need to know how to churn butter, but knowing how to cook is useful. Do we really want to live in a world where we depend upon someone else to decide what goes into every meal we eat?
Yes, computers are there to do stuff for us and to save time. We should be exploiting that. On the other hand, we should not be reliant upon it to the point where it interferes with control over our own destinies or the creative process.
It's not extremes. Modern society is designed so that we don't have to do any of those things (in extreme or even in between), we just throw money (that we generated doing one specific job) at our problems.
It's a strong point. Personally, I find it both hard to argue against and at the same time extremely dehumanizing.
Related to this is the phenomenon of turning everything into a service. Why would you own things and accept responsibility for maintaining them, if you could just throw money at the service provider and have the thing be present when it's needed? Of course the thing will be extremely limited in what you can do with it, subject to Terms & Conditions, but why would you want to do anything non-standard with stuff? There's always another service you can throw money at to solve the same problem.
What's the end-game here? That we specialize into sub-species of humans, forever stuck in one role, with zero autonomy? No longer building wealth, we'll only be allocating the flow of money - from what Society gives us in reward for our work, straight to Services of said Society? Will we become specialized cells of the meta-multicellular organism Society becomes?
I can see how we're on the path towards that reality, and I absolutely hate it.
No, the ideal end game is that we specialize into whatever role we want, with full autonomy, without having to worry about things we don't want to worry about. I don't want to have to know how to build a house or do construction work. Once upon a time, I would have had to know how to make a shelter. Now, because that's become a service, I can focus my energy on things I enjoy. But if I wanted to, I could focus my energy on construction. But I don't want to - instead I specialize in fields I'm good at and build a competitive advantage to generate wealth. We as a species are wealthier than ever (well, maybe we were wealthier a few months ago) and have accomplished more and more because of specialization. We were stuck in the role of "generalist survivor" for a couple million years, never really advancing, until specialization happened.
Correction: you _have_ to specialize in the fields that you are good at, not the ones you want to be good at, because you can't compete in the other fields.
A couple of million years without advancing is hyperbole in the extreme; the specialization you are touting has only existed for 100-150 years or so (see Foucault, the Frankfurt School, etc.).
Specialization and division of labor are literally defining aspects of civilization. Did you forget a couple of zeros? The Industrial Revolution alone started around 1760, which is 260 years ago, and evidence for the existence of civilization extends quite a bit further back than that...
No, I don't agree with that at all — the idea of extreme specialization for greater prosperity is from the foundations of Capitalism. /The Wealth of Nations/ was only published in 1776. The Industrial Revolution certainly introduced part of the idea of the modern workforce; see /Discipline and Punish/ [1] for a good overview of the evolution of attitudes, but extreme specialization of /worker skills/ was an alien idea. The existence of cottage industries prior to the Industrial Revolution is illustrative of this fact.
/Collaboration/ has been a hallmark of civilization, but it is revisionist to say that we have always specialized to this extreme or seen it as a necessary goal. For example, the blacksmith of feudal society didn't only make spoons, nor did the farmer grow a single crop. If we want to critique previous civilizations, we also need to be wary of the fact that such systems were determined primarily through morality, such as in /Plato's Republic/ or /Confucianism/, rather than by any ideal of prosperity. The farmer was a farmer because that was his/her place, not because they were better or worse at it.
Even if I'm being charitable and equate division of labour with specialization, which I think is a /huge/ leap, it does not counter the original point: there is little autonomy in what you choose to do for a living in the logical conclusion of a system where you must specialize to compete.
A blacksmith is a specialized trade. You can't make up a new definition for an adjective and then use the nonstandard definition to make your point. Division of labor is synonymous with specialization. If you don't like it, make up a new term for what you want to convey.
Leaving aside whether a blacksmith represents a specialist trade (in my post I specifically alluded to a /more/ specialized blacksmith who /only/ made spoons, as an illustration of extreme specialization), I believe that my point still stands. I challenge your three assertions:
1) Specialization leads to more autonomy
2) Extreme specialization is as "old as civilization"
3) Extreme specialization leading directly to prosperity as an idea is older than the 18th century
On 1) I don't see any further arguments on your side, so I assume that you have no qualms with such a correction. On 2) I believe that I've adequately addressed your concerns — I acknowledge /Plato/ specifically in my reply, as shown in your link, and my critique of "Ancient theories" is covered in a previous post. The last point overlaps with the 2nd point, and I have found no criticisms to the contrary in your answers.
I'm not interested in semantics and pedantry over terms — we could debate all day, and I could argue that the very term "Division of Labour" originated in Adam Smith's work [1] and therefore isn't the same as specialization. Such a debate would be neither useful nor productive.
I mean, that's already true. The kinds of skills that people talk about in these threads are typically very domestic. Even if you had all of them, you'd still probably have only a few skills that can bring in money, if you're lucky.
Exactly. That's why it makes a lot of sense to study/learn things outside your area of expertise, and/or if you want to lean on the more extreme side, study/learn basic survival skills (that everyone used to know before the agricultural revolution).
I don't think this is a resolvable conflict in general; one person's division of labor is someone else's helplessness.
I think it's crazy that some people don't cook. Like, every house and apartment comes with a kitchen, what do you mean you don't use it? "It's so much cheaper," I say, ignoring the fact that I'm a cooking and baking enthusiast, and so I'm not counting that if I valued my time at any reasonable rate it really isn't.
It's the same thing in the woodworking community -- "you'll save so much money, anything you see in the store I can make half as good, for twice the price!"
> Like every house and apartment comes with a kitchen, what do you mean you don't use it?
Traditionally in urban areas most people actually purchased food instead of making it, because most people did not have kitchens.
You can kind of still see that today; many old New York tenements have been converted into apartments with "kitchens", but really this usually describes something about the size of an airplane bathroom with a stove, a fridge and 1 counter.
Is giving the people the freedom to do what they want and not focus on the pedantry of things they're not interested in "celebrating helplessness", or is it getting work for the sake of work out of the way?
50% of the population of the US and 95% of the world would be unhealthy or poor if they all adopted that philosophy about food. Sounds like your bubble, not modern society.
A lot of it comes down to what you mean by coding and what you mean by sewing (as an example).
There is much more to sewing clothes than threading a needle and guiding it through a button hole, at least if you want to make something that will fit and will last. The complexity of the product also plays a strong role.
Much the same can be said of coding. Being able to issue a command in a shell or compose a function in a spreadsheet is probably the closest analog to sewing on a button, but how many people can even do those things?
We live in a world where our phones' calculators are typically as powerful as a four-function calculator from decades ago, perhaps with a subset of the functions found on a scientific calculator. How do we expand our minds beyond that limited scope if vendors are afraid of creating software that allows us to compose anything more complex? With the status quo, we have to seek out options, and those options are mostly targeted at professionals.
Similar things can be said for other domains. While writing of coding, I was actually thinking of graphic design and word processing and databases and the many other domains that have been oversimplified by modern consumer applications. For the most part, their functionality has been simplified to the point where you can perform a very narrow range of tasks and have very little scope for imagination. For example: the database that backs your address book cannot be adapted to catalogue your books, or the online word processor that is fine for writing reports is poorly suited for preparing a book for publication. Sure, there are professional alternatives out there. On the other hand, it seems as though people had a lot more flexibility with the software of the 1990s than with that of the 2010s.
I didn't necessarily mean "learn to code" in such an explicit manner with my original comment. More like learn technology itself and some strong basics of how everything works, so people would be better equipped to grok this new world we're living in.
To get a driver's license you need some basic knowledge of car internals. I'd argue that computers are infinitely more important in our society than cars, yet the majority of people have absolutely no idea how computers work and are expected not only to live in this world but to ace it too.
I'm a software engineer and even I struggle to understand how some of the applications I use work. This month I had to figure out why my laptop's monitor brightness keys randomly stop doing anything and it led into a rabbit hole of systemd, udev, ACPI, the intel xorg driver and other unpleasant things. Next month I might run into a sound issue and I'll have to dig deep into completely different areas like pulseaudio and ALSA. And all this knowledge won't do me any good on Windows or MacOS.
The car is a relatively simple machine compared to a desktop computer, and unlike with computers, most basic knowledge applies to all cars regardless of brand and model. Our computers are patched together with gaffer tape; they're not some timeless universal design the way the clutch in a car is.
In fact, in a way I become worse at using them because I know more about them - think about "just turn it off and on again" vs wanting to debug it and understand the root cause of whatever issue I'm having.
The problem is that we live in a world and society where you're expected to know a lot about a lot. And we just keep adding to the pile we need to learn. You need to know how to sew or mend your clothes, you need to know how to tinker with your electrical system or plumbing, with your car, with your electronics, know your stuff around a kitchen. And the list can really go on and on. Now you need to know how to code.
The computer equivalent of what most people know how to do around the house is clearing the browser cache, restarting a service, running something at startup, and other troubleshooting steps like this. Stuff that you learn in under a day just like you would when learning basic clothes mending, replacing a faucet, changing a tire or your oil, or cooking a meal.
Any reasonable definition of "coding" is creating something. Like building your own electronic circuit, mechanical part, simple clothes, etc. This is beyond what a normal person is expected to know about their stuff as general knowledge in life. Everyone should just understand the principles of the tools they're using and basic "under the hood" stuff to assist with basic troubleshooting.
In reality, in the parts of the world with higher standards of living (where people can afford stuff), this piling up of expectations just leads people to give up and pay for services rather than learn all that. And for good reason: modern society has this bad habit of taking every shred of free time and complicating your life, with unfortunate consequences.
> Any reasonable definition of "coding" is creating something.
Not necessarily creating something. The OP really did shoot themself in the foot by using the word "coding", where what they most likely meant was doing simple alterations to computing systems that require understanding some basic control flow structures - conditionals and loops. Things like, "I seem to be doing the same repetitive sequence of steps 100x a day, let me automate this somewhat". Think Tasker and bash, not JavaScript and C++.
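To make that concrete, here is a minimal sketch of the kind of "automate the repetitive steps" alteration meant here: just a loop and a conditional. It's in Python purely for illustration (Tasker or bash would do the same job), and the folder layout and extension map are made up:

```python
# Hypothetical example: sort downloaded files into subfolders by extension.
# A loop, a conditional, nothing more.
from pathlib import Path

downloads = Path.home() / "Downloads"   # assumed location
targets = {".pdf": "documents", ".jpg": "pictures", ".png": "pictures"}

for f in downloads.iterdir():
    if f.is_file() and f.suffix.lower() in targets:
        dest = downloads / targets[f.suffix.lower()]
        dest.mkdir(exist_ok=True)        # create the subfolder if needed
        f.rename(dest / f.name)          # move the file into it
```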
> this piling up of expectations just leads people to give up and pay for services rather than learn all that
A small bit of knowledge can save you a lot of money in services. I can get why the wealthy wouldn't care, but most people aren't wealthy. Also, I feel there's more pressure from the sales & marketing departments of services than from modern society's demands on free time.
That's at least true in Sweden. You don't have to know much, but you at least have to be able to check your tire pressure, pop the hood and check the oil, and know which hole to pour which liquid into.
Many schools used to, as well as shop class, etc., albeit pretty heavily split along traditional gender roles. Personally, in high school at least, I think a good case could be made for carving out a bit of time for practical life skills (also personal finance, etc.).
Statistically speaking, most people who make a living coding today do it in JS; if we were going to extremes we would be talking about soldering your own board, assembling your own language, rolling your own OS, the shit Bell Labs used to do. Coding is by all means an adequate analog to sewing, my dude/dudette.
I think the idea that programming is a back-room industrial or maintenance operation performed by specialists misses the point that programming is also another way to use your computer. The most fluid and limitless way, in fact.
Programming is not just like being a car mechanic or factory floor engineer; it's like being an expert driver at the same time.
When I use my desktop, it often occurs to me that I could write something to speed up a task, if only the application was accessible in a similar way to Emacs, or had Amiga-style ARexx ports that I could talk to in a script. From this perspective, programming is the most fine-grained GUI affordance within a computer system. By making it accessible along a continuum with simpler GUI tools, we greatly increase the ability of the user to do magic, or to learn to do it.
I would really like to see the development of an ergonomic expert-oriented desktop that lets me use my programming skills in a high-level and bureaucracy-free manner, to augment my use of an attractive and well-integrated GUI. There's no reason why such features should impinge on ordinary non-programmer use.
> I think the idea that programming is a back-room industrial or maintenance operation performed by specialists misses the point that programming is also another way to use your computer.
Arguably, programming is what computers are for. If you're not programming it in some way, then it is more like an appliance that just happens to contain a computer. Personal computers of the 80s booted directly into a programming environment.
Thinking about it some more, I'd like a desktop OS that not only provides extensive scripting, but also exposes system features to the language in an accessible manner. So I can easily write a GUI for my script in a few lines, or draw some crap on screen, out of the box without the bureaucracy or FFI business you'd need to do these things in Python or some other high level language.
In other words, I'd like there to be a concept of OS-level "system features" available to high-level languages through something a little more friendly and robust than interfacing to a C library. I don't know how I'd implement it :) but it's how I'd like things to be.
Macs have AppleScript, and while application support can be hit-or-miss, I’ve got several little things I’ve done with it that have made my life a lot easier. There’s a couple features I wanted in Illustrator that I’ve been able to work around with scripts, I have a script hanging around to help with a tedious part of turning a big pile of individual page files into a comic, I have a hotkey that rotates my display and my Wacom tablet with one keystroke when I want to work with my monitor in portrait orientation for a while, and a few other things I’m not thinking of. Some of these I use once every few years, some I use multiple times a day.
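Not the parent's actual scripts, but a rough illustration of what this kind of AppleScript glue can look like when driven from Python (say, bound to a hotkey) through the stock osascript tool on macOS. The specific snippet, asking which app is frontmost and posting a notification about it, is just an assumed example; newer macOS versions may prompt for automation permission the first time it runs:

```python
import subprocess

def run_applescript(script: str) -> str:
    """Run an AppleScript snippet via the system's osascript tool and return its output."""
    result = subprocess.run(["osascript", "-e", script],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Ask System Events which app is currently frontmost...
frontmost = run_applescript(
    'tell application "System Events" to get name of first process whose frontmost is true')
# ...then post a notification mentioning it.
run_applescript(f'display notification "Frontmost app: {frontmost}" with title "Hotkey script"')
```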
Note: AppleScript has been mostly abandoned for some years now, and most apps don't support it in very useful ways in my experience. It's also slow. I imagine Linux would allow more scriptability, but I only ever use Linux for my servers.
While the parent is a bit extreme in his view that "everyone should know how to code" (which is rubbish to be honest; everyone should know how to go about general problem-solving, but not how to code), the underlying problem is a different one and has nothing to do with "gatekeeping": it's that "one-size-fits-all" doesn't work.
For some reason, UIs are now designed for an "average user", but as the air force found out long ago, average users don't exist in the real world [1]; it's an entirely made-up concept.
The solution to this is customizability. Create an OS that's easy to use in the default configuration but let me tweak it to my own needs just like I can adjust the seat in my car.
> everyone should know how to go about general problem-solving, but not how to code
General problem-solving ability seems like a synonym for fluid intelligence, which is not very malleable. Learning to code, on the other hand, is possible with effort. I learned to program in the 4th grade, with videos and books I myself bought, without having internet access. (I could use dial-up if I really needed it, but it was expensive and slow, so I used it very sparingly; I don't remember how much, but around 5 hours per month seems an upper bound.) I had no support whatsoever from anyone (except that my dad paid for the books and videos), my mom only let me use my computer for like 3 hours a week (shared between gaming and doing anything else), my computer was old and slow, ... . Now, I sure have a high IQ, but I doubt that we couldn't have 20% of the urban population reach some basic computer literacy when they are 24 years old. Heck, calculus is known by more people than coding. Most non-poor people waste 16+ years of their life in K12 and undergrad, and learn very few useful skills. Imagine what would happen if we taught people a curriculum that did something other than pure signalling.
> Why shouldn't everyone churn their own butter? Why shouldn't everyone sew their own clothes?
Selling pre-made butter or clothes doesn't prevent someone from making their own if the pre-made one doesn’t fit their needs.
In technology, making your own is often outright impossible due to proprietary APIs.
In a lot of cases the inefficiency of the official implementation is a feature for the developer and they definitely do not want people to build more efficient clients (examples: no ads/irrelevant content, defaulting to chronological feed instead of algorithmic, etc) and use technical (and sometimes legal, like abusing copyright law) workarounds to make the process as difficult as possible.
To me this reads like the complete opposite. Hiding the benefits of programming from the "unwashed masses" because they are not going to understand it anyways is gatekeeping.
The aim should be to make programming/scripting/automation easier and more accessible, not to hide it away to prevent people from ever using it.
> Why shouldn't everyone churn their own butter? Why shouldn't everyone sew their own clothes?
Why learn history in school? Why learn math? Why learn about philosophy? Should we stop teaching that in school because the <stereotype> will never use it anyways? Or is the opposite true and not teaching that would be the actual gatekeeping?
TLDR: You suggest that this would hide <useful stuff> behind programming and thus be gatekeeping. I think you are hiding useful stuff==programming and are thus the gatekeeper.
> Why shouldn't everyone churn their own butter? Why shouldn't everyone sew their own clothes? Why shouldn't everyone build their own homes? Should we continue to dumb down feeding, clothing, and sheltering ourselves for the exclusively casual usage of <insert offensive stereotype>?
Why shouldn't everyone read? Why shouldn't everyone write? Why shouldn't everyone do the basic math?
Being able to use computers efficiently is knowledge, not chores. To be able to use them as a bicycle for the mind goes way beyond pressing colored buttons according to emotion.
Being able to use computers and being able to program are different things. Not exactly at the level of being able to write versus being able to make your own pencil, but not that far either.
Everyone probably shouldn't bother to churn their own butter, but everyone should be able to cook and plan a meal. Everyone probably isn't going to learn to code, but they ought to be able to install or reinstall an OS on their computer, replace a hard drive, and perform basic troubleshooting steps.
"Do we continue to dumb down software to exclusive casual usage for house-moms or we try to move our society to be more techn savy as we are clearly moving to a more tech relying world. Why shouldn't everyone know how to code?"
To quote a commonly-used Web meme: "Why not both?"
In my opinion, a software tool shouldn't have only one interface. What if there were many possible interfaces available, from very simple interfaces with reasonable defaults for casual users, to more option-rich interfaces for power users, to an API for programmers? What if we could take advantage of today's AI technology to automatically construct GUIs that are tailored to a user's experience level? What if users could customize the GUIs to make them fit their needs better?
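As a toy sketch of that layering idea (every name here, such as ResizeOptions, resize_image and simple_resize, is hypothetical rather than any real tool's API), the same capability can sit behind a one-knob interface with sensible defaults for casual users and a full options object for power users and programmers:

```python
# Hypothetical sketch: one underlying capability, several interfaces layered on top.
from dataclasses import dataclass

@dataclass
class ResizeOptions:
    width: int
    height: int
    keep_aspect: bool = True
    resample: str = "bicubic"

def resize_image(path: str, opts: ResizeOptions) -> str:
    """Programmer-facing API: full control via an options object."""
    # ... the real work would happen here ...
    return f"{path} -> {opts.width}x{opts.height} ({opts.resample})"

def simple_resize(path: str, size: str = "medium") -> str:
    """Casual-user interface: one knob, sensible defaults."""
    presets = {"small": 640, "medium": 1280, "large": 1920}
    w = presets[size]
    return resize_image(path, ResizeOptions(width=w, height=w))

# A power-user interface (CLI flags, a denser GUI) would sit in between,
# exposing more of ResizeOptions without requiring code.
print(simple_resize("photo.jpg"))
```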
What if the system supported a variety of languages, not only common languages such as Python that many programmers are familiar with, but also beginner-friendly languages? Users are willing to program provided it's not too difficult: AppleScript from the 1990s was a step in the right direction, and Excel's macro language is probably the most widely-used programming language in the world. With today's AI/NLP technology, we could go further by developing ways for users to describe repetitive, routine tasks using natural language.
I think there's still a lot of room for innovation on the desktop. But you highlight a very big problem: where is the market? Who is going to pay for this innovation? Outside of open-source projects, the major commercial desktop environments are platforms controlled by multi-billion dollar corporations. Building a new desktop environment that is capable of competing against the commercial giants will take a lot of time and capital. The last company to give this a try was Be, Inc. in the mid-1990s, and they had a hard time competing against Microsoft's OEM strategy. I wrote more about this at http://mmcthrow-musings.blogspot.com/2020/10/where-did-perso....
I feel classic Mac OS did one thing quite well: it was rather easy to manage the system.
System functionality could be expanded through various means, but most often devs used Extensions. And if a software issue arose, it was easy to disable all Extensions by holding the SHIFT key on start-up. Also, on start-up you'd visually see which Extensions were being loaded, so you would always be aware of what you had installed.
In current macOS it's very hard to keep track of what I've installed. I install a lot of stuff using tools like Homebrew. Some software might install some system level hooks, etc... From my perspective it's kinda hard to keep the system "clean". And it's probably a good idea to do a clean install of my computers maybe once every year or so, since I might have installed stuff I don't really use anymore.
Also, there was the System Folder, and that directory contained the Extensions, Preferences, and Control Panels directories, etc. So you could also manage your System Folder at the file-system level. You could just delete an Extension manually from the Extensions directory in the System Folder to uninstall it. You didn't need any "uninstall" software most of the time.
A classic Mac OS-like environment with a few more modern features (maybe a WindowMaker-like UI, multi-user support and real multitasking) would be pretty neat.
I do sometimes wonder if the lack of preemptive multitasking and memory protection in classic Mac OS may have ironically led to better quality software. Users are forced to be less tolerant of bugs if a piece of buggy software can lock up your whole computer.
Greater difficulty / risk in development leads to fewer, higher quality applications, but breadth not depth wins the market, so we're stuck mourning dead systems with potential except for cases where depth results in a "killer" application.
> Just try writing up a document on your phone! Just imagine what could be if they were on the level of _desktops_!
It's very, very difficult to beat a keyboard. Tablets and even phones become night-and-day more usable if you plug in a keyboard, even ignoring everything else that still sucks about them.
If your typing speed on a keyboard is comparable to your typing speed on a phone, it sounds like you might have a lot to gain from learning touch typing, look it up. I was in your same situation not long ago
Mind that I mentioned it was specifically index-finger swiping. Maybe it wasn't actually faster than a physical keyboard, but it was certainly an order of magnitude faster than discrete thumb typing on a phone.
That's for sentences, but documents usually contain more than just sentences. Even documents have symbols and numbers, which can't take advantage of Swype tech.
I can't agree with the grandparent about type + swipe being anywhere near the speed of a keyboard, but in my experience, writing in other languages using swipe is just as fast as in English (at least for other languages that use the Latin alphabet). I frequently use swiping for bilingual English + Romanian (a language which tends to have much longer words than English) conversations, and it generally works very well, even bilingually. I would note that I'm using some Microsoft keyboard for Android, I forget its name, and it is explicitly configured for both languages.
Good that you mention Engelbart, because none of the mainstream desktop OSes are close to his ideas, or to what the Xerox workstations allowed for; ironically, Windows is probably the one closest to it.
GNU/Linux has the necessary tooling for it as well, but thanks to the fragmentation and some communities' hatred of GNOME/KDE, it will never happen.
This is what a modern desktop OS should look like,
Sure, first a short overview of how those OSes used to work and how one can map those ideas into Windows.
Mesa/Cedar also shares some ideas with the other workstation variants from Xerox PARC, namely Interlisp-D and Smalltalk, just based on a strongly typed language for systems programming, with reference counting and a cycle collector.
The language itself compiles to bytecode, because Xerox PARC machines used bytecode with microcoded CPUs, whose interpreter was loaded on boot. So in a sense it was still native somehow.
The full OS was written in Mesa/Cedar, and everything was kind of exposed to the developers.
The shell is more like a REPL, where you can access all that functionality, meaning the public functions/procedures from dynamically loaded modules, interact with text selection from any application window, or execute actions on a selected window. And as a REPL, it worked on structured data.
Basically similar to what PowerShell offers, with its structured data and the ability to call any COM/UWP, .NET or plain DLL library.
Then you could embed objects into other objects, and this is the basis of the compound document architecture, basically the genesis of OLE in Windows and COM (COM is just the basic feature set which OLE is built upon, although that's more OLE 2.0; OLE 1.0 was more complicated still).
The way Office works between applications and its inline editing of OLE documents can be traced to the Xerox PARC workstations, as Charles Simonyi, one of Bravo's creators, brought Bravo ideas into Word.
Since Windows Vista, most new APIs are actually a mix of .NET and COM (now UWP), which expose a similarly high-level set of OS APIs (bare-bones Win32 has hardly changed since the XP days).
Now, many of these concepts can also be found in GNOME and KDE, however due to the way distributions get put together, it is hard to really provide such integrated developer experience across the whole stack.
And while REPL like shells do exist for UNIX clones, their adoption is a tiny blip when compared against traditional UNIX shells.
I take this as a joke. There is nothing modern-looking about it. Geeky, yes. Not designed for a touch interface. Resembles Oberon, which I would not call modern either. Maybe we are not ready for it yet. Belongs in the future, then. (Or, more likely, in the past.)
A few things mobile OSes actually get right:
- Not thinking about files: I can open Notes/Drafts on my phone and get a textbox. I kinda get this with Joplin, barely.
- Real sandboxing, with a nice permission layer
- Extremely easy sharing of data between apps. Of course files are theoretically a great sharing mechanism, but the sharing mechanisms in mobile OSes are the logical conclusion of the clipboard
- URIs that go deep into other apps. Lets you easily say "go over here to see details" from a completely separate system
The fact that lots of stuff is webapps lets you get pretty far on the desktop too, but I think these metaphors are power-user features that the desktop could learn from.
The "no files" part of mobile OSes is the worst part for me personally. I constantly re-download the same PDFs, have to look forever for "that particular picture", etc., because even though you can technically access the file system on Android, it's not like the majority of apps support a file manager interface, and many apps just dump their files somewhere without explicit structure, so I basically never find what I need.
Also if you want to do anything beyond “what the devs already thought about” you either need deep understanding of the specific system/app or rounds of trial and error.
Case in point: tried sharing a vpn config with an Android user over Signal. They couldn’t do anything with the file, just yielding an error message saying it was unsupported. Sending the exact same file with a .pdf extension allowed them to download it and import it in their VPN app (only after downloading and installing a generic file manager app, though).
Every now and then I struggle with some file that I can’t figure out how to move between apps. Something as seemingly simple as downloading an mp3 file from a browser and importing it to a music player app is quite an ordeal on iOS.
Please note I didn't say "no files". I said "better clipboard". I like files as a thing to be exposed. But I think that for day-to-day work being able to move stuff around between apps without fidgeting around with files is very valuable.
I remember that some of these points were targeted by the Étoilé environment [1]. It was a desktop environment targeted for GNUstep and programmed in Objective-C that tried to rethink a few fundamental concepts of the Desktop Environments available at the time (~10 years ago). Among them, proper file versioning for everything, seamless app interoperability and data sharing, etc.
Sadly, the developers never managed to go beyond a few core libraries and a nice theme (the GUI used GNUstep under the hood). I followed the development with great interest till they stopped updating the site; I believe that the effort would have required many more developers. What a pity!
And I hate app-data not being files, not being portable. My data may be stuck inside a SaaS and only accessible through a closed-source app, and I have nothing to say about it.
> Real sandboxing, with a nice permission layer
Sandboxing apps is a good thing. Depending on your use-case, Snap/Flatpak or containers kinda solve this, but they are not the default way of running apps for now.
What mobile does wrong here though, is that it also sandboxes the user, not giving the user full access to his device, nor letting him let his apps get that access either.
This is user-hostile, on all current major mobile-platforms.
> Extremely easy sharing of data between apps
I would rephrase this as barely functional sharing, for only the limited subset of data the application has decided to implement sharing for, and only in the ways the application developers have anticipated for inter-app sharing.
On a desktop, I as the user, have the power to decide how I want to share data and invent new ways data can be shared and utilized.
> URIs that go deep into other apps
While that is certainly a neat feature, it's an app-centric feature. How do you know which kinds of apps I have installed? How do you know which of the apps within that niche I have installed?
And if you're not going to make it app-centric, you have to make it file/data-centric anyway... which brings us to Android. Android does this better than iOS by having an intent system which lets apps register the ability to handle files, URLs and subsets of those, and other apps can query which apps support which file/URL intents. So it's basically just a minor addition to the system we have already had on all desktop OSes for decades.
And again: that's a system which already works really well when files are first-class concepts which everything else builds on.
So it's all back to files. If you want to empower the user, you must have files.
Files are emphatically _not_ good first class primitives for rich sharing.
If you have a contacts program, are you going to make each contact a file? What about performance or bulk editing? From that program's perspective, the ideal is probably to have a single file (SQLite DB for example).
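A minimal sketch of that single-file approach, with a made-up schema and a toy vCard-ish export, just to illustrate the trade-off:

```python
# Sketch of the "one SQLite file instead of one file per contact" design.
# Table and column names are invented for illustration.
import sqlite3

db = sqlite3.connect("contacts.db")   # the single file the app would own
db.execute("""CREATE TABLE IF NOT EXISTS contacts (
                  id INTEGER PRIMARY KEY,
                  name TEXT NOT NULL,
                  email TEXT)""")
db.execute("INSERT INTO contacts (name, email) VALUES (?, ?)",
           ("Ada Lovelace", "ada@example.org"))
db.commit()

# Bulk edits and queries are cheap inside the one file...
db.execute("UPDATE contacts SET email = lower(email)")

# ...but handing a single contact to another program means inventing an export,
# e.g. serializing one row to vCard-ish text by hand.
row = db.execute("SELECT name, email FROM contacts LIMIT 1").fetchone()
print(f"BEGIN:VCARD\nFN:{row[0]}\nEMAIL:{row[1]}\nEND:VCARD")
```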
But now you don't have granular sharing mechanisms except through a clumsy export which requires you to give a name to the thing and put it somewhere and open it in the other program.
Meanwhile I hit the share sheet on my phone for the contact. Some apps know about this form of data and can ingest it. Others fall back to text. It's the clipboard model, not the files model.
Files are _fine_, and they let you do stuff like reverse engineer the format and do cool stuff. But they're clunky as hell when you have something relatively ephemeral.
> But now you don't have granular sharing mechanisms
Many (Windows) apps support OLE/COM-based objects which can be copied, mixed and processed in between applications. This allows the clipboard to hold rich objects and not just text-based contents. Things like tables, images, rich text, contacts... and even files, or folders of files!
This allows for a much richer (and more empowering!) way to share data than what's currently available on popular mobile platforms.
This was already implemented back in Windows 95 or something. It's really old tech. Not sure how well this concept is implemented (or at all) on other desktop operating systems though, so it might not be a "universal" desktop solution for everyone.
That said, it can clearly be done better than on mobile, because on Windows it has already been so for two and a half decades.
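To illustrate how ordinary this is on Windows: a single copy typically places several formats on the clipboard at once, and any app can enumerate them and take the richest one it understands. A rough sketch, assuming the pywin32 package is installed:

```python
# List every format currently on the Windows clipboard (text, RTF, bitmaps,
# file lists, application-specific OLE formats, ...). Requires pywin32.
import win32clipboard

win32clipboard.OpenClipboard()
try:
    fmt = win32clipboard.EnumClipboardFormats(0)
    while fmt:
        try:
            name = win32clipboard.GetClipboardFormatName(fmt)
        except Exception:
            name = "<standard format>"   # predefined formats have no custom name
        print(fmt, name)
        fmt = win32clipboard.EnumClipboardFormats(fmt)
finally:
    win32clipboard.CloseClipboard()
```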
I can open vim on my iphone/desktop and get a textbox. I don't see what storing things in files has to do with that.
>Extremely easy sharing of data between apps.
Hah, no. Try sharing large batches of files on iOS. With plenty of apps there's no way to do this (and often the only way to do it, if you can at all, is with the Files app).
What do you mean by "Extremely easy sharing of data between apps"? In my experience it's quite difficult to get your data out of one app and into another one.
We continue to dumb down interfaces because of the assumption that people won’t learn and the reality that building a complex usable interface is hard and teaching is almost never done.
I remember when video games came with elaborate manuals; this discussion reminds me of that and how it stopped happening (and just now I remembered the smell of opening the box for a new game; I don't think I'll ever experience that again).
And yet video games are prime evidence that users can learn just about any UI you throw at them, if they're even a little bit motivated to do it. The Web itself is the second piece of evidence: despite all the UX experts' love for "simplicity" and "intuitiveness", every single website looks entirely unlike any other website. Every UI is different. People manage.
It's the assumption that people won't learn that's a problem. Minimizing unnecessary complexity is a good thing, but removing capabilities for sake of further UI simplification is taking things overboard.
(I'm forming a new hypothesis that tries to explain why this happens: it's because SaaS products are trying to turn a workflow into a service. So anything that deviates from their perfect workflow, including any flexibility, integration points, or general ability for self-help, is ruthlessly pruned. The users must follow the prescribed workflow.)
I think we should have a third DE with the same level of UX for configuration and simple customization as GNOME and KDE, but focused on tiling WMs. Pick something like Awesome or Sway and create the whole ecosystem around it. Pick an OS that suits it, like Manjaro (look at the logo, it's a tiling WM for sure!), and offer the DE as an option during installation.
But considering the "natural selection" that happens here, it may be the way it is because only technical people care about this kind of thing… I don't know…
> or do we try to move our society to be more tech savvy, as we are clearly moving toward a more tech-reliant world
This is what I've been telling non-technical friends for years now: as they spend more and more time with computers and the internet, the investment to learn what's under the hood and to have more efficient interactions with better tools becomes more and more worth it. You can no longer say you don't fancy or care about IT when you spend hours each day on a computer.
> Real question: do we continue to dumb down software for exclusively casual usage by house-moms, or do we try to move our society to be more tech savvy, as we are clearly moving toward a more tech-reliant world? Why shouldn't everyone know how to code?
Well, the way of progress has always been simplifying operations.
Do you know how to fix your car and do you make your own clothes, cheese and bread, in today's "bread and cheese eating", clothes wearing, car driving world?