We have so many different desktop OSes already. There's hardly a need for a _revolution_; strong stability and support are needed, but by the looks of it there's just no paying market for that.
The 4 biggest desktop environments are made by mega-corporations (Apple, Microsoft) and by big open-source collectives (Gnome, KDE). But is that it?
No, there's also a bunch of tiling window managers with their own desktop philosophies, and there are entire terminal drawing frameworks with full GUI support.
Desktop is awesome. If anything mobile is the one needing to catch up. iOS and Android are awful, awful environments outside of casual usage. Just try writing up a document on your phone!
Just imagine what could be if they were on the level of _desktops_.
Real question: Do we continue to dumb down software to exclusively casual usage for house-moms, or do we try to move our society to be more tech savvy, since we are clearly moving to a more tech-reliant world? Why shouldn't everyone know how to code?
To paraphrase Mr. Engelbart: it's a failed tool if you use it exactly the same way the day you bought it and a year later.
> Real question: Do we continue to dumb down software to exclusively casual usage for house-moms, or do we try to move our society to be more tech savvy, since we are clearly moving to a more tech-reliant world? Why shouldn't everyone know how to code?
Why shouldn't everyone churn their own butter? Why shouldn't everyone sew their own clothes? Why shouldn't everyone build their own homes? Should we continue to dumb down feeding, clothing, and sheltering ourselves for the exclusively casual usage of <insert offensive stereotype>?
There are only so many hours in the day and time in our lives; why hide the benefits of technology behind arbitrary gatekeeping?
The problem is that we usually discuss things in extremes.
We don't need to know how to sew our own clothes, but knowing how to mend them is useful. Do we really want to live in a world where we toss out a shirt simply because a button falls off?
We don't need to know how to build a home, but knowing how to fix simple problems is useful. Do we really want to live in a world where we have to call in an electrician every time we trigger a circuit breaker?
We don't need to know how to churn butter, but knowing how to cook is useful. Do we really want to live in a world where we depend upon someone else to decide what goes into every meal we eat?
Yes, computers are there to do stuff for us and to save time. We should be exploiting that. On the other hand, we should not be reliant upon it to the point where it interferes with control over our own destinies or the creative process.
It's not about extremes. Modern society is designed so that we don't have to do any of those things (in the extreme or even in between); we just throw money (that we generated doing one specific job) at our problems.
It's a strong point. Personally, I find it both hard to argue against and at the same time extremely dehumanizing.
Related to this is the phenomenon of turning everything into service. Why would you own things and accept responsibility for maintaining them, if you could just throw money at the service provider and have the thing be present when it's needed? Of course the thing will be extremely limited in what you can do with it, subject to Terms&Conditions, but why would you want to do anything non-standard with stuff? There's always another service you can throw money at to solve the same problem.
What's the end-game here? That we specialize into sub-species of humans, forever stuck in one role, with zero autonomy? No longer building wealth, we'll only be allocating the flow of money - from what Society gives us in reward for our work, straight to Services of said Society? Will we become specialized cells of the meta-multicellular organism Society becomes?
I can see how we're on the path towards that reality, and I absolutely hate it.
No, the ideal end game is that we specialize into whatever role we want, with full autonomy, without having to worry about things we don't want to worry about. I don't want to have to know how to build a house or do construction work. Once upon a time, I would have had to know how to make a shelter. Now, because that's become a service, I can focus my energy on things I enjoy. But if I wanted to, I could focus my energy on construction. But I don't want to - instead I specialize in fields I'm good at and build a competitive advantage to generate wealth. We as a species are wealthier than ever (well, maybe we were wealthier a few months ago) and have accomplished more and more because of specialization. We were stuck in the role of "generalist survivor" for a couple million years, never really advancing, until specialization happened.
Correction: you _have_ to specialize in the fields that you are good at, not the ones you want to be good at, because you can't compete in the other fields.
A couple of million years without advancing is hyperbole to the extreme; the kind of specialization you are touting has only existed for 100-150 years or so (see Foucault, the Frankfurt School, etc.).
Specialization and division of labor is literally a defining aspect of civilization. Did you forget a couple zeros? The industrial revolution alone was starting around 1760, which is 260 years ago, and evidence for the existence of civilization extends quite a bit further back than that...
No; I don’t agree with that at all — the idea of extreme specialization for greater prosperity is from the foundations of Capitalism. /The Wealth Of Nations/ was only published in 1776. The Industrial revolution certainly introduced part of the idea of the modern workforce; see /Discipline and Punish/ [1] for a good overview of the evolution of attitudes, but extreme specialization of /worker skills/ was an alien idea. The existence of cottage industries prior to the Industrial revolution is illustrative of this fact.
/Collaboration/ has been a hallmark of civilization, but it is revisionist to say that we have always specialized to this extreme or seen it as a necessary goal. For example, the blacksmith of feudal society didn't only make spoons, nor the farmer a single crop. If we want to critique previous civilizations, we also need to be wary of the fact that such systems were determined primarily through morality, such as /Plato's Republic/ or /Confucianism/, rather than any ideal of prosperity. The farmer was a farmer because that was his/her place, not because they were better or worse at it.
Even if I'm being charitable and equate division of labour with specialization (which I think is a /huge/ leap), it does not counter the original point, which is that there is little autonomy in what you choose to do for a living in the logical conclusion of a system where you must specialize to compete.
A blacksmith is a specialized trade. You can't make up a new definition for a word and then use the nonstandard definition to make your point. Division of labor is synonymous with specialization. If you don't like it, make up a new term for what you want to convey.
Leaving aside whether a blacksmith represents a specialist trade (in my post I specifically alluded to a /more/ specialist blacksmith which /only/ made spoons, as an illustration of extreme specialization), I believe that my point still stands — namely, I challenge your three assertions:
1) Specialization leads to more autonomy
2) Extreme specialization is as "old as civilization"
3) Extreme specialization leading directly to prosperity as an idea is older than the 18th century
On 1) I don’t see any further arguments on your side; so I assume that you have no qualms with such a correction. On 2) I believe that I’ve adequately addressed your concerns — I acknowledge /Plato/ specifically in my reply as shown in your link and my critique on "Ancient theories" is covered in a previous post. The last point overlaps with the 2nd point and I have found no criticisms to the contrary in your answers.
On semantics and the pedantry of terms I'm uninterested — we could debate all day, and I could argue that the very term "Division of Labour" originated in Adam Smith's work [1] and therefore isn't the same as specialization. Such a debate would be neither useful nor productive.
I mean, that's already true. The kinds of skills that people talk about in these threads are typically very domestic. Even if you had all of them, you'd still probably have only a few skills that can bring in money, if you're lucky.
Exactly. That's why it makes a lot of sense to study/learn things outside your area of expertise, and/or if you want to lean on the more extreme side, study/learn basic survival skills (that everyone used to know before the agricultural revolution).
I don't think this is a resolvable conflict in general; one person's division of labor is someone else's helplessness.
I think it's crazy that some people don't cook. Like every house and apartment comes with a kitchen, what do you mean you don't use it? "It's so much cheaper," I say, ignoring the fact that I'm a cooking and baking enthusiast, so I'm not counting my time; if I valued it at any reasonable rate, it's really not.
It's the same thing in the woodworking community -- "you'll save so much money, anything you see in the store I can make half as good, for twice the price!"
> Like every house and apartment comes with a kitchen, what do you mean you don't use it?
Traditionally in urban areas most people actually purchased food instead of making it, because most people did not have kitchens.
You can kind of still see that today; many old New York tenements have been converted into apartments with "kitchens", but really this usually describes something about the size of an airplane bathroom with a stove, a fridge and 1 counter.
Is giving the people the freedom to do what they want and not focus on the pedantry of things they're not interested in "celebrating helplessness", or is it getting work for the sake of work out of the way?
50% of the population of the US and 95% of the world would be unhealthy or poor if they all adopted that philosophy about food. Sounds like your bubble, not modern society.
A lot of it comes down to what you mean by coding and what you mean by sewing (as an example).
There is much more to sewing clothes than threading a needle and guiding it through a button hole, at least if you want to make something that will fit and will last. The complexity of the product also plays a strong role.
Much the same can be said of coding. Being able to issue a command in a shell or compose a function in a spreadsheet is probably the closest analog to sewing on a button, but how many people can even do those things?
We live in a world where our phones' calculators are typically as powerful as a four-function calculator from decades ago, perhaps with a subset of the functions found on a scientific calculator. How do we expand our minds beyond that limited scope if vendors are afraid of creating software that allows us to compose anything more complex? With the status quo, we have to seek out options, and those options are mostly targeted at professionals.
Similar things can be said for other domains. While writing of coding, I was actually thinking of graphic design and word processing and databases and the many other domains that have been over-simplified by modern consumer applications. For the most part, their functionality has been simplified to the point where you can perform a very narrow range of tasks with very little scope for the imagination. For example: the database that backs your address book cannot be adapted to catalogue your books, and the online word processor that is fine for writing reports is poorly suited for preparing a book for publication. Sure, there are professional alternatives out there. On the other hand, it seems as though people had a lot more flexibility with the software of the 1990s than the 2010s.
I didn't necessarily mean "learn to code" in such explicit manner with my original comment. More like learn technology itself and some strong basics of how everything works so people would be more equipped to grok this new world we're living in.
To get a driver's license you need some basic knowledge of car internals. I'd argue that computers are infinitely more important in our society than a car, yet the majority of people have absolutely no idea how computers work and are expected not only to live in this world but to ace it too.
I'm a software engineer and even I struggle to understand how some of the applications I use work. This month I had to figure out why my laptop's monitor brightness keys randomly stop doing anything and it led into a rabbit hole of systemd, udev, ACPI, the intel xorg driver and other unpleasant things. Next month I might run into a sound issue and I'll have to dig deep into completely different areas like pulseaudio and ALSA. And all this knowledge won't do me any good on Windows or MacOS.
The car is a relatively simple machine compared to a desktop computer, and unlike with computers, most basic knowledge applies to all cars regardless of brand and model. Our computers are patched together with gaffa tape; they're not some timeless universal design the way the clutch in a car is.
In fact, in a way I become worse at using them because I know more about them - think about "just turn it off and on again" vs wanting to debug it and understand the root cause of whatever issue I'm having.
The problem is that we live in a world and society where you're expected to know a lot about a lot. And we just keep adding to the pile we need to learn. You need to know how to sew or mend your clothes, you need to know how to tinker with your electrical system or plumbing, with your car, with your electronics, know your stuff around a kitchen. And the list can really go on and on. Now you need to know how to code.
The computer equivalent of what most people know how to do around the house is clearing the browser cache, restarting a service, running something at startup, and other troubleshooting steps like this. Stuff that you learn in under a day just like you would when learning basic clothes mending, replacing a faucet, changing a tire or your oil, or cooking a meal.
Any reasonable definition of "coding" is creating something. Like building your own electronic circuit, mechanical part, simple clothes, etc. This is beyond what a normal person is expected to know about their stuff as general knowledge in life. Everyone should just understand the principles of the tools they're using and basic "under the hood" stuff to assist with basic troubleshooting.
In reality, in the parts of the world with higher standards of living (where people can afford stuff), this piling up of expectations just leads people to give up and pay for services rather than learn all that. And for good reason: modern society has this bad habit of taking every shred of free time and complicating your life, with unfortunate consequences.
> Any reasonable definition of "coding" is creating something.
Not necessarily creating something. The OP really did shoot themself in the foot by using the word "coding", where what they most likely meant was doing simple alterations to computing systems that require understanding some basic control flow structures - conditionals and loops. Things like, "I seem to be doing the same repetitive sequence of steps 100x a day, let me automate this somewhat". Think Tasker and bash, not JavaScript and C++.
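As a rough illustration of the scale involved (the folder layout and the sorting rule here are made-up assumptions), this is the sort of thing meant: one loop and one conditional, not a software project.

    import shutil
    from pathlib import Path

    # Hypothetical chore: file everything loose in Downloads into a
    # subfolder per extension, instead of dragging files around by hand.
    downloads = Path.home() / "Downloads"
    for item in downloads.iterdir():
        if item.is_file() and item.suffix:   # skip folders and extension-less files
            target = downloads / item.suffix.lstrip(".").lower()
            target.mkdir(exist_ok=True)
            shutil.move(str(item), str(target / item.name))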
> this piling up of expectations just leads people to give up and pay for services rather than learn all that
A small bit of knowledge can save you a lot of money in services. I can get why the wealthy wouldn't care, but most people aren't wealthy. Also, I feel there's more pressure from the sales & marketing departments of services than from modern society's demands on free time.
At least true in Sweden. You don't have to know much, but you at least have to be able to check your tire pressure, pop the hood and check the oil and know which hole to pour which liquid into.
Many schools used to, as well as shop, etc. albeit pretty heavily split by traditional gender roles. Personally, in high school at least, I think a good case could be made for carving out a bit of time for practical life skills (also personal finance, etc.).
Statistically speaking, most people today make a living coding in JS. If we were going to extremes we would be talking about soldering your own board, assembling your own language, rolling your own OS, the shit Bell Labs used to do. Coding is by all means an adequate analog to sewing, my dude/dudette.
I think the idea that programming is a back-room industrial or maintenance operation performed by specialists misses the point that programming is also another way to use your computer. The most fluid and limitless way, in fact.
Programming is not just like being a car mechanic or factory floor engineer, it's like being an expert driver at the same time.
When I use my desktop, it often occurs to me that I could write something to speed up a task, if only the application was accessible in a similar way to Emacs, or had Amiga-style ARexx ports that I could talk to in a script. From this perspective, programming is the most fine-grained GUI affordance within a computer system. By making it accessible along a continuum with simpler GUI tools, we greatly increase the ability of the user to do magic, or to learn to do it.
I would really like to see the development of an ergonomic expert-oriented desktop that lets me use my programming skills in a high-level and bureaucracy-free manner, to augment my use of an attractive and well-integrated GUI. There's no reason why such features should impinge on ordinary non-programmer use.
> I think the idea that programming is a back-room industrial or maintenance operation performed by specialists misses the point that programming is also another way to use your computer.
Arguably, programming is what computers are for. If you're not programming it in some way, then it is more like an appliance that just happens to contain a computer. Personal computers of the 80s booted directly into a programming environment.
Thinking about it some more, I'd like a desktop OS that not only provides extensive scripting, but also exposes system features to the language in an accessible manner. So I can easily write a GUI for my script in a few lines, or draw some crap on screen, out of the box without the bureaucracy or FFI business you'd need to do these things in Python or some other high level language.
In other words, I'd like there to be a concept of OS-level "system features" available to high-level languages through something a little more friendly and robust than interfacing to a C library. I don't know how I'd implement it :) but it's how I'd like things to be.
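For contrast, here is roughly what "a GUI for my script" costs today with stock Python and Tk; the wish above is for the OS itself to offer something even more direct than this, without per-toolkit plumbing (the widgets and labels below are just placeholders).

    import tkinter as tk

    # Today's baseline: a throwaway GUI wrapped around a script by hand.
    root = tk.Tk()
    root.title("batch rename")

    label = tk.Label(root, text="Prefix for renamed files:")
    entry = tk.Entry(root)
    # The button just echoes the entry; a real script would do the renaming.
    button = tk.Button(root, text="Run", command=lambda: print(entry.get()))

    for widget in (label, entry, button):
        widget.pack(padx=8, pady=4)

    root.mainloop()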
Macs have AppleScript, and while application support can be hit-or-miss, I’ve got several little things I’ve done with it that have made my life a lot easier. There’s a couple features I wanted in Illustrator that I’ve been able to work around with scripts, I have a script hanging around to help with a tedious part of turning a big pile of individual page files into a comic, I have a hotkey that rotates my display and my Wacom tablet with one keystroke when I want to work with my monitor in portrait orientation for a while, and a few other things I’m not thinking of. Some of these I use once every few years, some I use multiple times a day.
Note: AppleScript has been mostly abandoned for some years now, and most apps don't support it in very useful ways in my experience. It's also slow. I imagine Linux would allow more scriptability, but I only ever use Linux for my servers.
While the parent is a bit extreme in his view that "everyone should know how to code" (which is rubbish to be honest; everyone should know how to go about general problem-solving, but not how to code), the underlying problem is a different one and has nothing to do with "gate-keeping": it's that "one-size-fits-all" doesn't work.
For some reason, UIs are now designed towards an "average user", but as the air force found out long ago, average users don't exist in the real world [1]; it's an entirely made-up concept.
The solution to this is customizability. Create an OS that's easy to use in the default configuration but let me tweak it to my own needs just like I can adjust the seat in my car.
> everyone should know how to go about general problem-solving, but not how to code
General problem-solving ability seems like a synonym for fluid intelligence, which is not very malleable. Learning to code, on the other hand, is possible with effort. I learned to program in the 4th grade, with videos and books I myself bought, without having internet access. (I could use dial-up if I really needed it, but it was expensive and slow, so I used it very sparingly; I don't remember how much, but around 5 hours per month seems an upper bound.) I had no support whatsoever from anyone (except that my dad paid for the books and videos), my mom only let me use my computer for like 3 hours a week (shared between gaming and doing anything else), my computer was old and slow, ... . Now, I sure have a high IQ, but I doubt that we couldn't have 20% of the urban population reach some basic computer literacy when they are 24 years old. Heck, calculus is known by more people than coding. Most non-poor people waste 16+ years of their life in K12 and undergrad, and learn very few useful skills. Imagine what would happen if we taught people a curriculum that did something other than pure signalling.
> Why shouldn't everyone churn their own butter? Why shouldn't everyone sew their own clothes?
Selling pre-made butter or clothes doesn't prevent someone from making their own if the pre-made one doesn’t fit their needs.
In technology, making your own is often outright impossible due to proprietary APIs.
In a lot of cases the inefficiency of the official implementation is a feature for the developer: they definitely do not want people to build more efficient clients (examples: no ads/irrelevant content, defaulting to a chronological feed instead of an algorithmic one, etc.), and they use technical (and sometimes legal, like abusing copyright law) workarounds to make the process as difficult as possible.
To me this reads like the complete opposite. Hiding the benefits of programming from the "unwashed masses" because they are not going to understand it anyways is gatekeeping.
The aim should be to make programming/scripting/automation easier and more accessible, not to hide it away to prevent people from ever using it.
> Why shouldn't everyone churn their own butter? Why shouldn't everyone sew their own clothes?
Why learn history in school? Why learn math? Why learn about philosophy? Should we stop teaching that in school because the <stereotype> will never use it anyways? Or is the opposite true and not teaching that would be the actual gatekeeping?
TLDR: You suggest that this would hide <useful stuff> behind programming and thus be gatekeeping. I think you are the one hiding useful stuff (namely programming itself), and are thus the gatekeeper.
> Why shouldn't everyone churn their own butter? Why shouldn't everyone sew their own clothes? Why shouldn't everyone build their own homes? Should we continue to dumb down feeding, clothing, and sheltering ourselves for the exclusively casual usage of <insert offensive stereotype>?
Why shouldn't everyone read? Why shouldn't everyone write? Why shouldn't everyone do basic math?
Being able to use computers efficiently is knowledge, not chores. To be able to use them as a bicycle for the mind goes way beyond pressing colored buttons according to emotion.
Being able to use computers and being able to program are different things. Not exactly at the level of being able to write versus being able to make your own pencil, but not that far off either.
Everyone probably shouldn't bother to churn their own butter, but everyone should be able to cook and plan a meal. Everyone probably isn't going to learn to code, but they ought to be able to install or reinstall an OS on their computer, replace a hard drive, and perform basic troubleshooting steps.
"Do we continue to dumb down software to exclusive casual usage for house-moms or we try to move our society to be more techn savy as we are clearly moving to a more tech relying world. Why shouldn't everyone know how to code?"
To quote a commonly-used Web meme: "Why not both?"
In my opinion, why should a software tool only have one interface? What if there were many possible interfaces available, from very simple interfaces with reasonable defaults for casual users, to more option-rich interfaces for power users, to an API for programmers? What if we could take advantage of today's AI technology to automatically construct GUIs that are tailored to a user's experience level? What if users could customize the GUIs in order to make the GUI fit their needs better?
What if the system supported a variety of languages, not only common languages such as Python that many programmers are familiar with, but also beginner-friendly languages? Users are willing to program provided it's not too difficult: AppleScript from the 1990s was a step in the right direction, and Excel's macro language is probably the most widely-used programming language in the world. With today's AI/NLP technology, we could go further by developing ways for users to describe repetitive, routine tasks using natural language.
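As a toy sketch of that layering (the resize operation is a hypothetical stand-in), the same functionality can be reached as a plain function for programmers, a one-argument command for casual use, and optional flags for power users.

    import argparse

    # The "API" layer: a plain function other programs can call.
    def resize(path: str, width: int = 800, keep_aspect: bool = True) -> str:
        # Real image processing would go here; we just report what we'd do.
        mode = "fit" if keep_aspect else "stretch"
        return f"{path}.{width}px-{mode}.png"

    def main() -> None:
        parser = argparse.ArgumentParser(description="Resize an image")
        parser.add_argument("path")                             # casual use: just a file
        parser.add_argument("--width", type=int, default=800)   # power-user knob
        parser.add_argument("--stretch", action="store_true")   # power-user knob
        args = parser.parse_args()
        print(resize(args.path, args.width, keep_aspect=not args.stretch))

    if __name__ == "__main__":
        main()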
I think there's still a lot of room for innovation on the desktop. But you highlight a very big problem: where is the market? Who is going to pay for this innovation? Outside of open-source projects, the major commercial desktop environments are platforms controlled by multi-billion dollar corporations. Building a new desktop environment that is capable of competing against the commercial giants will take a lot of time and capital. The last company to give this a try was Be, Inc. in the mid-1990s, and they had a hard time competing against Microsoft's OEM strategy. I wrote more about this at http://mmcthrow-musings.blogspot.com/2020/10/where-did-perso....
I feel the Classic macOS did one thing quite well. It was rather easy to manage the system.
System functionality could be expanded through various means, but most often devs used Extensions. And if a software issue arose, it was easy to disable all Extensions by holding the SHIFT key on start-up. Also, on start-up you'd visually see which Extensions were being loaded, so you would always be aware of what you had installed.
In current macOS it's very hard to keep track of what I've installed. I install a lot of stuff using tools like Homebrew. Some software might install some system level hooks, etc... From my perspective it's kinda hard to keep the system "clean". And it's probably a good idea to do a clean install of my computers maybe once every year or so, since I might have installed stuff I don't really use anymore.
Also, there was the System Folder, and that directory contained the Extensions, Preferences, and Control Panels directories, etc. So you could also manage your System Folder at the file system level. You could just delete an Extension manually from the Extensions directory in the System Folder to uninstall it. You didn't need any "uninstall" software most of the time.
A classic Mac OS-like environment with a few more modern features (maybe a WindowMaker-like UI, multi-user support and real multi-tasking) would be pretty neat.
I do sometimes wonder if the lack of preemptive multitasking and memory protection in classic Mac OS may have ironically led to better quality software. Users are forced to be less tolerant of bugs if a piece of buggy software can lock up your whole computer.
Greater difficulty / risk in development leads to fewer, higher quality applications, but breadth not depth wins the market, so we're stuck mourning dead systems with potential except for cases where depth results in a "killer" application.
> Just try writing up a document on your phone! Just imagine what could be if they were on the level of _desktops_.
It's very, very difficult to beat a keyboard. Tablets and even phones become night-and-day more usable if you plug in a keyboard, even ignoring everything else that still sucks about them.
If your typing speed on a keyboard is comparable to your typing speed on a phone, it sounds like you might have a lot to gain from learning touch typing; look it up. I was in the same situation not long ago.
Mind that I mention it was specifically index finger swiping. Maybe it wasn't actually faster than physical keyboard, but it was certainly an order of magnitude faster than discrete thumbs phone typing.
That's for sentences, but documents usually contain more than just sentences. Even documents have symbols and numbers, which cannot take advantage of Swype tech.
I can't agree with the grandparent about type + swipe being anywhere near the speed of a keyboard, but in my experience, writing in other languages using swipe is just as fast as in English (at least for other languages that use the Latin alphabet). I frequently use swiping for bilingual English + Romanian (a language which tends to have much longer words than English) conversations, and it generally works very well, even bilingually. I would note that I'm using some Microsoft keyboard for Android, I forget its name, and it is explicitly configured for both languages.
Good that you mention Engelbart, because none of the mainstream desktop OSes are close to his ideas, or to what the Xerox workstations allowed for; ironically, Windows is probably the one closest to it.
GNU/Linux has the necessary tooling for making it happen as well, but thanks to the fragmentation and some communities' hatred of GNOME/KDE, it will never happen.
This is what a modern desktop OS should look like,
Sure, first a short overview of how those OSes used to work and how one can map those ideas into Windows.
Mesa/Cedar also shares some ideas with the other workstation variants from Xerox PARC, namely Interlisp-D and Smalltalk, but is based on a strongly typed language for systems programming, with reference counting and a cycle collector.
The language itself compiles to bytecode, because Xerox PARC machines used bytecode with microcoded CPUs, whose interpreter was loaded on boot. So in a sense it was still native somehow.
The full OS was written in Mesa/Cedar, and everything was kind of exposed to the developers.
The shell is more like a REPL, where you can access all that functionality, meaning the public functions/procedures from dynamically loaded modules, interact with text selection from any application window, or execute actions on a selected window. And as REPL, it worked on structured data.
Basically similar to what PowerShell offers, with its structured data and the ability to call any COM/UWP, .NET or plain DLL libraries.
Then you could embed objects into other objects, and this is the basis of the compound document architecture, basically the genesis of OLE in Windows and COM (COM is just the basic feature set which OLE is built upon, although more OLE 2.0; 1.0 was more complicated still).
The way Office works between applications and its inline editing of OLE documents can be traced back to the Xerox PARC workstations, as Charles Simonyi, one of Bravo's creators, brought Bravo's ideas into Word.
Since Windows Vista, most new APIs are actually a mix of .NET and COM (now UWP), which expose a similarly high-level set of OS APIs (bare-bones Win32 has hardly changed since XP days).
Now, many of these concepts can also be found in GNOME and KDE, however due to the way distributions get put together, it is hard to really provide such integrated developer experience across the whole stack.
And while REPL like shells do exist for UNIX clones, their adoption is a tiny blip when compared against traditional UNIX shells.
I take this as a joke. There is nothing modern-looking about it. Geeky, yes. Not designed for touch interface. Resembles Oberon, which I would not call modern, either. Maybe, we are not ready for it yet. Belongs in the future, then. (Or, more likely, in the past.)
- not thinking about files: I can open Notes/Drafts on my phone and get a textbox. I kinda get this with Joplin, barely.
- Real sandboxing, with a nice permission layer
- Extremely easy sharing of data between apps. Of course files are theoretically a great sharing mechanism, but the sharing mechanism in mobile OS's are the logical conclusion of the clipboard
- URIs that go deep into other apps. Lets you easily say "go over here to see details" from a completely separate system
The fact that lots of stuff is webapps lets you get pretty far on the desktop too, but I think these metaphors are power-user features that the desktop could learn from.
The "no files" part of mobile OSs is the worst part for me personally. I constantly re-download the same PDFs, have to look forever for "that particular picture", etc., because even though you can technically access the file system on Android it's not like the majority of apps supports a file manager interface, and many apps just dump their file somewhere without explicit structure, so I basically never find what I need.
Also if you want to do anything beyond “what the devs already thought about” you either need deep understanding of the specific system/app or rounds of trial and error.
Case in point: tried sharing a vpn config with an Android user over Signal. They couldn’t do anything with the file, just yielding an error message saying it was unsupported. Sending the exact same file with a .pdf extension allowed them to download it and import it in their VPN app (only after downloading and installing a generic file manager app, though).
Every now and then I struggle with some file that I can’t figure out how to move between apps. Something as seemingly simple as downloading an mp3 file from a browser and importing it to a music player app is quite an ordeal on iOS.
Please note I didn't say "no files". I said "better clipboard". I like files as a thing to be exposed. But I think that for day-to-day work being able to move stuff around between apps without fidgeting around with files is very valuable.
I remember that some of these points were targeted by the Étoilé environment [1]. It was a desktop environment targeted for GNUstep and programmed in Objective-C that tried to rethink a few fundamental concepts of the Desktop Environments available at the time (~10 years ago). Among them, proper file versioning for everything, seamless app interoperability and data sharing, etc.
Sadly, the developers never managed to go beyond a few core libraries and a nice theme (the GUI used GNUstep under the hood). I followed the development with great interest till they stopped updating the site; I believe that the effort would have required many more developers. What a pity!
And I hate app-data not being files, not being portable. My data may be stuck inside a SaaS and only accessible through a closed-source app, and I have nothing to say about it.
> Real sandboxing, with a nice permission layer
Sandboxing apps is a good thing. Depending on your use-case, Snap/Flatpak or containers kinda solve this, but they are not the default way of running apps for now.
What mobile does wrong here though, is that it also sandboxes the user, not giving the user full access to his device, nor letting him let his apps get that access either.
This is user-hostile, on all current major mobile-platforms.
> Extremely easy sharing of data between apps
I would rephrase this as barely functional sharing, for only the limited subset of data the application has decided to implement sharing for, and only in the ways the application developers have anticipated for inter-app sharing.
On a desktop, I as the user, have the power to decide how I want to share data and invent new ways data can be shared and utilized.
> URIs that go deep into other apps
While that is certainly a neat feature, it's an app-centric feature. How do you know which kinds of apps I have installed? How do you know which of the apps within that niche I have installed?
And if you're not going to make it app-centric, you have to make it file/data-centric anyway... which leaves us with Android. Android does this better than iOS by having an intent system which lets apps register the ability to handle files, URLs and subsets of those, and other apps can query which apps support which file/URL intents. So basically just a minor addition to the system we have already had on all desktop OSes for decades now.
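For what it's worth, a small example of that existing desktop-side machinery (assuming a freedesktop.org-style Linux desktop with the standard xdg-utils installed): applications register the MIME types they handle, and anything can query for the current handler.

    import subprocess

    # Ask the desktop which application is registered for PDFs.
    handler = subprocess.run(
        ["xdg-mime", "query", "default", "application/pdf"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(f"PDFs currently open with: {handler}")   # e.g. some .desktop entry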
And again: that's a system which already works really well when files are first-class concepts which everything else builds on.
So it's all back to files. If you want to empower the user, you must have files.
Files are emphatically _not_ good first class primitives for rich sharing.
If you have a contacts program, are you going to make each contact a file? What about performance or bulk editing? From that program's perspective, the ideal is probably to have a single file (SQLite DB for example).
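A minimal sketch of that single-file shape (the table layout is invented for illustration): the whole address book lives in one contacts.db, and bulk edits are one statement rather than a walk over thousands of per-contact files.

    import sqlite3

    conn = sqlite3.connect("contacts.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS contacts ("
        " id INTEGER PRIMARY KEY,"
        " name TEXT NOT NULL,"
        " email TEXT)"
    )
    conn.execute("INSERT INTO contacts (name, email) VALUES (?, ?)",
                 ("Ada Lovelace", "ada@example.org"))
    # One statement edits every record; no per-contact files to touch.
    conn.execute("UPDATE contacts SET email = lower(email)")
    conn.commit()
    conn.close()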
But now you don't have granular sharing mechanisms except through a clumsy export which requires you to give a name to the thing and put it somewhere and open it in the other program.
Meanwhile, I hit the share sheet on my phone for the contact. Some apps know about this form of data and can ingest it. Others fall back to text. It's the clipboard model, not the files model.
Files are _fine_, and it lets you do stuff like reverse engineer the format and do cool stuff. But it's clunky as hell when you have something relatively ephemeral.
> But now you don't have granular sharing mechanisms
Many (Windows) apps support OLE/COM-based objects which can be copied, mixed and processed in between applications. This allows the clipboard to hold rich objects and not just text-based contents. Things like tables, images, rich text, contacts... and even files, or folders of files!
This allows for a much more rich (and empowering!) way to share data than currently done on popular mobile platforms.
This was already implemented back in Windows 95 or something. It's really old tech. Not sure how well this concept is implemented (or at all) on other desktop operating systems though, so it might not be a "universal" desktop solution for everyone.
That said, it can clearly be done better than on mobile, because on Windows it has already been so for two and a half decades.
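A rough sketch of the simpler half of that idea, putting multiple representations of one item on the Windows clipboard (this assumes the third-party pywin32 package, shows only custom clipboard formats rather than full OLE object embedding, and the format name is made up):

    import json
    import win32clipboard

    contact = {"name": "Ada Lovelace", "email": "ada@example.org"}

    win32clipboard.OpenClipboard()
    win32clipboard.EmptyClipboard()
    # Plain-text fallback for apps that only understand text...
    win32clipboard.SetClipboardText(f"{contact['name']} <{contact['email']}>")
    # ...plus a structured form under a custom format for contact-aware apps.
    fmt = win32clipboard.RegisterClipboardFormat("Hypothetical.Contact.JSON")
    win32clipboard.SetClipboardData(fmt, json.dumps(contact).encode("utf-8"))
    win32clipboard.CloseClipboard()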
I can open vim on my iphone/desktop and get a textbox. I don't see what storing things in files has to do with that.
>Extremely easy sharing of data between apps.
Hah, no. Try sharing large batches of files on iOS. With plenty of apps there's no way to do this (and often, when you can, the only way is with the Files app).
What do you mean by "Extremely easy sharing of data between apps"? In my experience it's quite difficult to get your data out of one app and into another one.
We continue to dumb down interfaces because of the assumption that people won’t learn and the reality that building a complex usable interface is hard and teaching is almost never done.
I remember when video games came with elaborate manuals. This discussion reminds me of that and how it stopped happening (and just now, the smell of opening the box for a new game; I don't think I'll ever experience that again).
And yet video games are prime evidence that users can learn just about any UI you throw at them, if they're even a little bit motivated to do it. The Web itself is the second piece of evidence: despite all the UX experts' love for "simplicity" and "intuitiveness", every single website looks entirely unlike any other website. Every UI is different. People manage.
It's the assumption that people won't learn that's a problem. Minimizing unnecessary complexity is a good thing, but removing capabilities for sake of further UI simplification is taking things overboard.
(I'm forming a new hypothesis that tries to explain why this happens: it's because SaaS products are trying to turn a workflow into a service. So anything that deviates from their perfect workflow, including any flexibility, integration points, or general ability for self-help, is ruthlessly pruned. The users must follow the prescribed workflow.)
I think we should have a third DE with the same level of UX for configuration and simple customization as Gnome and KDE, but focused on tiling WMs. Pick something like Awesome or Sway and create the whole ecosystem. Pick an OS that suits it, like Manjaro (look at the logo, it's a tiling WM for sure!), and offer the DE as an initial option in the installation.
But considering the "natural selection" that happens here, maybe it is the way it is because only technical people care about this kind of thing… Idk…
> or do we try to move our society to be more tech savvy, since we are clearly moving to a more tech-reliant world
This is what I've been telling non-technical friends for years now: as they spend more and more time with computers and the internet, the investment to learn what's under the hood and to have more efficient interactions with better tools becomes more and more worth it. You can't say anymore that you don't fancy or care about IT when you spend hours each day on a computer.
>Real question: Do we continue to dumb down software to exclusively casual usage for house-moms, or do we try to move our society to be more tech savvy, since we are clearly moving to a more tech-reliant world? Why shouldn't everyone know how to code?
Well, the way of progress has always been simplifying operations.
Do you know how to fix your car and do you make your own clothes, cheese and bread, in today's "bread and cheese eating", clothes wearing, car driving world?
Honestly, I think a lot of ideas about where to go with desktop OSs these days are just complexity-fetishist slashfiction. I don't think we need new complicated paradigms; instead we should take a look at our history of what worked and what didn't, and build on a simple combination of good ideas we already have.
My opinion, short version, hastily written, and incomplete:
1) Applications are self-contained objects that can be simply copied and deleted instead of installed and uninstalled. They should run from any location on any media. AppImage implements this in Linux and modern Mac application bundles are already this, but the concept goes back to almost the beginning of desktop computers.
2) Spatial file manager (Mac OS), drag-and-drop saving (RISC OS), desktop and quick-launch menus that are just views of folder structures (Windows).
3) All applications in their own namespace and they only get access to what the user configures for them (do not do "allow/deny" popups!).
4) Everything scriptable and something like AutoHotKey built into the OS at a fundamental level.
5) Switchable "user profiles" instead of "user accounts", which are an artifact of giant shared computer systems. User profile just contains personalized settings and can be located anywhere, including removable media so you can take yours to other computers. If you want to keep things safe from others, encrypt them. Otherwise there are no permissions except those applied to applications themselves.
6) Sound and video should be routable like you see in DAWs. Plug outputs into inputs, add filters, split, mix, etc. This is of course scriptable so you can make it do what you want on arbitrary events (a toy sketch of this idea follows the list).
7) Backwards compatibility should be a high priority, but accomplished via shim layers and/or emulation and/or vms when clean breaks are necessary. A wide array of such should be included with the OS from the beginning. In 2020, there is no excuse for not being able to run old software.
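To make point 6 concrete, here is a toy sketch of DAW-style routing as a data structure; real systems (PipeWire, for instance) are far more involved, and the node names below are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str

    @dataclass
    class Graph:
        cables: list = field(default_factory=list)

        def connect(self, src: Node, dst: Node) -> None:
            # A "cable" is nothing more than an (output, input) pair.
            self.cables.append((src, dst))

    mic = Node("microphone")
    denoise = Node("noise suppression filter")
    call = Node("video call app")
    recorder = Node("screen recorder")

    g = Graph()
    g.connect(mic, denoise)        # mic feeds the filter...
    g.connect(denoise, call)       # ...which feeds the call...
    g.connect(denoise, recorder)   # ...and is split to a recorder as well.

    for src, dst in g.cables:
        print(f"{src.name} -> {dst.name}")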
I just want to say that OLD Mac applications were exactly that, too -- even the OS was basically just "Finder" and "System", that was it. Dragging those two files to a different floppy (and swapping them 12 times to complete the copy) gave you a bootable disk. Meanwhile Windows apps were hundreds of files ... then eventually Mac apps went the same way, although not as wildly.
I know we've debated this point before, but as I don't recall seeing this particular list from you before, I just wanted to point out that Haiku already has (2), (6), and (7), the technical capability for (1) though it is not exposed to the GUI, about half of (4) though there is also no GUI for it (there is a command line tool for it however), and future plans for something like (3) and (5). :)
Well, we've had some disagreement over the extent to which (1) applies to Haiku for sure. Maybe there have been some changes there but the way it appeared to be implemented a while back I would say didn't quite meet (1), especially by convention if not by technical limitation.
If Haiku actually does (6) as I imagine it, that's actually something I had no idea about and probably deserves its own article. Ditto (4).
I'm genuinely glad to hear about future plans for (3) and (5).
These are the kinds of features that would make it more appealing to me than a Linux distro despite some of its current shortcomings.
> (1) is definitely by convention and not technical limitation, I definitely stated that during our last conversation.
Indeed, now that I think back I believe that was ultimately what you clarified about my misunderstandings of Haiku and package management.
What still baffles me somewhat is that normal (non-ported) Haiku applications seem to actually be made less flexible by packaging them, because they 1) have no non-base-system dependencies and 2) hpkgs only work from certain locations (IIRC currently hardcoded, but this could potentially be made configurable).
Cortex does look a lot like the kind of system I'm thinking of. That's wicked cool. When Haiku supports hardware accelerated video one imagines they could stick arbitrary shaders in the video pipeline to accomplish some useful, or at least really neat, things. I know built-in audio routing functionality would save me a lot of time on the Halloween display each year, maybe I should look at doing that with Haiku next time.
I see this quote in the comments about 'hey': "hey is not a tool to send generic BMessages", which seems like a missed opportunity from way out here at 100ft. Otherwise the only thing that seems needed for GUI exposure is a script editor with 'record' functionality and perhaps some kind of manager for how scripts are tied to system events.
> All of these points sound like macos apart from 4, possibly 6 and definitely 7.
MacOS has some good ideas. 1 and 2 actually apply to a lot of OSs from the 90s. Good luck finding a spatial file manager today though.
> Your home directory is your profile/account (what's the difference on a single computer?).
The difference is that there is no OS concept of an "account". Right now, you can't take your home directory between systems and have it just work because the OS needs you to have a user account. If your user ID on the new system is different from the old one then your file permissions will be broken.
The idea is that the OS doesn't really care about protecting the machine from the user in a personal computing scenario. It does, however, care about protecting the system and the user's data from malicious applications, so that's where the permissions should be.
Thanks for the explanations. I realise that the spatial file manager is missing these days, but I prefer the navigation file manager. Whenever I use BeOS I find the plethora of open windows for each directory unnavigable.
You can get Finder to open each item in a new window and hide the sidebar to make it spatial-like, although I am not sure if there is a method of making "open in new window" the default.
I think the separation of account and profile is a difficult concept, unless the directory had a specific UUID associated intrinsically with it so that permissions could be applied to objects that didn't know about "users"?
What I'm saying is, you don't need the permissions at all. From the user's perspective, everything is 777. If you happen to have the edge case where more than one user is using a personal computer, and there is a desire to keep things safe from each other, then you should use encryption for this since it actually works. File permissions are easily bypassed with local access and a boot disk.
As I recall, Poettering is actually working on portable home directories for Linux. Naturally, the solution is over-engineered and involves a new service and file format.
Oh no, Poettering at work again! That's what finally drove me from Linux: PulseAudio and having dysfunctional, stuttery sound after upgrading from Fedora 7 to 8, I think? It worked fine beforehand. And then realising I was spending forever tinkering rather than actually getting anything done.
After I went to Mac I didn't look back. Now I look at the mess and debate of SysV, DHCP server in the startup etc. and can only imagine that this home system concept will be the same. As you say, over-engineered and unreadable. Ugh.
I am working on a side project where I am building a desktop environment for Linux and the BSDs that implements OpenDoc-style components using Common Lisp, where components are objects and where they could be combined not only by using code, but graphically as well. This is inspired by Smalltalk, Lisp machines, and OpenDoc. Another aspect of this desktop environment is its support for very flexible user interfaces that can be easily modified. This project is motivated by my frustrations with existing desktop environments (I lament the stagnation of macOS) and my desire to see something like a modern Lisp machine.
I'm still in the very early phases of this project; there's no code available, and I'm currently in the process of learning Wayland and graphics programming (I have a background in systems programming and I've been getting up to speed on graphics programming in order to carry out this project).
Those are some great ideas that dovetail together in powerful ways! Thank you for sharing your work in progress.
In the hopes of inspiring you and others, here's a kind of messy draft of an article I haven't completely finished, but it's all about HyperLook (a user-editable desktop GUI system inspired by HyperCard and implemented in NeWS PostScript) and some components and applications I developed with it, like SimCity, a cellular automata machine, pie menus, a customizable clock editor, a customizable window manager, and "Happy Tool":
SimCity, Cellular Automata, and Happy Tool for HyperLook (nee HyperNeWS (nee GoodNeWS))
HyperLook was like HyperCard for NeWS, with PostScript graphics and scripting plus networking. Here are three unique and wacky examples that plug together to show what HyperNeWS was all about, and where we could go in the future!
I've recently arrived at the conclusion that we need semantic interfaces (think semantic web), where things are naturally cross-referential and indexing is useful beyond clever search hacks. With so much of our day-to-day lives being digital, it would be UI 2.0 to finally break this barrier. It struck me like a brick wall when I realized that my interactions with the "machine" are incredibly isolated from one another, and that despite advances in machine learning it still doesn't build a great predictive index locally that I can leverage for finding or surfacing my own precious content, or for natural suggestions about whom I may want to share a certain piece of knowledge with. And that's just a small example I can think of quickly.
Some of the tech mentioned in this Twitter thread is pretty neat too. Maybe I should take a second look at Pharo
It's my dream to work on some kind of next-level browser/app that would leverage this kind of thinking. One day I hope.
Those observations resonate with me. This does sound a lot like the now almost ancient dream of Prolog/Smalltalk where you'd have this much more semantic and malleable environment to work with and shape to the specific purpose it needs to fulfill, without any artificial barriers.
And I don't see why that semantic singularity couldn't be local (or at least private), instead of the current hodge-podge of "cloud" offerings trying to siphon as much data as they can – under the guise of improving the user experience – just to lock you into their version of the perfect walled garden.
For me, I think this picture[0] illustrates how I feel about this pretty well. The idea that the data, for lack of a better term, that we generate just via interaction with our devices isn't somehow useful is a concept I find a little astonishing in 2020. The very foundational idea of the internet, that ideas can be shared via links and documents (and so much more now), never left me. I think there is huge room for building user interfaces that take this to its core. For instance, why shouldn't my filesystem be used in such a way that I can build links between applications, files, and sharability? Why can't this be extended to, say, link from inside one application to another, so that discrete application functionality is available regardless of what app I have open at the very moment? The idea that I have to "switch" apps is fundamentally old.
I'm not sure that I have all the answers, but one area where I know this would make a ton of sense is the simple note-taking apps that I feel tech and non-tech users alike use. The digital sticky note is something that has been strived for since the early days of graphical computing in particular, but alas no one entity has ever really delivered on it in a way that I would call 'semantic'. It perfectly illustrates a situation where I think this concept is super obvious, because you want different things to be linked for different reasons. And that fundamentally describes what I'm talking about: I want to be able to have things be linked based on their relationships with other things, be they applications, data, etc., that I build over time. The fundamental interaction model here is that I should be able to make these switches and have those switches carry meaning when I make them, rather than the current status quo, where I have to deliberately set up an interaction between programs or actions within programs, rather than things being linked semi-heterogeneously based on semantics exposed around my whole "system", if you will.
While I think you're right in the sense that "semantic interfaces are our best hope for driving innovation", I'm not optimistic this will happen. Everything I've seen in popular desktop UX has been migrating away from even the slightest semblance of that for the past 10 years.
The problem is that semantics require coordination, and agreement on some semblance of a common language. Popular apps can just strong-arm others into making the clever hacks, which for them is far less effort than coordinating some sort of UI language.
I'm afraid the only way this happens is if a single player controls the whole stack that requires coordination. For example, Google can automatically set a reminder in your calendar about needing to go to work because it knows when you go where through their navigation app. Getting those two to communicate/agree on a common language in that way when you do not control both apps is far harder.
I want less in the "let's make the desktop good" vein, and more in the "let's stop making the desktop crap" vein.
Things that seem to be making things worse, ironically, are all things that ostensibly are there to make them better. Web technologies, I'm mainly looking at you.
Definitely. Windows got worse after XP, mostly with 8. Why did they move things around and set up multiple ways of doing things? Gnome got worse after v2, and what was Ubuntu thinking with ads? OSX/macOS/iOS got worse with flat design (which thankfully morphed), the App Store, and the awful Windows-style install nanny.
It’s not just the interface. iOS and macOS got embedded spyware years ago, and it’s still there; they can backdoor whenever. Dig through your logs and sometimes you’ll see output of a menu choice upon being connected to. Windows has similar from what I’ve read. I’m willing to give up some privacy, but it seems like B.S. to make people pay for things that do that in such a hidden way. It leaves things exposed. Unfortunately, the desktop OS alone is not enough for security either. Hardware doesn’t lock down, with potential openings for instructions in multiple places in modern computers.
I think the portability of web technologies make them super appealing to new developers, which is what I think the appeal of languages like Java were a decade (or more!) ago. I think web tech has all of those advantages plus the ability to get started so much more quickly (it takes so little effort to get a simple website online).
I started writing some Swift recently and built a small app, and it kinda shocked me how simple it can be to get such great interface performance from so little code. I kinda wonder if there's some sort of middle ground, like web assembly with a more standardized starting point, base language, and set of UI components.
Portability is a shrine that so much has been sacrificed on. In rather interesting ways.
Pretty much, inarguably, Linux is the most portable code there is. And it is done using what are easily accepted as non-portable practices.
But "it is not a GUI" is the common refrain. To which I offer ScummVM.
But it isn't a business application? True, but I find it baffling that the UI afforded for those games is somehow easier than the UI most applications need.
I think the only other shrine that has killed more, with nothing to show for it, is "easy to reason about threads." Of any sort. So much is done so a program can scale to absurd use cases that it stalls and gets nothing done.
I shake my head in disbelief every time I read someone touting web tech as the best way to make mobile apps and how much more productive it is. It is obvious that that person has absolutely no idea what native APIs offer.
Maybe that's because you're thinking of apps that actually benefit from native APIs. There are other types of apps that are mostly interfaces to remote data primarily generated by other people.
The only native capability that many apps need is push notifications (something like a banking app for instance). And that's basically a random restriction imposed by Apple to protect their business interests.
The next desktop OS, I hope, will be a real, physical desk. I'd like a computer that doesn't have a glowing screen and a tortured simulation of a space that doesn't exist. Instead I would prefer that I could simply write on a sheet of paper under a calm lamp and move to the whiteboard when I feel like it. I could use my voice if I needed to, or gestures as my abilities allow. The computer is there but you can't see it, and it doesn't have to impose itself on me and force me to adapt to its interface. It adapts to me, my space, and my context.
I want old pre-AppStore OSX/macOS that doesn’t nanny my installs or screw old drivers, with a good package manager, and easily tabbed and gridded terminal windows without tmux necessarily.
I’d also like GPL’d Windows XP running flawlessly like a mac.
I’d use Linux on the desktop, but I’ve never liked any of the desktop managers and it was never as reliable as OSX/macOS.
Would you mind using Linux? It's been my experience that it's done everything I wanted it to well. Linux on desktop has changed. I hated it too in the 2000s. Now mostly everything just works, KDE can look and feel like any desktop including OSX. Very reliable.
Since the leak of the XP source code, you may soon find a pirated version of XP running the way you wish. Not GPL'd, though.
I've been using Elementary OS as a daily driver for at least a year, maybe two (and before that I used Mint for several years). For the most part (there's that pesky word again!), it works quite well. And it really is beautiful--aesthetically, I quite like it.
But boy do I wish that bluetooth would work reliably. Since working at home full time, noise-cancelling headphones have gone from 'nice to have' to 'nearly essential'. It worked more or less fine for a while, then some update broke something and it stopped working. Now it's working again, kind of, but connecting a pair of headphones causes most of the UI to stop responding for a full minute or two. Sigh.
And maybe the next time I update it will be fine again. Who knows? And that's the problem: every update feels like Russian roulette. And this isn't even a laptop. I use this thing for work; I do not have time to dick around all day troubleshooting obscure bluetooth problems.
If I'm going to continue to use it, I guess what I need to do is stop updating (or only update specific apps, like firefox) once I happen upon a relatively 'stable' configuration. Security updates be damned.
To add to this, I prefer Ubuntu MATE, where "MATE" refers to the desktop environment: it's exactly what it needs to be, light and responsive and useful, without the need for a GPU just to render your friggin' desktop. It's neat.
Hackintoshes are based on macOS, but that doesn't mean they receive the breadth of testing and scrutiny that the real deal does.
That's the whole point, right? Obviously smaller, non-mainstream distros with non-mainstream or more cutting-edge packages will have more paper cuts.
Try PulseAudio Volume Control (pavucontrol) and see if it helps. Mine also broke after some system update. With it I was able to select the Bluetooth profile as well as set audio output via Bluetooth. Before finding it, my Bluetooth headset wouldn't work correctly on elementary OS, though I never needed it on Linux Mint.
Didn't miss the point at all. Why would you assume that? I faced a similar problem as parent and know the pain point. Was just trying to let parent know of a solution I found useful.
I'm in the same situation as you: work from home, own noise-cancelling Bluetooth headphones, use elementary OS.
Personally, the only problem I've ever had is when two devices are connected to my headphones (laptop and a phone). When a notification pops up on my phone, my headphones get "taken over" by the sound from my phone. I basically just turn off Bluetooth on my phone at that point (easier than disconnecting a device).
Minor annoyance for sure (especially because I have notifications turned on for like 3 apps on my phone), but I'm so used to elementary OS (using it since 0.2) that there's no way I can switch to anything else — Windows or another Linux distro — at this point.
I've tried to move to Linux once a year since 2005. A few weekends ago I did my annual attempt and had a go at Elementary OS (live USB wouldn't boot, gave up), MX Linux (couldn't get sound, wouldn't boot after installing nvidia drivers), Manjaro XFCE (kept locking up, requiring a power cycle) and Pop OS.
Pop fared best, but even then I had all kinds of showstopper problems with monitor power saving, resolution, crazy window repositioning, and some behaviour where the desktop workspace randomly becomes far larger than the monitor and sort of pans around. If I leave my computer for 10 mins then have to spend 20 mins fixing it when I come back, that is a deal breaker.
I persevered though... Tried playing a game, alt-tabbed out to do something else, machine rebooted. Tried to use their tiling window manager functionality, but it had all kinds of weird bugs making it virtually impossible to use for anything except simply switching focus (and even then, their theme does not visually distinguish between focused and unfocused windows, which is problematic!)
Anyway... rant over. Short version: I disagree with you. :)
My experience is very similar. And yet in every debate on this subject some people will claim that they are running Linux without experiencing any of these problems. They can't all be lying can they? So what gives?
I believe it all comes down to selecting the right hardware. The way I've been trying Linux was to install it on some machine I had lying around (mostly Acer, Asus, MacBook, no-name towers). Apparently, that's not how it works.
I remember back in 1990s and early 2000s it was hit and miss whether or not Linux would install on a particular machine. Then over time things improved and you could install it on almost any machine.
Some Linux enthusiasts celebrated this achievement by claiming loudly that Linux now "just works". They couldn't possibly have done a greater disservice to the desktop Linux movement, because that's just not true.
It doesn't just work. It just installs. And then it's crushingly disappointing on most machines.
My next Linux attempt will be on one of those known good hardware configurations. Anything else is just a waste of time.
Sadly I have about the same experience as you with my last attempt 2 weeks ago.
I started with KDE Neon, but it failed to install drivers for my Nvidia card and proceeded to sabotage my sound drivers in the process (they were working fine before).
I then switched to elementary OS, which did fine with my Nvidia card, but every time it played a sound it would send a loud crack through my speakers.
Back on Windows which I feel a prisoner of. The thing is sending data all over the internet, I can't even write a diary because I feel like I live in the USSR where I have to pay attention to everything I say or the KGB will get me (to be clear it's just a metaphor, I understand I can write whatever I want on my PC without consequences but I don't like the feeling that my inner thoughts would end up on a server somewhere).
I run Linux as my daily driver, but I really do get your pain. There are way too many little problems that get in the way. The live USB doesn't boot, volume keys don't work, etc. etc. It has gotten WAY better, but the polished professional feel just isn't there yet. Your trackpad won't feel 100%, and if you don't know your hardware inside and out, your Nvidia card or something else might not work. Part of the problem, too, is that there are way too many projects inside the open source world. While that is a blessing, it's also a curse.
Some people just want to boot a machine and get to work. Even though I run Linux, I have become that person as well.
You've tried every desktop manager for Linux and spent enough time with them to be sure you didn't like them? Very impressive.
Not sure what your reliability metrics are, but I've run Linux and macOS desktops side by side for years now (decades, even), and I don't really detect much of a difference. The macOS ones do tend to have the benefit of a rigid hardware platform, which is why I suppose they do a bit less well when forced to run on an arbitrary platform (e.g. inside KVM).
If you want good-faith responses, it's best not to be a jerk in what you're posting. Actually, your comment would be excellent if it had been just the second paragraph. Unfortunately the first paragraph negated it (and then some) before it even had a chance. That's one reason why the HN guidelines include "Don't be snarky."
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and sticking to the rules when posting here, we'd be grateful and you'll get much more interesting responses. Note these in particular:
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
"Please don't comment about the voting on comments. It never does any good, and it makes boring reading."
The goal is to get things working suitably well for vanilla 2D apps on standalone headsets, and then start building the actual 3D OS apps geared towards office work/tools for thought (things like: 3D programming code graphs, 3D spreadsheets, and other things that VR uniquely can facilitate).
Since the advent of Windows 10 ordinary people have gained the kind of virtual desktops that only wealthy Mac or tech-aware Linux users could have in the past (Win+Tab, new desktop at the top; Ctrl+Win+right arrow). Multiple clipboard copy/paste with a clipboard history (Win+V to paste). A running program history (Win+Tab and scroll down) to go back to them, and maybe to sync between devices when signed in with a Microsoft account. Ordinary programs reopen after a reboot, including things like Notepad coming back with unsaved file content. Windows' photo app on desktop does the same kind of image recognition and face recognition that smartphones do, enabling search-within-image. Windows tablet devices have handwriting recognition in basically any input box in any program.
Nobody in tech will consider them "ambitious" since tech people could do them years ago, but what use is a video of live editing in Smalltalk on a research machine to most people?
In my workflow all I need is a good window manager. The one on macOS is not good even with extra apps (Rectangle). There are too many degrees of freedom in moving a window pixel by pixel. I want it to automatically lay out the windows.
The other problem is that a lot of the things I have open are Chrome tabs nested within a Chrome window. I wish there were no tabs and the OS window manager would automatically surface the right tabs and windows and pick the right layout (e.g. four windows tiled, or two windows side by side, etc.), with an easy keyboard search across the content of everything open.
Personally I can't stand automatically positioned windows, the algorithm can't know about what particular thing I'm doing well enough to position things effectively.
What I wish the OSX window manager did was pointer-style focus. Terminal emulates this for you, but as soon as you have to use other apps (say multiple Safari windows and a GUI editor for writing documentation) you lose this and need a second monitor.
I just tried it and so far it's really good. It's still rough around the edges though and feels like someone's side project.
I feel existing OSes are simply missing out on creating a wonderful window manager. Something like this could get better defaults, animation, a tutorial, some ML smartness on making better initial choices, and perhaps dedicated hardware on the laptop (a la Touch Bar) just for window management.
I just need Apple to get out of the 80s and add an "always on top" option to their windows. Something so simple and useful, and OSX just can't handle it.
The timing isn't great for me to show anything, but I'm working on a UI framework that is exclusively vector graphics based. A few of the interesting things that I've come up with so far are: screenshots can be exported as SVG files rather than bitmaps; a lightweight remote graphics protocol is possible thanks to the desktop being described as a scene graph; apps can easily scale to any DPI/resolution; new and interesting widgets and app designs are possible - pie menus and the like.
Unfortunately the GitHub build is currently broken and I'm probably 2 months away from sharing anything here. I'll post something when it's more presentable.
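To make the scene-graph/SVG-screenshot idea concrete, here's a rough Python sketch of the general concept (purely illustrative; this is not the framework described above, and the Node/export_screenshot names are made up): when the desktop is a tree of vector nodes, a "screenshot" is just that tree serialized to SVG instead of rasterized to a bitmap.

    # Hypothetical sketch: a desktop as a vector scene graph, "screenshotted" to SVG.
    import xml.etree.ElementTree as ET

    class Node:
        """A minimal scene-graph node: a rectangle with child nodes."""
        def __init__(self, x, y, w, h, fill="none", children=None):
            self.x, self.y, self.w, self.h = x, y, w, h
            self.fill = fill
            self.children = children or []

        def to_svg(self):
            # Wrap the node and its children in a group so nesting survives export.
            g = ET.Element("g")
            ET.SubElement(g, "rect", {
                "x": str(self.x), "y": str(self.y),
                "width": str(self.w), "height": str(self.h),
                "fill": self.fill,
            })
            for child in self.children:
                g.append(child.to_svg())
            return g

    def export_screenshot(root, width, height):
        """Serialize the whole scene graph to an SVG string instead of a bitmap."""
        svg = ET.Element("svg", {
            "xmlns": "http://www.w3.org/2000/svg",
            "width": str(width), "height": str(height),
        })
        svg.append(root.to_svg())
        return ET.tostring(svg, encoding="unicode")

    # A toy "desktop" with one window and its title bar.
    desktop = Node(0, 0, 1920, 1080, fill="#202020", children=[
        Node(100, 100, 800, 600, fill="#f0f0f0", children=[
            Node(100, 100, 800, 30, fill="#3060c0"),
        ]),
    ])
    print(export_screenshot(desktop, 1920, 1080))

The same serialized tree is also what makes the lightweight remote-graphics idea plausible: you ship scene-graph updates instead of pixels.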
I don’t want to re-invent the desktop. As a software engineer, I’m annoyed that I can’t do my entire job on my phone. I’d like to see dramatically better portable input interfaces.
I’d even be happy if more love was paid to “desktop light” environments. I have an iPad Pro that’s useless for software development without SSHing into a machine.
Maybe no one's created a good touch interface for writing code yet. I feel like there's actually an opportunity to completely reinvent coding from something keyboard-focused to something more touch-friendly. At the very least for line-of-business apps anyway. There's been lots of work done on visual programming systems, but they all seem to be centered on teaching children basic coding concepts, not actually getting stuff done in the workplace. But that doesn't mean it's impossible...
There was Visual Basic and FileMaker. That said, I think non-trivial problems require a dense UI. And visual density can be daunting. Text and language are a hack, but they're the least bad I've seen so far.
Visual Basic got partway there by having WYSIWYG user interface editing, but you still had to code all of the business logic using the Visual Basic programming language.
What I’m talking about is replacing the latter with something that is not just essentially a text string. Maybe one way to think about it is when you are coding in visual studio with a language that has good IntelliSense, it will pop up with various class members and other details because it understands the objects you are working with. Maybe there’s a way to extend that idea, where there’s not much manual typing involved at all and it’s mostly tapping on the option you want based on what is relevant in the current scope.
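As a rough sketch of that direction (hypothetical names and a made-up member table, not any real IDE API): the editor knows the types in the current scope, so instead of typing you would mostly tap one of the members it offers.

    # Illustrative toy only: scope-aware "tap to complete" options.
    MEMBERS = {
        "Customer": ["name", "orders", "send_invoice()"],
        "Order":    ["total", "items", "ship()"],
    }

    def tap_options(scope):
        """Given variables in scope (name -> type), return the tappable choices."""
        return {var: MEMBERS.get(typ, []) for var, typ in scope.items()}

    # With a Customer `c` and an Order `o` in scope, the UI would offer:
    print(tap_options({"c": "Customer", "o": "Order"}))
    # {'c': ['name', 'orders', 'send_invoice()'], 'o': ['total', 'items', 'ship()']}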
Microsoft had a labs project 5 or 10 years ago attempting to do just this, although it never really got very far before it was canceled. I wish I could remember the name of it now, it was intriguing. Definitely not as productive as coding via a keyboard on a real computer, but it was the best experience you could get on a mobile device, so that’s not nothing.
Maybe the real problem is there's no way to actually run/use anything you create on mobile. Sure, maybe if you're making a website, but there's no way to build an app or a helper function that can be used directly on an iPhone or iPad. I assume Android has more possibilities here, but I have yet to see something that will compile code on an Android device directly to an APK and install it locally without requiring some sort of server to do the heavy lifting.
It’s interesting to see the communities growing around iOS shortcuts. It’s crazy the amount of hoops people are willing to jump through to create shortcuts that would otherwise be fairly simple scripts in a standard programming language, and then people share them on Reddit communities and elsewhere. If people are willing to bend over backwards for that, just think what a purpose-built system could be used for.
The biggest issue is that visual programming environments tend to be domain-specific rather than general purpose. A general purpose visual programming environment would be difficult to build and still be as capable as a software developer using one of the mainstream programming languages with IDE.
> I have an iPad Pro that’s useless for software development without SSHing into a machine.
And even that is a horrible experience because the iPad keyboard doesn’t allow key repeat, making vim a terrible experience. That used to be an issue anyway. Maybe they’ve fixed it.
I just learned you can type command-. to send an escape on an iPad keyboard. And also as freehunter mentioned, iPad Settings / General / Keyboard / Hardware Keyboard / Modifier Keys lets you remap caps lock to escape.
> And even that is a horrible experience because the iPad keyboard doesn’t allow key repeat, making vim a terrible experience. That used to be an issue anyway. Maybe they’ve fixed it.
Isn't Vim the editor with the least reliance upon key repeat? Hardcore Vimmers would argue if you are using key repeat, you're doing it wrong.
iOS allows key repeat from external keyboards, but not from the onscreen keyboard. In theory a third-party keyboard should be able to implement that, although I can’t say I’ve seen any examples of it.
Interesting. I have the Apple Smart Keyboard Folio (not the newest magic keyboard) and I've looked everywhere for a way to enable key repeat, to no avail. Am I wrong in thinking that traditionally key repeat is handled at the OS level, rather than the keyboard level? It seems weird to me that each individual keyboard would be in charge of implementing that, especially when the settings for it are part of a desktop OS.
There is a setting within iOS settings > Accessibility > Keyboards > Key Repeat
I’ve never actually used an external keyboard on iOS so I can’t vouch for how well it works, but judging from the settings pane it should do what you want
I’ve had this thought for a while that the Desktop UI needs to be reinvented to fit the user in two main ways:
- People Centric. Associate files, notes, alerts, calendar entries, emails, documents with your contacts or contact group. Chat using an open protocol. This is in stark contrast with the data silo approach we have going on now.
- Project Based. Want to create a new app? Create a new project. All files associated with a project can live in a container and be shared in other containers. Filter for types of files that you are looking for. This basically reinvents the Finder/Explorer UI to avoid creating complicated hierarchies for the user.
I’d argue that this approach to a more people centric UI would be scalable to mobile as well.
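As a rough, purely hypothetical sketch of the project-container idea (none of these names refer to an existing API): a project bundles files and contacts, the same item can live in several containers, and filtering replaces deep folder hierarchies.

    # Hypothetical sketch of "project containers" instead of folder hierarchies.
    from dataclasses import dataclass, field

    @dataclass
    class Project:
        name: str
        files: set = field(default_factory=set)
        contacts: set = field(default_factory=set)

        def filter_files(self, extension):
            """Filter for the types of files you are looking for."""
            return {f for f in self.files if f.endswith(extension)}

    app = Project("new-app", files={"spec.md", "main.py"}, contacts={"alice@example.com"})
    taxes = Project("taxes-2020", files={"spec.md"})
    print(app.filter_files(".py"))    # {'main.py'}
    print(app.files & taxes.files)    # {'spec.md'}: shared between containers, not copied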
Network effects are huge. People are on Facebook/Instagram/Twitter/etc. because other people are on them. The control over the network is of paramount importance to those companies, and they do not want to give it up. They don't want an open chat protocol. They want their branded experience with their specific font and their specific shade of blue (and it's always blue - Facebook, Twitter, Skype, it doesn't matter, it's always blue).
In the end, while the UX mocks were grand and there was the potential for some interesting flows, it was impossible to get the platforms interested. They don't WANT your contact list to sync to your device and integrate in your OS surfaces, they want to own that data.
I think the "hide the file browser" has had some pretty mixed results when attempted on mobile. File browsers and file extensions come in handy.
For example, the zip/archive workflow. Does it make sense for all archives to live under the zip app? Does it make sense to have to share files to and from a zip app in different bespoke app level UI? If its OS UI how is it really any different than a file browser?
The mobile platforms seemed to have settled on “services” for this kind of thing. Take content from one app, apply a transformation on, put it in another.
I'm looking forward to the day Genode brings capability based security to the desktop. We can actually have computers that don't turn traitor at the first string buffer overflow.
I want computing as secure as when we had MS-DOS and floppy disks. You knew what disk (and therefore what data) you were risking at any given time. You could write-protect the OS.
macOS is signing the system partition, making it effectively write protected in the next release. Not exactly the same, but people are still considering the idea.
For a long time what I've wanted is to be free of devices altogether, instead displays and input mechanisms should be universally available. I should be able to walk up to a wall, sit at a desk, stand in the street, and summon up a display for me to work with, laid out appropriately according to what I am trying to do: listen to messages, paint a landscape, design a car. It should offer the same applications and data wherever I happen to be. Form factor and OS should become something invisible to 99.99% of users.
Until then, we can fully expect that 2021 will be the year of the linux desktop ;-)
What I dislike instead is my Android phone's UI. It's 100% proprietary, meaning I have little influence over some decisions.
E.g., if I don't want to be disturbed on my PC, I just quit all messenger apps. On Android there's a DnD mode, but only since very recently. I don't use it because I don't understand what it is doing, plus it has been buried somewhere as an OS-level app.
Then of course, there's this problem that on Android they want to hide files from you and that the whole experience is optimized to keep you on the screen.
That's not completely true. MacOS has started this process where all apps are checked for safety prior to launching. It makes app launch seem slow, which feels like an app problem.
I've used Codespaces and I use Cloud9 for basically every line of code I write. The problem is Codespaces is ridiculously overpriced and Cloud9 is growing increasingly outdated compared to VS Code. No extensions, limited language support, doesn't support Amazon Linux 2 or Graviton EC2 instances, etc.
Also even though they’re browser based, many features still don’t work on an iPad. Amazon really needs to put more effort into this space or Microsoft will wildly outcompete them. I still use Cloud9 quite heavily but as soon as Codespaces allows for scrolling with a mouse wheel on an iPad, it’s going to be hard to keep me on AWS.
I know it's not IntelliJ, and there are still some warts to it, but VSCode's remote capabilities have been good enough for me to use full time for python development.
Personal opinion: I think a lot of opportunities for innovation and growth in the desktop OS market have stagnated, as increasingly 90% of what people do is contained within a browser window and performed through some sort of network-dependent cloud service/SaaS/web app. Inside Chrome, Edge, Firefox, Safari, whatever.
For many non technical consumer users it no longer really matters whether they're running Windows 10, MacOS or something weird like a linux+xorg/KDE desktop environment.
I think that one of the primary reasons things have stalled is the fear we all carry around because we don't have any decent operating systems. With MS-DOS you could copy the OS disk, write protect it... load up any random piece of shareware and just try it out. There was zero chance it would toast anything, and you might find a gem.
We don't click on links that go to web sites we don't trust. We don't open email attachments, we're paranoid... because our operating systems weren't designed for the age of the internet.
Some method to reliably run new stuff in a fully sandboxed environment (short of going through the time and effort of copying an entire disposable Windows 10 Home VirtualBox machine) would definitely help with that problem.
Linux-based ideas like Qubes are a step in that direction.
Once your full C: volume is backed up to a WIM file it's not too bad even on bare metal. If you keep C: under about 20GB, the resulting WIM is about half that size and easy to deal with.
To deploy using a 30GB partition it only takes about 5 minutes to write zeros to the 30GB, another 5 minutes to quickformat and deploy the WIM file to the fresh partition (in naturally well-defragmented condition), then a judicious BCDBOOT command to make it boot from your present BOOT or EFI folder.
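For what it's worth, here's a sketch of that deploy step driven from Python; the drive letters (W: as the target partition, S: as the boot/EFI partition) and the image path are assumptions for illustration, and the target partition is assumed to already be quick-formatted as described above.

    # Sketch: apply a captured WIM to a fresh partition, then point the boot files at it.
    import subprocess

    def run(cmd):
        print(">", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Apply the backed-up image onto the freshly formatted partition (assumed W:).
    run(["dism", "/Apply-Image",
         "/ImageFile:D:\\backups\\c-drive.wim", "/Index:1", "/ApplyDir:W:\\"])

    # Make it bootable from the existing BOOT or EFI folder (assumed on S:).
    run(["bcdboot", "W:\\Windows", "/s", "S:"])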
That is a great deal slower than the time it takes me to duplicate a ready-to-go Windows 10 Home VirtualBox VM, copying about 25GB of data SSD-to-same-SSD, and then run it.
I agree. With the exception of the Linux desktop, desktop computing is controlled by two very large companies: Microsoft and Apple. Microsoft still makes plenty of money from Windows licenses, but its business model has become more reliant on its cloud offerings. 15 years ago Apple was virtually synonymous with the Mac, but today the iOS platform far outstrips the Mac in terms of revenue. Back when there was money to be made selling personal computers, we got innovation. But now that Apple is making tons of money from its iPhone/iPad ecosystem and Microsoft is increasingly becoming a cloud vendor, innovation on the desktop has stagnated, in my opinion.
Moreover, the desktop is a very difficult market to enter. Writing a new desktop environment that is competitive with even Windows 7 and Mac OS X Snow Leopard is going to take a lot of work, even if they piggyback on existing operating systems such as Linux in order to avoid the full work of writing a new operating system. Moreover, where is there a viable business model for selling desktop operating systems? Be, Inc. tried with BeOS in the 1990s; the company had a hard time getting PC companies to agree to shipping their PCs with BeOS preinstalled due to agreements these companies had with Microsoft regarding preinstalling Windows. There's also the problem with software incompatibility, though, interestingly enough, this may be less of an issue today than it was in the 1990s due to the dominance of the Web and due to the "Electronization" of desktop applications.
My opinion is such innovation on the desktop will come from a hobbyist open-source project or from a small business that is willing to cater to a niche of users who want compelling alternatives to Microsoft, Apple, and various Linux desktops.
My 2 cents from running Linux for 13 years (on / off, mostly on) and windowz / macOS. Full disclosure, I run Xfce / Xubuntu now with ZERO modifications.
I have to agree with @wraptile and say that we don't need a desktop revolution. In the open source / *nix world, there is a desktop for EVERYONE. Gnome is the corporate DE, then you have folks who run Firefox and everything else in a terminal. I think the desktop is akin to a car. Things have obviously changed, but the base will _always_ be there. Cars have 4 wheels, a steering wheel, and cockpit tools. Desktops need to open applications, field graphics, and navigate you through your computing tasks. And those are different for everyone. Someone driving their kids to school has different needs than a forktruck driver in a factory.
One thing that always interests me (I don't work in "tech") is that everyone is always trying to make things "new" or "revolutionize" them. I think the DE world is fine. We have the 2000 Toyota Tacoma (Xfce), the Teslas (macOS), and the Honda Civic (windowz). I'm not sure revolutions are needed. Tweaks and changes here and there to make things more stable or better tech (akin to the move from X11 to Wayland), but large overhauls have already been tested and have come and gone.
We need a new methodology of human and computer interaction to warrant re-thinking desktop os. What we have is very fine tuned to keyboard, mouse, screen and occasional audio input and output. When we have a new set of IO, it could be that "windows" and friends are irrelevant.
Yes, we had touch screens; that's how we got to Windows 8 hell. We failed to adopt the new UI. I would argue we are still working on that.
Sometimes when there's a discussion on alternative operating systems, people bring up Urbit.
Urbit advertises itself as an OS with a simple foundation: the Nock VM.
Here's the entire specification for Nock: https://github.com/urbit/urbit/blob/master/doc/spec/nock/5.t...
You may notice that there's barely any English in this specification, or even examples.
But that's not the main problem. The main problem for me is that this VM lacks support for arithmetic operations.
Well that's not quite true. It has all the necessary components to support arithmetic operations: it has comparison and increment.
Congratulations, now go build your own Peano from scratch.
The author claims that this is not a problem! The VM will optimize your increments in the loop to subtraction, and it will optimize those nested loops to multiplication and division.
But now you have an even worse problem! How the hell are you going to detect that a loop is a subtraction?! It's a halting-problem-hard task.
So instead what you end up having is a VM with an informal specification, and if you stray away too much from it, your program is going to run hundreds of loops just to subtract two values, which means the program may boil up the oceans until it finishes.
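To see the cost concretely, here's the same point rendered in Python rather than Nock (an illustration, not Nock code): with increment and equality as the only arithmetic primitives, even decrement has to count up from zero, so a subtraction does on the order of a*b increments.

    # Illustration only: Peano-style arithmetic from nothing but increment and equality.
    def inc(n):
        return n + 1              # the only "native" arithmetic operation

    def dec(n):
        # Decrement by counting up: find m with inc(m) == n.  Costs ~n increments.
        m = 0
        while inc(m) != n:
            m = inc(m)
        return m

    def sub(a, b):
        # a - b as repeated decrement, with the loop counter also built from inc/equality.
        done = 0
        while done != b:
            a = dec(a)
            done = inc(done)
        return a

    print(sub(1000, 3))   # 997, but only after thousands of redundant increments

An interpreter either eats that cost or pattern-matches the loop back into real subtraction, which is exactly the detection problem described above.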
And if you're being so smarty-pants about increment and equality being sufficient for arithmetic, why not use Church numerals?
That's an even simpler foundation than the Nock VM!
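For the curious, Church numerals are easy to sketch in Python too (an illustration of the idea, not anything from Urbit's docs): a number n is just "apply f n times", and addition falls out of composition.

    # Church numerals: a numeral is a function that applies f some number of times.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m, n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):
        """Collapse a Church numeral to a Python int, for display only."""
        return n(lambda k: k + 1)(0)

    three = succ(succ(succ(zero)))
    print(to_int(add(three, succ(zero))))   # 4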
Well, step one is explaining the brand new VM and OS built on top of it. Non-starter.
Then the explanation continues by claiming global IDs, allocated using a blockchain.
> 2 - User experience
> We want Urbit to be a single, simple interface for your whole digital life.
> A picture of nokia 3110 signifying OSv1,
> iPhone v1 signifying OSv2,
> and a blank space, signifying the supposed new OS version 3
What?
I've re-read the entire website multiple times and I'm yet to understand the problem they're trying to solve, or lessons I'm supposed to learn.
Maybe it's me, maybe nobody just told me that I don't understand new technology. But this seems like an elaborate toy for nothing.
I can't tell anything either, the doc is both too abstract (aspirational marketing) and too low level (buzzwords) at the same time. I can't tell what the entities are, what they do, and what the actual user experience is.
What would help is put on the front page (1) a concrete system architecture diagram and (2) a UI screenshot. That would go a long way to answering WTF is this thing.
tbf, in the context of _desktop_ innovation, whether or not an OS is a unix is largely irrelevant today. That might've been a more relevant statement in the era when all desktop unixes used the same toolkit / CDE.
He asked about desktop OSes, but I actually spend at least half of my desktop time inside of the browser.
And the browser is its own OS in many ways.
I think that Portals are one of the interesting new developments in the web "operating system".
Personally I think the desktop is only going to be "cool" for another 5-10 years.
Because I think that the XR chip(s) are good enough now for very good augmented/mixed reality. We are waiting on some type of glasses that are closer to normal glasses, lighter weight than VR headsets. That do not cause eyestrain and just look almost like normal glasses.
But to get back to the web. Eventually we will see the web and desktop systems start to integrate and merge back together. As this happens there will be more interactions between desktop applications such as hyperlinks and embedding being more common.
You may also see very convenient but intrusive AI activity. Such as monitoring what's on the screen to give suggestions for applications or configuration or whatever else.
When someone is trying to have a meaningful discussion or suggest something important and does it all on Twitter, for me it somehow automatically devalues the whole effort and communicates that it's just some unimportant random musings.
I don't perceive Twitter as a place for anything other than generating outrage, virtue signaling, lighthearted memes and fun, or top-down notices/messages.
Ironically, at least half of the things that make Haiku awesome are buried deep inside its kernel rather than being anything directly to do with the desktop.
Also, the contrast between Haiku and Linux in my mind just points to the more fundamental difference between a design that says "you will do things this way, because this is better" (Haiku) and one that says "we'll give you a dozen ways to approach every problem because we're not into dictating optimum solutions, which may change over time anyway" (Linux). It's not that Linux can't replicate almost everything that Haiku gets right, it's more that it provides everything necessary to do 12 related things, on the same desktop.
Ultimately a lot of the things Haiku "dictates" are at levels that 99.9% of people, including developers, do not care about: the window manager and display server, launch interface, package manager, etc. Certainly those decisions have consequences, but ultimately (for example) do you really need the freedom to decide whether you want ALSA or OSS? People just want audio to work, and programmers want a sane API to work with it. The internals matter little to most people.
You don't have the freedom to choose OSS in terms of implementation, and haven't for more than a decade.
You do have the freedom to choose OSS the API. Some programmers even think that it's a better API (they're wrong, but what can you do?).
I wasn't thinking at all about user-visible things that Haiku "dictates", but internal design patterns. Most of them are great, and most of them exist on Linux too (but alongside a bunch of alternatives, hence the "problems", such as they are).
I can assure you the FVWM statement is wrong. I had a GNOME issue recently, where it would enable panning on the primary screen, and I couldn't find how to disable it :-D
The desktop as it is right now is, imo, just fine - as in it's barely there. On Mac (work), I use cmd+space (Spotlight) to open apps, ctrl+` to open up a terminal dropdown, and maybe some Finder here and there. On Windows (leisure) I just use the Windows key (opens the Start menu with focus on the search field) to open up things, and usually it's one of the game launchers (e.g. Steam).
The OSes are not in my way right now. If anything, I want app builders to move away from web technologies. Steam should be a native app. I suspect the main reason that we don't see more native apps is that the native UI toolkits are limited in their design options / freedom.
I wonder if there's work being done to compile web apps into something that improves performance. WASM is one step I guess, but that's only the 'executable' part of it; we need something similar for the DOM + styling. Or Electron / Chrome should do more aggressive (and persisted) optimizations.
Check out the Red language; it offers native GUI app development. Ada+GTK also provides cross-platform app development support. There might be others but I am not sure. The V language also offers native GUI apps.
The way I see it, there are 2 camps of people when it comes to desktop computing. There are most regular people, who just see it as a tool and want it to let them do what they want without having to put much thought into it. They will have various levels of skill but don't care much past that.
Then there are the people who I think of as enthusiasts, who have different things they care about and cause all the arguments and pedantic nitpicking. There are the fanboys who love tech created by a company and always like to have the newest thing from them. There are the people who are really into making their desktop look pretty. Finally there is the group who likes to have total control of their workflow and hates having features taken away (the group I belong to).
So looking at all of this, it is no wonder most people are not interested in desktops, and the group that is interested is only going to like it if it's from their brand of choice.
What would you add to your current Desktop OS though?
I am using macOS; yes, there are a few things I don't like, but generally speaking it is very, very good. That is specifically the OS, not MacBook the hardware (which I increasingly hate...).
As a matter of fact I don't even remember what differences / new improvements they made in the past few macOS versions. Other than instability and compatibility problems.
Safari needs lots of love. Stop dumbing down Safari into iOS Safari. I use a desktop / PC for a reason.
As a matter of fact, I wish they would stop releasing new macOS every year for features. There is no need for that. Just do bug fix, library / API update and performance improvement.
Edit: Actually, text layout is still crap; no one has solved text layout that takes top-to-bottom writing into account. That includes the web engines as well.
For many years it's seemed pretty obvious to me that most people don't have a need for a laptop for 'real work', but could instead just drop their phones into a slick little dock connected to a keyboard, mouse and a large touch screen. I believe this was the original vision behind Ubuntu Phone before it was pared down and made boring.
Completely wireless pairing would be even better.
It seems like the hardware is good enough, the only thing really holding us back is that the software development environments on mobile platforms are primitive compared to anything on PC (Linux, macOS or Windows). It's hard to imagine developing anything like Photoshop on Android for example, and even office tools like Word are terrible on mobile.
> And yet iPad is still very useful computing device for millions.
Frankly I'm amazed at the efforts some people put in to make iPads work as laptops (at least from what I saw on YouTube). Once they add a keyboard it's almost as heavy and bulky as a laptop. Some add a hub with a mouse and even plug in a Raspberry Pi over USB-C to SSH into...
HP tried to do it with the Elite X3 and a kind of dumb notebook (all the logic in the phone). When it came out, Windows Mobile was already dead, so...
On Android there is DeX (at least I have it from Samsung), but it seems it just exists, with no big love from whoever is behind it.
The problem with that vision is that people break/lose phones a lot more often than laptops. We tend to carry our phones with us all the time, which exposes them to way more dangers than our laptops. What happens to all your stuff when your phone is stolen in a pub? Or when it falls into that subway gap?
If the answer to this is: it is all backed up automatically into the cloud and your new device will become a clone of your lost device, then you don't need the device in the first place, all you need is a terminal.
If the answer is: you're screwed, then maybe storing all your stuff on a device small enough to be easily stolen/broken was a bad idea.
> If the answer to this is: it is all backed up automatically into the cloud and your new device will become a clone of your lost device, then you don't need the device in the first place, all you need is a terminal.
Why do I only need a terminal if my data is backed up to the cloud, exactly? If I wish to watch a video that I downloaded to my phone, I can do that without having connection to the internet. If I wish to edit a photo with an application on my phone, all the processing of that file can be done locally on the device. Having a cloud backup of a device does not mean that there's no value in having a device with its own capabilities outside of the cloud.
> If the answer is: you're screwed, then maybe storing all your stuff on a device small enough to be easily stolen/broken was a bad idea.
For any valuable data you should have more than one backup solution, including data you store on your phone. Just because there's more risk of losing your phone than your desktop pc in your home doesn't mean that you take less measures to mitigate the chance of data loss on your desktop pc.
In my experience your average Joe carries most of their important stuff on their phone anyway (photos, contacts, payment options, email, instant message history) and very little of importance on their laptop.
Why can't they be MORE MODULAR? See also cellphones.
It shouldn't be hard to have a sane default setup, but also have the opportunity to easily have your own kind of dock, or add and subtract widgets, even switch between tiling and not, etc.
a few years ago everything that mattered was on the local machine (apps + files) and there was some interoperability among apps (+ standards around files)
now most of what matters is siloed in saas tools and we access those through browsers or siloed browsers
You posted someone's pretty generic tweet to HN and then your HN comment is the tweet with which you replied to the original tweet. It seems kind of weird to loop HN into a twitter conversation given there's already, I dunno, twitter.
Photo management? Image editing? Video editing? DAWs? Games? The browser itself? Compilers... static analysis tools ... a myriad of other native developer tools ?
I could go on, but what on earth is the "everything" that no longer matters? If you had said "everyone's gone to mobile", well, that's a claim. But "everyone's gone to saas and browsers" ... that's not even wrong.
Imagine a computer without internet. It's way less useful. Everything that matters is online now. Poettering wants to get rid of /home on Linux, for example. You can even do all that stuff online by SSHing into a much better computer.
>You can even do all that stuff online by SSHing a much better computer.
I pay 20€ per month to "SSH" into a worse computer and by "SSH" I mean I effectively do VNC over WebRTC. The video encoding performance of cloud servers is just abysmal. I'm forced to use 720p. Meanwhile my desktop can easily run 8 1080p VP8 streams at once without breaking a sweat. How much did it cost? 350€ for a Ryzen 2700 with 8 cores, 32 GB RAM and a new mainboard. I'm not counting the cost of the case or PSU because I already had those and there are super cheap options already. After 1.5 years (or 2 years if you include case, psu, storage) the server will have cost me more even though it performs far worse. Why would I still pay for the crappy server? Because it's a shared user experience. I'm not the only one on that server. My friends are there as well.
You know. If I actually used Stadia or Nvidia's alternative I'd be their worst customer. I'd run their dedicated GPU instances all day even if the only thing I do is listen to music with that instance while I'm playing games on my desktop.
In many high-security environments, the computers cannot be connected to the outside world. And yet those computers are still necessary and useful.
Many of us were using computers before the Internet became popular and they were still very useful despite their limitations.
Everything that matters is not online. Many businesses and government agencies have data that they would never want disclosed, let alone available on the Internet.
You clearly do not use "creative" software. While you might draw from the internet to do video/audio work, ultimately the stuff happens in front of you, burning cycles on a machine directly connected to your keyboard (or, potentially, accessing some compute-server farm that is 1 IP hop away).
Sorry to say this, but I have been seeing this 'rethinking of the desktop' trend for a couple of years now. Every now and then someone comes up with a new 'concept', but I have yet to see a real product that is not just a reformulation with a good-looking UI, and whose functionality cannot be achieved with good old keyboard-driven workflows in GNU/Linux.
Back in the early 2000s I had the idea for a desktop OS where the local apps were all just http servers, and every OS “window” just a “browser”.
My original idea involved a custom markup ui language akin to WPF because at the time HTML was not nearly as up to the task. HTML could probably handle it today.
Between Electron, WebOS and ChromeOS some of the ideas have been tried but not to as high of a degree.
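A minimal sketch of the idea in Python (illustrative only; not WebOS or ChromeOS code): each local "app" is a tiny HTTP server bound to localhost, and an OS "window" would simply be a chromeless browser pointed at it.

    # Toy "app as a local HTTP server"; the OS window is a browser at http://127.0.0.1:8000
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"<html><body><h1>Notes</h1><textarea></textarea></body></html>"

    class AppHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(PAGE)

    if __name__ == "__main__":
        # Bind to localhost only: the "window" talks to the app over loopback.
        HTTPServer(("127.0.0.1", 8000), AppHandler).serve_forever()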
Is there a paper about how their computing paradigm works? Basically the only thing I got from that page was, "damn, I'm glad I don't need to use gluesticks when I code."
The computer is the most remarkable tool that we've ever come up with. Yet the modern computer is the equivalent of a bicycle with stabilisers: the added support might be nice for children, but comes at the cost of agility and manoeuvrability. Do we really need the PC to become another device used exclusively for entertainment?
If you can show me the invested COW cost of Electron is low enough, I can live in Electron apps.
If Microsoft re-works W10+ to be a thin shim over the kernel and Office works on Linux clean-port, not webapp, I'll be fine too.
Mostly what people want is the plan9 plumber: the ability to point and select things, and have s/w contextually work out how to DTRT with it. This is ORM. Corba. whatever.
Patents suck. If you depend on patented methods to make font hinting work, you are evil.
Can you define what some of your acronyms mean? COW, DTRT, ORM. I think I know what an ORM (Object Relational Mapper) is, but not in the context you're using it.
Arbitrary cut-and-paste of text, including or excluding hints, fonts, markup, images, "things", requires you to pass metadata about what has been cut and pasted. That's what the various things like Corba tried to do: define meta-state about things.
I probably misapplied ORM above. It wouldn't surprise me if somebody said "wrong acronym". Copy on Write (CoW) is how you avoid duplicating memory for code or data which doesn't change: when you do what unix does in fork()/exec(), your initial state is that anything which hasn't been written to is the same, so it's zero (kinda) extra cost. If you write to that bit of memory, THEN it copies; that's Copy, on Write. Then you have two, because one of them changed. So, if all Electron-backed apps mean the 200+MB of Electron crap is copy-on-write, it's not 200 for mail, 200 for Word, 200 for Slack; it's 200 for "all of them" until one of them writes to memory: it reduces the memory footprint.
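For anyone who wants to see the mechanism rather than take it on faith, here's a small Unix-only Python illustration of copy-on-write after fork() (it demonstrates the general kernel behaviour, not Electron specifically):

    # Unix-only: after fork(), the child shares the parent's pages until it writes to them.
    import os

    data = bytearray(200 * 1024 * 1024)   # ~200 MB, standing in for a shared runtime

    pid = os.fork()
    if pid == 0:
        # Child: reading touches shared pages, so no extra physical memory is needed.
        checksum = sum(data[::4096])
        # Writing dirties a page; only that page gets copied (the "write" in copy-on-write).
        data[0] = 1
        os._exit(0)
    else:
        os.waitpid(pid, 0)
        print("parent's copy unchanged:", data[0])   # 0, the child's write stayed private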
We all rely everyday on ancient and modern innovations copied from others; from farming to DDR3. It's just that now patents can lock up that innovation for decades in a system so dense you need a small army to look for possible infringement before you take even a step. Or just hope that forgiveness is cheaper than asking permission.
Of course without any patents many things might be lost, never invented, or kept secret for ages.