Everyone seems to forget why GNOME and GNOME 3 and Unity happened (liam-on-linux.dreamwidth.org)
418 points by JetSpiegel on July 27, 2022 | 397 comments



I think that MS here is being used as a convenient scapegoat. This worry was never expressed at the time in significant terms. The FAT and SMB patents were much more of a worry than anything related to the desktop interface - only outright clones were being pursued.

At the time, KDE, GNOME and Ubuntu developers alike were simply drunk on popularity. Linux usage was in ascendancy, money was being thrown around, and the FOSS world was starting to attract young designers who saw it as a cheap way to build professional credibility. And then the iPhone happened and the whole UX world just went apeshit. The core teams really thought they had a shot at redesigning how people interact with computers, "like Apple did with phones". Interaction targets moved from keyboard+mouse to touch screens, because "convergence" and the fact that the mobile sector was suddenly awash with cash.

It's sad that people try to justify their missteps in this way. Microsoft was (and is) a terrible company and a constant threat to the FOSS ecosystem, but defining some of the biggest design choices of the Linux desktop only in antagonistic or reflective terms does a real disservice to those projects and the people who worked in them.

If experience is the name we give our errors, refusing to accept errors were made means stating you've learnt nothing.


> money was being thrown around

I'm not really educated about the GNOME world, but I was involved on the KDE side of things (very minor contributor). Money was _never_ thrown around. At best, a handful of engineers from a few different companies (TrollTech/Nokia, Novell/SUSE, Red Hat, Canonical) were hired to work full time and that was pretty much it. I don't think more than 10-15 engineers have done paid work on KDE at any given time. Most of the contributions came from volunteers.

There were a few attempts on the KDE side to get paid by Intel to develop an office suite for mobile, but that was also pretty small.

And to this day that is still the case with KDE. Very few engineers are paid to work on it, hired by an entirely different set of companies like Blue Systems, or by Krita trying to make a living off very modest donations and contributions.

KDE is one of the biggest software projects out there. It was the first SVN repository to reach 1 million commits, etc etc, and most of it has been volunteer work. Any claim that money was thrown at KDE really upsets me, because it would be a complete mischaracterization of the nature of the project and the motivations of the people behind it.

Your main point is valid though. FAT and SMB patents were more serious threats.


The fact that money didn't necessarily reach the people doing the work doesn't mean money wasn't thrown around above their heads - which inevitably conditioned their choices. Novell didn't pay peanuts for SuSE, for example; nor did Ubuntu build a mobile OS for anything but to intercept money from mobile manufacturers. It was fairly plain to see that developers hired by those companies were given marching orders at various points; and because they inevitably were the core developers, projects steered accordingly.

(I would also add that, certainly in KDE circles, there was also a bit of a cultivated aura of rockstar developers around these full-time hires, with folks making big calls without ever accepting they might be wrong... until several years (or decades) later.)


The fact that money didn't necessarily reach people doing the work doesn't necessarily mean that you can't make vacuous statements.

Name where that money was thrown around — if it was. Don't just name where it wasn't.


I already did: Novell paid $210m for SuSE, plus $50m they got from IBM. SuSE was always a key (if not the key) KDE "customer" and sponsor, beyond Trolltech. That was big money, back then.


A friend who worked at Suse at the time described Novell's price as a fair price for just the consulting division plus nothing for the rest. I don't remember that $50m or what it paid for.


And just after SuSE was bought, almost all the KDE devs working at SuSE got fired.


[flagged]


If that's about me, I was in bed asleep. You posted that at six in the morning, in my timezone.

$210m is lowish price for a company with several hundred tech employees and a fine list of enterprise consulting customers. A friend who worked at Suse at the time described it as "a reasonable price for just the consulting division".


Microsoft purchased Nokia’s mobile division in 2013 for $7bn.


And afterwards sold Qt and stopped financially supporting Calligra (the office suite). None of that money went to KDE, it just made the small cash flow towards KDE completely stop.


Just some companies I remember: Eazel. Ximian.


Even after FAT was cleared up, exFAT was under patents until around 2019, when Microsoft declared they wouldn't enforce them against Linux. That was kind of an issue for a while, as exFAT is the mandatory filesystem for SDXC cards. If you had an SD card >=64GB before then, Linux wasn't mainlining support because the situation was too risky.


Microsoft’s exFAT Linux grant also doesn’t extend to other free software. The patents expire approximately now (2022), though, so maybe it is a moot point finally.


Is it? I'm pretty sure I've formatted them to ext2/4 before and had no issues. Seems more like SD cards are used in devices, and devices stuck to exFAT. I had to format to ext to make my 256M SD card work in my N900.


IMO that's nitpicking. Of course filesystems like ext4 work with pretty much any block device. If the standard requires only exFAT support, devices like cameras aren't gonna support anything else.


Similarly, I have a digital audio player from Shanling, and even though the device runs the Linux kernel, the only supported formats for the microSD were FAT32, NTFS, and exFAT. When I emailed about it, they said most users don't have devices to format cards to ext4, f2fs, etc., and there wasn't much more to say; I mentioned that ChromeOS and Android have mkfs right there to use.


I'm just not sure what you mean by "mandatory" here. Do you mean that manufacturers were required to ship cards formatted with exFAT, or they wouldn't be allowed to manufacture SD cards? I actually don't understand what you said, I'm not trying to be misleading.


They mean that exFAT is the only filesystem format a host device that accepts SDXC cards is required to understand, in order for that device to be certified as SDXC-compliant / get to put an SDXC logo on the box. (Also, that an SDXC card that came formatted as anything other than exFAT wouldn't be able to put the SDXC logo on the box.)

By formatting your SDXC card with any other filesystem than exFAT, you won't be able to use it in all the devices that are "minimally compliant" with the SDXC spec in that they implement exFAT support and nothing else. Where "minimally compliant" devices include: literally all digital cameras, all portable game consoles, etc. — pretty much the whole ecosystem of devices that would ever give you a reason to own any SDXC media in the first place.


Thanks for this; until now I didn't understand what was meant. To be certified as taking an SDXC card (and by that to have the right to display the logo), a device was required to be able to read exFAT.

Mandatory in this case does not mean that it couldn't be formatted with something else, it doesn't mean that devices weren't allowed to read cards formatted differently, and it didn't mean that manufacturers or distributors of cards were forced to use exFAT, as long as they didn't care about the logo, and weren't concerned about customer support being overwhelmed by people screaming that their new card was broken. It does mean that if Linux couldn't read/write exFAT, it would be incompatible with most available devices, and people would be screaming about how their Linux was broken.

Got it.


Extra wrinkle: the filesystem you have to use changes based on the capacity. SDHC and SDXC are basically the same protocol, but one mandates FAT32 and the other wants exFAT. This is based on the same 32GB limit that Dave Plummer snuck into Windows NT on a whim[0], even though FAT32's structures can go up to 2TB and Windows can totally parse filesystems that large.

Some devices actually will take reformatted cards that are "technically" "out-of-spec". You aren't supposed to be able to use SDXC cards with FAT32, but devices that never bothered with exFAT will still accept them. This is in fact the only way you can even use a large card on a Nintendo 3DS.

[0] https://www.youtube.com/watch?v=bikbJPI-7Kg
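
If anyone wants to try that reformat, it's a one-liner on Linux. A rough sketch, not a recommendation - the device path is a placeholder, so double-check which device your card actually is before running anything destructive:

  # /dev/sdX1 is hypothetical; run 'lsblk' first to find the real partition.
  sudo mkfs.vfat -F 32 -n SDCARD /dev/sdX1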


Exactly. Unless you use SD cards exclusively on PCs and Raspberry Pis, formatting them with ext4 is as bad as changing the edge connector pinout.


I think he means default mandatory file system. You can reformat it if you want.


I feel like GNOME and KDE may have very different stories money-wise; as all the commercial "we'll sell you a support contract" players in the Linux space these days — meaning, in the last 20 years — ship GNOME-based distros.


Yep. While Red Hat doesn't "own" GNOME by any traditional definition, they definitely pay for a controlling stake in its direction.


There is no "pay for a controlling stake", they either send engineers to work on it or they don't. GNOME is not a corporation, you cannot buy stock in it.


That's more-or-less what I'm referring to. When you're dealing with a foundation that hasn't bothered to fix decades-old issues in their software, sending engineers and paying for man-hours is exactly what gets things done. Since you're paying the engineers, you decide what they make. That gives you a considerable amount of control over the ecosystem, and explains how Linux users ended up in a development tug-of-war with enterprise patrons.


That still does not give you any more control. Anyone can still change the project or fork it or do whatever; they just don't, because the engineers working on it are the only ones who get anything done.

Anyway, the way to fix that would be to donate more to the foundation or to individual developers. In most open source projects, the non-profit foundation is not usually the primary source of engineers because they make a tiny amount of money compared to what the big enterprises do. But it is not very likely you will ever escape that tug-of-war, because developing an entire OS is very expensive.


Sun also shipped GNOME with Solaris, so I imagine they also had some money invested there.


That makes sense - wouldn't they need the expensive enterprise Qt license if they went for a Qt-based distro?


> KDE is one of the biggest software projects out there. It was the first SVN repository to reach 1 million commits, etc etc, and most of it has been volunteer work. Any claim that money was thrown at KDE really upsets me, because it would be a complete mischaracterization of the nature of the project and the motivations of the people behind it.

I think people also forget that the core of KDE is Qt. When you consider how capable and powerful Qt is, it kind of makes sense that KDE can thrive the way it does. I don't want to take away from KDE, because they no doubt build their own things on top of Qt, but I would assume a lot of the complex heavy lifting is done by Qt in aggregate.


so .. if there was no money why were KDE folk acting like Mozilla ppl? :o

I mean when Plasma came around it was so full of bugs, the useless activity thing, etc., and bug reports were just ignored (or the only response was "get HEAD and reproduce on it, otherwise no one cares").


Well, the reason KDE 4.0 was so buggy was exactly "because" KDE ran on volunteer contributors. Porting from Qt3 to Qt4 was a lot of work. Some volunteers completed that much, much earlier than others but had to wait for other parts of the project to ship. More complex parts weren't ready yet, and as such, some contributors were losing interest. So KDE had to ship it at some point to make sure the train kept moving.

It was the combination of the massive, massive hype train that KDE generated, distros not listening to KDE's ask about how 4.0 should be treated, and earlier versions not being shipped alongside it, that caused the massive backlash. KDE learned from those mistakes, and the 4-to-5 transition and the 5-to-6 transition now in the works both avoid them. But KDE did fix those bugs. Within a few releases, KDE 4 was quite usable and stable, but the damage was done.

In my opinion KDE hasn't really recovered from the bad image caused by that, despite the fact that it's in a much better position now.


Every GNU/Linux desktop release was like that. GNOME 1.4 had Nautilus bolted on, and it was very slow. KDE 3 had a constantly crashing KWin that restarted itself after every crash. After many releases they become somewhat stable, but by that time new developers come along and complain about the cruft and passé coding styles, so a new version gets started in development.


> In my opinion KDE hasn't really recovered from the bad image caused by that, despite the fact that it's in a much better position now.

Making incompatible changes between releases (3 to 4, 4 to 5) certainly didn't help. Who in his right mind would want to rewrite his program every time the main library changes?


Gnome/gtk is no different here.


It feels like an eternity now. I left KDE some time after the KDE 4 release. Before that it felt quite polished. It never regained that polished feeling, as long as I followed it after KDE 4 released. But I hear it is good now, so perhaps it finally got it back.


I recently had to dive into the world of linux GUI file managers and their various accessories after living for years pretty much exclusively with just i3wm and the terminal for that sort of basic stuff.

I, uh, can't say 'polish' is a word I ever associated with the likes of Dolphin and co., or any of the alternatives, if I'm being honest. I suppose there's a bit of a decent veneer there if you only judge them by screenshots and aren't terribly critical of the standard 'modern' sort of design in general.

But that falls apart real quick if you dare to think taboo words like 'configuration' or 'option.' Though I suppose at least on Linux you're just in for pain as you try to figure out what to configure, and how, and disappointment when you accept that you can't actually get everything the way you want, and that whatever subset of features you want isn't actually available together. But you can make choices. You won't be that happy with it, but at least you aren't stuck with the worst option.

To be fair, I couldn't even tell you if I ever used KDE 4 or not. So, I'm utterly useless as a point of reference. I'm just still bitter.


TBH I use Midnight Commander as a file manager, but I found Xfm to be fast and configurable. KDE is slow.


This logic seems rather backwards. You think they weren’t addressing bugs because they had money?


no, I mean the core KDE developer community acted like they were flush with money, users, etc.


get HEAD and reproduce

would make a perfect KDE t-shirt


> KDE is one of the biggest software projects out there. It was the first SVN repository to reach 1 million commits, etc etc

Doesn't KDE use a monorepo to store each and every project under the sun whose name starts with a K?

I vaguely recall from way, way back that even the most basic of development tasks, such as building kmail, forced developers to check out all the other projects.

This doesn't look like something to brag about.


KDE was a big monorepo but SVN allowed partial checkouts.


I agree MS is probably being scapegoated here. I was using Linux around this time and I don't remember hearing about this threat either.

> At the time, KDE, GNOME and Ubuntu developers alike, were simply drunk on popularity.

But I don't agree with this.

I think a more charitable explanation is they listened to a loud minority, one I would have been part of.

I used Gnome 2 at the time, but I also changed a lot about it. It's been a decade, so forgive me for forgetting most of the specific app names, but: I used Compiz, then later Beryl. I replaced the bottom bar with a dock. I removed the application launcher and instead used the dock plus a Spotlight clone. I switched apps with the Expose plugin provided by Compiz/Beryl. My top panel had a clock, a system tray, and I don't think anything else.

We were definitely loud, but maybe also a minority. Threads, blogs, news sites, etc. constantly had discussions of new apps you could use to mod your Linux (mostly Gnome) desktop experience. I remember cycling through several docks and several Spotlight clones within a couple of years. The people behind Gnome 3 and Unity very well could have seen all that buzz as an indicator that this is what people really wanted. So that's what they built.

But in retrospect saying that you find the defaults fine and there isn't a real need to change them doesn't make for a very interesting blog post. So the people who were just fine with Gnome 2 didn't get heard until Gnome 2 was gone.


I've also little doubt that Compiz/Beryl/Fusion envy was a major motivating factor for starting GNOME3 etc. Compiz gave desktop Linux a massive boost in popularity. Prior to this desktop rendering was primarily CPU bound and did not use compositing. If these projects were born from compiz-envy, then it was for the right reasons, and maybe they could've done a good job.

But then shortly after came the iPhone, and the developers felt like they had to copy design cues from Apple. It was no longer just about improving desktop rendering and enabling new kinds of visuals. They wanted a combined workspace for the desktop and phone, and they wanted a "pattern language" like Apple, so they limited the ways in which users could tinker with their tools, aiming towards uniformity. The GNOME3 designers/developers in particular had a very pretentious attitude and ignored complaints.

It might not have been so bad if they had succeeded right away, but the earliest releases of GNOME3 and KDE4 were heavily bug-ridden. The whole desktop would crash, or come out with strange glitches. Sometimes you would lose your desktop configuration and had to start from scratch. People who were using Linux for real work would have to revert back to something more stable, and the newer desktops had already left a sour taste.

None of this had anything to do with Microsoft.


> I think a more charitable explanation is they listened to a loud minority, one I would have been part of.

Having seen this type of thing play out in several industries, including other open source projects, I would say it was more likely a case of following the competition.

Users sometimes get listened to. Oftentimes that's only to retroactively justify decisions that already got made.

What gets attention is competition. Incumbents are watched like hawks by everyone for obvious reasons. And the incumbents themselves are paranoid of newcomers or new ideas.

I worked in software at a company that developed CPUs. "Intel is doing this" or "Intel is adding that instruction" was the quickest and easiest way to get the attention of CPU designers. Not "our customers want this" or "that instruction will speed up this software our customers run". Those things would be considered, but you would have to do a lot of work, modelling, and justification before they'd look at it.

There's actually good reasons for this. Intel having done something, you could assume they had already done that justification work. Them deploying it meant customers would become familiar with it and accept using it (or even expect to be able to use it). So it's not necessarily stupid or lazy to put a lot of weight on what competition is doing, it can be very efficient.

You can get into these death spirals of the blind leading the blind when everybody starts fixating on something if you're not careful, though. I'd say this is what happened with everybody trying to shoehorn smartphone UIs into the desktop back then.


GNOME 2 was my bread and butter. It's absolutely my perception that GNOME devs were "drunk on popularity" at the time (or perhaps more precisely, drunk on power), and still are.

GNOME 3 would absolutely not have been pushed as the future of GNOME so unilaterally if the lead devs had shown any restraint or signs of listening to their users, given how much initial outcry there was. Given that they announced they may be dropping X11 support in GTK5 (!) [1], I argue they are still drunk to one extent or another. X11 is the display server that most desktop Linux users use as of 2022! Have an ounce of respect for your users, and at least ask if that's what people want / are comfortable dropping.

I get it, it's their project. But if a good chunk of the Linux userbase uses your software, then it's good to at least ask if people are OK with the direction being taken.

[1]: https://www.theregister.com/2022/07/05/gtk_5_might_drop_x11/


>GNOME 3 would absolutely not have been pushed as the future of GNOME so unilaterally if the lead devs had shown any restraint or signs of listening to their users, given how much initial outcry there was

I really doubt that. MATE exists as a continuation of GNOME 2 and it is just not very popular. The users who care about such things seem to be a very small minority.

>X11 is the display server that most desktop Linux users use as of 2022!

This is not the issue. The issue is that the number of developers who are interested in working on the X11 support in GTK is shrinking. If there are no developers to actually work on it then it simply cannot be done; what the users use is not relevant anymore.

>Have an ounce of respect for your users, and at least ask if that's what people want / are comfortable dropping.

>if a good chunk of the Linux userbase uses your software, then it's good to at least ask if people are OK with the direction being taken.

There is no point in asking this. If you ask most users whether they want you to continue doing free work for them indefinitely, of course they will probably say yes. There is a contingent who never want anything to become deprecated, just out of principle.


Ironically there's someone linking[1] today to someone with basically the same complaints about how Firefox has fewer customisation dialogs than it used to.

They literally post screenshots[2] showing fewer options (e.g., the "Close it when downloads are finished" checkbox has gone! Oh no!!).

I was thinking exactly what you've posted here.

[1] https://news.ycombinator.com/item?id=32258822

[2] https://digdeeper.neocities.org/ghost/mozilla.html#historyof...


Does anybody remember Ubuntu from the early 2010s era, when I downloaded my first copy?

Ubuntu was talking about scopes and lenses in your Dashboard (basically, internet content in your start menu, which they thought was revolutionary), Ubuntu TV (nobody adopted that), and later Ubuntu on Phones and Tablets, with a (to put it mildly) bastardized Ubuntu 16.04 install that some phones are still stuck on to this day in UBports. That was in part because they were actually an Ubuntu/Android hybrid system with some Android driver compatibility blended in for good measure, since they couldn't get all the drivers running in GNU/Linux, so you had a mess of Android and GNU/Linux drivers simultaneously. And after all that, the headline feature was scopes again! Not better privacy, not features any user otherwise really cared about, just scopes and lenses again and "it's Linux, on phones, with a Qt interface!" Oh, and it will have convergence in the future.

Convergence, convergence, that was a buzzword for a while. A neat idea... but Microsoft had the same idea, tried it on Windows Phone, and then completely abandoned it, years after Ubuntu had the idea and years before Ubuntu cut their losses.

That was a completely unnecessary disaster. I love Ubuntu as much as anybody, but looking back, it 100% deserved to fail as much as I would have wanted it to succeed. It's just an example of how, sometimes, it doesn't take an evil competitor megacorporation trying to undermine us - sometimes we've got plenty of blame ourselves for not succeeding.


This is what I remember too. The author lays out a lot of the "why" detail on the business/legal side that I didn't know. But what I do remember is waking up one day in 2009 or whatever and discovering that everyone had drunk the tablet/convergence Kool-Aid and that Unity and GNOME3 were both going full modal/dock. At the time I actually just assumed they were doing it to stay relevant with the impending Windows 8 release (heh), which was shooting for all the same goals with mostly the same ideas.

I also recall being annoyed that they were ripping away something (GNOME2) stable that worked just fine, but also that Unity was an absolute cinch to learn and a true joy to use.

In the end, though, I ended up on KDE Plasma.


Back in 2004(?) they would ship CDs to Colombia; even if you put like 20 in the form, you would get them a few months later and could share them with friends.

Those were the days, we laughed and laughed.


[Article author here]

This is four years before that.

You are confusing effect with cause, or as we say in the British Isles, "putting the cart before the horse".


Yes, it was a crazy time for Ubuntu, especially for someone who first got introduced to it in 2007 and really enjoyed using an eeePC with it.


That is crazy. I had the same experience, but in 2009 and with Ubuntu Netbook Remix. Now that was incredible. Perfect for small screens.


I was gutted when they stopped supporting the netbook remix, that was bloody peak UI/UX for smaller screens.


And MS Windows 8 was horrific in the same vein as GNOME 3 and Unity. It looked at the time more like follow-the-leader from the most ambitious Linux vendors (Red Hat and Ubuntu).


I've never had any issue with GNOME 3 or Unity or GNOME Shell or whatever it's called. I started out on Linux with Slackware and Window Maker. It was beautiful! And fast! And copy and paste didn't work between applications. I grudgingly used KDE for a while, and GTK. I then went OS X for a while, and when GNOME 3 and Unity came out it had all my favorite bits about OS X without the clutter. I went online to find oodles of hate. I never have understood it.


> I went online to find oodles of hate. I never have understood it.

You update your workstation one day to find the distro put a new UI on it. OK, this is interesting but you need to get on with some work, so you just switch back to the old one. That option's gone. Hmm, OK well let's have a quick play with this new thing... Wait, you can't move anything around to where you're used to? WTF. And there are whole different window-interaction paradigms, odd choices about workspaces over window manipulation. Huh... well at least you can still run the programs you need to. Wait, why isn't the messenger app you need for work appearing in the notification area? What is that in the notification area? A different app that you can't remove, don't need and with no way to replace it? WTF.

OK you really do need to go back to the old version. But the distro repos don't even have it, so you can't even install it. WTF? Is it just gone? You were just using that yesterday!

You go online to investigate and find out there's been this major update, some distros have switched to it already, and it seems like it was a deliberate choice to make it impossible to run the old thing alongside. The old thing is now abandonware, instantly, and if you want to work the way you have been for the last ten years, you're wrong, old fashioned, out of date, stifling open source and generally a problem. Wow OK.

Does that explain it OK?

I'm not saying the vitriol online is all justified, but this is how a lot of us first encountered Gnome 3/Shell. Personally at that point I'd already been using Xfce on a low-powered machine, so I just switched over, and within a couple of minutes I had my desktop back how I like it, and got on with my life. But I'll always remember that at some point the Gnome folks (and the distros share some blame) actively ripped away my way of working, and told me I was a dick for not thanking them for it.


Ok, yeah, that would upset me as well. People always just complained about aspects of it, and I was thinking of it in regards to home use. My workplace stuck with an ancient RHEL version for at least a few years after I had GNOME 3 at home. So even if I hadn't been used to the OS X UI and to doing everything from the CLI from Slackware, I wouldn't have had the "I just need to get this work done!" moment, because I'd been exposed to it for years before it was my work environment, at which point I welcomed it. Your introduction and your reaction make perfect sense.


>Does that explain it OK?

No. The things you are saying are completely normal and expected, and I cannot see why you would be upset by that if you had been updating a Linux system (or any OS) for the years prior to that. Every OS has changed their UI, some have done it several times. Remember the initial switch to Windows 8, or MacOS X, or from GNOME 1 to GNOME 2? And even before that, how Linux distributions had frequently been switching between window managers or from GNOME to KDE and vice versa? It makes no sense to me to take issue with this one specific case. For a project that is trying to keep attracting new users, it is unreasonable to expect them to never change anything or never refactor the UI.

The real WTF is using something like XFCE and expecting nothing to change for 20 or more years, that is totally out of the ordinary with any comparable retail tech product.


> No. The things you are saying are completely normal and expected

They really aren't.

> Remember the initial switch to Windows 8

Yes, it was terrible, people hated the change and that it was hard to make it work like the older versions, it's still famous for being horrible and unusable and MS basically rolled it back in the face of overwhelming criticism. (Windows 8 is actually a really good point of comparison here, I never claimed that only the gnome people have done this)

> Linux distributions had frequently been switching between window managers or from GNOME to KDE and vice versa

Yep, and you could always switch back and forth, easily. Gnome 3 was scorched earth.

> The real WTF is using something like XFCE and expecting nothing to change for 20 or more years

Yet here I am, and here most people on Windows are, and on OS X. MacOS X from 2001 is still recognisable today. Move the start button on Windows 11 to the left and you've basically got an evolved Win 95/NT 4.

Evolutionary changes and improvements are great. People choosing to work differently is great. Designers exploring different ways of working, great. Wading in and hamfistedly breaking everything about how people work, then telling them they're wrong for wanting to keep working that way, is virtually never well received.


>They really aren't.

Actually they are; I cannot name a single system that has not changed in some significant ways since the 90s, unless development has slowed to the point where it basically stopped and became a time capsule.

>Yes, it was terrible, people hated the change and that it was hard to make it work like the older versions, it's still famous for being horrible and unusable and MS basically rolled it back in the face of overwhelming criticism.

I think this is still a poor example to illustrate your point. They rolled back some aspects, other aspects were changed in Windows 10 and now again in Windows 11.

>I never claimed that only the gnome people have done this

Yes, my point was that they all do it. It makes no sense to fixate on GNOME doing this.

>Yep, and you could always switch back and forth, easily. Gnome 3 was scorched earth.

I am not sure what this is supposed to mean. You could always switch easily; even now there are many distros that offer GNOME only as an option, or do not really have GNOME as a supported option at all. Maybe you were using a distro that only supported one option, but it was always within their authority to do that, just like they could decide to only offer KDE Plasma as an option. So this complaint still makes no sense to me.

>Yet here I am, and here most people on Windows are, and on OS X. MacOS X from 2001 is still recognisable today. Move the start button on Windows 11 to the left and you've basically got an evolved Win 95/NT 4.

Again that is describing one specific area, other things have been completely redone or rewritten. And I have spoken to some other people who were very bothered by that change and didn't like it.

>Evolutionary changes and improvements are great. People choosing to work differently is great. Designers exploring different ways of working, great. Wading in and hamfistedly breaking everything about how people work, then telling them they're wrong for wanting to keep working that way, is virtually never well received.

Just to clarify, it is very possible for a user to be wrong about something. If someone is telling you they are never wrong, that will virtually never be well received either. It is very hard for me to interpret this in a way other than that you're describing changes that don't bother you as "evolutionary" but ones that do bother you as "hamfisted". Try to be a little bit more objective about this, perhaps if you feel like it, go through a list of many changes across these and compare them. You might be surprised as to what you find.


That's exactly what happened for me as well. Except I went with i3 (window manager) + MATE (GNOME 2 fork) for the desktop utilities.

Over time the stupid choices of GTK 3 are getting on my nerves more and more. Now I have to patch the crap just to be able to navigate in the file browser by typing the beginning of filenames (the default now starts a laggy and buggy file search; they removed the old code).


I've met people who loved Windows 8. Different strokes.


There were things I loved about Windows 8 over 7. The UI, however, was not one of them. For me, the killer feature of 8 over 7 or earlier was Hyper-V. Having local, easily usable VMs was awesome. Now, with WSL 2, my need for Hyper-V has diminished, but it's still useful at times. Hyper-V was far more usable to me than VirtualBox. I'll admit I never tried VMware Workstation, but I also wasn't going to pay what they were asking (at the time).


WSL 2 runs on top of Hyper-V.


This omits another IP-related issue of at least equal significance. KDE is based on the Qt UI toolkit library, which had an encumbered license at the time the GNOME project got started (1997) -- and the first free software license it was released under, the QPL, was not GPL-compatible. This was resolved in 2000 (with a GPLed release of Qt), but by then, GNOME had its own momentum.


We are talking about the era when GNOME 3 / KDE 4 / Unity were born, which was around 2007. By then, the QPL issue had long been resolved and nobody cared anymore.


Title on the piece is "Why GNOME happened". The reason why GNOME happened is QPL, not legal threats from elsewhere years later.


Yeah, but the piece actually moves on from GNOME 2. The title is as confused as the author, or maybe intentionally deceptive.

I agree that QPL was why the original GNOME started.


> If experience is the name we give our errors, refusing to accept errors were made means stating you've learnt nothing.

That's a line worth memorizing. Well done.


I have to admit I had great help: the first half of the sentence is by Oscar Wilde.


> And then the iPhone happened and the whole UX world just went apeshit.

Oh, I remember when I could go to Medium every week or so, and there were numerous useful (or somewhat useful) articles tagged with UX - and then it slowly died out.


And it's quite sad. Over the years it feels like UX has become stagnant and rather sterile.


In contrast, I'm so so happy that UX has largely settled on "this works and people know it" researched best practices. We don't need to be reinventing the steering wheel or AC controls for no reason (looking at you, Tesla).

The 90s and early 2000s were a hellscape of design nightmares between geocities, MySpace, iframes, and everyone and their grandma trying out new designs. 99% of it was trash. The bog standard website doesn't have to be someone's art project anymore, and that's a great thing for usability.

UX isn't stagnant, it's just matured since its infancy. (edit: for example, consider Google's material vs Apple's design systems, where they agree or disagree, even though they are both pretty mature by now. What should a primary call to action look like so it's obvious? How should you navigate a multi layered app? How should notifications work? Etc. Still a lot of interesting problems, they're just more nuanced now. Or look at some of the best selling design apps with complex features and UIs, they are innovating plenty. Or dashboards. Or DevEx stuff. Lots of fun new developments out there, even though some of the older problems have been satisfactorily solved!)


Material is mature in the same way that a corpse or old cheese turns "mature". Mature, sure it is, but is it acceptable? I would say it's not.

Sometime after skeuomorphic design started dying off, UX went in an extremely bad direction. UIs have dropped key affordances and visual cues in favour of oversimplified design elements. UI/UX designers started to optimise for juicing the maximum amount of money and attention out of users too - the focus on "calls to action" and notifications is one example. Features that are 10x force multipliers for users when used sparingly in designs have been overused to the extent that they become cognitive burdens. Apps are getting redesigned on a whim, confusing users who cannot adjust.

UX designers are having fun with the new landscape, but users are not. It's time this is recognised, and designers start to pull more pages from Apple's design handbook, and not Google's.


> UIs have dropped key affordances and visual cues in favour of oversimplified design elements.

Definitely agree, it's not quite as bad in enterprise products, but consumer products definitely prioritize aesthetics and visual design over usability.

> UI/UX designers started to optimise for juicing the maximum amount of money and attention out of users too - the focus on "calls to action" and notifications is one example.

In my experience most of those user-hostile changes typically came from A/B testing and short-sighted product managers.

Good designers (and developers too) will push back when management asks them to implement dark patterns so they are certainly not blameless, but at least in my experience it's not quite accurate to say that these changes come from UI/UX designers.


Yeah, "optimizing for engagement" too often becomes "optimizing for addiction" =/


Those tools are there to be used or abused... sadly. It goes both ways. Having these frameworks makes it a LOT easier for your average dev or low-skilled dev-designer hybrid (like me) to put together something quite a bit more usable than I would've been able to create a decade ago. It's a huge improvement from, say, WinForms controls with a bunch of human interaction guidelines. Most of the engineers and devs I've met are not good with UX and UI at all, and in that sense Material is a huge upgrade for them.

But yes, its ease of composition also facilitates dark patterns and the gamification of addiction, etc.

I don't know that is really Material's fault, though, or that iOS design guidelines would magically protect the user from user-hostile business decisions. Those are made long before questions of IA or "which UI widget do I use" come into play, popularized by evil advertising companies (yes, Google is one of them) and lootbox vendors and free-to-play-for-like-5-min-and-then-it-costs-a-limb pricing models.

FWIW, I'm not a diehard adherent of Material or anything. At work, I'm currently pushing for our company to NOT use Material for our iOS app, even though it'd be a lot more work, and even though I kinda like Material myself. It just doesn't seem like the right thing to force upon our iOS users.

That said though, for web especially, Material is a lot better than nothing at all, or vanilla Bootstrap for all but the simplest sites. Once you have to start designing custom data tables, autocomplete, chips, cards, etc., Material has put waaaaaaaaay more thought into them than your average team would be able to afford to. For small to medium businesses, it's a godsend... it's not usually a choice (for web) between Material and iOS design, it's just Material or some custom in-house thing that accounts for like 10% of the complexity. There are other web UI frameworks too, but not many are as feature-complete as Material.

If there were a similar framework from Apple, that isn't iOS/mobile-specific, I'd strongly consider that too (is there? did I just miss it?). Or even if you have specific examples of "Material says to do it this way, but Apple's way is better for the user because _____" I'd love to see those and learn.

And for Windows, ehhh... Microsoft gonna Microsoft, and the world will leave them behind. shrug


> UX isn't stagnant, it's just matured since its infancy.

I'm sorry to tell you, but the kid got polio and, as an adult, is quite disabled.


UX? I would be even happy with more UI variety.

I remember Jobs and Gates being asked at a D8 conference in 2008 how they imagined desktops in 10-15 years.

Gates spoke about voice and motion as controls, while Jobs argued that on the desktop there wasn't much to change, because customers were probably already used to things being a certain way and any disruption without a net advantage was just not gonna happen.

He also said that the only chance for a UX change was mobile devices, as they were a fresh start for everyone.


I write that off as the "magnetic aura" effect. When so many apps/pages are being written, the ones whose UX is closest to the UX the user already has are the ones that are going to be successful.

You can deviate only slightly from the existing 'critical mass' or risk not getting adopted.


FWIW I agree. Old UI/UX wasn't perfect but it beats deserts of blank space and hamburger menu hellscapes.


Stagnant? No. The favourite sport of UX people is moving elements around, autohiding things and moving things to the titlebar.


UX for VR/AR must be a developing subfield right now, no?


Is anyone still seriously pursuing that? I thought it was a burst bubble by now.


VR hardware is barely entering the stage where it's affordable, yet becoming more useful for mainstream tasks. It's far from being a bubble, let alone from bursting.


I feel like I've been hearing this since the Nintendo VirtualBoy, and then again with the failed VRML craze, and then again with Oculus, and then again with that leaping whale thing, and then again with the Microsoft and Google versions... yet fundamentally nothing has changed. It's glasses that project 3D images. So what? Who actually wants this stuff? The novelty wears off after a game or two, and aside from super niche industries (remote surgeries?) there's never been a convincing use case...


One hypothetical killer app for me is the ability to have unlimited displays arranged spatially (the last part is important - virtual desktops on normal displays just don’t work for me), as well as visualisation of complex system states in a way that can make use of our built-in spatial reasoning abilities. You need pretty dang dense VR displays for this, and the old stuff was nowhere close to being usable.


I can't wait for MS Word VR/AR plugin.


> I think that MS here is being used as a convenient scapegoat.

Yup. Given that many prominent desktop people from that era dispute it here, that all the links in the article point to The Register, and that nobody else seems to remember it happening that way, it is probably safe to disregard it.


Hi. I wrote the article. You're wrong.

I work for the Register now. That's why I pick links to them: I have saved searches that make it quick and easy.

I wasn't working for them then. In fact I only started freelancing for them in 2009 and went full-time late last year.

Secondly, many other now-popular tech news sites weren't around 15 years ago, or the links have gone. The Reg is stable and doesn't delete or move old stories.

This was entirely factual, and if you go Googling you can find plenty of supporting evidence from other sites.

CNN: https://money.cnn.com/magazines/fortune/fortune_archive/2007...

ComputerWorld:

https://www.computerworld.com/article/2566211/study--linux-m...

And so on.

https://www.mercurynews.com/2007/05/15/microsoft-says-linux-...

The fact that some people weren't paying attention and forgot this does not mean it isn't true.


The claim you made originally was not just that there were patent threats: it's that those patent threats were taken seriously by the people building the desktop environments and that the major redesigns were motivated by the threats. You keep posting links supporting the former, none supporting the latter. And it's specifically the latter part that people doubt.

If this had actually happened like you claim, there would surely be some kind of actual evidence of the redesigns being motivated by patents. Links to email discussions, design documents, recordings of presentations, something. But you seem to have nothing, and are instead trying to boost the credibility of the stuff you invented by providing links to the part of the story that wasn't actually under doubt.

Even if there had been some kind of a secret motivation behind the designs, and the publicly claimed justifications were really just a parallel construction, it's completely implausible that this shadowy cabal could have maintained opsec for this long without something leaking to the public.


The fact that this is the quality of journalism that The Register pushes these days is just sad.


  > If experience is the name we give our errors, refusing to accept
  > errors were made means stating you've learnt nothing.
Right into the quotes.org file. To whom shall I attribute it?

Seriously, you've just summed up a significant fraction of both my parenting and my professional life.


The first half is by Oscar Wilde, the second half from my brain - although I cannot state beyond doubt that I didn't pick it up somewhere else in my 40+ years of life.


Microsoft is the reason the Linux desktop is still not a thing in 2022. At every step of the way for many years Microsoft has always done some psychotic ill-intentioned thing to monopolize the PC. Even in 2022.

- OpenGL is cross-platform; we could have had cross-platform games... but no. Microsoft didn't like that. So they made sure nobody used OpenGL so that Windows became the de-facto platform for PC games: http://blog.wolfire.com/2010/01/Why-you-should-use-OpenGL-an...

- Regulators demanded that Microsoft define an open format for Office documents. They did, by creating the most complicated standard they could in a 6000-page document, and then decided not to use it by default in MS Office so that they could keep an advantage over competitors. https://en.wikipedia.org/wiki/Standardization_of_Office_Open...

- For many years Internet Explorer was the standard browser, and the browsing experience on non-Windows platforms had compatibility problems. IE didn't pass the Web Standards Compliance Acid Tests, so a website that respected standards didn't render properly in Internet Explorer. Sometimes Microsoft websites would deliberately send broken CSS stylesheets to non-IE browsers.

- They tried to lock down the PC with Palladium, and they failed. They have now done it with Pluton and succeeded. https://www.gnu.org/philosophy/can-you-trust.en.html

- They tried to compete with google by stealing their top results. https://www.wired.com/2011/02/bing-copies-google/

- They have bought every major PC game publisher: Bethesda, Activision/Blizzard, Mojang, and countless others.

But there's hope...

- Vulkan is the successor to OpenGL, and there is an open source Direct3D implementation on top of Vulkan. Wine/Proton uses this to run Windows games with a very low performance overhead.

- There are open source office suites other than OpenOffice/LibreOffice with a proper UI that do not look like developer art. There are also cloud office suites like Google Docs you can use for free.

- Internet Explorer is dead and its successors are dead.

- Nobody uses Bing and it will likely never be the top search engine.


OpenGL is such a pain in the ass. You'd know if you tried it.

I'll just repost my desktop linux rant from last year since nothing's changed:

---

1. Update glibc and everything breaks

2. Update Nvidia drivers and everything breaks

3. 99% of laptops have at least one device with missing or broken drivers. 802.11ac is very old in 2021, but the most popular ac chips still need an out-of-tree driver which will break when you update your kernel. Have fun copying kernel patches from random forums

4. Even the smallest of changes (e.g. setting display scaling to something that's not a multiple of 100%) require dicking around with config files (see the sketch after this list). Windows 10's neutered control panel is still leagues ahead of Ubuntu's settings app. (Why are the default settings so bad? High-DPI 4K monitors have been out for a decade now. Maybe they've fixed this since I last checked. Or maybe all the devs use 10-year-old ThinkPads.)

5. Every config file is its own special snowflake with its own syntax, keywords, and escape characters

6. Every distro is its own special snowflake, so it takes forever to help someone unfuck their computer if you're not familiar with their distro. Releasing software on Linux is painful for a related reason: the kernel has a stable ABI, but distros don't. You have to ship half the distro with every app if you want it to work out of the box. Using Docker for GUI apps is insane, but sometimes that's what you gotta do.

7. The desktop Linux community seems to only care about performance on old crappy hardware.

8. Audio input and output latency is really high out of the box. That's one of many things that require tweaking just to get acceptable performance.
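
(For point 4, an illustration of the kind of fiddling I mean. A hedged sketch, not an endorsement: on a GNOME Wayland session, fractional scaling is hidden behind an experimental mutter key, and the exact key can vary by GNOME version:

  gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"

After that, the display settings should offer 125%/150% steps instead of only multiples of 100%.)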


This is an accurate description of the different ways it sucks when one insists on using software on unsupported or semi-supported hardware.

Of course there will be unsupported wifi cards on the market. Of course there will be cards which kind of work when you load some unsupported third-party driver, but they will promptly crash on some future version. Those will never be supported in upstream software. Don't buy them and force Linux on them, it won't end well.

Don't buy a Dell PC and be angry at Apple when MacOS won't run. Unless you do it with the intention of working on a port. It's the same situation with any other operating system.

It's also been pretty clear for over a decade that Nvidia is unlikely to be supported in upstream software, as long as they won't use the same standards as everyone else. There have been some recent developments on that front which suggest this situation may not be forever, but don't hold your breath.

Given the choice, it's pretty common to use Linux, at least among developers. One of the great things about it is that it just works. I have now thrown away the hardware but I used to have a workstation with a Debian install that was about two decades old. It had only been upgraded in-place, never reinstalled, and how many libc upgrades that included I do not know. The migration to the new executable format in the 90s was a bit problematic in the beginning, but that worked out well too. That's the sort of stability you can expect from it.

It's been a long time since I bought any new computers, but in the laptop realm, "well supported for a long time" meant the performance ThinkPad series. Now all the big providers sell laptops with Linux pre-installed, so I take it they will work great too. But there are a lot of crappy Windows-type laptops out there. They're the kind that turn themselves on in your bag and cook themselves. The Linux experience isn't going to be great with them.


It is the situation with Windows 11 as well. They have broken backwards compatibility with older hardware.


I HATE Nvidia. I had set up Ubuntu once. Got everything running. Think this was 16.04. And for a couple of months it was rock solid. Then I was trying to play a game and some forum said to update the Nvidia driver. So I did… after rebooting I got a black screen. No UI. No shell. Nothing. If I shut down, the shell would flash up for a second. But essentially I was locked out of the machine because the Nvidia drivers broke everything. Even booting into safe mode or whatever resulted in a black screen.

That’s the last time I bought a desktop computer with Nvidia card.

AMD, even with their new cards being a bit buggy, has never had such issues.


This rant sounds like an infomercial full of fake problems nobody has.

I have used Linux for decades and have never experienced any of this.

1. Use the software packaged for your distro. Want to update? then update. It just works.

2. Use the packaged nvidia drivers for your distro. Want to update? then update. It just works.

3. Prefer a laptop that is compatible with Linux, if that's not possible, use a laptop-friendly distro.

4. 99.99999% of people will never ever try to do that; that's probably why.

5. Most of them are key/value based despite the syntax.

6. Yet most distros inherit from a major base distro like Red Hat, Debian or others. If you want to use Slackware or Gentoo then that's your own choice.

7. There are many distros, but many of them are Debian- or Red Hat-based; the ones that aren't are used by a tiny percentage of users.

8. And Pipewire solves this, and is the default sound server on recent distros.


4. I'm an old. I will almost certainly try scaling my display by 125% or 150%.


Pipewire does not solve that unless your app uses its API. For 99% of apps (100% of VoIP?) you still get the same old latency until you go tuning configs. Pipewire is also not enabled on any LTS.

The rest of your replies read as "La la la, I can't hear you!", which is pretty typical of the desktop Linux community. Deflect, deflect, and deflect :)


Not sure what "solving" means but it's not correct that Pipewire doesn't offer any benefits unless you use another API. Depending on what you compare it with, of course. If you use audio semi-professionally you might already use low latency software and not see better latency, but compared to Pulseaudio the difference should be noticeable.

Of course, if the application is VoIP then Pulseaudio was already good enough, as network latency is an order of magnitude greater, but for realtime streaming applications the difference is there.

Most end users won't care however. They switch when the distribution they use switches over.
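
As for the config tuning mentioned upthread, it's smaller than it sounds. A minimal sketch, assuming a PipeWire recent enough to read user drop-in configs; the quantum value is only illustrative (lower means lower latency but more CPU):

  mkdir -p ~/.config/pipewire/pipewire.conf.d
  # Write a small drop-in that lowers the default graph quantum.
  printf '%s\n' \
    'context.properties = {' \
    '    default.clock.rate    = 48000' \
    '    default.clock.quantum = 256' \
    '}' > ~/.config/pipewire/pipewire.conf.d/10-low-latency.conf
  systemctl --user restart pipewire pipewire-pulse

Apps going through the PulseAudio compatibility layer should pick this up too, so it isn't limited to native PipeWire clients.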


You just listed a bunch of obscure issues that most people do not have and tried to portray that as being a blocker for a regular person adopting Linux.


1. Not all software is available in the repos; with some distros this happens frequently. And no, it doesn't "just work"; I've even tried a popular distro that destroyed itself with the first update on a clean install.

2. I invite you to try an Nvidia Optimus setup. Good luck, and hope you don't end up with a black screen because your distro thought it could handle it but couldn't.

3. Even if that was easy, most people trying out Linux do so with the hardware they currently have. And more often than not this means broken suspend or wifi or...

4. We have pretty cheap 4k screens nowadays and laptops with 4k. It's not that uncommon.

6. Are you seriously implying distros inheriting from another are all compatible with each other?


Also the point about using Docker for GUI apps is crazy. I've never seen the need for such a thing. AppImages, Flatpaks, heck even Snaps exist and allow developers to distribute applications along with their dependencies. They have their issues but they're good if the developer does not have much time to mess around with deps and testing in different distros, and some users also prefer them due to "sandboxing".

I'm not saying Linux desktop is perfect by any stretch of the imagination, but it's not the clusterfuck NavinF is talking about.


Docker? Just use SELinux or Apparmor, seccomp, Xephyr, or something.


Disagree

Linux desktop hasn't been a thing because running anything other than FOSS on Linux is an absolute nightmare. If SOMEONE hasn't compiled your GUI app for your specific Linux flavor, you're in for a bad time.

Heck, if you are looking to distribute a binary, it is often EASIER to target windows and use WINE instead of trying to get binaries working for the 5 million platforms on linux.

There's a reason 30 different containerization package distribution systems (flatpak, snap, et al.) have cropped up.

In addition to all that, there's simply the fact that the Linux GFX backend is a complicated mess. XFree86? X11? Xorg? Wayland? Y11?

All of those + the fact that windows management is a DIFFERENT problem from the actual rendering so now you have concepts like "Window" strewn across 20 different systems.

The Linux desktop is a lovable freakshow. The reason it hasn't caught on is because it's far too common to need to grab the man pages, pull up vim, and tweak 30 config files to get it just right for your specific set of hardware.

That isn't to say that OSes like Ubuntu, Red Hat, or Debian haven't made herculean efforts to make things appear simple. However, it's far from "Well, Microsoft bad, therefore this didn't happen."


> it's far too common to need to grab the man pages, pull up vim, and tweak 30 config files

Well, sure, but the desktop itself is pretty solid.

I've had all my relatives on bog standard Linux desktops for the past decade or so, and the half-yearly questions about it are limited to things like "what does disk full mean?" and "should I press this button that says 'Upgrade'?" (yes, you should).

That's assuming all your applications are web based anyway. If you have special local software needs you should of course use whatever they support.


> Heck, if you are looking to distribute a binary, it is often EASIER to target windows and use WINE instead of trying to get binaries working for the 5 million platforms on linux.

Slackware was never a "supported platform" yet I had no problem running binaries (java, Mentor Graphics) on it.


The Linux desktop is its own worst enemy.

The fragmentation, the inconsistency, the need to drop into a terminal to debug certain things, and the difficulty of dealing with driver problems for inexperienced users all do more damage than anything Microsoft attempted to do.


I dunno, I'd say it depends on how you like to work.

GUI program not running on Linux? Find the CLI call, run in terminal, grep and analyze the logs, find the error.

GUI program not running on Windows? Eh, good luck digging through the event log and 90% of the time seeing nothing.


I consider the "need to drop into a terminal to debug certain things" a big plus to be honest. I work with windows machines from time to time and wish PS would be simpler and more expressive.


> the need to drop into a terminal to debug certain things, and the difficulty of dealing with driver problems for inexperienced users

Have you used Windows recently? At least the former is a growing problem. Dropping down into the registry to change or create keys is no cakewalk either. If you're an inexperienced user, you're in trouble on Linux and Windows both.


[flagged]


As someone whose work gave me a brand new Dell XPS 15 with broken Wi-Fi drivers out of the box, I kindly disagree with this. Linux is an absolute shitshow unless you happen to have "well supported" hardware. I've had far more problems getting and keeping even Ubuntu stable (oddly, Arch has been the most stable so far) than Windows. Windows still sucks for a lot of reasons, but drivers are not one of them.


Windows is perfectly usable, unfortunately. Microsoft's packaging isn't terrible, and web browsers support it just fine, which makes it good enough for the majority of users. While I haven't used Windows in quite some time, I totally get why people still use it. It really does "just work" on most hardware.

The parent comment was on the right track, I think. The Linux desktop constantly sabotages itself, and things like the x11/Wayland rift, the Snap/Flatpak vs Native Packaging battle, and all of the new abstractions do not lend themselves well to usability. Pretty much every open source project has made this mistake at some point, but I think the most consistent party that works against its community is GNOME. Not only do they ignore the feedback from their users (a problem so pathological it's a meme by now), but they don't ask for help when they make new changes. Inevitably, this leads to worse UI (like the GNOME 40+ dock, the worse GTK widgets, the ugliness that is Adwaita, etc.), which leads to Linux being seen as neckbeard software again.

If you enjoy Linux, I encourage you to use it; I use it too, and it's great for a lot of stuff. Everyone responsible for marketing Linux to a wider audience has failed though, and the effort to replace Windows with OSS is looking increasingly Sisyphean.


You can have Snap, Flatpaks and native packaging running side by side, and Ubuntu has a unified interface for all of them (the "Software" application). So yeah, another non-problem for the end user.

The end user doesn't have to know what X11 and Wayland are. You don't need to know this to use most software. Just use what your distro provides.

Most of the complexity you mention is hidden from the end user.


As a developer, we've had to tell users to switch to X11 to use certain features that are unsupported on Wayland (related to screensharing), so I'd disagree that these things are hidden from end users. Wayland is fine for the everyday use case but there's a lot of software out there that is built on tools like xev and xprop that have been part of the ecosystem for decades.
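(For what it's worth, a quick way for a user to tell which one they're running is something like

  echo $XDG_SESSION_TYPE

which prints "x11" or "wayland". That users need to know that at all rather proves the point.)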


Yep. Wayland is nice, and there are components in it that I like, but it gets destroyed by x11 in terms of feature-completeness. Even adding wlroots doesn't get close to the out-of-the-box capabilities x11 offered.


There's also color management, which is not yet supported in Wayland; a problem if you need monitor calibration or do any print work.


> things like the x11/Wayland rift, the Snap/Flatpak vs Native Packaging battle, and all of the new abstractions do not lend themselves well to usability

Which is why opinionated distributions exist, so that users get a coherent environment from people who made the choices for them.


Microsoft's OOXML spec was so much larger than ODF for at least 3 reasons.

1. OOXML is an XML-ization of a descendant of earlier binary office formats. You can do things in binary formats that don't map efficiently to XML. That resulted in OOXML having uglier and more verbose XML. If I recall correctly this was only a small part of the size difference.

2. OOXML was way more complete. This was a big part of the size difference.

The first approved standard for ODF for example gave no details on how formulas worked in spreadsheets. All you could get from the standard was that you should support formulas.

The OOXML spec on the other hand described in detail the format of formulas and gave detailed specs for a large number of functions. There were often 2 or 3 pages for a single function.

3. When IBM and Sun were touting that 6000 page number for the draft OOXML spec, they were using a printing that had something like half as many lines per page as the printing of ODF they were comparing with.


The other side to that is that MS appeared to "stuff" ISO committees in various places to force it through, and didn't provide enough detail in their huge spec for true interop.

Perhaps neither was perfect, but it certainly seemed like MS played dirty.


Well, they were/are the dominant player, who could not adopt another standard and had zero incentive to, beyond regulatory pressure and public opinion.

It is a proprietary standard full of history. Nothing other than a horrible standard and a horrible process was to be expected.


.doc and .xls stopped literally every single business I ever worked in from dabbling with Linux desktop for staff.


Same here. Lots of money wasted on Windows licenses.


Funny how nobody remembers that OpenGL was SGI’s proprietary API designed to replace the standard, cross-platform PHIGS.


Microsoft philosophy is the reason. KDE and GNOME at the beginning (1.x) were quite good. If developers had fixed the bugs, Linux would have been better. But fixing bugs sucks, so they release a new version every couple of years, incompatible with the old one, and the circle is closed. The sane users had given up a long time ago. My 15-year-old fvwm config still works today.


Like Windows 95 that had blue screens of death every 5 seconds? And would die if you sent it a ping?


This is just confirmation bias. Let me present an alternate view: Microsoft is a corporation that wants to make $$$.

1. Making OpenGL a compatibility layer on top of DirectX simplifies the stack, and helps Microsoft save $$$. (Apple doesn't support OpenGL 4.4+, is that "psychotic ill-intentioned"?)

2. Giving things away for free costs $$$. (If the EU demanded that IBM provide an open interface to its mainframes, do you think they would comply eagerly? Would it be "psychotic ill-intentioned" if they didn't?)

3. Standards are almost never followed by the company with a dominant position. Standards benefit companies that are not dominant in the market. Adhering to standards as the dominant player in the market = wasted $$$.

4. Trusted computing puts PCs on the same footing as Playstation/Xbox and allows content to be streamed without risk of piracy. That's the simplest explanation. A lot of people would like to stream 4k netflix on their PCs, but they can't because piracy is too great a concern. More popular features = more $$$.

5. This has nothing to do with Windows vs Linux. Regardless, plagiarizing your competitor makes you more $$$.

6. It should come as a surprise to no one that owning property makes you more $$$. (If Nintendo buys a game studio, is that also "psychotic"?!)


Even the king of open and darling of OSS dudes, Google, doesn't support CalDAV and CardDAV natively in Android. So I cannot host Radicale and expect to get my contacts in Android without a 3rd party app.


No one forces gamedevs to ditch opengl.


Read the information in the link about how DirectX became the de-facto standard (sections 2 and 3).


The link seems full of MS distrust rather than an objective review. DirectX overtook OpenGL thanks to the mess started by GPU manufacturers introducing their own proprietary extensions, and to the ARB sleeping.

https://softwareengineering.stackexchange.com/a/88055


DirectX was simply a better API. Game dev is hard enough without dealing with OpenGL's jank-ass state machine, which adapted to vertex and pixel shaders about as well as the Unix process model adapted to threading. One of the great tragedies in 3D graphics is that Project Fahrenheit never got off the ground. Under Fahrenheit, OpenGL would be put out to pasture, DirectX would become the new low-level standard, and a higher level scene graph API would be built on top of DirectX.


True, but that was not the reason it was abandoned initially.

Back then multi-core CPUs were not much of a thing yet.

The reason it was abandoned is because people were told OpenGL on Windows Vista was going to be implemented through a compatibility layer that was going to be slow.


By the time Vista came around, virtually no one except Carmack was seriously targeting OpenGL.


Even Carmack eventually changed his point of view on DirectX, that is how good OpenGL happens to be.

> First person shooter godfather John Carmack has revealed that he now prefers DirectX to OpenGL, saying that 'inertia' is the main reason why id Software has stuck by the cross-platform 3D graphics API for years.

https://www.bit-tech.net/news/gaming/pc/carmack-directx-bett...

Vulkan seems to be making all the same mistakes all over again, such is the pain of committee-designed APIs.


Yeah the linked post is nonsense.

Apple's "Look and Feel" lawsuits had failed by then, and it was clear there was no way Microsoft would succeed. As you say, they were only pursuing exact copies.

The real problem was the KDE/Gnome split.

KDE came first (1996), but used the non-free Qt. Gnome was started (1997) to be entirely free, and so developers split like that.

Subsequent splits were for a bunch of reasons, but none were really related to MS threats.


It's not nonsense; I wrote it, and I provided citations, and I've posted more upthread here.

I wrote it because it was 15 years ago now and everyone is forgetting about it.


I was there too - hell I was using newsgroups when Miguel posted his Gnome post.

And I'd note Miguel says the same thing: https://news.ycombinator.com/item?id=32258035

Just because The Register was posting something it didn't make it true then, and still isn't. The Register always posted random anti-MS scapegoating articles, and it was nonsense then and remains nonsense now.

The article you post at the end of the linked article is typical ignorant nonsense. Patents don't cover visual appearance - Design Patents can, but they are very specific and MS didn't have them.

So it's both ignorant of IP law, and of the history of the projects.


I remember at the time the FOSS community found a bunch of prior art that would have most likely invalidated the Microsoft claims. There were plenty of graphical UIs in existence that Microsoft probably copied. I do not have links to the epic /. flame-fests on the topic, but perhaps somebody else can find them. Lots of people were sharing links on Slashdot with evidence showing how Microsoft would lose.


Yeah I don’t buy it either. If there were Windows 95 design patents they were more than 11 years old by 2006 and had only a couple more years of life left.


[Article author here]

Follow the links I put in the piece.

No legal action was taken in the end, partly because they couldn't work out who to sue, and partly because of the legal principle of laches. I cover this in the last article I linked to, which I wrote, a decade ago.


Why does Miguel call it nonsense?


It is generally not of interest whether you buy something or not.

For other readers it would be interesting to read evidence of the original article being wrong or right, or more details that help to better understand the situation.


I did point out the patent was near the end of its lifetime. Also, the founder of the GNOME project is in this thread, calling it nonsense.


A key difference between a utility patent and a design patent is that the design patent, like a trademark, does not expire.


That is not true; a design patent from 1995 would have had a fourteen year life.

https://www.uspto.gov/web/offices/pac/mpep/s1505.html


I see, I don’t know why but I was under the impression they were infinite but they apparently are not.


There was a time when Ubuntu or Fedora (I don't remember which, but it had Gnome 3 from its earlier days) had a fancier concept of multiple desktops, where I could be on any desktop and, if I had Pidgin (or some similar IM application, I don't recall which) open and got a message, it would "zoom out" my desktop view just slightly and show an overlay of my chat convos along the border, so I could be on any desktop and talk to anyone, kind of like how you can answer a text on iOS from a notification if you hold-tap on it. I thought that was next level, then I never saw it again, nor could I figure out how to make it happen again. I think they really were close to making some amazing UIs, but due to the nature of Linux neck-beards being slow and resistant to change (looking at Devuan, Trinity Desktop Environment, MATE, and so on...) I'm sure they opted to remove it.

I really would love to see someone just take Ubuntu kind of like how Budgie was inspired by Material Design from Android, and just work on something new that really makes your software a little nicer in terms of being integrated with the Desktop Environment, something I would argue Mobile gets well, and would be a boost for multi-tasking. I really do like Ubuntu Budgie, but it feels kind of incomplete. The most decently integrated DE I can think of is probably KDE since a lot of the apps are made with KDE libraries, and they integrate nicely, you can expect the same behavior out of all of them.

I know I kind of went all over the place, I guess my pipedream is to see some companies invest into R&D on Desktop Environments to see what we can do better. I'm really getting sick of Windows again, with all its telemetry but it is a nightmare to get Linux going due to the GPU driver situation, plus I don't want to spend a fortune on a GPU.


I was the lead designer of Unity. Unity was not a response to patent concerns, but rather the manifestation of a convoluted OEM strategy involving exclusive distribution of distinct design features, and a write-once-run-anywhere dream of running Ubuntu on any device from netbooks to big screens to automobiles.


When Ubuntu first switched to Unity I hated it. But over time it got better and I got used to its concepts and I'm daily driving it to this day. Now I even fear the day Compiz and therefore Unity stop working - it has become my DE of choice. I love its space efficiency: the global menu bar and the local window menus (I have it configured to show the window menu when I hover the title bar) and the "task bar" on the left: I use my displays in landscape mode, so width is abundant but height is at a premium. The grid layout shortcuts just work, something that Gnome 3 couldn't really do when I last tried it a couple of years ago. I don't often use the HUD to activate menu entries, but it can be very nice for programs like GIMP when you remember the name of an entry but can't find it in all those submenus. I wish Canonical would still maintain it, over the years it seems to have regressed a bit. Nowadays some (I think mostly SDL2) applications tend to slow down the whole UI and window transitions "take forever". But apart from that it has served me well. So thank you and everyone who was involved with Unity!


So true! I loved Unity in the late 2010s. Unfortunately, now that it is no longer maintained and GNOME 3 does not really provide the same features, I switched to KDE. What I miss most is the space efficiency and the global menu bar / searchable HUD. I now waste so much time digging in menus instead of just typing and hitting enter.


Well, if nothing else hopefully someone else notices this witness statement exposing the above-linked revisionist history for what it is--a fabrication.

On the topic of Unity: I, a linux user since the mid 90s, worked an actual linux job through the forced deprecation of GNOME 2. I don't have much of anything constructive to say about it. I even used fvwm2 for a while after gnome2 got yoinked; it was a nice reminder of my halcyon days writing perl in xemacs, running fvwm on my sparc 2. It wasn't too long after that I landed on xfce. I still wonder what gnome2 could've been.

All I ever wanted was a compositing, focus-follows-mouse, raise/lower window manager with a full-featured keyboard shortcut config. I still feel as though we were sold out. I don't think my mental representation of Canonical will ever recover. It just felt so blatantly anti-user during that period, and that impression has not been tempered with the passage of time.


I wrote the article. The legal threats I cited were not and are not a fabrication.

I did not write these other articles covering it:

https://money.cnn.com/magazines/fortune/fortune_archive/2007...

https://www.mercurynews.com/2007/05/15/microsoft-says-linux-...

https://www.computerworld.com/article/2566211/study--linux-m...


The problem is you're still failing to demonstrate a causal link between threats of legal action and GNOME3 or Unity.

I was around during this time and an early adopter of GNOME Shell (i.e. 2.31.x); there was and is zero evidence indicating that legal issues were even a consideration.

There is, on the other hand, mountain ranges of evidence indicating this was driven by everything but legal issues[1][2][3].

[1]: https://wiki.gnome.org/Projects/GnomeShell/FAQ#What_problems...

[2]: http://wingolog.org/archives/2008/06/07/gnome-in-the-age-of-...

[3]: https://wiki.gnome.org/Events/Summit/2008/GUIHackfest


Thank you, I appreciate the behind-the-scenes context here. Unity ran well on my low-spec low-res hardware that choked on Gnome Shell almost instantly. It was great for making full use of a small screen and hand-me-down computer; meanwhile Gnome was throwing blank space and memory around like they were nothing. My impression at the time was that Unity was designed to be able to run on embedded devices and netbooks while Gnome was expecting a fairly new desktop or laptop.


I think that’s just polish that was missing at the time for Gnome. Check out Gnome now, it is very fast on even low-end laptops, and has better input latency than a goddamn mac.


Is that polish, or is it just the moving target of low-end machines catching up to the system requirements? I recently tried Ubuntu 22.04 on a computer with minimal graphics hardware and experienced extreme graphics lag until I switched to XFCE. I haven't tried other DEs on that machine so the story may be the same, but I still wouldn't use Gnome on any but my newest machines.


There may have been some misconfiguration there, but it is buttery smooth on a low-end iGPU for me.


Okay, could just be my machine. I was just so appalled by how badly the default install ran. It's a production server now, and runs great with XFCE, so I probably won't be messing with the configuration.


This is indeed much more believable.

The idea of having a UI that works on any device messed up not just Ubuntu but the entire industry. Windows 8 and up is also a victim, and even Apple couldn't resist.

Desktop computers and smartphones and TVs and cars are entirely different. Desktop computers (and laptops) have keyboards, a large screen and an accurate pointing device. Smartphones have small screens, and fingers are inaccurate, calling for big buttons, gestures and minimalist interfaces. TVs have a large screen, but it is seen from far away, so not that much information can be shown, and they are typically operated with handheld remote controls with limited pointing capacity. Car displays should be operable without looking at the display, in an environment with lots of vibrations, so we need very big buttons, predictable, static interfaces, physical controls and limited gestures.

Many contradictions here, and for me, trying to unify all that requires too many compromises.


[flagged]


I don't understand. It was their role at the time - why do they have to convey repentance?


... why are we being obtuse here? They designed Unity. Let's not skirt around the truth here. Unity set the entire community back a few years and put a lot of projects on the suboptimal paths they're beginning to suffer from today. Gnome 3 users may disagree, but Unity, ostensibly through Ubuntu's influence, destroyed UI paradigms that were there for good reason, just from sheer hubris and the will to be different.


I still prefer Unity to Gnomeshell. But the 10.04 -> 12.04 era was the only time I stayed on LTS so I could avoid the early Unity versions. By 12.04 Unity was pretty solid and usable, but like KDE 4.0 a lot of the people it alienated during the early versions never came back to it.


What are you talking about? Lots of people liked Unity.


This is news to me. I remember that the Linux desktop seemed to be on the ascendancy back in 2006. GNOME 2 and KDE 3 were very nice desktops; while they weren’t Mac OS X Tiger (which still holds up as an amazing desktop), they were solid contenders to Windows XP. But then came GNOME 3 and KDE 4, which were major fumbles that set back the Linux desktop for years.

If it’s true that GNOME’s radical direction beginning with version 3 is the result of threats from Microsoft to change its desktop to avoid influences from Windows, then that definitely changes my assessment of the era, and my distaste for Microsoft’s anti-competitive behavior only deepens (this is the same Microsoft that complained about Apple’s look-and-feel lawsuits).


This is my impression too, as a person deep into the Linux desktop ecosystem from back then.

I even managed to give gnome2 to my mum at one point and she remarked how nice everything was and easy to find. I had more difficulty than she did since I was much more into computers and had learned all the intricacies and idiosyncrasies of Windows.

Then gnome3 came and was a huge regression in terms of flexibility, ease of use, aesthetics and performance.

Plasma was the same; though subjectively it looked better and was a lot less antagonising than Gnome's radical redesign, it was still really heavy and buggy compared to kde3 (or whatever the predecessor was, I think 3).

Plasma/KDE was also very windows-like, so I don’t buy the argument that Microsoft was the cause, it was mostly gnome devs thinking that they could get a radical touch-first redesign in before anyone else.

This is a very long way of saying: the parent is correct in my opinion.


> Plasma/KDE was also very windows-like, so I don’t buy the argument that Microsoft was the cause, it was mostly gnome devs thinking that they could get a radical touch-first redesign in before anyone else.

The article does touch on that; apparently SUSE signed a patent deal with MS which sidestepped the issues so they could continue the design as usual. GNOME and Red Hat refused.

If this is true it kinda reinforces my feeling that Gnome 3 was a redesign for the sake of it. I hate almost everything about it though I'm not married to the windows concept. But everything about it just feels wrong. The huge "touch-friendly" title bars, the menus hidden in hamburger buttons, the lack of configuration options like macOS. And I really tried to like it. When I moved back to FOSS from Mac it was my first choice.

I love KDE though only recently, I thought KDE 3 was too cluttery. It feels like it only came into its own right with Plasma. And I don't find it all that Windows-like though the taskbar certainly is more Windows-like than other DE's. But a taskbar doth not a windows make. Overall I find it distinctly different from Windows. The control panel is definitely a lot better than that mashup of different styles in Windows.


SUSE doesn't even use KDE, they use GNOME... Sounds unlikely.


In 2006 SUSE was the KDE distribution. Unambiguously. While other distros had community editions that shipped KDE (as the article mentions) when you wanted KDE with no frills and everything working out of the box you used SUSE or OpenSUSE.


IIRC SUSE had been using KDE as the default DE since its inception. I know KDE was the default environment around 2000 when I first used it. YaST 2 was also written against Qt.


The article also touches on that, at the time SUSE used KDE, and after they acquired Ximian they became more Gnome-friendly.

Even now KDE is one of the 3 default DEs, the others are Gnome and XFCE.

Like I said in another comment, I'm not sure how much is true of the reasoning of the Gnome redesign the article mentions but this part seems to pan out.


SUSE only supports GNOME. Any other DE is community maintained and only in OpenSUSE.


I wrote the article and I specifically covered why that happened.

Novell bought SUSE. Then Novell bought Ximian, the GNOME company that wrote Nautilus ("Files") and Evolution.

Then Novell forcibly merged SUSE and Ximian.

https://www.arnnet.com.au/article/12755/


AFAIK, Gnome2 was the only foss desktop to receive extensive user testing. (Sun dogfooded it with their employees.) Lots of little workflow issues were identified and resolved.


Same experience, so this history must be told from a very specific perspective. I've had over two decades of Linux use on the desktop and remember something similar to you guys.


I wrote it, and the specific perspective that I wrote it from is a matter of published record. What more do you want?

https://www.linuxinsider.com/story/patent-suit-against-red-h...

https://www.theregister.com/2004/11/18/ballmer_linux_lawsuit...


Well, either one of the following things must be true:

- I am misremembering the past

- The published record misrepresents the past

- The motivations at the time were not visible to the community (i.e. the published record is correct and I am not misremembering but that's because the user community never saw it)

I suppose the last one is really the most likely.


At the time I remember the discussions seemed to be "this is how the GNOME team have decided things will be, it's better, deal with it"

There was no discussion I saw that they were doing all this under the threat of legal action, although maybe they didn't want to paint an even larger target on their backs by naming Microsoft or be seen trying to change things just enough to avoid the patents.


Plenty of stuff in the Linux world was done naming Microsoft in adversarial terms - that was definitely not the case on the desktop. All three main groups (GNOME, KDE, and Unity) always, always stated they were making bold design choices for practical reasons related to UX. To claim otherwise, today, seems very disingenuous.


Yes, I frankly find this blog quite dubious. I have followed the Linux landscape very keenly for over 20 years and never before heard the claim that the infamous new UI paradigms in Gnome 3 & Unity had anything to do with Microsoft patents. It was instead framed as open source taking the design initiative for next generation "convergent" UIs, which was all the rage back then (one design/OS for workstation, laptop, tablet, phone, TV, etc.)

Microsoft's interpretation of the same trend was Windows 8 and later Universal Windows Platform (UWP). They almost all seem to have fallen flat on their faces, and especially on the open source side the "bold new direction" seemed to alienate many users. But blaming Microsoft for this sounds like an odd attempt at historical revisionism.


I wrote it.

The basis is a formal legal threat from Microsoft, widely recorded all over the web even now.

https://www.technewsworld.com/story/patent-suit-against-red-...

I specifically wrote this comment on Reddit because everyone has forgotten about it now.


Hi! First of all - thank you for your reply.

I still don't think the portrayal of these events as stemming from patent issues is realistic. First of all, that Acacia Research lawsuit concerned some 1980s Xerox patents "that protect computer GUIs that span multiple work sites and that allow users to access icons remotely."[1] How would changing the GNOME UI to remove a Start-button-like menu protect Linux distributors from such litigation?

That lawsuit was filed in October 2007, and would be accurately described as patent troll litigation. While it did not have Microsoft as a party, nor did it claim infringement of Microsoft's IP, that article does note that only days before Ballmer had warned that Linux vendors could face similar patent troll litigation as Microsoft had faced in the past and that the patent troll had hired a senior Microsoft IP manager.

While this would have surely prompted some warranted speculation in the community, was it - or the threat of direct M$ litigation - the impetus for reworking the GNOME 3 Shell and building Ubuntu Unity?

GNOME 3 was announced in summer 2008, and at least the one in-depth article I could find makes no mention of any patent litigation connection.[2] Instead, it notes that the progress towards the "audacious reinvention of the desktop with completely new interaction paradigms" had started already in 2005 with a concept called ToPaZ. Mind you, that's well before the whole Acacia Research lawsuit.

GNOME 3 was released in April 2011. Ubuntu Unity was an alternative version of GNOME 3's Shell. Its development forked in the advanced stages of GNOME 3, in late 2010. The cited reason was "tension over design issues" between Canonical's and GNOME's designers.[3] Nothing seems to again hint that Ubuntu would have taken this step to shield itself from patent trolls or Microsoft.

Maybe the reason why nobody can remember this is because it did not happen? We're talking about events well over a decade old after all. There could have been speculation at the time that it was all connected to this lawsuit, but I don't think Canonical and Red Hat would have really done it for that reason. Software patents are terrible because any non-trivial piece of code almost surely infringes on some patent, and defending against patent suits is expensive. Surely these companies would have realized that even if the desktop UI was changed not to supposedly infringe on one patent, there would soon be another troll claiming infringement on another patent?

[1]: https://www.zdnet.com/article/red-hat-novell-win-verdict-in-...

[2]: https://arstechnica.com/information-technology/2008/07/gnome...

[3]: https://www.pcworld.com/article/504223/article-3057.html & https://www.pcworld.com/article/504253/is_unity_the_right_in...

P.S. Here is one of the patents cited in the Acacia Research lawsuit: https://patents.google.com/patent/US5072412A/en


I'll go a step further. The author of the article is deceiving himself and his audience to apologize for terrible decisions in OSS, separate from Microsoft, which is no saint either. This is historical revisionism that the creator of GNOME, in this very comment thread, throws out.

https://news.ycombinator.com/item?id=32258035


GNOME already changed direction rather radically between v1 and v2. I specifically remember many people complaining about all the simplifications and removal of various configurable options, and blaming GNOME for drinking too much Apple kool-aid.

For example, it was GNOME 2 that removed the address (path) textbox in the file picker by default: https://bugzilla.gnome.org/show_bug.cgi?id=605608


I don't think GNOME's radical redirection was necessarily a product of threats from Microsoft. This is me reaching back into the old memory bank. I remember a justification for GNOME 3's design was to be touch friendly. I don't think that is necessarily a Microsoft fault. Touch screens were coming to laptops, and at the time it felt like every hardware maker was throwing touch screens on laptops. And well, it didn't pan out very well for anyone. Here we are, and yes, some people may buy laptops with touch screens, but I have yet to run into someone who owns one and uses the touch screen. The exception would be something like the Microsoft Surface.


What everyone else said: if this is true, that would mean that every developer involved was willing to take the secret to their grave. It would mean they preferred to take the massive PR hit of issuing public statements saying "why can't you idiots just accept our superior aesthetic sensibilities" than to blame their incredibly unpopular choices on threats from Microsoft.

In other words, there's just no way this is true.


Yeah.. I moved to Mac for a long while after GNOME 3 showed up. The mess now with a million extensions really sucks still.

1/2 the effort of the modern Linux desktop is screwing with your WM until it's in a normal state


Interesting theory, but I don't recall this happening. As other commenters are saying, there were indeed patent-related issues around FAT and the like, and huge interoperability problems with NTFS (to name one, but that's irrelevant here)... but I don't remember anything around patents and the desktop style -- which may exist, but they were not the problem here.

From my recollection, the real issue was Mac OS X (as it was spelled at the time) gaining territory by being a real Unix with a good interface, thus attracting a lot of Unix/Linux users into the Mac. I'd even bet that many old-timers abandoned Linux on the desktop to move to the Mac, which in turn left room for the newcomers to replace the interface with something resembling the coolness of the Mac. But I'm only hypothesizing here.


I was in this boat. I used to call it "a Unix laptop that could run Photoshop." Also it didn't crash when I closed the lid.


For me it was a bit later in 2009 when I was giving a presentation for a position at Imperial College using my Dell laptop running Ubuntu and it crashed when plugging in the VGA cable for the projector. There was an iMac at the back of the room taunting me.

Got the job via the EU in the end and bought a 2010 13” MBP which was rock solid for many years.


>Also it didn't crash when I closed the lid.

For me it was this + small form factor + high pixel density. I always looked at System76 and the laptops were too big and the displays were disappointing. Other sellers seemed similar or were just refurbishing, except Dell briefly, but I never trusted them. Hardware integration is just very nice to have.


I don't like Apple in many ways but thank fuck they moved the market along in high resolution screens. 1920x1080 was the defacto standard for far too long and I was supremely agitated by the lack of progress.

“If I would have asked people what they wanted, they would have said faster horses.”

When you KNOW the technology can do it sometimes you just need to give people the friggin' faster horses!


To this day High DPI displays don't work right everywhere. Even when they do, they don't look better than 1920x1080. Perhaps if I did a lot of image or video editing I would feel different, but as a human being it just isn't that important.


You're entitled to your opinion, of course. But I'm still not buying a laptop without one. It's like if I said I want chocolate ice cream and people always have to bring up that they don't like chocolate.

It's a screen. I can see it. I'm quite confident that I want it.


That's fine, I think the other comment is actually quite salient though: no matter what laptop you get, you're going to get a blurry screen. MacOS' scaling is based on downsampling, which wastes a lot of processing power and renders a supersampled image that scales down to an arbitrary resolution. This is why a lot of UI elements on MacOS don't really look that crisp despite being high-res. YMMV based on scaling settings, though.

Windows gets the actual UI scaling side of things right, but the market of Windows laptops with hi-DPI displays is still quite small. Plus, Windows has a lot of bitmapped UI elements (particularly in old programs) that don't scale well.

Linux... I love Linux, but display scaling is a lost cause. On x11, you can hack things to sorta work, and fractional scaling will be fairly usable, but on Wayland, you're forced to either pixel-double or face a litany of Wayland/scaling bugs. Some hardware/software combinations can get it working, but it's anything but consistent.

So... you're certainly entitled to your choice of aesthetic preference, but all of these systems have flawed scaling implementation. I like high-res displays as much as the next guy, but not a single desktop was prepared to properly implement display scaling, and it shows.


While this is true, I don't want humanity to simply give up and stay on 1080p forever. If anything, this issue arose precisely because people were too comfortable with the "one true screen res to rule them all" instead of properly scaling.

And it was most frustrating to get phone and tablet screens MUCH better than a laptop or computer monitor!

At this stage I think a 27" 4K screen is ideal. Use highres when you can - particularly for reading text with a High DPI-aware app - and use 1080p for the legacy-rendering stuff or games.

Of course, laptop 4k screens are harder to find.


GNOME 2 was perfect. I use the spiritual successor MATE wherever it’s possible. It’s not perfect and arguably “boring” but there is really something to it https://mate-desktop.org/

On a side note a similar project exist for KDE 3. Trinity Desktop https://www.trinitydesktop.org/

And on a second mildly related side note: /r/linux had a thread about a modern (2009) OpenSuse spin with KDE 2 a few months ago. It was released as an April Fools' joke but the ISO file totally disappeared from the internet. Or at least the sub couldn't find it (and the archive.org mirror is corrupted) https://blogs.kde.org/2009/04/01/new-kde-live-cd-release-bri... Maybe someone here has it? :)


> GNOME 2 was perfect

I'm one of those rare people that like Gnome 3 way better than Gnome 2 or any other "windows 95" or "traditional-style" desktop. Other than command-line apps, I run like 5 main pieces of software regularly (IDE, browser, etc.) and probably 20 pieces of gui software total over the lifetime of my PC. I want something that:

1. Makes it easy to launch my main 5 apps (under 1 second)

2. Makes it fairly easy to run the 15 apps I run less often (couple secs max)

3. Looks pretty out of the box w/o customization so I can just get stuff done

The only environments I've used that tick all my boxes (subjectively of course) are Gnome 3 and macOS.

I think Gnome 3 especially does this extremely well. I flick to my upper right hot corner to expose my dock with my 5 apps - instant launch. If I want one of the other 15 I'm happy to type it in the resulting app search box. Done. I never understood why I would want to click and navigate for like 10 seconds (much less have to remember WHERE it is and WHY it is under a certain 'category') to launch one program from a Windows 95 style start menu. To each their own I guess.

EDIT: I should add a #4: Minimal customization. Gone are my days tweaking my DE exactly how I want it. While I do _some_ customization - it is very little (1 or 2 extensions)


I don't think it's rare to prefer GNOME 3 over GNOME 2; in fact, I would wager that that is the majority opinion.

By the way, I find it even faster to launch apps by pressing the Super (Windows) key and typing the app name. For example, you can press [Super] [t] [e] [Enter] to launch a GNOME Terminal instance. Just my 2c :)


I really shouldn't focus on speed. I start my apps and then let them run for a month or more (or whenever I reboot for kernel patches), so it is more about annoyance than speed (and I've always found "start menus" annoying), so I've never bothered to figure out the keyboard shortcuts :-)


I use XFCE, a lightweight Win95 style DE. My XFCE start menu ("whisker menu") is bound to Start+Space, which lets me find apps by name. For apps I use frequently, they are bound to Start+[Key], eg, Start+F for the file manager, Start+S for Sublime Text (also used to open new windows in each app).

I do miss the Expose from macOS though, I don't think XFCE has that. (Edit: apparently this can be done with a third party tool.)
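(Side note, in case anyone wants to script this: those per-app launchers can, as far as I know, also be set from the command line via xfconf, e.g. binding Super+F to the file manager with something along the lines of

  xfconf-query -c xfce4-keyboard-shortcuts -p '/commands/custom/<Super>f' -n -t string -s 'thunar'

though the Keyboard settings dialog does the same thing with less typing.)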


On manjaro (xfce edition), just press start and then type part of the application name.

I just figured it out a few hours ago. :-)


I bound it to Start originally, but then all my Start+[Key] bindings would open the start menu (and leave it open). This was a few years ago though, maybe the current behavior is different.


Add this to Application Autostart in xfce4-session-settings.

xcape -e 'Super_L=Control_L|Shift_L|Alt_L|Super_L|Escape'

Then bind xfce4-popup-whiskermenu to Ctrl+Shift+Alt+Super+Escape in the keyboard shortcuts. You can use another key combination, but that's what I use to avoid any plausible conflict.
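(If I understand xcape right, the trick is that tapping Super on its own emits that whole chord, which pops the menu, while Super held down as part of another shortcut passes through untouched, so the Super+[Key] bindings keep working.)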


The whole XFCE, LXDE, LXQT always leaves me confused but probably just because of the name :D

On Manjaro I've always used XFCE and it's a very nice DE for sure.


Yeah I think Gnome 3 nowadays is very good. Though I recently switched over to KDE and I think it’s also pretty great from a “gadget” perspective. The options offered in a right click in the file browser are a good distillation of the design differences between KDE and Gnome (as well as the screenshot widget…)

Definitely would have no issue recommending Gnome to a normal computer user. I like having the busier interface nowadays though.


It is hard to switch windows with the mouse in vanilla GNOME 3


Windows or desktops? For windows, why not just alt+tab? Gnome really prefers having multiple desktops with close-to-fullscreen windows on each. And super+mouse wheel is fantastic on desktop machines, but the best experience to me is on a laptop with 3-finger swipes (because it is fast and doesn't make me wait for an animation a la osx. Which is funny because their swipe gestures are spot on on the iphone)


I found it more intuitive to control my computer with the mouse instead of keyboard shortcuts, as most regular users do.


I rarely do. I have 3 monitors and usually just place one window per monitor.

When I do switch, it is easy, flip to hot corner, pick window, done.


As you said, it takes two actions to switch windows in GNOME, but only one action in Windows (clicking an icon on taskbar).


I absolutely loved KDE 3 back then.

Konqueror was awesome. File Manager and Web Browser in one! For some ideological reasons they decided to introduce Dolphin as a file manager in KDE 4 which had much less features.

I ended up switching to Xfce, though these days I also use Cinnamon (also Gnome 2 inspired) because it is the Linux Mint default choice. Though I hate that they have recently tried to make Cinnamon look more "fresh" and keep changing it, so I probably will end up going back to Xfce if it is still the same.


From what I (vaguely) remember, the reasons behind moves like Dolphin were that they felt the integrated Konqueror was a mistake. It certainly made for hard maintenance, the integration system often broke after small changes; and I can imagine security was an issue. Microsoft had made a similar choice in Windows 98/2000/XP, and went through a similar backtracking after XPsp2.

Also, they were trying to move the system to Qt4, which was a big task. Breaking things up probably seemed logical. Add a bit of classic FOSS bugfix-is-boring-let's-rewrite, and voilà...


Of course, in MS’s case the backtracking happened primarily because after years of court battle and endless millions of dollars they were told they’re very much not allowed to deeply integrate the browser to the OS. There never was any technical or usability merit in marrying the two Explorers, it was 100% Microsoft’s attempt to own the web.


I went through almost the same story.

At some point I was disappointed by KRunner of KDE 4 disappearing with my partially written formulas, then found Qalculate.

Then I saw someone at work using i3, and I was amazed and hooked. I changed some keybindings to be more familiar (like alt+f4 close and alt+f2 run). But recently I have been having some dual monitor troubles with it. Maybe I'll try Trinity for a few weeks. And maybe some tiling WM other than i3.


I should really spend a weekend seeing whether I can get the Trinity desktop environment running with i3 as a window manager. I'm currently using a Gnome-flashback/i3 hybrid that's surprisingly nice.


> GNOME 2 was perfect. … MATE … It’s not perfect …

Is MATE worse than GNOME 2 used to be?

I just switched to MATE and I have run into all sorts of odd bugs.

1. If I position an external monitor above the primary, I can’t put any windows in the overlapping portion along the x axis.

2. If I disconnect and reconnect the external monitor, maximized windows return to their original monitor but half-maximized windows don’t.

3. If I disconnect and reconnect the external monitor, the background becomes tiled instead of zoomed with Marco + compositing and it blacks out and the desktop flickers with Compiz.

4. With Compiz, I can constrain the window switcher across all windows to the current workspace but I cannot constrain the window switcher within app group to the current workspace.

5. With Marco + compositing, due to a regression, taking a screenshot of a single window includes the boundary around it that contains the window’s shadow.

There was something else too but I forget. Quite dispiriting but I have given up on finding the perfect desktop.


It has rotted a bit unfortunately, due to shifts in lower level libs.


Gnome 2 was peak for me too.


Really?


I use XFCE these days. It's close enough to what GNOME 2 was.


Came here to say this. I don't use heavy desktop environments personally (some machines of mine run dwm or even), but I set up a machine for my daughter recently and did so with MATE. It's a pretty good desktop setup out of the box. Boring and mostly gets out of the way. She's mostly using chromium.


That's a historical misrepresentation. There were no threats against the "traditional" Linux desktops from Microsoft. There were, however, a lot of mediocre designers who wanted to "be like Apple".



I was really into Linux and had M$ derangement syndrome at the time. Nothing in those really relates to GNOME and Unity.

Who are you going to believe? People trying to stoke fear or an important GNOME project member?

https://news.ycombinator.com/item?id=32258035


I don't think commenters are denying there was a legal threat; I think they seem to be denying that threat was the motivation for Unity.


I still think the Applications/Places/System triple menu was absolutely superb in terms of usability. You had three clear and coherent categories where all the GUI software in your system was placed, which made all that stuff easily scannable by anyone even if they had never used it before. It shouldn't have been ditched, if you ask me. That clever way to separate and categorize that stuff was and is superior to any 'start' menu Windows (and any other desktop environment) has ever had.


The Applications/Places/System menu is not a good idea: it doesn't scale. It worked relatively well at the time because people had something in the order of tens of megabytes of free space, allowing only a few programs to be installed. Nobody does that with smartphones for a reason.


> Nobody does that with smartphones for a reason.

Smartphones combined Applications and System into one app drawer, and don't have Places. It should have been an even worse idea, yet it worked.


But we're not talking about smartphones with their dozens of tiny apps. We're talking about desktops. How much desktop software does a typical user run?


Enough to make the lists in the start submenus become very long. People don't remember it so much because nobody uses it anymore, and nobody uses it anymore because it just doesn't work. With WinME MS started hiding options that weren't used frequently, and in XP and 7 MS started bringing the most frequently used options to the first page. After 7 the start menu and submenus were killed because they didn't work anymore with normal computer usage.


Historically, Windows didn't really have the grouping by application type that GNOME did - any grouping in the Start Menu was down to whatever folders any given app would create when installed. It did have some standard folders like "Accessories", but I don't recall any third-party app using those. The common practice was (and still is) for every app to create its own top-level folder, then put its shortcuts underneath. So it ended up as a linear list of apps, which of course doesn't scale well - but it's also not how GNOME did it.

I will also note that, 1) the start menu is still very much there in Win11, including folders, and 2) the fact that hacks like Start8 exist and sell enough copies to be profitable indicates that quite a lot of users do prefer it to the newer arrangements.


> a linear list of apps, which of course doesn't scale well - but it's also not how GNOME did it.

Oh, sure! You're right on this one. I read the apps/places... and my brain only saw the windows start submenus. Yes GNOME was much tidier.

> hacks like Start8 exist and sell enough copies to be profitable indicates that quite a lot of users do prefer it to the newer arrangements.

People preferring something they are already used to says more about how hard it is to change habits than about how good an option is.


Mate also has supported type-ahead search for a long time. Possibly Gnome2 did with an add on.


I don't recall ever seeing that in GNOME 2. In MATE, if I remember correctly, you need to install a different system menu widget for the taskbar (although some distros do it out of the box): https://packages.debian.org/buster/mate-menu


Yes, on Linux (both KDE and Gnome), the typical menu arrangement was to group applications by category. So if you want a photo-editing program or a drawing program, you'd open up the applications menu and look for "Graphics", which makes sense.

On Windows, you just have to know which applications are already installed, and then you open the application menu and look for "Adobe" or "Corel" or whatever vendors make the software you're hoping to find there.

One of these is much more "discoverable" than the other.


It's always interesting for me to see that I'm so alone in loving Gnome 3. As a keyboard heavy user, it has always matched my workflow. I never appreciated a start menu. It leaves essentially the full screen for my program. The settings drop-downs are intuitive.

Granted I use the Material Shell tiling extension now, but vanilla gnome 3 will always have a place in my heart.


Not alone. I love it. I want my DE to "just work" and I don't want to customize it or hunt for programs in a "start menu". Give me my hot corner and a short list of apps any day so I can get to work. Happy to type in to launch less commonly used ones the few times a month I actually need them.

As a bonus, Gnome 3 is incredibly easy on the eyes (subjective of course). I think probably a close competitor to macOS in looks category.


Gnome 40+ is basically perfect for me after adding the "AppIndicators", "Bluetooth Quick Connect" and "Sound Input and Output" extensions.

I absolutely love being able to click a single key and being able to easily:

- see all windows open across monitors

- drag windows to different desktops / monitors

- close any window and re-organize the whole thing (something I can't do on macOS)

Default apps are also pretty good and the whole DE is super stable and snappy. My only complaints are the GTK file picker being useless and the title bars taking too much space on non-gnome apps (which don't have window decorations).


Gnome 40+ is so excellent I can't switch back to other OS's for regular use now.


I feel the same way. Gnome 3 feels great as a keyboard first power user, it feels great with a laptop touchpad and using single, double, triple finger swipes for different things. Presumably it works pretty well on touchscreen devices too.

It actually still matches my Windows workflow pretty closely because all I ever do is hit winkey and type the name of the app I want to launch there too.

The only place it really falls apart is if you're using it with Keyboard + Mouse and aren't the sort of person that likes to just learn keyboard shortcuts and gestures for everything without looking at the UI much.

The other problem I have with it is the opinionated stance on window border/title sizes. The defaults are way too big and you're not really supposed to change them. Other than that I quite like Gnome 3.


> It's always interesting for me to see that I'm so alone in loving Gnome 3.

You're not alone. Complaints always get projected louder and wider than happiness.

GNOME 2 was the same way. So many users treating it as the end of the world, and it took a few versions to improve and fix initial issues, but after those first couple of versions it was a great upgrade. GNOME 3 likewise took a couple of versions to find its feet, and then it was a great upgrade.


Interesting, I never saw (or knew) that Gnome 2 received the same treatment. Its heyday was before my time though.


Massive blowback for "removing configuration options".

GNOME 1 had an absurd amount of configuration, including many of what the GNOME 2 designers described as "unbreak me" options: options where only one answer could possibly make sense ("make X work"), and worse, some of them were off by default.

GNOME 2 removed those, in favor of better autodetection, better defaults, and similar. However, in doing so, they ended up removing some that were actually important to people, for use cases that weren't as well known. Some of those were added back, or better replacements provided, in the first few versions; as a result, GNOME 2.4 or so was much better than 2.0.


True, I do remember this, like we had to wait a while to get options for Nautilus back. It really wasn't as bad as GNOME 3 though. There was a huge difference in the number of options (or % of options if you will) taken away in 2 and 3. The original GNOME 3 was a barren straitjacket, where not even gnome-tweak-tool existed yet (or at least most people didn't know about it.) Remember, extensions didn't come until later and mostly didn't even work until yet later.


The Gnome3 developers publicly said that they didn't want anyone changing anything on it, and didn't believe that there should be any configuration options.


Yeah, GNOME's great. It's coherent (doesn't feel like a kit of parts cobbled together) and gets out of the way. It makes it easier to focus.


I would be curious to know when you started using Linux and GNOME. I always thought GNOME 3 was interesting and had some good ideas. But at least up until around 2016, I feel, most people just felt bitten by how GNOME 3 was rolled out. GNOME 3's initial roll-out was essentially a burning dumpster full of issues. It did eventually get better, but it took a long time.

With that being said, I do think GNOME was way off track. Based on everyone I know who swore off GNOME after the 3 release but is using it now, GNOME is back on a great track with GNOME 40+.


I toyed with Linux growing up (2005-2012), then fully committed in 2014 with Fedora. Gnome definitely had some issues at that time but I still found it very productive to use.


You are not alone.

I love it being easy to use keyboard driven.

I like that the virtual desktops are dynamic.

I love that on my Lenovo Yoga I can use it as easily as a regular desktop or as a tablet (you haven't appreciated reading Hacker News comments until you start doing it on your couch with the screen in portrait mode).

My only gripe is I wish I could dial the titlebar height depending on the context.


I love it too. It works so well on my machines. I hope it sticks around


This is nonsense.


I appreciate that in the heat of the moment, you want to correct the record quickly; but this adds very little to the discussion. I look forward to your longer comment / blog post setting the record straight.


I don't think that Miguel needs to waste time trying to prove a negative.

He was there at the time of the supposed events. If he says it didn't happen then we can take him at his word imho.


For others who didn't know:

> Miguel de Icaza is a Mexican-American programmer, best known for starting the GNOME, Mono, and Xamarin projects.

https://en.wikipedia.org/wiki/Miguel_de_Icaza


(Also Midnight Commander, which was surprising to learn. I can’t actually say why, in retrospect, but it really was.)


I hadn't read the username and took the comment as a typical low effort HN troll but coming from Miguel, "This is nonsense" is a complete statement that stands on its own.


"Famous person who worked on the project disagrees and does not explain why" does not meaningfully contribute to a discussion. We're left to throw random hypotheses at a wall with no evidence.


He did explain why - he thinks the linked article is nonsense. What more could he write?

It isn't like this is some scientist throwing out a fact about nature and needs to give further evidence - Miguel de Icaza's opinion in and of itself was a pretty influential part of the decisions made at the time. He is qualified to know his own opinion.

The article does look like bunk anyway, there were a lot of MS threats around at the time but it was never picked out as a serious part of GNOME's decision making.


If I ask "Why do birds fly south for the winter?", and an ornithologist just says "Well it's not what your friend said!" and walks away, that is not a reason why birds fly south for the winter. It doesn't help anyone understand the subject. But because the ornithologist is an authority figure, people take that as an explanation. People will then chatter among themselves and decide that whatever the group thinks must be true.

Besides the fact that he did not give a reason why GNOME/GNOME 3/Unity happened, he is also still a human being, and thus is subject to bias, deception, forgetfulness, mis-remembering, etc. That's why if you want a real discussion it's useful to have evidence, not just whatever hypothesis sounds good to an in-group.


The ornithologist is a scientist. They do not control what the birds do, they study what the birds do. The birds make the decisions.

Miguel started the GNOME project (with, obviously, lots of similar involvement from other people). He is closer to a manager, ie, a person making the decisions. He can just say that the proposed article is nonsense and that is pretty significant evidence.

> he is also still a human being, and thus is subject to bias, deception, forgetfulness, mis-remembering, etc.

Microsoft threatening legal action is not the sort of thing that slips the mind.

> That's why if you want a real discussion it's useful to have evidence

The evidence is Miguel says it is nonsense. You can spill a lot of ink and not come up with anything half as convincing. Miguel's one line is more evidence of why GNOME acted as they did than the entire article managed to scrape together, unless Mr Liam on Linux happens to also be on the GNOME core dev team and I've just never heard of him. But I don't think he was. I doubt he was in the room or involved in any of the decisions.


You misunderstand, cryptic public statements are a longstanding part of GNOME's commitment to transparency and user cooperation.


In this house we trust science which is why we must take for granted the statements of an important person.


Why should someone have to look at a username, bio, or life outside of HN to evaluate an HN comment's quality?

It was low effort and clearly not in keeping with HN guidelines on what makes a good comment. Even if his comment is correct it is not at all a productive contribution, when an extra 10 seconds to type "I cofounded Gnome, I would know" would fix that. He doesn't need to prove a negative to add to the conversation.

And would he really know the true reasons here? It doesn't look like he was at RedHat, was he actually involved in Gnome 3 development to know the motivations? Again, instead of a 3 word comment it would take literally seconds to have started a much better discussion. Instead this thread is filled with responses derived from the comment's poor quality rather than the actual merit that Miguel's viewpoint would have.

It would have contributed more to the discussion to say nothing than to post 3 words that derail things in this way.


Source article made assertions that saber-rattling drove divergence between Linux desktops, as opposed to the popular narrative of divergence we've heard-- and offers no evidence, but only nebulous assertions.

Someone key that was there says "uh, no."

Yes, adding infinite context would make it a little easier to read, but it stands on its own.


>"infinite context"

No.

That's a pretty poor straw man. As I said, literally seconds would have done it. Heck 3 more words would have done it. "I created Gnome."

Besides which, the article was written by someone who was also around and claims insider knowledge closer in time to the events than when Miguel was involved with Gnome.

It was a low effort comment that did not coincide with HN standards. You can't talk your way around that.

Miguel is hardly enough of a household name, nor does he appear to have been involved with RedHat circa Gnome 3 such that his 3 word low effort comment can stand, with his username, on its own.


> Besides which, the article was written by someone who was also around and claims insider knowledge closer in time to the events than when Miguel was involved with Gnome.

He's talking about 2006-2010; during these times he was mostly doing desktop support, it looks like-- occasionally contributing to various online Linux zines. He wrote a similar article in 2013 for The Register. He spent 3 months as a tech writer for Red Hat on JBoss stuff in 2014. Then he did spend 2017-2021 doing tech writer work for SuSe.

> Miguel is hardly enough of a household name

Man... I would have a hard time thinking of more than 4-5 people who were equally prominent in the open source zeitgeist of those years.

> nor does he appear to have been involved with RedHat circa Gnome 3

He certainly was still plugged into the Gnome Foundation and going to the various Gnome meetups, etc, while in a senior role at a Linux vendor doing stuff with Gnome. If there was a problem, I'd think he'd have heard about it.


> And would he really know the true reasons here? It doesn't look like he was at RedHat, was he actually involved in Gnome 3 development to know the motivations?

Was OP?


OP was at SUSE when they made an agreement with MS touching on this issue. I don't agree with the OP, I think if there was specific pressure of this sort it wouldn't really be a secret or subject to speculation. My point in all of this is simply that the GGP 3 word comment without anything else is not defensible as a constructive comment.


Well even the information contained in your post and the reply ("I am the founder of GNOME, and [something about his official connection to the project during the 2 -> 3 transition]; I can assert that nothing like what was described here happened: No patent lawsuit was threatened; fear of such lawsuit did not play into our decision to develop GNOME 3; and Ubuntu did not start Unity because they were rebuffed from GNOME 3") would be adding to the discussion.

As for "need", I think it depends on what you mean. Is he morally obligated to set the record straight? No, of course not. Will his original reply, with only his username and no other information, convince very many people? Not really: if he actually wants to correct what he sees as misinformation, he does "need" to do at least a little bit more.

FWIW I certainly don't remember "Avoid being sued over Windows 95 design patents" being part of the motivation for the 2 -> 3 transition being discussed in the wider community at the time. (As opposed, say, to the DCO -- aka "Signed-off-by" discipline -- adopted by the Linux kernel, which was definitely designed to defend against MS-funded SCO nonsense.) But it would certainly make me feel better about Linux desktop fragmentation if it were true.


Was Miguel involved in GNOME at the time? I thought he was off making Mono then?


No idea; that's why I put something in brackets to be filled in. If he wasn't working on GNOME, then it moves his authority from "I was directly involved in the decisions" to "My colleagues were directly involved in the decisions"; still strong, but not quite as strong as the former.


Not only is it nonsense, you can look at GNOME's own wiki and various blog posts for its (publicly stated) design history: https://wiki.gnome.org/ThreePointZero/DesignHistory

The TLDR is that nothing was ever said about implied threats to sue, and it was all about "windows/virtual desktops are hard, let's _simplify_ everything!" Also, netbooks/iPhones were all the rage then, so you see where the modal design (for small screens) and giant padding (for touch input) came from in order to accommodate any possible convergence in the future or whatnot.


[Article author here]

Because saying so would be an admission of infringing the claimed intellectual property.

If any company had said, then or now, "we did this because we needed something non-infringing" then that would be a statement confirming guilt.


I'm just surprised that I had to scroll so down to see a correct characterization of the article...


Lol. I came here looking for your reply.


I think it's largely sarcasm. That is, the part about Microsoft threatening to sue over patent violations is probably true, but I think the op is implying that granting the patents was ridiculous.


I have no particular desire to defend Microsoft, but anyone who was around at the time and has an intact memory can tell you that this is not true. Citation very much needed here.


I'm not sure if the author of the linked post was being sarcastic but, as one of the rare Macintosh users of the era, I feel like I have to jump in here and point out that a lot of those original features weren't particularly original. The old-school macOS of the time (System 7, which was not based on BSD) had a lot of similar features in its UI. Those included things like the desktop hard disk icon (called "Untitled" by default I think?), "Trash", the Apple icon menu, maximize buttons on windows, and others. Windows 95 definitely had some unique stuff and was a nice OS to use. But I remember at the time it seemed pretty derivative.


I wrote the article.

I was a Mac user then too, from System 6.

You may forget but there was a graphical desktop for DOS PCs before MS Windows. It was called GEM, by Digital Research.

Apple sued Digital Research and forced DR to cripple its GEM desktop on the PC, by obliging it to remove features deemed derivative of classic MacOS: desktop drive icons, drop-down menus, etc.

So, for instance, DR moved drive icons into a window.

This is why Win95 has "My Computer" and "Network Neighbourhood" and so on instead: because it wasn't allowed to put drive icons on the desktop, but folders with drive icons in was OK because it didn't infringe.


None of the counterexamples you tried to give are actually in the article.


Well, the article was making the case that Windows 95 was some singularly unique UI, and the comment was suggesting that it really wasn't. Among Mac users at the time this wasn't exactly an uncommon belief; the joke "Windows 95 = Mac 89" was pretty prevalent, even making it to bumper stickers, IIRC. (Although former Apple engineer Erich Ringewald later observed that, while that was largely true, the real problem was that Mac 95 = Mac 89 was also true.)


I was at the 1995 MacWorld. I saw a lot of "Windows 95 = Mac '84" and "C:\ONGRTLNS.W95" T-shirts about. Then again, the first machine I saw actually running Windows 95 was a Mac, I believe equipped with SoftPC, at that same expo.


Yes they were... Apple Menu = Start Menu, Hard Disk Icon = My Computer, etc. You can't just not do the translations. Everything Microsoft did was stolen from other places, including Macs.


[Article author here]

You're missing the real core point.

Apple sued to protect its features.

So, Microsoft had to do things differently.

Apple: drive icons on the desktop. GEM, Windows: drive icons in folders.

Apple: global system menu at top left. Clock at top right. Windows: start menu at bottom left; clock at bottom right.

Apple: start apps by browsing to their folder. MS: separate app launcher menu.

Apple: no graphical way to switch apps, because classic MacOS multitasked poorly. MS: the task bar, with buttons for each running window.

The point is not the broad general similarities. The point is the differences.


No, I think you're missing the point. Intellectual property is a joke and none of this belongs to anyone. All any of this did was waste resources. Also, what a terrible article, I think you should apologize. Stop trying to reply with "No" in every comment and take in what people are saying to start with.


I actually largely agree with most of that.

I do think that being able to patent relatively minor UI stuff is absurd. I do think it was a total waste of resources, and I have said so, in print, for instance here: https://www.theregister.com/Print/2013/06/03/thank_microsoft...

However, I was there as an active industry watcher and commentator at the time, and it seems to me that the main focus of the comments here is simple incredulity. I have seen very, very few people responding with any kind of citation or reference or anything else.

I do not accept simple argument from authority. "The founder of the GNOME project says no" is not a counter-argument. [1] I am not sure that he was involved in GNOME 3. [2] He then became an MS employee and therefore has a (literally) vested interest.

Denial is not refutation.

Furthermore, as I have said, if anyone involved publicly confessed to it, that would constitute an admission of liability.


Good, serves them right. Apple shamelessly stole their interface from Xerox, so I don't see why we have to put everyone on blast here.

Steve Jobs said it himself: "Good artists copy; great artists steal"

Hell, arguably Apple wouldn't exist as it does today if they hadn't brazenly appropriated ideas from companies that worked in fields they were interested in. There's nothing new (or shameful) about stealing UI motifs if they're effective.


Apple took resizable overlapping windows from Xerox; in return, Xerox got shares in Apple. Not stole: bought, with stock.

Then, on the Lisa, Apple invented from scratch:

Standardised dialog boxes; a global menu bar; window title bars with global controls on them; standardised icons and functions and buttons (OK, Cancel, etc.).

None of this stuff was in Xerox Smalltalk.

Xerox invented 3 things: overlapping resizable windows; an object-oriented programming language to create and manipulate them; ubiquitous networking to connect the machines.

Apple took the windows, missed the rest.

Source: Steve Jobs.

https://www.youtube.com/watch?v=nHaTRWRj8G0

But Apple invented most of the standard GUI controls that we all know and use today. They weren't in the original Xerox system at all.


> Apple took resizable overlapping windows from Xerox; in return, Xerox got shares in Apple

This is sort of backwards. Xerox already had taken a stake in Apple before any of the demonstrations (and, in part, this is why the demonstrations happened at all). It was the Lisa team that came for the second of the legendary demos, including the engineer -- formerly of the LRG team at PARC -- who would go on to invent the Finder.


Well, there's nothing new under the sun really. And I should have framed it differently because, even though I was a Mac user, I wasn't exactly a fanboi. I don't care so much to champion Apple products as I do to point out that claiming the UI features of Windows 95 were original is inaccurate. And also that they weren't seen as being original at the time they arrived on the scene by certain cohorts.


Apple licensed the interface design from Xerox.

The only thievery involved was on Microsoft's part. They should have gotten smacked down in that lawsuit.


> "Good artists copy; great artists steal"

If Jobs said that, he stole it; the original was Pablo Picasso.


I didn't attribute the quote to him (very few good things can be directly attributed to Jobs), but it was one of his favorite quotes so I think it directly refutes the Apple Apologism mindset that everything good in modern OSes came from MacOS. Apple didn't invent the mouse or the cursor, and they didn't come up with the keyboard or the floppy drive. The pastiche of a mid-90s computer doesn't resemble a Mac at all.


I don't remember MS being all that threatening, though certainly there may have been less public pressure.

The author worked for SUSE which signed an agreement w/ MS to sidestep any issue, so maybe that closeness made it seem a much larger issue than it was.

I don't even see much of a real change in the ecosystem: a single command or a few clicks in the GUI will get you LXDE on a fresh Ubuntu install. I certainly don't see any hurdles here that would have been a major setback for desktop Linux.
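For what it's worth, the single command is roughly

    sudo apt-get install lxde

followed by picking the LXDE session on the login screen; package names drift between releases, so treat that as a sketch rather than gospel.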

Desktop Linux was (is?) always just around the corner, but never quite got there for reasons unrelated to MS's actions. Maybe MS would have taken more aggressive action if it ever looked closer to mainstream?

I'd rather hear from Unity and Gnome devs though, what motivated them at the time? That would determine if MS threats were a significant factor. From what I remember of those times it seemed more like UI folks wanted to explore other options in general, especially after a very nice looking and usable non-MS example came along in the form of OS X.

There is also now a major commercial distro that does use an MS-like desktop: SteamOS. Around May I saw analyst estimates of about 1.5M units sold of the Deck. If the author's claim is still part of MS's playbook then SteamOS will be an interesting test for it. Especially because Valve's Proton development has in some cases improved performance above native Windows. SteamOS with wider hardware support, driving gamers' primary systems, would be a larger desktop Linux threat than I think has ever really been present in the past.


I'm sorry, but timelines don't match up here.

Windows 3.0 was released in 1990. Patents last 20 years. Gnome 3 development didn't really start in earnest until 2008 (2 years after the alleged reason for the change), only 2 years before any patent MS could claim on the position of the X button or the Start button would have expired. Gnome 3 was finished in 2011, a year after any patent related to Windows 3 or earlier would be dead and buried.

And arguably, some of the design elements mentioned in this post were present in Windows 1.0, released in 1985.

There are also other prior art examples such as Amiga released in 1985 which had similar "top right button closes the application" design elements.

And then let's not forget, this is about enforcing patents. Not something that's really easy to do with FOSS, hence the reason projects like FFmpeg and x264 have stuck around even though they very clearly violate a bunch of patents (if someone distributes the binary). Heck, even projects like DeCSS2 were almost impossible to kill off despite its explicit flouting of the DMCA.


> The task bar, the Start menu, the system tray, "My Computer", "Network Neighbourhood"

Literally not a single one of these existed in Windows 3.0.


[Article author here]

Thank you.

Also, just for the record: Windows 1, 2 & 3 didn't even have an "X to close" button on their windows.


> There are also other prior art examples such as Amiga released in 1985 which had similar "top right button closes the application" design elements.

AmigaOS has the close widget on the top left, like Mac OS.


> Ubuntu used Unity, alienated a lot of people who only knew how to use Windows-like desktops.

Nonsense. It was alienating because it was slow and clunky, and for the people who chose Linux over OSX because they don't like OSX interfaces, it meant being forced to use an inferior version of something they already didn't like.

The usual OSS intransigence, where they couldn't actually figure out what UI they were trying to implement (the blue-yellow-red buttons on the upper right, but also a close box / menu on the upper left, and then also right-click for optional window behaviour), was also just straight _bad_.

That, and it was really slow and clunky. FWIW, I actually used fluxbox a lot for VNC installations and found it to be more pleasant and snappier (especially remotely) than Gnome.

I recall the switchover and don't remember hearing _anything_ from _anyone_ about liking it, but the reason was never that it wasn't enough like Windows.


It wasn't just slow and clunky, it changed how to do basic things, split configuration between about three different places, and changed defaults (which they still frequently do, to the point that it is still on-going and my "go-back-to-sane-defaults" script is now pages long)

The best UI is one the users already know. Changing to get more users only to lose your existing ones is a massive failure.


In my own experience, the biggest turn-off for Unity was the Amazon button integration.

Of course you could remove it, but I felt that a threshold was passed.


[Article author here]

> it changed how to do basic things

Which is the entire point of my article.


And my point is, doing that is the worst UX thing you can do.


On the one hand:

GNOME 3 did the same thing. That seems to be overlooked a lot.

On the other hand:

If you're a keyboard user, like me, then all the Windows keystrokes that worked in GNOME 2 still work in Unity, so it's much less jarring than KDE or something.

As an example: the Windows Quick Launch toolbar, that came in with Win98 in 1998, lets you start the first 10 apps by pressing Win+0 to 9.

Unity honours that.

Most people didn't know it was there, but if you did, it still worked. That is what I consider good attention to detail. UI that you didn't even know was there was carefully preserved to make Unity less of a shock to the system.

Yet all the mob bitch about is that it's Mac-like, which was because Ubuntu was under threat of prosecution for being too Windows-like.

Be fair. Learn about this and consider it. Because currently it looks like you're not.


> If you're a keyboard user, like me, then all the Windows keystrokes that worked in GNOME 2 still work in Unity, so it's much less jarring than KDE or something.

Actually since you mention it. GNOME 3 and Unity block basic readline keystrokes from working. Alt+Space has been a keystroke in bash since the 1980s, and they fucking override it. Fuck them.
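(There is a workaround, for anyone else stuck with this: the collision is GNOME's/Unity's window-menu binding grabbing Alt+Space before readline, where M-SPC is set-mark by default, ever sees it. Assuming the schema and key name haven't moved, it can be cleared with:

    gsettings set org.gnome.desktop.wm.keybindings activate-window-menu "[]"

But the point stands that nobody should have to dig that up.)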

> Learn about this and consider it.

NO. Fuck you. I have my UI and it will work how I want it to. The UI comes to me, I don't change to fit the operating system. That's bullshit. This is ergonomics 101.

They keep changing shit for the sake of change, then fuck them. They don't get to keep their install base, and nobody will bother to learn their bullshit.


You took the time to reply to me after a week, so OK, I'll do you the same courtesy.

> Actually since you mention it. GNOME 3 and Unity block basic readline keystrokes from working.

I'd never even heard of "readline keystrokes" before.

> Alt+Space has been a keystroke in bash since the 1980s, and they fucking override it.

I am not sure what that does, but hey, I'm sorry you feel that way.

But I have 2 counter-arguments. Neither will satisfy you, I realise, but hey, I feel it's important to say.

#1: Bash has used this since the 1980s.

According to Wikipedia, Bash was first released on June 8, 1989.

Windows has used alt-Space for the window control menu since at least Windows 2.0 -- I used Windows 2. I didn't like it much but I used it.

Windows 2.0 was released December 9, 1987.

So, that's 18 months earlier. If that was a new bash feature in 1989 then Brian Fox used a keystroke that was already in use by a widely-used GUI. Bad plan.

#2: Bash is on millions of Linux boxes, and Mac OS X from 10.3 to 10.15, and probably FreeBSD and maybe other BSDs.

OTOH, we are talking about a basic Windows keystroke that's probably on billions of machines. At least 100 Windows boxes for every Linux box, I would guess.

I am not arguing that this is right or proper or good. But on sheer numbers, the piece of software that used it first and which is used by 2 orders of magnitude more people gets to keep it, IMHO.

Sorry, but either on numbers or on chronology, it's fairly clear, IMHO.


I was the lead for Ubuntu Desktop at Canonical.

This is nonsense.


From my memory GNOME 3 and Unity were created because there was a trend to make all interfaces both desktop and mobile friendly, Ubuntu tried to ship a phone using Unity, and GNOME 3 was advertised for its tablet friendly interface.

In the Windows world there was the Metro UI of Windows 8; Apple insisted on using different UIs for desktop and mobile and was laughed at.


Unity was really good at saving vertical screen space. Invaluable on the 1366x768 screens that were typical of notebooks of the day. Gnome3, on the other hand, wastes it as if the user has nothing better to look at than panels, upon panels, upon panels.


This is nonsense, revisionist and blames people who didn't like having their workflow yanked out from under them for only being able to use "windows-like" desktops.

Linux (and the wider Unix ecosystem) had a variety of paradigms available, in the form of things like fvwm, GNUstep, Enlightenment etc. Gnome 1's default look is a hybrid of CDE and Windows influences, and Gnome 2, with its default top bar, took things in a different direction; if anything its default layout looks much more like earlier Mac OS releases, which had had a top bar since, AFAICT, the 80s.

Microsoft were certainly making a lot of noise, but with their SCO proxy war failing so badly, nobody was taking it seriously by then.

Gnome 3 happened because the people who were behind Gnome 3 wanted to change everything and they were in a position to do so. The bad feeling came because not only did they change everything, they went from a "do things your way" to a "you must do things our way" model and they (apparently deliberately) killed the ability to run Gnome 2 and 3 on the same machine. They announced 2 was deprecated and abandoned as of right now, and that the users would basically just have to suck it up.


I don’t doubt it, but the post does not lay out the timeline. Those events likely aligned the business interests with what the designers were already wanting to make happen. From what I recall, by 2004 Linux desktop designers were already looking to move on from mimicking and displacing Windows, to winning on their own terms. There seemed to be a lot of energy to do something new and different by 2005. It seemed like there were a lot of wireframes and prototypes in both the GNOME and KDE camps. Around 2005, touchscreen tablets were also having a moment. I think I remember a popular Nokia model. 2006 had Sugar UI for interactive learning on the OLPC XO.


I remember gnome 3 being a lot more touchscreen friendly.

One big turnoff was that it wasn't compatible with my favorite gnome 2 theme.

In the end, I switched to XFCE.


[Article author here]

I provided citations for my claims.

Can we see some for yours, please?


I am going from memory. I can anchor the memories as September 2005 till early 2006, as it was a memorable time for me. I spent that time in a Palo Alto garage working on a "web 2.0" web browser, Flock, with a small group of people that included a few who had previously been at Eazel and were still passionate GNOME participants.

Thinking further on it now, unrelated to that work, Jeff Waugh @jdub would be person I'd go to for receipts.

If I was searching the web, I'd be looking for references to Gnome ToPaZ with topaz being a play on ThreePointZero:

"When the prospect of GNOME 3 was first discussed by developers in 2005, the concept took on a life of its own among the users who imagined that it would be an audacious reinvention of the desktop with completely new interaction paradigms and a new kind of user interface. This pie-in-the-sky vision was referred to as ToPaZ, word play on the phrase three-point-zero. " https://arstechnica.com/information-technology/2008/07/gnome...


December 2005 article:

"So, the GNOME people have started to focus on questions such as universal access, so if you have motor difficulties or other disabilities, software still should be usable. Likewise, it shouldn't matter what language you use or character set you need, software should be usable.

Part of Galago and Telepathy comes from getting beyond questions of windows, menus, icons and pointers and focusing on the things people really care about. In Jeff's view, these things are people, events, documents and sex. When questions of when GNOME 3.0 will be released arose, people have suggested it was a stupid idea. So Jeff came up with TOPAZ, taking the first letters from Three Point Zero and inserting some vowels. TOPAZ is not planned for release at this time."

An Evening with Jeff Waugh Linux Journal by Colin McGregor on December 27, 2005 https://www.linuxjournal.com/article/8752


In the end MS did not sue anyone... but it got what it wanted: total chaos in the Linux desktop world.

True. But it wasn’t their accomplishment. Nobody dragged these DEs into an endless war against their own users and developers. DEs were just frontend frameworks of the day.

Those skilled enough to apt-get install <lighter-de-or-wm-of-choice>, tick a few checkboxes and fill menus with commands were constantly perplexed by all the drama.
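Concretely, that was rarely more than something like

    sudo apt-get install i3

(or openbox, xfce4, etc.) and logging into the new session, with the exact package names varying by distro and year.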



Has RedHat ever made anything non-horrible? It popularized all of Poettering's abysmally overcomplicated designs (Systemd/Pulseaudio/etc), there was Ansible, GNOME, NetworkManager, OpenShift, RHEL kernels, they abandoned XOrg, killed off both CoreOS and CentOS. Podman is supposed to be good but I've never gotten it to work.


Not since they bundled an unreleased compiler in their distribution (around RH 7).


I'm pretty happy with i3/sway and can't believe how much Microsoft spent on case studies when tiling window managers do appear more productive


I like tiling window managers but I've hit too many problems/bugs... I just use XFCE now with an add-on that lets me snap windows to 1/4, 1/2 or full screen depending on where on the edge of the screen I move them, and that's good enough for me.


If they did any research into tiling, I think a lot of their case studies would be about how to get users to adopt and learn it. Learning a good tiling WM can be very productive, but you only see the benefit of it after you've already put in the work. It's not instant gratification.

Though on the other hand, MS has a habit of dumping bad UI design on the user and then having to scramble to revert amid all the commotion. Windows 8 being a prime example. So perhaps they don't do this.


There were plenty of UX studies showing that Win8 was awesome. Same story for VS 2012, another famous (at least among VS users) example - you can read the original blog post announcing it for explanations of why they thought it a good idea: http://web.archive.org/web/20140813025825/http://blogs.msdn....

It just turns out that it's very easy to bend such studies into showing what you want to see.


I clicked the article by the OP and immediately Ctrl-F'd "sway", thinking surely I'm not the first person to see this as an orthodox vs. unorthodox kind of problem.

The comments give it away: the tmux-like or emacs-like productivity gains with sway (or its ilk) are only obvious in retrospect. And indeed, measurement of the productivity gains those of us in the tiling camp speak of is elusive: anecdata, not well-documented or consensus-driven.

I'd suggest measurement lies somewhere around Fitts' law.

Indeed, what Fitts' law could not have foreseen is that we'd have devices that could solve this rather two-dimensional time-speed-distance problem by introducing Z-order navigation which is arguably instantaneous and non-linear with a keystroke.

https://www.interaction-design.org/literature/topics/fitts-l....
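For reference, the usual Shannon formulation of Fitts' law is

    MT = a + b * log2(D/W + 1)

where MT is the time to acquire a target, D the distance to it, W its width along the axis of motion, and a and b empirically fitted constants. The argument above is essentially that keyboard-driven, Z-order window switching takes D and W out of the picture altogether.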

The key insight is that window management doesn't belong in your terminal or in your text editor. Rather, window management belongs at the same abstraction layer as the window, or window manager, or meta - one layer above.

Put intuitive, direct manipulation of window geometry in the window manager tied to the keyboard, not the mouse, and you get massive gains of time, space, and indeed, space-time that transcend terminals, text editors, and the limited frame of lower levels of abstraction such as CLI, TUI, etc. precisely because the real time, speed, distance problem is not in the physical, but rather the conceptual, or mental model.

The mouse, for all its brilliance is part of the impediment in spatial reasoning. I suspect we can test and measure this across cohorts, but I have no evidence. I don't believe the results are consistent across cohorts differentiated by spatial reasoning scores, much like tmux, emacs, or any sufficiently complex and advanced system that has few adherents, but those adherents can demonstrate productivity well-above average.

The one key missing link in the current implementations that I've examined is that they seem to miss the notion of shadow windows - windows that have only an invisible, mental component, not a visual one. I think they're getting there, but that concept is difficult to explain or learn.

To quote Kaja Perina referring to Alan Turing's read on Grothendieck's method of solving problems:

Grothendieck's approach was to "dissolve" problems by finding the right level of generality at which to frame them.

I'd argue that that is precisely what is wrong with most R&D around window management - it gets sucked into the pretty and beautiful visualization cult instead of the academic and more principled approach you'd see from Alan Kay and his protege Bret Victor as having this kind of perspective today. There are a few others, but they tend to be quiet academics.

And of course, I'd be curious what Kay and Victor think of the current state of the art in window management given the history and recent advances in tiling. https://www.psychologytoday.com/us/articles/201707/the-mad-g...


That is incredibly insightful and thought out. Rarely do I see a comment that dives into the science and psychology. I agree with everything you said. I will give it a read.

I would like to see more advances in typing as well. I can almost envision a new language or device that may offer better performance as well.


The mention of Solaris is interesting. The Gnome 2 based Java Desktop was part of what would be used to sell to commercial institutions that had regulatory requirements for things like accessibility (e.g. banks, government), and it did win customers. This meant Sun put a lot of fixes into G2 upstream that helped stability and viability.

I used it natively on solaris 10 on a desktop for several years with few limitations or issues. You could even use it on a SunRay but the performance would never compete with the dolphins

Then, at a critical time it just kind of stopped being. Sad day indeed


Project Looking Glass was also a Sun/Java thing that led the OSS 3D window effect bandwagon: https://en.wikipedia.org/wiki/Project_Looking_Glass


Yeah, I ran Looking Glass at the time to see it in operation. Sadly, on commodity hardware the compositor was incredibly slow. But another project that just stopped. I too then got into the Compiz craze of a fire animation when one minimised windows, and a cube to change desktops.

What happened to us. We used to have fun.

The modern day linux/unix desktop is no better than it was 15 years ago


I vaguely remember the Microsoft/SuSE deal but never linked that to GNOME 3. I really love gnome-shell, but one thing I always have to tweak is putting the minimize/maximize buttons back in the title bar (I could never figure out why they would get rid of those... now I know?).
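For the record, the tweak is a one-liner, assuming the schema and key haven't moved since I last set it:

    gsettings set org.gnome.desktop.wm.preferences button-layout ':minimize,maximize,close'

(or the equivalent toggle under Titlebar Buttons in gnome-tweaks).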


I assumed that it either had something to do with filesystem patents[0] or the vague patent claims they made on early Android vendors. UI design didn't even cross my mind - I didn't even know those were patentable, and Apple's foray into "look and feel" lawsuits was something Microsoft adamantly fought against. Hell, if I've heard correctly[1], Microsoft was the reason why those early UI design flourishes like faux-3D highlights made their way into CDE and Motif.

Given that Miguel de Icaza already is calling BS on this I'm starting to doubt the veracity of any of this.

[0] Microsoft still claims ownership over ExFAT, for example

[1] It was vaguely mentioned somewhere in NCommander's very, very long "Installing every version of IBM OS/2" stream


The SuSe agreement was really about networking and filesystems. At the time, SuSe had been bought by Novell - a networking company. They saw Linux as the future of their business, but the risks around Windows interop as a real threat to that business. They did not have an adversarial relationship with MS (actually the opposite), so they moved to mitigate those risks with a friendly agreement. Other companies did not share those interests. This post is bad revisionism.

(As an aside, what MS did or did not do in the '90s really has no bearing on what they did in the '00s or later. 90s MS was a hustling startup, '00s MS was a corporate behemoth. Like Apple pre- and post-iPhone, they changed behavior and (declared) values once they became the top dog.)


This is one of the biggest problems for Linux on the desktop that hasn't been addressed with all the work. If it gets large enough: patents.

Design patents by Microsoft, design patents by Apple, design patents by Google, design patents by third parties, software patents on algorithms, and countless codec patents that Linux users violate every time they download an H.264, H.265, AAC, or aptX encoder or decoder, and DMCA laws they violate every time they play a DVD. To this day, Fluendo sells $20-$30 GStreamer codec packs that are legally licensed in the US, for businesses who notice and care. However, so few care at this point that Fluendo discontinued the legal DVD player, leaving basically no legal equivalent on Linux.

It's easy to ignore when the companies that own most of them don't feel threatened. But if GNU/Linux was on 40% of PC Desktops, Microsoft would be absolutely eyeing those patents, and MPEG LA + Access Advance would be figuring out how to launch codec lawsuits, and the DVD CCA with the MPA would be running after anyone with a copy of VLC, Adobe might be hunting for patents in the printing stack that could be construed as PostScript-related, and on and on.


Except none of this happened. Multiple project leads from the projects in question are in this very thread denying this made up version of events, which none of us, even those of us deep into it at the time seem to remember.


The sources this post links to support its primary claim don’t even mention the Windows 95 desktop. The only one that comes even close was written by the same author. It also doesn’t cite anything that would support its claim.


I wrote it. What citations would you like?

Tell me and I will see if I can find some.

Microsoft cited how many patents were infringed, but only approximately:

To CNN & the SJ Mercury, 235: https://money.cnn.com/magazines/fortune/fortune_archive/2007...

https://www.mercurynews.com/2007/05/15/microsoft-says-linux-...

To ComputerWorld, 283: https://www.computerworld.com/article/2566211/study--linux-m...

But AFAIK it never actually listed what the patents were.


I’d like to see something validating the claim that patents influenced Linux GUI design.

The Fortune article printed by CNN only says that 65 patents apply to the Linux GUI. It’s so vague that I can’t see how it would be actionable information for FOSS developers.

The SJ Mercury article says the same thing, but cites the Fortune article for its claim; it’s not a separate source.

And the ComputerWorld article explicitly does not cover this claim, noting that they didn’t examine patent infringement risk for non-kernel components like KDE.


OK. I will see what I can do for some future piece, because I think this story has legs.

It was the publicly stated intention of RH and Canonical at the time of the MS threats, but I need to find records of that.


Just take a look at GTK 3+. It has to be deliberate sabotage. By Microsoft? Who knows.

I think it's funny how everyone is still trying to copy Apple. Apple gradually stopped being worth copying after Snow Leopard. After High Sierra, you'd have to be high to want to copy anything from it. I guess it's just inertia - people haven't realized it isn't good anymore.


Okay, we all agree this is not what happened (and we have confirmation from project leads in this very thread), and I think we know what happened with GNOME, but what happened with KDE? Did KDE just fail by accident, completely coincidentally at the same time? I was always a GNOME fan myself, so I never really followed KDE. Does anyone know the story there?


They had to move to Qt4, and took the chance to make some big changes - in the name of "convergence", which was a shared buzzword back then. They generated a lot of hype, then shipped a 4.0 release that was very half-baked. Whether that was always the plan or not, the public image of the project never fully recovered.


If gnome's menu is a "start menu", then surely classic Mac OS's Apple menu also counts as a start menu?

https://en.wikipedia.org/wiki/File:Apple_Macintosh_Desktop.p...


[Article author here]

No.

The Apple menu wasn't originally hierarchical; it was flat in the first 6 major versions of classic MacOS, and it wasn't originally an app launcher. It held the control panel and things, but no launchers.

It became customisable in MacOS 7, which introduced aliases (symlinks in xNix terms, shortcuts in Win9x terms).

Then you could put aliases in the "Apple menu" folder inside the System folder, and make it into an app-launch menu, but that was not its purpose and MacOS did not ship with it configured like that.

Whereas the Start menu was always hierarchical, and was always an app-launch menu; in Win95 it did little else.


It's been a while, but if I recall, the classic Mac OS Apple menu was not a general purpose "start menu." IIRC, it provided access to limited functionality, like desk accessories and recent documents.



ok! you are right. It has been a while...


> In the end MS did not sue anyone... but it got what it wanted: total chaos in the Linux desktop world.

So true, and Miguel de Icaza, the architect of Gnome, later got his wish of working for Microsoft. I said at the time, he was working for Microsoft all along.


I came from the vtwm/otherwise highly customized xwindows world and could never really understand Unity (also can’t really figure out TikTok or Snapchat UX, so maybe I’m a dumb dinosaur). Windows, Mac, Motif, etc all were pretty easy.


Cheers to Xfce for ignoring most of this nonsense. I love having a stable daily driver.


Ubuntu user since 2005. I was one of those users who flew to Mint. I loved MATE and eventually Cinnamon.

Over the years, some of my work habits changed and Cinnamon slowed down my system. So I went back to Unity, but I still prefer Windows-like DEs...


Not sure I really buy it. The patent suits seemed more related to VFAT than anything else, especially not as vague as desktop metaphors (remember, Microsoft succeeded against Apple trying to come after them in the mid-1980s on similar notions; they'd be fools to think that precedent would not hold on yet another competitor).

It's also the same time that Microsoft seemed to want to abandon Windows 95 paradigms as quickly as possible, with the release of Windows 8. It did seem like everyone just went crazy at about the same time.


KDE 2 was such an incredible experience when it came out. I understand the GPL purism in GNU/Linux that hobbled Qt and by extension KDE, but I always felt that KDE from 2 onwards was by far the superior desktop environment. Ultimately despite Qt and KDE adopting GPL they never seemed to catch up in terms of corporate sponsorship or distro adoption.

I thought it was a sort of bitter irony that Apple basically co-opted Konqueror into their core web platform... Shows the quality of engineering the KDE folks had.


The bar at the left of the desktop on Unity is very much like the Windows 7 taskbar, which can optionally be placed at the left. This doesn't seem credible to me.


[Article author here]

I think that is wildly misrepresentative.

I covered the differences in my 2013 (ish) article which I linked at the end of my blogpost. I don't think you've read it, from what you're saying. I suggest that you do.

Win7 revamped the taskbar specifically to make it look more like a dock, but it wasn't like that before (which I suspect you haven't used) and there are salient differences between docks and taskbars.


You could place the taskbar in Windows (at least from 98) on any edge of the screen. The same in fvwm.


[Article author here]

This was a feature in Win95, even in beta in 1994 or so.


I'm still using gnome 2 with Unity on Ubuntu 16 ESM while Gnome 3+ has no future on my notebooks. Never thought that Gnome 2's LaF is Windows-y; if anything, it looks more like Mac Os, with global menu and minimal window decorations. I always thought Gnome devs wanted to jump onto the tablet bandwagon (or simply wanted to try out something different and non-WIMPy), but failed hard.


I wrote the article.

I do not understand your points here.

GNOME 2 does not have global menus. GNOME 2 did not have minimal window decorations: it had a window menu and separate minimize/maximize/close buttons.

GNOME 2 (and MATE) does have:

* a taskbar with iconic buttons for windows (a Win95 innovation);

* a hierarchical app-launch menu (a Win95 innovation);

* a system tray with the clock and status icons at the opposite end of the panel from the app-launch menus (a Win95 refinement of a MacOS feature).

And so on.

I think you're remembering wrongly or are confusing different GNOME versions.


Seconding/thirding/N-ing general consensus of those invested in the Linux desktop scene from the era that this wasn't the rhetoric or the feeling at that time.

I think it'll take a stronger argument than this to convince anyone that lived through this era.

It felt like a mania to shove mobile interfaces onto desktop computers for fear of missing the next big thing, usability and sense be damned.


TLDR:

> SUSE signed a patent-sharing deal: https://www.theregister.com/2006/11/03/microsoft_novell_suse...

> Note: SUSE is the biggest German Linux company. (Source: I worked for them until last year.) KDE is a German project. SUSE developers did a lot of the work on KDE.

> So, when SUSE signed up, KDE was safe.

> Red Hat and Ubuntu refused to sign.

> So, both needed non Windows like desktops, ASAP, without a Start menu, without a taskbar, without a window menu at top left and minimize/maximize/close at top right, and so on.

I remember Unity was born because GNOME (Red Hat) wouldn't accept Canonical's changes. I remember GNOME 3 was born as a long-term project that could adapt better for the future (nascent adaptive UIs), just like KDE 4. It was known to be slow in its first iterations, with the expectation that it would get better over time.

Of course, if the described reasons really made sense, it is not something that would have been discussed in the open, but that is not what I felt at the time. I even vaguely remember that the "genie" effect in Compiz needed two curves when minimizing a window to be "different enough" from Apple, but that was all. I also remember you couldn't play a DVD or an MP3 out of the box because of DRM and patents. Everything else... I never had the impression the author described.

Disclaimer: around 2010-2013 I was a sporadic contributor to Compiz and GNOME.


At the time, both Novell/SUSE and Red Hat were significant contributors to GNOME. Novell acquired Ximian around the same time they bought SUSE and merged the two. That effectively made SUSE into a GNOME shop rather than a KDE one, though it didn't last as SUSE bled out all of their GNOME developers...


I’ll go against the rules and say that unity makes my guts churn. Linux/gnome is one sleek ui design away from mass adoption and unity is the exact opposite. I hate it. I’d even contribute £ to a sustained effort of making gnome look good.


The "classic" elements of Mac OS X were out in 2001 and then Apple made the UX more and more convergent to iPhones and iPads. Enterprise programming, including FOSS, from the '00s to our days got more and more hooked to Apple rigs. GNOME 3 and Unity, although bad resource-hoggers, were multi-year efforts toward abandoning Windows-ish dumberies (sticky keys for example) to approach similarity to devs' personal computers' perceived terseness. Some intel and microsoft employees and ex-employees were and are working on GTK and systemd, so this article looks very very, very bold in their statements.


I've been using linux since '97, RH 5.1 that came on cd in the back of a linux tome I can't even recall the name of now was my first distro. The partitioning tool was called disk druid and the desktop? Fuck man the desktop was ugly, I don't even remember what it was called. Hahaha I just looked it up it was gnome, the beta: https://www.youtube.com/watch?v=kkD9jY8cxIY god I'm old.


On the topic of Windows 95 UI clones... If you go one minor version back from your experience, RedHat 5.0 actually shipped with FVWM95* as its default desktop and it was fugly:

http://toastytech.com/guis/x.html

* FVWM, with an unholy patchset to drag it kicking and screaming into kinda-sorta resembling Windows 95


Regardless of the reasons, I'm quite happy with Gnome 3 as my daily driver. Once I understood the design philosophy of the /Shell/ aspect of /Gnome Shell/, the friction disappeared. The idea is that Gnome 3 can (should) be a keyboard-centric UI for most tasks whereas Gnome 2 and other similar UI's were/are arguably mouse-centric. For me, it's much faster to navigate using hotkeys and shortcuts than mouse/trackpad movements.


Traditional desktops supported keyboards heavily; they were born at a time when mice were optional. Original Mac as well. Gnome3 has removed most menus, which can't help it in that area, although I'm guessing they added a few others.


This does not match any of what I remember from the time


The timing on this seems off to me - I am wondering why would gnome and unity releasing a different UI/UX ca. 2010 be a response to Windows 95?


The article is not saying GNOME 3/Unity is similar to Win95, it's saying that Steve Ballmer believed that GNOME 2 and KDE 3 (shipped by Red Hat, Ubuntu, SuSE at the time) were too similar to Win95-Win8, which Microsoft owned patents on. And therefore, because of rumblings from within Microsoft in 2006-2007, GNOME had to change as soon as possible, and KDE was fine staying the same because SuSE had been semi-blessed by Microsoft.

(The premise of the article is factually wrong, I'm just explaining its thought process)


[Article author here]

Because, as I have cited in about a dozen links up and down this comments page now, MS threatened to sue between 2004-2007, then companies started to settle or work on something different.


I still think Win95 is peak UI, but it's what I grew up with, so that's probably expected.

Minus some of the non-resizable windows, some newer UI elements that are clearly better, etc.

I still run "it" via XFCE and Arch Linux: https://github.com/grassmunk/Chicago95


Whatever the cause, I think what happened had a positive outcome. Being able to choose from a huge selection of window managers (or none at all) is something I really like about Linux. Thanks for all the hard work that people have put into WMs!


Nextstep 1.0, 1989, with a dock.


However, RISC OS (Acorn), 1987, with an icon bar. :-)

Project lead Paul Fellows is quoted in this article [1] as follows ("Arthur" being the code name for one of two competing projects, which became RISC OS),

> "I found this page I was very pleased by: 'Apple acquired the Dock from Steve Jobs' NeXTstep OS which stole it from Acorn's Arthur Operating System of 1987.'"

and,

> "There was a guy at Colton Software with us… who joined Microsoft in Seattle, and it was shortly after that that Windows acquired an icon bar. I know how that idea got there."

And on the rationale behind it,

> "In case anyone ever asked where it came from, we were sat in a room thinking 'how do we design this to be different, so we don't get sued by Apple?' The Mac had a menu bar of text across the top, so we thought 'we can't go across the top, we'll have to go across the bottom – and we can't use text, so we'll have to use icons.' That's why it's like that,"

[1] https://www.theregister.com/2022/06/23/how_risc_os_happened/

Edit: That said, Xerox PARC had the Cedar development environment in the 1980s and this featured sort of a task/icon bar already. Notably, the second, long ongoing project for an Acorn operating system, written in Modula 2, was conducted at ARC (Acorn Research Center) in Palo Alto, next to PARC. There may have been some exchange (and inspiration) involved.

[2] http://www.bitsavers.org/pdf/xerox/parc/techReports/CSL-83-1...

[3] Video, "Eric Bier Demonstrates Cedar", CHM YT channel, https://www.youtube.com/watch?v=z_dt7NG38V4


It would be interesting to make a study of users who do not consent to report metrics and the UIs they prefer. They might well be a substantial slice of the user base, but invisible when decisions are made.


Nobody gives a damn about those studies. They don't even need studies - they can look at bug reports. It's not that these issues are new. Just looking at MS, there are bugs which have not been fixed for years.


Then I suggest you read some studies about where the incentives are in open source development; spoiler: they're not in closing bug reports ASAP.


Personally I like how GNOME is now, especially after GNOME 40. It feels like a more advanced and more modular version of the macOS desktop, with a great feel and finish that IMO KDE lacks.

It's great!


>Before the threats, almost everyone used GNOME 2

No, not really. Early GNOME was a joke compared to KDE, in that while KDE could actually be used as a desktop, GNOME was helloworldware.


Rather confused by this claim that GNOME 2 was helloworldware. GNOME 2 was one of the very few Linux desktops that saw widespread usage within large companies. Red Hat and Canonical had professional offerings based around it. It attracted a small chunk of home users who were pissed off by Windows Vista. So it definitely got UX tested quite a lot. Red Hat Linux actually still includes a builtin clone of the GNOME 2 design just to cater to people who have been using the same UI in a professional setting for decades. They only recently switched away from this classic clone layout being the default.

KDE is the one that was way more community-focused, but it did see a bit of use for professional workstations.

You may be thinking of GNOME 1 - it was incomplete garbage.


[Article author here]

Early GNOME was GNOME 1.x. It was not great.

GNOME 2 was not early. It was the 2nd version, it fixed the issues, it was very widely used, and it still exists today in the form of MATE.


> In the end MS did not sue anyone... but it got what it wanted: total chaos in the Linux desktop world.

KDE is still the most used Linux DE.


[[citation needed]]



By under 12K respondents who are gamers. I am not confident that this is a representative sample.


Am I the only one enjoying and finding peace in this type of short sentence narrative?

Is it maybe because English is not my native language?


> Ubuntu used Unity, alienated a lot of people who only knew how to use Windows-like desktops, and that made Mint a huge success.

This sounds disingenuous, or at least overly simplified. Most people I know don't like Unity. It doesn't matter whether they were used to Windows, OSX, or Enlightenment. "Don't like" still means it's tolerable, they don't hate it - but most prefer to use anything else.


We know why Unity happened. Because Canonical has a bad case of NIH


lmao author thinks unity is better than macOS, invalidates his opinions for me


Contempt for the user.



