ps: also, flat design came after both the Aqua trend, where effects were everywhere, and the peak of skeuomorphism. Not that surprising in a way.
 BeOS, Win 3.11, classic Mac OS, Win95 (Office 97 era) and NT5
- Standard interaction widgets. There was no breaking the scroll; there was no button that you couldn't discover how to press.
- Expert-oriented interfaces. There was no action without a shortcut. The most complex software always had some CLI or an API (optionally with an interpreter).
- Discoverability features. The shortcut descriptions were embedded in the same places you had to click to get the functionality. Buttons were clearly marked. There was almost always some text area telling you what was happening.
I don't miss skeuomorphism, but it was often used to mark real features, and those features were gone along with the arbitrary skeuomorphisms. Current GUI trends were created by people with no interest in making their software useful; instead they only keep an eye on showroom conversion rates. (Whether or not they are right to do so, it's a problem nonetheless.)
Yeah, I feel A/B testing is really the place where Satan secretly influences the world. I'm losing track of how many times I've seen user-hostile or application-debilitating decisions justified by "data" on user behaviour. Something is very wrong here.
Or occasionally a setting you last touched five years ago when you installed the thing, and you've been quite happy with how it's been set ever since.
How strange that they don't need data to install ads in start menus and new tabs, to "refresh" the whole GUI (once you've fully committed the old one to muscle memory), or to add other pointless bloat.
# Runs horizontally.
- Result: wastes precious vertical space on low-resolution widescreen displays (i.e. business displays and most notebooks) that should be dedicated to showing the document body.
# Due to being horizontal, GUI items appear/disappear depending on window width.
- Result 1: having two documents on screen can be disadvantageous compared to having one on screen because a lot of the GUI items aren't showing or are hidden in submenus, encouraging wasting screen real estate by only having one document on the entire screen.
- Result 2: due to the Ribbon's horizontality, instead of having the elements stay on screen in a consistent manner and be easily scrollable, the interface constantly surprises the user by unexpectedly hiding even key GUI items.
# No option to disable.
- Result: if you wanted to have certain GUI items visible at the same time… well, tough. If the floating palette that inconsistently appears when the user places the mouse cursor over certain elements doesn't have your favourite item, then you're out of luck if the items you want to use exist under different tab groupings.
iWork '09 and prior had the best design: a main toolbar with general items, a smaller context-sensitive toolbar underneath, and context-sensitive inspector windows. If the revamped iWork had simply docked those context-sensitive inspectors, rather than get rid of the context-sensitive toolbar, it mightn't have received such a poor response.
The irony is that Office 2003 (and a few releases prior) already had inspectors on the side, and those would have been perfect given the prevalence of widescreen displays, leaving as much vertical space for the display of the document body as possible.
LibreOffice seems to have three GUI modes: one like iWork, one like the old Office, and one like the Ribbon.
Precisely. The average user assumes (often correctly) that the way things are is the way things are. When it comes to what technical people think are basic features (pairing a wireless keyboard to an iPad, or changing default text size on iPhones, or any number of similar tasks on any system), the modern “ease of use” guidelines suggest hiding everything away as much as possible, severely limiting setting discoverability.
So I get used to the defaults. It makes it easier to throw out, reinstall, or switch environments if I need to. In any given day I use 5 or 6 different primary environments.
A while ago I had a discussion with a "commercial" guy who said that the only really integrated platforms are cloud & mobile, so they are obviously the future, because we are a society and we need to interoperate. I responded by plugging my laptop's HDMI into the room projector and showing a quick Emacs/EXWM(-X) demo: email? Hit a single key (F6 in my case) and my MUA (notmuch-emacs) pops up instantly. On top of its big search bar I have a few single-key saved searches, and at the bottom the big series of tags, a far superior "dashboard" than the bloated GMail UI. Of course composing a new message can happen with a single key at any time, whatever application I have focused. Then imagine someone demands a demo: a quick M-x skeletor (ivy-completed) pops up, a single key to choose beamer slides, quick typing and the slides are made, tested locally and uploaded. Another imaginary interruption and another task (skeletor again + org-mode), an imaginary patch sent via mail and voilà: magit integration. All data is really integrated and usable in a consistent environment, anything can be done in a snap, and NO other monster modern GUI or '90s-style one can do the same. That's the past (starting from the LispM/MIT AI lab glory days) and the future, just as we had a "golden age" of the ancient Greek polis, then a more modern "middle (dark) age", and again a modern age. That's integration and customization. No need to switch between systems (though that can be done easily with NixOS/GuixSD + homeManager/GNU stow + unison). My system is the main one and I can replicate/extend it on any decent hardware as needed. That's "switching systems" IMO :-)
I'm assuming they were talking about different systems that aren't their own, over which they don't have the sort of control needed to install their own software and set things up using their personal configuration files.
It's awesome that you've got, or at least dreamt up, a system that works for you, but being able to use that exact system on every single machine you touch isn't quite what was being described. That's an ideal, but only really feasible for personal machines.
Also, I'm going to get downvoted, but please put in a few line breaks.
> Also, I'm going to get downvoted, but please put in a few line breaks.
I still have to learn the idiosyncratic way HN handles text... I do put line breaks; I edit in Emacs and paste here, but HN messes it up...
It mostly deals with decisions during programming, but the first part of the article describes exactly this problem.
Always remember one thing: from diversity is born evolution; from standardization are born Ford-model workers.
> of course for work there are requisite
So, which is it? We do or we don't?
> but tech users should IMO do their best to avoid working in bad environment
More often than not, it's not up to the workers but company policies. Besides, needing to use a machine other than yours isn't a "bad environment", it's life.
Not only that, but the "other machines" could equally be non-networked terminals for heavy machinery. A lot of these run a stripped-down version of Windows, so the basic user interface is usually left at default settings whilst an always-open program takes up most of the display.
I agree with you on the next part:
> [...] to convince their company to let them use productive software
I'm right with you here, but again, company policies. Plus, your example suggests you're just thinking of individuals within a company as individuals.
We mustn't assume that all users here are in technical jobs, particularly development; often we're just moving between standardised Windows workstations, lowest common denominator setups so that (A) non-technical users could log in to any machine and still understand how to use it and (B) the IT department have fewer headaches to sort out.
After all, a company is not just made up of individuals; it's full of teams who have to work together to reduce each others' burdens. Sometimes that means using setups that aren't our favourites; our personal productivity mightn't be as great as if we used our own setups, but the company doesn't grind to a halt when someone's delicate configuration goes haywire and the IT team spends more time on it than anybody has any right to expect.
There's a delicate balance to maintain in most companies. IT departments have no trouble labelling even the very technically competent users as ID10Ts.
> however HN mess it up
Are you making sure to use two carriage returns, not just one? It's not particularly idiosyncratic, reddit is the same. I think it might be inherited from non-WYSIWYG forums or bulletin boards.
At any rate, it's becoming a bit of a standard to use two carriage returns due to this being the way that line breaks are entered in Markdown.
hard stuff nor applied in a too bureaucratic manner: you ask/negotiate
and see results. BTW I do my best to avoid too big companies because of
On line breaks: no, I type on a single line and Emacs automatically
breaks the line at column 79... Of course I can leave double empty
lines, but the result will be obscene for small-screen readers...
That's the F-F idea in emails. How does it look for you now?
IMO HN should limit its horizontal line length.
Your editor shouldn't insert the line breaks at 80 column intervals; separate the content from the presentation, and let HN format your text properly. After all, if you have a small screen, the text will be wrapped according to the browser width anyway.
I understand that you complained about my comment's long lines because I "format" in F-F style (i.e. no line break except between paragraphs); then I format with double line breaks to force HN to "cut" the longer lines.
I do not know how to format it any other way; inserting HTML + inline CSS with a maximum text width, or maybe even a media query, is not something I expect HN to accept, nor a thing I'd like to do as an HN user...
Never underestimate the importance of good defaults.
 - https://github.com/fuhsjr00/bug.n
In the end, I find that left-side orientation is rarely done well, imho. I did really like the Unity UI in Ubuntu, but I think I'm the only one on this. I used to position my launchpad in macOS on the left as well, but it was awkward; it's easier to just have it auto-hide.
That said, I'll agree that the Ribbon does work best in a full-screen interface at the top, though I still think one or two context-sensitive toolbars would work better.
I was thinking about this in the shower: why is mapping a drive letter to a path in the Ribbon? Shouldn't that be a button somewhere in or near the navigation tree in the left sidebar, grouping it with the other aspects of disk drives and paths?
Plenty of what's in even Explorer's relatively simple ribbon could be in a context-based location for greater semantic grouping. That could simplify the Ribbon, and turn it back into a simple toolbar, maybe with a secondary, highly context-sensitive toolbar at the bottom of the window.
That's not an unprecedented thought. Windows XP had it with the Quick Tasks sidebar. The problem there was that the use of sentences rather than simple command names made it difficult to separate the signal from the noise; plus, if you didn't change any of the defaults, you had that little dog making the whole thing seem rather unserious when it was actually a rather powerful paradigm, poorly implemented but with much untapped potential.
Ever seen a non-technical user move the mouse with the same dexterity as you or I? I haven't. The mouse always roams around for a month of Sundays before it eventually arrives in the right place.
This is not a stable interface; it is an ugly hack to make up for how wasteful the Ribbon was designed to be.
At best, it could be a sort of distraction-free writing mode, except that you can still see the rest of the interface, some parts just as eye-catching as parts of the Ribbon.
Not just non-technical. I hate having to hunt for the tiny target. A very highly technical friend of mine invented a term for that: “pixel spearing”. I am truly incensed at how much time I waste trying to spear the exact right pixel.
It's why hiding toolbars dynamically works well in macOS' fullscreen mode; flinging your mouse to the top of the screen always shows the menubar and toolbar without fail (unless the mouse is captured by the app, of course, like in a game).
At the very least, Office should then show a small context-sensitive toolbar (mini-ribbon?) or a sidebar; instead, the only recourses are either the inconsistent appearance of the floating palette that only appears when the stars are aligned with the user's mouse pointer or temporarily showing the Ribbon again.
Like I said to another reply to my comment, lipstick on a pig. Maybe another analogy is that it's a band-aid on a flesh wound.
The floating palette is not inconsistent. It appears when you select text. At first it's half-transparent, because you might not need it. If you want to use it, hover the mouse over it and the palette becomes opaque. If you just want to highlight something by selecting it, you can move the cursor away and the transparent palette hides.
I can reproduce this behaviour all the time. It might not be the best idea, but it's not inconsistent in its usage.
Though they were more common, the market had already moved to notebooks outselling everything else. Microsoft should have had foresight. In fact, you might say they did, with Windows Sidebar in Vista; whilst not fantastic, it was a good use of horizontal space.
Besides this, they already had interfaces in prior versions of Office that made more efficient use of vertical space __and__ made efficient use of horizontal space on widescreen displays; they were the sidebar palettes, still used in Visual Studio. They just needed further development; instead, they were completely removed.
> It's not inconsistent
It is if you're a user with special accessibility requirements, especially those with motor skill problems or vision difficulties; ephemeral interfaces are hard to target, and without a means to manually invoke it and a consistent location, might as well not be there for many a user.
A well-designed interface shouldn't need to treat accessibility as an edge case in the first place. Office 2003 and prior's interface, whilst not pretty, was already extremely usable in that sense. All that was needed was context-sensitivity; instead, the baby went out with the bath water.
But you didn't specify that when you called it "inconsistent" in the parent comment. I assumed no special accessibility needs, as did you, because you hadn't mentioned them before. So maybe it is inconsistent for those users; for the rest it's still consistent.
>All that was needed was context-sensitivity; instead, the baby went out with the bath water.
The ribbons in Office have context sensitivity. Select an image and the image ribbon is shown; the same goes for tables, and so on. Since text is the primary context, it is always shown by default (the "Start" ribbon).
That's your mistake, then. When talking about user experience design, it's always inclusive-by-default, accessibility a top priority, not an afterthought.
This comes back to my parent comment right at the top of this thread: using native widgets with full accessibility support gives you this for free. Of course, I'm not saying there shouldn't be innovation, but those outcomes should be on par with the default widgets, not even a tiny bit lesser.
> The ribbons in Office have context sensitivity
But they (A) still show too many features for whatever is selected, demonstrating that the context-sensitivity is limited; (B) don't always automatically change to the appropriate tab; (C) sometimes show two tabs, confusing users (especially when it comes to tables or graphs); (D) hide other tools, by virtue of switching tabs, which would still be useful (namely, everything on the main tab).
Also, context-sensitivity would mean that the Ribbon would change back to the main tab after any operation in the other tabs was done. Since it doesn't, it demonstrates that the user has to constantly switch between contexts manually, meaning that the Ribbon's context-sensitivity is pretty poor and, again, inconsistent.
Not really. I think you pulled this card to win the "consistent" argument. After all, I don't see evidence that this style does hinder accessibility.
>(B) don't always automatically change to the appropriate tab;
They do, Word 2010. Inserting Image -> Image ribbon, same goes for tables. If you want to force-show a ribbon you can always doubleclick.
>(C) sometimes show two tabs, confusing users (especially when it comes to tables or graphs)
What's confusing about this? It shows a header "table tools" (translated) so the purpose is clear. Sometimes stuff is more complex and needs more space.
>Also, context-sensitivity would mean that the Ribbon would change back to the main tab after any operation in the other tabs was done
It does. Again, Word 2010. Select an image, perform an operation, then write text again (because you might make several operations), thus exiting the image-manipulation mode, and the Start ribbon is there again.
Note: I'm not saying this is the best interface there is, merely that it's not as inconsistent as you depict it.
Actually, if you look at all the comments I've been making in reply to my initial comment, the parent of this thread, I've been talking about accessibility the entire time, especially for the visually impaired.
For the visually impaired and those with motor skills impairments, it's fairly easy to overshoot where the floating palette appears. If you shoot too far up, it disappears; it doesn't reappear when the mouse comes back to where the palette was, the text needs reselecting.
That's my main inconsistency.
> (B) don't always automatically change to the appropriate tab;
> They do
I stand corrected in one respect, but we still have inconsistent behaviour. Say you create a table, the Ribbon will change to Table Tools. Great.
Click away from the table and then click back. Still on the Home tab. Well, this makes sense since you're probably editing text — but again, that just reinforces my belief that those tools should be in a separate toolbar (à la Office 2003 and prior).
> What's confusing about this? It shows a header "table tools" so the purpose is clear
The technically minded may figure it out, but ordinary users have to rote learn what the tabs do. As an example, Table Tools shows two subtabs, Design and Layout.
Do you think you could get Sheila from accounting or Bob from packing to tell me the difference between the two tabs, or how the two Design tabs differ, without letting them click around the interface? I doubt it.
When there are two context-sensitive tabs shown, and they're both heavily related, you can almost bet money that an ordinary user, Sheila from accounting or Bob at reception, is going to cycle between the two tabs to find what they're looking for. This isn't intuitive.
> Note: I'm not saying this is the best interface there is, merely that it's not as inconsistent as you depict
That isn't high praise. A user interface shouldn't be inconsistent at all, especially the flagship product of a multi-billion dollar company, and especially the de facto standard in productivity software.
Not just the technically minded. Office is primarily used by non-technical people.
>Do you think you could get Sheila from accounting or Bob from packing to tell me the difference between the two tabs, or how the two Design tabs differ, without letting them click around the interface? I doubt it.
Telling? Probably not. But asking them to do X with the table? Yes, at this point (when working with the program) it's probably muscle memory to do the stuff you want.
Who, in my experience, have only a grasp of the absolute basics of what they're using (and pretty much only live in the home tab), and consider most of the other basic functions as too complicated.
> [...] it's probably muscle memory to do the stuff you want
A good user interface design doesn't _require_ muscle memory, though it can reward it by making repetitive actions quick and efficient. Intuitiveness should be the goal.
My theory is that the assumption there is some kind of meaningful "Average user" which a product is then built for ultimately destroys utility in software.
If you have 50 features that on average 1% of people use, you can easily reach the false conclusion nobody cares about these features, when on aggregate 90% might use at least one of those features. Thus in aiming for the average, you haven't designed for anyone at all.
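A toy illustration of that arithmetic, with numbers invented purely for the sketch: if each simulated user has at most one niche need, every individual feature looks unloved even though most users depend on one.

    #include <cstdio>
    #include <random>

    int main() {
        const int kUsers = 10000, kFeatures = 50;
        std::mt19937 rng(42);
        std::uniform_int_distribution<int> pickFeature(0, kFeatures - 1);
        std::bernoulli_distribution hasNicheNeed(0.9);  // assume 90% of users have one niche need

        int usersWithAny = 0;
        for (int u = 0; u < kUsers; ++u) {
            if (!hasNicheNeed(rng))
                continue;               // this user only ever touches the basics
            ++usersWithAny;
            (void)pickFeature(rng);     // ...plus exactly one niche feature, chosen at random
        }

        // Prints roughly 90% and roughly 1.8%: the aggregate matters, the average doesn't.
        std::printf("users relying on at least one niche feature: %.0f%%\n",
                    100.0 * usersWithAny / kUsers);
        std::printf("average per-feature usage: %.1f%%\n",
                    100.0 * usersWithAny / kUsers / kFeatures);
    }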
Combine this with the fact that metrics can't truly tell you why something is, only that things somehow appear to have a relation. Developers love to run wild with a narrative in which their particular reading of why the metrics shook out this way is unchallengeable fact, backed by DATA (when it's really exactly the opposite: you drew a conclusion based on relations you assumed to exist), and you have a recipe for some really horrible misdesigning.
I think modern application design is completely off the rails, driven by statistical phantoms that are assigned significance in an arbitrary and non-systematic way, and it's leading to software that is genuinely a degraded experience for the user, wearing the mask of something that is propped up as "objectively better".
Everything was accessible via the menu! You could tell what was a label and what was a button! There were tooltips! There were no hidden gestures--some magic combination of taps and swipes that has you banging on your phone like a monkey.
What's remarkable about GUIs today is that the software is harder to use despite having less functionality and fewer features than ever. Compare new GMail to Outlook 98; Windows 2000 to Windows 10; the built-in PDF viewers in Edge/Chrome to the Acrobat PDF plugin; etc.
The other trend that's pissing me off is that actions are increasingly irreversible. I loved computers because they seemed to have reset or undo buttons everywhere! Most actions / mistakes were fixable in one click. Even Windows Recovery, which failed to work correctly most of the time, still provided mental safety and I explored with ease.
Now, things seem to happen for no reason. Flick the mouse the wrong way and something will change, rendering a piece of software unusable with no way to fix it unless the application is uninstalled; some changes even survive reinstalls, and the only way to fix them is an OS change.
when I booted a win95 box last year, I thought to myself, with emacs and sbcl I'd use that as my daily driver
But you're being needlessly argumentative; by any real definition all versions of Windows have been "real operating systems".
Brushing all the checkboxes and menus it could under the rug in favor of those stupid "Wizards" gives me a headache even just reminiscing about it.
"Internet Connection Wizard"
Windows 7 was a-ok too, but my best memories are still with 2000/2003.
This is still the right way to design GUIs for applications to which it is applicable; many people (me included) still follow it, and I would warn against thinking of it as obsolete.
Nevertheless there was and it looked really crazy.
> There is no equivalent dialog in modern Windows
The last version of Windows I used was Windows 7, and I'm pretty sure it was there. I was coding WinForms apps, and WinForms uses native system widgets stylable via these settings. Now I use Qt to develop cross-platform apps (with KDE5 as the primary environment); it follows the principle in almost every aspect (it doesn't let you change the colors easily, however; you need to design a custom theme for that), and it also encourages attaching a keyboard shortcut to every action and displays these right next to the menu items.
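A minimal Qt Widgets sketch of that last point (the menu, action name and handler are just illustrative): give a QAction a QKeySequence and Qt renders the shortcut next to the menu item for you.

    #include <QApplication>
    #include <QMainWindow>
    #include <QMenuBar>
    #include <QMenu>
    #include <QAction>
    #include <QKeySequence>
    #include <QDebug>

    int main(int argc, char *argv[]) {
        QApplication app(argc, argv);
        QMainWindow window;

        // Add a menu and an action; the shortcut text is displayed
        // automatically to the right of the menu item ("Save   Ctrl+S").
        QMenu *fileMenu = window.menuBar()->addMenu("&File");
        QAction *saveAction = fileMenu->addAction("&Save");
        saveAction->setShortcut(QKeySequence::Save);  // platform-appropriate Ctrl+S / Cmd+S

        QObject::connect(saveAction, &QAction::triggered,
                         [] { qDebug() << "Save triggered"; });

        window.show();
        return app.exec();
    }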
You're talking about the high-contrast theme. It wasn't anything special, it was just a specific set of settings from this dialog. Compare that to now, when having a dark mode took so much work Microsoft made a special announcement and everything. Ridiculous.
> The last version of Windows I used was Windows 7 and I'm pretty sure it was there.
It's been a while since 7, but if this dialog was there it was because you could still disable compositing and use the old fashioned Win32. The closest thing in 10 lets you change your titlebar color and background.
I'm grateful for all those people, yourself included. But the problem is that most software is moving to the Web, and the browser is treated as blank canvas. Everyone reimplements their own UI to their aesthetic taste, with no regard for accessibility and interoperability. It's the other reason (besides security) that Flash was bad; just calling it "DOM" doesn't make it suck any less.
Today people are always optimizing (for the showroom), always redesigning. That's part of the problem, not of the solution.
What’s refreshing about Be is that it only goes like three deep. Button and not button, and... desktop?
It’s because the Be UI is so shallow that it is charming. If you allow 6 different types of every UI thing like Google and Apple do, your UI designer has no chance of getting everything correct. There are just too many combinations of widget states.
Only by limiting UI depth can you get that Nintendo-esque complete feeling, like everything is there for a reason. And Be does.
could you point to some additional resources on this topic?
I've got an old Celeron system I was going to throw away but because of this I just installed the 32bit version of Haiku on it and have been enjoying its simple style.
1. Constraining a guest user so they won't access anything they shouldn't - when a guest comes to you and asks to use your PC and you want to be nice and let them.
2. Constraining resident server programs so you won't get pwned.
3. Constraining nonfree apps (their installers especially) so they won't put/remove/modify anything outside user directory (Windows apps love to do this).
I find it particularly funny that Android is based on an OS designed for thousands of simultaneous users (although they did get some nice security / isolation out of it).
It's hard to overstate this.
The rampant security vulns of Windows were due largely to its heritage as a single user, sparingly networked computer.
You can absolutely build a better single user system.
Placing the security boundary around users is how server operating systems protect shared resources from malicious users. It is completely useless at protecting a user's resources from malicious applications run by that user. The mobile OSes got this right in that they put the permissions on the applications instead.
Strawman; that's not what I said.
In the late 90s/early 00s you could get pwned on Windows by opening an email.
I have been installing Windows 10 inside some VMs for running Windows applications lately, and that's a pretty ridiculous experience when the host OS is the one really doing all the heavy lifting for multi-user and authentication, and one really just needs a thin Windows runtime for applications inside the VM, not a full-blown "operating system" (an awkward description for Windows 10, considering how much shovelware it comes with).
Things that are separate users by this definition:
- Every VM running under my account.
- Every docker container.
- Every SELinux process/file type combination.
- Every cgroup.
- Each CPU ring.
- Each call to pledge (see the sketch below).
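To make that last item concrete, a minimal sketch of OpenBSD's pledge(2); the promise string is illustrative. The point is that the process itself, not some user account, gives up privileges.

    // After this call the process can only do stdio and read-only filesystem
    // access, regardless of which user it runs as.
    #include <cstdio>
    #include <unistd.h>
    #include <err.h>

    int main() {
        if (pledge("stdio rpath", nullptr) == -1)
            err(1, "pledge");

        std::puts("still allowed: stdio and read-only file access");

        // Anything outside the promises (opening a socket, writing a file,
        // exec'ing another program) would now kill the process with SIGABRT.
        return 0;
    }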
call them users or roles, doesn't matter. it is crucial that privilege separation is baked into the core and customizable by the owner of the device. at the moment i only see a multiuser system capable of doing that.
Does BeOS/Haiku still have a technological edge somewhere (compared to say, linux or windows)?
Where it does have an edge, though, is how it's being designed first and foremost as a desktop OS, with responsiveness as the top priority above all else. The latter especially is depressingly uncommon in modern operating systems.
I'm running three apps right now: VSCode, Slack, and a web browser. Given that VSCode and Slack are basically other browsers, how much does OS responsiveness matter? Has the responsiveness battle just moved to the browser?
How much are people really running other than a web browser on a desktop/laptop now-a-days? When I evaluate an OS today, I'm going to care about how good its trackpad drivers are, whether I'm likely to have wifi issues, whether it can run the *nix tools I need to do my work, maybe whether I like the way it switches between windows/apps, and whether there are modern browsers to run on it. Responsiveness was a big deal back when I was running a Pentium II and BeOS had amazing demos (and I presume real-world usage) with uninterrupted video while multi-tasking and just buttery-smooth context switching in an era where you'd have your system lock-up. Heck, it was many years before OS X got past the beach ball of death happening constantly.
However, today, it's my browser that locks up; it's my browser that I want to be able to smoothly scroll through a web page.
I really want Haiku to succeed, but it feels like we might have moved beyond the OS for so much that the OS can't have the same impact it once had. On the one hand, the web has made it so that alternative OSs don't need a boat load of software to be usable. On the other hand, because the browser is now the OS, the usefulness of an alternative OS is lower.
Currently open on my machine:
- Firefox (streaming audio)
- A program converting a 500 page PDF to 500 300dpi PNGs
- A bash script extracting PDFs from a municipal web site.
- A program downloading my video viewing for the week and converting it to a format better suited for my iPad
- iTunes serving audio and video to TVs in two different rooms
- An FTP program doing weekly backups
- A mail client
- A VPN client
- A spreadsheet
- An IDE (Coda2)
And if I were to look at my wife's laptop, I can guarantee that she's running more than just a web browser, and she's not technically inclined.
People have been pushing browser-only machines since the days of Novell's "thin client" strategy in the 90's. Netbooks were a fad. Consumption-only gained a small amount of traction recently with Chromebooks, but even the non-technical people in my office are dumping their Chromebooks for real laptops these days because they realize they have needs beyond Google's ecosystem.
Maybe we'll get there some day. But for now, if you want a laptop computer, you have to have an actual laptop computer.
This is one of my pet peeves. Netbooks were great as a small disposable internet console. They weren't just for consumption - the keyboard meant you could use them to compose text. The problem was that people bought them when they wanted a super cheap laptop, and so netbook manufacturers kept making the screens bigger and the processors faster until they stopped being netbooks and turned into crappy cheap laptops.
The new wave of tablets with keyboard covers are basically what netbooks were, but for a good decade there we lost the idea of a small cheap internet console which could be used to compose text.
Which like Android afterwards, kind of proved the point of what happens when OEMs get to ship their own distributions.
I've got a Samsung NC10 running Haiku beautifully. It's a 32-bit machine with just 2Gb of RAM and it's never run so smartly. I'm very impressed by Haiku's stability and consistency. The extended attributes on files are an eye-opener. Your mail client is a settings window, a compose window and a file manager. Just add the attributes you need for email to the list view.
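For the curious, a rough sketch of what that looks like at the API level (Haiku's Storage Kit); the file path and MAIL:* attribute names below are illustrative of the convention, not taken from the comment above. Attributes are written onto plain files and can then be matched with a live query, which is essentially what a mail folder view is.

    #include <File.h>
    #include <Query.h>
    #include <Volume.h>
    #include <VolumeRoster.h>
    #include <Entry.h>
    #include <Path.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        // Tag a file with a mail-style attribute.
        BFile file("/boot/home/mail/in/example", B_READ_WRITE | B_CREATE_FILE);
        const char* subject = "Hello";
        file.WriteAttr("MAIL:subject", B_STRING_TYPE, 0, subject,
                       std::strlen(subject) + 1);

        // Query: every file on the boot volume whose subject matches.
        BVolume bootVolume;
        BVolumeRoster().GetBootVolume(&bootVolume);

        BQuery query;
        query.SetVolume(&bootVolume);
        query.SetPredicate("MAIL:subject == \"Hello\"");
        query.Fetch();

        BEntry entry;
        while (query.GetNextEntry(&entry) == B_OK) {
            BPath path;
            entry.GetPath(&path);
            std::printf("%s\n", path.Path());
        }
        return 0;
    }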
Otherwise they wouldn't need to pivot it into a general laptop OS.
A lot of people run web browsers.
A lot of people stream audio.
That's a weird one, sure.
Also weird, but "average people" need to do some kind of automation once in a blue moon, so let's not discount that.
The conversion is unusual, but people download videos to their iPad.
A little niche, but "media server" is still a thing.
Backups are a thing. A thing more people should be doing, but definitely a thing.
Not niche at all.
Not niche if your office requires a VPN. A lot do.
Definitely not niche.
Sorta niche-y, but think of it as "text editor" and it's less so: it could be a word processor, or Scrivener.
Somehow, responsiveness with Windows 10 on my i7-8650U laptop is total garbage, so yes, I think it still does.
Then I tried Windows 7, and I could not believe the difference. We're talking 5-10x faster. Everything just responds instantly. Too bad they're ending support in a year.
Because for better or worse that's not what technology has been designing for.
Many programs nowadays are designed for the tradeoffs of SSDs rather than spinning media.
It always matters; I don't want to waste my life away in front of someone's irresponsive, laggy, slow software.
And I am not kidding in the least when I say that. Regarding more modern operating systems, BeOS was so nice to use. Here was an OS that had UI/single-user needs baked into it from the beginning, and it was a breath of fresh air. I'd liken it to the experience I had switching from a 60Hz monitor to a 164Hz one: I didn't really think about it much at the time, other than that I wanted a gsync-capable display, and one was on sale. But the smoothness of the scrolling, mouse use, etc. was so much more pleasant. And yet I never would have complained about the 60Hz screen until shown the alternative.
Venturing in to personal opinion, I'd also say BeOS was developed at the inflection point between engineering running the show, UI be damned, and where we are now, with designers running the show, actual capability be damned. It was a good mix.
As a C=128D owner, I hear you loud and clear. It's a slap in the face of contemporary programmers that a 1 MHz MOS 6510-based system was more responsive than the multicore, GHz, hardware-accelerated systems we have now.
I find it very noticeable, not least as in terms of those basics we seem to be stepping backwards for responsiveness.
Loading software should be quicker, but no one cares about space and size any more, especially as more and more becomes a front for a browser. Even native software is faintly disappointing. You load a game in seconds from the lightning fast SSD then it takes 2 min to load the save as that's bigger than some OS's. :)
You can't build a responsive browser on top of an unresponsive OS.
I’m definitely not a normal user, but I’m running emacs, st, tmux, stumpwm, dunst, gocode, cupsd, sbcl, redshift — oh yes, and Firefox too. Honestly, I’d love to be able to get rid of my browser, because then I could run emacs in a terminal and get rid of X entirely.
How many threads are they running?
That sure is something that most OS'es today can't do.
Hell, even Windows 10 boots from an SSD these days in less time than my BIOS takes to start the Windows boot process...
Now, given that servers are fundamentally running the same OS, improvements geared towards that market tend to bleed over to desktop as well. Desktop just has a bit more to bring up, with a graphical environment and everything.
I say all of this because I feel that my community has lost control of the debugging pipeline and I applaud Be and Haiku for making that a priority.
I find the benefits of committing to an always-running, always-traceable-in-production execution thread is an EXTRAORDINARY productivity gain (vs console.logs and connecting to IDEs manually via various witchcraft)
I felt this last week as I tried Webpack 4 on a new project. The source maps weren't correct, and instead of debugging my app I had to debug multiple Webpack plugins and work out which one was breaking the source maps.
Never did find out, but I fixed my app by debugging it mentally. It sure made me wish for a developer-first language.
Windows has that - if you have e.g. Visual Studio installed (or any other app that registers itself as a system-wide debugger), then application crash dialog will immediately offer to attach the debugger at the point of the crash.
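A tiny sketch of what triggers that flow; nothing here is specific to Visual Studio, any registered just-in-time debugger gets offered.

    // Windows-only: raise a breakpoint exception with no debugger attached.
    // The unhandled-exception machinery then offers to attach whatever
    // JIT debugger is registered on the system.
    #include <windows.h>

    int main() {
        DebugBreak();  // same effect as a crash for the purposes of JIT debugging
        return 0;
    }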
It was also just cool to have a GUI mounted on a Unix like os - which wasn’t Linux.
Nowadays though I’d like to see some Plan9-isms adopted instead of cloud stuff. I think people would benefit from being able to make their own user experience that was kept more or less unchanged and the same files and preferences available on all of their devices. I’d also like to be able to “mount” processor/memory hardware from networked machines and delegate processing to them and output results to the local GUI. Like tunneling X but without actually needing to pipe the whole user stack - just the compute input/output.
BeOS offered the same FIFO scheduling mechanic that allows Linux to support soft real-time.
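For reference, a minimal sketch of that mechanic on the Linux side; the priority value is arbitrary and the call typically needs CAP_SYS_NICE or suitable rtprio limits.

    #include <sched.h>
    #include <cstdio>

    int main() {
        sched_param param{};
        param.sched_priority = 10;                    // 1..99 for SCHED_FIFO
        if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
            std::perror("sched_setscheduler");        // most likely lacking privilege
            return 1;
        }
        // From here on, this process preempts all normal (SCHED_OTHER) tasks
        // and keeps the CPU until it blocks or yields.
        std::puts("running with SCHED_FIFO priority 10");
        return 0;
    }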
RTLinux was a hard real-time OS which runs Linux as a lower priority service you can pre-empt with your actual hard real-time stuff. Similar systems still exist today. They work fine, if, in fact, what you needed actually was "hard real-time".
If your idea of the potential consequences when your system misses a deadline is more like "Some users are annoyed and demand refunds" rather than "I might go to jail for manslaughter" or "The $40M space probe turns into space garbage" then "hard real-time" probably wasn't what you needed after all.
Is Haiku only soft real time? Yes, and I never said anything different.
Does it do a markedly better job at making real-time commitments than NT or even RT-Linux? Absolutely, mainly due to how much simpler a kernel it is.
Also, I'm going to throw out there that if you're going to get uppity about hard real-time guarantees out of nowhere and throw around concepts like manslaughter charges, I hope that you're only choosing seL4, as last time I checked it was the only kernel that backed up its guarantees in a way that can be checked.
They chose to show several postage stamp videos because they only had software decoding. At the time the way most competitors handled video on consumer hardware was to decode to YUV and then overlay and scale that with dedicated hardware, and on most systems you could only play one video this way. BeOS would be much worse at that with a software decoder, by showing many videos they could "make a virtue of a necessity" as they say.
The best thing in Haiku is probably their vector icon format, which offers a good mix of basic vector features for small pictures in a compact binary format. It's mentioned briefly in that article.
BeOS ran circles around everything else - particularly when multitasking - and I did use each platform heavily.
Off the top of my head: security guards do.
And if you limit a general purpose (operating) system by what most users need you'll soon end up with a system where most users miss something.
New versions of programs may modify the user config files in such a way that old versions may not be able to read those config files anymore, and may even corrupt them. How is this handled, if at all?
On Haiku, config files are generally either plaintext (kernel/drivers/a few core apps, so they are easily editable) or archived-BMessage format. The former are almost always read and not written, and the latter are key/value based so at worst the application would just not know about the new keys, and possibly remove them, which isn't the end of the world.
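A rough sketch of that second pattern; the file path, keys and values here are made up for illustration. Settings are a flattened key/value BMessage, so an older binary simply ignores keys it doesn't know about instead of choking on the file.

    #include <File.h>
    #include <Message.h>
    #include <cstdio>

    int main() {
        BMessage settings('pref');
        settings.AddString("font_family", "Noto Sans");
        settings.AddInt32("font_size", 12);
        settings.AddBool("show_toolbar", true);  // a key an older version may not know

        // Write the flattened message to a settings file.
        BFile out("/boot/home/config/settings/MyApp_settings",
                  B_WRITE_ONLY | B_CREATE_FILE | B_ERASE_FILE);
        settings.Flatten(&out);

        // Reading back: unknown keys are just absent lookups, not errors.
        BMessage loaded;
        BFile in("/boot/home/config/settings/MyApp_settings", B_READ_ONLY);
        loaded.Unflatten(&in);

        int32 size;
        if (loaded.FindInt32("font_size", &size) == B_OK)
            std::printf("font size: %d\n", (int)size);
        return 0;
    }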
If you allowed it to upgrade your Evolution data store there was no going back.
In a BeOS case it would be as if it migrated to different configuration keys in the new version, then the old version would detect corruption, throw a fit and erase ALL the keys.
If anyone from Haiku is reading, would be really cool if you also offered network bootable image by default that can successfully boot from 'standard' PXE boot server setup.
I never understood the hype and then they went bust.
On package management, IMO Nix and the derived Guix are a real revolution...
If LibreOffice were to update their UI, I think it would make a killer combination.
In general, vendors only have financial incentive to get windows drivers working well.
It's the self defeating prophecy of the Linux desktop.
Apple spins their own hardware and I'm sure gets proper documents from the vendors they select.
Linux just assumes that all drivers must be perfect with no bugs, and must be open source and distributed with Linux itself. That's just not realistic.
Linux just keeps on working with old devices across distributions and updates.
Linux takes 6m-24m till vendor unsupported drivers stabilize - but at that point, it works way better than windows (and basically forever) in my experience.
Old drivers get shaved out of the kernel tree and good luck getting them to work again in more modern kernel versions.
Sure, maybe I can take some weekends off to port the missing features to new kernels....
I never really got to experience it the way it was intended, because upon first booting it I became horrified that everything had changed, and installed a bunch of tweaks to make it behave like Windows 7.
But that’s all only up to a point. There are concessions one starts making as you woo the business/casual user more and more. Since there is no direct financial compensation to offset the increased sacrifice, an equilibrium arises. An equilibrium of good enough for general consumption, enough of a concession for the developer. Unless something changes the balancing point of that equilibrium, then it’s likely to stay about there.
It’s like a neighborhood handyman/craftsman who first fixed up his own house and then, because it gives him purpose and makes him feel good and contributes to the general betterment of the neighborhood, begins doing the odd project throughout the community. The work is quality AND it’s free(ish). As the community embraces the handyman’s efforts more and more, he begins having to do things more and more that don’t fit his vision or lifestyle. He has two choices: start charging or stick to the projects he likes to do. If he starts charging, though, now he becomes dependent on trading concessions for income. The relationship fundamentally alters.
And having a certain Amiga feeling to its culture and multimedia capabilities.
Solaris 10 (and now, by heritage, illumos) has had this with beadm(1M) for ten years now, if not longer.
WebPositive works "acceptably", Otter/QupZilla are significantly better. All can at least play YouTube, etc.
We don't have standby support at all. :/ Most of the device hooks are there, and we use ACPICA for ACPI support, so it's just a matter of wiring all the right things up and adding some buttons... but we are pretty starved for time as-is, so nobody is working on it at present.
Previously the default font was DejaVu Sans, which really looked a lot worse.
Haiku does have it (it was hard-disabled for a long time due to patent concerns), but there's a bug relating to how the main text control draws strings that causes characters to overlap one another with it enabled, and so things generally look worse when it is. Someone should take the time to fix that...