>I don’t know how to make a screenshot, because I normally use my computer in text-mode. I have X and GNOME installed, but I use them only occasionally.
I find it weird that the man is such a visionary in concept, but such a Luddite in his actual day-to-day computing. Elsewhere, I've seen him explain that he still "views" most webpages by cURL'ing them and printing them to paper.
He walks the walk and talks the talk. People may think he’s backwards or annoying, but he’s consistently maintained his philosophy that software should be free. The fact that he has to use absurd measures to consume what everyone else does is a statement about the limitations of free software, not a statement about his workflow preferences.
He is just a professional contrarian. We are lucky he went against big software corporations, and his contributions to open-source licenses are groundbreaking.
But many others managed to provide screenshots while using free software.
He is a great man in many other ways, but he is also backwards and annoying.
>If you can find a host for me that has a friendly parrot, I will be very very glad. If you can find someone who has a friendly parrot I can visit with, that will be nice too.
>DON'T buy a parrot figuring that it will be a fun surprise for me. To acquire a parrot is a major decision: it is likely to outlive you. If you don't know how to treat the parrot, it could be emotionally scarred and spend many decades feeling frightened and unhappy. If you buy a captured wild parrot, you will promote a cruel and devastating practice, and the parrot will be emotionally scarred before you get it. Meeting that sad animal is not an agreeable surprise.
>Richard Stallman's rider has been a cause of amusement, bemusement and confusion for many conference and lecture organisers who have hosted him. It has even drawn the attention of the press[0].
>But what is the story behind this complex beast? When were certain clauses added, and why? We hope that with enough data regarding when modifications were made, we may be able to shed some light on the whys.
>A parrot once had sex with me. I did not recognize the act as sex until it was explained to me afterward, but being stroked on the hand by his soft belly feathers was so pleasurable that I yearn for another chance. I have a photo of that act; should I go to prison for it?
This answer would require explaining the whole free-software ideology. If you want to know more, just search for Richard Stallman on YouTube and watch one of his talks.
It's good to live strictly by a philosophy, and I admire anyone who does this. I can't say I agree with Stallman's philosophy, because it places too much importance on communist principles, which are inherently flawed and unjust.
There is nothing evil about closed-source software or proprietary licenses. Yes, it can be abused by greedy people, as can pretty much anything including water. But I wouldn't go so far as to say that all software must be open source as a step in humanity reaching its ultimate goal of perfection.
My reasons are whatever the Catholic Church's reasons are.
The Catechism of the Catholic Church has a little more to say about it[1]:
> (2425) The Church has rejected the totalitarian and atheistic ideologies associated in modern times with "communism" or "socialism." She has likewise refused to accept, in the practice of "capitalism," individualism and the absolute primacy of the law of the marketplace over human labor. Regulating the economy solely by centralized planning perverts the basis of social bonds; regulating it solely by the law of the marketplace fails social justice, for "there are many human needs which cannot be satisfied by the market." Reasonable regulation of the marketplace and economic initiatives, in keeping with a just hierarchy of values and a view to the common good, is to be commended.
My pleasure, I always enjoy sharing what I firmly believe to be the truth. I guess that's a pretty common attitude here on HN, but I seem to be the only one who believes that truth is found in the Catholic Church.
I'm sure it can be kind of lonely around here for Catholics. For what it's worth, while I'm no longer conventionally religious, and while even in my religious days the Catholics were considered to barely qualify for heaven (Evangelicals being Evangelicals), I've come to have a somewhat more nuanced perspective on 'you guys'.
And I definitely think having views like yours over here is a benefit to the 'community'.
Yeah, about the only bit there that gets me is the printing. But I'd be a happy camper if more webpages could be effectively used with `curl <url> | less`.
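For instance, a minimal sketch of the kind of pipeline I mean (html2text is just one converter among several; lynx -dump works similarly):

    curl -s https://example.com | html2text | less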
I also realize that it's difficult to make the kinds of social media inspired experiences™ that most consumers expect nowadays without some curl-breaking beast like React or Vue, and that most websites nowadays treat themselves as an entertainment medium more often than as a simple conduit for the transfer of information. I am beginning to think that that, too, speaks to what RMS is on about.
I was recently setting up my machine and figured I'd try 'mutt', the terminal mail reader. It's a pain to install and isn't great for HTML emails, but it sure is blazing fast and allows me to create custom macros that are really useful. Maybe if I stick with it, it'll be faster than GMail. But then again, after I set up some filters, my volume of email to look at in GMail was pretty low, so the performance advantages of 'mutt' are not as important now. It feels super cool though. We shouldn't underestimate the aesthetic value of cool terminals.
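For example, the kind of .muttrc macro I mean (the folder name is hypothetical; this mirrors the examples in the mutt manual):

    # One keypress to file the current message into =Archive
    macro index,pager A "<save-message>=Archive<enter>" "archive message"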
He is free to compute as he desires. He has worked a lifetime for that freedom. As for being a Luddite: he has enabled generations to forge wild new techno paths, so he is literally the opposite of a Luddite.
He is not paranoid; he's just setting an example of how life 'ought to' be without unnecessary corporate intervention. And I'd argue that he is doing so at great personal cost, so I am thankful to him for it, though I do wish that he'd be a bit more creative in avoiding censorship/tracking in his day-to-day life.
Quite the opposite; he likely does this because he is very mindful of what "viewing a webpage" actually entails and how much he is willing to identify himself. See "How I use the Internet" https://stallman.org/stallman-computing.html
Ergonomics research has shown a long time ago that dark text on light background is easier to read on computer screens for long periods of time. These screenshots demonstrate that developers in 2002 had got the message.
Young developers nowadays seem to mostly use dark backgrounds, which to me feels like a weird '70s throwback. I suspect it's primarily an identity signal: it's harder to feel like you're hacking The Matrix if your IDE looks like Word from a distance.
This is the HN equivalent of avocado-toast bashing. Let’s not do that.
As someone who used dark themes with CRTs whenever possible in the ‘90s, my theory is simply that those studies you mention were academic horseshit - CRTs with bright backgrounds were eye-killers. Well, they were eye-killers in general, but more when set with bright backgrounds.
Unfortunately, color bleed in small-size text made it a bit more difficult to read text on dark backgrounds, but that could be somewhat compensated for with larger fonts if you had a big enough display. So basically you could choose two different ways to kill yourself, but one of them gave you a chance if you were rich enough (and lucky enough - most programs on Windows never allowed you to change their white background). Man, CRT was so shit.
LCDs are much better in general, and occasionally worse (a lot of games make my eyes cry, with their aggressively shiny palettes). I still prefer dark out of habit in most cases, but it really depends on the monitor (and its calibration) and the surroundings. Saying one mode or the other is superior in all circumstances is silly, imho. And I certainly wouldn’t bash millennials, or question their geek-manliness, for choosing this or that mode.
I’ve always liked to work at night, in the early morning, or on northern European winter afternoons, and since 2002 it’s been obvious to me that a dark but not black background with medium-contrast text is optimal for day and night, especially the Zenburn theme. Please don’t come to my house at night and switch to a white theme because of “science”...
Of course everyone can use whatever they want. Arguments of taste are irresistible, that's all. (Did I mention I like proportional fonts?)
The bashing is mostly the other way around though. I regularly get shit from people because I prefer white backgrounds. It's also some kind of meme — just the other day I saw a popular tweet that read "People who use light background IDEs are serial killers." As an edge case GenX-millennial who buys expensive avocados and lets them rot, it's pretty much my glass house for rock-throwing practice anyway...
I don’t think you really want me to break down a very clear analogy into smaller chunks for your comprehension, considering you follow with a gratuitous and unsubstantiated ad hominem and incendiary language. Have a good day, sir.
> Young developers nowadays seem to mostly use dark backgrounds, which to me feels like a weird '70s throwback
Everything "young developers" nowadays do seem to me a weird '70s throwback - from the proliferation of the command line (as opposed to actually trying to create advanced GUI tools for people who know what they are doing, like a lot of developers did in the late 90s/early 2000s whereas today you mainly have dumbed down GNOME 3 lookalikes), to using tiling and lightweight window managers "for performance" on "low end computers" when those low end computers have massive supercomputer abilities compared to the computers that people used -again- in the 90s with UIs that did WAY more stuff despite their limited performance, to having crude dumb TUIs that look and behave worse than what you'd get in DOS in the 80s, etc.
You seem to be confused: developers work with text. What more can computing power bring to text? The only thing all that CPU and GPU power brought to the desktop is useless, power-wasting eye candy like Beryl or whatever it's called now.
Plus, in this environmentally conscious time, isn't it nice that people want to do more with less? Why waste GPU cycles adding shadows and lens flares to text? I don't get it.
I like using tiling WMs (dwm and Awesome) because they organize windows for me and nothing is hidden in a bar. The concept of workspaces is much nicer than shuffling windows around a screen. I can quickly jump between workspaces and immediately see my windows and their contents. What more do I need? GPU rendering of windows? Why?
I, along with many others, enjoy doing more with less. My old i3 ThinkPad runs just fine. It plays YouTube videos, streams music, and edits text using OpenBSD. What more do I need? Why pointlessly throw more at a problem that doesn't need it?
This is exactly my reasoning — I love tools that do amazing things with minimal resources. I have a ten-year-old Thinkpad X300 running Debian with JWM. I mostly work in Emacs, terminal emulators, and a browser, so it performs very well indeed. Why would I want to spend a couple thousand dollars to have 3D titlebars on my terminal windows and run an OS that takes three times as long to start, when a $70 system does exactly what I need? It's not for everyone, sure — if I did a lot of video editing, or played games, it would be different — but it's more than sufficient for me, even with minor photo editing and CAD work.
If only I could find a car that adheres to the same principles!
No, I am not confused; I'm perfectly aware of what I am seeing and writing, thank you.
> developers work with text. What more can computing power bring to text?
This is highly myopic: there is more to development tools than just source code, and even the visualization of source code and programs benefits from being able to work with graphics. There is more to computing and programming than a '70s terminal.
And people use computers for more than programming.
> I like using tiling WM's (dwm and Awesome) because they organize windows for me and nothing is hidden in a bar. The concept of workspaces is much nicer than shuffling windows around a screen. I can quickly jump between workspaces and immediatly see my windows and contents.
Sure, although workspaces are available in overlapping window setups too. The rest you mention are just personal preferences. My comment wasn't about liking tiling window managers over overlapping windows; it was about using those (and lightweight window managers - I included overlapping window managers there too) because of their resource use.
> What more do I need? GPU rendering of windows? Why?
I don't know, but I have a feeling you'll just call whatever I suggest irrelevant.
But honestly...
> I along with many others enjoy doing more with less. My old i3 Thinkpad runs just fine. Plays youtube videos, streams music and edits text using OpenBSD. What more do I need? Why pointlessly throw more at a problem that doesn't need it?
...I think you totally missed the point of what I wrote. My point wasn't to throw more at a problem, and especially not to be wasteful with resources. Using and - especially - writing efficient software doesn't imply '70s terminal UIs.
A Pentium at 133MHz with 32MB of RAM, a machine with a fraction of the power of even the first Raspberry Pi, can run an interface as rich as Windows 98 at comfortable performance. Even the lowest of the low-performance laptops you can get your hands on today will have a ton more power, yet people treat it like it isn't anything more than a dumb terminal.
I don't really like the sound of my computer fan - so while it's nice to have a relatively good cpu, I still don't want it to be running flat-out all the time. Also, I'm not really sure what the point is of GUIs, other than looking flashy. I do a lot of 3D modeling, and I really don't like clicking around the menus whenever I have to. I figure that's just taste.
If you think that GUIs == flashy, then I think you have a very narrow view of what GUIs are. Also, a GUI does not have to be processing-intensive, and unless you have configured Linux to use a text-mode console and you avoid X or Wayland, you are already using a GUI. It is just that all your GUI does is draw text (which, btw, is among the most processing-intensive things a GUI can do).
GPU rendering of windows is likely more power-efficient than doing the same rendering work on the CPU.
Of course, once GPU rendering is possible, developers tend to fool themselves into thinking lots of animations and effects are a good idea, which then sucks up the efficiency gains again...
>You seem to be confused, developers work with text. What more can computing power bring to text? The only thing all that CPU and GPU power brought to the desktop is useless power wasting eye candy like beryl or whatever it's called now.
Yes, we work in text. Yes, text is quick, easy, and powerful. That does not mean it is the right tool for every job.
Not everyone works somewhere where taking down all your machines in some region for 30 minutes - because you typo'd -a when you meant -s when invoking some automation script - is no big deal. Some of us work in places where having normal operations carry that kind of risk isn't OK. GUIs, when designed even half-assedly, make it significantly harder for people to enter dumb, dangerous, or erroneous inputs while working at the same speed.
There's nothing preventing you from using a GUI to generate commands that will be passed as command line args and showing that to the user. In the few cases where you need to do something unsupported just invoke the command from the CLI.
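A rough sketch of that pattern, assuming zenity is available (the dialog labels and the command itself are made up for illustration):

    #!/bin/sh
    # Ask for the inputs in a GUI, build the CLI command, show it,
    # and only run it after explicit confirmation.
    host=$(zenity --entry --title="Restart service" --text="Host:") || exit 1
    svc=$(zenity --entry --title="Restart service" --text="Service:") || exit 1
    cmd="ssh $host sudo systemctl restart $svc"
    zenity --question --text="About to run:\n$cmd" && $cmd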
Tossing up a quick web UI to streamline routine tasks while making users go out of their way to do non-routine or dangerous tasks is worthwhile at all but the smallest scale. Yes, it's quicker and easier to write a CLI tool but restricting the amount of rope just laying around to hang yourself with is part of operational maturity. With a GUI people at least have to go out of their way to find the rope.
Edit: Am I being down-voted because my opinion is unreasonable or just because it makes some people uncomfortable? At least tell me why I'm so wrong.
You're likely being downvoted because your comment is implying that risk of errors is a feature of GUI vs. CLI rather than about sound interface design.
You're conflating two entirely separate issues: Making it harder to trigger dangerous functionality vs. how the functionality is presented to a user. Nothing prevents hiding dangerous options behind extra steps in a CLI either.
I work mostly in text mode, but I also use various simple GUI tools, and I halfway sympathise with what you want. What I've found is that there are in fact lots of nice little tools written to support minimal WMs or systems without desktop environments that are very useful in that respect. E.g. tools like rofi, dmenu, and the like are great for wrapping tiny GUIs around functionality that is hard to remember how to use correctly.
I work mostly in my own text editor, and instead of writing a UI from scratch for it, I depend on the IPC support of the bspwm window manager to implement multiple windows/panes (the editor buffers are maintained in a server process, so I can have multiple views of the same buffer), and use rofi to bring up UIs. E.g. I have little scripts that bring up rofi with suitable input to select a server to ssh to, open a file in my editor, run a yarn/npm/make target, select a theme for my editor, switch buffers in my editor, etc. Most of them are a handful of lines at most.
I'd love to see more tools like that, which make building simple tool-specific GUIs for scripts easy.
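As a concrete (hypothetical) example of the pattern, an ssh host picker is just a couple of lines:

    # Present Host entries from ~/.ssh/config in rofi, then ssh to the choice
    host=$(awk '/^Host / && $2 != "*" {print $2}' ~/.ssh/config | rofi -dmenu -p ssh)
    [ -n "$host" ] && exec xterm -e ssh "$host"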
I'm using Vim and i3 and not for performance, but simply because they are efficient and more flexible. Together with Vim browser plugins I can do almost everything on my PC without having to use the mouse, and with the right setup this can drastically decrease the input latency.
Modern PCs being insanely fast doesn't mean we have to use software that requires insanely fast computers.
Screen, tmux, vim, emacs, ncurses. None of these are 70s products. Nor do they optimize for 70s performance limitations.
They are modern terminal UIs. We pick terminal UIs because the terminal tends to expose a lot more features than a gui. This tendency is probably because it is easier to add features to a tui than to a gui.
Then there is a kind of self reinforcing effect where people like working within similar paradigms, and people get used to TUIs in general. This makes those people prefer making TUIs over GUIs even more. Incidentally, it is mostly developers who tend to need the more powerful features currently only offered in a TUI.
They are not 70s products but they look like 70s products. See my original comment about how those TUIs look and feel worse than programs you'd see in DOS despite having a lot more power and features available to them - even if you confine yourself to the basic colors (but many modern terminals also support -simple, yet usable- graphics, more colors and more characters).
> Then there is a kind of self reinforcing effect where people like working within similar paradigms, and people get used to TUIs in general.
Yes, it is basically a matter of fashion and what baffles me is that fashion.
I mean, even if you consider that terminal-based applications are great, why limit yourself to what a beefed-up VT100 could do? Why, when typing "ls", not also have a tiny folder icon near the directories (you are already going to the trouble of adding colors anyway, and a little icon would make it more obvious which entries are folders and which are files when you are looking at a huge list)? Why not have a command akin to "cat" that can decode and display image files right in the terminal? Why not have an over-time graph in perf's realtime TUI mode? Emacs already supports different font sizes when running as a GUI program - why not have this in terminal mode too? Clang's static analyzer can show you data-flow-based errors - why not have graphical arrows superimposed over your syntax-highlighted code when you run it from the terminal (Xcode already does that, but it is a GUI)? Etc.
These are just some examples off the top of my head; I'm sure I can come up with even more.
I think by using only text-based UIs people get used to text-based UIs and all they can imagine is within the limitations of text-based UIs.
Lots of terminals do support graphics in various ways, e.g. formats like Sixel (bitmap graphics), ReGIS (vector graphics), and others. Several have experimented with folder icons. Many support e.g. URL matching, and some support outright hyperlinks.
My setup does use a Sixel-enabled Xterm (with ReGIS too, but the ReGIS support is poor).
Here's the thing:
Years of experience say the gain is very much marginal. I really want to do more related to this, and it can sometimes be useful. But unless you're sending images over an ssh connection, Sixel output to xterm is no easier than opening another window most of the time. My ls replacement locally already shows symbols and uses colours; the benefit of going to images instead of Unicode characters is minimal. Drawing lines instead of using Unicode box-drawing characters is similarly such a marginal improvement that for most things it doesn't offer much.
There are things where it can be nice, such as e.g. being able to plot a histogram from output from shell tools without opening a separate window (and making it work over an ssh connection), but even then you can mostly get there with unicode characters too, and then you don't have to worry about whether or not you'll need to use those tools from somewhere without a Sixel or ReGIS enabled terminal.
And there's the rub: these gains in functionality are so marginal that the benefit is very easily lost when you often work on multiple machines, because suddenly you have to distribute them to all the machines you work on, or juggle between multiple sets of tools.
Yes, terminals do support graphics, just as they support applications that look at least as good as applications you'd find on DOS (if not better); my point is that the programmers who make those applications do not take advantage of it.
And my point is that a large part of the reason for that is experience: with the terminals that originally supported this functionality, and with modern attempts to resurrect it, even when you do have programs that take advantage of it, the benefit is smaller than you might think.
I still want to do more with it. I grew up with an Amiga, where the terminal was a system service that could be trivially embedded in a window and co-mingled with GUI elements. I still think parts of that design are ahead of modern GUIs. But it's also an ecosystem thing. It's more important to have software that works together than to get those gains that individually are quite minor. You only get the full benefits when things start coalescing into a greater whole.
> Why when typing "ls" not also have a tiny folder icon near the directories
This is already reality. Tools like lsd do that by default.
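The "cat, but for images" wish upthread is also reality, assuming the right packages are installed (these are separate real tools, not part of lsd):

    lsd                    # ls clone that shows per-filetype icons
    chafa photo.jpg        # approximate the image with Unicode characters
    img2sixel photo.jpg    # draw it via Sixel in a Sixel-capable terminal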
> I think by using only text-based UIs people get used to text-based UIs and all they can imagine is within the limitations of text-based UIs.
No. One of the main reasons I like this "limited" UI is because it also means less clutter. No popups, no stupid notifications or animations, no useless and seemingly infinite menus. Just relevant information and nothing else. I can imagine a UI without these limits, I've used it for decades, and now I'm glad I can often stay away from it. If there wasn't a huge problem with horrible overloaded UIs we wouldn't have plugins for blocking ads and scripts on websites or distraction-free modes in text editors.
I'm not against displaying simple graphics in the terminal (e.g. like ranger does), but there's a point where a decrease in limitations only means an increase in eye candy.
> They are not 70s products but they look like 70s products
I'm not sure how you could have a TUI that didn't look old. For serious applications, you have to build TUIs assuming the worst - that the user has no colors or special keys. Sure, some terminals can display bitmapped images with 24-bit color, but you can't depend on that, especially for anything running over SSH or within Tmux, screen, emacs, (neo)vim, or some combination of these. These can create some very obscure bugs related to colors and keybindings, so you can't assume every key will work as expected or that your users will have color at all (I disable Vim syntax highlighting over SSH because it tends to demolish its performance, for example). This is why many TUIs are 16-color and use just ASCII characters - you don't know if the user can display emojis or if it'll show up as a garbled mess.
GUI applications (including text-based ones like gVim and emacs) have a lot more flexibility because they can control more than they can in a terminal and can guarantee 8- or 24-bit color instead of having to hope for 16-color support.
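A small illustration of the defensive style this forces (a sketch; the theme functions are hypothetical):

    # Fall back to plain ASCII and monochrome unless terminfo
    # advertises enough colors for the fancy theme.
    colors=$(tput colors 2>/dev/null || echo 0)
    if [ "$colors" -ge 256 ]; then
        use_fancy_theme        # hypothetical
    else
        use_plain_ascii_theme  # hypothetical
    fi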
> Why when typing "ls" not also have a tiny folder icon near the directories (you are already going into the hassle of adding colors anyway, a little icon would make it more obvious which are folders and which are files when you are looking at a huge list)
That would probably break a lot of scripts. I just alias "l" to "ls -l" so I can see the permissions (which has a "d" to indicate directories).
> For serious applications, you have to build TUIs assuming the worst - that the user has no colors or special keys.
Realistically, what are the chances of this happening nowadays, and how worthwhile is it to support such users?
> That would probably break a lot of scripts.
If an icon would break scripts, so would colors, but they don't, since ls knows whether it is outputting to a terminal or to a pipe and adjusts the output accordingly. But this misses the point: it could be another command, like vls (visual ls) or whatever; the point is the ability to use more than ASCII text.
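You can see that detection directly with GNU ls (under the hood it is just isatty(3) on stdout):

    ls --color=auto          # interactive terminal: colorized output
    ls --color=auto | cat    # piped: the color codes are dropped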
> I just alias "l" to "ls -l" so I can see the permissions (which has a "d" to indicate directories).
An icon stands out more than a single letter, and using the -l view does not take advantage of any horizontal space you have.
When gnome 3 came out I had to find something else to use, so I chose one of those "light weight tiling window managers" since if I was going to be setting up something custom anyway I might as well do some research on it.
I've ~yelled at~ talked to a bunch of gnome developers trying to figure out exactly what's going on. I think that they're making the weird UI decisions in the name of "inclusiveness". They believe that simpler apps, and a standard unified ecosystem, makes things easier for marginalized groups to adopt free-software. All their weird technical decisions ultimately seem to be justified by making things easier for marginalized groups.
The Unix tools don't work on GUIs. Piping between applications on the command line is still faster and simpler to hack together than a "real" solution. I don't feel the need for anything more, because the desktop is pretty much dead and so are the metaphors it used. We might as well be complaining that devs today aren't designing a touch-based interface to get real work done on a phone. Me and my Bluetooth keyboard on Termux are doing well enough, thank you very much.
This is more of an accident of history; there's no reason you can't have a pipe-friendly collection of graphical tools. dmenu is one simple example: https://wiki.archlinux.org/index.php/Dmenu
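e.g. a (made-up) bookmark opener, showing how it composes like any other filter:

    # dmenu reads choices on stdin and writes the selection to stdout
    grep -v '^#' ~/bookmarks.txt | dmenu -l 10 | xargs -r xdg-open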
Who are you addressing this to, end-users or UI designers? The debate seemed to start off asking "why do people prefer text UIs" and now seems to have morphed into "why don't OS vendors innovate in text UIs more".
I don't know the answer to that, but it's not really relevant to the initial debate; text UIs, for all their lack of innovation, are still dramatically better than GUIs for many use cases, which is why I use them.
I do not want to go off and spend years designing and implementing the theoretically perfect OS and toolchain before I can do other work, I just want to use the best of what exists today.
My original comment was targeted at both users and developers, but obviously the "make better tools" part is only targeted at the subset that can make those better tools.
Other than using JSON, there's nothing to improve on for the type of work I do. With jq and a few small self-written C programs, I get performance on par with 'better' tools that have teams of hundreds working on them.
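The shape of what I mean (the API and field names are hypothetical):

    # Pull a JSON feed and extract the names of the active items
    curl -s https://api.example.com/items | jq -r '.[] | select(.active) | .name'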
In my case, I switched to tiling window managers because they streamlined my work and made me a lot more efficient. It has nothing to do with "performance" in the computing sense.
I've tried to use the existing "containers" systems on regular desktops and they simply sucked. Not even close to what I get on i3.
This idea of using the mouse to click on everything you need to do is something I cannot get around. Creating advanced GUI tools for "people who know what they are doing" sounds counter-intuitive to me. If you know what you are doing, you can use the damn command line.
GUIs are normally built with less features and/or dumbed down versions for the average user, not the other way around. I suppose if we forced ourselves to build better GUIs that were super flexible, this wouldn't be true. But the reality of today (or ever?) is that normally if you know what you're doing, you're using a CLI and not a GUI.
Yes, this is how things are nowadays and what I refer to in my message above: developers basically gave up trying to make more advanced UIs and regressed to '70s-styled terminals.
It's not regression. It's progression. Power users are more efficient and proficient on what you (incorrectly) refer to as '70s-styled terminals. It took us a while to get there. We went down the wrong path with fancy GUIs in the '90s and '00s. Things like not needing a mouse, being able to freely tie programs/commands together, being able to script every aspect of your workflow... these are all big wins for power users. If there is an easy way to achieve the same flexibility with GUIs, then it hasn't been discovered yet.
The best of both worlds is having a REPL, just like the Xerox PARC and ETHZ workstations did.
My first startup was a heavy Tcl shop; thanks to it I learned never again to depend on languages without support for JIT/AOT in their canonical implementations.
Because reflowing 1000 words is a lot simpler than designing a general method for conveying your message at any given aspect ratio, resolution, and size.
Because text is a lot more portable across platforms.
Because interacting with text is a lot easier to automate. And extracting information from text automatically is easy whilst extracting it automatically from a picture is very difficult.
GUIs are great for discoverability and complex, long interactive sessions. But they also have downsides.
Why use a picture when you can tell the computer what to do and then go out to the café while it number-crunches or photo-edits with ImageMagick by itself?
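i.e. the kind of unattended batch job you can kick off before leaving (a sketch; the paths and settings are made up):

    # Resize every JPEG into ./small/ while you're out
    mkdir -p small
    for f in *.jpg; do
        convert "$f" -resize 50% -quality 85 "small/$f"
    done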
The mouse is the least ergonomic input device. I have a trackball, which is mildly better. Until someone develops a pointing device that doesn't cause RSI, I will stick to tiling WMs and terminals.
I am also an order of magnitude faster at my job than my colleagues stuck in GUI land, pecking away at their IDEs. Not to mention I have a greater command of the languages we use, because I am not relying on an AI to guess syntax for me.
Avoid tiling, it sucks. Switch to a keyboard-drivable WM such as CWM. You can use keybindings to resize, tag, and delete windows without botching the aspect ratio of your browser/80x25 TUI terminal.
And it has an inline window-search menu which is really convenient, it can autocomplete window names too :D
"xterm -T yourtitle -e nvi", then in my case I press super+a, I type "you<tab>", I press "intro", and my xterm raises up.
No stupid tiling, no window resizing, no borked aspect ratio.
To unclutter my screen, that's what tags (almost like virtual desktops) are for.
I will consider switching away from a tiling wm the day I have enough screen real estate. Until then, I want to use as much screen space as possible on actual window contents. I really could not care less about the aspect ratio of my terminal windows - the applications I use know perfectly well how to deal with resized windows.
> I want to use as much screen space as possible on actual window contents.
I was like you, until I began to keep a maximum of 2-3 windows open per tag/workspace. The clutter ended.
Tiling is useless under Unix environments, where virtual desktops have been the norm since ~1989. It may have worked great under DOS and pre-Windows 3.1, where few things were open, but not under a powerful multi-virtual-display environment with several terminals running at once.
I do have 2-3 open on most of my workspaces. I have 10 workspaces. It's only just about sufficient.
I've used virtual desktops since my Amiga days in the mid '80s - they are not a substitute for wasting as little as possible of the space that is actually visible at any given time.
Aside from PowerShell, already mentioned, Windows has had system-wide scripting since Windows 98 (some parts since Win95) with Windows automation and Active Scripting, which allowed any application to expose COM objects and interfaces for manipulation by other applications, either directly (via COM calls) or via Active Scripting, which provided pluggable scripting languages so that you could use any scripting language - JScript and VBScript were provided by default - to access any compatible COM server. In addition, applications could also expose and host reusable controls (initially known as OLE controls, later rebranded as ActiveX controls), and some applications (such as Office and Visual Basic) were made explicitly to allow composing applications out of such reusable controls (Visual Basic since version 4 was basically a scripting environment for COM).
The two main issues with the above were that, because it was made to be interoperable with any native language that had support for functions and structs, it was complex to do right, and only expensive (meaning difficult for many developers to acquire) tools could help you with that. And, really, another issue was that most developers simply didn't appreciate the flexibility and composability it offered (they still don't; see how no Linux environment provides even something like Windows 3.1's OLE, let alone something like COM).
Yeah, and that gave us macro viruses, IE6 hell, and, well, the API on Windows was utterly verbose and undocumented.
Powerful, but it was too obscure for the end user/programmer compared to the readily available source and man pages of the Unixen - guess which platform was both free and easy to code for.
VC++ was expensive too, BTW.
On COM/OLE: on UNIX no one cared, because by design you could embed whatever you wanted, from video players to window manager modules. And Tcl was amazing.
Windows has been embracing composable shell tools, scripting, and automation for over a decade with PowerShell; Microsoft Exchange management was PowerShell-based in 2010, including the GUI, which also generated the CLI code in case you wanted to automate what you had just clicked through.
PowerShell syntax is too verbose (that's why I hate Java and .NET). Also, UNIX's composability and parallel execution (no pun intended) win over objects.
> Ergonomics research has shown a long time ago that dark text on light background is easier to read on computer screens for long periods of time.
I have to wonder whether that research took into consideration how large and bright modern displays are, and the durations for which modern IT professionals use them.
I would guess that the majority of these people would have been using 15-19" CRT monitors, and mostly single displays.
In pretty much every office that I've seen, most people have two or three 23-27" displays. Modern LCD panels are also extremely bright.
Staring at that much screen real estate in primarily light/white colours can be painful for long periods of time. It certainly feels less uncomfortable to use a darker theme.
> I would guess that the majority of these people would have been using 15-19" CRT monitors, and mostly single displays.
I doubt that this is the reason. On a CRT I prefer bright text on dark backgrounds because it minimizes visible flicker. Where the screen is dark on a CRT, the beam is simply off. This becomes especially important if the monitor has low refresh rate and/or fast phosphor. I think that it's for this reason that a lot of (actual) terminals have really long afterglow phosphor and use bright text on dark backgrounds. It's basically a prerequisite to use them regularly for any extended amount of time. Newer CRTs improve the situation considerably, though, with much faster refresh rates to compensate for the short afterglow necessary for media like games and video.
Overall I think that using CRTs is more exhausting than LCDs, even taking into consideration the typical size of a computer CRT display compared to a modern office dual LCD setup. After a couple of hours of use of a 15" 2002 CRT my eyes feel dry and sandy, and the problem is exacerbated by bright screen content. Also, my LCDs (at home and at work) are not nearly as bright as my CRTs.
On LCDs I tend to use bright backgrounds in daylight because it seems much easier to read, and the screen doesn't need to be so bright. In the dark, I use dark backgrounds because bright screens easily become the brightest thing in the room with little natural lighting. In practice this means a bright theme at work and a dark theme at home.
I can't wait for paper-like unlit desktop displays with fast refresh rates. IMO it's the most important step for improved computer ergonomics. Then we can fully embrace natural lighting e.g. during summer time office hours, which is healthy for soul and body and easier on the eyes than basically staring into a lamp. Also potentially a lot more energy efficient.
I used solarized-dark for a long time, switched to a light background for a few years, and am now back in dark mode everywhere.
I find this much easier on my eyes than a light theme, especially when I read PDFs just before bed with all the lights off, when my screen is the only source of brightness. Granted, I also reduce my laptop's screen brightness, and my PDF viewer (Zathura or MuPDF) supports custom colors (solarized dark). My eyes never get too tired.
I've also used redshift for some time but I don't need the gamma reduction actually with the above approach. At night my laptop looks very similar to "e-paper" or a Kindle. And my gf who has to get up a couple of hours earlier doesn't even notice my late night reading next to her.
Also, did that research have variables for ambient lighting (in field of view but not from the display) and for screen glare? A 360-degree bright cube farm is much different from a dim room with a desk lamp facing away. Your eyes respond differently and I would love to know more.
I suspect the real answer is a big, fat, "it depends."
Turn the brightness down to something not much brighter than a sheet of paper on your desk. You can always turn it up to watch a movie.
The brightness setting is like the adjustments on a car driver seat: the point is not to always use an extreme of the range. You probably wouldn’t move the seat all the way back just because the setting exists.
> Turn the brightness down to something not much brighter than a sheet of paper on your desk. You can always turn it up to watch a movie.
With the computer screens I've seen, it's only possible to approach paper in a brightly lit environment. I'm normally setting brightness to minimum and contrast to maximum, and still using light-on-dark themes with good (though not extreme) contrast.
For dim environments, some mobile phone screens (and I've heard computer screens as well) attempt such adjustment automatically, and can go dimmer than common computer screens, but it usually leads to a very low contrast and illegible texts.
> I have to wonder whether that research took into consideration how large and bright modern displays are, and for the duration that modern IT professionals are using them for.
According to my cursory study, ergonomic research leaves out many variables and settings.
Do you know of research validating or contradicting these claims:
- light text on a dark background is better in a dark room
- dark text on a light background is better in a bright room
- light text on a dark background is better when the refresh rate is low
- font size changes the situation.
My personal experience after some testing is that light-on-dark with a slightly larger monospace font than what 'feels right' [1] leads to the fastest long-term reading and the most comfort. Light on a dark background is always better in a dark room; otherwise dark on light is better. Back in the day, when we all had low-resolution CRT monitors, font sizes were large for everyone and light-on-dark was common.
[1] If I just sit at a monitor and set up my programming environment, I settle on a text size that I can read well but that is still relatively small. It seems reasonable to want to see more text. But if I choose a slightly bigger font than what 'feels' right, I can read faster. A large font, light on a dark background, seems optimal.
I've been using vi for 5 years now (switched to Neovim) in dark mode, currently with the nofrils-dark[1] theme. I don't feel any less productive; on the contrary, I can focus for way longer than before.
> Ergonomics research has shown a long time ago that dark text on light background is easier to read on computer screens for long periods of time. These screenshots demonstrate that developers in 2002 had got the message.
Wouldn't the change in display technology from CRT to LCD make a huge difference? I'm not suggesting the ergonomics research from the era was wrong, but a new study could come up with very different recommendations today.
I don't think people are making decisions about their UI colorschemes based on ergonomics. As a rule, people who say they use white-on-black or black-on-white for reasons of comfort or health are just rationalizing an aesthetic preference.
(n.b. I'm not weighing in on whether white-on-black or black-on-white is more ergonomic; I'm just saying that it's rare that someone's preference is actually grounded in ergonomics)
There's no shame in having an aesthetic preference, nor in sacrificing some ergonomics to realize it. If people always preferred ergonomics to aesthetics, everyone would be wearing New Balance shoes (I'm a happy owner of a pair). Your system UI and your code editor is something you stare at a lot, and it's reasonable to want it to look good.
> I suspect it's primarily an identity signal: it's harder to feel like you're hacking The Matrix if your IDE looks like Word from a distance.
This is also my theory: "hacker signals" like dark backgrounds, tricked-out shell configs, and using vim are more common in language communities where people feel insecure about their claim to technical status - or rather, about the acceptance by others of their entirely legitimate claim - such as those of Ruby and JavaScript.
> using vim are more common in language communities where people feel insecure about their claim to technical status
What about us greybeards who just feel naked without vi/vim? I've ended up (due to market demand) in the frontend world, but I still use vim because I feel horribly unproductive plodding around with a mouse and arrow keys in Visual Studio or what have you. Yes, given a few weeks I could learn the shortcuts and be more productive, but I can't imagine I'd ever reach my vim level of productivity (same applies to emacs users as to vim users, I'm sure). So why put myself through that?
Just something to keep in mind. Many of us use vim because that's what we're used to. And to be fair to the younger developers, some of them see how productive vim users are in navigating a document and make the switch. Nothing to do with "signaling".
When I have a big edit I'll just pop open a vim window. I cut the text out of VS Code or whatever wretched web editor I have to use for some online App and paste it into the vim window, edit, then cut and paste back into the original window. When they come up with a variant of a pathetic markdown box that supports :%s//blah/g I might consider the internet ready for prime time.
The best thing about modern GUIs is they let me have a zillion command lines open at the same time.
It may just be my own circle, but comparing the Ruby folks I know from 10 years ago to the Ruby folks I know now... there are fewer of them, and those that remain using Ruby day to day never went in for those 'hacker signals'. But the Ruby folks I knew from 10 years ago who've since moved on to other arenas (like JavaScript) did use some of that sort of signaling.
It's simply much, much easier for me to work with a dark background. Not fully black, but preferably a bit in the purple direction. You are not top-down reading most of the time, but working on lines, jumping between them. That's much easier for me on a black background, because the text pops out and the background "noise" is much easier on the eyes.
When you're reading a block of text, you interact much less with the background than when you're searching and jumping between lines.
From personal experience: I use dark themes on everything for years, and the difference is night and day, literally.
Dark themes are much better for my eyes.
Having a hard time believing any research suggesting otherwise. Would surprise me, to say the least.
Also, this is anecdotal - I have 240Hz monitor, and since I started using it (couple of years), I have less eye strain. It could be the refresh rate, or it could be this particular monitor somehow (the quality or whatever). I am not sure.
Basically, 240Hz + dark background + light text + low brightness is ideal for my eyes, based on my experience.
I like my screen to be dim, and find that the white parts of an LCD are usually too bright, even when the brightness is turned right down. I expect an off-white or grey background on an OLED screen would be better, but they haven't made their way to the desktop yet.
On my phone with an OLED screen I still prefer white-on-black, but that's mainly because even at the lowest brightness it is still far too bright at night.
> Ergonomics research has shown a long time ago that dark text on light background is easier to read on computer screens for long periods of time.
There are studies that have shown that in the lab. Meaning that it's worked out that way under whatever conditions were present in the lab, which aren't necessarily the same as the conditions in one's workplace. Real life is probably messier.
I'm not a "young" developer, but regardless of any research I simply prefer dark themes because I spend several hours every day staring at screens, and I find them so much easier on my eyes.
What is needed is alternating themes based on the real-world day/night cycle - even though, arguably, when the sun goes down, that should be the time screen time is cut anyway.
If you are using X11, you can use xcalib to modify gamma/brightness/contrast of the color channels individually. My own quick hack is "xcalib -invert -alter" which simply inverts the colors. This probably breaks subpixel rendering, though, but IMO subpixel rendering looks like crap on low resolution displays and is pointless on high resolution displays.
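One could even automate the day/night alternation the parent asks for with two cron entries (the hours are arbitrary, and cron jobs may need DISPLAY=:0 exported to reach the X server):

    0 8  * * * xcalib -clear            # daytime: normal colors
    0 20 * * * xcalib -invert -alter    # evening: inverted colors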
Having been around then, light backgrounds on old CRTs really showed off astigmatism and non-linearity and blooming and general problems of the technology, and 2002 is modern enough for "famous tech people" to have LCDs. Light backgrounds were a display of conspicuous consumption for those who could afford LCDs.
Blooming, for the kids these days, is what happens when the HV supply of a CRT is poorly regulated in a constant-power sense: changes in display brightness cause changes in electron-beam current, which cause fluctuations in the acceleration voltage, which literally make the screen zoom in and out a couple percent while under high load. The poor regulation was usually worse under high current demand, so you could "clear ; top" a black background and there would be no weird zoom effects, but a white background might flutter in and out as the content of the screen varied. Very annoying to have one window flutter a little because another had its content change.
Interference and weird aliasing problems showed up in vast fields of illuminated white background more than in unlit black background. Retrace lines looked particularly awful on large fields of white. Yes, there's a clamping circuit that SHOULD totally shut off the beam on vertical retrace, but as you'd expect, it looked awful on a $200 CRT monitor and beautiful on a $1000+ monitor. It's just not noticeable on a black background.
There was also a filtering effect. With light backgrounds having terrible image quality, a tech giant of 2002 had spent decades staring into mostly black-background terminals, so anyone who (as of 2019) could not stand black backgrounds was simply filtered out of the pool of possible tech giants back in that era. If you couldn't stare into the abyss of a TRS-80 Model III's sorta-low-res black-background CRT for 40+ hours per week, you simply had to leave the computing field in 1981, for example.
The only times we used light backgrounds on CRTs were for non-text purposes (games) and very strange lighting problems (the hated glossy screen with a light source reflected into your eyes, where perhaps a very bright CRT background would reduce the massive glare distraction).
To some extent USABLE white background on a CRT was conspicuous consumption in that it was unusable on a $250 monitor or a TV connected to a 1980s home computer but usable white background CRT meant someone bought a $1000+ fancy name brand monitor. Anyone could have a decent black background display regardless of personal wealth but a white background meant you wanted everyone to know you had money, or at least you had money before you spent it, LOL.
WRT conspicuous consumption, I was multi-monitor back in the '80s (aka multiple machines on the desk), and CRTs were an absolute nightmare: a 70 Hz monitor next to a 60 Hz monitor made both unreadable, and in some ways it was even worse to have a 60.01 Hz VGA card next to a 59.99 Hz card, because then you'd get a magnetic-interference crawlie that crept up the screen slowly like a visual hallucination. The tech giants in the linked article were generally not doing tech work - most of the people listed were managers - and as such, having one monitor on the desk for Gantt-chart manipulation, email, and the like was typical for managers of that era, even if the techie front-line personnel had three (or more) monitors on their desks.
One of the most interesting interior-design aspects of the CRT-to-LCD transition about two decades ago is that the multiple monitors on my desk at home went from as far apart as possible to as close together as possible, since there was no more interference. Roughly as interesting was the transition from 4:3 monitors to wide TV aspect ratios.
I don't know. I work with rootkits and do reverse-engineering stuff (I believe that's the most "hackerpunk" you can get), and my setup is Emacs with the Leuven theme, the pretty standard white IDA, and the Sumatra PDF viewer for reading (everything on MS Windows, no less). It feels pretty great!
I find dark backgrounds and low brightness make floaters in my vision much less visible working on a computer. Please take your condescension and put it where it belongs.
Shadows on the wall -- Plato. This is a non-issue with e-ink monitors; neither side is correct. Until that day when all monitors are e-ink, I don't have to read ergonomics research to tell which is best - I just go with what my pain receptors and common sense tell me. The beaming white background of Hacker News is making my head hurt, and when I return to my black terminal I suddenly become aware of the rest of the room.
Some of those fonts make my head swim. I'm glad the quality of desktop typography has improved so much in the recent past. Also, I will always be scared of the people with strange cursive fonts like CmdrTaco. It's like people who use that comic sans-esque font that comes as a pre-installed option on Samsung phones.
Old designs used to fit more data in 1024x768 (or 800x600) than we get now in high-res, high-DPI widescreen displays.
There's a personal finance app I use that was recently modernised, and it went from being able to show me 50 transactions on a 1920x1080 screen to barely being able to display a little over 10 transactions at a time. No improvement, just... padding.
this calendar widget. [..] My gripe with this design aesthetic is the loss of information density. I'm an adult human being sitting at a large display, with a mouse and keyboard. I deserve better. Not every interface should be designed for someone surfing the web from their toilet.
Here's what the PayPal site used to look like. I never fell to my knees to thank God for giving me the gift of sight so that I might behold the beauty of the old PayPal interface. But it got the job done. Here's the PayPal website as it looks today. The biggest element on the page is an icon chastising me that I haven't told PayPal what I look like. Next to that is a useless offer to 'download the app', and then an offer for a credit card. I can no longer control the sort order, there are no filter tools, and you see there are far fewer entries visible without scrolling.
> If you're only displaying five sentences of text, use vanilla HTML. Hell, serve a textfile!
I like this author. Current size of my entire personal site is just under 900k, including all downloads, the blog, and images. You could read it on a TI-83 with the right software, a modem, and a dialup account. Could probably even display the images since they're 1bpp bitmaps.
I used to run flashy DEs years ago. Enlightenment for all those who remember, with all the bells and whistles turned on.
As years went by, I started to remove everything. I now use an automatic tiling window manager with no borders. I actively disable all animations, and I use color much more thoughtfully (color yes, but only where it is needed), so that by default my laptop looks a lot more like those old screenshots than like today's colorful tablets.
I'm now quite pedantic on how text should be rendered the way _I_ want, and it should be the same _everywhere_.
So in a sense, I see why the current look is attractive, and my younger self would approve, but in retrospect the bland-but-consistent look is what I eventually moved to by choice, for a lot of reasons.
The current UI trend is, to my mind, considerably worse from a UI perspective than what Windows 3.1/95 (and same-era DEs) offered.
I tend to hit print screen randomly every few weeks, and have been for a number of years. It’s like photography for when I’m not outside - it amounts to a near record of my ways of working over the years.
I used to do this too. For me they were only interesting until I started my career. Now those screenshots are just the default Mac background with a few apps hidden in the launch bar.
The things in person in real life, on the other hand, have become much more interesting!
Jon Hall's desktop shows exmh running. I did a bunch of work on that long ago (90s). His two screenshots highlight two of my contributions (both of which live on today).
I wrote the folder display in the upper pane (I'm really proud of this result).
I also worked on the abomination of a 'pick' interface.
I love how some of them used Phoenix (an earlier name of the Firefox project). It was a breath of fresh air after Netscape and Explorer. Even the earliest versions supported multiple tabs, advanced CSS, and Adobe Flash containers.
Ahh. I was told by a CS student in 2005 or so that knowing just some Unix commands would get me a good job. I even took an introductory class based on this, but I never pursued it that diligently. And now, 20 years later, I am learning Linux plus lots of other related stuff. Should've listened to the recommendation 20 years earlier.
I did all sorts of desktop and window manager things over the years, but the last few years, I'm very happy with a tweaked tiling window manager with workspaces (currently based around XMonad).
All screen real estate is used for window content, except for a few pixels for borders. Currently there is no persistent panel or status display of any kind. I already know what workspace I'm on, based on the windows there, and I know which workspaces I've been using. When I need to see the clock, I've rigged a keypress to display it. I occasionally miss having system loadavg info displayed all the time, but I prefer using that screen real estate for an extra line of code or text.
I've made a bunch of keybindings that start an application or switch focus to it if it's already running.
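Outside of WM-native bindings, the same "run or raise" trick can be approximated with wmctrl (a sketch under that assumption, not my actual XMonad config):

    #!/bin/sh
    # run-or-raise: focus the app's window if one exists, else launch it
    app="$1"; shift
    wmctrl -x -a "$app" || "$app" "$@" &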
Yes, I don't even customize mine. I leave the OS and tools in their default configurations. I do my work in the cloud. If my computer is hit by lightning, I toss it in the dumpster, get another one, and am back to work within an hour.
My toaster is also boring, and my pipe wrench, and my floor jack. Just tools.
Mine would look like Jordan Hubbard's, because by 2002 I'd replaced my Slack linux running fvwm with MacOS (on a Mac desktop I'd picked up for free from the e-waste bin at Berkeley!)
I used Linux in 2002, and the sentiment I get is that the whole desktop environment thingy is getting all bloated and padded. Heck, those KDE screenshots show so much stuff on such a small screen, right?
And it was all really snappy on a P166 MMX with a meager 64 MB of RAM.
KDE 3.x was also in some ways more feature-rich than current KDE. 4.x was a huge mess due to some big ego trying to push his ideas on the world, but I think the current Plasma releases are on a good path towards being mature and productive - however, the applications beyond Dolphin are quite lacking. It's also sad that a lot of resources and energy go to GNOME...
> current KDE. 4.x was a huge mess due to some big ego trying to push his ideas on the world
That really wasn't the reason for KDE 4.X being a huge mess. I find it annoying that people who contribute a lot of their free time and life to a free software project are treated this way.
aseigo was a paid developer on KDE but what he contributed was far more than his paid time. Also he was a contributor before and after his paid development.
KDE was a mess because the Qt3 to Qt4 change was a humongous change. KDE had to decide whether to stick with Qt3 or move forward with a huge effort to rewrite the whole stack. And all it had was volunteers.
You underestimate the complexities of that situation and blame it on a single person and that's an unfair oversimplification of the situation.
> KDE was a mess because the Qt3 to Qt4 change was a humongous change.
KDE 4 was a mess because, in addition to the Qt upgrade, they decided to jump on the semantic-desktop nonsense fad, as well as develop a completely separate set of widgets for the desktop, and then shoehorn those things into everything. Oh, and then there was the overengineered PIM framework that starts a MySQL server on your desktop system even if all you want is holidays highlighted in the clock widget's calendar. Also, they released what was still an alpha at best as 4.0 - version numbers for user-facing releases do matter.
Solid, Phonon, Plasma, Nepomuk, Akonadi, Sonnet, and many others.
Some of them paid off well (eg Solid and Plasma), some were less successful. That's a very normal thing.
KDE 4 wasn't a mess. Later versions (4.7 afterwards) were very polished and usable.
The problem was that KDE 4.0 was hyped to no end. Distributions jumped on the hype and had a race to ship it first. Although KDE developers all warned that the .0 release is nowhere as stable.
They had to get it out, though. It had already been in the works for 2-3 years, and some projects were losing volunteers because their work was not going to be released and they were losing motivation.
I'm sorry - you are correct. I didn't want to blame a single person.
But the new concepts and metaphors for the desktop were just not well thought out, IMHO - that widget stuff was the wrong direction, as was the idea of abandoning the classical desktop. That mandatory Desktop folder just drove me nuts.
What SuperKaramba offered on KDE 3.x is still not completely possible in current KDE.
I don't think KDE was ever serious, especially considering the mascot was initially some old bearded wizard in blue robes, striped socks and slippers with fishbones in his pockets[0] :-P.
But the theme was way more neutral than most themes today and could work for anything from serious applications to games - similarly with Win9x.