Then, as you would add graphical controls to the main window in the layout builder, the terminal-emulator-interface part of the window would get squished down further until it just looks like a status-bar. (Which is essentially what status bars are: a very underpowered terminal-emulator only capable of showing the last line of stdout.)
But click [or tab into, or Ctrl-` to activate] the thing, and it’d “pop down”, adding more height to the window, if there’s room; or “pop up”, resizing the rest of the window narrower; or just overlay translucently on top, like in games with consoles.
Opening this wouldn’t give you a login shell, exactly—it’d be the stdin of the program. But the toolkit would hook the stdin and turn it into a REPL, based on the CORBA/DCOM/DBUS service-objects available in the program; and the GUI toolkit would then enhance this REPL, by introspecting these service-objects, to enable both textual autocomplete/auto-validate of commands, and the drawing of these sorts of graphical command-palettes. (But you’d be able to “shell out” to process these objects with commands, like you could in scripting languages.)
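To make the introspection idea concrete, here is a tiny Python sketch of how a toolkit could enumerate a program's service object to drive REPL autocomplete. All names (`DocumentService`, its methods) are hypothetical, and plain Python introspection stands in for a real CORBA/DCOM/D-Bus broker:

```python
import inspect

# Hypothetical "service object" standing in for an app's exported
# service API; the class and method names are illustrative only.
class DocumentService:
    def open(self, path: str):
        return f"opened {path}"
    def word_count(self) -> int:
        return 42
    def _internal(self):
        pass  # non-public: hidden from the REPL

def completions(service, prefix):
    """List public methods matching a typed prefix, with signatures."""
    out = []
    for name, fn in inspect.getmembers(service, inspect.ismethod):
        if name.startswith("_") or not name.startswith(prefix):
            continue
        out.append(f"{name}{inspect.signature(fn)}")
    return out

print(completions(DocumentService(), "w"))
# e.g. ['word_count() -> int']
```

The same introspection data could drive auto-validation (arity and type checks before dispatch) and the graphical command palettes described above.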
Or, to put that another way: Plan9’s Acme editor is a cool idea, making the whole editor into an interface where you can just write magic runes on the walls and then activate them. But there’s no real understanding or support for those runes, because they’re fundamentally opaque Plan9 executable binaries. What would this look like if we translated it into the object-oriented world that GUIs live in? What would Smalltalk’s Acme be like? And then, what if all applications in the OS were built on top of it, the way that you can build applications as major modes in Emacs or as plugins in Blender?
Or, to put it yet another way: if Automator in macOS can already latch onto a running program’s service API and tell it to do things as a batch script, why can’t I interactively access the same APIs from inside the running program itself, as an automatic benefit of using whatever technologies power Automator? Why can’t I do the equivalent of opening the Smalltalk Inspector on my GUI program?
Open source projects like GNOME paid a heavy price for not being able to deal with people like Lennart resolutely.
I see it as a great injustice that two to four drama-queen personas can hound 50+ of the most productive developers out of the ecosystem, and not the other way around.
This is how Gnome 3.0 became the private project of a few drama-queen personas who hijacked an open source project from 100+ people.
Isn’t that what the script editor on OSX does?
On Linux some Tcl programs are like this; I don’t know if it’s a feature of the applications I use or of Tcl itself, since I (unfortunately) almost never use it.
As for the Oberon System: there were no executables, only dynamically loadable modules.
They could be used by other modules, or by the OS itself.
Basically, there was a set of conventions for how public procedures/functions should look so that the OS could recognise them as callable from the REPL or via mouse actions.
And for mouse actions, they could either act immediately, act upon the currently selected text/viewer, or present a context menu.
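A rough analogue of that convention, sketched in Python rather than Oberon (the `Edit.Open` module/command names are invented for illustration): any public zero-argument procedure in a module is picked up as a `Module.Command` the system can invoke:

```python
import inspect
import types

def find_commands(module):
    """Scan a module for public, zero-argument procedures and register
    them under the Oberon-style name Module.Command."""
    cmds = {}
    for name, fn in vars(module).items():
        if name.startswith("_") or not callable(fn):
            continue
        if len(inspect.signature(fn).parameters) == 0:
            cmds[f"{module.__name__}.{name}"] = fn
    return cmds

# Hypothetical module standing in for a compiled Oberon module.
Edit = types.ModuleType("Edit")
def Open():
    return "viewer opened"
def _private():
    pass  # not exported: invisible to the command scanner
Edit.Open = Open
Edit._private = _private

registry = find_commands(Edit)
print(list(registry))           # ['Edit.Open']
print(registry["Edit.Open"]())  # viewer opened
```

In the real Oberon System the convention was richer (commands could read the current selection or viewer), but the discovery-by-shape idea is the same.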
Or look at how Inferno and Limbo interact with each other; don't stop at Plan 9.
I think the closest you can get back to these ideas is Windows, with COM/.NET/DLLs and PowerShell as orchestrator.
DW/CLIM record text+graphical output in a DOM-like tree structure with references to the underlying application objects, plus a "presentation type" and parameters for how the object was presented.

The interaction window presents a stream interface, rebinding your application's standard-input and standard-output streams, but also implements drawing operations; the functions 'accept' and 'present' do structured IO against the stream in terms of actual objects. The presentation-type parameter determines both how to parse/print objects and which objects on screen may be selected with the mouse as input. Everything being live objects, the Lisp machine usually let you select any object on screen and pop it into an inspector.

Commands defined the presentation types of their parameters, again defining how to parse/print arguments and enabling mouse selection. Presentation translators let you define how to convert an object of some type into a context where the UI wanted input of a different type. Commands themselves had a presentation type, so the top-level command processor literally loops: printing a prompt, calling 'accept' for a command, then executing the command. Consistently, presentation-to-command translators let you trigger commands directly via clicking objects, drag-and-drop gestures, etc., permitting the UI to function in terms of commands even if the textual 'interactor-pane' was absent.
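A heavily simplified sketch of that present/accept mechanism, in Python rather than Lisp (the recorded-output structure and the type names are invented for illustration, and real CLIM also handles the textual parsing path, elided here):

```python
# The DOM-like output history: each entry pairs the live object with
# its presentation type and the text that was drawn for it.
output_record = []

def present(obj, ptype):
    """Print an object and record it, tagged with its presentation type."""
    text = f"<{ptype}: {obj}>"
    output_record.append((obj, ptype, text))
    print(text)

def accept(ptype, clicked_index=None):
    """Return an object of the wanted type, here only via a simulated
    mouse click on a previously recorded presentation."""
    if clicked_index is not None:
        obj, recorded_type, _ = output_record[clicked_index]
        if recorded_type == ptype:
            return obj  # the live object itself, not its printed text
        raise TypeError(f"presentation is {recorded_type}, wanted {ptype}")
    raise NotImplementedError("textual parse path omitted in this sketch")

present({"name": "report.txt"}, "file")
present(1234, "integer")
# A command asks for a 'file'; the user clicks the first presentation:
assert accept("file", clicked_index=0) == {"name": "report.txt"}
```

The key property survives even in this toy: what comes back from `accept` is the original object, so anything on screen can feed an inspector or a command parameter directly.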
It's a fascinating paradigm, with some shortcomings. Despite CLIM defining some more traditional GUI widgets, how to integrate them with presentations and the command processor is a bit half-baked. This is a solvable problem. A free implementation called McCLIM is still around and worked on by one or two of the devout.
Cmd-Shift-/ (a.k.a. Cmd-?) opens the Help menu's search field, where you can just type and it filters all of the menu items matching the query. You can press Return to execute an item, or use it to discover the shortcuts for items.
Kudos for using Vala, it needs a bit more love, instead of slowing down GNOME with GJS everywhere.
Keybind is already in use for some Gtk+ applications (Firefox, IIRC), and it also makes it painful that some apps I use are not Gtk+ but, say, Qt... which makes me wonder if there's a thing like this for Qt?
Huh. I was sort of under the impression that nix was supposed to make that impossible.
Here is the error:
    error: The unique option `environment.variables.XDG_DATA_DIRS' is defined multiple times, in:
    (use '--show-trace' to show detailed location information)
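For what it's worth, one common way to resolve this class of NixOS conflict, assuming two modules both set the variable and you're happy to let one definition win, is to raise that definition's priority with `lib.mkForce` (the value below is a placeholder, not the real one from this setup):

```nix
{ lib, ... }:
{
  # Placeholder value; substitute whatever the winning definition should be.
  environment.variables.XDG_DATA_DIRS = lib.mkForce "/placeholder/share";
}
```

The module system keeps only the highest-priority definitions of an option, so a unique option with one forced definition no longer counts as "defined multiple times".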