“MGR provides each client window with: curses-like terminal control functions, graphics primitives such as line and circle drawing; facilities for manipulating bitmaps, fonts, icons, and pop-up menus; commands to reshape and position windows; and a message passing facility enabling client programs to rendezvous and exchange messages.”
Essentially, each window was a souped-up ASCII terminal with extra escape sequences that implemented primitive graphics. It competed with the X Window System on low-end workstations during the 1980s.
The width of the display used in the screenshot appears to be less than 150 characters. Why would you add the clutter and distraction of transparent windows with frame decorations in such a limited environment? 30% of the real estate is consumed by elements that don't present any information to the user.
On a modern display, this could far more productively be replaced with (1) any number of tiling WMs with console terminals in the panes (I use stumpwm for this), or---and you don't even need a GUI/WM for these---tmux or screen, or even emacs -nw, which give you a much more configurable and robust text-only environment. My normal working environment is a pretty heavily customized tmux (including powerline and full 256-color/unicode support) running on a machine that doesn't even have a GUI installed. (I do cheat a bit by using kmscon---or fbcon in the past---to get colour and font support on Linux vttys.)
The demo video (https://www.youtube.com/watch?v=fLumnSctakY&feature=youtu.be) reveals that this is not just a traditional window manager for a static screen; it's got a much larger virtual canvas that the display can scroll around in. So there's a LOT more space, which makes big, character-wide window frames a more defensible aesthetic decision.
Seriously, go watch the video. This thing feels like it's got a foot in the realm of UI roads not taken, like Raskin's zoomable UI; text mode is maybe just a way to keep their focus on thinking about the interactions without having to worry about drawing a bunch of pretty widgets.
Yeah, I see it. I just don't get it. There's an awful lot of UX work being done to preserve an illusion that you have a bunch of fixed-size consoles that are arbitrarily arranged in a 2D space. Which, I guess, is the defining metaphor of GUI interfaces, but if that's what is desired, why create a console-only (if only in appearance) GUI? Text buffers in emacs or panes in screen or tmux can be made to have arbitrary size and dimension if you want to turn off wrapping and keep a longish scrollback. You can have as many buffers as you want, with as many visible at once as your screen can accommodate, and easily navigate them as you like. All you have to give up is the conceit that windows have independent physical size and location, and embrace one that holds that you can have an arbitrary number of text buffers available to arrange, swap, hide, show, etc.

Not trying to be nasty, but this feels like a bit of a cargo cult: someone stuck in the windows GUI metaphor trying to create a version of the CLI metaphor they don't fully grasp. Of course, the fact that those of us who prefer the CLI metaphor are rapidly dying off is relevant here---but seizing all of the disadvantages of an all-text environment without any of the advantages seems ill-advised. That said, no one needs my approval. If this works for you (and I'll readily admit it's kind of cool), what do you care that I don't get it?
Typing text into a terminal/buffer is easy. But remembering all the shortcuts or commands to resize and move windows in tmux et al. is hard. Resizing and dragging with a mouse is easier (or fingers on a touch screen/interface).
I agree that it feels really confused but if I can get it running one day with the ability to save my sessions, I would gladly flip between this and X+i3 for my daily driver.
What I see offered here is a way of mapping a huge, non-software project more loosely than a strict classification-based directory/file structure. Specifically I've been slowly fleshing out my own fantasy world with lore and history. I tried the live demo and I can totally see this meshing really well with the part of my brain that likes fiction because it can be a little messy.
I also think the friction of learning a new UI would be a benefit here in reducing the amount of "laptop fatigue" which keeps me from writing less technical stuff after wrestling with a bug all day.
At the end of the day, it's a fun project. Plus, tmux can be annoying to configure so that mouse behavior feels consistently native: either scrolling works great in the shell or it works great with an editor open, never both.
Agreed. But using an entire character width for that purpose is extravagant in this example (use of background colour would be better; right now, some of those windows are 10% border). In a larger area, you can use Unicode or even the box-drawing characters from the old IBM 8-bit character set for a single-pixel hairline plus padding for both windows' text. I guess my take is that this is sacrificing an awful lot of real estate to simulate overlapping windows, which is a feature of dubious utility in a pure text environment. "Users expect them" is probably the most common and egregious excuse for bad UX design, but I do understand why.
Window transparency is actually a way to increase the information density; if you set the transparency just right, with a bit of practice you can essentially view the contents of several windows at once in the same space. For a long time I used a laptop with only XGA (1024x768) resolution, and writing code in a semi-transparent window with docs, IM, and several other windows underneath made the small screen far more usable than it would be otherwise.
And, of course, ed is the default editor. :-) I exert geek dominance over introduction to C students by demoing using ed to modify a source file. After which I amuse myself for the remainder of the term: "Source Control? Syntax highlighting? Luxury! When I was young we had to hand-magnetize knitting needles and edit files one bit at a time directly on a Winchester drive!"
More seriously, I think the 80-column width is just too constraining with more modern languages that aren't remotely as terse as C, not to mention a limit that aggressively militates against in-line commenting. We also have monitors and fonts with resolutions that permit easily reading much denser use of the visual display. I mostly go with 120 chars now (in practice, that means lines still won't wrap when I print source code files) and however many lines fit into the display pane.
Even though 120 fits comfortably on the screen, I find it leads to sloppier code than 80 or 90. The latter exerts pressure to "do one thing per line, and make it neat."
IME this is true of modern languages like Ruby or JavaScript or Go, as well as older ones like bash. But I know this is a religiously contentious subject, so to each his own.
Anecdotal, but it does feel like a good visual approximation, and when I start to run close to it I usually pause and figure out if that code needs attention.
I've been thinking about such a text-based DE for a custom Raspberry Pi Nano-based smartphone. Reason being, common mobile apps are not going to arrive for Linux, and the Nano is not powerful enough to run web browsers anyway. So why not a text-based DE, or even just a console that enables basic communication features, plus Lynx for browsing?
It would be great if we could get some GPU juice from the Pi for text rendering in such an environment, like iTerm2 does on macOS.
>That can totally run a desktop environment and browser. Especially something light like XFCE or Enlightenment
Even regular-sized Pis struggle with web browsing in a standard browser (Firefox/Chromium). I know that largely has to do with the web pages themselves being memory hogs, but then again a similarly specced x86 Chromebook doesn't hang while browsing (same web page, same number of tabs). I think it largely has to do with browsers not being optimised for the ARM architecture on Linux.
I had the same issue with XFCE, i.e. browsers hang after some time. As for Enlightenment, third-party apps didn't work the last time I tried with a Pi 3[1]; has the situation changed now?
This shortcut works even if the remote side isn't responding to ^c and/or ^d. It's handled by the local client. The two primary cases where I've found it useful are when I accidentally wedge the remote box by using up all of some resource, or when some other event causes my network connection to hang and I don't want to wait for a timeout.
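For anyone unfamiliar, these are OpenSSH's client-side escape sequences, typed at the start of a line (the escape character defaults to ~ and can be changed with the EscapeChar option or the -e flag):

```shell
# Press Enter first so you're at the start of a line, then:
#   ~.   terminate the connection (handled by the local ssh client)
#   ~?   display a list of all supported escape sequences
#   ~^Z  suspend the local ssh client
# The escape character can be changed, e.g.:  ssh -e '^' user@host
```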
Yeah, I was specifically confused about him having to use it often. It is sometimes useful, like when you're connected to the demo site and can't be bothered to figure out how to disconnect.
But such situations are so rare that I was confused by the remark about how often he uses it.
If you do the first connection with mosh instead of ssh, it will survive vpn/wifi/4g roaming and even system hibernation. Assuming that first hop is a vps in the cloud with a stable connection, you can literally wake your PC from hibernation, and once you get any Internet access, mosh will happily restore your "ssh" session for you.
mosh.org:
>Mosh will log the user in via SSH, then start a connection on a UDP port between 60000 and 61000.
Needs at least a couple of ports in that range open, because old sessions tend to block ports until they are cleaned up.
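Concretely, getting mosh going usually looks something like this (the host name is a placeholder; ufw syntax shown, adjust for your firewall):

```shell
# On the server: allow mosh's UDP port range (ufw syntax; example rule)
sudo ufw allow 60000:61000/udp

# On the client: the first hop goes over SSH, then the session switches
# to UDP and survives roaming and hibernation
mosh user@vps.example.com
```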
I would suggest keeping around some versions :) I use git for this very reason locally when I'm futzing with settings; it's immeasurably handy for experimenters and config file junkies.
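A minimal sketch of that workflow, assuming the config lives in a directory (the path here is just an example):

```shell
cd ~/.config/someapp        # hypothetical config directory
git init
git add -A
git commit -m "known-good settings"
# ...futz with the settings...
git diff                    # see exactly what changed
git checkout -- .           # roll everything back if it broke
```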
Along with DESQview and IBM's TopView, as already mentioned, this reminded me of the first version of PC Smalltalk put out by Digitalk, which was called "Smalltalk Methods". It was a character-based windowed Smalltalk.
This was quickly followed by the fully bitmapped, windowed Smalltalk/V. I believe all of this was when MS Windows still only had tiled windows without overlap.
Glad to see someone made this to save me the time! I've wondered if a TUI could replace some desktops for low end or embedded systems, or something to use on remoting into a datacenter etc.
That's only true for TrueType fonts in graphical mode.
In text-mode, characters are "drawn" by simply writing their ASCII code point into VRAM and writing another byte (two 4-bit values really) in an additional "attribute" buffer to set the foreground and background colour.
Programmers can swap out the default character set to provide their own typeface and graphical characters for use as window borders, shadows, etc.
(S)VGA text-mode is as cheap as it gets in terms of computational complexity.
This sort of thing hasn't been done with the display adapter in a text mode for a decade or so, now, even on PC-compatible hardware. Even terminal emulators of this sort on Linux and the BSDs use the display adapter in graphics mode, drawing glyphs from loadable (bitmap) fonts. The sorts of systems that mysterydip was talking about might not even have hardware that provides an old-style MDA/CGA/EGA/VGA text mode, moreover.
This in addition to the fact that, as others have pointed out, the old-style text mode really doesn't lend itself well to Unicode, to the new (sic!) terminal colour systems of the 1990s, nor to proper boldface+underline+italics. (The latter is one of the reasons that commercial DOS programs from 1-2-3 to Quattro Pro were switching display adapters into graphics mode and rendering their TUIs by drawing graphics, with BGI in the case of Borland, back in the late 1980s and turn of the 1990s.)
With post-VGA blitting capabilities one can of course do things like pre-load glyph bitmaps into off-screen video RAM and blit them into the display, potentially a lot faster than having the main processor write the glyphs directly; but it is still not as simple as writing 16-bit character+attribute pairs, to do what modern users generally regard as the basics of TUIs in these modern Unicode, 24-bit colour, full ECMA-48 attribute support times.
I suggest running this on a server vs. XFCE/LXDE/some other low-overhead windowing environment and getting back to us on the overall load on the machine vs. graphics. Holistically, this is going to be a much smaller footprint.
This reminds me of the DOS menuing system we used to install on clients' computers several decades ago. Cannot remember the name, may have to dig out my old 5 1/4" floppy collection binder I used to always carry with me if not remembering starts to drive me crazy...
That system remapped the function keys to launch batch files or call up sub-menus... Easiest way (at the time) to "customize" peoples' computers for them...
Although I played with DesqView, that was not the menuing system I used to use - It had some monumental name for the day, like SuperMenu or MenuGold 2000 or some such... Internet search not getting any hits that match. Oh well.
It was very common on some of the old VT220 (etc.) terminals to have menus and windows and such. However, I never saw anything like this :) . Can't imagine those 9600-baud terminals having fun with all of this, haha.
No, went and looked through my little 5 1/4 binder, not sure I still have the menu utility...
But for those of you who were doing this back in 1991, here is a list of the utils I carried on 5 1/4":
Gibson's SpinRite
MS-DOS 6.22
pcAnywhere 5.0
PKUnzip 2.04G
Everex - The Edge Utility (BIOS settings app)
Easy Data 286 Setup for 10-12MHz
Laser Digital Phoenix Setup
Wyse Tech - Setup & Test
IBM Diags PC/AT
Utron Inc - Neat 286 Setup Disk
Misc. Novell files
QEMM 386 5.10
Brief 2.0
SuperStor V2.04
XTreeGold
Mountain Tape (tape backup)
Norton Utils 5.0
ProComm Plus
Intel Satisfaction
MicroHouse IDE
Diagnose
DOS 5.0
DOS 3.3
Checkit 3.0
Scan80, Clean80, CPAV
Norton Antivirus
LapLink III
And somewhere in my office I still have the multiple boxes of 3 1/2" disks, both the 10 disks boxes, and bigger 100 Disk box, that saw a lot of wear and tear in my backpack - I skateboarded to clients around Santa Cruz, so everything had to fit into my backpack at the time.
Oh yeah. I'm going to date myself here, but I was just going to say this reminds me a lot of Borland Turbo C++ way back in the day. They had this thing called Turbo Vision for building text UIs. It seemed super cool back then.
It definitely was super cool. The windows had types, could be resized, had titles, dropdown menus, and form controls. The editor had syntax highlighting. And most importantly (at least for me at the time), there was a language reference included with the help system which had hyperlinking, so you could click through pages about the supported syntax and various topics, gleaning a wealth of information. All on DOS!
Hypertext integrated help has been available since even earlier than Turbo Vision; it was available from Turbo Pascal 3 at least. For a youngster like me who ... copied it onto two disks, it was awesome what you could learn.
Turbo Pascal 3 didn't have integrated help (hypertext or not), it was introduced in Turbo Pascal 4 (which itself was AFAIK in general a big rewrite of the compiler and IDE).
Though the integrated help really improved with version 6 (when Turbo Vision was also introduced), since you could move the cursor freely inside the help window as if it were an editor, and select and copy text; the help text was much more detailed and better organized, and every function/procedure also had a small example showing its use (version 5.5 also had examples, but the overall help was much terser).
Windows 1.0, while having a framebuffer-based layout engine, clearly evolved from a pure TUI, and retains elements (title bars, scroll bars, etc.) whose drawing algorithm emulates a TUI within the framebuffer, by just drawing DOS-like lines or grids of box-drawing characters.
In fact, it’s only window content that is arbitrarily graphical in Windows 1.0; if you only have DOS VMM windows open, you’re essentially seeing a TUI, just with extra steps.
Windows 1.0 didn't actually use box-drawing characters for any of this, of course. For example, the System menu in the top left corner of each window was three horizontal lines, just like a hamburger menu of today. Scrollbar buttons did not look like any box-drawing character.
The Maximize button in the top right did look like a box-drawing character, but that's really about the only resemblance I see.
(I developed a DOS GUI for email, Transend PC, in the early '80s that used box-drawing characters, and soon after started writing SQLWindows for Windows 1.0, so pretty familiar with both.)
Right, I'm not saying that the GUI was rendered using the DOS box-drawing characters, i.e. the monitor's built-in codepage 437 text mode; nor were they using a monospace bitmap font of codepage 437.
What I'm saying, is that I'm pretty sure that a lot of the Windows 1.0 GUI was drawn by emulating the "technique" TUIs use to draw box-drawing characters, but on a framebuffer and with a custom (monospace) bitmap font:
1. create any-and-all graphical detail you need on screen, by repurposing the "leftover" parts of the bitmap font you're using to draw control-label text to the screen, adding a set of additional, custom symbol-drawing elements, which are all 'characters' of that bitmap font, and so all stuck being the same size+shape as the label text;
2. create a graphical "monospace text" drawing primitive, that takes as input a pair of buffers representing the text itself, and its hardware-text-mode-alike per-character drawing attributes (i.e. FG + BG color);
3. implement your OS widget library almost entirely in terms of calls into that graphical "monospace text" drawing primitive, passing a static data-section buffer holding the positional+attribute data for your box-drawing character.
(For example, look at the drive icons in the File Manager. Those are clearly just "text" composed of four drawing-element characters from the bitmap font, like this: [-=-]. So the whole drive-chooser area can be drawn with a single text-draw command, passing a string like "A[-=-] C[-=-] C: \WINDOWS".)
-----
You might say "but look at any screenshot of Windows 1.0 — the first thing you'll notice is that the menu-item labels in each window's menu bar are offset by a half-character-width horizontally! And modal dialogs are, in their entirety, offset by a half-character horizontally and vertically! How's that possible?"
Well, the GUI "monospace text"-drawing primitive might be drawing a grid of characters to the framebuffer; but it's not drawing them to an imaginary grid on the framebuffer. It accepts an arbitrary pixel offset for where it should start to draw the block of monospace text.
So, with that in mind, the algorithm for drawing the menu bar very likely has two passes:
1. render some box-drawing characters representing the "background" of the menu (i.e. yellow with a black border)
2. render a layer of regular non-box-drawing characters on top, offset by +4px on the X axis, in "black on transparent", representing the menu-item labels.
It's pretty clear (to me, at least) how a drawing algorithm like this could be a natural evolution and outgrowth of a TUI: first, replace the TUI's backing text buffer with a framebuffer, and the character-plotting calls with draw calls to a "draw monospace text character at emulated-grid position on framebuffer" primitive; then refactoring the draw calls to use a window-local coordinate basis; and only then adding the ability to draw anything other than monospace characters—but gradually, starting only with defined grid-snapped "rich graphical content" regions within windows (sort of like how Windows today has "Direct3D drawing surface" regions); and then going back and gradually enhancing the GUI widgets with little flourishes like draw offsets.
Within the Windows 1.0 codebase, I would bet money that—at least in some previous revision in early development—there was probably a #define flag for whether the "framebuffer driver" was enabled; and that all these windowing-system and common-controls drawing algorithms were written in a "hybrid" way where, instead of one TUI-based and an entirely-distinct framebuffer-based implementation, the framebuffer-based implementation is just #ifdef'ed ornamentation on top of the base TUI implementation.
That's an interesting and creative theory. I have to admire your putting so much detailed thought into it! I mean this sincerely.
Perhaps parts of the Windows 1.0 GUI could have been implemented like that, but the truth is much simpler and more mundane.
The bundled apps (including MS-DOS Executive) and the window decorations generally used the same GDI (Graphics Device Interface) calls as third party apps:
Text was drawn with TextOut() or DrawText().
Bitmaps were copied to the framebuffer with BitBlt() or StretchBlt().
Lines were drawn with MoveTo() and LineTo().
Rectangles were drawn with Rectangle() or RoundRect().
This is not an exhaustive list but should give you the general idea.
All of these functions operated on a "device context" (DC) that you obtained with functions like GetDC() or CreateCompatibleDC(). Some, like BitBlt(), used two DCs for the source and destination.
The MS-DOS Executive drive icons were bitmaps drawn with BitBlt() with TextOut() for the drive letter. The selected drive letter and icon were inverted with InvertRect(), or possibly drawn with the DSTINVERT raster operation code.
These are the same functions that any Windows application could use. The MS-DOS Executive was just another app.
The non-client area of a window (titlebar and such) was drawn with the same GDI calls as the client area. Your app would get a WM_PAINT message to draw the client area and a WM_NCPAINT for the non-client area. Most apps passed WM_NCPAINT through to the default handler DefWindowProc().
I was programming Windows apps starting with the first release of Windows 1.0. I spent a fair amount of time reverse engineering other Windows apps and Windows itself, along with people like Matt Pietrek and Andrew Schulman.
Andrew in particular would have devoted an entire chapter of his book Undocumented Windows to the text-based system you describe. It would be a field day for him!
Also consider memory limitations and programmer time. Remember that Windows 1.0 ran (slowly) on a machine with 256KB memory and two floppy drives.
Since they had to build GDI anyway, it would take more memory to also include this text-based system. It also would have taken more developer time, and provided less testing of the public APIs.
But again I do appreciate your interesting speculation!
Framework was pretty great! My partner and I were working on some similar ideas at the time, and we got together with the Framework team to compare notes and brainstorm.
The thing that stuck in my mind when Robert Carr took us on a tour of their office was the printer lab.
It was a full size conference room with floor-to-ceiling shelves on every wall stacked with printers.
Because back in those days, if you wanted to sell a DOS application that could print anything more than plain text, you had to write your own drivers for every printer you wanted to support. Ouch!
I've sometimes thought that the real innovation in Windows 1.0 was having systemwide printer drivers so every app didn't have to provide their own.
This looks quite cool, but I don't think it's open source? I can't find any source code in the repository, there's just a single file called not_ready_yet.
I once built something like this. There’s a Perl library[1] that lets you control windows, dialogues, etc. My grasp on concepts like asynchronous programming, callbacks, process IO, etc. was nascent at best, so naturally this UI that I built was pretty terrible. I learned a ton though!
It's good to see experiments like this. Maybe this project won't be the next big thing, but maybe it will give someone an idea of what the next big thing should be.
There's also a window claiming to be Windows' CMD. As I can't figure out how to enter text in any of these windows (I'm assuming it's disabled, otherwise CHAOS), I can't tell if it's just a vestigial window or if it's true output.
The topics on the Github repo list both Linux and Windows. The Readme calls itself "aka Monotty Desktop". That makes me think they may have written the code for the Mono runtime, so I'm expecting it's C#.
I had played around with a similar idea in C# some months back, though I was targeting only Windows because it took P-Invoking into some Win32 APIs to get most of the functionality.
I know the difference between PowerShell and CMD. And I know Microsoft ported PowerShell to Linux. You're not paying attention. Command Prompt is also on display: https://dice.netxs.online/cloud/vtm/mde_banner.png