“MGR provides each client window with: curses-like terminal control functions, graphics primitives such as line and circle drawing; facilities for manipulating bitmaps, fonts, icons, and pop-up menus; commands to reshape and position windows; and a message passing facility enabling client programs to rendezvous and exchange messages.”
Essentially, each window was a souped-up ASCII terminal with extra escape sequences that implemented primitive graphics. It competed with X-Windows on low-end workstations during the 1980s.
SVG over HTTP/2 in Wasm with WebGL1 (for compat)
It's a really neat idea. But it was of limited use as not many applications were made for it.
Not sure I'd bother with MGR itself, but instead do something similar.
It's not as compelling now that we don't do much over serial links anymore.
The modern day version of this really is the browser/HTML.
I remember being intrigued at the time by the idea of using this protocol over a MUD (telnet / TCP socket carrying text) connection.
On a modern display, this could far more productively be replaced with any number of tiling WMs with console terminals in the panes (I use stumpwm for this), or---and you don't even need a GUI/WM for these---tmux or screen, or even emacs -nw, all of which give you a much more configurable and robust text-only environment. My normal working environment is a pretty heavily customized tmux (including powerline and full 256-color/unicode support) running on a machine that doesn't even have a GUI installed. (I do cheat a bit by using kmscon---or fbcon in the past---to get colour and font support on Linux vttys.)
Seriously, go watch the video; this thing feels like it's got a foot in the realm of UI roads not taken, like Raskin's zoomable UI. Text mode is maybe just a way to keep their focus on thinking about the interactions without having to worry about drawing a bunch of pretty widgets.
What I see offered here is a way of mapping a huge, non-software project more loosely than a strict classification-based directory/file structure. Specifically I've been slowly fleshing out my own fantasy world with lore and history. I tried the live demo and I can totally see this meshing really well with the part of my brain that likes fiction because it can be a little messy.
I also think the friction of learning a new UI would be a benefit here, reducing the amount of "laptop fatigue" that keeps me from writing less technical stuff after wrestling with a bug all day.
More seriously, I think the 80-column width is just too constraining with more modern languages that aren't remotely as terse as C -- not to mention a limit that aggressively militates against in-line commenting. We also have monitors and fonts with resolutions that make much denser use of the visual display easy to read. I mostly go with 120 chars now (in practice, that means lines still won't wrap when I print source code files) and however many lines fit into the display pane.
Anecdotal, but it does feel like a good visual approximation, and when I start to run close to it I usually pause and figure out whether that code needs attention.
It would be great if we could get some GPU acceleration from the Pi for text rendering in an environment like this, the way iTerm2 does on macOS.
Correct, sorry brain fart.
>That can totally run a desktop environment and browser. Especially something light like XFCE or Enlightenment
Even regular-sized Pis struggle with web browsing in a standard browser (Firefox/Chromium). I know that largely has to do with the web pages themselves being memory hogs, but then again a similarly specced x86 Chromebook doesn't hang while browsing (same web page, same number of tabs). I think it largely has to do with browsers not being optimised for the ARM architecture on Linux.
I had the same issue with XFCE, i.e. browsers hang after some time. As for Enlightenment, third-party apps didn't work last time I tried on a Pi 3 -- has the situation changed now?
1. <Enter> Press Enter.
2. ~ Press Tilde.
3. . Press dot.
This disconnects from the remote host.
If you want more: `man ssh` and look for "ESCAPE CHARACTERS"
But such situations are so rare that I was confused by the remark about how often he uses it.
>Mosh will log the user in via SSH, then start a connection on a UDP port between 60000 and 61000.
Needs at least a couple of ports open in that range, because old sessions tend to block ports until they are cleaned up.
This was quickly followed by the full bit-mapped, windowed Smalltalk/V. I believe all of this was when MS Windows still only had tiled windows without overlap.
In text-mode, characters are "drawn" by simply writing their ASCII code point into VRAM and writing another byte (two 4-bit values really) in an additional "attribute" buffer to set the foreground and background colour.
Programmers can swap out the default character set to provide their own typeface and graphical characters for use as window borders, shadows, etc.
(S)VGA text-mode is as cheap as it gets in terms of computational complexity.
This in addition to the fact that, as others have pointed out, the old-style text mode really doesn't lend itself well to Unicode, to the new (sic!) terminal colour systems of the 1990s, nor to proper boldface+underline+italics. (The latter is one of the reasons that commercial DOS programs from 1-2-3 to Quattro Pro were switching display adapters into graphics mode and rendering their TUIs by drawing graphics, with BGI in the case of Borland, back in the late 1980s and turn of the 1990s.)
With post-VGA blitting capabilities one can of course do things like pre-load glyph bitmaps into off-screen video RAM and blit them into the display, potentially a lot faster than having the main processor write the glyphs directly; but it is still not as simple as writing 16-bit character+attribute pairs, to do what modern users generally regard as the basics of TUIs in these modern Unicode, 24-bit colour, full ECMA-48 attribute support times.
It's the America-centric ASCII stuff that we're past, not plain text itself.
check it out in notcurses-demo -- pretty crazy stuff
That system remapped the function keys to launch batch files or call up sub-menus... Easiest way (at the time) to "customize" peoples' computers for them...
No, went and looked through my little 5 1/4 binder, not sure I still have the menu utility...
But for those of you who were doing this back in 1991, here is a list of the utils I carried on 5 1/4":
Everex - The Edge Utility (BIOS settings app)
Easy Data 286 Setup for 10-12MHz
Laser Digital Phoenix Setup
Wyse Tech - Setup & Test
IBM Diags PC/AT
Utron Inc - Neat 286 Setup Disk
Misc. Novell files
QEMM 386 5.10
Mountain Tape (tape backup)
Norton Utils 5.0
Scan80, Clean80, CPAV
And somewhere in my office I still have the multiple boxes of 3 1/2" disks, both the 10-disk boxes and the bigger 100-disk box, that saw a lot of wear and tear in my backpack - I skateboarded to clients around Santa Cruz, so everything had to fit into my backpack at the time.
Though the integrated help really improved with version 6 (when Turbo Vision was also introduced): you could move the cursor freely inside the help window as if it were an editor, and select and copy text; the help text was much more detailed and better organized; and every function/procedure also had a small example showing its use (version 5.5 also had examples, but the overall help was much terser).
The latter is still in use and actively developed: https://www.farmanager.com/screenshots.php?l=en
In fact, it’s only window content that is arbitrarily graphical in Windows 1.0; if you only have DOS VMM windows open, you’re essentially seeing a TUI, just with extra steps.
The Maximize button in the top right did look like a box-drawing character, but that's really about the only resemblance I see.
(I developed a DOS GUI for email, Transend PC, in the early '80s that used box-drawing characters, and soon after started writing SQLWindows for Windows 1.0, so pretty familiar with both.)
What I'm saying is that I'm pretty sure a lot of the Windows 1.0 GUI was drawn by emulating the "technique" TUIs use to draw box-drawing characters, but on a framebuffer and with a custom (monospace) bitmap font:
1. create any-and-all graphical detail you need on screen, by repurposing the "leftover" parts of the bitmap font you're using to draw control-label text to the screen, adding a set of additional, custom symbol-drawing elements, which are all 'characters' of that bitmap font, and so all stuck being the same size+shape as the label text;
2. create a graphical "monospace text" drawing primitive, that takes as input a pair of buffers representing the text itself, and its hardware-text-mode-alike per-character drawing attributes (i.e. FG + BG color);
3. implement your OS widget library almost entirely in terms of calls into that graphical "monospace text" drawing primitive, passing a static data-section buffer holding the positional+attribute data for your box-drawing character.
(For example, look at the drive icons in the File Manager. Those are clearly just "text" composed of four drawing-element characters from the bitmap font, like this: [-=-]. So the whole drive-chooser area can be drawn with a single text-draw command, passing a string like "A[-=-] C[-=-] C: \WINDOWS".)
You might say "but look at any screenshot of Windows 1.0 — the first thing you'll notice is that the menu-item labels in each window's menu bar are offset by a half-character-width horizontally! And modal dialogs are, in their entirety, offset by a half-character horizontally and vertically! How's that possible?"
Well, the GUI "monospace text"-drawing primitive might be drawing a grid of characters to the framebuffer; but it's not drawing them to an imaginary grid on the framebuffer. It accepts an arbitrary pixel offset for where it should start to draw the block of monospace text.
So, with that in mind, the algorithm for drawing the menu bar very likely has two passes:
1. render some box-drawing characters representing the "background" of the menu (i.e. yellow with a black border)
2. render a layer of regular non-box-drawing characters on top, offset by +4px on the X axis, in "black on transparent", representing the menu-item labels.
It's pretty clear (to me, at least) how a drawing algorithm like this could be a natural evolution and outgrowth of a TUI: first, replace the TUI's backing text buffer with a framebuffer, and the character-plotting calls with draw calls to a "draw monospace text character at emulated-grid position on framebuffer" primitive; then refactor the draw calls to use a window-local coordinate basis; and only then add the ability to draw anything other than monospace characters—but gradually, starting only with defined grid-snapped "rich graphical content" regions within windows (sort of like how Windows today has "Direct3D drawing surface" regions); and then go back and gradually enhance the GUI widgets with little flourishes like draw offsets.
Within the Windows 1.0 codebase, I would bet money that—at least in some previous revision in early development—there was probably a #define flag for whether the "framebuffer driver" was enabled; and that all these windowing-system and common-controls drawing algorithms were written in a "hybrid" way where, instead of one TUI-based and an entirely-distinct framebuffer-based implementation, the framebuffer-based implementation is just #ifdef'ed ornamentation on top of the base TUI implementation.
Perhaps parts of the Windows 1.0 GUI could have been implemented like that, but the truth is much simpler and more mundane.
The bundled apps (including MS-DOS Executive) and the window decorations generally used the same GDI (Graphics Device Interface) calls as third party apps:
Text was drawn with TextOut() or DrawText().
Bitmaps were copied to the framebuffer with BitBlt() or StretchBlt().
Lines were drawn with MoveTo() and LineTo().
Rectangles were drawn with Rectangle() or RoundRect().
This is not an exhaustive list but should give you the general idea.
All of these functions operated on a "device context" (DC) that you obtained with functions like GetDC() or CreateCompatibleDC(). Some, like BitBlt(), used two DCs for the source and destination.
The MS-DOS Executive drive icons were bitmaps drawn with BitBlt(), with TextOut() for the drive letter. The selected drive letter and icon were inverted with InvertRect(), or possibly drawn with the DSTINVERT raster operation code.
These are the same functions that any Windows application could use. The MS-DOS Executive was just another app.
The non-client area of a window (titlebar and such) was drawn with the same GDI calls as the client area. Your app would get a WM_PAINT message to draw the client area and a WM_NCPAINT for the non-client area. Most apps passed WM_NCPAINT through to the default handler DefWindowProc().
I was programming Windows apps starting with the first release of Windows 1.0. I spent a fair amount of time reverse engineering other Windows apps and Windows itself, along with people like Matt Pietrek and Andrew Schulman.
Andrew in particular would have devoted an entire chapter of his book Undocumented Windows to the text-based system you describe. It would be a field day for him!
Also consider memory limitations and programmer time. Remember that Windows 1.0 ran (slowly) on a machine with 256KB memory and two floppy drives.
Since they had to build GDI anyway, it would take more memory to also include this text-based system. It also would have taken more developer time, and provided less testing of the public APIs.
But again I do appreciate your interesting speculation!
I had just started working at Gupta Technologies at the time, and for a short while we seriously considered developing for TopView instead of Windows.
Though my memory is that it goes back a few years earlier to Valdocs on CP/M machines, but I can’t find any pictures on the intarwebs.
The thing that stuck in my mind when Robert Carr took us on a tour of their office was the printer lab.
It was a full size conference room with floor-to-ceiling shelves on every wall stacked with printers.
Because back in those days, if you wanted to sell a DOS application that could print anything more than plain text, you had to write your own drivers for every printer you wanted to support. Ouch!
I've sometimes thought that the real innovation in Windows 1.0 was having systemwide printer drivers so every app didn't have to provide their own.
Does anyone remember Tandy Deskmate?
A lot of firmware systems still do their "windowing" like this.
Edit: Presumably they plan to put the code in a repo, but it's not ready yet.
Must be looking for ANSI instead of VT100.
Mismatched emulator modes reminds me of BBS days.
Well done brother ;)
You are probably confused by PowerShell, which seems to be available for multiple platforms nowadays; see https://aka.ms/pscore6
The topics on the GitHub repo list both Linux and Windows. The README calls it "aka Monotty Desktop". That makes me think they may have written the code for the Mono runtime, so I'm expecting it's C#.
I had played around with a similar idea in C# some months back, though I was targeting only Windows because it took P-Invoking into some Win32 APIs to get most of the functionality.