My uncle was one of the programmers on the Cat project; he still has a working production unit. The Cat is a handsome little machine. There's something elegant about its lack of silos: all of your (textual) data is in one space that you leap around with the aid of dedicated search-forwards/backwards keys, and reprogramming or extending the system itself is likewise a keystroke away. It obviously wasn't a commercial success, but the user was indisputably the boss of the thing.
"but the user was indisputably the boss of the thing."
Thanks for posting: I read Raskin's The Humane Interface some time ago. I wondered then, and still wonder now, whether the fact that the user was the boss was actually part of the problem with sales. Reasonably steep learning curve?
This wasn't an isolated incident of innovation from Canon. Can anyone remember their 486 laptop with a built-in printer/scanner?
The 'Note Jet' had the same form factor as a normal notebook of the era. The printing part was roughly where you would expect the function keys to be; under the hood there was a miniature inkjet cartridge that could be swapped out for a scanning cartridge. The paper path was pretty good, running under the keyboard. I can't remember there being a paper tray.
In those days 'executives' struggled to plug a printer in, so it solved the 'check connections' problems they faced. It also looked very cool, distinctly 'from the future'. Actually the idea was from the past - this was a take on the standalone electronic typewriter. I always felt the idea deserved to live on, not least for travelling salesmen who need a sales order printed out and signed while at a customer's site.
They dropped the model when the Pentium came along, and the idea was never seen again.
"The Note Jet is also a harbinger of future generations of mobile computers that offer many features that executives now enjoy in their offices. If a laser-quality printer can be tucked inside a notebook PC, it probably won't be long until we see built-in cellular telephones with answering machines, wireless pagers and beepers, send and receive facsimile capabilities with a built-in scanner, and even video cameras for remote teleconferencing."
I was reading this from a tiny computer containing a built-in cellular telephone with (remote) answering machine, wireless paging and beeping (text messaging), send and receive facsimile capabilities (e-mail or other messaging, with images) with a built-in scanner (camera), and even two video cameras for remote teleconferencing. This paragraph struck me as remarkably prescient.
However, my mind's eye had chosen not to remember the ugliness of the screen with its somewhat large contrast/brightness controls, the chunkiness of the keyboard, or the sizeable height of the thing.
Seeing this reminds me of meeting my 'athletic' uncle after he returned home, having emigrated twenty or so years ago - he had a paunch and other alcohol-related signs. Sometimes it is preferable to keep one's memories untarnished by unvarnished reality.
I had a Canon Cat. The praise for it is overdone. It was never as simple as people claim, nor really any joy to use. The hardware itself also tended to die suddenly. When mine died I went to a repair place where I happened upon the Canon rep. He said the Cat's motherboard was prone to sudden death and the only solution was to buy a new Cat.
And now you can relive this classic computer in your browser, courtesy of JSMESS [1] (although, at the moment, it doesn't run at 100% speed).
Information on key bindings and how you can e.g. activate the FORTH interpreter can be found here [2], although I couldn't get the FORTH interpreter working (I'm not sure the browser/Emscripten can properly distinguish between the left and right ALT keys).
I've always been a fan of the way the Cat works. So much so that my main work tool is pretty much a copy of the Cat on modern hardware. For a good few years now it has beaten every alternative I've given a go.
PS. Notational Velocity and nvALT are pretty awesome too.
My main work tool is a ThinkPad keyboard with the TrackPoint buttons below the spacebar mapped just like the Leap keys on the Canon Cat. All of my work is inside two text files: one with what I'm actually working on at the moment, and the other with everything else I'll ever need to keep on a computer. I've been through a few text editors over the years, and the one I'm using now is Archy, which was actually made by Raskin's son. It's pretty much a copy of the Canon Cat, rewritten in Python. To the commenter below: I will switch to Emacs one day, but haven't yet, since any good text editor with incremental search will do. Oh, and I also remap every navigation shortcut to some home-row shortcut, for when I don't want to use 'leap' to navigate.
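For anyone who hasn't used a Cat or Archy: 'leap' is essentially incremental search promoted to the primary navigation primitive. A toy sketch of the idea (my own illustration with hypothetical names, not Archy's actual code):

    def leap(text, cursor, pattern, forward=True):
        # Jump to the next/previous occurrence of what's been typed
        # so far; called again on each keystroke while the Leap key
        # is held, so the target narrows as the pattern grows.
        i = text.find(pattern, cursor + 1) if forward else \
            text.rfind(pattern, 0, cursor)
        return i if i != -1 else cursor    # no match: stay put

    doc = "the cat sat on the mat"
    cursor = leap(doc, 0, "mat")           # cursor is now 19

Since all your text lives in one space, a couple of characters are usually enough to land exactly where you want.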
Emacs would be perfect for this, if it weren't so damn complex/complicated. The learning curve is too steep for someone who just wants to write and isn't a programmer.
From one perspective, Jobs and Raskin ended up going down two opposite paths.
Raskin emphasized that "modes" were a major cause of UI problems. Essentially you want a UI that the user can habituate to as strongly as possible, so that all one's attention can be focused on the actual task and all of the administrivia you need to do to tell the computer your intention is handled subconsciously. Like touch typing at a higher level.
The issue with modes is that they break habituation. If performing a given UI gesture does one thing in one mode and another thing in another mode, you can't make that gesture a habit. For example: Cmd+Z is a conventional gesture that many applications interpret as undo. If the keyboard shortcut changed between applications (or worse, if Cmd+Z meant something else in another application/mode), it wouldn't be so habitual and you'd be less productive.
Raskin was very serious about no modes. The Cat, for example, didn't even have an on/off mode. It would go to sleep to save power, but if you started typing it would buffer all your keystrokes and have them in your document by the time the thing woke up. That is, you didn't have to switch from off mode to on mode! And because you were always in a document (that is, there were no separate application modes), you knew that typing some words always did the same thing, so the scheme really worked.
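Something like this toy sketch captures the trick (my own Python illustration of the idea, not the Cat's actual firmware):

    from collections import deque

    class NoOffModeEditor:
        def __init__(self):
            self.document = []        # the one, ever-present document
            self.asleep = True
            self.pending = deque()

        def key_pressed(self, ch):
            if self.asleep:
                self.pending.append(ch)   # buffered, not dropped
            else:
                self.document.append(ch)

        def wake(self):
            # Called when the hardware finishes powering up; any keys
            # typed in the meantime are replayed into the document.
            self.asleep = False
            while self.pending:
                self.document.append(self.pending.popleft())

From the user's point of view there is no off state to get out of: typing always types.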
On the other hand, Apple has really pushed, especially since iOS, on the "App" model. Applications are, of course, giant modes. And the strategy has been to push a separate App (mode) for every single use of the machine. So rather than learning a few powerful gestures and then combining them to do disparate tasks, users need to learn a separate, surface-level gesture complex (App) for each individual task they want to do on their machine.
Which is more efficient or appealing? On what time scale of use?
Did Apple end up this way because Apps are a more natural fit for a consumerist model? "Want to do this task with your machine? Don't bother figuring out how you can do it yourself. There's an App for that!" No Modes vs Buy More Modes?
Raskin's book, The Humane Interface, talks extensively about his UI design philosophy. In addition to explaining the above problems with modes, he discusses how to actually design a computer system with no modes (I believe an elaboration of what he did with the Cat). He also explains other really important UI principles and their ramifications, for example, "The user's data is sacred" (hence undo).
PS:
Raskin's definitions: a gesture is an action that can be done automatically by the body as soon as the brain "gives the command". So Cmd+Z is a gesture, as is typing the word "brain". What constitutes a single gesture will differ between users! A mode is any situation, not at the user's locus of attention, that would cause a gesture to perform a different action than it would in another mode. So "quasimodes" (Raskin's term), where the user holds down a modifier key or holds the mouse button while performing a drag gesture, get around this, since they keep the user's locus of attention on the fact that they are performing the quasimode.
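To make the mode/quasimode distinction concrete, here's a toy dispatch loop (again my own illustrative Python with hypothetical names, not code from Raskin's book):

    class ToyEditor:
        def __init__(self):
            self.insert_mode = False   # hidden state: a classic mode
            self.held_keys = set()     # keys physically held right now

        def on_key_down(self, key):
            self.held_keys.add(key)

        def on_key_up(self, key):
            self.held_keys.discard(key)

        def dispatch(self, key):
            # Mode: the same gesture means different things depending
            # on state the user cannot feel, so habituation breaks.
            if self.insert_mode:
                return "insert " + key
            # Quasimode: behaviour changes only while a key is being
            # physically held, so the user's own muscle tension keeps
            # the state at the locus of attention.
            if "leap" in self.held_keys:
                return "leap to next " + key
            return "command " + key

The difference is that insert_mode persists invisibly between gestures, while held_keys is something the user is continuously, bodily aware of.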
I think both the above definitions are still a bit problematic but Raskin's definitions are better than any other that I've heard. I hope there are more people who study and discuss these deep UI design issues! What do you think about modes?
You are painting a picture in black and white where in reality there are shades of grey.
The dichotomy you are trying to conjure up does exist to an extent, and it is interesting, but in reality no one even knows how to build a modeless mobile interface that is as powerful as an interface that is allowed to use modes.
Deciding whether modes would be a good idea pretty much has to be done on a case-by-case basis, considering the tradeoffs involved. While avoiding modes is in general a good rule of thumb, I have a hard time believing that it is realistic as a hard and fast rule.
(The characterisation of apps as giant modes doesn’t make much if any sense in the context of iOS.)
>>>Did Apple end up this way because Apps are a more natural fit for a consumerist model? "Want to do this task with your machine? Don't bother figuring out how you can do it yourself. There's an App for that!" No Modes vs Buy More Modes?
Bingo! Evidence that this is the case: not many computers ship with compilers onboard any more. OS vendors want you to continue to invest in their platform - Apps just happen to be a way to do that. There is a vested interest in making developer tools as weighty and difficult (for the PITS) as possible .. why learn programming when you can just buy an app? In many ways, we've gone completely backwards in the computer industry - users have to spend a lot more time and effort to maintain their platforms than they should have to .. the same is true of developers, as well. The fuss and nonsense required to just get a window up on the screen is one case in point. We're all scrambling to learn these new - but not necessarily better - technologies, because the OS vendors have a vested interest in capturing the minds of their users.
"There is a vested interest in making developer tools as weighty and difficult (for the PITS) as possible .. why learn programming when you can just buy an app?"
This is nonsense. You appear to be arguing that every computer user should be able to develop software - equivalent to arguing that every driver should be able to build a car.
You know why modern computers don't ship with development tools? Because they are no longer used exclusively by people who are computer programmers. They're mass-market devices now, and one of the requirements of that is that they need to allow users to perform tasks without learning how to develop software - an act that is totally orthogonal to actually performing those tasks.
Coupled with this, those people who are developers and require access to development tools have immediate access to them, thanks to the Internet. You can immediately download SDKs and IDEs for both Windows and MacOS, and every major Linux distribution comes bundled with a compiler. Same for Android, iPhones, etc. - it would be nothing but a waste to provide development tools when the vast majority of users do not require them, and when they are so easily available elsewhere.
I'm also completely unclear what "fuss and nonsense required to just get a window up on the screen" you think exists, given the ~10 lines of code that it takes to do this on any platform.
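For scale, here is a complete "get a window up" program using Python's standard-library Tkinter (one example among many; every mainstream toolkit has an equivalent):

    import tkinter as tk

    root = tk.Tk()          # create a top-level window
    root.title("A window")
    root.mainloop()         # hand control to the event loop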
It's not nonsense - it used to be that you could write a new application for your computer if you wanted to, easily enough, and you'd have the exact same tools as anyone else would have - because the computer shipped with them.
The 'developers' arose as a class of society simply because computer use became decoupled from application development.
>>10 lines of code to make a window
But this doesn't actually do anything. Used to be, you could write a functional application with 10 lines of code - but now, because computing is being run by Fashion Directors (I agree with you on this, btw) where 'trends' are more important than actual use, we get a lot of weight added to the truss normally bearing the load of 'usefulness' to the user.
I don't consider that we've actually made a lot of progress with human-computer interaction over the decades since the Canon Cat was around. In many ways, I think we've been side-tracked in the computer industry. The rise of Windows, for example, set the whole computer industry back 10 years ..
"But this doesn't actually do something. Used to be, you could write a functional application with 10 lines of code -but now, because computing is being run by Fashion Directors (I agree with you on this, btw) where 'trends' are more important than actual use, we get a lot of weight added to the truss normally bearing the load of 'usefulness' to the user."
I've been programming since the mid-80s, when I started out doing BASIC on the Commodore 64. Along the way, I touched on BASIC on the Apple II, Pascal on the Apple IIGS, Mac, and Windows, C and C++ on the Mac, Visual Basic, and various other environments. These days I mostly do Objective-C and Java on Mac, iOS, and Android, with a sprinkling of Python and other languages.
It has never been easier to write a functional application than it is today. The ratio of code to functionality for any given app is lower than it has ever been.
Take the C64 or Apple II with built-in BASIC that so many hold up as a shining example of empowering the user. Now compare it to a modern Mac, which ships with AppleScript, Python, Ruby, Perl, and bridges to let many of these languages be used to create real, fully functional GUI apps. I can build a decently functional text editor in a few dozen lines of Python on a brand new Mac, while back in the day that much code would probably not even get you inline editing, let alone spell check, saving, rich text, printing, versioning.... Furthermore, the "exact same tools" as anyone else would have, Xcode, are available free from Apple and you'll get a prompt to download and install the stuff automatically if you try to use the compiler. Massive quantities of high-quality documentation are also available for free.
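To put a sketch behind that claim, here's a tiny but usable editor using the standard-library Tkinter (an illustration only; the Cocoa bridges I mentioned would look different). Editing, word wrap, undo/redo, and saving, in well under a few dozen lines:

    import tkinter as tk
    from tkinter import filedialog

    root = tk.Tk()
    root.title("Tiny Editor")

    text = tk.Text(root, undo=True, wrap="word")  # undo/redo for free
    text.pack(fill="both", expand=True)

    def save(event=None):
        path = filedialog.asksaveasfilename()     # native save dialog
        if path:
            with open(path, "w") as f:
                f.write(text.get("1.0", "end-1c"))

    root.bind("<Control-s>", save)  # (on a Mac you'd bind <Command-s>)
    root.mainloop()

Spell check, printing, and versioning would take more, of course, but nothing like what they would have cost on a C64.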
The reason most computer users don't program is because they don't want to, not because it's harder than it used to be.
I've been writing code since the '70s, and this statement:
"It has never been easier to write a functional application than it is today. The ratio of code to functionality for any given app is lower than it has ever been."
.. in my personal opinion, is false. The apparency-of-ease is there, but in reality it's just not true. The runway to get something running and useful is about 10x as long as it used to be ..
"It's not nonsense - it used to be that you could write a new application for your computer if you wanted to, easily enough, and you'd have the exact same tools as anyone else would have - because the computer shipped with them."
Yes, it is nonsense. Development tools are easily and freely available to anybody who wants to use them.
"The 'developers' arose as a class of society simply because computer use became decoupled from application development."
Developers exist because development of applications is inherently more complex than use of applications, almost by definition.
"But this doesn't actually do anything"
It displays a window.
"Used to be, you could write a functional application with 10 lines of code"
You still can, and you'll be able to accomplish a lot more than you could 25 years ago.
"but now, because computing is being run by Fashion Directors (I agree with you on this, btw) where 'trends' are more important than actual use, we get a lot of weight added to the truss normally bearing the load of 'usefulness' to the user"
I don't agree with you. Computers are being designed to aid users in accomplishing tasks, which is exactly what they should do. That they are not required to understand the internal workings is a good thing.
I believe that the reason it's so difficult to understand the internal workings is the intention of the OS vendors - and this is not an altruistic purpose! If computers were very easy to develop for, a lot of people who have a vested interest in maintaining their control and secrecy might have to re-educate themselves. What I see happening in the industry is the same thing that happens in class-based societies - as soon as there is an opportunity to draw a line, it is drawn - and then we have two classes of people.
Saying that every driver should be able to build a car is like saying that every computer user should be able to build an operating system from scratch. What (I think) the GP and I are saying is that every driver should be able to do basic repairs and maintenance on their car, and every computer user should be able to write basic programs.
Fascinating. To play devil's advocate: would you agree that sometimes a new mode's optimization for its task can outweigh its cognitive cost? If you try to shoehorn too many disparate tasks into the same interface, usability can suffer.
"This is an outline for a computer designed for the Person In The Street (or, to abbreviate: the PITS); one that will be truly pleasant to use, that will require the user to do nothing that will threaten his or her perverse delight in being able to say: 'I don't know the first thing about computers.'"