A one-handed "keyset" is also available here for those of you who want to experience NLS the "Engelbartian" way:
Unfortunately, the software was written in the early '90s and is 32-bit only, so you have to run it in an emulator to make it work on modern systems.
It's a real shame that NLS didn't catch on. The big issue is that, like Emacs, it has a learning curve that makes it intimidating to use. That made it hard to sell commercially (Lotus Notes took over in the '90s). It's too bad they didn't open-source it back in the '70s; this is a real "could have been" software project.
From what I've read, I gather that the actual keysets developed by Engelbart's team for NLS differed a bit in style from the one in that link, but more interestingly/importantly(?) could also provide haptic feedback by puffing air up from under the keys. (i.e., as a very basic example, one could mouse over a toolbar button and the keyboard might provide a physical cue by bumping the "hotkey" for that given command - immediately teaching the user how to bypass using the mouse next time).
> it's only 32-bit so you have to run it in an emulator to make it work on modern systems.
What about it necessitates using a VM? 32-bit Windows software should run fine on Windows 10. (I'm aware of some exceptions, like when I bought MathCAD 13 shortly before upgrading to Windows Vista 64-bit, grumble grumble...)
And what would the artist, or sociologist, or musician say?
Within this framework, one of the interesting approaches to me here is: once you define which processes/agents are involved in a human activity, how can one augment/externalize it? Software that effectively does this is useful by definition. We might be missing lots of utility here due to a lack of introspection (or just its inherent difficulty).
The most notable productivity/cognition boosts found so far have got to be Google [search engines] and Stack Overflow. Yes, you could use books, but the speed is incomparable. I can accomplish something extremely complicated that would otherwise require lots of thinking with just a few queries, reading solutions and gluing them together. I can get expert advice on anything instantaneously. Interestingly, this augmentation is mostly 'oracular', just giving you the right answers on demand. Though sometimes the rationale behind an answer is well explained and you can somewhat incorporate the general solution model.
Should we place more value on augmentation devices that are less oracular? E.g. maybe a high-level programming language is a non-oracular augmentation device; you still have to correctly encode all the logic, only now you can gloss over some details (sacrificing some performance).
Do we risk atrophy from this practice of getting everything almost ready-made? The productivity increase would suggest you can just use your time to accomplish more; but we all know sometimes "wasting" time is needed to learn or maintain useful skills (e.g. you could always use a calculator, but we choose to learn arithmetic; you could use Wolfram to solve integrals instead of learning calculus; we spend time exercising).
It does sometimes feel like I'm almost a god at the computer, and a naked mortal away from it.
From the perspective of augmenting the intellect via computers and software, I feel quite blessed to be living in the current historical moment. At the same time, we're still far from Doug Engelbart's vision, especially for the general, non-programming public.
I also wonder about the "atrophy" that may come from depending on external augmentation: that we may be losing the ability to do things directly with "raw" intellect, without machines.
Simple word completion in the shell or in your $EDITOR saves some cognitive buffer. Completion in Gmail feels a bit too creepy at the moment. Outsourcing personal responses to a bot seems wrong. What comes next?
Imagine if you and your recipient(s) could toggle a setting "always accept Gmail completion" - in theory the Gmail "agents" could then proceed to have a conversation between themselves.
Getting on the same page with someone on 'big' topics seems like a task particularly suited to AI. Human cognition is messy and slow, and not everyone is adapted to rapid-fire social interaction. This carries real costs. If the computer could just understand what I wanted to say, taking my dumb words and turning them into nice shiny non-offensive prose, then I could maintain a lot more human connection than I do at present.
It is entirely possible that we'll end up with a highly inefficient network of systems interacting with each other by mimicking human interaction as a de facto standard of communication.
Anyone bought this?