
The Once and Future Visual Programming Environment - NelsonMinar
http://techcrunch.com/2012/05/27/hey-kids-get-off-my-lawn-the-once-and-future-visual-programming-environment/
======
SoftwareMaven
The biggest problem I have with tools like VisualAge and Xcode is that they
want to replace my tool chain. I want them to tie into my tool chain
comfortably (I really hope Emacsy[1] succeeds; I would contribute time to that
project!).

As I look at Light Table, I really hope they learn a LOT from Emacs:

• Editing is not secondary. In many IDEs, it feels like actually writing code
is an afterthought.

• Mice suck[2].

• Allow me to integrate external tools easily.

• Make _everything_ easily customizable and replaceable.

1\. [http://www.kickstarter.com/projects/568774734/emacsy-an-
embe...](http://www.kickstarter.com/projects/568774734/emacsy-an-embeddable-
emacs)

2\. They are fine for detail-oriented, fine-motor-control work. _Coding_ is
not one of those.

~~~
mkl
_Mice suck[2]._

 _2\. They are fine for detail-oriented, fine-motor-control work. Coding is
not one of those._

As someone who has written tens of thousands of lines of code with _only_ a
mouse, I would modify this statement to "Mice suck when you are primarily
using a keyboard." Editing code _is_ detail-oriented, fine-motor-control work,
and in my experience a mouse is better than a touch screen for editing (with
current interfaces), and worse for typing code in. I have not done any coding
with a keyboard for many years, so I can't directly compare, but I think the
mouse, with my custom typing system, is easier in many situations. The
problems come when you are using both mouse and keyboard.

~~~
SoftwareMaven
I, too, would be interested to hear more as I'm having difficulty picturing
what you mean. My mental picture is "iPad with a mouse", but there is
obviously something missing. I'm dubious of the ability to code at least as
efficiently with no keyboard as with one, but that may be my lack of
imagination.

~~~
mkl
The thing about coding is there's lots of thinking. I can write code
efficiently because after a point, raw typing speed is not very important. I'm
much less efficient at, say, copying out prose that's already written (unless
my prediction engine is familiar with it...).

My software is like an on-screen keyboard with context-sensitive prediction,
more sensible orientation and layout, and lots of shortcuts overloaded on the
buttons, operated by the other mouse buttons and the scroll-wheel. So, for
example, moving a word/page at a time, selecting lines/files, unindenting,
etc. are just done by scrolling in the right place, control-A is middle-
clicking A, and so on (the program just sends the appropriate keystrokes to
make things happen). Selecting and toolbars/menus are fast because you're
already using the mouse, and copy and paste is just select and middle click on
Linux.
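The dispatch idea can be sketched in a few lines of Python (my own toy
illustration; mkl's actual software is not public, and all names here are
hypothetical): map (pointer event, on-screen key) pairs to the keystroke
sequences the program would synthesize.

```python
# Toy illustration of overloading mouse buttons on on-screen keys:
# each (button, key) pair maps to a keystroke sequence the program
# would send. (Hypothetical sketch; not mkl's actual software.)
BINDINGS = {
    ("left", "a"): "a",                   # plain click types the letter
    ("middle", "a"): "ctrl+a",            # middle-click = control-chord
    ("scroll_up", "word"): "ctrl+left",   # scrolling in a zone moves by word
    ("scroll_down", "word"): "ctrl+right",
}

def keystroke_for(button, key):
    # Unbound left-clicks fall back to typing the key itself;
    # other unbound combinations do nothing.
    return BINDINGS.get((button, key), key if button == "left" else None)

print(keystroke_for("middle", "a"))   # ctrl+a
print(keystroke_for("left", "q"))     # q
```

The point of the table is that one physical gesture space (buttons plus
scroll-wheel, positioned over different zones) fans out into many editing
commands without a physical keyboard.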

RSI basically forced me to develop the system (I can't use a physical
keyboard, and no existing software was or is good enough), and I have been
using it for all my typing since 2003. I'm now working on making a commercial
version aimed primarily at touch screens (gestures will replace different
mouse buttons).

~~~
arvidj
Screenshots and more information would be very nice!

~~~
mkl
As far as looks go, it's as you'd expect: a grid of letter and symbol buttons,
and a list of predictions. I don't think it's a good idea to disclose the few
interesting features until I have something on sale (right now they're a
competitive advantage), so sorry, no screenshots :-P

I'm not sure that there's much more to it than what I've already described. I
think what it's got going for it is refinement driven by nine years of
dogfooding day in day out, and that refinement isn't easy to describe as it's
just many tiny little choices.

------
joe_the_user
This struck me:

 _Third, screen real estate matters. The traditional “everything is a file”
approach is wonderfully portable. You can build an environment for working
with files even for a very small display. Heck, you can work with files if all
you have is a line-mode terminal. But flexibly arranged code snippets and
fully interactive graphical debuggers require a lot of pixels._

I'd go even further: programming requires a lot of pixels. You want to look at
as much code as possible, no matter what. Maybe a 10' x 10' screen would be
enough, but otherwise there's no limit to what I want _for myself_ , not for
some GUI eye-candy. As another post says, _Editing is not secondary_ , but for
me it's a matter of never, ever crowding my precious screen real estate.

But by that token, any time someone tells me their application needs lots of
screen real estate, they're admitting they'll waste that real estate doing
something I don't care about.

~~~
lispm
I wonder if that really is the case. Are there any useful experiments
comparing screen sizes?

I would have two questions:

1) How much screen size is ideal?

2) Are there better ways to deal with information display than large screens?

I use laptops a lot. I have used 17" laptops and now I have a small one. I
don't feel that productive on the small screen. That's subjective.

I use an external screen. A 30" screen. Am I more productive? On larger
screens my main field of view is not much larger. I have to move the head to
see things at the edges.

One thing that might be more productive is instant switching between contexts
on the screen - for example, switching between a debugger view and an editing
session easily and fast. On a given screen size, that might be better than
having both the debug and edit views on the same screen, each smaller.
Sometimes programmers add another monitor so they can put these things on
different monitors. But is it really better to look at another monitor,
instead of just switching the context on the one you are currently looking at?

~~~
joe_the_user
You know, I really don't know whether more screen space makes you more
productive. I actually suspect that question matters less than you'd think for
the question at hand (whether a "visual" workspace will be accepted by
programmers).

I was simply saying that I, and I suspect many typical programmers, _want_
this screen space and will become annoyed at programs which stand against it.
I.e., put a widget between me and the data I really want and you'll soon see
me not using your application. Will this preference make me more or less
productive? That's a further question.

On the other hand, I happen to think a second monitor is a great recipe for
disabling neck injuries. But that is my rather particular view based on my
studies of effective and ineffective postures.

------
davesims
On the other hand, is it possible that the slew of visually-driven programming
environments we've seen in the last 30 years or so, running the gamut from
VisualAge to Rational Rose, were novel, innovative, and helpful in many ways,
but ultimately just not as effective as a coder who knows what she's doing
with a lightweight editor and increasingly speedy runtimes on faster and
faster CPUs?

~~~
joe_the_user
That's possible.

But "why" is the sixty-four dollar question.

I think it is reasonable to say these environments gave some programmers the
information _they thought they needed_, but the bare text was more useful.
Text and GUI doohickeys are both pixel-based information, so what makes one
superior to the other?

------
gruseom
This is interesting:

 _the closest anyone has ever gotten to creating a full dynamic environment
for a C-language platform is Alexia Massalin’s Synthesis operating system. If
you are a programmer of any kind, I’ll wager that Alexia’s dissertation will
blow your mind_

There have to be people on HN who know about this. Tell us more!

Wikipedia [1] says that the Synthesis kernel relied heavily on self-modifying
code and that adoption of its techniques was inhibited by its being written in
assembly language. That makes sense; it probably relied on the code=data
aspect of assembly language, something that (Forth aside) you mostly don't get
back until you've left the lower levels well behind.

[1]
[http://en.wikipedia.org/wiki/Synthesis_kernel#Massalin.27s_S...](http://en.wikipedia.org/wiki/Synthesis_kernel#Massalin.27s_Synthesis_kernel)

~~~
spc476
She wrote special assembly-language templates that allowed constant
propagation, constant folding, and code inlining quickly at run time (at a
time when state-of-the-art machines ran at around 33 MHz to 50 MHz).

The actual thesis _is_ mind-blowing (I'm reading it now). Not only does it
create kernel syscalls on the fly, but also specialized interrupt handlers
that handle only the devices in use (if a newly used device requires an
interrupt handler, a new handler combining the existing code and the new code
is generated).

The generalized system call (TRAP #15, since her machine used the Motorola
68030) was fast, but a user-mode program could designate up to fifteen system
calls (per thread) to be called directly via TRAP #0 to TRAP #14. The end
result: a system call was about twice as expensive as a native subroutine
call (whereas in contemporary Unix systems it was more like 40-100 times as
expensive).
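To make the flavor concrete, here's a toy Python analogue of run-time
specialization (my own illustration, nothing like Massalin's assembly
templates): take a value known only at run time and emit fresh code with that
value folded in as a constant.

```python
# Toy sketch of run-time code specialization: fold a value that is
# only known at run time into freshly generated code, so later calls
# pay no lookup cost. (Synthesis did this with assembly templates;
# this Python analogue is purely illustrative.)
def specialize_scale(factor):
    # Generate source with `factor` baked in as a literal constant.
    src = f"def scaled(xs):\n    return [x * {factor} for x in xs]\n"
    namespace = {}
    exec(src, namespace)  # run-time code generation
    return namespace["scaled"]

triple = specialize_scale(3)   # emits: def scaled(xs): return [x * 3 ...]
print(triple([1, 2, 4]))       # [3, 6, 12]
```

Synthesis applied the same move to syscalls and interrupt handlers: generate a
routine specialized to exactly the devices and parameters in use, instead of
dispatching through general-purpose code every time.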

Another reason why the technique of run-time code generation isn't used much
(one mentioned in the thesis) is that instruction caches are hideously
expensive to flush.

But given stuff like TCCBoot (<http://bellard.org/tcc/tccboot.html>) I think
this could be a viable approach to kernels.

(Edit---grammar)

------
twelvechairs
This article has a host of interesting points and references which are great
to read, but the one-upmanship of 'it's probably been done before' (which
seems omnipresent when discussing visual programming) really annoys me,
because the upshot of it is to discourage anyone who is interested in this
kind of work from pursuing it.

My point is that whilst lots of ideas have been tested before, what hasn't
been 'done' is a visual programming environment that really takes off. And
when/if that happens, it will be worth more than 1000 well-written research
papers and PARC prototypes, and its creators will deserve more success than
1000 academics who have tested some idea (or merely seen someone else try) but
never seen it through to widespread adoption.

~~~
silentbicycle
I came away from it with a very different message: This has been tried many
times before, by teams of brilliant people, and _you can still find their
notes_.

Half the problem is that every N years, people bravely try it again...and
start from zero, because they didn't do much research. History is full of
great ideas that didn't take off because computers weren't fast enough,
because touchscreens weren't popular yet, because wireless didn't exist, etc.
They often used different names, but there are many brilliant ideas waiting
for another dance.

~~~
twelvechairs
Yes, the article is supportive of new work (like Light Table), but it's still
very heavy on "we've done it before". The title "Hey Kids, Get Off My Lawn"
and the closing sentence "when that new thing comes along, I'll tell you we
built an early version of it at the Media Lab..." pretty much sum up the
sentiment that irks me.

~~~
vinodkd
If you're interested in visual programming environments and look up prior
efforts at all, you WILL find this sentiment to be true. It's somewhat
depressing, actually. You'll not just find all these cool-sounding
environments from the 80s that seemed light-years ahead of today (or even of
something like Light Table), but you'll also find studies on the difficulties
people faced with them and why they failed.

Text is not an easy thing to dislodge from the programmer's toolset.

My personal opinion keeps switching between "text is basic and elemental, so
it's the most natural way to represent code, and therefore it is preferred"
and "we haven't built tools good enough or representations revolutionary
enough (or scalable enough), and that's why text remains preferred".

I can see how someone who has built a non-textual programming environment can
come to a "been there, done that" kind of attitude. Just take his enthusiasm
for new attempts at the problem as the more pertinent response and ignore the
rest.

~~~
silentbicycle
Another example: APL used a bunch of mnemonic symbols for operators -- for
example, the "reverse" operator is a circle with a vertical line through it.
Mirror image. This made sense back in 1962, with Mad Men-era Selectric
typewriters,* but it never sat comfortably with ASCII. Very dense, expressive
code. Like thinking in kanji after a lifetime of alphabetic hackery.

* <http://en.wikipedia.org/wiki/Selectric>

The idea has many things going for it, and is worth another go in the era of
touchpads and graphic interfaces. APL has always had a die-hard following, and
intriguing ASCII offshoots such as J and K, but it's time for a second chance.

The APLs have many other interesting aspects (they're all about data
parallelism, and a functional programming of a different flavor than Lisp's or
Haskell's), but to my knowledge nobody has sincerely retried the glyph-based
language idea.
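The array-at-a-time style is easy to taste in any language. Here's a loose
pure-Python gloss of a couple of APL one-liners (the mapping is my own
informal paraphrase, with ASCII names standing in for the glyphs):

```python
# A loose pure-Python gloss of APL one-liners (illustrative only):
#   iota 10  ->  the index generator, 1 2 3 ... 10
#   +/ v     ->  sum-reduction over v
#   the circle-with-a-bar glyph mentioned above is "reverse"
v = list(range(1, 11))   # APL: iota 10
print(sum(v))            # APL: +/ iota 10  -> 55
print(v[::-1])           # APL: reverse v   -> [10, 9, ..., 1]
```

What the gloss loses, of course, is exactly the density the glyphs bought:
each APL expression above is two or three characters.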

------
signalsignal
There are plenty of visual programming paradigms in development today.
Xcode's Interface Builder, and Visual Studio's designers as well, handle the
tricky layout and placement before it is converted to text for the compiler.
But the rest of development is done through purely textual coding, as the
visual paradigm becomes exponentially more difficult to manage as the project
"grows". So what happens is a hybrid.

~~~
rbanffy
The tools mentioned have very little to do with GUI design and a lot to do
with being able to explore and modify the whole system - not just the
program - while everything runs.

Do the Squeak by Example tutorial and come back when you finish it.

~~~
signalsignal
No. ;)

Edit: I just got downvoted for replying "No." to a comment on HN. Hah! That's
this site in a nutshell for you.

~~~
rbanffy
I think it's a matter of attitude. Your response didn't add anything to the
discussion, and you seemed to refuse perfectly valid advice (by your message,
you didn't read - or understand - the original article; the advice I gave you,
while given in a more than a little condescending tone - sorry for that -
would have enlightened you), and people around here don't really welcome that.

I regard it as a feature rather than a bug.

~~~
signalsignal
> Do the (thing above) and come back when you finish it.

Do you even know what advice(1) is? That's not advice as I understand it.
That's an order. You're giving orders to a complete stranger on a web forum.
Think about the intelligence behind that for a second.

My comment on this site was referencing the comment by Paul Topping(2) on the
TC link. His opinion, and one I agree with, is that visual languages become
too cumbersome for anything but small projects, and my example is using visual
programming for layout. XIBs are XML in Xcode.

1) <http://dictionary.reference.com/browse/advice>

2) [http://techcrunch.com/2012/05/27/hey-kids-get-off-my-lawn-
th...](http://techcrunch.com/2012/05/27/hey-kids-get-off-my-lawn-the-once-and-
future-visual-programming-
environment/?fb_comment_id=fbc_10150970512922349_23586927_10150971473562349)

------
chubot
I'd be interested to hear, from people who have experience with these older
systems, why they didn't take off.

Maybe it's that they were developed almost pre-web? I was using the web in
1994 on Windows 3.1 but certainly never heard of Smalltalk. I think it must
have been the Microsoft ecosystem, and then later the Linux ecosystem, that
kept "better" tools in the shadows. In 1994 Windows was a fantastic upgrade
from DOS. You could not just have multiple windows, but multitask!

These days it seems like programming platforms like node.js can take off
astonishingly quickly because of faster and faster dissemination on the web.
It's not just more adoption, but more contribution too.

~~~
kristopolous
Execution. Execution. Execution.

Really. Ideas are 1% of the work. Doing the idea is 9%. Doing it well is the
other 90%. Different things matter every time; that's what makes it so hard.

~~~
fractallyte
And the missing 'other' 100% is getting anyone to take notice!

One can have the most wonderful, state-of-the-art, mature product - that
remains largely unknown by the 'majority' because... well, _why_?? I still
can't figure out this last bit...

(Modern Smalltalk is, unfortunately, a perfect example of this.)

~~~
kristopolous
I toss the "take notice" part into the execution pile. That's why it's 90% of
the work. It includes understanding why brands fail regardless of the quality
of the good or service. It includes understanding the pains of your userbase,
the expectations they have going in, and what you have to do in order for them
to adopt and spread your technology in an organic manner.

It is the holistic picture that constitutes success. That's the "other 90%"
and it's a black art.
