

Why Isn’t Programming Futuristic? - ctoth
http://www.ianbicking.org/blog/2013/10/why-isnt-programming-futuristic.html

======
Permit
I'm of the opinion that the way we interact with code is flawed. This author
touched on it when discussing text dumps. I agree with the author that little
time is spent modifying large modules of code (something NoFlo[1] is trying
its hand at).

However, I think these text dumps we work with should be organized
differently. As it stands, we interact with code on a file-by-file basis. But
any programmer knows that the execution path of a given program is rarely
going to be contained within one file, and it's not going to execute from the
top of the file to the bottom. Execution jumps from file to file, and
programmers spend much of their time maintaining a mental model of these
jumps.

I think this is the wrong way to program. I think execution path should be
more easily accessible to a programmer, and they shouldn't have to navigate
through function calls one "Go To Definition" at a time. This topic has become
the fourth-year design project for me and a few peers, and we're experimenting
with different representations of code on a function-by-function basis.

Here's a rough prototype we hacked together a few months ago:
[http://www.youtube.com/watch?v=Bm38U40HL4E](http://www.youtube.com/watch?v=Bm38U40HL4E)

We've since integrated it into Visual Studio so as to retain Intellisense,
syntax highlighting and existing plugin functionality. We're using Roslyn to
gain insights into semantic information about the code.

[1] [http://www.kickstarter.com/projects/noflo/noflo-development-environment](http://www.kickstarter.com/projects/noflo/noflo-development-environment)

~~~
ibdknox
For this specific aspect of the problem, you'll probably find some inspiration
in code bubbles, debugger canvas, and Light Table. I implemented a prototype
of the former in VS before I left MSFT, it later became the basis for the
debugger canvas, and then Light Table happened.

[1]: [http://www.andrewbragdon.com/codebubbles_site.asp](http://www.andrewbragdon.com/codebubbles_site.asp)

[2]: [http://visualstudiogallery.msdn.microsoft.com/4a979842-b9aa-4adf-bfef-83bd428a0acb](http://visualstudiogallery.msdn.microsoft.com/4a979842-b9aa-4adf-bfef-83bd428a0acb)

[3]: [http://www.chris-granger.com/2012/04/12/light-table---a-new-ide-concept/](http://www.chris-granger.com/2012/04/12/light-table---a-new-ide-concept/)

~~~
hackula1
I remember seeing code bubbles when you released it and thinking it looked
much more promising than the flow-based approaches to abstracting away code
files that we usually see. NoFlo and the like tend to ignore the fact that:

1) The approach has been tried countless times before and failed.

2) Programmers don't need code abstracted away. Text is great for telling a
machine what to do. The real pain point is in linking pieces of relevant text,
something that OOP does pretty successfully, but still leaves some room for
improvement.

3) Nobody has 25 feet of screen space to see an entire program on, and
scrolling around over giant flows is a huge PITA.

4) Attempts to solve the screen real estate issues through collapsible nesting
of flow paths usually lead to the same incomprehensible rat's nests of logic
that regular old code does.

5) Business people don't want to code, despite their fantasies of kicking all
the expensive programmers to the curb. Computers usually do exactly what you
tell them to, and talented programmers are different from your typical
business people in that they have a knack for taking high-level requirements
and turning them into highly complex, low-level implementations. Someone who
has no interest in this sort of work will never be any good at it, IMHO.

6) Building flows requires 99% of the implementation to already be complete.
Stringing together prebuilt modules with if/thens is not that difficult or
unreadable in textual code anyway.

~~~
Glench
Regarding (3), I think if you need 25 feet of screen real estate in a
graphical environment, then your code is probably not organized well,
regardless of whether it's textual or graphical.

Regarding (1), I think we need to understand _why_ it's been tried and _why_
it's failed. Knowing that it's been tried before and failed should be enough
to make you skeptical, but it shouldn't be enough to outright reject something
unless it's copying an old idea wholesale. That said, I think NoFlo
specifically has very few new or interesting ideas.

------
tikhonj
For the specific case of traversing objects (well, let's say data types), I
think Haskell's lens library is the ultimate modern example. In fact, I think
it basically covers everything he asked for in that section and more. It's
quite a large library, for better or worse, so it really is complete.

People have called it a "jQuery for data types", and that isn't such a bad
description. It allows you to uniformly interact with nested records (similar
to objects), collections and variants (values chosen from a known set of
possibilities--sum types).

It's systematic, modular and surprisingly powerful. Of course, like many
powerful abstractions--including Haskell itself--it takes a bit of up-front
effort to learn, but it's more than worth it.

~~~
jhickner
Does lens have a succinct way to traverse a nested data structure where each
node is a Maybe?

Lenses + prisms (the nicest way I know of in lens):

    nested ^? foo . _Just . bar . _Just . baz . _Just

vs. the Maybe monad:

    foo >=> bar >=> baz $ nested

~~~
tel
That's the nicest I know as well. You could use `traverse` instead of `_Just`
to make it more generic:

    nested ^? foo . traverse . bar . traverse . baz . traverse

Which is also a bit closer to the nature of `>=>` chaining. I actually really
like the _Just descent since it makes some of the failure modes for this lens
very explicit.

And then it should always be said that the lens has setter properties that the
`>=>` chain does not.
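
Here's a minimal, self-contained sketch of both points -- reading with `^?`
next to the `>=>` chain, plus the setter side that the Kleisli version lacks.
The `Top`/`Foo`/`Bar`/`Baz` types are hypothetical, made up just to make the
chains above runnable:

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Lens
    import Control.Monad ((>=>))

    newtype Baz = Baz Int deriving Show
    data Bar = Bar { _baz :: Maybe Baz } deriving Show
    data Foo = Foo { _bar :: Maybe Bar } deriving Show
    data Top = Top { _foo :: Maybe Foo } deriving Show

    makeLenses ''Bar
    makeLenses ''Foo
    makeLenses ''Top

    nested :: Top
    nested = Top (Just (Foo (Just (Bar (Just (Baz 42))))))

    -- Reading: the prism chain and the Kleisli chain agree.
    viaLens, viaKleisli :: Maybe Baz
    viaLens    = nested ^? foo . _Just . bar . _Just . baz . _Just
    viaKleisli = (_foo >=> _bar >=> _baz) nested

    -- Writing: the same optic chain is also a setter, which >=> is not.
    bumped :: Top
    bumped = nested & foo . _Just . bar . _Just . baz . _Just
                    %~ (\(Baz n) -> Baz (n + 1))

    main :: IO ()
    main = print viaLens >> print viaKleisli >> print bumped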

------
unoti
Anyone who thinks we're using programming technology of the 70's probably
wasn't programming in the 70's. Today it's routine to create systems in a
handful of days that would have taken a larger team a much greater amount of
time to create in the 70's, 80's, 90's, or even 00's. In fact, for years now,
power users have been building certain kinds of applications with spreadsheets
and similar tools -- without programmer intervention -- handling jobs that
were previously done by systems analysts and development teams.

Interfacing different kinds of computer systems together today is orders of
magnitude simpler than it was even in the 90's, much less the 70's.

Sure, we're not all talking to our computers while they program themselves and
driving around in flying cars. But software development technology has come a
long, long way. Users are tons more empowered to do their own work now than
they used to be. And the amount of knowledge and training needed to make a
developer effective today is nothing like it used to be.

~~~
andrewflnr
C and Unix were invented in the 70's. Yes, things have improved a lot in the
meantime, but underneath, it seems like a lot of the really basic paradigms
are the same. We're mostly typing imperative code into a plain text editor,
saved in a disk file in a tree-shaped filesystem, that probably runs on a
Unix-like system. Maybe Windows, but in the space of all possible operating
systems/environments, they're not that far apart.

~~~
spin
I'm starting to think that the reason we keep building tree-like data
structures is because that's how our minds work. We can intuitively grasp
hierarchies of categories (a tree structure), with the occasional exception (a
symlink to another location).

Because we intuitively see the world in this way, we are good at building and
maintaining systems that work in this way.

Anything more sophisticated than that (algebra, relational databases) requires
a lot of study and practice to get good at.

~~~
loup-vaillant
> [Maybe] the reason we keep building tree-like data structures is because
> that's how our minds work.

No. That's how _physical space_ works. When you run a library and you need
your users to be able to find the books, you don't have a choice: any given
book must be on _one_ shelf, in _one_ aisle, in _one_ room. And bam, you have
a tree hierarchy that is three levels deep.

But our minds are better at dealing with tags. Just see how popular they are
in blogs. I bet that a tag based file system would be vastly better at storing
personal data than a hierarchical one.

~~~
im3w1l
I spent some time yesterday thinking about what queries I would like a
filesystem (layout) to be able to answer:

- installed programs
- files installed (except configuration) by a program
- files installed (including configuration) by a program
- libraries
- a user's personal files (including configuration)
- all files except the bare-bones install
- all files
- files on a specific hard drive

My conclusion was that my preferred layout would be:

    /os/                            - bare-bones install
    /home/                          - users' files
    /home/im3w1l/.conf/firefox      - my configuration for firefox
    /programs/                      - installed programs and libraries
    /programs/libfuse/.conf         - configuration for libfuse
    /programs/firefox/.conf/im3w1l  - my configuration for firefox
    C:/programs                     - installed programs on a specific hard drive

And if a file is deleted from one place, it is deleted from all. So

    rm -rf /home/im3w1l/.conf/firefox

would be the same as

    rm -rf /programs/firefox/.conf/im3w1l

But I think a general tagged system has some problems, most notably filename
collisions. Adding a tag to a file could create a collision, for instance.

~~~
loup-vaillant
For me a basic tag-based file system needs several things: a unique ID for
files (a strong cryptographic hash of the contents is probably best), a name
for each file, and a list of tags for each file. And of course, a number of
logical volumes on which the files are "located". Where a directory structure
is actually needed, files could have special names, like "/os/glad/file_42"
(that's a _name_, not a path).

A variation on this theme would be to do away with explicit names altogether
and only use tags. The "name" of the file would just need to be reasonably
unique. That one is probably best.

Now, when you search for a file, you just query for tags. This can also be
done through the shell; it's just that these two commands would have the exact
same effect:

    
    
      cd /foo/bar
      cd /bar/foo
    

As for volumes, you need them to transfer files between your USB key and your
computer.

Oh, thinking of using cryptographic hashes to name files… When you modify the
file, the ID changes as well… that's a _new_ file! By default, the old version
should still be around. Imagine that: Git for the masses.
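
To make this concrete, here's a toy in-memory sketch in Haskell (every type
and function name here is hypothetical; the hash comes from the cryptonite
package). The ID is a content hash, a lookup is a tag query, and editing a
file leaves the old version behind:

    import qualified Data.ByteString.Char8 as BS
    import qualified Data.Map.Strict as Map
    import qualified Data.Set as Set
    import Crypto.Hash (SHA256 (..), hashWith)

    type FileId = String  -- hex rendering of the content hash

    data Entry = Entry
      { name :: String          -- human-readable, need not be unique
      , tags :: Set.Set String  -- ["os", "glad"] instead of /os/glad/...
      , body :: BS.ByteString
      } deriving Show

    type Store = Map.Map FileId Entry

    -- The ID is derived from the content, so editing a file yields a new
    -- ID and the old version is still around: Git for the masses.
    store :: String -> [String] -> BS.ByteString -> Store -> Store
    store n ts bs =
      Map.insert (show (hashWith SHA256 bs)) (Entry n (Set.fromList ts) bs)

    -- A query is a tag set, so "cd /foo/bar" and "cd /bar/foo" coincide.
    query :: [String] -> Store -> [Entry]
    query ts s =
      [ e | e <- Map.elems s, Set.fromList ts `Set.isSubsetOf` tags e ]

    main :: IO ()
    main = do
      let s = store "file_42" ["os", "glad"] (BS.pack "v2")
            $ store "file_42" ["os", "glad"] (BS.pack "v1") Map.empty
      mapM_ (putStrLn . name) (query ["glad", "os"] s)  -- both versions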

------
freshhawk
I think the explanation for this is relatively simple:

The obvious is that the low hanging fruit has been picked, so of course the
rate of change is going to decrease as the problems get larger and harder to
solve.

The less obvious is the massive influx of users for these languages/tools.
That influx brings an unavoidable regression to the mean in ability and
willingness to expend effort.

In the previous era, if you were involved in any way with programming you
were, by definition, interested in pushing the boundaries. Now the majority
are actively resistant to change, as humans are with most things. This is
their job, and changing how things work threatens their livelihood.

So why is the subset of people who are willing and able to do these things not
doing them? I say some of them are doing it, but not enough to reach a
critical mass (at least not the critical mass that would lead to the kinds of
widespread improvements being discussed).

Obviously this is affected by the subculture I'm in, but the most common
reason I've encountered for why these new ideas don't get traction is that
they're considered a waste of time. Learning a new way of solving problems is
time not
spent spamming other social networks trying to increase signups to your social
network. Growth Hacking is "getting shit done!"; using some new technique that
requires learning something new in order to be more productive is considered
Ivory Tower bullshit.

Both Clojure and Haskell are communities I follow where there is real,
interesting, futuristic work being done right now. But Node.js, with its basis
in callbacks, is _way_ more popular -- even though everyone in the 70's agreed
that continuation-passing style was a usability nightmare for programmers and
put a strict limit on the complexity a human using it could handle.
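
To make the complaint concrete, here's a toy sketch (Haskell, and obviously
not Node.js code): the direct version reads flat, while the CPS version nests
one callback per step, and the nesting only deepens as the pipeline grows.

    -- Toy illustration: every step in continuation-passing style adds a
    -- callback the reader has to track.
    addCPS :: Int -> Int -> (Int -> r) -> r
    addCPS x y k = k (x + y)

    direct :: Int
    direct = (1 + 2) + 3

    cps :: Int
    cps = addCPS 1 2 (\s -> addCPS s 3 (\t -> t))

    main :: IO ()
    main = print (direct == cps)  -- True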

Looking at it another way: Programming is futuristic, it's just that most
people are too lazy to bother and just stick with the older stuff that works
the way they are comfortable with, then human nature kicks in and labels
everything they don't do as "stupid waste of time".

~~~
ianbicking
Node.js is just full of Worse Is Better. I think there's a danger for more
pure/forward-looking environments to look down on that, because there really
_is_ some Better in it – I think the work done to get us forward is going to
have to meld Worse Is Better with Actually Better.

------
beat
Programming is futuristic in a 1950s sort of way. There's a classic Astounding
Magazine cover from the era of a guy aggressively boarding a spaceship or
something with a ray gun in his hand and a slide rule in his teeth.

In other words, it's not futuristic because we're not terribly good at
imagining the future.

The guts of modern computing is based on the fundamental logical architecture
of Von Neumann and others, work of the 1940s and 1950s. We may be building
things in software that couldn't have been imagined back then, but we're
building it on structures designed back then.

Look at the iPad and modern tablet computers, in many ways the pinnacle of the
modern computer movement. Then look up Alan Kay's Dynabook, which he invented
(conceptually) in 1968! We're just now catching up with his vision from 45
years ago.

~~~
vezzy-fnord
Tablet computers the pinnacle? They're fairly generic from a conceptual point
of view. Pretty much the straightforward logical evolution of computing, as
size gets smaller and computational power gets higher.

Please don't mention the iPad as a pinnacle of anything. It is not. It is
merely a rehash of old technologies packed into a slick exterior and heavily
marketed. It is more a tool of social status than a utility. Compare this to
the HP Compaq TC1100, a tablet computer released seven years before the first
iPad, whose specifications were double the iPad's.

As for being victims of a constrained Von Neumann mindset, perhaps so. On the
other hand, I haven't really noticed any non-Von Neumann languages that are
all that practical for human purposes. Languages like APL and FP are quite
esoteric compared to the straightforward, if theoretically lacking, ALGOL
model.

~~~
seanmcdirmid
> Please don't mention the iPad as a pinnacle of anything. It is not. It is
> merely a rehash of old technologies packed into a slick exterior and heavily
> marketed. It is more a tool of social status than a utility. Compare this to
> the HP Compaq TC1100, a tablet computer released seven years before the
> first iPad, whose specifications were double the iPad's.

The iPad is more than heavily marketed drivel; it was actually usable compared
to the TC1100, and it basically "realized the dream" of tablet computing when
its predecessors couldn't: specifications are shit, usability is king.
Engineers often don't get that.

> As for being victims of a constrained Von Neumann mindset, perhaps so. On
> the other hand, I haven't really noticed any non-Von Neumann languages that
> are all that practical for human purposes. Languages like APL and FP are
> quite esoteric...

CUDA dominates the HPC world right now; MapReduce dominates big data
processing. Vector machines and data-parallel pipelines have won big in their
own domains. And I'm not even going to talk about relational databases. All of
these are very non-Von Neumann computing models, and all have been very
successful.

~~~
TheLegace
I had the TC1100; I was able to pick it up cheap from a computer liquidator.

I had to figure out ways to undervolt the CPU so I could extend the battery
life. The battery typically lasted maybe 1-2 hours, but by undervolting I got
it to about 3. It was incredibly bulky and heavy, it had no actual touch
capability, and it would heat up quite a bit.

Now, the one good thing about it was its very accurate Wacom digitizer, and
man, could it write notes. Shame there was never a good app for it (OneNote,
maybe).

I think Microsoft took a cue from the way the TC1100 provided a sturdy
keyboard (not a shitty Surface RT one) that you could connect to -- something
I think the Surface is doing now.

So overall, not a usable device. My only question is why someone like Steve
Jobs, who was around at the same time, never thought to release an iPad back
then. Did he think the tech just wasn't ready, and hence not worth spoiling
the user experience?

------
vithlani
This is a sad discussion -- both the essay and the thread here. Nobody has
bothered to look back in history.

"You need to be able to inspect and traverse objects, all objects."

Smalltalk? Lisp Machines?

"There also must be a culture where proper extensions are regularly provided
on objects. Powerful tools are built on powerful paradigms, and enabling a
paradigm isn’t the same thing as actually implementing it across a fully
developed programming environment."

The MOP (metaobject protocol)?

Are we doomed to keep on asking these questions again and again? An IT
curriculum should include at least one unit where students are required to
study the history of these things and write an essay.

~~~
Jtsummers
Lisp and Smalltalk are not on the standard curriculum at most US schools. I
know that GT used to offer an OO course (speaking of abandoning history, can't
view old course webpages past 2008, and new ones are apparently hidden behind
a student only portal) which used Squeak, it was a co-requisite of Software
Engineering (circa 2003, don't know how it's changed since because I can't get
to the pages). I wasn't introduced to lisp until I took a special topics
course that used Common Lisp for AI in grad school. Ok, technically I used
scheme before then in a survey of programming languages course, but 2 weeks
exposure hardly counts. For various reasons I left GT about that time, and
finished up elsewhere. I do know that in 2002 (2001?) they introduced scheme
for CS 1301 (whatever the number changed to), but dropped it for python later
(cheating scandal that year, something like 200 students caught in CS 1 and CS
2).

Speaking of, that PL course was probably the closest to a CS history course
that I ever took. The only other courses that came close were ones where I
deliberately sought out papers on algorithms (AI, graphics) that had been
developed in the 70s/80s relevant to projects I was working on. Other students
just brute-forced their solutions, taking advantage of the much faster
hardware available at the time.

~~~
vithlani
Ah right, you were luckier than I was.

I took an AI course at uni so that I could see some Lisp in action (and learn
AI), but the bastard lecturers (two of them) changed it to ... JAVA! While
still using the Norvig textbook that had the algorithms in Lisp-y pseudocode.
And the lead lecturer was a Vietnamese PhD who couldn't speak enough English
to explain the most basic concepts. Two thirds of the class flunked.

Way to go on your initiative to look for papers and context around your other
courses! I bet the perspective it gave you has helped.

------
krosaen
Love the quote, "Phrasing problems in solvable terms is more effort than
solving them," as to why Prolog didn't take off.

I'm surprised, however, given the talk about ASTs, that the article mentions
nothing about Lisps and the notion of a language being homoiconic. Lots of
really awesome stuff is happening in Clojure that really does feel like the
future (or the awesome past reloaded).

~~~
JackMorgan
Right, any Lisp is basically a readable (and, with macros, dynamically
editable) AST. The author even suggests XML as a possible format, which is
significantly more difficult to read and edit than simple s-expressions.

Check out this article for more about Lisp s-expressions vs. XML:
[http://www.defmacro.org/ramblings/lisp.html](http://www.defmacro.org/ramblings/lisp.html)

------
matthewmacleod
I can't help but feel this is an elaborate troll.

When I first encountered computer programming, we were still using line
numbers. Now, about 20 years later, we've got Python and Ruby and Haskell and
ubiquitous GC and… so many frameworks.

Programming is more futuristic than ever, and the tools available are
continuing to evolve.

------
weavejester
If you discard the idea of working with objects, and instead work with data
structures directly, you get generic querying and traversal for free.

A lot of these points look like they're solved by Clojure, but Clojure is my
hammer, so perhaps it's cognitive bias that these look like Clojure-shaped
nails.

~~~
nickbauman
Right. Actually a lot of the issues Bicking points out were handled by Lisps
years ago. And you don't have to give up on objects per se: objects in Clojure
(with ad-hoc typing) can be treated as maps, and darned near anything is seq-
able in that Lisp dialect! Having s-expressions and macros lets you traverse
and recombine things you could only imagine doing in Python. And don't even
get me started on the power of laziness in Clojure.

Don't get me wrong: Python pays the bills these days. And that language is an
able workhorse. But boy do I miss Clojure. The reason I don't use it at work
is that I work in Android, iOS and Google App Engine most of the time. Which
means Java, Python and Objective-C. The code world has infantilized itself too
far away from Lisp.

~~~
weavejester
Objects in Clojure are an implementation detail of the JVM. Clojure doesn't
really use objects _as_ objects.

Records are, essentially, just maps with some type metadata attached. They're
implemented as classes purely for performance purposes.

------
pshc
Yeah, we're stuck with plaintext currently. This is because every time someone
tries to write a better non-textual language, they fall into the trap of
writing a cute toy or flowcharty bullshit--not a robust tool that scales to
real codebases. And you can point to Smalltalk and Lisp machines all day, but
they have to _win,_ not be mere curiosities that lost gracefully. How do you
write a structured code editor that _wins_ against text? It's really, really
difficult.

I really, really want to fix this--write a real post-plaintext programming
environment. I've tried and failed so far (although at least I figured out the
data structures) and I have my own life to live, issues to deal with, etc. All
hope is not lost though. Rust is shaping up to be the exact foundation the
next generation of programming environments needs. We'll get out of this
plaintext plateau soon.

Re: "Better, more accessible ASTs," this is the aim of Steve Yegge's Grok
Project.

~~~
Ygg2
What happened to Grok? I heard it was going to be demoed after Steve left that
project.

------
Mikeb85
I dunno, I think programming is pretty futuristic. We've got live coding,
futuristic platforms, fast dynamic languages, etc. Of course, the most
futuristic platform is right under people's noses: the browser. It does things
that old Lisp programmers imagined, like live coding, even 3D games, with
sound, and even Kinect-like motion control.

Live coding:
[http://www.mrdoob.com/projects/htmleditor/](http://www.mrdoob.com/projects/htmleditor/)

WebRTC:
[http://cbateman.com/demos/head-coupled/](http://cbateman.com/demos/head-coupled/)
(needs a well-lit room to track your eye position)

3D:
[http://patapom.com/topics/WebGL/cathedral/index.html](http://patapom.com/topics/WebGL/cathedral/index.html)

And of course there's plenty more that can be done. People tout an IDE like
Light Table, which of course uses web technology to accomplish everything
special it does.

Edit - and as ugly as JavaScript is, it's pretty powerful. It has all the
power of Lisp/Scheme, beaten with a C-flavoured ugly stick for a while,
enhanced with advanced runtimes (V8), and transformed by
transpilers/compilers; it's even used to represent bytecode (asm.js) and to
emulate other platforms
([http://fir.sh/projects/jsnes/](http://fir.sh/projects/jsnes/) and
[http://copy.sh/v24/](http://copy.sh/v24/)).

Edit2 - and of course, Smalltalk in the browser:
[http://amber-lang.net/](http://amber-lang.net/) and a Lisp machine-type thing
in the browser:
[http://alex.nisnevich.com/ecmachine/](http://alex.nisnevich.com/ecmachine/)

Edit3 - WTF how did I not see this...
[http://idflood.github.io/ThreeNodes.js/public/index.html](http://idflood.github.io/ThreeNodes.js/public/index.html)

~~~
pjmlp
Live coding => Smalltalk (1972), Lisp Machines (1979)

Fast dynamic languages => Self (1986)

WebRTC => oN-Line System (1968)

It doesn't look very futuristic to me.

~~~
paulhodge
"The future is already here — it's just not very evenly distributed." \-
Gibson.

------
et1337
The author never explains why all these features are desirable.

Safe object traversal? Sure, reflection is a pain, but how would an
improvement in this area bring us into the "future"? Also, I may have
misunderstood, but isn't Lisp's object traversal about as safe as you can get?

More/better ASTs? I think he was trying to say "compilers should have APIs for
third-party tools". Otherwise I don't see why it matters whether the compiler
builds an AST or not.

"Direct manipulation of data" as coding? First of all, what does this even
mean, and how is it better than the current standard of ASCII files? The
author dismisses graphical programming languages as being too heavily focused
on "symbolic manipulation". I'm guessing he's talking about LabVIEW, which is
admittedly horrendous. But there are others. Blender's node-based shader
system[1] fulfills many of these seemingly arbitrary wish-list items. It's
easily accessible via Blender's Python API, it has a clearly visible AST, it's
deterministic and massively parallel...

In short, I'm confused and unconvinced that the future envisioned in this
article is better than what we have now.

[1] [http://www.blender.org/development/release-logs/blender-242/blender-material-nodes/](http://www.blender.org/development/release-logs/blender-242/blender-material-nodes/)

~~~
ianbicking
Safe object traversal: if you can traverse objects you can start to create
search algorithms (which is what Prolog's solver really is) that work across
different domains and objects. I think this is a path towards goal-oriented
programming.
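
As a rough illustration of the kind of traversal I mean, today's generic
programming libraries already get partway there. Here's a tiny sketch with
Haskell's syb package (the Addr/Person/Company types are hypothetical): one
query walks any `Data`-carrying value, with no per-type traversal code.

    {-# LANGUAGE DeriveDataTypeable #-}
    import Data.Generics (Data, everything, mkQ)

    data Addr    = Addr { city :: String }
      deriving (Data, Show)
    data Person  = Person { pname :: String, home :: Addr }
      deriving (Data, Show)
    data Company = Company { staff :: [Person] }
      deriving (Data, Show)

    -- Collect every Addr buried anywhere inside any Data value.
    allAddrs :: Data a => a -> [Addr]
    allAddrs = everything (++) ([] `mkQ` (\a -> [a :: Addr]))

    main :: IO ()
    main = print (allAddrs (Company [Person "Ada" (Addr "London")]))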

ASTs: by representing the program in a more abstracted way, we can start to
build tools that manipulate them in ways other than ASCII editors.

Direct manipulation of data: well, Bret brought this one up, not me. I thought
it was a little odd. While spatial representation of code was about stretching
our ASCII into something else, I chose to interpret this as stretching our
concrete editing tools towards code. In other words, how do you add
abstractions to concrete editing tools. I think the simplest task would be the
one to start with: how do you parametrize something in a concrete data editor?

~~~
lmm
A modern IDE already knows the AST it's working with in intimate detail - it
has to, so that it can refactor it. But it's still displayed as (mostly) plain
text, because that's still the best format for humans to read code (or indeed
anything else) in.

------
rsl7
I'm doing some serious development right now for the first time in 12 years,
and it's way more fun and much easier to work with a geographically dispersed
team than it ever was before. It may not be hand-wavy object magic, but let's
be real about this: we're not a future-oriented culture anymore. We're all
about the present. Lots has been written about this elsewhere.

The secret to creating advanced tools is to minimize the magic. Crazy powerful
tools still require the expert user to have a predictive mental model of what
is really going on so that accurate decisions can be made and valuable
experiences can be accumulated on the learning curve.

I would say, though, that the programming we do is by its very nature
futuristic. So few humans have been able to interact with information the way
programmers today do. The tool churn and huge volume of ideas and trials that
we're going through will, eventually, become a mainstream way of working with
ideas. It may take a long time.

The other thing is: a hammer may seem like a simple and obvious tool, but
there is still a huge gap between an expert and a novice when it comes to
pounding nails. You still need a lot of practice.

------
derekp7
Personally, I prefer just seeing raw text on a screen, without a fancy IDE,
just a plain vi session. If it is my own code base, then I already know what
is where, and can navigate / update very efficiently. If however I am working
on someone else's code base, then it is necessary to study it for some time
before diving in. What can really help in this case is a "code map" -- a
document that gives a tutorial introduction and reference to the code, what is
where, and why, etc. Included in the code map would be both program flow, and
data flow (what data goes in, how it is transformed, and where it ends up). So
what is needed is tools to help maintain that code map document so it doesn't
get out of date.

------
maxk42
Programming is futuristic. It's just that we can always imagine a better
future -- and that's what makes programming so awesome. It enables us to build
that future.

But if you compare Ruby or Clojure to Forth or COBOL, you can't help but see
how far we've come. And there's no end of improvement in sight.

~~~
squidsoup
Alternatively you could compare Ruby to SML and appreciate how far we've
regressed.

~~~
pkroll
Comparing, and... nope. They were both developed within a few years of each
other, and are from different paradigms, so regression seems like a cheap dig,
but not an accurate one.

~~~
squidsoup
No, ML was standardised in the 90s, but developed in the early 70s.

------
nickbauman
When I have a really computationally complex thing I need to write, I first
write it in Clojure (or some other Lisp, but Clojure is closest at hand these
days) so that I understand it fully. It's just simpler: I don't even have to
think about syntax in that language. To me, this is the first requirement of
what Bret Victor is talking about. Python and Ruby pale by comparison in
expressiveness. Because there is no "syntax" that will beat pure data. Then I
go and write it in whatever language I'm "supposed" to write it for work. It's
kinda shameful to me that this is where I'm at.

------
jdmitch
I've wondered if it would take a 'non-programmer' to develop a new
metaphor/representation of programming that is more futuristic. The OP has
some interesting suggestions of characteristics that 'futuristic programming'
would have, including safe object traversal, ubiquitous object extensions, and
code transport, but these are still conceived within the paradigm that we
think of programming today. Maybe programming needs an outsider to help us
start over conceptually?

~~~
Glench
There is a catch-22 here: in order to come up with new ideas, you need to be
fairly well-versed in what kinds of things computers can do, but you can't
have been indoctrinated into assuming programming has to work a certain way.

~~~
Sirex
That is not a catch-22, that's a filter.

~~~
Glench
I'm saying in order to come up with new types of programming, you need to
already know about how programming works currently, which limits your
thinking.

------
andrewcooke
it's slow, but i think it's happening. co-routines are starting to become
popular 35 years after icon had something similar. that could be seen as a
step on the road to integrated search (which needs some kind of idea of
sequences of results).

[incidentally, i wrote a recursive descent combinator lib that worked on
generic sequences, in python. afaik no-one ever used it for anything but
strings. not even for parsing binary data in comms protocols. i extended it to
a regexp engine; it was hopelessly slow. one reason regexps work so well is
that they're so efficiently implemented on simple sequences of bytes. the
overhead of "irregular" sequences is quite something - perhaps jits will help
here (although pypy didn't help me)...]
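
for the curious, the core of such a lib is tiny. here's a sketch in haskell
rather than python (hypothetical, not my library): the parser is polymorphic
in the token type, so the same combinators run over strings, lists of ints,
whatever.

    -- a parser polymorphic in the token type t: the same combinators
    -- work on any sequence [t], not just strings.
    newtype Parser t a = Parser { runParser :: [t] -> Maybe (a, [t]) }

    token :: (t -> Bool) -> Parser t t
    token p = Parser $ \ts -> case ts of
      (t : rest) | p t -> Just (t, rest)
      _                -> Nothing

    instance Functor (Parser t) where
      fmap f (Parser g) = Parser $ fmap (\(a, r) -> (f a, r)) . g

    instance Applicative (Parser t) where
      pure a = Parser $ \ts -> Just (a, ts)
      Parser pf <*> Parser pa = Parser $ \ts -> do
        (f, ts')  <- pf ts
        (a, ts'') <- pa ts'
        Just (f a, ts'')

    main :: IO ()
    main = do
      -- over characters...
      print (runParser ((,) <$> token (== 'a') <*> token (== 'b')) "abc")
      -- ...and over ints, with the same combinators
      print (runParser ((+) <$> token even <*> token odd) [2, 3, 9 :: Int])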

------
pjmlp
Yep, I guess it is a good occasion to recall Victor's presentation for those
who still don't know it.

"The Future of Programming",
[http://vimeo.com/71278954](http://vimeo.com/71278954)

