Hacker News
The Future of Programming (worrydream.com)
614 points by rpearl on July 30, 2013 | 341 comments

It's a fun talk by Bret and I think he echoes a lot of the murmurings that have been going around the community lately. It's funny that he latched onto some of the same core tenets we've been kicking around, but from a very different angle. I started with gathering data on what makes programming hard; he looked at history to see what made programming different. It's a neat approach and this talk laid a good conceptual foundation for the next step: coming up with a solution.

In my case, my work on Light Table has certainly proven at least one thing: what we have now is very far from where we could be. Programming is broken and I've finally come to an understanding of how we can categorize and systematically address that brokenness. If these ideas interest you, I highly encourage you to come to my StrangeLoop talk. I'll be presenting that next step forward: what a system like this would look like and what it can really do for us.

These are exciting times and I've never been as stoked as I am for what's coming, probably much sooner than people think.

EDIT: Here's the link to the talk https://thestrangeloop.com/sessions/tbd--11

APL. Start there. Evolve a true language from that reference plane. By this I mean with a true domain-specific (meaning: programming) alphabet (symbols) that encapsulate much of what we've learned in the last 60 years. A language allows you to speak (or type), think and describe concepts efficiently.

Programming in APL, for me at least, was like entering into a secondary zone after you were in the zone. The first step is to be in the "I am now focused on programming" zone. Then there's the "I am now in my problem space" zone. This is exactly how it works with APL.

I used the language extensively for probably a decade and nothing has ever approached it in this regard. Instead we are mired in the innards of the machine, micromanaging absolutely everything with incredible verbosity and granularity.

I really feel that for programming/computing to evolve to another level we need to start losing some of the links to the ancient world of programming. There's little difference between what you had to do with a Fortran program and what you do with some of the modern languages in common use. That's not the kind of progress that is going to make a dent.

> Instead we are mired in the innards of the machine micromanaging absolutely everything with incredible verbosity.

This is one area where Haskell really shines. If you want the machine to be able to do what you want without micromanaging how, then you need a way to formally specify what you mean in an unambiguous and verifiable way. Yet it also needs to be flexible enough to cross domain boundaries (pure code, IO, DSLs, etc).

Category theory has been doing exactly that in the math world for decades, and taking advantage of that in the programming world seems like a clear way forward.

The current state of the industry seems like team of medieval masons (programmers) struggling to build a cathedral with no knowledge of physics beyond anecdotes that have been passed down the generations (design patterns), while a crowd of peasants watch from all angles to see if the whole thing will fall down (unit tests).

Sure, you might be able to build something that way, but it's not exactly science, is it?

This is the kind of talk from Haskell folks that I find incredibly annoying. Where's Haskell's Squeak? Where's Haskell's Lisp Machine? It doesn't take much poking around to find out that non-trivial interactive programming like sophisticated games and user interfaces is still very cutting-edge stuff.

Gimme a break.

I'm sorry, but you're upset because folks are passionate about a language that brings new perspective, and maybe is not exactly as useful in some areas as existing solutions? This is exactly the kind of attachment Bret warns about.

I don't think I expressed attachment to any particular solution or approach - I simply pointed out an extremely large aspect of modern software engineering where Haskell's supposed benefits aren't all that clear. So who's attached?

This strikes me as being mad the Tesla doesn't compete in F1.

Are you trying to compare interactive software, one of the dominant forms of programs and widely used by billions of people every day, to formula 1 cars, an engineering niche created solely for a set of artificial racing criteria?

A better analogy would be being mad that the Tesla can't drive on the interstate.

"sophisticated games" pretty specifically implies contemporary 3D gaming, which is not a useful criterion for evaluating a fundamental paradigm shift in programming.

The fact that you think a lisp machine is an "extremely large aspect of modern software engineering" certainly makes me feel that you are expressing an attachment to a particular approach.

I'm not saying it's all there yet, just that it's a way forward.

We have many beautiful cathedrals, don't we? So it is a bona fide fact that you can build something with the current state of the industry. As far as the analogy goes, I would alter it: the peasants aren't simply watching, but poking the masonry with cudgels. Lastly, scientific methods of building aren't necessarily better; while they follow an order rooted in doctrine, I can quickly think of all those scientifically built rockets that exploded on launch. To play devil's advocate, I'm not convinced that a scientific method is better than the current haphazard one we have in place for development.

I would think a major benefit of a scientific method would be the ability to measure performance. Without measurement, how can we progress?

Don't confuse local maxima for maxima. We need people exploring other slopes for the chance of an apex, or at least some higher local maxima.

I think your architecture metaphor is apt.

What makes APL different from, say, Lisp or Haskell? Do you have tutorials to recommend?

It's very hard to find good tutorials on APL because it's not very popular, and most of its implementations are closed-source and not compatible with each other's language extensions. The language is most recognizable for its extreme use of non-standard codepoints. Every function in APL is defined by a single character, but those characters range from . to most of the Greek alphabet (taking similar meanings as in abstract math) to things like ⍋ (sort ascending). Wikipedia has a few fun examples if you just want a very brief taste; you can also read a tutorial from MicroAPL at http://www.microapl.com/apl/tutorial_contents.html

It's mostly good for being able to express mathematical formulas with very little translation from the math world - "executable proofs," I think the quote is - and having matrices of arbitrary dimension as first-class values is unusual if not unique. But for any practical purpose it's to Haskell what Haskell is to Java.

> But for any practical purpose it's to Haskell what Haskell is to Java.

Can you elaborate on this? As I understand, the core strengths of APL are succinct notation, built-in verbs which operate on vectors/matrices, and a requirement to program in a point-free style. All of this can be done in Haskell.
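To make the point-free comparison concrete, here is a rough sketch in plain Python (the `compose` helper and function names are mine, purely for illustration): both APL and Haskell let you build a function like "sum of squares" by composing pieces, without ever naming the data being operated on.

```python
from functools import reduce

# Point-free style: define a new function by composing others,
# never naming the arguments that flow through it.
def compose(*fs):
    return reduce(lambda f, g: lambda x: f(g(x)), fs)

# An APL- or Haskell-ish "sum of squares", assembled from verbs
square_each = lambda v: [x * x for x in v]
sum_of_squares = compose(sum, square_each)

print(sum_of_squares([1, 2, 3]))  # 14
```

In Haskell this would just be `sum . map (^2)`; in APL it's a handful of symbols. The difference under discussion is notation density, not capability.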

A Java programmer unfamiliar with Haskell looks at a Haskell program and shouts, "I can't make even the slightest bit of sense out of this!"

A Haskell programmer unfamiliar with APL looks at an APL program and...

Most Haskell programmers should be familiar with right-to-left point-free style, and should be able to infer that symbols stand in for names.

Of course, understanding the individual symbols is a different matter, but hardly requiring a conceptual leap.

>A Haskell programmer unfamiliar with APL looks at an APL program and...

And says "what's the big deal?". That's exactly the question, what is the big deal. APL isn't scary, I'm not shouting "I can't make sense of this", I am asking "how is this better than haskell in the same way haskell is better than java?".

I'm not really interested in debating the reaction of an imagined Haskell programmer. I was just restating what the grandparent's analogy meant.

Your question is fine, but not what he meant by the analogy.

I'm not imagined, I am real. I know you were restating the analogy, the problem is that the analogy is wrong. I can't find anything about APL that a haskell developer would find new or interesting or frightening or anything like that.


More esoteric organization/concepts for anyone coming from the C family (which is basically everyone), more out-there notation, more deserving of the title "write-only," and less ability to do anything you might want to do with a real computer beyond using it as a calculator. I wouldn't want to do much work with Haskell's GTK bindings, but at least they exist.

That tutorial is deeply unimpressive. It seems very excited about APL having functions, and not directly mapping to machine-level constructs. In 1962 I can imagine that being impressive (if you weren't familiar with Lisp or ALGOL); today, not so much. The one thing that does seem somewhat interesting is the emphasis it puts on "operators" (i.e., second-order functions). This is obviously not new to anyone familiar with functional programming, but I do like the way that tutorial jumps in quite quickly to the practical utility of a few simple second-order functions (reduce, product, map).
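For anyone who hasn't seen the tutorial: the second-order "operators" it builds up are the same reduce/map machinery familiar from any functional language. A minimal Python sketch of the idea (the APL glyphs in the comments are the rough equivalents, from memory):

```python
from functools import reduce

xs = [1, 2, 3, 4]

# APL's / "reduce" operator folds any dyadic function over a vector:
total   = reduce(lambda a, b: a + b, xs)   # APL: +/xs  -> 10
product = reduce(lambda a, b: a * b, xs)   # APL: ×/xs  -> 24

# Mapping a function over every element ("each"):
squares = list(map(lambda x: x * x, xs))   # -> [1, 4, 9, 16]

print(total, product, squares)
```

The practical punch of the tutorial is that these few operators combine with any function, which is exactly what makes the notation so dense.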

Like I said, it's hard to find good ones; I didn't say I had succeeded. I learned a bit of it for a programming language design course, but I never got beyond the basic concepts.

Definitely watch this video http://www.youtube.com/watch?v=a9xAKttWgP4

APL has its own codepage? I have to say, that's a better and simpler way of avoiding success at all costs than Haskell ever found.

Not that I dislike the idea -- on the contrary, I'm inclined to conclude from my excitement over this and Haskell that I dislike success...

Well in the end it doesn't matter if your language is looking for popularity or not. What matters is what you can do with it. You think a language with weird symbols all around can't win? Just look at Perl.

On a related note, if one plans to sell the Language of The Future Of Programming, I swear this thing will meet the same fate as Planner, NLS, Sketchpad, Prolog, Smalltalk and whatnot if it cannot help me with the problems I have to solve just tomorrow.

Try J. Or Kona (open source K). All ascii characters.

Haskell has a rule - to avoid popularity at all costs

You should parse it as "avoid (popularity at all costs)" rather than "(avoid popularity) at all costs".

Well of course it does. Popularity is a side effect.

All the decent tutorials that I know of were in book form. Unless someone's scanned them they're gone. I know mine got destroyed in a flooded basement.

Now, I didn't learn APL from a tutorial, I learned it (in 1976) from a book. This book: http://www.jsoftware.com/papers/APL.htm from 1962.

If my memory hasn't been completely corrupted by background radiation, I've seen papers as early as the mid 1950s about this notation.

APL started out as a notation for expressing computation (this is not precise but good enough). As far as I'm concerned it's sitting at a level of abstraction higher than Haskell (arguably like a library on top of Haskell).

Now, in the theme of this thread, APL was able to achieve all of this given the constraints at the time.

The MCM/70 was a microprocessor-based laptop computer that shipped in 1974 (demonstrated in 1972, some prototypes delivered to customers in 1973) and ran APL using an 80 kHz (that's kilo) 8008 (with a whole 8 bytes of stack) with 2 kBytes (that's kilo) RAM, or maxed out at 8 kB (again, that's kilo) of RAM. This is a small slow machine that still ran APL (and nothing else). IEEE Annals of Computer History has this computer as the earliest commercial, non-kit personal computer (IEEE Annals of the History of Computing, 2003: pg. 62-75). And, I say again, it ran APL exclusively.

Control Data dominated the super computer market in the 70s. The CDC 7600 (designed by Cray himself, 36.4 MHz with 65 kWord (a word was some multiple of 12 bits, probably 60 bits but I'm fuzzy on that) and about 36 MFLOPS according to wikipedia) was normally programmed in FORTRAN. In fact, this would be a classic machine to run FORTRAN. However, the APL implementation available was often able to outperform it, almost always when coded by an engineer (and I mean like civil, mechanical, industrial, etc engineer, not a software engineer) rather than someone specialising in writing fast software.

I wish everyone would think about what these people accomplished given those constraints. And think about this world and think again about Bret Victor's talk.

Thank you. Please consider writing a blog so that this knowledge doesn't disappear.

Were those destroyed tutorials published books?

The ones I remember were all books. At the time, I thought this was one of the best books available: http://www.amazon.com/APL-Interactive-Approach-Leonard-Gilma... -- but I don't know if I'd pay $522 for it... actually I do know, and I wouldn't. The paper covered versions are just fine, and a much better price :-)

EDIT: I just opened the drop down on the paper covered versions. Prices between $34.13 and $1806.23!!! Is that real?!? Wow, I had five or six copies of something that seems to be incredibly valuable. Too late for an insurance claim on that basement flood.

Probably Amazon bot bidding wars: http://www.michaeleisen.org/blog/?p=358

I'd say abstraction and notation. Start here:


Grumble. I understand that you can't really share the talk before the conference, but man, you're teasing.

Share a thought or two with the peons who won't/can't travel :)

Haha it sucks actually - I love talking about this stuff, but I know I really need to save it for my talk so I'm tearing myself to pieces trying to keep quiet.

I guess one thing I will say is that our definition of programming is all over the place right now and that in order for us to get anywhere we need to scale it back to something that simplifies what it means to program. We're bogged down in so much incidental complexity that the definitions I hear from people are convoluted messes that have literally nothing to do with solving problems. That's a bad sign.

My thesis is that given the right definition all of a sudden things magically "just work" and you can use it to start cleaning up the mess. Without giving too much away I'll say that it has to do with focusing on data. :)

I feel the same way. As a programmer, I feel like there are tons of irrelevant details I have to deal with every day that really have nothing to do with the exercise of giving instructions to a computer.

That's what inspired me to work on my [nameless graph language](http://nickretallack.com/visual_language/#/ace0c51e4ee3f9d74...). I thought it would be simpler to express a program by connecting information sinks to information sources, instead of ordering things procedurally. By using a graph, I could create richer expressions than I could in text, which allowed me to remove temporary variables entirely. By making names irrelevant, using UUIDs instead, I no longer had to think about shadowing or namespacing.

Also, by avoiding text and names, I avoid many arguments about "coding style", which I find extremely stupid.

I find that people often argue about programming methodologies that are largely equivalent and interchangeable. For example, for every Object Oriented program, there is an equivalent non-object-oriented program that uses conditional logic in place of inheritance. For every curried program, there is an equivalent un-curried program that explicitly names its function arguments. In fact, it wouldn't even be that hard to write a program to convert from one to the other.
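The curried/un-curried equivalence really is mechanical. A sketch in Python (the `curry2`/`uncurry2` helpers are mine, just to show that the conversion is a one-liner in each direction):

```python
# Converting between curried and un-curried forms is mechanical:
def curry2(f):
    """Turn f(a, b) into f(a)(b)."""
    return lambda a: lambda b: f(a, b)

def uncurry2(f):
    """Turn f(a)(b) back into f(a, b)."""
    return lambda a, b: f(a)(b)

add = lambda a, b: a + b
add_c = curry2(add)

print(add_c(2)(3))            # 5
print(uncurry2(add_c)(2, 3))  # 5

inc = add_c(1)                # partial application falls out for free
print(inc(41))                # 42
```

Which is exactly the point: since a program can translate between the two styles, arguing about them is arguing about surface notation.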

I'm pretty excited about the array of parallel processors in the presentation though. If we had that, with package-on-package memory for each one, message passing would be the obvious way to do everything. Not sure how to apply this to my own language yet, but I'll think of something.

Have you used any of the graph ("dataflow") environments in common usage?

Max/MSP, Pure Data, vvvv, meemoo, Quartz Composer, Touch Designer, LabView, Grasshopper, WebAudioToy, just to name a few.

I have. I should play with them more, since I don't quite get DataFlow yet.

I'm used to JavaScript, so that's what I based my language on. It's really a traditional programming language in disguise, kinda like JavaScript with some Haskell influence. It's nothing like a dataflow language. On that front, perhaps those languages are a lot more avant-garde than mine.

> I'm pretty excited about the array of parallel processors in the presentation though. If we had that, with package-on-package memory for each one, message passing would be the obvious way to do everything.

Chuck Moore, the inventor of Forth, is working on these processors.


It hasn't been an easy road.


>By making names irrelevant, using UUIDs instead, I no longer had to think about shadowing or namespacing.

I've been trying to do something similar with a pet language :) Human names should never touch the compiler, they are annotations on a different layer.

But writing an editor for such a programming environment with better UX and scalability than a modern text-based editor is... an engineering challenge.

What do you think of my graph editor?

It's not perfect, and making lambdas is still a little awkward because I haven't made them resizable. Also, eventually I'd like the computer to automatically arrange and scale the nodes for you, for maximum readability. But I think it's pretty fun to use. It'd probably be even more fun on an iPad.

I'd love to make my IDE as fun to use as DragonBox

I think it's really nice! Usually these flow-chart languages have difficult UI, but this one is pretty easy to mess around in.

It would be good if, while clicking and dragging a new connection line that will replace an old one, the latter's line is dimmed to indicate that it will disappear. Also, those blue nodes need a distinguishing selection color.

It sounds like you're aiming more toward a fun tablet-usable interface, but:

Have you thought about what it would take to write large programs in such an editor? For small fun programs a graph visualization is cool, but larger programs will tend toward a nested indented structure (like existing text) for the sake of visual bandwidth, readability of mathematical expressions, control flow, etc.

There actually is a rather large program in there: http://nickretallack.com/visual_language/#/f2983238d90bd3e0a...

Use the arrow keys to move the box around. I suppose that's still a bit primitive, but I'll make some more involved programs once I fix up the scoping model a bit.

When I first started on this project, I thought at some point I would need to make a "zoom out" feature, because you might end up making a lot of nodes in one function. However, I have never needed this. As soon as you have too much stuff going on, you can just box-select some things and hit the "Join" button to get a new function. The restricted workspace actually forces you to abstract things more, and the lack of syntax allows you to reduce repetition more than would be practical in a textual language.

For example, in textual languages, reducing repetition often requires you to introduce intermediate variables, which could actually make the program's text longer, so people will avoid doing it. However, in my language you get intermediate variables by connecting two sinks to the same source. The addition in program length is hardly noticeable.

Your visual language looks a lot like labview. I'm on a phone on a bus and haven't dug in too far, but have you used labview before?

I'd like to try labview, but doesn't it cost lots of money? I guess I'll sign up for an evaluation copy.

The closest things to my language that I have seen are Kismet and UScript. Mine is different though because it is lazily evaluated and uses recursion as the only method of looping.

Some other things that look superficially similar such as Quartz Composer, ThreeNode, PureData, etc. are actually totally different animals. They are more like circuit boards, and my language is a lot more like JavaScript.

Buy yourself a LEGO Mindstorms NXT kit. $200 USD. The LEGO NXT programming environment is a scaled-down version of LabVIEW.

Nice, but still classical programming. I personally think if statements are the problem. The easiest languages are trivial ones with no branching. Not Turing complete, but they rock when applicable, e.g. HTML, G-code.

That's true. I intended it to be feature-comparable with JavaScript, since I think JavaScript is a pretty cool language, and that is what it is interpreted in.

I don't think it is possible to make a program without conditional branches.

Somebody posted a link below about "Data Driven" design in C++. In it was an example of a pattern where each object has a "dirty" flag, which determines whether it needs processing, but they found that failing branch prediction here took more cycles than simply removing the branch.

My thought was, instead, what if you created two versions of that method -- one to represent when the dirty flag is true, and another to represent when the dirty flag is false -- and then instead of toggling the dirty flag, you could change the jump address for that method to point to the version it should use. If this toggle happens long enough before the processor calls that method, you would remove any possibility of branch prediction failure =].

I have no idea if this is practical or not, but it is amusing to consider programs that modify jump targets instead of using traditional conditional branches.
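The shape of the idea can at least be sketched at a high level in Python, where "patching the jump target" becomes rebinding a method attribute (class and method names are made up; this is dynamic dispatch standing in for real self-modifying code, so it illustrates the structure, not the branch-elimination itself):

```python
# Instead of branching on self.dirty inside process(), rebind
# self.process to whichever variant currently applies.
class Particle:
    def __init__(self):
        self.position = 0
        self.process = self._process_clean   # start on the cheap path

    def _process_clean(self):
        pass                                 # nothing to do

    def _process_dirty(self):
        self.position += 1                   # the "expensive" update
        self.process = self._process_clean   # swap back to the cheap path

    def mark_dirty(self):
        self.process = self._process_dirty   # "patch the jump target"

p = Particle()
p.process()          # clean variant runs, no flag test anywhere
p.mark_dirty()
p.process()          # dirty variant runs exactly once
print(p.position)    # 1
```

On real hardware, as the reply below notes, an indirect call like this has costs of its own, which is probably why the trick is rare outside of JIT compilers.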

In actual compiled code, conditional branches (without branch prediction) are translated to jumps to different targets, which are specified inline with the instructions. Specifying a modifiable target would mean fetching it from a register (or worse, memory) and delaying execution until the fetch is complete (several cycles minimum on a pipelined machine). With branch predication, instructions are predicated on a condition inline and we avoid the costly jump instructions.

Read more: http://en.wikipedia.org/wiki/Branch_predication EDIT: Also: http://en.wikipedia.org/wiki/Branch_predictor

I think we more commonly use the latter, which tries to guess which way the code will branch and load the appropriate jump target. It's actually typically very successful in modern processors.

What if it modified the program text, instead of some external load-able value? Self-modifying code could allow explicit branch prediction.

I think we need to shift to a different model ... like liquid flow. Liquid dynamics are non-linear (i.e. computable), yet a river has no hard IF/ELSE boundaries; it has regions of high pressure etc. which alter its overall behaviour. You can still compute using (e.g. graph) flows, but you don't get the hard edges which are a cause of bugs. Of course it won't be good at building CRUD applications, but it would be natural for a different class of computational tasks (e.g. material stress, neural networks).

(PS angular-js has done all that dirty flag checking if you like that approach)

I don't get it, why are you afraid of scooping yourself?

In the same way that HN frowns upon stealth startups, shouldn't we frown upon 'stealth theories'? If your thoughts are novel and deep enough, revealing something about them will only increase interest in your future talks, since you are definitionally the foremost thinker in your unique worldview. If the idea fails scrutiny in some way, you should want to hear about it now so you can strengthen your position.

What's the downside, outside of using mystery to create artificial hype?

Can't speak for the thread starter, but one downside to prematurely talking about something is confusion. Half-formed thoughts, rambling imprecise language, etc. can create confusion for his audience. The process of editing and preparing for a talk makes it more clear and concise. Maybe he is not ready to clearly communicate his concepts yet.

This reminds me of the "data-oriented design" debate that's been raging in game programming for a while. I assume you've seen e.g. http://harmful.cat-v.org/software/OO_programming/_pdf/Pitfal... (or similar papers)

I actually have a post about data-centricity in games, though from a very different angle. http://www.chris-granger.com/2012/12/11/anatomy-of-a-knockou...

It seems like one large breakthrough in programming could simply be using the features of a language in a manner that best suits the problem. That's what I get from your blog post: design for what makes sense - not for what looks normal during a review. One thing I envy from LISP is that there seem to be few 'best practices' that ultimately make our applications harder to modify.

That should have a nice catchphrase: "solidified into disaster by 'best practices'"?

I've been thinking a lot about such issues too; particularly the pain points I have when ramping up against new systems. What information is missing that leaves me with questions? Can code deliver something thorough enough to be maintainable as a single source of truth?

I think, the differences between reading and writing code are as big as sending and receiving packets. It's difficult to write code that extrapolates the base information in your head driving the decisions. Not only that, but you also have to juggle logic puzzles as you're doing it. And on the other side, you have to learn new domain languages (or type hierarchies), as well as what the program is supposed to do in the first place.

I think the idea of interacting with code as you build it is great, but how can we do that AND fix the information gap at the same time?

> definition of programming is all over the place

I agree.

For example, people do seem to assume that programming must involve, in some way, coding. Do we really need to code in some programming-language to be programming?

Changing security settings in a browser, for example, leads to quite different behaviors of the program. Isn't the user of the browser programming, since they change the behavior of the program?

And this leads to....

> with focusing on data

If we focus on data, and hopefully better abstractions on how to manipulate that data, then wouldn't any user be able to alter a program, because they can adjust "settings" at almost any point within the program, in real time?

Wouldn't this then enable a lot more people to become programmers?

Anyway, just some thoughts.

The difference between setting parameters and programming is obviously that programming allows the creation of new functions. "Coding" is literally the act of translating a more-or-less formally specified program (that is, a set of planned actions) into a particular computer language. However, if being a programmer were only like being a translator, programming wouldn't be too hard for mere mortals. It's the other part - the one that involves methods, logic, and knowledge like the CAP theorem - that they have problems with. The fact is, not everyone can re-discover the bubble sort algorithm like we all here did when we were 8 or so. That's why we are programmers, and that's why they are not (and don't even want to be -- but they are nonetheless nice people and have other qualities, like having money). And these problems don't vanish if you switch from control-flow to data-flow or some bastardized-flow; they just change shape.
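For reference, the algorithm in question is tiny; the point stands that inventing it cold is the hard part, not writing it down. A plain Python version:

```python
# Bubble sort: repeatedly swap adjacent out-of-order elements
# until a full pass makes no swap.
def bubble_sort(xs):
    xs = list(xs)                 # don't mutate the caller's list
    n = len(xs)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                swapped = True
        if not swapped:           # already sorted, stop early
            break
    return xs

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```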

As some people have mentioned in this HN post, it is hard to define what programming means.

A program is the result of the automation of some process (system) that people have thought up (even things that couldn't exist before computers). Programming is the act of taking that process (system) and describing it in a computing device of some kind.

Programming currently requires some kind of mapping from the "real world" to the "computer world". The current mapping is done primarily with source code. So, it currently seems that people who are good at programming are good at mapping from the "real world" into the "computer world" via coding.

You seem to be making the point that some people are just good at programming because they can do things like "re-discover the bubble sort algorithm" or understand the CAP theorem. These are very domain-specific problems.

For people who are able to "re-discover inventory control management" they would do a great job of automating it (programming) if they had an easier way to map that process (system) to a computing device.

The ultimate goal (other than maybe AI) is a 1-to-1 mapping between a "real world" process (system) and a computing device that automates it.

I'm working on a system based on Bret Victor's "Inventing on Principle" talk. I believe that to achieve such a system, you need a way to add semantics to code, as well as to utilize proofs, so that you have enough constraints for your environment to be seamlessly self-aware while at the same time extensible.

I'm curious what you mean by data though. Is it data in the "big data" sense? What I mean is, are we talking about gathering a lot of data on coding? My approach is based on that, anyway: lots of data on code with a number of different analyzers (static and dynamic) that allows for extraction of common idioms and constraints, while allowing for the system to more easily help the user.

Of course, there's no magic and a lot of times I reach dead-ends, and while I'm eager to have enough to show the world, progress has been kinda slow lately.

Looking forward to your talk; be sure to link it here on HN.

I'm still not sure how you'll make LightTable work (well, scale); hopefully it involves a new programming model to make the problem more well defined?

We had some great discussions at LIXD a couple of weeks ago, wish you could have been there. Everyone seems to be reinventing programming these days. We are definitely in competition to some extent. The race is on.

> hopefully it involves a new programming model to make the problem more well defined

That's exactly what I'm up to :)

Great. I'll send you something about my new programming model when it is written up decently, but it has to do with being able to modularly re-execute effectful parts of the program after their code or data dependencies have changed.

@ibdknox. I started programming Clojure in LightTable the other day. How does programming differ from editing text? Can we use gestures to navigate and produce code? I'm working on visual languages which work well in certain domains, but fail when one needs precise inputs. To me the language constructs we use are inherently tied to the production of code (Emacs + LISP). There is a very good reason the guy who built Linux came up with a great versioning system. It is fair to say that Bret does not quite know yet what he is talking about, as he says himself. It's as if something big is going to happen and it is hard to say what exactly it is. I hope LightTable or something like it replaces Emacs & Vim in a couple of years. I think that being able to code in the browser will turn out to be unbelievably important (although it looks not that useful today).

I always kick myself for not having made it to Strange Loop when I lived in St. Louis.

I'll (hopefully) be looking forward to a vid of it on infoq at some point. :)

FWIW, after the talk I'll likely end up putting up a big thing about it on the internet somewhere. :)

This made me feel like you were going to write the content and then randomly post it in the comments section of someone's blog on clocks as garden decorations or something like that.

Either way, looking forward to it!

Can you attend Splash in October? I know academic conferences are probably not your pace, but there will be some good talks and it might be worth your time for networking with some of the academic PL types.

I'll look into it - that's likely to be a very busy time so I can't commit right at the moment.


I very much enjoyed Bret's talk, but the visual programming part of his talk was rather half-baked. I say this as someone who has done visual coding professionally in the past. People have been trying to crack the "drawing programs" nut for decades. It's not a forgotten idea. It's so not forgotten that there is a wikipedia page listing dozens of attempts over the years: http://en.wikipedia.org/wiki/Visual_programming_language.

The reason we still code in text is because visual programming is not a hard problem -- it's a dozen hard problems. Think about all of the tools we use to consume, analyze, or produce textual source code. There are code navigators, searchers, transformers, formatters, highlighters, versioners, change managers, debuggers, compilers, analyzers, generators, and review tools. All of those use cases would need to be fulfilled. Unlike diagrams, text is a convenient serialization and storage format, so you can leverage the Unix philosophy and use the best-of-breed tools you need. We don't have a lingua franca for diagrams like we do for text files.

It's not due to dogma or laziness that we use text to write code. It's because the above list of things are not trivial to get right and making them work on pictures is orders of magnitude harder than making them work with text.

EDIT: Wordsmithing

Modern IDEs are not text editors. They are heavily augmented with syntax highlighting, completion, code folding, refactoring, and squiggly red lines. This requires the IDE to understand your language, effectively parsing the tokens as you type. I would suggest that a lot of the features we talk about have already arrived; they are just not explicit, and they are tremendously complex to implement, simply because programmers are old die-hards who refuse to try different ideas.

Then there is the issue of reasoning about working systems. The job of the IDE ends when the software is built. If you encounter a bug, though, a runtime with enough smarts to let you go in and poke around allows and even encourages experimentation, and improves comprehension.

Finally, there's the issue of code organization. A well-architected piece of software is tidy, because everything is in the right place. While a language-aware IDE can make sure you put the words in the right order, it has no concept of the architecture. A higher-level DSL that is supported by the development environment directly might help. If we can somehow raise the abstraction level of the IDE, certain classes of programming problems could be as easy as filling in a form.

> Modern IDEs are not text editors. They are heavily augmented with syntax highlighting, completion, code-folding, refactoring, squiggly red lines.

How do any of those features make something 'not a text editor'? I'm pretty sure vim is still a text editor, and my vim does all of those things, with the possible exception of refactoring (and I'm not sure I want a program doing my refactoring in the first place).

A plane with autopilot may still be a plane, but the pilot's ability has been heavily augmented.

Incidentally, I spoke to a guy who had been developing Java in Emacs for 12 years. He tried Eclipse a month ago and was won over. Large languages like Java, rightly or wrongly, benefit from having tight tool integration.

Yep. "We need to think hard about . . . consider the past . . . the reigning paradigm . . . in fifty years . . ."


"Visual programming . . ."

Oh, for God's sake . . .

Do you really think that in 20 to 40 years from now, we will still be hacking away in some computer language that only a very few "elite" people know?

Programming needs to be democratized, and I think one of the best ways to do this is by removing coding as a necessary step in the programming process.

Most people can't describe what they want to a human let alone a computer. This is the skill of the programmer. Coding comes second.

If we can teach kids to analyse processes, teaching them a programming language, whatever the paradigm, is trivial.

I have no idea what coding will look like in 40 years (although a very solid percentage of it will be no different to now) but it will be driven as much by fashion as by any perceived need to democratise it.

Of course, the alternative view is that programming is already democratised - I have seen the future and it is VB in Excel spreadsheets. /slits wrists

VB in Excel spreadsheets is functional programming for the masses =)

Languages aren't, per se, elite, coding techniques are. Lots of ordinary non-programmers can successfully code to some degree in C, C++, Python, etc. But there are many advanced techniques that only experienced developers will be able to grapple with. Anyone can buy and use a hammer, but that doesn't mean that using a hammer makes anyone a carpenter.

Do I think that in 40 years or 100 years we will still be coding in a way that is compatible with using vim? Probably. And I don't see how that makes programming less "democratic".

> But there are many advanced techniques that only experienced developers will be able to grapple with.

At one time iterators were considered a technique and a design pattern. Now, they are a part of most languages. They are transparent. They are taken for granted.
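The point about iterators becoming transparent can be shown concretely. Below is a minimal sketch: the iterator protocol driven by hand, the way the design pattern once required, next to the same traversal with the machinery taken for granted.

```python
# The iterator protocol written out explicitly, as the "pattern" once was:
it = iter([1, 2, 3])
total = 0
while True:
    try:
        total += next(it)
    except StopIteration:
        break

# The same traversal today: the protocol is invisible, the language drives it.
total2 = sum(x for x in [1, 2, 3])
```

Both compute the same sum; only the second reads like something we now take for granted.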

Currently, programming takes place within the domain of software development. It is not surprising then that we value advanced techniques within the industry. Just like there are advanced techniques that are used within the domains of electrical engineering, mechanical engineering and biology (just to name a few).

As we get better at our job as programmers, we further make our "advanced techniques" transparent to those that use our systems. Sure, currently these systems are usually very domain-specific. However, there is nothing to say that we cannot build better software development environments which are both non-domain-specific and, at the same time, hide the underlying complexities that require experienced developers.

In my opinion, these development environments would use a type of visual language enabling a lot more people to program. I am biased because this is a problem I've been working on for quite a few years now.

Programming is expressing what you want. It's much easier to express yourself using language than by drawing. Democratizing programming by removing coding is just like democratizing literature by replacing all the words with pictures.

>It's much easier to express yourself using language than by drawing.

I've done a lot of programming at the white board and it involved a lot of drawing. And I suck at drawing. But I was able to get my ideas across to others.

This might also be of interest: http://www.agilemodeling.com/essays/communication.htm.

You might also want to consider that people learn and communicate best in different ways:

* Visual Learning - https://en.wikipedia.org/wiki/Visual_learning
* Auditory Learning - https://en.wikipedia.org/wiki/Auditory_learning
* Kinesthetic Learning - https://en.wikipedia.org/wiki/Kinesthetic_learning

and others.

And visual programming solves this how? People will have to learn how to connect arcane bricks instead of writing arcane text. Programming isn't going to be democratised until we approach natural language interpreters.

> And visual programming solves this how?

Every step in the development process moves those people with the domain expertise/vision/creative process/etc. further from the solution. Removing steps, like coding, brings the solution closer to the domain experts/vision/creative process/etc.

Visual programming makes it a lot easier for people to work collaboratively. For example, those with the domain-expertise can work closer with those that have programming experience in a visual language.

Just a few ways that visual languages could democratize programming.

Visual programming is not going to solve this unless the visualisation matches their domain-specific notation instead of a visual graph that has separate edges for "then" and "else". At that point, why bother with the visual notation instead of just putting it in text they understand equally well?

You make a fair point.

> why bother with the visual notation instead of just putting it in text they understand equally well?

Consistency of implementation - even across different domains.

Learning/Thinking - different people learn and think in different ways (Visual Learning - https://en.wikipedia.org/wiki/Visual_learning, Auditory Learning - https://en.wikipedia.org/wiki/Auditory_learning, Kinesthetic Learning - https://en.wikipedia.org/wiki/Kinesthetic_learning)

Ease of Use - It is not possible to have syntax errors. (Logical errors/misunderstandings of the problem being solved are still possible).

I'm not quite sure what "text they understand" means. Are you talking about natural language interpreters (as you mention above)? That would/will be some cool technology and my feeling is that it is a "next step" in software evolution. Maybe, more likely, the next step is a planned or constructed language interpreter (http://en.wikipedia.org/wiki/Constructed_language). Natural language is so tricky (but maybe not for very domain specific problems).

I mean text that approaches natural language if not natural language. I think something like Inform 7 is far more likely to be adopted by that audience than a visual graph that is just an abstraction of loops and functions. I think the benefits of a textual language matching a domain are much greater than a general-purpose visual programming language.

If it targets, say, a visual learner, I think a graph language won't help unless they are already visualising the program as a graph.

What I think is that you are saying something that people (and not just "people," but some of the most brilliant people in the history of computing) have been saying for 20 to 40 years.

I've always liked Connections: https://en.wikipedia.org/wiki/Connections_(TV_series). What is so great about this show is that James Burke is able to point out how new and amazing ideas come about by connecting a few, seemingly unrelated, concepts to make a new idea.

In my opinion, the ability for someone to take a few observations about the world around them and turn it into something new and amazing is what makes them brilliant.

We are now at a point in history where a lot of people are able to take in a lot of different ideas leading to a lot of new discoveries (one of the reasons why I think new technology is now being created at an almost exponential rate).

You seem to be implying that a particular problem can't be solved because brilliant people in the past have not solved it yet. In my opinion, problems aren't solved yet because someone has not "connected the dots" yet.

Why, what have you got?

> It's so not forgotten that there is a wikipedia page listing dozens of attempts over the years: http://en.wikipedia.org/wiki/Visual_programming_language

A mischaracterization. Software like Reaktor is extremely successful in its domain and widely deployed: http://www.native-instruments.com/en/products/komplete/synth... as is Max/MSP: http://cycling74.com/videos/product/

> We don't have a lingua franca for diagrams like we do for text files.

What is UML, then? If you feel stuck with this then maybe you need to look outside the text = code bubble and get some input on tool design from other sources. I agree that text is a convenient serialization and storage format, but it's a terrible design and analysis medium.

I mean, consider CSound, which is a tool for writing music with computers that has a venerable heritage going back to the 1970s. You have one set of code for defining the characteristics of the sound, and another for defining the characteristics of the notes you play with those sounds: http://www.csounds.com/man/qr/score.htm and http://www.csounds.com/manual/html/index.html

CSound is a moderately good teaching tool, and given its heritage it's an impressive piece of technology. But nobody writes music in Csound except a few computer music professors and the students in their departments that have to do as part of their assignments, and 99% of music composed in CSound is a) dreadful and b) could have been done much faster on either a modular synthesizer or with Max/MSP. Electronic musicians feel the same way about CSound that you as a programmer would feel about an elderly relative that keeps talking about when everything was done with vacuum tubes and toggle switches...you respect it but it seems laughably primitive and has nothing to do with solving actual problems. The very few people that need low-level control on specific hardware platforms work in C or assembler.

I think this is pretty relevant here because one of Bret Victor's more impressive achievements is having written some very impressive operating software for a series of synthesizers from Alesis. I'd be pretty astonished if he even considered CSound for the task.

Far from being stuck in a bubble, I actually spent a couple of years developing code in a UML-driven development environment (as in, I spent my days drawing UML diagrams that automatically turned into executable code). First of all, you cannot write any nontrivial program in UML alone. It is not nearly specific enough. UML is to a working program as a table of contents is to a technical manual. And in case you think I'm extrapolating from one bad experience, I've also used LabVIEW and have seen the parallel difficulties in that language.

Now, I agree that higher levels of abstraction will be needed in the future, but I disagree that visual programming is an obviously superior abstraction. In fact, I believe that people have been earnestly barking up that tree for decades with little success for reasons unrelated to old-fashioned attitudes. There are practical and technical reasons why developing visual programming tools and ecosystems will always be more difficult than developing text-based ones.

Take merging, for example. Merging two versions of a source file is a many-times-over solved problem (not that there aren't new developments to be made). In contrast, merging two versions of a UML diagram is very much a manual process (to the extent that it's possible at all). Now consider creating a change management tool that allows you to branch and merge UML diagrams. This is orders of magnitude harder yet. These are essential and straightforward use cases that are much more complex in a visual medium. Without these basic features, visual programming will not scale well to even medium-size teams.
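To make the asymmetry concrete, here is the text side of it sketched with Python's standard library: computing the line-level diff that merge tools are built on. The file contents are made up for illustration; there is no comparably cheap primitive for diagrams.

```python
# Line-level diffing of two versions of a "source file" via stdlib difflib.
import difflib

base = ["def f():\n", "    return 1\n"]
edited = ["def f():\n", "    # comment\n", "    return 1\n"]

diff = list(difflib.unified_diff(base, edited, "base", "edited"))

# Lines added in the edited version (excluding the "+++" header):
added = [line for line in diff
         if line.startswith("+") and not line.startswith("+++")]
```

A handful of library calls, because text has a canonical linear serialization; the equivalent for an arbitrary diagram is a research problem.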

I can go into more detail about issues with visual programming if I still haven't made my case. And I would love to hear from people with visual programming experience that have contradicting opinions. It's always possible that I missed something.

I appreciate the additional context and totally get where you're coming from. The only nitpick I'd make is this:

> Merging two versions of a source file is many times over a solved problem

Granted - but isn't this also a limiting factor? It's not that I don't think anything should ever be reducible to code form, but why is it that a visual mapping of a complete program isn't a standard everyday tool? I mean, it's all very well that we have syntax highlighters showing keywords, variables and so on, but why is it that when I open a program there isn't a tool to automatically show me loops, arrays and so on?

Loops are one of the simplest programming structures; 90% of loops look like:

  LOOP i FROM bar TO baz:
    total = total + i

I mean, software engineering shouldn't be about syntax, it should be about structure, and yet there don't seem to be many tools around that open up a source file and build branching diagrams and loop modules automatically. Why is that? Why don't we even have structural highlighting rather than syntax highlighting?
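For what it's worth, the raw material for that kind of structural view already exists: the language's own parser exposes the structure, and tools just rarely surface it. A sketch using Python's `ast` module (the snippet being analyzed is invented):

```python
# Extract the loop structures from a piece of source code using the
# language's own parser, rather than regex-level syntax matching.
import ast

source = """
for i in range(3):
    total = total + i
while not done:
    step()
"""

tree = ast.parse(source)
loops = [type(node).__name__ for node in ast.walk(tree)
         if isinstance(node, (ast.For, ast.While))]
```

A structural highlighter could color or diagram those nodes directly; the parse tree is the "branching diagram" the parent asks for, just not rendered.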

Can you elaborate? I see the structure in the indenting. My IDE (Visual Studio) has little lines and + boxes that allow me to collapse and expand code like this. It's useless, because for the most part I do care what "something" is, and the collapsed block is not replaced with a nice pseudocode "frange the kibbleflits" statement. I have tools that can generate diagrams showing me class hierarchies, call stacks, and so on. I rarely (almost never) find them useful. Maybe you have something different in mind?

Visual programming is used in a few widely used ETL tools and it's not that much fun if you know how to program. Max/MSP and Pd are fun, though.

> text = code bubble

Code Bubbles: http://www.andrewbragdon.com/codebubbles_site.asp


> It's because the above list of things are not trivial to get right...

It is a hard problem but solvable. We've been working on it for a few years. The "hardest" part was figuring out how to design away the need for complex interfaces (complex APIs). Once we solved this problem, it was a lot easier to build out a visual object language and associated framework (or lack thereof).

Something that is a bit difficult to figure out in a visual language is the merging of branches.

I would like to get your input on your experiences with visual coding in the past.

In 2040 someone will discover Haskell, shed tears over why C#++.cloud is so widespread in industry instead, and use it to lament the sorry state of the world. Seriously, don't compare what was published in papers 50 years ago with what business uses today; compare it with what is in papers now. There are lots of interesting things going on all the time -- when was the last time you even checked? Probabilistic programming? Applications of category theory to functional programming? Type theory? Software transactional memory?

Woody Allen did this great movie some time ago, "Midnight in Paris", where the main character, living in present times, dreams of moving back in time to the 1920s as the best time for literature ever. When the occasion to really go back appears though, he discovers the writers of the 1920s thought the best literature was done in 1890s, and so he has to go back again, then again, ... This talk is like this, sentiment blinding a sober assessment.

I feel like you missed the fact that he is obviously aware of the market/hardware reasons that caused programming to evolve in this manner, but it doesn't change the fact that this current model of programming may be a false evolutionary pathway.

He is pointing out that experts tend to dismiss a perfectly valid way of exploring technology because it doesn't follow the defined community-accepted standards, built on assumptions about hardware and efficiency.

He's not knocking the current model, and he's not even saying these other models shouldn't have died. He's saying they shouldn't be forgotten, and should often be reexamined in light of new technology which might make a better home for them.

> He's not knocking the current model

Yes, he is actually, repeatedly. For instance, at 9:30 in the video: "There won't be any, like, markup languages, or stylesheet languages, right? That would make no sense".

Industry is screwy. My perspective is always from heavy industry, where we consider upgrading to PLCs that run on Pentium 90 architecture _amazing_. There's good reason for that reactionism; never fix what ain't broke.

But the same philosophy is used in the softer analytics, where using the state of the art really is better. Sure, giant clunky Excel sheets _work_, but we can build far better charting tools. We can run statistics more easily than Minitab. Data can be interactive, searchable, and computable instead of rituals and incantations to lousy proprietary one-off enterprise buzz-word-a-tron programs.

We _could_ be using analytical tools that shape themselves to the data. Instead, we have to convince management that it's _possible_ to analyze and map data easily in these new ways. But once they see how much more powerful these ideas are - how much faster and cheaper they work - lower mgt. is thrilled. And if upper management is profit oriented, they'll like it too.

so you are saying marketing* is the problem, not technology.

*marketing defined as clearly communicating the benefits of the product/technology.

But by this argument, no one is ever allowed to criticize the current state of the world compared with the past, lest he be accused of nostalgia-clouded vision.

You can tear me to shreds for this, but until I can code Haskell in Visual Studio, it's going to be hard for it to gain any traction in $BIGCORP.

F# is a first class citizen in Visual Studio, yet it doesn't appear to have gained much traction in 'enterprise software development'.

I'm honestly not sure why that is, but it is a completely different way of thinking to code in functional languages.

That is $BIGCORP's loss, why would I care?

Because sometimes you don't have the luxury of choosing which programming language jobs are available for.

I don't follow. The notion put forward was "hey people who like haskell, stop improving haskell and start adding visual studio support or else $BIGCORP won't use haskell". Nobody cares if $BIGCORP uses haskell, they are free to cripple themselves if they want. If $BIGCORP actually seriously wanted to use haskell, they could afford to pay someone to add visual studio support for haskell.

Well, there must be a reason why research keeps discovering new ways of computing and programming while industry is stuck with the outdated methods.

I loved that movie, but I don't think it is too relevant here. I mean, you can rediscover and read any literature written in the 20s or the 1890s, which is exactly what our field is not doing.

It's simple: industry and academia are too far apart today.

Look at the languages that come out of academia, then look at the languages that have been invented over the last few decades which have gained traction. The latter list includes a lot of crazy items, things like Perl, PHP, Javascript, Ruby, and Python.

Some of them have their merits, but for the most part they are hugely flawed, in some cases bordering on fundamentally broken. But what do they have in common? They were all invented by people needing to solve immediate problems, and they are all designed to solve practical problems. Interestingly, Python was invented while its author was working for a research institute, but it was a side project.

The point being: languages invented by research organizations tend to be too distanced from real-world needs of everyday programmers to be even remotely practical. Which is why almost all of the new languages invented over the past 3 decades that have become popular have either been created by a single person or been created by industry.

LLVM and Scala come to mind as PL projects born in academia and enjoying wider adoption. Not all researchers are interested in solving the "real problems out there", but some do, and are successful at it.

Like most people, when you write "practical", or "real-world", you actually mean "short term".

I agree with the denotation of your comment, but I disagree with its connotation. We need more long term, and less "practicality".

I just watched most of this talk while a large C++ codebase was compiling, in the midst of trying to find one of many bugs caused by multiple interacting stateful systems, on a product with so much legacy code that it'll be lucky if it's sustainable for another ten years.

Like Bret's other talk, "Inventing on Principle", this talk has affected me deeply. I don't want this anymore. I want to invent the future.

A quote from the footnotes:

"'The most dangerous thought you can have as a creative person is to think you know what you're doing.'

It's possible to misinterpret what I'm saying here. When I talk about not knowing what you're doing, I'm arguing against "expertise", a feeling of mastery that traps you in a particular way of thinking.

But I want to be clear -- I am not advocating ignorance. Instead, I'm suggesting a kind of informed skepticism, a kind of humility.

Ignorance is remaining willfully unaware of the existing base of knowledge in a field, proudly jumping in and stumbling around. This approach is fashionable in certain hacker/maker circles today, and it's poison."

I think much of the motivation for developing new paradigms stems from growing frustration with tool-induced blindness, for lack of a better term. We spend much of our time chasing that seg-fault error instead of engineering the solution to the problem we're trying to solve.

A new programming paradigm allows us to reframe a problem in a different space, much like how changing a matrix's basis changes its apparent complexity, so to speak.

The ultimate goal, I think, is to come up with a paradigm that would map computational problems, without loss of generality, to what our primate brains find intuitive. This lowers our cognitive burden when attempting to describe a solution, and also allows us to see more clearly what the cause of a problem may be.

For example, if you're a game developer and you find rendering problems caused by objects intersecting each other, but you're not sure where it happens, then instead of poring over a text dump of numerical vector coordinates, it'd be better to visualize them. The abnormalities would present themselves clearly, even to a layman's eyes. I suspect this is what Victor is trying to get at.

Imagine, if you will, that you have a graphical representation of your code, and a piece of code that could potentially segfault shows up as an irregularity of some form (different texture, different color, different shape, etc.), so you can spot and fix it right away. The irregularity is not the result of some static error analysis, but instead an emergent property of graphical presentation rules (a mapping from problem space to graphic space). We're good at spatial visualization, so I wonder if it's valid to come up with a programming language that leverages more of our built-in capability in that area. This may seem like wishful thinking or even intractable (perhaps due to a certain perceptual limitation, which we'd have to overcome using more cognitive resources), but I certainly hope we'll get there in our lifetime.
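The rendering example above has a small computational core before any visualization enters the picture. A toy sketch of it, with all names and coordinates invented: find which pairs of axis-aligned boxes intersect, which is the data a visual tool would then surface spatially instead of as a coordinate dump.

```python
# Toy intersection check for axis-aligned bounding boxes.
from itertools import combinations

def overlaps(a, b):
    # Boxes are (xmin, ymin, xmax, ymax); two boxes intersect when their
    # intervals overlap on both axes.
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

boxes = {"crate": (0, 0, 2, 2), "wall": (1, 1, 3, 3), "lamp": (5, 5, 6, 6)}
bad = [(p, q) for p, q in combinations(boxes, 2)
       if overlaps(boxes[p], boxes[q])]
```

Reading `bad` as a list is exactly the "text dump" experience; drawing the same boxes would make the one offending pair jump out immediately.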

> The ultimate goal, I think, is to come up with a paradigm that would map computational problems, without loss of generality, to what our primate brains would find intuitive.

I really agree with this statement.

At the end of the video he warns of the dangers of "dogma".

He looks really nervous and impatient in this talk. He seems afraid that it won't be well received. If so, it is interesting to note that this is what dogma in fact leads to... repression of new ideas, fear of free thinkers and the stagnation of true scientific progress. It means guys like Bret Victor will feel awkward giving a talk that questions the status quo.

"Breakthroughs" do not happen when we are all surrounded by impenetrable walls of dogma. I wonder if we today could even recognize a true breakthrough in computing if we saw one. The only ones I see are from the era Bret is talking about. What happens when those are forgotten?

My friends, there is a simple thing I learned in another discipline outside of computing, where I witnessed people doing what others thought impossible: the power of irreverence. This is where true innovation comes from.

It means not only questioning whether you know what you are doing, but questioning whether others do. That frees you up to work on what you want to work on, even when it is in a different direction than everyone else. That is where innovation comes from: irreverence.

One thing I can't help noticing is that the majority of discussions regarding this talk focus on the examples presented.

I thought it was pretty clear that the talk wasn't about whether constraint-based solvers and visual programming environments were the "future of programming." It was a talk about dogma. Bret points out that none of the examples he mentioned are inherently important to what he was trying to get across: they were just examples. The point he was trying to elucidate was that our collective body of knowledge limits our ability to see new ways of thinking about the problems we face.

It is at least somewhat related to the adage that when you have a hammer, every problem looks like a nail. He's just taking a historical view and using irony to illustrate his point. When computer technology reached a certain level of power, there was a blossoming garden of innovative ideas, because the majority of people didn't yet know what you couldn't do.

What I think he was trying to say, and this is partly coloured by my own beliefs, is that beginner's mind is important. Dogma has a way of narrowing your view of the world. Innovation is slow and incremental but there's also a very real need to be wild and creative as well. There's room for both and we've just been focusing on one rather than the other for the last 40 years.

In this discussion I've been trying to make the point that he's missed the mark even in the idea that developer attitude is the inherent barrier preventing these breakthroughs. I believe he's stealing bases here. At least with respect to visual programming, there is objective evidence (that is easily google-able) that this problem is actively being tackled but with very little success. Active and recently failed projects seem to be glaring counterexamples to his broader point, at least with respect to the visual programming domain.

I suspect that my point about presuming developer attitudes are the biggest problem can be more broadly applied, though I do not have enough experience with constraint-based solvers and his other examples to do more than wildly speculate.

Enough already! Could someone with $100 million give this guy a team of 100 PhDs to create the next software revolution?

This guy is not just a good or great or fabulous computer scientist; this guy is something else entirely. He's a truly creative thinker. He doesn't have a vision, he's got tons of them. Every subject he starts thinking about, he comes up with new ideas.

He shouldn't be doing presentations, he should run a company.

Based on his personal writings, it seems like he prefers to be left alone to work on his ideas. It does not seem like he wants to run a company, or really even work with others.

So maybe just give him the $100M and leave him alone to do whatever he feels like doing. I'm pretty sure something good could come out of it.

A company? Then we would have just one solution to the problems he sees. I think that just throwing a bunch of ideas at all of us is more effective. We can all think independently and come up with more novel ways to solve those problems.

He is a modern day Renaissance man.

Why was this downvoted?

Very good summary of the state of the art in the early 70s.

His analysis of the "API" problem reminds me of some of the ideas Jaron Lanier was floating around about ten years ago. I can't recall the name of it, but it was some sort of biologically inspired handshake mechanism between software 'agents'.

What I think such things require is an understanding of what is lacking in order to search for it; as near as I can tell, that requires some fashion of self-awareness. This, as far as I can conceive, recurses into someone writing code, whether it be Planner or XML. But my vision is cloudy on such matters.

I should note that I think Bret is one of the leading thinkers of his (my) generation, and I have a lot of respect for his ideas.

I think you might be thinking of the RNA metaprotocol or Recursive Network Architecture.

Phenotropics, actually.


I dug around for a bit when I came across the idea but never could figure out where the reification of the idea went.

I'll eyeball RNA, but at first glance it doesn't appear quite the same idea.

An interesting talk, and certainly entertaining, but I think it falls very short. Ultimately it turns into typical "architecture astronaut" navel-gazing. He focuses on the shortcomings of "traditional programming" while at the same time imagining only the positive aspects of untried methods. To be honest, such an approach is frankly childish, and unhelpful. His closing line is a good one but it's also trite, and the advice he seems to give leading up to it (i.e. "let's use all these revolutionary ideas from the '60s and '70s and come up with even more revolutionary ideas") is not practical.

To pick one example: he derides programming via "text dump" and lauds the idea of "direct manipulations of data". However, there are many very strong arguments for using plain-text (read "The Pragmatic Programmer" for some very excellent defenses of such). Moreover, it's not as though binary formats and "direct manipulations" haven't been tried. They've been tried a great many times. And except for specific use cases they've been found to be a horrible way to program with a plethora of failed attempts.

Similarly, he casually mentions a programming language founded on unique principles designed for concurrency; he doesn't name it, but that language is Erlang. The interesting thing about Erlang is that it is a fully fledged language today. It exists, it has a ton of support (because it's used in industry), and it's easy to install and use. It also does what it's advertised to do: excel at concurrency. However, there aren't many practical projects, even ones that are highly concurrency dependent, that use Erlang. And there are projects, such as CouchDB, which are based on Erlang but are moving away from it. Why is that? Is it because the programmers are afraid of changing their conceptions of "what it means to program"? Obviously not; they have already been using Erlang. Rather, it's because languages which are highly optimized for concurrency aren't always the best practical solution, even for problem domains that are highly concurrency bound, because there are a huge number of other practical constraints which can easily be just as important or more so.

Again, here we have an example of someone pushing ideas that seem to have a lot of merit in the abstract but in the real world meet with so much complexity and roadblocks that they prove to be unworkable most of the time.

It's a classic "worse is better" scenario. His insult of the use of markup languages on the web is a perfect example of his wrongheadedness. It took me a while to realize that it was an insult because in reality the use of "text dump" markup languages is one of the key enabling features of the web. It's a big reason why it's been able to become so successful, so widespread, so flexible, and so powerful so quickly. But by the same token, it's filled with plenty of ugliness and inelegance and is quite easy to deride.

It's funny how he mentions unix with some hints of how awesome it is, or will be, but ignores the fact that it's also a "worse is better" sort of system. It's based on a very primitive core idea, everything is a file, and is very heavily reliant on "text dump" based programming and configuration. Unix can be quite easily, and accurately, derided as a heaping pile of text dumps in a simple file system. But that model turns out to be so amazingly flexible and robust that it creates a huge amount of potential, which has been realized today in a unix-heritage OS, linux, that runs on everything from watches to smartphones to servers to routers and so on.

Victor highlights several ideas which he thinks should be at the core of how we advance the state of the art in the practice of programming (e.g. goal based programming, direct manipulations of data, concurrency, etc.) but I would say that those issues are far from the most important in programming today. I'd list things such as development velocity and end-product reliability as being far more important. And the best ways to achieve those things are not even on his list.

Most damningly, he falls into his own trap of being blind to what "programming" can mean. He is stuck in a model where "programming" is the act of translating an idea to a machine representation. But we've known for decades that at best this is a minority of the work necessary to build software. For all of Victor's examples of the willingly blind programmers of the 1960s who saw things like symbolic coding, object-oriented design and so forth as "not programming" and more like clerical work, he makes fundamentally the same error. Today testing, integration, building, refactoring and so on are all hugely fundamental aspects of programming and critically important to end-product quality as well as development velocity. And increasingly tooling is placing such things closer and closer to "the act of programming", and yet Victor himself still seems to be quite blind to the idea of these things as "programming". Though I don't think that will be the view among programmers a few decades down the road.

I see where you are coming from, but I think you're getting mired in some of the details of the talk that perhaps rub you the wrong way and are therefore missing the larger point. Bret in all his talks is saying the same thing: take an honest look at what we call programming and tell me that we've reached the pinnacle of where we can go with it.

Whether or not you like this specific talk or the examples he has chosen, I think you would probably agree there is a lot of room for improvement. Bret is trying to stir the pot and get some people to break out and try radical ideas.

Many of the things he talks about in this presentation have been tried and "failed" but that doesn't mean you never look at them again. Technology and times change in ways that can breathe life into early ideas that didn't pan out initially. Many forget that dynamic typing and garbage collection were once cute ideas but failures in practice.

He doesn't mention things like testing, integration, building, and refactoring because they are all symptoms of the bigger problem that he's been railing against: namely that our programs are so complex we are unable to easily understand them to build decent, reliable software in an efficient way. So we have all these structures in place to help us get through the complexity and fragility of all this stuff we create. Instead we should be focusing on the madness that causes our software to balloon to millions of lines of incomprehensible code.

Please forgive my liberties with science words. :)

The purpose of refactoring is to remove the entropy that builds up in a system, organization, or process as it ages, grows in complexity, and expands to meet demands it wasn't meant to handle. It's not a symptom of a problem; it's acknowledgement that we live in a universe where energy is limited and entropy increases, where anything we humans call a useful system is doomed to someday fall apart—and sooner, not later, if it isn't actively maintained.

Refactoring is fundamental. Failure to refactor is why nations fall to revolutions, why companies get slower, and why industries can be disrupted. More figuratively, a lack of maintenance is also why large stars explode as supernovas and why people die of age. And as a totally non-special case, it's why programs become giant balls of hair if we keep changing stuff and never clean up cruft.

A system where refactoring is not a built-in process is a system that will fail. Even if we automate it or we somehow hide it from the user, refactoring still has to be there.

What if programming consists of only refactoring? Then there is no separate "refactoring step", just programming and neglect. This is what Bret Victor is getting at. It is about finding the right medium to work in.

We have that already, i.e. coding to a test. It sucks because you never seem to grasp the entirety of a program but instead just hack until every flag is green. It doesn't prevent entropy either. The only thing that prevents code entropy is careful and deliberate application of best practices when needed, i.e. a shit ton of extreme effort.

He shows an example in his Inventing on Principle talk where the tests happen in real time as the code is written. It's pretty neat.

Yes, but it does seem like the equations of general relativity need less frequent refactoring than a Java codebase.

Sure, but I think he ends up missing the mark. Ultimately his talk boils down to "let's revolutionize programming!" But as I said that ends up being fairly trite.

As for testing, integration, building, and refactoring I think it's hugely mistaken to view them as "symptoms of a problem". They are tools. And they aren't just tools used to grapple with explosions of complexity, they are tools that very much help us keep complexity in check. To use an analogy, it's not as though these development tools are like a hugely powerful locomotive that can drag whatever sorry piece of crap codebase you have out into the world regardless of its faults. Instead, they are tools that enable and encourage building better codebases, more componentized, more reliable, more understandable, etc.

Continuous integration techniques combined with robust unit and integration testing encourage developers to reduce their dependencies and the complexity of their code down as much as possible. They also help facilitate refactoring which makes reduction of complexity easier. And they actively discourage fragility, either at a component level or at a product/service level.

Without these tools there is a break in the feedback loop. Coders would just do whatever the fuck they wanted and try to smush it all together and then they'd spend the second 90% of development time (having already spent the first) stomping on everything until it builds and runs and sort of works. With more modern development processes coders feel the pain of fragile code because it means their tests fail. They feel the pain of spaghetti dependencies because they break the build too often. And they feel that pain much closer to the point of the act that caused it, so that they can fix it and learn their lesson at a much lower cost and hopefully without as much disruption of the work of others.
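The feedback loop described above is easy to make concrete. Here is a toy sketch (the function and values are hypothetical, not from anyone's real codebase): the test encodes the contract, so a careless change fails fast, close to the commit that caused it.

```python
# A tiny illustration of the testing feedback loop: the assertions
# encode the contract, so a "small" refactor that breaks the fallback
# behavior fails immediately in CI. (Hypothetical example.)

def parse_port(value, default=8080):
    """Return an int port parsed from a string, falling back to a default."""
    try:
        port = int(value)
    except (TypeError, ValueError):
        return default
    return port if 0 < port < 65536 else default

# The kind of checks a CI run would execute on every commit:
assert parse_port("3000") == 3000
assert parse_port("not a number") == 8080
assert parse_port(None) == 8080
assert parse_port("70000") == 8080   # out of range -> default
print("all checks passed")
```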

With any luck these tools will be even better in the future and will make it even easier to produce high quality code closer to the level of irreducible complexity of the project than is possible today.

These aren't the only ways that programming will change for the better but they are examples which I think it's easy for people to write off as not core to the process of programming.

Hrmmm ... you seem to still be tremendously missing the overarching purpose of this speech. Obviously to delve proficiently into every aspect of programming through the ages and why things are the way they are now (e.g. what 'won' out and why) would require a Course and not a talk. You seem to think that the points brought up in this talk diminish the idea that things now DO work. However, what was stressed was avoiding the trap of discontinuing the pursuit of thinking outside the box.

I am also questioning how much of the video you actually paid attention to (note: I am not questioning how much you watched). I say this because your critique is focused on the topics that he covered in the earlier parts of the video and then (LOL) you quickly criticize him for talking about concurrency (in your previous comment)... I clearly remember him talking about programming on massively parallel architectures without the need for sequential logic control via multiplexing using threads and locks. I imagine, though, it is possible you did not critique this point because it is obvious (to everyone) that this is the ultimate direction of computing (coinciding with the end of Moore's law as well).

Ahhh now that’s interesting, we are entering an era where there could possibly be a legitimate use to trying/conceiving new methods of programming? Who would have thought?

Maybe you just realized that you would have looked extremely foolish spending time on critiquing that point? IDK … excuse my ignorance.

Also you constantly argue FOR basic management techniques and methods (as if that countermoves Bret’s arguments) … but you fail to realize that spatial structuring of programs would be a visual management technique in itself that could THEN too have tools developed along with it that would be isomorphic to modern integration, testing management. But I won’t bother delving into that subject as I am much more ignorant on this and more importantly … I would hate to upset you, Master.

Oh and btw (before accusations fly) I am not a Hero worshiper … this is the first time I have ever even heard of Bret Victor. Please don’t gasp too loud.

I definitely get what the OP was trying to say. Bret presented something that sounds a lot like the future even though it definitely isn't. Some of the listed alternatives, like direct data manipulation, visual languages, or non-text languages, have MAJOR deficiencies and stumbling blocks that prevented them from achieving dominance. Though in some cases it basically does boil down to which is cheaper and more familiar.

I think the title of Bret's presentation was meant to be ironic. I think he meant something like this.

If you want to see the future of computing just look at all the things in computing's past that we've "forgotten" or "written off." Maybe we should look at some of those ideas we've dismissed, those ideas that we've decided "have MAJOR deficiencies and stumbling blocks", and write them back in?

The times have changed. Our devices are faster, denser, and cheaper now. Maybe let's go revisit the past and see what we wrote off because our devices then were too slow, too sparse, or too expensive. We shouldn't be so arrogant as to think that we can see clearer or farther than the people who came before.

That's a theme I see in many of Bret's talks. I spend my days thinking about programming education and I can relate. The state of the art in programming education today is not even close to the ideas described in Seymour Papert's Mindstorms, which he wrote in 1980.

LOGO had its failings but at least it was visionary. What are MOOCs doing to push the state of the art, really? Not that it's their job to push the state of the art -- but somebody should be!

This is consistent with other thing's he's written. For example, read A Brief Rant on the Future of Interaction Design (http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...). Not only does he use the same word in his title ("future"), but he makes similar points and relates the future to the past in a similar way.

"And yes, the fruits of this research are still crude, rudimentary, and sometimes kind of dubious. But look —

In 1968 — three years before the invention of the microprocessor — Alan Kay stumbled across Don Bitzer's early flat-panel display. Its resolution was 16 pixels by 16 pixels — an impressive improvement over their earlier 4 pixel by 4 pixel display.

Alan saw those 256 glowing orange squares, and he went home, and he picked up a pen, and he drew a picture of a goddamn iPad.

[picture of a device sketch that looks essentially identical to an iPad]

And then he chased that carrot through decades of groundbreaking research, much of which is responsible for the hardware and software that you're currently reading this with.

That's the kind of ambitious, long-range vision I'm talking about. Pictures Under Glass is old news. Let's start using our hands."

Okay. Single fingers are an amazing input device because of dexterity. Flat phones are amazing because they fit in my pockets. Text search is amazing because with 26 symbols I can query a very significant portion of world knowledge (I can't search for, say, a painting that looks like a Van Gogh by some other painter, so there are limits, obviously).

Maybe it is just a tone thing. Alan Kay did something notable - he drew the iPad, he didn't run around saying "somebody should invent something based on this thing I saw".

Flat works, and so do fingers. If you are going to denigrate design based on that, well, let's see the alternative that is superior. I'd love to live through a Xerox kind of revolution again.

Go watch these three talks of his: Inventing on Principle, Dynamic Drawings, and this keynote on YouTube (http://m.youtube.com/watch?v=-QJytzcd7Wo&desktop_uri=%2Fwatc...)

Next, read the following essays of his: Explorable Explanations, Learnable Programming, and Up and Down The Ladder of Abstractions

Do you still think he's all talk?

Also, I can't tell if you were implying otherwise, but Alan Kay did a few other notable things, like Smalltalk and OOP.

I've seen some of his stuff. I am reacting to a talk where all he says is "this is wrong". I've written about some of that stuff in other posts here, so I won't duplicate it. He by and large argues to throw math away, and shows toy examples where he scrubs a hard coded constant to change program behavior. Almost nothing I do depends on something so tiny that I could scrub to alter my algorithms.

Alan Kay is awesome. He did change things for the better; I'm sorry if you thought I meant otherwise. His iPad sketch was of something that had immediately obvious value. A scrubbing calculator? Not so much.

Hmm, didn't you completely miss his look, the projector, etc.? He wasn't pretending to stand in 2013 and talk about the future of programming. He went back in time and talked about the four major trends that existed back then.

The future in that talk means "today"

No. I'm saying there is a reason those things haven't become reality. They have a much greater hidden cost than presented. It is the equivalent of someone dressing in the 20th-century robe of Edison and crying over the cruel fate that befell DC. Much like DC, these ideas might see a comeback, but only because the context has changed. Not being aware of history is one blunder, but failing to see why those things weren't realized is another.

I get it, I really do. And I'm very sympathetic to Victor's goals. I just don't buy it, I think he's mistaken about the most important factors to unlock innovation in programming.

His central conceit is that various revolutionary computing concepts which first surfaced in the early days of programming (the 1960s and '70s) have since been abandoned in favor of boring workaday tools of much more limited potential. More so, that new, revolutionary concepts in programming haven't received attention because programmers have become too narrow-minded. And that is very simply, fundamentally an untrue characterization of reality.

Sure, let's look at concurrency, one of his examples. He bemoans the parallelization model of sequential programming with threads and locks as being excessively complex and inherently self-limited. And he's absolutely correct, it's a horrible method of parallelism. But it's not as though people aren't aware of that, or as though people haven't been actively developing alternate, highly innovative ways to tackle the problem every year since the 1970s. Look at Haskell, OCaml, vector computing, CUDA/GPU coding, or node.js. Or Scala, Erlang, or Rust, all three of which implement the touted revolutionary "actor model" that Victor brandishes.
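For readers who haven't met the actor model outside of Erlang or Akka, the core idea fits in a few lines: private state plus a mailbox, with messages as the only way in, so no shared-memory locks. A purely illustrative sketch in Python (real actor runtimes add supervision, addressing, and distribution on top of this):

```python
import queue
import threading

# A minimal actor: a thread with a private mailbox. The count is
# touched only inside the actor's own loop, so no locks are needed;
# callers can only send messages.
class Counter:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        # Process messages one at a time, in arrival order.
        while True:
            msg, reply = self.mailbox.get()
            if msg == "incr":
                self.count += 1
            elif msg == "get":
                reply.put(self.count)

    def send(self, msg):
        # Fire-and-forget message.
        self.mailbox.put((msg, None))

    def ask(self, msg):
        # Request/reply: block until the actor answers.
        reply = queue.Queue()
        self.mailbox.put((msg, reply))
        return reply.get()

counter = Counter()
for _ in range(1000):
    counter.send("incr")
print(counter.ask("get"))  # 1000 -- the "get" queues behind every "incr"
```

The answer is deterministic because the mailbox serializes everything: the "get" message is processed only after the thousand "incr" messages ahead of it.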

Or look at direct data manipulations as "programming". This hasn't been ignored, it's been actively worked on in every way imaginable. CASE programming received a lot of attention, and still does. Various workflow based programming models have received just as much attention. What about Flash? Hypercard? Etc. And there are many niche uses where direct data manipulation has proven to be highly useful. But again and again it's proven to be basically incompatible with general purpose programming, likely because of a fundamental impedance mismatch. A total of billions of dollars in investment has gone into these technologies, it's counterfactual to put forward the notion that we are blind to alternatives or that we haven't tried.

Or look at his example of the Smalltalk browser. How can any modern coder look at that and not laugh? Any modern IDE like Eclipse or Visual Studio can present to the developer exactly that interface.

Again and again it looks like Victor is either blindly ignorant of the practice of programming in the real world or he is simply adhering to the "No True Scotsman" fallacy: imagining that the ideas he brings up haven't "truly" been tried, not seriously and honestly, that they've just been toyed with and abandoned. Except that in some cases, such as the actor model, they have not just been tried, they've been developed into robust solutions and they are made use of in industry when and if they are warranted. It's hilarious that we're even having this discussion on a forum written in Arc, of all things.

To circle back to the particular examples I gave of alternative important advances in programming (focusing on development velocity and reliability), I find it amusing and ironic that some folks so easily dismiss these ideas because they are so seemingly mundane. But they are mundane in precisely the ways that structured programming was mundane when it was in its infancy. It was easy to write off structured programming as nothing more than clerical work preparatory to actual programming, but now we know that not to be true. It's also quite easy to write off testing and integration, as examples, as extraneous supporting work that falls outside "real programming". However, I believe that when the tooling of programming advances to more intimately embrace these things we'll see an unprecedented explosion in programming innovation and productivity, to a degree where people used to relying on such tools will look on our programming today as just as primitive as folks using pre-structured programming appear to us today.

Certainly a lot of programmers today have their heads down, because they're concentrated on the work immediately in front of them. But the idea that programming as a whole is trapped inside some sort of "box" which it is incapable of contemplating the outside of is utterly wrong with numerous examples of substantial and fundamental innovation happening all the time.

I think Victor is annoyed that the perfect ideal of those concepts he mentions hasn't magically achieved reification without accumulating the necessary complexity and cruft that comes with translating abstract ideas into practical realities. And I think he's annoyed that fundamentally flawed and imperfect ideas, such as the x86 architecture, continue to survive and be eminently practical solutions decade after decade after decade.

It turns out that the real world doesn't give a crap about our aesthetic sensibilities, sometimes the best solution isn't always elegant. To people who refuse to poke their head out of the elegance box the world will always seem as though it turned its back on perfection.

> I get it, I really do.

It's always a red flag when people have to say that. Many experts don't profess to understand something which they spent a long time understanding.

Ironically, Bret Victor mentioned, "The most dangerous thought that you can have as a creative person is to think that you know what you're doing..."

The points you mention are bewildering, since in my universe, most "technologists" ironically hate change. And learning new things. They seem to perceive potentially better ways of doing things like a particularly offensive veggie, rant at length rather than even simply taste the damn thing, and at best hide behind "Well it'd be great to try these new things, but we have a deadline now!" Knowing that managers fall for this line each time, due to the pattern-matching they're trained in.

(Of course, when they fail to meet these deadlines due to program complexity, they do not reconsider their assumptions. Their excuses are every bit as incremental as their approach to tech. The books they read — if they read at all — tell them to do X, so by god X should work, unless we simply didn't do enough X.)

It's not enough to reject concrete new technologies. They even fight learning about them in order to apply vague lessons into their solutions.

Fortunately, HN provides a good illustration of Bret Victor's point: "There can be a lot of resistance to new ways of working that require to kind of unlearn what you've already learned, and think in new ways. And there can even be outright hostility." In real life, I've actually seen people shout and nearly come to blows while resisting learning a new thing.

You haven't addressed any of inclinedPlane's criticism of Bret's talk. Rather, your entire comment seems to be variations on "there are people who irrationally dislike new technology."

Well, I don't agree with your premise, that I haven't addressed any of their criticisms.

A main theme underlying their complaint is that there's "numerous examples of substantial and fundamental innovation happening all the time."

But Bret Victor clearly knows this. Obviously, he does not think every-single-person-in-the-world has failed to pursue other computational models. The question is, how does the mainstream programming culture react to them? With hostility? Aggressive ignorance? Is it politically hard for you to use these ideas at work, even when they appear to provide natural solutions?

Do we live in a programming culture where people choose the technologies they do, after an openminded survey of different models? Does someone critique the complectedness of the actor model, when explaining why they decided to use PHP or Python? Do they justify the von Neumann paradigm, using the Connection Machine as a negative case study?

There are other shaky points on these HN threads. For instance, inferring that visual programming languages were debunked, based on a few instances. (Particularly when the poster doesn't, say, evaluate what was wrong with the instances they have in mind, nor wonder if they really have exhausted the space of potential visual languages.)

@cali: I completely agree with your points. @InclinedPlane is missing the main argument.

Here is my take: TLDR: Computing needs an existential crisis before current programming zeitgeist is replaced. Until then, we need to encourage as many people as possible to live on the bleeding edge of "Programming" epistemology.

Long Version: For better or for worse, humans are pragmatic. Fundamentally, we don't change our behavior until there is a fire at our front door. In this same sense, I don't think we are going to rewrite the book on what it means to "program," until we reach an existential peril. Intel demonstrated this by switching to multicore processors after realizing Moore's law could simply not continue via a simple increase in clock speed.

You can't take one of Bret's talks as his entire critique. This talk is part of a body of work in which he points out and demonstrates our lack of imagination. Bret himself points out another seemingly irrelevant historical anecdote to explain his work: Arabic numerals. From Bret himself:

"Have you ever tried multiplying roman numerals? It’s incredibly, ridiculously difficult. That’s why, before the 14th century, everyone thought that multiplication was an incredibly difficult concept, and only for the mathematical elite. Then arabic numerals came along, with their nice place values, and we discovered that even seven-year-olds can handle multiplication just fine. There was nothing difficult about the concept of multiplication—the problem was that numbers, at the time, had a bad user interface."

Interestingly enough, the "bad user interface" wasn't enough to dethrone roman numerals until the renaissance. The PRAGMATIC reason we abandoned roman numerals was due to the increased trading in the Mediterranean.

Personally, I believe that Bret is providing the foundation for the next level of abstraction that computing will experience. That's a big deal. Godspeed.

Perhaps. But I think he is a visual thinker (his website is littered with phrases like "the programmer needs to see..."). And that is a powerful component of thinking, to be sure. But think about math. Plots and charts are sometimes extremely useful, and we can throw them up and interact with them in real time with tools like Mathcad. It's great. But it only goes so far. I have to do math (filtering, calculus, signal processing) most every day at work. I have some Python scripts to visualize some stuff, but by and large I work symbolically because that is the abstraction that gives me the most leverage. Sure, I can take a continuous function that is plotted and visually see the integral and derivative, and that can be a very useful thing. OTOH, if I want to design a filter, I need to design it with criteria in mind, solve equations and so on, not put an equation in a tool like Mathcad and tweak coefficients and terms until it looks right. Visual processing falls down for something like that.
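To make the filter-design point concrete: even the simplest one-pole lowpass is designed from a numeric criterion rather than by eyeballing a plot. A sketch with hypothetical sampling rate and cutoff; the closed form comes from setting the filter's gain to -3 dB at the cutoff frequency:

```python
import cmath
import math

# One-pole lowpass y[n] = a*x[n] + (1 - a)*y[n-1]: choose the
# coefficient by solving the gain equation for a design criterion
# (-3 dB at fc), not by tweaking a slider until it "looks right".
fs = 1000.0   # sampling rate in Hz (hypothetical)
fc = 50.0     # desired -3 dB cutoff in Hz (hypothetical)

w = 2 * math.pi * fc / fs
# Setting |H(e^{jw})|^2 = 1/2 yields a quadratic in the pole b = 1 - a;
# take the stable root with |b| < 1.
c = 2 - math.cos(w)
b = c - math.sqrt(c * c - 1)
a = 1 - b

# Verify the criterion: the gain at fc is 1/sqrt(2), i.e. -3 dB.
H = a / (1 - b * cmath.exp(-1j * w))
print(round(abs(H), 3))  # gain at cutoff: 0.707
```

The point is that the coefficient falls out of the specification; the plot is only a check afterwards.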

Others have posted about the new IDEs that they are trying to create. Great! Bring them to us. If they work, we will use them. But I fundamentally disagree with the premise that visual is just flat out better. Absolutely, have the conversation, and push the boundaries. But to claim that people that say "you know, symbolic math actually works better in most cases" are resisting change (you didn't say that so much as others) is silly. We are just stating facts.

Take your arabic numbers example. Roman numerals are what, essentially VISUAL!! III is 3. It's a horrible way to do arithmetic. Or imagine a 'visual calculator', where you try to multiply 3*7 by stacking blocks or something. Just the kind of thing I might use to teach a third grader, but never, ever, something I am going to use to balance my checkbook or compute loads on bridge trusses. I'm imagining sliders to change the x and y, and the blocks rearranging themselves. Great teaching tool. A terrible way to do math because it is a very striking, but also very weak abstraction.

Take bridge trusses. Imagine a visual program that shows loads in colors - high forces are red, perhaps. A great tool, obviously. (we have such things, btw). But to design a bridge that way? Never. There is no intellectual scaffolding there (pun intended). I can make arbitrary configurations, look at how colors change and such, but engineering is multidimensional. What do the materials cost? How hard are they to get and transport? How many people will be needed to bolt this strut? How do the materials work in compression vs expansion? What are the effects of weather and age? What are the resonances. It's a huge optimization problem that I'm not going to solve visually (though, again, visual will often help me conceptualize a specific element). That I am not thinking or working purely visually is not evidence that I am not being "creative" - I'm just choosing the correct abstraction for the job. Sometimes that is visual, sometimes not.

So, okay, the claim is that perhaps visual will/should be the next major abstraction in programming. I am skeptical, for all the reasons above - my current non-visual tools provide me a better abstraction in so many cases. Prove me wrong, and I will happily use your tool. But please don't claim these things haven't been thought of, or that we are being reactionary by pointing out the reasons we choose symbolic and textual abstractions over visual ones when we have the choice (I admit sometimes the choice isn't there).

Bret has previously given a talk[1] that addresses this point. He discusses the importance of using symbolic, visual, and interactive methods to understand and design systems. [2] He specifically shows an example of digital filter design that uses all three. [3]

Programming is very focused on symbolic reasoning right now, so it makes sense for him to focus on visual and interactive representations; interactive models are often intertwined with visual representation. Because of this, his push for a balanced approach to programming can read like constant harping on visualization. I think he is trying to get the feedback loop between creator and creation as tight as possible, using all available means to represent that system.

The prototypes I have seen of his that involve direct programming tend not to look like LabView; instead they are augmented IDEs with visual representations of processing linked to the symbolic representations used to create them. [4] This way you can manipulate the output and see how the system changes, see how the linkages in the system relate, and change the symbols to get different output. It is a tool for making systems represented by symbols, but interacting with the system can come through either a visual or a symbolic representation.

[1] http://worrydream.com/MediaForThinkingTheUnthinkable/note.ht...

[2] http://vimeo.com/67076984#t=12m51s

[3] http://vimeo.com/67076984#t=16m55s

[4] http://vimeo.com/67076984#t=25m0s

Part of Bret's theory of learning (which I agree with) is that when "illustrating" or "explaining" an idea it is important to use multiple simultaneous representations, not solely symbolic and not solely visual. This increases the "surface area" of comprehension so that a learner is much more likely to find something in this constellation of representations that relates to their prior understanding. In fact, that comprehension might only come out of seeing the constellation. No representation alone would have sufficed.

Further, you then want to build a feedback loop by allowing direct manipulation of any of the varied representations and have the other representations change accordingly. This not only lets you see the same idea from multiple perspectives -- visual, symbolic, etc. -- but lets the learner see the ideas in motion.

This is where the "real time" stuff comes in, and also why he gets annoyed when people see it as the point of his work. It's not; it's just a technology to accelerate the learning process. It's a very compelling technology, but it's not the foundation of his work. This is like reducing Galileo to a really good telescope engineer -- not that Bret Victor is Galileo.

I think he emphasizes the visual only because it's so underdeveloped relative to symbolic. He thinks we need better metaphors, not just better symbols or syntax. He's not an advocate of working "purely visually." It's the relationship between the representations that matters. You want to create a world where you can freely use the right metaphor for the job, so to speak.

That's his mission. It's the mission of every constructivist interested in using computers for education. Bret is really good at pushing the state of the art which is why folks like me get really excited about him! :D

You might not think Bret's talks are about education or learning, but virtually every one is. A huge theme of his work is this question: "If people learn via a continual feedback loop with their environment -- in programming we sometimes call this 'debugging' -- then what are our programming environments teaching us? Are they good teachers? Are they (unknowingly) teaching us bad lessons? Can we make them better teachers?"

The thing is that computing has both reached the limits of where "text dump" programming can go AND has found that text-dump programming is something like a "local maximum" among the different clear options available to programmers.

It seems like we need something different. But the underlying problem might be that our intuitions about "what's better" don't seem to work. Perhaps an even wider range of ideas needs to be considered and not simply the alternatives that seem intuitively appealing (but which have failed compared to the now-standard approach).

I agree with this. To get out of this local trap we are going to need something revolutionary. This is not something you can plow money into; it will come, if indeed it ever comes, from left field. My bet is there is a new 'frame' to be found somewhere out in the land of mathematical abstraction. I think to solve this one we are going to have to get right down to the nitty-gritty: where does complexity come from, and how, specifically, does structure emerge from non-structure? How can we design such systems?

It's true you couldn't plow money into such a project. But I always wondered why, when confronted with a problem like this, you couldn't hire one smart organizer who hires forty dispersed teams, each following a different lead. And hire another ten teams tasked with following and integrating the work of the forty (numbers arbitrary, but you get the picture).

I suppose that's how grants are supposed to work already, but it seems they have mostly degenerated into everyone following the intellectual trend with the most currency.

> it turns into typical "architecture astronaut" navel gazing

I take exception to your critique of Mr Victor's presentation. I am sad to see that your wall of text has reached the top of this discussion on HN. To be honest, it's probably because no one has the time to wade through all of the logical fallacies, especially the ad hominem attacks and needlessly inflammatory language ("falls very short," "architecture astronaut navel gazing," "untried methods," "frankly childish, and unhelpful," "trite," "not practical," etc.)

You seem to be reacting just like the "absolute binary programmers" that Bret predicts. As far as I can gather, you are fond of existing web programming tools (HTML, CSS, JS, etc) and took Bret's criticism as some sort of personal insult (I guess you like making websites).

I think that Bret's talk is about freeing your mind from thinking that the status quo of programming methodologies is the final say on the matter, and he points out that alternative methodologies (especially more human-centric and visual methodologies) are a neglected research area that was once more fruitful in Computer Science's formative years.

Bret's observations in this particular presentation are valid and insightful in their own right. His presentation style is also creative and enjoyable. Nothing in this presentation deserves the type of language that you invoke, especially in light of the rest of Bret's recent works (http://worrydream.com/) that are neatly summed up by this latest presentation.

I'm not surprised at the language; it's war, after all. Bret and Alan Kay and others are saying, "We in this industry are pathetic and not even marginally professional." It's hard to hear, and sometimes invokes an emotional response.

And what makes it hard to hear is that we know deep in our hearts, that's it's true, and as an industry, we're not really trying all that hard. It used to be Computer Science; now it's Computer Pop.

> Bret and Alan Kay and others are saying, "We in this industry are pathetic and not even marginally professional." It's hard to hear and sometimes invokes an emotional response.

It sounds like sour grapes to me. Everyone else is pathetic and unprofessional because they didn't fall in love with our language and practices.

Indeed, they didn't. And it likely cost the world trillions (I'm being conservative, here). The sour grapes are justified here. To give a few examples:

In the sixties, people were able to build interactive systems with virtually no delay. Nowadays we have computers that are millions of times faster, yet they still lag. Seriously, more than 30 seconds just to turn on the damn computer? My father's Atari ST was faster than my brand new computer in this respect.

Right now, we use the wrong programming languages for many projects, often multiplying code size by a factor of at least 2 to 5. I know learning a new language takes time, but if you know only 2 languages and one paradigm, either you're pathetic, or your teachers are.

X86 still dominates the desktop.

>In the sixties, people were able to build interactive systems with virtually no delay.

That did virtually nothing. It is easy to be fast when you do nothing.

>I know learning a new language takes time, but if you know only 2 languages and one paradigm, either you're pathetic, or your teachers are. X86 still dominates the desktop.

Wow, so CS is all about what hardware you buy and what languages you program in? I guess we will just have to agree to disagree on what CS is. While programming languages are part of CS, what language you chose to write an app in really is not.

> > In the sixties, people were able to build interactive systems with virtually no delay.

> That did virtually nothing. It is easy to be fast when you do nothing.

This is kind of the point. Current interactive systems tend to do lots of useless things, most of which are not perceptible (except for the delays they cause).

> Wow, so CS is all about what hardware you buy and what languages you program in?

No. Computer Science is about assessing the qualities of current programming tools, and inventing better ones. Without forgetting human warts and limitations, of course.

On the other hand, programming (solving problems with computers) is about choosing hardware and languages (among other things). You wouldn't want your project to cost 5 times more than it could just because you've chosen the wrong tools.

> You wouldn't want your project to cost 5 times more than it could just because you've chosen the wrong tools.

Yep, if there were really tools out there that could beat what is in current use by a factor of 5, then they would have won; and once they exist, they will win. Because their makers would have had the time to A) implement something better, and B) use all that extra time to build an easy migration path so that those on the lesser platform could migrate over.

So where is the processor that is 5x better than x86? Where is the language that is 5x better than C, C++, Java, C# (whatever you consider the best of the worst)? I would love to use a truly better tool; I would love to use a processor so blazingly fast that it singed my eyebrows.

> This is kind of the point. Current interactive systems tend to do lots of useless things, most of which are not perceptible (except for the delays they cause)

Right, because all of us sitting around with our 1/5x tools have time to bang out imperceptible features.

Thanks for saying all that. I was thinking it, but restrained myself since there seemed to be a lot of hero worship over this person going on here. But it needs to be said. Everything in that video is stuff that has been researched for decades. It isn't mainstream largely because it is facile to say 'declarative programming' or what have you, but something entirely different for it to be easier and better. Prolog is still around. Go download a free compiler, and try to write a 3D graphical loop that gives you 60 fps. Try to write some seismic code with it. Try to write a web browser. Not so easy. Much was promised by things like Prolog, declarative programming, logic programming, expert systems, and so on, but again, it is easy to promise, hard to deliver. We didn't give up, or forget the ideas; it is just that the payoff wasn't there (except in niche areas where, in fact, all of these things are going strong, as you would expect).

Graphical programming doesn't work because programs are not 2 dimensional, they are N dimensional, and you spend all your time trying to fit things on a screen in a way that doesn't look like a tangled ball of yarn (hint: can't be done). I've gone through several CASE tools over my decades, and they all stink. Not to mention, I don't really think visually, but more 'structurally' - in terms of the interrelations of things. You can't capture that in 2D, and the problems that 2D creates more than overwhelm whatever advantages you might get going from 1D (text files) to 2D.

Things like CSP have never been lost, though they were niche for a while. Look at Ada's rendezvous model, for example.

Right. Personally I've had plenty of experience with certain examples of "declarative programming" and "direct manipulation of data" programming and other than a few fairly niche use cases they are typically horrid for general purpose programming. Think about how "direct manipulation" programming fits into a source control / branching workflow, for example. Unless there's a text intermediary that is extremely human friendly you have a nightmare on your hands. And if there is such an intermediary then you're almost always better off just "directly manipulating" that.

> Think about how "direct manipulation" programming fits into a source control / branching workflow, for example.

Think about how "automobiles" fit into a horse breeding / grooming workflow, for example.

No reason that source-control for a visual programming language couldn't be visual.

> Think about how "direct manipulation" programming fits into a source control / branching workflow, for example.

Trivially. Since virtually all currently used languages form syntactic trees (the exception being such beasts as Forth, Postscript etc.), you could use persistent data structures (which are trees again) for programs in these languages. Serializing the persistent data structure in a log-like fashion would be equivalent to working with a Git repository, only on a more fine-grained level. Essentially, this would also unify the notion of in-editor undo/redo and commit-based versioning; there would be no difference between the two at all. You'd simply tag the whole thing every now and then whenever you reach a development milestone.
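A minimal illustration of that idea (my sketch, not from the comment): a persistent tree where each edit returns a new root sharing all unchanged subtrees with the previous version, so every intermediate state is retained for free — undo/redo and fine-grained commits become the same mechanism.

```python
# Sketch of a persistent (immutable) syntax tree: each edit produces a
# new root that shares unchanged subtrees with the old version, so the
# edit history is just a list of roots.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Node:
    label: str
    children: Tuple["Node", ...] = ()

def replace_child(root: Node, path: Tuple[int, ...], new: Node) -> Node:
    """Return a new root with the node at `path` replaced by `new`."""
    if not path:
        return new
    i, rest = path[0], path[1:]
    kids = list(root.children)
    kids[i] = replace_child(kids[i], rest, new)
    return Node(root.label, tuple(kids))

v0 = Node("if", (Node("cond"), Node("then"), Node("else")))
v1 = replace_child(v0, (1,), Node("then'"))  # edit the "then" branch

history = [v0, v1]                       # every version still intact
assert v0.children[1].label == "then"    # old version untouched
assert v1.children[1].label == "then'"   # new version has the edit
assert v0.children[0] is v1.children[0]  # unchanged subtree is shared
```

Serializing `history` in a log-like fashion is then essentially a fine-grained commit log.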

> you spend all your time trying to fit things on a screen in a way that doesn't look like a tangled ball of yarn (hint, can't be done)

The way we code now leads to tangled balls of yarn. That won't be fixed by simply moving to a graphical (visual) programming language.

Well, there is yarn and then there is yarn. I don't mean spaghetti code, which is its own problem separate from representation. I'm thinking about interconnection of components, which is fine. Every layer of linux, say, makes calls to the same low level functions. If you tried to draw that it would be unreadable, but it is perfectly fine code - it is okay for everyone to call sqrt (say) because sqrt has no side effects. Well, sqrt is silly, but I don't know the kernel architecture - replace that with virtual memory functions or whatever makes sense.

> If you tried to draw that it would be unreadable, but it is perfectly fine code

You seem to be implying that no one could figure out how to apply the "7 +/- 2 rule"(https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus...) to a visual programming language.

I have actually been thinking about 1D coding vs 2D coding. Isn't 2D describing nD a little bit closer? Like a photograph of a sculpture... a little easier to get the concept than with written description, no matter how eloquent.

Re: the ball of yarn, we're trying to design that better in NoFlo's UI. Think about a subway map that designs itself around your focus. Zoom out to see the whole system, in to see the 1D code.

All I can say is, have you tried to use a CASE tool to do actual coding? I have, forced on me by various MIL-STD compliant projects.

X and Y both talk to A and B. Represent that in 2D without crossing lines.

Okay, you can, sure. If X and Y are at the top, and A and B are at the bottom, twist A and Y, and the crossing in the middle goes away. But, you know, X is related to Y (same level in the sw stack), and I really wanted to represent them at the same level. Oops.

And, I'm sure you can see that all it takes is one additional complication, and you are at a point where you have crossed lines no matter what.
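The "one additional complication" point is really graph planarity. A quick arithmetic check (my illustration, using the standard Euler-formula bound for bipartite graphs: a planar simple bipartite graph must satisfy e <= 2v - 4):

```python
# "X and Y both talk to A and B" is the complete bipartite graph K(2,2):
# 4 nodes, 4 edges. It satisfies the planarity bound (and is in fact
# just a square), so a crossing-free drawing exists.
v, e = 4, 2 * 2
print(e <= 2 * v - 4)  # True

# One more module on each side gives K(3,3): 6 nodes, 9 edges. The bound
# fails, so by Euler's formula (and Kuratowski's theorem) crossings are
# unavoidable no matter how you twist the layout.
v, e = 6, 3 * 3
print(e <= 2 * v - 4)  # False
```

(The bound is only a necessary condition in general, but it suffices to show K(3,3) cannot be drawn flat.)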

Textually there is no worry about layout; graphically, there is. I've seen engineers spend days and weeks just trying to get boxes lined up, moving things around endlessly as requirements change - you spend an inordinate amount of time doing everything but engineering. You are drawing, trying to make a pretty picture. And that is not exactly wasted time. We all know people spend too much effort making PowerPoint 'pretty', and I am not talking about that. I mean that if the image is not readable then it is not usable, so you have to do protracted layout sessions.

Layout is NP-hard. Don't make me do layout to write code.

tl;dr version - code is multi-dimensional, but not in a 'layout' way. If you force me to do 2D layout, you force me to work in an unnatural way that is unrelated to what I am actually trying to do. You haven't relaxed the problem by one dimension by introducing layout; you've multiplied the constraints like crazy (that's a technical math term, I think!)

And then there is the information compression problem. Realistically, how much can you display on a screen graphically? I argue far less than textually. I already do everything I can to maximize what I can see - scrolling involves a context switch I do not want to do. So, in {} languages I put the { on the same line as the expression - "if(){" - to save a line, and so on. Try a graphical UML display of a single class - you can generally only fit a few methods in, good luck with private data, and all bets are off if methods are more than 1-2 short words long. I love UML for a one-time, high-level view of an architecture, but for actually working in? Horrible, horrible, horrible. For example, I have a ton of tiny classes that do just 1 thing and get used everywhere. Do I represent each exactly once, and then everywhere else you have to remember that diagram? Do I copy it everywhere, and face editing hell if I change something? Do I have to reposition everything if I make a method name longer? Do I let the tool do the layout, and get an unreadable mess? And so on. The bottom line is you comprehend better if you can see it all on one "page" - and graphical programming has always meant less information on that page. That's a net loss in my book. (This was very hand-wavey; I've conflated class diagrams with graphical programming, for example - we'd both need access to a whiteboard to really sketch out all of the various issues.)

Views into 1D code are a different issue, which is what I think you are talking about with NoFlo (I've never seen it). If you can solve the layout problem you will be my hero, perhaps, so long as I can retain the textual representation that makes things like git, awk, sed, and so on so powerful. But I ask: what is that going to buy me over a typical IDE with solutions/projects/folders/files in a tab, a class browser in another tab, auto-complete, and easy navigation (ctrl+right click to go to definition, and so on)? Can I 'grep' all occurrences of a word (I may want to grep comments; this is not strictly a code search)?

Hope this all doesn't come across as shooting you down or bickering, but I am passionate about this stuff, and I am guessing you are also. I've been promised the wonders of the next graphical revolution since the days of structured design, and to my way of thinking none of it has panned out. Not because of the resistance or stupidity of the unwashed masses, but because what we are doing does not inherently fit into 2D layout. There's a huge impedance mismatch between the two which I assert (without proof) will never be fixed. Prove me wrong! (I say that nicely, with a smile)

Sorry for the length; I didn't have time to make it shorter.

I write all of my software in a 2D, interactive, live-executing environment. Yes, layout is a problem. But you get good at it, and then it's not a problem anymore.

Moreover, the UI for the system I use is pretty basic and only has a few layout aids – align objects, straighten or auto-route patch cords, auto-distribute, etc. I can easily imagine a more advanced system that would solve most layout problems.

A 2D editor with all of the power of vim or emacs would be formidable. Your bad experience with "CASE tools" does not prove the rule.

What environment?

> Sorry for the length; I didn't have time to make it shorter.

Favorite phrase.

let me try the tl;dr

Assembler won over machine code, just as it lost to the next high-level thing, because in practical terms it was easier and more practical; reality decided based on constraints.

If it doesn't go mainstream, it means it's not worth it, because it's more expensive...

Yes, with Pure Data and Quartz composer.

As a JS hacker I wanted to bring that kind of coding to the browser for kids so I made http://meemoo.org/ as my thesis. Now I have linked up with http://noflojs.org/ to bring the concept to more general purpose JS, Node and browser.

I won't have really convinced myself until I rewrite the graph editor with the graph editor. Working on that now.

Bingo, but it can also go the other way. Back to the photograph example:

how much time do you need, by tangling lines (or any other method you can come up with), to define all the level of detail you are looking for?

Now, "no matter how eloquent": if the photo can be made digital, it can be saved to a file and described with a rather simple language, all 0s and 1s. So it can be done, and methods for being that eloquent exist...

What if programs written as text are actually a representation of some more complex ideas? (IMO that's what they are; code is just the way of... coding those ideas into text.) And text is visual, remember... (the same abstraction holds for words and the ideas they represent).

> I'd list things such as development velocity and end-product reliability as being far more important.

Your main thesis is that software and computing should be optimized to ship products to consumers.

The main thesis of guys like Alan Kay is that we should strive to make software and computing that is optimized for expanding human potential.

Deep down most of us got in to computing because it is a fantastic way to manipulate our world.

Bret Victor's talks instill a sense of wonderment and discovery, something that has often been brow-beaten out of most of us working stiffs. The talks make us feel like there is more to our profession than just commerce. And you know what? There is. And you've forgotten that, to the point where you're actually railing against it!

Come back to the light, fine sir!

> Your main thesis is that software and computing should be optimized to ship products to consumers.

Those were just examples of other things I thought were more important, it wasn't an exhaustive list. However, it's interesting that you focus in on "optimizing to ship products to consumers", when I made mention of no such thing. I mentioned development velocity and end-product reliability. These are things that are important to the process of software development regardless of the scale of the project or the team working on it or the financial implications of the project.

They are tools. Tools for making things. They enable both faceless corporations who want to make filthy lucre by shipping boring line-of-business apps and individuals who want to "expand human potential" or "instill a sense of wonderment and discovery".

Reliability and robustness are very fundamental aspects to all software, no matter how it's built. And tools such as automated builds combined with unit and integration tests have proven to be immensely powerful in facilitating the creation of reliable software.

If your point is that non-commercial software need not take advantage of testing or productivity tools because producing a finished product that runs reliably is unimportant if you are merely trying to "expand human potential" or what-have-you then I reject that premise entirely.

If you refuse to acknowledge that the tools of the trade in the corporate world represent a fundamentally important contribution to the act of programming then you are guilty of the same willful blindness that Bret Victor derides so heartily in his talk.

You know, in some sense those early visionaries were beaten by the disruptive innovators of their day.

I think the argument here is that 1000 little choices favoring incremental advantage in the short term add up to a sub-optimal long term, but I'm not so sure. I have a *NIX machine in my phone. Designers "threw it in there" as the easy path. And it works.

C'mon, "designers threw it in there"? Don't you think it was a well-thought-out choice by skilled engineers?

Just trying to show the Linux kernel as an inexpensive building block in this day and age. One that is used casually, in Raspberry Pis, in virtualization, etc.

Nope, Android is based on Linux because it was available and relatively easy to get going quickly.

>> I'd list things such as development velocity and end-product reliability as being far more important.

> Your main thesis is that software and computing should be optimized to ship products to consumers.

No, the main thesis is that it should be optimized to solve problems, and to be as easy to adjust as possible.

>The main thesis of guys like Alan Kay is that we should strive to make software and computing that is optimized for expanding human potential.

We are, even with our current tools. You now have the opportunity to express yourself to the whole world in this place, everything done with these "limiting" tools... IMO the presentation is about exploring whether maybe there is a better approach... quotes on "maybe".

> Come back to the light, fine sir!

All are lights... it's just the adequate combination that's required. You don't put the ultra-bright LEDs of your vehicle in your living room, or vice versa.

Brilliant analysis! Navel gazing indeed. Typical NCA (Non-Coding Architect) stuff.

This reminds me of the UML and Model-Driven Architecture movement of days gone by, where architecture astronauts imagined a happy little world where you could just get away from that dirty coding, join some boxes with lines in all sorts of charts, and then have that generate your code. And it would produce code you actually want to ship and that does what you want it to do.

This disdain for writing code is not new. This classic essay about "code as design" from 1992 (!) is still relevant today:


In the presenter's worldview it seems as though a lot of subtle details are ignored or just not seen, whereas in reality seemingly subtle details can sometimes be hugely important. Consider Ruby vs Python, for example. From a 10,000 foot view they almost look like the same language, but at a practical level they are very different. And a lot of that comes down to the details. There are dozens of new languages within the last few decades or so that share almost all of the same grab bag of features in a broad sense but where the rubber meets the road end up being very different languages with very different strengths. Consider, for example, C# vs Go vs Rust vs Coffeescript vs Lua. They are all hugely different languages but they are also very closely related languages.

I suspect that the killer programming medium of 2050 isn't going to be some transformatively different methodology for programming that is unrecognizable to us, it's going to be something with a lot of similarities to things I've listed above but with a different set of design choices and tradeoffs, with a more well put together underlying structure and tooling, and likely with a few new ways of doing old things thrown in and placed closer to the core than we're used to today (my guess would be error handling, testing, compiling, package management, and revision control).

There is just so much potential in plain jane text based programming that I find it odd that someone would so easily clump it into a single category and write it all off at the same time. It's a medium that can embrace everything from Java on the one hand to Haskell or lisp on the other, we haven't come anywhere close to reaching the limits of expressiveness available in text-based programming.

You can cast this entire comment in terms of hex/assembler vs C/Fortran and you get the same logical form.

We haven't come anywhere close to reaching the limits of expressiveness in assembler either, yet we've mostly given up on it for better things.

Try arguing the devil's position. What can you come up with that might be better than text-based programming? Nothing? Are we really in the best of all possible worlds?

I don't think it's fair to call him a Non-Coding Architect. Have you seen his other talks, or the articles he's published via his website http://worrydream.com ? Bret clearly codes.

But does he ship?

Sometimes not shipping gives us more freedom to explore.

I really wish he did. I think one of the greatest disservices he does himself is not shipping working code for the examples in his presentation. We've seen what and we're intrigued, but ship something that shows how so we can take the idea and run with it.

So, have you seen Media for Thinking the Unthinkable?


The working code for the Nile viewer presented is on GitHub:


I think the whole point of his series of talks is to inspire others to invent new things that not even he has thought of.

A delay in releasing code would be valuable then. Those too impatient to wait can start hacking on something new now and give lots of thought to this frontier and those that want to explore casually can do so a few months later when the source is released. Releasing nothing is a non-solution. Why make everyone else stumble where you have? That's just inconsiderate.

Bernard of Chartres used to say that we are like dwarfs seated on the shoulders of giants, so that we can see more things, and more distant things, than they could - not by any keenness of our own sight, or any height of our bodies, but because we are lifted up and carried aloft by their gigantic stature.

Bingo, this reminds me of people not having time to get bored, and then innovating by giving their minds some free space to wander. The typical scenario of the solution to a problem arriving once you give it a break...

> But does he ship?

Why does that matter?

Fooling around with a paint brush in your study is fine, but real artists ship.

A bunch of ideas that sound great in theory are just that, it is only by surviving the crucible of the real world that ideas are validated and truly tested. When Guy Steele and James Gosling were the only software developers in the world who could program in Java, every Java program was a masterpiece. It is only once the tool was placed in the hands of mere mortals that its flaws were truly known.

Sometimes the journey is the product.

Walk around a good gallery. There are a pretty good number of pieces entitled "Study #3", or something of that sort. An artist is playing around with a tool, or a technique, trying to figure out something new.

Piano music is probably where this concept gets the most attention. Many études, such as those by Chopin, are among the most significant musical works of the era.

Yes, sometimes.

In another talk Bret claims that you basically cannot do visual art/design without immediate feedback. I was wondering how he thought people who create metal sculptures via welding, or carve marble, possibly work. It's just trivially wrong to assert you need that immediate feedback, and it calls all of the reasoning into question.

Good point. I think programmers would be better off dropping the artistic pretensions altogether and accepting that they are much closer to engineers and architects in their construction of digital sandcastles.

and some artists create amazing art coding it in Processing; just take a look at Casey Reas's works.

Also, Beethoven quite often wrote down his complex music without using an instrument, as he heard it in his mind...

You're forgetting about the hundreds, even thousands, of paintings they did that are not in the gallery. Those paintings are the same as "shipping," even though you never see them in the gallery.

You can't play around with a tool or technique without actually producing something. You can talk about how a 47.3% incline on the brush gives the optimal result all day long, but it's the artist who actually paints that matters.

> Fooling around with a paint brush in your study is fine, but real artists ship.

Van Gogh didn't ship.

> Why does that matter?

Because I want to play with his Drawing Dynamic Viz demo. http://worrydream.com/DrawingDynamicVisualizationsTalkAddend...

He probably doesn't. He spends too much time not doing the machine work :)

The fact that you point "shipping" as a part of this discussion just shows how much he's right.

He is not allowed to talk about his iPad / Apple stuff. Did TBL ship the W3C? Protocols are the perfect example of shipping by design.

Typical NCA (Non-Coding Architect) stuff.

I assure you that devices of this sort require a great deal of code: http://cachepe.zzounds.com/media/quality,85/Ion_front-c20cdb...

Can you expand on what's wrong with declarative design? I'm not talking about UML, but modeling specifically.

Since I've been doing it for quite a few years I guess I know a thing or two about MDA/MDE. And it's not about disdain for writing code.

> And there are projects, such as couch db, which are based on Erlang but are moving away from it. Why is that?

That is news to me. CouchDB is knee deep in Erlang and loving it. They are merging with BigCouch (from Cloudant) which is also full on Erlang.

Come to think of it, you are probably thinking of Couchbase, which doesn't really have much "couch" in it except for the name and CouchDB's original author working on it.

> Rather, it's because languages which are highly optimized for concurrency aren't always the best practical solution, even for problem domains that are highly concurrency bound, because there are a huge number of other practical constraints which can easily be just as or more important.

That is true; however, what is missing is that Erlang is optimized for _fault_tolerance_ first and concurrency second. Fault tolerance means isolation of resources, and there is a price to pay for that. High concurrency, the actor model, functional programming, immutable data, and run-time code reloading all flow from the "fault tolerance first" idea.

It is funny: many libraries/languages/projects that try to copy Erlang completely miss that one main point about it, go on implementing "actors", run the good ol' ring benchmark, and claim "we surpassed Erlang, look at these results!". Yeah, that is pretty amusing. I want to see them do a completely concurrent GC and hot code reloading (note: those are hard to add on; they have to be baked into the language).

They also seem to miss the preemptive scheduling, built-in flow control, and per-process GC (which leads to minimal GC pauses). Those are impossible to achieve without a purpose-built VM. No solution on Sun JVM will ever be able to replace Erlang for applications which require low-latency processing. Similarly, no native-code solution can do so either: you need your runtime to be able to preempt user code at any point in time (i.e. Go is not a replacement for Erlang).

> Those are impossible to achieve without a purpose-built VM. No solution on Sun JVM will ever be able to replace Erlang for applications which require low-latency processing.

Impossibility claims are very hard to prove and are often wrong, as in this case.

First, commercial hard real-time versions of the JVM with strong timing and preempting guarantees exist and are commonly used in the defense industry. To the best of my knowledge, there are no mission- and safety- critical weapon systems written in Erlang; I personally know several in Java. These are systems with hard real-time requirements that blow stuff up.

In addition, Azul's JVM guarantees no GC pauses larger than a few milliseconds (though it has no preemption guarantees).

But the fact of the matter is that even a vanilla HotSpot VM is so versatile and performant that, in practice, if you're careful about what you're doing, you'll achieve pretty much everything Erlang gives you and lots more.

People making this claim (Joe Armstrong first among them) often fail to mention that those features that are hardest to replicate on the JVM are usually the less important ones (like perfect isolation of processes for near-perfect fault-tolerance requirements). But when it comes to low-latency stuff, the JVM can and does handily beat Erlang.

P.S. As one of the authors of said ring-benchmark-winning actor frameworks for the JVM, I can say that we do hot code swapping already, and if you buy the right JVM you also get a fully concurrent GC, and general performance that far exceeds Erlang's.

> First, commercial hard real-time versions of the JVM with strong timing and preempting guarantees exist and are commonly used in the defense industry. To the best of my knowledge, there are no mission- and safety- critical weapon systems written in Erlang; I personally know several in Java. These are systems with hard real-time requirements that blow stuff up.

That's why I said Sun JVM in the first place. Azul and realtime Java are those purpose-built VMs I mentioned.

Your claim about the Sun JVM is more interesting. If it is so versatile, why do no network applications exist on the JVM that provide at least adequate performance? Sure, the JVM is blazing fast as far as code execution speed goes; the point is that writing robust zero-copy networking code is so hard on the JVM that raw execution speed does not help.

I'm not sure what you mean when you say network applications that provide at least adequate performance. Aren't Java web-servers at the very top of every performance test? Isn't Java the #1 choice for low-latency high-frequency-trading applications? Aren't HBase, Hadoop and Storm running on the JVM?

The whole point of java.nio introduced over 10 years ago, back in Java 1.4, is robust zero-copy networking (with direct byte-buffers). Higher-level networking frameworks, like the very popular Netty, are based on NIO (although, truth be told, up until the last version of Netty, there was quite a bit of copying going on in there), and Netty is at the very top of high-performance networking frameworks in any language or environment.

> No solution on Sun JVM will ever be able to replace Erlang for applications which require low-latency processing


I've spent a great deal of time trying to make a very similar Erlang system reach 1/100 of the throughput/latency that the LMAX guys managed in pure Java. There are days when I cry out in my sleep for a shared mutable variable.

If you need shared state to pass a lot of data between CPUs, then Erlang might not be the right solution; however, the part that needs to do it can be isolated, implemented in C, and communicated with from BEAM.

What always amuses me about LMAX is the way they describe it (breakthrough! invention!), when what they "invented" is a ring buffer, the solution everybody arrives at first. This is how all device drivers communicate with peripheral devices, for example, and a fast IPC mechanism people have used in UNIX for decades. Even funnier, it takes less code to implement it in C from scratch than to use the LMAX library.
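For what it's worth, the data structure itself is small enough to sketch in a few lines. Here's a minimal single-producer/single-consumer ring buffer in Python (illustrative only; the real Disruptor adds lock-free sequence counters, cache-line padding, and batching on top of this core idea):

```python
# Minimal single-producer/single-consumer ring buffer.
# Sketch only: the LMAX Disruptor layers memory-visibility and
# batching machinery on top of essentially this structure.

class RingBuffer:
    def __init__(self, size):
        # Power-of-two size lets us mask instead of taking a modulo.
        assert size > 0 and (size & (size - 1)) == 0, "size must be a power of two"
        self.size = size
        self.mask = size - 1
        self.slots = [None] * size
        self.head = 0  # next slot to read
        self.tail = 0  # next slot to write

    def push(self, item):
        if self.tail - self.head == self.size:
            return False  # buffer full; producer must wait
        self.slots[self.tail & self.mask] = item
        self.tail += 1
        return True

    def pop(self):
        if self.head == self.tail:
            return None  # buffer empty; consumer must wait
        item = self.slots[self.head & self.mask]
        self.head += 1
        return item
```

DMA descriptor rings in device drivers have this same shape; the Disruptor's real contribution was in the JVM memory-visibility details, not the data structure.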

Your criticism seems to be framed against where we are at today.

As programmers we have a fragmented feedback cycle regardless of whether we are writing our software in Erlang or Lisp or C++.

While it is true that realistic matters like 'integration' and 'development velocity' are important enough in modern-day programming to determine what path we must take, we shouldn't let them change our destination.

If you were to envision programming nirvana would it be mostly test coverage and scrum boards?

> If you were to envision programming nirvana would it be mostly test coverage and scrum boards?

Far from it. Indeed I think that TDD is vastly over-used and often harmful and SCRUM is more often development poison than anything else. But the fact that these things are popular despite the frequent difficulty of implementing them correctly is, I think, indicative of two things. First, that there is something of serious and fundamental value there which has caused so many people to latch onto such ideas zealously, even without fully understanding where the value in such ideas comes from. And second, that due to their being distanced from the "practice of programming" they are more subject to misinterpretation and incorrect implementation (this is a hard problem in programming as even the fundamentals of object oriented design aren't immune to such problems even though they tend to be baked into programming languages fairly deeply these days).

I think that unquestionably a routine build/test cycle is a massive aid to development quality. It doesn't just facilitate keeping a shipping product on schedule; it has lots of benefits that diffuse out to every aspect of development in an almost fractal fashion. For example, having a robust unit test suite vastly facilitates refactoring, which makes it easier to improve code quality, which makes it easier to maintain and modify code, which makes it easier to add or change features, and so forth. It's a snowball effect. Similarly, I think that unquestionably a source control system is a massive aid to development quality and pace. That shouldn't be a controversial statement today, though it would have been a few decades ago. More so, I think that unquestionably the branching and merging capabilities of advanced source control systems are a huge aid in producing software.

Development velocity has a lot of secondary and higher order effects that impact everything about the software project. It makes it easier to change directions during development, it lowers the overhead for every individual contributor, and so on. Projects with higher development velocity are more agile, they are able to respond to end-user feedback and test feedback and are more likely to produce a reliable product that represents something the end-users actually want without wasting a lot of developer time along the way.

Some people have tried to formalize such "agile" processes into very specific sets of guidelines but I think for the most part they've failed to do so successfully, and have instead created rules which serve a far too narrow niche of the programming landscape and are also in many cases too vague to be applied reliably. But that doesn't mean that agility or increased development velocity in general are bad ideas, they are almost always hugely advantageous. But they need to be exercised with a great deal of thought and pragmatism.

Also, as to testing, it also suffers from the problem of being too distanced from the task of programming. There are many core problems in testing such as the fact that test code tends to be of lower quality than product code, the problems of untested or conflicting assumptions in test code (who tests the tests?), the difficulty of creating accurate mocks, and so on. These problems can, and should, be addressed but one of the reasons why they've been slow to be addressed is that testing is still seen as something that gets bolted onto a programming language, rather than something that is an integral part of coding.

Anyway, I've rambled too long I think, it's a deep topic, but hopefully I've addressed some of your points.

It's funny that you mention testing. TDD/BDD/whatever IS declarative programming, except you're doing the declarative-to-imperative translation yourself.

TDD has always felt sort of wrong to me because it really felt like I was writing the same code twice. Progress, in this regard, would be the spec functioning as actual code.
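A toy illustration of that "same code twice" feeling (the function and its values are made up):

```python
# The implementation...
def price_with_tax(price, rate):
    return price * (1 + rate)

# ...and the TDD-style test, which largely restates the same rule
# in example form: the spec and the code encode it twice.
def test_price_with_tax():
    assert price_with_tax(100, 0.25) == 125.0
    assert price_with_tax(0, 0.25) == 0.0

test_price_with_tax()
```

Property-based testing and executable contracts (pre/postconditions) are attempts to make the spec run as actual code once, rather than be duplicated by hand.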

Characterizations like 'wrongheadedness' have no part in this discussion. If his conclusions are wrong you can explain why without generalizing to his nature as a person.

"Worse is better", aka "New Jersey style", as in RPG's famous essay? [1]

[1] http://www.dreamsongs.com/WorseIsBetter.html

Daniel Weinreb's blog post response to "Worse is Better" is worth reading. DLW was "the MIT guy" and was a news.yc user (dlweinreb).



His "computers should figure out how to talk to each other" immediately reminded me of the "computers should heal themselves" one finds in "Objects Have Failed" from the same author. Both shells seem equally empty to me.

Also, if you want more fuel, you might find it funny that he refers to GreenArrays in his section about parallel computing. Chuck Moore, the guy behind it, is probably the last and ultimate "binary programmer" on this planet. But at the same time, he invented a "reverse syntax highlighting", where you set the colors of your tokens in order to set their function, in a non-plain-text-source system (see colorForth).

I have no idea why you're calling Chuck Moore a "binary programmer", by the definition given in today's talk.

Forth is anything but machine code. Forth and Lisp both share the rare ability to describe both the lowest and the highest layers of abstraction equally well.

Chuck Moore is definitely an interesting guy. It's hard to stereotype him, but he is definitely closer to the metal than most other language designers.

For one thing, Forth is the machine code for the chips he designs. Moreover, in the various iterations of his systems on the x86, he was never afraid to insert hex codes in his source when he needed to, typically in order to implement his primitives, because he judged that an assembler was unnecessary. At one point he tried to build a system in which he coded in something rather close to object code. This system led him to his colorForth, in which you actually edit the object code with a specialized editor that makes it look like you're editing normal source code.

Forth absolutely does not share the ability to describe both high and low levels equally well. Heck, Moore even rejects the idea of "levels" of programming.

Bret Victor's talk wasn't about any particular technology. It was about being able to change your mind. It's not important that "binary programmers" programmed in machine code. It's important that they refused to change their minds. We should avoid being "binary programmers" in this sense.

> For one thing, Forth is the machine code for the chips he designs.

You're right, I should've said Forth isn't just machine code.

> Forth absolutely does not share the ability to describe both high and low levels equally well. Heck, Moore even rejects the idea of "levels" of programming.

This is a misunderstanding. He rejects complex programming hierarchies, wishing instead to simply have a programmer-Forth interface and a Forth-machine interface. He describes programming in Forth as building up the language towards the problem, from a lower level to a higher level:

"The whole point of Forth was that you didn't write programs in Forth, you wrote vocabularies in Forth. When you devised an application, you wrote a hundred words or so that discussed the application, and you used those hundred words to write a one line definition to solve the application. It is not easy to find those hundred words, but they exist, they always exist." [1]


"Yes, I am struck by the duality between Lisp and Lambda Calculus vs. Forth and postfix. But I am not impressed by the productivity of functional languages." [2]

Here's what others have said:

"Forth certainly starts out as a low-level language; however, as you define additional words, the level of abstraction increases arbitrarily." [3]

Do you consider Factor a Forth? I do.

"Factor allows the clean integration of high-level and low-level code with extensive support for calling libraries in other languages and for efficient manipulation of binary data." [4]

1. http://c2.com/cgi/wiki?ForthValues

2. http://developers.slashdot.org/story/01/09/11/139249/chuck-m...

3. http://c2.com/cgi/wiki?ForthVsLisp

4. http://factorcode.org/littledan/dls.pdf

Absolutely. I was waiting for him to mention what I think of as the Unix/Plan 9/REST principle the whole time. IMO this is one of the most important concepts in computing, but too few people are explicitly aware of it. Unfortunately he didn't mention it.

Really what Victor is complaining about is the web. He doesn't like the fact that we are hand-coding HTML and CSS in vim instead of directly manipulating spatial objects. (Although HTML is certainly declarative. Browsers actually do separate intent from device-specific details. We are not writing Win32 API calls to draw stuff, though he didn't acknowledge that.)

It has been impressed on me a lot lately how much the web is simply a distributed Unix. It's built on a file-system-like addressing scheme. Everything is a stream of bytes (with some additional HTTP header metadata). There are a bunch of orthogonal domain-specific languages (HTML/CSS/etc vs troff/sed/etc). They both have a certain messiness, but that's necessary and not accidental.

This design is not accidental. It was taken from Unix and renamed "REST". The Unix/Plan 9/REST principle is essentially the same as the Alan Perlis quote: "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures." [1] The single data structure is the stream of bytes, or the file / file descriptor.

For the source code example, how would you write a language-independent grep if every language had its own representation? How about diff? hg or git? merge tools? A tool to jump to source location from compiler output? It takes multiple languages to solve any non-trivial problem, so you will end up with an M x N combinatorial explosion (N tools for each of M languages), whereas you want M + N (M languages + N tools that operate on ALL languages).
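The M + N point is concrete enough to demonstrate: because every language serializes to the same representation (text), one trivial grep-like tool covers them all. A sketch, with made-up snippets:

```python
# One "grep" works on source in any language, because the common
# representation is just lines of text.
def grep(pattern, text):
    return [line for line in text.splitlines() if pattern in line]

# Two different languages, one representation (snippets are made up).
c_source = "int main(void) {\n    return 0;\n}\n"
py_source = "def main():\n    return 0\n"

# The same tool, unchanged, works on both: M languages + N tools,
# instead of M x N language-specific tools.
assert grep("return", c_source) == ["    return 0;"]
assert grep("return", py_source) == ["    return 0"]
```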

Most good programming languages have the same flavor -- they are built around a single data structure. In C, this is the pointer + offset (structs, arrays). In Python/Lua it's the dictionary. In R it's the data frame; in Matlab it's the matrix. In Lisp/Scheme it's the list.

Java and C++ tend to have exploding codebase size because of the proliferation of types, which cause the M * N explosion. Rich Hickey has some good things to say about this.

I would posit that Windows and certain other software ecosystems have reached a fundamental scaling limit because of the O(M*N) explosion. Even if you have $100 billion, you can't write enough code to cover this space.

Another part of this is the dichotomy between visually-oriented people and language-oriented people. A great read on this schism is: http://www.cryptonomicon.com/beginning.html . IMO language-oriented tools compose better and abstract better than visual tools. In this thread, there is a great point that code is not 2D or 3D; it has richer structure than can really be represented that way.

I really like Bret Victor's talks and ideas. His other talks are actually proposing solutions, and they are astounding. But this one comes off more as complaining, without any real solutions.

He completely misunderstands the reason for the current state of affairs. It's NOT because we are ignorant of history. It's because language-oriented abstractions scale better and let programmers get things done more quickly.

That's not to say this won't change, so I'm glad he's working on it.

[1] http://www.cs.yale.edu/quotes.html

> Most good programming languages have the same flavor -- they are built around a single data structure. In C, this is the pointer + offset (structs, arrays). In Python/Lua it's the dictionary. In R it's the data frame; in Matlab it's the matrix. In Lisp/Scheme it's the list.

Lists are not very important for Lisp, apart from writing macros.

> Java and C++ tend to have exploding codebase size because of the proliferation of types, which cause the M * N explosion. Rich Hickey has some good things to say about this.

Haskell has even more types, and no exploding codebases. The `M * N explosion' is handled differently there.

> For the source code example, how would you write a language-independent grep if every language had its own representation? How about diff? hg or git? merge tools? A tool to jump to source location from compiler output? It takes multiple languages to solve any non-trivial problem, so you will end up with an M x N combinatorial explosion (N tools for each of M languages), whereas you want M + N (M languages + N tools that operate on ALL languages).

You'd use plugins and common interfaces. (I'm all in favour of text, but the alternative is still possible, if hard.)

> Lists are not very important for Lisp, apart from writing macros.

I'm not sure I agree. Sure, in most dialects you are given access to Arrays, Classes, and other types that are well used. And you can choose to avoid lists, just like you can avoid using dictionaries in Python, and Lua. But I find that the cons cell is used rather commonly in standard Lisp code.

You can't --really-- avoid dictionaries in Python, as namespaces and classes actually are dictionaries and can be treated as such.

In Lua, all global variables are inserted into the global dictionary _G, which is accessible at runtime. This means you can't even write a simple program consisting of only functions, because they are all added to and executed from that global dictionary.

There were also other languages that could have been mentioned. In JavaScript, for instance, functions and arrays are actually just special objects/dictionaries. You can call .length on a function, and you can add functions to the prototype of Array.
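The Python side of this is easy to verify from a REPL; the dict really is the namespace (a quick sketch, class name made up):

```python
# In Python, namespaces really are dictionaries underneath.
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)

# Instance attributes live in a plain dict...
assert p.__dict__ == {"x": 1, "y": 2}

# ...and mutating that dict is the same as setting an attribute.
p.__dict__["z"] = 3
assert p.z == 3

# Module-level scope is a dict too, much like Lua's _G.
assert globals()["Point"] is Point
```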

Those are just implementation details. They're not really relevant to the way your program is constructed or the way you reason about it.

I think Haskell handles the combinatorial explosion with its polymorphic types and higher-order abstractions. There are many, many types, but there are also abstractions over types. Java/C++ do not get that. `sort :: Ord a => [a] -> [a]` works for an infinite number of types that have an `Ord` instance.
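Python shows the same M + N flavor dynamically, though without Haskell's compile-time guarantee: one generic `sorted` covers every comparable element type.

```python
# One generic sort, many element types: the M + N shape that
# Ord-style polymorphism gives statically in Haskell.
assert sorted([3, 1, 2]) == [1, 2, 3]
assert sorted(["b", "a", "c"]) == ["a", "b", "c"]
assert sorted([(2, "x"), (1, "y")]) == [(1, "y"), (2, "x")]
```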

I don't agree that lists are not very important for Lisp, they're essential for functional programming as we know it today.

It's not an either-or. My prediction is that Victor's tools will be an optional layer on top of text-based representations. I'd go as far as to say that source code will always be represented as text. You can always build Visual Studio and IntelliJ and arbitrarily complex representations on top of text. It's just that it takes a lot of engineering effort, and the tools become obsolete as new languages are developed. We HAD Visual Studio for VB; it's just that everyone moved onto the web and Perl/Python/Ruby/JS, and they got by fine without IDEs.

There are people trying to come up with a common structured base for all languages. The problem is that if it's common to all languages, then it won't offer much more than text does. Languages are that diverse.

I don't want to get into a flame war, but Haskell hasn't passed a certain threshold for it to be even considered for the problem of "exploding code base size". That said, the design of C++ STL is basically to avoid the M*N explosion with strong types. It is well done but it also causes a lot of well-known problems. Unfortunately most C++ code is not as carefully designed as the STL.

>I don't want to get into a flame war, but Haskell hasn't passed a certain threshold for it to be even considered for the problem of "exploding code base size".

What threshold?

>It is well done but it also causes a lot of well-known problems.

Like what? And why do you assume those problems are inherent to having types?

Lists are not very important for Lisp, apart from writing macros.

Or in other words, you haven't quite grokked Lisp yet. The macros are the point!

>Java and C++ tend to have exploding codebase size because of the proliferation of types, which cause the M * N explosion.

I think Haskell and friends demonstrate that your explanation for Java and C++ "exploding" is incorrect. Haskell is all about types, lots of types, and making your own types is so basic and simple that it happens all the time everywhere. Yet there is no code explosion.

See my comment below about C++ STL. There are ways to avoid the combinatorial explosion with strong types, but there are also downsides.

@InclinedPlane: I would suggest you ask yourself one question: what is the difference between a programmer and a user? If I code in language XY, I'm already a consumer of a library called XY (and of the operating system and the global network). Most "programmers" today have nothing to do with memory (or the hardware, of course). The next big thing is never just a simple iteration of the current paradigm. The problem with many of the ideas he mentions is that they were not practical for a long time. On the other hand, much of computing simply has to do with conventions (protocols of different kinds).

Some ideas worth mentioning are in Gerry Sussman's video.

The link in the presentation.

To add to the UNIX thought, it goes beyond text configuration: the very design of system calls that can fail with the EINTR error code was a kind of worse-is-better design approach.

> Similarly, he casually mentions a programming language founded on unique principles designed for concurrency, he doesn't name it but that language is Erlang.

I haven't seen the talk yet and just browsed the slides, but just from your description Mozart/Oz could also fit the bill since it was designed for distributed/concurrent programming as well. Furthermore, Oz's "Browser" has some f-ing cool interactive stuff made possible due to the specific model of concurrency in the system. I must say that programming in Mozart/Oz feels completely different to Erlang, despite that fact that both have a common origin in Prolog.

<edit: adding more ..>

> He is stuck in a model where "programming" is the act of translating an idea to a machine representation. But we've known for decades that at best this is a minority amount of the work necessary to build software.

There is a school of thought whereby "programming" is the act of coding itself. To put it in other words, it is a process of manipulating a formal system to cause effects in the world. That system could be a linear stream of symbols, or a 2D space of tiles, or any of myriad forms, but in the end much of the "pleasure of programming" is attributable to the possibility of play with such a system.

To jump a bit ahead, consider the Leap Motion controller. What if we had a system built where we can sculpt 3D geometries and had a way to map these "sculptures" to programs for doing various things? I say this 'cos "programming", a lot of the times, feels like origami to me when I'm actually coding. Lisps, in particular, evoke that feeling strongly. So, I'm excited about Leap Motion for the potential impact it can have on "programming".

I think representations are important, and the "school of direct manipulation" misses this point. Even though we have great computing power at our fingertips today, we won't revert to using Roman numerals for numbers. One way to interpret the claims of proponents of direct manipulation is that programming ought to be a dialogue between a representation and the effect on the world, instead of a monologue or, at best, a long distance call.

Bret has expressed favour for dynamic representations in some of his writings, but I'm not entirely sure that they are the best for dynamic processes. There is nothing uncool about static representations like code. (Well, that's all we've had for ages now, anyway.) What we've been lacking is a variety of static representations, since language has been central to our programming culture and history. What would an alien civilization program in if they had multidimensional communication means?

To conclude, my current belief is that anyone searching for "the one language" or "the one system" to rule them all is trying to find Joshu's "Mu" by studying scriptures. Every system (a.k.a. representation) is going to have certain aspects that it handles well and certain others that it does poorly on. That ought to be a theorem or something, but I'm not sophisticated enough, yet, to formally articulate that :)

Ok, just saw the talk and Bret's certainly referring to Erlang here.

