
The Future of Programming - rpearl
http://worrydream.com/dbx/
======
ibdknox
It's a fun talk by Bret and I think he echoes a lot of the murmurings that
have been going around the community lately. It's funny that he latched onto
some of the same core tenets we've been kicking around, but from a very
different angle. I started with gathering data on what makes programming hard,
he looked at history to see what made programming different. It's a neat
approach and this talk laid a good conceptual foundation for the next step:
coming up with a solution.

In my case, my work on Light Table has certainly proven at least one thing:
what we have now is very far from where we could be. Programming is broken and
I've finally come to an understanding of how we can categorize and
systematically address that brokenness. If these ideas interest you, I highly
encourage you to come to my StrangeLoop talk. I'll be presenting that next
step forward: what a system like this would look like and what it can really
do for us.

These are exciting times and I've never been as stoked as I am for what's
coming, probably much sooner than people think.

EDIT: Here's the link to the talk [https://thestrangeloop.com/sessions/tbd--11](https://thestrangeloop.com/sessions/tbd--11)

~~~
robomartin
APL. Start there. Evolve a true language from that reference plane. By this I
mean one with a true domain-specific (meaning: programming) alphabet (symbols)
that encapsulates much of what we've learned in the last 60 years. A language
allows you to speak (or type), think, and describe concepts efficiently.

Programming in APL, for me at least, was like entering into a secondary zone
after you were in the zone. The first step is to be in the "I am now focused
on programming" zone. Then there's the "I am now in my problem space" zone.
This is exactly how it works with APL.

I used the language extensively for probably a decade and nothing has ever
approached it in this regard. Instead we are mired in the innards of the
machine, micromanaging absolutely everything with incredible verbosity and
granularity.

I really feel that for programming/computing to evolve to another level we
need to start losing some of the links to the ancient world of programming.
There's little difference between what you had to do with a Fortran program
and what you do with some of the modern languages in common use. That's not
the kind of progress that is going to make a dent.

~~~
ced
What makes APL different from, say, Lisp or Haskell? Do you have tutorials to
recommend?

~~~
GeneralMayhem
It's very hard to find good tutorials on APL because it's not very popular and
most of its implementations are closed-source and not compatible with each
other's language extensions, but it's most recognizable for its extreme use of
non-standard codepoints. Every function in APL is defined by a single
character, but those characters range from . to most of the Greek alphabet
(taking similar meanings as in abstract math) to things like ⍋ (sort
ascending). Wikipedia has a few fun examples if you just want a very brief
taste; you can also read a tutorial from MicroAPL at
[http://www.microapl.com/apl/tutorial_contents.html](http://www.microapl.com/apl/tutorial_contents.html)

It's mostly good for being able to express mathematical formulas with very
little translation from the math world - "executable proofs," I think the
quote is - and having matrices of arbitrary dimension as first-class values is
unusual if not unique. But for any practical purpose it's to Haskell what
Haskell is to Java.
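
If you just want a rough feel for that array style without hunting down an APL
interpreter, numpy borrows a lot of it (whole-array operations, no explicit
loops). A toy Python sketch, just to convey the flavor, not APL itself:

    import numpy as np

    # Whole-array thinking: operations apply to entire matrices at once,
    # no explicit loops.
    m = np.array([[3, -1, 4], [-1, 5, -9]])
    print(m[m > 0].mean())  # mean of the positive entries -> 4.0

    # An outer product / multiplication table, a classic APL one-liner.
    print(np.multiply.outer(np.arange(1, 5), np.arange(1, 5)))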

~~~
mietek
> But for any practical purpose it's to Haskell what Haskell is to Java.

Can you elaborate on this? As I understand, the core strengths of APL are
succinct notation, built-in verbs which operate on vectors/matrices, and a
requirement to program in a point-free style. All of this can be done in
Haskell.

~~~
jfarmer
A Java programmer unfamiliar with Haskell looks at a Haskell program and
shouts, "I can't make even the slightest bit of sense out of this!"

A Haskell programmer unfamiliar with APL looks at an APL program and...

~~~
asdasf
>A Haskell programmer unfamiliar with APL looks at an APL program and...

And says "what's the big deal?". That's exactly the question, what is the big
deal. APL isn't scary, I'm not shouting "I can't make sense of this", I am
asking "how is this better than haskell in the same way haskell is better than
java?".

~~~
jfarmer
I'm not really interested in debating the reaction of an imagined Haskell
programmer. I was just restating what the grandparent's analogy meant.

Your question is fine, but not what he meant by the analogy.

~~~
asdasf
I'm not imagined, I am real. I know you were restating the analogy; the
problem is that the analogy is wrong. I can't find anything about APL that a
Haskell developer would find new or interesting or frightening or anything
like that.

~~~
jfarmer
Ok.

------
humanrebar
I very much enjoyed Bret's talk, but the visual programming part of it was
rather half-baked. I say this as someone who has done visual coding
professionally in the past. People have been trying to crack the "drawing
programs" nut for decades. It's not a forgotten idea. It's so not forgotten
that there is a Wikipedia page listing dozens of attempts over the years:
[http://en.wikipedia.org/wiki/Visual_programming_language](http://en.wikipedia.org/wiki/Visual_programming_language).

The reason we still code in text is because visual programming is not a hard
problem -- it's a dozen hard problems. Think about all of the tools we use to
consume, analyze, or produce textual source code. There are code navigators,
searchers, transformers, formatters, highlighters, versioners, change
managers, debuggers, compilers, analyzers, generators, and review tools. All
of those use cases would need to be fulfilled. Unlike diagrams, text is a
convenient serialization and storage format; you can leverage the Unix
philosophy to use the best-of-breed tools you need. We don't have a
lingua franca for diagrams like we do for text files.

It's not due to dogma or laziness that we use text to write code. It's because
the above list of things is not trivial to get right and making them work on
pictures is orders of magnitude harder than making them work with text.

EDIT: Wordsmithing

~~~
anigbrowl
_It's so not forgotten that there is a Wikipedia page listing dozens of attempts over the years: [http://en.wikipedia.org/wiki/Visual_programming_language](http://en.wikipedia.org/wiki/Visual_programming_language)_

A mischaracterization. Software like Reaktor is extremely successful in its
domain and widely deployed:
[http://www.native-instruments.com/en/products/komplete/synth...](http://www.native-instruments.com/en/products/komplete/synths-samplers/reaktor-5/overview/modify-construct/)
as is Max/MSP:
[http://cycling74.com/videos/product/](http://cycling74.com/videos/product/)

_We don't have a lingua franca for diagrams like we do for text files._

What is UML, then? If you feel stuck with this then maybe you need to look
outside the text = code bubble and get some input on tool design from other
sources. I agree that text is a convenient serialization and storage format,
but it's a terrible design and analysis medium.

I mean, consider CSound, which is a tool for writing music with computers that
has a venerable heritage going back to the 1970s. You have one set of code for
defining the characteristics of the sound, and another for defining the
characteristics of the notes you play with those sounds:
[http://www.csounds.com/man/qr/score.htm](http://www.csounds.com/man/qr/score.htm)
and
[http://www.csounds.com/manual/html/index.html](http://www.csounds.com/manual/html/index.html)

CSound is a moderately good teaching tool, and given its heritage it's an
impressive piece of technology. But _nobody_ writes music in CSound except a
few computer music professors and the students in their departments who have
to do so as part of their assignments, and 99% of music composed in CSound is a)
dreadful and b) could have been done much faster on either a modular
synthesizer or with Max/MSP. Electronic musicians feel the same way about
CSound that you as a programmer would feel about an elderly relative that
keeps talking about when everything was done with vacuum tubes and toggle
switches...you respect it but it seems laughably primitive and has nothing to
do with solving actual problems. The very few people that need low-level
control on specific hardware platforms work in C or assembler.

I think this is pretty relevant here because one of Bret Victor's more
impressive achievements is having written the operating software for a series
of synthesizers from Alesis. I'd be pretty astonished if
he even considered CSound for the task.

~~~
humanrebar
Far from being stuck in a bubble, I actually spent a couple years developing
code in a UML-driven development environment (as in, I spent my days drawing
UML diagrams that automatically turned into executable code). First of all,
you cannot write any nontrivial program in UML alone. It is not nearly
specific enough. UML is to a working program as a table of contents is to a
technical manual. And in case you think I'm extrapolating from one bad
experience, I've also used LabVIEW and have seen the parallel difficulties in
that language.

Now, I agree that higher levels of abstraction will be needed in the future,
but I disagree that visual programming is an obviously superior abstraction.
In fact, I believe that people have been earnestly barking up that tree for
decades with little success for reasons unrelated to old-fashioned attitudes.
There are practical and technical reasons why developing visual programming
tools and ecosystems will always be more difficult than developing text-based
ones.

Take merging for example. Merging two versions of a source file is many times
over a solved problem (not that there aren't new developments to be made). In
contrast, merging two versions of a UML diagram is very much a manual process
(to the extent that it's possible at all). Now consider creating a change
management tool that allows you to branch and merge UML diagrams. This is
orders of magnitude harder yet. These are essential and straightforward use cases that
are much more complex in a visual medium. Without these basic features, visual
programming will not scale well to even medium-size teams.
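
To make the asymmetry concrete: a usable diff of two versions of a text file
falls out of almost any language's standard library, while there is no
comparable off-the-shelf primitive for diagrams. A trivial Python sketch,
purely illustrative:

    import difflib

    v1 = ["def area(r):\n", "    return 3.14 * r * r\n"]
    v2 = ["import math\n", "def area(r):\n", "    return math.pi * r ** 2\n"]

    # Line-oriented diffing (and, by extension, merging) is a solved,
    # scriptable problem for text; there is no standard equivalent for diagrams.
    for line in difflib.unified_diff(v1, v2, "area_v1.py", "area_v2.py"):
        print(line, end="")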

I can go into more detail about issues with visual programming if I still
haven't made my case. And I would love to hear from people with visual
programming experience that have contradicting opinions. It's always possible
that I missed something.

~~~
anigbrowl
I appreciate the additional context and totally get where you're coming from.
The only nitpick I'd make is this:

 _Merging two versions of a source file is many times over a solved problem_

Granted - but isn't this also a limiting factor? It's not that I don't think
anything should ever be reducible to code form, but why is it that a visual
mapping of a complete program isn't a standard everyday tool? I mean, it's all very
well that we have syntax highlighters showing keywords, variables and so on,
but why is it that when I open a program there isn't a tool to automatically
show me loops, arrays and so on?

Loops are one of the simplest programming structures; 90% of loops look like:

    
    
      LOOP foo FROM bar to baz:
        something
        something
        something
        profit
        foo = foo + 1
      END LOOP
    

I mean, software engineering shouldn't be about _syntax_ , it should be about
_structure_ , and yet there don't seem to be many tools around that open up a
source file and build branching diagrams and loop modules automatically. Why
is that? Why don't we even have structural highlighting rather than syntax
highlighting?
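
The raw structural information isn't even hard to get at, which makes the
absence of such tools all the more puzzling. As a rough illustration (a
minimal Python sketch, assuming the file being inspected is itself Python),
listing the loops and branches a structural highlighter could start from:

    import ast
    import sys

    # Walk the syntax tree of a Python source file and report its loops
    # and branches with their line numbers.
    tree = ast.parse(open(sys.argv[1]).read())
    for node in ast.walk(tree):
        if isinstance(node, (ast.For, ast.While)):
            print(f"loop   at line {node.lineno}")
        elif isinstance(node, ast.If):
            print(f"branch at line {node.lineno}")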

~~~
RogerL
Can you elaborate? I see the structure, in the indenting. My IDE (Visual
Studio) has little lines and + boxes that allow me to collapse and expand code
like this. It's useless, because for the most part I do care what "something"
is, and the collapse is not replaced with a nice pseudocode "frange the
kibbleflits" statement. I have tools that can generate diagrams showing me
class hierarchies, call stacks, and so on. I rarely (almost never) find them
useful. Maybe you have something different in mind?

------
stiff
In 2040 someone will discover Haskell, shed tears over why C#++.cloud is so
widespread in the industry instead, and use that to conclude the world is in a
sorry state. Seriously, don't compare what was published in papers 50 years
ago with what business uses today; compare it with what is in papers now.
There are lots of interesting things going on all the time; when was the last
time you even checked? Probabilistic programming? Applications of category
theory to functional programming? Type theory? Software transactional memory?

Woody Allen did this great movie some time ago, "Midnight in Paris", where the
main character, living in present times, dreams of moving back in time to the
1920s as the best time for literature ever. When the occasion to really go
back appears though, he discovers the writers of the 1920s thought the best
literature was done in the 1890s, and so he has to go back again, then again, ...
This talk is like that: sentiment blinding a sober assessment.

~~~
voltagex_
You can tear me to shreds for this, but until I can code Haskell in Visual
Studio, it's going to be hard for it to gain any traction in $BIGCORP.

~~~
squidsoup
F# is a first-class citizen in Visual Studio, yet it doesn't appear to have
gained much traction in 'enterprise software development'.

~~~
voltagex_
I'm honestly not sure why that is, but it is a completely different way of
thinking to code in functional languages.

------
Kronopath
I just watched most of this talk while a large C++ codebase was compiling, in
the midst of trying to find one of many bugs caused by multiple interacting
stateful systems, on a product with so much legacy code that it'll be lucky if
it's sustainable for another ten years.

Like Bret's other talk, "Inventing on Principle", this talk has affected me
deeply. I don't want this anymore. I want to invent the future.

------
michaelrbock
A quote from the footnotes:

" _' The most dangerous thought you can have a creative person is to think you
know what you're doing.'_

It's possible to misinterpret what I'm saying here. When I talk about not
knowing what you're doing, I'm arguing against "expertise", a feeling of
mastery that traps you in a particular way of thinking.

But I want to be clear -- _I am not advocating ignorance_. Instead, I'm
suggesting a kind of informed skepticism, a kind of humility.

Ignorance is remaining willfully unaware of the existing base of knowledge in
a field, proudly jumping in and stumbling around. This approach is fashionable
in certain hacker/maker circles today, and it's poison."

------
oh_teh_meows
I think much of the motivation for developing new paradigms stems from growing
frustration with tool-induced blindness, for lack of a better term. We spend
much of our time chasing that seg-fault error instead of engineering the
solution to the problem we're trying to solve.

A new programming paradigm allows us to reframe a problem in a different
space, much like how changing a matrix's basis changes its apparent
complexity, so to speak.

The ultimate goal, I think, is to come up with a paradigm that would map
computational problems, without loss of generality, to what our primate brains
would find intuitive. This lowers our cognitive burden when attempting to
describe a solution, and also allows us to see more clearly what the cause of a
problem may be. For example, if you're a game developer, and you find some
rendering problems due to objects intersecting each other, but you're not
sure where it happens, instead of poring over a text dump of numerical vector
coordinates it'd be better to visualize them. The abnormality would present
itself clearly, even to a layman's eyes. I suspect this is what Victor is
trying to get at. Imagine, if you will, that you have a graphical
representation of your code, and a piece of code that could potentially
segfault shows up as an irregularity of some form (different textures,
different color, different shape, etc), so you can spot them and fix them
right away. The irregularity is not a result of some static error analysis,
but is instead the result of some emergent property resulting from graphical
presentation rules (mapping from problem space to graphic space). We're good
at spatial visualization, so I wonder if it's valid to come up with a
programming language that would leverage more of our built-in capability in
that area. This may seem like wishful thinking or even intractable (perhaps
due to a certain perception limitation...which we have to overcome using more
cognitive resources), but I certainly hope we'll get there in our life time.

~~~
ericHosick
> The ultimate goal, I think, is to come up with a paradigm that would map
> computational problems, without loss of generality, to what our primate
> brains would find intuitive.

I really agree with this statement.

------
jingo
At the end of the video he warns of the dangers of "dogma".

He looks really nervous and impatient in this talk. He seems afraid that it
won't be well received. If so, it is interesting to note that this is what
dogma in fact leads to... repression of new ideas, fear of free thinkers and
the stagnation of true scientific progress. It means guys like Bret Victor
will feel awkward giving a talk that questions the status quo.

"Breakthroughs" do not happen when we are all surrounded by impenetrable walls
of dogma. I wonder if we today could even recognize a true breakthrough in
computing if we saw one. The only ones I see are from the era Bret is talking
about. What happens when those are forgotten?

My friends, there is a simple thing I learned in another discipline outside of
computing, where I witnessed people doing what others thought impossible: the
power of irreverence. This is where true innovation comes from.

It means not only questioning whether you know what you are doing, but
questioning whether others do. That frees you up to work on what you want to
work on, even when it is in a different direction than everyone else. That is
where innovation comes from: irreverence.

------
agentultra
One thing I can't help noticing is that the majority of discussions
regarding this talk focus on the examples presented.

I thought it was pretty clear that the talk _wasn't_ about whether
_constraint-based solvers_ and _visual programming environments_ were the
"future of programming." It was a talk about _dogma_. Bret points out that
none of the examples he's mentioned are inherently important to what he was
trying to get across: they were just examples. The point he was trying to
elucidate was that our collective body of knowledge limits our ability to see
new ways of thinking about the problems we face.

It is at least somewhat related to the adage, _when you have a hammer every
problem looks like a nail._ He's just taking a historical view and using irony
to illustrate his point. When computer technology reached a certain level of
power there was a blossoming garden of innovative ideas because the majority
of people didn't know _what you cannot do_.

What I think he was trying to say, and this is partly coloured by my own
beliefs, is that beginner's mind is important. Dogma has a way of narrowing
your view of the world. Innovation is slow and incremental but there's also a
very real need to be wild and creative as well. There's room for both and
we've just been focusing on one rather than the other for the last 40 years.

~~~
humanrebar
In this discussion I've been trying to make the point that he's missed the
mark even in the idea that developer attitude is the inherent barrier
preventing these breakthroughs. I believe he's stealing bases here. At least
with respect to visual programming, there is objective evidence (that is
easily google-able) that this problem is actively being tackled but with very
little success. Active and recently failed projects seem to be glaring
counterexamples to his broader point, at least with respect to the visual
programming domain.

I suspect that my point about presuming developer attitudes are the biggest
problem here can be more broadly applied, though I do not have enough
experience with constraint-based solvers and his other examples to do more
than wildly speculate.

------
bsaul
Enough already! Could anyone with $100 million give this guy a team of 100
PhDs to create the new software revolution?

This guy is not a good or great or fabulous computer scientist; this guy is
something else entirely. He's a true creative Thinker. He doesn't have a
vision, he's got tons of them. Every subject he starts thinking about, he
comes up with new ideas.

He shouldn't be doing presentations, he should run a company.

~~~
rdouble
Based on his personal writings, it seems like he prefers to be left alone to
work on his ideas. It does not seem like he wants to run a company, or really
even work with others.

~~~
TeMPOraL
So maybe just give him the $100M and leave him alone to do whatever he feels
like doing. I'm pretty sure something good could come out of it.

------
pnathan
Very good summary of the state of the art in the early 70s.

His analysis of the "API" problem reminds me of some of the ideas Jaron Lanier
was floating around about ten years ago. I can't recall the name of it, but it
was some sort of biologically inspired handshake mechanism between software
'agents'.

What I think such things require is an understanding of what is lacking in
order to search for it; as near as I can tell, that requires some fashion of
self-awareness. This, as far as I can conceive, recurses into someone writing
code, whether it be Planner or XML. But my vision is cloudy on such matters.

I should note that I think Bret is one of the leading thinkers of his (my)
generation, and have a lot of respect for his ideas.

~~~
AsymetricCom
I think you might be thinking of the RNA metaprotocol or Recursive Network
Architecture.

~~~
pnathan
Phenotropics, actually.

[http://discovermagazine.com/2007/jul/jaron2019s-world](http://discovermagazine.com/2007/jul/jaron2019s-world)

I dug around for a bit when I came across the idea but never could figure out
where the reification of the idea went.

I'll eyeball RNA, but at first glance it doesn't appear quite the same idea.

------
InclinedPlane
An interesting talk, and certainly entertaining, but I think it falls very
short. Ultimately it turns into typical "architecture astronaut" navel-gazing.
He focuses on the shortcomings of "traditional programming" while at the same
time imagining only the positive aspects of untried methods. Frankly, such an
approach is childish and unhelpful. His closing line is a good one but it's
also trite, and the advice he seems to give leading up to it (i.e. "let's use
all these revolutionary ideas from the '60s and '70s and come up with even
more revolutionary ideas") is not practical.

To pick one example: he derides programming via "text dump" and lauds the idea
of "direct manipulations of data". However, there are many very strong
arguments for using plain-text (read "The Pragmatic Programmer" for some very
excellent defenses of such). Moreover, it's not as though binary formats and
"direct manipulations" haven't been tried. They've been tried a great many
times. And except for specific use cases they've been found to be a horrible
way to program, with a plethora of failed attempts to show for it.

Similarly, he casually mentions a programming language founded on unique
principles designed for concurrency; he doesn't name it, but that language is
Erlang. The interesting thing about Erlang is that it is a fully fledged
language today. It exists, it has a ton of support (because it's used in
industry), and it's easy to install and use. And it also does what it's
advertised to do: excel at concurrency. However, there aren't many practical
projects, even ones that are highly concurrency dependent, that use Erlang.
And there are projects, such as CouchDB, which are based on Erlang but are
moving away from it. Why is that? Is it because the programmers are afraid of
changing their conceptions of "what it means to program"? Obviously not, they
have already been using Erlang. Rather, it's because languages which are
highly optimized for concurrency aren't always the best practical solution,
even for problem domains that are highly concurrency bound, because there are
a huge number of other practical constraints which can easily be just as or
more important.

Again, here we have an example of someone pushing ideas that seem to have a
lot of merit in the abstract but in the real world meet with so much
complexity and so many roadblocks that they prove to be unworkable most of the time.

It's a classic "worse is better" scenario. His insult of the use of markup
languages on the web is a perfect example of his wrongheadedness. It took me a
while to realize that it was an insult because in reality the use of "text
dump" markup languages is one of the key enabling features of the web. It's a
big reason why it's been able to become so successful, so widespread, so
flexible, and so powerful so quickly. But by the same token, it's filled with
plenty of ugliness and inelegance and is quite easy to deride.

It's funny how he mentions Unix with some hints of how awesome it is, or will
be, but ignores the fact that it's also a "worse is better" sort of system.
It's based on a very primitive core idea, everything is a file, and is very
heavily reliant on "text dump" based programming and configuration. Unix can
be quite easily, and accurately, derided as a heaping pile of text dumps in a
simple file system. But that model turns out to be so amazingly flexible and
robust that it creates a huge amount of potential, which has been realized
today in a Unix-heritage OS, Linux, that runs on everything from watches to
smartphones to servers to routers and so on.

Victor highlights several ideas which he thinks should be at the core of how
we advance the state of the art in the practice of programming (e.g. goal
based programming, direct manipulations of data, concurrency, etc.) but I
would say that those issues are far from the most important in programming
today. I'd list things such as development velocity and end-product
reliability as being far more important. And the best ways to achieve those
things are not even on his list.

Most damningly, he falls into his own trap of being blind to what
"programming" can mean. He is stuck in a model where "programming" is the act
of translating an idea to a machine representation. But we've known for
decades that at best this is a minority of the work necessary to build
software. For all of Victor's examples of the willingly blind programmers of
the 1960s who saw things like symbolic coding, object-oriented design and so
forth as "not programming" and more like clerical work, he makes fundamentally
the same error. Today testing, integration, building, refactoring and so on
are all hugely fundamental aspects of prototyping and critically important to
end-product quality as well as development velocity. And increasingly tooling
is placing such things closer and closer to "the act of programming", and yet
Victor himself still seems to be quite blind to the idea of these things as
"programming". Though I don't think that will be the view among programmers a
few decades down the road.

~~~
astral303
Brilliant analysis! Navel gazing indeed. Typical NCA (Non-Coding Architect)
stuff.

This reminds me of the UML and Model-Driven Architecture movement of days gone
by, where architect astronauts imagined a happy little world where you could
just get away from that dirty coding, join some boxes with lines in all sorts
of charts, and then have that generate your code. And it would produce code
you actually want to ship and that does what you want it to do.

This disdain for writing code is not new. This classic essay about "code as
design" from 1992 (!) is still relevant today:

[http://www.developerdotstar.com/mag/articles/reeves_original...](http://www.developerdotstar.com/mag/articles/reeves_originalletter.html)

~~~
InclinedPlane
In the presenter's worldview it seems as though a lot of subtle details are
ignored or just not seen, whereas in reality seemingly subtle details can
sometimes be hugely important. Consider Ruby vs Python, for example. From a
10,000 foot view they almost look like the same language, but at a practical
level they are very different. And a lot of that comes down to the details.
There are dozens of new languages from the last few decades or so that share
almost all of the same grab bag of features in a broad sense but that, where
the rubber meets the road, end up being very different languages with very
different strengths. Consider, for example, C# vs Go vs Rust vs CoffeeScript
vs Lua. They are all hugely different languages but they are also very closely
related languages.

I suspect that the killer programming medium of 2050 isn't going to be some
transformatively different methodology for programming that is unrecognizable
to us, it's going to be something with a lot of similarities to things I've
listed above but with a different set of design choices and tradeoffs, with a
more well put together underlying structure and tooling, and likely with a few
new ways of doing old things thrown in and placed closer to the core than
we're used to today (my guess would be error handling, testing, compiling,
package management, and revision control).

There is just so much potential in plain-Jane text-based programming that I
find it odd that someone would so easily clump it into a single category and
write it all off at the same time. It's a medium that can embrace everything
from Java on the one hand to Haskell or Lisp on the other; we haven't come
anywhere close to reaching the limits of expressiveness available in
text-based programming.

~~~
bonaldi
You can cast this entire comment in terms of hex/assembler vs C/Fortran and
you get the same logical form.

We haven't come anywhere close to reaching the limits of expressiveness in
assembler either, yet we've mostly given up on it for better things.

Try arguing the devil's advocate position. What can you come up with that
might be better than text-based programming? Nothing? We're really in the best
of all possible worlds?

------
rpearl
"The most dangerous thought you can have as a creative person is to think you
know what you're doing."

~~~
tmoertel
Or as the physicist and Bayesian pioneer E. T. Jaynes wrote:

 _In any field, the Establishment is seldom in pursuit of the truth, because
it is composed of those who sincerely believe that they are already in
possession of it._

From _Probability Theory: The Logic of Science_ , E.T. Jaynes, 2003.

------
tel
> Ignorance is remaining willfully unaware of the existing base of knowledge
> in a field, proudly jumping in and stumbling around. This approach is
> fashionable in certain hacker/maker circles today, and it's poison.

> Learn tools, and use tools, but don't accept tools. Always distrust them;
> always be alert for alternative ways of thinking. This is what I mean by
> avoiding the conviction that you "know what you're doing".

These two statements have done a better job explaining my feelings on
expertise than almost any of my attempts. Thank you, Bret.

~~~
ryanSrich
> Ignorance is remaining willfully unaware of the existing base of knowledge
> in a field, proudly jumping in and stumbling around. This approach is
> fashionable in certain hacker/maker circles today, and it's poison.

Can anyone expand on this statement?

I'm interpreting it as "don't try new things because you don't know what
you're doing", which just so happens to feel like the exact opposite of what
Bret is trying to convey.

~~~
tel
As I'm interpreting it, it's a cautionary statement against worshipping
ignorance. It's brave and difficult to do something that's dissimilar to the
ways you've learned and become powerful through performing. It's foolish to
dive in without learning all that you can about what those who have been here
before discovered.

I don't think it's cautioning against diving in prematurely. It's cautioning
against thinking you'll do better than those who have come before by pure
virtue of not knowing what they've done.

------
hcarvalhoalves
Love Bret's style. Also love how the crowd went silent after the "API" slide.

~~~
kintamanimatt
I don't know that it was a bad thing though. As soon as I saw that I started
to think about how that might be possible, or even if it could be possible.
Fundamentally there has to be some kind of common discovery protocol
underlying it; it just doesn't appear to be possible (yet) to have two unknown
systems talk to each other with an unknown protocol. That'd be like two
monoglots, a German and a Russian speaker, figuring out how to talk fluently
with each other. I suppose it would be possible using gestures and props, but
these non-verbal cues could themselves be thought of as a kind of discovery
protocol for figuring out the more efficient protocol that enables verbal
communication.

~~~
peregrine
You should look into Hypermedia APIs. The entire point is to have a
discoverable API where the developer doesn't need to know low-level details.
Theoretically, you could write a library to parse, adapt, and act on another
API.

[http://stackoverflow.com/questions/15214526/why-hypermedia-a...](http://stackoverflow.com/questions/15214526/why-hypermedia-api)
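
Roughly, the idea is that each response advertises what you can do next, so
the client follows links by name instead of hard-coding URLs. A toy Python
sketch; the entry-point URL and the HAL-style "_links" layout here are made up
for illustration:

    import requests

    # Hypermedia style: start at an entry point and follow links the
    # server advertises, rather than hard-coding each endpoint.
    API_ROOT = "https://api.example.com/"          # hypothetical entry point

    root = requests.get(API_ROOT).json()
    orders_url = root["_links"]["orders"]["href"]  # assumes HAL-style links
    print(requests.get(orders_url).json())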

------
ionforce
This talk is so fluffy and empty. Surely there must be someone else who did
not enjoy it.

~~~
cia_plant
Yeah I felt like this had even less content than most of Bret's other talks,
which are always kind of short on specifics.

~~~
dirtyaura
Huh? Bret's talks short on specifics?

His talks constantly feature _working_ demos of the ideas he is pushing,
subtly demonstrating a lot of well-thought-out interaction design details. If
you watch his "Media for Thinking the Unthinkable"
([http://vimeo.com/67076984](http://vimeo.com/67076984)) it's a gold mine of
specifics. I've watched it several times and always pick some new ideas for my
UI design work.

The difference from a run-of-the-mill talk is that he is _showing_ the
details, not telling them.

~~~
RogerL
Thanks for the link.

I have the exact opposite reaction to the video. He is solving toy problems
with toy ideas. I think his page Kill Math
([http://worrydream.com/KillMath/](http://worrydream.com/KillMath/))
illuminates this point. I don't think he can think symbolically very well (no
insult intended, I can't think visually very well). There are certainly times
where graphing things makes a lot of sense, but to throw out analytical math?
Come on. By and large he is getting the "feel" of a system, but he cannot
really reason about it, prove things about it, extend it, or design new
systems with vision (there are obvious counterexamples).

In another video he shows an IDE where he scrubs constants, and it changes the
behavior of the concurrently running program (changing the size of an ellipse
or tree branch). It's neat. But, again, toy problem. First of all, we
shouldn't be programming with constants. Second, anything complicated will
have relationships between the data - scrubbing one value will just end up
giving you nonsense. Third, it just doesn't make any sense in many contexts. I
work in computer vision currently, and I can't think of anything but the most
superficial way I could incorporate scrubbing. He made some comment about how
no one could know what a bezier curve is unless they had a nice little picture
of it in their IDE to match the function call. That's silly. I actually use
splines and other curve fitting in my work, and I have to actually understand
the math. Do I use cubic splines, a Hermite interpolation, bezier, or
something else? I don't decide that by drawing some pictures - the search
space is too big, I'll never cover all the possibilities. I have to do math to
figure out the best choice.

In that same video he went on to demonstrate programming binary search using
visual techniques. Unfortunately he wrote a buggy implementation, and stood
there exclaiming how his visual technique found a different bug. It did, a
super trivial one, but it completely failed to reveal the deeper issue. And,
there was no real way for his visual method to have found it.

Visualization is a very powerful tool, but it is one tool in the toolchest.
There is a scene in the movie Contact with Jodie Foster using headphones to
listen to the SETI signal. We all know that is bogus - the search space is far
too vast for aural search to work.

His ideas are terribly wrongheaded. Make interfaces to help give us
intuition? Absolutely! Use graphics where analytics fail. Of course! But don't
conclude that math is a "freakish knack", as he does, or that math is some
sort of temple (he calls mathematicians "clergy", and then goes on to throw in
an insult that many are just pretending to understand).

I posted in another comment how crazy it would be to have a calculator that
scrubs. Well, he shows one on that page. Really? The day bridge designers
start using scrubbing apps to design our bridges is the day I'm never crossing
a bridge again.

Edit to add: his website is another example of this. I can't find anything on
it. There are a bunch of pictures, and my eyes saccade around, but what is
here, what is his point? I dunno. I can click, and click, and click, and start
to get an idea, but there is always more hidden away behind pictures. It's
barely workable as a personal website, and would be a disaster as a way to
organize anything larger. I don't mean to pick on it - as an art project or
glimpse into how he thinks, it's great. I just point out it illustrates (pun
kind of intended) the strengths and limits of visual presentation. You tell
me, for example, without grep or google search, whether he has written about
coffee.

If you disagree, please reply in pictures only! ;)

------
xlayn
I started wondering where this could be applied in my particular field. If I
look at my everyday workflow, it seems like I'm constrained to all the
scenarios he mentions, and I'm aware of how limiting that is given the
technology and multiple cores... I'm talking about working on files, not
interacting visually with the computer, not letting the computer figure things
out... not working in parallel

and then I notice...

how I deliver software to a distributed environment of virtual machines, some
running on the same CPU, some in boxes with their own, and realize that maybe
the everyday CPU you buy for your everyday box is one of the small CPUs in the
CPU grid he shows... the network between the CPUs is the lines that connect
them... and I notice that I don't remember the last time I wrote a TCP stack
to connect those machines... so somehow they are figuring out on their own how
to talk to each other (notice how this is different from having a goal and
trying to achieve it). I still think we are a long way from this happening
(probably luckily for us)...

so: what if everything he mentions here does somehow exist, but it requires a
shift in the way you see things?

------
_pius
This is a brilliant, trenchant indictment of the state of our industry.

------
ibudiallo
I am halfway through and I have never been disappointed by Bret Victor. He
comes from the future.

~~~
grey-area
Or, in this case, the past.

------
0xdeadbeefbabe
Subtext seemed like a good abstraction. What happened there?
[http://www.oscon.com/oscon2010/public/schedule/detail/15484](http://www.oscon.com/oscon2010/public/schedule/detail/15484)

------
Knotwilg
It will not help move the discussion forward to behave like fans and treat any
substantial critique as "you are one of those old-fashioned, mindless
programming dudes".

On the other hand, in the light of Victor's achievements in industry
(including "shipping" stuff) one cannot dismiss him as a smooth talking TEDdie
either.

Victor has provided many crafted examples of what can be achieved in the
fields of engineering, mathematics and programming, or any field of science
and technology, if the feedback loop between the tool and its user is
improved.

Indeed, this 30-minute talk does not compare to an industrial delivery. It has
some theatre and some deliberate exaggerations or unfair treatment of how
things actually evolved. Such is the nature of talks.

I do not think he sees the current state of affairs as a great mistake. He
will surely acknowledge all practical circumstances and conceptual challenges
that have made certain inferior designs survive while superior ones did not
materialize.

The message is: we shouldn't accept this state of affairs as final or as one
that can only be marginally improved. It can still be radically improved. The
industry is still fresh - even ideas from the 60s are still valid and under-exploited.

I see his critique as a positive statement of hope and encouragement, not as a
finger pointed at all you silly programmers.

------
ThomPete
"they didn't know what they were doing, so they tried everything"

------
rasur
A whole bunch of interesting stuff in there. Undoubtedly I shall spend most of
my forthcoming holiday reading up on papers and other works as old as I am and
realising - yet again - everything old is new again (except for the bits that
have been willfully ignored in favour of being reinvented, badly ;) )

------
artagnon
The art of programming is evolving steadily; more powerful hardware becomes
available, and compiler technology evolves.

Of course there will be resistance to change, and new compilers don't mature
overnight. At the end of the day, it boils down to what can be parsed
unambiguously, written down easily by human beings, and executed quickly. If
you get off on reading research papers on dependent types and writing Agda
programs to store in your attic, that's your choice; the rest of us will be
happily writing Linux in C99 and powering the world.

Programming has not fundamentally changed in any way. x86 is the clear winner
as far as commodity hardware is concerned, and serious infrastructure is all
written in C. There is a significant risk to adopting any new language; the
syntax might look pretty, but you figure out that the compiler team consists
of incompetent monkeys writing leaking garbage collectors. We are pushing the
boundaries every day:

\- Linux has never been better: it continues to improve steadily (oh, and at
what pace!). New filesystems optimized for SSDs, real virtualization using
KVM, an amazing scheduler, and new system calls. All software is limited by
how well the kernel can run it.

\- We're in the golden age of concurrency. Various runtimes are trying various
techniques: erlang uses a message-passing actor hammer, async is a bit of an
afterthought in C#, Node.js tries to get V8 to do it leveraging callbacks,
Haskell pushes forward with a theoretically-sound STM, and new languages like
Go implement it deep at the scheduler-level.

\- For a vast majority of applications, it's very clear that automatic memory
management is a good trade-off. We look down upon hideous nonsense like the
reference counter in CPython, and strive to write concurrent moving GCs. While
JRuby has the advantage of piggy-backing on a mature runtime, the MRI
community is taking GC very seriously. V8 apparently has a very sophisticated
GC as well, otherwise JavaScript wouldn't be performant.

\- As far as typing is concerned, Ruby has definitely pushed the boundaries of
dynamic programming. JavaScript is another language with very loosely defined
semantics that many people are fond of. As far as typed languages go, there
are only hideous languages like Java and C#. Go seems to have a nice flavor of
type inference to it, but only time will tell if it'll be a successful model.
Types make for faster code, because your compiler has to spend that much less
time inspecting your object: V8 does a lot of type inference behind the scenes
too.

\- As far as extensibility is concerned, it's obvious that nothing can beat a
syntax-less language (aka. Lisp). However, Lisps have historically suffered
from a lack of typesystem and object system: CLOS is a disaster, and Typed
Racket seems to be going nowhere. Clojure tries to bring some modern flavors
into this paradigm (core.async et al), while piggy-backing on the JVM. Not
sure where it's going though.

\- As far as object systems go, nothing beats Java's factories. It's a great
way to fit together many shoddily-written components safely, and Dalvik does
exactly that. You don't need a package-manager, and applications have very
little scope for misbehaving because of the suffocating typesystem. Sure, it
might not be pleasant to write Java code, but we really have no other way
of fitting so many tiny pieces together. It's used in enterprise for much the
same reasons: it's too expensive to discipline programmers to write good code,
so just constrain them with a really tight object system/typesystem.

\- As far as functional programming goes, it's fair to say that all languages
have incorporated some amount of it: Ruby differentiates between gsub and
gsub! for instance. Being purely functional is a cute theoretical exercise, as
the scarab beetle on the Real World Haskell book so aptly indicates.

\- As far as manual memory management goes (when you need kernels and web
browsers), there's C and there's C++. Rust introduces some interesting pointer
semantics, but it doesn't look like the project will last very long.

Well, that ends my rant: I've hopefully provided some food for thought.

~~~
munificent
> We're in the golden age of concurrency. Various runtimes are trying various
> techniques: erlang uses a message-passing actor hammer, async is a bit of an
> afterthought in C#, Node.js tries to get V8 to do it leveraging callbacks,
> Haskell pushes forward with a theoretically-sound STM, and new languages
> like Go implement it deep at the scheduler-level.

No, a better analogy is that we're in the Cambrian explosion of concurrency.
We have a bunch of really strange lifeforms all evolving very rapidly in weird
ways because there's little selection pressure.

Once one of these lifeforms turns out to be significantly better, then it will
outcompete all of the others and _then_ we'll be in something more like a
golden age. Right now, we still clearly don't know what we're doing.

~~~
artagnon
We've been doing concurrency for many years now; it's called pthreads. Large
applications like Linux, web browsers, webservers, and databases do it all the
time.

The question is: how do we design a runtime that makes it harder for the user
to introduce races without sacrificing performance or control? One extreme
approach is to constrain the user to write only purely functional code, and
auto-parallelize everything, like Haskell does (it's obvious why this is a
theoretical exercise). Another is to get rid of all shared memory and restrict
all interaction between threads to message passing like Erlang does
(obviously, you have to throw performance out the window). Yet another
approach is to run independent threads and keep polling for changes at a
superficial level (like Node.js does; performance and maintainability are
shot). The approach that modern languages are taking is to build concurrency
as a language primitive built into the runtime (see how go's proc.c schedules
various channels in chan.c; it has a nice race detection algorithm in race.c).
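
(To make the message-passing option concrete, here is a toy Python sketch of
the shared-nothing style, Erlang-like in spirit only: the worker owns its
state and other threads can only reach it through a mailbox.)

    import threading
    import queue

    def worker(mailbox: queue.Queue, replies: queue.Queue) -> None:
        # The worker owns its state; the only way in or out is a message.
        total = 0
        while True:
            msg = mailbox.get()
            if msg is None:        # sentinel: shut down and report
                replies.put(total)
                return
            total += msg

    mailbox, replies = queue.Queue(), queue.Queue()
    threading.Thread(target=worker, args=(mailbox, replies)).start()
    for n in range(10):
        mailbox.put(n)
    mailbox.put(None)
    print(replies.get())           # 45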

There is more pressure than ever to build highly available internet
applications that leverage more cores. Multi-core processors have
existed long enough, and are now prevalent even on mobile devices. No
radically different solution to concurrency is magically going to appear
tomorrow: programmers _need_ to understand concurrency, and work with existing
systems.

~~~
slacka
> We've been doing concurrency for many years now; it's called pthreads.

Sometimes, the major advances come when fresh ideas are infused from the
outside. In Darwin's case it was his geological work that inspired his theory.
In concurrency maybe it will be ideas from neuroscience.

> No radically different solution to concurrency is magically going to appear
> tomorrow: programmers _need_ to understand concurrency, and work with
> existing systems.

The environment is changing. In 2007 our equivalent of rising oxygen levels
arrived: single-threaded CPU scaling hit the wall. It has gone from doubling
every 2 years to a few percent of improvement per year.

We are only at the beginning of this paradigm shift to massively multi-core
CPUs. Both the tools and the theory are still in their infancy. In HW there
are many promising advances being explored, such as GPUs, Intel Phi, new
FPGAs, and projects like Parallella.

The software side also requires new tools to drive these new technologies.
Maybe a radical new idea, but more likely some evolved form of CSP,
functional, flow-Based, and/or reactive programming models from the 70s, that
didn't work with the HW environment at the time will fill this new niche.

For example, one of the smartest guys I know is working on neuromorphic
engineering, creating an ASIC with thousands of cores now that may evolve to
millions or billions. If this trilobite emerges on top, whatever language is
used to program it might have seemed terrible in the 70s or for your "existing
systems", but it may be the future of programming.

~~~
artagnon
> Sometimes, the major advances come when fresh ideas are infused from the
> outside.

I agree with this largely; over-specialization leads to myopia (often
accompanied by emotional attachment to one's work).

> In Darwin's case it was his geological work that inspired his theory.

If you read On the Origin of Species, you'll see that Darwin started from very
simple observations about cross-pollination leading to hybrid plant strains.
He spent years studying various species of animals. In the book, he begins
very modestly, following step by step from his Christian foundations, without
making any outrageous claims. The fossils he collected on his Beagle
expedition sparked his interest in the field, and served as good evidence for
his theory.

> In concurrency maybe it will be ideas from neuroscience.

Unlikely, considering what little we know about the neocortex. The brain is
not primarily a computation machine at all; it's a hierarchical memory system
that makes mild extrapolations. There is some interest in applying what we
know to computer science, but I've not seen anything concrete so far (read:
code; not some abstract papers).

> We are only at the beginning of this paradigm shift to massively multi-core
> CPUs.

From the point of view of manufacturing, it makes most sense. It's probably
too expensive to design and manufacture a single core in which all the
transistors dance to a very high clock frequency. Not to mention power
consumption, heat dissipation, and failures. In a multi-core, you have the
flexibility to switch off a few cores to save power, run them at different
clock speeds, and cope with failures. Even from the point of view of Linux,
scheduling tons of routines on one core can get very complicated.

> In HW there are many promising advances being explored, such as GPUs, Intel
> Phi, new FPGAs, and projects like Parallella.

Of course, but I don't speculate much about the distant future. The fact of the
matter is that silicon-based x86 CPUs will rule commodity hardware in the
foreseeable future.

> [...]

All this speculation is fine. Nothing is going to happen overnight; in the
best case, we'll see an announcement about a new concurrent language on HN
tomorrow, which might turn into a real language with users after 10 years of
work ;) I'll probably participate and write patches for it.

For the record, Go (which is considered "new") is over 5 years old now.

~~~
slacka
I think you missed my point about Darwin. Darwin was inspired by the geologic
theory, gradualism, where small changes are summed up over long time periods.
It was this outside theory applied to biology that helped him to shape his
radical new theory.

Right now threads are the only game in town, and I think you're right. For
existing hardware, there probably won't be any magic solution, at least not
without some major tradeoff like the performance hit you get with Erlang.

I was thinking about neuromorphic hardware when I mentioned neuroscience. From
what I hear the software side there is more analogous to HDL.

Go is a great stopgap for existing thread-based HW. But if the goal is to
achieve strong AI, we're going to need some outside inspiration. Possibly from
a hierarchical memory system, a massively parallel one.

I wish I could offer less speculation, and more solid ideas. Hopefully someone
here on HN will. I think that was the point of the video. To inspire.

------
tmarthal
"I do think that it would be really cool if the actor model is like picked up
by the Swedish phone company."

Does anyone have an explanation for this reference? It was at the end of the
concurrency section, while talking about the distributed graph model.

~27:20

~~~
zalew
Ericsson - Erlang

[http://en.wikipedia.org/wiki/Erlang_%28programming_language%...](http://en.wikipedia.org/wiki/Erlang_%28programming_language%29)

[http://en.wikipedia.org/wiki/Open_Telecom_Platform](http://en.wikipedia.org/wiki/Open_Telecom_Platform)

------
arikrak
I like his overall message, but I wonder about the details. E.g. he attacks
the existence of HTML and CSS, but there needs to be some universal format to
store the markup and design in. So I guess he's attacking the idea of hand-
coding them instead of using a WYSIWYG editor. But you can use something like
Dreamweaver, Expression Web, or even more recent web apps like Divshot. I
guess the problem is that they're not good enough yet, but that's not because
people aren't trying to do it; it's because it is hard to do.

------
marcamillion
Is this an actual talk he gave in 1973 or is this a spoof or something?

If so, it seems he missed the mark (significantly) on web development.

He said "if in a few decades we get a document format on some sort of web of
computers, I am sure we will be creating those documents by direct
manipulation - there won't be any markup languages or stylesheets, that will
make no sense."

So that is either very sarcastic and cheeky, or straight up wrong.

What am I missing?

~~~
humanrebar
It's not an actual talk or a spoof. It's probably best described as a farce
because he's using a lot of irony to make his points. I guess you could call
it sarcasm.

I think he's wrong as well. Often non-technical managers assume that since
something is simple to describe, it will be simple to implement. This is the
tech talk equivalent of that attitude.

Also, there are CMSs and WYSIWYG webpage creators that operate with varying
levels of success. Markup languages and stylesheets coexist partly because
they meet different use cases. For example, I've never heard of a spec for a
WYSIWYG "language", so you're guaranteed to have to deal with vendor lock-in
and a lack of portability unless you can then generate some text documents in
a standardized language.

~~~
EvanMiller
It is satire, not farce.

~~~
humanrebar
Yeah, that's probably a better word for it.

------
nikso
A powerful thought.

We should feel lucky that what we love is such a novel and unexplored field.

I'm quite confident that we will eventually move forward from this seemingly
stale period of programming paradigms. After all, we all know the frustration
of the initial stages of learning a new thing; and we all know the much
greater awe of mastering it.

------
chj
I don't think the future of programming is necessarily visual programming.
Nature didn't program human bodies visually, and yet we are the most powerful
living machines with powerful operating systems. But we do need to find a new
"programming medium", like proteins, that can build up ideas "organically".

------
jjindev
Isn't this all just evidence that "better" in most cases is a small margin?
You can hate X, and prefer Y, but in most cases the X guys will finish their
project. Methodology-based crash-and-burns are pretty rare. And the things
that are "not terrible" are not separated by that much.

------
zekenie
This is awesome. It makes me realize how much of the time I am just applying
the same formula over and over again and not really being creative. The flip
side, I would argue, is that reinventing the wheel all the time is expensive.
There's a reason why standards have formed.

~~~
ganarajpr
If you try to reinvent the wheel, you will get another wheel. I think what
Bret is getting at is wondering whether we could create airplanes and
spaceships instead!

------
tomasien
I just learned today that Smalltalk was the inspiration for a lot of what NeXT
ended up doing with Objective-C, which makes so much sense. At the end of the
day, Xcode is just another set of text files in many ways, but in so many
others it's so much more.

------
dman
Eternally relevant when discussing worlds of computing that could have been -
[http://www.dreamsongs.com/RiseOfWorseIsBetter.html](http://www.dreamsongs.com/RiseOfWorseIsBetter.html)

------
rusew
Does anyone know what font he uses in his slides? I really like their look.

~~~
masswerk
Me too! – It has this classic modern look with a distinct impact printer feel
about it.

------
mmphosis

      1. coding -> direct manipulation of data
      2. procedures -> goals and constraints
      3. text dump -> spatial representations
      4. sequential -> parallel

------
nickmain
I hope that in another 40 years the fact that programming and programmers ever
existed is seen as a temporary blip in the evolution of computing.

------
6ren
Could someone upload this to YouTube, please? Vimeo is unwatchable on
old/underpowered devices.

~~~
slacka
Here you go:

[http://youtu.be/8pTEmbeENF4](http://youtu.be/8pTEmbeENF4)

------
cconroy
I wonder if there were people like this during the printing press days... ?

------
asselinpaul
Watching this now but I expect greatness like all his talks.

------
gdonelli
Bret Victor is awesome

------
calibraxis
What should I read to understand his anti-API point better? Learn about RDF?
Or is there something better?

~~~
ericHosick
APIs are currently the core fabric by which communication occurs within a
computing system. As long as we use them, we end up with specialization of
communication between computing systems.

This specialization, in my opinion, is the root cause of the problems in
programming computing systems.

Bret Victor had this to say: "The only way it (communication between systems)
can scale, they (computers) have to figure out (dynamically) a common
language".

Here I feel he is missing a key point. It is not a common language we are
looking for, but a common architecture by which information is communicated
between systems. Or, in this case, a non-architecture or anti-API by which
communication takes place between systems.

~~~
DigitalJack
I don't think he meant a literal language. For communication you have to
establish a common understanding of _something_ between multiple entities.

------
dbpokorny
I wrote down some thoughts on this subject.

[http://dbpokorny.blogspot.com/2013/07/permission-based-progr...](http://dbpokorny.blogspot.com/2013/07/permission-based-programming-languages.html)

------
dschiptsov
Yeah, all the fundamental things were invented and researched before I was
born.) and everything is still relevant and current even in the midst of the
J* mass hysteria.)

------
creed0r
ABSOLUT.MUST.SEE.

