
Toward a better programming - ibdknox
http://www.chris-granger.com/2014/03/27/toward-a-better-programming/
======
kens
I alternate between thinking that programming has improved tremendously in the
past 30 years, and thinking that programming has gone nowhere.

On the positive side, things that were cutting-edge hard problems in the 80s
are now homework assignments or fun side projects. For instance, write a ray
tracer, a spreadsheet, Tetris, or an interactive GUI.

On the negative side, there seems to be a huge amount of stagnation in
programming languages and environments. People are still typing the same Unix
commands into 25x80 terminal windows. People are still using vi to edit
programs as sequential lines of text in files using languages from the 80s
(C++) or 90s (Java). If you look at programming the Eniac with patch cords,
we're obviously a huge leap beyond that. But if you look at programming in
Fortran, what we do now isn't much more advanced. You'd think that, given the
insane increases in hardware performance from Moore's law, programming would
be a lot more advanced.

Thinking of Paul Graham's essay "What you can't say", if someone came from the
future I expect they would find our current programming practices ridiculous.
That essay focuses on things people don't say because of conformity and moral
forces. But I think just as big an issue is things people don't say because
they literally can't say them - the vocabulary and ideas don't exist. That's
my problem - I can see something is very wrong with programming, but I don't
know how to explain it.

~~~
jimmaswell
> People are still typing the same Unix commands into 25x80 terminal windows.
> People are still using vi to edit programs as sequential lines of text in
> files using languages from the 80s (C++) or 90s (Java).

Well yes, there are always people who stick to the older methods, or sometimes
the situation calls for programming through a terminal window, but there have
been big advancements since those methods, like the great environment of
Visual Studio and such. Why do you say programming has gone nowhere because
there are people who don't use the newer advancements?

~~~
falcolas
> Why do you say programming has gone nowhere because there are people who
> don't use the newer advancements?

Speaking for myself, it's because Visual Studio and its ilk don't feel like
advancements. They feel like bandaids. They do their damnedest to reduce the
pain of programming by automatically producing boilerplate, auto completing
words, providing documentation, and providing dozens of ways to jump around
text files.

Personally, I don't feel that the bandaid does enough to justify using it
(granted, my main language doesn't work well with Visual Studio, so there's
that too).

The main source of the pain, to me, is that we're still working strictly
with textual representations of non-textual concepts and logic, no matter how
those concepts might better be rendered. We're still writing `if` and `while`
and `for` and `int` and `char` while discussing pointers and garbage
collection and optimizing heap allocation... Instead of solving the problem,
we're stuck describing actions and formulas to the machinery. No IDE
does anything to actually address that problem.

Sorry, rant, but this problem certainly resonates with me.

~~~
jimmaswell
> The main source of the pain, to me, is that we're still working strictly
> with textual representations of non-textual concepts and logic, no matter how
> those concepts might better be rendered.

I can't see any issue with representing logic abstractly with symbols. It's
the same for calculus. Of course the ideas we're representing aren't actually
the things we use to represent them, the same as written communication.

Non-textual programming has been explored to some degree, such as Scratch, but
it's not seen as much of a useful thing.

> Instead of solving the problem, we're stuck describing actions and formulas
> to the machinery. No IDE does anything to actually address that problem.

Describing actions and formulas to a machine in order to make it do something
useful is pretty much the definition of programming. IDEs make it a more
convenient process.

Unless you want to directly transplant the ideas out of your neural paths into
the computer, maybe some AI computer in the future based on a human brain,
this is how it's going to be.

~~~
misuba
> I can't see any issue with representing logic abstractly with symbols.

That's the problem: text isn't abstract enough. So we put some of the text
into little blobs that have names (other methods), and use those names
instead, and we call that "abstraction," but black-box abstraction doesn't
help us see. The symbols in calculus, by contrast, are symbols that help you
see. The OP is calling for abstractions over operating a computer that help us
see.

~~~
flyrain
Agreed. There must be a more abstract way to present ideas than text. That
way, programs would be easier to understand and modify, and would have fewer
errors and bugs.

~~~
krakensden
I am suspicious. I think it would certainly be easier in some ways for rank
beginners - it would make spelling errors and certain classes of syntax errors
impossible - but those aren't really the bugs that cause experienced
programmers grief. It's generally subtly bad logic, which is more about how
people are terrible. Plus, we already know how to create computer languages
that largely avoid those problems.

Written language is wonderful in many respects, and I sometimes think people
discount these things out of familiarity. Keyboards too- you can do things
very quickly and very precisely with keyboards. Those things matter for your
sense of productivity and satisfaction.

------
freyrs3
This strikes me as armchair philosophizing about the nature of programming
language design. Programming languages are not intentionally complex in most
cases, they're complex because the problems they solve are genuinely hard and
not because we've artificially made them that way.

There is always a need for two types of languages, higher level domain
languages and general purpose languages. Building general purpose languages is
a process of trying to build abstractions that always have a well-defined
translation into something the machine understands. It's all about the cold
hard facts of logic, hardware and constraints. Domain languages on the other
hand do exactly what he describes, "a way of encoding thought such that the
computer can help us", such as Excel or Matlab, etc. If you're free from the
constraint of having to compile arbitrary programs to physical machines and
can instead focus on translating a small set of programs to an abstract
machine then the way you approach the language design is entirely different
and the problems you encounter are much different and often more shallow.

What I strongly disagree with is claiming that the complexities that plague
general purpose languages are somehow mitigated by building more domain
specific languages. Let's not forget that "programming" runs the whole gamut
from embedded systems programming in assembly all the way to very high level
theorem proving in Coq and understanding anything about the nature of that
entire spectrum is difficult indeed.

~~~
ibdknox
> There is always a need for two types of languages, higher level domain
> languages and general purpose languages.

I never suggested otherwise, just that when you're in a domain you should be
_in_ that domain. That solution requires something more general purpose to
glue domains together, which is the crux of the problem. What does such a
language look like? How do you ensure you don't lose all the good properties
you gain from the domain specific languages/editors when passing between them?

I think you present a false dichotomy though. General purpose languages are
just as much about encoding a process. The distinction between compiling to
the machine vs some abstract machine also isn't really relevant: this is about
semantics, not implementation. And if you let implementation dictate the
semantics you won't get very far from where we are now.

> What I strongly disagree with is claiming that the complexities that plague
> general purpose languages are somehow mitigated by building more domain
> specific languages.

I never said that :) I said that programming would be greatly improved by
being observable, direct, and incidentally simple. And again those have
nothing to do with what "level" you're programming at, they're just principles
to apply. I do think there is a general solution that can encompass most of
the levels (though I'm not interested in trying to do that any time soon), but
there is a common case here and it certainly isn't high level theorem proving
or embedded systems. It's stupidly simple automation tasks, or forms over data
apps, or business workflows. The world works on poorly written excel
spreadsheets and balls of Java mud. You don't have to fix everything to make a
_huge_ impact and the things we learn in doing so can help us push everything
else forward too.

~~~
scribu
> How do you ensure you don't lose all the good properties you gain from the
> domain specific languages/editors when passing between them?

That's a very interesting (and I'm betting hard to solve) problem! However,
it's very hard for me to see how Aurora would help with that. From the demo,
it looks like yet another visual programming system; such systems don't seem
particularly interoperable.

~~~
ibdknox
The clever part of the strategy I was using in that demo was that all domains
are expressed declaratively as datastructures. This meant that the glue
language only needed to be very good at manipulating vectors and maps. You
built up a structure that represented music, html, or whatever and then just
labeled it as such. Interop between domains then becomes pretty simple
data transformation from one domain's format to another. And given how
constrained the glue language could be, you could build incredibly powerful
tools that make that easy. You could literally template out the structure you
want and just drag/drop things in, fix a few cases that we maybe get wrong and
you're done - you've translated tweets into music.
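
To make that concrete, here's a rough sketch of the idea in Python (the demo
itself wasn't Python, and these names are made up purely for illustration):
domain values are just nested lists/dicts plus a label, and interop is
ordinary data transformation.

    def tagged(domain, data):
        # a domain value is plain data plus a label saying which domain it's in
        return {"domain": domain, "data": data}

    # an "html" value: just a tree of vectors and maps
    page = tagged("html", ["ul", [["li", "first tweet"], ["li", "second tweet"]]])

    def html_list_to_notes(html_value):
        # crossing domains = transforming one labeled structure into another
        items = html_value["data"][1]
        return tagged("music", [(60 + len(text), 1) for _, text in items])

    print(html_list_to_notes(page))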

We ended up abandoning that path for now as there are some aspects of
functional programming that prove pretty hard to teach people about and seem
largely incidental.

~~~
jonathanedwards
Here's some meat. So how does FP fall down?

~~~
jamii
Explicitly managing hierarchical data structures leads to a lot of code that
isn't directly related to the problem at hand. A lot of attention is dedicated
to finding the correct _place_ to put your data. Compared to eg relational or
graph data models, where that kind of denormalisation is understood to be an
optimisation made at the expense of program clarity / flexibility.
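
A loose Python illustration of the difference (hypothetical data, not from any
real system): with nested structures the caller has to know where the item
lives; with a flat, relation-like collection you just state what you want.

    # nested/hierarchical: the code encodes the *place* the data lives
    nested = {"projects": {"aurora": {"todos": [{"id": 1, "done": False}]}}}
    nested["projects"]["aurora"]["todos"][0]["done"] = True

    # flat "relation": state the intent, independent of placement
    todos = [{"id": 1, "project": "aurora", "done": False}]
    for t in todos:
        if t["id"] == 1:
            t["done"] = True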

The pervasive use of ordering in functional programming inhibits composition.
(Even in lazy languages the order of function application is important).
Compare to eg Glitch or Bloom where different pieces of functionality can be
combined without regard for order of execution/application. This better
enables the ideals that BOT was reaching for - programming via composition of
behaviour. In a BOT plugin you can not only add behaviour but remove/override
other behaviours which turns out to be very valuable for flexible modification
of Light Table.

A more concrete problem is displaying and explaining nested scope, closures
and shadowing. As a programmer I have internalised those ideas to the point
that they seem obvious and natural but when we showed our prototypes to people
it was an endless source of confusion.

Functional programming is certainly a good model for expressing _computation_
but for a glue language the hard problems are coordination and state
management. We're now leaning more towards the ideals in functional-relational
programming where reactivity, coordination and state management are handled by
a data-centric glue language while computation is handed off to some other
partner language.

~~~
ibdknox
> A more concrete problem is displaying and explaining nested scope, closures
> and shadowing.

That's what really killed it for me and also one of the things that I found
pretty surprising. Tracking references is apparently way harder than I
realized. And while I thought we could come up with a decent way to do it, it
really did just confuse people. I tried a bunch of different strategies, from
names, to boxes that follow you around, to dataflow graphs. None of them
seemed to be good enough.

------
RogerL
There's a reason the game Pictionary is hard, despite the "a picture is worth
a thousand words" saying. And that is that images, while evocative, are not
very precise. Try to draw how you feel.

If you are using card[0][12] to refer to Card::AceSpades, well, time to learn
enums or named constants. If, on the other hand, the array can be sorted,
shuffled, and so on, what value is it to show an image of a specific state in
my code?

There's a reason we don't use symbolic representation of equations, and it has
nothing to do with ASCII. It's because this is implemented on a processor that
simulates a continuous value with a discrete value, which introduces all kinds
of trade-offs. We have a live thread on that now: why is a*a*a*a*a*a not
(a*a*a)*(a*a*a)? I need to be able to represent exactly how the computation is
done. If I don't care, there is Mathematica and the like, to be sure.

If you disagree with me, please post your response in the form of an image.
And then we will have a discussion with how powerful textual representation
actually is. I'll use words, you use pictures. Be specific.

~~~
ibdknox
It's not about choosing one or the other, it's about allowing both. I can use
symbols (though not sentences or other usefully descriptive language), but do
I have an opportunity to represent those symbols at all? no.

I'm not saying we should forsake language, if you look at the now very out of
date Aurora demo, all the operations have sentence descriptions. This
certainly isn't an all or nothing thing. If it makes sense to visualize some
aspect of the system in a way that is meaningful to me, I should be able to do
so - that is after all how people often solve hard problems.

~~~
RogerL
Sure, there are plenty of cases where visualization is helpful. But I see so
many blog posts about it, and not much in the way of actual progress.

Take the card again. It's your example, after all. I cannot think of any way
to use that to, say, write a small AI to play poker. I suppose I could see a
use in a debugging situation for my 'hand' variable to display a little 5@
symbol (where @ is the suit symbol). But okay, let's think about that. What
does it take to get that into the system?

No system 'knows' about cards. So I need a graphics designer to make a symbol
for a card. I surely don't want an entire image of a card, because I have 20
other variables I am potentially interested in, which is why in this context a
5@ makes sense (like you would see in a bridge column in a newspaper). So
somebody has to craft the art, we have to plug it into my dev system, we need
to coordinate it with the entire team, and so on. Then, it is still a very
custom, one-off solution. I use enums, you use ints, the python team is just
using strings like "5H" - it goes on and on. I don't see a scalable solution
here.

Well, I do see _one_ scalable solution. It is called text. My debugger shows a
textual depiction of my variable, and my wetware translates that. I'm a good
reader, and I can quickly learn to read 54, "5H", FiveHearts as being the
representation of that card. Will I visually "see" the value of a particular
hand as quickly? Probably not, unless I'm working this code a lot. But I'll
take that over firing up a graphics team and so on.
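
To be concrete about what "text scales" means here, something as small as a
repr is enough (a made-up Python sketch, not from any real codebase):

    class Card:
        def __init__(self, rank, suit):
            self.rank, self.suit = rank, suit   # e.g. "5", "H"

        def __repr__(self):
            # one line of code and every debugger, log and REPL shows "5H"
            return f"{self.rank}{self.suit}"

    hand = [Card("5", "H"), Card("A", "S")]
    print(hand)   # [5H, AS] - readable anywhere text is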

I do plenty of visualizations. It is a big reason for me using Python. If I
want to write a Kalman filter, first thing I'm doing is firing up matplotlib
to look at the results. But again, this is a custom process. I want to look at
the noise, I want to look at the size of the Kalman gain, I want to plot the
filter output vs the covariance matrices, I want to.... program. Which I do
textually, just fine, to generate the graphics I need.

I've dealt with data flow type things before. They are a royal pain. Oh, to
start, it's great. Plop a few rectangles on the screen, connect with a few
lines, and wow, you've designed a nand gate, or maybe a filter in matlab, or
is it a video processing toolchain? Easy peasy. But when I need to start
manipulating things programmatically it is suddenly a huge pain.

I am taking time out of writing an AI to categorize people based on what they
are doing in a video (computer vision problem) to post this message. At a
rudimentary level graphical display is great. It is certainly much easier for
me to see my results displayed overlaid on the video, as opposed to trying to
eyeball a JSON file or something. But to actually program this highly visual
thing? I have never, ever heard anything but hand waving as to how I would do
that in anything other than a textual way. I really don't think I would want
to.

Anyway, scale things up in a way that I don't have to write so many matplotlib
calls and you will have my attention. But I just haven't seen it. I've been
programming since the early 80s, and graphical programming of some form or
another has been touted as 'almost here'. Still haven't seen it, except in
highly specialized disciplines, and I don't want to see it. "Pictures are
worth a thousand words" because of compression. It's a PCA - distill a bunch
of data down to a few dimensions. Sometimes I really want that, but not when
programming, where all the data matters. I don't want a low order
representation of my program.

~~~
ibdknox
> So I need a graphics designer to make a symbol for a card.

I think this is the crux of the debate. The point isn't high quality
visualizations, it's about bringing the simple little pictures you'd draw to
solve your problem directly into the environment. Can you draw a box and put
some text in it? Tada! Your own little representation of a card.

I'm not suggesting that you hire people out to build your representations :)
This is about providing tools for understanding. Maybe you don't see value in
that, and there's no reason you can't just keep seeing things as plain raw
text (that's just a representation itself).

> Anyway, scale things up in a way that I don't have to write so many
> matplotlib calls and you will have my attention.

Give us a bit and I think we can provide a whole lot more than just that. But
we'll see!

~~~
vorg
Just use Unicode, and a programming language that uses the full power of
Unicode symbology in its syntax. E.g.

♠♣♥♦ × A23456789TJQK

~~~
mercurial
Please don't. People are already terrible at naming things, I for one am not
going to try the entire Unicode table to find out which symbol you chose for
"MetadataService". Plain text is fine, it's searchable, readable, and somewhat
portable (minus the line ending debacle).

If you need something more, vim has the "conceal" feature which can be used to
replace (on the lines the cursor is not on) a given text with another (eg show
⟹ instead of =>). Would you be better off if there was an option to do this
for variable/class/method names? I'm not sure.

~~~
vorg
> vim can be used to replace a given text with another (eg show ⟹ instead of
> =>)

If you use the short ⇒ to substitute for => (rather than long ⟹ as in your
example), as well as many other Unicode symbols, then the overall code can be
much shorter and thus more understandable.

The spec for the Fortress programming language made a point of not
distinguishing between Unicode tokens in the program text and the ASCII keys
used to enter them. Perhaps that's the best way to go?

~~~
Pacabel
Why do you think that "much shorter" implies "more understandable"?

I think we have a lot of experience to suggest otherwise.

Anyone who has had to maintain old Fortran or C code will likely know what I
mean. With some early implementations limiting variable and function
identifiers to 8 characters or less, we'd see a proliferation of short
identifiers used. Such code is by far some of the hardest to work with due to
variable and function names that are short to the point of being almost
meaningless.

Then there are languages like APL and Perl, which make extensive use of
symbols. APL has seen very limited use, and Perl code is well-known for
suffering from maintenance issues unless extreme care is taken when initially
creating the code.

Balance is probably best. We don't want excessively long identifiers, as is
often the case in Java, but we surely don't want excessively short ones,
either.

~~~
mercurial
As somebody who spent some years writing Perl code, I don't feel that having a
few well-defined ASCII symbols was such an issue. The problems with Perl are
that symbols change depending on the context (eg, an array @items needs to be
accessed via $items[$i] to get an item at position $i, to tell Perl it is a
scalar context), and weak typing. Even with changing symbols, it makes it
easier to distinguish between scalars, arrays and hashes, especially with
syntax highlighting. As opposed to languages like Haskell or Scala, in which
library designers are free to display their creativity with such immediately
obvious operators as '$$+-'.

Edited to add that I agree with your overall point. Shorter is not always
clearer. It can be a benefit to have a few Unicode symbols displayed via
'conceal' but it's not (at least in my experience) a major productivity gain.
And the number needs to be kept small. If I want Unicode symbol soup, I'll
play a roguelike.

------
j2kun
I'm concerned about Chris's desire to express mathematical formulas directly
in an editing environment.

Coming from a mathematician with more than enough programming experience under
his belt, programming is far more rigorous than mathematics. The reason nobody
writes math in code is not because of ASCII, and it's not even because of the
low-level hardware as someone else mentioned. It's because math is so jam-
packed with overloaded operators and ad hoc notation that it would be an
impossible feat to standardize any nontrivial subset of it. This is largely
because mathematical notation is designed for compactness, so that
mathematicians don't have to write down so much crap when trying to express
their ideas. Your vision is about accessibility and transparency and focusing
on problem solving. Making people pack and unpack mathematical notation to
understand what their program is doing goes against all three of those!

So where is this coming from?

PS. I suppose you could do something like have overlays/mouseovers on the
typeset math that give a description of the variables, or something like that,
but still sum(L) / len(L) is so much simpler and more descriptive than \sigma
x_i / n.

~~~
anaphor
I agree with you, and incidentally so does Gerald Sussman (co-inventor of
Scheme). He helped write an entire book on Lagrangian mechanics that uses
Scheme because he believes the math notation is too fuzzy and confusing for
people.

[https://mitpress.mit.edu/sites/default/files/titles/content/...](https://mitpress.mit.edu/sites/default/files/titles/content/sicm/book.html)

~~~
freyrs3
Both this and the subsequent text on differential geometry are very good, but
they are written against this enormous undocumented Scheme library (scmutils)
that is, in my opinion, very difficult to debug, and it's hard to figure out
how the macros expand.

------
mamcx
Natural languages (like English or Spanish) show why this kind of thinking
leads nowhere, and why a programming language is more like English than like
glyphs.

Something the post doesn't say: we want to make programs about _everything_.
To make that possible, we need a way to express everything that could need to
be communicated. Words and alphabets provide the best way.

In a natural language, when a culture discovers something (say, the internet)
and no words yet exist to describe internet-things, new words "pop" from
nowhere into existence. Written language has this ability to a greater degree
than glyphs do.

In programming, if we need a way to express looping over things, then
"FOR x IN Y" will "pop" from nowhere as the way that will be.

Words are more flexible. They are cheap to write, fast to communicate, and
they cross boundaries.

Of course, having an editor helper so a hex value can be shown as a color is
neat - but what if a hex value is NOT a color? Then you need a very strong
type system, and I don't see how to build one better than with words.

------
zwieback
Interesting work and I really liked the LightTable video but I think there's a
reason these types of environments haven't taken off.

To understand why programming remains hard it just takes a few minutes of
working on a lower-level system, something that does a little I/O or has a
couple of concurrent events, maybe an interrupt or two. I cannot envision a
live system that would allow me to debug those systems very well, which is not
to say current tools couldn't be improved upon.

One thing I've noticed working with embedded ARM systems is that we now have
instruction and sometimes data trace debuggers that let us rewind the
execution of a buggy program to some extent. The debugger workstations are an
order of magnitude more powerful than the observed system so we can do amazing
things with our trace probes. However, high-level software would need
debugging systems an order of magnitude more powerful than the client they
debug as well.

~~~
jamii
It depends entirely on how much state they need to capture. Ocaml has long had
a time travelling debugger ([http://caml.inria.fr/pub/docs/manual-
ocaml-400/manual030.htm...](http://caml.inria.fr/pub/docs/manual-
ocaml-400/manual030.html)) which is very useful in the small. Data-centric
languages like Bloom ([http://www.bloom-lang.net/](http://www.bloom-
lang.net/)) can cheaply reconstruct past states using the transaction log.
Frameworks like Opis
([https://web.archive.org/web/20120304212940/http://perso.elev...](https://web.archive.org/web/20120304212940/http://perso.eleves.bretagne.ens-
cachan.fr/~dagand/opis)) allow not only moving forward and backwards but can
exhaustively explore all possible branches using finite state model-checking.
The key in each case is to distinguish between essential state and derived
state.
[http://shaffner.us/cs/papers/tarpit.pdf](http://shaffner.us/cs/papers/tarpit.pdf)
has more to say on that front.
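
The essential/derived split is what makes the rewinding cheap. A minimal
sketch (Python, an assumed example, not actual Bloom or Opis code): keep only
the transaction log as essential state and rebuild any past derived state on
demand.

    def apply_tx(balances, tx):
        # derived state: account balances computed from the log
        new = dict(balances)
        new[tx["to"]] = new.get(tx["to"], 0) + tx["amount"]
        new[tx["from"]] = new.get(tx["from"], 0) - tx["amount"]
        return new

    def state_after(log, n):
        # "time travel": reconstruct the state after the first n transactions
        state = {}
        for tx in log[:n]:
            state = apply_tx(state, tx)
        return state

    log = [{"from": "a", "to": "b", "amount": 5},
           {"from": "b", "to": "c", "amount": 2}]
    print(state_after(log, 1))   # {'b': 5, 'a': -5}
    print(state_after(log, 2))   # {'b': 3, 'a': -5, 'c': 2}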

------
jostylr
Both the indirect and incidentally complex can be helped with literate
programming. We have been telling stories for thousands of years and the idea
of literate programming is to facilitate that. We do not just tell them in a
linear order, but jump around in whatever way makes sense. It is about
understanding the context of the code which can be hard.

But the problem of being unobservable is harder. Literate programming might
help in making chunks more accessible for understanding/replacing/toggling,
but it would not give you live forwards-backwards flow. I have recently coded
up an event library that logs the flow of the program nicely. Used
appropriately, it probably could be used to step in and out as well.

I am not convinced that radical new tools are needed. We just have to be true
to our nature as storytellers.

I find it puzzling that he talks about events as being problems. They seem
like ideal ways of handling disjointed states. Isn't that how we organize
things ourselves?

I also find it puzzling to promote Excel's model. I find it horrendous. People
have done very complex things with it which are fragile and incomprehensible.
With code, you can read it and figure it out; literate programming helps this
tremendously. But with something like Excel or Xcode's Interface Builder, the
structure is obscured and is very fragile. Spreadsheets are great for data
entry, but not for programming-type tasks.

I think creation is rather easy; it is maintenance that is hard. And for that,
you need to understand the code.

------
chenglou
I have a tremendous respect for people who dare to dream big despite all
cynicism and common assumptions, and especially people who have the skills to
actually make the changes. Please keep doing the work you're doing.

------
Detrus
Toward a better computer UI

The Aurora demo did not look like a big improvement until maybe
[http://youtu.be/L6iUm_Cqx2s?t=7m54s](http://youtu.be/L6iUm_Cqx2s?t=7m54s)
where the TodoMVC demo beats even Polymer in LOC count and readability.

I've been thinking of similar new "programming" as the main computer UI, to
ensure it's easy to use and the main UI people know. Forget Steve Jobs and
XEROX, they threw out the baby with the bath water.

Using a computer is really calling some functions, typing some text input in
between, calling some more.

Doing a few common tasks today is

    
    
      opening a web browser
      clicking Email
      reading some
      replying
      getting a reply back, possibly a notification
    
      clicking HN
      commenting on an article in a totally different UI than email
      going to threads tab manually to see any response
      
    

And the same yet annoyingly different UI deal on another forum, on youtube,
facebook, etc. Just imagine what the least skilled computer users could do if
you gave them a computing interface that didn't reflect the world of fiefdoms
that creates it.

FaceTwitterEtsyRedditHN fiefdoms proliferate because of the separation between
the XEROX GUI and calling a bunch of functions in Command Line. Siri and
similar AI agents are the next step in simple UIs. What people really want to
do is

    
    
      tell Dustin you don't agree with his assessment of Facebook's UI changes
      type/voice your disagreement
      share with public
    

And when you send Dustin and his circle of acquaintances a more private
message, you

    
    
      type it
      share message with Dustin and his circle of designers/hackers
    

To figure out if more people agreed with you or Dustin

    
    
      sentiment analysis of comments about Dustin's article compared to mine
    

That should be the UI more or less. Implement it however, natural language,
Siri AI, a neat collection of functions.

Today's UI would involve going to a cute blog service because it has a proper
visual template. This requires being one of the cool kids and knowing of this
service. Then going to Google+ or email for the more private message. Then
opening up an IDE or some text sentiment API and going through their whole
other world of incantations.

Our glue/CRUD programming is a mess because using computers in general is a
mess.

------
sold
The standard deviation is a poor example IMO, in many languages you can get
much closer to mathematical notation.

    
    
        # Python (sqrt needs to be imported)
        from math import sqrt

        def stddev(x):
            avg = sum(x)/len(x)
            return sqrt(sum((xi-avg)**2 for xi in x) / len(x))

        -- Haskell (length returns an Int, so convert it before dividing)
        stddev xs = let n   = fromIntegral (length xs)
                        avg = sum xs / n
                    in sqrt $ sum [(x-avg)**2 | x <- xs] / n

~~~
jeorgun
It's even a poor example of C++. Using valarray, you end up with basically the
same thing as your above examples:

    
    
        #include <valarray>
        #include <iostream>
        #include <cmath>     // needed for std::sqrt
        
        double standard_dev(const std::valarray<double> &vals)
        {
            return std::sqrt(std::pow(vals - (vals.sum() / vals.size()), 2.0).sum() / vals.size());
        }
        
        int main()
        {
            std::cout << standard_dev({2, 4, 4, 4, 5, 5, 7, 8}) << '\n';
        }
    

…and none of those are really much less readable than the math version. All in
all, that "example" clearly wasn't made in good faith, and left a bad taste in
my mouth.

------
qnaal
Hate to break it to you people, but rms was always right- the #1 reason why
programming sucks is that everyone wants complete control over all of the
bullshit they threw together and thought they could sell.

Imagine an environment like a lisp machine, where all the code you run is open
and available for you to inspect and edit. Imagine a vast indexed, cross-
referenced, and mass-moderated collection of algorithm implementations and
code snippets for every kind of project that's ever been worked on, at your
fingertips.

Discussing how we might want slightly better ways to write and view the code
we have written ignores the elephant in the room: that everything you write
has probably been written cleaner and more efficiently several times before.

If you don't think that's fucked up, think about this: the only reason to lock
down your code is an economic one, even though making all code freely usable
would massively increase the total economic value of the software ecosystem.

~~~
tonyedgecombe
Locking down my code for economic reasons has worked pretty well for me. It's
allowed me to have a pretty good lifestyle running my business for the last
fifteen years and kept my customers happy because they know I have a financial
incentive to keep maintaining my products.

~~~
qnaal
and he's oh so healthy

in his body and his mind

------
crusso
I liked this article. I particularly liked the way the author attacked the
problem by clearing his notions of what programming is and attempting to come
at it from a new angle. I'll be interested to see what his group comes up
with.

That said, I think that fundamentally the problem isn't with programming, it's
with US. :) Human beings are imprecise, easily confused by complexity, unable
to keep more than a couple of things in mind at a time, can't think well in
dimensions beyond 3 (if that), unable to work easily with abstractions, etc.
Yet we're giving instructions to computers which are (in their own way) many
orders of magnitude better at those tasks.

Short of AI that's able to contextually understand what we're telling them to
do, my intuition is that the situation is only going to improve incrementally.

~~~
gldalmaso
I agree. I believe that most of the incidental complexity has to do with the
fact that, in the end, every single thing greater than a single bit in the
digital realm is a convention.

A byte is a convention over bits. An instruction is a convention over bytes. A
programming language is a convention over instructions.

It turns out that every time someone sets out to solve a problem with
programming, they create their own convention.

It just so happens that either there is no convention over how to create
conventions, or it is just not followed and thus creates a parallel convention.

We cannot get our arbitrary conventions in line with each other, unless we
plan in advance.

Considering that, it's amazing how far we have come in the middle of this
chaos of unrestrained creation.

------
bachback
Leibniz wrote in 1666: "We have spoken of the art of complication of the
sciences, i.e., of inventive logic... But when the tables of categories of our
art of complication have been formed, something greater will emerge. For let
the first terms, of the combination of which all others consist, be designated
by signs; these signs will be a kind of alphabet. It will be convenient for
the signs to be as natural as possible—e.g., for one, a point; for numbers,
points; for the relations of one entity with another, lines; for the variation
of angles and of extremities in lines, kinds of relations. If these are
correctly and ingeniously established, this universal writing will be as easy
as it is common, and will be capable of being read without any dictionary; at
the same time, a fundamental knowledge of all things will be obtained. The
whole of such a writing will be made of geometrical figures, as it were, and
of a kind of pictures — just as the ancient Egyptians did, and the Chinese do
today. Their pictures, however, are not reduced to a fixed alphabet... with
the result that a tremendous strain on the memory is necessary, which is the
contrary of what we propose"
[http://en.wikipedia.org/wiki/Characteristica_universalis](http://en.wikipedia.org/wiki/Characteristica_universalis)

~~~
gregw134
You might like The Universal Computer: The Road from Leibniz to Turing
[http://www.amazon.com/The-Universal-Computer-Leibniz-
Turing/...](http://www.amazon.com/The-Universal-Computer-Leibniz-
Turing/dp/0393047857)

------
PaulAJ
The standard deviation example conflates two questions:

1: Why can't we use standard mathematical notation instead of strings of
ASCII?

2: Why do we need lots of control flow and libraries when implementing a
mathematical equation as an algorithm?

The first is simple: as others have pointed out here, math notation is too
irregular and informal to make a programming language out of it.

The second is more important. In pretty much any programming language I can
write:

    
    
        d = sqrt (b^2 - 4*a*c)
        x1 = (-b + d)/(2*a)
        x2 = (-b - d)/(2*a)
    

which is a term-by-term translation of the quadratic formula. But when I want
to write the standard deviation in C++ I need a loop to evaluate the sigma term.

But in Haskell I can write this:

    
    
        stDev :: [Double] -> Double
        stDev xs = sqrt ((1/(n-1)) * sum (map (\x -> (x-m)^2) xs))
           where
              n = fromIntegral $ length xs
              m = sum xs / n
    

This is a term-by-term translation of the formula, in the same way that the
quadratic example was. Just as I use "sqrt" instead of the square root sign I
use "sum" instead of sigma and "map" with a lambda expression to capture the
internal expression.

Experienced programmers will note that this is an inefficient implementation
because it iterates over the list three times, which illustrates the other
problem with using mathematics; the most efficient algorithm is often not the
most elegant one to write down.
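
For contrast, here's what the "efficient" single-pass version looks like,
sketched in Python (Welford's online algorithm): it iterates once and is
numerically stable, but it no longer resembles the textbook formula at all.

    from math import sqrt

    def stddev_one_pass(xs):
        # Welford's algorithm: maintain a running mean and sum of squared deviations
        n, mean, m2 = 0, 0.0, 0.0
        for x in xs:
            n += 1
            delta = x - mean
            mean += delta / n
            m2 += delta * (x - mean)
        return sqrt(m2 / (n - 1))   # sample standard deviation, needs n > 1

    print(stddev_one_pass([2, 4, 4, 4, 5, 5, 7, 8]))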

------
phantomb
Historically it has been easy to claim that programming is merely incidentally
complex but hard to actually produce working techniques that can dispel the
complexity.

The truth is that programming is one of the most complex human undertakings by
nature, and many of the difficulties faced by programmers - such as the
invisible and unvisualizable nature of software - are intractable.

There are still no silver bullets.

[http://en.wikipedia.org/wiki/No_Silver_Bullet](http://en.wikipedia.org/wiki/No_Silver_Bullet)
[http://faculty.salisbury.edu/~xswang/Research/Papers/SERelat...](http://faculty.salisbury.edu/~xswang/Research/Papers/SERelated/no-
silver-bullet.pdf)

------
dude42
Sadly I feel that LT has jumped the shark at this point. What started off as a
cool new take on code editors has now somehow turned into a grand view of how
to "fix programming". I can get behind an editor not based around text files,
or one that allows for easy extensibility. But I can't stand behind some
project that tries to "fix everything".

As each new version of LT comes out I feel that it's suffering more and more
from a clear lack of direction. And that makes me sad.

------
JoelOtter
Forgive me if my understanding is totally out of whack, but it seems here that
the writer is calling for an additional layer of abstraction in programming -
type systems being an example.

While in some cases that would be great, I'm not entirely sure more
abstraction is what I want. Having a decent understanding of the different
layers involved, from logic gates right up to high-level languages, has helped
me tremendously as a programmer. For example, when writing in C, because I
know some of the optimisations GCC makes, I know where to sacrifice efficiency
for readability because the compiler will optimise it out anyway. I would
worry that adding more abstraction will create more excuses not to delve into
the inner workings, which wouldn't be to a programmer's benefit. Interested to
hear thoughts on this!

~~~
Detrus
I think this improved programming vision starts at a higher level language
like Clojure/JS/Haskell and builds on that.

To allow the everyday Joe to use simplified programming all the way down to
machine code is a harder task. Languages like Haskell try to do it with an
advanced compiler that can make enough sense of the high level language to
generate efficient machine code.

Of course you'll still lose performance on some things compared to manual
assembler but with larger programs advanced compilers often beat writing
C/manual assembly.

Honestly, the bigger performance problem is not whether you can make a high
level language that generates perfect machine code but whether you can get
through the politics/economics of JS/Obj-C/Java to distribute it.

------
michaelsbradley
Chris, have you read Prof. David Harel's[1] essay _Can Programming be
Liberated, Period?_ [2]

The sentiments expressed in the conclusion of Harel's article _Statecharts in
the Making: A Personal Account_ [3] really jumped out at me last year. When I
read your blog post, I got the impression you are reaching related
conclusions:

"If asked about the lessons to be learned from the statecharts story, I would
definitely put tool support for executability and experience in real-world use
at the top of the list. Too much computer science research on languages,
methodologies, and semantics never finds its way into the real world, even in
the long term, because these two issues do not get sufficient priority.

One of the most interesting aspects of this story is the fact that the work
was not done in an academic tower, inventing something and trying to push it
down the throats of real-world engineers. It was done by going into the lion's
den, working with the people in industry. This is something I would not
hesitate to recommend to young researchers; in order to affect the real world,
one must go there and roll up one's sleeves. One secret is to try to get a
handle on the thought processes of the engineers doing the real work and who
will ultimately use these ideas and tools. In my case, they were the avionics
engineers, and when I do biological modeling, they are biologists. If what you
come up with does not jibe with how they think, they will not use it. It's
that simple."

[1]
[http://www.wisdom.weizmann.ac.il/~harel/papers.html](http://www.wisdom.weizmann.ac.il/~harel/papers.html)

[2]
[http://www.wisdom.weizmann.ac.il/~harel/papers/LiberatingPro...](http://www.wisdom.weizmann.ac.il/~harel/papers/LiberatingProgramming.pdf)

[3]
[http://www.wisdom.weizmann.ac.il/~harel/papers/Statecharts.H...](http://www.wisdom.weizmann.ac.il/~harel/papers/Statecharts.History.CACM.pdf)

~~~
ibdknox
I haven't seen that, thanks so much for the pointer!

> in order to affect the real world, one must go there and roll up one's
> sleeves

This has always been our strategy :) Whatever we do come up with, it will be
entirely shaped by working with real people on coming up with something that
actually solves the problem.

------
SlyShy
Wolfram Language addresses a lot of these points. Equations and images both
get treated symbolically, so we can manipulate them the same way we manipulate
the rest of the "code" (data).

~~~
bsilvereagle
It doesn't handle the "true" debugging discussed in the article. One of the
goals of the author is to move away from stepping through breakpoints and
print statements to watch data "flow" through a program.

~~~
taliesinb
With debugging, we'll get there. I have some prototypes, but it's a long way
from a research prototype to production, and we're still quite busy on getting
actual products out the door.

And even at the moment, the fact that so much of a typical program in the
Wolfram Language is referentially transparent means it's easy to pick something
up out of your codebase and mess around with it, then put it back. That's a
huge win over procedural languages.

But in terms of the language, many of the ideas Chris is talking about are
already possible (and common) in the Wolfram Language:

It's functional and symbolic, so programs are _all_ about applying
transformations to data. In fact, the entire language is 'data', with the
interesting side effect that some 'data' evaluates and rewrites itself (e.g.
If).

The mathematical sum notation is unsurprisingly straightforward in WL.

And StandardForm downvalues allow for arbitrary visual display of objects in
the frontend.

For example, the card would have a symbolic representation like
PlayingCard["Spade", 1], but you could write

    
    
      StandardForm[PlayingCard[suit_, n_]] := ImageCompose[$cardImages[suit], $cardNumbers[n]];
    

to actually render the card whenever it shows up in the FrontEnd.

Graphics display as graphics, Datasets display as browseable hierarchical
representations of their contents along with schema, etc...

------
jonahx
I love seeing the challenges of programming analyzed from this high-level
perspective, and I love Chris's vision.

I thought the `person.walk()` example, however, was misplaced. The whole point
of encapsulation is to avoid thinking about internal details, so if you are
criticizing encapsulation for hiding internal details you are saying that
encapsulation _never_ has any legitimate use.

I was left wondering if that was Chris's position, but convinced it couldn't
be.

~~~
ibdknox
Black boxing is very, very important and necessary if we're ever going to
build a complex system, BUT my point is that you _should_ be able to see what
it does if you need to. So I don't think we're at odds in our thinking.

~~~
gridaphobe
That seems like more of an argument in favor of having all source code
available (i.e. not using closed-source libraries) than an argument against
OOP. The question of what code executes when you call `person.walk()` is no
different than the question of what code executes when you call `(person
:walk)`: it depends entirely on the value of `person`! This is the core of
dynamic dispatch in OOP and higher-order functions in FP, they enable
behavioral abstraction. You can impose restrictions on the behavior through
types or contracts, but at the end of the day you can't know the precise
behavior except in a specific call. And this is _precisely_ where a live
programming environment comes in handy.
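
A tiny sketch of that point (Python, hypothetical classes): the answer to
"what runs when I call walk?" depends on the value you were handed, in both
styles.

    class Robot:
        def walk(self):
            return "clank clank"

    class Human:
        def walk(self):
            return "left, right, left"

    def do_walk(person):
        return person.walk()      # OOP: dynamic dispatch on the value of `person`

    def do_walk_fp(walk_fn):
        return walk_fn()          # FP: a higher-order function, same situation

    print(do_walk(Robot()), do_walk(Human()))
    print(do_walk_fp(Human().walk))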

------
DanielBMarkham
I've been lucky to write at least one small application per year, although
most of my work is now on the creative side: books, videos, web pages, and
such.

So I find myself getting "cold" and then coming back into it. The thing about
taking a week to set up a dev environment is spot on. It's completely insane
that it should take a week of work just to sit down and write a for-next loop
or change a button's text somewhere.

The problem with programming is simple: it's full of programmers. So every
damn little thing they do, they generalize and then make into a library.
Software providers keep making languages do more -- and become correspondingly
more complex.

When I switched to Ocaml and F# a few years ago, I was astounded at how
_little_ I use most of the crap clogging up my programming system. I also
found that while writing an app, I'd create a couple dozen functions. I'd use
a couple dozen more from the stock libraries. And that was it. 30-40 symbols
in my head and I was solving real-world problems making people happy.

Compare that to the mess you can get into just _getting started_ in an
environment like C++. Crazy stuff.

There's also a serious structural problem with OOP itself. Instead of hiding
complexity and providing black-box components to clients, we're creating semi-
opaque non-intuitive messes of "wires". A lot of what I'm seeing people upset
about in the industry, from TDD to stuff like this post, has its roots in OOP.

Having said all that and agreeing with the author, I'm a bit lost as to just
what the heck he is ranting on about. I look forward to seeing more real
tangible stuff -- I understand he's working on it. Best of luck.

------
jakejake
I liked the part of the article concerning "what is programming" and how we
seemingly see ourselves as plumbers and glue makers - mashing together various
parts and trying to get them to work.

I felt that the article takes a somewhat depressing view. Sure, these days we
probably do all spend a lot of time getting two pieces of code written by
others to work together. The article suggests there's no fun or creativity in
that, but I find it plenty interesting. I see it as standing on the shoulders
of giants, rather than just glumly fitting pipes together. It's the payoff of
reusable code and modular systems. I happily use pre-made web servers,
operating systems, network stack, code libraries etc. Even though it can be
frustrating at times when things don't work, in the end my creations wouldn't
even be possible without these things.

------
jeffbr13
I love Chris Granger's work, and LightTable, but _jeeez_ my eyes were going
weird by the "Chasing Local Maxima" section.

Turn the contrast down!

~~~
ibdknox
#ddd -> #ccc

It seems like I can never win the contrast debate :p Try it now.

~~~
seanmcdirmid
The problem is dark backgrounds rarely work well unless you have a nice OLED
display. I know they are cooler, and it's the current hotness among young
people whose eyes haven't started to give out yet... but dark themes really are
limited by current LCD displays. Not to mention, everyone has a different
display as well as different eyes, and you can't really predict how the text
will bleed from one viewer to the next!

This is what I get from being married to a visual designer.

------
arh68
> _programming is our way of encoding thought such that the computer can help
> us with it._

I really liked this. But I think we're encoding _work_, not _thought_.

If I could add to the list of hard problems: cache invalidation, naming
things, _encoding things_.

I think the problem in a lot of cases is that the language came first, then
the problem/domain familiarity comes later. When your language lines up with
your problem, it's just a matter of _implementing the language_. Your
algorithms then don't change over time, just the quality of that DSL's
implementation.

------
3rd3
I think this article forgot to emphasize the act of reading documentation,
which probably takes 25% to 50% of programming time. I think Google and
StackOverflow have already greatly improved things, but maybe there is still
room for improvement. Maybe one could crowdsource code snippets in a huge
Wikipedia-like
repository for various languages. I’m imagining a context-sensitive auto-
complete and search tool in which one can quickly browse this repository of
code snippets which all are prepared to easily adapt to existing variables and
function names.

------
anaphor
Just a few quotes from Alan Perlis:

There will always be things we wish to say in our programs that in all known
languages can only be said poorly.

Re graphics: A picture is worth 10K words - but only those to describe the
picture. Hardly any sets of 10K words can be adequately described with
pictures.

Make no mistake about it: Computers process numbers - not symbols. We measure
our understanding (and control) by the extent to which we can arithmetize an
activity.

------
andrewl
Chris' criticisms of the current state of programming remind me of Alan Kay's
quote, "Most software today is very much like an Egyptian pyramid with
millions of bricks piled on top of each other, with no structural integrity,
but just done by brute force and thousands of slaves."

Thank you for all the work on Light Table, and I'm looking forward to seeing
what the team does with Aurora.

------
zapov
As someone who is trying to improve the situation
([https://dsl-platform.com](https://dsl-platform.com)), it's strange getting
feedback from other developers. While we are obviously not very good at
marketing, when you talk to other developers about programming done at a
higher level of abstraction, the usual responses are:

* I'm not interested in your framework (even if it's not a framework)

* so you've built another ORM just like many before you (even if there is no ORM inside it)

* not interested in your language, I can get most of what I need writing comments in php (even if it's not remotely the same)

It takes a lot of time to transfer some of the ideas and benefits to the other
side and no, you can't do it in a one minute pitch that average developer can
relate to.

------
agentultra
Visual representations are not terribly hard to come by in this day and age.
It's almost trivial to write a little script that can visualize your tree
data-structures or relations. Plenty of good environments allow us to mingle
all kinds of data.

I'm more interested in programs that understand programs and their run-time
characteristics. It'd be nice to query a system that could predict regressions
in key performance characteristics based on a proposed change (something like
a constraint propagation solver on a data-flow graph of continuous domains);
even in the face of ambiguous type information. Something like a nest of
intelligent agents that can handle the complexity of implementation issues in
concert with a human operator. We have a lot of these tools now but they're
still so primitive.

------
Locke1689
The author is correct that programming is currently under-addressing a
specific set of use cases: solving problems with conceptually simple models in
equally simple ways; in other words, "keep simple programs simple."

However, thinking about computation as only simple programs minimizes the
opportunities in the opposite domain: using computation to supplement the
inherently fragile and limited modeling that human brains can perform.

While presenting simplicity and understanding can help very much in realizing
a simple mental model as a program, it won't help if the program being written
is fundamentally beyond the capability of a human brain to model.

The overall approach is very valuable. Tooling can greatly assist both goals,
but the tooling one chooses in each domain will vary greatly.

------
sdgsdgsdg
Programming is taking the patterns which make up a thought and approximate
them in the patterns which can be expressed in a programming language.
Sometimes the thoughts we have are not easily expressed in the patterns of the
computer language which we write in. What is needed is a computer language
which pulls the patterns from our thoughts and allows them to be used within
the computer language. In other words we need to automatically determine the
correct language in which to express the particular problem a user is trying
to solve. This is AI, we need compression - modularisation of phase space
through time. The only way to bring about the paradigm shift he is describing
in any real sense is to apply machine learning to programming.

------
analyst74
I am optimistic about our field.

Things have not stayed stale for the past 20~30 years; in fact, the state of
programming has not stayed stale even in the last 10 years.

We've been progressively solving problems we face, inventing tools, languages,
frameworks to make our lives easier. Which further allows us to solve more
complicated problems, or similar problems faster.

Problems we face now - like concurrency, big data, and the lack of cheap
programmers to solve business problems - were not even problems before. They
are now, because they are possible now.

Once we solve those problems of today, we will face new problems, I don't know
what they would be, but I am certain many of them would be problems we
consider impractical or even impossible today.

~~~
sanderjd
Yeah it's interesting, every time I hear "software has been stagnant for
decades!", I think to myself that my god, it's hard enough to keep up with the
stagnant state of things, I can't imagine trying to keep up with actual
progress!

~~~
Detrus
Keeping up with actual progress should be easier. The current "stagnant" state
could be called that because your attention is wasted on miracle cures that
promise the moon, but mostly deliver a minor improvement or make things worse.

~~~
sanderjd
I don't see a bunch of miracle cures that promise the moon, I see a bunch of
things that promise, and sometimes deliver, hard-won incremental improvement.
The OP seems a lot more like a moon-promising miracle-cure than all the
stagnant stuff I'm wasting my attention on.

~~~
Detrus
To clarify my moon examples would be NodeJS, MongoDB is web scale,
HTML5/WebGL/VMs/Flash on mobiles, fast JIT/VMs for languages that aren't
designed to be fast from the beginning etc.

Things that are technically hard and get a lot of hype. And maybe
MVC/OOP/DI/TDD design patterns and agile.

The OP is promising something that's more of an architecture design issue like
those of MVC libs. If he fails it will be because of a product design that
doesn't catch on. It has no guarantee of catching on even if it's good. LISP
and Haskell didn't. But their ideas trickle into other languages.

~~~
sanderjd
Yeah I had a fairly good sense of what you meant by promise-the-moon
technologies, and I believe many of those you mentioned to be exactly the sort
of hard-won incremental improvements that I was talking about. Good ideas
trickling down is also the exact sort of hard-won incremental improvement I'm
talking about.

I suppose my general point is that things aren't stagnant, they are merely at
a point where real progress tends to be hard-won and incremental. This may be
frustrating to visionaries, but it seems both inevitable and perfectly fine to
me.

~~~
Detrus
Except MongoDB's and Node.js's incremental improvements in their marketed use
case of easy scalability weren't worth your time if you really were concerned
with scalability. You would have been better served by existing systems.

So the marketing pivoted to being simple for MongoDB and to being SSJS for
Node. In Mongo's case scalability was severely hampered by the fundamental
design, but many developers fell for the marketing and it cost them. Node.js
can perform well on some hello-world benchmarks, but writing large scalable
systems with it was a minefield of instability, callback-hell bugs, lack of JS
support for CPU-intensive tasks, etc. It's still catching up to systems that
existed in 2007.

The incremental improvement on scalability is nowhere to be seen; they do
improve some other metric, like programmer enthusiasm. Other newcomers did
improve on easy scalability after more careful thought and years of effort,
but the hype machine has largely left the topic behind.

A similar case can be made for the HTML5/Flash promises for mobiles. You can
use them, but in many cases it makes the process more difficult than writing
two native apps. Good luck guessing which cases.

~~~
sanderjd
This is sort of my point about incremental improvement being hard-won, though.
It's really difficult to make something that is actually better than other
things, even for pretty narrow criteria. That's why I'm always suspicious of
things (like the OP) that claim they will bring a major sea-change of
betterness across broad criteria.

------
programminggeek
You want better programming? Get better requirements and less complexity.
Programming languages and IDEs are part of the problem, but a lot of the
problems come from the actual program requirements.

In many cases, it's the edge cases and feature creep that make software
genuinely terrible, and by the time you layer in all that knowledge, it is a
mess.

I don't care if you use VIM, EMACS, Visual Studio, or even some fancy
graphical programming system. Complexity is complexity and managing and
implementing that complexity is a complex thing.

Until we have tools to better manage complexity, we will have messes, and the
best tools for managing complexity are communication-related, not
software-related.

------
lstroud
This seems reminiscent of the "Wolfram Language" stuff from a couple of weeks
ago. Perhaps it's a trend, but I can't shake the feeling that I am seeing a
rehash of the 4GL fiasco of the 90s.

I have a lot of respect for Chris. So, I hope I am wrong.

------
3rd3
I think a lot could be won by reducing the complexity of our systems. In
modern operating systems we stack too many abstraction layers on top of each
other. Emacs is a great example of a development environment that avoids a lot
of complexity: everything is written in one language (Emacs Lisp), functions
are available throughout the system, one can rewrite functions at runtime, and
one can easily pinpoint the source code of any function with the find-function
command. It would actually be great to have an operating system that is as
simple, extensible, and flexible.

------
dmoney
What I'd like for programming is a universal translator. Somebody writes a
program in Java or Lisp, and I can read and modify it in Python and the author
can read my changes in their own pet language. I write an Ant script and you
can consume it with rubygems. You give me a program compiled into machine
language or Java or .NET bytecode and I can read it in Python and run my
modified version in the JVM, CLR, Mac, iPhone, Android, browser.
Transparently, only looking at what the source language was if I get curious.

------
NAFV_P
> _Writing a program is an error-prone exercise in translation. Even math,
> from which our programming languages are born, has to be translated into
> something like this:_

The article then compares some verbose C++ with a mathematical equation. That
is hardly a fair comparison: the C++ code can be written and read by a human
in a text editor, whereas if you right-click the equation > inspect element
... it's a gif. I loaded the gif into a text editor; it's hardcore gibberish.

Personally, I would stick with the verbose C++.
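
For reference, and judging from the standard-deviation C++ snippet quoted
further down in this thread (so this is my reconstruction, not the article's
own markup), the equation in the gif is presumably just the population
standard deviation, in LaTeX:

    \sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2},
    \qquad \mu = \frac{1}{N}\sum_{i=1}^{N} x_i

which is, of course, just another textual encoding of the same idea.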

------
datawander
I wholly agree with this article. The exact point the author is getting at is
something that I have been trying to say, but rather inarticulately (probably
because I didn't actually go out and survey people and define "what is
programming and what is wrong with it").

I really can't wait for programming to be more than just if statements,
thinking about code as a grouping of ASCII files, and gluing libraries
together. Things like Akka are nice steps in that direction.

------
mc_hammer
I have to disagree somewhat. IMHO the difference is in abstraction. I think
good forms of abstraction have allowed computing to proceed as far as it has,
and will allow it to proceed further.

I think abstraction may correlate with an IDE's or library's usefulness,
popularity, and development time, more so than what your video demonstrates.

I have a question: how many clicks would it take to get this snippet from
above to work?

You also have to navigate various dropdown menus? (Dropdowns are pretty
terrible UI, and I would think reading through different dropdown lists I'm
not familiar with would be jarring.) IMHO it would be like writing software
with two mouse buttons, dropdowns, or other visual elements instead of with
the keyboard, and would actually be slower; the opposite of my point above.

    
    
        #include <cmath>      // for std::sqrt
        #include <iostream>
        #include <valarray>
        
        // Population standard deviation of the values in `vals`.
        double standard_dev(const std::valarray<double> &vals)
        {
            return std::sqrt(std::pow(vals - (vals.sum() / vals.size()), 2.0).sum() / vals.size());
        }
        
        int main()
        {
            std::cout << standard_dev({2, 4, 4, 4, 5, 5, 7, 8}) << '\n';
        }

------
e12e
I'm wondering: did the author ever play with Smalltalk/Self? Essentially those
environments let you interact with objects directly, to roughly the extent
that makes sense. Seems like a good fit for the "card game" complaint.

Doesn't help with the mathematical notation, though (Although it would be
possible to do something about that, I suppose).

------
DennisP
I hope the production release will be editable by keyboard alone, instead of
needing the mouse for every little thing.

~~~
ibdknox
that prototype is basically nothing like what the end result will be. And
yeah, it will be keyboardable :)

------
AdrianRossouw
Man, I've been thinking about this stuff a lot.

Especially after I saw Rich Hickey's presentation "Simple Made Easy" (my notes
on it [1]).

I'm actually on a mission now to find ways to do things that are more
straightforward. One of my finds is 'microservices' [2], which I think
resonates with how I perceive software these days.

[1] [http://daemon.co.za/2014/03/simple-and-easy-vocabulary-to-describe-software-complexity](http://daemon.co.za/2014/03/simple-and-easy-vocabulary-to-describe-software-complexity)

[2] [http://martinfowler.com/articles/microservices.html](http://martinfowler.com/articles/microservices.html)

------
clavalle
I'm intrigued.

This is a problem that many, many very smart people have spent careers on.
Putting out a teaser post is brave and I have to believe you know what you are
doing.

I am looking forward to the first taste. Do you have an ETA?

------
ilaksh
I have been saying stuff like this for years, although not as eloquently or in
as much detail. But now Chris Granger is saying it, and no one can say he's
not a "real" programmer, so you have to listen.

I think it boils down to a cultural failure, like the article mentions at the
end. For example, I am a programmer myself. Which means that I generate and
work with lots of static, cryptic colorful ASCII text program sources. If I
stop doing that, I'm not a programmer anymore. By definition. I really think
that is the definition of programming, and that is the big issue.

I wonder if the current version of Aurora derives any inspiration from
"intentional programming"?

I also wonder when we can see a demo of the new version.

~~~
jamii
> I wonder if the current version of Aurora derives any inspiration from
> "intentional programming"?

The long-term vision definitely does. At the moment we are mostly focused on
building a good glue language. By itself it is already very capable for
building CRUD apps and reactive UIs. If we can nail the tooling and make it as
approachable as Excel, that gives us a solid platform for more adventurous
research.

------
leishulang
Sounds so philosophical ... almost sounds like something to do with how to get
strong A.I and expecting some sort of universal answer ... such as 42.

------
hibikir
There are entire families of problems that would be better solved with a far
more visual approach to code. For instance, worrydream has some UX concepts on
learnable programming that just feel much better than what we use today.

We could do similar things to visualize actor systems, handle database
manipulation and the like. The problem is that all we are really doing is
asking for visualization aids that are only good at small things, and we have
to build them, one at a time. Without general purpose visualizations, we need
toolsets to build visualizations, which needs more tools. It's tools all the
way down.

You can build tools for a narrow niche, just as Lispers build their DSLs for
each individual problem. But even in a world without a sea of silly
parentheses and a syntax built for compilers, not humans, under every single
line of easy, readable, domain-centric code lies library code that is 100%
incidental complexity, and we can't get rid of it.

Languages are hard. Writing code that attempts to be its own language is
harder still. But those facts are not really the problem: They are a symptom.
The real problem is that we are not equipped to deal with the detail we need
to do our jobs.

Let's take, for instance, our carefree friends who want to build contracts on
top of Bitcoin by making them executable. I am sure a whole lot of people here
realize their folly: no problem that is really worth putting into a contract
is well defined enough to turn into code. We work with a level of ambiguity
that our computers can't deal with. So what we are doing, building libraries
on top of libraries, each a bit better, is about as good a job as we can do.

I do see how, for very specific domains, we can find highly reusable, visual
high level abstractions. But the effort required to build that, with the best
tools out there, just doesn't make any practical sense for a very narrow
domain: We can build it, but there is no ROI.

I think the best we can do today is, instead of concentrating so much on how
shiny each new tool really is, to go back to the real basics of what makes a
program work. The same things that made old C programs readable work just as
well in Scala, but without half the boilerplate. We just have to forget about
how exciting the new toys can be, or how smart they can make us feel, and
evaluate them purely on how much they really help us solve problems faster.
Applying proper technique, like writing code that has a narrative and
consistent abstraction levels (a small sketch of what I mean follows below),
will help us build tools faster, and therefore eventually make it cheaper to
build more useful general-purpose visualization plugins.
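
To make that concrete, here is a minimal sketch of what I mean by a narrative
with consistent abstraction levels. The domain, names, and rules are entirely
made up for illustration; only the structure matters.

    // All names and rules here are hypothetical, purely illustrative.
    #include <iostream>
    #include <numeric>
    #include <string>
    #include <vector>

    struct Order {
        std::string customer;
        std::vector<double> line_items;
    };

    // Small helpers, each doing one nameable thing.
    double subtotal(const Order &o) {
        return std::accumulate(o.line_items.begin(), o.line_items.end(), 0.0);
    }

    double apply_discount(double amount) {
        return amount > 100.0 ? amount * 0.9 : amount;  // made-up rule
    }

    double add_tax(double amount) {
        return amount * 1.08;  // made-up flat rate
    }

    void send_invoice(const Order &o, double total) {
        std::cout << "Invoice for " << o.customer << ": " << total << '\n';
    }

    // The narrative: each line is one domain-level step, all at the same
    // level of abstraction, with incidental detail pushed into the helpers.
    void bill_customer(const Order &o) {
        double total = add_tax(apply_discount(subtotal(o)));
        send_invoice(o, total);
    }

    int main() {
        bill_customer({"some customer", {40.0, 75.0}});
    }

The top-level function reads as a sequence of steps in the problem's own
vocabulary; that property, not the language, is what made the old C readable
and is what keeps the Scala readable too.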

------
sgy
[http://www.paulgraham.com/progbot.html](http://www.paulgraham.com/progbot.html)

------
GnarfGnarf
Chris Granger sure doesn't make it easy to contact him.

------
aoakenfo
This video demonstrates an immediate connection with their tool:
[http://vimeo.com/36579366](http://vimeo.com/36579366)

