
The Future of Programming - christianbryant
http://radar.oreilly.com/2013/01/the-future-of-programming.html
======
slacka
> "For the programmer, that means we must grapple with problems such as
> concurrency, locking, asynchronicity,.."

From my perspective, this is the problem of this decade in HW and SW
engineering. Around 2007, Intel hit the wall with single threaded CPU scaling.
It has gone from doubling every 2 years to a few % increase per year. We are
at the beginning of a paradigm shift to massively multi-core CPUs. Both the
tools and the theory are still in their infancy. In HW there are many
promising advances being explored, such as GPUs, Intel Phi, new FPGA, and
projects like Parallella.

The software side also requires new tools to drive these new technologies. I
think traditional threading will be viewed as a stopgap hack and will be
replaced by some form of functional, flow-based, and/or reactive programming
models.
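To make the contrast concrete, here is a rough flow-based sketch in Python
(purely illustrative, using only the standard library; the stage names and
wiring are mine, not from any of the tools mentioned): independent stages
communicate over queues, so there is no shared mutable state to lock.

```python
import queue
import threading

def stage(fn, inbox, outbox):
    """Run one processing node: read a value, transform it, pass it on."""
    while True:
        item = inbox.get()
        if item is None:          # sentinel: shut the node down
            outbox.put(None)
            return
        outbox.put(fn(item))

# Wire up a two-node graph: double -> increment.
q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(lambda x: x * 2, q_in, q_mid)).start()
threading.Thread(target=stage, args=(lambda x: x + 1, q_mid, q_out)).start()

for v in [1, 2, 3]:
    q_in.put(v)
q_in.put(None)

# Drain the output queue until the sentinel arrives.
results = list(iter(q_out.get, None))
print(results)  # [3, 5, 7]
```

Because each node owns its data and the queues are FIFO, there is nothing to
deadlock on and no race on shared state, which is roughly the property the
LabVIEW model gives you for free.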

A few years ago, I had to help our EEs write some testing software in LabVIEW.
I was blown away by how elegantly it handles concurrency and tolerates bad
input data. It took no extra design effort to utilize multiprocessing and
multithreading hardware. The synchronous model in our program eliminated the
deadlock and race condition problems that come up when using asynchronous
threads.

Another project with great potential is NoFlo. What SW solutions have others
run into in this field?

~~~
msutherl
I've been using multi-media 'coordination languages' like Max/MSP/Jitter, Pure
Data, SuperCollider, Touch Designer, and vvvv for years after I got hooked on
the model from my first synthesizer (the Nord Micro Modular). When I had to
learn traditional programming in school (Java at first), it felt hopelessly
barbaric to me. Imperative text-based languages are incredibly time consuming,
error-prone, and difficult to use in comparison.

Don't get me wrong, dataflow programming has its own gotchas (explicitly
guaranteeing proper order can be tricky), and it's very hard for people from a
traditional background to switch over (they're always reaching for the
for-loop that isn't there), but it's quite clear to me that I can throw
together a
complex multimedia system with, say, camera input, some computer vision,
effects, video playback, etc. in, say, 3 hours whereas it would take somebody
starting from scratch in any other language at least twice that time if not 10
times[1]. Moreover, after they've finished, they enter a much longer period of
intensive debugging, whereas dataflow systems I've built have been much closer
to bug-free on the first draft.

Whenever I introduce this stuff to engineers, it blows their minds. I'm
beginning to feel like it's a matter of serious importance to spread the word
far and wide about alternative programming paradigms that make many of the
problems of this decade magically disappear and make programming more
efficient and easier to learn.

Shoot me an email if you're interested in more thoughts / pointers on this
topic – skiptracer at gmail.

[1] Somebody might be able to argue that they could be just as fast with
Processing or openFrameworks, but anecdotally developing in those environments
is still 2x slower and bug fixing goes on forever.

~~~
msutherl
There's something I've ignored here, which is that most of these systems are
at a higher level of abstraction than text-based languages. Modules (called
"objects") in Max, for instance, are written in C. Max is more analogous to
Unix than an actual programming language (by admission of its author).

A question for me is why we write things like web servers in a programming
language at all. After using dataflow systems, I believe there are much better
abstractions possible than Rails-style web frameworks.

~~~
rdtsc
These ideas are not new. And _general_ click and drag programming has been
"just around the corner" since GUIs became popular.

Now in specific domains, such as industrial & control systems, signal
processing, even designing GUIs themselves, click and drag has worked. It lets
non-programmers who are subject matter experts get stuff done.

But for generic stuff it hasn't worked too well. UML was one such push.
Managers were just going to draw a diagram on the screen, click, "Generate
Code" and bam, no need for silly code monkeys anymore. Well it hasn't quite
worked that way. Now the "Generate Code" button just does an SMTP send to a
.ch or .in domain some place.

There is another such system, quite exotic, called DRAKON.

[https://en.wikipedia.org/wiki/DRAKON](https://en.wikipedia.org/wiki/DRAKON)

It comes from the Soviet space shuttle program. Yes, they had a successful
space shuttle once that flew unmanned and even landed itself (before US space
shuttles could even do that). Some of the people working on that wanted to
bring programming down to non-programming engineers, and created that system.
There has been some interest in it lately as well. But I can't say it exactly
took over the world.

~~~
msutherl
In my casual experience with such systems, they are extremely limited compared
to something like Max, which is in turn extremely limited and badly designed.

Max is not just 'click and drag' – you can do quite complex configuration,
cross-modulation of signals and parameters, etc. plus general purpose
programming. You can also script things with JavaScript or Lua and write new
routines ("objects") in C, Java, languages that compile to Java, and
JavaScript.

The key is that the dataflow environment specifies a strict interface between
"objects" and an overall execution pattern, but you can dig in to the stuff
you're clicking on and change/optimize the underlying code.

Recently Cycling '74 (who make Max) added a new lower-level dataflow language
that maps to C code or, alternatively, GLSL shaders, which makes it super easy for
people who don't know how to program to write efficient, auto-optimized audio
and graphics routines. Another system along those lines is Faust:
[http://faust.grame.fr/](http://faust.grame.fr/).

For a more open-ended system that combines dataflow ideas with traditional
programming concepts, a time-line editor, and in-text GUI controls, check out
Field:
[http://www.openendedgroup.com/field/](http://www.openendedgroup.com/field/).

DRAKON looks like a naive low-level attempt at sticking imperative routines
into flowcharts. Diagrammed UML systems sound awful – you probably lose all
the power of object oriented programming by transposing it into a GUI. Signal
chain diagram languages like MATLAB's SIMULINK[1], Analog Devices' Signal
Chain Designer[2], or Cypress' PSoC Designer[3] are usually very basic and
nothing like what a proper dataflow system could be.

[1]
[http://www.mathworks.com/products/simulink/](http://www.mathworks.com/products/simulink/)
[2]
[http://www.analog.com/en/content/Signal_Chain_Designer/fca.h...](http://www.analog.com/en/content/Signal_Chain_Designer/fca.html)
[3]
[http://www.cypress.com/?id=2492&source=productshome](http://www.cypress.com/?id=2492&source=productshome)

------
nikster
This article is based on a flawed premise: "The goal is to be able to describe
the essential skills that programmers need for the coming decade."

Good luck with that. You might as well attempt to predict the weather for the
next decade.

As for the content, it's equally flawed. It's not just one device anymore, and
probably not going to be a PC. Well, obviously. And frameworks are apparently
a house of cards and things might break down horribly: a bizarre statement if
there ever was one, as frameworks empower individual programmers and small
upstart teams to do amazing things which would simply be impossible without
them.

Here's my prediction: Everything is changing, and everything will continue to
change. If you're a programmer, you have to change and keep on top of things.
It doesn't matter if you're just starting out or if you're like my genius ex-
boss and are currently fixing up a software product at the spry age of 80.

~~~
freehunter
>You might as well attempt to predict the weather for the next decade.

I can do that, with just as much accuracy as the author is striving for. It
will be hot in the summer, and it will be cold in the winter. There will be
storms in between. Some people may experience snow, while others will get rain
instead, depending on your location.

------
toddmorey
> On frameworks: "Why should we use computers like this, simultaneously
> building a house of cards and confining computing power to that which the
> programmer can fit in their head? Is there a way to hit reset on this view
> of software?"

I'm not entirely with him on the evils of frameworks. Software is built on
software the way knowledge is built on knowledge. I'm happy to stand on the
shoulders of giants. It means I don't have to rewrite what's already been well
written (and well tested). There are tradeoffs, sure, and it's really
important to understand the hows and whys of the frameworks you select (this
is where open source really matters). Still, I think narrowing the problem
scope so you can "execute the program you’re writing inside your head"—and
inside the ambitious timeline of your startup—is the great gift of modern
computing, not its curse.

~~~
nikster
You're too kind.

Frameworks are the reason I can go and build a top class product in 6 months
all on my own. If we didn't have them, each new product would require an army
of programmers re-inventing wheels, and only big corporations with lots of
money could afford to even attempt it.

Frameworks rock. Abstraction also rocks. Whenever I need something more than
once, I make it abstract. I automate it, abstract it, make it re-usable. And
99% of the time - I am not kidding, it's really that high - I will re-use that
piece of software, re-run that script, etc.

A senseless article, all in all.

~~~
fauigerzigerk
Frameworks have a lot of issues. First of all, they tend to exclude each
other, hence preventing you from reusing code that might be a better fit for a
particular problem.

Second, they provide a perverse incentive to force every new application
feature into the structure of the framework you decided to use at the
beginning of the project.

Frameworks are like debt. They give us a head start at the cost of paying more
down the road. Sometimes that head start is worth the higher overall cost and
sometimes it's not.

I think libraries are a lot less problematic and still provide most of the
benefits of frameworks.

------
christianbryant
Though many see it as science fiction, quantum "programming" shouldn't be
excluded from the list. I realize he aimed to generate a practical discussion
around where we are now and what we need to do in the immediate (5 years)
future, but just as nanotech really needed to get off the ground in people's
heads before it could get a foothold in popular culture as an actual
technology, quantum computing is in need of more public analysis and
simulation. He might have added a last section there titled "And Beyond..."
for topics like this :)

~~~
goldfeld
Are there technologies aimed at quantum programming available and accessible
today, even in a completely experimental status?

~~~
mietek
Check out Quipper:

[http://arxiv.org/pdf/1304.5485v1.pdf](http://arxiv.org/pdf/1304.5485v1.pdf)

------
sarreph
I'm most looking forward to typing my program requirements into the STACK-
OVERFLOW-PARSING-INTERPRETING-COMPILING-MACHINE, and it gobbling up all the
nuggets of information together into a fully-formed program.

That's part of the future.

~~~
AYBABTME
Looking forward to the day I'll put on my mind-reading headband and go for a
while I program in thought.

~~~
tunesmith
Not sure why this was downvoted... I've wondered often what it would take to
develop a programming language that one could create in while being physically
mobile. I do some of my best programming/thinking while walking around since
it's conducive to inspiration, but then I have to wait until I get home behind
my keyboard to try and implement something.

~~~
AYBABTME
Yup, I was serious in my idea. Maybe downvoters thought I was sarcastic.

I'd love to be able to experience both the physical and digital worlds at the
same time, no longer depending on my hands and eyes as an interface with the
computer.

I believe it will eventually happen. I'm not exactly sure of the current state
of the art, but I've seen research replacing lost limbs with robotic arms that
are solely controlled by thoughts. Seen also images induced into one's vision
by stimulation of the brain.

Our brain evolved with I/O devices meant to match our physical environment. As
far as I know[1], we don't have any I/O device meant to convey ideas in
themselves. We need to serialize our thoughts into words, images and movements
to get them deserialized into others' minds. This whole process is lossy and
has limited bandwidth.

_Anyway, I'm not talking about things I really know, so I won't elaborate
further on this. I try to limit my opinions on HN to stuff that I actually
know. You can't fool anyone around here; next thing I know, a brain researcher
will come and break all my ideas apart in the comments._

[1]That's meant to exclude telepathy stuff

~~~
bennyg
I also would like to see stuff like this. I hope it evolves to the point where
I can see the whole algorithm in my head (not in code, but in process, I
guess) versus thinking "okay brain, for loop starting at x == 0, while x is
less than this array's count."

------
miguelrochefort
The future of computing is design by contract and intentional programming. You
define your goal, and that's about it. Most technicalities will be delegated
to machines.

Most programmers will develop smart agents that will manipulate a
decentralized and unique semantic data source. All of this is quite obvious.
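The design-by-contract half of this can be sketched in a few lines of Python
(a toy; the `contract` decorator and its names are made up for illustration,
not from any real contract library): you state the goal as pre- and
postconditions, and the machinery checks them around the implementation.

```python
def contract(pre, post):
    """Attach a precondition and a postcondition to a function."""
    def wrap(fn):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

# The "goal": given a non-empty list, return an element that is a
# member of the list and no larger than any other element.
@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r <= x for x in xs))
def smallest(xs):
    return min(xs)

print(smallest([3, 1, 2]))  # 1
```

In the intentional-programming vision, the body of `smallest` would be found
by the machine; here the contract only checks it, which is as far as
mainstream tools go today.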

~~~
antrix
Curious to know, other than SQL, are there any other examples of successful
technologies that achieve this intentional, goal-driven programming?

~~~
primaryobjects
This sounds like genetic algorithm programming (see my other comment below
[https://news.ycombinator.com/item?id=6080492](https://news.ycombinator.com/item?id=6080492)).
Writing a program consists of defining the end state. The GA then runs through
thousands of epochs, getting closer and closer to the end state (determined by
a fitness score for each program), until the solution is found.

I was able to achieve automated programs for printing text, simple math
operations, string manipulation, and conditionals. After that, programming the
fitness methods started getting complicated.
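The loop described above can be sketched in miniature (a toy hill climber over
strings rather than over programs; `TARGET` plays the role of the end state
and `fitness` the score, with everything else made up for illustration):

```python
import random

random.seed(42)  # make the run reproducible

TARGET = "hello"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    """Score a candidate by how close it is to the defined end state."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    """Randomly change one character: the only genetic operator here."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Start from a random candidate and keep mutations that score at least
# as well, epoch after epoch, until the end state is reached.
best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
while fitness(best) < len(TARGET):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child

print(best)  # hello
```

The hard part, as noted above, is not this loop but writing a `fitness`
function for anything less trivial than matching a known answer.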

------
danso
> _The prevailing form of programming today, object orientation, is generally
> hostile to data. Its focus on behavior wraps up data in access methods, and
> wraps up collections of data even more tightly. In the mathematical world,
> data just is, it has no behavior, yet the rigors of C++ or Java require
> developers to worry about how it is accessed._

Er...I thought one of the underlying parts of OOP was creating a domain for
data, such that derivable attributes and relationships were encapsulated in
the data model? The way that R treats data as a first-class citizen is nice
for some statistical modeling, but doesn't seem robust enough for all the
other ways that we need to organize and munge data.

------
coldcode
Looking at all the languages near the top of the language popularity charts, I
don't think OO is going away. Last I checked Python was still an OO language.
If you look at all the jobs on the job boards OO is still the dominant flavor
as it has been for 20 years or so (only plain C is the oddball). Will that
still be the case 20 years from now?

~~~
weavejester
I think we're starting to see a shift away from the areas that OOP is
traditionally comfortable with.

Increasingly, modern system design tends to place an emphasis on bare,
immutable data, whereas OO tends to work with encapsulated, mutable state. If
the only thing you're using objects for is namespacing and polymorphism, then
there's not much point to designing software within the OO paradigm.

~~~
cldr
I have noticed this too; all the "functional" languages I've seen (i.e. those
with immutable data) encourage working with bare data. Is encapsulation only
necessary for mutable data?

~~~
weavejester
I think immutability removes one of the reasons for encapsulation: controlling
state change. With that gone, the disadvantages of encapsulation might
outweigh its advantages in a lot of cases.
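A small Python illustration of that point (frozen dataclasses are just one way
to get immutability): with no state change to control, bare fields are safe to
expose, and "modification" simply produces a new value.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Point:
    x: float
    y: float

p = Point(1.0, 2.0)

# No setters or defensive copies needed: updating a field yields a new
# value, and anyone still holding `p` is unaffected.
moved = replace(p, x=5.0)

print(p)      # Point(x=1.0, y=2.0)
print(moved)  # Point(x=5.0, y=2.0)
```

Attempting `p.x = 3.0` raises `FrozenInstanceError`, which is the compiler-ish
guarantee that makes getter/setter-style encapsulation redundant here.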

------
narzac
Well, the article touches on some good points, since it made me think...

The problem is the language and paradigms used, not the framework or how many
levels you abstracted the problem at hand.

The way I see it, writing programs by excessively manipulating data will lead
us nowhere but to complexity, which unfortunately is introduced by the program
itself. Mainstream languages such as C++, Java, C# etc. should not be taught
in schools as if they were the ultimate solution and FP something impractical;
that claim is two-faced, given that these languages are nowadays being patched
with ideas stolen from FP.

Another one: saying how fast IT changes at every chance while sticking with
ancient programming languages, and then, when someone points out the dilemma,
claiming not to have enough developers for, say, Haskell, Clojure, Go etc.
Maybe you should fix the education system, morons, instead of building more
complex frameworks and platforms.

Of course there are particular areas, such as simulations, modelling
time-dependent large data sets, embedded development etc., where some
languages will be best suited while others will be overkill, or just not fast
or viable enough. Of course, I am not blindly saying, "Death to imperative
languages" :P However, they shouldn't dominate.

As a final comment: the future of programming is already here. The question
is, are we ready for the future...

~~~
kabdib
Whether you like it or not, your world runs on C, C++ and assembly. These
languages form the ninety-percent-plus core of modern computing's foundation.
Lift the hood of nearly any embedded system, BIOS or OS and you'll find these
in heavy use. You'll find C code in the networking layers that let you talk to
the world, and in the very light switches that let you go to the bathroom at
night. C runs your dishwasher, your car, your elevators, and probably your
toothbrush.

Then a lot of stuff gets layered on top of this. And sure, languages /do/ jit,
but these are generally hothouse flowers, surrounded by an infrastructure
provided by C, C++ and assembly. Few systems boot natively without involving
a bunch of C.

I'm happy if people are satisfied to work in the layers above all this stuff.
Frankly, not many folks (as a percentage of the programming population) can do
good kernel level work. But don't pretend that it doesn't exist, or that it is
somehow morally inferior to hacking away in Haskell.

Maybe this will change in thirty years; I think that's the time scale required
to make a fundamental change in the way we program modern systems.

I'd /love/ to see a native Erlang system, soup to nuts. But there's little
economic incentive to make one, given that the lower layers are actually doing
a pretty decent job.

~~~
Chris_Newton
_Whether you like it or not, your world runs on C, C++ and assembly. These
languages form the ninety-percent-plus core of modern computing's
foundation._

Isn’t that part of the problem? There is no technical reason we couldn’t have
a language that offered the same fine control and hardware integration as C,
compiled to native executable code in a similar way, but was both safer and
more expressive. There is no advantage in having an awkward syntax for
specifying types or in making all pointers nullable.

Mainstream industrial languages today are a triumph of good enough, and they
continue to dominate primarily because of momentum and the size of the
surrounding ecosystem rather than technical merit in the language itself.
Unfortunately, this creates a vicious circle that reinforces the status quo,
and the few organisations with sufficient resources to break that cycle have
limited economic incentive to do so.

~~~
kabdib
There have been many contenders, notably Eiffel, Oberon and D. There are many
others that I don't immediately remember the names of.

These languages have /great/ technical merit. They offer safety (in various
forms) and other interesting technologies that C definitely lacks. They were
lauded by academics and industry pundits. So why didn't they take the industry
by storm?

Perceived technical merit is a terrible way to choose a language.

Pascal was widely regarded as a great language, a wonderful model, and it was
widely used in the 80s by various large companies. Today it is mostly dead. I
believe this is because Pascal only did an adequate job of expressing stuff at
the hardware and kernel level, and that C was better. Certainly nearly
everyone at Apple that I worked with breathed a sigh of relief when it became
obvious that it was okay to write C instead of Pascal for new projects. For
the most part we'd been writing C for years anyway, just in Pascal. About the
only thing that people missed were nested procedures (whereupon, C++).

Your new "adult" language is going to need a set of very compelling offerings
over and above "well, it's safer" in order to succeed.

Take a look at things people are doing /to/ C in order to be better:

- "analyze" builds that do control graph analysis and find bugs (not just
ones endemic to C, but actual logic bugs, too)

- declarative sugar that helps tools to reason about what things like drivers
are trying to do

- control extensions (commonly seen as macros providing 'foreach'-like
support)

- ways for tools to enforce local conventions (without spending tons of
manpower on parsers and so forth)

Come up with a language as good as C at low-level programming, that has great
debugger support, offers easy tool plugins, and that has interoperability with
the gazillions of libraries already available [take a page from C#'s great
interop story here], and you might have something. Go "academic" and just say
"this is good for you, use it instead," and the working programmers will see
nothing in it for them and ignore you, just as they've ignored or abandoned
dozens of other offerings in the last 30 years.

~~~
Chris_Newton
_There have been many contenders, notably Eiffel, Oberon and D. [...] So why
didn't they take the industry by storm?_

Some possible reasons, based on my limited knowledge of those languages:

Eiffel — Emphasis on simplicity over performance optimisation; emphasis on OO
programming style; legal issues around various parts of the ecosystem in the
early days

Oberon — Limitations of basic type system, such as a lack of enumeration types
and the way coercion of numerical types worked until recent versions

D — Many of the same major strengths and weaknesses as the more established
C++; two rival “standard” libraries for a long time

_Your new "adult" language is going to need a set of very compelling
offerings over and above "well, it's safer" in order to succeed._

Of course. You can throw in “it’s easier to write” and “it’s more powerful”
and you still only have a small part of the big picture, because in reality so
much depends on the surrounding ecosystem: development tools, libraries, and
so on. However, there is no reason we couldn’t have a language that was
superior to C in both safety and expressive power, remained compatible with
calling to/from C functions at ABI level for library compatibility and ease of
porting, and used a clean grammar to help tool developers.

_Take a look at things people are doing /to/ C in order to be better:_

While I don’t disagree with any of your examples, I’m not sure they really
tell us anything useful. The absence of other things that people might do
could be because they aren’t particularly valuable or it could be because they
are valuable but also prohibitively difficult or expensive to achieve starting
from C as the foundation.

------
ericHosick
In the future, programming will not be done through coding but through
composition. I'm a bit biased on this prediction.

~~~
alatkins
Kind of like how OO was going to give us massive libraries of OTS software
components with which we could just 'wire-up' programs?

~~~
zanny
You generally can, though. All the web servers, all the frameworks, all the
packages. If I wanted to make a graphing calculator in Python, I wouldn't
write a reverse Polish calculator or do more complex string parsing on user
input; I'd use numpy and sympy, with a Qt GUI, probably written in QML.

I see very few domain problems where the vast majority of the work hasn't
been done for you, and you just glue Legos together. Network stack? Most
languages have an httplib or you could get libftp or some such from a foss
repository.

The only real problem is figuring out which api is easier to use, since there
usually are competing choices on a lot of these drop in solutions. qt or
boost? django or bottle? backbone or angular?

~~~
alatkins
Yes I agree, but software reuse isn't exactly new. The commenter was obviously
referring to functional programming techniques, and I was trying to point out
that other silver bullets have been spruiked in the past, only to fall short.

And besides, someone still has to program these libraries - it isn't turtles
all the way down.

------
primaryobjects
I'm still hoping for computer programs to be written without humans
[http://www.primaryobjects.com/CMS/Article149.aspx](http://www.primaryobjects.com/CMS/Article149.aspx)

That was my initial experiment with self-programming AI, although ultimately,
the fitness methods were starting to grow in complexity themselves.

~~~
Peaker
Defining the spec would be the "programming".

------
kriro
Increasing the problem space (as opposed to constantly shrinking it, which the
author suggests as a trend) can actually make things easier to solve.

You can see the small version of that in a language like Prolog, where a
common pattern is to generalize the problem, because more generalized problems
tend to be easier to solve and reason about (often because they already come
with easy-to-discover base cases and steps for a recursive approach).
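The same trick shows up outside Prolog too. A classic small example, sketched
here in Python for illustration: the generalized problem "return the pair
(fib(n), fib(n+1))" has an obvious base case and a one-step recurrence that
the narrower "return fib(n)" lacks.

```python
def fib_pair(n):
    """Generalized problem: return (fib(n), fib(n+1)).

    Widening the question turns a branching recursion into a simple
    linear one with a single, easy-to-discover base case.
    """
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n - 1)
    return (b, a + b)

def fib(n):
    # The original, narrower question is answered by projection.
    return fib_pair(n)[0]

print([fib(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```

Solving the wider problem costs nothing extra here: it is what makes the
recursion tractable in the first place.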

------
puredanger
Sounds like a recipe for Strange Loop
[http://thestrangeloop.com/sesssions](http://thestrangeloop.com/sesssions) !

------
brianberns
> In the mathematical world, data just is, it has no behavior, yet the rigors
> of C++ or Java require developers to worry about how it is accessed.

This may be true, but in the _real_ world, data and behavior are still
intimately tied together. Data-oriented (i.e. functional) programming is going
to enhance OO programming, not replace it entirely.

~~~
textminer
I've moved recently from building data manipulation and machine learning
systems in Python to a stack that's primarily C++, and it's striking how much
less nimble I now feel transforming that data or building machine learning
pipelines.

I suppose building basic tools as Apache Thrift services would allow one the
ease of prototyping ideas in Python before building a performant system in
something like Java or C++.

~~~
brianberns
I suspect that's probably due to C++ being a lower-level language than Python,
not due to any inherent problems with OO programming.

BTW, Python describes itself as object oriented, so it illustrates my point
that OO and functional are not at odds.

~~~
pjmlp
Many developers tend to think OO == Java/C#/C++ as they never learned other OO
paradigms.

------
bsg75
> there’s a bias to languages such as Python or Clojure, which make data
> easier to manipulate

I assume it's the libraries available for Python (Pandas, NumPy, Blaze) that
are the basis for this quote. Is it also the case for Clojure (as compared to
other FP languages)?

------
ShardPhoenix
> We’re making faster and more powerful CPUs, but getting the same kind of
> subjective application performance that we did a decade ago.

My modern computer with an SSD feels subjectively faster/snappier to me than
any computer I've owned in the past.

------
skierscott
> Look around your home. There are processors and programming in most every
> electronic device you have

And those devices are packing more punch for their size -- just look at the
Raspberry Pi and its competitors.

------
floor_
Nothing about massively parallel programming. Bummer.

------
tossmeup
This feels like an odd angle to take in concern around the next decade of
programming.

It starts a level below what problems might be the next thing to tackle and
focuses on what shiny objects have already got enough traction to be
considered concerns of the past 3-5 years.

Short-sighted. Boring. Couldn't dance to it.

1-3 years are feasibly predictable. 3-7 are 50/50 hunches, 20% CI. 8-9 you
might as well be talking jet packs. 10+? You aren't talking about saving or
controlling the world? Refactor!

10 years from now, we blow IP up; we napster algorithmic experiences because
you can't patent wiping your ass. 10 years from now, we make devices cheaper
than water from scrap plastic. 10 years from now, any human can talk to any
other human anytime, anywhere. 10 years from now, my meta-data is meaningless
and we defund the NSA because they are deaf, dumb and useless. 10 years from
now, we eat the rich and feed the poor.

Why bother looking 10 years ahead without setting some real goals or at least
looking at things that might actually drive the innovation in the next 10
years vs what is already well-planned?

We're all going to be 10 years closer to death and you are believing we'll
have "smart dust"? A Roomba in every house!

Do you know how long it's been since I first heard that there would be fucking
smart dust? You mean we need to lay a powder of infrastructure down to detect
what distributed, distant sensors could tell you? Looks like a whale just
barfed in the ocean and we got a ton of our dust back online... water's still
wet AND salty there! Dial back the Antarctica dust belcher for a minute to
balance out the South Pacific by next year... stupid whales, when will you
learn?

He should have stopped at sensors. We'll have 1984. Smart dust? We'll be
crotch-deep in gray goo.

Whales are like, "IIIIIIII WIIIIIIILLLLLLL
DEEEEEEEEEESSSSSSSSTRRRRRRRRRROOOOOOOOYYYYYYYYYYYYYY!!!!"

~~~
celeryreally
You lost me somewhere between eating the rich, and whales...

~~~
miester_barfie
Yes, but it was fun

