
Next-Paradigm Programming Languages: What Will They Look Like? - furcyd
https://arxiv.org/abs/1905.00402
======
Waterluvian
I want a language that is designed alongside an editor/IDE. I want to stop
putting comments in my code. I want it to be first-class for my code to be in
the left pane and my comments to be in the right pane, always bound together
with anchors but always separate so my comments don't have to adhere to the
limitations of the code's text area.

And then I want to put rich things into my comments section like graphics and
tables and such. And I don't want to have to write table-like Markdown or a
shorthand that converts into graphics. I want all the WYSIWYG of Word or
PowerPoint or Inkscape just right there feeling 100% natural and native.

Jupyter is pretty darn close to this, and I like it a lot. But I feel like I'm
giving up a lot to make a notebook. I want this in a format built for much
heavier lifting.

~~~
otabdeveloper2
No, you don't want this. Every generation of new programmers since the 1960s
has talked about this, but nobody to date has actually written any useful
program using pictures and prose.

Sorry. Code is code, that's what your job entails and you can't make
programming into something it isn't.

~~~
a1369209993
To be fair, Waterluvian was talking about including pictures and tables and
such in _comments_ , while still writing actual code. It's probably still a
horrible idea, but for much more subtle (and less certain) reasons than
"waaah, I don't want to have to do programming in order to make a program".

~~~
maxxxxx
I would love to be able to include pictures and diagrams in comments. If they
were stored in a text format like HTML or JSON, they would play nicely with
source control. The IDE could then display them and allow editing.

~~~
flukus
Slap a text editor into doxygen documentation and you'd basically have this.

------
snidane
I think programming languages today focus too much on features, and then they
all end up looking like yet another Algol variant with a couple of features
borrowed from elsewhere.

I don't think people these days try to understand what the building blocks of
languages and computation are.

I'd wager the true next-generation programming languages will be ultra-
minimalistic, retaining only the essentials while not losing legibility.

Lisp, for example, is still not minimal, as it has 'special forms', i.e. hacks
to make conditionals work. The unintuitive prefix notation doesn't help
either.

Also, not many languages let people explore the true power of deeper stack
manipulation via delimited continuations. With them, users could create their
own looping and threading constructs, as well as custom exceptions, iterator
mechanisms, and effect handlers.

It's important to realize that the economic incentives for truly innovative
languages are bleak. Academia is stuck in incrementalism. The private sector
wouldn't pay for it, and the benefits would only be collected by cloud
vendors, adding yet more trillions to their valuations.

Why bother.
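The delimited-continuations point deserves a sketch. Python generators give a limited, one-shot approximation that is already enough to build a user-defined effect handler; all names below are illustrative:

```python
def program():
    # Each yield suspends the computation and hands an effect request
    # to whichever handler resumed us; the handler decides the answer.
    x = yield ("ask", "x")
    y = yield ("ask", "y")
    return x + y

def run(gen, env):
    """A tiny effect handler: interpret 'ask' requests against env."""
    try:
        effect = next(gen)
        while True:
            kind, name = effect
            assert kind == "ask"
            effect = gen.send(env[name])  # resume the suspended program
    except StopIteration as done:
        return done.value

print(run(program(), {"x": 1, "y": 2}))  # prints 3
```

This is one-shot and clunky; first-class delimited continuations would let the same trick express custom loops, exceptions, and threading uniformly.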

~~~
Iv
There is some active research in academia, and there is also the fact that
veteran programmers almost all develop an itch they would love to scratch:
text editors, programming languages, debuggers, terminals...

I know that if I had a few more hours a day I would try to write yet another
of these things.

(Where is my cross-platform, OSX-Linux-Windows one-file text editor that just
has smart indent and syntax highlighting for python?)

~~~
grumdan
> (Where is my cross-platform, OSX-Linux-Windows one-file text editor that
> just has smart indent and syntax highlighting for python?)

I know people are gonna protest this suggestion vociferously, but Emacs ticks
all these boxes (except that it works for multiple files), with almost no
configuration. Admittedly, getting auto-completion to work cross-platform for
various languages is still quite tricky.

~~~
Iv
By one-file, I mean one executable file. Also, Emacs does not ship with the
defaults most people have expected since the 90s.

------
kettlecorn
A few advancements that could be very good:

* Moving towards representing code on disk as an abstract syntax tree, so anyone can edit with whatever syntax they prefer. Syntax becomes virtually irrelevant.

* The above point opens up the possibility of representing code as visual graphs (even coding within virtual reality). Graphs and visual programming are useful, but being able to toggle between textual and graph representations gives you the best of both worlds.

* Smarter compilers. For example: easy-to-use compile-time constraints. Think about the ability to specify something like the minimum value an integer can be set to. This could allow the compiler to produce compile-time errors for things like index-out-of-bounds errors, with zero runtime overhead. More complex constraints could be user-defined. The mindset of programming could shift towards doing as much as possible at compile time.

* The ability to embed documentation alongside code. For most codebases, the ability to understand the code is part of the value of the code. Comments don't go far enough, particularly when working with visual concepts. Being able to embed readily accessible documentation within the code could help dramatically.
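The first two bullets are already partially reachable with today's tooling; a minimal sketch using only Python's standard-library `ast` module, to show that once code lives as a tree, the concrete syntax is just one possible rendering of it:

```python
import ast

# A source snippet with messy, arbitrary formatting...
messy = "x=1+2\ny =  x *3"

# ...parses to a syntax tree; the tree is the real artifact.
tree = ast.parse(messy)

# ast.unparse (Python 3.9+) renders the tree back in one canonical
# style, so each editor could render its user's preferred syntax.
print(ast.unparse(tree))
```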

~~~
polityagent
In case you're unaware, your smarter-compilers bullet point is called
dependent typing. You can try it out in languages like Idris. Also, the person
who wrote "The Little Schemer" recently released "The Little Typer", which is
a nice intro to some of the theory behind it and very easy to follow.
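For flavor, here is the sort of constraint a dependently typed checker verifies before the program runs. This Python sketch (all names hypothetical) can only enforce it at runtime, which is exactly the overhead the grandparent's "smarter compiler" would eliminate:

```python
class BoundedInt:
    """An integer constrained to [lo, hi]. A dependent type checker
    would reject out-of-range values at compile time; here the check
    can only happen at runtime."""
    def __init__(self, value, lo, hi):
        if not (lo <= value <= hi):
            raise ValueError(f"{value} not in [{lo}, {hi}]")
        self.value, self.lo, self.hi = value, lo, hi

def safe_index(items, i):
    # The constraint "0 <= i < len(items)" rules out index errors.
    idx = BoundedInt(i, 0, len(items) - 1)
    return items[idx.value]

print(safe_index(["a", "b", "c"], 2))  # prints c
```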

~~~
mbrock
Dependent type systems are one way to verify logical constraints in code, but
not the only way. Look at Dafny for a good example of how checked invariants
can be integrated into an ordinary imperative language. Or ESC/Java for the
same approach but integrated into Java. The basic technique goes back at least
to Dijkstra’s predicate transformers and nowadays the verification is mostly
automatic with SMT solvers.

~~~
bjz_
Those can be integrated with dependent types too - for an example look at F*.

------
dahfizz
> The dream of programming language design is to bring about orders-of-
> magnitude productivity improvements in software development tasks.

I can understand why there is such a focus on productivity, but I think it's
awful for the industry when it's our number 1 priority. Productivity is
meaningless if the thing that you are producing is of poor quality.

It's why we make web apps out of 500 npm dependencies when a native app would
be faster and preferred by the user. It's why we use languages where we don't
have to "fight against the type system" because we prefer writing code faster
to writing correct code.

Again, I understand perfectly why this is the case. It just kinda makes me sad
that we treat ourselves as garden hoses spewing out as much code as possible
without as much thought into making a better product.

~~~
cageface
Unfortunately users just don't care in most cases. The app stores have taught
a generation of users that software should be free so the pressure is on
developers to build things in the cheapest way possible. In most cases that
takes real native apps off the table. I particularly blame Apple here for
being so successful at commoditizing software in order to sell their hardware.
This might come back to bite them now that their hardware sales are slowing
though.

In markets where users are still willing to pay a premium you still see high
quality native apps. Ableton Live is $750, but plenty of users are happy to
pay that because you can't build Live with javascript and html.

~~~
dTal
It's not clear to me that this is actually a bad thing. Computer time is less
valuable than human time, and that goes for both developers and users. For
most one-off computing tasks, far better to fumble with something shoddy but
free that someone dashed off in a few hours, than invest a day's wages into a
product with man-years worth of polishing.

There's free food everywhere and we're complaining it's not gourmet.

------
sitkack
If I were to write an exploratory survey blog post of Next-Paradigm languages
I would cover problems that are to be solved and ways to represent those
problems.

There are many alternative forms of computation, and the inability to
represent a given form in the host language leads to complexity and code
bloat.

    
    
        * constraint programming, logical, spatial, temporal
        * lazy evaluation
        * back tracking, memoization
        * reversible
        * succinct
        * differentiable
        * anytime
        * incremental
        * probabilistic
        * lattice/crdt
        * quantum
        * sketching
        * programming by example
        * transactional
        * failure tolerant 
    

As for how programs that embody those techniques would be represented: I don't
know what those languages would look like, in either semantics or syntax.
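At least a few of the items on that list can already be embedded in today's hosts, which hints at what first-class support would subsume. Memoization, for example, is a one-decorator library feature in Python:

```python
from functools import lru_cache

# Memoization bolted on as a library feature: the language itself has
# no notion of it, so the technique lives in a decorator.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(90))  # fast, because overlapping subproblems are cached
```

Others on the list (probabilistic, reversible, quantum) resist this kind of shallow embedding, which is where genuinely new languages would earn their keep.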

------
Analemma_
I've been trying to imagine what programming will look like in the far future,
and I just can't believe that it will be fundamentally structured around
typing English-like text into a text editor, with a smattering of tools to
help manipulate the text and debug the resulting binary, like it is now. There
has to be more that computers can do to augment our brains and help us become
more efficient at writing code. Gary Bernhardt's "Whole New World" talk
([https://www.destroyallsoftware.com/talks/a-whole-new-world](https://www.destroyallsoftware.com/talks/a-whole-new-world))
is a good start, but it could go so much further.

Unfortunately it seems like most working programmers are deeply suspicious of
new paradigms for producing programs. Understandably so, since all the "visual
programming" tools up until now have been either teaching toys or unusable
disasters, but I think we're limiting ourselves tremendously.

But it may not be possible to arrive at my glorious imagined future
incrementally. It may take some genius just sitting down and working on this
for years and producing a completely finished system to convince people of its
capability.

Sorry this is a bit lofty and light on specifics. It's more a feeling that I
have that it would be ridiculous if programming in 2100 looked the same as it
does now, and not something I've thought about deeply.

~~~
keyle
FYI Unreal Engine's "blueprint" is a visual programming language that is
neither a teaching toy nor an unusable disaster.

~~~
Too
[https://blueprintsfromhell.tumblr.com/](https://blueprintsfromhell.tumblr.com/)

~~~
keyle
Numerous websites highlight similar things in text :)

And considering blueprints spit out C++ once compiled, I'd take my chances
with BP!

------
theamk
This is a discussion topic, and this text provides _a_ viewpoint. However,
from how I read it, the "current paradigm" is imperative languages like C/C++,
and the "next paradigm programming language" is Datalog (the word occurs 31
times in 9 pages of text).

What is missing are functional and descriptive languages.

No explicit loops? "Heavily leverage parallelism, yet completely hide it"?
Lazy evaluation? This sounds like functional languages.

"Example: An Interactive Graphical Application." describes HTML+CSS exactly.

I understand the author really loves Datalog, but the omission of other
languages makes the message less effective.

~~~
tsimionescu
> "heavily leverage parallelism, yet completely hide it"? lazy evaluation?
> This sound like functional languages.

I agree with your general comment, but this statement seems a bit optimistic
on the functional-languages side. Lazy evaluation? Haskell does it, but most
other functional languages don't.

Heavily leverage parallelism but hide it - this is often touted as an
advantage of pure functional languages, but from what I've heard, in practice
it has never panned out, at least if I'm interpreting it correctly as
automatic parallelization. Turns out, while the compiler can prove it would be
safe to do so, a lot of the time it would massively _hurt_ performance, and
the compiler can't tell when that is the case or not.

So, even in a language like Haskell, you still have to explicitly parallelize
code, think about chunking etc. And if we stray from pure parallelism and into
concurrency, all of the problems of having to do explicit synchronization come
back, you just have less to worry about with no(/less) shared state.

Funny thing: one language that does 'leverage parallelism, but hide it' is
x86_64 assembly, where the processor automatically executes (parts of)
assembly instructions in parallel, based on data dependencies and available
compute ports.
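The "you still have to explicitly parallelize" point looks like this in practice. A Python sketch with hand-chosen chunking (in CPython, threads only really help I/O-bound work, so true CPU parallelism would need processes; the point is that none of these decisions are automatic):

```python
from concurrent.futures import ThreadPoolExecutor

def chunked_sum(xs, n_chunks=4):
    # The programmer, not the compiler, picks the chunking strategy:
    # too fine-grained and scheduling overhead dominates, too coarse
    # and workers sit idle. This is exactly the judgment automatic
    # parallelization has historically failed to make well.
    size = max(1, len(xs) // n_chunks)
    chunks = [xs[i:i + size] for i in range(0, len(xs), size)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        return sum(pool.map(sum, chunks))

print(chunked_sum(list(range(1000))))  # prints 499500
```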

------
Animats
The author is pushing a successor to Prolog. This is way outside the
mainstream, but maybe it will work.

I suspect that future programming languages will either be garbage-collected
or will have something like Rust's borrow checker. Nobody needs a new language
with dangling pointers and buffer overflows. C++ is trying to retrofit
ownership semantics, which is good, but has major backwards compatibility
problems.

Indentation and code layout will be automatic. Either in the editor, or
something like "go fmt". Nobody is going to put up with the indentation and
the delimiters being out of sync. Also, the ultra long lines of functional
programming have to be laid out in some standard way to be readable.

Does each language really have to have its own build and packaging system?

~~~
Aardappel
Here's a possible borrow checker alternative:
[http://aardappel.github.io/lobster/memory_management.html](http://aardappel.github.io/lobster/memory_management.html)

------
spiralganglion
For folks interested in future programming tools — be it languages, editors,
IDEs, visual programming, new abstractions, what have you — I recommend
looking at the [https://futureofcoding.org](https://futureofcoding.org)
community. The podcast has a number of great interviews with people exploring
all corners of this problem space, and the Slack group is full of people
actively working on these hypothetical-future tools and sharing their
findings.

------
alkonaut
Things I'd like to see

- Tiered abstraction levels. I want to write high-level functional code when
I can, which expands to more complex code under the hood. When I _want to_ I
can instead manually expand the logic, perhaps the only time I need to code
imperatively. Think of a project today that has high-level Haskell or Python
code but implements bits in C where necessary. For the highest-level bits,
things like formal verification, safe concurrency, and code contracts should
be much simpler to use. For the low-level bits, I'd just take responsibility
for correctness (memory safety, formal correctness, concurrency) myself (like
"unsafe" in Rust/C#). Perhaps the number of abstraction levels should be more
than 2.

- Integration of tools: 1) Today a compiler, editor, and VCS are 3 different
tools, and their lowest common denominator is _text files_. I'd like to see a
system that version-controls a syntax tree for a whole project and allows
semantic diffing. A build server could trivially do incremental builds, moving
a symbol wouldn't break history, and reliably running only the impacted tests
from a 10h test suite would be possible. This doesn't necessarily mean
everything needs to be one big tool - but the lowest common denominator of
tools could move from UTF-8 files to a binary representation of the whole
syntax tree. A more complex binary representation would have drawbacks, but
also other benefits, like trivial inclusion of a picture in a comment that can
be previewed with a mouse hover in the editor, or displayed inline. 2) Better
integration with documentation and issue tracking. Same as the AST
representation: we need to be able to link documentation, pictures, issues,
etc. from code in a way that doesn't rot if we change directory paths, issue
trackers, and so on. A broken link to a document in a comment should be a
build error like any other error.
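The semantic-diffing idea can be approximated even over today's text files by comparing syntax trees instead of characters; a minimal Python sketch using only the standard library:

```python
import ast

def same_meaning(a, b):
    # Compare the structure of two snippets, ignoring formatting noise.
    # A real semantic diff would go much further (e.g. ignore reordered
    # top-level functions); this only shows whitespace disappearing.
    return ast.dump(ast.parse(a)) == ast.dump(ast.parse(b))

print(same_meaning("x=1+2", "x  =  1 + 2"))  # True: formatting only
print(same_meaning("x=1+2", "x = 2 + 1"))    # False: different tree
```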

------
6keZbCECT2uB
An overview of a problem that I run into is the need to pair some input,
validate that input, query the system based on the input, expose the
intermediate stages for inspection and modification by arbitrary consumers,
conditionally modify the underlying system in some way, and provide well
defined semantics around the outcome while handling failure recovery,
distributing the processing into multiple, cancelable stages, and sharing the
system context and other forms of memory with concurrent operations.

The right way to behave in each situation based off of the input, the state of
the system, or failures encountered is determined by history and by many
people.

A language that makes my job easier solves problems like:

1. Help me describe my system as straightforwardly as the above description.

2. Prevent me from neglecting to handle failure, and reduce the work to
specify how to handle failure.

3. Prevent me from introducing logic errors, e.g. concurrency without
synchronization, or passing an int where a string is expected.

3a. Help me verify that my solution does what I expect.

4. Make it easy to accumulate data and pass information through various
interfaces.

5. Make it easy to extend behavior without modifying existing code.

Given that we probably spend 95% of our time plumbing information, handling
failure, and reducing logic errors, an order of magnitude of productivity
increase is realistic. Choosing an algorithm or data structure is less than
five percent of time allocated.

Rich Hickey and Clojure seem to focus on people like me. Rust seems to focus
on another subset of the challenges I face. One of the useful ideas I've
encountered is that coding is a specification design process rather than a
manufacturing process. We start with an ambiguous description and specify
behavior in increasing detail until a computer can work with it. The article
suggests that we can subordinate some of this detail to the compiler. The
details that it chooses to subordinate aren't the details that disrupt my
productivity.

~~~
tigershark
It seems that you are describing Haskell or OCaml/F#; I don't see how Clojure
would fit the bill, given that it's a dynamic language.

~~~
6keZbCECT2uB
Those languages do indeed have a lot to offer for helping ensure safety. I
attempted to describe 'situated programs', the problem Rich developed Clojure
to address. I've struggled to use Haskell in these kinds of environments maybe
_because_ it lets you do so much static analysis.

The article assumes that as we get more productive at programming our time
will be dominated by tests. While that's probably true, I would consider it
important in this future world to minimize what tests need to be run and
written. We do this today in Ocaml by making some classes of inconsistency
inexpressible. We do this today in Rust by making some kinds of failure
inexpressible.

------
keyle
How about a gesture based programming language that translates to a visual one
on screen? Is it so far fetched?

I've been typing programs for so many years, and I think we've gone as far as
we can go with text, auto-complete, live compilers, hot reload, etc.

It wouldn't be that uncomfortable, seeing as a developer spends most of their
time reading / analysing and, god forbid (if the open office would allow it,
!@#$%^ noise), THINKING.

Here is a thought: want to make the future of programming languages? Remove
distractions.

~~~
Pamar
How would you "name variables" or "refer to code blocks" without typing?

How will you search for specific things in a codebase (where did I assign this
variable again?) without text?

Have you ever tried running "diff" on "graphical/flowchart" code?

~~~
kettlecorn
Diff tools might actually be able to work better on graphical/flowchart code.

It could theoretically be easier for such tools to identify exactly what
changed and present that information, rather than superfluous stuff like
whitespace changes.

~~~
Pamar
My questions are based on actual experience using a "graphical" language
[1]... I am not using it anymore, but the "theoretical" advantages you mention
never actually became real for me (or my former colleagues still using it,
either).

[1]
[https://en.wikipedia.org/wiki/WebMethods_Flow](https://en.wikipedia.org/wiki/WebMethods_Flow)

~~~
kettlecorn
I don't doubt that existing graphical languages have very poor tooling.

I believe that diff tools could be better if they operate on some format that
represents the meaning of the code, not the details of how it's written. New
tools would have to be written, so it's largely a matter of pragmatism and
momentum that prevents such a thing.

FYI I googled "WebMethods Flow merge tools" and one of the more useful
discussions actually featured you from 3 years ago expressing similar
sentiment:
[https://news.ycombinator.com/item?id=12106945](https://news.ycombinator.com/item?id=12106945)

~~~
Pamar
I am old so I tend to repeat myself a lot :)

Honestly, I would be happy to use a graphical (or any other alternative to
traditional) language. I was reasonably enthusiastic when I started working
with webMethods, and there are some things I like a lot. But honestly I cannot
say it scales well beyond simple transformation / mapping tasks.

------
fouc
I've always felt that Ruby was the language that got closest to natural
language or to natural pseudo-code while retaining conciseness and not getting
overly verbose. I hope more new languages are heavily influenced by Ruby or go
further towards being highly readable and concise.

~~~
tragomaskhalos
Ruby is fantastic for DSLs; however I fear it's becoming Betamax to Python's
VHS.

~~~
wilsonnb3
It’s been the Betamax to Python's VHS for a long time.

Outside of rails, ruby is pretty unpopular.

------
AnimalMuppet
Let's start with Fred Brooks: There aren't any new paradigms, techniques, or
languages that (realistically) promise even a one-order-of-magnitude
improvement in programmer productivity. (Paraphrased from "No Silver Bullet".)

That said, I think the biggest way a language can help is by having a great
library. Code I don't have to write is a huge productivity boost. An
outstanding example (for its day) was the Java library. It was like Barbie -
it had _everything_. And it was organized (and documented) well enough that
you could find what you needed pretty easily.

For the language itself, I don't have any great answers. But I observe that
many of the comments here focus on syntax. Syntax matters, but don't forget
that semantics matter also.

~~~
PeCaN
> There aren't any new paradigms, techniques, or languages that
> (realistically) promise even a one-order-of-magnitude improvement in
> programmer productivity

I am doing 10×. It's possible.

~~~
AnimalMuppet
Maybe you are. (I'd need more information to know how to evaluate your claim.)
But if Brooks is right, you're not doing it because of the language you use,
or the programming paradigm or technique you use.

------
UweSchmidt
I'd like to see tests as a first-class concept.

Every class / construct gets a test stub. The compiler collects information
during debugging on test data and on how the elements of the application are
connected, and expands the tests based on this information.
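A crude version of this is possible in userland today: record real (args, result) pairs during a debug run, then replay them as regression checks. A hypothetical Python sketch, with all names invented:

```python
import functools

def record_calls(log):
    """Append every (args, result) pair a function sees to log, so a
    tool could later expand the pairs into regression tests."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            result = fn(*args)
            log.append((args, result))
            return result
        return wrapper
    return decorator

seen = []

@record_calls(seen)
def area(w, h):
    return w * h

area(3, 4)  # a call observed "during debugging"

# Replay a snapshot of the recorded calls as regression checks.
assert all(area(*args) == result for args, result in list(seen))
```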

------
sriku
I've often asked myself this question - are there any fundamental reasons why
_language_ needs to be the most expressive way to instruct computers? If not,
what else?

... and I've tried to resist answering it, letting the question ferment
instead.

~~~
nekopa
You made me laugh. For some reason the image of me (an overweight programmer)
writing my next website via interpretive dance - in the vein of the dance
scene in "The Big Lebowski" - won't stop running through my head.

------
PaulHoule
That article seems focused on languages aimed at the professional programmer.

I think the non-professional programmer is a bigger market. That is, the
person who uses spreadsheets or writes some simple scripts.

Another issue is readability by non-professionals. Executives may not need to
write business rules, but they ought to be able to understand them well enough
to sign off on their correctness.

~~~
tempodox
All that does sound a lot like COBOL.

~~~
PaulHoule
COBOL is good for what it is.

I wouldn't use it to get the eigenvalues of a matrix though.

------
nickmqb
Most languages today come with a runtime. That makes it hard to interoperate
with other languages/environments. To avoid this, a new programming language
should either have no runtime (or a very minimal one), or be very explicit
about which features it expects from its runtime and allow the runtime to be
swapped out by something else (we might see something like that in the WASM
sphere).

Furthermore, I believe we'll start seeing an increase in tooling around
languages, and a focus on reducing iteration latency. Traditionally,
REPLs/live editing have been mostly associated with higher level languages,
but there's no reason that those things could not be applied to low level
languages as well.

Finally, while I think there will always be room for both simpler and more
complex languages, the market for simple languages is somewhat underserved
right now, so we'll start seeing more of those in the near future.

Disclosure: I'm the author of Muon, which is a new programming language that
tries to embody these principles:
[https://github.com/nickmqb/muon](https://github.com/nickmqb/muon)

------
voldacar
I'd love to see a modern successor to occam, or something similar that has
first-class, painless multithreading.

It's strange to me that in this era of 32+ core machines, the language model
is still single-threaded by default, with (often significant) extra effort
required to execute multiple bits of code simultaneously. Not to mention
languages like Python, which restrict you to a single CPU core.

~~~
Qwertystop
Erlang/Elixir?

~~~
voldacar
I guess so, but Erlang runs in a VM that handles setting up OS threads and
uses its own scheduler to manage Erlang "threads" - I think that's cool and
all, but it would be nice to have a language that could do that more natively,
with less abstraction, imho. Occam ran on bare metal.

------
agentultra
I hope the next languages will come from interactive theorem provers and
program synthesis. I'd much rather work at the level of types, propositions,
models, and proofs and solve the interesting problems. The rest is
bookkeeping, often repeated over and over.

And I hope the interfaces for these languages won't require the use of a
glowing screen and a sadistic keyboard. I'd rather like it for computers to
disappear into the background. I'd like to reason about my programs in a
physical space where I can freely walk around, write on a note pad, draw on a
chalk board, converse, and re-arrange the room to my liking. I quite dislike
how I've developed astigmatism, am at high risk for RSI, and probably other
health ailments because we can't think of a computing environment better than
what we have right now... just with more pixels, pop ups, nags, swishy
animations, etc.

------
postalrat
How about a language that lets you write the tests and it creates your
application code?

~~~
vnorilo
This might sound good to a TDD practitioner, but I'd suggest that the value of
tests is that they independently verify an implementation. Couple them
directly and soon you will be writing tests on your tests.

OTOH, saying "this must be true" and letting the compiler work it out sounds a
lot like logic programming.

~~~
postalrat
I've felt testing can be reduced to writing those functions twice and hoping
at least one is correct. There has to be a better way.

~~~
vnorilo
The best way is making no mistakes :)

The second best is machine-checkable proofs, like types.

The fallback is tests. Sometimes the first two can't be applied.
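One concrete "better way" than writing the function twice is a property-based test: state what must be true of any output, and check it against many random inputs. A stdlib-only Python sketch (sorting as a stand-in for the function under test):

```python
import random
from collections import Counter

def my_sort(xs):
    return sorted(xs)  # stand-in for the implementation under test

# Properties: the output is ordered, and is a permutation of the input.
# No second implementation is needed, only claims about the result.
for _ in range(200):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    out = my_sort(xs)
    assert all(a <= b for a, b in zip(out, out[1:]))
    assert Counter(out) == Counter(xs)
```

Libraries like Hypothesis automate the input generation and shrink failing cases; the sketch above only shows the idea.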

------
TheOtherHobbes
Among a very long list of things that will probably never happen, I would like
to see software projects become a kind of conceptual/logical history of
beliefs and intention with the most recent iteration representing the current
state of knowledge and practice about some domain of interest.

And I'd like to see tools that can extract value and improve efficiency by
searching paths-not-explored and comparing them with the current knowledge
state.

It's easy to forget that code is a means to an end, not an end in itself, and
there may be other ways to reach those ends.

IMO there's been too little work done in CS on robust knowledge engineering -
as opposed to lambda calculus-inspired algebraic manipulation.

ML is catching up a little, but a lot more may be possible.

------
v_lisivka
Goal-based language (similar to Makefiles), driven by AI, with automatic
error handling of any kind.

------
devonkim
A language that doesn't encourage its users to re-invent wheels, and instead
encourages crafting re-usable modules that work well across hardware and
locality changes on top of solidly tested and reviewed foundations, would
help improve productivity while still being comprehensible to everyone. This
is the power of runtimes like the JVM and BEAM, but it is not really
integrated into any language particularly well IMO. I'm thinking of a
language that integrates algorithm constraints beyond types into something
more like a meta-runtime that helps functions run with the appropriate level
of resourcing.

I'm just hoping that someday at compile time someone will be told "warning:
function will take 2M cores to sort an arbitrary list within 20 ms"

------
naasking
I think a true next gen language will have distribution-related concepts
built-in. For instance, a strict monotonic subset that will make it infinitely
scalable.

For problems that can't fit in this subset, it will need concepts for
coordination and reasoning about the state/consistency of the systems
involved.

Concurrency and mobility/distribution are the big problems that programming
languages have yet to satisfactorily solve. We have data structures and
limited models that help with these problems (eg. CRDTs), but it would be a
huge productivity boost to put these abstractions right in the language so you
can read, write and reason about programs more easily.

~~~
douglasfshearer
Does Erlang not satisfy this?

~~~
naasking
Erlang solves some of it, but having to manually coordinate actors is not
suitable for every problem. Collaborative editing for instance.

------
mrfusion
I think we’ll be working in bigger concept “chunks”.

Sort of like Django today but it will intelligently connect the plumbing for
you.

For example you’d just select/say, “add a user authentication system to this
app” and it would figure out and guess at the best way to do that.

Or “get me something to store the data users enter on this new form” and it
would have some smarts to store it in a way that makes sense for your
purposes.

You could always drill into what the system creates for you and change things
but hopefully most of the time it comes up with something reasonable.

------
andy_ppp
I'd like for people to go back to understanding why atomicity is important and
not "design" systems with microservices that have at most a few thousand
users. As for the future I'm already extremely happy with the compromises
Elixir gives me but I'd like to see a redux/mobx type thing that is setup from
the server infrastructure and knows how to interact with the frontend without
me constantly having to keep things in sync and up to date manually.

------
marknadal
Stack-based concatenative operator languages are the future. If you haven't
heard of them (and of railroad programming), do yourself a favor and look
them up!!

~~~
lincpa
Have you heard of this one? (Everything is a pipeline, everything is RMDB) ;-)

[https://github.com/linpengcheng/PurefunctionPipelineDataflow](https://github.com/linpengcheng/PurefunctionPipelineDataflow)

------
tanilama
Unless hardware changes drastically, I can't really foresee the demand for a
new programming language.

Who is there to back it? In this era it would have to be one of the big tech
companies, and unless they decide that retraining tens of thousands of
engineers and reworking their infrastructure accordingly is worth the
benefits the new language brings, I don't see how this would happen.

~~~
PeCaN
Existing languages are _terrible_ for today's hardware.

~~~
tanilama
Not true. They have been optimized for decades. A programming language is as
much about implementation as abstraction.

~~~
rini17
Nope, only compilers/interpreters were optimized, not the languages
themselves. Languages are the same as conceived in the era of single-core CPUs
with one-cycle memory latencies.

------
drinfinity
I think things are going to split up even more:

\- languages for physicists
\- languages for game developers
\- languages for business apps
\- languages for mobile apps
\- etc.

These domains turn out - IMHO - to have vastly different needs and are better
served by specialist tools. Some of them textually, some of them visually
(game design: Unreal's blueprint).

------
ww520
I just want LOLCODE to become mainstream.

------
preordained
How about a new kind of hardware-enhanced IDE that stooge-slaps the
programmer if he even thinks about doing something like writing a test that
depends on stack-trace line numbers (just saw this... uyyy)?

What if I just <whack>...okay, maybe I'll try something else.

------
winter_blue
It's interesting that this paper talks about compilers becoming a lot more
sophisticated and powerful, and simplifying many aspects of programming. I've
always hoped for this, and want to work on this as a personal side research
project.

------
synthc
I'd like to see a language with built in version control, so that you can call
older versions of parts of the code, compare performance between branches all
within a running process. I believe there is a lot of potential here.

------
ksaj
It'll look like Lisp. Or it'll take a single feature of Lisp and call it new
(as has been the case for quite some time). Eventually every language family
will be a mutually incompatible collection of Lisp variants.

------
raxxorrax
Educated guess: Very much like the old ones. That would be my conclusion at
least by looking back the last 20 years.

Sure, there are exotics like languages for code golfing and domain specific
languages, but overall things stayed pretty familiar.

------
thrower123
I suspect that we'll be writing C, C++, Java and Javascript for the next 50
years. Unless WebAssembly really takes off, and then Javascript might finally
go to the dustbin of history, as it should.

------
jasonhansel
I really wonder if machine learning is going to allow us to create more
intelligent editors/linters/compilers. In fact, it could start to automate
most of the actual process of writing code.

~~~
Impossible
Facebook demonstrated a version of this recently
[https://ai.facebook.com/blog/aroma-ml-for-code-
recommendatio...](https://ai.facebook.com/blog/aroma-ml-for-code-
recommendation/).

------
PeCaN
I'm working on one. It's nothing like any of the comments here. You all are
trying to add complexity and I removed it.

All the comments here are along the lines of "I want a faster horse".

------
bluejay2387
They will mostly look like what we have today, but they will myopically focus
on one particular set of principles and have fancier names.

------
bregma
Truly the next programming language would be spoken English. Something like
"Jarvis, give me the optimal schedule for a speaking tour of 27 North American
cities and I don't want to pass through the same airport twice. Also, outline
each stop in invisible red lines and I need it sent to my iPhone in Outlook
format by yesterday."

Anything less and you may as well be using JCL on punched cards. $END

------
captainmarble
My future coding will be chains of functional blocks with configurations.

~~~
chopin
This sounds like a nightmare to debug.

------
westoncb
There's something I think could be game-changing on the front-end of
programming languages:

 _Just_ (as in 'only') change the (internal) representation of source code
from character sequences to something that doesn't need a complex parsing
process. In other words, eliminating the concept of syntax from language
design, while keeping other language properties intact (could even use e.g. an
LLVM back-end), allowing whoever desires it to write their own editors etc. as
always.

(This is different from eliminating text: IMO, the visualization should remain
primarily text. I say to eliminate syntax in the sense that the text
'visualization' no longer has a connection with semantics, nor any grammatical
constraints.)

The reason I say _' just'_ change that: there are other systems structured to
render AST-like structures (rather than parsing), but they are integrated with
particular software systems. We need a general purpose alternate format with a
similar independence of any project that 'character sequences' have for
current source code.

\------

I have a proposal for one that's very simple and general: represent languages
as graphs of language constructs, and represent particular programs as paths
through a language graph. More details at [1].

Making this substitution we can: throw out parsing, allow easy customization
of language appearance (including things that would traditionally fall under
'syntax'), have much easier access to language insight (source representation
is directly in terms of a language definition), give far easier access to
experimenting with language UX ideas, and more.
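As a toy illustration of the general idea (this is not the specific graph encoding from the linked post), a program can be stored directly as structured data, with text as just one derived view produced by a renderer, so no parser exists anywhere:

```python
# Toy sketch of syntax-free source: the program is a tree of language
# constructs, and the textual view is derived by rendering the tree.
# (Illustrative only; not the graph representation from the post.)

program = ("define", "square", ["x"], ("mul", ("var", "x"), ("var", "x")))

def render(node):
    kind = node[0]
    if kind == "define":
        _, name, params, body = node
        return f"def {name}({', '.join(params)}): return {render(body)}"
    if kind == "mul":
        return f"{render(node[1])} * {render(node[2])}"
    if kind == "var":
        return node[1]
    raise ValueError(kind)

print(render(program))  # def square(x): return x * x
```

An editor would manipulate `program` directly and could swap `render` for any notation it liked, which is the "easy customization of language appearance" point above.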

I'd love to hear thoughtful feedback on this. I will likely invest a solid
amount of time/effort into building a prototype before long, so if someone can
spot a potential weakness that I've missed, I'd be forever grateful.

(That said, please attempt at least a small amount of charity in reading: I
wrote this in 2015 and most all I've gotten is people replying, "visual
programming doesn't work" —even though it's expressly _not_ advocating visual
programming. I believe I really am talking about something fairly new here:
and I am more than willing to listen if I'm mistaken. I have too many projects
already; I have no need for this to work out. I just haven't been able to
disprove it as... not being what it appears to be, and so I've felt an
obligation to see it through for a long time.)

[1] [http://westoncb.blogspot.com/2015/06/how-to-make-view-
indepe...](http://westoncb.blogspot.com/2015/06/how-to-make-view-independent-
program.html)

~~~
chubot
I skimmed your blog post, and I'm not quite sure I understand how it will make
editing easier. It seems more complex to me, although that might be because
it's unfamiliar.

If you want some intelligent counterpoint, here is a post by the designer of
Rust on why text is a good representation for programs:

 _Always Bet On Text_
[https://graydon2.dreamwidth.org/193447.html](https://graydon2.dreamwidth.org/193447.html)

As someone who's working on a programming language, and has implemented a few
DSLs and many parsers, I tend to agree with him.

Parsing is a pain in the ass, but I think it can be made easier. The difference
between text and structured data is "just" parsing, so I don't feel that
switching to structured data can be a real paradigm shift. If there was a
paradigm shift to be found, then it would have been implemented already by
converting text to structured data, and operating on that data.

And this of course already happens in dozens of IDEs, and that's great, but it
already happened. And while I have seen people be fantastically productive in
IDEs, a lot of the best programmers I've seen also use emacs/vim.

But your ideas do seem more fleshed out than most, and I think that developing
a prototype is the only way to make them more concrete, and more clearly see
the benefits and costs. I do think there are benefits to making structured
data the primary format -- I just think they are outweighed by the costs.
Graydon's post outlined a bunch of things you lose if you abandon text.

~~~
westoncb
I can’t give a full reply atm, but briefly I’d say: yes, it’s more complex
than character sequences but that complexity (which isn’t much) could be
shifted to a lib once for any number of languages, as opposed to needing to
parse every language uniquely.

~~~
chubot
That's true, but what I'm saying is that parsing is not a bottleneck in
programming tools.

Parsing every language uniquely is a lot of work. But it is doable and more or
less "done". After that you have a whole bunch of other problems to solve
(i.e. all the other things an IDE or debugger does), and that's where the real
work is.

You can get rid of parsing by making structured data the primary format rather
than the secondary format. But then you introduce a whole host of other
problems. For example, how do you 'git merge'? You can't even collaborate
anymore because merging is a textual algorithm. You would have to write a new
merge algorithm for every single programming language, because each one has
its own structured data format.

So basically you get a slight benefit by removing text, in exchange for a huge
cost.

BTW here are some posts I've written about parsing:

[http://www.oilshell.org/blog/tags.html?tag=parsing#parsing](http://www.oilshell.org/blog/tags.html?tag=parsing#parsing)

Again, parsing is a pain in the ass, but I believe it can be made easier, and
it's not a bottleneck in any case.

In my mind, the biggest bottlenecks in programming are:

1) Understanding the problem well enough to formulate the solution, whether
it's in a programming language, math, or a visual syntax.

2) "Size is code's worst enemy". Many programming languages are inappropriate
to express the problems they are solving. The code becomes unmanageably large
pretty quickly. A million lines of C++ code is not a great way to express a
solution to anything, yet it's the state of the art for vital projects like
LLVM and Clang, Word, Photoshop, etc.

Both of these are 1000x bigger problems than parsing IMO. I'm not trying to
criticize your specific idea, but I have heard the general idea to "get rid of
text" several times, including more than once on this thread ... so that's my
general rebuttal, if you're interested in a reasoned critique.

~~~
westoncb
Again I’m out atm and can’t reply in full, but I think if you read my post in
more detail it already has the response to much of what you’re describing.

I agree it’s not the bottleneck to programming languages overall, but it is
the bottleneck to the HCI/UX component of language design.

Edit: the git merge problem can be resolved by having a canonical text
representation or doing a structure diff instead.
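A structure diff of that kind could be sketched as follows (a toy illustration only, not a full merge algorithm):

```python
# Toy structural diff: compare two program trees node-by-node and report
# the paths where they differ, instead of diffing lines of text.

def tree_diff(a, b, path=()):
    if a == b:
        return []
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        diffs = []
        for i, (x, y) in enumerate(zip(a, b)):
            diffs.extend(tree_diff(x, y, path + (i,)))
        return diffs
    # Leaves (or differently-shaped subtrees) are reported as one change.
    return [(path, a, b)]

old = ("add", ("var", "x"), ("lit", 1))
new = ("add", ("var", "x"), ("lit", 2))
print(tree_diff(old, new))  # [((2, 1), 1, 2)]
```

A merge tool built on this would report "the literal at this position changed", which survives reformatting that would confuse a textual diff.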

As for the rest of your response, you’re basically saying you didn’t take the
time to get a basic understanding of my proposal, and you think other problems
are more important than the specific one I’m addressing. I’m not sure how that
could lead to a fruitful discussion.

I will take a look at the post about benefits of text though, so thanks for
that. (Apologies if my frustration is coming out here, it’s just that I have
had many responses—most much worse than yours—basically saying, “I didn’t read
what you wrote but you should consider X instead.”)

------
louthy
Abstraction to the next level up is classically how language paradigms have
progressed. From flicking switches that represent bits, to assembly language,
to C, to memory managed languages, etc. Programming languages are not for the
computer, they're for the computer programmers' weak grey matter between their
ears. Future languages should be entirely about improved coping strategies for
us.

Higher level abstractions allow the programmer to do more with fewer
instructions. The closest we've got to that right now is monadic programming
in Haskell (and other languages with first-class support for monads). They
allow encapsulation of common patterns (if/then/else, exception handling,
state management, environment passing, list processing, etc.). But it would be
naive to think we're done.
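The "encapsulation of common patterns" point can be sketched even in Python: below, a small `Result` type whose `then` method captures the "stop on first failure" pattern, so explicit error handling disappears from the calling code. The names are illustrative, not from any library:

```python
# Sketch of monadic-style error handling: `then` encapsulates the
# short-circuit-on-failure pattern once, instead of if/try at every step.

class Result:
    def __init__(self, value=None, error=None):
        self.value, self.error = value, error

    def then(self, f):
        # Once an error occurs, all later steps are skipped.
        return self if self.error else f(self.value)

def parse_int(s):
    try:
        return Result(value=int(s))
    except ValueError:
        return Result(error=f"not a number: {s!r}")

def reciprocal(n):
    return Result(error="division by zero") if n == 0 else Result(value=1 / n)

# No explicit if/try at the call site; the pattern lives in `then`.
print(parse_int("4").then(reciprocal).value)    # 0.25
print(parse_int("oops").then(reciprocal).error)  # not a number: 'oops'
```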

Anybody who works on web applications over a long time will realise we've run
out of luck with our current languages. Code-bases that only grow and projects
that never end. I work on a code-base of 15 million+ lines of code, and the
language gives very little help in terms of managing that. Especially when the
monolith gets broken up into services.

We need:

* Abstraction from the network to easily support distributed applications. Erlang has its actor system, which is the closest to this, I think. The standard criticism is that because networks can fail you can't build a one-size fits all system. Looking at the actor model it's clearly possible to build something that fits most common use-cases for distributed applications.

* Abstraction from error handling

* Abstraction from complex control flows over time - i.e. code where one line runs, then the next line might run 6 months later

* Abstraction from threads: no mutable data, plus built-in support for synchronisation, coordination, and resolution. For example, types that support vector-clocks transparently, or the ability to have locally synchronised versions of something on a server, etc.

* Improved type systems that allow for easy type-driven development, so the compiler is enforcing the business rules of the system. Languages with dependent type systems are closest to this, but still often feel quite clunky. Perhaps even ban the use of types like int, string, etc. directly - you can only alias them into new-types like: Metres, PersonName, etc. so you're forced into using stronger types (although someone would probably just alias `int` to `Int` to get around it).

* Adaptive type systems that can describe a network of services. They would fail to compile if you send the wrong message to a service, and fail to compile if, a service which should be available, isn't.

* Something that's as simple to understand as Python for new devs.

* No null

* OO to die in a fire
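The "no raw primitives, only new-types" point can be approximated today with `typing.NewType` (a static type checker such as mypy rejects passing a plain `float` or the wrong alias, though Python itself does not enforce it at runtime):

```python
# Sketch of the "alias primitives into stronger types" idea. Under a
# type checker, speed(9.58, 100.0) or swapping the arguments is an
# error; at runtime NewType is a zero-cost identity function.

from typing import NewType

Metres = NewType("Metres", float)
Seconds = NewType("Seconds", float)
MetresPerSecond = NewType("MetresPerSecond", float)

def speed(d: Metres, t: Seconds) -> MetresPerSecond:
    return MetresPerSecond(d / t)

v = speed(Metres(100.0), Seconds(9.58))
print(round(v, 2))  # 10.44
```

As the comment notes, nothing stops someone aliasing `int` to `Int` to get around it; enforcement would have to come from the language itself.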

I've personally had to build all the stuff above to facilitate the scale of
our system - and by scale, I mean scale of code-base; any language that makes
this stuff transparent and picks good safe default behaviours will speed up
the process of writing new code and maintaining old code.

One thing that's of note when each paradigm comes along is how there's a mass
of devs from the old paradigm that say "that's not real programming"; I think
we'll only be at the next paradigm when we have that moment.

~~~
yingliu4203
A wonderful list, with one more feature desired: seamless integration with
databases, especially relational DBs.

------
jrawlings
Bayesian programming anyone?

~~~
valzam
I think most programmers have very strong prior(opinion)s already...

------
jerf
I just want to reply to almost every suggestion on this page that we've tried
that, numerous times. Most suggestions here need to explain not why the
suggestion is wonderful, but why previous efforts to implement it didn't take
the world by storm if it was so wonderful.

That extends to probably about half of the linked paper, as well, though I
daresay it fares better than most I've seen.

I think we've mostly mined out the thoughts of the past, running on machines
essentially simulating the machines of the past. All the bright ideas people
had fifty years ago but had no prayer of getting to work on the hardware of
the time have now had a chance to be tried out. While I'm sure there is some path
dependency in our current best practices, I personally think it's easy to
overestimate it. There's a lot of languages running around, and if something
was really 10 times better, I think we'd notice.

I think the next step up is going to require new thoughts, mostly driven by
taking a fresh look at the hardware we have now and reconsidering the way we
currently lay down a ~1970s paradigm on top of whatever hardware we get. It
may also require some new hardware to be developed, as we stall out at current
clock speeds. I don't know exactly what it will look like, but, well, first of
all I see multiple languages, not one grand one, but languages like...

... something natively built to deal with heterogeneous execution on GPU,
CPU, slow-but-efficient-CPU, FPGA, and so on.

... something that successfully wraps up the sort of work that Haskell has
been pioneering and moves us beyond working with ints and strings to something
higher; an integrated view of functors and other higher structures, probably
wrapped up behind friendly names, and using that to work with parallelism
appropriately. There's a sense in which it's weird that it's 2019 and I'm
still getting slabs of 32 bits to hold numbers, and the semantics of this
has hardly changed.

... Rust arguably already fits in here, but more efforts to pull down work
done on the fringes of safety into something that people can actually use to
build high-performance, yet safe, systems. These languages may not be 10x
development speed improvements at the local level, but they can accelerate
large projects by making a lot of things safe to do that right now are
terrifying on a large code base. In general I still feel there's a lot of work
to be done in the field of large programs. A lot of up-and-coming programming
languages are still _way_ too focused on making individual lines powerful,
because that's _cool_ , or because it's easy to evaluate how a new proposed
feature does or does not make a 3-statement snippet now become one, but
neglect how to make big programs hold together safely and without the whole
thing calcifying to the point you can't move it forward as time goes on.

... as a specific case, languages or libraries designed to deal with the
burgeoning field of "what can we safely do over a network", via CRDTs,
lattices, and other safe mechanisms, integrated into the entire language's
outlook rather than being a bodged-on addition.

... languages like Jai that detach the operational semantics of a data type
from its storage, so that I can create an array of structs and just tell that
array to actually be a struct of arrays, and maybe compress the array data if
I'm willing to guarantee I'm always going to go sequentially across this
array, etc., or tell it my access pattern so it can store the array optimally,
using our modern hardware more efficiently than pretending it's still 1970 and
memory accesses are all the same cost. Maybe throw in some cache awareness, or
a pervasive cache-obliviousness, again at the language level if possible.
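The array-of-structs vs struct-of-arrays distinction that such a language would let you toggle declaratively can be seen in miniature (today the programmer rewrites the layout by hand, and the access code changes with it):

```python
# Same data, two layouts. A language like the one described would let
# you declare the layout separately from the access code.

# Array of structs: each particle's fields sit together.
aos = [{"x": 1.0, "y": 2.0}, {"x": 3.0, "y": 4.0}]
xs_sum_aos = sum(p["x"] for p in aos)

# Struct of arrays: each field's values sit together, which is what a
# cache line (or SIMD unit) prefers when you scan a single field.
soa = {"x": [1.0, 3.0], "y": [2.0, 4.0]}
xs_sum_soa = sum(soa["x"])

assert xs_sum_aos == xs_sum_soa == 4.0
```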

One thing I do not see, for what it's worth, is an increase in the declarative
nature of programming languages. I don't particularly believe in declarative
languages anyhow, but a lot of these things involve giving the programmers the
ability to say more than current languages let them say, not less.

Many of these can work together harmoniously; safety, "something something
CRDTs lattices", and data structures that integrate higher-order safety
characteristics like functor and monad probably all go together pretty well. A
language to address heterogeneity in computation might just work well on a
network computer, too.

