Next-Paradigm Programming Languages: What Will They Look Like? (arxiv.org)
217 points by furcyd 19 days ago | 226 comments



I want a language that is designed alongside an editor/IDE. I want to stop putting comments in my code. I want it to be first-class for my code to be in the left pane and my comments to be in the right pane, always bound together with anchors but always separate so my comments don't have to adhere to the limitations of the code's text area.

And then I want to put rich things into my comments section like graphics and tables and such. And I don't want to have to write table-like Markdown or a shorthand that converts into graphics. I want all the WYSIWYG of Word or PowerPoint or Inkscape just right there feeling 100% natural and native.

Jupyter is pretty darn close to this. And I like it a lot. But I feel like I'm giving up a lot to make a notebook. I want this in a much more heavy lifting format.


> WYSIWYG

WYSIWYG is what you get, but not what you want. Anyone who has done serious work in Word, or some other WYSIWYG editor, has got to the point where the document becomes FUBAR, and you have to copy out the plain text and start again with a new document.

That is not acceptable for a programming environment. Copy and paste is the enemy of correctness.


> Anyone who has done serious work in Word, or some other WYSIWYG editor, has got to the point where the document becomes FUBAR, and you have to copy out the plain text and start again with a new document

I'm sorry, but speaking from experience, that is just not true in all cases. Granted, it takes a certain kind of methodical work (and more know-how of the tool than would be reasonable) to work around many of the problems, but if you put in the time to learn, Word (and similar software) is more than adequate for large-scale editing work.

It's no wonder the absolute majority of people use it; they learn how to avoid the problems (or live with them), and there is near-zero incentive to personally switch. Most of the cost is paid by the next batch of people who have to learn it.

I'll also agree that it's not what I want for comments and that better models exist for 90% of all text-editing tasks out there. But it does work.


What do you mean by "large-scale editing work"? Are you talking about the size of the document? The duration of the editing lifecycle? The number of participating authors and authoring systems? The full editing/consumption lifecycle for a long lived document that goes through many revisions and publications for several decades?

Things I've seen fail with Word in my career that would be unacceptable for programming:

- Relatively simple docs being eaten alive, as described in the grandparent comment, after several dozen authors make small edits over the course of a few weeks.

- Complete change-tracking disasters when attempting to merge concurrent edits to different parts of a document by several authors.

- Docs where the WYSIWYG behavior completely changed when opened 5-10 years later by newer versions of Word. Attempting to import/export/save as to rescue it involves much of the same cut-paste and hand recovery described previously.

- Docs destroyed by international collaboration, where different locales of each editor seem to accumulate contradictions in the saved document over time.

- Inconsistent WYSIWYG between Word on Windows and Word on Mac OS.

I don't think it is responsible to promote a programming language that does not have a tooling-independent and platform-independent textual representation that carries ALL program semantics as manipulable, printable character sequences. That promotes disposable code and prevents real programming in the large.

(edited to try to rescue bullet list)


>> WYSIWYG is what you get, but not what you want.

Nothing stops you from allowing the user to access the document itself to clean up the automatically-created mess.

It should be more like the XML/visual editing modes of .Net platform UIs.


multiple projection modes.

'canonical view' is raw markdown/tex/whatever with text-editing controls. WYSIWYG view is presentation style with rich text editing controls.

this paradigm could carry over to the code itself with projections omitting/adding type annotations, comments, in-lined documentation, maybe even selectively expanding/eliding parts of the code itself, e.g. orthogonal aspects like logging


No, you don't want this. Every new generation of programmers since the 1960s talks about this, but nobody to date has actually written any useful program using pictures and prose.

Sorry. Code is code, that's what your job entails and you can't make programming into something it isn't.


To be fair, Waterluvian was talking about including pictures and tables and such in comments, while still writing actual code. It's probably still a horrible idea, but for much more subtle (and less certain) reasons than "waaah, I don't want to have to do programming in order to make a program".


Exactly this. Every sufficiently complex program that refers to graphs DOES have those graphs. They are either in a separate document, or in the programmer's head, or (poorly) added to the source code. There are plugins (e.g. for Visual Studio) that will render images referred to by file paths in the editor. For complex graph-related stuff I imagine that can be pretty neat.


Would it be "a horrible idea" to include diagrams in code that implements a graph algorithm, or images in image-processing code?


I would love to be able to include pictures and diagrams in comments. If it was in a text format like HTML or JSON it would source control nicely. The IDE could then display and allow editing them.


Slap a text editor into doxygen documentation and you'd basically have this.


Python+Sphinx lets you get that (and a lot more).


I've heard this said about every new generation of programming techniques. It's true about every technique for which tooling isn't yet widely available, right up until somebody writes the necessary tools.

You could replace the subject of your comment with, say, "automatic garbage collection", and it would have been equally true ("that's just what your job entails!"), right up until GC made its way into the first mainstream languages, and it very quickly became a tool used by almost everybody.


GC is a much older technique than it looks, though: it's described in McCarthy's 1959 Lisp paper. https://homepages.inf.ed.ac.uk/wadler/papers/papers-we-love/...

We spent about 3 decades arguing back and forth about performance until the wave of bytecode+GC languages took over and accepted those tradeoffs: Perl, Python, Java, Javascript.

Holdouts remain in the high performance, embedded, and deterministic areas of software.


Code as live objects is almost as old (Lisp, Smalltalk)


The point is that the idea of drawing diagrams instead of writing code (and/or integrating documentation text right into the source code) is a really old idea.

The idea is older than structured programming, older than the C programming language, older than the idea of object-orientation. It's probably even older than McCarthy's LISP.

Safe to say that if it hasn't caught on in all this time, then it never will.


People developing safety critical embedded systems using ANSYS’ SCADE Suite would disagree about nobody having written useful programs using pictures and prose.


Code is code, but sometimes pages of code implement a simple mathematical equation or a process diagram. I would like to see it in addition to the code.


Add a plugin to your IDE, to display embedded graphics from comments.


Isn't this more an IDE problem than a language problem?


That isn't what he was saying, nor is what you are saying true. There isn't an intrinsic reason for code to be a text stream. There are environments that are in broad use where it is not, such as Simulink, LabView, and some of the big game engines today.

It's easy to be parochial about this when Unix like environments force everyone into a lowest common denominator of text streams, but don't take the limitations and bad ideas of Unix as laws of the universe.

Edit:

Summarizing the list of non-text mode programming languages in general use that people have mentioned in this thread:

ANSYS SCADE

Sikuli

Unity

LabView

Simulink

max/msp

SynthMaker

Kyma

LUSTRE


plenty of people are writing programs which get actual usage in visual languages - e.g. with max/msp, or the old SynthMaker, Kyma... as well as some avionics software written in e.g. LUSTRE...


More recently, Unreal Engine's Blueprints too (and UE's Kismet before that). 3D modelling tools and texturing tools also often use visual languages for shaders, and, as you've mentioned, in audio it's also relatively common. Lustre/SCADE is my favourite example though (and is used in a lot more than just avionics too, according to the ANSYS marketing material: transportation, energy, etc.)


Unity definitely and effectively mixes pictures, audio and code, and has a large chunk of the mobile app market. Its automatic asset import is very good and fully customizable.


i guess i’m not a programmer copying all these pictures, diagrams, and comments with arrows anchored to the relevant code portions. (yes, i am able and do do this.)


This is not true. Sikuli was a super cool language, and it was fun to write with. I'm sorry it hasn't received more attention.


I want something like Rap Genius but in my editor.


In the 80s at FORTH, Inc. we had 'shadow blocks'. Forth code would go in one block and comments in the corresponding shadow block, and you'd flip back and forth with a keystroke. (We only had 80x25 text mode for coding, iirc. Your graphics would've been out of reach.)

I don't remember how or if people kept the shadow comments lined up with code (I was only there for one summer in high school), but you could extend the editor with ease, so...


> We only had 80x25 text mode for coding, iirc.

Traditionally a block was 64x16 I think (1KB).


Right -- I mean that was the only text mode the hardware supported, unless you had an EGA adapter which could squeeze in 80x43. Side-by-side blocks were right out.


This sounds like literate programming of some kind. A bit more advanced than Knuth's original WEB implementation, but certainly matching the idea that your code co-exists with discussion and diagram and so forth.

I used noweb at a previous employer, and it certainly was difficult to argue with the results; beautiful, readable documentation that existed as a first-class product, alongside the code it was describing. The quality of the software we generated was the highest I've ever seen in terms of things like deadlines and budgets and bugs and testing.


The code-to-IDE transformation described here could be trivial. Right-pane only shows comments, which are neatly folded away in the left-pane. Doesn't really scream "next level" to me but I'd definitely use it.


That's what it sounds like. So, it's pure text (and easy to version control), and you can compile it two ways: Once for the machine to execute, once for humans to read.


You could probably achieve this with an IDE that understands your rich text comments (maybe code them in JSON or base64 binary). To me it seems like a pure presentation problem.
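
A purely hypothetical sketch of what that could look like: structured data behind an invented @rich marker inside an ordinary Python comment, which a cooperating IDE could render as a table while every other tool just sees a comment.

  # @rich {"render": "table",
  #        "headers": ["state", "meaning"],
  #        "rows": [[0, "idle"], [1, "busy"]]}
  def handle(state):
      ...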


This. It is already happening, for example with annotations in some PHP frameworks. The annotations are created as comments in some specified format, and then the framework uses reflection to read and parse those comments. Some IDEs can also parse those comments, making it somewhat similar to what GP suggested, although in a very primitive and simplified form.


People forget ASCII is just a presentation mode. There's no reason for 0x41 to be an 'A'.


> I want a language that is designed alongside an editor/IDE.

Why would this require a new language? And why would it require a new editor?

For one thing, at a certain scale, code and documentation are examined in a browser more often than in an editor.

Your requirements seem too focused on insignificant details and not enough on the big picture.


That's only because the browser is the lowest common denominator for rendering. We could all be using emacs if the rest of the planet was comfortable with lisp and orgmode.


Apostate. Vim is the one and only true editor we all need.


TempleOS can put graphics and formatted text into your source code; it does a lot of other neat things I wish modern computers did.


Any language should be, if not developed by the tool developer, at least developed with tooling in mind.

Compilation for an editor (partial/incremental) is a very different beast from "normal" compilation, and any compiler that is first completed for regular compilation and then tries to address tooling as an afterthought will ultimately never be useful in tools.

I realize writing a new compiler is hard enough in itself, but projects like Roslyn show it's at least feasible to make a compiler-as-a-service with both regular compilation and tooling in mind.


> Any language should be, if not developed by the tool developer, at least developed with tooling in mind.

I completely agree. A few things I think would help with this:

1) Provide a standard "machine readable" concrete syntax for the language, alongside any "human readable" syntax. S-expressions would work well for this; if someone prefers JSON or XML then I won't argue. The implementation (compiler or interpreter) should have a mode which converts the human syntax to the machine syntax. The compiler/interpreter should handle inputs in either format. This avoids tools having to parse human-readable formats (or worse, run arbitrary preprocessors, etc.), and makes it easier to feed auto-generated or modified code into the system (e.g. for type checking, etc.). Machine-readable formats are also more robust, since everything's delimited in a standard way, so unknown things (e.g. new language constructs) won't mess up the parsing of everything else, making tooling easier (e.g. linting, documentation generators, etc.). Note that Lisps and Schemes have this already, by virtue of using a machine-readable language as their human-readable language. Others can maybe just serialise their parser's output (as long as it's concrete, not abstract); see the sketch after this list.

2) Allow arbitrary metadata to be attached to pieces of the syntax tree (in both formats). This can be used for comments, but also for source location (e.g. if it's been translated from the human-readable format, or maybe even transpiled from something else). The parent's desire for graphics, etc. in comments could be handled by such annotations, since they're data not just ignored bytes, so they could have e.g. entire Scribble documents attached. Tooling can use this for any data they like, e.g. hints for static analysers, dependency information, required language version/features, blame information, optimisation hints, etc. I think Clojure already allows this, but I've not used Clojure.

3) Keep context/configuration close to the code, or at least allow it to be. For example, if the meaning of some code changes depending on language version, it should be possible to specify the language version as part of that code. That way, tooling is able to obtain and use that information. This could use the annotations from (2). The worst case is when crucial information (like language version, or which arbitrary program to run as a preprocessor, etc.) is kept in some external configuration file, in an arbitrary location, in some bespoke file format, such that no other tooling can make sense of the code (I'm looking at you, Cabal!). Even if we want separate config files, the tools which handle them should be able to resolve and propagate that information into the code, e.g. we could ask `tool annotate src/foo.lang` to get a copy of `src/foo.lang` with any relevant context/metadata from the config file appended as a file header. IDEs can use this to get more info about the files they're editing, and propagate that info into the snippets they send into the implementation for type-checking, etc.
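
The sketch promised in point (1), using plain Python: the ast module can already serialise the parser's output for other tools to consume, albeit as an abstract rather than concrete tree (ast.dump's indent argument needs Python 3.9+):

  import ast

  # Serialise the parser's output into a machine-readable form that
  # linters, doc generators, etc. can consume without reimplementing
  # a parser for the human-readable syntax.
  tree = ast.parse("total = price * quantity")
  print(ast.dump(tree, indent=2))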


smalltalk, in particular pharo smalltalk, is doing exactly that. i don't think it supports including graphics in comments, but there is no reason that couldn't be changed.

to top it off, it has had the potential for that since its inception, so it's not exactly a new paradigm


I'm also becoming a fan of Pharo the more I use it. It just feels so good to use. My only complaint about it is that I really miss ctors with arguments.

I don't like init functions; an object should know from its inception the basic things it needs to in order to function, and I think it's a great miss that there is no real way to force this.


i am not familiar with actors, but i'll read up on that. however your second point sounds kind of logical, so i wonder what the problem is here. can you give an example of how an init function is used to do something that an object should already know?


> actors

Sorry, I should've been clearer there. I meant constructors.

> can you give an example of how an init function is used to do something that an object should already know?

The only way to create a new instance of an object in Pharo is by sending the new message to the class, for example with

  aDate := Date new
However, at that point the object does not yet know what its time offset from the epoch is, which is absolutely vital when trying to use a date. It needs to be given that information at construction. How Pharo gets around this is by essentially creating class messages which hand you back the instantiated object, such as with

  aDate := Date year: 2015 month: 12 day: 31
However, this is just syntactic sugar (as far as I can tell, please someone correct me if I'm wrong) for a class message calling new and setting the properties on the object it just created. It is a fairly clean way to create objects, in general.

However, the method 'new' can still be used (there are no access modifiers in Pharo), so it's still possible to create an object without ensuring it's initialized properly. I have yet to find a good way around this.


> it's still possible to create an object without ensuring it's initialized properly. I have yet to find a good way around this.

Implement the class's new method with an implementation that just throws an UnsupportedOperation exception?


oh, i get it, so there was actually only one point: constructors with arguments instead of init functions that wrap around new but don't disable it.

and yes i agree, it should be possible to prevent the creation of uninitialized objects.


> I want all the WYSIWYG of Word or PowerPoint or Inkscape just right there feeling 100% natural and native.

There's a thing for everyone, but that dystopian stuff you are describing here I cannot fathom even in my worst nightmares. The text file is a very powerful abstraction for programming, and I hope it will never die.


text is terrible for communicating high-level structure and data flow in programs, as well as asynchronous and parallel programming paradigms. it's clear that, if programming is a way of thinking as much as a way of telling a computer what to do, then it is well behind nearly every other field in terms of visualizing and seeing and touching the hidden processes.


I like this idea.

It could take literate programming [1] to the next level. Existing tools like docco [2] can output beautiful documentation [3] with comments set alongside code. The language/editor described above would allow us to input code in the same side-by-side format.

[1]: https://en.m.wikipedia.org/wiki/Literate_programming

[2]: https://github.com/jashkenas/docco

[3]: https://underscorejs.org/docs/underscore.html


Markdown literate programming that doesn't break the syntax of any programming language.

Literate programming, with programming first and literature second.

The main purpose of this comment-area markup method is to live-preview directly in the code editor's preview panel, without exporting or any preprocessing.

Just add the line-comment character of the programming language before each line of Markdown.

In code comments, you can draw flowcharts and task lists, display data visualizations, etc.

The method is to add extension instructions in the comment area of any programming language: Markdown, plus directives for manually evaluating code, live-evaluating code, printing results, displaying data visualizations, and so on. When previewing or converting formats, you only need a simple preprocessing step: delete the line-comment characters with a regular expression, for example: sed 's/^;//' x.clj
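
The same trick sketched in Python (hypothetical file x.py, whose line-comment character is #):

  # # Data pipeline
  # A *Markdown* overview that the editor's preview panel renders live.
  # - [x] parse input
  # - [ ] visualize results
  def parse(line):
      return line.split(",")

Then sed 's/^#//' x.py deletes the comment markers, leaving Markdown plus code for any external converter.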

Note:

the line-comment character of Clojure (Lisp) is ;

the line-comment characters of the current file type can be obtained from the editor's API.

When we edit the code, we can preview the effect in real time. Editing literate code gives you a live preview panel like most Markdown editors.

Advantages:

fast, live, simple, no interference.

It doesn't break the syntax of any programming language; you can compile directly. The comment-area markup method can be applied to any programming language and any markup language (including Org, rst, asciidoc, etc.), which is its greatest advantage.

You only need a single line of code to delete the line-comment characters using regular expressions, and then you can use any Markdown parser or converter.

It works in any code editor that supports Markdown live preview, allowing the source code of any programming language to become rich text in real time. In the code's comment area, you can use Markdown to draw flowcharts, tables, and task lists, and display images in the live preview panel, enhancing the readability of your code.

If you extend the Markdown tags, you can implement eval-code, print-result, data-visualization and other instruction tags, to achieve live programming and live testing.

When writing (or reading, or refactoring) code files, you can modify and live-preview directly in the editor without exporting or any preprocessing.

Reliable: maximum code accuracy is guaranteed, and markup-language errors do not affect the code.

It doesn't interfere with anyone reading the code. Markdown is simple, so the lack of syntax highlighting doesn't have much effect on writing or reading. And having a gray comment area doesn't hurt reading the code, especially for people who don't understand the markup language. The strict distinction between Markdown and code, plus the gray comment area, reduces the amount of noise in the source file, which is conducive to reading code.

https://github.com/linpengcheng/PurefunctionPipelineDataflow...


> I want all the WYSIWYG of Word or PowerPoint or Inkscape just right there feeling 100% natural and native.

This specific bit reminds me of the elegant UX implemented in Typora. [1]

[1]: https://typora.io/


> I want a language that is designed alongside an editor/IDE.

Check out Luna

https://www.luna-lang.org/


Nice ideas, but you might be better off with a folding editor or IDE where you can hide very long comment blocks.

If you haven't tried a literate programming language, you might like it. You basically write a document, with figures and so on in it, and bits of executable code.

BTW, most of my coding in my last job was in Jupyter notebooks (to use cloud GPU compute). Really not too bad for short ML scripts when you can still put slow-changing library code you write in separate files. That said, not a great environment. I prefer configuring iTerm2 or xterm so I can get matplotlib plots to appear inline in SSH sessions.


PLT Racket lets you use images and other rich objects in their editor, and provides a framework for rich editors which that editor is written in. So there are pieces there.

Block oriented Forths used to use "shadow blocks" for documentation. You had a block that was code, and a fixed offset to a corresponding block that was documentation, with a keybinding to switch between them.

Various tidbits of this have been done in Smalltalk (of course).

But if you have already invested in a world of tool chains that assume ASCII or Unicode text streams as source code representation, then it's hard to get out of it.


I've yet to see anybody that really bothers writing comments or documentation, and, more importantly, keeps it updated to match reality. This is only getting worse these days, where even major libraries and frameworks from Fortune 50 companies barely have autogen'd javadoc-style information available.


You can do that, for example, with Haiku, where the comments would be saved as an extended attribute. Sadly you would lose them if you moved the sources to a filesystem that isn't extended-attribute friendly. On Haiku, some editors already use this functionality to save the cursor position, window size and position, current selection, etc. as attributes.


Racket isn't perfect but it's the best example I know of. In DrRacket you can have image literals.


CNC programming with CAM software is very heavily visually oriented. I don't usually touch a single line of g-code other than to occasionally check certain things.

Your comment also reminded me of Godot's editor and GDScript. It's fairly domain-specific (though the editor is actually a Godot game). Everything's designed to work with GDScript. The language and the editor are designed to work seamlessly together in much the way you describe.


Isn't this a good description of stuff like Pharo?


The future of programming languages is and has always been to simply reinvent smalltalk.

Seriously, read the original "Design Principles Behind Smalltalk" [0] and it still sounds very fresh, even radical today.

[0] https://www.cs.virginia.edu/~evans/cs655/readings/smalltalk....


Particularly GToolkit which is built on top of Pharo.


Yes. Everyone should check it out:

https://pharo.org/

They have a pretty friendly community too.


I don't think this is a good idea, and I don't think it will ever happen on a large scale, simply because programmers use different editors for valid reasons.

It's a dealbreaker if everybody working on the same project has to use the same editor.

Development practices that seem suboptimal for the individual are often optimal for teams. And teams develop all the major pieces of software we use.


Using the same editor is not implied though. Only a common format for "rich" comments that is supported by a sufficiently large subset of editors. This could be as simple as using the same markdown/HTML/whatever comments as before but with WYSIWYG support in the editors.


One example of a project that is sort-of heading that direction is Jupytext

https://github.com/mwouts/jupytext


I'm working on something similar but way different: a high-level REPL with a UI editor enabling you to rapidly develop over multiple levels of abstraction that end up doing token translations to existing or provided custom implementations. You'll basically define data, define flow and perspectives on data, and that's it.


Is there anything in Mathematica/the Wolfram language that falls short of what you want?


The pricetag


Projectional editing?

https://en.wikipedia.org/wiki/Structure_editor

I've seen that demo from JetBrains MPS, looks good. Hopefully it gets wide usage.


Why not use git? As in: every significant code section which deserves a comment gets committed and commented freely in the git commit message.

Then what you need is just a git blame(?) to visualize the current code section's comments.

Would this work?
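
For the visualization half, git already has real flags that come close (the file and function names here are invented, and :funcname matching depends on git's diff driver config):

  git log -L :parse_header:src/reader.py   # commits and messages for one function's history
  git blame -L 40,60 src/reader.py         # which commit last touched each line in a range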


That would require a commit to be localised to a block of code, and even atomic commits can be split across multiple files and modules.

Plus, you would end up in a conflict between recording history and recording the present. What happens if you replace a complex method call across a codebase? The comments should explain the call, but the commit message needs to record how the call changed, and why - which you probably don't care about when coding.


I really like the idea of having better options for commenting right by your code.

I do believe a lot of code would benefit from better built-in documentation, which is essentially what this is.

I will think about this some more.


> And then I want to put rich things into my comments section like graphics and tables and such.

Swift Playgrounds has rich comments, but they are still inlined with code.


For R programming, RStudio has a notebook-type format within the IDE called `Rmarkdown` that's pretty popular in academia.


Why don't you just use Java + IntelliJ or one of their sister IDEs?


I think programming languages today focus too much on features, and then they all look like yet another Algol variation with a couple of features borrowed from elsewhere.

I don't think people these days try to understand what the building blocks of languages and computation are.

I'd wager the true next generation programming languages will be ultra minimalistic languages, retaining only the essentials while not losing on legibility.

Lisp, for example, is still not minimal, as it has 'special forms', i.e. hacks to make conditionals work. The unintuitive prefix notation also doesn't help.

Also not many languages allow people to explore true powers of deeper stack modifications by delimited continuations. Using them the users would be able to create their own looping and threading constructs as well as custom exceptions and iterator mechanisms and effect handlers.
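
For contrast, here is a Python sketch of what a backtracking search looks like when it has to be encoded as explicit enumeration, because the language offers no continuation-style control to build a direct-style search construct from:

  from itertools import product

  # Without deeper control over the stack, backtracking is encoded as
  # exhaustive enumeration; delimited continuations would let a library
  # offer the same search in direct style.
  def pythagorean_triples(limit):
      for x, y, z in product(range(1, limit), repeat=3):
          if x * x + y * y == z * z:
              yield (x, y, z)

  print(next(pythagorean_triples(20)))  # (3, 4, 5)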

It's important to realize that the economic incentives for truly innovative languages are bleak. Academia is stuck in incrementalism. The private sector wouldn't pay for it, and the benefits would only be collected by cloud vendors, adding yet more trillions to their valuations.

Why bother.


> Also not many languages allow people to explore true powers of deeper stack modifications by delimited continuations. Using them the users would be able to create their own looping and threading constructs as well as custom exceptions and iterator mechanisms and effect handlers.

The concept of a "programmable" programming language is intuitively appealing, but in practice turns out to not be so helpful for actually shipping software. Everybody builds their own, incompatible versions of common utilities. This is why languages like Lisp or Forth are great for exploratory hacking with small teams but are very rarely used these days for larger scale development and lag far behind more mainstream languages in tooling.

What you really want is a tasteful "omakase" language design that gives you a carefully selected menu of building blocks that average developers can use to be productive. Languages like Kotlin and Swift are a good example of this. They cherry pick a lot of the good ideas from academic functional programming without going down the Scala/Haskell rabbit hole. Golang takes this to an extreme by providing native typed containers without exposing generics to the programmer.

The success of a language these days depends at least as much on the library ecosystem and tooling as it does on the language itself.


> programmable programming language ... appealing ... not so helpful

This has also been my observation, and that of many actual Lisp hackers I talked to. It also mirrors the experience the developers of Smalltalk documented: at first they had ultimate flexibility, every object could in effect re-define the language completely. Not so good.

On the other hand, we also seem to be stuck a bit in a local maximum of expressiveness that, to many including myself, seems obviously not expressive enough.

What if ... the idea of a programmable programming language is actually good, but the implementations have not been as useful as we thought?

I think what is missing is a good meta-model and a good "language library", for want of a better term.

With most programmable programming languages, you are basically given the "it can do everything, now have fun" treatment. So everyone has some fun, implements "their own, incompatible versions of common utilities" then realises this isn't quite as useful as it was thought to be and gives up.

Going back to OO, I would claim that most OO "languages" by themselves are somewhat useless. What makes them incredibly useful is their class libraries. It is of course the languages that make these class libraries possible, but that's a second-order effect.

In fact, Smalltalkers have the saying "Talk small, but carry a big class library" (repurposing "speak softly and carry a big stick, you will go far.").

For me, the most useful meta-model is the analysis of software into components, connectors and styles that the software architecture people came up with. It appears to capture most of the variations in how we build software and has a useful, empirically built taxonomy.

In this meta-model, the connectors pretty much define styles, so the "language library" I mentioned above becomes a "connector library" in actual usable artefacts. Interestingly, existing languages seem to be describable as a set of connectors, and the connectors are very much related, so this does seem to solve the problem of lack of guidance.


This makes sense. The latest C++ standard, for example, makes C++ a great language, but without a great standard library built on top (the current STL is quite clunky), C++ is much more inconvenient to use than, say, Python.


As cageface notes - it's not that we can't "just use Lisp" - it's that the metaprogramming leverage added by those systems isn't the form of leverage that helps with shipping. You can achieve "good-enough" metaprogramming with an external code generator and customized checks in the build process. The lasting power of the C preprocessor and its crude lexical macros is a testament to how far you can get with this "worse is better" level of expressiveness.

What production teams focus on instead tends to be the meat and potatoes aspects of libraries, documentation, build time, and debug time. A new user of Rust who writes C-like code can still benefit; they'll have to learn some things while fighting the borrow checker, and they may not have exactly the libraries they need (which can often be the showstopper), but their debug time will go down and their builds are likely to get simpler, if not faster.

It is true that we could use more syntax that explicitly deals with concurrency, but that's also the thing that we have little agreement on.


What good metaprogramming gives you is that a lot of language features can become library features. A good example would be Julia's differential equation support or its XLA support. A language like Swift needs to modify its compiler to do similar things.


This is often not a good thing. You now have to version the library separately from the language itself and deal with dependency issues. Also you get competing and incompatible implementations of things everyone needs. This is exactly why people bag on JS for needing 600 node modules to build anything.


Having to handle library versions and long library dependencies isn't a result of a programmable language; it's the result of having better tools for sharing and integrating code without having to reinvent and independently maintain everything for every tool, and that's generally a good thing (even if it leaves extra steps to solve compared with having everything imaginable already in the base language).

And incompatible implementations are not really exclusive to metaprogramming either. Pytorch, Tensorflow, Numpy and the Python standard libraries are not compatible by default; each has to deliberately create (or not) tools to convert from one type to the other (or build entirely on top of the other).

You can argue about the Lisp Curse, in which a lower barrier to entry for making your own tool makes it easier for everyone to create their own "perfect" one (which is flawed/lacking to everyone else), while languages in which the sunk cost is higher make people gravitate towards something already created and improve it for their own use case. That's definitely true, but since that's something cultural, it can probably be handled in some way (for example by deciding on global common interfaces for interoperability, blessing packages for each domain, and promoting the most popular extensions of the language to official language features after enough deliberation).


There is some active research in academia, and there is also the factor that veteran programmers almost all develop an itch they would love to scratch. Text editors, programming languages, debuggers, terminals...

I know that if I had a few more hours a day I would try to write yet another of these things.

(Where is my cross-platform, OSX-Linux-Windows one-file text editor that just has smart indent and syntax highlighting for python?)


> (Where is my cross-platform, OSX-Linux-Windows one-file text editor that just has smart indent and syntax highlighting for python?)

I know people are gonna protest this suggestion vociferously, but Emacs ticks all these boxes (except that it works for multiple files), with almost no configuration. Admittedly, getting auto-completion to work cross-platform for various languages is still quite tricky.


By one-file, I mean one executable file. Also, Emacs does not have the defaults most people have expected since the '90s.


>> (Where is my cross-platform, OSX-Linux-Windows one-file text editor that just has smart indent and syntax highlighting for python?)

Just use vim? I mean, if the complexity of the projects you work with requires access to only a single file, you might as well use _any_ editor that supports automatic syntax highlighting.


This got me started on my own editor. https://news.ycombinator.com/item?id=14046446


Sublime Text 3? I don’t think it’s quite “one-file” but it gets pretty close.


PyCharm.


This is what I am using for dev. Does not qualify as a light one-file app.


> I'd wager the true next generation programming languages will be ultra minimalistic languages, retaining only the essentials while not losing on legibility.

This is a fashionable idea, but I don't like it. Minimalistic languages rely on encodings, which are always far more expensive than using native features. Minimalism is fine in theoretical/academic settings -- fewer proofs to worry about -- but it's not great when performance matters.


Each time I come back to C# I find more and more 'decorations' etc. that qualify behaviour of code in one way or another, and it's getting to the point that I feel I need a reference book on the desk as I try to parse some source. These features might make the language more powerful for experts, but they make it much more difficult for devs that have to drop in and out.


If you have a minimalistic language you have a similar problem. Instead of learning language features you will need to learn libraries. You cannot run away from complexity. The advantage of language features over libraries will always be cost.


> will need to learn libraries

which is easier for me personally, because the calls to the libraries are generally very much in keeping with the core language syntax, whereas the decorations can be very much a framework specific deviation. I can generally easily identify calls to libraries, whereas I cannot guess easily at the consequence of a particular decoration I have never encountered before.


Check out John Shutt's Kernel Lisp; it uses some clever constructs to reduce the number of special forms needed, and it also supports delimited continuations.


A few advancements that could be very good:

* Moving towards representing code on disk as an abstract syntax tree, so that anyone can edit with whatever syntax they prefer. Syntax becomes virtually irrelevant.

* The above point opens the possibility to represent code as visual graphs (even to code within virtual reality). Graphs and visual programming are useful, but to be able to toggle between textual and graph representation gives you the best of both worlds.

* Smarter compilers. For example: easy-to-use compile-time constraints. Think about the ability to specify something like the minimum value an integer can be set to. This could allow the compiler to produce compile-time errors for things like index-out-of-bounds errors, with zero runtime overhead. More complex constraints could be user defined (see the sketch after this list). The mindset of programming could move to thinking about how you can do as much as possible at compile time.

* The ability to embed documentation alongside code. For most codebases the ability to understand the code is part of the value of the code. Comments don't go far enough, particularly when working with visual concepts. Being able to embed readily accessible documentation within the code could help dramatically.
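
The sketch promised in the constraints bullet, in Python with invented names; a compiler with value-range analysis would reject the bad call statically, while Python can only check when the object is built:

  from dataclasses import dataclass

  # Hypothetical refinement-style wrapper: the allowed range travels with
  # the value, so a smart compiler could check index arithmetic against
  # it at compile time, with zero runtime overhead.
  @dataclass(frozen=True)
  class BoundedInt:
      value: int
      lo: int
      hi: int

      def __post_init__(self):
          if not (self.lo <= self.value <= self.hi):
              raise ValueError(f"{self.value} outside [{self.lo}, {self.hi}]")

  idx = BoundedInt(3, lo=0, hi=9)    # accepted
  # BoundedInt(12, lo=0, hi=9)       # rejected -- ideally at compile time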


> Syntax becomes virtually irrelevant

This is probably a big mistake. Code must communicate ideas. Communication requires a shared language. A language has syntax, not just semantics. A given semantics has only a few natural expressions as syntax.

Can't pair program, can't program during a talk or a stream, etc. without that shared language.

If anything, we should be looking at the reverse of what you said: reading code is more important than ever due to the scale of programs, so we should be looking into creating the most readable language ever.

Edit: I think overloadable and definable operators can be very useful (like Haskell, OCaml), and that's probably as much syntactic flexibility as we need. Even this can be abused though.


> I think overloadable and definable operators can be very useful (like Haskell, OCaml), and that's probably as much syntactic flexibility as we need. Even this can be abused though.

I remember being very concerned about the performance of some OCaml code that was overriding (* *) until I realised what it was doing :)

Not sure how to type that on HN!


Use a code block:

  (* *)
Then the asterisks show up as literals. Two leading spaces for each line in the code block.


The longer I work as a developer the more I appreciate simple, reliable tools. I don’t want it to be possible for people to have different syntaxes. I want there to be one syntax enforced by the tool. That consistency is a lot more valuable in production code, diffs etc than letting people choose tabs vs spaces.

Since I started using prettier with JS I hate switching to a language that doesn’t have it.


That criticism doesn't make sense if one syntax is completely identical to another in terms of the underlying semantics. Different syntax, same language.


I’m always sharing diffs of code, screenshots, browsing code snippets online etc. Unless every single one of these tools also allows me to configure the way the AST is rendered then this approach adds a lot of confusion for not much benefit.

The actual representation of code is just as important as the AST in many cases. If it wasn’t then we would all just code in lisp.


> Different syntax, same language.

This is probably a fiction for any reasonably complex language. Being able to view a C program in S-expression form doesn't solve any real open problems in software engineering, but it certainly creates some new ones.


In case you're unaware, your "smarter compilers" bullet point is called dependent typing. You can try it out in languages like Idris. Also, the person who wrote "The Little Schemer" recently released "The Little Typer", which is a nice intro to some of the theory behind it, very easy to follow.


Dependent type systems are one way to verify logical constraints in code, but not the only way. Look at Dafny for a good example of how checked invariants can be integrated into an ordinary imperative language. Or ESC/Java for the same approach but integrated into Java. The basic technique goes back at least to Dijkstra’s predicate transformers and nowadays the verification is mostly automatic with SMT solvers.


Those can be integrated with dependent types too - for an example look at F*.


When I researched this concept previously I did come across Idris but I did not actually study it much.

I will do that now, thank you for reminding me of it again!


You might be interested in JetBrains MPS [0], which uses a projectional editor. The same AST can be projected to different 'visualizations', e.g. as a table AND as just code.

[0] https://www.jetbrains.com/mps/


> The dream of programming language design is to bring about orders-of-magnitude productivity improvements in software development tasks.

I can understand why there is such a focus on productivity, but I think it's awful for the industry when it's our number 1 priority. Productivity is meaningless if the thing that you are producing is of poor quality.

It's why we make web apps out of 500 npm dependencies when a native app would be faster and preferred by the user. It's why we use languages where we don't have to "fight against the type system" because we prefer writing code faster to writing correct code.

Again, I understand perfectly why this is the case. It just kinda makes me sad that we treat ourselves as garden hoses spewing out as much code as possible without as much thought into making a better product.


Unfortunately users just don't care in most cases. The app stores have taught a generation of users that software should be free so the pressure is on developers to build things in the cheapest way possible. In most cases that takes real native apps off the table. I particularly blame Apple here for being so successful at commoditizing software in order to sell their hardware. This might come back to bite them now that their hardware sales are slowing though.

In markets where users are still willing to pay a premium you still see high quality native apps. Ableton Live is $750, but plenty of users are happy to pay that because you can't build Live with javascript and html.


It's not clear to me that this is actually a bad thing. Computer time is less valuable than human time, and that goes for both developers and users. For most one-off computing tasks, far better to fumble with something shoddy but free that someone dashed off in a few hours, than invest a day's wages into a product with man-years worth of polishing.

There's free food everywhere and we're complaining it's not gourmet.


>Unfortunately users just don't care in most cases.

Users almost universally don't care. This also applies to businesses, C-levels and managers. However, while they don't care, it affects them indirectly, as they end up using bloated, sub-optimal software, possibly with bugs, and sometimes significantly slower than it could be. It then affects their productivity as well.

My point is that one could at least try educating the non-IT decision makers why it's important. My team successfully convinced the CEO that a two-month refactoring project was really important and that, while it wouldn't directly affect the functionality, it would make the team more productive and allow faster implementation of new features in the future. From my experience the "technical debt" metaphor is great for explaining such things to business-oriented people. That metaphor is useful because it is compatible with other debt-related metaphors, like interest rates or refinancing. We could explain the extra time needed to implement things in a non-refactored, bloated codebase as an interest rate.

Also, the use of car-related analogies is quite effective.

My point is that while users don't care about the technological intricacies, the effects can be explained by the use of metaphors and concepts that they know well.


So then the idea should be to make a language (or environment) that allows building higher-quality software with the speed of JS... But really no one but a handful of people (like some people here and myself) cares enough about this to do something about it; not companies, not end users.

People are used to very shitty software; non-tech people are just swearing at their screens in a 'we cannot do anything about it anyway' kind of way. Almost all sites and apps are broken in many ways even before looking at usability/UX (a lot of time seems to be put into frontend bling while the backend/logic is horribly broken in many ways).


It has always been interesting to me that developers hate LoC metrics but brag about writing a Twitter "clone" in a weekend.


> developers

Developers in some circles, that is; I know many corporate/enterprise devs that really pride themselves on committing (if they have version management...) and producing as many lines of code as possible. Usually Java or C# or VB#.


More code is usually a greater liability. Less developer time is usually cheaper.


If I were to write an exploratory survey blog post of Next-Paradigm languages I would cover problems that are to be solved and ways to represent those problems.

There are many alternative forms of computation, and the inability to represent a given form of computation in the host language leads to complexity and code bloat.

    * constraint programming, logical, spatial, temporal
    * lazy evaluation
    * back tracking, memoization
    * reversible
    * succinct
    * differentiable
    * anytime
    * incremental
    * probabilistic
    * lattice/crdt
    * quantum
    * sketching
    * programming by example
    * transactional
    * failure tolerant 
Now, as for how programs that embody those techniques would be represented: I don't know what those languages would be like, in either semantics or syntax.
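
To make one list item concrete: memoization is something Python happens to support natively as a library feature, which is exactly the "representable in the host language" property at stake. A minimal sketch:

  from functools import lru_cache

  # The host language caches results of pure calls for us, instead of
  # forcing every program to hand-roll its own memo table.
  @lru_cache(maxsize=None)
  def fib(n):
      return n if n < 2 else fib(n - 1) + fib(n - 2)

  print(fib(200))  # instant, thanks to the cache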


I've been trying to imagine what programming will look like in the far future, and I just can't believe that it will be fundamentally structured around typing English-like text into a text editor, with a smattering of tools to help manipulate the text and debug the resulting binary, like it is now. There has to be more that computers can do to augment our brains and help us become more efficient at writing code. Gary Bernhardt's "Whole New World" talk (https://www.destroyallsoftware.com/talks/a-whole-new-world) is a good start, but it could go so much further.

Unfortunately it seems like most working programmers are deeply suspicious of new paradigms for producing programs. Understandably so, since all the "visual programming" tools up until now have been either teaching toys or unusable disasters, but I think we're limiting ourselves tremendously.

But it may not be possible to arrive at my glorious imagined future incrementally. It may take some genius just sitting down and working on this for years and producing a completely finished system to convince people of its capability.

Sorry this is a bit lofty and light on specifics. It's more a feeling that I have that it would be ridiculous if programming in 2100 looked the same as it does now, and not something I've thought about deeply.


The reason the visual languages even come up so often is because people outside of programming believe that code is the hard part. It looks scary, so if we just took that part out everyone could be a programmer.

Which makes you wonder why every single attempt to do this makes easy things trivial and everything else difficult to impossible.

Programming languages are (usually) tools created to help get things done. They're not (usually) there to make your life harder, or to force you to learn something. So why does it seem like that's the starting assumption to every alternative proposed?

(NOTE: If your scope is limited, it makes sense. Unreal Engine 4's Shader Blueprints are great, not only is the scope limited but the visual approach allows you to preview each step at a glance.)


I guess it's somewhat the same complaint with math and musical notation. Some mathematical expressions are very complex and compact, but people use them since they're the best we've got.


Yeah exactly. It's not that there's no improvement to be had, it's that you're not going to get far if you assume the current system is there to make things harder.

I think part of it is, despite widespread acceptance of puns, most people seem to think English (or any other widely spoken language) is a lot less ambiguous than it really is.

There's a reason legal documents are the way they are as well. Granted in that field there is sometimes incentive to obfuscate your meaning, but I still don't think that's the reason for most of the difficulty in outsiders easily reading and writing legalese.

The fact of the matter is, expressing logic (or music) in words is hard. The more detailed you need to be, the harder it gets. That's why the example of making a sandwich comes up so often in introductions to programming[0]. If people can't express in their native language how to do something they do every day, in an unambiguous enough manner for a "computer" to perform the action, that speaks to a deeper problem than the representation being too hard.

[0]: It's not a great example, because the instructor usually is going to maliciously try to find any logic hole they can, making it feel more like a hazing ritual than a lesson to be learned. But it does come up a lot for a reason.


I hope programming advances a lot, but I'm not convinced that the medium is the interesting part. It's the semantic model that matters, and I think text has won so far because it's the best way we've come up with to express the models we have.

So maybe we'll be programming in a medium other than text, but I sure hope that's not the main advance that's made.


I consider myself a serious programmer (fond of Clojure, Haskell, writing hyper-performant graphics code, etc).

I'm also very fond of visual languages, so much so that I started collecting and cataloging all the noteworthy examples of them I came across in my research travels: https://github.com/ivanreese/visual-programming-codex

Yes, the vast majority of visual languages are uninspired and underdeveloped. But the same is true of text languages — look at Pascal or xBase or C++ templates.

Yes, it's possible to make a terrible mess in a visual language. But the same is true of text languages — we had to learn the hard way not to goto, not to do TCL style string-based metaprogramming.

When we think of the greatness of text languages, we're thinking of Idris and APL and Racket, not the Java you see on The Daily WTF.

When we think of visual languages, we should look for similar examples of greatness (modern Max/MSP is not a bad place to start), and not indulge in Blueprints From Hell schadenfreude.


There needs to be an "Assembly" of visual languages. Otherwise, we will still be limited if we code them in text languages.


could you explain this idea a bit more? it sounds interesting, but i don't quite see why "visual assembly" would be necessary. most "high level" languages are written in something lower level like C (possibly with Assembly bits), but that's an implementation detail – the semantics of C have little bearing on the semantics of the implemented language. why would it matter what makes a visual language tick under the hood?


Reasons we use text:

- It's very close to the computer's "native language"; a raw series of bytes on disk can be stored and processed efficiently, and are analogous to the binary form, which has to be a series of bytes

- It's visually compact, so you can view a large amount of complexity per screen space

- It can be created and edited purely by keyboard. I'm not an anti-mouse zealot - I often use it when I'm going back and tweaking code - but nobody can deny that the keyboard is a faster and more direct way of getting new ideas out of your head and into the computer

I also think there's value in being able to get down to the baseline and see the "raw stuff" that makes up a program. Being able to see that full definition with no layers of indirection, even if you always use layers of indirection when actually working on it.

None of these are insurmountable, but I think they'd have to all be addressed by any true graphical replacement for textual code. Personally my vote is for a more fully graphical editor that still operates on textual code underneath, which can still be read by humans on its own.


There is another factor in favor of text: it is precise in meaning and easy to output, as opposed to, say, having to "memorize an 8 bit Mario sprite and get all colors right without a single pixel out of place".


I expect programming to trend towards being more declarative. Developers will state what they want and it will be up to the interpreter/compiler to come up with the most efficient solution.
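
A trivial Python illustration of that shift from "how" to "what":

  # Imperative: spell out each step of the computation.
  evens = []
  for n in range(20):
      if n % 2 == 0:
          evens.append(n)

  # Declarative: state the desired result; the implementation picks the strategy.
  evens = [n for n in range(20) if n % 2 == 0]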


I find that giving names to things is a way to manage complexity and relationships which scales to far more complicated systems than any visual representation I'm familiar with. Maybe the inability of visual representations to cope with complexity is an advantage, in that it promotes less complex systems, but building systems that simple which still handle our world seems expensive.

I started introductory work in circuit design via a visual interface. You could drag and drop components, size them, connect them, and so on. I had to do quite a lot of work on the layout for even simple designs to appear comprehensible. There were just so many relationships, and crossing wires is so problematic that it's hard to provide high level detail or low level detail. You graduate to connecting things by names, then to a description language which is purely in terms of names.

Programming visualization struggles to remain comprehensible in view of the full complexity of the underlying system. I took a single variable and tracked every function which interacted with it, based on read or write relationships, across a single (fairly top-level) function call. The graph I generated from this fundamentally tree-like structure was unusably complex. After collapsing nodes appropriately, I could explore the graph to get some understanding, but no one I shared it with understood how to use it. I don't know if this is an argument for visual representations (since I used it to explore these relationships) or against.

Nevertheless, I see no reason why we need to draw such a distinction between a text-based specification and a visual one. We certainly can design a language-visual dual which is trivially isomorphic, so getting that system hardly requires a paradigm shift.


You give people a language with a lot of influence from mathematics like Haskell, which should be easy to write because we familiarise ourselves with mathematics from a very young age.

You give people a language which is very close to English, like Python, which is accessible to an even larger audience. Thousands of packages get written just to make it even closer to English, even more plug-and-play.

At the level of Haskell, the comfy syntax means you need to understand the logical implications of literally everything you write. At the level of Python, the comfy syntax sacrifices control over performance. Both are good syntax; both are problematic.

I'm beginning to think you can't have your cake and eat it too when it comes to languages. Maybe you'd even want to make the argument that it's a good thing that languages are inaccessible to most people.


FYI Unreal Engine's "blueprint" is a visual programming language that is neither a teaching toy nor an unusable disaster.



Numerous websites highlight similar things in text :)

And considering Blueprints spit out C++ once compiled, I'd take my chances with BP!


So is Max/MSP/Jitter.


Ostensibly, it won't look the same as it does now, but in reality it will still be hacked together with Perl.


Check out the work done by Viewpoints Research Institute.

http://www.vpri.org/writings.php

APL did away with English-like text and was moderately successful, but the high learning curve inhibited popularity.


This is an interesting discussion topic, and this text provides one viewpoint. However, as I read it, the "current paradigm" is imperative languages like C/C++, and the "next-paradigm programming language" is Datalog (the word occurs 31 times in the 9 pages of text).

What is missing are functional and declarative languages.

No explicit loops? "Heavily leverage parallelism, yet completely hide it"? Lazy evaluation? This sounds like functional languages.

"Example: An Interactive Graphical Application." describes HTML+CSS exactly.

I understand the author really loves Datalog, but the omission of other languages makes the message less effective.


> "heavily leverage parallelism, yet completely hide it"? lazy evaluation? This sound like functional languages.

I agree with your general comment, but this statement seems a bit optimistic on the functional-languages side. Lazy evaluation? Haskell does it, but most other functional languages don't.

Heavily leverage parallelism but hide it - this is often touted as an advantage of pure functional languages, but from what I've heard it has never panned out in practice, at least if I'm interpreting it correctly as automatic parallelization. It turns out that while the compiler can prove parallelizing would be safe, a lot of the time it would massively hurt performance, and the compiler can't tell when that is the case.

So, even in a language like Haskell, you still have to explicitly parallelize code, think about chunking etc. And if we stray from pure parallelism and into concurrency, all of the problems of having to do explicit synchronization come back, you just have less to worry about with no(/less) shared state.
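
For example, with GHC's Control.Parallel.Strategies the parallelism is safe but still explicit; a minimal sketch, with a hand-picked chunk size (exactly the tuning the compiler can't do for you):

    import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

    -- Safe to parallelize because it's pure, but the programmer still
    -- chooses where and how: chunks of 1000, forced with rdeepseq.
    parSum :: [Int] -> Int
    parSum xs = sum (map costly xs `using` parListChunk 1000 rdeepseq)
      where costly n = n * n + n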

Funny thing: one language that does 'leverage parallelism, but hide it' is x86_64 assembly, where the processor automatically executes (parts of) instructions in parallel, based on data dependencies and available execution ports.


The author is pushing a successor to Prolog. This is way outside the mainstream, but maybe it will work.

I suspect that future programming languages will either be garbage-collected or will have something like Rust's borrow checker. Nobody needs a new language with dangling pointers and buffer overflows. C++ is trying to retrofit ownership semantics, which is good, but has major backwards compatibility problems.

Indentation and code layout will be automatic. Either in the editor, or something like "go fmt". Nobody is going to put up with the indentation and the delimiters being out of sync. Also, the ultra long lines of functional programming have to be laid out in some standard way to be readable.

Does each language really have to have its own build and packaging system?


Here's a possible borrow checker alternative: http://aardappel.github.io/lobster/memory_management.html


For folks interested in future programming tools — be it languages, editors, IDEs, visual programming, new abstractions, what have you — I recommend looking at the https://futureofcoding.org community. The podcast has a number of great interviews with people exploring all corners of this problem space, and the Slack group is full of people actively working on these hypothetical-future tools and sharing their findings.


Things I'd like to see

- Tiered abstraction levels. I want to write high-level functional code when I can, which expands to more complex code under the hood. When I want to, I can instead manually expand the logic, perhaps the only time I need to code imperatively. Think of a project today that has high-level Haskell or Python code but implements bits in C where necessary. For the highest-level bits, things like formal verification, safe concurrency, code contracts etc. should be much simpler to use. For the low-level bits, I'd just take responsibility for correctness (memory safety, formal correctness, concurrency) myself (like "unsafe" in Rust/C#). Perhaps the number of abstraction levels should be more than 2.

- Integration of tools:

1) Today a compiler, an editor and a VCS are three different tools, and their lowest common denominator is text files. I'd like to see a system that version-controls a syntax tree for a whole project and allows semantic diffing (a rough sketch follows this list). A build server could trivially do incremental builds, moving a symbol wouldn't break history, and reliably running only the impacted tests of a 10h test suite would be possible. This doesn't necessarily mean everything needs to be one big tool - but the lowest common denominator of tools could move from UTF-8 files to a binary representation of the whole syntax tree. Having a more complex binary representation would have drawbacks but also other benefits, like trivial inclusion of a picture in a comment that can be previewed with a mouse hover in the editor, or displayed inline.

2) Better integration with documentation and issue tracking. Same as the AST representation: we need to be able to link documentation, pictures, issues etc. from code in a way that doesn't rot if we change directory paths, issue trackers and so on. A broken link to a document in a comment should be a build error like any other error.
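
To make the semantic-diffing idea concrete, here's a hypothetical Haskell sketch (all names invented for illustration) of diffing syntax trees rather than lines of text:

    -- Compare syntax trees instead of lines: a rename shows up as one
    -- structured edit instead of a page of textual +/- noise.
    data Tree = Node String [Tree] deriving (Eq, Show)

    data Edit = Replaced Tree Tree | InSubtree String [Edit] deriving Show

    diff :: Tree -> Tree -> [Edit]
    diff t1@(Node a as) t2@(Node b bs)
      | t1 == t2                         = []
      | a /= b || length as /= length bs = [Replaced t1 t2]
      | otherwise                        = [InSubtree a (concat (zipWith diff as bs))]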


An overview of a problem that I run into is the need to parse some input, validate that input, query the system based on the input, expose the intermediate stages for inspection and modification by arbitrary consumers, conditionally modify the underlying system in some way, and provide well-defined semantics around the outcome while handling failure recovery, distributing the processing into multiple, cancelable stages, and sharing the system context and other forms of memory with concurrent operations.

The right way to behave in each situation, based on the input, the state of the system, or the failures encountered, is determined by history and by many people.

A language that makes my job easier solves problems like:

1. Help me describe my system as straightforwardly as the above description.

2. Prevent me from neglecting to handle failure, and reduce the work to specify how to handle failure.

3. Prevent me from introducing logic errors, e.g. concurrency without synchronization, or passing an int where a string is expected.

3a. Help me verify that my solution does what I expect.

4. Make it easy to accumulate data and pass information through various interfaces.

5. Make it easy to extend behavior without modifying existing code.

Given that we probably spend 95% of our time plumbing information, handling failure, and reducing logic errors, an order of magnitude of productivity increase is realistic. Choosing an algorithm or data structure is less than five percent of time allocated.

Rich Hickey and Clojure seem to focus on people like me. Rust seems to focus on another subset of the challenges I face. One of the useful ideas I've encountered is that coding is a specification design process rather than a manufacturing process. We start with an ambiguous description and specify behavior in increasing detail until a computer can work with it. The article suggests that we can subordinate some of this detail to the compiler. The details that it chooses to subordinate aren't the details that disrupt my productivity.


It seems that you are describing Haskell or OCaml/F#. I don't see how Clojure would fit the bill, given that it's a dynamic language.


Those languages do indeed have a lot to offer for helping ensure safety. I attempted to describe the 'situated programs' that Rich developed Clojure to handle. I've struggled to use Haskell in these kinds of environments, maybe because it relies so much on static analysis.

The article assumes that as we get more productive at programming, our time will be dominated by tests. While that's probably true, I would consider it important in this future world to minimize what tests need to be written and run. We do this today in OCaml by making some classes of inconsistency inexpressible. We do this today in Rust by making some kinds of failure inexpressible.


> An overview of a problem that I run into is the need to parse some input, validate that input, query the system based on the input, expose the intermediate stages for inspection and modification by arbitrary consumers, conditionally modify the underlying system in some way, and provide well-defined semantics around the outcome while handling failure recovery, distributing the processing into multiple, cancelable stages, and sharing the system context and other forms of memory with concurrent operations.

This sounds like a dream for data analysis.


How about a gesture based programming language that translates to a visual one on screen? Is it so far fetched?

I've been typing programs for so many years that I think we've gone as far as we can go with text: autocomplete, live compilers, hot reload, etc.

It wouldn't be that uncomfortable, seeing as a developer spends most of their time reading/analysing and, god forbid, if the open office would allow it (!@#$%^ noise): THINKING.

Here is a thought, want to make the future of programming languages? Remove distractions.


How would you "name variables" or "refer to code blocks" without typing?

How will you search for specific things in a codebase (where did I assign this variable again?) without text?

Have you ever tried running "diff" on "graphical/flowchart" code?


Diff tools might actually be able to work better on graphical/flowchart code.

It could theoretically be easier for the tools to identify exactly what changed and present that information rather than superfluous stuff like whitespace changes.


My questions are based on actual experience using a "graphical" language [1]... I am not using it anymore, but the "theoretical" advantages you mention never actually became real for me (or my former colleagues still using it, either).

[1] https://en.wikipedia.org/wiki/WebMethods_Flow


I don't doubt that existing graphical languages have very poor tooling.

I believe that diff tools could be better if they operate on some format that represents the meaning of the code, not the details of how it's written. New tools would have to be written, so it's largely a matter of pragmatism and momentum that prevents such a thing.

FYI I googled "WebMethods Flow merge tools" and one of the more useful discussions actually featured you from 3 years ago expressing similar sentiment: https://news.ycombinator.com/item?id=12106945


I am old so I tend to repeat myself a lot :)

Honestly - I would be happy to use a graphical (or any other alternative to traditional) language. I was reasonably enthusiastic when I started working with webMethods, and there are some things I like a lot. But honestly I cannot say it scales well beyond simple transformation/mapping tasks.


An angry programmer could find a gesture or two to name the most hated variables...


With a unique gesture(?)

And yes, the diffing on Blueprints is very good (again, in UE4).


I've always felt that Ruby was the language that got closest to natural language or to natural pseudo-code while retaining conciseness and not getting overly verbose. I hope more new languages are heavily influenced by Ruby or go further towards being highly readable and concise.


Ruby is fantastic for DSLs; however I fear it's becoming Betamax to Python's VHS.


It’s been the Betamax to Python's VHS for a long time.

Outside of rails, ruby is pretty unpopular.


Let's start with Fred Brooks: There aren't any new paradigms, techniques, or languages that (realistically) promise even a one-order-of-magnitude improvement in programmer productivity. (Paraphrased from "No Silver Bullet".)

That said, I think the biggest way a language can help is by having a great library. Code I don't have to write is a huge productivity boost. An outstanding example (for its day) was the Java library. It was like Barbie - it had everything. And it was organized (and documented) well enough that you could find what you needed pretty easily.

For the language itself, I don't have any great answers. But I observe that many of the comments here focus on syntax. Syntax matters, but don't forget that semantics matter also.


> There aren't any new paradigms, techniques, or languages that (realistically) promise even a one-order-of-magnitude improvement in programmer productivity

I am doing 10×. It's possible.


Maybe you are. (I'd need more information to know how to evaluate your claim.) But if Brooks is right, you're not doing it because of the language you use, or the programming paradigm or technique you use.


I'd like to see tests as a first class concept.

Every class / any construct gets a test stub. The compiler collects information during debugging on test data and on how the elements of the application are connected, and expands tests based on this information.


That article seems focused on languages aimed at the professional programmer.

I think the non-professional programmer is a bigger market. That is, the person who uses spreadsheets or writes some simple scripts.

Another issue is readability by non-professionals. Executives may not need to write business rules but they ought to be able to understand them enough to sign off on their correctness.


All that does sound a lot like COBOL.


COBOL is good for what it is.

I wouldn't use it to get the eigenvalues of a matrix though.


I've often asked myself this question - are there any fundamental reasons why language needs to be the most expressive way to instruct computers? If not, what else?

.. and I've tried to resist answering it and letting the question ferment instead.


You made me laugh. For some reason the image of me (an overweight programmer) writing my next website via interpretive dance - in the vein of the dance scene in "The Big Lebowski" - won't stop running through my head.


Most languages today come with a runtime. That makes it hard to interoperate with other languages/environments. To avoid this, a new programming language should either have no runtime (or a very minimal one), or be very explicit about which features it expects from its runtime and allow the runtime to be swapped out by something else (we might see something like that in the WASM sphere).

Furthermore, I believe we'll start seeing an increase in tooling around languages, and a focus on reducing iteration latency. Traditionally, REPLs/live editing have been mostly associated with higher level languages, but there's no reason that those things could not be applied to low level languages as well.

Finally, while I think that there will be always be room for both simpler and complex languages, the market for simple languages is somewhat underserved right now, so we'll start seeing more of those in the nearby future.

Disclosure: I'm the author of Muon, which is a new programming language that tries to embody these principles: https://github.com/nickmqb/muon


I'd love to see a modern successor to occam, or something similar that has first-class, painless multithreading.

It's strange to me that in this era of 32+ core machines, the language model is still single-threaded by default, with (often significant) extra effort required to execute multiple bits of code simultaneously. Not to mention languages like Python that restrict you to a single CPU core.
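
For contrast, Haskell's async library makes the fork/join part fairly painless, though it's still opt-in rather than the default; a minimal sketch (build with -threaded to actually use multiple cores):

    import Control.Concurrent.Async (concurrently)
    import Control.Exception (evaluate)

    main :: IO ()
    main = do
      -- Two independent computations on separate threads; wait for both.
      -- An exception in either cancels the other.
      (a, b) <- concurrently (evaluate (sum [1 .. 10000000 :: Int]))
                             (evaluate (product [1 .. 20 :: Integer]))
      print (a, b)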


Rust?


Erlang/Elixir?


I guess so, but Erlang runs in a VM that handles setting up OS threads and uses its own scheduler to manage Erlang "threads". I think that's cool and all, but it would be nice to have a language that could do that more natively, with less abstraction, imho. occam ran on bare metal.


I hope the next languages will come from interactive theorem provers and program synthesis. I'd much rather work at the level of types, propositions, models, and proofs and solve the interesting problems. The rest is book keeping and often repeated over and over.

And I hope the interfaces for these languages won't require the use of a glowing screen and a sadistic keyboard. I'd rather like it for computers to disappear into the background. I'd like to reason about my programs in a physical space where I can freely walk around, write on a note pad, draw on a chalk board, converse, and re-arrange the room to my liking. I quite dislike how I've developed astigmatism, am at high risk for RSI, and probably other health ailments because we can't think of a computing environment better than what we have right now... just with more pixels, pop ups, nags, swishy animations, etc.


How about a language that lets you write the tests and it creates your application code?


This might sound good to a TDD practitioner, but I'd suggest that the value of tests is that they independently verify an implementation. Couple them directly and soon you will be writing tests on your tests.

OTOH, saying "this must be true" and letting the compiler work it out sounds a lot like logic programming.
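
Property-based testing sits somewhere in between: you state what must be true and a tool hunts for counterexamples (not the compiler proving it, but in the same spirit). A minimal sketch with Haskell's QuickCheck:

    import Test.QuickCheck (quickCheck)

    -- "This must be true": reversing twice is the identity.
    -- QuickCheck generates random inputs trying to falsify it.
    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwice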


I've felt testing can be reduced to writing those functions twice and hoping at least one is correct. There has to be a better way.


The best way is making no mistakes :)

The second best is machine-checkable proofs, like types.

The fallback is tests. Sometimes the first two can't be applied.


BTW this is called "program synthesis" and there's a lot of research in the area right now. It only works for tiny programs at the moment.

Program Synthesis is Possible https://www.cs.cornell.edu/~asampson/blog/minisynth.html


ILP (inductive logic programming) and IFP (inductive functional programming) are easy-to-understand techniques for 'programming by example'.


Isn't that just a declarative language, or prolog?


Among a very long list of things that will probably never happen, I would like to see software projects become a kind of conceptual/logical history of beliefs and intention with the most recent iteration representing the current state of knowledge and practice about some domain of interest.

And I'd like to see tools that can extract value and improve efficiency by searching paths-not-explored and comparing them with the current knowledge state.

It's easy to forget that code is a means to an end, not an end in itself, and there may be other ways to reach those ends.

IMO there's been too little work done in CS on robust knowledge engineering - as opposed to lambda calculus-inspired algebraic manipulation.

ML is catching up a little, but a lot more may be possible.


A language that discourages its users from re-inventing wheels, and instead encourages crafting re-usable modules that work well across hardware and locality changes on solidly tested and reviewed foundations, would help improve productivity while still being comprehensible to everyone. This is the power of runtimes like the JVM and BEAM, but it is not really integrated into any language particularly well IMO. I'm thinking of a language that integrates algorithm constraints beyond types into something more like a meta-runtime, which helps functions run with the appropriate level of resourcing.

I'm just hoping that someone, at compile time, will be told: "warning: function will take 2M cores to sort an arbitrary list within 20 ms"


A goal-based language (similar to Makefiles), driven by AI, with automatic error handling of any kind.


I think a true next gen language will have distribution-related concepts built-in. For instance, a strict monotonic subset that will make it infinitely scalable.

For problems that can't fit in this subset, it will need concepts for coordination and reasoning about the state/consistency of the systems involved.

Concurrency and mobility/distribution are the big problems that programming languages have yet to satisfactorily solve. We have data structures and limited models that help with these problems (eg. CRDTs), but it would be a huge productivity boost to put these abstractions right in the language so you can read, write and reason about programs more easily.
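
For a taste of what such an abstraction looks like, a grow-only counter is about the simplest CRDT; a minimal Haskell sketch (names are mine, purely illustrative):

    import qualified Data.Map.Strict as M

    -- Grow-only counter: each node bumps its own slot; merge is a
    -- pointwise max, so merges commute and replicas converge no
    -- matter the delivery order.
    type NodeId   = String
    type GCounter = M.Map NodeId Int

    increment :: NodeId -> GCounter -> GCounter
    increment n = M.insertWith (+) n 1

    merge :: GCounter -> GCounter -> GCounter
    merge = M.unionWith max

    value :: GCounter -> Int
    value = sum . M.elems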


Does Erlang not satisfy this?


Erlang solves some of it, but having to manually coordinate actors is not suitable for every problem. Collaborative editing for instance.


I think we’ll be working in bigger concept “chunks”.

Sort of like Django today but it will intelligently connect the plumbing for you.

For example you’d just select/say, “add a user authentication system to this app” and it would figure out and guess at the best way to do that.

Or “get me something to store the data users enter on this new form” and it would have some smarts to store it in a way that makes sense for your purposes.

You could always drill into what the system creates for you and change things but hopefully most of the time it comes up with something reasonable.


I'd like people to go back to understanding why atomicity is important and to stop "designing" systems with microservices that have at most a few thousand users. As for the future, I'm already extremely happy with the compromises Elixir gives me, but I'd like to see a redux/mobx-type thing that is set up from the server infrastructure and knows how to interact with the frontend, without me constantly having to keep things in sync and up to date manually.


Stack-based concatenative languages are the future. If you haven't heard of them (and of railroad programming), do yourself a treat and look them up!!


Have you heard of them? (Everything is a pipeline, everything is RMDB.) ;-)

https://github.com/linpengcheng/PurefunctionPipelineDataflow


They may be a future. I'm pretty sure they aren't the future, though...


Unless hardware changes drastically, I can't really foresee the demand for a new programming language.

Who is there to back it? In this era, it has to be one of the big techs out there. Unless they somehow find that retraining their tens of thousands of engineers in a new language, and changing their infrastructure accordingly, is a cost worth the benefits the language brings, I don't see how this would happen.


This is a chicken-and-egg problem: if there is no "new paradigm" hardware, no one will create software for the new thing, and while there is no software for the new hardware, no one has an incentive to buy it. The situation is somewhat similar to multi-core CPU adoption, which was delayed by at least a decade in consumer computing for that reason. There were even motherboards supporting dual CPUs targeted at the consumer market (in the Celeron/PIII era), but they were never popular outside a niche group of enthusiasts.


Existing languages are _terrible_ for today's hardware.


Not true. They have been optimized for decades. A programming language is as much about implementation as abstraction.


Nope, only compilers/interpreters were optimized, not the languages themselves. Languages are the same as conceived in the era of single-core CPUs with one-cycle memory latencies.


I think things are going to split up even more:

- languages for physicists

- languages for game developers

- languages for business apps

- languages for mobile apps

- etc.

These domains turn out - IMHO - to have vastly different needs and are better served by specialist tools. Some of them textually, some of them visually (game design: Unreal's blueprint).


It'll look like Lisp. Or it'll take a single feature of Lisp and call it new (as has been the case for quite some time). Eventually every language family will be a mutually incompatible collection of Lisp variants.


I just want LOLCODE to become mainstream.


How about a new kind of hardware-enhanced IDE that stooge-slaps the programmer if he even thinks about doing something like making a test that depends on stacktrace line numbers (just saw this... uyyy)?

What if I just <whack>...okay, maybe I'll try something else.


It's interesting that this paper talks about compilers becoming a lot more sophisticated and powerful, and simplifying many aspects of programming. I've always hoped for this, and want to work on this as a personal side research project.


I'd like to see a language with built in version control, so that you can call older versions of parts of the code, compare performance between branches all within a running process. I believe there is a lot of potential here.


Educated guess: Very much like the old ones. That would be my conclusion, at least, looking back at the last 20 years.

Sure, there are exotics like languages for code golfing and domain specific languages, but overall things stayed pretty familiar.


I suspect that we'll be writing C, C++, Java and Javascript for the next 50 years. Unless WebAssembly really takes off, and then Javascript might finally go to the dustbin of history, as it should.


I really wonder if machine learning is going to allow us to create more intelligent editors/linters/compilers. In fact, it could start to automate most of the actual process of writing code.


Facebook demonstrated a version of this recently https://ai.facebook.com/blog/aroma-ml-for-code-recommendatio....


I'm working on one. It's nothing like any of the comments here. You all are trying to add complexity and I removed it.

All the comments here are along the lines of "I want a faster horse".


They will mostly look like what we have today, but they will myopically focus on one particular set of principles and have fancier names.


Truly the next programming language would be spoken English. Something like "Jarvis, give me the optimal schedule for a speaking tour of 27 North American cities and I don't want to pass through the same airport twice. Also, outline each stop in invisible red lines and I need it sent to my iPhone in Outlook format by yesterday."

Anything less and you may as well be using JCL on punched cards. $END


Abstraction to the next level up is classically how language paradigms have progressed. From flicking switches that represent bits, to assembly language, to C, to memory managed languages, etc. Programming languages are not for the computer, they're for the computer programmers' weak grey matter between their ears. Future languages should be entirely about improved coping strategies for us.

Higher level abstractions allow the programmer to do more with fewer instructions. The closest we've got to that right now is monadic programming in Haskell (and other languages with first-class support for monads). They allow encapsulation of common patterns (if/then/else, exception handling, state management, environment passing, list processing, etc.). But it would be naive to think we're done.
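
For instance, the Maybe monad packages up the "check for failure after every step" pattern; a minimal Haskell sketch:

    import qualified Data.Map as M

    -- Three dependent lookups; the monad supplies the "stop at the
    -- first missing key" plumbing, so none of it clutters the code.
    resolve :: M.Map String String -> String -> Maybe String
    resolve env k0 = do
      k1 <- M.lookup k0 env
      k2 <- M.lookup k1 env
      M.lookup k2 env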

Anybody who works on web-applications over a long time will realise we've run out of luck with our current languages. Code-bases that only grow and projects that never end. I work on a code-base of 15 million+ lines of code, and the language gives very little help in terms of managing that. Especially when the monolith gets broken up into services.

We need:

* Abstraction from the network to easily support distributed applications. Erlang has its actor system, which is the closest to this, I think. The standard criticism is that because networks can fail you can't build a one-size-fits-all system. Looking at the actor model, it's clearly possible to build something that fits most common use-cases for distributed applications.

* Abstraction from error handling

* Abstraction from complex control flows over time - i.e. code where one line runs, then the next line might run 6 months later

* Abstraction from threads: no mutable data; built-in support for synchronisation, coordination, and resolution. For example, types that support vector clocks transparently, or the ability to have locally synchronised versions of something on a server, etc.

* Improved type systems that allow for easy type-driven development, so the compiler is enforcing the business rules of the system. Languages with dependent type systems are closest to this, but still often feel quite clunky. Perhaps even ban the use of types like int, string, etc. directly - you can only alias them into new-types like Metres, PersonName, etc., so you're forced into using stronger types (a minimal sketch follows this list), although someone would probably just alias `int` to `Int` to get around it.

* Adaptive type systems that can describe a network of services. They would fail to compile if you send the wrong message to a service, and fail to compile if, a service which should be available, isn't.

* Something that's as simple to understand as Python for new devs.

* No null

* OO to die in a fire
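
On the stronger-types point above (the sketch promised there): Haskell's newtype already gives a zero-cost version of this today, as a minimal illustration:

    -- No runtime cost, but unit mix-ups become compile errors.
    newtype Metres = Metres Double deriving (Show)
    newtype Feet   = Feet   Double deriving (Show)

    climb :: Metres -> Metres -> Metres
    climb (Metres a) (Metres b) = Metres (a + b)

    -- climb (Metres 10) (Feet 30)   -- rejected by the type checker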

I've personally had to build all the stuff above to facilitate the scale of our system - and by scale, I mean scale of code-base; any language that makes this stuff transparent and picks good safe default behaviours will speed up the process of writing new code and maintaining old code.

One thing that's of note when each paradigm comes along is how there's a mass of devs from the old paradigm that say "that's not real programming"; I think we'll only be at the next paradigm when we have that moment.


A wonderful list, with one more feature desired: seamless integration with databases, especially relational DBs.


My future coding will be chains of functional blocks with configurations.


This sounds like a nightmare to debug.


There's something I think could be game-changing on the front-end of programming languages:

Just (as in 'only') change the (internal) representation of source code from character sequences to something that doesn't need a complex parsing process. In other words, eliminating the concept of syntax from language design, while keeping other language properties intact (could even use e.g. an LLVM back-end), allowing whoever desires it to write their own editors etc. as always.

(This is different from eliminating text: IMO, the visualization should remain primarily text. I say to eliminate syntax in the sense that the text 'visualization' no longer has a connection with semantics, nor any grammatical constraints.)

The reason I say 'just' change that: there are other systems structured to render AST-like structures (rather than parsing), but they are integrated with particular software systems. We need a general purpose alternate format with a similar independence of any project that 'character sequences' have for current source code.

------

I have a proposal for one that's very simple and general: represent languages as graphs of language constructs, and represent particular programs as paths through a language graph. More details at [1].

Making this substitution we can: throw out parsing, allow easy customization of language appearance (including things that would traditionally fall under 'syntax'), have much easier access to language insight (source representation is directly in terms of a language definition), give far easier to access to experimenting with language UX ideas, and more.
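
As a crude illustration of decoupling the visualization from the stored program (plain ASTs here, not the graph representation from the linked post), a Haskell sketch:

    -- The stored artifact is the tree; "syntax" becomes a per-editor
    -- rendering choice over the same structure.
    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

    renderInfix, renderLisp :: Expr -> String
    renderInfix (Lit n)   = show n
    renderInfix (Add a b) = "(" ++ renderInfix a ++ " + " ++ renderInfix b ++ ")"
    renderInfix (Mul a b) = "(" ++ renderInfix a ++ " * " ++ renderInfix b ++ ")"
    renderLisp  (Lit n)   = show n
    renderLisp  (Add a b) = "(+ " ++ renderLisp a ++ " " ++ renderLisp b ++ ")"
    renderLisp  (Mul a b) = "(* " ++ renderLisp a ++ " " ++ renderLisp b ++ ")"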

I'd love to hear thoughtful feedback on this. I will likely invest a solid amount of time/effort into building a prototype before long, so if someone can spot a potential weakness that I've missed, I'd be forever grateful.

(That said, please attempt at least a small amount of charity in reading: I wrote this in 2015 and most all I've gotten is people replying, "visual programming doesn't work" —even though it's expressly not advocating visual programming. I believe I really am talking about something fairly new here: and I am more than willing to listen if I'm mistaken. I have too many projects already; I have no need for this to work out. I just haven't been able to disprove it as... not being what it appears to be, and so I've felt an obligation to see it through for a long time.)

[1] http://westoncb.blogspot.com/2015/06/how-to-make-view-indepe...


I skimmed your blog post, and I'm not quite sure I understand how it will make editing easier. It seems more complex to me, although that might be because it's unfamiliar.

If you want some intelligent counterpoint, here is a post by the designer of Rust on why text is a good representation for programs:

Always Bet On Text https://graydon2.dreamwidth.org/193447.html

As someone who's working on a programming language, and has implemented a few DSLs and many parsers, I tend to agree with him.

Parsing is a pain in the ass, but I think it can be made easier. The difference between text and structured data is "just" parsing, so I don't feel that switching to structured data can be a real paradigm shift. If there were a paradigm shift to be found, it would have been implemented already by converting text to structured data and operating on that data.

And this of course already happens in dozens of IDEs, and that's great, but it already happened. And while I have seen people be fantastically productive in IDEs, a lot of the best programmers I've seen also use emacs/vim.

But your ideas do seem more fleshed out than most, and I think that developing a prototype is the only way to make them more concrete, and more clearly see the benefits and costs. I do think there are benefits to making structured data the primary format -- I just think they are outweighed by the costs. Graydon's post outlined a bunch of things you lose if you abandon text.


I can’t give a full reply atm, but briefly I’d say: yes, it’s more complex than character sequences but that complexity (which isn’t much) could be shifted to a lib once for any number of languages, as opposed to needing to parse every language uniquely.


That's true, but what I'm saying is that parsing is not a bottleneck in programming tools.

Parsing every language uniquely is a lot of work. But it is doable and more or less "done". After that you have a whole bunch of other problems to solve (i.e. all the other things an IDE or debugger does), and that's where the real work is.

You can get rid of parsing by making structured data the primary format rather than the secondary format. But then you introduce a whole host of other problems. For example, how do you 'git merge'? You can't even collaborate anymore because merging is a textual algorithm. You would have to write a new merge algorithm for every single programming language, because each one has its own structured data format.

So basically you get a slight benefit by removing text, in exchange for a huge cost.

BTW here are some posts I've written about parsing:

http://www.oilshell.org/blog/tags.html?tag=parsing#parsing

Again, parsing is a pain in the ass, but I believe it can be made easier, and it's not a bottleneck in any case.

In my mind, the biggest bottlenecks in programming are:

1) Understanding the problem well enough to formulate the solution, whether it's in a programming language, math, or a visual syntax.

2) "Size is code's worst enemy". Many programming languages are inappropriate to express the problems they are solving. The code becomes unmanageably large pretty quickly. A million lines of C++ code is not a great way to express a solution to anything, yet it's the state of the art for vital projects like LLVM and Clang, Word, Photoshop, etc.

Both of these are 1000x bigger problems than parsing IMO. I'm not trying to criticize your specific idea, but I have heard the general idea to "get rid of text" several times, including more than once on this thread ... so that's my general rebuttal, if you're interested in a reasoned critique.


Again I’m out atm and can’t reply on full, but I think if you read my post in more detail it already has the response to much of what you’re describing.

I agree it’s not the bottleneck to programming languages overall, but it is the bottleneck to the HCI/UX component of language design.

Edit: the git merge problem can be resolved by having a canonical text representation or doing a structure diff instead.

As for the rest of your response, you’re basically saying you didn’t take the time to get a basic understanding of my proposal, and you think other problems are more important than the specific one I’m addressing. I’m not sure how that could lead to a fruitful discussion.

I will take a look at the post about benefits of text though, so thanks for that. (Apologies if my frustration is coming out here, it’s just that I have had many responses—most much worse than yours—basically saying, “I didn’t read what you wrote but you should consider X instead.”)


That sounds like a great idea, actually. Maybe I misunderstood the gist of it, so sorry in advance if I did, but:

It makes a lot of sense to decouple parsing from the rest of the compiler, if only purely from an engineering standpoint.

Editors, IDEs and other tools (transpilers, linters, formatters) already have to reimplement the parser, or at least hook into some API. Having them interact directly with the AST is a huge bonus. Sure, you'd have to rewrite your diffing algorithm to use a tree, but it seems minor compared to the cool things we'd get.

As long as you have a sane AST specification (the HARDEST part, IMO) you'd be able to have teams with people working in a Python-like syntax, others working with a LISP-like syntax, and so on, as long as internal semantics are the same (again, the hardest part).

Instead of having thousands of crappy compilers we'd have pluggable parsers emitting ASTs. This is much better for experimenting with Developer Experience and trying out new things.

Even the typing system could be decoupled: adding a borrow checker or dependent types wouldn't require writing a new language or forking an existing one, so they would be reusable, as long as the AST supports it. Running a linter during a compilation process would also be trivial.

And we would be able to reuse optimizations, code generation, interpreters.

We complain so much about "vendor lock in" but this has potential to remove the language lock-in that we have. Sure, we'd still be able to get locked-in to frameworks and libraries, but that's something to solve another time.

--

OT: I actually had a similar experience back in the 2000s when we had to convert a large VB.NET codebase to C#: we used the first crappy "convert VB.NET to C#" site we could find online and it did the job amazingly well. Fun times.


Exactly this! It's just a graph describing what happens to data and the abstractions behind the constructs are switchable. UX is especially important here.


I just want to reply to almost every suggestion on this page that we've tried that, numerous times. Most suggestions here need not to explain why the suggestion is wonderful, but why it is that previous efforts to implement the suggestion didn't take the world by storm if they were so wonderful.

That extends to probably about half of the linked paper, as well, though I daresay it fares better than most I've seen.

I think we've mostly mined out the thoughts of the past, running on machines essentially simulating the machines of the past. All the bright ideas people had fifty years ago but had no prayer of getting to work on the hardware of the time have had time to be tried out. While I'm sure there is some path dependency in our current best practices, I personally think it's easy to overestimate it. There are a lot of languages running around, and if something were really 10 times better, I think we'd notice.

I think the next step up is going to require new thoughts, mostly driven by taking a fresh look at the hardware we have now and reconsidering the way we currently lay down a ~1970s paradigm on top of whatever hardware we get. It may also require some new hardware to be developed, as we stall out at current clock speeds. I don't know exactly what it will look like, but, well, first of all I see multiple languages, not one grand one, but languages like...

... something natively built to deal with hetereogeneous execution on GPU, CPU, slow-but-efficient-CPU, FPGA, and so on.

... something that successfully wraps up the sort of work that Haskell has been pioneering and moves us beyond working with ints and strings to something higher; an integrated view of functors and other higher structures, probably wrapped up behind friendly names, and using that to work with parallelism appropriately. There's a sense in which it's weird that it's 2019 and I'm still getting slabs of 32 bits to hold numbers, and the semantics of this have hardly changed.

... Rust arguably already fits in here, but more efforts to pull down work done on the fringes of safety into something that people can actually use to build high-performance, yet safe, systems. These languages may not be 10x development speed improvements at the local level, but they can accelerate large projects by making a lot of things safe to do that right now are terrifying on a large code base. In general I still feel there's a lot of work to be done in the field of large programs. A lot of up-and-coming programming languages are still way too focused on making individual lines powerful, because that's cool, or because it's easy to evaluate how a new proposed feature does or does not make a 3-statement snippet now become one, but neglect how to make big programs hold together safely and without the whole thing calcifying to the point you can't move it forward as time goes on.

... as a specific case, languages or libraries designed to deal with the burgeoning field of "what can we safely do over a network", via CRDTs, lattices, and other safe mechanisms, integrated into the entire language's outlook rather than being a bodged-on addition.

... languages like Jai that detach the operational semantics of a data type from its storage, so that I can create an array of structs and just tell that array to actually be a struct of arrays, and maybe compress the array data if I'm willing to guarantee I'm always going to go sequentially across this array, etc., or tell it my access pattern so it can store the array optimally, using our modern hardware more efficiently than pretending it's still 1970 and memory accesses are all the same cost. Maybe throw in some cache awareness, or a pervasive cache-obliviousness, again at the language level if possible.

One thing I do not see, for what it's worth, is an increase in the declarative nature of programming languages. I don't particularly believe in declarative languages anyhow, but a lot of these things involve giving the programmers the ability to say more than current languages let them say, not less.

Many of these can work together harmoniously; safety, "something something CRDTs lattices", and data structures that integrate higher-order safety characteristics like functor and monad probably all go together pretty well. A language to address heterogeneity in computation might just work well on a network computer, too.


Bayesian programming anyone?


I think most programmers have very strong prior(opinion)s already...



