Ask HN: Which people and groups are researching new approaches to programming?
263 points by maryrosecook on Jan 18, 2016 | 121 comments
Which people and groups are researching new approaches to programming?

I'm interested in groups who are thinking about the next thirty years, rather than the next five years.

Some projects I find interesting:

* Subtext by Jonathan Edwards: https://vimeo.com/106073134

* Apparatus by Toby Schachman: http://aprt.us

* Bret Victor's explorable explanation and introspectable programming demos: http://worrydream.com

* Eve by Kodowa: https://www.youtube.com/watch?v=VZQoAKJPbh8

* Scratch by the MIT Media Lab: https://scratch.mit.edu

* Mathematica by Wolfram Research: https://www.wolfram.com/mathematica

Why am I interested? I'm working on Code Lauren (http://codelauren.com), a game programming environment for beginners. I want to learn as much as possible about work done by others in the same area.




I think you'd probably be interested in Viewpoints Research Institute (and Alan Kay):

http://vpri.org/index.html

...and STEPS:

http://www.vpri.org/pdf/tr2011004_steps11.pdf

"We set a limit of 20,000 lines of code to express all of the “runnable meaning” of personal computing (“from the end‑user down to the metal”) where “runnable meaning” means that the system will run with just this code (but could have added optimizations to make it run faster). One measure will be what did get accomplished by the end of the project with the 20,000 lines budget. Another measure will be typical lines of code ratios compared to existing systems. We aim for large factors of 100, 1000, and more. How understandable is it? Are the designs and their code clear as well as small? Can the system be used as a live example of how to do this art? Is it clear enough to evoke other, better, approaches?"


On the 5th year report (that you linked to), they mentioned that they had funding for another year and a final report. It's unfortunate that there was never a 6th year final report to wrap up the project.


Alan Kay is also running the Communications Design Group: https://github.com/cdglabs


Just to clarify, Alan is only 'running CDG' insofar as he is supporting and representing it as a sister lab to VPRI. The various research groups there are completely autonomous and as far as I can tell not publicly identified.


A lot of the news coverage claims that he had a role in recruiting specific people to CDG, and even names some of them (e.g. http://www.bloomberg.com/news/articles/2015-01-29/sap-looks-... ), which would be more than just lending his support, even if he's not really running the place. I can't confirm whether that's accurate, though.


There seem to be two CDG branches: LA with Alex Warth (Jonathan Edwards also), and Bay Area with Bret Victor (Toby is also there). They are both doing really good work.


I'd enjoy seeing Niklaus Wirth's reaction to that project beating him at his own game from the other side of language design. My reaction to it is similar to my reaction to reading Wirth and Jurg's work on Lilith a long time ago.


Like most things, the kernel of tomorrow's ideas is already here. On the scale of the next five years, these ideas will give rise to what the future of programming will look like:

* Refinement types

Liquid Haskell: https://ucsd-progsys.github.io/liquidhaskell-tutorial/02-log...

* SMT Solver Language Integration

Cryptol: https://github.com/GaloisInc/cryptol

* Session Types

Scribble: http://www.scribble.org/

* Dependent Types

Agda: https://en.wikipedia.org/wiki/Agda_(programming_language)

Idris: http://www.idris-lang.org/

* Effect typing

Koka: https://research.microsoft.com/en-us/um/people/daan/madoko/d...

* Formal verification

Coq: https://www.cis.upenn.edu/~bcpierce/sf/current/index.html

TLA+: http://research.microsoft.com/en-us/um/people/lamport/tla/tl...

This is the general trend: more composable abstractions, and smarter compilers and languages that can reason about more of our programs for us.
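For a taste of the first item: refinement types attach predicates to types, and the compiler discharges them statically with an SMT solver. As a rough runtime analogue (the real thing, e.g. Liquid Haskell, rejects the bad call at compile time; `refine` and `safe_div` are illustrative names of mine, not any library's API):

```python
# A "refined" value: one that provably satisfies a predicate. Here the
# proof is a runtime check; in a refinement type system it's done statically.
def refine(predicate, value, name="value"):
    """Return value if it satisfies the predicate, else fail loudly."""
    if not predicate(value):
        raise ValueError(f"{name} does not satisfy its refinement")
    return value

def safe_div(n, d):
    # In Liquid Haskell the type would be roughly:
    #   n:Int -> {d:Int | d /= 0} -> Int
    # and passing a possibly-zero d would be a *compile-time* error.
    d = refine(lambda x: x != 0, d, "divisor")
    return n // d

print(safe_div(10, 2))  # 5
```

The point of moving the check to compile time is that the error surfaces at the call site before the program ever runs.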


Some more projects:

* Unison by Paul Chiusano: http://unisonweb.org/2015-05-07/about.html

* APX by Sean McDirmid: https://www.youtube.com/watch?v=YLrdhFEAiqo

* Awelon by David Barbour: https://github.com/dmbarbour/awelon/blob/master/AwelonProjec...

And perhaps more in the "next five years" category:

* Om Next by David Nolen: https://www.youtube.com/watch?v=MDZpSIngwm4

* Elm by Evan Czaplicki: http://elm-lang.org/


Me.

I'm working on Full Metal Jacket, a strongly-typed, visual, pure dataflow language (http://web.onetel.com/~hibou/fmj/FMJ.html) with its own IDE.

Things have advanced a fair bit since I wrote those pages, and published the recent paper, so I'll add to the tutorials very soon, and announce this in Hacker News. Type definition, macros, and a few other things have been added to the language.

.303 shared-source release approaches, but I don't do deadlines.

No battle plan survives contact with the enemy, but I have some ideas for future directions, including adding dependent types, running on a multi-core machine with speculative execution, and automatic programming (i.e. the user supplies just inputs and outputs). Very long-term ideas involve developing a variant of the language which enables programs to run backwards, to enable execution on a gated quantum computer.


I'm working on something slightly similar.

But I'm sticking with the keyboard. And the "visuals" will still be text, with the option of other visualizations. Keyboard+text scales better than anything else.


Neat! Would be cool to have some pictures on the front page.


Wow, that's oddly similar to an idea I've been playing with.

Who do you see as the core users of these tools?


A lot of people creating multimedia works use visual programming languages that have dataflow models.

Some that are fairly mature projects:

vvvv - https://vvvv.org/

Puredata - https://cycling74.com/products/max/

Max MSP - https://puredata.info/


Am I wrong to think about the "graphical" aspect of visual programming languages as another layer of abstraction? If this is the case, doesn't that take some power away from these users, in the sense that their capabilities are limited to the abstractions/interface given to them by the language's developers?

Not exactly trying to question whether it's a good/bad thing (as I'm aware that for many creators this might be exactly what they need), just trying to understand.


No, because every language has one specific set of abstractions and interfaces chosen by the language's developers. This is the same for textual and visual programming languages alike.

What might well be very hard is making a language where all abstractions and interfaces map to useful graphical representations; I imagine it's much, much easier to represent a JSON object graphically than it is to represent a piece of C code this way.

For instance, how would jumps look? Lines going between far-removed boxes, producing code that literally looks like spaghetti? Moreover, do you even want to graphically represent things like pointer arithmetic? Will it be easier to work with? How do you represent variables and globals?

Maybe you would only represent coarse features such as data flow graphically, and hide finer-grained details and code in "blocks" with labels and defined inputs and outputs. You would still mostly write your code, but the graphical view might help you mentally model your program.


Actually, the graphical aspect of a visual programming language is often an "un-abstraction." Text is pure abstraction, with little static context beyond names. VPLs often attempt to provide more context by (a) mixing in execution context (like in Quartz Composer) and (b) through direct mappings to the domain (e.g. a VPL for layout that has you manipulating layouts directly). Much of the usability benefit, as well as the scaling problems, that arises in VPLs is due to them trying to be less abstract.


It doesn't have to be just another abstraction layer. In Full Metal Jacket, there's no underlying human-readable text-based language. The abstract machine it's designed to run on is highly concurrent (MIMD) dataflow, which is very different from the von Neumann architecture. You can represent concurrency more clearly visually than textually, though a few dataflow-based textual languages exist, such as SISAL.

At present, FMJ is interpreted and runs on a virtual dataflow machine. This was a necessary step to ensure the computation model was implemented accurately. Compilation to run on a single processor would actually be simpler than usual, however: there's no parsing or dataflow analysis required. Looking into the future, the question that should really be asked is whether languages designed to run on a single processor, or to execute the same instructions in parallel (SIMD), can be easily adapted to run on MIMD architectures. Do you really want to be using MPI?
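The firing rule a pure dataflow machine uses is simple enough to sketch (this is my toy illustration, not FMJ's actual machine): a node fires as soon as all of its input ports hold a value, so independent nodes can fire in any order, or in parallel on a MIMD machine.

```python
class Node:
    """A dataflow node: fires when every input port has received a value."""
    def __init__(self, fn, n_inputs):
        self.fn = fn
        self.inputs = [None] * n_inputs
        self.filled = 0
        self.outputs = []                  # list of (target_node, target_port)

    def receive(self, port, value, ready):
        self.inputs[port] = value
        self.filled += 1
        if self.filled == len(self.inputs):
            ready.append(self)             # all inputs present: ready to fire

def run(sources):
    ready = list(sources)                  # nodes whose inputs are pre-filled
    results = []
    while ready:
        node = ready.pop()                 # firing order is a free choice
        out = node.fn(*node.inputs)
        if not node.outputs:
            results.append(out)            # sink nodes produce the answers
        for target, port in node.outputs:
            target.receive(port, out, ready)
    return results

# (1 + 2) * (3 + 4): the two additions may fire in either order.
add1 = Node(lambda a, b: a + b, 2); add1.inputs = [1, 2]; add1.filled = 2
add2 = Node(lambda a, b: a + b, 2); add2.inputs = [3, 4]; add2.filled = 2
mul = Node(lambda a, b: a * b, 2)
add1.outputs = [(mul, 0)]
add2.outputs = [(mul, 1)]
print(run([add1, add2]))  # [21]
```

Note there's no program counter anywhere: the schedule falls out of data availability, which is why the model maps naturally onto MIMD hardware.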


Most languages compile the source to an AST, so I would say the AST is one level of abstraction below source code. Then, text and graphical languages could be equally powerful if each can represent any AST the other can.
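Python makes this concrete: the standard library exposes the AST, and two different surface spellings collapse to the same tree. A quick sketch:

```python
import ast

# Two textual spellings of the same program...
a = ast.parse("x = 1 + 2")
b = ast.parse("x = (1 + 2)")   # redundant parentheses exist only in the text

# ...are identical one level of abstraction down, at the AST.
print(ast.dump(a) == ast.dump(b))  # True

# A graphical editor could manipulate this tree directly, never storing text:
print(ast.dump(a.body[0].value))   # the BinOp node for 1 + 2
```

On this view, a visual language is just a different projection of the same tree a textual language parses into.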


I have done some programming in these and they are definitely more suited to the applications that their creators had in mind.

I wouldn't want to do general purpose programming in them -- at least as they exist currently.


Full Metal Jacket is general-purpose. Doing general-purpose programming in it has, however, required me to think very differently. At present, it's still easier for me to program in Lisp, but that's mostly down to being a very experienced Lisp programmer and a rookie FMJ programmer. Initial difficulties are inevitable when you learn a new language that differs from the languages you already know. I had similar issues with Prolog, but mastered it.

Different general-purpose languages are better adapted to different programming tasks, however, and Full Metal Jacket may in time find its niche. I wouldn't write a neural network in Prolog, or an expert system in C, for example.


Just noticed I switched up the PureData and Max MSP links. whoops!


Hat off to you, very interesting. Keep up the great work.


Sean McDirmid's work on Glitch is an interesting (and distinctly contra the current "FP all the things!" zeitgeist) approach to live programming: http://research.microsoft.com/en-us/people/smcdirm/

Conal Elliott's work on Tangible FP was an interesting attempt to unify functional and "visual" programming that has been mostly abandoned: http://conal.net/papers/Eros/ Hopefully some of its ideas may yet survive in other projects.

The Berkeley Orders of Magnitude project is somewhere at the intersection of database and PL research, aimed at handling orders of magnitude more data with orders of magnitude less code: http://boom.cs.berkeley.edu/ The Dedalus language in particular is interesting, as it integrates distributed- and logic-programming: http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-17...

Joe Armstrong's thoughts on a module- or namespace-less programming environment are interesting: http://erlang.org/pipermail/erlang-questions/2011-May/058768...

I've been meaning to write a blog post about the convergence of various ideas of what the future of programming might look like for a while now, so I have a bunch of notes on this topic. The OP & other folks have already mentioned most of the other projects in my notes - in particular Unison, Subtext, Eve, & Bret Victor's work.

My current line of work is on tackling a tiny little corner of what I see as the future's problems - trying to find a better way to combine database/relational programming and functional programming. My work is here (but the docs are almost entirely in type-theory-jargon at the moment, sorry! feel free to shoot me an email if you have questions): https://github.com/rntz/datafun


Thanks for Datafun. Where can I start to learn more about the syntax you're using in the README.md?


Hm, which bits of syntax are you confused by in particular?

At the beginning, when defining what types, expressions, contexts, etc. look like, I use a bunch of BNF (https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form). I'm afraid I don't actually know a good introduction to BNF.

The clearest account I know of the most critical part of my notation, namely inference rules (those things with the big horizontal bars), is in Frank Pfenning's notes for his course on Constructive Logic: http://www.cs.cmu.edu/~fp/courses/15317-f09/lectures/02-natd... (the full course of notes is at http://www.cs.cmu.edu/~fp/courses/15317-f09/schedule.html, but the one I've linked is most relevant)

A lot of the rest of my notation (types like A × B, A → B, expressions like λx.e, the judgment Δ;Γ ⊢ e : A meaning "e has type A in context Δ;Γ", and the inference rules themselves) is borrowed from fairly standard type-theory-of-functional-languages stuff. Standard books to read here are Types and Programming Languages by Benjamin Pierce (https://www.cis.upenn.edu/~bcpierce/tapl/); or Practical Foundations for Programming Languages by Bob Harper (https://www.cs.cmu.edu/~rwh/plbook/book.pdf). Those are pretty heavy books that cover a lot of ground, including stuff that's mostly irrelevant to my work. If you're interested, though, they're a great place to start learning about formal PL theory!

Was there anything else in particular you wanted to know about?


Thank you, this is perfect to get started. I've been programming for a long time, and I couldn't extract enough meaning on my own.

I'd heard about BNF notation, with the recent news of Peter Naur's death and before, but never took the time to get my head around it.

I think your description of Datafun and the references you gave me are good enough for me to learn something new and to understand your project.


We're working on http://tonicdev.com . You can read a bit about it here http://blog.tonicdev.com/2015/09/10/time-traveling-in-node.j... or just try it yourself.

Some guiding principles:

1. So much of what hinders programmers is that the friction of using existing solutions is so high that they choose to redo the work themselves. In Tonic we've made every package for JavaScript immediately available (over 200,000), so that you can focus on putting existing pieces together. We keep adding new ones immediately as they come in, as well as making notebooks accessible to each other. It's like you have access to the global library.

2. There shouldn't be any snippet of code online that isn't runnable. Every example you read on the internet should be immediately runnable (regardless of whether it requires binary packages or what have you). The difference in understanding when you can tweak an example and see what happens, vs. just cursorily reading over it and thinking you get it, is huge. Tonic embeds allow this: https://tonicdev.com/docs/embed


How are you handling sandboxing and other security/abuse related things?



Seconding Urbit since I'm a giant fanboy. It's not a new programming language so much as a new computer, internet, everything. And it's a lot of fun to play with - Hoon isn't nearly as difficult as it looks :)

As insane as a lot of it seems, most of it is just the logical conclusion of a few base ideas: what would computers look like if you owned your own data, if it was all built on top of a purely functional VM backed by bignum binary trees, and if you could replace all the mistakes made over the years (nonversioned file systems, untyped shells, manually serialized data structures)?


This sounds amazing. It's almost exactly what I've been wanting to do, on an even grander scale, and by people with much greater chance of it working (even if it's only ever 'beta quality') than I would!

"Re-inventing the wheel" is a phrase usually used in a derogatory manner, but computing has changed so much in its brief lifetime that I've often wondered why we haven't started over: shed things we don't need any more, and built everything as we would today, with all those lessons learned.

It's too easy to be confused by, or unsure how to accomplish, something Unix-y, and for the answer to begin with "oh, well, because originally...", or "well, you see, in the eighties..."


My primary concern is with personal knowledge bases [1]. This intersects with new approaches to programming, because programs and algorithms are an aspect of and a way of interacting with knowledge.

It's mostly conceptual right now, but my idea is to represent knowledge as a hypergraph with spatial and temporal dimensions on both edges and vertices. This, I hope, could represent every kind of knowledge I can imagine. The hypergraph would function as a filesystem and database, and you could query/program the system. It would be Emacs for all media, and not just text.
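The hypergraph part, at least, is easy to prototype. A minimal sketch of my own (the class and method names are hypothetical, not the author's design): a hyperedge links any number of vertices, and edges and vertices alike carry attributes, which is where the spatial and temporal dimensions could live.

```python
class Hypergraph:
    """Toy knowledge hypergraph: an edge links any number of vertices and,
    like a vertex, carries attributes (e.g. spatial/temporal stamps)."""
    def __init__(self):
        self.vertices = {}        # name -> attribute dict
        self.edges = []           # (frozenset of vertex names, attribute dict)

    def add_vertex(self, name, **attrs):
        self.vertices[name] = attrs

    def relate(self, *names, **attrs):
        """A hyperedge over any number of vertices, not just two."""
        self.edges.append((frozenset(names), attrs))

    def query(self, name):
        """All hyperedges touching a vertex - a primitive 'filesystem' lookup."""
        return [attrs for members, attrs in self.edges if name in members]

kb = Hypergraph()
kb.add_vertex("NLS"); kb.add_vertex("Engelbart"); kb.add_vertex("1968")
kb.relate("NLS", "Engelbart", "1968", kind="demo", when="1968-12-09")
print(kb.query("Engelbart"))  # [{'kind': 'demo', 'when': '1968-12-09'}]
```

A real system would of course need indexing and a query language on top, but the ternary edge above already shows what ordinary graphs can't express directly.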

I want to augment the inherent power of the hypergraph with a mashup of the OpenEndedGroup's Field, Xerox PARC's Smalltalk, Doug Engelbart's NLS, Symbolics' Open Genera, org-mode, Vim, the Wolfram Language, Bell Labs' Plan 9, and Ted Nelson's ZigZag and Xanadu projects.

If anyone finds this interesting and wants to chat about this stuff, please email me at the email address in my profile.

[1] https://en.wikipedia.org/wiki/Personal_knowledge_base


I'm working on a relational language, like the one described at http://www.try-alf.org/blog/2013-10-21-relations-as-first-cl....

However, I'm a noob at building languages ;)

I have learned a lot of stuff. For example, columnar stores provide some nice opportunities to manage data in memory and compress values in columns.

Sorting, joining, and selecting on multiple arrays, which are a necessity for OLAP queries, translate well to the needs of normal programming.

SQL is an overcomplication and a bad "programming" language. Unfortunately, it's the only practical way to interface with most databases.

If my plan works, it could be a good base for a relational store that lets you say: the name is a string, and I need to search on it. A normal engine would need to store it again in an index; I would say instead that the name column is the index, and the values are stored once.

Or something like that.

BTW: a relation has some symmetry with immutable functions, but I still don't know how to exploit this fact.


Also check out datalog.

Datomic is a database that uses it as its query language -- it is really nice to work with.


Datalog didn't "click" for me. It's weird to understand, and yet another uncommon programming model; I think the idea of building a relational language already exhausts my potential users' tolerance for paradigm switches. However, I wonder if it could be used as an internal engine, providing an efficient/easier backend for all the relational stuff?


It's almost 'free' if you know even just basic Prolog.

I learnt it before Prolog, and personally really like it, much more than I do SQL. It's a slightly different mindset - "what is a result" rather than "how do I go about finding a result" - but one that I found more intuitive in an Intro to DB course.
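To make the "what is a result" point concrete, here's a sketch of naive bottom-up Datalog evaluation for the classic ancestor example (real engines like Datomic's are far more sophisticated, with semi-naive evaluation and indexes; this is just the declarative idea):

```python
# The Datalog program, as rules:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
# You state what an ancestor *is*; the engine applies the rules to a
# fixpoint, so there's no loop or search strategy in the user's program.
parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

def ancestors(parent):
    ancestor = set(parent)                       # first rule
    while True:
        derived = {(x, z)
                   for (x, y) in parent
                   for (y2, z) in ancestor
                   if y == y2}                   # second rule (the join)
        if derived <= ancestor:
            return ancestor                      # fixpoint: nothing new
        ancestor |= derived

print(("alice", "dave") in ancestors(parent))  # True
```

Compare with SQL, where you'd reach for a recursive CTE and think much harder about *how* the rows get produced.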



I'm building a language for learning to program on smartphones. It's a stack-based language designed for interactive editing (since typing code on a smartphone is no fun).

It's in the very early stages at the moment.

https://github.com/alpha123/vio


Microsoft have done some great and totally ignored work with embedding visual languages into game engines for kids - http://research.microsoft.com/en-us/projects/kodu/ http://welcome.projectspark.com/

Wrt Code Lauren, you may also be interested in http://www.cs.cmu.edu/~NatProg/index.html - an HCI approach to tooling (the WhyLine is my favourite).

Dog hasn't released much info yet but it's an interesting concept - http://dl.acm.org/citation.cfm?id=2502026 https://www.media.mit.edu/research/groups/social-computing

Program synthesis is a really interesting area of research, e.g. http://research.microsoft.com/en-us/um/people/sumitg/

VPRI has already been mentioned but I'd like to highlight their work on Programming as Planning - http://www.vpri.org/pdf/m2009001_prog_as.pdf

From the last FPW, there were a couple of projects that really stood out for me: http://probcomp.csail.mit.edu/bayesdb/ http://2015.splashcon.org/event/fpw2015-an-end-user-programm...

Also not really a new language, but Sam Aaron's work on Sonic Pi is pedagogically interesting - http://sonic-pi.net/


Unseen

A Functional and Logical Dataflow language.

http://www.reddit.com/r/unseen_programming

Status: under development.

Functions are components, which can be used recursively. Arrows are logical relations between these functions. Inspired by Scala and VHDL.

The logic and flow deal with the control and time aspect.

Testing and commenting are integrated into the graphical system as different layers. All graphical structures can be converted to (reasonably) simple text, resembling Scala.


Looking at hardware trends on servers, in the next ten years I expect more pure functional programming on the CPU (implicitly concurrent, write-once data structures) and more data-parallel array operations being offloaded to GPUs. The CPU language of 2025 is probably something very much like Haskell with a cleaned-up base library, but I'm not sure what the GPU component will look like to the programmer.


It probably looks like APL. Data-oriented programming, with the only type being vectors/streams of data. Ideal parallelization to however many million CUDA cores just by distributing the workset into partitions.

Really, it's how GLSL and other shader languages work already, just hidden behind C-like syntax and lies.
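A sketch of that APL-ish style in plain Python (my hypothetical `lift` stands in for APL's implicit rank polymorphism): every output element depends only on the corresponding input elements, so a compiler is free to split the work across however many cores exist.

```python
# APL-style thinking: programs are compositions of whole-array operations.
# No loops appear in user code; the loop is a property of the operator.
def lift(op):
    """Turn a scalar operation into an elementwise array operation."""
    return lambda xs, ys: [op(x, y) for x, y in zip(xs, ys)]

add = lift(lambda a, b: a + b)
mul = lift(lambda a, b: a * b)

prices   = [10.0, 20.0, 30.0]
quantity = [3,    1,    2]
tax      = [0.5,  1.0,  1.5]

# total <- prices * quantity + tax   (one expression, no explicit iteration)
print(add(mul(prices, quantity), tax))  # [30.5, 21.0, 61.5]
```

Because each element is independent, the `zip` here could just as well be a partitioning of the arrays over a million CUDA cores, which is the comment's point.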


For a meditation on which aspects of our text-based programming tools derive from merits of the medium versus historical accident, see: http://westoncb.blogspot.com/2015/06/why-programming-languag... —there's also a second part (linked to within) that describes an alternate, general purpose architecture for programming tools that lets us stay powerfully text-centric while moving away from operating on character sequences under the hood.

I wrote this non-traditional program editor: https://youtu.be/tztmgCcZaM4?t=1m32s

And a new kind of visual debugger: https://www.youtube.com/watch?v=NvfMthDInwE


> * Eve by Kodowa: https://www.youtube.com/watch?v=VZQoAKJPbh8

This guy seems kinda young. Does anyone think that the future of programming can come from people without extensive experience of current programming?


I am. I am working on a new programming language designed for the next generations. The language re-imagines the role of the compiler: instead of a monolithic static black box that converts source to executable, it is an open, dynamic system that manages the language syntax and compilation process but leaves the final syntax and actual compilation to hot-pluggable libraries. Why? To make the language dynamically upgradable, on a per-project basis. This is the only way to make a language future-proof. More details: http://alusus.net http://alusus.net/overview P.S. The project is still at a very early stage, but a proof of concept is there.


Oh man, this is exactly what I've been thinking about lately (though I think it should be built on something like Scheme). The more research I do, the more it seems like a Hard Problem. In a nutshell, how do you handle inherently ambiguous composed grammars? Do you see your system being flexible enough to be able to essentially "behave like" arbitrary existing languages? It would be very easy to port things to it if this were the case.

Anyway, good luck!


It's not my goal to make the system able to parse arbitrary existing languages. The parser can deal with ambiguity by branching parsing states, but despite that it still won't be easy to parse inherently ambiguous grammars. And even if that were easy to do, I would still push against it, as it would conflict with the language's goal of unifying different programming areas under one language with consistent syntax. It wouldn't be a very useful language if your code were written like C++ in one place and like, say, Ruby or SQL in another. You need your code to be consistent and the language to be consistent as well; otherwise learning the language would be difficult, and understanding the fragmented user code would also be difficult. The main benefit of being able to parse existing languages is porting existing code, but that can be done using external conversion tools.


I read the links; your language looks interesting, but could you expand a bit on what makes it a new approach? User-programmable syntax is an old idea; for instance, it's been a core feature of Scheme for 30+ years.


My language is not only about user-defined syntax; it's also about user-defined code generation. For example, you can define your own syntax that compiles directly into a DB stored procedure, or into a GPU shader, or into whatever technology might be invented in the future, like quantum computing. In fact, it's less about user-defined syntax than it is about user-defined code generation, as I am trying to limit new syntax to certain patterns rather than leave it in the wild.


That's what macros are in Lisp and Scheme.


Does scheme give you access to the compiler's internal data structures? Can you for example write an extension that scans through the compiled code looking for, say, for loops and replacing them with something else?


Macro expansion happens just before compilation; you have the chance to alter the source code as it's being read by the reader just before compilation.

But I see what you're getting at; you mean global modifications to the compiler to interpret the whole language differently. A macro allows you to make syntax, but it doesn't change the meaning of other syntax elsewhere in the program; only allows the programmer to add syntax to the language.
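For comparison, some languages do expose the compiled tree for exactly that kind of global rewrite. A sketch in Python (3.9+ for `ast.unparse`; the transformer class is mine), scanning a whole program for `for ... in range(n)` loops and splicing in `while` loops:

```python
import ast

SRC = """
total = 0
for i in range(5):
    total += i
"""

class ForRangeToWhile(ast.NodeTransformer):
    """Scan the compiled tree for 'for VAR in range(N)' and rewrite each one."""
    def visit_For(self, node):
        self.generic_visit(node)           # handle nested loops first
        if not (isinstance(node.iter, ast.Call)
                and isinstance(node.iter.func, ast.Name)
                and node.iter.func.id == "range"
                and len(node.iter.args) == 1
                and isinstance(node.target, ast.Name)):
            return node                    # leave other for-loops alone
        var, limit = node.target.id, ast.unparse(node.iter.args[0])
        template = ast.parse(
            f"{var} = 0\n"
            f"while {var} < {limit}:\n"
            f"    {var} += 1\n"
        )
        template.body[1].body = node.body + template.body[1].body
        return template.body               # returning a list splices statements

tree = ast.fix_missing_locations(ForRangeToWhile().visit(ast.parse(SRC)))
print(ast.unparse(tree))
env = {}
exec(compile(tree, "<ast>", "exec"), env)
print(env["total"])  # 10
```

This is closer to a whole-program compiler pass than to a macro: it changes the meaning of syntax the original author wrote, rather than introducing new syntax.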


Macros generate assembly?


Macros generate code, any code you want, including inline assembly if the Scheme implementation supports it; if not, no.


And yet Scheme hasn't turned into something like that. There's a limit to how radical a syntax you can achieve using (define-syntax), and there's no provision for modularity. Worse still, syntax definitions are "imperative" - it's really hard to see what they do by looking at them.

I'd like to see a system where you have tiny, composable pieces of syntax written declaratively in a domain specific language that's as terse and general as possible - stuff like "a[n] -> (list-ref a n)" and "a..b -> (iota a b)". You want to make it as easy as possible to port language concepts to the system. Most of the wacky semantics you can get in other languages can be had in Scheme through some sort of library already, they're just usually kind of a pain in the ass to use without the syntax. Then making the language of your dreams is just a matter of enabling all the syntax you like. You'd need some kind of mechanism for handling grammar conflicts.

Of course, it's really hard. "Syntax" masks a lot of complexity. A feature I want, for instance, is to have the whole language behave like a CAS, where instead of bombing out on an undefined symbol it just computes as much as it can symbolically and hands it back to you e.g. "x+2+3" returns "x+5" if x isn't defined. That sort of thing has been done in Scheme of course, but it's hard to imagine a syntax specification language that would let you just turn that on as a single "feature" without massively affecting basic things like expression parsing grammar.
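That CAS-like "compute as much as you can" behavior is at least easy to prototype for a single operator. A toy sketch with a tuple-encoded AST (all names here are mine, and only `+` is handled):

```python
# Toy version of "x+2+3 returns x+5": fold what we can, keep the rest symbolic.
def simplify(expr, env):
    """expr is a number, a variable name, or a nested tuple ('+', a, b)."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env.get(expr, expr)          # unknown variable? stay symbolic
    op, a, b = expr
    a, b = simplify(a, env), simplify(b, env)
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return a + b                        # both known: constant-fold
    # fold constants across associativity: (x + 2) + 3  ->  x + 5
    if (isinstance(a, tuple) and isinstance(a[2], (int, float))
            and isinstance(b, (int, float))):
        return ('+', a[1], a[2] + b)
    return (op, a, b)

print(simplify(('+', ('+', 'x', 2), 3), {}))        # ('+', 'x', 5)
print(simplify(('+', ('+', 'x', 2), 3), {'x': 1}))  # 6
```

The hard part the comment identifies is real, though: making this a composable "feature" means every other piece of syntax has to tolerate symbolic values flowing through it.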


Everyone doing probabilistic programming. Can't really give a summary.

https://en.wikipedia.org/wiki/Probabilistic_programming_lang...
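The core move is the same across these languages: write an ordinary generative program, condition it on observed data, and read off a distribution. A toy version using rejection sampling (real PPLs use far smarter inference; the model here is my own example):

```python
import random

random.seed(0)

def model():
    """Generative program: pick a coin bias, flip the coin 10 times."""
    bias = random.random()
    flips = [random.random() < bias for _ in range(10)]
    return bias, flips

# Condition on an observation: we saw 9 heads out of 10.
# Rejection sampling: rerun the program, keep only runs matching the data.
posterior = []
while len(posterior) < 500:
    bias, flips = model()
    if sum(flips) == 9:
        posterior.append(bias)

estimate = sum(posterior) / len(posterior)
print(round(estimate, 2))  # close to 0.83, the mean of the Beta(10, 2) posterior
```

The appeal is that the "program" is just the forward simulation; the language supplies the inference.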

Formal verification using ACL2 or Coq or other tools also.



I think "visual" programming is the incorrect approach, something I would like to see more of is "tactile-spatial" programming. Anybody have an example of these? Most work I've seen is visual/flowchart which is not optimal for touch devices or large projects.


This is what I'd like to see too. Let's get away from tiny motions and sitting in a chair and make tools more like traditional wood/metalworking tools for programming: benches with robotic appendages that we can manipulate ASTs with. Let's use our kinesthetic and touch senses more.


I've been thinking about this a lot; the problem I keep coming up against is abstraction. If you do allow abstraction, then the temptation is to make everything doable with a pinky finger. If you don't allow abstraction...

On the other hand, we do have two good examples: emacs and paredit. We don't abstract away text because we haven't really figured out how (some in this thread are trying, but the exceptions prove the rule), and parentheses we just abstract into fewer parentheses.

The point being: the medium must remain the same for the pleasant feeling of muscle memory to occur.

One interesting point is programming paradigms; a paradigm is (among other things) a relatively constrained set of actions one takes. Were programming changed to a more physical activity, one could imagine watching a developer and discerning, "ah, the object-oriented style," much as the popular imagination has a martial artist watching another fight and discerning his style.

The last idea I'm having as I write this: I have also worried about "be careful what you wish for." Do we really want to repeat the same gross movements not as comfort demands, but as the code requires?

So one idea I have is: what if you tied certain operations/manipulations not to body movements, but to aspects of that movement. Short, staccato movements of any kind might correspond to indents or whatever: what's important is that one can do short staccato movements while walking, or golfing around with a pen, or reaching for a snack.

Weird stuff, just throwing it out there.


Like many people, I think about this now and then, but I haven't done any work in that area. Ultimately, I picture a live environment, but not like Blueprint in UE4.

I expect we'll end up with a predominantly interactive approach to programming, most likely visual drag-and-drop style programming, in a live environment that knows common patterns, data structures, algorithms; giving real-time advice, showing native code output as you go, etc. Basically, you're molding the system while monitoring it under various contexts, and you're programming against data models.

Got a new Arduino board? Just drag and drop the data sheet model into your environment. It contains memory address information, read/write contexts, access protocols, contexts, etc for every component, and how they're connected. Now you design the rest of the logic.

"A programming language is a user interface as much as it is any other thing. Why have multiple ones? They are all Turing equivalent."—Alan C. Kay (see the talk for the context of the quote; starts at around 23:25)

"Rethinking CS Education | Alan Kay, CrossRoads 2015." https://www.youtube.com/watch?v=N9c7_8Gp7gI

"Most computer people went into computing because they were disturbed by other human beings."—Alan C. Kay



I'm working on a new approach to programming in the direction of Bret Victor's Inventing on Principle (focusing on the environment; all open source).

Because we're prelaunch (and working hard on getting it to all of you!) I can't talk about it much yet, but we're funded and looking to work with great people.

Email me at cammarata.nick@gmail.com if you're interested in this space, would love to hear ideas and talk more.


Kayia was presented at the Future of Programming at Strange Loop a little over a year ago.

http://kayia.org


Viewpoints Research Institute, see e.g. http://vpri.org/html/writings.php which has papers like "Checks and Balances - Constraint Solving without Surprises in Object-Constraint Programming Languages" and Alessandro Warth's "Experimenting With Programming Languages" (which led to OMeta/JS, which is I think on GitHub), as well as a ton of Alan Kay talks on fundamental new computing technologies http://vpri.org/html/words_links/links_ifnct.htm .

The way that Datomic uses Datalog is really interesting from the perspective of a "new approach to programming" (databases).

Erik Demaine's course on advanced data structures gives some interesting ideas for time-travel-based games: https://courses.csail.mit.edu/6.851/spring14/ . His work also has application to other fields like creating an efficient in-app version control system http://www.cs.utexas.edu/~ecprice/papers/confluent_swat.pdf .

Lots of cool stuff on HaskellWiki; for example https://wiki.haskell.org/Functional_Reactive_Programming .

If you really want to jump into the deep end, there's a whole blog called Lambda the Ultimate about new approaches to programming: http://lambda-the-ultimate.org/


Program synthesis: see work by Ras Bodik's team at Berkeley (now UW) and descendants in Armando's MIT team and Sumit Gulwani's MSR team.

As concrete examples making industry waves, see Excel's new Flash Fill and Trifacta's ETL wrangling product.

Underneath, these use search (ML, SMT, ...) to allow non-traditional & sloppy coding: fill in the blanks, programming by demonstration, etc.
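The core idea can be sketched in a few lines: a toy programming-by-example engine (not any of the systems above; the tiny DSL of string operations here is invented for illustration) that enumerates candidate programs until one is consistent with every input/output example the user gives.

```javascript
// Toy programming-by-example: search a tiny DSL of string operations
// for a program consistent with every given (input, output) pair.
const OPS = {
  upper: s => s.toUpperCase(),
  lower: s => s.toLowerCase(),
  firstWord: s => s.split(' ')[0],
  lastWord: s => s.split(' ').slice(-1)[0],
};

// Enumerate all programs of length <= 2 (compositions of the ops above).
function* programs() {
  const names = Object.keys(OPS);
  for (const a of names) yield [a];
  for (const a of names) for (const b of names) yield [a, b];
}

// Apply the steps of a program to an input, left to right.
function run(program, input) {
  return program.reduce((s, op) => OPS[op](s), input);
}

// Return the first program that agrees with all examples, or null.
function synthesize(examples) {
  for (const p of programs()) {
    if (examples.every(([inp, out]) => run(p, inp) === out)) return p;
  }
  return null;
}

const prog = synthesize([
  ['Grace Hopper', 'GRACE'],
  ['Alan Kay', 'ALAN'],
]);
console.log(prog); // finds ["upper", "firstWord"]
```

Real synthesizers replace the brute-force loop with SMT solvers or learned guidance, but the contract is the same: examples in, program out.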


For more on program synthesis and verification see also the Leon system by Viktor Kuncak et al.[1]

[1] http://lara.epfl.ch/w/


From Scratch, you should check out Snap!: snap.berkeley.edu

John Maloney (co-inventor of Scratch), Jens Moenig (who was on the Scratch team and develops Snap!), and Yoshiki Oshima (who may also have been on the Scratch team) are developing a new language, "GP" (for General Purpose), which is like "professional Scratch".

Here's a video of it: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&c...

There are also lots of extensions to Scratch and Snap!, particularly around CS education, and I'd be happy to discuss those!


STEPS by Alan Kay (building a full OS, including applications, in 20K LOC): http://blog.regehr.org/archives/663

Spiral http://www.spiral.net/index.html and spiralgen - going straight from math to code optimized for multiple platforms: http://www.spiralgen.com/

Automatically repairing bugs using genetic algorithms: http://dijkstra.cs.virginia.edu/genprog/

Automatic bug repair (2015): http://news.mit.edu/2015/automatic-code-bug-repair-0629


https://treeline.io/

YC company run by Mike McNeil. Visual backend programming. Built off Node, Sails and Machine Spec. The latter is particularly interesting:

http://node-machine.org/


@rl3 Thanks for the mention :)

@maryrosecook I'm glad this post showed up in our Slack feed - really cool topic! As I'm sure you've noticed from the volume, variety, and quality of the comments in this thread, improving approachability/usability for software development is an exciting (albeit broad) effort, and there is a lot of amazing work underway. In the area of declarative programming specifically, there's a lot still to do, but as a community we're getting really close. That said, there is far too much work to do here for any one team to do it by themselves.

To be successful, or at least to expedite the process of democratizing software development, I believe we need to establish a common set of semantics for describing _what software is_. And the first step towards that is deconstructing functions-- or perhaps more accurately: "subroutines".

The rest of the Sails.js core team and I have been working on this problem for quite some time now, primarily as a reaction to having to build plugin system after plugin system in Sails as the framework's user base grew, and getting frustrated with every plugin system being a little bit different. After we had our first working prototype of the tech that would become Treeline, we extrapolated and open-sourced the machine specification and related utilities as a way of exposing building blocks for other devs to reuse for their own projects.

To add a little background, machines are stateless building blocks; nothing fancy. They're more or less equivalent to asynchronous functions in JavaScript, just highly specified. In particular, machines encapsulate definitions of their inputs, their exits, and whether they are "cacheable", just "idempotent", or neither (and therefore have side effects). Machines are oftentimes implemented as JavaScript functions-- although part of the project's philosophy is that what's important is the _usage_, not the implementation.

From the perspective of a traditional "intro to programming", an example of a machine is `moveTurtleDown()`: https://gist.github.com/mikermcneil/864b930a67b0d00d3d8b

As a more practical example, here's the documentation for a handful of machines related to string manipulation: http://node-machine.org/machinepack-strings (To try any of them out, just copy and paste the generated sample code into e.g. a Tonic sandbox: https://tonicdev.com/npm/machinepack-strings)

Each of a machine's expected inputs may be required or optional (with a potential default value) and declare a type schema. Each of its exits may optionally declare a return value, and if so, declare a type schema for it. In addition, machines have strict conventions for meta properties (e.g. the machine `description`, which summarizes the machine's behavior when run, is ~80 characters or less of English, written in the imperative mood and in sentence case with ending punctuation).

This can be used to generate documentation, code for interactive form elements, or in the case of the default machine runner, runtime type validations.
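To make the shape concrete, here's a minimal self-contained sketch of the idea (the property names and the tiny runner below are hypothetical illustrations, not the actual machine spec or its runner):

```javascript
// A hypothetical "machine": a plain object whose inputs, exits, and
// side-effect profile are declared up front as data.
const capitalize = {
  description: 'Capitalize the first letter of a string.',
  sideEffects: 'cacheable', // same inputs always yield the same output
  inputs: {
    string: { type: 'string', required: true }
  },
  exits: {
    success: { outputType: 'string' }
  },
  fn: function (inputs, exits) {
    return exits.success(
      inputs.string.charAt(0).toUpperCase() + inputs.string.slice(1)
    );
  }
};

// Minimal runner: enforces the declared contract (required inputs and
// their types) before invoking the implementation.
function runMachine(machine, inputValues, handlers) {
  for (const [name, def] of Object.entries(machine.inputs)) {
    const value = inputValues[name];
    if (def.required && value === undefined) {
      throw new Error(`Missing required input: ${name}`);
    }
    if (value !== undefined && typeof value !== def.type) {
      throw new Error(`Input ${name} must be a ${def.type}`);
    }
  }
  const exits = {};
  for (const exitName of Object.keys(machine.exits)) {
    exits[exitName] = handlers[exitName];
  }
  return machine.fn(inputValues, exits);
}

const result = runMachine(capitalize, { string: 'hello' }, {
  success: (out) => out
});
console.log(result); // "Hello"
```

The point is that because the contract is declared as data, a runner, a docs generator, or a visual editor can all consume the same definition.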

But most important of all, once a piece of code is written to the machine spec and open-sourced, it can be used by any tool or platform which consumes that spec (think USB).

As an example, in Treeline we use machines as draggable/droppable components that you can compose into what we call a "circuit"; which can _itself_ be used as the implementation of another "composite" machine with its own inputs and exits (vs. an "analog" machine; i.e. a machine implemented with JavaScript code). We are planning to publish the circuit spec and related tooling for composite machines later this year. In the mean time, the analog machine spec is stable and ready to use. Around 500 MIT-licensed machines are available today on NPM, with generated documentation hosted on the Node Machine registry.

Hopefully that was a helpful first look- if I can help explain further or if you'd like to discuss the project roadmap, just hit me up on Twitter. It's really awesome to see so much new interest and progress in this tech, and particularly in the context of education. Excited to see where you go with CodeLauren!


Just checked out node-machine.org and the JsDot presentation there. Many parts of it have parallels to things I'm playing around with[1]. What are the reasons for not going with established concepts for the machines? Like contracts for specifying pre-conditions (input requirements) and post-conditions? Or promises for error vs. success scenarios?

1. https://github.com/jonnor/projects/blob/master/introspectabl..., maybe especially Agree


I'm doing several myself at an abstract level given lack of time or resources. I feed what I learn in them to pro's for potentially seeing it built. Here's a few:

1. Modula-2-style systems language with macros, better typing, and typed assembler to replace C with minimal hit on performance or ease of use.

2. Relatively-simple, imperative language plus translators to equivalent functionality in mainstream languages to leverage their static analysis or testing tools if possible.

3. Unified architecture to natively represent and integrate both imperative (C or Modula style) and functional (Ocaml or Haskell style) languages. Extensions for safety/security at hardware level for both.

4. Hardware/software development system using Scheme or abstract, state machines that can turn a high-level description into software, hardware, or integrated combo of both.

5. Defining TCB's for type systems, OS architectures, etc that minimize what must be trusted (mathematically verified) to ensure invariants apply while maximizing expressiveness. crash-safe.org and CertiKOS are more concrete, useful, and professional versions of what I'm getting at with 5.

6. Building on 5, a series of consistent ways of expressing one program that capture sequential, concurrent, failure, integration, and covert channel analysis in a clean way for formal analysis. As in, an all-in-one high assurance toolkit that automates the common stuff a la static analysis or reusable, composable proofs. And makes hard stuff easier for pro's.

7. Occasional thoughts on automatic programming via heuristics, patterns, rules, human-guided transforms, and so on. Got me started in advanced SW and my mind occasionally goes back to possibilities for achieving it.

8. A way to do any of that with the rapid iteration, safety, performance, live updating, and debugging properties of Genera LISP machine. I mean, we still don't have all that in mainstream tooling? ;)


I am working on a new computer language called Beads, which is designed to replace the current development stack for personal computers, mobile devices, and the web. You can find out more about this project at e-dejong.com

The focus of my tool is creating graphical interactive software: iPhone and Android apps, desktop apps, and things that run in the browser. The notation is compact, readable, and straightforward. It has many innovative aspects: new data structures, physical units (something FORTRAN just got after 30 years in the in-basket), and deductive reasoning, which dramatically reduces the length of programs. It is not a design-by-graphics system, but a language. It isn't that abstract, and is far more straightforward than Haskell or PROLOG. It is not a LISP derivative.


There are many different projects, I feel however that Idris and Scala are the places where innovation should be done. Although there are ideas in Subtext, etc, they should be implemented as libraries in Idris or Scala to gain the maximum usage from developers.


Jordan Pollack's group is interesting (genetic algorithms, etc.): http://www.cs.brandeis.edu/~pollack/ The web page is hilariously archaic as a bonus.




I used to do that back when I was investigating and toying with AI. Had a whole book dedicated to all the ways one could apply it. One use case was to send an agent over our slow, expensive connections to where the data was to do work for a price and bring just the results back. Since then, our connections and machines have gotten fast. Yet, HN posts show the concept lives on in cloud services doing datamining and stuff for a new reason: pulling a lot of data out of the cloud costs a fortune vs pulling just the results of on-premises analysis.

Agent-oriented programming lives on today in a new form. Just dawned on me as I saw your post. :)


Weirdly, I'm not as interested in the AI aspect or the mobile-code aspect[1]. I'm much more interested in it in an organization-and-modularization-of-systems sense. I think it has an obvious code-parallelism aspect and a communication aspect for business users.

1) well, in the traditional Telescript sense. I am a bit interested for load balancing and redundancy.


That morphed into application and OS containers. Basically. They're way better than anything agent-oriented programming had back in the day. Also more versatile. That's why you don't hear about them for that much anymore except fringe academia.

Might have been different if Cyc or OpenMind had achieved anything. They could be the reference point for autonomous, mobile agents using knowledge-based programming. Best just to create more high-level languages, good libraries, and ways to package them up. It's not just good enough: it's more predictable and reliable than agent or expert systems even on their intended use-cases. Funny how that worked out, eh?


> That morphed into application and OS containers. Basically. They're way better than anything agent-oriented programming had back in the day. Also more versatile. That's why you don't hear about them for that much anymore except fringe academia.

The applications and stuff that go into containers have to be written in some language, and I think an agent-oriented language might be more suitable for large programs than what we currently have. I really don't think they are better, just more in line with what we have now.

As I said, I'm not really interested in the expert system aspect (AI). I'm looking at this as a language issue to build big systems. I think containers are not a language answer but a coping mechanism for current languages and practices. I've been doing research on a different path. I hope others are looking beyond what we have now and what paths history didn't take because C and UNIX won.


Most of what it takes to do that were just some interpreters and function calls. Originally established in distributed computing like Amoeba, MPP systems, and Obliq in agent-oriented systems. Might help if you think on it looking at distributed, agent, and reactive languages to see what language aspects you think would make it easier. Then you'll be able to convey it better.

To me, it's just a VM (or source), ability to capture state, and one or more function calls. Any language could do this.

Obliq just in case you didn't know about it:

https://en.wikipedia.org/wiki/Obliq


Relational XPath Map (rxm) provides a syntactically terse domain-specific language (DSL) to query relational databases and generate structured documents.

https://bitbucket.org/djarvis/rxm/

https://bitbucket.org/djarvis/rxm/wiki/Discussion

Currently generates SQL code from a DSL, which can be run against a database to produce an XML document.
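To illustrate the general shape of such a tool (the path syntax and column conventions below are invented for illustration; see the rxm wiki for the actual DSL), here's a toy compiler from a terse path expression to SQL:

```javascript
// Hypothetical compiler from a terse path DSL to SQL. "customer/invoice"
// means: each customer element contains its invoices.
function compile(path) {
  const [parent, child] = path.split('/');
  // One row per parent/child pair; an outer layer would group the child
  // rows under their parent to form nested XML elements.
  return (
    `SELECT ${parent}.id AS ${parent}_id, ${child}.id AS ${child}_id\n` +
    `FROM ${parent}\n` +
    `JOIN ${child} ON ${child}.${parent}_id = ${parent}.id\n` +
    `ORDER BY ${parent}.id`
  );
}

const sql = compile('customer/invoice');
console.log(sql);
```

The appeal of this style is that the query and the document shape are described once, in one terse expression, instead of hand-writing both SQL and serialization code.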


I really love Jonathan Edwards's chart showing the big stack of technologies one has to master to build apps today. The goal is to replace that entire stack with one language. That would be the 10:1 improvement that would really make a difference. The burning question, therefore, is: what can replace that entire stack? Clearly you have to offer a way of storing and structuring data that improves upon the table; otherwise you are back in relational-database hell.


I'm also interested in new ways to minimize coding & debugging.

I'm working on the Animation CPU platform and ACPUL, a declarative algorithmic language, for the same purposes.

https://www.youtube.com/watch?v=ubNfWarTawI

http://acpul.org/

Also LiveComment information tool:

http://acpul.org/livecomment


Alan Kay’s Viewpoints Research Institute: http://vpri.org/ (with which Bret Victor is associated).


Probabilistic programming might interest you.

e.g. http://arxiv.org/abs/1507.00996
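For a taste of what probabilistic programming feels like, here's a tiny self-contained sketch (the model and its probabilities are invented for illustration): write a generative model as ordinary code, then infer by rejection sampling, conditioning on an observation.

```javascript
// Generative model: it sometimes rains, the sprinkler sometimes runs,
// and either one makes the grass wet.
function model() {
  const rain = Math.random() < 0.2;
  const sprinkler = Math.random() < 0.1;
  const wet = rain || sprinkler;
  return { rain, wet };
}

// Rejection sampling: keep only samples consistent with the evidence,
// then estimate the probability of the query among those kept.
function infer(condition, query, samples = 100000) {
  let kept = 0, hits = 0;
  for (let i = 0; i < samples; i++) {
    const s = model();
    if (!condition(s)) continue; // reject samples contradicting evidence
    kept++;
    if (query(s)) hits++;
  }
  return hits / kept;
}

// P(rain | grass is wet) = 0.2 / (1 - 0.8 * 0.9) ≈ 0.71
const p = infer(s => s.wet, s => s.rain);
console.log(p.toFixed(2));
```

Real probabilistic languages swap the brute-force sampler for smarter inference (MCMC, variational methods), but the programming model, a generative program plus conditioning, is the same.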


I've been toying around with the idea of SPL, Species Programming Language. See Brent Yorgey's thesis. Like APL, functions would be able to match where they bind within a data structure. Instead of just multidimensional arrays it would handle all combinatorial species; think lists, trees, graphs, cycles, ...

Still haven't found a syntax I like yet. Yorgey and Classen both have nice Haskell libraries as a springboard.


Wow, these comments mention lots of work I wasn't aware of! We've been building a community of people working in this area: the Future Programming Workshop. http://www.future-programming.org/ We will all do better if we get together to exchange ideas and criticism. Suggestions for improving FPW are welcome.


Why not

- Have a summary of what future-programming.org is about on the homepage?

- Help to complete this table: https://docs.google.com/spreadsheets/d/1JJ14iwi-UU4Zw2dc9XHY... (Use the "comment only" dropdown button to request edit access)

- Create a Wiki organizing the information from this thread?


Personally, I think in the next 30 years we will be programming with the same languages popular today. They will evolve to handle multicore better, though, with tasks, async, coroutines, actors, etc. Before too long we will have a mix of thousands of cores: GPU, CPU, remote cores. We will have to figure out how to spread our programs across all of them. We'll all be doing supercomputing.


Before looking ahead, it's well-worth looking back to see how far we've come. Things have changed a lot over the last 30 years, and I'll be very disappointed if they don't change much over the next 30. In 1986, C++, Java, C#, Perl, Python, and JavaScript didn't exist. The web was a small project unheard of outside of CERN. Much of computing was done in COBOL on mainframes. Garbage collected languages weren't used much. Things haven't progressed uniformly, and in some areas have actually regressed (I miss Lisp Machines), but mainstream computing is well ahead of where it was then.

I also don't think you can simply graft supercomputing onto mainstream (i.e. Algol-descended) languages and expect to keep all your cores equally busy. You might get away with some SIMD parallelism, but MIMD parallelism, and quantum computing, require completely different approaches, and completely different programming mindsets.


I'm working on a live programming plugin for Visual Studio that supports C#: http://comealive.io

Previously we'd developed a code exploration tool: http://codeconnect.io


It might be worth checking out ethereum.org and the work they are doing there on their blockchain, as well as serpent / solidity programming languages. Also, check out ipfs. The distributed computation and storage model has broad applicability, and a good lens from which to view the world.


http://www.clafer.org/ Lightweight modeling

http://categoricaldata.net/fql.html FQL Functorial Query Languages


Intentional Programming (https://en.wikipedia.org/wiki/Intentional_programming), started by Charles Simonyi


Chris Granger seems to be doing some awesome work with 'Eve': https://www.youtube.com/watch?v=5V1ynVyud4M


I spent a day looking at these back in August. Summary is here:

https://gist.github.com/clumma/58253f3692e4c1c28087



CDG (Communications Design Group) at SAP is where Bret Victor and a lot of other people work on research. I think a big focus is changing how we program in the future.


Social Machines by Mark Stahl ... https://github.com/socialmachines


I am working on FlowGrid (Visual dataflow on Android): http://flowgrid.org


Wolfram Mathematica is a pretty old project; has it changed much over the last five years?


The Wolfram Language has been promoted a lot recently. They're moving from a computer algebra system to a more general-purpose programming environment for scientific applications. There are neat new things like easier GUI features.


This may be a strange way of looking at it, but let's backtrack 30 years and see what made the biggest differences. As I've been in the industry for about thirty years, my impression is that nothing much has changed. That seems strange, but it's still the same job that I started with.

For me it is interesting that in my career, code base sizes grew to gigantic proportions -- there are many applications that are 10s of millions of lines of code. In the middle of my career I worked on a few. Interestingly, I'm doing web development now and a lot of my colleagues think that 5000 lines is unbearably big. I think the take-away here is that we have gotten slightly better at abstracting things out and using development libraries (and dare I say, frameworks).

OO was just becoming a big thing at the start of my career. Everybody did it incredibly badly. Then Kent Beck and Ward Cunningham came along and told people how to do it not-so-badly. I think the biggest thing that I saw in this time frame was the breaking of the myths of OO being about code re-use, and the movement away from huge brittle design structures. Good OO developers moved back to the really basic ideals of dealing with coupling and cohesion in design. We even started to have a language to be able to discuss design intelligently. Of course, quite a huge number of people were oblivious to this, but it always struck me how amazing it was that Beck and Cunningham were really 15 years ahead of most of the rest of us.

Lately, functional programming is coming into vogue. For the second time in my career I was surprised. People in the know are talking about idempotence, and immutable structures. This was the stuff that the crazy people were talking about in the 80's -- stuff that was "too slow", and "too memory intensive" to take seriously. But now it's pretty obvious this is the way to go.

I think the other big thing that blew me away in the last 30 years was testing. Probably some people will remember pre-conditions, post-conditions, and class invariants. This was unfortunately forgotten by most, but the most astonishing thing was unit testing. Especially the practice of Test Driven Development that not only allowed you to document your code with executable examples, but also forced you to decouple your objects/modules by the very behaviour that creates the documentation. Very few people do this well (just like most of the other things I've mentioned), but it is completely game changing.

As for the future, what is coming up? I suggest you look at what has gone before you for hints to that. In the last 30 years, apart from TDD (which came completely out of the blue as far as I could tell), the major advancements came from stuff we already knew about. It was the stuff that the "crazy" people advocated in the 70's and 80's, but that seemed impractical. If I were to guess, I suspect that we will see further progress on decoupling in design. Immutable data structures will not just be common, they will be how every professional designs code. As performance moves from single processing to multi-processing, this will be important. Look at innovative approaches like flow-based programming and other crazy ideas from bygone years.

My last piece of advice: Don't look for large revolutionary changes. I think those are unlikely. The programmer of 30 years from now will probably still be doing the same job as I am today. The changes will be much more qualitative. Also, expect that the vast majority of programmers will be as oblivious to the advancements as most programmers are today.


Thoughts on Julia?


IMHO machine learning will automate coding to such an extent that all but a few of us will be unemployed.


This is a really broad question, about on par with asking "which fashion houses are putting out daring material and what will Dior be making that's popular 30 years from now". Software is just like any other cargo-cult industry where trends rise and fall almost like clockwork. From Rails, to Angular, to React.

RE: People/Groups who are researching 'new approaches to programming' - you have the typical universities putting out papers. Conferences like POPL and ICFP tend to be where most of the major academic work gets put out. From within the industry, commercial entities aren't really doing much, bar Microsoft Research, Cambridge (UK, not MA). They're really pushing the envelope with regard to strict PL research; see www.rise4fun.com for the dozens of projects they're putting out. Oracle, too, is surprisingly doing some interesting work.

30 years is a hard guess, but in 5 years you'll certainly see: 1) a lot more emphasis on concurrency; at 14nm we're rapidly approaching the physical limits of transistor density (which is why you're seeing 18-core Xeons). Sharing memory is notoriously hard, hence the move towards immutability (whether it's pass-by-const-reference in 'traditional' languages like C++ or more esoteric languages like Haskell, and whether via the actor model, STM, etc., that's the direction it's going in). 2) Cheap remote memory, especially with Intel's Knights Landing. RDMA has been around for ages, but bringing it to Xeon means the common man doesn't have to pay InfiniBand prices for HBAs. RAM has been getting cheaper, but imagine being able to deck out a 42U rack filled to the brim with ~18 2Us of half a TB of DDR4 RDIMM apiece that your main application can access. 3) Disks, which used to be a huge thrashing bottleneck (who here is old enough to remember thrashing on GCC 2.95 when trying to compile E17?), are now posting amazing numbers, even for SSDs.

Effectively every level of computing that used to be a barrier (minus SRAM and the CPU caches which seem to have been stuck at the same spot for a while in terms of capacity) has, or will within 6 months be consumer accessible. I couldn't guess what's going to happen in 5 years. I can't even guess what's going to happen in 5 months and I've been at this nearly 20 years.


I think you may have lost the forest for the trees. For as broad of a question as this, the different web frameworks you listed are all just that - web frameworks. That is just one area of programming, and the differences between each framework are trivial in a multi-decade view of things.

What is more interesting is your points towards hardware limitations, and that seems like more of an area in which such a question would have meaning. I tend to think less about what we will code, and more about where that code will run, and how low/high level it will be -- Will we still be coding to browsers in 30 years? Will firmware still be a black box for most people, or will your average joe start to play in that realm? Will coding be a specialty, or will it be as pervasive as knowing how to read and write? If everyone on the planet could at least code to a basic level, and all devices had at least a minimal API to configure, what will that even look like? Will Legos have embedded code, so 4 year olds can assemble blocks and do robotics instead of just static toys?

I have no idea on any of those answers... but those are examples of the kinds of questions we should be looking at, not whether React or Angular will have more longevity, because frankly, in 30 years, very little of what we are doing today is going to have any relevance.


One interesting prophecy I read recently is that the emergence and widespread availability of NVRAM (non-volatile RAM) could shift the bottleneck in most web applications from the disk to the CPU. One implication of this is that the relative inefficiency of current high-level web programming languages could see a shift toward more efficient languages (e.g. Rust?).


> From Rails, to Angular, to React.

That's where you lost me. ;)

(It's sort of like a physicist claiming force, mass and momentum to be the hottest new research areas for the coming 30 years...)


> force, mass and momentum

sounds like you think pretty highly of Rails, Angular and React.


Don't forget the ether! I hear the ether is posed to make a comeback, big time!!


He might mean MVC by Rails which is a widely used paradigm in Web Development (Django, Laravel, etc.) And two-way bindings for a frontend framework as brought along by Angular or the concept of using a Virtual DOM to improve DOM performance as pioneered by React.

There are maybe tons more but each of these did have a massive impact in their respective communities.


In hardware, a bit of the opposite is happening: big companies are paying Intel, AMD, Cavium, etc more money for semi-custom SOC's that combine multi-core CPU's and HW accelerators. There's also more uptake of FPGA hardware, which necessitates new programming paradigms and tools for SW types. And nobody working at this level uses Rails or whatever lol. They actually want to use the hardware they buy.


Ramsey Nasser is developing a language written entirely in Arabic. http://nas.sr/%D9%82%D9%84%D8%A8/


My research is on hierarchies of composable domain-specific languages (see github, account 'combinatorylogic').


If you want people to go there, provide a link https://github.com/combinatorylogic?tab=repositories #internetageattentionspan


I do not want to spam with links to my projects (and to get banned for this), but sometimes it is relevant to mention them, as in this thread.



