Ask HN: Have you created a programming language and why?
218 points by jcoffland on Mar 7, 2017 | 237 comments
Excluding school projects and toy languages, who here has created a programming language and why?

I'll go first. 13 years ago I created a specialized language for processing XML data. XmlPl marries the syntax of C, XML and XPath. It is very fast and efficient. It never caught on. See http://xmlpl.org.

Yes, CoffeeScript, in 2009.

Despite being a modest, fun experiment in stripping JavaScript down to a minimal skin, and without any corporate backing, it wound up catching on a little bit.

Thanks to the hard work of Geoffrey Booth and Simon Lydell, among others, there's a new "V2" version that includes many ES6 and 7 features that's almost ready to go: http://coffeescript.org/v2/

CoffeeScript was designed extremely conservatively. The goal was to stick close to JavaScript semantics, to avoid having a runtime, and to avoid features that would require gymnastics and overcomplications in the output, or would run noticeably slower than the raw JS equivalent.

These days, I'm most interested in trying something from the other end of the "political" spectrum: What would happen if you tried to make a web language that wanted to strip HTML, CSS and JavaScript down to their essence — but were extremely liberal in the techniques you used to get there?

I'd like to get a chance to give that a try some day.

Just wanted to say thank you for CoffeeScript. Despite its somewhat ambiguous syntax it really was a better version of JavaScript, and it would be fair to say it really mainstreamed the idea of JS as a target language. Many of the ideas in CS made it all the way to ES6. Many other ideas didn't make it to ES6 because CS fleshed out issues that quite possibly would have been missed otherwise.

It was a huge net positive for web development. 2009-2011 were really heady times - a whole programming community emerged from a deep, dark cave with new tools and a new optimism for the web.

Ditto. CoffeeScript made JS almost Ruby-level fun for me. I've now switched mainly to TypeScript, which is also great (so many fewer runtime errors!). But I still miss thin arrow functions, array comprehensions, the existential operator ... What I really want these days is CoffeeScript on top of TypeScript.

Same. Idiomatic Coffeescript was so much fun it gave me a borderline obsession with brevity. Array comprehensions also largely removed the need for lodash/underscore.

Give me an optionally typed, ES6-flavored Coffeescript and I'll be a happy developer.

Agreed, the brevity's the thing. As I felt moved to write in 2014: http://blog.mackerron.com/2014/10/29/great-coffeescript/

And, as you say, array comprehensions largely remove the need for each/map/filter etc, which also has potential for a nice performance boost.

I've never written CoffeeScript, but I agree that it had a significant impact on the evolution of the ECMAScript standard. Some simple examples: arrow functions, lexical scoping (const, let), and `this` binding.

> What would happen if you tried to make a web language that wanted to strip HTML, CSS and JavaScript down to their essence — but were extremely liberal in the techniques you used to get there?

So, extremely liberal, you said... :-) Thinking about this I started to wonder about other systems producing a similar output to the trio html/css/js. I realized PostScript is such a system, but for fixed size documents. People actually explored using PostScript for GUI applications in the past, as described in [1]. At some point I'd like to explore [2] and [3] to see if some of the ideas could be modernized and adapted for the web.

1: https://en.wikipedia.org/wiki/PostScript#Use_as_a_display_sy... 2: https://en.wikipedia.org/wiki/Display_PostScript 3: https://en.wikipedia.org/wiki/NeWS

What are your personal opinions on the future of Coffeescript? It's still being used for Atom, but I've noticed colleagues are finally getting used to ES6 and similar languages and switching to them. I'm still stuck on Coffeescript though for all of my frontend projects. It's too closely aligned to my utopia of a language to try to find anything else (which would surely have a smaller following anyway).

My personal opinion is much as it has been from day one:

    > I'm not about to start using it for real projects, 
    > it's really just a thought experiment about how nice 
    > JavaScript could be with an alternative syntax.
(Source: https://news.ycombinator.com/item?id=1014225)

I've used it a bit for fun, but never for work.

And I think that the future of JavaScript is probably always going to be JavaScript — until a web language comes along that's so compelling it's just obvious that it will be the right direction, and then the question won't even have to be asked.

CoffeeScript, as a concept, was mostly finished back in the 2010-2011 days, as a layer on top of ES3.

CoffeeScript 2, while a great step forward in terms of practicality for folks who want to use it with ES6 features like classes and modules — is also a step backwards for the notion of CoffeeScript: For example, we don't have to formalize classes anymore because JavaScript has them now, and unfortunately JavaScript has done them in a very static way that's not at all the direction CS would have taken. Ditto modules.

As an open source project, it will continue to work just as well as it always did. But I definitely don't foresee a return to general relevance, ever.

For what it's worth, I have been working on serious production coding projects using CoffeeScript full time for the last 4 years! Moreover, I like it a lot and have no plans to change to another language anytime soon. So thanks!

Thanks everyone for this chain. I started up a new Rails project a few months ago and decided to stick with ES6 for the Javascript instead of Coffeescript. Part of what bothered me about Coffeescript is how difficult it seemed to be to find any recent activity of any type about it, plus how much it seems to have diverged from the feature set of ES6.

1. In 2007 I started Cython (http://cython.org/). We were using Pyrex (by Greg Ewing) for Sage (http://sagemath.org), and kept having to add new support and features. Pyrex was amazing, but Greg seemed to work on it about once a year (?) by himself, with no revision control, and his vision for the direction of Pyrex was somewhat limited. We needed something that could compile "99.9%" of Python, e.g., including list comprehensions, nested functions (closures), etc. I had added a bunch of things to Pyrex (in a fork), and a student of mine -- Robert Bradshaw (now at Google) -- had added a lot more. So I made up a good name ("Cython", for which the only Google search result was a picture of a guy with a mohawk flipping the bird), made a nice website, and asked each of Robert and Stefan Behnel to be the lead developer. Both said "NO", so I made them both co-lead developers, and amazingly that worked :-). A huge amount has happened with Cython since then (many new devs, a book, etc.), and Cython is now very popular for writing fast compiled extensions to Python for scientific computing applications. Happily I have done almost no further work on the Cython compiler, which is not the sort of work I like doing (that's why Sage uses Python, unlike almost every other math software package).

2. I wrote the Sage preparser, which adds a bunch of math-friendly syntax/language extensions to Python, in order to make it more suitable for interactive use for math. I realized this was needed while giving an early demo of Sage at PyCon in 2005. Otherwise, users of the competitor products to Sage would be much less likely to consider switching. This seems to have worked pretty well, and fortunately I think we haven't added anything to the preparser in years.

Yes, about 10 years ago for a company that was in the financial risk management space.

We were going for a MATLAB-like language, but R and Octave weren't deemed ready. For those of you who are already objecting in your heads, I'll grant you that I'm not sure we made a good choice here, but this was the early 2000s. :(

What we were after was a vectorized language that would let you write your quant formulas without having to write loops.

For instance, if you wanted to run say Black Scholes over multiple expiry dates and multiple underlying prices you could pass in a vector of dates and vector of strikes to your model and it would return a grid of results for the permutations of values.
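For illustration only (the original in-house language was never public, so all names here are invented), the calling convention described above can be sketched in Python: a scalar spot, a vector of strikes, and a vector of expiries go in, and a grid of Black-Scholes call prices for every combination comes out.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function (math.erf is stdlib).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, t, r, sigma):
    # Black-Scholes price of a European call option.
    d1 = (math.log(spot / strike) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return spot * norm_cdf(d1) - strike * math.exp(-r * t) * norm_cdf(d2)

def bs_grid(spot, strikes, expiries, r, sigma):
    # "Vectorized" call: one row per expiry, one column per strike,
    # so the user never writes the loops themselves.
    return [[bs_call(spot, k, t, r, sigma) for k in strikes] for t in expiries]

grid = bs_grid(spot=100.0, strikes=[90.0, 100.0, 110.0],
               expiries=[0.25, 0.5, 1.0], r=0.01, sigma=0.2)
```

A real vectorized language would push these loops down into a compiled (or, as described below, distributed) backend; the point is only the vector-in, grid-out shape of the API.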

Since this is pretty processor intensive we also had language features for splitting your calculations over multiple machines in your datacenter.

All of this has since been commoditized, but when we started most of it wasn't available.

The company ended up being the calculation engine for risk metrics like VaR for several large banks before being acquired by a large financial company, so in that sense it was a success.

The downside of writing our own language was that quants weren't thrilled about having to learn a new language that wasn't a really transferable skill, as quants tend to move around from company to company a lot.

So the fact that we wrote a language ended up being a barrier to adoption. It also meant that we did a lot of consulting with the banks to get the initial models set up, which was good for providing income, but bad in the sense that our good quants soon became our clients' good quants.

How close did you get to re-implementing APL?

I'm currently making a programming language that tries not to be a programming language.

To give some background: I'm working on a program to allow anyone to create video games. But to add custom functionality, my users need to use some kind of programming/scripting language. Since these are non-technical people, it must be as easy and simple as possible.

Therefore I'm now making a language (called Screenplay) where you don't really have to learn anything. You click 'add action' and a wizard guides you through making your action. You select the subject, then the verb (or action), and then some parameters. Translated to OO: you select the object, method and parameters.

I considered using a visual programming paradigm, but still text gives you the most freedom in expression. It's also easier to create new concepts in text than making a new image that makes sense.

In the long run I want my Screenplay language to resemble OO, where users can define their own objects with methods. It kind of looks like this (remember it's game related):

  Hero walks to Professor
  Professor says "Hi there, how are you?"
  if Hero chooses "Fine"
      Professor says "Nice to hear that."
  else if Hero chooses "Not good"
      Professor says "I thought so, let me show you something"
      Professor walks 2 tiles to the left
Methods could be defined as:

  How <actor> says "<text>"
      GUI opens text dialog with "<text>" and image 'face image' of <actor>.
I want anyone who can read, but not program, to read such a program and kind of make sense out of it.
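As a purely illustrative sketch (none of this is Screenplay's actual implementation; the class and sentence patterns are made up), the subject/verb/parameters structure described above maps quite directly onto object/method/arguments dispatch:

```python
import re

class Actor:
    def __init__(self, name):
        self.name = name

    def says(self, text):
        # 'How <actor> says "<text>"' -- here we just render the dialogue.
        return f'{self.name}: "{text}"'

    def walks_to(self, target):
        return f"{self.name} walks to {target.name}"

def run_line(line, scene):
    # Two toy sentence patterns; a real system would build this structure
    # through the wizard rather than by parsing free text.
    m = re.match(r'(\w+) says "(.*)"$', line)
    if m:
        return scene[m.group(1)].says(m.group(2))
    m = re.match(r'(\w+) walks to (\w+)$', line)
    if m:
        return scene[m.group(1)].walks_to(scene[m.group(2)])
    raise ValueError(f"Could not read: {line!r}")

scene = {"Hero": Actor("Hero"), "Professor": Actor("Professor")}
out = [run_line(l, scene) for l in [
    'Hero walks to Professor',
    'Professor says "Hi there, how are you?"',
]]
```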

I don't think the marketing concept of non-technical programming makes sense. It's like trying to sell atonal, arrhythmic music, or randomization as an interior design strategy. You've already embedded learning in the design: the user has to understand it's subject-verb, not verb-subject. Also, my experience with OO design is that non-technical, non-logical people will have truly weird arguments trying to convince you that your objects should be methods and vice versa, due to a lack of design skill.

By analogy, someone who knows how can always convert a PDA to a CFG; it's mathematically proven that if you know how, you can always do the transform, or even write a program (compiler?) to do the transform for you in either direction. But if you don't know how it works, or what any of that automata theory jargon means, then you just can't; that's it, game over. Likewise, your end users can either program at some level, where they can convert culture and emotions and brain waves and other stuff into code, or they can't. And if they can't, well, they just can't.

I'd suggest you market it as a really simple programming language. That'll work. What you've got technically looks good and sounds like it could be useful, so, cool, but you just need to change how you're selling it. Realize that people who can't logically structure, rationally think, and technically design will continue to be unable to program, of course, no matter how easy the tool is to use.

Remember most people are functionally illiterate and can't read at a high cognitive level. That probably correlates very strongly with being unable to program BTW.

(Edited to add: this was the wisdom of the GUI. Plenty of people are too illiterate and non-technical to ever be able to use languages, including even English prose, as a computer UI. But if you strip out most of the functionality and change it from a linguistic skill to 2-D spatial manipulation, lots more people can "use" that UI. Nobody ever got noobs to use computers via ever-"simpler" and dumber CLIs; that madness leads to CP/M and MS-DOS. The trick was to get less capable users onto computers by taking away power/complexity and abandoning language and literacy entirely, by going spatial with GUIs. I'm just saying that, historically, making the tool weaker has never helped people who can't use tools.)

Inform was a huge inspiration for me! But there are some things I didn't like and so try to improve:

- It comes with a manual. You shouldn't need a manual for something simple.

- Too much prose; I like more structural things. If Inform is a book, my Screenplay is technical documentation.

Inform is great for what it does, but I guess I needed something slightly different.

Don't forget: computers have no true intelligence, and thus require humans to put things into some form of structure in order for the computer to understand it. The type of structure you can get away with using for simple tasks can often look beautiful, but beyond a certain point, you need to impose distinction and isolation so you can compartmentalize separation of concerns and so forth. This structure is the foundation of programming language design.

The major issue with all of this is that program code is built through a process of iteration: one brick on top of another. But once you hit the wall I just described, you have to face the scientifically organized structure of programming in order for the language itself to not be a mess (see PHP) or the programs written in it to be spaghetti (see BASIC and particularly GOTO).

I say this not to be a downer, just as strong encouragement to take this into account and try your best to factor this into your design. If you can make something that lets people scale really high before hitting that wall (if at all!), that would be awesome.

> You shouldn't need a manual for something simple.

That's like saying "Everyone knows how to drive a car, you just keep it pointing forward and engage the engine".

Everything needs some sort of documentation, whether it be through examples (the way love2d.org/wiki is done, for example, is both informative and useful), reference documentation, a BNF grammar, etc. The fact that a manual exists is not a detriment but a success -- you have enough features and complexity to be worth documenting (although I don't mean to imply that bigger is better :) ).

I worked on a similar problem! (But much, much narrower and less ambitious.) It's designed for games like dating sims. It takes a spec, written mostly as dialogue, and converts it to gamemaker code. I wrote it to help some friends avoid tedious manual translation, which apparently saved them a lot of time.

Love inform7, thanks!

The problem is that there is still a syntax that's required to know. Look at AppleScript. It uses keywords that sound like easy English but knowing how to use AppleScript still takes a lot of reading.

After reading the article about Rust, I have been thinking about how a compiler could take source code and write it back with fixes/suggestions right into the source, so if you make a silly mistake, the compiler can just say "I think you meant..." and offer to fix it for you.

In your language, it would be beneficial to avoid "Syntax error!" and implement something like "Did you mean...", or have synonyms like "says", "exclaims", "talks", etc that compile to the same thing but add flexibility (or the compiler automatically re-writes to the preferred syntax)
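A hedged sketch of both ideas in Python (the verb table is invented for illustration): several synonyms normalize to one canonical action, and an unknown verb gets a "Did you mean...?" suggestion based on string similarity, rather than a bare syntax error.

```python
import difflib

# Hypothetical verb table: natural synonyms all compile to one action.
SYNONYMS = {
    "says": "say", "exclaims": "say", "talks": "say",
    "walks": "walk", "strolls": "walk",
}

def resolve_verb(word):
    if word in SYNONYMS:
        return SYNONYMS[word]
    # Instead of "Syntax error!", suggest the closest known verb.
    close = difflib.get_close_matches(word, SYNONYMS, n=1)
    if close:
        raise ValueError(f"Unknown verb {word!r}. Did you mean {close[0]!r}?")
    raise ValueError(f"Unknown verb {word!r}.")

canonical = resolve_verb("exclaims")  # same canonical action as "says"
```

The same trick could also rewrite the source to the preferred spelling, as suggested above for the compiler-that-edits-your-code idea.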

This sounds very similar to Clickteam's Klik & Play [0]. Here's a screenshot of its event editor: http://media.moddb.com/images/engines/1/1/29/krnl386_003.png

[0] https://en.wikipedia.org/wiki/Clickteam

The example reminds me a bit of inkle's ink interactive fiction scripting language: https://github.com/inkle/ink

This is the same set of design principles that was behind COBOL, as I understand them.

It is probably fine for the purpose of simple gaming scripts.

You should check out Alice, developed at cmu. It was a programming environment designed to build interactive 3D stories.

How does this compare to Unreal Engine's visual programming language?

This doesn't seem all that different from the various scripts inside RPGMaker or, for that matter, the scripting language that I remember from the custom map makers in Starcraft and Warcraft 3.

I created the Bloomberg Equity Screening (EQS) query language. It was mostly backward compatible with a previous generation, but added a lot of new things like chained conditionals (Python has these, most languages do not). It is meant to be familiar to users of the Excel formula language.

Most users do not want to write their queries by hand, so I also built round tripping of the language to/from a GUI query builder. You can build a query graphically, edit it as text, then go back to the graphical form. Certain things cannot be displayed however, and generating good error messages was very difficult.

A weird fact is that the backend which evaluates the queries has a totally different language of its own, so there is a transpiler for that.

Thousands of people have used this language, but 99% of them did not know it.

I’ve been working on a statically typed concatenative programming language, Kitten[1], off and on for a few years now. I plan to do an early release this year.

I started working on the language to bring an elegant compositional style to low-level high-performance programming, in the form of a simple language that admits powerful tooling for program editing and visualisation. I also wanted to address usability concerns with stack-based languages (e.g., by allowing infix operators). As I’ve spent time in industry, I’ve begun to treat it as a way to address my gripes with C, C++, and Haskell; and as I’ve learned more type theory, I’ve begun to use it as a playground for type system research.

The goal is to allow pure functional programming that “feels imperative”, with a simple performance model (unboxed values by default, in-place updates, no laziness) and minimal runtime support (no GC, eventually no libc).

The latest compiler is nearly complete, but not yet usable for real programming. Still, you may like to keep an eye on its development.

[1]: https://github.com/evincarofautumn/kitten

Oh, it's good to see you. I'd like to say that I'm "following" (mostly limited to looking at the kittenlang.org every couple of months, but still) your project and I'm quite interested in it.

After learning about concatenative languages a couple of years ago I started learning Forth and later Factor. I wanted to give Joy a try, but IIRC I couldn't make it run and, after that, I stumbled upon Cat. But Cat was written for .NET and unmaintained (I just checked and it looks like even cat-language.com is dead now...). Still, I thought the idea was very interesting, at that point I was kind of tired of Forth strictly untyped nature. And then finally I found Kitten, which I thought was very promising, however very incomplete at the time. This is how I ended with Kitten being on my list[1] of languages to keep an eye on.

I hope it goes well and you'll release as planned, I can't wait to start playing with it! :)

[1] Here: https://klibert.pl/articles/programming_langs.html#org3087a1...

Thanks for taking an interest. :)

I guess it’s still in a “promising but very incomplete” state. Lately I have a pretty clear sense of how I want the language to be, and people seem pretty excited about it. It’s just a matter of finding the energy to do the work…

Also, follow the repo if you want updates. The website hasn’t been updated in like 2 years, and probably won’t be until I need to prepare for a release.

I created a language called MoonScript in 2011.

It is heavily inspired by CoffeeScript (and Python) and works as a transpiler to Lua. I've since used it regularly to code dozens of projects.

It's now powering my company along with a handful of websites and supporting libraries. I've also made games and GUI apps with it. The adoption has been pretty minimal, but I'm satisfied regardless. It has definitely improved my productivity.

http://moonscript.org https://github.com/leafo/moonscript

leafo! This made the thread 10x better, for me at least :)

Since he doesn't want to mention his company, it's called itch.io. It's a game marketplace -- kind of like Steam -- but with a ton of pretty interesting ideas when it comes to advertising, pricing, etc.

The really cool thing is that the site is written entirely in MoonScript AND uses the Lapis web framework, which he also wrote entirely in MoonScript.

Anyways, if you're ever looking for a place to host and/or sell your game, itch.io is definitely the place to go.

MoonScript is extremely cool. What are your thoughts on MoonScript in the browser? I know you have an online "Try MoonScript", but what would be the best way to use it in a real (hobby) project with DOM manipulation capabilities?

Please don't give up on MoonScript :)

MoonScript is designed to compile to Lua, and there are no good ways to run Lua in the browser right now. It's impossible to write a Lua -> JS transpiler that matches all of Lua's functionality (coroutines in particular), so the only option is to run the entire Lua VM in the browser (the MoonScript online version uses this approach, via Emscripten).

In the future I'd like to be able to run MoonScript in the browser; maybe WebAssembly is the answer.

Another option would be to have a JS target for the MoonScript compiler, but there are some issues with different language features that would make code sharing difficult (1 indexed arrays, metatables, etc.)

I have no plans to give up MoonScript. I haven't been actively developing it recently though, which I feel guilty about.

Long ago I found a copy of "System Design from Provably Correct Constructs" at the Seattle public library. I didn't realize it at the time, but I now know that it is basically a presentation of Dr. Margaret Hamilton's Higher Order Software ideas. In essence it's a thin AST that is only modified by operations that preserve certain kinds of correctness (i.e., type safety), with a tiny core of essential combinators that are combined (again, in "provably correct" ways) to form control-flow constructs. I was struck by the essential simplicity and spent time on and off trying to get an implementation working. (It never amounted to much, but it DID get me my first job as a programmer!)

Eventually, I found Manfred von Thun's Joy language and realized that it was better than my thing and now I've implemented that in Python in Continuation-Passing Style. https://github.com/calroc/joypy

I think Joy is pretty amazing. It's a Functional Programming language that superficially resembles Forth, in that it's stack-based, but it's much more like a mathematical notation. https://en.wikipedia.org/wiki/Joy_(programming_language)

It's called Universal Systems Language. One of the founders of software engineering also was a pioneer in high-assurance toolkits. A NASA engineer's review showed the tool to have serious usability and performance issues. However, she and her team should get credit for solving the hard problems with a tool that at least worked for some people. It's also written in itself.


Edit: The book you mention is on Amazon for $6 now. I'll buy a copy for historical inspiration if its contents go into detail with examples of applying her method to at least toy problems.

It's like it went into its own bubble universe. Cheers.

As for the book, I don't know if I can recommend it. I've only ever seen a copy one other time, to check if it mentions Hamilton (it does, in some end notes) and I can't say it lived up to my memories of the first exposure. I can't even say how much of it is actually original to J. Martin and how much is strictly Dr. Hamilton's.

commenting for future reference.

Your browser has 'ctrl-d' and if you click the 'item' link (the timestamp) then you can 'favorite' that link within HN. That way you don't need to clutter up threads with comments like these.

I created a non-Turing complete programming language for editing animations, implemented as a GIMP Script-Fu script (Scheme, basically). Here's the documentation: http://tshatrov.github.io/animstack

Why? Because in GIMP it's annoying to edit animations with many layers, so I wanted to write a script that would do what I want. But rather than hardcoding a specific action, I wrote it in an extensible way, and the rest is history.

In general, I think non-Turing complete (always terminating) languages are super useful and should be used more often.

> In general, I think non-Turing complete (always terminating) languages are super useful and should be used more often.

Yup, always-terminating functions are also called total functions (if they're defined for every input), which is a related concept to total functional programming.


While Idris is Turing-complete, for example, it's possible to mark functions and whole modules as total and let the compiler prove it for you. I wish more people knew that we have such tools available nowadays.

Nice one, I wish there was a Haskell extension for that! (Can't risk switching existing/pending code-bases over to Idris just for that feature though)

I remember reading about two projects of Gabriel Gonzalez that might be interesting in this regard.

Dhall, a total configuration language.


There are Morte and Annah as well, which look like earlier experiments with total languages in Haskell.


Deep down I believe GIMP's ergonomics are a plot to push people to write Scheme DSLs.

Non-Turing complete languages are useful, but it has been my impression that it is difficult to be powerful enough to be useful without being (accidentally!) Turing complete.

Well, always-terminating implies non-Turing-complete, so that's a way to guarantee it.

The proof is easy: let f(n) be the nth program in your toy language in lexicographic order, and define g(n) := f(n)(n) + 1. This g is clearly Turing-computable (you can modify an interpreter to compute it), yet it cannot be in your language (a la Cantor's diagonal argument).
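Spelled out, this is the standard diagonalization:

```latex
% f(0), f(1), f(2), \ldots enumerates all programs of the language,
% each computing a total function on the naturals.
g(n) \;:=\; f(n)(n) + 1
% g is computable: run an interpreter on the n-th program and add 1.
% But for every k:\quad g(k) = f(k)(k) + 1 \neq f(k)(k),
% so g differs from f(k) on input k, hence g equals no program in the
% language -- a total language always misses some computable function.
```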

Actually, non-Turing-complete languages are very useful and can be quite powerful.

Consider for example TensorFlow and/or Theano. Before they added the loop and conditional ops, they were not Turing-complete, and you could do amazing things with them even so.

In fact, I took a lesson from that when working on Gorgonia (https://github.com/chewxy/gorgonia) - I don't ever want it to be a Turing Complete language.

As a Computer Scientist, I actually find the distinction Turing and non-Turing to be rather obscure. Perhaps I am in the ignorant minority on this.

A Turing-complete programming language just means that you can compute anything that can be computed using that language. In other words, it can be used to write arbitrary computations. You don't need much to achieve this; off the top of my head, I think that being able to read and write to storage, basic addition, conditionals and loops are enough. (Formally, it means that your language can be used to simulate a single-taped Turing machine.)

A non-Turing-complete language is any language which lacks one of those capabilities. Because the bar is so low, most languages you have used in your life are Turing-complete. For example, you can imagine a simple calculator-like language for just writing mathematical expressions. It can do all the basic math operations and has conditionals, but no ability to repeat things (no loops or recursion). That language is not Turing-complete. You can trivially prove that programs in such a language will finish - they can't not! There's no ability to impede progress.
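To make that concrete, here is a minimal sketch (in Python, purely illustrative) of such a calculator-like language: numbers, arithmetic, and a conditional over finite expression trees. Evaluation does a bounded amount of work per node, so every program provably terminates.

```python
# A toy non-Turing-complete language: arithmetic plus a conditional,
# but no loops and no recursion *in the object language*. A program is
# a finite expression tree, so evaluation always terminates.

def evaluate(expr):
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    if op == "+":
        return evaluate(args[0]) + evaluate(args[1])
    if op == "*":
        return evaluate(args[0]) * evaluate(args[1])
    if op == "if":
        # (if cond then else): a conditional alone can't repeat work.
        return evaluate(args[1]) if evaluate(args[0]) != 0 else evaluate(args[2])
    raise ValueError(f"unknown operator: {op}")

# (if (+ 1 -1) 10 (* 6 7)): the condition is 0, so the else branch runs.
result = evaluate(("if", ("+", 1, -1), 10, ("*", 6, 7)))
```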

You can actually have a Turing-complete instruction set with only one instruction. I am not sure about the practicality, but here is a Wikipedia article to read: https://en.wikipedia.org/wiki/One_instruction_set_computer

Isn't it just possessing both conditionals and loops?

One more thing - memory.

A Turing Machine is literally just a tape, a movable reader/writer, and a bunch of states that dictate what the reader/writer does.

As long as you have memory, the ability to write to memory, conditionals, and some form of looping, whether it's a `while` loop, a `for` loop, or a `jmp` or `GOTO` instruction, it's Turing complete.

Note that a conditional jump (`jne`, for example) satisfies both of the latter.

"Turing Complete" refers to a system which can be configured to be come any Turing Machine. When we write a particular program in some language and run it on some input, we have a Turing Machine. If the language lets us create any Turing Machine (i.e. do any Turing computation), then it is a Universal Turing Machine, and Turing Complete.

Turing's Tape Machine is a UTM because the tape can be initialized with contents to turn it into any TM. Any calculation for which there is a TM can be done by the UTM (if someone can figure out what to put on the tape).

yup memory. yup.

I forgot...

"Conditionals, loops, and memory" is an interesting list of requirements, because "memory" can be abstracted into anything. Functional Languages rely on the fact that the same code shall always produce the same outcome, and thus the "tape" has been changed into a series of concrete blocks. That said, I don't think anyone would argue that functional languages are not TC, but it is interesting to consider that the Tape becomes code in Lisp...

Not necessarily. Purely functional languages, like Lisp, do not have loops. Instead they have recursion. Any iterative algorithm (i.e., one with loops) can provably be converted to a recursive algorithm without iteration.

Lisp has loops:

For instance, the do operator:

  (do ((i 0 (+ i 1))) ((> i 10)) (print i))
Or the loop macro:

  (loop for x below 5 and y in '(a b c d e)
        collecting (list x y))
The 1965 manual for Lisp 1.5 describes the prog construct, which persists into ANSI CL. Inside prog we can have labeled statements to which we can branch unconditionally with go in any direction. Lisp also has mutable variables. The following example from the Lisp 1.5 Programmer's Manual is still valid code today:

    (PROG (B)
  S   (SETQ B A)
      (COND ((NULL B) (RETURN C)))
      (SETQ C (CONS (CAR A) C))
      (GO S))

While another commenter has mentioned that this is technically not correct with Lisp, it holds for the lambda calculus. You don't need loops to be TC if you have recursion. And for that you don't even need named functions-- note the URL of this site!

However, lambda calculus doesn't have code made of data, QUOTE or any of that. Lambda calculus isn't Lisp; it captures the gist of some of the evaluation semantics of Lisp only.

Lisp is more multi-paradigm than pure functional. And we like it that way. It can be functional, object oriented, aspect oriented, etc.

That they are always terminating is very useful for programs that generate proofs, or otherwise don't have to be evaluated.

But if you can encode the Ackermann function, proof of termination is not very useful in practice, as it may consume arbitrarily large amounts of space and time to evaluate (easily more than the age/size of the universe).
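For the curious, the function in question is tiny to write down but explodes immediately; a quick Python sketch:

```python
def ackermann(m, n):
    # Total (always terminates) but not primitive recursive:
    # its growth outpaces any fixed tower of exponentials.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

assert ackermann(2, 3) == 9
assert ackermann(3, 3) == 61
# ackermann(4, 2) already has 19,729 decimal digits.
```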

Although I am not the creator of it, I am a frequent contributor to Pyret [0] (I'm the error message czar, and prototyped the language support for tabular data).

Beyond that: Writing an interpreter is typically the first thing I do when I am trying to pick up a new language.

To anyone interested in learning how to design and implement a programming language, I highly recommend the 2015 offering of Brown's "Programming Languages" [1]. Implementing type inference was a revelation for me! The starter code is supplied in Pyret, but the material isn't language-dependent; the assignments can be implemented in any language.

[0] http://www.pyret.org/ [1] http://cs.brown.edu/courses/cs173/2015/assignments.html

Type inference is exactly what I need for my new language! Thank you!

I made ChoiceScript, a DSL for writing interactive novels in the style of choose-a-path gamebooks, but longer, richer, and deeper. https://www.choiceofgames.com/make-your-own-games/choicescri...

The goal was to build a language simple enough that we could teach it to non-coder authors, to scale up the number of interactive novels we could publish.

It seems to be working. ChoiceScript is the core of our business; we're profitable with a staff of four editors, a production assistant, plus me, the main tech person. We've found that ChoiceScript makes it easier to teach writers to code than it is to teach coders how to write.

Out of curiosity, what's your opinion of Squiffy?

I haven't used Squiffy myself. There are a lot of tools out there to let you build interactive fiction, either parser-based games where you type your actions, or choice-based games where you choose what you want to do from a list. For many people learning to code, building a tool like this is literally their second program, right after "hello world." Building a full-blown "Choose Your Own Adventure" is one of the tutorial projects in "JavaScript for Kids for Dummies."

The most popular type of choice-based IF is hypertext IF, and the most popular tool for building hypertext IF is Twine. Squiffy is most directly a Twine competitor, but almost all hypertext IF is developed in Twine; nobody's using Squiffy.

That may not be Squiffy's fault. Generally speaking, IF tools rise above the noise when someone writes a Great Game in that tool, regardless of whether the tool is strictly better or worse than anybody else's tool. Then people who admire the game say, "I want to make a game like that! How did you make it?" and that gets the ball rolling.

Since Twine is more popular than Squiffy, I'd advocate that anybody starting today use Twine. (And so the rich get richer.)

We don't use Twine or Squiffy at Choice of Games, partly because Twine isn't very well suited to writing really large interactive novels (our minimum is 100,000 words of code+text), but also because hypertext UI isn't that easy to use on mobile devices; it can be hard to accurately tap on links in the text, as opposed to tapping big buttons at the end of the text. (You can use buttons in place of links in Twine, but you start fighting the tool pretty quickly.)

I made NewtonScript, starting in 1992. (https://en.wikipedia.org/wiki/NewtonScript)

The other viable choice was to have application developers write everything for the Newton in C or C++. By giving them a much higher-level framework (Scheme/Self-like language + UI views/interaction + indexed object store) we were able to get more functionality into the product/ecosystem faster and more reliably. At least I think so. Also, we didn't have much RAM to play with, and were able to do some nice memory-saving tricks underneath.

Is there anything in NewtonScript you miss in current languages, or wish had been more widely adopted?

Well, JavaScript was pretty darn close to NewtonScript, except for the bizarre way arrays work and the weird semi-implicit prototype inheritance. Since JavaScript has taken over the world, I wish that stuff was cleaner. :)

And speaking of prototype inheritance, I still think that makes a lot of sense in some domains (like UI programming) but hasn't gotten much traction.

Beyond that, having an integrated object store (as opposed to a filesystem) is an idea from the 90's that really should get more mainstream attention.

Respect!

I bought a Newton in 2007 or so, just to be able to play with NewtonScript.

I made a programming language called Earl Grey a couple years ago: http://www.earl-grey.io/

It compiles to JavaScript, but it has a Pythony syntax, macros and fairly advanced pattern matching. I wrote an essay about the why and the experience: http://breuleux.net/blog/my-own-language.html

I think it's a pretty nice language, but I'm biased. I use it for my personal projects (including my own markup language, Quaint: http://breuleux.github.io/quaint/), but it didn't really catch on for anything else.

Wow, that is really neat. Dealing with JavaScript is one of the reasons I try to avoid front end work. Python, however, has wonderful syntax. I am bookmarking that page for future use. Thanks.

I created D because I could implement new ideas without waiting years.

What do you mean?

To improve C++, I'd need to write proposal papers to the committee, mount a campaign to get it accepted, attend all the committee meetings, and wait years for the next standard to get through the process.

With D, I could implement it and ship it in a few days or weeks.

Of course, I could (and did) add features to the C++ compiler I developed. But I soon discovered that nobody was interested in using features that were not part of the standard - even the people who proposed the features would not use them.

Over time, many of the improvements put in D found their way years later into the C++ standard.

Of course, D having a much larger community these days means that improvements take longer from conception to ship. More community means more process.

D is a successor to (you've guessed it) 'C'.

If you've ever used Zortech C++ then you've used Walter's product.

When I was in college, I was addicted to Starcraft II. Luckily, I primarily played 4v4s so the brainpower necessary to win wasn't nearly as high as 1v1s (which left far less room to meme, so I didn't enjoy them nearly as much).

However, while playing game after game in the evenings, I was always frustrated that I was using the inputs and outputs of my body so inefficiently. Starcraft tied up my eyes and my hands, but I still had time to think about the things I'd be coding if I weren't playing, and wished wholeheartedly that I had a few extra hands so I could play AND code at the same time.

This frustration crept into other activities like the drive to and from classes (and, even more frustratingly so, the long drive to and from home when I visited) and I eventually came up with a plan to let me code while multitasking with things that require my hands (primarily starcraft and driving).

I had taken the traditional compilers class the previous year so I had some experience with lexing and grammars, and set out to make my own voice-first programming language. I called it Bespoke (ha) and uploaded a proof of concept to GitHub[1][2] after it was featureful enough to solve a couple Project Euler problems.

I decided to use Javascript in the beginning so I could take advantage of webkit's native speech to text and get right into the logic without having to deal with processing sound or extracting words from it. After getting a proof of concept up, I expanded the flow control and data structures it supported (piggybacking on JS) and eventually wrote some fun (albeit simple) programs while gaming, but eventually gave up on the project when I realized I could code things up in a similar amount of time by just outlining code structure while gaming (also using STT) and coding up the necessary snippets in brief downtimes (between games, at stoplights, etc).

Now, I still pretty much do the same: I use a prose-like DSL that's evolved over the years to outline what to code, in what order, and how to tie it in to other pieces. I now take the train/bus instead of driving (and code at will), and outline what to code while I'm dead in Dota 2 games.

[1] https://github.com/drusepth/voice2code/blob/master/voice2cod... [2] https://gist.github.com/drusepth/3134188

The first time I read about voice programming (in Heinlein's 'Number of the Beast', the flying car they use is voice programmable[1]) I considered how you'd implement it and figured what they do in the book was too cumbersome - essentially natural language. What they did do, though, was create shortcut commands for sequences of commands, then shortcuts based on shortcuts... and this to me suggested a Forth-like language might be a better fit.

I think it's still the case: C-like languages have too much grammar going on to deal with without visuals; Forth is grammar-free. Also, it's naturally shell-like; instead of saying 'run the code', just say the function you need to execute; you can test and redefine each function as you go.

I attempted to build this on a ZX Spectrum in 1983, based on some voice recognition code from a magazine...needless to say I didn't get too far with that hardware. I should have another go.

[1] Sample:

    "Program. L axis add speed vector three point
    six klicks per second. Paraphrase acknowledge."
    "Increase forward speed three and six tenths
    kilometers per second."
    "Chief Pilot?"
    "Execute." Deety glanced at the board. "Gay Deceiver,
    H-above-G will soon stop decreasing, then increase
    very slowly. In about fifty minutes it will maximize.
    Program. When H-above-G is maximum, alert me."
    "Roger Wilco."
    "If-when one hundred klicks H-above-G, alert me."
    "Roger Wilco."
    "If-when air drag exceeds zero, alert me."
    "Roger Wilco."
    "Remain in piloting mode. Ignore voices including
    program code words until you are called by your
    full name. Acknowledge by reporting your full name."
    "`Gay Deceiver,'" answered Gay Deceiver.

This is really interesting. How are you able to multitask so well? I can hardly listen to a podcast and code at the same time, let alone play a game.

As the other response mentioned, at that point gaming was pretty habitual (and way more "thoughtless" than when I first started). I'd start up a game with a general plan and adjust mildly as needed, but in 4v4s it's basically 10-15 minutes of rushing an economy (which is pretty rote after a while) into massing up your army (which is pretty much just reflex APM of "select all barracks, queue up another marine/marauder in each; select all factories, queue up a few tanks/thors; select all starports, queue up a few medivacs/ravens") and then filling any empty space with random drops and harassment. I think there's a _lot_ more thought that goes into 1v1s since you're in a two-sided equilibrium and can't afford to start losing much, but in 4v4s there's so much room for comebacks and throwing (and/or teams working together or not) that it doesn't matter nearly as much.

I couldn't listen to a podcast and code at the same time either, probably. If I'm listening to a podcast, I'm mentally trying to process it. If I'm coding, I'm mentally trying to plan/process what I've done. Any mental processing from one takes away from the other.

Podcasts, though, don't require any use of your hands. I actually feel guilty _just_ listening to podcasts (or similar) unless I'm also doing something like painting or paperwork that requires hands but not brainpower. I recognize it's kind of irrational, but it just feels like a waste of time and potential when I'm (probably) not going to live forever.

I think it's just a matter of recognizing what your potential inputs (seeing, hearing, touch, processing) and outputs (hands, voice, mental "caches" like outlines that you can context switch back into later) are and choosing activities to multitask that take advantage of the maximum amount of each without overlapping.

When something becomes habitual, it frees up the cognitive load for you to think about other things. I assume the parent has played so much StarCraft that many of the routine aspects of the game, such as bootstrapping a base, farming resources, and building a military, have become ingrained. Especially in 4v4s, the risk of an enemy trying to pull an early cheese is so low that the beginning of the game is basically almost always the same.

When I was playing Diablo II, I used to be able to complete the Secret Cow Level runs with a nova sorc almost by rote. I would often be thinking about other things, chatting with friends, calling into internet radio shows while doing this.

This is really cool. You should think about doing a startup around this, especially given the new tech ecosystem with AR/VR and good voice recognition.

I would love to be able to code, using a headset+phone combo, while wandering around in a park.

I have created a graphical data flow language for Android. The idea is that typing would be lame on a touch screen and I should do something about it... 3 years young according to github, hasn't really caught on "yet" O:)

For details, see http://flowgrid.org.

Play store link: https://play.google.com/store/apps/details?id=org.flowgrid

P.S. If you want to create your own language, consider using my expression parser: https://github.com/stefanhaustein/expressionparser

P.P.S. https://github.com/stefanhaustein/typo

I designed a standard, a reference implementation in Ruby, and an unfinished self-compiled implementation for my unreleased language "Cub", which compiles to C and looks a bit Go-like while still preserving the full functionality that C offers, such as pointers, the stdlib, and binding to C libraries without glue code. Here's a short example. Let me know if this is interesting at all.

    include "stdio.h"

    class Widget {
        x int
        y int

        func init() {
            @x = 0
            @y = 0

        func draw() {}

    class Button extends Widget {
        func draw() {
            printf("Drawing button!\n")

    func main() int {
        w *Widget = new(Button)

I created a spec called MOML, a markdown-friendly alternative to YAML. It uses plain markdown but allows you to section off parts of the markdown file in order to organize data in a sensible fashion. It has support for arrays; so far it only has support for 3 data types: small text, text, and array (of small text or text).

I'm slowly adding features to it like typing, object nesting, etc.

I created it for easier blogging. I'm not a fan of YAML because it's indent-based and I wanted to have a markdown file that hosts all the information about a post on it from the date, to snippet, to main text, title, metadata, hell even a sidebar ad, and author, etc.

I did end up creating a parser for it that I couple with a blogging engine.

The current compiler transforms the MOML file into a JS object.
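The exact MOML syntax isn't spelled out above, but for flavor, here's a hypothetical sketch (my invention, in Python rather than JS) of the general idea: carving a markdown file into named sections and returning an object:

```python
import re

def parse_sections(markdown):
    # Hypothetical sketch: split a markdown file on top-level
    # headings and return {section_name: body_text}. The real
    # MOML format may differ; this only shows the general shape.
    sections = {}
    current = None
    for line in markdown.splitlines():
        m = re.match(r"#\s+(.*)", line)
        if m:
            current = m.group(1).strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {k: "\n".join(v).strip() for k, v in sections.items()}

post = parse_sections("# title\nMy Post\n\n# date\n2017-03-07\n")
assert post["title"] == "My Post"
assert post["date"] == "2017-03-07"
```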

I've been interested in creating something similar for a while, do you have a site for this yet?

Yes, every time I do something in Forth... :)

The same thing technically applies to any extensible language (e.g. Lisp), but a Forth programmer goes in with the expectation that the first task will be to create an application-specific language. Since you have to pretty much create everything, you are best off doing that in an efficient way.

I love creating programming languages! They're powerful tools of abstraction - designed well they make complex concepts look simple.

Couple of examples - a language that compiles to Bitcoin script opcodes [0]. Although the Bitcoin script engine is stack based (easier to follow) I couldn't resist designing a small, simple language that could be used to write transaction output scripts. This way it's easy to understand what are the conditions of moving funds to the next owner.

Another language has first-class functions, operators as functions, optional lazy computations, but more importantly a small runtime that supports tail-calls and capturing execution as a value (callcc) [1].

I've also written parser and interpreter for Prolog [2], just to get the feeling of logic programming.

Writing a small language can make you understand the paradigm (functional, imperative etc.) better, and it takes a great deal of effort to decide how the syntax should look and how the runtime will work (usually with toy languages you provide the runtime too...).

Edit: Just noticed the "except toy languages" part... :-/

[0]: https://curiosity-driven.org/bitcoin-contracts

[1]: https://curiosity-driven.org/continuations

[2]: https://curiosity-driven.org/prolog-interpreter

I've created three instances of what I'd consider to be programming languages.

The first was something I did in an intro to CS class. I asked the professor if I could use C++ rather than Pascal for the assignment, and was, of course, turned down. I'm not proud to admit to this, but in a fit of pique, I decided to write my own programming language in Pascal and then submit the assignment in that. The result was something called 'SeqTl', for Sequenced Testing Language. It was basically a stack oriented language somewhat analogous to Forth. There were two stacks, one for numbers and the other for strings, and the interpreter worked by continually parsing strings of tokens. (Think Tcl pre-8). (I did get an A, the professor said I should've gotten bonus points, but in retrospect he would've been within his rights to fail me outright.)

The second was about five years later, while working on industrial process control firmware. My company wanted to allow our customers to specify control algorithms that were able to be guaranteed to run within a given time bound. To achieve this, I wrote a simplified language with a C-like syntax. It lacked constructs like unbounded loops, and the compiler would run an analysis of the control flow graph and throw an error if the longest path exceeded a given projected execution time. This language also used value/status tuples rather than raw scalar values, so it could do things like propagate bad data information through a calculation. (Think NaN propagation through floating point math, but more expressive.)
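The longest-path check described above is straightforward once unbounded loops are banned, because the control flow graph is then a DAG. A rough Python sketch of the idea (my reconstruction, not the original firmware tooling):

```python
def worst_case_time(graph, cost, entry, budget):
    # graph: node -> list of successor basic blocks (acyclic,
    # which holds once unbounded loops are disallowed).
    # cost: node -> projected execution time of that block.
    # Returns the longest-path time from entry, raising an error
    # if it exceeds the budget -- a sketch of the analysis above.
    memo = {}

    def longest(node):
        if node not in memo:
            succs = graph.get(node, [])
            memo[node] = cost[node] + (max(map(longest, succs)) if succs else 0)
        return memo[node]

    total = longest(entry)
    if total > budget:
        raise ValueError(f"worst-case path {total} exceeds budget {budget}")
    return total

g = {"entry": ["then", "else"], "then": ["exit"], "else": ["exit"], "exit": []}
c = {"entry": 1, "then": 5, "else": 2, "exit": 1}
assert worst_case_time(g, c, "entry", budget=10) == 7  # entry->then->exit
```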

The third was/is an ongoing project to write a Scheme-like language.


This codebase started out as George Carette's SIOD interpreter, although I've made a bunch of changes since. (There's now a compiler, first-class hash tables, the reader and writer are now in Scheme rather than C, there is a test suite including benchmarks, a decent REPL, etc.) Back in 2001/2, this started out as the core of a shareware calculator program I was attempting to sell online. The tl;dr on that is that the design was way too ambitious, the market didn't support it, and I didn't have remotely the bandwidth necessary for either the development or the sales. It was and is still fun to hack on, however.

> I did get an A, the professor said I should've gotten bonus points, but in retrospect he would've been within his rights to fail me outright

Well, to be fair, in the real world it's not too rare for engineers to write custom programming languages to solve their problems (see: this thread). So what you did isn't that far outside of what's really done, assuming you submitted the code for the programming language with the assignment.

That said, I think he would have been within his rights to give you a C for overengineering it :p.

> That said, I think he would have been within his rights to give you a C for overengineering it :p.


In 1990 I started work at a tiny company run by people out of the mainframe data-processing world (apparently). Over the next year or so I created what was more or less Scheme with an infix syntax, plus a DSL for data formats. It happened gradually, starting with the DSL -- before that they'd write custom little programs in C to convert from a new client's format, etc. Then hey, why don't I hook up an arithmetic evaluator so we can filter records without another custom program? (They'd evaluated microcomputer database systems as unsuitable.) And so on, until they had a real scripting language in a couple thousand lines of C. Nowadays you'd be kind of nuts to do this, but it was mostly a great success, and the gradual insinuation of this way of doing things was much easier to sell (from a guy right out of school) than bringing in a whole new language like Perl, if I could even be sure Perl would be right for them -- I'd just seen it listed in ads in Dr. Dobb's Journal. None of us were on the net. It was a different world.

The fun exception to the success was the time I tweaked something and broke the garbage collector, and we only found out when a job went back to the client with completely bogus results. An educational moment for me.

Years later at JPL I helped with a new Scheme dialect. It was partly in response to the politics described in "Lisp at JPL" https://news.ycombinator.com/item?id=2212211 -- we could say "this is actually a C program, see" -- but the language design was quite nice, IMHO. I can say that without backpatting because it was mostly done before I came in. Influenced by ML and Dylan; a shame it never was released.

I created a Scheme in Haskell, complete with a tutorial on implementation, to help people learn programming language design in Haskell. The goal is to make a solid, or nearly industry-ready, Haskell project, and a narrative (free book) that people are able to follow along. I'm looking for contributors, if you'd like to take a look: https://github.com/write-you-a-scheme-v2/scheme, tutorial: wespiser.com/writings/wyas/home.html.

Before this, I was on a team building a programming language for financial analysis. It had basic JavaScript semantics, was interpreted, and mostly returned results as data visualizations to a browser-based UI. Why? Because the investors believed there is a significant market for bundling data sources, along with a proprietary programming language, to hedge/mutual funds. The real difficulty in achieving this is the man-years of time required to build a language capable of reproducing the same analysis used by quants in production, before ever being able to apply a technology, like GPUs, that you bet can make things better. It can be especially difficult for the business leaders to make decisions that impact a programming language's development, as the highly technical quickly meets the business relevant. If you ever go this route, have a system architecture/design for the programming language that will bring you to an MVP. Otherwise, you'll be paying developers to play with prog lang ideas for years!

Inspired by Algorithmic Information Theory, the theory of shortest programs, I set out to create the simplest possible computational model with binary input and output.

The Binary Lambda Calculus is described in [1],[2] and further inspired this IOCCC winner [3].

[1] http://tromp.github.io/cl/Binary_lambda_calculus.html [2] http://tromp.github.io/cl/cl.html [3] http://www.ioccc.org/2012/tromp/hint.html

I think I once read that only an exponentially small fraction of binary strings are valid BLC programs. In theory this means you could create shorter programs by merely enumerating valid BLC programs, which is bothersome. Is this a correct characterization and do you see a fix?

EDIT: there's the paper https://www.cambridge.org/core/journals/journal-of-functiona... citing an exponent of 1.963447954...

> I think I once read that only an exponentially small fraction of binary strings are valid BLC programs

This is true for the encoded lambda calculus prefix, but not for the entire BLC program, in which you can embed arbitrary binary data.

> In theory this means you could create shorter programs

Not really; BLC is universal, so program sizes are optimal up to a constant. In other words, for any language L, there's a constant c_L such that for all x, the complexity of x according to BLC is at most c_L larger than the complexity of x according to L.

Cool, I was wondering that. So you confirm that I can embed arbitrary binary data in a BLC program with only a constant overhead?

With two caveats: the binary data must come after the encoded lambda expression, and in case of self-delimiting programs, that lambda expression must somehow know where the binary data ends.

Yes; in 1997, I created a simple Javascript-like embeddable scripting language for the network security scanner product I was a part of (Secure Networks Ballista). Instead of a BSD-style sockets interface, it was designed around direct access to the network interface and had a standard library that implemented Ethernet/IP/TCP, and the language had primitives for picking apart and composing packets. We used it to write low-level network tests without writing C code.

At the time, there was no mainstream realistic option for embedding popular languages that lots of people knew into C applications. A few years later, I'd have solved the same problem with Tcl and a few extensions; then Python, then Javascript.

I've developed quite a few languages:

(1) Strix, a rule-based language with SNOBOL4-style string pattern matching.

(2) ENeMaL, a rule-based expert network management language.

(3) Gossip, an object-oriented Prolog.

(4) Emblem, a Lisp dialect.

I've been using Emblem to develop

(5) Full Metal Jacket, a visual dataflow language (http://web.onetel.com/~hibou/fmj/FMJ.html)

There will always be a demand for domain-specific languages. I'd think hard about developing general-purpose languages. There are plenty out there already, and you could contribute towards them instead.

If you want to develop a new language for release, it's a good idea to develop several others first for your own use to gain some practice and learn what's involved.

Why have I developed Emblem? I wanted something simpler than Common Lisp, but didn't want to rely on third party libraries. It has support for 2d and 3d graphics, statistics, symbolic AI and a few other things.

Why have I been developing Full Metal Jacket? I wanted something significantly better than Lisp, with directed graphs as the fundamental data structure, extreme type safety, and the ability to run efficiently on multiple CPUs.

It isn't ready for release yet, and I'm taking a break from it at the moment. I'll consider it a success when I start using it in preference to Emblem.

I've created somewhere around 5-10 languages. Most are domain specific, some are new internally generated general purpose languages in "black box" use in client software because we saw the need, some are from clients which needed languages built.

Mostly we value declarative specification, however some domains do require in-order recipe style expressivity. The more problem-space the language can be, the easier it is to iterate, catch problems early & easily, report more meaningful errors, and optimize the compilation.

It's a craft, but I honestly think it's not a hard one to pick up. The major thing is being able to step back and think about the problem humans are tackling, not grabbing your default C style (or whatever) and trying to bolt it into some solution space. As programmers, it's easy to give zero thought to all the boilerplate and machine-fiddling required for typical programming; you must set that aside when starting from scratch with a new problem. You must also realize when existing or in-progress solutions become boilerplaty and fiddly, and have the wherewithal to step back and notice that it's not expressing the correct level of abstraction for the problem.

We've supported Bloomberg, ADP, nuclear power plants & other utilities, etc, so we need something truly viable, robust, maintainable, and worth the client's money. You have to think in their shoes, understanding the industry vocabulary of the user. You need to give them tools that closely match their natural expression of what they want the machine to do, not divert them into being programmers, yet still have all the power they need for their tasks at hand. It all starts with having a great business analyst.

I (with some colleagues) developed a capability-based shell scripting language (http://shill-lang.org) that makes it easier to apply the Principle of Least Privilege to system administration tasks and also functions as a modular, programmable sandbox.

We're currently working on a port to Linux and a commercial version. If you're intrigued, I'd love to chat.

A few years ago I wrote a proof-of-concept interpreter for a small stack-oriented prefix-notation language inspired by languages like Python, Lisp, io, and GolfScript, written in Python.

I had fun with the project, learned some stuff about interpreters, and also learned that infix notation for things like arithmetic is actually pretty neat and we should probably keep using it.


I've created several bytecode interpreters, mainly for bootloaders and cross-platform drivers. These environments are sometimes very space constrained, so using the native instructions to do highly repetitive I/O to set up devices can blow your space budget. Instead, I shrink the instruction set down to (usually) 4-bit codes with packed immediate data in a 16 or 32 bit word. In one case, an SH4-based set-top box, this reduced the bootloader from 30+k down to a single flash page. This meant that we could have redundant bootloaders instead of just one copy, and that increased platform reliability and serviceability quite a bit.
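To make the packing concrete, here's a hypothetical Python sketch (my invention, not the actual firmware ISA) of a 16-bit word with a 4-bit opcode and 12-bit immediate, interpreting a tiny device-setup program:

```python
# Hypothetical 16-bit ISA: top 4 bits opcode, low 12 bits immediate.
OP_SET_ADDR, OP_WRITE, OP_SKIP, OP_HALT = 0x0, 0x1, 0x2, 0xF

def pack(op, imm):
    assert 0 <= imm < (1 << 12)
    return (op << 12) | imm

def run(program):
    # Interprets packed words, writing immediates into a fake
    # register file -- the kind of repetitive device setup that
    # would otherwise cost full-size native instructions.
    regs, addr, pc = {}, 0, 0
    while pc < len(program):
        word = program[pc]; pc += 1
        op, imm = word >> 12, word & 0xFFF
        if op == OP_SET_ADDR:
            addr = imm
        elif op == OP_WRITE:
            regs[addr] = imm
            addr += 1          # auto-increment for back-to-back writes
        elif op == OP_SKIP:
            addr += imm
        elif op == OP_HALT:
            break
    return regs

prog = [pack(OP_SET_ADDR, 0x100), pack(OP_WRITE, 0xAB),
        pack(OP_WRITE, 0xCD), pack(OP_HALT, 0)]
assert run(prog) == {0x100: 0xAB, 0x101: 0xCD}
```

Four setup writes fit in eight bytes here, which is the whole point when the budget is a single flash page.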

I also wrote a BPF-like JIT for network packet matching once, targeting the QorIQ pattern matching engine as a backend.

I worked on a compiled, statically-typed Python-like language, called Runa. It has a compiler in Python which compiles to LLVM IR. It's similar enough to many of the ideas in Rust that I stopped working on it and decided to go code Rust instead.


I wrote a language called T for Bell Northern Research, not knowing there was already another T language based on Scheme. The language was used for black-box testing of terminal interfaces, and so needed good string handling.

One aspect of the language was that Unix regular expressions were a built-in type, and so were associative arrays. These two features worked together in a cool way:

  x['hi there'] = 5
  x['hello'] = 2

  # regular expressions are enclosed in `backquotes`
  x[`hell.`] == [2]
  x[`h.*`] == [5, 2]
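For readers who want to play with the idea, the same behavior is easy to emulate in any language with first-class regexes and dicts; a quick Python sketch (mine, not T's implementation):

```python
import re

class RegexDict(dict):
    # Emulates T's trick: looking up a regex returns the list of
    # values whose keys fully match it, in insertion order.
    def match(self, pattern):
        return [v for k, v in self.items() if re.fullmatch(pattern, k)]

x = RegexDict()
x['hi there'] = 5
x['hello'] = 2

assert x.match('hell.') == [2]
assert x.match('h.*') == [5, 2]
```

(Python can't overload `[]` for "regex keys" without ambiguity, hence the separate `match` method.)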

I created a tiny language (assembly level) when I was fifteen years of age. It could convert simple postfix mathematical expressions, including trigonometric functions into machine code for a Zilog Z80 processor inside Sinclair ZX Spectrum+.

I was unaware of compilers at that age, and came across a machine code sample which ran about two orders of magnitude faster than BASIC. That motivated me to "invent" a compiler and create this proof of concept.

I wrote a 'language' that essentially compiled down to several other languages (C++, PHP, Javascript were mostly implemented) but unlike e.g. Haxe, it didn't try to hide any of the underlying languages; instead, for parts that weren't abstracted by the language, you had to provide your own implementation blocks for each specific language. This basically made a file that was meant to be used from e.g. C++ and PHP a mix of three languages: the two you wanted to support plus my custom language.

It was mostly useful for writing boilerplate code only once; stuff like class declarations that had to change in sync between parts of one code base. But it grew from there to the point where it let you express 'pure' algorithms in this language so that you only had to write them once, but if you wanted to you could 'override' parts of it in the target languages. It was the bastard child of a compiler and a macro processor. It saved me many, many hours writing tedious boilerplate; however I don't think it saved me more than I spent on it.

It has to be in one of my zipped up archived subversion repos somewhere...

Sounds similar to MyDef: https://github.com/hzhou/MyDef. However, MyDef doesn't attempt to be a new language, it attempts to be meta-layer. With MyDef you can save time with boilerplate, and more importantly, reorganize code into a better semantical structure.

Yes, Tauthon. (https://github.com/naftaliharris/tauthon).

It lets people with Python 2 code start to use new features from Python 3. (It's a backwards-compatible fork of Python 2.7 with features like async/await, function annotations, and keyword-only arguments backported from Python 3).

I created a language.

I was refactoring a very complex class at work that was hiding a very unsafe one-liner of Python doing a «safe» eval that was nowhere near safe. After 3 days of my own work, plus the 3 weeks of the team's work that the git history showed on this class, I thought: well, I am told I am an idiot, but I am pretty sure one can make a safe language that only includes what customers need, without it taking as much time as this disgusting pile of obfuscated, unsafe junk.

I built a Forth-based language in 16 hours of my spare time that had the minimal features satisfying the real use cases of all the customers.

I proposed it as a replacement after demonstrating the vulnerabilities.

Then I was shouted at by the CTO because it was undermining his great technical choices.

And then I realized that this access provider cared about neither its own security nor that of its customers, just about nice boats and expensive cars; they were milking internet users like any other ISP.

Creating your own language is dangerous; it can make you think about too many things. Do not do it.

16 hour Forth to replace everything and solve our problems.

Forgive me, but I'm leaning toward siding with that CTO on this one.

I've created a few interpreted languages over the years:

1) A C and Lisp hybrid called "Frame" that was an expert system development language; it looked like C but ran like Lisp, with functional-programming-style code-modification capabilities. Created over an 18-month period around 1988, with lots of statistical and 3D rendering capabilities. For proof of concept, it had an accounting system built with it, which generated 3D cash-flow animations for analysis purposes. After that I wrote a bunch of "agent AIs" with it, as back then AI was all about actors and agents.

2) At Philips Media during the early 90's I wrote the ASLAN documentary production language, which enabled a team of people to create Ken Burns-style documentaries. This was a "distributed language", with different people operating in different roles having different syntax for their portion of the overall documentary being created. Audio people, video editors, screenwriters, other coders, graphic artists, and 'builders' all had their scripts that, when combined and executed, would produce an ISO disc image that was a "cut" of the documentary. I ended up being the production engineer for 13 documentaries, which we repeated in 8 languages. Philips also sold ASLAN to other production companies, whose use I then supported.

3) At Sennari Games, under contract to EA, I wrote the front-end UI and control language for all the Tiger Woods PGA golf titles. This was maybe '94? It was the first Tiger Woods Golf, which was then ported all over the place. When I checked back 10 years later, they were still using it for golf game front-end UIs.

I seem to make mini-languages quite a bit now. I operate as an integrator between logical systems of complex software. I've found that a mini-language, often nothing more than a macro text-replacement engine, is all that is needed to enable fluid integration between vastly different technologies. The KISS principle, 1000%.

Nope, but have always loved playing around with compilers for languages like PL/0, Pascal (subset) and such. I have many books on compiler design[0] but my itch is usually scratched by implementing a subset of a Wirth language as opposed to creating my own. I don't keep anything I write because it's fun to do it all over again later.

[0] I think my oldest is "Compiler Construction for Digital Computers" by David Gries. My favorite isn't really a compiler book, it's Wirth's "Algorithms + Data Structures = Programs", because he implements PL/0. I used to have some PDFs by Thomas Christopher that used matrices to compute FIRST, FOLLOW, etc. sets, which I thought was interesting, but I have since lost those.

I've just started writing my own PL/0 compiler. I found the best resource for it (apart from the Wikipedia articles on EBNF, PL/0 and P-Code) is Wirth's book "Compiler Construction" (http://www.ethoberon.ethz.ch/WirthPubl/CBEAll.pdf). I don't think later versions of "Algorithms + Data Structures = Programs" contain anything about PL/0 (or at least I could not find anything in the contents page). Either way, I find Wirth's writing amazingly clear.

Yes, my understanding is later editions of the book didn't include it.

I created a language that better expresses server side web programming issues (exactly for that reason).


I also created language for describing processes (company processes).

Mostly for fun and to scratch my own itches.

I'd love to see work on how to visualize processes

I'm using graphviz for that.

Yes, Cirru, in 2013, which is only a language front-end that compiles to JavaScript, Clojure, JSON or HTML.

* http://cirru.org

* text.cirru.org (temporary solution)

* https://news.ycombinator.com/item?id=13773813

I was trying to explore the expressiveness of graphical editing (at first using indentation). It ended up as a nice tree editor that I can edit as a DOM. With the DOM, the layout of the code is actually the layout of the AST, controlled by algorithms responsive to screen size and AST structure.

I play language designer in my free time, I have like ten languages buried in folders with thousands of abandoned projects.

My recent endeavour was a language with only five constructs that would mimic mathematics in the way we program:

    = assign
    $ print
    @ loop
    ? condition ! else
    # definition/function
So print hello world would be like:

    $ 'Hello world'
Every single program in the whole universe can be easily translated to Uno: Universal Notation.

I thought developing a browser extension to allow Uno being run from the address bar for quick calculations would be an interesting project.
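An interpreter for a five-construct language like this can be very small. Here's a rough sketch covering just `=` (assign) and `$` (print); the concrete line syntax is my assumption, since Uno's isn't fully specified above:

```python
def run_uno(src):
    """Toy interpreter for two of Uno's five constructs:
    '=' assigns a variable, '$' prints an expression."""
    env, out = {}, []
    for line in src.strip().splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith('$'):
            # '$ expr' evaluates expr and prints it
            out.append(str(eval(line[1:], {}, env)))
        elif '=' in line:
            # 'name = expr' binds a variable
            name, expr = line.split('=', 1)
            env[name.strip()] = eval(expr, {}, env)
    return out

print(run_uno("""
x = 6 * 7
$ x
$ 'Hello world'
"""))  # ['42', 'Hello world']
```

(Using Python's `eval` for expressions is a shortcut for the sketch; a real implementation would parse expressions itself.)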

That reminds me a bit of PILOT: https://en.wikipedia.org/wiki/PILOT

What a coincidence, I am also working on a language that uses three letters for instructions and every single line starts with one, same concept as Uno:

    say 'hello world'
    let list = [1,2,3]
    for item in list: say item
    def add(x,y): ret x+y
    is? a=1 say 'ok' no? say 'wrong'
Easier for the compiler and for the human.

I created Myrddin (https://myrlang.org). It started off as a fun experiment, but now I'm finding myself using it for many of my personal bits of code.

I created a programming language called NCD. It is a mostly imperative language but where each statement can make the program backtrack to that point. The original intended use case was for configuring the network on Linux but it is really practical for similar things like starting programs based on the presence of hardware.

The interpreter is written in C and is open source [1]. There is quite some old documentation here [2] (needs to be moved, this was from a google code wiki). An online demo with several example programs can be found in [3] but it does not have all the latest features of the language.

The interpreter is completely asynchronous (uses epoll etc.; this is why the Javascript demo was so easy). It has many built-in modules that let you do quite a lot of things (including sub-processes with pipe I/O, TCP client and server, receiving events from input devices, observing devices through udev events).

The language is actually built from a relatively small core, which defines the syntax and basic semantics, and a larger set of modules which implement different statements. Much of what would be part of the language in other languages exists as a module in NCD, e.g. the var() statement and even If.

And here [4] is a real program written (mostly) in this language, that does something completely different from network configuration (allows controlling a Sphero robot using a joystick).

[1] https://github.com/ambrop72/badvpn/tree/master/ncd

[2] https://github.com/ambrop72/badvpn-googlecode-export/blob/wi...

[3] https://rawgit.com/ambrop72/badvpn-googlecode-export/wiki/em...

[4] https://github.com/ambrop72/spherojoy

Working on automating software development, I created YSLT as part of the YML project: https://fdik.org/yml/yslt – this is just syntactic sugar to make XSLT usable.

Now I'm creating Intrinsic, a programming language with a grammar that is modifiable at runtime. Because I found no suitable toolchain, I published pyPEG, a compiler-interpreter: https://fdik.org/pyPEG/. Intrinsic will have a generative paradigm.

We created the Lamdu language that is designed to work well with its IDE.

It has some features we miss in other languages (extensible&structural product and sum types), but also makes different trade-offs (all names are GUIDs behind the scenes, parameters to multi-param functions are [almost] always named).


Like someone said, every time I write Forth or Lisp :) So I took the next logical step and implemented Forth in Lisp (https://github.com/codr4life/foonline). Why? Because it gives me more leverage, it's like having one more dimension on top of macros. The other reason is that it gives me facts to go with opinions on language design, allows me to see the compromises others made in a new light.

Indeed I have: https://tkatchev.bitbucket.io/tab/

Because awk sucks and SQL is too constraining.

I'm in the process of making a user-facing scripting language for my hobby project Rolz.org (https://rolz.org), that's going to be a very simple affair.

My only attempt at making a "serious" language was np-lang (http://np-lang.org/tutorial - but it's defunct right now). Initially I made an interpreter in Java, later I made a parser that creates bytecode for a slightly mutated Lua VM. The reason why it's not being maintained right now is that, while basing the runtime on the Lua VM was attractive at first, it was really a dead end for me productivity-wise because I ended up spending more time with the Lua internals than was practical.

The reason why I started the np-lang experiment was that I wanted to experiment with a language that could ultimately become my scripting language of choice for web projects. I was never happy enough with it though.

I'll probably try again to make a language with that goal in mind, when I have some time and motivation to spare.

Apart from that I created several toy programming languages over the years because it's fun and I think it's also practice that can make you a better programmer by gaining a deeper understanding of the design decisions that go into a programming language and its runtime.

I also made some project-specific DSLs, mainly to facilitate configuration and user scripting. And simple, BASIC-like scripting interpreters for toy RPG projects.

Some great submissions here, nothing as nice from me, but I guess there is something to learn from most people.

I have created many in my life, none of them open source (yet; the one I am chewing on now will be, I think), most of them toys, the most successful one really niche, and even those successful ones embarrassingly crappy. I wrote a PHP before PHP (in Perl as well, guess it was meant to happen :) and it made me a lot of money, being able to write dynamic web applications without much of the plumbing at the time. I wrote a bunch of 4GLs for finance and web CRUD applications, all sold as addons to our CMS in the early 2000s. After that I wrote a lot of toy languages which I meant to put on github some time, but I don't think they are of interest to anyone anymore by now. I write, for my own sanity, Forths when I do embedded work, assembly work and/or work that doesn't allow me to have interactivity for development.

The 'why' is easy: the feeling that things are just not as efficient as they can be, and the drive to do something about that. That will never leave me, and it did give me a lot of competitive advantage over the years. Mostly in the end I end up with something that, after a postmortem, would've been easier with Lisp. If I surpass that point in my life, I will have succeeded.

Not quite, but I've hacked on various programming language related projects:

* Helped write a C++ bindings library for Ruby. I was finally getting sick and tired of doing everything in C++, and paying the cost in undefined behavior and compile times, but hadn't given C# a proper try yet. Bugs in the C API for exception handling eventually convinced me to abandon the project IIRC, but we had something functional.

* Company-internal modifications to a C++ bindings library for Squirrel, Sqrat. Bugfixes for UB, adaptations for custom smart pointers, logging, replacing exception-throwing logic in later versions... was at a company using it professionally and was encountering bugs, compatibility issues (we couldn't compile with exceptions enabled for all platforms), and poor debugging support (which I 'fixed' by adding a lot of dumping options for callstacks and the like.)

* Wrote a flash bytecode rewriter utilizing RABCDAsm. This wrapped all method calls - and statements within methods - in additional logic to track the current callstack. Our flash embedding solution didn't let us debug flash very well, but these modifications let us get real callstacks for unhandled exceptions and other errors. As a bonus, I was able to reuse much of the logic for a simple tracing profiler of sorts.

Not technically a separate language, SaferCPlusPlus[1] is a memory safe dialect/subset of C++. As far as I know, it is by far the highest performance[2] solution currently available for addressing memory safety in C/C++ code (aside from full static verification when that is feasible, of course). The idea is basically just to replace the potentially unsafe elements of C/C++ (like pointers and arrays/vectors) with compatible (memory) safe substitutes. A nice thing about this approach is that converting existing (unsafe) C/C++ code is a simple, straightforward process (that should hopefully be (mostly) automated before long), and it doesn't require learning/adopting any new paradigms. If you want maximal performance though, you'll need to understand the technique of using "scope lifetimes" to achieve memory safety with no run-time overhead.

It's been perfectly usable for a while now, but it's not yet complete. Static tools for automatically identifying uses of potentially unsafe C/C++ elements in existing code, and (at least mostly) automated translation are still being worked on. And there are still elements that are missing safe substitutes (like std::string). So, if anyone's looking to kill some free time... :)

[1] https://github.com/duneroadrunner/SaferCPlusPlus

[2] https://github.com/duneroadrunner/SaferCPlusPlus-BenchmarksG...

Back in the 1980s when text adventure games were popular I was thinking of writing a game so I started by writing a compiler for my own game language. I never completed the game and I probably had more fun developing the language than the game. This was also the time when OOP was becoming fashionable and my language allowed me to experiment with class and prototype OOP, multiple inheritance etc. so it was definitely a useful experience.

I kind of make a new language for every project. Rather than writing a program directly, I write pseudocode to solve the problem, and then build a language that runs just that pseudocode.

I had to query lots of data from webpages, so I made a pattern matching language where the program is a sample of the webpage. E.g. to query the text of <span id="foo">some text</span>, you write the "program"/pattern <span id="foo">{.}</span>. Or if the webpage had multiple <span class="foo">some text</span>, you would use the pattern <span class="foo">{.}</span>+
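A toy version of that pattern idea can be built by compiling the template to a regex, where `{.}` becomes a capture group. (The real tool matches against the parsed tree, so this is only an illustration of the concept.)

```python
import re

def compile_pattern(pattern):
    """Compile a template like '<span id="foo">{.}</span>' into a regex
    in which {.} captures the element's text."""
    # Escape the literal parts of the template, then turn the escaped
    # '{.}' placeholder into a non-greedy capture group.
    regex = re.escape(pattern).replace(re.escape('{.}'), '(.*?)')
    return re.compile(regex)

pat = compile_pattern('<span id="foo">{.}</span>')
html = '<p>x</p><span id="foo">some text</span><p>y</p>'
print(pat.search(html).group(1))  # some text

# The '+' (repeated) variant corresponds to findall:
print(compile_pattern('<b>{.}</b>').findall('<b>a</b><b>b</b>'))  # ['a', 'b']
```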

Then I noticed, you sometimes need more expressive power than simple patterns, e.g. returning an element depending on the value of another element, so I added variables and allowed some kind of XPath expressions inside the {}. It was not really XPath, because XPath 1 has no variables, but after a while I noticed XPath 2 has variables. There was no XPath 2 interpreter for the host language, so I implemented a standard-conformant XPath 2 interpreter for this. Not really my language, but my implementation of a language. Later I updated it to XPath 3, which became far more complex than expected, since XPath 3 does not just have variables, but is fully Turing-complete with anonymous functions. And a type system with 50+ different types. Makes JavaScript look trivial. I also noticed XPath 3 is almost the same as XQuery 3, so I implemented that, too. http://www.videlibri.de/cgi-bin/xidelcgi

Which brings us back to your XmlPl, because XQuery was designed for the same tasks. That example node[] main() { <title>helloString;</title> } would become declare function main() as node() { <title>{$helloString}</title> } in XQuery.

Previously I made a completely different XML processor for my personal webpage. Keep all content in XML files, make an HTML file for the layout of the webpage, and then insert the data from the XML into the HTML at statements like {{$WRITE some/kind/of/xpath }}. It had control structures {{$IF}} and {{$FOREACH}}. I thought it was a good idea to keep the program separate from the markup, but it only looks confusing, so I stopped using that language.

Most recently I made a language to sort my mails when they arrive on the server. More a toy than a full programming language, but the cleanest way to keep those filters http://hg.benibela.de/mailfilter/

> I kind of make a new language for every project. Rather than writing a program directly, I write pseudocode to solve the problem, and then build a language that runs just that pseudocode.

I love that approach. Don't let non-essential constraints limit the elegance of your solutions.


http://little-lang.org is a C like scripting language that compiles down to tcl byte codes (so it can call tcl and tcl can call it). Why? Didn't like tcl, did like tk (still feel the same way). And I always wanted a scripting language with structs, I just like how structs are sort of a self documenting part of any program.

Also created an unreleased version of awk that made awk scripts first class; you could have

    awk_script | awk_script 
in the same script. We wanted this one for some database stuff we were doing but ended up doing it in C instead.

Also created a "language" for processing deltas in a version control system. This one is pretty obscure but pretty useful. Patterned after awk, so there is a begin/end and then the body is called once per delta. Here's an example that digs out the commits and displays them in JSON format:


When reading this one, the rule is that stuff in double quotes is what is printed; the rest is logic.

I created http://jogolang.org, mostly because I was interested in learning how compilers worked. I was also motivated by the lack of simple, type-safe, ahead-of-time-compiled programming languages for writing video games. Now the project is kind of dead, but I certainly learned a lot about parsers, code generation, optimizations, SSA, and register allocation!

Over the past couple years I worked on a typed Python-like language with its own bytecode, a CPython-like runtime, and a memory manager (all in C). To test the runtime and memory manager, I wrote a webserver with it which runs my website at http://www.survivalscout.com. Eventually I'd like to build a unified operating system and CPU.

I built a DSL for creating easy-to-build tests for a frequently updated enterprise web application. Selenium and its counterparts, while very powerful, were entirely too much to take on for many people, and the vendor's testing was kind of a joke.

What my DSL did was translate a lot of language into terminology that made more sense to people in that space, and wrapped frequently used combinations of functions together. I.e. if we wanted to test if an element was present in the navigation bar, we wouldn't have to find the navigation bar slideout toggle switch, validate that it was open or open it, ensure that the navbar was populated, then use an xpath query to determine whether it was present, we could just declare a simple one line test.
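A minimal sketch of that kind of macro expansion: one friendly DSL line expands into the bundle of primitive steps it wraps. The DSL phrase and step names here are hypothetical, not the actual DSL:

```python
# Hypothetical mapping from a DSL phrase to the primitive UI steps it bundles.
MACROS = {
    'verify navigation contains': [
        'open navbar toggle if closed',
        'wait until navbar is populated',
        'find element by xpath //nav//*[text()="{arg}"]',
        'assert element is present',
    ],
}

def expand(dsl_line):
    """Translate one friendly test line into the low-level steps it wraps."""
    for phrase, steps in MACROS.items():
        if dsl_line.startswith(phrase):
            arg = dsl_line[len(phrase):].strip().strip('"')
            return [s.format(arg=arg) for s in steps]
    raise ValueError(f'unknown DSL line: {dsl_line}')

for step in expand('verify navigation contains "Reports"'):
    print(step)
```

A real implementation would execute these steps against a browser driver instead of returning strings, but the translation layer is the interesting part.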

I had no experience with building DSLs (Domain Specific Languages) previously, no experience building an IDE, and no experience with creating or parsing syntax trees. I just knew that there was a problem and it needed to be solved.

What was conceptually very simple turned complex quickly as I ferreted out the long term goals and researched how I could build something that I could responsibly transition to people less willing to hack their way through things.

I could have built an API rather than a whole new language, but I had compelling reasons not to go that route.

Did it catch on? Yes, and no. It's been 4 or 5 years since I released it to a customer and it's still being actively used. The entire target market was less than 3000 customers, so it was never going to be a household name.

I have to say it was one of the more eye opening projects that I have taken on. The growth experience was infinitely valuable, and like all lower-level things it provided great perspective that can be useful on a variety of projects.

I built one around 2000 that was used to program consumer polls; an interpreted language simple enough for non-technical people to use. It turned out to be capable enough that the implementors of the polling system used it to manage the underlying system as well, by storing the code in a database. No idea what happened to it. It was written in C/Objective-C in a WebObjects environment.

I've been tinkering with a language I call beep for a couple years now. It's sort of my programming equivalent of the old hot-rod sitting in the garage. I'll disappear for a weekend while I work on it and then not even think about it for a month or two.

I made it/am making it mostly as an exercise. I like to know how things work, and there's no better way for me to learn than by doing. It's also just fun to make stuff. Last (and definitely least), it's an expression of a certain degree of frustration I have with the current offerings. My dream language—oh, maybe I should've named it "dream"—would feel like Ruby but have static types and no nil references. I wholeheartedly acknowledge the tension inherent in that statement; much of Ruby's power comes from its dynamism. But like I said, it's just an exercise. :)

Not sure if that qualifies as not-a-toy-language. Is a language's "toyness" based on its simplicity, or its intended use?

Sounds interesting. Have you ever tried Kotlin? I've mostly worked with Ruby and C#/.NET, and Kotlin, based on JVM, feels like an interesting fusion plus a few extra tricks. It's still mostly static, like Java, but adds a ton of stuff around blocks/lambdas and non-nullable reference types.

Ah, funny, I was just reading about Kotlin yesterday. I haven't tried it but it looks interesting!

Have you looked at Crystal before? It's a compiled statically typed variant of Ruby.

I haven't! I'll check that out.

I made one foray into language creation that some folks may find interesting, although I am very much an amateur. I designed a declarative language attempting to mimic the structure of human concepts. The idea was partly to try out this hypothesis that all concepts are reducible to 'types' and 'relations', and to try creating a language based on those two primitives. After coming up with some concept hierarchy, you could use it to generate particular instances of the root concept—the idea being to use it for procedural content generation. Here's some sample code:

	(legs: Cylinder, seat: Slab, back: Slab)
		attachment(legs, seat)
		attachment(seat, back)

		a: unsequence[x: Type2, xor[y: {Type3}(), z: type4]],
	 	b: optional[Fruit], c: >3[pet: Animal], d: 2[Animal(Blah){}]
		constraint1(x, y, z)->recognize->toInt
		constraint2(x, z.a.b.c)
		constraint3(z, y)
I wrote a grammar for the language and generated a parser, and have a design for the runtime on paper—but, the design was rather complex and for various reasons I thought it would be best to build a general purpose data structure visualizer to assist in writing it. That ended up being a quite large project on its own, which I'm still working on (quick demo: https://www.youtube.com/watch?v=HpxgUVNAhXc ; more info: http://symbolflux.net/projects/avd).

I also realized recently that my main 'innovations' were already covered by the logic programming paradigm decades ago, so that has discouraged me some. (And I've recently started thinking that machine learning will be better for doing procedural content generation than anything produced by explicit descriptions.)

Our CTO and my co-founder at CryptoMove (also, my dad) created the Hello distributed programming language recently.

From the white paper, some of the problems Hello aims to solve are (1) Repeated re-coding of the same distributed primitives; (2) Challenging development process due to the combination of a programming language and extraneous library; and (3) Inefficient and unreliable distributed code.

Here is the white paper and reference guide if anybody is curious:

White Paper: https://docsend.com/view/zfcz7sh Reference Guide: https://docsend.com/view/65tbeht

Edit: Thank you omg.lame.dont.dotis@wtf.example.com (email edited) for the feedback. We have email capture by default because we use DocSend for sales collateral, but we've turned it off for these white papers. They're downloadable too, if you just want the PDFs.

I tried writing a lang called Slang where every statement was a search query against a corpus of data and methods that were either imported or generated by the active program. Under a certain threshold of query-match accuracy, it would abort with an exception. I could never make it work, and it's probably a bad idea that I should try 5 or 6 more times.

I created a language back in college, although it wasn't for any class but to learn how Forth worked. It was Forth-like, but unlike Forth, it had polymorphism and objects. I used the language for both work [1] and a class assignment (a Unix shell, where I made Unix commands first class objects---a really cool design and I got an A in the class). But I never did like the syntax (Reverse Polish Notation). I've thought of cleaning up the code [3] but ... it's a lot of code to update ...

[1] I worked for a neuroscientist [2] out of the Math Department writing software.

[2] Dr. Arnold Mandell (https://en.wikipedia.org/wiki/Arnold_J._Mandell)

[3] It would win a "Most Abuse of the C Preprocessor" award, due to writing my own template package, long before C++ had templates. Also, the language was written in C, not C++. Abuse indeed.

I created an IVR scripting language back when premium-rate billing was in its heyday. We were in a very successful business (so successful that we didn't need a salesperson). Our Achilles heel was that we were dependent on a company for our IVR language. I asked why we hadn't written our own and was told that the guy who wrote it was really smart and took 2 years to bring it into existence.

I decided to try to work out a proposed architecture on a pad of paper during a transcontinental flight. I created a mockup simulator that seemed to work and then borrowed some hardware to see if there were any problems. There weren't any that weren't quickly fixable.

I brought in my language to my company's owners, who told me they were going to buy the language they used for $300,000. I told them they could have mine for free.

I want to, but I have never done so. Maybe later in my life.

I have 2 ideas.

The first comes from the difficulty of using make. I think the make system lacks debuggability: it's not easy to log, break, step through, and watch locals.

I feel that building should be handled by a domain specific language, instead of make.

I haven't researched enough though, there may be something similar, only it is not popular yet.

The second idea comes from the difficulty of documentation.

I'm thinking of a programming language that allows you to describe the block diagram (modules) of your program easily. You need to specify what the interfaces are, how many threads ....

Then a diagram can be automatically generated as a document. You then fill in the internals of your modules with classes. The idea is to always force programmers to describe the design first, then implement. Code and documentation are mixed, so the documentation is always up to date.

For your first idea see:

  * cmake
  * premake
  * ant
  * scons
  * ninja
  * gyp
  * jam
  * gn (Google)
  * meson
To name just a few.

Your second idea is basically UML and some of the tools that exist for it.

You simplified my idea by a lot. The second idea is not UML. UML isn't really helpful for understanding source code; it's simply a visual version of the classes.

I use cmake, but I don't think it's a language.

It may or may not be a Turing complete programming language, but it surely is a language.

Regarding your first idea, you may be interested in the Shake [0] Haskell EDSL as a replacement for Make. Being embedded in Haskell, Shake still has the full power of Haskell at its disposal.

[0] http://ndmitchell.com/downloads/paper-shake_before_building-...

A really cool replacement for make is tup. It's very fast and figures out dependencies by tracking which files get read or written by each command.

One academic also did make in Prolog. That way it was declarative and used logic programming.

Make is declarative. It has rules that you declare; it figures out which ones fire and in what order.

I know that. The implication is that they redid a declarative program in a declarative language.

I wrote a stack based, concatenative, operator language. I demoed it live at a tech talk in Italy on the future of programming and databases.

Its core feature is emergence/composition. To demonstrate its power, let me show you a SCARY example of a factorial function:

factorial : [ = ] ? 1 | [ - 1 factorial * [

Now, let me show you the exact same program using an EVEN SCARIER "language pack" (note, this is the exact same program, nothing is different) here:

Define the factorial function to be that if the left parameter is equal to the right parameter then return 1 else wise please take the left parameter , subtract one , recursively calling the factorial function again multiplied by the right parameter .

That terrible "english" is the EXACT SAME program as the first line, the interpreter doesn't know there was any english. I do not recommend ever coding with plain english, but it demonstrates the power of emergence/composition/currying with the language.

If anybody is curious, I'd love to explain more / chat with people / show demos. I'm ultimately going to switch the programming language's interpreter to be built with a database (the one I'm working on, http://gun.js.org/ ) so that way I can have a Turing Complete database system. Once that is working I will then be able to evaluate programs across multiple machines simultaneously and modify programs dynamically on the fly, as well as reduce O(N) operations to O(1) by spinning up N number of machines (for large enough data sets that make it worth the overhead of splitting up the load).

I wish they recorded the talk, I'll have to do it again and post it. It had a lot of other interesting ideas, like combining it with Machine Learning to sample inputs and outputs of functions to see if the computer could optimize algorithms and then issue a pull-request to its own code, becoming a contributor. :)

I implemented one in XSLT. We were developing a product in 2003 that was supposed to be a media management system, managing the transformation from popular XML formats or any generic XML to various media outputs. I created the language so you could write logic code inside of your input XML; calling functions was done either with markup or using a URI scheme (this was a trick to get around possible validation issues).

Each media and format had a configuration file, it was inside of this file that variables and functions were defined. If you tried to call a function that was not defined in your context that part of the tree was removed from the output (figuring it was better to make the output than to fail). This also meant that you could branch and override functions and variables dependent on the media or input format.

I once created an emulator for HP Time-Shared BASIC (https://en.m.wikipedia.org/wiki/HP_Time-Shared_BASIC) so I could run the original code for the original text-adventure form of Oregon Trail. I already had much of the AST parsing infrastructure in place for a syntax highlighting text editor I had written. The text editor component got extended to be the terminal emulator. This was all in JavaScript, rendering with Canvas2D, so that the resultant image could be textured into a 3D model of a Commodore PET (the only "old" computer model I could find that didn't cost an arm and a leg to download) in WebGL, then displayed in a VR headset through WebVR.

I regret nothing.

Yes. I created an internal language called "Lispon".

A lisp using json instead of sexprs. e.g. ["op", "arg", "arg"]. The team had a couple of requirements for a service we wrote: 1. Need to do batching. 2. Need to do stored-templated json responses based on data in our DB. 3. Need to not cost the project more than a week.

We had worked around #1 and #2 in awkward and debt riddled ways. (The client of the service was PHP with lack luster JSON processing, the service was java) So I whipped up Lispon and evaluator in about 3 days and had it integrated in another day.

Its lifespan was intended to be somewhat short, but like all code it's somewhat slow to get replaced and is doing its job adequately. JSON-based sexprs are terrible to read.
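The core of a JSON-as-sexpr evaluator fits in a few lines. This is a minimal sketch in Python of the idea described above (the operation names here are invented; it is not the actual Lispon): the first element of each array names an operation, and the remaining elements are recursively evaluated arguments.

```python
import json

# Hypothetical operation table -- the real Lispon's ops are not documented here.
OPS = {
    "concat": lambda *xs: "".join(str(x) for x in xs),
    "+":      lambda *xs: sum(xs),
    "list":   lambda *xs: list(xs),
}

def ev(expr):
    if isinstance(expr, list):
        op, *args = expr
        return OPS[op](*[ev(a) for a in args])
    return expr  # strings and numbers are self-evaluating

prog = json.loads('["concat", "id-", ["+", 1, 2]]')
ev(prog)  # evaluates the inner ["+", 1, 2] first, then concatenates
```

Stored JSON templates and batched calls both fall out naturally: a batch is just `["list", expr1, expr2, ...]`.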

This reminds me of MiniMAL [0].

[0]: https://github.com/kanaka/miniMAL

I created https://github.com/onnlucky/hotel for a few different reasons.

1) I wanted to see how far you can go by making every language concept first class, because only first class things can be abstractions.

2) But another big motivation was that languages usually grow toward building large systems. I wanted every tradeoff to go to the human side, making it more suitable for beginners. For example, 0.1 + 0.2 = 0.3: slower, because it doesn't use native floating point numbers, but much more humane. Similar motivation led to removing import statements. Or no difference between object fields and methods, e.g. string.length is the same as string.length().
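The 0.1 + 0.2 point is easy to demonstrate. One way a language can make that sum come out exact is to use rational arithmetic instead of native binary floats; Python's `fractions` module (used here purely as an illustration of the tradeoff, not of hotel's implementation) shows the difference:

```python
from fractions import Fraction

# Native binary floats cannot represent 0.1 or 0.2 exactly,
# so the familiar surprise appears:
native = (0.1 + 0.2 == 0.3)          # False

# Exact rationals trade speed for human-expected results:
exact = (Fraction("0.1") + Fraction("0.2") == Fraction("0.3"))  # True
```

The cost is that every arithmetic operation goes through arbitrary-precision integers rather than a single hardware instruction, which is exactly the "slower but more humane" tradeoff the commenter describes.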

Do you really think that those things make it easier? It seems like when languages add these kinds of "beginner friendly" features, they just end up as gotchas and quirks. (Like in JS or SQL)

I also teach programming, and see many of the beginner mistakes made.

Part of what I tried is that the smallest subsections of the language would be complete and useful by themselves. Without ever having to say: "this part you do not need to understand yet". That is where the "beginner friendly" part focused on.

Not by making it "simpler" if that would sacrifice first class-ness. Say JS with its global scope promotion, maybe easier, but not first-class, clashes are inevitable.

And for example, the no import thing is based on very predictable scope rules, which can be postponed while learning, but later are completely deterministic and predictable, while still first class, so that the programmer can have control over it if needed.

But in a way it was mostly an experiment to get my thoughts clear on these matters. And for instance the concurrency things in there have less to do with beginner focus.

I haven't created a programming language per se, but I created https://mypost.io/ which is a bit more advanced than most single-webpage creation platforms, as it allows HTML, CSS, and JavaScript. With that, I basically added hundreds of lines of code to accept BBCode, from random strings and names to advanced mathematical equations and formulas. I mean, I didn't write the scripts per se, as I'm using JavaScript code available on the Internet, but I did write the BBCode that makes them all easier to write out and publish on the website. For the math, however, I'm still writing up the documentation.

I forked an existing scripting language https://github.com/mingodad/squilu (from https://github.com/albertodemichelis/squirrel) and did some bug fixes and modifications to better suite my needs:

- C/C++/Java/Javascript like syntax

- Be able to write a subset of C/C++/Javascript (meaning it can be interpreted or compiled when performance matters)

- No global variables by default, warning when locals are overridden

- Clean hash tables (no builtin methods/properties)

- Fast enough (sadly it can't beat lua/luajit)

- Small and easy to extend (one man can master it)

- Cross platform


Several: one for driving the film printer in the Image Processing Lab, which let me create a program for a printing job and then re-use that program rather than manually run all the steps. One for compiling graphics operations into the minimal RISC code of an Intel graphics chip. One for programming robotics behaviors in mobile robots. And a variation on TUTOR for writing some automated student testing software.

But 'creating a language' is just a tool in the programmer's tool box that allows a set of capabilities to be strung together in different ways automatically. That has been a useful tool ever since Jacquard used it to program his looms.

Yes, I wrote a few different ones for text-based BBS games I wrote in the 90s. I did it because I wanted to write a totally customizable game engine that didn't require any additional tools. Also, I don't think there were too many free options for me at the time. (Maybe there were, but I didn't really look.)

I did it again years later when I was writing another game engine, this time sprite-based. In this case, I could've used an external scripting language like lua, but I figured it would be fun to learn what it takes to write my own language.

I learned a ton in both cases even though they were really toy languages that were meant for a very tiny scenario.

I usually make my own dialects of languages that already exist; I got furthest making my own Forth and Lisp interpreters. As for why, the best way of learning a new language, for me at least, is to implement (or even just try to implement) an interpreter or compiler for it. Despite being for learning, I have managed to make something semi-usable out of both of them.

The interpreters are available at:



I created a scripting language for my interactive fiction engine because the existing ones have too many assumptions about gameplay. It's Turing complete with ultra minimalist syntax using punctuation as the few keywords it has.

Is it (the engine) multiplayer capable?

I'm working on a cross between IF and LP-style MUDs with support for graphics and sounds, using Io as both implementation language and scripting language (and probably Nim for GUI once I get to it). The code lives here: https://github.com/piotrklibert/mrtr and is currently a mess. I also got side-tracked and started porting pyparsing to Io to get a nice DSL for parsing user's commands, but the idea has been on my mind for a very long time and I'm determined to finally do something about it.

I'd be happy to learn more about your project and to chat about it if you'd like.

Mine was single player and had no parser as it was intended for touchscreens and drop down menu interface. Also see my reply below - your solution is far more sensible and is what I would go with today too.

Github page? Sounds interesting as I've always wanted to write one, but never had the time and I'm sure Inform7 is bigger than what I need.

I keep my Internet personas separate - just like your throwaway7645 account name suggests you do. It's not that big, I wrote it in between jobs. It's only there so that non-tech people can write stories for it. Nowadays I'd just use any old scripting language and write a library for that instead. I suggest you go with that solution instead.

I write a DSL now and then. I also wrote Toadskin[1]. This is a good reminder for me to rescue it from the Internet Archive and stick it in GitHub along with the fix for the bug in '[' that it looks like I never published.

The reason I gave at the time was: "If you have to ask, then you have never been stuck in a hotel room in Fort Worth with no internet access." (This was 2003)

1. https://esolangs.org/wiki/Toadskin

Edit: Actually there's more than one bug there. I'll get it fixed before I revive it.

Didn't create the language, but had to implement a parser for RELAX NG Compact[0] years ago (in PHP!!).

It was to provide a client the flexibility they "needed" for data validation specifications.

It was a mess. We suggested many alternative routes, but the client insisted on this. We did make the saner decision to go with an already-established markup language.

We couldn't even do the full XML version, as the client didn't want to "do XML".

[0] http://relaxng.org/compact-tutorial-20030326.html

Yes, EasyAM, a programming language for organizing analysis conversations.

I created it to eliminate:

1) Situations where different people in an organization have conversations about the same thing and reach different results, not being aware of the duplication

2) Situations where lots of people in an organization are forced to copy and repeat the same information over and over again

3) Churning and duplication of effort where various people and teams are learning about how and why an organization works and how their work fits into the big picture

4) All the noise and churn it takes for a team to ramp up on a story that it hasn't seen before

Is there any documentation about this? Would by nice to have as a simple interface to a RM tool.

Sure. Email me.

I'm working on Fold – a modern functional language with lisp-like macro-system on top of OCaml.

- https://github.com/fold-lang/fold/wiki/Language-Overview

The language syntax is defined as a library and can be extended, which makes Fold ideal for DSL implementation. The powerful macro system can also be used to perform many static optimisations. My goal is to have a very flexible and performant language specially suitable for data processing.

I am working on a draft for something that started off with my discontent with the direction C++ was going in. So, in a few terms, it follows the performance focus principle and philosophically it's being designed to serve you as a tool (i.e. not staying in your way) instead of trying to discipline you (even if that means letting you shoot yourself in the foot). C++ got messy in my view. For now I've been collecting ideas on language design from every piece of info I've stumbled upon, but there's nothing implemented so far.

I'm currently working on a DSL for manipulating tables. It's a cross between the Smalltalk family (ST, Self, Newspeak, Io), SQL, and vector languages like Q. The code translates into Java, where it executes tablesaw code. Tablesaw (https://github.com/lwhite1/tablesaw) is a very fast dataframe for Java, with pretensions.

I'm mostly doing it for fun, but also to make it easier to use tablesaw for exploratory analytics.

I made Bard, a functional-first Lisp. It started out years ago as a version of s-expression Dylan and then, after I got an early compiler and VM for it working, began to accumulate changes as I went through successive phases of incorporating new ideas followed by redesigning to make it simpler.

The why of it is simple: I designed it to make me happy in my work.

I've made a couple of products with it, but I don't think anyone else has ever used it for anything.

It's between versions at the moment, experiencing yet another assimilate-then-simplify phase.

I made a little DSL for a program. Soon enough after requests the functions included a ternary function, and some custom functions. And before you knew it it was the bastard child of every bad programming language imaginable.

> and why

I didn't intend to. It just happened. And now it's used for all kinds of things it can't really manage because it doesn't have the fundamentals right - it was just supposed to do some simple formulas and it's too late to design it like a proper language.

This is how both js and php were born I imagine...

Created https://github.com/shanhuio/smlvm

A C like language with golang like syntax. Pure Go language implementation.

https://shanhu.io/smlvm - shows the dependency structure of the project.

https://github.com/shanhuio/smlhome - example code that we wrote in the language.

IfLoop (http://www.tapirgames.com/App/Ifloop).

For better programming experience on touch screens.

I created several programming languages a few years ago, thinking "I could do this better" (than the languages I was using). Turns out it was harder than I thought, but I learned a lot about how software works. It remains by far the best thing I've done to advance my skills.

Some time after that, the Julia language came into existence (http://julialang.org/) which is pretty much everything I wanted out of a language.

I made a toy language called Cowbel to experiment with various language minimalism features:


It's an attempt to take as many features as possible out of a language and still have an expressive Javascript-like language. It worked pretty well, but of course, actually producing the compiler is the easiest part of any new programming language.

Features include:

- all types are anonymous (although interfaces are named)

- aggressive type inference which allows the compiler to distinguish between scalars, direct objects and indirect objects (via a vtable) at compile time --- not only does it do direct function calls if it knows what type an object is, but you don't have the scalar/object schizophrenia that Java and C++ do; integers are just objects implementing the int interface, but there's no object overhead

- very limited compiler knowledge of the language semantics --- int semantics like addition, subtraction etc are defined in the standard library

- template-based generics, full closures, nested functions

- multiple return values

- objects, methods (but no classes)

- non-nullable

- compiles into C for maximum interoperability

It was going to have prototypical inheritance via composition, where an object could inherit methods from an arbitrary number of other objects, but I never got round to implementing that bit.

One of the bits I'm most proud of is that I managed to unify block scopes and object constructors. {...} constructs an object. The only difference between one used as a block and one used as a constructor is whether you assign the result to anything --- the compiler just optimises everything away if you don't! This makes nested functions and methods identical, and drastically simplifies the language semantics...

    if (condition) { thisIsABlock(); }
    var object = { thisIsAnObject(); }
It's not actually useful, mind. There are too many rough edges and the compiler tends to crash if you give it invalid code. But it was really interesting to do.

Oh, yeah, a very early prototype of this language compiled into Gnu Make. I ended up writing quite a lot of an arbitrary precision maths library in Make. I... have no idea what I was thinking.


In the early 90s I created a small custom language. The purpose was to learn/build a compiler though, but other projects came along so it was eventually scrapped. I did get fruits from it though in terms of how compilers work.

I also came up with an idea to build a common intermediate compiler so you could use C/C++, asm, BASIC, custom language etc. on top, then a descriptor/translator file would compile to a common assembler base. This was on the Amiga (and some years before .Net).

Yes, in the late 80's. I was frustrated by how slow PL/M was to compile even the most trivial program (on the order of 5 minutes for DO ... END). I developed PLD, which had fast compilation as its objective. I separated all tokens with spaces and code was strictly left to right (e.g. A + B -> C ; rather than C = A + B ;). The first self-compiler was about 1500 lines and self-compiled (after some optimizing) in about 45 seconds on the same hardware as PL/M above.
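Strictly left-to-right syntax makes the evaluator almost trivial, since there is no precedence or lookahead. This is a speculative reconstruction in Python of how such a language might execute (my own sketch from the description above, not the original PLD compiler): operators apply immediately as tokens arrive, and `->` stores the running value.

```python
# Toy left-to-right evaluator: "A + B -> C ;" assigns A+B to C.
# Tokens are space-separated, exactly as the comment describes.
def run(src, env):
    toks = iter(src.split())
    acc, pending = None, None
    for tok in toks:
        if tok in ("+", "-", "*"):
            pending = tok                 # remember op; apply on next value
        elif tok == "->":
            env[next(toks)] = acc         # store running value in a variable
        elif tok == ";":
            acc, pending = None, None     # statement terminator resets state
        else:
            val = env[tok] if tok in env else int(tok)
            if acc is None:
                acc = val
            elif pending == "+":
                acc += val
            elif pending == "-":
                acc -= val
            elif pending == "*":
                acc *= val
    return env

run("A + B -> C ;", {"A": 2, "B": 3})  # C becomes 5
```

Note that `10 - 3 * 2` evaluates as `(10 - 3) * 2 = 14` here: with no precedence rules to resolve, a single pass suffices, which is plausibly why compilation was so fast.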

I have a language that is an experimental pure lazy functional programming language with a type system whose types are unions, not necessarily disjoint, of allowable value constructors. The syntax is S-expressions, so it kind of looks like Lisp but runs like a Haskell/Smalltalk mash-up. Someday I'll write it up, but I'm presently working at a company that claims ownership of everything I ever think about, so it's kind of on hold for now.

No, but I've considered some ideas at the back-of-envelope stage:

- typesafe portable macro assembler. Used for those cases when people want "close to the metal" behaviour of C, but with less footgun potential.

- Archaeology of old language features that have been thrown out with the bathwater (e.g. COBOLs use of '.' as statement terminator, "PIC" statements)

- attempt at Perl-style language which optimises for English text readability, comprehensibility and euphony

I would be interested in working on the portable typesafe macro assembler!

At the bank I used to work at, traders had a habit of wandering over and asking questions about the differences between the prices of securities over time (as arbitrageurs, it's all they thought about).

I got so sick of writing one-off SQL queries that I wrote a very small (not much bigger than a calculator) language to express their questions. They liked it, but it had no potential for use outside of our trading desk (which was disbanded a few years later).

In college I did a project where I created a visual programming language. I thought it would make the program easier to read, but in hindsight it's pretty clear that these systems would be too cumbersome to use except when teaching basic programming concepts.

Professionally, no, but I've taken the approach of "I'm writing a compiler" when ingesting someone else's xml schema.

It's very useful when you need to identify and report errors very early.

Graphical languages are quite common in the industrial controls space, ladder logic in particular. Horrendously cumbersome to you or me, but usable by controls engineers and factory technicians.


I designed a logic-functional language called Cosmos (https://github.com/mcsoto/cosmos) based on what I thought would be a good/minimalist/innovative/easy-to-use language design that I'd like to program in. It's lacking in implementation, as it'd ideally have at least a VM in C (and a game framework, I like those).

Yes, I have written a few, because this is the most interesting area of programming for me.

Many years ago, I wrote a BNF-like language to translate text into a syntax tree; it was part of my graduation project.

I also wrote a Lisp-like lexical preprocessor as an alternative to the standard C preprocessor; it can be used with other languages, for example with Python.


Yes, Battlestar, for fun and for the educational process. It's a different take on assembly, with the goal of creating tiny executables.

I would have done it completely differently had I started again today, but it works, and the "life" example code is kinda nifty.


I created a DSL for WebSphere application deployments while working as a contractor for a local company. There were 4 people on the team, and typically each update would take at least a half hour, and was quite error-prone because nothing was automated and each deployment would touch multiple servers.

Fairly simple language, parsed and executed in Perl. I doubt it survived long after I left, sadly.

I made a language for doing 2-adic arithmetic easily and exploring the Collatz 3n+1 problem.

I am not aware of any languages with 2-adic number primitives, so created one!
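The basic trick behind 2-adic arithmetic on a computer can be sketched by working with truncations, i.e. modulo 2^N (my own illustration of the idea, not the commenter's language): every odd number becomes invertible, so "1/3" exists, and the shortcut Collatz map extends to all 2-adic integers.

```python
# 2-adic numbers truncated to N low-order bits: arithmetic mod 2**N.
N = 16
MOD = 1 << N

def padic(x):
    return x % MOD          # canonical residue; -1 becomes ...1111

# Division by 3 is exact in the 2-adics: 3 is odd, hence invertible.
third = padic(pow(3, -1, MOD))

# Shortcut Collatz map T(n) = n/2 if even, (3n+1)/2 if odd --
# well-defined on 2-adic truncations because 3n+1 is always even for odd n.
def collatz_step(n):
    return n // 2 if n % 2 == 0 else padic(3 * n + 1) // 2
```

With primitives like `padic` and exact inverses built into a language, expressions such as 1/3 or -1 get natural 2-adic meanings instead of errors.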


Concept-oriented programming http://conceptoriented.org. Not a language yet but an attempt to generalize OOP. It introduces concepts instead of classes and inclusion instead of inheritance.

Still working on one because I think agent-oriented programming has a use, and DSLs are closer to how we talk about what a program does than any API.

Sure, http-based rpc mechanisms won, but it seems like agents have a lot of promise for parallelism and code deployment.

I wrote a parser and interpreter for a game oriented BASIC with a 2D game engine and api inspired by old game consoles. Wrote it in C 10+ years ago. It was one of the greatest learning experiences of my life. Never saw the light of day :)

I wrote RemObjects Oxygene (Pascal), C#, Swift, and Iodine (Java) for .NET, Java/Android, Cocoa, and native Windows and Linux. Started about 12 years ago, still going. Originally because I thought it would be fun; it ended up being my job.

Internally at my company, because https://en.wikipedia.org/wiki/Greenspun's_tenth_rule

I've done a lot of little languages for game projects, at first poorly and unsuccessfully, but increasingly getting wins.

I made a postfix stack language as a scripting system to describe bullet patterns; I didn't use it to the degree I thought I would and it was hard to debug: Lessons Learned.

I made an s-expression parser and small interpreter(not particularly Scheme-like, though) for an in-game console. It added a lot of friction to make the API accessible from this parser: Lessons Learned.

I made a data language intended to allow for the composition of documents with varying node types. Subsequently I realized I had reinvented XML and started using that instead.

I made a small VM, "Ivy", intended to provide a logical abstraction for concurrency(e.g. game actor state machines), with subthreads spawned and maintained by running a special opcode(they get pushed onto a stack, and then terminated when the opcode at the base of the stack returns false). I then target the VM as the output of various data languages. The approach didn't really provide wins in practical situations as it turned out to be more expressive to have the VM spawn more instances of itself through an API call, but it has continued to be maintained and revised and the newest generation, renamed "Hedera", is mostly designed but not operational: It's shifted to a single-thread design and a new focus on clean, hot-swappable addressing of global data resources(think URIs) which I'm using throughout the game code now.

I made an XML-syntax GUI system to supplement an IMGUI. The document acts to represent the kinds of things that are normally stored in a retained-mode layout(positioning, nesting, relative scale, etc.), and it offers finer control with explicit stack push and pop and conditionals for quick disabling of elements. I'm still using this one.

I made a story engine scripted with an XML syntax. This one uses the Ivy VM described above, first compiling the data to a behavior tree system, and then to the VM opcodes. It supports concrete functions for gameplay(setting counters, rendering text, presenting choices, substituting names and personal pronouns) as well as their structuring in terms of which ones get called when, pushing story passages onto a stack, transferring outcomes to hardcoded algorithms, and support for recalling save games if the story content changes. There is a lot of customization that precluded working with Ink, Choicescript, Twine, etc. - taken in whole, the stack is probably overengineered and could have been built faster/cleaner with different approaches, but it works.

I made a little lambda calculus with super limited possible ASTs last summer to see if I could get a RL algo to learn to program in it. It failed horribly, but it was fun and I learned a lot.

Hecl: http://hecl.org/ - for fun, and because there was no scripting environment for J2ME devices back in the day.

I'm taking a class on Programming Languages right now and we're building a language using Racket as the host language.

It has no intended advantages other than to learn how a language is created.


I'm creating a small virtual machine based around the idea of associative memory (read: tuplespaces). I'm going to use it to write a ton of declarative code for smaller devices and for some of my own projects.

The opcodes are still in a state of flux, but a partial implementation exists in Python, with the intent of porting everything to C and getting a spec up. You have operations like 'match', 'exists', 'assert', 'retract', mathematical operations...

...oh, and no callstack. ;)
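A guess at what those primitives might look like, based only on the operation names in the comment (the actual VM's semantics may differ): tuples live in a shared space and are addressed by pattern rather than by location, with `None` acting as a wildcard.

```python
# Minimal tuplespace sketch: assert/retract add and remove tuples,
# match/exists query the space associatively by pattern.
class TupleSpace:
    def __init__(self):
        self.space = set()

    def assert_(self, tup):          # trailing underscore: 'assert' is reserved
        self.space.add(tup)

    def retract(self, tup):
        self.space.discard(tup)

    def match(self, pattern):        # None in the pattern matches any value
        return sorted(t for t in self.space
                      if len(t) == len(pattern)
                      and all(p is None or p == v for p, v in zip(pattern, t)))

    def exists(self, pattern):
        return bool(self.match(pattern))

ts = TupleSpace()
ts.assert_(("temp", "kitchen", 21))
ts.assert_(("temp", "garage", 4))
ts.match(("temp", None, None))       # both readings, addressed by content
```

Because everything is driven by pattern queries against shared state, control flow can be expressed declaratively, which fits the "no callstack" remark.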

Yes, Kayia (kayia.org) because code is not meant to be read, it's meant to be queried, and text files are a ridiculous way to hold information to be queried.

Groovy. Did you write this for actuarial work or something as a replacement for spreadsheets? The philosophy aside, what did you expect the use cases to be?

Thanks! It's kind of a funny story, the innovation isn't really the language but the data structures underneath -- which means I'm going to be pushing it initially as a database (of all things). I'll post more in a month or two.

Not yet. :)

(Been thinking about a language more Object Oriented than Smalltalk for a few years on and off, but not yet.)

Because time. :(

Because it's the Scheme way.

while never really used outside of a couple of games, I wrote a couple that were used as a scripting languages in MUDs, and one of them ended up as the logic engine in a commercial game in 1995.

Since 2013, I have been leading the creation of Envision, an in-house domain-specific language, intended for data processing, reporting, forecasting and optimization for supply chains and logistics. Small intro here: http://www.lokad.com/envision-more-technical-overview

Why? Between 2008 and 2013, Lokad provided a data forecasting API: send us your sales history and we'll respond with a good sales forecast for each product. This didn't work because:

- most customers don't have the know-how or infrastructure to clean up their sales history (for instance, by setting up adequate SQL queries when exporting data to our services), and instead either sent us garbage data, which led to garbage results and therefore lost us customers, or they cleaned up the data manually in Excel, which prevented us from being a permanent, automated step in their supply chain processes, and therefore lost us customers.

- most customers don't care about knowing the future, they care about knowing how much they have to order from their suppliers. These decisions can certainly be optimized if you have a good model of what the future looks like, but that involves even more knowledge about the customer's business and even more data (expected margins for products, current stock levels, minimum order quantities for various suppliers, etc).

Most of our successes in this early period were premium customers for which we could afford to write and run custom C# code that did the initial data cleanup and pre-processing, and then used the forecasts to generate a list of actual business decisions.

As you might guess, C# is not the best language for this. It costs too much to write, deploy, maintain and run. We examined the various candidates available at the time and decided to write our own language, with the following characteristics:

- 100% online, no local setup needed

- excellent static analysis to detect issues ahead of time (no type errors, null references, off-by-one or out-of-bounds errors, join arity surprises, etc)

- good high-level guarantees (no infinite loops, no accidentally quadratic operations, fully transactional behaviour)

- fast enough without manual optimization

- predictable performance (small changes in the input data should not cause major changes in execution speed)

- handle the "hard developer stuff" behind the scenes (multi-threading, memory management, error recovery, caching intermediate results to make REPL faster, etc)

- our various high-performance black boxes (such as demand forecasting) are available as functions in the language

- a small, well-designed (supply chain) standard library

Around 2005, I created a language for learning how to create compilers. It was just a learning exercise.

Between 2008 and 2010, I created a language with Java/C# like syntax, C/C++ like semantics, strongly-typed but with type inference, that was meant to be embeddable into C++ apps. I wanted to use it to create a game. However, I didn't want to create a Virtual Machine for it, so I opted to use the Parrot VM[1]. The choice to use Parrot was a huge mistake because that project turned out to be a clusterfok that never worked right, even though they had been working on the VM for close to a decade by that point. I abandoned that language and moved to another language because there were no other VMs that suited me at that point.

Since I was burned from the Parrot VM fiasco, I decided to create another language that didn't rely on a VM, so I created a general purpose API design language. In this language, you design APIs, and then write 'generators' in JavaScript that transform the API code to the final output. The benefit of this is that you only have to create one generator for each type of code/asset that you want to create, and use the same front-end API code (call schemas). So, for example, you can create a schema for some Amazon web-service, and then have generators that would output different types of clients, or some test-servers, or automatic documentation, sample inputs, dashboards, graphs, whatever, based on your one schema.

The language is written in C++ and it is very well tested (close to 3000 tests). My vision is that a developer will design their API schema once, and then use generators (written by the developer or by others) to generate all kinds of things from that one API: client-code, server-code, test-cases, docs, graphs, dashboards, etc. This way, we can focus on actual API design and leave the mundane tasks to the generators. The flow is like this: [schema(s)] -> [compiler] -> [intermediate representation] -> [generator(s)] -> [final output(s)]. I think this language is neat.
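The schema-to-generator flow can be illustrated with a toy example (names and schema shape here are entirely hypothetical; the real system is C++ with JavaScript generators): one schema feeds several independent generators, each producing a different artifact.

```python
# One API schema, many outputs: each generator is an independent
# transformation over the same intermediate representation.
schema = {"name": "GetUser", "params": [("id", "int")], "returns": "User"}

def gen_client_stub(s):
    """Generate a client function signature from the schema."""
    args = ", ".join(f"{n}: {t}" for n, t in s["params"])
    return f"def {s['name']}({args}) -> {s['returns']}: ..."

def gen_docs(s):
    """Generate human-readable docs from the same schema."""
    params = ", ".join(n for n, _ in s["params"])
    return f"## {s['name']}\nParameters: {params}\nReturns: {s['returns']}"

gen_client_stub(schema)   # client code
gen_docs(schema)          # documentation, from the identical source of truth
```

Adding a test-server or dashboard generator never touches the schema, which is the maintenance win the commenter is after.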

While I was working on my API design language, I wanted to have really good error messages, and I wanted the error messages to be all in one file, so that I could glance at them easily while developing. So, I ended up designing a micro-language for handling text-templates, where you have things like: "The ($noun) is ($verb)". The thing is, you'll quickly realize that replacing variables is not enough. Eventually, you'll want conditionals, loops, .... function calls (:-0), so I added them.

This micro-language is embedded in a couple of files within the API language, but I thought it was neat, so I decided to create a proper text-template language based on it, because if you search for C/C++ text-template languages, there are not a lot of choices. I have the general design for the language completed in my head, and I have a basic implementation of some of the basics like syntax parsing, variable interpolation and basic function calls, but I paused development on it for now while C++17 hits the streets, because I had a real need for std::string_view to minimize string copying. Also, since I decided to make the implementation of this text-template language more robust and performant for enterprise type of use, I decided to go with an SQL-like prepare->execute type of processing; this requires adding a micro-VM for the execution part, which adds to the development time.

[1] http://parrot.org/
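The variable-interpolation core of such a template language is small. This is a minimal sketch of the `($var)` syntax quoted above (in Python for brevity; the conditional, loop, and function-call layers the commenter added are omitted):

```python
import re

def render(template, env):
    """Replace each ($name) with the corresponding value from env."""
    return re.sub(r"\(\$(\w+)\)", lambda m: str(env[m.group(1)]), template)

render("The ($noun) is ($verb)", {"noun": "parser", "verb": "running"})
```

As the comment notes, this simple substitution is exactly the step that is never enough: once templates need branching or iteration, a real parse and an execution phase (the prepare->execute split) start to pay off.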

A version of Scheme, from 2013.


It is a very good exercise to implement a Scheme interpreter and an LR parser. I have done so many times since 2007, in many different programming languages. It is a very worthwhile exercise that involves variable levels of resistance and some difficult work. The Wikipedia article on LR parsers is very helpful.

Why do people train for competitions? There is such a thing as being best in the world at this stuff, but only in the imagination. There is no real "top gun" prize because you don't want these hot shots going insane with jealousy. It's just an idea. Programming languages help you compete without competing.

In the future, people won't create programming languages because the math will have caught up to handle them. As it is, codebreaking math and classified math are holding this back. I think this will change as soon as people get serious about automatic translation of mathematics textbooks such as the Springer-Verlag series (undergraduate and graduate).

Hi, great people and great works! Buuuuuut, is there someone who thinks the lexing style of almost all these languages is wrong?

Yes, as part of my PhD I have built a parallel functional language that can run on GPUs: https://futhark-lang.org

I've done some tiny domain specific interpreters for specialized use in products.

I think I'm going to have to, though. I used to like C++ but I really am not a fan of the direction it's moving in. It used to be a simple "better C" that let you write abstractions over simple code that you could understand how

