My favorite extreme example was the time I used a super fast 6502 assembler environment that would do the "change a line of code and get the target running" cycle in a couple of seconds. This kind of interaction is magical. You're still writing kind of crappy assembly language, but it almost doesn't matter. Things just flow.
On the other extreme: A large component of me deciding to quit a job was the fact that a build of the product took four hours. And the build was typically broken. So: Arrive in the morning, sync, wait all morning for the build to fail. Do another cycle after lunch: fail. Go home, repeat for weeks. (Add to this: Managers who refused to buy decent development machines, people who kept checking in busted code and making things worse, and a crushing schedule. Who needs that?)
These days I do a lot of C++, and some PHP. The C++ projects take a few minutes to build, which isn't great, but it's survivable. The PHP "builds" as fast as I can refresh a browser page. And as much as it pains me, on most days, when I grit my teeth and get honest about it, I'm more productive in PHP. And I despise PHP.
I learned LISP early on in my career; wrote a few LISP interpreters, goggled at the majesty of LISP machines, read all that I could. But I've never shipped a significant project in LISP, nor am I likely to. And the Newton actually flipped from a LISP (well, OOPy-Scheme) implementation language to C++ in order to ship. I'm wondering if there is some law of human nature at work: You can have elegance and comfort or you can have a product. I sure hope I'm wrong.
Another reason is that I don't need to waste time discussing and arguing about which OOP patterns to use; in C it's more or less fixed how you write code. Of course one can go the extra mile and reinvent vtables and dependency injection, but it all feels foreign to C.
So maybe give C a try: it's amazingly fast to compile and link, yet you feel almost at home coming from C++.
In reality, what it means for me is that I need to spend time making decisions about containers, which is good for game development, as what makes games fun doesn't have any connection to templates. But it's also bad for casual, non-game-related code. I find it much faster to just "hey, load this JSON, do some processing, and save the JSON back" in Python than to try to do it in C just for the sake of C. Ideally, namespaces/templates and operator overloading in C would solve all these struggles, but then where does C end and C++ begin?
-  https://github.com/gingerBill/gb/blob/master/gb_math.h
-  https://github.com/attractivechaos/klib/blob/master/khash.h
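The "load this JSON, do some processing, save it back" chore mentioned above is a few lines in a scripting language; here is a sketch in TypeScript on Node rather than Python (the file name and `Item` shape are made-up examples, and in C this would mean pulling in a JSON library and managing memory by hand):

```typescript
import * as fs from "fs";

interface Item { name: string; price: number; }

// The processing step, kept as a pure function so it's easy to test.
function discount(items: Item[], rate: number): Item[] {
  return items.map(i => ({ ...i, price: i.price * (1 - rate) }));
}

// The whole load/process/save round trip (the file name is hypothetical).
function processFile(path: string): void {
  const items: Item[] = JSON.parse(fs.readFileSync(path, "utf8"));
  fs.writeFileSync(path, JSON.stringify(discount(items, 0.1), null, 2));
}
```

The point isn't the language so much as the absence of ceremony: no container decisions, no allocation, no build step worth mentioning.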
A lot of CS seems to be story-telling: "This feature makes language X better/worse, because it just does, obviously" with no independent testing or peer review of language productivity and robustness in real work environments.
That would be sociology or one of its siblings, no?
... or, at best, the piddly "ethics" course that gets tacked on to most undergraduate CS degree programs?
This observation ("workflow") was one of the 3 bullet points outlined by Keith Adams' 2013 presentation "Taking PHP Seriously".
(KA's presentation was partially a response to the 2012 essay "PHP: a Fractal of Bad Design".)
Basically, the horrendous inconsistencies and flaws outlined in "PHP Fractal Bad" can be true ... but simultaneously be overshadowed by "workflow" benefits.
 starting with slide #14 of "Adams-TakingPHPSeriously.pdf" : https://github.com/strangeloop/StrangeLoop2013/tree/master/s...
 4 previous HN threads: https://hn.algolia.com/?query=php%20seriously&sort=byPopular...
So I get the benefits of static types, plus the benefits of fast iteration, PLUS the benefits of a fully dynamic language for when I want to accomplish something quickly.
PHP - for all its faults - does not suffer from these kinds of recompilation issues, yet can be optionally pre-compiled for added speed when you do go to production.
I realize this isn't always possible, and is probably easier to do in a managed language, but you can cut down on iteration times by mostly forcing the IDE to do incremental compiles. You can work in real-time on code that might take tens of minutes for a full build.
TDD is fun.
Rich Hickey, the creator of Clojure, has a fun quote that goes something like: "You thought you wanted TDD, but really you wanted an interactive REPL."
My gripe with TDD is that it gets tedious in its own way. You eventually need to mock some aspect of the code, and because you're always working in the small, you start to feel distant from the medium and large picture of the code.
I think a good workflow is to do initial experiments in a REPL and then, once I get an idea of how the code will look, write tests and the actual code that will run in production.
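That REPL-first, tests-after workflow can be sketched like this (a hypothetical `slugify` function; TypeScript here, but the idea is language-agnostic):

```typescript
// Poke at a function interactively (e.g. in `ts-node` or the Node REPL)
// until it behaves, then freeze that behavior as a regression test.
// `slugify` is a made-up example, not from the thread.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")  // collapse runs of punctuation/space
    .replace(/^-|-$/g, "");       // trim leading/trailing dashes
}

// Step 1 (exploration): evaluate calls in the REPL and eyeball results.
// > slugify("Hello, World!")   // "hello-world"

// Step 2 (pin it down): once the shape is right, keep it as a test.
console.assert(slugify("Hello, World!") === "hello-world");
console.assert(slugify("  C++ & PHP  ") === "c-php");
```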
Ya, I agree, tests are a good treatment, the best for most languages. A good interactive REPL is often better, though. Clojure has tests, good IDEs, and debuggers. Yet people don't care about them as much, because the REPL solves most problems. In return, it means the IDEs and debuggers aren't as good as Java's, because people care less and so invest less in them. Sometimes I'd want a better IDE or debugger to complement the great REPL experience, but I wouldn't trade the great interactive REPL for them.
Of course, writing testable code using OO almost requires dependency injection IMO. With FP, it's almost trivial.
I don't have a long build step for my codebase (compiling TypeScript just takes a few seconds and incremental is instant) -- but I do avoid lengthy deployment cycles -- deploying code to AWS Lambda takes quite a while.
Starting with a simple unit test or spec (what I meant by functional testing, though this can be BDD too) can be easier to start with and change as you figure out what you want. Even with specs, or a BDD framework like you said, no need to bring in a whole new DSL unless you need it IMHO, others may disagree.
Doesn't LISP offer the same "fast feedback" cycle that e.g. PHP or whatever does? And even faster than C++?
It's hard to break this habit of familiarity; it's self-replicating, because the less familiar people are, the less likely they are to have good teachers and teaching material about it, etc.
Some people (myself included) don't like the aesthetics of Lisp as a language, nor do we like the enforced functional program structure, nor do we like the lack of infix operators. Maybe I "didn't have good enough teachers," but for every other language I've ever learned I didn't need those "good teachers" to achieve a strong level of mastery, and I've learned more than a dozen to a high level of mastery.
After a while you have to stop blaming the people and instead just admit that Lisp isn't as awesome-for-all-purposes as some famous people seem to think it is.
> After a while you have to stop blaming the people and instead just admit that Lisp isn't as awesome-for-all-purposes as some famous people seem to think it is.
I don't know, sounds like it's just not as awesome for you, but probably is that awesome to all those famous people saying it is.
I spent months working on a Lisp project in college. I pretty much hated it.
Like I said, it's possible the style doesn't sit well with ya, and that's cool. That said, here's my anecdote. I also did Lisp in college and hated it. Thought it was stupid, had the worst, most unreadable syntax I'd ever seen. It made every trivial thing hard, like: why can't I just loop?!? where are the variables?!? how do I just do one thing followed by another?!? Where do things start and end?!? I don't even remember what Lisp it was, but trust me, it was some academic cryptic variant, probably maintained by the teacher himself; it had no library ecosystem, lacked documentation, and had the ugliest editor I'd ever seen. Anyways, I moved on and never thought of it again.
Like our pg? Who loves Lisp to death, and made his first major app (a web page store generator, if I recall) in Lisp? But then after selling it to Yahoo -- ported it to another language (Python?) so that other developers could actually modify it?
It's not "lowest common denominator" to expect a language to stay within certain bounds of flexibility. If every single project has so many macros that it's effectively a DSL that no other programmer anywhere understands, then you've created a software package that becomes instantly unmaintainable if the wrong people get hit by a bus.
In college, by the time I picked up Lisp, I had already used:
* Assembly Language
And I'd already worked on video games professionally (in 1987, in a project for Lucasfilm Games, before they became LucasArts).
Java and C# had yet to be invented. At the time, there weren't libraries and documentation to speak of in any Lisp, so that's something. It was Common Lisp (as well as "elisp" on the Atari ST, which of course was completely incompatible...).
I don't see the advantage of using it over using other languages
That's probably the best question to ask. I think sometimes there are a lot of false benefits to new/different tools that, while cool, rarely add real value.
The only things I feel are benefits brought over from Lisp are the interactive REPL workflow, concise notation, code that can be used as an extendable serialization format and configuration format, as well as macros (a double-edged sword). Macros have helped me at rare times to avoid having to write verbose code, but at my work we have very few macros even after 2 years of accumulated code; I think we added only 5 or 6 macros. And I wouldn't want to have more, for the reason you mentioned: we're not looking to use our own custom programming language, and add too many macros and that's what you get. By the way, Clojure chose not to have reader macros, which other Lisps have, and those are the ones where you can really go crazy changing the very syntax.
I'd be just as fond of another language that was functional, immutable, interactive, and with smart abstractions. I can live without macros, even though I love a good macro when you need one. I would prefer a syntax with only expressions; I'm not a fan of statements, because you're restricted in where you can use them. And I like concise syntax, for having to type less and read less, being able to see more at once. I reckon not all Lisps fit this description, so maybe it's really not Lisp I should evangelize, but Clojure specifically. Elixir also fits this description, that I know of.
On static vs dynamic type systems, I still don't know. I like types; they feel good to have, but I just don't feel they really help. I think they're a false sense of safety, and they just slow you down. That said, maybe they pay off in long-term maintenance; I'm still unsure though.
You get all of the above with TypeScript except immutability. Pull in Lodash for extra functional sauce, or the functional variant of Lodash for increased composability. And the serialization format is the industry standard JSON.
I think once you have static types, immutability isn't as critical, but if you're set on it, there's always Immutable.js 
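For what it's worth, plain TypeScript can also get you a fair amount of compile-time immutability without Immutable.js; a minimal sketch (names are made up):

```typescript
// `readonly` makes mutation a type error -- checked at compile time
// only; nothing is frozen at runtime. A minimal sketch, not a full
// substitute for Immutable.js's persistent data structures.
interface Point { readonly x: number; readonly y: number; }

function translate(p: Point, dx: number, dy: number): Point {
  // p.x += dx;  // would be a type error: x is a read-only property
  return { x: p.x + dx, y: p.y + dy };  // return a new value instead
}

const origin: Point = { x: 0, y: 0 };
const moved = translate(origin, 3, 4);
console.log(origin, moved);  // origin is untouched
```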
> On static vs dynamic type systems, I still don't know. I like types; they feel good to have, but I just don't feel they really help. I think they're a false sense of safety, and they just slow you down. That said, maybe they pay off in long-term maintenance; I'm still unsure though.
Technical debt earns compound interest over time. Static type systems let you zero out technical debt for almost no effort.
The only "slow you down" I get from TypeScript is defining declarations for libraries that are missing them. That's about 5 minutes to 15 minutes per library. That's an extremely low up front cost to pay, and everything else about TypeScript is faster (auto complete on object members) and better (refactoring is fast and easy enough that it's easy to "just do it" and not put it off until later).
 http://reactivex.io/ or https://github.com/cujojs/most/ are good examples.
Is it that "Some people" don't like those, or that "some people" were exposed to Algol-style languages first, and can't easily adapt to the other style?
>Maybe I "didn't have good enough teachers," but for every other language I've ever learned I didn't need those "good teachers" to achieve a strong level of mastery, and I've learned more than a dozen to a high level of mastery.
Yes, but I bet the first ones you learned were already Algol-like.
After that, adding other languages doesn't mean much if they all share the same paradigms and syntax style. It's like a Common Lisp person also learning Scheme, Racket, Clojure, etc. -- in the end there's not much difference.
Going from those to different paradigms is what makes an actual difference (e.g. learning Coq, or Erlang, or Haskell, or Lisp or Forth, etc).
BZZZZT! Wrong answer!
I started with BASIC and transitioned from there to assembly language. Wrote several games and device drivers in assembly language (BASIC was too slow) and then learned Forth. Tried really hard to like Forth, and played with it for a while; loved some aspects of the language, but couldn't wrap my head around how to write a game with it, so I gave up and kept writing games in assembly language.
Pascal was my next language, and that wasn't until later. Much later than that I finally got to C.
You might think of BASIC as Algol-like, though I'd say it's more Fortran-like. Assembly language, though, is about as un-Algol as a language can be, as is Forth.
The problem with image based languages is that you've thrown away your file-based tool system. You're so used to your favorite text editor, source control system, etc? So sorry, we're building better tools! And when your code is ready, you just deploy the image!
It sounds great in theory, once those tools are delivered, but in practice you've lost more than you gained. No matter how good your environment, you do not beat the collective effort and productivity of file-system based tools.
What I say about image based tools also applies to code deployed in a database. Good luck keeping source control in sync with stored procedures...
First, what's bad about that? Our text editors and source control systems are indeed crap compared to what we could get with image based systems with inherent (not text-level) knowledge of code. The only real argument is that we still need to work with legacy crap text-based languages also, which can't share those tools. But that's not really an argument against those tools being good in themselves.
Second, there's nothing about an "image based language" that says you can't also have a textual representation, and thus work with your editor (or the image based editor) and "regular" SCM tools.
Third, LISPs are generally not image based.
However no image based system is poised to take over the world. And I'm willing to bet my salary that none will ever have sufficient mind share to do so. Programmers switch languages, environments and operating systems. When you add switching tooling to the learning curve, you've just created a barrier to entry to the environment, and programmers are going to spend time learning something that is not applicable to whatever else they do next.
So you wind up using an "awesome tool if you just understood it" that your hires don't understand. And the "awesome tool" probably isn't quite what those people find most productive. So it becomes a constant frustration.
Move back to a text-based language, and this problem goes away. People use the tools that they know. I'm probably going to write code in 4 different languages this week, and have to look at what is going on on a half-dozen machines... and won't need to switch tools.
As for LISPs in general, it depends. What I've seen with Lisp has mostly been image based, but I don't use Lisp very often.
I bet people said the same about trains and ships. There was a lot of work put into horse-powered infrastructure, and the first cars and trains were very slow and bad. Same with boats: at the time steam-powered ships were created, wind-powered clippers were 5 times faster and much more reliable.
But somehow it never happened.
I agree that it's easy to build a slow, batch based build system and just tell people that's how it is. Fast iteration is important and requires effort to keep working. It might even be more important than a static type system, at least for some apps. But you can have both.
For example, I start my program thinking that I need a duck for all of my various waterfowl systems. However, four days into the coding process, I realize I need to allow for an ugly duckling to be passed around as well. Now I have four days worth of code to comb through and re-type. I could just use a refactoring tool to change all of the existing 'duck' types, but some of that code actually does need a duck, and won't work with an ugly duckling. So I have to come up with a more abstract type which can encompass both an ugly duckling and a duck, and refactor that into my program.
Eventually my code will be correct again, at least until I realize that a platypus needs to be included into my now-renamed aquatic_ecosystem as well.
The nice thing about strong and dynamically typed systems - I just start treating the incoming object as what I need it to be. No error chasing, no wading through four days worth of code. Yes, I'm more likely to have incorrect code, and it's probably going to show up while the code is running, and not before. Many times, that's a tradeoff I'm willing to accept.
My ideal system? I don't think at all about types. I just write code, and the (still fast) compiler will tell me when I'm passing something that doesn't quack to a function which expects quacks.
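TypeScript's structural typing comes fairly close to this ideal, for what it's worth; a hedged sketch with made-up names:

```typescript
// The compiler accepts any value that "quacks" (has the right shape)
// and rejects the rest before the program runs -- duck typing, but
// checked statically. All names here are invented for illustration.
interface Quacker { quack(): string; }

function makeNoise(q: Quacker): string {
  return q.quack();
}

const duck = { quack: () => "quack!" };
const uglyDuckling = { quack: () => "...quack?", ugly: true };
const platypus = { layEggs: () => true };

console.log(makeNoise(duck));          // fine: it quacks
console.log(makeNoise(uglyDuckling));  // fine: extra fields don't matter
// makeNoise(platypus);                // compile error: no quack() here
```

Since types match by shape rather than by declared name, the ugly duckling passes without any refactoring of the existing `duck` call sites.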
GHC does let you defer type errors until runtime but I never want to...
So maybe liking developing in them depends on having that personality trait? (or typeclass if you prefer, pun intended).
I, myself, would rather be creating new functionality than fixing a litany of type errors. In fact, I'd rather go to a meeting than change several hundred instances of 'duck' to 'waterfowl', 'avian', and 'ugly_duck' (knowing that I'll probably have to go back and change it again later).
Different strokes for different folks, I guess.
Why? Unit tests also check a lot more than the types being passed around, so they are a lot more useful in the long run. There are some type systems where this is perhaps not the case, but they certainly aren't the majority. The majority is "so do I go with a float or a double" or "I have to cast this int to an uint64 for this one function".
And when it comes down to it, in theory, proofs are better than tests, and I'd say no one would disagree; and in practice, types are proofs. Unfortunately, you can't use types to prove everything, so that is where tests come in.
Imagine a platform where you could indeed prove everything. I believe, but am not sure, that there are some languages that do this, like Idris.
Are you really complaining about lack of generics?
We never get there if the conversation about types is dominated by people who have only used Java-like type systems. Even ignoring that, demanding that powerful type systems be ubiquitous before discussing their benefits is a complete non sequitur. There's no excuse for ignorance, here.
I find the runtime errors I get from dynamically typed languages typically much clearer (and easier to debug) than many compile-time errors from C++ or Haskell.
I do like C's static type system because it's needed for efficiency at runtime. (Re-)compiling C can be close to dynamic execution for not-too-large applications. Often C++ is also needed for easy-to-use containers, but as someone else said here, it's really a tradeoff because compiles are much slower (I don't know why that is, but part of it may be that containers are re-compiled for every compilation unit that uses them).
And the overwhelming majority of bugs really appear on the first run of the dynamically typed code. The bugs that remain would very often have also been bugs with statically typed languages, since those are so ridiculously bad at expressing simple invariants... They get unusable much faster than they get good at helping with bug discovery.
The one case where it might make sense to throw CS out of the window is if you're building something very small that you are sure you will never reuse or even look at again. And even then I'm not sure it is faster to be sloppy.
I'm using Haskell at work at the moment and while rebuilding everything and rerunning all the tests takes a frustratingly long time, reloading just the module I'm working on and playing with my changes is so fast I don't notice any delay. In practice, this means that 95% of any given task feels great but the final 5% before I'm done can be a real pain because I need to rebuild everything to faithfully reproduce our production environment and that does have a slow iteration time.
It does not change the fact that GHC is slow, since you must do full compilations once in a while. But it does let you keep your flow while developing. Besides, GHCi is much faster.
Initial compiles of some Haskell libraries that essentially do exponential inlining (cough vector-algorithms cough) can take a long time. An incremental non-optimized compile of a small change to a project with a good module structure takes a couple seconds.
The GHC devs are correct that it has been slowing down and are putting a lot of effort into getting that speed back. But it's not at the level of "rebuilding my project takes hours" that you frequently get with some build systems.
I gave up when the code reached about 400 lines. The compilation times were already at 10-15 seconds, and the error messages were really ugly.
In short, compilation times depend on how you use complex type system extensions, or even just on how much the libraries you use make use of the type system. (And if you don't use the type system much, it becomes a bad development experience in most application domains, or your code performs very badly, etc.)
It was so much simpler to do it in C. <1 sec compiles, incredibly performant with straightforward non-optimized code.
Nobody is saying anyone is wrong. One person can perceive short compile times that another thinks are long. But, we shouldn't make generalizations based on one datapoint. Maybe you _can_ have fast builds with Haskell, but maybe that isn't the norm.
My biggest gripe with static checks is that in reality, most of the software most programmers are asked to write doesn't need to be 100% correct. As long as 95% of the most likely to occur and most user-impacting bugs are fixed, the rest doesn't matter.
So I'm really interested to see the optional type system research mature more. When I start programming, I rarely know what the functionality should be; I have a vague idea, but I need to experiment. I don't need each experiment to be correct. At that phase it could have tons of bugs, as long as it can give me a sense of the functionality and allow me to demo it to the business so they get a similar sense. Static checks slow this process down a lot, and even though I do have fun making each experiment correct, it's really just a waste of time for the product.
But as the desired functionality gets clearer and clearer, I'd want to start working towards that 95% correctness, and types are quicker to write than tests. I'd rather have types to assert type errors, borrow checks to assert no memory errors, and tests to assert functional errors, than have to write tests to assert all three, because tests are the slowest to write. But writing 100% type annotations or memory annotations is too much, just like I wouldn't write 100% test coverage in practice. That's because most software needs 95% correctness, not 100%. So this is the struggle I feel.
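Optional typing along these lines already exists in TypeScript; here is a sketch of the prototype-then-tighten workflow (the `Order` shape is hypothetical, not from the thread):

```typescript
// Phase 1 -- exploration: `any` turns the checker off for this value,
// so you can experiment without satisfying the type system.
function totalLoose(order: any): number {
  return order.items.reduce((sum: number, i: any) => sum + i.price, 0);
}

// Phase 2 -- the design settled, so pin down the 95% with real types.
interface Item { price: number; }
interface Order { items: Item[]; }

function total(order: Order): number {
  return order.items.reduce((sum, i) => sum + i.price, 0);
}

const order: Order = { items: [{ price: 3 }, { price: 4 }] };
console.log(totalLoose(order), total(order));  // 7 7
```

Both phases coexist in one codebase, so the annotations can grow gradually as the functionality stabilizes.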
Also, I'm not a super fan of laziness. And I/O is kind of a pain; I'm not sure it's worth the overhead just to achieve purity.
P.S.: I encourage people to use Haskell though. It's a great language, a step forward in a lot of ways. I'd be happy using it for work, just not as happy as I'd want to be, because of the problem I explained above.
The important thing is not at which stage in the compilation pipeline the bug became obvious; it's how many seconds elapsed before it became obvious.
I generally found that working in untyped js with a workflow prioritising fast iteration was much better for finding bugs quickly than working in Scala with its advanced type system and miserably slow iterations.
But as you say, it's possible to have both.
A large set of people in that community want to derive algorithms mathematically, using a deductive or proof based process, as in math.
Iterative development conflicts with a top down form of development, and also conflicts with the safety oriented culture of strong typing. While rapid iteration languages and environments are good for initial development, they can be awful for projects requiring maintenance.
About a decade ago, I built a small web app using Common Lisp (SBCL) and a lot of the development I did was done through adding new features and debugging in the REPL. While I saved the VM, reading the code months later to add a feature was terrible because the app was hacked together. I wonder if there's a way to fix this. Typed Racket looks like a promising move in that direction.
FWIW, ghci already interprets Haskell quite quickly for iterative development.
I also wrote a small article on it, not very detailed but maybe it can help you get a rough idea of the power of clojure.spec when combined with generative testing -- [http://abhirag.in/articles/spec_oracle.html]
There's more today than there was yesterday, but it's still not a system expected to be built, maintained, and extended for decades.
> derive algorithms mathematically
Of course, most of what computers do is not algorithms (or even "computing").
> can be awful for projects requiring maintenance
Or wonderful, see Smalltalk. Not that it is perfect and can't be improved, but it sure has been successfully maintained over a long period of time.
Pssst! Hey, buddy, that's what an algorithm is.
Both at the macro level, what they are used for, but also at the micro level. Actual computation tends to be incidental.
What is speech recognition for? Communication.
I think you keep arguing against something I never said, which is that computers do no computing whatsoever.
In gaming, Unity3D embraces this interactivity by letting you modify more or less everything directly while the game is running. Unreal Engine's Blueprints show data flows live, etc.
In the technical sciences there was always MATLAB with its interactive mode of development. We have similar technology in data science with IPython, Spyder, Jupyter notebooks, etc.
Actually, I'm seeing more interactivity than good type systems out there. Elm, Haskell & co. are something you find advocated on internet forums but rarely in companies.
(and as the author mentioned - those two actually don't have to be exclusive)
That's why my favorite language is Clojure. That interactivity, the instant feedback, seeing the program running as you are tweaking it; it's a bliss to use, and it creates better, more functional software.
Imagine playing music as you hear it when trying to come up with a good melody. Now imagine not playing it, but composing it on music sheets instead, and occasionally playing what you've got every 10 to 30 minutes.
Lisp championed interactivity; it pioneered dynamic, interactive programming for that sole purpose. The idea is that you morph a running program into shape, molding it like you would clay.
The first thing you do when writing in a Lisp like Clojure is run your program. In most other languages, running your program happens much later, and much less frequently, and it can actually be quite challenging to run.
What is the actual workflow you use when molding your app?
The entire state of the program, including the code and the globals in memory, is preserved, and simply spun up in a different environment. This tripped me up for some time with Smalltalk - I didn't understand this.
It's the same with many lisps - you can frequently get a REPL directly inside a running program, and query/change the objects (including code) that is running. This was used to great effect in fixing the Deep Space 1 probe while it was 100 million miles away from Earth.
You can easily add a REPL to any Clojure program, though.
I'm not quite sure what you meant by the dependency question. If you need to reference other namespaces, you can add new (require) imports in the repl or live-reloaded source files. If you need to add completely new third party libraries to the project, that's rare enough that a REPL restart is fine - there's a library to do it dynamically at runtime if you want to though.
So you can interactively change the code in Emacs, Cursive, Counterclockwise, ... and then update the REPL state.
While using the REPL for debugging purposes.
So at the end of the coding session, all source files have the current state of the application.
Well yes, I do it daily at my work. We develop backend SOA enterprise services like that, it works great.
Surely even in Clojure code there are lots of dependencies so you can't just change one place in isolation.
Well, there still are a few, but very few compared to most other languages. That's because everything is immutable by default, and state is passed around instead of accessed globally most of the time. So you'd be surprised how often you can actually work in isolation. Achieving this was Clojure's number one design tenet: to have a language which promotes untangling dependencies as much as possible, making it easy to write simple, untangled code.
The other thing to realize is that the running program is not an isolated one, like when doing TDD. It is the full program, with all its dependencies, connected to the file system, your databases, etc. I don't mock anything; if I'm trying to write my DB query, I try it for real on a real database.
And I also guess that changes done on the repl aren't actually saved for the next time you run your app?
So, that's why Clojure has tight integration between your editor and the REPL, or between code files and the REPL. You normally don't type code at the command line; in fact, the Clojure REPL has no UI or command-line interface. It's a network REPL which listens for messages over a port using a special protocol called nREPL. Each editor that supports Clojure connects to it and sends the contents of the buffer to it for you. So you're editing the file and saving it as you see fit, while also asking the editor to send the file (or part of it) to the REPL as you edit. You can also edit the file, save it, and have the REPL watch for file changes and auto-reload them as they change. That way it works even with editors that don't have Clojure support. There is a CLI you can use too, for when you want an ephemeral program. Some editors also build UIs for the REPL, allowing rich media to be printed, like an interactive graph.
And how do you do testing?
Well, you're always testing, as you code, in parallel. Since your code is running live as you edit, you see the impact of your change right away. You're expected to also write unit and integration tests, and you do that like in any other language. These are needed for regression testing, but you don't need tests to check that something works; you'll be doing that as you code from the REPL. You need tests to make sure someone in the future doesn't revert your functionality. I actually find these much easier to write in Clojure, again because the language pushes you to write isolated, easy-to-test code, but also because you don't need a test framework and a mocking library; the core language offers both.
What is the actual workflow you use when molding your app?
First I create a project. Then I start a REPL configured for that project, then I open my editor and connect to my REPL.
Then I write my main function and load it in the REPL. I add more and more functions, loading them and trying them out as I do. Once I've got enough, I orchestrate calls to them from my main function, loading and reloading it all as I go, seeing the result of every step in the REPL. When I'm satisfied, I save my file. Once I've got what I want, I create a test file, write some regression tests, and then git commit, send a CR request, and git push. Rinse and repeat.
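For readers without a Clojure setup, the shape of that loop can be sketched in plain Python with `importlib.reload` — a rough stand-in for the Clojure workflow, with file and function names invented for illustration:

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True       # avoid stale bytecode between reloads

# Create a throwaway module standing in for the file you're editing.
workdir = tempfile.mkdtemp()
mod_path = pathlib.Path(workdir) / "live_app.py"
mod_path.write_text("def greet(name):\n    return 'hello ' + name\n")
sys.path.insert(0, workdir)

import live_app
print(live_app.greet("repl"))        # hello repl

# "Edit" the file in your editor, then reload it into the live session.
mod_path.write_text("def greet(name):\n    return 'HELLO ' + name + '!'\n")
importlib.reload(live_app)
print(live_app.greet("repl"))        # HELLO repl!
```

The session never restarts; only the definitions change under it, which is the essence of the workflow above.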
First, I use a highly-flexible prototyping instrument to quickly iterate on melodic ideas. Normally this is either humming or whistling. "Our first instrument," as an instructor used to call it. I do this until the melody is catchy enough to start sticking in my mind. Then I'll look at alternate parts, like countermelodies, B/C sections, harmonies, or basslines. Each part gets repeated until it gets stuck in my head.
Then, I repeatedly hum/whistle each part like a disturbed rambler until I get home or to some other place where I can scribble the parts down onto paper. I used to be faster just typing out the Lilypond, but I'm out of practice with that, whereas musicians generally never stop using a short, shitty, hard-to-read staff-based shorthand.
Now, finally, I actually start playing the different parts on actual instruments, figuring out stuff like fingering for guitar or piano, rhythm and phrasing, filling out transition chords, etc. During this process, I play less and less of the song, drilling down to tiny fragments and setting up tiny local contexts for testing ornaments, phrasings, hits, etc.
If I'm gonna try to make the metaphor rigorous, there would be only one language, because the musical process happens entirely in one language. This language is small enough to write on a single piece of paper, fully contains all of its abstractions, has focused notation for specific instruments, and permits debugging any part of a program by cutting any contiguous subprogram out and turning that fragment into its own live environment.
I'd write a program first on a prototyping platform which is so lightweight that nearly any prototype program will run, and I'd use that to write out the entire first draft of my program. Then, I'd incrementally move pieces of my program transparently onto a more rigorously-precise framework which is more restrictive about types but makes it easier to compose modules and let them stay composed.
There is a point in time when composition is more about managing multiple parts at once and you won't actually want to listen to more than about 15s of your song at once, and you'll want your runtime to support your quest towards stability as a basis for tweaking the fine details of your song.
This language is small enough to write on a single piece of paper
Check; Clojure is one of the most concise languages out there. I use it for handwriting code or when coding on my phone, because it's so short.
fully contains all of its abstractions
Yup. In fact, you need to change your mindset: you don't write instructions in Clojure, you search for the functions that do what you want and arrange them in the order you need. Think micro-library.
has focused notation for specific instruments
Lisps like Clojure are the kings of DSLs. This is literally their bread and butter.
and permits debugging any part of a program by cutting any contiguous subprogram out and turning that fragment into its own live environment.
Yes, that's what the REPL lets you do. Load any subset of the program into a live running environment.
I'd write a program first on a prototyping platform which is so lightweight that nearly any prototype program will run, and I'd use that to write out the entire first draft of my program.
This is the idea behind dynamically typed languages, which Lisp pioneered. Clojure will run almost any code, correct or not, no questions asked. It does not put constraints on you; though it strongly encourages safe tools over unsafe ones, it lets you cut yourself if you want to.
Then, I'd incrementally move pieces of my program transparently onto a more rigorously-precise framework which is more restrictive about types but makes it easier to compose modules and let them stay composed.
This is my ideal too. There's no holy grail for this yet, but Clojure is currently focused on this very last part. Composing modules is actually pretty trivial, because it supports performant immutable data structures as first-class values, open polymorphism, defaults to functionally pure constructs, and wraps all state in safe managed containers.
Now, as for correctness checks, it has optional types, but the implementation is a work in progress and is stagnating a bit. Generative testing is the strategy currently being explored, as well as highly powerful runtime checks; still in alpha, though. Static analysis is also being worked on, though soundness is not a target; it's in alpha too at this point.
If it's any consolation, though, the only two studies I could find about defect rates relating to programming language choice showed that Clojure does as well as Haskell. Which means it had some of the lowest defect rates of the languages tested, averaging a tad behind Haskell and doing better than Scala and F#, and obviously besting Java, C++, Python, Ruby, C#, Go, etc. It was an outlier in that sense, as it was the only language without static type checking to do so well.
If Clojure fully contained all its abstractions, then it would not have the option of calling into the JVM.
Clojure does not have the property that cutting any fragment of a valid program yields a valid program. There are languages in the concatenative style which have this property, but even there, the ability to compose does not guarantee the ability to split.
If you think that types produce reliability, then you do not understand types.
You seem dedicated to Clojure, which is great, but music is thousands of years older and has figured out a lot of stuff. I think that you also missed my bigger point, which is that programming is roughly at the same point in its art that cave painting was at tens of thousands of years ago. If you think that Lisps are beautiful and Clojure is the pinnacle, then I think that you don't know beauty. But it's not your fault; you've never seen anything beautiful. None of us have. And none of us ever will, at the current rate of progress.
Here, have a video to provoke some thoughts: 
I'm saying exactly that: I believe the ability to quickly run and test your program is most important, more so than proving properties from its code description. And that's specifically the strength of Lisp dialects like Clojure. Are you saying something different?
Indeed I did. I'm not claiming Lisps and Clojure to be the be-all and end-all of programming languages; I hope not. That said, I think you're being a little dramatic. Beauty doesn't really exist; it's an illusion fabricated by a mix of our culture and our genetic predispositions. It's mostly characterized by the emotions it evokes in you.
Clojure and Lisps, currently of all languages I've tried, evoke the strongest set of positive emotions in me, the strongest one being joy. And I think some of it is not without merit.
Still, beauty is not what I'm talking about. I like quantifiable measures. You have productivity, explorability, understandability, performance, and correctness. Clojure finds a good balance between these; it tries to maximize the average of them all. Which is why I find it great for your average programming project. I don't yet know of a language that does a better job at this maximization, but I'm always on the lookout for one.
It only takes four or five paragraphs to mathematically describe the basis for chromatic notation.
It's just as short to describe the basis of Lisps, that is, the lambda calculus. Also, chromatic notation is much less powerful than lambda calculus notation, yet lambda calculus notation isn't much longer to describe. I'm also not sure shortness is necessarily better or more beautiful; again, that's an aesthetic preference, like minimalism in art. With nine constructs you can have a full Lisp implementation that can do all Turing-equivalent computations and also I/O. But in practice, having even more turns out to be useful and makes things easier. I think it's the same for Forth too.
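To make the "handful of constructs" claim concrete, here is a toy Lisp evaluator in Python with just quote, if, define, lambda, and application — a hypothetical sketch, nowhere near a real implementation (no macros, no I/O, no error handling):

```python
import operator as op

def tokenize(src):
    # Split source into parens and atoms.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # Recursively build nested lists from the token stream.
    tok = tokens.pop(0)
    if tok == "(":
        form = []
        while tokens[0] != ")":
            form.append(parse(tokens))
        tokens.pop(0)  # drop ")"
        return form
    try:
        return int(tok)
    except ValueError:
        return tok     # a symbol

GLOBAL = {"+": op.add, "-": op.sub, "*": op.mul, "<": op.lt}

def evaluate(x, env=GLOBAL):
    if isinstance(x, str):            # symbol lookup
        return env[x]
    if not isinstance(x, list):       # number literal
        return x
    head = x[0]
    if head == "quote":
        return x[1]
    if head == "if":
        return evaluate(x[2] if evaluate(x[1], env) else x[3], env)
    if head == "define":
        env[x[1]] = evaluate(x[2], env)
        return None
    if head == "lambda":
        params, body = x[1], x[2]
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    # Otherwise: function application.
    fn, *args = [evaluate(e, env) for e in x]
    return fn(*args)

print(evaluate(parse(tokenize("((lambda (n) (* n n)) 7)"))))  # 49
```

Notice the evaluator is recursive in exactly the way the notation is: every parenthesized fragment is itself a complete program for `evaluate`.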
I'm not sure this has any practical merit, but there's nothing in Clojure preventing it. It could; in fact, self-hosted ClojureScript does. It's not very useful, which is why Clojure doesn't bother, and calling into the JVM is actually quite useful for leveraging lots of existing code.
Clojure does not have the property that cutting any fragment of a valid program yields a valid program.
You can cut anywhere an s-expression starts or ends and get a valid program. Maybe that's not granular enough for you, but what is? In Forth you can cut at any whitespace and get a valid program, but I could claim that's not granular enough; I want to cut at any character. To me, the property is that the notation is recursive: it builds on itself. Sure, an s-expression is not as small a unit as a single musical note, but that unit is independent, composable, and nestable.
I don't, I didn't use that word at all. In fact I believe Clojure programs to be very reliable, even though they don't have static types.
Here, have a video to provoke some thoughts
Thanks, always looking for thought-provoking thoughts; I have not watched it yet, but I will.
I wish the restart times were much faster, but it's not as big a problem as most people make it sound.
Often, you can just learn to mold your program in smarter ways, so that you don't get yourself into weird inconsistent states, or you learn how to fix your state instead of starting over. Or you use Mount, Component, Integrant, etc. They're really low overhead; Mount is trivial to add, it's like four more characters per global variable.
Having said that, Clojure isn't the best thing ever, just the most interactive language I know of that is mature enough and has a large enough ecosystem that I can use it for commercial software. Do you have other recommendations for a language that meets these criteria?
That's true only if Clojure is your only dependency; in a real project with libraries it's more like 15 seconds with lein repl, and CIDER is much worse.
Do you have other recommendations for a language that meets these criteria?
Some would say Common Lisp.
But I'll eventually give Common Lisp and Scheme a try.
1. Headless server like nREPL for Clojure
2. Code editor that can send chunks of text to the headless repl and receive the output. Emacs and neovim (with its terminal) are pretty good at this. You can use commands to send the surrounding s-expression, paragraph, etc.
3. Ta da! Your text is already in your source file.
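The idea in steps 1–2 can be sketched as a toy "headless REPL" in Python — a drastically simplified stand-in for nREPL (the real protocol is message-based, not plain lines; everything here is invented for illustration): a server evaluates expressions it receives over a port, so any editor that can write to a socket can drive it.

```python
import socket
import threading
import time

def serve(port):
    # Evaluate one expression per line, send back the result.
    env = {}
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn, conn.makefile("rw") as io:
        for line in io:
            try:
                result = repr(eval(line, env))
            except Exception as exc:
                result = "error: " + str(exc)
            io.write(result + "\n")
            io.flush()

threading.Thread(target=serve, args=(5555,), daemon=True).start()

# The "editor" side: send a chunk of code, read the result back.
for _ in range(50):                   # wait for the server to come up
    try:
        cli = socket.create_connection(("127.0.0.1", 5555))
        break
    except OSError:
        time.sleep(0.1)
io = cli.makefile("rw")
io.write("1 + 2\n")
io.flush()
reply = io.readline().strip()
print(reply)   # 3
```

The point of the design is exactly step 3: the text you evaluate lives in your source file the whole time; only a copy travels over the wire.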
Disclaimer: I work on the team at Google that builds the underlying language platform for Flutter.
Android build times have improved a lot over the past few years but this is a whole next level experience.
As a game programmer, building systems with fast turnaround times is more valuable to the artists and designers in the studio than for me personally. And the value in the artists and designers and programmers all having fast turnaround time is in being able to make a game that's more fun for the consumers.
While it would be possible to replace any value by another of the same type (e.g. redefining a function without changing the signature), I'm not aware of any statically checked language with a REPL that allows that. When I'm playing around in the Haskell REPL, redefining a function requires also redefining all other functions that use it, and that's a chore that isn't even required by the type system.
Other modifications are likely to break static checks, e.g. adding a new case to a sum type would invalidate exhaustiveness checking (and thus probably a bunch of compiler optimizations), so no standard type system would allow it. But having a check that makes sure you handle the added case everywhere would actually be nice to have, especially when it can tell you interactively where you need to add more code.
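The "tell me where to add more code" idea can be sketched in Python (hypothetical names throughout): model the sum type as a union of dataclasses and route every dispatch through a loud fall-through. A static checker such as mypy (via `typing.assert_never` in the fall-through) can then flag each unhandled site when a new variant is added; here the fall-through just fails at runtime.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Circle:
    radius: float

@dataclass
class Square:
    side: float

Shape = Union[Circle, Square]   # the "sum type"

def area(shape: Shape) -> float:
    if isinstance(shape, Circle):
        return 3.14159 * shape.radius ** 2
    if isinstance(shape, Square):
        return shape.side ** 2
    # Fall-through: adding a Triangle variant makes this reachable,
    # and an exhaustiveness-aware checker points you here.
    raise TypeError(f"unhandled shape: {shape!r}")

print(area(Square(3.0)))   # 9.0
```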
The major hurdle to altering a running program without violating type safety is the fact that you can't just check the original program and the modified version for internal consistency, you also have to ensure that old code that's still running won't be confused when it calls new code and gets an unexpected return value. In the case of adding to a sum type, you could compile in a fall-through case for all pattern matches, and then patch in the new handler code.
For even larger changes, like completely replacing the return type of a function, it might be necessary to specifically engineer the type system such that it can support this case. Ideally, it would support almost all modifications that work in a dynamically typed language, while still preventing anything that would take the program into an inconsistent state.
The sad part is that this encourages bad development practices because piling on nested loops doesn't require bouncing the server, while properly factored code does.
There are tools like JRebel that are supposed to fix this, but I haven't used them.
This is impossible. Data structures have these little things called “invariants” that require proof to be established. In statically typed languages, abstract data types are used to prevent users from breaking these invariants, by making the representation invisible to anyone but the implementor.
If the data structure underlying an abstract data type can be modified anytime, then every time you patch your program, you would have to check two things:
(0) That the new data structure respects every invariant relied upon by other code.
(1) That either the new data structure is compatible with the old one (which is often not the case), or there are no reachable instances of the old data structure in memory.
This is an even bigger pain in the ass than just stopping the program and fixing it.
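The invariant argument can be made concrete with a small Python sketch (an invented example, not from the thread): a sorted-container "abstract type" whose invariant holds only as long as clients go through its interface. Nothing stops a live patch from poking the representation directly and silently breaking it.

```python
import bisect

class SortedBag:
    def __init__(self):
        self._items = []              # invariant: always kept sorted

    def insert(self, x):
        bisect.insort(self._items, x)  # preserves the invariant

    def smallest(self):
        return self._items[0]          # correct only if the invariant holds

bag = SortedBag()
for x in [5, 1, 3]:
    bag.insert(x)
print(bag.smallest())    # 1

# A "hot patch" that bypasses the interface breaks the invariant:
bag._items.append(0)
print(bag.smallest())    # still prints 1, which is now wrong
```

This is point (0) above in miniature: after the patch, code relying on the invariant returns wrong answers without any error being raised.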
I'm not sure why you think that the checking will be painful, it almost sounds like you think that it would be done by the programmer. The whole point of type systems is that they can be checked automatically, so the programmer is prevented from doing something stupid.
Dynamic languages already allow all kinds of modifications that might or might not break invariants or introduce subtle incompatibilities; a type system would only make it safer.
It is also not just a matter of "stopping the program and fixing it". Suppose you are writing a game, and during playtesting you encounter a bug, where something is stuck in an endless respawn loop. In a dynamic language, you could look at the misbehaving code, develop a fix, and immediately observe its effects. This allows you to quickly iterate until you have found a solution that works. Compared to a "stop, fix, retry"-cycle, it's simply going to be faster, even assuming you can reproduce the bug reliably (maybe using some kind of input replay).
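The respawn-loop scenario can be sketched as a toy in Python (a stand-in, not any real engine API): the loop looks its handler up by name each tick, so redefining the handler mid-run changes behavior without restarting.

```python
handlers = {}

def respawn(entity):
    entity["hp"] = 0          # bug: respawns the entity already dead
handlers["respawn"] = respawn

entity = {"hp": 0, "respawns": 0}
for tick in range(6):
    if entity["hp"] <= 0:
        handlers["respawn"](entity)   # looked up fresh every tick
        entity["respawns"] += 1
    if tick == 2:
        # "At the REPL": patch the handler while the loop keeps running.
        def respawn_fixed(entity):
            entity["hp"] = 100
        handlers["respawn"] = respawn_fixed

print(entity["respawns"])   # 4: three buggy respawns, then one healthy one
```

The fix takes effect on the very next tick; in a stop-fix-retry cycle you would instead have to rebuild, restart, and reproduce the stuck state first.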
I have yet to see a type system that can take a putative implementation of a data structure with arbitrarily complicated invariants and spit out whether the implementation is correct or not. (Note that Coq, Agda, etc. don't quite fit the bill, because they require the programmer to supply the proof himself, even if these tools can partially automate the process.)
> I'm not sure why you think that the checking will be painful, it almost sounds like you think that it would be done by the programmer. The whole point of type systems is that they can be checked automatically, so the programmer is prevented from doing something stupid.
My point is precisely that type systems aren't normally used to enforce data structure invariants directly. Instead, data abstraction (i.e., the inability to inspect the representation of abstract data types from client code) is used to confine the potential to break data structure invariants to a small fragment of a big program (namely, where the abstract data type is implemented). This is in furious contradiction with the idea of inspecting and modifying anything anytime from anywhere.
Those tend to be the mistakes I make when programming interactively in Python. Forgetting to put a single value into a one-element list. Forgetting to check for None. Misspelling a key in a dictionary. Swapping the order of two arguments in a function call.
Yes, in some cases those properties can only be verified by proving some invariant equivalent to the Collatz conjecture. I'd conjecture that most instances could still be solved by an appropriate type system. I'm not too worried if it can't prevent me from invalidating invariants, so long as it can prevent me from making simple mistakes that are obvious in retrospect.
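Those "obvious in retrospect" mistakes are exactly what gradual type annotations catch in Python — a hedged sketch with invented functions; the commented-out lines are the mistakes a checker like mypy would flag before you run anything:

```python
from typing import List, Optional

def total(xs: List[int]) -> int:
    return sum(xs)

def find_role(name: str) -> Optional[str]:
    # Returns None for unknown users — the annotation makes that explicit.
    return {"ada": "admin"}.get(name)

# total(3)              # forgot the one-element list: mypy rejects this
print(total([3]))       # 3

role = find_role("bob")
# print(role.upper())   # forgot the None check: mypy rejects this too
print(role is None)     # True
```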
Those tend to be the mistakes that I take for granted any seasoned programmer can detect and fix almost instantaneously and effortlessly. (Of course, not because programmers are superhuman, but rather because Hindley-Milner is the bare minimum a high-level language should have.) It's pathetic that we're still discussing these in 2017.
> Yes, in some cases those properties can only be verified by proving some invariant equivalent to the Collatz conjecture.
I have yet to see a useful program whose correctness is contingent on the Collatz conjecture being true. But I have seen lots of programs that are much easier to verify by hand than using a type system.
> I'm not too worried if it can't prevent me from invalidating invariants, so long as it can prevent me from making simple mistakes that are obvious in retrospect.
I'm not worried either. I'm just saying that “allow anything to be modified anytime, anywhere” is counterproductive. But if you really want to do it, you can do that in ML and Haskell too: just stuff all your top-level definitions into mutable cells.
Evidently most language creators find it difficult to integrate both interactive programming and static typing, which suggests to me that the problem is not easy. Or maybe there just isn't enough overlap between the groups who value one or the other.
> just stuff all your top-level definitions into mutable cells
That seems like it could be part of a potential solution, but it would require rewriting the program so that everything is implicitly wrapped in the IO monad. And it still doesn't handle the case where you want to add to an existing data type.
Interactivity is one thing. Randomly redefining things is a-whole-nother thing. ML and Haskell are interactive. They just don't stuff absolutely everything in mutable cells like most dynamic languages do.
> That seems like it could be part of a potential solution, but it would require rewriting the program so that everything is implicitly wrapped in the IO monad.
You can't have it both ways: either you have effects and accept that you have effects, or don't have effects and accept that you don't have effects. (IOW, lying is bad.)
If I understand correctly, success typing rejects only programs that will lead to a type error at runtime, but allows all programs that it can't prove incorrect. I think that's an interesting idea and definitely better than no type checks at all, but I'd still like to have a type system that can prove some programs to be type safe, if possible.
If you can't touch the code for fear of breaking something, fixing bugs takes forever and new features / levels / versions never happen.
As a great (yet personally disappointing) example, I give you Mass Effect: Andromeda. No more patches or content will be released for the single player game only 5 months after its launch.
Different needs for different domains.
Even though I agree with the main point of the article, I have to point out that when a game doesn't crash every hour, it's definitely more fun.
A few examples of games that do crash: many games in the Elder Scrolls series, Fallout 3 (and New Vegas), Dwarf Fortress. Games that are undeniably complex and emergent in ways the coders have no way of testing exhaustively.
It will be so great that people will be able to make even more complex games, and have them not crash.
> A better argument is that some technologies may result in the game being more stable and reliable. Those two terms should be a prerequisite to fun [...]
Of course, sometimes E&C just doesn't work for mysterious reasons of its own.
(*) English unsurprisingly lacks an acausal past tense to describe doing something that changes an event that has already happened.
Or "change it so..."
We change [history/state of the world] to not include that event.
I'd go even further and claim that productivity is the ultimate currency in programming, because you can convert it into pretty much anything and everything else. Better quality, better performance, better UI. Of course, there is no guarantee that you will actually do that.
Reaping these benefits does mean that you need to constantly work at improving the code, "if it ain't broke don't fix it" leads to entropy, and so does being afraid to make fundamental improvements due to lack of test coverage.
Does correctly implementing a composable state machine and effects system mean the game is more fun? Usually.
This essay is full of question begging and false dichotomies.
The two main downsides are the lack of proper interfaces and of object.method() notation, which is sometimes more readable.
This is where the choice of language might help or hinder you. One language might take longer to implement it or be more prone to bugs.
The question of whether or not your ideas turn out to be fun is completely orthogonal.
I'll give you one clue, or more like a hunch...
If you have a tool that is trying to do everything it isn't going to be equally great at all those things and it will likely be confusing/hard to use.
Generic programming languages tend to be more popular, but in the same way TV programs or video games try to appeal to as large an audience as possible. (This is how we got all those absurd hacker movies, for example.)
In the future (lol) we will discover that single purpose languages are just better at the limited scope of things they do.
PHP is perhaps a bad example that I shouldn't even have mentioned here, but such a language knows exactly what its goal in life is: PHP knows it is supposed to bake websites.
Maybe you've touched the awesome with your fun. Someone should try to build a language entirely around the core mechanics of fun.
In games there is fun in the form of rewards for stuff that just takes a fucking long time to do, fun from rewards for stuff that requires skill, fun from rewards obtained through luck, fun from a storyline progressing, fun from unexpected things, fun from buying ingame shit, fun from selling ingame shit, and fun from cooperation, as well as from growing to be able to do those things on your own.
But the real list is probably much longer.
The language should probably have a basic fun object that looks something like:
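Something like this hypothetical sketch, where every field name is invented here, derived from the kinds of fun listed above and from the "dev time to fun conversion ratio" idea below:

```python
from dataclasses import dataclass

@dataclass
class Fun:
    source: str        # "grind", "skill", "luck", "story", "surprise", ...
    magnitude: float   # measured fun, from real-world playtest data
    dev_hours: float   # what the feature cost to build

    def conversion_ratio(self) -> float:
        """Fun produced per developer hour spent."""
        return self.magnitude / self.dev_hours

loot_drop = Fun(source="luck", magnitude=8.0, dev_hours=2.0)
print(loot_drop.conversion_ratio())   # 4.0
```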
Then you have to benchmark the fun people are getting out of a bit of code or graphics using real-world data.
Then you would be able to focus your attention where your effort produces the largest amount of fun as well as see the areas where your game is teh suck. Answer the big questions like what parts are people playing and why? Where do they rage quit?
If they didn't give up on creating content for Diablo 2 I would probably still be playing it.
If they had a language where fun was the central mechanic they would have known that changing all the items and ruining all the heroes had a negative dev time to fun conversion ratio.
It wouldn't have to be limited to games at all. One could quantify the fun on HN using the same language. It would all of a sudden be obvious that the karma system and submission ranking lack random rewards and events. A thing no one considered up to now, but if we know it is fun and the system lacks it, it becomes worth considering.
"does higher build quality make formula 1 cars faster"
"do better materials in a car make it more fun to drive"