Hacker News
Fun vs. Computer Science (2016) (dadgum.com)
221 points by Tomte on Aug 26, 2017 | 181 comments



I have happily traded working in a nice language with slow iteration times for working in a lousy language with very, very fast turnaround.

My favorite extreme example was the time I used a super fast 6502 assembler environment that would do the "change a line of code and get the target running" cycle in a couple of seconds. This kind of interaction is magical. You're still writing kind of crappy assembly language, but it almost doesn't matter. Things just flow.

On the other extreme: a large component of my decision to quit a job was the fact that a build of the product took four hours. And the build was typically broken. So: arrive in the morning, sync, wait all morning for the build to fail. Do another cycle after lunch: fail. Go home, repeat for weeks. (Add to this: managers who refused to buy decent development machines, people who kept checking in busted code and making things worse, and a crushing schedule. Who needs that?)

These days I do a lot of C++, and some PHP. The C++ projects take a few minutes to build, which isn't great, but it's survivable. The PHP "builds" as fast as I can refresh a browser page. And as much as it pains me, on most days, when I grit my teeth and get honest about it, I'm more productive in PHP. And I despise PHP.

I learned LISP early on in my career; wrote a few LISP interpreters, goggled at the majesty of LISP machines, read all that I could. But I've never shipped a significant project in LISP, nor am I likely to. And the Newton actually flipped from a LISP (well, OOPy-Scheme) implementation language to C++ in order to ship. I'm wondering if there is some law of human nature at work: You can have elegance and comfort or you can have a product. I sure hope I'm wrong.


One of the reasons why I switched from C++ to C professionally (I do C for a living) was build times: previously, a decently big casual game took 5-10 minutes to compile, and around 10-20 seconds for an incremental build; now my C project takes 5 seconds to compile from scratch and around 1 second for incremental changes.

Another reason being that I don't need to waste time discussing and arguing about which OOP patterns to use - in C it's more or less fixed how you write code. Of course one can go the extra mile and reinvent vtables and dependency injection, but it all feels foreign to C.
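For the curious, here is roughly what "reinventing the vtable" looks like - a minimal sketch of the struct-of-function-pointers idiom, with made-up Shape/Circle names:

    #include <stdio.h>

    /* A hand-rolled "vtable": a struct of function pointers, wired up manually. */
    typedef struct Shape Shape;

    typedef struct {
        double (*area)(const Shape *self);
    } ShapeVTable;

    struct Shape {
        const ShapeVTable *vt;  /* every "object" carries a pointer to its vtable */
    };

    typedef struct {
        Shape base;             /* "inheritance" by embedding the base first */
        double r;
    } Circle;

    static double circle_area(const Shape *self) {
        const Circle *c = (const Circle *)self;  /* ok: base is the first member */
        return 3.14159265358979 * c->r * c->r;
    }

    static const ShapeVTable circle_vt = { circle_area };

    int main(void) {
        Circle c = { { &circle_vt }, 2.0 };
        Shape *s = &c.base;
        printf("%f\n", s->vt->area(s));  /* dynamic dispatch through the vtable */
        return 0;
    }

It compiles and dispatches fine, but you have to wire every table by hand - which is exactly why it feels foreign.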

So maybe give C a try, it's amazingly fast to compile and link, yet you feel almost at home after C++.


How do you cope with the lack of template containers? That's the big thing I miss in C.


Great question! To make things even more annoying, C doesn't have namespaces, so people are forced to write long names like "my_struct_hash_map_t" everywhere, which makes a lot of modern C code look like a wall of text (compare older-style code: "mshm_t*a = hmalloc()"). To answer honestly: I ask myself the very pragmatic question "do I really need generic code?" much more often than I did in C++.

In game development, C++ templates were used mostly for math libraries (2d/3d/4d vectors, matrices, etc.) and containers (hashmaps, lists, etc.). If we look closely at math libraries, we can make the compromise "just float is enough", and once we make that compromise everything is simple - you don't need templates for a math lib anymore. I'm currently using "gb_math.h" [1] and it works pretty well.

Containers, on the other hand, are more challenging. The key here is to ask "can I make this work with a linear (maybe static) array?" - and in most cases the answer is yes, and then you don't need any containers, because plain arrays will do. If an algorithm really does require a hashmap or list, well, we use what we have: macros! It looks weird, and it is weird, but something like khash.h [2] works fine in production.
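To give a feel for the macro approach, here is a minimal khash.h sketch (the int-to-string map below is just a made-up example):

    #include <stdio.h>
    #include "khash.h"

    /* This macro instantiates a whole int -> const char* hash map type,
       named "i2s", along with its functions, at compile time. */
    KHASH_MAP_INIT_INT(i2s, const char *)

    int main(void) {
        int absent;
        khash_t(i2s) *h = kh_init(i2s);

        khiter_t k = kh_put(i2s, h, 42, &absent);  /* insert key 42 */
        kh_value(h, k) = "answer";

        k = kh_get(i2s, h, 42);                    /* look it up again */
        if (k != kh_end(h))
            printf("42 -> %s\n", kh_value(h, k));

        kh_destroy(i2s, h);
        return 0;
    }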

In reality, what this means for me is that I need to spend time making decisions about containers, which is good for game development, as what makes games fun doesn't have any connection to templates. But it's also bad for casual non-game code: I find it much faster to just "hey, load this JSON, do some processing, and save the JSON back" in Python than to do it in C just for the sake of C. Ideally, namespaces/templates and operator overloading in C would solve all these struggles, but then where does C end and C++ start?

[1] https://github.com/gingerBill/gb/blob/master/gb_math.h

[2] https://github.com/attractivechaos/klib/blob/master/khash.h


There's a bizarre lack of interest within CS in the psychology of usable systems - whether for developers or for users - the kind that takes into account practical requirements like cycle time.

A lot of CS seems to be story-telling: "This feature makes language X better/worse, because it just does, obviously" with no independent testing or peer review of language productivity and robustness in real work environments.


I think that's why Go is so popular right now. I find the language kind of depressing from a PLT perspective, but it was built taking into account practical requirements such as fast compile times, a standard formatting tool, ease of deployment, etc, which makes it a very good system.


I would like you to cite some papers that claim a feature makes language X better/worse without any explanation.


Well then, so much for it being Computer Science...


Since computers are built by people for people, it does seem like we're missing the most important and difficult science in favor of doing the science that's easy to do. It's easy to reason about how a system that I designed will perform. It's hard to answer the questions of how our technology choices affect other people. It would be great if science could help us design high quality systems with a limited budget, or establish a theory around whether designing any given system will improve someone's life. It would be pretty dang useful if we had some formal scientific results around what makes a game fun, or more generally what makes software pleasant to use and productive.


> establish a theory around whether designing any given system will improve someone's life

That would be sociology or one of its siblings, no?

... or, at best, the piddly "ethics" course that gets tacked on to most undergraduate CS degree programs?


As-is, CS really is much closer to a branch of mathematics than a science.


Eh, in a similar way to how theoretical physics is mostly math. Math is the tool you're using, but the assumptions are all based on real-world observations.


I don't know about that; real world observations like what?


Like observations of what can and can't be computed by physical objects. The point of the Church-Turing thesis is that things you prove about Turing machines are proofs about what you can compute in the real world, not just some abstract mathematical model.


Fortunately, computer science still literally means just that - it's an exploration of the properties of computers. As for humans developing computer-based systems: yes, scientific rigor is not mandatory, just as you don't need to understand classical mechanics to build an abode. As for the psychology at play when humans develop and use computer-based systems, rigorous psychology is eminently suited to your concerns.


> for working in a lousy language with very, very fast turnaround. [...] The PHP "builds" as fast as I can refresh a browser page.

This observation ("workflow") was one of the 3 bullet points outlined by Keith Adams' 2013 presentation "Taking PHP Seriously".[1][2]

(KA's presentation was partially a response to 2012 "PHP a Fractal of Bad Design".[3])

Basically, the horrendous inconsistencies and flaws outlined in "PHP Fractal Bad" can be true ... but simultaneously be overshadowed by "workflow" benefits.

[1] starting with slide #14 of "Adams-TakingPHPSeriously.pdf" : https://github.com/strangeloop/StrangeLoop2013/tree/master/s...

[2] 4 previous HN threads: https://hn.algolia.com/?query=php%20seriously&sort=byPopular...

[3] https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/


I have been using TypeScript. The TypeScript compiler gives me roughly 1-2 second builds, even on a large project (it can watch code and build incrementally).

So I get the benefits of static types, plus the benefits of fast iteration, PLUS the benefits of a fully dynamic language for when I want to accomplish something quickly.

It's a full win-win-win. Throw in linting to avoid the legacy JavaScript crap and it's pretty awesome.


> The PHP "builds" as fast as I can refresh a browser page

JavaScript is just as fast, or faster. That's probably an important reason why it's "eating the world".


I agree that JavaScript is fast - but we've ruined that in a lot of places by throwing in things like code generation, compilation, builds, and other nasty words. The turnaround for the last React & Node program I had the displeasure of working with was on average 30 seconds.

PHP - for all its faults - does not suffer from these kinds of recompilation issues, yet can be optionally pre-compiled for added speed when you do go to production.


I don't use backend js, but things like webpack-dev-server which on file save do an incremental recompile and then hot reload have made this a non issue on the frontend for me. Not sure if this applies to your use case, but figured it couldn't hurt to offer a partial solution.


On the backend side, tools like nodemon provide a very similar solution. Then just set up your webpack server to proxy requests to your backend, and you're good to go.


So is Python, which is a much nicer language.


You don't know what you're talking about. A Python process needs to be bounced, or has to watch files and bounce itself on change. The PHP runtime is designed to read the file on each request; you have to go out of your way to require restarts (it is an optimization). PHP has the only mainstream runtime, AFAIK, that is designed this way (plus the model is stateless, so each refresh really is a fresh start). I'm not a PHP fan, but it bothers me that people don't acknowledge some of its important, unique features and are content to rag on its inadequacies. It bothers me because we're probably ten years from people forgetting what PHP was, and creating a new, worse PHP (!).


This message would have been just as effective without the opening accusation.


Sorry, you're right, I was annoyed by the "hurr durr Python" tone.


It didn't even have that tone...


Well it does just assert without any supporting argument that Python is a nicer language, which is textbook flame-war starting.


Your assertion that giving an opinion without supporting evidence is automatically "textbook flame-war starting" is also patently silly.


Sorry, when I said "nicer language", I meant that it has a more readable syntax.

On top of that, I was comparing Python to JavaScript and wasn't even talking about PHP, which, as you rightly discovered, I have very little knowledge about.


Python can reload modules while running. This is standard practice. And anyway, bouncing a Python process is instantaneous.


No it's not. It's slower.


TDD can help with this. More often than not, I will write a unit test or functional test first, then work on the code for that test, running just that one test until it succeeds. After I have the individual parts working, I will integrate everything, which again is just another integration test run from the IDE.

I realize this isn't always possible, and is probably easier to do in a managed language, but you can cut down on iteration times by mostly forcing the IDE to do incremental compiles. You can work in real time on code that might take tens of minutes for a full build.
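As a minimal sketch of that loop in plain C (assert as the "framework", and a hypothetical slugify() as the function under test - an IDE or test runner just automates running this one file):

    #include <assert.h>
    #include <ctype.h>
    #include <string.h>

    /* Function under test: written after the test below, just enough to pass. */
    static void slugify(char *s) {
        for (; *s; ++s)
            *s = isspace((unsigned char)*s) ? '-' : (char)tolower((unsigned char)*s);
    }

    /* The test comes first; while iterating, compile and run only this file. */
    int main(void) {
        char buf[] = "Hello World";
        slugify(buf);
        assert(strcmp(buf, "hello-world") == 0);
        return 0;
    }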

TDD is fun.


It's true, but it's more a treatment than addressing the root cause.

Rich Hickey, creator of Clojure, has a fun quote that goes something like: "You thought you wanted TDD, but really you wanted an interactive REPL."

My gripe with TDD is that it gets tedious in its own way. Always needing to eventually mock some aspects of the code, and always working in the small, you start feeling distant from the medium and large picture of the code.


A REPL is better when prototyping but tests have a lot of value afterwards. They're repeatable and they can be used to spot regressions.

I think that a good workflow is to do initial experiments in a REPL followed, once I get an idea of how the code will look, by writing tests and the actual code that will run in production.


Yea, you still want regression tests, no doubt, but those are very different from TDD. You only need to write them after you're code complete on a feature.


It's a pretty good treatment. I think solving the root cause would require something like a constant-time compiler, which I don't think is going to happen. You don't need to mock at all if you do DI and modularity well in Guice or Spring or something else (there can be the rare exception for a few libraries that use, say, private constructors). You can also write 3-5 line tests that will absolutely test everything in your program, and be able to debug it line by line in a debugger. So I'm not sure I would agree with Rich Hickey on this. REPLs are no substitute for a good IDE and debugger running a series of declarative tests.


Not all REPLs - just as you had to preface "debugger" with "good", you need to preface "REPL" with "good". You really need to give a good REPL a fair try to know what I mean. Most people also thought tests wouldn't add much value, and took a long time to bother giving tests a fair chance. Good interactive REPLs are the same. Try it out - not one day of it, spend a week - then decide. It helps if you know someone who can show you. At my work, a lot of devs try Clojure and don't like it. Then I sit down with them and show them how you're supposed to use the REPL. They always turn around after that, and acknowledge how awesome the REPL is. That's why I emphasize the fair chance: it's easy to try it and miss the point, to not use it how it is meant to be used. Also, the language must embrace it; if it's tacked on, it rarely works. It has to be first class.

Ya, I agree, tests are a good treatment - the best for most languages. A good interactive REPL is often better though. Clojure has tests, good IDEs and debuggers, yet people don't care about them as much, because the REPL solves most problems. In return, it means the IDEs and debuggers aren't as good as Java's, because people care less and so invest less in them. Sometimes I'd want a better IDE or debugger to complement the great REPL experience, but I wouldn't trade the great interactive REPL for them.


Despite being downvoted, this assertion about TDD is true. It can provide a very interactive experience when programming, even with languages with little natural interactivity. But you have to know how to set up the tests.


A good IDE will go a long way here. Any of the IntelliJ IDEs are interactive in almost any aspect of technical coding. Setting up and running a test is a left click, if you've written the function and annotated it in Scala. I've been trying to use Atom for some embedded coding, and some of the plugins for it are pretty good, but not completely free. MS VS is up there too, but I haven't used it in a long time.

Of course, writing testable code using OO almost requires dependency injection IMO. With FP, it's almost trivial.


I'm enjoying wallabyjs with jest and vscode. It's quite fun having feedback from test execution always available in the IDE.

I don't have a long build step for my codebase (compiling TypeScript just takes a few seconds, and incremental is instant) -- but I do avoid lengthy deployment cycles -- deploying code to AWS Lambda takes quite a while.


Have you ever tried BDD (behavior-driven development)? One of the interesting aspects of this is being able to develop the test implementation in a different (possibly more expressive) language.


For web development, yes. Once you know pretty much what you want, and have a good idea of how you're going to go about implementing it, it can be fast. If you're new, don't know what you want, and are starting out on a framework, it can be an impassable chasm. If you're just experimenting, it's also a bit too much.

Starting with a simple unit test or spec (what I meant by functional testing, though this can be BDD too) can be easier, and you can change it as you figure out what you want. Even with specs, or a BDD framework like you said, there's no need to bring in a whole new DSL unless you need it, IMHO; others may disagree.


I've never done professional Web Development. But, I found BDD useful for describing correct behavior to the end user in a readable, yet functional and extensible way. When it works well, it eliminates the telephone game of capturing requirements and translating into brittle test cases which are readable only by the developers.


While I agree with the first part (immediate feedback is essential), I'm not sure about the connection with LISP in the end.

Doesn't LISP offer the same "fast feedback" cycle that e.g. PHP or whatever does? And even faster than C++?


It's kind of my point: LISP does (mostly) provide instant feedback, but it's not used much in the real world (every big LISP success I've heard of wound up either being translated into a shlub language, or resulted in the teams using it having a really hard time hiring people and scaling).


Ya, I think that's just the familiarity hump. Most people don't get over it, and the few that do geek out too much about the possibility of infinite macros and how you can extend Lisp; eventually, realising how few others made it past the familiarity hump and starting to feel isolated, they move on to something else.

It's hard to break this habit of familiarity, it's self replicating, because the less familiar people are, the less likely they can have good teachers and teaching material about it, etc.


I spent months working on a Lisp project in college. I pretty much hated it.

Some people (myself included) don't like the aesthetics of Lisp as a language, nor do we like the enforced functional program structure, nor do we like the lack of infix operators. Maybe I "didn't have good enough teachers," but for every other language I've ever learned I didn't need those "good teachers" to achieve a strong level of mastery, and I've learned more than a dozen to a high level of mastery.

After a while you have to stop blaming the people and instead just admit that Lisp isn't as awesome-for-all-purposes as some famous people seem to think it is.


Some would go to fight with a Long Sword, others would prefer a Katana. I think that's fine. Master what works best for the way your mind and body works. That said, I'd hate for languages to adopt a lowest common denominator just so companies feel then that all programmers can replace any other and all code base is understandable by all. That would be like having every sword fighter fight with a knife.

> After a while you have to stop blaming the people and instead just admit that Lisp isn't as awesome-for-all-purposes as some famous people seem to think it is.

I don't know, sounds like it's just not as awesome for you, but probably is that awesome to all those famous people saying it is.

> I spent months working on a Lisp project in college. I pretty much hated it.

Like I said, it's possible the style doesn't sit well with ya, and that's cool. That said, here's my anecdote. I also did Lisp in college and hated it. Thought it was stupid, had the worst, most unreadable syntax I'd ever seen. It made every trivial thing hard: why can't I just loop?!? Where are the variables?!? How do I just do one thing followed by another?!? Where do things start and end?!? I don't even remember what Lisp it was, but trust me, it was some cryptic academic variant, probably maintained by the teacher himself; it had no library ecosystem, lacked documentation, and had the ugliest editor I'd ever seen. Anyways, I moved on and never thought of it again.

Then I became a dev: did C++, some assembly, moved to ActionScript 3 with lots of event programming, then C#, JavaScript, Java, all that. Suddenly, I gave Lisp another shot, at work, on a real project, not some school project, with Clojure. First thing I realized: there are libraries and documentation and frameworks; you can set variables, loop, even do infix if you want to. There are easy ways to do imperative code, OOP, all of that is there - I just couldn't see it before. I think it was college obscuring the obvious from me, and maybe my coding instincts weren't good enough then, so I couldn't recognise things if they looked and behaved slightly differently. Also, I had coworkers who used Lisp for real projects, commercially. I realised Lisps are often used in school for teaching theoretical programming concepts, but in practice you don't use Lisps like that; you use them like any other language. Anyways, eventually, I also got recursion, tail calls, collection mappings, pure functions, higher-order functions and all that. Once I got all these things, it was like: how had I been using anything else before? Now it's such a drag to go back.


>I don't know, sounds like it's just not as awesome for you, but probably is that awesome to all those famous people saying it is.

Like our pg? Who loves Lisp to death, and made his first major app (a web page store generator, if I recall) in Lisp? But then, after selling it to Yahoo, it got ported to another language (Python?) so that other developers could actually modify it?

It's not "lowest common denominator" to expect a language to stay within certain bounds of flexibility. If every single project has so many macros that it's effectively a DSL that no other programmer anywhere understands, then you've created a software package that becomes instantly unmaintainable if the wrong people get hit by a bus.

In college, by the time I picked up Lisp, I had already used:

* Assembly Language

* Basic

* Pascal

* Forth

* C++

* Recursion

And I'd already worked on video games professionally (in 1987, in a project for Lucasfilm Games, before they became LucasArts).

Java and C# had yet to be invented. At the time, there weren't libraries and documentation to speak of in any Lisp, so that's something. It was Common Lisp (as well as "elisp" on the Atari ST, which of course was completely incompatible...).

Since then, I've used tail calls (in Lua, and soon coming to JavaScript), collection mappings, pure functions, higher order functions, and all that, in many other languages. I just still hate Lisp, and while I have no problem understanding how to use Lisp, I don't see the advantage of using it over using other languages, and in particular I am finding static types to be critical in large system architecture, which are completely lacking in Clojure and other Lisps.


1987, wow, I have to give you my respect. I think Lisp is just not your thing, then, but I have to acknowledge that in 1987 Lisps might have sucked. Realistically, the only ones I like and know are Clojure and ClojureScript. Maybe I'd hate Common Lisp if I tried it. I know I don't like elisp, because it doesn't even use lexical scope by default; that's just a nightmare.

> I don't see the advantage of using it over using other languages

That's probably the best question to ask, I think sometimes there's a lot of false benefits to new/different tools, that while cool, rarely add real value.

My observed (but not measured) benefit was enhanced productivity. That said, I'm not sure how much of that is due to Clojure being a Lisp, or more that I switched from Java/C#, which are statically typed and verbose, to a dynamic language. I don't encounter the issues I do in JavaScript and Python (which I have experience with) as much with Clojure, though - mostly that they get harder and harder to maintain over time. Again, I'm not sure this is because of Lisp; I feel it's more due to Clojure's choice to be a functional language first, with default immutability and managed reference types, as well as its smart choice of abstractions, like open polymorphism using protocols and multimethods, and CSP instead of event hell.

The only things I feel are benefits brought over from Lisp are the interactive REPL workflow, the concise notation, code that can be used as an extendable serialization and configuration format, and macros (a double-edged sword). Macros have helped me at rare times to avoid writing verbose code, but at my work we have very few macros even after 2 years of accumulated code; I think we added only 5 or 6. And I wouldn't want more, for the reason you mentioned: we're not looking to use our own custom programming language, and add too many macros and that's what you get. By the way, Clojure chose not to have reader macros, which other Lisps have; those are the ones where you can really go crazy changing the very syntax.

I'd be just as fond of another language that was functional, immutable, interactive and with smart abstractions. I can live without macros, even though I love a good macro when you need one. I would prefer a syntax with only expressions; I'm not a fan of statements, because you're restricted in where you can use them. And I like concise syntax, for having to type less and read less, being able to see more at once. I reckon not all Lisps fit this description, so maybe it's really not Lisp I should evangelize, but Clojure specifically. Elixir also fits this description, that I know of.

On static vs dynamic type systems, I still don't know. I like types; they feel good to have, but I just don't feel they really help. I think they're a false sense of safety, and they just slow you down. That said, maybe they pay back in long-term maintenance; I'm still unsure though.


> I'd be just as fond of another language that was functional, immutable, interactive and with smart abstractions.

You get all of the above with TypeScript except immutability. Pull in Lodash for extra functional sauce, or the functional variant of Lodash for increased composability. And the serialization format is the industry standard JSON.

Events are really a necessary tool. You're the first person I know of to complain of "event hell"; event-based programming is a key feature of modern operating systems and browsers and applications. I haven't used CSP (it looks like a Lisp thing?), but it doesn't look like a full replacement for events so much as streams? And there are functional stream libraries in JavaScript/TypeScript. [0]

I don't deny that having macros as a "sometimes feature" would be a good thing. I just can't stand Lisp's zillions of nested parentheses. (Thankfully JavaScript callback hell is also a thing of the past with async/await and Promises... I don't like zillions of levels of callbacks either.)

I think once you have static types, immutability isn't as critical, but if you're set on it, there's always Immutable.js [1]

> On static vs dynamic type systems, I still don't know. I like types; they feel good to have, but I just don't feel they really help. I think they're a false sense of safety, and they just slow you down. That said, maybe they pay back in long-term maintenance; I'm still unsure though.

Technical debt earns compound interest over time. Static type systems let you zero out technical debt for almost no effort.

The only "slow you down" I get from TypeScript is defining declarations for libraries that are missing them. That's about 5 minutes to 15 minutes per library. That's an extremely low up front cost to pay, and everything else about TypeScript is faster (auto complete on object members) and better (refactoring is fast and easy enough that it's easy to "just do it" and not put it off until later).

[0] http://reactivex.io/ or https://github.com/cujojs/most/ are good examples.

[1] http://facebook.github.io/immutable-js/


Ya, I would probably be happy with TypeScript. How's the backend story? Is using it with Node easy? Is there an interactive mode, or just quick transpilation?


It's pretty trivial to use with Node. You just need to have a source folder, set up your tsconfig.json file with your preferred options, and then run:

    tsc --watch
There's also a REPL for Node here:

    https://www.npmjs.com/package/ts-node
I find it a bit tricky to do anything complex in it, though. Since TypeScript is mostly ES2015 anyway, when I want anything interactive I usually just use an ES2015 interpreter (like Node itself). The types usually make interactive stuff more challenging.


>Some people (myself included) don't like the aesthetics of Lisp as a language, nor do we like the enforced functional program structure, nor do we like the lack of infix operators

Is it that "Some people" don't like those, or that "some people" came to Algol-style languages first, and can't easily adapt to the other style?

>Maybe I "didn't have good enough teachers," but for every other language I've ever learned I didn't need those "good teachers" to achieve a strong level of mastery, and I've learned more than a dozen to a high level of mastery.

Yes, but I bet the first ones you learned were already Algol-like.

After that, adding other languages does not mean much if they all share the same paradigms and syntax style. It's like a Common Lisp person also learning Scheme, Racket, Clojure, etc. -- in the end there's not much difference.

Going from those to different paradigms is what makes an actual difference (e.g. learning Coq, or Erlang, or Haskell, or Lisp or Forth, etc).


>Yes, but I bet the first ones you learned were already Algol-like.

BZZZZT! Wrong answer!

I started with BASIC and transitioned from there to assembly language. Wrote several games and device drivers in assembly language (BASIC was too slow) and then learned Forth. Tried really hard to like Forth, and played with it for a while; loved some aspects of the language, but couldn't wrap my head around how to write a game with it, so I gave up and kept writing games in assembly language.

Pascal was my next language, and that wasn't until later. Much later than that I finally got to C.

You might think of BASIC as Algol-like, though I'd say it's more Fortran-like. Assembly language, though, is about as un-Algol as a language can be, as is Forth.


People's exposure to infix operators way precedes any dabbling with an Algol language; it's the common notation that everyday mathematics has settled upon. You learn it in school.


Speaking personally, I don't want to work with an image based language. That's an argument against both Lisp and Smalltalk.

The problem with image based languages is that you've thrown away your file-based tool system. You're so used to your favorite text editor, source control system, etc? So sorry, we're building better tools! And when your code is ready, you just deploy the image!

It sounds great in theory, once those tools are delivered, but in practice you've lost more than you gained. No matter how good your environment, you do not beat the collective effort and productivity of file-system based tools.

What I say about image based tools also applies to code deployed in a database. Good luck keeping source control in sync with stored procedures...


>The problem with image based languages is that you've thrown away your file-based tool system. You're so used to your favorite text editor, source control system, etc? So sorry, we're building better tools! And when your code is ready, you just deploy the image!

First, what's bad about that? Our text editors and source control systems are indeed crap compared to what we could get with image based systems with inherent (not text-level) knowledge of code. The only real argument is that we still need to work with legacy crap text-based languages also, which can't share those tools. But that's not really an argument against those tools being good in themselves.

Second, there's nothing about an "image based language" that says you can't also have a textual representation, and thus work with your editor (or the image based editor) and "regular" SCM tools.

Third, LISPs are generally not image based.


You are right that in principle an image based system should be able to produce a better tool for dealing with that environment than is possible with a text based system.

However no image based system is poised to take over the world. And I'm willing to bet my salary that none will ever have sufficient mind share to do so. Programmers switch languages, environments and operating systems. When you add switching tooling to the learning curve, you've just created a barrier to entry to the environment, and programmers are going to spend time learning something that is not applicable to whatever else they do next.

So you wind up using an "awesome tool if you just understood it" that your hires don't understand. And the "awesome tool" probably isn't quite what those people find most productive. So it becomes a constant frustration.

Move back to a text based language, and this problem goes away. People use the tools that they know. I'm probably going to write code in 4 different languages this week, and have to look at what is going on on a half-dozen machines... and won't need to switch tools.

On having a textual representation: yes, you can do that. And then you've changed your image based language into just being yet another VM for another language. Whether you write code in Java, JavaScript or Python, in the end it runs in a VM, and that doesn't matter.

As for LISPs in general, it depends. What I've seen with Lisp has mostly been image based, but I don't use Lisp very often.


> No matter how good your environment, you do not beat the collective effort and productivity of file-system based tools.

I bet people said the same about trains and ships. There was a lot of work put into horse-powered infrastructure, and the first cars and trains were very slow and bad. Same with boats: at the time steam-powered ships were created, wind-powered clippers were 5 times faster and much more reliable.


The difference here is that image based languages have been making the argument that their better will beat every one else's larger community for decades. The one that came closest to realizing the dream was Smalltalk.

But somehow it never happened.


Fair enough, but not all Lisps are image based. Clojure is file based. That's the one I use.


I feel like this is a false dichotomy: you can have a fast iteration cycle, and have statically checked guarantees. I've worked in Haskell for 7 years and had exactly this. I could load my entire app in GHCi, make changes, reload and test. Now I'm working in Java and in IntelliJ I can have something similar with hot-swap. And in the browser with typescript I have a strong type-system and can reload my app in seconds.

I agree that it's easy to build a slow, batch based build system and just tell people that's how it is. Fast iteration is important and requires effort to keep working. It might even be more important than a static type system, at least for some apps. But you can have both.


This has not been my experience with strongly and statically typed systems. The biggest problem is that the coding is not fast, especially if you make a change that requires "shaking the tree". Turnaround to me is as much about the code you write as it is about the results you get from that code.

For example, I start my program thinking that I need a duck for all of my various waterfowl systems. However, four days into the coding process, I realize I need to allow for an ugly duckling to be passed around as well. Now I have four days worth of code to comb through and re-type. I could just use a refactoring tool to change all of the existing 'duck' types, but some of that code actually does need a duck, and won't work with an ugly duckling. So I have to come up with a more abstract type which can encompass both an ugly duckling and a duck, and refactor that into my program.

Eventually my code will be correct again, at least until I realize that a platypus needs to be included in my now-renamed aquatic_ecosystem as well.

The nice thing about strong and dynamically typed systems - I just start treating the incoming object as what I need it to be. No error chasing, no wading through four days worth of code. Yes, I'm more likely to have incorrect code, and it's probably going to show up while the code is running, and not before. Many times, that's a tradeoff I'm willing to accept.

My ideal system? I don't think at all about types. I just write code, and the (still fast) compiler will tell me when I'm passing something that doesn't quack to a function which expects quacks.


A lot of people who stick with expressive static type systems find the process of fixing type errors quite enjoyable. I really like the experience -- I can go really fast confidently.

GHC does let you defer type errors until runtime but I never want to...


>A lot of people who stick with expressive static type systems find the process of fixing type errors quite enjoyable. I really like the experience

So maybe liking developing in them depends on having that personality trait? (or typeclass if you prefer, pun intended).


Maybe, I don't know. The psychology of programming language preference seems like a pretty interesting topic. Enjoying programming at all seems like it might depend on some quirks; I don't know if there's a significant difference to make you inherently prefer fixing type errors over fixing unit tests or whatever.


> find the process of fixing type errors quite enjoyable

I, myself, would rather be creating new functionality than fixing a litany of type errors. In fact, I'd rather go to a meeting than change several hundred instances of 'duck' to 'waterfowl', 'avian', and 'ugly_duck' (knowing that I'll probably have to go back and change it again later).

Different strokes for different folks, I guess.


Well... but usually those are actual errors that you would need to fix anyway.


Why would generalizing a type definition make an error?


In the places you use a specific concrete type you presumably did so for a reason, and now need to think about how the type change affects it. The parts of your code that are generic should have been written to be generic, and the parts where the type can be inferred from the lower-level functions involved should have left the type to be inferred from the lower-level functions involved.


What you are saying is like the people that would "rather write new functionality than tests"... Like, yeah, that would be nice, if you were able to write 100% bug free code, but you aren't so your tests are actually important in the goal towards a working product.


Well, I'd actually rather write unit tests than shake out type trees too, so no, the two statements aren't equivalent.

Why? Unit tests also check a lot more than the types being passed around, so they are a lot more useful in the long run. There are some type systems where this is perhaps not the case, but they certainly aren't the majority. The majority is "so do I go with a float or a double" or "I have to cast this int to a uint64 for this one function".


So use a good type system rather than a bad one. I mean if your point is "some popular type systems are so bad that they're worse than no type system at all" then I agree with you, but don't tar all type systems with that brush.


A language that is exceedingly popular and has a great type system and has great tooling for tests is Rust. When using Rust, it’s quite evident why you have to use both and not one or the other.

And when it comes down to it: in theory, proofs are better than tests, and I'd say no one would disagree; in practice, types are proofs. Unfortunately, you can't use types to prove everything, so that is where tests come in.

Imagine a platform where you could indeed prove everything. I believe, but am not sure, that there are some languages that do this, like Idris.


Nowadays I feel that experience with some type system at least as powerful as Haskell's is required to criticize static typing... Well, of course, you can criticize it without such experience, but that only serves to make you look foolish when you complain about stuff that has been solved for a decade.

Are you really complaining about lack of generics?


Come back to me when the most used programming languages have a type system like Haskell's. Then we can talk about the benefits of static typing over dynamic typing. Until then, static typing is mostly "expected a HashMap<i32, u64>, got a HashMap<i64, u64>" or other pedantic stuff like that which is only really meaningful to the compiler.


> Come back to me when the most used programming languages have a type system like Haskell's.

We never get there if the conversation about types is dominated by people who have only used Java-like type systems. Even ignoring that, demanding that powerful type systems be ubiquitous before discussing their benefits is a complete non sequitur. There's no excuse for ignorance, here.


You are confusing "static typing" with "algebraic typing", which is related but different.


> Yes, I'm more likely to have incorrect code, and it's probably going to show up while the code is running, and not before. Many times, that's a tradeoff I'm willing to accept.

I find the runtime errors I get from dynamically typed languages typically much clearer (and easier to debug) than many compile-time errors from C++ or Haskell.

I do like C's static type system because it's needed for efficiency at runtime. (Re-)compiling C can be close to dynamic execution for not-too-large applications. Often C++ is also needed for easy-to-use containers, but as someone else said here, it's really a tradeoff because compiles are much slower (I don't know why that is, but part of it may be because containers are re-compiled for every compilation unit that uses them).

And the overwhelming majority of bugs really do appear on the first run of the dynamically typed code. The bugs that remain would very often have been bugs in statically typed languages too, since those are so ridiculously bad at expressing the simple invariants... They get unusable much faster than they get good at helping with bug discovery.


It's not a dichotomy at all. Type systems etc are obviously there to make it easier, faster and safer to build and maintain your code. I'm much more productive in Swift than I ever was in objc.

The one case where it might make sense to throw CS out of the window is if you're building something very small that you are sure you will never reuse or even look at again. And even then I'm not sure it is faster to be sloppy.




One very common complaint I've heard about Haskell is slow compile times, see this discussion for example with a bunch of GHC developers:

https://www.reddit.com/r/haskell/comments/45q90s/is_anything...


That's a real problem, but it's manageable for day-to-day development because reloading code in the REPL is really fast.

I'm using Haskell at work at the moment and while rebuilding everything and rerunning all the tests takes a frustratingly long time, reloading just the module I'm working on and playing with my changes is so fast I don't notice any delay. In practice, this means that 95% of any given task feels great but the final 5% before I'm done can be a real pain because I need to rebuild everything to faithfully reproduce our production environment and that does have a slow iteration time.


GHC is very slow. That said, cabal defaults to incremental builds, so most compilations will take a few seconds.

It does not change the fact that GHC is slow, since you must do full compilations once in a while. But it does let you keep your flow while developing. Besides, GHCi is much faster.


It's funny how when someone describes their own experience, you tell them they're wrong. Are you claiming the GP didn't actually experience fast reloads?

Initial compiles of some Haskell libraries that essentially do exponential inlining (cough vector-algorithms cough) can take a long time. An incremental non-optimized compile of a small change to a project with a good module structure takes a couple seconds.

The GHC devs are correct that it has been slowing down and are putting a lot of effort into getting that speed back. But it's not at the level of "rebuilding my project takes hours" that you frequently get with some build systems.


Well, here's another data point. I made a single-file prototype of a Tetris game just for fun. I think it was with the Haskell SDL bindings or so; I went about the implementation in the most straightforward way, and I do have a reasonable level of experience with Haskell.

I gave up when the code reached about 400 lines. The compilation times were already at 10-15 seconds, and the error messages were really ugly.

In short, compilation times depend on how much you use complex type system extensions, or even just on how much the libraries you use make use of the type system. (And if you don't use the type system much, it becomes a bad developing experience in most application domains, or your code performs very badly, etc.)

It was so much simpler to do it in C. <1 sec compiles, incredibly performant with straightforward non-optimized code.


> It's funny how when someone describes their own experience, you tell them they're wrong. Are you claiming the GP didn't actually experience fast reloads?

Nobody is saying anyone is wrong. One person can perceive short compile times that another thinks are long. But, we shouldn't make generalizations based on one datapoint. Maybe you _can_ have fast builds with Haskell, but maybe that isn't the norm.


That discussion is over a year and a half old.


True, but in my (very limited) Haskell experience, it's got worse since then, not better.


Well, yea, hopefully in the future we'll have languages that allow both. I do feel like research in language design has ignored the interactive, fast-feedback-loop aspect though, at least from the research I know of.

My biggest gripe with static checks is that, in reality, most of the software most programmers are asked to write doesn't need 100% correctness. As long as 95% of the most likely and most user-impacting bugs are fixed, the rest doesn't matter.

So I'm really interested to see the optional type system research mature more. When I start programming, I rarely know what the functionality should be; I have a vague idea, but I need to experiment. I don't need each experiment to be correct - at that phase it could have tons of bugs, as long as it gives me a sense of the functionality and allows me to demo it to the business so they get a similar sense. Static checks slow this process down a lot; even though I do have fun making each experiment correct, it's really just a waste of time for the product.

But as the desired functionality gets clearer and clearer, I want to start working towards that 95% correctness, and types are quicker to write than tests. I'd rather have types to assert type errors, borrow checks to assert no memory errors, and tests to assert functional errors, than have to write tests to assert all three, because tests are the slowest to write. But writing 100% type annotations or memory annotations is too much, just like I wouldn't write 100% test coverage in practice. That's because most software needs 95% correctness, not 100%. So this is the struggle I feel.


Um, we have those languages NOW. We've had them for decades. You just need to use them:

https://www.haskell.org/


I've used Haskell - have I overlooked parts of it? How does it solve my problem?

Also, I'm not a super fan of laziness. And I/O is kind of a pain; I'm not sure it's worth the overhead just to achieve purity.

P.S.: I encourage people to use Haskell though. It's a great language, a step forward in a lot of ways, and I'd be happy using it for work - just not as happy as I'd want to be, because of the problem I explained above.


If Haskell was a solved problem, Haskell 98 and Haskell Prime wouldn't exist.


Correct me if I'm wrong, but hot-swap requires a very particular setup and would be tricky to implement after a legacy Java-based stack has been established, no? Mind elaborating on that a bit? I worked in frontend on such a system and looked far and wide for something that would let me maintain my sanity, such as a turnaround time of less than 10 minutes.


> It might even be more important than a static type system, at least for some apps. But you can have both.

The important thing is not at which stage in the compilation pipeline the bug became obvious; it's how many seconds elapsed before it became obvious.

I generally found that working in untyped js with a workflow prioritising fast iteration was much better for finding bugs quickly than working in Scala with its advanced type system and miserably slow iterations.

But as you say, it's possible to have both.


Scala has incremental compilation now through SBT, I believe. How long ago was this?


A few years.


I think, to some degree, the idea that drives strongly typed, theorem proved languages is a bit different than what most people think of when coding.

A large set of people in that community want to derive algorithms mathematically, using a deductive or proof based process, as in math.

Iterative development conflicts with a top down form of development, and also conflicts with the safety oriented culture of strong typing. While rapid iteration languages and environments are good for initial development, they can be awful for projects requiring maintenance.

About a decade ago, I built a small web app using Common Lisp (SBCL) and a lot of the development I did was done through adding new features and debugging in the REPL. While I saved the VM, reading the code months later to add a feature was terrible because the app was hacked together. I wonder if there's a way to fix this. Typed Racket looks like a promising move in that direction.

FWIW, ghci already interprets Haskell quite quickly for iterative development.


"About a decade ago, I built a small web app using Common Lisp (SNACK) and a lot of the development I did was done through adding new features and debugging in the REPL. While I saved the VM, reading the code months later to add a feature was terrible because the app was hacked together. I wonder of there's a way to fix this." -- I think you might like clojure.spec, I feel that it is a step in the right direction towards keeping the flexibility and interactivity of lisp while making the code more maintainable. You can find more about it here -- [https://clojure.org/about/spec].

I also wrote a small article on it, not very detailed but maybe it can help you get a rough idea of the power of clojure.spec when combined with generative testing -- [http://abhirag.in/articles/spec_oracle.html]


One thing to remember - the domain under discussion in the article is gaming. There's little maintenance typically involved with games.

There's more today than there was yesterday, but it's still not a system expected to be built, maintained, and extended for decades.


The engines, though, absolutely are.


Yeah, this sounds right 99% of the time, although there is at least one genre that, if successful, breaks that mold - MMOs. WoW is 13 years old, Everquest is 18 years old, and Ultima Online is 20 years old. For us hobbyists, there are MUD codebases based upon code written 27 years ago.


If you're WoW you are printing money. You can just throw people at the problem.


Good points.

> derive algorithms mathematically

Of course, most of what computers do is not algorithms (or even "computing").

> can be awful for projects requiring maintenance

Or wonderful, see Smalltalk. Not that it is perfect and can't be improved, but it sure has been successfully maintained over a long period of time.


Can you expand on what is meant by algorithms and computing, and what you mean that computers mostly do instead?


The skill of programming is really about translating tasks that are semantically meaningful to humans (ie making a user friendly online booking site for a hotel) into "meaningless" instructions that can be computed by a turing complete machine. That's very different from, say, deriving a faster partition algorithm by using half-swaps instead of full swaps (as demonstrated by Andrei Alexandrescu last year).


But aren't all forms of engineering about translating semantically meaningful tasks into {problem-domain}? What makes software different?


"...into "meaningless" instructions that can be computed by a turing complete machine."

Pssst! Hey, buddy, that's what an algorithm is.


Computers today mostly (a) communicate and (b) store/retrieve data.

Both at the macro level, what they are used for, but also at the micro level. Actual computation tends to be incidental.


Yet you can't just tell the computer to communicate or do I/O -- it's all controlled by decisions, and making those decisions is basically what I call computation.


Yep...incidental computation. Also: "primarily"/"mostly". Not all/none.


Hmm, maybe a lot of computation is outsourced to server farms, but "clients" still do a lot of it: image processing, codecs, scheduling, layout, speech recognition, etc etc.


So what are codecs for? Communication, storage.

What is speech recognition for? Communication.

etc.

I think you keep arguing against something I never said, which is that computers do no computing whatsoever.


I feel it's the other way round... Everyone is advocating JavaScript, Python, etc. JS web apps are more and more developed directly in the browser in the dev console, CSS interactively modified to directly see the results. But this has also led us to code coverage abominations where people write hundreds of tests manually checking input types that otherwise might crash the thing after two days of running, because that hash table of lists of objects contained a string instead of a float. Is that fun?

In gaming, Unity3D embraces this interactivity by letting you modify more or less everything directly while the game is running. The Unreal Engine's blueprints show data flows live, etc.

In the technical sciences there was always Matlab with its interactive mode of development. We have similar technology in data science with IPython, Spyder, Jupyter notebooks, etc.

Actually I'm seeing more interactivity than good type systems out there. Elm, Haskell & co is something you find advocated in internet forums but rarely in companies.

(and as the author mentioned - those two actually don't have to be exclusive)


Yes, I don't think type systems (or, in general, "computer science-y stuff") are the enemy here. I do at least wonder about some of the stuff that passes for "best practices" nowadays: CI tools sound like a good idea on the surface, but can serve to (partially) hide complex build and deployment processes where once someone might have just typed make. Automated tests of awkward corner cases can be pretty valuable, but easily lead to test suites that take minutes (or worse...) to run on every build. I don't really know how best to get the good without the bad, but giving at least some weight to the "sense of fun" stuff sounds like a pretty good starting point.


Agree wholeheartedly!

That's why my favorite language is Clojure. That interactivity - instant feedback, seeing the program running as you are tweaking it - it's a bliss to use, and it creates better, more functional software.

Imagine playing music as you hear it when trying to come up with a good melody. Now imagine not playing it, but composing it on music sheets instead, and only occasionally playing what you've got, every 10 to 30 minutes.

Lisp championed interactivity; it pioneered dynamic, interactive programming for that sole purpose. The idea is that you morph a running program into shape, molding it like you would clay.

The first thing you do when writing in a Lisp like Clojure is run your program. In most other languages, running your program happens much later, and much less frequently, and it can actually be quite challenging to run.


When showing off Clojure, the application is often molded while running by patching it via the REPL. But what I don't understand is whether you can actually develop real programs like that. Surely even in Clojure code there are lots of dependencies, so you can't just change one place in isolation. And I also guess that changes done in the REPL aren't actually saved for the next time you run your app? And how do you do testing?

What is the actual workflow you use when molding your app?


The trick with some of these runtimes (I don't know if Clojure does this, just noting that the paradigm exists) is that the state of the runtime is not lost when going from development to production.

The entire state of the program, including the code and the globals in memory, is preserved, and simply spun up in a different environment. This tripped me up for some time with Smalltalk - I didn't understand this.

It's the same with many Lisps - you can frequently get a REPL directly inside a running program, and query/change the objects (including code) that are running. This was used to great effect in fixing the Deep Space 1 probe while it was 100 million miles from Earth.

https://en.wikipedia.org/wiki/Deep_Space_1


Clojure doesn't do the image-based persistence thing. You can ahead-of-time compile to Java bytecode if you want, but that's probably closer to the .fasl files created by some Common Lisp implementations. No old bits of state floating around when you deploy a new server. I do occasionally miss the ability to save an image, but it doesn't seem to be the modern way (and it would be a nightmare to implement well on top of the JVM).

You can easily add a REPL to any Clojure program, though.
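
For instance, a minimal embedded server - a sketch assuming the standard nREPL library is among your dependencies:

    (require '[nrepl.server :as nrepl])

    ;; start a headless REPL inside the running program; any
    ;; nREPL-aware editor can now connect to localhost:7888
    (defonce server (nrepl/start-server :port 7888))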


Re workflow: A typical one is to sketch something out in the REPL, then paste the code into a suitable function in your module, and live-reload it and test it by calling it from the REPL. You'll also typically define some variables holding your test data in the REPL. Sometimes you skip the sketching out part if you already know what you want to do.
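
A hedged sketch of that loop, with made-up data and function names:

    ;; 1. throwaway test data, defined in the REPL
    (def sample-orders [{:id 1 :total 40} {:id 2 :total 15}])

    ;; 2. sketch the logic directly in the REPL until it looks right
    (filter #(> (:total %) 20) sample-orders)
    ;;=> ({:id 1, :total 40})

    ;; 3. paste it into a named function in your module, live-reload,
    ;;    and call it from the REPL again
    (defn big-orders [orders]
      (filter #(> (:total %) 20) orders))

    (big-orders sample-orders)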

I'm not quite sure what you meant by the dependency question. If you need to reference other namespaces, you can add new (require) imports in the repl or live-reloaded source files. If you need to add completely new third party libraries to the project, that's rare enough that a REPL restart is fine - there's a library to do it dynamically at runtime if you want to though.


You can do stuff like send current (line, function, file, ...) to the REPL from the editor.

So you can interactively change the code on Emacs, Cursive, CounterClockwise,... and then update the REPL state.

While using the REPL for debugging purposes.

So at the end of the coding session, all source files have the current state of the application.


But what I don't understand is whether you can actually develop real programs like that?

Well, yes - I do it daily at my work. We develop backend SOA enterprise services like that; it works great.

Surely even in Clojure code there are lots of dependencies, so you can't just change one place in isolation.

Well, there still are a few, but very few compared to most other languages. That's because everything is immutable by default, and state is passed around instead of accessed globally most of the time. So you'd be surprised how often you can actually work in isolation. Achieving this was Clojure's number-one design tenet: to have a language which promotes untangling dependencies as much as possible, and which makes it easy to write simple, untangled code.

The other thing to realize is that the program running is not an isolated one like when doing TDD. It is the full program, with all its dependencies, connected to the file system, your databases, etc. I don't mock anything, if I'm trying to write my DB query, I try it for real on a real database.

And I also guess that changes done in the REPL aren't actually saved for the next time you run your app?

So, that's why Clojure has tight integration between your editor and the REPL, or between code files and the REPL. You normally don't type code at the command line; in fact, the Clojure REPL has no UI or command-line interface. It's a network REPL which listens for messages on a port using a protocol called nREPL. Each editor that supports Clojure connects to it and sends the content of the buffer to it for you. So you're editing the file and saving it as you see fit, while also asking the editor to send the file (or part of it) to the REPL as you edit. You can also edit the file, save it, and have the REPL watch for file changes and auto-reload them as they change; that way it works even with editors that have no Clojure support. There is a CLI you can use too, for when you want an ephemeral program. Some editors also build UIs for the REPL, allowing rich media to be printed instead, like an interactive graph.

And how do you do testing?

Well, you're always testing, as you code, in parallel. Since your code is running live as you edit, you see the impact of every change right away. You're expected to also write unit and integration tests, and you do that like in any other language. These are needed for regression testing, but you don't need tests to check that something works - you'll be doing that as you code from the REPL. You need tests to make sure someone in the future doesn't revert your functionality. I actually find these much easier to write in Clojure, again because the language pushes you to write isolated, easy-to-test code, but also because you don't need a test framework and a mocking library: the core language offers both.
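
A regression test then costs very little; a minimal clojure.test sketch (myapp.core and add here are hypothetical):

    (ns myapp.core-test
      (:require [clojure.test :refer [deftest is]]
                [myapp.core :refer [add]]))

    ;; plain assertions on plain functions - no framework, no mocks
    (deftest add-works
      (is (= 4 (add 2 2)))
      (is (= 0 (add -1 1))))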

What is the actual workflow you use when molding your app?

First I create a project. Then I start a repl configured from that project, then I open my editor and connect to my repl.

Then I write my main function and load it in the REPL. I add more and more functions, loading them and trying them out as I go. Once I've got enough, I orchestrate calls to them from my main function, loading and reloading it all as I go, seeing the result of every step in the REPL. When I'm satisfied, I save my file. Once I've got what I want, I create a test file, write some regression tests, and then git commit, send a CR request, and git push. Rinse and repeat.


Well, actually, let's explore the metaphor. Here's how I compose music.

First, I use a highly-flexible prototyping instrument to quickly iterate on melodic ideas. Normally this is either humming or whistling. "Our first instrument," as an instructor used to call it. I do this until the melody is catchy enough to start sticking in my mind. Then I'll look at alternate parts, like countermelodies, B/C sections, harmonies, or basslines. Each part gets repeated until it gets stuck in my head.

Then, I repeatedly hum/whistle each part like a disturbed rambler until I get home or to some other place where I can scribble the parts down onto paper. I used to be faster just typing out the Lilypond, but I'm out of practice with that, whereas musicians generally never stop having a short, shitty, hard-to-read staff-based shorthand.

Now, finally, I actually start playing the different parts on actual instruments, figuring out stuff like fingering for guitar or piano, rhythm and phrasing, filling out transition chords, etc. During this process, I play less and less of the song, drilling down to tiny fragments and setting up tiny local contexts for testing ornaments, phrasings, hits, etc.

If I'm gonna try to make the metaphor rigorous, there would be only one language, because the musical process happens entirely in one language. This language is small enough to write on a single piece of paper, fully contains all of its abstractions, has focused notation for specific instruments, and permits debugging any part of a program by cutting any contiguous subprogram out and turning that fragment into its own live environment.

I'd write a program first on a prototyping platform which is so lightweight that nearly any prototype program will run, and I'd use that to write out the entire first draft of my program. Then, I'd incrementally move pieces of my program transparently onto a more rigorously-precise framework which is more restrictive about types but makes it easier to compose modules and let them stay composed.

There is a point in time when composition is more about managing multiple parts at once, and you won't actually want to listen to more than about 15s of your song at a time; you'll want your runtime to support your quest for stability as a basis for tweaking the fine details of your song.


Honestly, that describes exactly my programming workflow when working in Clojure. The whole process happens in Clojure.

This language is small enough to write on a single piece of paper

Check. Clojure is one of the most concise languages out there. I use it when hand-writing code or coding on my phone, because it's so short.

fully contains all of its abstractions

Yup. In fact, you need to change your mindset: you don't write instructions in Clojure, you search for the functions that do what you want and arrange them in the order you need. Think micro-library.

has focused notation for specific instruments

Lisps like Clojure are the kings of DSLs. This is literally their bread and butter.

and permits debugging any part of a program by cutting any contiguous subprogram out and turning that fragment into its own live environment.

Yes, that's what the REPL lets you do. Load any subset of the program into a live running environment.

I'd write a program first on a prototyping platform which is so lightweight that nearly any prototype program will run, and I'd use that to write out the entire first draft of my program.

This is the idea behind the dynamic languages Lisp pioneered. Clojure will run almost any code, correct or not, no questions asked. It does not put constraints on you; though it strongly suggests safe tools over unsafe ones, it lets you cut yourself if you want to.

Then, I'd incrementally move pieces of my program transparently onto a more rigorously-precise framework which is more restrictive about types but makes it easier to compose modules and let them stay composed.

This is my ideal too. There's no holy grail for this yet, but Clojure is currently focused on this very last part. Composing modules is actually pretty trivial, because it supports performant immutable data structures as first-class citizens, open polymorphism, defaults to functionally pure constructs, and wraps all state in safe managed containers.
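
A tiny illustration of the immutability part:

    (def config {:db {:host "localhost" :port 5432}})

    ;; update-in returns a *new* map; anything already holding
    ;; `config` is untouched, so changes stay isolated
    (def test-config (update-in config [:db :port] inc))

    (:db config)      ;=> {:host "localhost", :port 5432}
    (:db test-config) ;=> {:host "localhost", :port 5433}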

Now, as for correctness checks: it has optional types, but the implementation is a work in progress and stagnating a bit. Optional generative tests are the current strategy being explored, as well as highly powerful runtime checks - still in alpha, though. Static analysis is also being worked on, though soundness is not a target; also in alpha at this point.
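
Concretely, that's clojure.spec; a rough sketch (clamp is a made-up example function):

    (require '[clojure.spec.alpha :as s]
             '[clojure.spec.test.alpha :as stest])

    (defn clamp [x lo hi] (max lo (min hi x)))

    ;; declare the contract separately from the code...
    (s/fdef clamp
      :args (s/and (s/cat :x int? :lo int? :hi int?)
                   #(<= (:lo %) (:hi %)))
      :ret int?)

    ;; ...then hammer it with generated inputs from the REPL
    (stest/check `clamp)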

If it's any consolation, the only two studies I could find about defect rates relative to programming language choice showed that Clojure does about as well as Haskell. It had some of the lowest defect rates of the languages tested, averaging a tad behind Haskell and doing better than Scala and F#, and obviously besting Java, C++, Python, Ruby, C#, Go, etc. It was an outlier in that sense, as it was the only non-statically-type-checked language to do so well.


Um. Here's the first part of Clojure's docs [0]. Compare and contrast: "A pitch is a frequency. 440Hz is an A pitch. There are twelve pitches. Each pitch differs from the previous pitch by the multiplicative constant twelfth-root-of-two. Twelve pitches in a row doubles the frequency, creating an octave." It only takes four or five paragraphs to mathematically describe the basis for chromatic notation.
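
For concreteness, those few paragraphs boil down to a one-liner in any language; e.g.:

    ;; pitch n semitones above A440, per the twelfth-root-of-two rule
    (defn pitch-hz [n]
      (* 440.0 (Math/pow 2.0 (/ n 12.0))))

    (pitch-hz 0)  ;=> 440.0  (A)
    (pitch-hz 12) ;=> 880.0  (A, one octave up)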

If Clojure fully contained all its abstractions, then it would not have the option of calling into the JVM.

Clojure does not have the property that cutting any fragment of a valid program yields a valid program. There are languages in the concatenative style which have this property, but even there, the ability to compose does not guarantee the ability to split.

If you think that types produce reliability, then you do not understand types.

You seem dedicated to Clojure, which is great, but music is thousands of years older and has figured out a lot of stuff. I think that you also missed my bigger point, which is that programming is roughly at the same point in its art that cave painting was at hundreds of thousands of years ago. If you think that Lisps are beautiful and Clojure is the pinnacle, then I think that you don't know beauty. But it's not your fault; you've never seen anything beautiful. None of us have. And none of us ever will, at the current rate of progress.

Here, have a video to provoke some thoughts: [1]

[0] https://clojure.org/reference/reader [1] https://vimeo.com/74354480


I've seen the video; it sums up my thoughts pretty well, though I knew all of it already. But now I'm really confused - I thought we were disagreeing?

I'm saying precisely that I believe the ability to quickly run and test your program is most important, more so than proving properties from its code description. And that's specifically the strength of Lisp dialects like Clojure. Are you saying something different?


I think that you also missed my bigger point

Indeed I did. I'm not claiming Lisps and Clojure to be the be-all and end-all of programming languages - I hope not. That said, I think you're being a little dramatic. Beauty doesn't really exist; it's an illusion fabricated by a mix of our culture and our genetic predispositions, characterized mostly by the emotions it evokes in you.

Clojure and Lisps, of all the languages I've tried so far, evoke the strongest set of positive emotions in me, the strongest being joy. And I think some of that is not without merit.

Still, beauty is not what I'm talking about; I like quantifiable measures. You have productivity, explorability, understandability, performance, and correctness. Clojure finds a good balance between these; it tries to maximize the average of them all, which is why I find it great for your average programming project. I don't yet know of a language which does a better job at this maximization, but I'm always on the lookout for one.

It only takes four or five paragraphs to mathematically describe the basis for chromatic notation.

It's just as short to describe the basis of Lisps, that is, the lambda calculus. Also, chromatic notation is much less powerful than lambda calculus notation, yet lambda calculus notation isn't much longer to describe. I'm also not sure shortness is necessarily better or more beautiful; again, that's an aesthetic preference, like minimalism in art. With nine constructs you can have a full Lisp implementation that can do all Turing-equivalent computations and also I/O. But in practice, having even more turns out to be useful and makes things easier. I think it's the same for Forth too.

If Clojure fully contained all its abstractions, then it would not have the option of calling into the JVM.

I'm not sure this has any practical merit, but there's nothing in Clojure preventing it; in fact, self-hosted ClojureScript does exactly this. It's not very useful, which is why Clojure doesn't bother - and calling into the JVM is actually quite useful for leveraging lots of existing code.

Clojure does not have the property that cutting any fragment of a valid program yields a valid program.

You can cut anywhere an s-expression starts or ends and get a valid program. Maybe that's not granular enough for you, but what is? In Forth you can cut at any whitespace and get a valid program, but I could claim that's not granular enough either - I want to cut at any character. To me, the property is that the notation is recursive; it builds on itself. Sure, the unit being an s-expression is not as small a unit as a single musical note, but that unit is independent, composable, and nestable.
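
For example:

    ;; a whole expression...
    (reduce + (map inc [1 2 3]))   ;=> 9

    ;; ...and every subexpression cut out of it is itself a valid,
    ;; runnable program
    (map inc [1 2 3])              ;=> (2 3 4)
    [1 2 3]                        ;=> [1 2 3]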

If you think that types produce reliability, then you do not understand types.

I don't, and I didn't use that word at all. In fact, I believe Clojure programs are very reliable, even though they don't have static types.

Here, have a video to provoke some thoughts

Thanks - always looking for thought-provoking material. I haven't watched it yet, but I will.


With a good IDE with an integrated debugger, Python and its ilk - and even Java - are like clay. PyCharm, IntelliJ, and their siblings.


More like cement. I don't mean to be dismissive, but REPLs aren't created equal. I've used good IDEs with Python and Java and C#. The C# debugger is probably the best, but it's really not the same thing. Python could probably embrace it more, and if you've used Jupyter, you've gotten a pretty good taste of what it means to mold a running program - now imagine doing that for all programs, not just data-science reports.


To be fair, in Clojure you have to reach for horrible external hacks like Component and Mount to have a "real" interactive development.


Not really; I often don't need them. They're mostly for being able to reset the full app state back to how it is at startup. You can also just restart the REPL, though some people find waiting 5 seconds for that too annoying.

I wish the restart times were much faster, but it's not as big a problem as most people make it sound.

Often, you can just learn to mold your program in smarter ways, so that you don't get yourself into weird inconsistent states, or you learn how to fix your state instead of starting over. Or you use Mount, Component, Integrant, etc. They're really low-overhead; Mount is trivial to add - it's like four more characters per global variable.
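
For reference, a Mount state is roughly this much ceremony (connect! and disconnect! are placeholders):

    (require '[mount.core :refer [defstate]])

    ;; `defstate` instead of `def`, plus :start/:stop hooks
    (defstate conn
      :start (connect! {:host "localhost"})
      :stop  (disconnect! conn))

    ;; (mount.core/start) and (mount.core/stop) then reset the app
    ;; state without restarting the REPL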

Having said that, Clojure isn't the best thing ever, just the most interactive language I currently know of that is mature enough and has a large enough ecosystem that I can use it for commercial software. Do you have other recommendations for a language that meets these criteria?


You can also just restart the REPL, though some people find waiting 5 seconds for that too annoying.

That's true only if Clojure is your only dependency; in a real project with libraries it's more like 15 seconds with lein repl, and CIDER is much worse.

Do you have other recommendations for a language that meets these criteria?

Some would say Common Lisp.


Another Lisp, I see. I've heard good things. It seems to be just a different set of trade-offs, and Clojure's trade-offs align better with me: the extra emphasis on immutability, concurrency, and functional programming, embracing the JVM and JavaScript ecosystems, and the extended syntax for better data notation.

But I'll eventually give Common Lisp and Scheme a try.


What would you describe as "real" interactive development? And while the reloaded[0] workflow is one benefit of using a library like Component or Mount, it's hardly the only benefit.

[0]: http://thinkrelevance.com/blog/2013/06/04/clojure-workflow-r...


Is there a way to write the changes in the REPL back to the source file, or do I have to copy them? I've been wondering since trying out Clojure.


A better pattern is to use your editor as the REPL via SLIME or vim-fireplace or similar. Basically:

1. Headless server like nREPL for Clojure

2. Code editor that can send chunks of text to the headless repl and receive the output. Emacs and neovim (with its terminal) are pretty good at this. You can use commands to send the surrounding s-expression, paragraph, etc.

3. Ta da! Your text is already in your source file.


With a set-up like Cider in Emacs (and I wouldn't be surprised if Cursive with IntelliJ has something similar), you can write code in a file and send it to the repl for evaluation.[0] This is a common method of development which accomplishes what you describe: code in a source file, evaluation in the repl.

[0]: https://cider.readthedocs.io/en/latest/interactive_programmi...


Thanks, that makes sense (also to cormacrelf).


Flutter (http://flutter.io) strikes an interesting balance here by (1) allowing just-in-time compiled, state-preserving "hot-reloading" during interactive development and (2) supporting optimized deployment using classical ahead-of-time compilation to native code.

Disclaimer: I work on the team at Google that builds the underlying language platform for Flutter.


Thank you for your work on Flutter! I tinker with it during weekends like this and despite it being my first go at mobile development, I feel productive programming in it.


Hmm, looks interesting, but it's quite unfortunate that making the compilation process useful for removing runtime errors ("strong mode") is merely optional. So one wonders how many shops write enough "prototype" code in "weak mode" that they decide to leave it in its Python-like mess instead of rewriting it for "strong mode"...


Don't worry; strong mode will be the only mode going forward and it already is the only mode for Flutter. Not merely optional.


I have been trying out Flutter over the past week or so and so far it really has delivered on high speed build-edit loops. I've already gotten in the habit of improving layouts while the app is running, it's really a great experience.

Android build times have improved a lot over the past few years but this is a whole next level experience.


Worth noting that "fun" in this case means player fun, not programmer fun. My productivity and my enjoyment of game development in C++ are secondary to the user's enjoyment of the finished game, and this is where the author's point gets more interesting and valuable. There are a lot of comments here already talking about fun, code safety, and productivity from the programmer's point of view, which IMO misses the most important part.

As a game programmer, building systems with fast turnaround times is more valuable to the artists and designers in the studio than for me personally. And the value in the artists and designers and programmers all having fast turnaround time is in being able to make a game that's more fun for the consumers.


I think it might be possible to get the benefits of static type checking while still allowing interactive modifications, but most current type systems don't lend themselves to that.

While it should be possible to replace any value with another of the same type (e.g. redefining a function without changing its signature), I'm not aware of any statically checked language with a REPL that allows it. When I'm playing around in the Haskell REPL, redefining a function requires also redefining all other functions that use it, and that's a chore that isn't even required by the type system.
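
For contrast, dynamic Lisps sidestep that chore via late binding. In Clojure, for instance, calls resolve through vars at call time, so dependents pick up a same-signature redefinition immediately; a quick sketch:

    (defn f [x] (* 2 x))
    (defn g [x] (+ 1 (f x)))
    (g 3)                  ;=> 7

    ;; redefine f without touching g; g sees the new f on its next call,
    ;; because the call goes through the #'f var
    (defn f [x] (* 10 x))
    (g 3)                  ;=> 31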

Other modifications are likely to break static checks, e.g. adding a new case to a sum type would invalidate exhaustiveness checking (and thus probably a bunch of compiler optimizations), so no standard type system would allow it. But having a check that makes sure you handle the added case everywhere would actually be nice to have, especially when it can tell you interactively where you need to add more code.

The major hurdle to altering a running program without violating type safety is the fact that you can't just check the original program and the modified version for internal consistency, you also have to ensure that old code that's still running won't be confused when it calls new code and gets an unexpected return value. In the case of adding to a sum type, you could compile in a fall-through case for all pattern matches, and then patch in the new handler code.

For even larger changes, like completely replacing the return type of a function, it might be necessary to specifically engineer the type system such that it can support this case. Ideally, it would support almost all modifications that work in a dynamically typed language, while still preventing anything that would take the program into an inconsistent state.


Not sure about Haskell (what I know about the compilation model makes it seem more difficult), but in a typical Java environment (including an IDE remote debugging tomcat/jetty, for instance), the debugger will hot swap changes that are just to method bodies.

The sad part is that this encourages bad development practices: piling more logic into existing method bodies doesn't require bouncing the server, while properly factoring it out into new methods does.

There are tools like JRebel that are supposed to fix this, but I haven't used them.


> Ideally, it would support almost all modifications that work in a dynamically typed language, while still preventing anything that would take the program into an inconsistent state.

This is impossible. Data structures have these little things called “invariants” that require proof to be established. In statically typed languages, abstract data types are used to prevent users from breaking these invariants, by making the representation invisible to anyone but the implementor.

If the data structure underlying an abstract data type can be modified anytime, then every time you patch your program, you would have to check two things:

(0) That the new data structure respects every invariant relied upon by other code.

(1) That either the new data structure is compatible with the old one (which is often not the case), or there are no reachable instances of the old data structure in memory.

This is an even bigger pain in the ass than just stopping the program and fixing it.


I'm aware that a fully general invariant checker is impossible, but there are still type systems that can catch a lot of errors in practice. If dynamic modifications have to be taken into account, that makes the problem more difficult, but not necessarily impossible. Even though there are changes that can't be checked at all, those can't be too common, or humans wouldn't be able to handle them either.

I'm not sure why you think that the checking will be painful, it almost sounds like you think that it would be done by the programmer. The whole point of type systems is that they can be checked automatically, so the programmer is prevented from doing something stupid.

Dynamic languages already allow all kinds of modifications that might or might not break invariants or introduce subtle incompatibilities; a type system would only make it safer.

It is also not just a matter of "stopping the program and fixing it". Suppose you are writing a game, and during playtesting you encounter a bug, where something is stuck in an endless respawn loop. In a dynamic language, you could look at the misbehaving code, develop a fix, and immediately observe its effects. This allows you to quickly iterate until you have found a solution that works. Compared to a "stop, fix, retry"-cycle, it's simply going to be faster, even assuming you can reproduce the bug reliably (maybe using some kind of input replay).


> but there are still type systems that can catch a lot of errors in practice.

I have yet to see a type system that can take a putative implementation of a data structure with arbitrarily complicated invariants, and spits out whether the implementation is correct or not. (Note that Coq, Agda, etc. don't quite fit the bill, because they require the programmer to enter the proof himself, even if these tools can partially automate the process.)

> I'm not sure why you think that the checking will be painful, it almost sounds like you think that it would be done by the programmer. The whole point of type systems is that they can be checked automatically, so the programmer is prevented from doing something stupid.

My point is precisely that type systems aren't normally used to enforce data structure invariants directly. Instead, data abstraction (i.e., the inability to inspect the representation of abstract data types from client code) is used to confine the potential to break data structure invariants to a small fragment of a big program (namely, where the abstract data type is implemented). This is in furious contradiction with the idea of inspecting and modifying anything anytime from anywhere.


I'm not talking about the kinds of invariants that require an undecidable type system to formalize, but about the most simple things. "Any value passed to this function can be iterated over." "This sequence of checks is exhaustive." "Calling this function with these arguments won't throw an exception."

Those tend to be the mistakes I make when programming interactively in Python. Forgetting to put a single value into a one-element list. Forgetting to check for None. Misspelling a key in a dictionary. Swapping the order of two arguments in a function call.

Yes, in some cases those properties can only be verified by proving some invariant equivalent to the Collatz conjecture. I'd conjecture that most instances could still be solved by an appropriate type system. I'm not too worried if it can't prevent me from invalidating invariants, so long as it can prevent me from making simple mistakes that are obvious in retrospect.


> Those tend to be the mistakes I make when programming interactively in Python.

Those tend to be the mistakes that I take for granted any seasoned programmer can detect and fix almost instantaneously and effortlessly. (Of course, not because programmers are superhuman, but rather because Hindley-Milner is the bare minimum a high-level language should have.) It's pathetic that we're still discussing these in 2017.

> Yes, in some cases those properties can only be verified by proving some invariant equivalent to the Collatz conjecture.

I have yet to see a useful program whose correctness is contingent on the Collatz conjecture being true. But I have seen lots of programs that are much easier to verify by hand than using a type system.

> I'm not too worried if it can't prevent me from invalidating invariants, so long as it can prevent me from making simple mistakes that are obvious in retrospect.

I'm not worried either. I'm just saying that “allow anything to be modified anytime, anywhere” is counterproductive. But if you really want to do it, you can do that in ML and Haskell too: just stuff all your top-level definitions into mutable cells.


> It's pathetic that we're still discussing these in 2017.

Evidently most language creators find it difficult to integrate both interactive programming and static typing, which suggests to me that the problem is not easy. Or maybe there just isn't enough overlap between the groups who value one or the other.

> just stuff all your top-level definitions into mutable cells

That seems like it could be part of a potential solution, but it would require rewriting the program so that everything is implicitly wrapped in the IO monad. And it still doesn't handle the case where you want to add to an existing data type.


> Evidently most language creators find it difficult to integrate both interactive programming and static typing.

Interactivity is one thing. Randomly redefining things is a-whole-nother thing. ML and Haskell are interactive. They just don't stuff absolutely everything in mutable cells like most dynamic languages do.

> That seems like it could be part of a potential solution, but it would require rewriting the program so that everything is implicitly wrapped in the IO monad.

You can't have it both ways: either you have effects and accept that you have effects, or don't have effects and accept that you don't have effects. (IOW, lying is bad.)


Look into success typing


Thank you for the recommendation. I think I found the paper that introduced the idea: http://www.it.uu.se/research/group/hipe/papers/succ_types.pd...

If I understand correctly, success typing rejects only programs that will lead to a type error at runtime, but allows all programs that it can't prove incorrect. I think that's an interesting idea and definitely better than no type checks at all, but I'd still like to have a type system that can prove some programs to be type safe, if possible.


I agree, yet I want as much inference as possible, as you described upthread.


Maintainability is also important for "fun".

If you can't touch the code for fear of breaking something, fixing bugs takes forever and new features / levels / versions never happen.


Games are rather infrequently maintained. There may be a few early patches, but after a year or so, the game is left as-is.

As a great (yet personally disappointing) example, I give you Mass Effect: Andromeda. No more patches or content will be released for the single player game only 5 months after its launch.

Different needs for different domains.


>Does choosing C++14 over C++11 mean the resulting game is more fun?

Even though I agree with the main point of the article, I have to point out that when a game doesn't crash every hour, it's definitely more fun.

A few examples of games that do crash: many games in the Elder Scrolls series, Fallout 3 (and New Vegas), Dwarf Fortress. Games that are undeniably complex and emergent, in ways the coders have no way of testing exhaustively.

It will be great when people are able to make even more complex games and have them not crash.


The article agrees.

> A better argument is that some technologies may result in the game being more stable and reliable. Those two terms should be a prerequisite to fun [...]


I'm rather late to this party, but it is so nice using C# with "Edit and Continue". Program hit an exception? No problem! We'll just make that didn't happen(*), move the next line of execution back a bit, edit some variables, put the correct code in, and carry on.

Of course, sometimes E&C just doesn't work for mysterious reasons of its own.

(*) English unsurprisingly lacks an acausal past tense to describe doing something that changes an event that has already happened.


"We'll make _it_so_ that didn't happen" sounds perfectly natural to me.

Or "change it so..."

We change [history/state of the world] to not include that event.


I think "We'll just make that not have happened..." would be correct English, but your version, though clumsy, is more evocative of what you actually meant. It's coming into more common usage, too.


The past is (currently) immutable, and the English idiom to handle this case is "we'll just pretend that didn't happen".


How about "we'll just undo that"?


Sorry to be off topic, but I just want to mention that this site is not an AMP website, but just try browsing around it to see what a website feels like without bloat.


The thing is that most people working in academic computer science are not very good programmers. They don't have to be, as their main duty is not to produce working code, but to produce publishable papers. Of course there are very good computer scientists who are also great programmers, and this is where the practically relevant research is produced.


Video games are getting massive, and they're not slowing down much. We're relying more and more on the engine to do the hard work for us and leaving the creative stuff to the humans. The only way I can see video games moving forward is to have a massive fixed cost of development in reusable, optimized code, and to push all the variable costs onto the individual game and the individual humans making it. The stuff that goes into libraries, into the game engine, and even into the tools needs to start sticking around longer, and it needs to be sane and safe. Basically, it's what almost every engine has already discovered: game scripting and game logic can use their own language, optimized for development time and accessibility. We can leave Python to the animators and level designers, but keep your Rust and your Go and your fancy data structures for the people who are building the infrastructure.


Computer science and software development are different disciplines, and software developers do value iteration time a lot. Tooling you use and algorithms/programming language principles at play should be viewed independently.


Yes, yes, and yes.

I'd go even further and claim that productivity is the ultimate currency in programming, because you can convert it into pretty much anything and everything else. Better quality, better performance, better UI. Of course, there is no guarantee that you will actually do that.

Reaping these benefits does mean that you need to constantly work at improving the code; "if it ain't broke don't fix it" leads to entropy, and so does being afraid to make fundamental improvements due to lack of test coverage.


Fighting with a bad technology stack definitely takes time away from the domain, where "the fighting" is different for everyone. One should be in a state of flow during dev, game or anything. So the tools that enable you to get there are the right tools.

Does correctly implementing a composable state machine and effects system mean the game is more fun? Usually.

This essay is full of question begging and false dichotomies.


Game programming has never been about comfortable programming languages; I think that's pretty well known, isn't it? In other programming fields, I'd say the experience is definitely getting more and more "fun", headache-free, and productive in general.


The author has a point that I generally agree with, but note that PHP, JavaScript and Basic all went through a phase of revision that made them more 'CS conformal', removing some notable corner cases in how they worked. (On the other hand, C++ has also had some corner cases removed over the years...)


The most fun language I've put my hands on (having tried lots of them) is Julia - I just love it. You can do so much with so little effort, and it can be optimized to almost match the speed of C.

The two main downsides are the lack of proper interfaces and of object.method() notation, which is sometimes more readable.


This is exactly why I switched from native iOS development to React Native. Swift is really nice and probably my favorite compiled language right now, but 10s+ compile times will never compare to hot reloading in React Native, period.


> It's about being able to implement your ideas.

This is where the choice of language might help or hinder you: one language might take longer to implement them in, or be more prone to bugs.

The question of whether or not your ideas turn out to be fun is completely orthogonal.


Has anyone seen the straw man that article argues against? What does "computer science" have to do with "writing games"?


>pretend all the computers in the movie work like your desktop PC. RIP Matt Damon.


I will leave the details as an exercise for the reader but one would have to quantify fun in its full spectrum.

I'll give you one clue, or more like a hunch...

If you have a tool that is trying to do everything, it isn't going to be equally great at all those things, and it will likely be confusing and hard to use.

Generic programming languages tend to be more popular, but they do so the same way TV programs or video games try to appeal to as large an audience as possible. (This is how we got all these absurd hacker movies, for example.)

In the future (lol) we will discover that single purpose languages are just better at the limited scope of things they do.

PHP is perhaps a bad example that I shouldn't even have mentioned here, but such a language knows exactly what its goal in life is - like PHP knows it is supposed to bake websites.

Maybe you've touched the awesome with your fun. Someone should try to build a language entirely around the core mechanics of fun.

In games there is fun in the form of rewards for stuff that just takes a fucking long time to do, fun from rewards for stuff that requires skill, fun from rewards obtained through luck, fun from a storyline progressing, fun from unexpected things, fun from buying in-game shit, fun from selling in-game shit, and fun from cooperation, as well as from growing able to do those things on your own.

But the real list is probably much longer.

The language should probably have a basic fun object that looks something like: { temporal: 0, skilz: 0, luck: 0, story: 0, randomEv: 0, pay2win: 0, progaming: 0, coop: 0, solo: 0, [...etc...] }

Then you have to benchmark the fun people are taking out of a bit of code or graphics using real world data.

And then....

Then you would be able to focus your attention where your effort produces the largest amount of fun, as well as see the areas where your game is teh suck. Answer the big questions, like: which parts are people playing, and why? Where do they rage quit?

If they didn't give up on creating content for Diablo 2 I would probably still be playing it.

If they had a language where fun was the central mechanic they would have known that changing all the items and ruining all the heroes had a negative dev time to fun conversion ratio.

It wouldn't have to be limited to games at all. One could quantify the fun on HN using the same language. It would all of a sudden be obvious that the karma system and submission ranking lack random rewards and events - a thing no one considered up to now, but if we know it is fun and the system is lacking it, it becomes worth considering.

</fun>


I'm considering writing a post about "Fun vs physics". In it I will explore rhetorical questions like

"does higher build quality make formula 1 cars faster"

and

"do better materials in a car make it more fun to drive"



