
Fun vs. Computer Science (2016) - Tomte
http://prog21.dadgum.com/221.html
======
kabdib
I have happily traded working in a nice language with slow iteration times to
working in a lousy language with very, very fast turnaround.

My favorite extreme example was the time I used a super fast 6502 assembler
environment that would do the "change a line of code and get the target
running" cycle in a couple of seconds. This kind of interaction is _magical_.
You're still writing kind of crappy assembly language, but it almost doesn't
matter. Things just flow.

On the other extreme: A large component of me deciding to quit a job was the
fact that a build of the product took four hours. And the build was typically
broken. So: Arrive in the morning, sync, wait all morning for the build to
fail. Do another cycle after lunch: fail. Go home, repeat for _weeks_. (Add to
this: Managers who refused to buy decent development machines, people who kept
checking in busted code and making things worse, and a crushing schedule. Who
needs that?)

These days I do a lot of C++, and some PHP. The C++ projects take a few
minutes to build, which isn't great, but it's survivable. The PHP "builds" as
fast as I can refresh a browser page. And as much as it pains me, on most
days, when I grit my teeth and get honest about it, I'm more productive in
PHP. And I despise PHP.

I learned LISP early on in my career; wrote a few LISP interpreters, goggled
at the majesty of LISP machines, read all that I could. But I've never shipped
a significant project in LISP, nor am I likely to. And the Newton actually
flipped from a LISP (well, OOPy-Scheme) implementation language to C++ in
order to ship. I'm wondering if there is some law of human nature at work: You
can have elegance and comfort or you can have a product. I sure hope I'm
wrong.

~~~
bambax
> _The PHP "builds" as fast as I can refresh a browser page_

JavaScript is just as fast, or faster. That's probably an important reason why
it's "eating the world".

~~~
falcolas
I agree that JavaScript is fast - but we've ruined that in a lot of places by
throwing in things like code generation, compilation, builds, and other nasty
words. The turnaround for the last React & Node program I had the displeasure
of working with was 30 seconds on average.

PHP - for all its faults - does not suffer from these kinds of recompilation
issues, yet can be optionally pre-compiled for added speed when you do go to
production.

~~~
caseymarquis
I don't use backend JS, but tools like webpack-dev-server, which on file save
do an incremental recompile and then a hot reload, have made this a non-issue
on the frontend for me. Not sure if this applies to your use case, but figured
it couldn't hurt to offer a partial solution.
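A minimal sketch of the setup described above (the entry point and port are illustrative, not taken from the comment):

```typescript
// webpack.config.ts -- minimal hot-reload dev setup sketch.
// Entry path and port are placeholders; adjust for your project.
const config = {
  mode: "development",
  entry: "./src/index.ts",
  devServer: {
    hot: true,  // patch changed modules into the running page on save
    port: 8080,
  },
  // In watch mode webpack only rebuilds modules whose files changed,
  // which is what makes the recompile incremental.
};

export default config;
```

Run with `webpack serve`; edits to source files then recompile and hot-reload without a full restart.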

~~~
scaasic
On the backend side, tools like nodemon provide a very similar solution. Then
just set up your webpack dev server to proxy requests to your backend, and
you're good to go.
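The proxy half of that setup might look like this (webpack-dev-server 4-style options; the `/api` path and backend port are illustrative assumptions):

```typescript
// Sketch: requests the dev server can't serve itself (e.g. /api/*)
// are forwarded to the backend process that nodemon restarts on save.
const devServer = {
  proxy: {
    "/api": {
      target: "http://localhost:3000", // hypothetical nodemon-managed backend
      changeOrigin: true,              // rewrite the Host header for the target
    },
  },
};
```

With this, the browser talks only to the dev server, so hot-reloaded frontend code and a freshly restarted backend stay on one origin.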

------
hesselink
I feel like this is a false dichotomy: you can have a fast iteration cycle,
_and_ have statically checked guarantees. I've worked in Haskell for 7 years
and had exactly this. I could load my entire app in GHCi, make changes, reload
and test. Now I'm working in Java and in IntelliJ I can have something similar
with hot-swap. And in the browser with typescript I have a strong type-system
and can reload my app in seconds.

I agree that it's easy to build a slow, batch based build system and just tell
people that's how it is. Fast iteration is important and requires effort to
keep working. It might even be more important than a static type system, at
least for some apps. But you can have both.

~~~
CJefferson
One very common complaint I've heard about Haskell is slow compile times, see
this discussion for example with a bunch of GHC developers:

[https://www.reddit.com/r/haskell/comments/45q90s/is_anything...](https://www.reddit.com/r/haskell/comments/45q90s/is_anything_being_done_to_remedy_the_soul/)

~~~
chowells
It's funny how when someone describes their own experience, you tell them
they're wrong. Are you claiming the GP didn't actually experience fast
reloads?

Initial compiles of some Haskell libraries that essentially do exponential
inlining ( _cough_ vector-algorithms _cough_ ) can take a long time. An
incremental non-optimized compile of a small change to a project with a good
module structure takes a couple seconds.

The GHC devs are correct that it has been slowing down and are putting a lot
of effort into getting that speed back. But it's not at the level of
"rebuilding my project takes hours" that you frequently get with some build
systems.

~~~
jstimpfle
Well, here's another data point. I made a single file prototype to implement a
Tetris game just for fun. I think it was with the Haskell SDL bindings or so,
and I went with the most straightforward way about the implementation, and do
have a reasonable level of experience with Haskell.

I gave up when the code reached about 400 lines. The compilation times were
already at 10-15 seconds, and the error messages were really ugly.

In short, compilation times depend on how much you use complex type system
extensions, or even just on how much the libraries you use lean on the type
system. (And if you don't use the type system much, it becomes a bad
developing experience in most application domains, or your code performs very
badly, etc.)

It was so much simpler to do it in C. <1 sec compiles, incredibly performant
with straightforward non-optimized code.

------
Karrot_Kream
I think, to some degree, the idea that drives strongly typed, theorem-proving
languages is a bit different from what most people think of when coding.

A large set of people in that community want to derive algorithms
mathematically, using a deductive or proof based process, as in math.

Iterative development conflicts with a top down form of development, and also
conflicts with the safety oriented culture of strong typing. While rapid
iteration languages and environments are good for initial development, they
can be awful for projects requiring maintenance.

About a decade ago, I built a small web app using Common Lisp (SBCL) and a lot
of the development I did was done through adding new features and debugging in
the REPL. While I saved the VM, reading the code months later to add a feature
was terrible because the app was hacked together. I wonder if there's a way to
fix this. Typed Racket looks like a promising move in that direction.

FWIW, ghci already interprets Haskell quite quickly for iterative development.

~~~
falcolas
One thing to remember - the domain under discussion in the article is gaming.
There's little maintenance typically involved with games.

There's more today than there was yesterday, but it's still not a system
expected to be built, maintained, and extended for decades.

~~~
sbov
Yeah, this sounds right 99% of the time, although there is at least one genre
that, if successful, breaks that mold - MMOs. WoW is 13 years old, Everquest
is 18 years old, and Ultima Online is 20 years old. For us hobbyists, there
are MUD codebases based upon code written 27 years ago.

~~~
tormeh
If you're WoW you are printing money. You can just throw people at the
problem.

------
mto
I feel it's the other way round. Everyone is advocating JavaScript, Python,
etc. JS webapps are more and more developed directly in the browser in the dev
console, with CSS interactively modified to see the results directly. But this
has also led us to code coverage abominations where people write hundreds of
tests manually checking input types that might otherwise crash the thing after
two days of running, because that hash of lists of objects contained a string
instead of a float. Is that fun?

In gaming Unity3d embraces this interactivity by modifying more or less
everything directly while the game is running. The unreal engine blueprints
show data flows live etc.

In technical sciences there was always Matlab with its interactive mode of
development. We have similar technology in data science with ipython, Spyder,
jupyter notebooks etc.

Actually I'm seeing more interactivity than good type systems out there. Elm,
Haskell & co are something you find advocated on internet forums but rarely in
companies.

(and as the author mentioned - those two actually don't have to be exclusive)

~~~
dasmoth
Yes, I don't think type systems (or, in general, "computer science-y stuff")
are the enemy here. I do at least wonder about some of the stuff that passes
for "best practices" nowadays: CI tools sound like a good idea on the surface,
but can serve to (partially) hide complex build and deployment processes where
once someone might have just typed _make_. Automated tests of awkward corner
cases can be pretty valuable, but easily lead to test suites that take minutes
(or worse...) to run on every build. I don't really know how best to get the
good without the bad, but giving at least some weight to the "sense of fun"
stuff sounds like a pretty good starting point.

------
didibus
Agree whole heartedly!

That's why my favorite language is Clojure. That interactivity, instant
feedback, seeing the program running as you are tweaking it: it's bliss to
use, and it creates better, more functional software.

Imagine playing music as you hear it when trying to come up with a good
melody. Now imagine not playing it, but composing it on music sheets instead,
and occasionally playing what you've got every 10 to 30 minutes.

Lisp championed interactivity; it invented dynamic, interactive programming
for that sole purpose. The idea is that you morph a running program into
shape, molding it like you would clay.

The first thing you do when writing in a Lisp like Clojure is run your
program. In most other languages, running your program happens much later, and
much less frequently, and it can actually be quite challenging to run.

~~~
jfries
When showing off Clojure, the application is often molded while running by
patching it via the REPL. But what I don't understand is whether you can
actually develop real programs like that? Surely even in Clojure code there
are lots of dependencies, so you can't just change one place in isolation. And
I also guess that changes done in the REPL aren't actually saved for the next
time you run your app? And how do you do testing?

How is the actual work flow you use when molding your app?

~~~
falcolas
The trick with some of these runtimes (I don't know if Clojure does this, just
noting that the paradigm exists) is that the state of the runtime _is not
lost_ when going from development to production.

The entire state of the program, including the code and the globals in memory,
is preserved, and simply spun up in a different environment. This tripped me
up for some time with Smalltalk - I didn't understand this.

It's the same with many Lisps - you can frequently get a REPL directly inside
a running program, and query/change the objects (including code) that are
running. This was used to great effect in fixing the Deep Space 1 probe while
it was 100 million miles away from Earth.

[https://en.wikipedia.org/wiki/Deep_Space_1](https://en.wikipedia.org/wiki/Deep_Space_1)

~~~
dasmoth
Clojure doesn't do the image-based persistence thing. You can ahead-of-time
compile to Java byte code if you want, but that's probably closer to the .fasl
files created by some Common Lisp implementations. No old bits of state
floating around when you deploy a new server. Do occasionally miss the ability
to save an image, but it doesn't seem to be the modern way (and would be a
nightmare to implement well on top of the JVM).

Can easily add a REPL to any Clojure program though.

------
kasperl
Flutter ([http://flutter.io](http://flutter.io)) strikes an interesting
balance here by (1) allowing just-in-time compiled, state-preserving "hot-
reloading" during interactive development and (2) supporting optimized
deployment using classical ahead-of-time compilation to native code.

Disclaimer: I work on the team at Google that builds the underlying language
platform for Flutter.

~~~
mmirate
Hmm, looks interesting, but it's quite unfortunate that making the compilation
process useful for removing runtime errors is merely _optional_ ("strong
mode"). So one wonders how many shops write enough "prototype" code in "weak
mode" that they decide to leave it in its Python-like mess instead of
rewriting for "strong mode"...

~~~
kasperl
Don't worry; strong mode will be the only mode going forward and it already is
the only mode for Flutter. Not merely optional.

------
dahart
Worth noting that "fun" in this case means player fun, not programmer fun. My
productivity and my enjoyment of game development in C++ are secondary to the
user's enjoyment of the finished game, and this is where the author's point
gets more interesting and valuable. There are a lot of comments here already
talking about fun, code safety, and productivity from the programmer's point
of view, which IMO misses the most important part.

As a game programmer, building systems with fast turnaround times is more
valuable to the artists and designers in the studio than for me personally.
And the value in the artists and designers and programmers all having fast
turnaround time is in being able to make a game that's more fun for the
consumers.

------
yorwba
I think it might be possible to get the benefits of static type checking while
still allowing interactive modifications, but most current type systems don't
lend themselves to that.

While it would be possible to replace any value by another of the same type
(e.g. redefining a function without changing the signature), I'm not aware of
any statically checked language with a REPL that allows that. When I'm playing
around in the Haskell REPL, redefining a function requires also redefining all
other functions that use it, and that's a chore that isn't even required by
the type system.

Other modifications are likely to break static checks, e.g. adding a new case
to a sum type would invalidate exhaustiveness checking (and thus probably a
bunch of compiler optimizations), so no standard type system would allow it.
But having a check that makes sure you handle the added case everywhere would
actually be nice to have, especially when it can tell you interactively where
you need to add more code.
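The exhaustiveness point can be made concrete with a discriminated union in TypeScript (not one of the languages under discussion; `Shape` and `area` are illustrative names):

```typescript
// A small sum type. Adding a new variant, say { kind: "triangle"; ... },
// turns the `never` assignment below into a compile error at every
// non-exhaustive match -- exactly the "tell me where to add more code"
// behaviour described above.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius * s.radius;
    case "square":
      return s.side * s.side;
    default: {
      // If all cases are handled, `s` has type `never` here and the
      // assignment type-checks; a missed case makes it fail to compile.
      const unreachable: never = s;
      throw new Error(`unhandled shape: ${JSON.stringify(unreachable)}`);
    }
  }
}
```

The interesting part is that the check fires statically at every match site, so the compiler does point you at every place that needs a new handler.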

The major hurdle to altering a running program without violating type safety
is the fact that you can't just check the original program and the modified
version for internal consistency, you also have to ensure that old code that's
still running won't be confused when it calls new code and gets an unexpected
return value. In the case of adding to a sum type, you could compile in a
fall-through case for all pattern matches, and then patch in the new handler
code.

For even larger changes, like completely replacing the return type of a
function, it might be necessary to specifically engineer the type system such
that it can support this case. Ideally, it would support almost all
modifications that work in a dynamically typed language, while still
preventing anything that would take the program into an inconsistent state.

~~~
catnaroek
> Ideally, it would support almost all modifications that work in a
> dynamically typed language, while still preventing anything that would take
> the program into an inconsistent state.

This is impossible. Data structures have these little things called
“invariants” that require proof to be established. In statically typed
languages, abstract data types are used to prevent users from breaking these
invariants, by making the representation invisible to anyone but the
implementor.

If the data structure underlying an abstract data type can be modified
anytime, then every time you patch your program, you would have to check two
things:

(0) That the new data structure respects every invariant relied upon by other
code.

(1) That either the new data structure is compatible with the old one (which
is often not the case), or there are no reachable instances of the old data
structure in memory.

This is an even bigger pain in the ass than just stopping the program and
fixing it.
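The idea of confining invariant-breaking code behind an abstract data type can be sketched in TypeScript (a hypothetical `SortedBag`; the names are illustrative, not from the comment):

```typescript
// The invariant "items is always sorted" is established here and can
// only be broken here: client code never sees the array directly, so
// every proof obligation lives inside this class.
class SortedBag {
  private items: number[] = [];

  insert(x: number): void {
    // Keep the array sorted on every insert.
    const i = this.items.findIndex((y) => y > x);
    if (i === -1) this.items.push(x);
    else this.items.splice(i, 0, x);
  }

  toArray(): number[] {
    return [...this.items]; // defensive copy; callers can't mutate the rep
  }
}
```

If the private representation could be patched from anywhere at runtime, every such invariant would have to be re-checked on every patch, which is the objection being made.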

~~~
yorwba
I'm aware that a fully general invariant checker is impossible, but there are
still type systems that can catch a lot of errors in practice. If dynamic
modifications have to be taken into account, that makes the problem more
difficult, but not necessarily impossible. Even though there are changes that
can't be checked at all, those can't be too common, or humans wouldn't be able
to handle them either.

I'm not sure why you think that the checking will be painful, it almost sounds
like you think that it would be done by the programmer. The whole point of
type systems is that they can be checked automatically, so the programmer is
prevented from doing something stupid.

Dynamic languages already allow all kinds of modifications that might or might
not break invariants or introduce subtle incompatibilities; a type system
would only make it safer.

It is also not just a matter of "stopping the program and fixing it". Suppose
you are writing a game, and during playtesting you encounter a bug, where
something is stuck in an endless respawn loop. In a dynamic language, you
could look at the misbehaving code, develop a fix, and immediately observe its
effects. This allows you to quickly iterate until you have found a solution
that works. Compared to a "stop, fix, retry"-cycle, it's simply going to be
faster, even assuming you can reproduce the bug reliably (maybe using some
kind of input replay).

~~~
catnaroek
> but there are still type systems that can catch a lot of errors in practice.

I have yet to see a type system that can take a putative implementation of a
data structure with arbitrarily complicated invariants and spit out whether
the implementation is correct or not. (Note that Coq, Agda, etc. don't quite
fit the bill, because they require the _programmer_ to enter the proof
_himself_ , even if these tools can partially automate the process.)

> I'm not sure why you think that the checking will be painful, it almost
> sounds like you think that it would be done by the programmer. The whole
> point of type systems is that they can be checked automatically, so the
> programmer is prevented from doing something stupid.

My point is precisely that type systems aren't normally used to enforce data
structure invariants directly. Instead, _data abstraction_ (i.e., the
inability to inspect the representation of abstract data types from client
code) is used to _confine_ the potential to break data structure invariants to
a small fragment of a big program (namely, where the abstract data type is
implemented). This is in furious contradiction with the idea of inspecting and
modifying anything anytime from anywhere.

~~~
yorwba
I'm not talking about the kinds of invariants that require an undecidable type
system to formalize, but about the most simple things. "Any value passed to
this function can be iterated over." "This sequence of checks is exhaustive."
"Calling this function with these arguments won't throw an exception."

Those tend to be the mistakes I make when programming interactively in Python.
Forgetting to put a single value into a one-element list. Forgetting to check
for _None_. Misspelling a key in a dictionary. Swapping the order of two
arguments in a function call.

Yes, in some cases those properties can only be verified by proving some
invariant equivalent to the Collatz conjecture. I'd conjecture that most
instances could still be solved by an appropriate type system. I'm not too
worried if it can't prevent me from invalidating invariants, so long as it can
prevent me from making simple mistakes that are obvious in retrospect.

~~~
catnaroek
> Those tend to be the mistakes I make when programming interactively in
> Python.

Those tend to be the mistakes that I take for granted any seasoned programmer
can detect and fix almost instantaneously and effortlessly. (Of course, not
because programmers are superhuman, but rather because Hindley-Milner is the
bare minimum a high-level language should have.) It's pathetic that we're
still discussing these in 2017.

> Yes, in some cases those properties can only be verified by proving some
> invariant equivalent to the Collatz conjecture.

I have yet to see a useful program whose correctness is contingent on the
Collatz conjecture being true. But I have seen lots of programs that are much
easier to verify by hand than using a type system.

> I'm not too worried if it can't prevent me from invalidating invariants, so
> long as it can prevent me from making simple mistakes that are obvious in
> retrospect.

I'm not worried either. I'm just saying that “allow anything to be modified
anytime, anywhere” is counterproductive. But if you _really_ want to do it,
you can do that in ML and Haskell too: just stuff all your top-level
definitions into mutable cells.
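The "definitions in mutable cells" idea translates directly to TypeScript as an illustration (`greet` is a hypothetical example, not from the comment):

```typescript
// A top-level definition held in a mutable cell: `greet` can be
// redefined at runtime, but the type annotation means any redefinition
// must keep the same type -- the statically checked analogue of
// REPL-style patching.
let greet: (name: string) => string = (name) => `hello, ${name}`;

const before = greet("world");
greet = (name) => `hi, ${name}!`; // "patch" the definition in place
const after = greet("world");
```

Assigning something of a different type (say, `greet = 42`) would be rejected at compile time, which is the sense in which this stays safe.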

~~~
yorwba
> It's pathetic that we're still discussing these in 2017.

Evidently most language creators find it difficult to integrate both
interactive programming and static typing, which suggests to me that the
problem is not easy. Or maybe there just isn't enough overlap between the
groups who value one or the other.

> just stuff all your top-level definitions into mutable cells

That seems like it could be part of a potential solution, but it would require
rewriting the program so that everything is implicitly wrapped in the IO
monad. And it still doesn't handle the case where you want to add to an
existing data type.

~~~
catnaroek
> Evidently most language creators find it difficult to integrate both
> interactive programming and static typing.

Interactivity is one thing. Randomly redefining things is a-whole-nother
thing. ML and Haskell are interactive. They just don't stuff absolutely
everything in mutable cells like most dynamic languages do.

> That seems like it could be part of a potential solution, but it would
> require rewriting the program so that everything is implicitly wrapped in
> the IO monad.

You can't have it both ways: either you have effects and accept that you have
effects, or don't have effects and accept that you don't have effects. (IOW,
lying is bad.)

------
bambax
Maintainability is also important for "fun".

If you can't touch the code for fear of breaking something, fixing bugs takes
forever and new features / levels / versions never happen.

~~~
falcolas
Games are rather infrequently maintained. There may be a few early patches,
but after a year or so, the game is left as-is.

As a great (yet personally disappointing) example, I give you Mass Effect:
Andromeda. Only five months after its launch, no more patches or content will
be released for the single-player game.

Different needs for different domains.

------
vesak
>Does choosing C++14 over C++11 mean the resulting game is more fun?

Even though I agree with the main point of the article, I have to point out
that when a game doesn't crash every hour, it's definitely more fun.

A few examples of games that do crash: many games in the Elder Scrolls series,
Fallout 3 (and New Vegas), Dwarf Fortress. Games that are undeniably complex
and emergent in ways the coders have no way of testing exhaustively.

It will be great when people are able to make even more complex games, and
have them not crash.

~~~
falcolas
The article agrees.

> A better argument is that some technologies may result in the game being
> more stable and reliable. Those two terms should be a prerequisite to fun
> [...]

------
pjc50
I'm rather late to this party, but it is _so nice_ using C# with "Edit and
Continue". Program hit an exception? No problem! We'll just make that didn't
happen(*), move the next line of execution back a bit, edit some variables,
put the correct code in, and carry on.

Of course, sometimes E&C just doesn't work for mysterious reasons of its own.

(*) English unsurprisingly lacks an acausal past tense to describe doing
something that changes an event that has already happened.

~~~
Kluny
I think "We'll just make that not have happened..." would be correct English,
but your version, though clumsy, is more evocative of what you actually meant.
It's coming into more common usage, too.

~~~
Sir_Substance
The past is (currently) immutable, and the English idiom to handle this case
is "we'll just pretend that didn't happen".

------
kybernetikos
Sorry to be off topic, but I just want to mention that this site is _not_ an
AMP website, but just try browsing around it to see what a website feels like
without bloat.

------
auggierose
The thing is that most people working in academic computer science are not
very good programmers. They don't have to be, as their main duty is not to
produce working code, but to produce publishable papers. Of course there are
very good computer scientists who are also great programmers, and this is
where the practically relevant research is produced.

------
c3534l
Video games are getting massive and they're not slowing down much. We're
relying more and more on the engine to do the hard work for us and leaving the
creative stuff to the humans. The only way I can see video games moving
forward is to have massive fixed costs of development in reusable, optimized
code, and push all the variable costs onto the individual game made by the
individual humans making them. The stuff that goes in libraries and into the
game engine and even the tools needs to start sticking around longer and to be
sane and safe. Basically, it's what almost every engine has already
discovered: game scripting and game logic can use its own language optimized
for development time and accessibility. We can leave Python to the animators
and level designers, but keep your Rust and your Go and your fancy data
structures to people who are building the infrastructure.

------
noway421
Computer science and software development are different disciplines, and
software developers do value iteration time a lot. The tooling you use and the
algorithms/programming-language principles at play should be viewed
independently.

------
mpweiher
Yes, yes, and yes.

I'd go even further and claim that productivity is the ultimate currency in
programming, because you can convert it into pretty much anything and
everything else. Better quality, better performance, better UI. Of course,
there is no guarantee that you _will_ actually do that.

Reaping these benefits does mean that you need to constantly work at improving
the code; "if it ain't broke don't fix it" leads to entropy, and so does being
afraid to make fundamental improvements due to lack of test coverage.

------
sitkack
Fighting with a bad technology stack definitely takes time away from the
domain, where "the fighting" is different for everyone. One should be in a
state of flow during dev, game or anything. So the tools that enable you to
get there are the right tools.

Does correctly implementing a composable state machine and effects system mean
the game is more fun? Usually.

This essay is full of question begging and false dichotomies.

------
SZJX
Game programming has never been about comfortable programming languages; I
think this is pretty well known, isn't it? In other programming fields I'd say
the experience is definitely getting more and more "fun", headache-free, and
productive in general.

------
mannykannot
The author has a point that I generally agree with, but note that PHP,
JavaScript and Basic all went through a phase of revision that made them more
'CS conformal', removing some notable corner cases in how they worked. (On the
other hand, C++ has also had some corner cases removed over the years...)

------
RivieraKid
The most fun language I've put my hands on (having tried lots of them) is
Julia; I just love it. You can do so much with so little effort, and it can be
optimized to almost match the speed of C.

The two main downsides are the lack of proper interfaces and of
object.method() notation, which is sometimes more readable.

------
stevenschmatz
This is _exactly_ why I switched from native iOS development to React Native.
Swift is really nice and probably my favorite compiled language right now, but
10s+ compile times will never compare to hot reloading in React Native,
period.

------
DerSaidin
> It's about being able to implement your ideas.

This is where the choice of language might help or hinder you. One language
might take longer to implement them in, or be more prone to bugs.

The question of whether or not your ideas turn out to be fun is completely
orthogonal.

------
srtjstjsj
Has anyone seen the straw man that article argues against? What does "computer
science" have to do with "writing games"?

------
rubmo
>pretend all the computers in the movie work like your desktop PC. RIP Matt
Damon.

------
dispo001
I will leave the details as an exercise for the reader, but one would have to
quantify fun in its full spectrum.

I'll give you one clue, or more like a hunch...

If you have a tool that is trying to do everything it isn't going to be
equally great at all those things and it will likely be confusing/hard to use.

The generic programming languages tend to be more popular, but they do so in
the same way TV programs or video games try to appeal to as large an audience
as possible. (This is how we got all those absurd hacker movies, for example.)

In the future (lol) we will discover that single purpose languages are just
better at the limited scope of things they do.

PHP is perhaps a bad example that I shouldn't even have mentioned here, but
such a language knows exactly what its goal in life is, like PHP knows it is
supposed to bake websites.

Maybe you've touched the awesome with your fun. Someone should try to build a
language entirely around the core mechanics of fun.

In games there is fun in the form of rewards for stuff that just takes a
fucking long time to do, there is fun from rewards for stuff that requires
skill, there is fun from rewards obtained through luck, there is fun from a
storyline progressing, there is fun from unexpected things, fun from buying
ingame shit, fun from selling ingame shit, fun from cooperation as well as
from growing to be able to do those things on your own.

But the real list is probably much longer.

The language should probably have a basic fun object that looks something
like: { temporal: 0, skilz: 0, luck: 0, story: 0, randomEv: 0, pay2win: 0,
progaming: 0, coop: 0, solo: 0, [...etc...] }

Then you have to benchmark the fun people are taking out of a bit of code or
graphics using real world data.

And then....

Then you would be able to focus your attention where your effort produces the
largest amount of fun as well as see the areas where your game is teh suck.
Answer the big questions like what parts are people playing and why? Where do
they rage quit?

If they didn't give up on creating content for Diablo 2 I would probably still
be playing it.

If they had a language where fun was the central mechanic they would have
known that changing all the items and ruining all the heroes had a negative
dev time to fun conversion ratio.

It wouldn't have to be limited to games at all. One could quantify the fun on
HN using the same language. It would all of a sudden be obvious that the karma
system and submission ranking lack random rewards and events. A thing no one
has considered up to now, but if we know it is fun and the system is lacking
it, it becomes worth considering.

</fun>

------
Grustaf
I'm considering writing a post about "Fun vs physics". In it I will explore
rhetorical questions like

"does higher build quality make formula 1 cars faster"

and

"do better materials in a car make it more fun to drive"

