Hacker News
Reconsidering Functional Programming (dadgum.com)
84 points by twiss on May 4, 2015 | 113 comments



Along these lines, for me one of the things that makes Clojure stand out is the careful thought that went into state, time and identity in the design of the language. It removes mutability as a default without forcing you to be purely functional all the way down, and gives you tools to manage state changes in ways that are easier to reason about (though still not necessarily easy of course).


  > single-assignment "variables,"
It's a pet peeve of mine to see people disparage the phrase "immutable variable" as a contradiction. A given variable's value can vary each time its function is called, or each time its program is run, without ever needing to be mutated. In fact, in practice most variables don't end up requiring mutation, which was a surprise for me coming from traditional imperative languages (not to say that mutation isn't valuable, but it makes more sense for your language to make variables immutable by default rather than mutable by default).
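A tiny illustration of that point (the function name is invented):

```python
def hypotenuse(a, b):
    # c is assigned exactly once per call, yet its value varies
    # between calls -- no mutation is ever involved.
    c = (a ** 2 + b ** 2) ** 0.5
    return c

hypotenuse(3, 4)    # c holds 5.0 on this call
hypotenuse(5, 12)   # and 13.0 on this one, without any reassignment
```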

  </rant>


F# doesn't actually call them variables; it calls them bindings [1]. It looks like the designers intentionally avoid the "variable" terminology for immutable values.

[1] https://msdn.microsoft.com/en-us/library/dd233238.aspx


Right. It's not like programmers invented the term variable and it can only reasonably refer to a mutable memory cell... we adopted it from math, where x is quite clearly a variable despite having only a single value in "x + 1 = 7".


"Variable" is still the correct name, because across multiple invocations of a function, the values -- both those passed in and those computed by execution -- will not necessarily be consistent.


This is why we call these things "values" not "variables"


Those are different concepts. Variables have names, values typically don't. 500 is a value; foo is a variable.


Values that have names are typically called objects (their identity isn't structural). Named values are necessary for having mutable fields (i.e. variables).


Hmm, that doesn't sound right to me, at least not if "name" means "identifier" in the conventional sense (like foo). For example, stack-based languages can do field mutation without names.


Depends if we are talking about local stack or global heap state. If you just want to modify a register, then a locally bound name/index/stack pointer is sufficient. If you want heap allocated aliased (shared) state, you need globally unique names or addresses. The challenge is always with global state, hardly ever with locally scoped state.


Value is the thing associated with a variable. Like 3 or "foo".

Variable is just a name for a value.


Also, single-assignment != immutable. Likewise, reassignment != mutable.


In my opinion, the most important part of functional programming is the purity of functions. There is a great quote about this on one of the Clojure doc pages:

"If a tree falls in the woods, does it make a sound? If a pure function mutates some local data in order to produce an immutable return value, is that ok?"

Function purity enforces that mutations are localized to small areas of the code, while maintaining composition.
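A minimal sketch of that "tree falls in the woods" idea (names invented): the function mutates a local list, but callers can only observe an immutable result.

```python
def sorted_words(text):
    # Local mutation: build a list and sort it in place...
    words = text.split()
    words.sort()
    # ...but hand back an immutable tuple; the mutation never escapes,
    # so the function is observably pure.
    return tuple(words)
```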


For me, having first-class functions with proper lexical scoping is enough to start feeling functional. That might sound like nothing, but it's surprising how few of the most popular languages get both of those right.


What is proper lexical scoping? Does JavaScript get it right?


No. Javascript gets it horribly wrong without "var", still terribly wrong with "var", and somewhat better with "let".


I don't understand what let has to do with functional programming though (scoping a variable within if () {} instead of a function block, etc.). Do you mind elaborating?


The question wasn't about functional programming per se, but about lexical scope.

"let" keyword in ES6 allows Javascript to have "lexical" style scoping (with some caveats due to JS legacy).

That said, several ways of scoping a variable can be had in functional programming. Have a look here for some more explanation with regards to Common Lisp:

http://stackoverflow.com/questions/463463/dynamic-and-lexica...


The "this" keyword is also a mess. I wonder when well be able to start using the new arrow-syntax lambdas everywhere.


> Function purity enforces that mutations are localized to small areas of the code, while maintaining composition.

Most programmers of 'traditional' (imperative, oo) programming languages would agree.


I wonder if somebody has tried using an STM-style approach to making games?

For a couple of Ludum Dares I did try (in Java), and for the games I was able to do, it worked fairly well. Basically the drawing thread sees a frozen read-only version of the world (and keeps 60 fps) while the update thread runs the simulation over a read-write version of the world. When the update thread finishes, it "publishes" the new state and starts over. I have always wondered how it would perform at scale; maybe it would start breaking apart if it grew too much?


It breaks down when you approach the hardware limits, e.g. for RTS games with thousands of units on the screen, particle effects and so on, or if you have complex CAD models in automotive applications. Essentially you then need to minimize driver overhead and avoid data copying at all costs. In particular you need to do memory-mapped IO (using glMapBufferRange) between the GPU and main memory as fast as possible. That is only possible if you do triple buffering with a ring buffer: at any given time one part of the buffer will be in flight to the GPU, one will be processed by it, and the third will be modified by the CPU. You "publish" with synchronization (glFenceSync() and glClientWaitSync()), see for example http://gdcvault.com/play/1020791/.

In particular you have to organize all your world data in flat mutable arrays because even on the CPU your data has to be presented in a way that avoids cache misses as much as possible, this philosophy goes by "data-oriented design", see for example https://www.youtube.com/watch?v=rX0ItVEVjHc.


This is all very low-level stuff, though. Functional systems are high level abstractions typically erected on top of a low-level, imperative foundation. Let's use the particle effects example: One can describe how a particle system should behave using pure functions and immutable data whilst letting the actual simulation and rendering be handled by a low-level system that does all the dirty stuff necessary for speed.


I agree that it's potentially possible to make a functional layer on top, but it still seems somewhat irreconcilable with the idea of STM.


IANAGameEngineProgrammer, but I'd say it all depends on how you do the 'publishing' part. To do this in a performant way you need a language that supports pointers, so all you need to do in the thread-safe part of your code is swap two pointers (current and new). I'd assume the thread-sync overhead once a frame becomes negligible (is there a good analysis of thread-sync overheads on current x86?).
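A rough sketch of that publish-by-swap idea in Python (a reference swap stands in for the pointer swap; all names invented):

```python
import threading

class World:
    def __init__(self, state):
        self._current = state          # what the render thread reads
        self._lock = threading.Lock()

    def snapshot(self):
        # Render thread: grab the published reference. It is never
        # mutated after publication, so no further locking is needed.
        with self._lock:
            return self._current

    def publish(self, new_state):
        # Update thread: swap in the freshly built state. The only
        # synchronized operation is this reference swap, once per frame.
        with self._lock:
            self._current = new_state

world = World({"tick": 0})
frozen = world.snapshot()              # renderer keeps drawing this
world.publish({"tick": 1})             # update thread publishes frame N+1
```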


> "At what point in the frame was this change made?" Some parts of the game may have looked at the original version, while others looked at it after the change. Now magnify the potential for crossed-wires by all of the destructive rewriting going on during a typical frame, and it's a difficult problem.

Not really, it's a pretty trivial problem and one of the least concerns of game programming these days.


This is an unsolved problem in a very high profile AAA competitive game: Starcraft II.

In Starcraft II, when two Protoss players each order one of their templars to move toward and cast a particular fatal spell on the other templar, then wait, you do not get the expected double kill. Instead, when they get in range of each other, one templar casts the spell first, kills the other, and survives.

Which templar lives and which dies depends on the order the units are processed in the frame, arguably an implementation detail. To the players, this is just plain unfair. Not a deal breaker for a host of reasons, but still no good.


This also happens with the (quite common) high templar vs. ghost duel. Both have the ability to instantly kill the other type, at the same range. When you move them further apart than that range and order both to use that ability on each other, the unit that was produced earlier always wins.

It's not game-breaking, but players noticed this.

To avoid such a problem with marines shooting (this would be game-breaking, because marines make up the bulk of Terran armies for most of the game), there is a minuscule random factor added to the delay between shots for each marine.

If it were important enough, they would fix the high templar/ghost problem the same way.

BTW, in e-sport games (like Starcraft), finding such bugs and abusing them creatively is part of the game. For example, mineral-walking is a legitimate technique to micromanage your workers, used even in low-level matches, and it's caused by a workaround for a bug in Starcraft 1.


This is a pretty standard multithreading problem that has nothing to do with mutability or immutability.


Mutability implies caring about order. I think that's why GP used this example.


Indeed. This problem arises even in a single thread setting. Let's say 2 units aim at each other, and in frame N, they are at range of each other. Here's what should happen:

  - Unit1 kills unit2
  - Unit2 kills unit1
If you double buffer your actions properly, both units will be dead at frame N+1. If however you modify your data in place, this may happen:

  1. Unit1 kills unit2 => delete unit 2
  2. Unit2 no longer exists.  Unit1 lives.
Or, this may happen:

  1. Unit2 kills unit1 => delete unit1
  2. Unit1 no longer exists.  Oops.

What definitely does not happen is the double kill. Starcraft II can live with that, but other games, like Frozen Synapse, would be broken.
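A toy sketch of the double-buffered resolution (all names invented): every attack is decided against the frame-N snapshot before any deletion is applied, so order no longer matters and the double kill happens.

```python
def in_range(attacker, target):
    return abs(attacker["pos"] - target["pos"]) <= attacker["range"]

def step(units):
    """units: {name: {"pos": ..., "range": ...}}. Returns survivors at N+1."""
    # Read phase: every unit targets every enemy it can reach, with all
    # decisions made against the same frame-N snapshot.
    kills = {b for a in units for b in units
             if a != b and in_range(units[a], units[b])}
    # Write phase: apply all deletions at once; processing order is irrelevant.
    return {name: u for name, u in units.items() if name not in kills}

units = {"templar1": {"pos": 0, "range": 5},
         "templar2": {"pos": 3, "range": 5}}
survivors = step(units)   # both casts resolve: the expected double kill
```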


What you call a double buffer, others call a transaction, I believe, and it's a system design choice. You have clear steps where information is gathered and rules are applied.

In that case you take all the attacks, compute the damages, apply them, and prune the dead.


Indeed, this is a relatively trivial problem. There are reasons you might want to do something like this (like if you are trying to parallelize your sim) but I'd slot that under "open research problem that nobody agrees on".

There are a lot of problems with global state in games, but entity values are not one of them, mostly. Problems do crop up but you just deal when they do.


Exactly, and in fact it's usually not as simple as "you just work on the previous frame's data". There are a bunch of data dependencies that need to be respected during a frame update. For example, if you update the player position with respect to user input, you will want to update its animation in the same update, and also check that it's not colliding with anything. Otherwise you're just lagging behind.


That's a problem only if your engine runs so slowly that the changes between frames are relatively large. They shouldn't be; otherwise you'll end up with inconsistencies no matter how many sources of data you use during the calculation of the new frame. It's entirely legal to run your 'world' locally at a higher rate than the display rate to resolve such issues.


That was my initial thought too, but I think it's still a useful realization. At least one place where this would make sense is physics response.

If your actors have a single 'update' method, the first actor would check for collisions and move before any other actor had a chance to do anything. This gives certain actors preferential treatment in movement.

Of course, you could instead plan out the dependencies better: do all collision detection before all physics response. But this is just a different way to look at 'immutability'.


In most real physics systems, you have to solve systems of simultaneous constraints in order to get the right answer, i.e. all the guys involved in an interaction are sort of going at the same time. (Though also these systems do tend to have ways to break constraints and subdivide the problem when it gets too big, which then does give preferential movement, but not in a way that any kind of functional programming would solve, because it is inherent in reducing the complexity of the math problem.)

There may be exceptions to this; it is not a subject I keep current with.


Everyone has to go at the same time or weird instabilities occur in your physics engine; one way to do that is to have everyone read past frame values to produce current frame values (very much like a world -> world function).


That's exactly how it's done. See also: 'Conway's game of life' for a very simple version of such a system (and for kicks, try doing an asynchronous version of it).


Yeah, that is actually what I was saying. I just provided an alternative way of looking at it, where you pipeline the dependent operations instead of running them at the same time.

E.g., instead of:

    foreach actor(find collisions -> find response -> do movement)
do:

    foreach actor(find collisions) -> 
    foreach actor(find response) -> 
    foreach actor(do movement)

That way you may modify the primary data structure directly in the 'do movement' phase without any conceptual 'data races'. It really just offloads the information lost to 'mutability' onto the secondary data structures of collision lists and the like.
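The same pipelining in minimal Python (toy 1-D world, all names invented): every read of shared positions happens before any write.

```python
def find_collisions(actor, actors):
    # Read-only pass: who overlaps this actor in a toy 1-D world?
    return [o for o in actors if o is not actor and abs(o["x"] - actor["x"]) < 1]

def find_response(collisions):
    # Pure: back off one unit if colliding, otherwise drift right.
    return -1 if collisions else +1

def update(actors):
    # Phases 1 and 2: all reads of shared positions happen here...
    responses = [find_response(find_collisions(a, actors)) for a in actors]
    # Phase 3: ...so mutating in place can no longer race with the reads.
    for a, r in zip(actors, responses):
        a["x"] += r

actors = [{"x": 0.0}, {"x": 0.5}]
update(actors)   # both see the collision; neither gets preferential treatment
```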


This is how the time travel mechanics of Braid were implemented. Record the events, apply those events in a deterministic manner, and then reverse the events to go backwards in time.

More generally this is pretty much the event sourcing pattern: http://martinfowler.com/eaaDev/EventSourcing.html


Not true, mostly. I recorded world state, not events. The one exception is world 5 where I record events so that your shadow-universe guy can do things.

Event recording has a fair bit of history in games, especially as a debugging technique, but I did not want to use it for rewind, considering it too fragile and annoying, and probably too expensive and complicated (you would have had to store world state anyway, to have something nearby to delta from so that you don't start from the beginning of time every frame, so now you have TWO systems: world state recording and event recording. Better to stick with one.)


This is how Glitch works (my programming model for time management):

http://research.microsoft.com/en-us/people/smcdirm/managedti...

If you modify the past, events are recorded and replayed, however (just rewinding focuses on a different world state). This is all baked in directly to the programming model. Another thing we are able to do is capture deltas rather than do entire world checkpoints, which works out well since many states do not change on all time steps (except for that moving bullet, of course). So each cell has multiple entries of temporally scoped values (for values that change on every frame, they just have one for each frame).


Can you elaborate on the distinction between world state and events? As in, you don't capture user inputs but rather track all object positions and states at all times? Seems very inefficient that way.

Also, big fan! Great work on the new language/compiler livecasts and can't wait for The Witness.


Exactly: I record the positions of all objects every frame (and other state variables), not the inputs that generated them.

It is more expensive in terms of the amount of memory required, but it is much less expensive in terms of the amount of CPU required, and CPU was ultimately the biggest problem, so it seems I made the right decisions. Even on a limited-memory console like the Xbox 360 you can rewind most levels for 30-45 minutes before running out of buffer. That is more than anyone ever wants to do as a practical gameplay interaction.

Working on The Witness... it will be done ... someday not too long from now.


Did you use deltas a la trie-based immutable data structures, or did you save the entire state every moment? Did you save frame by frame, or did you save ON the event, and then interpolate animations between the two states when reversing?



There is a system of full-frames and deltas, like video encoding. Every frame gets saved and reproduced exactly. There is no interpolation.


Out of curiosity, what happens if you run out of buffer?


In most worlds, the oldest state just gets thrown out; you keep playing as usual, it's just that if you were then to try and rewind all the way to the beginning, there would be a limit on how far you can go.

In world 4, though, where you can walk to the beginning of time just by walking to the leftmost part of the level, I actually kick the player out of the level when there's no more memory. I have never heard of anyone noticing this.


Theoretically, you probably just want to kick the stuff at the end out. (i.e. you lose the earliest state you can)


That would be a cool game mechanic: you can freeze time and fine-tune your behavior over a small window of the immediate past (e.g. the last two seconds), with the ability to see how the game changes over that window, "strobe style." Sort of like Sherlock Holmes (in the movie where he thinks everything through "what if" style).


That's basically the mechanic in the game Frozen Synapse: http://www.frozensynapse.com/

Every player plans their movements for the next fraction of a second and all turns get executed simultaneously.


I was thinking of a one player game against the computer, so the player can actually see what the computer will do according to their planned action.


It's efficient enough, given it's a 2D platform jump&run game. The developer wrote an extensive article about how it works; it's really interesting, search for it.

Game creator Jonathan Blow explains the rewind in a video (27min): http://www.youtube.com/watch?v=tSeYShR-OG0

And there is Gamasutra article about Rewind, in four parts: http://gamasutra.com/blogs/CameronLeBlanc/20130220/187036/Re... and part 2-3: http://www.gamasutra.com/blogs/CameronLeBlanc/20130313/18844...


You're replying to jonathan blow ;)

EDIT: to be clear, the GP of your comment is jblow, not the parent.


If you could make all events reversible (exactly), then why would you need to rewind to state 1 and play back to the requested time for every frame?

Also, sweet game! I actually just introduced it to my gf last week.


Someone has implemented a PL based on this circa 1982:

https://78462f86-a-feb80ac8-s-sites.googlegroups.com/a/tetsu...

It is not very practical, however. Many operations that you'd want to do are not time reversible without saving things.


That's interesting. So, is a time-reversible language Turing complete? How would one write a trap-door function in a time-reversible language?



You could do that, but then you aren't storing events any more, you're storing world state again. (That is what it means to make events reversible .. or at least that is the straightforward way I would think to do it.)


It reminded me of Martin Fowler's "Event Sourcing" paper from the enterprise world http://martinfowler.com/eaaDev/EventSourcing.html

So as a bluntly simple example, assume the entire world's state is just the number 100. Instead of storing 105 as the new state, you store {:increment, 5}, and somewhere, the reversible counterpart to increment is defined as {:decrement, 5}. You're correct that a full global state would have to be stored at SOME point, but could you simply hold the initial state and the current state, and work from either of those?

Or just do what the mp4 video algorithm does and store so-called "keyframes" every N frames which store the entire state, but all intermediary frames are just diff frames.
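A toy sketch of that keyframe-plus-delta scheme (constants and names invented; key deletion is ignored for brevity): rewinding walks back to the nearest keyframe, then replays deltas forward.

```python
KEYFRAME_EVERY = 4   # invented constant; a real engine would tune this

def record(history, state):
    # Store a full copy every N frames, otherwise only the changed keys.
    if len(history) % KEYFRAME_EVERY == 0:
        history.append(("key", dict(state)))
    else:
        prev = reconstruct(history, len(history) - 1)
        delta = {k: v for k, v in state.items() if prev.get(k) != v}
        history.append(("delta", delta))

def reconstruct(history, frame):
    # Rewind: back up to the nearest keyframe, then replay deltas forward.
    start = frame
    while history[start][0] != "key":
        start -= 1
    state = dict(history[start][1])
    for _, delta in history[start + 1:frame + 1]:
        state.update(delta)
    return state

history = []
for t in range(6):
    record(history, {"t": t, "hp": 100})
```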

Many web apps end up needing a global event feed/stream that various processes hook into via publish/subscribe. In theory, these events are reversible. In practice, individual types of events may be difficult to reverse, especially if they have side effects outside the app developer's control.


I stand corrected! Could have sworn I'd read the eventing method was used but obviously not. Thanks for the comment Jonathan, big fan.


Are you just speculating or did you read this somewhere? You make it sound like fact until the creator of Braid comes in and says it ain't so.


I thought I had read it somewhere, but obviously not.


The writer monad gives you purely functional "printing".


I thought monads were going to be the answer at the end of the post... -_-


I don't think the article really is about printing. He just uses printing as an analogy to point out that you can calculate changes without altering the simulation data during the calculations.

And changing topic, I am not a huge fan of how the Writer forces your code into a monadic style. One of the things I enjoy the most when I program in Ocaml instead of Haskell is being able to put printfs or mutable buffers wherever I want.


> One of the things I enjoy the most when I program in Ocaml instead of Haskell is being able to put printfs or mutable buffers wherever I want.

FYI, for the first need:

http://hackage.haskell.org/package/base-4.8.0.0/docs/Debug-T...


Writer isn't really about printing. It's about emitting values, to be later collected. That could very easily be state changes.
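A rough Python analogue of that idea (not Haskell's Writer, just its shape; names invented): each function returns its result paired with the values it emits, and a caller collects them.

```python
def double(x):
    # Instead of printing, emit the message alongside the result.
    return x * 2, [f"doubled {x}"]

def run(x):
    y, log1 = double(x)
    z, log2 = double(y)
    # Emitted values are collected, not performed as side effects;
    # the caller decides what to do with them (print, store, replay...).
    return z, log1 + log2
```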


I know. But even then, I kind of like being able to append values to a mutable buffer that is in scope without having to rewrite everything to use monads.


"I know."

Then I don't understand your earlier comment.

"But even then, I kind of like being able to append values to a mutable buffer that is in scope without having to rewrite everything to use monads."

We can agree to disagree, there.


What I was trying to say is that I think there is some virtue in being able to add effects to your program without having to change the syntax you use. It's nice to only have to use List.map and not two incompatible map and mapM functions.

This is not to say that I don't value extra static safety. I think there is definitely room for languages that track effects without the syntactic weight of monads and monad transformers.


Ah, that's the bit I understood, and about which I said we could agree to disagree. To me, that reads like "I want to add new side effects and not document them," but I think coding style comes into play in complicated ways and it's not something I am eager to dig into right here, right now.

What I did not understand was your response of "the article wasn't talking about actually printing" to the suggestion of using Writer, despite your knowing that Writer is not about actually printing, Writer being in fact a good fit for what was described in the article, and the poster having quoted "printing" in the first place.


The IO monad gives you purely functional "printing".

Monads give you purely functional side effects.


I think there's a good insight here, related to Clojure's atoms and Facebook's Flux.

If you start by making everything in your program 100% immutable and timeless, how do you introduce any kind of time-dependence? A nice clean answer is that there is a global clock, and when it ticks, everything changes.
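A minimal sketch of that tick-driven picture (names invented): state is immutable, and the clock tick is the only thing that produces a new one.

```python
def tick(state):
    # Pure transition: never mutate, always derive the next world.
    return {**state, "clock": state["clock"] + 1}

s0 = {"clock": 0}
s1 = tick(s0)
s2 = tick(s1)
# Every moment is its own immutable value; s0 is untouched.
```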


Time dependence is just like an async dependence (like waiting for a mouse click) in that, in a profound sense, a program doesn't "know when it will happen". The only time a program's state changes is with new information, and the clock is just another source of information. (There's no reason why you couldn't specify timeouts in "ticks" and give the user a button that represented a "tick" such that they could advance the program.)


Interestingly, I'm building an app in Flux/React that lets you do exactly that.


Cool. It's a style I've been working with, too. I'd be interested to check out your work.


That's an argument for single-assignment programming, of which functional programming is a subset. There's nothing wrong with named values; it's mutable variables that are the problem.

Both Go and Rust encourage single-assignment programming, with ":=", "let", and automatic type inference for assignments. C++ has had "auto" for years. The trend is towards single-assignment as a programming style. Declaring named values without initializing them is so last-cen.

Single-assignment programming is a bit more powerful than a pure functional style; you can use a value more than once. It's the difference between a tree and a directed acyclic graph. If you have code that does something imperative, talking to the world outside the program, trying to hammer that into a pure functional form is like pounding a screw.

It's also nice to have named variables when reading a program.

   f(g.h(f2(g2(x))))
provides little information about why someone was doing that. When those intermediate results have names, it's somewhat more clear.
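For illustration (toy stand-in functions, all invented): the same pipeline with and without named single-assignment intermediates.

```python
# Invented stand-ins so the pipeline is concrete.
parse = lambda s: s.split(",")
to_ints = lambda xs: [int(x) for x in xs]
drop_negatives = lambda xs: [x for x in xs if x >= 0]

# Opaque one-liner: the reader must mentally unpack the nesting.
total = sum(drop_negatives(to_ints(parse("3,-1,4"))))

# Named intermediates document intent, still with zero mutation:
fields = parse("3,-1,4")
values = to_ints(fields)
kept = drop_negatives(values)
total_named = sum(kept)
```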


I assume you've never read anything written by Tony Morris on this matter :) He argues that identifier names are one of the least useful tools in understanding programs: http://tonymorris.github.io/blog/posts/identifier-names/ (his tone might be a bit grating, but he knows his subject).

In any case, I'm puzzled... why do you think writing in a pure FP language has anything to do with not naming your variables?

BTW, I'd say the opposite of your initial claim is true: single-assignment is a subset of FP, not vice versa. There's way more to FP than immutability, and in fact there are FP languages which allow (controlled) mutation.


Unfortunately "functional" is coming close to being the Next Big Idea, which means it stops meaning anything specific. If you look at unlambda[0] as the kind of Platonic ideal of functional languages you might see what GP is talking about.

It lacks any facility for bindings because any binding is a storage of state. Roughly, since you can refer to every function and value literally, the idea behind unlambda is that you should.

(Note that by this way of looking at things, closures are not "functional" at all and are in fact embeddings of imperative mini-languages.) Historically, closures were more or less developed by accident as LISP codified its scoping rules; in that sense their presence isn't particularly indicative of "functional-ness". It's just that using them requires a GC, and 40 years ago only functional languages had one.

[0] http://www.madore.org/~david/programs/unlambda/


I agree there is no widely accepted and formal definition of "functional".

Thanks for the link! I didn't know about Unlambda.

A couple of remarks though: by the author's own admission, Unlambda is not a purely functional language, when using the accepted terminology. The author then goes on to introduce what he acknowledges to be nonstandard terminology, so I'm not sure what this says about closures being functional or not under more mainstream definitions. Furthermore, Unlambda is a purposefully obfuscated language! Not sure how it is relevant for the purpose of my post or the OP's :)

Thanks anyway for the link, it's interesting.


Although historically closures have required GC, both C++ and Rust have implemented a design for closures without that requirement (and the latter manages to retain memory safety while doing so).


In this case he actually doesn't know his subject, because the subject here is comprehensibility of code, which is a function of human psychology and cognitive capabilities. What works for Tony and people like him does not work for most other people.

Abstract type signatures induce eye glazing in most programmers. They induce my eyes to glaze over, and I've actually passed a course in category theory. It's not that I don't understand type signatures: it just takes more time than reading a decent descriptive name and trusting my coworkers. If a function square doesn't actually square, but cubes, I have a coworker to corner.


I do not agree 100% with Tony; it was just an interesting counterpoint to the blindly asserted notion that "naming identifiers helps". Make no mistake, he does know his stuff; he's just a bit extreme about it.

By the way, I wouldn't use an argument "by laziness" to argue in favor of naming things. "Type signatures induce eye glazing in most programmers", and "it takes more time"? Well, most programmers also write most of the bugs :) Not sure what this says about the relative merits of naming vs types/structure...

In any case, this was a tangent. Since FP doesn't preclude naming things if you wish, I'm not sure what the OP was all about.


The claim that variable names are worthless is quite exotic in the general context. I value algebraically explicit operations as much as the next guy but I also write much better code when it's obvious to me what my code does, and good variable names are an important tool cognitively in that respect.


Yes, Tony's claim is a bit extreme, and I don't think it's standard practice in general. I just thought it was a nice counterpoint to the OP's claim.

Note, however, that this "do not needlessly name things" does show up in actual programming. In standard Haskell practice, it's common to write very short, abstract variable names whenever the function is general, and in turn you're encouraged to write general functions whenever possible.


Maybe it would be interesting to assert a specific behavior or type from a name. For example, the test suite sees the variable name "description", concludes that this should be a Unicode string, and tests this automatically. A universal vocabulary could be agreed upon and automatically tested. Say, lint for names.


I was thinking that names help in those areas where algebraically correct operations can still lead to broken domain logic. Type systems help verify that a specified algebra functions coherently, but they do not help in designing that algebra.

E.g., velocity v and acceleration a are modeled as floats. To integrate from instant t0 to instant t1: v1 = v0 + (t1 - t0) * a.

Now, I can replace any of those arithmetic operators with other arithmetic operators and still the statement is algebraically well-formed. However, the calculation is probably wrong; e.g. v1 = v0 * (t1 / t0) - a is one such broken permutation.

Naming time at point 0 as t0 and at point 1 as t1 gives helpful mnemonics here.

Sure, we can create data types (that contain floats) Acceleration, TimeBefore, TimeAfter, VelocityBefore and VelocityAfter, and create a function (since we are ignoring names) asdf : VelocityBefore -> TimeBefore -> TimeAfter -> Acceleration -> VelocityAfter, but the rub is: the implementation of that function needs to do that floating-point arithmetic at some point.

Wrapping it in an algebraically explicit container does nothing to help the correctness of the computation itself. The main benefit here is for the end user, in the form of explicit types. Here, the function signature proves the OP's point: signatures can be enough to document a function. However, it does nothing to verify the implementation.

Floating-point operations are inherently "dynamically typed" in the sense that, from a semantic point of view, only some operations make sense when forming equations, but modeling all of these relations using a type hierarchy is really, really cumbersome. And after the type hierarchy is in place, the hierarchy itself cannot prove the equation is correct; it just sets it in stone. Maths is inherently dynamically typed, and good names help write better equations. If anyone has counterexamples to prove this notion wrong, I would appreciate them.


I take your example of the broken computation: v1 = v0 * (t1 / t0) - a

A smart lint system with a vocabulary for physics programs could know by convention that a name t means time, that 0 and 1 are indices, and that in the context of time the greater index means a step ahead (at least this is what 90% of humans would assume). So it could infer t1 > t0 and check that at runtime in some debug mode. Maybe there are valid cases where t1 < t0, but then this naming is all but guaranteed to introduce false human interpretations of the source. The lint system should model and check against the typical human interpretation of names.

This does not detect the above wrong computation, but it could help find errors without writing a test for every function.


Hmm. Isn't a lint system as you describe just an informal static typing system in this case? And anyway, someone still needs to write the lint system. Adding layers of abstraction does not make the problem go away...

Sure, when we have well defined domain logic rules, those can be enforced through typing. But I would claim it is really unusual to start greenfield development from a really well defined set of domain logic rules formalized as a type system. Usually one discovers those rules as one develops the system, and then encodes them in types.


Dimensional analysis to the rescue.

The space of physically meaningful equations is vastly smaller than the space of all possible permutations of variables and operators.


Yeah, physical dimensions are one of the few concrete static types one can beneficially leverage at the start of a project. They describe an uncommonly well defined domain logic, polished over centuries. What I tried to demonstrate is that an algebraically correct expression does not necessarily imply correct domain logic, and a physics calculation was the simplest example I could come up with.

Too bad only a few languages like F# come with dimensions attached to numbers out of the box :)
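For the curious, here is a toy Haskell sketch of the same idea using a phantom type. This is only a hint at the technique - proper unit tracking is what libraries like the "dimensional" package, or F#'s built-in units of measure, actually do.

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- Invented toy types: a quantity tagged with a (phantom) unit.
data Unit = Metre | Second | MetrePerSecond

newtype Q (u :: Unit) = Q Double deriving (Show, Eq)

-- Dividing a distance by a time is the only way to get a velocity:
velocity :: Q 'Metre -> Q 'Second -> Q 'MetrePerSecond
velocity (Q d) (Q t) = Q (d / t)

-- velocity (Q 10) (Q 2) typechecks; feeding a velocity where a
-- distance is expected is rejected at compile time.
```
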


> Both Go and Rust encourage single-assignment programming

Go is probably the least "single-assignment encouraging" of current languages. It does not provide any const/final mechanism for anything but compile-time constants, and does not allow the creation of immutable objects.


Right. There's ":=" and "=", but no read-only enforcement on anything in Go other than compile-time constants. So you can write Go concisely in single-assignment style, but there's no enforcement.


Not only is there no enforcement by the language of the single-assignment style (as in Clojure, Erlang or Haskell), but there is even no way for the programmer to enforce a single assignment of a specific variable as in, say, Java and C++.

So no single-assignment style and no single-assignment without the style in Go. Not only is mutability not discouraged, there's even no way to turn it off for specific types/variables. Go is fully in the JS/Python/Ruby camp when it comes to mutability.


Does pure functional programming somehow not have "let ... in ..."?


Yeah, I'm not really getting the single-assignment bit. That kind of thing makes me think about effect systems and linear types instead...

In any case, let ... is basically syntactic sugar for lambdas.

    let x = a in b
is equivalent to

    (\x -> b)(a)
Other than aesthetics, the only reason to have a separate "let" syntax is to tell the typechecker that a function can be polymorphic.
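For example, in Haskell the let-bound version is generalized by the typechecker while the desugared lambda version is not:

```haskell
-- With let, f is generalized to the polymorphic type a -> a,
-- so it can be used at Bool and at Char in the same body:
ok :: (Bool, Char)
ok = let f = id in (f True, f 'x')

-- The desugared lambda version is rejected: a lambda-bound
-- argument gets a single monomorphic type, so f cannot be both
-- Bool -> Bool and Char -> Char here.
-- bad = (\f -> (f True, f 'x')) id
```
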


As a compile-time macro of sorts, yes (early LISPs just had (labels (...)), for instance).


Immutability gives you elegant command-query separation, and it's sometimes useful (for example you can use the same function in physics, and in AI to predict where player will be in 5 frames). Of course you pay with performance.

I wonder if keeping a tree of game states in a persistent data structure (like Clojure's) could help with that.

We could keep the current state as the root of the game-state tree. AI code would calculate possible future states a few steps ahead (these future states would mostly share memory with parent and sibling states, and if we predict n frames forward, the AI can in each frame start from the states at step t+n-1 and calculate only their immediate children). This is trivially parallelizable, BTW.

And in each frame the game engine would move the tree root to one of the children, depending on player action, culling the previous root and the branches not taken from the tree.
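A rough Haskell sketch of that tree, with State, Action and the transition function as invented placeholders:

```haskell
import Data.Tree (Tree(..))

type State  = Int  -- placeholder for a real game state
type Action = Int

step :: State -> Action -> State
step s a = s + a   -- placeholder transition function

-- Lazily build the (infinite) tree of future states; only the
-- branches the AI actually inspects are ever computed, and
-- successor states share structure with their parent.
futures :: [Action] -> State -> Tree State
futures acts s = Node s [futures acts (step s a) | a <- acts]

-- Each frame, move the root to the child chosen by the player;
-- the branches not taken simply become garbage.
advance :: Tree State -> Int -> Tree State
advance (Node _ kids) i = kids !! i
```
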


I took a stab at solving this problem with a programming language called Radian: [http://www.radian-lang.org/]

It is based on the observation that SSA is functional programming, and there is a well-defined process for transforming imperative code into SSA, so why not reverse the process and develop a functional language with imperative-looking, Python-like syntax?

The goal was to automatically extract parallelism from for-loops, internally converting them into map-reduce processes which could be spread across multiple cores, and that worked out pretty well. In absence of a marketing effort, however, a userbase never developed, and I haven't done any work on the project in a couple of years.
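The SSA-to-functional correspondence in miniature (a hand-written Haskell sketch, not Radian itself): a loop over a mutable accumulator becomes a tail call in which every "update" is a fresh, never-mutated binding.

```haskell
-- Imperative:  total = 0; for x in xs: total = total + x
-- SSA/functional form: each iteration's "total" is a new binding
-- passed to the next call rather than a mutation.
sumLoop :: [Int] -> Int
sumLoop = go 0
  where
    go total []     = total
    go total (x:xs) = go (total + x) xs   -- total' = total + x
```
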


I think the simplest way to do stuff like this is to build up an iterated copy of the previous state and once done replace the old state with the new one. Iterating isn't modifying the current state, it's generating the next state from the current one.


Interesting premise. However, isn't what the author described similar to event-sourcing?


I think this is roughly how demos (in Quake, et al) are recorded. I think most demos also include the state introduced by RNG if the game involves that. It's interesting to think about that idea with relation to functional programming.


I am a great lover of functional programming, so much so as to seek out and find a summer job in Clojure this year, and to have written my own functional Lisp dialect.

Lately though, as I've been weighing engine options for a CRPG project, I have become increasingly skeptical of its applicability or utility to games.

The simple point of the matter is that games are humongous state machines, but even more importantly, humongous state machines that are performance sensitive.

The standard approach so far has been to simply deal with this either monadically or atomically, having some big state object that gets passed through each cycle, copied again and again with every frame. This is all well and good when the full extent of your game state is a handful of blocks and a paddle, but stop and think how many, say, console-style RPGs you've seen done in functional languages to completion.

In the Pokemon games, there is a move called Defense Curl that increases the user's defense. Simple enough. Then in the second-gen games it was very silently given another effect: there is a particular attack move whose damage is doubled if the user has previously used Defense Curl in that fight. In the third gen this was done again for an additional move that, at the time and for three more iterations of the series, was only available to a single character.

That is an incredibly specific state detail. And that's just one. The entire game is filled with little quirks and mechanical details like that. There are over 600 moves in the game, many with other unique effects, not to mention the 700+ creatures, almost 200 abilities, plus an entire hidden second stat system with the IVs.

Try passing a state object containing all the flags you need to keep track of that 30 or 60 frames a second.

Is it any wonder then that you see so little done with FP languages that isn't just clones of old arcade games? I know of one company working on an A-level game in Haskell. Beyond that, I've seen a couple Quake 3 ports and a whoooole lot of unfinished projects.

I'm just not convinced it's up to the task yet, and the research and theoretical end still seems to focus on trivial toy examples instead of full-scale projects that are on a whole other order of magnitude from Pac-Man. I seriously start to wonder how much of the learning so far and the 'obvious' patterns and solutions that seem to already just be accepted dogma, really have anything to do with real-world games programming in the large.

Maybe Wayward Tide will finally release source and prove me horribly wrong here, or Elm will indeed prove as promising as it looks when extended beyond simple browser toy experiments, but in the meantime I'm veering hard back to commercial, traditional engines instead.


Try passing a state object containing all the flags you need to keep track of that 30 or 60 frames a second.

Hmm, have you tried it? I don't see an immediate problem. 60 Hz is not an immense rate for computers.

And state updates don't usually involve copying large structures, since pure languages use lots of sharing. It's more like updating a tree; you just do some logarithmic amount of consing.
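For example, with a persistent map (keys and values invented), "updating" rebuilds only the O(log n) path to the changed entry; everything else is shared with the old version:

```haskell
import qualified Data.Map.Strict as M

-- Two "versions" of the world; world1 shares almost all of its
-- structure with world0, which remains intact.
world0, world1 :: M.Map String Int
world0 = M.fromList [("hp", 100), ("mp", 30)]
world1 = M.insert "hp" 95 world0   -- world0 is untouched
```
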

There aren't that many companies using Haskell at all, so you can't infer much from the fact that there aren't many large-scale games written in it.

This is of course not an argument, but John Carmack doesn't seem to believe purity itself is an intractable performance problem!


Pokemon rules would probably be wonderfully modeled as ADTs. Every "modifier" could be in a big sum type and then you would just express all your rules explicitly in terms of the current modifiers in play along with pokemon stats, type, and moves. In the end you have a function State -> Move -> State. You would take inputs, and apply them with the current state to get the new state. You would encode all your rules in this transition function.
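A deliberately tiny Haskell sketch of that State -> Move -> State encoding (all names and numbers invented; the real games track vastly more):

```haskell
import qualified Data.Set as Set

data Move = DefenseCurl | Rollout | Tackle
  deriving (Eq, Ord, Show)

data BattleState = BattleState
  { defenseStages :: Int
  , movesUsed     :: Set.Set Move   -- history for this fight
  } deriving Show

basePower :: Move -> Int
basePower DefenseCurl = 0
basePower Rollout     = 30
basePower Tackle      = 40

-- The quirky rule lives in exactly one place: Rollout's power is
-- doubled if Defense Curl was used earlier in the fight.
power :: BattleState -> Move -> Int
power st Rollout
  | DefenseCurl `Set.member` movesUsed st = 2 * basePower Rollout
power _ m = basePower m

-- The transition function that encodes all the rules.
step :: BattleState -> Move -> BattleState
step st m = st
  { defenseStages = defenseStages st + (if m == DefenseCurl then 1 else 0)
  , movesUsed     = Set.insert m (movesUsed st)
  }
```
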


A real-world megabudget game has the money to make a high-perf C++ state machine and make all the microoptimizations and platform tweaks necessary to reach the time budget for each frame on each platform. As soon as a more efficient tool promises something that can save time in development, of course it is adopted. Scripting languages are used in many engines, for example. The engines themselves seem to be firmly in C++, but I think this is at least as much down to available dev skill, tools etc.

Large games will probably not go from C++ to Haskell, but instead (hopefully) from C++ to Rust, where the developer can choose a more elegant functional approach where possible, and will still have compiler guarantees about the parts that are mutable.


"Try passing a state object containing all the flags you need to keep track of that 30 or 60 frames a second."

You can structure your data such that you only need to rewrite a portion of it, based on what changed. I couldn't more highly recommend "Purely Functional Data Structures" by Chris Okasaki.


Following up on this...

With regards to the Pokemon example given, I can think of two good ways to encode it:

First, simply a bunch of single-bit flags, one per move per competitor. Each turn, for move tracking, we update at most one bit in one of these arrays by copying the array and updating a pointer. We can track 1000+ moves, and only need to copy 128 bytes. You can do thousands of comparable things and still be done in a single frame.

Second, a list of turn information. A move could ask arbitrary questions about the history of the current fight by walking the list. An append-only list trivially has an immutable sublist, and you only need to "pass" a single pointer. Walking the list should be trivially possible well inside a single frame, provided you don't have many thousands of turns in a single fight.

All that leaves aside the fact that you don't actually need to fit your world updates in a single frame render - you can interpolate/extrapolate. This is especially easy in a turn-based game like Pokemon, where it mostly consists of "smoothly progress through the same animation".
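The second encoding, the append-only turn list, could be sketched like this in Haskell (names invented): "appending" conses onto the front, so every older version of the log is shared, never copied.

```haskell
-- An append-only battle log; a move can ask arbitrary questions
-- about the history of the current fight by walking the list.
data Turn = Turn { mover :: Int, moveName :: String }
  deriving (Eq, Show)

type Log = [Turn]   -- newest turn first

record :: Turn -> Log -> Log
record = (:)        -- O(1); the previous log is untouched

usedBefore :: Int -> String -> Log -> Bool
usedBefore who mv = any (\t -> mover t == who && moveName t == mv)
```
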


Or maybe, FP is the wrong tool for that job.


Isn't the author basically saying that an iterative process (process as in procedure semantics, as the word is used in SICP) is better than a recursive process?



