An Epic future for SPJ (haskell.org)
365 points by Smaug123 78 days ago | 169 comments



Tim Sweeney did a presentation[0] at MIT's CSAIL on programming languages that I thought was really interesting (it mentions Haskell quite a bit), titled "The Next Mainstream Programming Language: A Game Developer’s Perspective."

[0]: https://web.archive.org/web/20120113030236/http://groups.csa...


On slide 57 he takes the following position:

    Memory model
    – Garbage collection should be the only option
An interesting position for Sweeney to take, since so many people in games reflexively say that garbage collection is totally non-viable whenever it comes up.

But he also says:

    By 2009, game developers will face...
    CPU’s with:
    –20+ cores
MacBook Pros, Razer Blades, the XBox Series X, and the PS5 all use 8-core CPUs, so the prediction of an ever-escalating number of CPU cores seems to have not really panned out. One thing these predictions missed entirely is dedicated ML silicon like the M1's Neural Engine.

The presentation isn't explicitly dated, but the sample game is Gears of War, which came out in 2006; later on he says "by 2009" as if it were not yet 2009; and the file name is "sweeney06games.pdf". So this is almost certainly a presentation given in 2006. At the time, the iPhone hadn't even come out yet.

It's an interesting presentation. I wonder what Sweeney would say today about this presentation, about how much of it still rings true, and about which things he would second-guess nowadays.


Most people's mental model of garbage collection is still stop-the-world mark-and-sweep, which is extremely old. Game developers hate GC because of unpredictable, unbounded latency pauses eating into the frame budget - but there are tons of new, modern GC implementations that have no pauses, do collection and scanning incrementally on a separate thread, have extremely low overhead for write barriers, only have to scan a minimal subset of changed objects to recompute liveness, etc., and they are probably a great choice for a new programming language for a game engine. Game data models are famously highly interconnected and have non-trivial ownership, and games already use things like deferred freeing of objects (via arena allocators) to avoid doing memory management on the hot path - work a GC could do automatically for everything.


After being a Java programmer for the greater part of a decade, then spending a bunch of years doing Objective-C/Swift programming, I don't really understand why the Automatic Reference Counting approach hasn't won out over all others. In my opinion it's basically the best of all possible worlds:

1. Completely deterministic, so no worrying about pauses or endlessly tweaking GC settings.

2. Manual ref counting is a pain, but again ARC basically makes all that tedious and error prone bookkeeping go away. As a developer I felt like I had to think about memory management in Obj C about as much as I did for garbage collected languages, namely very little.

3. True, you need to worry about stuff like ref cycles, but in my experience I had to worry about potential memory leaks in Obj C about the same as I did in Java. I.e. I've seen Java programs crash with OOMs because weak references weren't used when they were needed, and I've seen similar in Obj C.


On the contrary, as others have pointed out, automatic reference counting seems to be the worst of all worlds.

1. Automatic reference counting adds a barrier every time you pass around an object, even if only for reading, unlike both manual memory management and tracing GC.

2. ARC still requires thought about ownership, since you have to ensure there are no cycles in the ownership graph, unlike tracing/copying/compacting GC (see the weak-pointer sketch at the end of this comment).

3. ARC still means that you can't look at a piece of code and tell how much work it will do, since you never know if a particular piece of code is holding the last reference to a piece of data, and must free the entire object graph, unlike manual memory management. In the presence of concurrency, cleanup is actually non-deterministic, as 'last one to drop its reference' is a race.

4. ARC doesn't solve the problem of memory fragmentation, unlike arena allocators and compacting/copying GC.

5. ARC requires expensive allocations, it can't use cheap bump-pointer allocation like copying/compacting GC can. This is related to the previous point.

6. ARC cleanup still takes time proportional to the number of unreachable objects, unlike tracing GC (proportional to number of live objects) or arena allocation (constant time).

Reference counting is a valid strategy in certain pieces of manually memory managed code (particularly in single-threaded contexts), but using it universally is almost always much worse than tracing/copying/compacting GC.

Note that there are (soft) realtime tracing GCs, but this can't be achieved with ARC.
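
To make point 2 concrete, here is a minimal C++ sketch, treating std::shared_ptr as a stand-in for ARC since the counting semantics are analogous: a strong back-pointer would form a cycle that keeps both counts above zero forever, and the conventional fix is to make the back-pointer weak.

    #include <memory>

    struct Child;   // forward declaration, needed by Parent below

    struct Parent {
        std::shared_ptr<Child> child;    // strong reference down the hierarchy
    };

    struct Child {
        // A strong back-pointer (std::shared_ptr<Parent>) would form a cycle:
        // neither count could ever reach zero and both objects would leak.
        // A weak back-pointer doesn't contribute to the count, breaking the cycle.
        std::weak_ptr<Parent> parent;
    };

    int main() {
        auto p = std::make_shared<Parent>();
        p->child = std::make_shared<Child>();
        p->child->parent = p;            // observes p without owning it
        // When p goes out of scope, Parent is destroyed, then Child: no leak.
    }

A tracing collector has no such requirement: a cycle of garbage is simply never reached from the roots and gets collected like anything else.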


It depends on the implementation. In traditional ARC implementations these are all issues to a degree, but even then they tend toward much lower overhead with more predictable behavior. Though I agree tracing GCs can be really fast and better in many ways.

1. I'm guessing you mean lock-based implementations? There are several non-lock, non-atomics-based ARC designs that still handle threading safely. That means a refcount update is little more than a single integer operation.

2. True, but for many contexts this is easy to do and makes it easier to understand data flows. In other cases it's possible to use index based references pretty readily, like in Rust. Or add a cycle collector.

3. In theory you can't, but in practice it's often pretty easy to tell, at least in non-OOP languages. I use Nim with its ARC (1) on an RTOS and this is really only a concern for large lists or large dynamic structures. It can be managed using the same patterns as RAII, where you call child functions that know they won't be the last ones holding the bag. You can also use the same trick as some Rust code does, where you pass the memory to another thread to dispose of (2) - a sketch of that trick follows the links below.

4/5. It depends on the implementation, but you can use pools or arenas or other options. Nim provides an allocator algorithm (TLSF) with proven O(1) times and known fragmentation properties (3). True, tracing GCs can make better use of short-lifetime arenas, though with ARC you get similar benefits using stack-based objects.

6. It's tradeoffs. Tracing GCs also end up needing to scan the entire heap every so often. ARC only needs to touch a root object during usage, and only accesses the entire graph when deallocating.

Your last point isn't accurate, as you can use an appropriately designed ARC in a hard realtime context. I've found it quite easy to do; granted, it takes a little bit of care, but any realtime system does. For items like interrupt handlers I ensure no memory is allocated or destroyed.

1) https://nim-lang.org/blog/2020/10/15/introduction-to-arc-orc... 2) https://abramov.io/rust-dropping-things-in-another-thread 3) http://www.gii.upv.es/tlsf/
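
A rough C++ analogue of the "dispose on another thread" trick from point 3 (the DeferredBin name and API are made up for illustration; this is not the Nim or Rust code from the links): the hot path hands its last reference to a bin, and a low-priority cleanup thread drains the bin, so the cascading destructor of a big graph never runs inside latency-sensitive code.

    #include <memory>
    #include <mutex>
    #include <vector>

    // Hypothetical deferred-reclamation helper.
    class DeferredBin {
    public:
        // Called on the hot path: cheap, no destructors run here.
        void discard(std::shared_ptr<void> p) {
            std::lock_guard<std::mutex> lock(m_);
            bin_.push_back(std::move(p));
        }

        // Called from a low-priority cleanup thread.
        void drain() {
            std::vector<std::shared_ptr<void>> local;
            {
                std::lock_guard<std::mutex> lock(m_);
                local.swap(bin_);
            }
            // The (possibly cascading) destructors run here, off the hot path,
            // when `local` goes out of scope.
        }

    private:
        std::mutex m_;
        std::vector<std::shared_ptr<void>> bin_;
    };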


I mostly agree with all of your points, and the core message - it's all tradeoffs.

Just one nitpick though:

> 6. It's tradeoffs. Tracing GCs also end up needing to scan the entire heap every so often.

This is not accurate: tracing GCs always start from the roots and only ever visit live objects. By definition, the objects that they free are not reachable from anywhere. "Full scans" typically refer to various optimization strategies that tracing GCs implement to avoid scanning all roots (e.g. generations, per-thread scans) which do rely on still occasionally doing a full scan (scanning all live objects in all generations, or in all threads).


Yah you’re right. I didn’t quite describe that correctly, but mostly I meant scanning the “live” objects of the heap.


> non-lock, non-atomics-based ARC designs that still handle threading safely

Don't think that's even doable at all, at least not portably. Do you have some examples?


There are a couple of methods, like deferred counting, differential counting, or biased references. They're usually not completely atomic-free, but they generally provide guaranteed constant overhead or tweak when or how the memory can be shared.

- Nim's ARC only allows one thread to manage a reference count at a time, but enables an isolated graph of memory to be moved to another thread. The concept is called `Isolate`, and is very similar to Rust's single owner of mutable references. There's still WIP to have the compiler automate the checks, but it's usable now (I used it with FreeRTOS's xQueue mechanism just fine). https://github.com/nim-lang/RFCs/issues/244

- Python's new non-GIL proposal that does this using biased references: https://hackaday.com/2021/11/03/python-ditches-the-gils-and-...

- The source of Python's biased references: https://iacoma.cs.uiuc.edu/iacoma-papers/pact18.pdf

- Deferred reference counting: https://dl.acm.org/doi/pdf/10.1145/3453483.3454060

It's pretty cool stuff!


If you check ARC code in a profiler, there's a shocking amount of time spent in the retain/release calls:

https://floooh.github.io/2016/01/14/metal-arc.html

There are ways around this overhead at least in Metal, but this requires at least as much effort as not using ARC to begin with.


I think the M1 CPU lowers some of the ARC retain/release to silicon -- it doesn't remove the problem, but it does seem to reduce the relative overhead significantly.

https://news.ycombinator.com/item?id=25203924

> this requires at least as much effort as not using ARC to begin with.

Designing a brand new CPU architecture certainly counts as "at least as much effort", yes. ^_^

P.S. While I'm here, your "handles are the better pointers" blog post is one of my all-time favorites. I appreciate you sharing your experiences!


Isn't ARC absolutely worst-case for most multi-threading patterns? You'll thrash objects between cores just when you reference them. Every object becomes a mutable object!


Reference counting is a form of garbage collection. Remember that reference counting and tracing garbage collection are simply two extremes of a continuum of approaches. Must-read paper: "A Unified Theory of Garbage Collection".


Compared to other, state of the art GC strategies, it's slow and has bad cache behavior esp. for multithreading. Also, not deterministic perf wise.


Can you explain these or point me to any links? I'd like to learn more.

> has bad cache behavior for multithreading

Why is this? Doesn't seem like that would be something inherent to ref counting.

> Also, not deterministic

I always thought that with ref counting, when you decrement a ref count that goes to 0 that it will then essentially call free on the object. Is this not the case?


Bad cache behavior: you're on core B, and the object is used by core A and in A's L2 cache. Just by getting a pointer to the object, you have to mutate it. Mutation invalidates A's cache entry for it and forces it to load into B's cache.

determinism: you reset a pointer variable. Once in a while, you're the last referent and now have to free the object. That takes more instructions and cache invalidation.
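
To see the cache effect in isolation, here's a tiny C++ sketch using std::shared_ptr's atomic reference count as a stand-in for ARC: two threads that only ever read the object still fight over the cache line holding the count, because every copy and destruction of a reference writes to it.

    #include <memory>
    #include <thread>

    int main() {
        auto obj = std::make_shared<int>(42);

        // Neither thread reads or writes *obj, yet both hammer the shared
        // control block: each copy is an atomic increment and each scope exit
        // an atomic decrement, so the cache line bounces between the cores.
        auto hammer = [obj] {
            for (int i = 0; i < 10'000'000; ++i) {
                std::shared_ptr<int> local = obj;
            }
        };

        std::thread a(hammer), b(hammer);
        a.join();
        b.join();
    }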


> Bad cache behavior: you're on core B, and the object is used by core A and in A's L2 cache. Just by getting a pointer to the object, you have to mutate it. Mutation invalidates A's cache entry for it and forces it to load into B's cache.

Thank you! This was the response that made the cache issues clearest to me.


> Why is this? Doesn't seem like that would be something inherent to ref counting.

Once I know I can read the data, it usually doesn't matter that another thread is also reading it. Reference counting changes that because we both need to write to the count every time either of us takes or drops a reference to the data, and in the latter case we need to know what's happened on the other core, too. This means a lot more moving of changing data between processor cores.

> > Also, not deterministic

> I always thought that with ref counting, when you decrement a ref count that goes to 0 that it will then essentially call free on the object. Is this not the case?

That's my understanding, but is that "deterministic" as we mean it here? It's true that the same program state leads to the same behavior, but it's non-local program state, and it leads to you doing that work - potentially a lot of work, if you (eg) wind up freeing all of a huge graph structure - at relatively unpredictable places in your code.

There are good workarounds (free lists, etc) but "blindly throw language level reference counting at everything" isn't a silver bullet (or maybe even a good idea) for getting low-latency from your memory management.


> Doesn't seem like that would be something inherent to ref counting.

It is inherently so. A reference count is an atomic mutable value that must be updatable by any thread.

A significant selling point of ARC compared to traditional ObjC reference counting was that the compiler could elide retain/release calls better than the programmer could, thus preventing a ton of stalls.


Writing to the refcount field even for read-only access dirties cache lines and causes cache-line ping-pong in the multi-core case, where you also need slower, synchronised refcount updates so as not to corrupt the count. Other GC strategies don't require dirtying cache lines when accessing objects.

Determinism: the time taken is not deterministic because (1) malloc/free, which ARC uses but other GCs usually don't, are not deterministic - both can do arbitrary amounts of work like coalescing or defragmenting allocation arenas, or performing system calls that reconfigure process virtual memory - and (2) deallocations cascade, as objects hitting refcount 0 trigger refcount decrements and deallocations of other objects.


> Also, not deterministic

> not deterministic perf wise.

was what the parent wrote (emphasis added). I assume it's referring to the problem that when an object is destroyed, an arbitrarily large number of other objects -- a subset of the first object's members, recursively -- may need to be destroyed as a consequence.


Because (mainstream) refcounting GCs are just slower than modern tracing GCs. GC pauses are virtually gone these days (Java gives you <1ms maximum pause for heaps up to 16TB), and are actually more deterministic than refcounting.


RC can be deterministic in terms of when a destructor gets called. GC languages usually don't support destructors for that reason.


RC GC is not deterministic, though. All you know is that it will be called when some reference is cleared. You don't know which and when, and you don't know how much work it will do. With modern tracing GCs there are no more pauses, and mostly a constant and small CPU tax, done in the background.

The only significant cost tracing has these days is in memory footprint.


> The only significant cost tracing has these days is in memory footprint.

And that's not insignificant. The top-of-the-line Pixel 6 Pro has twice as much RAM as the top-of-the-line iPhone 13 Pro. Maybe the Android camp is just more eager to play the specs game, but I've long speculated that iOS simply needs less RAM because of its refusal to use tracing GC.


Windows Phones of the Lumia series had historically less RAM than equally priced Android models and the performance with .NET Native was much better.

I used all my models until their hardware died.


How can there be degrees of determinism?


Determinism in programming doesn't just mean "X was caused by Y" (e.g. determinism as a stand-in for the causal chain).

It mostly means "I can know (as in "determine") how much time, or instructions, or calls, or memory an operation will take".

And this knowledge can come in degrees (be more or less fuzzy).


I can tell you I'll come fix your plumbing between 10:00 and 14:00, and it will take between 30 minutes and two hours, or that I'll make a delivery at 9:45.


The destruction of a graph of several billion nodes and about five times as many edges took several seconds in my C++ program. I constructed it in main(), and the delay between the message printed just before the "return 0;" statement and the program actually exiting was quite long.

"Deterministic" is a double-edged sword. You get deterministic allocation and release times, but they might be bigger than what is really achievable.


Because it is worse, regardless of the sales pitch.

https://github.com/ixy-languages/ixy-languages

When you do all the required optimizations, it turns into tracing-GC optimizations under another, more politically accepted name.


Some game engines (C++) do allocations in multiple memory areas. If some allocated memory is known to be needed only for the current frame, then it is allocated from that memory area. Then at the end of the frame the whole region is freed. This is an explicit garbage collection at end of each frame. Memory allocation can further be split according to the CPU thread, thus avoiding global locking in the allocator. With double-buffering the next updated/prepared frame can use its own memory area, while the previous one finishes rendering.

The problem with GC or with reference counting is that it needs to operate on each allocated object separately. If the task for the GC can be reduced to operate on whole memory areas only, its overhead is greatly reduced.
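
For illustration, a minimal sketch of such a per-frame arena in C++ (assuming the objects placed in it are trivially destructible, since nothing ever runs their destructors): allocation is a pointer bump, and the end-of-frame "collection" is a single offset reset.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    class FrameArena {
    public:
        explicit FrameArena(std::size_t bytes) : buf_(bytes), offset_(0) {}

        // Bump allocation: align the offset (align must be a power of two),
        // hand out a pointer, advance.
        void* alloc(std::size_t size,
                    std::size_t align = alignof(std::max_align_t)) {
            std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
            if (aligned + size > buf_.size()) return nullptr;  // frame budget blown
            offset_ = aligned + size;
            return buf_.data() + aligned;
        }

        // End-of-frame "collection": every allocation is freed at once.
        void reset() { offset_ = 0; }

    private:
        std::vector<std::uint8_t> buf_;
        std::size_t offset_;
    };

With double-buffering, one would keep two of these and reset the one whose frame has finished rendering.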


> there are tons of new, modern GC implementations that have no pauses

There are no GC implementations that have no pauses. You couldn't make one without having prohibitively punitive barriers on at least one of reads and writes.


There are no CPUs that have no pauses; who knows, you may have to wait a microsecond for that cache line to come in from RAM.

There are no operating systems that have no pauses; if you don't want to share the CPU, you're going to have to take responsibility for the whole thing yourself. Most people are not even using RTOS.

There are no reference counting implementations that have no pauses. Good luck crawling that object graph. Most people are not even using deferred forms of reference counting.

As far as I know, there is one malloc implementation which runs in constant time. No one uses it.

There are tracing GCs whose pause times are bounded to 1ms. That is enough for soft-real-time video and audio (which is what matters to video games). In general, you are not going to get a completely predictable environment unless you pick up an in-order CPU with no cache and write all your code in assembly.


I think the more accurate / useful statement is: there's no GC with bounded pause times that's also guaranteed to free memory fast enough that you never run out when you shouldn't. In other words, GC can never fully replace thinking about your allocation patterns.

(I suspect this is obvious to a lot of programmers, but from working with Java programmers who just skim JVM release notes and then repeat "pauseless", it's clearly not obvious to many, either.)


> GC can never fully replace thinking about your allocation patterns

Of course not! Nothing can. But—two things:

1. Per Knuth on premature optimization, 97% of your code should not care about such things. For those parts of your code that do need to effect their own allocation strategies, they can generally do so as well in a GCed language as another. Tracing GC forces minimal cognitive overhead on the remaining 97%, and allows it to interoperate smoothly with the 3%.

2. Tracing GC has better performance characteristics than other automatic memory management policies, in a few important respects. Compacting adds locality; bumping gives very fast allocation; copying gives very fast deallocation of deep graphs in the nursery.


The problem is that garbage collection does impact that 3%... since unless you're using a naive GC, garbage collection limits your ability to manage memory manually. In general, reference counting is more tolerant of mixing refcounted and non-refcounted code. This hurts more because GC gives you performance in the parts of the code where performance doesn't matter, while hurting you where performance does! And yes, this is alleviated somewhat by memory pools, but not entirely.


> Of course not!

Like I said - "of course" to me, and apparently you, and probably anyone who has worked with multiple GCs and unmanaged languages over many years - but that's not most programmers. The vast majority of Java developers I interview, up to those with 5-6y experience, have no clue how their idioms affect allocations.

> Per Knuth on premature optimization, 97% of your code should not care about such things.

No, this is the classic misstatement of Knuth. 97% of the time, I shouldn't worry about such things. But 100% of my program still might be affected by a design needed to reduce GC pressure, if that means switching core data structures to pools or SoA or something.


Right, but if you're allocating way too much, no lifetime management model will save you.

As a game dev I still will take a modern GC over refcounting or manual lifetime management any day. There are scenarios where you simply don't want to put up with the GC's interference, but those are rare - and modern stacks like .NET let you do manual memory management in cases where you are confident it's what you want to do. For the vast majority of the code I write, GC is the right approach - the best-case performance is good and the worst-case scenario is bad performance instead of crashes. The tooling is out there for me to do allocation and lifetime profiling and that's usually sufficient to find all the hotspots causing GC pressure and optimize them out without having to write any unsafe code.

The cost of having a GC walk a huge heap is a pain, though. I have had to go out of my way to ensure that large buffers don't have GC references in them so that the GC isn't stuck scanning hundreds of megabytes worth of static data. Similar to what I said above though, the same problem would occur if those datastructures had a smart pointer in them :/


This is a meaningless assertion. If you insist on going that far, we have to discuss whether OS context switches cause your process to pause or whether the allocator you're using causes page faults. When people say 'GC implementations that have no pauses' they mean no stop-the-world collection, which IS an achievable thing in production collectors in many cases.


GC has a lot of issues, especially in engine-level code, but in practice every game I've worked on or shipped has had at least one garbage collector running for UI or gameplay code: Lua, ActionScript (Scaleform), UnrealScript, JavaScript, managed C#. Every game had GC performance issues as well, and we wrote code to generate minimal or no garbage.


Flipping this: in the C++ world, "garbage collection" is more than just freeing unused references - "resource release is destruction".

Try-with-resources is ok, but not great.

Ultimately developers, especially those concerned with performance, prefer full control. And that means deterministic lifetime of memory and resources.


Part of it is also due to Unity3D sticking with the ancient stop-the-world Mono GC for a very long time and discouraging developers from doing allocation in official talks.


Another big cost of garbage collection is memory usage: it's not uncommon for a high-performance GC to require 2x-3x the memory for the same application compared to non-GC. For games this is not trivial given how heavy some of the assets are.


Most of those people aren't aware that Unreal Blueprints and Unreal C++ do have GC.

They just see C++ and don't look into the details.

Besides that, given that most engines now have a scripting layer, even if deep down on the engine GC is forbidden, the layer above it, used by the game designers, will make use of it.

It is like boasting about C and C++ performance, while forgetting that 30 years ago they weren't welcome for games on consoles, or on the 8/16-bit home micros.

My phone is more powerful than any Lisp Machine or Xerox PARC workstation, it can spare some GC cycles.


> My phone is more powerful than any Lisp Machine or Xerox PARC workstation, it can spare some GC cycles.

Speak for yourself; most phones I see still take 2+ seconds to open some apps.

As far as desktop goes, even fewer cycles can apparently be spared - IntelliJ frequently freezes for 5s just after startup, and most applications have insane startup times considering the power they have.

So, no, I don't want to add yet another slowdown because "our machines are 1000x faster than high-end digital workstations from 30 years ago", because it seems to me that all of the increases in hardware power have already been used up, and more.


How long does Photoshop take to start up?

The problem isn't the hardware or the language, rather how stuff gets programmed in an age where plenty of people reach for Electron apps and don't care about algorithms and data structures.

My first applications had 64KB and 3.5 MHz at their disposal to make their users happy.


> Speak for yourself; most phones I see still take 2+ seconds to open some apps.

Why is that though? I've written code since the 8 bit days of BBC micros and ZX Spectrums.

Casey Muratori (who has been on here a lot lately) nails it in his talk [1] - developers just don't realise how fast their code could and should run. And we have layers upon layers of frameworks, and no one cares.

[1] https://www.youtube.com/watch?v=Ge3aKEmZcqY&t=8638s


> By 2009, game developers will face... CPU’s with: 20+ cores

It seems like he was off by a matter of a few months at most.

I have a quad AMD Opteron(tm) Processor 6172 (Mar 2010) 2U server with 48 hardware cores. This motherboard can also take the 16-core CPUs which were introduced the following year, which would put it at 64 cores and 128 threads.

The Opteron 6-core CPUs were introduced in June 2009, so a quad server would be 24 cores, or 48 threads.

Of course, that's not cores in a single CPU, if that's even what the prediction was referring to, but that probably wouldn't matter very much from a game dev perspective anyway. It was probably a pretty good guess, all things considered.


> a quad server would be 24 cores

That's not usually what people game on, though. Even today, 20-core CPUs are the exception for gaming rigs.


16-core machines aren't that uncommon today, but I guess back in 2009 they were way less common.

But I think the trend is clear - more cores are the future, as clock-speed scaling has slowed down.


> 16-core machines aren't that uncommon today

~0.31% according to the Steam hardware survey [1]. If you are including 8-core machines with simultaneous multithreading (so 16 logical cores, 8 physical), then you would be at ~17%, however.

[1] https://store.steampowered.com/hwsurvey/cpus/


Point taken, but it does hold for server-side games like Fortnite, because the bulk of the game world actually runs on the server. However, I'm not sure when they started shifting over to that model (or even if that's what he was talking about as the future).


I suspect, when he says game developers will face 20 core CPUs, he means as a target baseline system, not as an exception.


You have to look at it in its 2005 context. The PS3 had just been announced, with the Cell processor's 8 CPU cores, but the original target for Cell was 32 cores. It was always thought that some day, maybe the PS4 would get to that. Intel would have to compete with more cores because they had just gone through the whole Pentium 4 scaling problem. The future was going to be super multi-core, and software would catch up (somehow) once the hardware was there. And that is not exactly a bad take in 2005/2006, considering Intel announced their work on Larrabee in 2008 and was pushing for some collaboration with Sony on using it for the PS4.

And of course none of that happened. GPU and CPU did not converge; they both scaled far further than anyone could have imagined.

I mean, if you had asked developers in 2006, no one would have thought we'd be doing 4K rendering with HDR at 120fps and still not be doing ray tracing.


He was off by a decade or so, but that is mostly Intel's fault. About a decade later, we have Intel chips coming out with 10 cores and AMD with 16 cores. We're getting there.


> MacBook Pros, Razer Blades, the XBox Series X, and the PS5 all use 8-core CPUs, so the prediction of an ever-escalating number of CPU cores seems to have not really panned out.

You seem to forget the GPU - how many cores does that have? (But really the question is: isn't parallelism the default mode when working with GPUs?)


GPUs have existed since long before 2009. Sorry, but the guy who wrote Unreal Tournament of all people definitely did not make a basic mistake or generalization like you're insinuating.


Well, the RTX 3090 apparently has 10496... But I'm not aware of how to use them to optimize garbage collection.


Pretty sure Unreal Engine 4 and 5 have garbage collection built into the runtime.


Even with garbage collection, you can still manage memory manually on top of the GC by creating a pool and managing it yourself.
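
For what it's worth, the pool idea looks roughly like this - a generic free-list sketch in C++; in a GC'd runtime the same pattern is just a list of recycled objects that you never let go of:

    #include <deque>
    #include <vector>

    // Minimal free-list pool: released objects are recycled instead of being
    // returned to the allocator (or left for the GC), so steady-state code
    // performs no heap allocation. T must be default-constructible and assignable.
    template <typename T>
    class Pool {
    public:
        T* acquire() {
            if (free_.empty()) {
                storage_.emplace_back();   // grow only when the free list is empty
                return &storage_.back();
            }
            T* obj = free_.back();
            free_.pop_back();
            return obj;
        }

        void release(T* obj) {
            *obj = T{};                    // reset to a fresh state before reuse
            free_.push_back(obj);
        }

    private:
        std::deque<T> storage_;            // deque keeps addresses stable as it grows
        std::vector<T*> free_;
    };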


Recently listened to the Haskell Interlude podcast with Lennart Augustsson, where he mentions at the end that he's at Epic Games. Looks like they are investing heavily in Haskell-like things now.

[1] https://haskell.foundation/podcast/2/


He wrote an article of the same variety, on GameSpy I think, back in 2000 no less. It really influenced the direction I took in grad school: https://m.slashdot.org/story/9565 (the actual article seems to be lost; maybe I can dredge up a web archive link).


Thanks, that was a great read.


I always remember SPJ’s chart in a short jokey video “Haskell is useless” https://youtu.be/iSmkqocn0oQ

Really made me see things clearer.


It’s funny in my mind that I laughed at his joke about “the computer would just produce heat” and somehow the whole of cryptocurrency is literally doing just that.


I really like his chart about usefulness and unsafe programming. And that was in 2011! We have seen in the past 10 years how programming has moved in the direction he was pointing at.


I guess Rust is a language that made the move to Safe and Useful, following the top arrow?


Rust prevents a few types of unsafety, mostly relating to memory and concurrency. There are many more on which Rust does not do particularly well though. Specifically, Rust allows arbitrary side effects from basically any part of the code. You can always write to stdout or a logfile, or open up new sockets, or delete files from the filesystem and there will be nothing in the type signature to even warn you about this.


>You can always write to stdout or a logfile, or open up new sockets, or delete files from the filesystem

Haskell has a separate problem here: all of these can fail, and there's nothing in the type system to alert you of this (in the standard library), such failures just mindlessly throw exceptions like some Java monstrosity. In Rust, on the other hand, all such functions return something like Either Error Result, forcing the caller to check (and ideally handle) any errors. Not to mention async exceptions in Haskell, which can happen anywhere, and the fact that every value is really of type value|undefined, due to laziness. It's practically impossible to cleanly formally reason about code in Haskell due to the fact that anywhere could be interrupted by an async exception.

When even C++ is considering trying to remove exceptions from the standard library, Haskell's love for untyped exceptions everywhere is seriously behind the times for a language that prides itself on correctness.


I used to bash Haskell exceptions but my views changed recently after programming in Haskell for a while.

> all of these can fail, and there's nothing in the type system to alert you of this (in the standard library), such failures just mindlessly throw exceptions like some Java monstrosity

There's `MonadThrow` in Control.Monad.Catch which hints that the monad in question can throw exceptions. Admittedly, partial functions like `undefined` and `error` are still usable...

> Not to mention async exceptions in Haskell, which can happen anywhere, [...] anywhere could be interrupted by an async exception

... and they can throw exceptions everywhere, just like asynchronous exceptions, but it's actually a strength! Haskell enforces a clean separation between impure code (typically in a monad) and pure code. You can only catch exceptions in the IO monad, which often lies outside of the core logic. Due to this unique strength, Haskell is one of the very few languages that can safely terminate running threads.

Impure code can become harder to write because of exceptions, but since you don't write everything in the IO monad, the problem is largely mitigated. Yes, exceptions are hard to get right, and that's exactly why other languages are trying to get rid of them, but Haskell makes it quite tractable (though still quite annoying). Rust uses more Maybes and Eithers in the IO monad (to borrow jargon from Haskell), but it's also got panic, which is the actual counterpart of Haskell exceptions.

> and the fact that every value is really of type value|undefined, due to laziness

To be pedantic, Haskell has levity polymorphism, which gives you unlifted datatypes, like in OCaml and Idris. Even older Haskell has unboxed datatypes that are not lifted.

> ...Haskell's love for untyped exceptions everywhere...

Nope, Haskell's exceptions are typed.


> > ...Haskell's love for untyped exceptions everywhere...

> Nope, Haskell's exceptions are typed.

logicchains means that the exceptions that a function can throw are not noted in its type (and as a massive Haskell fan I agree with him/her that that is very annoying).


>write to stdout or a logfile, or open up new sockets, or delete files from the filesystem

Do those things come under the category of unsafe? Why would we be programming if it weren't to do those kinds of things? If I buy a shovel, at some point I'll probably want to dig a hole with it.


Absolutely. If you watch the youtube video linked in this thread, SPJ makes that very comment, that a program with zero effects is useless.

However, the goal of Haskell (and indeed any language trying to be safe in the way SPJ means) is to have most code be safe/effect-free, and then to have effects be very carefully controlled and limited in their use. In fact, things like the IO monad mean many parts of Haskell code can't do IO at all.

We obviously do want some sort of effect in the end, but the idea is it's safer to contain those effects in very limited places, and not allow effects to happen literally anywhere at any time.

Note: "unsafe" in the SPJ video was specifically about effects, while "unsafe" in Rust terminology is mostly about memory safety, so the two really aren't the same term, and to be honest that can make communication less clear. I don't know what "category of unsafe" meant in your comment, really.


If you'd watched the video from the grandparent comment, you'd have seen that those kind of uncontrolled side effects are exactly what Simon Peyton Jones was talking about in the video when talking about "safe" versus "unsafe" languages.

But in any case, yes those things I mentioned can fall under the category of "unsafe". In the category of memory safety, you want to be sure (with compiler guarantees!) that no thread will secretly free some allocated memory while you are still using it. This is something the borrow checker can give you in rust. There are no builtin guarantees in Rust that this fancy new library you imported won't do `DROP TABLE users`, or open a socket to https://haxorz.com and pass on all the environment variables. There are other languages in which you can be sure that a function cannot open any sockets unless it specifically has "can open sockets" encoded in its type.


> There are no builtin guarantees in Rust that this fancy new library you imported won't do `DROP TABLE users`, or open a socket to https://haxorz.com and pass on all the environment variables.

In practice that's true of Haskell as well.


Indeed, and I think it was a mistake by the parent poster to cast it as some sort of security thing. It very much isn't.[0]

Anyway, controlling side effects is really about avoiding accidental/unintentional side effects, i.e. mutating the world[1] by accident. Of course if everything is in IO, you only get "can do anything to the world" and "can't change the world at all", so Haskellers are usually interested in more fine-grained separation of effects than just pure/impure.

Of course, you are also trusting that code you're using doesn't do crazy unsafePerformIO, etc. stuff, but at least you can grep the code for that :). And sometimes unsafePerformIO can be a good thing to do 'magical' things which are referentially transparent but require mutation for asymptotics or performance generally.

[0] Safe Haskell is more about that kind of thing, but AFAIUI it isn't really in much use and never really took off. IIRC, it might even be slated for removal?

[1] Which is the ultimate shared mutable state, after all.


Right, my "in practice" was hedging for the existence of SafeHaskell, which does rely on the types and is "built in" to GHC, but as you say isn't really used by the community.


I get that. (Another way to do it at run time is capabilities). What I don't like is calling this "unsafe". We know that use after free is never something anyone intended. We don't know that about opening a socket. If we take the attitude that any effect is unsafe then soon we will feel we have to control every one of them. If I have to control everything someone else does then I might as well do it myself (i.e. you eventually start to leak the implementation details and lose flexibility). Call it contracts or capabilities or something but not unsafe.


Use-after-free is bad. "Unsafe" isn't the same as bad; unsafe means "could be bad". The reason that is a useful definition for unsafe, is that it allows us to define safe as "definitely not bad".

In Haskell, a function like 'Int -> String' is safe (definitely not bad). A function like 'Int -> IO String' is unsafe (it might be bad; we hope not). If it were possible to specify "bad" via the type system (like the type of use-after-free) then we would want that to be a type error (like it is in Rust).


I don't know. Int -> String could be bad if the algorithm that calculates the value of the string is bad. But anyway I suppose I don't like changing the use of language where "unknown" now becomes "unsafe". Unsafe to me means that I know something about it. If I know nothing about it then it is neither safe nor unsafe. It's just unknown. Why not call it that? Otherwise we have two words for the same thing and have made our language more imprecise. (Alternatively if we take the attitude that something unknown could be good then we could argue that we call it safe).


> I suppose I don't like changing the use of language where "unknown" now becomes "unsafe".

I don't think that's changing the use of language at all. For example, playing one round of russian roulette is 'unsafe'; even though (a) we don't know if it will have a bad outcome or not, and (b) the chance of a good outcome is much higher than that of a bad outcome.


I do believe that "unknown" should mean "unsafe". Safe means that you can rely on it, if you don't know anything about it then from a safety perspective this must be treated as unsafe.


Except this isn’t true. I can embed arbitrary side effects in any line of Haskell code with something like

    import System.IO.Unsafe (unsafePerformIO)

    foo :: Int -> String
    foo x = seq
            (unsafePerformIO $
              putStrLn "pwned")
            (show x)


Yes, we know unsafePerformIO exists, but nobody really uses it, it's clearly "unsafe", you can have your build chain scream at its occurrence, and... fast and loose reasoning is morally correct[1].

[1]: https://dl.acm.org/doi/10.1145/1111320.1111056


FYI this sort of 'backdoor' can be forbidden using the "safe haskell" extension in GHC:

https://downloads.haskell.org/~ghc/latest/docs/html/users_gu...


The point is not that you shouldn't do these things. The point is that Rust does not provide any tools to help you do it safely. Mutation is also a necessary thing in programming, which can be unsafe if done incorrectly. Rust has many built in rules for keeping mutation safe and requires labelling functions that mutate. For side effects, there is not much of a safety net in rust.


The part you missed is "always". The unsafe part is being able to do this from any function, instead of only from functions explicitly marked as being able to perform effects.


The basic idea isn't that you want to avoid such activities, but that you want to know which functions do it, so that it is easier to reason about your program.


I think SPJ is using a slightly different definition of safe than Rust does! But Rust is definitely a bit further up his safeness scale - though not all the way, since it doesn't fully control all effects.


It lacks a GC however, which Haskell has for a reason.


Good video, never seen it before. Thanks! Also love the fact he doesn't take himself or programming things too seriously (Haskell in useless category is great).


It is pure speculation but I don’t believe Epic’s Verse language will be a functional programming language.

Verse is more intended for Blueprint people and there is no way people working on the Blueprint code will be able to move into functional programming.

Also I don’t get the investment into Verse language itself, why not just fund LuaJit, do we really need another scripting language when LUA nailed some many things right?

I guess Epic will craft a pretty fast scripting language and try to lock game developers to their own ecosystem.


I don't know, Simon has been big on "spreadsheets as functional programming" for a bit. Something blueprinty built with that point of view might be both "functional" and approachable.


>Simon has been big on "spreadsheets as functional programming"

Thanks for the keyword quote. Interesting topic, as I feel spreadsheets are a very under-represented form of programming, especially when trillions of dollars rely on Excel functioning properly.


He's got some interesting talks on the topic, and sometimes the slides aren't in comic sans!


Blueprint is already basically Haskell, I've done both and the similarities are striking. I think the world is ready for a proper visual programming language that plays nice with git, compiles down to native code and has all the nice abstractions that Haskell offers and none of the downsides of a scripting language. No doubt SPJ is one of the people who can help make that dream a reality, not in small part because just his presence will attract the best and brightest to Epic.


> Blueprint is already basically Haskell

It doesn't have lambdas/closures. You have to create a whole separate object in a separate file to pass something new with data that can be called later.


Alright, it's maybe a bit of hyperbole. If I'm thinking about what characterises Haskell, it's not that it has closures, it's that it makes control flow and side effects explicit through monads. Blueprint has a very similar idea, where effectful/stateful code is ordered sequentially through the white lines, while pure functions are hooked up just through their inputs and outputs. It doesn't go much deeper than that, but looking at Haskell, it seems Blueprint really might be able to go a lot deeper.


Visual languages can be quite functional, especially when they map onto IC-like modules.


It may not be FP, but I imagine it will be heavily type-dependent, and Haskell has one of the best type systems of any programming language.


Honestly Lua is a horrible, terrible language. It's only gotten so widespread because it's trivial to embed.


Bizarre move. AFAIK he does whatever he wants at MSR. Maybe he's sick of that environment's politics; maybe he's sick of academic research/the ICFP crowd and wants to work on "real world" stuff. I can't imagine it's about money.


MSR is having pretty substantial layoffs at the moment. Some kind of substantial reorientation, with whole projects being shut down.


Yes, AFAIK they are setting up a major lab in Amsterdam on molecule design using deep generative models.



I would’ve also assumed his job was perfect, but he’s the best judge of that!

This is basically what Erik Meijer did isn’t it? He worked at Microsoft and produced some really nice things that had a big impact, like reactive extensions for example.


Yeah. Simon Marlow also left MSR years ago, for Facebook.


Never underestimate the capacity of poor management for screwing things up.



    CurrentRound^ : int = 0

    for (i = 1..NumberOfRounds):
        CurrentRound := i
        if(!SetupPlayersAndSpawnPoints()?):
            ShutdownGame()
            return
        else:
            Players.PutAllInStasisAllowEmotes()
            ResetMatch()
Looks nonfunctional as heck to me. Also it isn't expression-based - at least that's what the bare `return` says to me.

Honestly just looks like a dialect of Python. I wonder what SPJ will do with it...


Ugh. Failing to see why Lua isn't a better choice than this - even a subset of Lua augmented with that funky ^? stuff to achieve what you want in terms of safety.

Feels like "the whole everything is looking old we got new keywords! Metaverse! We need new languages...".

The older I get, the more I feel like languages are generational, just like the developers coming into the market. They don't want to write Java; even Python is getting old. We need fresh new languages every 5-10 years, which frankly just feels like the same walls with a fresh coat of paint.


> same walls with a fresh coat of paint

Honestly, if anyone can solve that problem, SPJ can!


What is SPJ supposed to bring to this? Are they gonna throw a top end type system onto it or something? Nothing about this looks interesting at first blush.


Simon doesn't just do types -- he knows a lot about compiler architecture and mapping execution models to efficient hardware usage, too, among other things.


I didn't mean to imply he just does types, but improving the compiler toolchain is a fair counter argument.


If it were up to me: make sure it has the primitives required to write things in a "mostly functional" way if the programmer desires / when the problem fits the space. Not sure of the whole list, but maybe starting with nice syntax for lambdas and built-in immutable collections? Maybe a macro system for introducing syntax? OK, you caught me: just make it a Lisp with Python syntax and types.


Timestamp?


It's in the query param: 01h06m24s


A long time coming since Tim Sweeney's 2005 talk referred to functional concepts and things like STM https://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced...


Sounds like Epic is hiring some pretty epic people. From the article, Dan Piponi works there as Principal Mathematician. As far as titles go, that's a dream one. I've really enjoyed some of Dan Piponi's writing and blog posts.


It may be worth noting that Steve Lucco[1] also left MS recently, after a stint of about 20 years.

[1] self-described co-designer of Typescript - https://www.linkedin.com/in/steve-lucco-b5084958/


Previously he only announced leaving MSR two months ago: https://news.ycombinator.com/item?id=28473733


Much as I am suspicious of Epic, this kind of move is way better than their silly stunt with the App Store.

The way to beat Apple is to develop better technology than them. It will take billions of dollars to do this, but Epic has the money.


Anyone know more information about Verse itself?


Wow! Didn't see that coming, but very interesting. When he says "the metaverse", does he mean Epic is working on one of their own? Or is it the one facebook is working on, and they have some sort of partnership?


Epic has been talking about the metaverse for years... e.g. it came up a lot in their Apple lawsuit, and searching on google turns this up from mid 2019: https://twitter.com/TimSweeneyEpic/status/115784180038569574...

So far it seems like they sort of vaguely want to turn Fortnite into it... but I haven't really seen much that I would call concrete.


>So far it seems like they sort of vaguely want to turn Fortnite into it... but I haven't really seen much that I would call concrete.

I always thought their Travis Scott event was a pretty good example of something you could call early metaverse-ish, going beyond the usual scope of the game

https://www.youtube.com/watch?v=wYeFAlVC8qU


Since at least 2016:

https://www.youtube.com/watch?v=xG9-6k3RYIU

Slide 46 and on (mentioned on 48) is largely metaverse and a bunch of stuff presaging the Apple lawsuit, even though Fortnite hadn't launched yet:

https://cdn.akamai.steamstatic.com/apps/steamdevdays/slides2...


When you say “the one FB is working on” - the stated goal is for there to be one Metaverse, as there is one Internet.

Of course that requires protocol interop (as did the Internet with TCP/IP and HTTP) and who knows if it will happen for the Metaverse.


Metaverses are less fundamental than the internet, and there's no need for there to be exactly one of them (other than lucrative proprietary lock-in, anyway). Metaverses are application-level concerns, and will be built on top of the existing internet infrastructure. In the same way that a web browser is a portal to HTTP content, a metaverse application will be a portal to whatever application-level protocols arise to support them.


> Metaverses are application-level concerns

I'm pretty sure that's not how they see it. I think the vision is of the singular Metaverse as the new narrow waist upon which all future apps are built.


Yes, and that's why it's an application level concern. A metaverse like they envision is a platform for the discovery of applications created according to certain standards, just like a web browser, and just like an app store. It's in every big player's interest to delude you into believing that there's only going to be one metaverse, and in particular that it's going to be theirs, but we have no more reason to believe these claims than we have reason to believe that there will ever be exactly one software platform in any other context. It's just a VR-focused app platform.


Have either Tim Sweeney or Mark Zuckerberg said that there's plans for only a single metaverse?

I think it's been pretty clear that they're going to work on their own metaverse products. Epic seems to be building on Fortnite, while Meta has Horizon.

I don't believe either have stated compatibility as a goal.


They're each going to build their own metaverse and they're planning for that to be the one and only metaverse. They're planning to kill each other.


Yeah that's totally what I expect, which is why I'm confused why the person I replied to seems to believe there'll be a single one.

I know that the Web 3.0 cryptocurrency crowd has ideas about a decentralized web with a shared ontological understanding, so maybe that's the source of the idea of there being a single metaverse?

But neither Epic nor Meta seem interested in a shared ecosystem. The metaverse is a product and their implementations will compete. I'm not sure if people would even want to be in two separate metaverses so competition will be fierce.

Most media (books/movies) sidestep that by usually having a single metaverse creator. That's in large part because it's narratively easier to deal with a single villain and universe. I don't know of any that posit competing offerings.


Agreed. You can have Facebook and Twitter open in two browser tabs and switch between them, but will Facebook and Epic even support the same headset? How much hassle would it be to log out of one and log in to the other? I think they literally want to monopolize your face.


Sweeney laid out his vision here, in 2016 before Fortnite launched:

https://www.youtube.com/watch?v=xG9-6k3RYIU


Yes, Zuck discussed this with Ben Thompson for example: https://stratechery.com/2021/an-interview-with-mark-zuckerbe...


Crucially, he only seems to call out commerce and goods. He doesn't really say it'll be a single metaverse. It'll be similar to the app stores today: you'll have Android, iOS, etc., and each will have some of the same apps, but they're not actually running the exact same app.

The metaverses will be similar. They'll end up with many of the same apps and experiences, but devs will have to publish independently to each platform.


> MZ: I think it’s probably more peer-to-peer, and I think the vocabulary on this matters a little bit. We don’t think about this as if different companies are going to build different metaverses. We think about it in terminology like the Mobile Internet. You wouldn’t say that Facebook or Google are building their own Internet and I don’t think in the future it will make sense to say that we are building our own metaverse either.



I think Epic is working on VR games, and saying "metaverse" just sounds cooler.


Might be similar to and looks like it's going to replace https://skookumscript.com


I wonder how fast the cultural view of Haskell will shift now. Lots of young minds will be exposed to Haskell/pure FP in a very exciting and positive light, meaning they'll probably sip all of it without questioning it. Pretty good, but maybe a little bad too.


What are you talking about?


Haskell is still niche, but if the gaming community gets wind of the thing Epic is using for their systems, they might get interested in a vastly different way of looking at Haskell and pure FP than most people have right now. This might bring a new (and vast) crowd, one that might come with an a priori appreciation of Haskell (because of trust in the Epic brand), and that often produces a religious view of whatever idea is learned.


I don't think you should assume SPJ was brought in to transition Epic to Haskell.


You have a point, maybe it's fair to assume that it will put FP on the map though.


FP is already on the map, though often labeled "Here be dragons."


What programming language is Epic currently using? Do people have a religious view about that language?


> I also expect to work with Dan Piponi on data-centre-scale transactional memory

This sounds really interesting. I learned about STM in Haskell while working on Cardano and it seemed so obvious - I am sure there will be some seminal outcomes of this too.


I wonder what the new language is going to look like. One thing I'm interested in is declarative programming languages; I feel like a declarative language would actually be a pretty good fit for games.


Sorry for the late reply, but Epic bought Skookum, the company behind SkookumScript [0]. They have shown off usage of it in Fortnite [1] too, and it looks pretty interesting, if a bit wacky to current programmers. Assuming he's there to push SkookumScript/Verse over the line for UE5's release and beyond, this would be a good preview of the final work.

[0] https://skookumscript.com/about/look/ [1] https://twitter.com/nephasto/status/1340046706642186247


I'd suspect based on SPJ's previous comments that it would be a strict statically-typed pure functional language but it probably depends on what amount of control he has on fun/research vs utility to Epic.


A huge loss to and a massive vote of no confidence in MSR Cambridge.

Is Epic worse than Facebook? I can't imagine much of an upside to preying on human frailties in order to make games addictive and extract money from kids.


>Is Epic worse than Facebook? I can't imagine much of an upside to preying on human frailties in order to make games addictive and extract money from kids.

What?


Have you seen kids these days? Video games are an addictive substance like cigarettes or alcohol.

(Yes, I know you played Mario Brothers as a child and turned out fine. So did I. But the stuff we had back in the day is extremely tame and weak compared to the hard stuff they peddle today.)


I dunno, they called EverQuest “EverCrack” for a reason. That was over two decades ago.


That's still in the area of D&D - social rewards encouraging large amounts of time. The modern psychological tricks used to encourage addiction and over-spending, especially in kids and neuro-atypical people, are newer, and were a "great" contribution from the likes of Zynga or King.


Probably a reference to the fact that Epic makes a ton of money from Fortnite.


But Fortnite is not a pay-to-win game. Unless all games are "preying on human frailties".

Not only that, Epic has invested every single dollar they made from Fortnite back into improving Unreal Engine.


> But Fortnite is not a pay-to-win game.

No, but that's not the point.

The "addictive" factor comes from micro-transactions with randomized returns on investment ("loot boxes"). Kids get sucked into chasing the most valuable skins for characters, weapons, etc. This either results in them spending an inordinate amount of time playing the game (increasing odds of paying money) or just paying money directly to purchase in-game loot boxes that have fractional chances of dropping the rarest gear.

It's not "all games" that are problematic in this way; it is games which are monetized through micro-transactions with loot boxes. It's essentially legal gambling for kids, and it's absolutely a problem that we should be taking more seriously.


Didn't fortnite remove loot boxes in 2019? https://arstechnica.com/gaming/2019/01/fortnite-puts-an-end-...

There's still something to be said about the addictive nature of microtransactions, but loot boxes haven't been relevant to the discussion as far as fortnite goes for a few years.


They had to do that for legal reasons. But that was only in the unpopular Save the World mode; I think the breakout battle royale mode and later creative modes never had them.


Didn't say I agreed. But there's an argument to be made that by making the game addictive, keeping players in the game encourages them to spend money on skins and whatever else they sell.

I'm personally pro-entertainment, pro-culture, etc... Tons of the things we spend money on in life are entertainment and, frankly, useless. But it's fun.


> But Fortnite is not a pay-to-win game.

That depends on how you define "win". As a gacha player myself (Genshin Impact), a lot of people roll because of FOMO and because they want to own a thing, not necessarily because they're going to "win".

Heck, I main Yoimiya, and she's not exactly part of the meta. Maybe I'm a sucker (no, I definitely am), but I'm not paying to "win" in the usual sense. There's plenty of exploitation to be had without attaching a competitive advantage.


Pay to win or not is irrelevant.

Advertising and game-design techniques that encourage vulnerable people to spend inordinate amounts of time and money on a game are the real problem. This leads to the current situation of most hugely successful games today, where the vast majority of players spend nothing or next to nothing, while a small minority of "whales" literally spend thousands of dollars monthly.




