Beyond Functional Programming: The Verse Programming Language [pdf] (peytonjones.org)
491 points by WillPostForFood on Dec 11, 2022 | 376 comments



This looks incredibly ambitious:

- There are no booleans in the language! Conditionals can still succeed or fail, but failure is defined as returning zero values and success is defined as returning one or more values.

- Verse uses a so-called 'lenient' evaluation strategy which is neither strict nor lazy, but somewhere in-between ("Everything is eventually evaluated, but only when it is ready")

- an expression does not evaluate to a value (like in Haskell), but instead to a sequence of zero or more values

- tries to bring functional logic programming into the mainstream

- Verse uses an effect system for I/O instead of monads

- A type in Verse is "simply a function". E.g. int is the identity function on integers, and fails otherwise

This all looks very mind-bending to me (in a good way). Perhaps Verse will one day be as influential for PL design as Haskell has been.


> sequence of zero or more values

You can try this type of programming at home, say in JavaScript: make any result or argument an Array. The problem with nulls goes away because "not there" is represented simply by an empty Array (a.k.a. a "sequence").

And you will not get null-errors because you can write:

    newValue = someValue.filter(...).map(...);

You don't need to test whether filter() returns an empty array: map() works on empty arrays too, it just does nothing.

This is in a sense "generalized programming" because you don't deal with individual values but always with collections of them. I sometimes wonder why I don't use this pattern more often.
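
To make this concrete, here is a minimal TypeScript sketch of the pattern (findUser and the sample data are invented for illustration):

    // "Not there" is just the empty array; a hit is a one-element array.
    type User = { id: number; name: string };

    function findUser(users: User[], id: number): User[] {
      const hit = users.find(u => u.id === id);
      return hit === undefined ? [] : [hit];
    }

    // No null checks anywhere: filter/map are no-ops on [].
    const names = findUser([{ id: 1, name: "Ada" }], 2)
      .filter(u => u.name.length > 0)
      .map(u => u.name); // [] -- nothing blew up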


Because now you can't differentiate between an actual empty array or an error. That's horrible.

Or, you actually have to use nested arrays to describe that the result might be an error or an array. But now you have to deal with an array which might contain... zero errors or even multiple ones.

That's really not a great way of doing things. Instead, do it the other way around and make null treatable in the same way as arrays are. And if you do that, you end up with what languages like Haskell do.


> you can't differentiate between an actual empty array or an error

Empty array in this pattern does not represent an error. It represents the fact that the values you asked for could not be found. It is like querying a database: it is not an error if your query finds no values.

The calling code that receives the empty array might decide it is an error, or that it is not. And of course you can always throw an error; that can be done with the "throw" keyword.

Getting rid of using nulls by using arrays instead is not about handling errors, but about preventing errors, by changing the semantics of your functions slightly so they never need to return null.
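
As a sketch of what "changing the semantics slightly" means in practice (illustrative TypeScript, names invented):

    const prices: Record<string, number> = { "A-1": 100 };

    // Null-based contract: every caller must remember to check.
    function priceOrNull(sku: string): number | null {
      return sku in prices ? prices[sku] : null;
    }

    // Array-based contract: "no such sku" is the empty sequence,
    // an ordinary query result rather than an error.
    function priceOf(sku: string): number[] {
      return sku in prices ? [prices[sku]] : [];
    }

    const withTax = priceOf("B-2").map(p => p * 1.2); // [] -- no special case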


You can just replace "error" in my post with "not there" and the meaning stays the same though.


No, not really. If every result is an Array, then such an array can contain an empty array and then it itself is NOT an empty array. It's all about how you specify the semantic contract of your function.

You can say a result was found and that result is an array, possibly an empty one. Or that no result was found, which is indicated by returning a TOP-LEVEL empty array.


Well yeah, that's why I said:

> Or, you actually have to use nested arrays to describe that the result might be an error or an array. But now you have to deal with an array which might contain... zero errors or even multiple ones.

In other words: now you have Array<Array<?>>

And now it is only by convention that the outer array has 0 or 1 elements. But there is no guarantee, and the compiler won't help you since it cannot know. That's why this is the inferior approach compared to turning it around and making it so that null can be mapped and flattened.
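
Concretely, these are the shapes that convention alone has to keep apart (TypeScript sketch):

    const notFound: number[][] = [];      // no result at all
    const foundEmpty: number[][] = [[]];  // a result that happens to be empty
    const found: number[][] = [[1, 2]];   // a result with elements

    // Nothing stops a buggy producer from returning two "results",
    // and the compiler cannot object:
    const ambiguous: number[][] = [[1], [2]];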


I had similar thoughts (I do JS/TS dev daily, but I'm very interested in a lot of what this language appears to offer). But I'd caution against trying this with JS for anything more than exploring the ideas behind the pattern. The reality is that even with a type checker like TypeScript and as many lint rules as you can throw at a project, it's still too easy to end up putting nullish values into your null-safe arrays.

And I similarly wondered why I don’t use this sort of pattern more often, but that wondering is what led me to add this caution. It’s a really compelling idea, but probably extremely hard to debug when it goes wrong in a language and idiomatic ecosystem designed for it to go wrong at any time.


I guess one reason is that it takes more code to return an Array when in most cases what the caller needs is always just a single value. Many such functions never return null so it is overhead to make them return an array.

But returning an array also makes your functions more general. Maybe in the future you want to modify one so that it does return arrays of length > 1. Evolving the program to that state is easy if you didn't lock yourself into the assumption that the function will never need to return more than a single value.


> I sometimes wonder why I don't use this pattern more often.

Because while handy, it's not universally applicable (just like everything else :-))

I (C programmer, mostly) use sequences more often than you'd expect for parameters[1], but that does not mean that it makes sense to always return sequences.

In a lot of cases, sure, a function should return a sequence of values. In many other cases, it makes no sense: `if (canProceed(x, y, z))` reads more sensibly than `if (canProceed(x, y, z)[0])`.

[1] My string functions mostly lend themselves well to an unlimited number of parameters. For example, concatenation takes unlimited parameters, concats all of them, and returns the result. Same with substring searching: take a source string and an unlimited number of substrings to find.


I agree it often makes for more readable code to return a single value instead of an array. But it makes for more maintainable code to return an array.

Whatever you can do with a single value you can also do with an array containing just that single value. How practical that is depends on the language used. In JS after ES6 arrays are syntactically quite nice and easy to use.
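
For instance, ES6 destructuring keeps the single-value case light (sketch; lookup is invented):

    function lookup(m: Map<string, number>, k: string): number[] {
      return m.has(k) ? [m.get(k)!] : [];
    }

    const m = new Map([["a", 1]]);
    const [hit] = lookup(m, "a");       // 1
    const [miss = 0] = lookup(m, "b");  // falls back to the default 0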


I think this pattern is so useful that I've been working on a library to use it for more types of things. https://www.npmjs.com/package/schtate

This includes a state monad that allows you to apply .map just like an array, and a maybe monad that lets you do the same on values that might be nullish.
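
For readers who want the flavor without a dependency, a minimal Maybe with .map could look like this (a generic sketch of the idea, not schtate's actual API):

    class Maybe<T> {
      private constructor(private readonly values: T[]) {}
      // Nullish values become the empty sequence.
      static of<T>(v: T | null | undefined): Maybe<T> {
        return new Maybe(v == null ? [] : [v]);
      }
      // Maps like an array: a no-op when empty.
      map<U>(f: (t: T) => U): Maybe<U> {
        return new Maybe(this.values.map(f));
      }
      getOrElse(fallback: T): T {
        return this.values.length > 0 ? this.values[0] : fallback;
      }
    }

    Maybe.of<string>(null).map(s => s.length).getOrElse(0); // 0, no crash
    Maybe.of("hi").map(s => s.length).getOrElse(0);         // 2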


Keep us posted


> This all looks very mind-bending to me (in a good way). Perhaps Verse will one day be as influential for PL design as Haskell has been.

I'm not convinced. Making non-determinism a first-class feature in the language might be good if you simply care about logical specifications that might enable you to prove something correct, but a real-world program implementation has to make heuristic choices for efficiency as to when to evaluate what (and perhaps where, especially in a parallel/distributed setting), and just stating that evaluation is "lenient" isn't really saying anything worthwhile. So this kind of design choice will ultimately be expressed with ugly hacks, as with e.g. Prolog where the 'cut' feature is used to override the default search strategy.


But it's very much not non-deterministic. It's a deterministic choice. You can give it an exact semantics.


Correct, but "non-determinism" is a misnomer used in programming language theory to refer to generators and pervasive backtracking. Prolog, Icon, jq, etc. -- all fully deterministic, but known as non-deterministic owing to the pervasive generators and backtracking.


It is not a misnomer. You can use generators and backtracking to simulate non-determinism. That means the simulator plus the non-deterministic algorithm are deterministic, but the algorithm itself is not.


The idea in Prolog and such is that you write "goal-seeking code" and it "magically" obtains the correct answer by brute force but without the brute force being syntactically/lexically apparent. But it's still brute force. It's a catchy name, but it's not accurate. It might be more accurate if "simulation" were part of the name.


As presented in the slides it's obviously a generator paradigm, thus like Icon etc. It's really just a way of making iterators very fundamental in the language. It's deterministic and you are assumed to understand it that way. The "PROLOG-style goal searching" is probably less relevant here.
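
In iterator terms: choice is a generator, and the rest of the expression runs once per yielded value. A TypeScript sketch (not Verse semantics in full, just the generator reading):

    // (1|2) as a generator of values, in order.
    function* choice<T>(...xs: T[]): Generator<T> {
      yield* xs;
    }

    // x:=(1|2); y:=(7|8); (x,y) -- every combination, deterministically.
    function* pairs(): Generator<[number, number]> {
      for (const x of choice(1, 2))
        for (const y of choice(7, 8))
          yield [x, y];
    }

    console.log([...pairs()]); // [[1,7],[1,8],[2,7],[2,8]]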


How? The slide deck calls it a "lenient" evaluation order, and doesn't really say how to make an explicit choice of what cases should be tried first. So it seems to have all the same problems as naïve Prolog.


exact semantics are given here: https://simon.peytonjones.org/assets/pdfs/verse-conf.pdf. runtime efficiency isn't specifically addressed afaict


Icon had (has!) "failure is falsity" / "value production is truth", and pervasive generators / backtracking. In Icon a function ("procedure") either: fails, returns a value, or suspends (yields) a value.

jq is very much like this too, as it has pervasive generators and backtracking, but unlike Icon jq does not distinguish between "returning" a value vs. "suspending" (yielding) a value.

The jq way kinda means you have to have booleans, and so jq does.

I'm suspicious of the no-booleans approach. After all, one could also have no integers (the Lambda Calculus doesn't have integers, having only function values!). Boolean values are just useful to have, and being able to obtain boolean values from predicates is useful too. One could do the Icon/Verse thing of failure is false / anything is true, but provide an adapter function that produces actual boolean values for when you need them.
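
That adapter is a one-liner over generators; a TypeScript sketch with an invented name:

    // "Did this expression produce at least one value?"
    function succeeds<T>(expr: Iterable<T>): boolean {
      for (const _ of expr) return true; // one or more values: success
      return false;                      // zero values: failure
    }

    succeeds([]);   // false -- the Icon/Verse notion of "fail"
    succeeds([42]); // true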


I was suspicious of JavaScript, where number replaces int and double. The fail in Verse is a way to replace boolean and null values. I think that is the direction of evolution.


> - There are no booleans in the language! Conditionals can still succeed or fail, but failure is defined as returning zero values and success is defined as returning one or more values.

This is similar to how Icon works: https://en.m.wikipedia.org/wiki/Icon_(programming_language)


Yes, I noticed that, too. I used Icon to process a big pile of code at Apple in the late 80s and found it pleasant to work with.


This article seems like a nice overview of the pros and cons of the Icon expression evaluation system:

https://tratt.net/laurie/research/pubs/html/tratt__experienc...


I wish I could upvote comments more than once on HN :-)


Sounds like it's not a million miles away from how Clojure treats `nil` values, either


Or the List monad.


Or Common Lisp.


> so-called 'lenient' evaluation strategy which is neither strict nor lazy, but somewhere in-between

Sounds like we are still clinging to laziness in some regard; I wish we could get shot of it entirely. Although laziness makes doing Leetcode problems in Haskell really fun, Idris got it right when it opted for strict evaluation for predictable memory performance.


There's really not much new here. All these ideas have been used in other languages previously, but perhaps not in exactly this cocktail.

I'm decidedly in the not convinced camp.


I am not convinced because far more pressing aspects have been ignored. One million programmers means program structure is more important than the syntax used to write the algorithms.

Does the language address common pain points like modifying data structures? Does it make it easy to extend existing code even without having the ability to change it?


Surprised about the syntax. Curlies still hold a lot of power :)


> Perhaps Verse will one day be as influential for PL design as Haskell has been.

Has Haskell really been that influential though? It seems to me that MLs, Lisps, Schemes, and even Erlang have been more influential.


It's definitely the most influential of the statically typed functional languages. All other statically typed functional languages or libraries inherit a lot from Haskell's implementation, type system, and type class concepts.


Standard ML is more influential because it's the foundational statically typed functional language. But Haskell has gained more popularity with the general programming public, not just academics.


What modern language in use today was influenced by Haskell other than maybe Idris?

(To be clear, I am not necessarily down on Haskell or the part it has played. It's just that many languages seem to be stealing ideas from the languages I've listed more than from Haskell, which was the criterion I had in mind.)


Rust's traits come to mind as one example.


As far as I can tell, from an admittedly short amount of searching and research, type classes in Haskell were influenced by Standard ML: https://people.csail.mit.edu/dnj/teaching/6898/papers/wadler...

> This paper presents type classes, a new approach to ad-hoc polymorphism. Type classes permit overloading of arithmetic operators such as multiplication, and generalise the "eqtype variables" of Standard ML. Type classes extend the Hindley/Milner polymorphic type system ...

And it was my understanding that traits came more from the object-oriented world, such as Self and Smalltalk. Scala and Racket also had them before Rust, and Scala also has type classes.


Phil Wadler, the first author of the paper you cite, was literally a principal designer of Haskell. SML does not have type classes; your quote points out a deficiency in SML that motivates type classes.

My understanding is that "trait" is an unfortunately overloaded term. Traits in Rust are much more closely related to Haskell's type classes than to traits in the OOP sense.


> Phil Wadler, the first author of the paper you cite, was literally a principal designer of Haskell.

Yes, I know that. That's the point.

> SML does not have type classes; your quote points out a deficiency in SML that motivates type classes.

Again, that's the point. It's what "have an influence" means in that it was SML that influenced or inspired Haskell's type classes. And it's part of my overall point of SML being rather influential, although quietly.

> Traits in Rust are much more closely related to Haskell's type classes than to traits in the OOP sense.

I'm not so sure about that from what I have read. But I don't have enough information other than to just point to things I have read.

https://stackoverflow.com/questions/28123453/what-is-the-dif...

While I can't find any concrete information, I would be quite surprised if Rust's traits were influenced by type classes rather than the more OOP interpretation of traits (in the design space of traits vs mixins vs interfaces).

https://doc.rust-lang.org/book/ch17-01-what-is-oo.html


You asked for an example of Haskell influencing languages other than Idris. Type classes are a pretty clear example. I don't see how it is particularly relevant to this that they were motivated by a deficiency in SML.

I'm not familiar with the origin of traits/mixins in the OOP world, but my understanding is that they generally contain implementations of methods; they are a "part of" a class. Type classes and Rust traits, however, are more like interfaces. I'm pretty confident you won't find any notion of trait that predates type classes and can define what a Functor or Monad is. My reading of the SO answer you link is that Rust traits started off similar to type classes and have grown closer and closer over time. I'm not sure what a clearer indication of Haskell's influence would look like!


CLOS classes are very similar to the "modern" concept of traits. It was released in 1990, same year as Haskell.

https://en.m.wikipedia.org/wiki/Common_Lisp_Object_System

> Another unusual feature is that methods do not "belong" to classes; classes do not provide a namespace for generic functions or methods. Methods are defined separately from classes, and they have no special access (e.g. "this", "self", or "protected") to class slots.

Of course, the whole idea of having methods as separate entities might sound a bit "alien" today. Still, the idea of traits as abstract contracts, with separate and potentially multiple distinct implementations, comes directly from this, IMHO.


I'm pretty sure Haskell is the reason why True and False are capitalized in Python.


It's influential, but "the most" is stretching it.

It showed that monads can be used to isolate I/O but have a very real cost when it comes to complexity, that laziness by default is a bad idea and that typeclasses are a good way to introduce ad hoc polymorphism.

Definitely a good result for a research language but far less ground-breaking than ML was (admittedly putting the bar very high).


The first language to implement type classes. The first language to structure computation with monads, which are now omnipresent in functional languages.


Haskell has been enormously influential in the study of types; modern high-level type checkers are built using knowledge from these studies.


I think it influenced python list comprehensions.


These slides need a lot of work. I'll read the paper later, but the slides really lack context and motivation.

* I'm a little disheartened to see "Objectively: no. All languages are Turing-complete" in the context of the question of needing another language. It's a throw away comment and largely irrelevant when considering the actual use of programming languages. If one has to state this platitude, it is more accurate to say "theoretically" rather than "objectively", as I think the latter is actually debatable.

* The utility of the choice syntax over comprehension syntax is not clear at all. The choice thing seems neat, but I don't understand why comprehension semantics weren't just extended. Every example of choice seems to refer to sequences and comprehensions anyway to make the meaning clear. Will have to see more examples. Right now, the syntax seems a bit quirky, especially with `false?` thrown in.

* Why is `fst` not named `first`?! The shortening is maddening to me. It saves two characters (negligible savings), decreases readability, and first is probably just as fast to type.

* I'm still not sure why they're creating this language, what problems it solves, and what any of it has to do with a metaverse.

These are professors and language experts, so I'm a bit surprised by the presentation. Also, the idea that this language will be learnable as a first language is ambitious, if not naive.


> These slides need a lot of work. I'll read the paper later, but the slides really lack context and motivation.

SPJ is an exceptional public speaker, but that involves a lot of talking, a lot of context, audience participation, and a lot of wild gesticulating at slides.

I'm not sure if this talk was recorded, but if it was, I am sure the recording will be much better than the slides. The slides are not really designed for consumption outside of the talk context; they're a reminder for the audience rather than standalone notes. At least that's my experience, having seen a number of his talks over the years.


Looks like it just went online: https://youtu.be/832JF1o7Ck8


Thanks for the heads up. Just watched it, and the talk gives no additional details to the metaverse context or purpose of the language. Also, he uses a ton of filler words, and I find it pretty hard to listen to at times.


I see using the choice syntax as PROLOG-esque, rather than Python-esque.

When you are building simple structures to iterate upon, it makes sense to use the more readable comprehension syntax, where you define a list or set of items and their desired properties.

Yet this terse syntax makes more sense for logic programming, where a lot of the domain logic is built upon generate-and-test patterns; you define a combinatoric search space which is the enumeration of all base values in pairs (or tuples), and check which ones are kept for the next processing steps. When using this pattern, it's clearer to be able to express the space with the shortest syntax rather than the verbose one.

As for the adequacy and need for the language, I have my own ideas on what's needed for the stated use case, and I do agree with the authors that this kind of language will really be easier to learn for people without a programming background; it looks much more like mathematics than the traditional von-Neumann-architecture, continuation-based languages, and its declarative nature may make it easier to approach without having a full understanding of runtime behavior.


> I see using the choice syntax as PROLOG-esque, rather than Python-esque.

I didn’t think otherwise. Many functional languages have comprehension syntax, and I prefer declarative programming.

> When using this pattern, it's clearer to be able to express the space with the shortest syntax rather than the verbose one.

How is

    x <- [1,2,3]
or

    x <- {1,2,3}
more verbose than

    x := (1|2|3)
? Also, the first and especially second are basically identical to mathematical syntax, but the first implies order better.


True, those are not more verbose, but they are merely list enumerations, not comprehensions, which bundle multiple generators and tests within the same structure, e.g.:

    [(x, y) for x in [1, 3, 5] for y in [2, 7, 8] if x < y]

It's a single monolithic expression that builds the list. The choice syntax however represents each generator and each test as separate expressions, which may even be part of different functions. This seems to offer more flexibility. For example (I don't know the exact syntax):

    x:=(1|2); y:=(7|8); x<y; result = (x,y)

Here result is bound to the same values as in the list comprehension, but these values are yielded one at a time, instead of being part of a structure that you traverse.
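
In generator terms the same program reads roughly like this (TypeScript sketch, not Verse):

    // x:=(1|2); y:=(7|8); x<y; result=(x,y) -- generate, then test.
    function* results(): Generator<[number, number]> {
      for (const x of [1, 2])
        for (const y of [7, 8])
          if (x < y) yield [x, y]; // a failed test yields no value
    }

    // Values arrive one at a time; no intermediate list is built.
    for (const r of results()) console.log(r);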


> These are professors and language experts, so I'm a bit surprised by the presentation.

I am not surprised by this at all. If they really want to create something practical, I would rather employ the creator of Turbo Pascal than one of the creators of Haskell. To me it seems as if SPJ et al. are taking the money to do something fun and interesting, scratching an itch they have had for some time. Nothing wrong with that, but don't expect something that will be useful anytime soon.


Labeling someone who's had an immense impact on making statically typed functional programming mainstream and relevant in the industry as unfit to develop such a language seems quite out of place.

SPJ has been implementing features that the industry and real world users wanted in Haskell for decades and yet you expect his cooperation with another great dev and business person like Sweeney to be fooling around and not produce "anything useful anytime soon"?

I recommend reading the transcript of one of SPJ's latest podcasts; he speaks both about the role of languages in the real world and about Sweeney's project:

https://haskell.foundation/podcast/11/


We can revisit your comment in 10 years, and I'll be happy to have been wrong. Undoubtedly there will be interesting research arising out of this. Let's see how many people will be programming in Verse by then.


If you rename `fst` to `first`, then `snd` becomes `second`, which is much much (much!) longer.


Why does that matter though?

No writer does this. Why are programmers obsessed with word length to such a degree?


Writers do something different from programmers. What programmers do is often more akin to mathematics, especially the type of stuff language creators do. And mathematical notation likes to be short.


I rather strongly disagree when it comes to programming. Programming is as much communication between humans and domain modeling as it is instructing a computer to do something. Shortening names seems to be a cultural holdover from when name length had some real effect or constraint, and what remains now are highly subjective and debatable considerations like being shorter, faster to type, length symmetry, etc.

Mathematics uses symbols because there is usually no specific context for an abstract concept, and in other cases it relies on convention to provide context. Shorter names are additionally a side effect of the density of information and the need to write complex formulas on a chalkboard and in LaTeX. The symbols are the meaning and not some shortened version of it.


`𝜋` is a mathematical symbol, but you could also write `CircumferenceOfCircleWithUnitDiameter`. The reason everyone prefers `𝜋` should be clear. The same is true for a less extreme case like `fst` and `first`: You don't need to think about what `fst` means, if you are fluent in the programming language it is clear, and more concise than `first`. `fst` becomes an atomic symbol in the programmer's mind, it is not something you need to interpret like `first`. This is not about communicating with other humans or even the computer, it is about efficient representation within your own mind.


so much so that I think "differently" in good old car/cdr lisp. even though car/cdr means nothing on any cpu architecture, are cryptic as can be .. they embody the structural inductive step perfectly in my mind. when in clojure or sml I write slower :)


How about they use 1st and 2nd as a compromise?


`fst` and `snd` possess a symmetry which `first` / `second` lack

Personally, I got used to them in no time


Why is symmetry in word length needed?


Why is anything "needed"? It's all a continuum of convenience, readability, familiarity, prior art, and preference.

That you fall somewhere differently on any or all of those spectrums than the author is fine.


Alignment.

    x = fst(some complex expression)
    y = snd(some complex expression)


The need for such alignment denotes a further deficiency in the language if it lacks pattern matching.

    (x, y) = some complex expression
And I'm not sure this alignment overrides the readability aspect. Very few languages have alignment in mind, and even fewer formatters do, so it isn't clear why alignment is so strongly needed in this particular case. And what about third and fourth?


some people click on this.


I still hate them after 5 years of professional PureScript.


I am not sure that fst and snd decrease readability. There were studies showing that, at least for alphabetic languages, humans recognize words by the first and last characters, initially ignoring the middle. So these abbreviations play nicely on that.


That's probably a stretch, because the study is more about the brain being able to fill in missing letters when needed than about the shortened form having the same cognitive load. There's no point in stress-testing the brain in a programming language, and as far as I can tell, programmers are really the only ones obsessed with word length. It's as if authors shortened "the" to "th" or "t" since it repeats a lot.


It's still mental overhead for a subset of users, thus an unnecessary shortening.


I read “snd” as “send”. I also read “fst“ as “fast”. Word length also matters.


It's riveting to follow the evolution of Tim Sweeney's dream of a more Haskell-like programming language for games since his 2005 talk: https://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced...


When did the L in FLP enter into the vision? I heard SPJ talk about the project a couple of times in interviews, but don't recall him mentioning anything about Curry or logic programming. I'm thrilled they seem to be taking that path!


Wow, I didn't know he had these ideas for such a long time, thanks for sharing! Now SPJ joining makes a lot more sense to me.

Also they both seem to have a preference for using Comic Sans on slides ...


Very interesting slides. As someone not very acquainted with game programming it helped me better understand the logic behind it.

As a sidenote, I found his predictions for 2009 funny, because they're not far off from what we are currently aiming for: 20+ cores, 80+ hardware threads, GPUs with general computing capabilities.


I honestly fail to see what the properties of the language have to do with the metaverse, or millions of devs, etc...

At first I thought the "choice" construct would be a way to distribute code execution among multiple computers (using the fact that in a grandiose metaverse, everything would be a connected computer with lots of compute time to spare?), but there is not much detail.

Also it's not obvious from the slides whether there is already a compiler / execution env at the moment, or how far they are from it.

Now, the names behind this aren't exactly bozos, so I suppose we'll get to know soon.

Wait and see, I guess?


> I honestly fail to see what the properties of the language have to do with the metaverse, or millions of devs, etc...

They're pushing transactions straight from the beginning, which I think is the sanest way for multiple actors to all try to interact with the same world.

And for the compiler to stay sane in dealing with transactions, it's necessary to mark effects, which is also in the slides.

The inclusion of effects & transactions already puts it way outside of the mainstream, even without the Logic stuff (which I admittedly don't understand yet)


What are transactions in this context? Message passing?


Classic multiplayer world problem - I want to give you 10 monies for your rare sword. At no point, from any view should I have 10 monies and the sword, nor you the sword and 10 monies, and especially bad is if neither of us have either. Also, it shouldn't be possible for someone to lure a goblin over to stab me forcing me to cancel the trade because I died and warped back home. Also it needs to happen within like 200ms because otherwise people will leave for a less laggy game.
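
A sketch of the invariant (illustrative TypeScript; atomically() is a hypothetical transaction runner standing in for the idea, not anything Verse has published):

    type Player = { gold: number; items: string[] };

    // Stand-in runner; a real STM would retry or abort on conflict.
    function atomically(tx: () => void): void { tx(); }

    function trade(buyer: Player, seller: Player, item: string, price: number) {
      // Everything in here commits or none of it does; no other actor
      // (goblin included) ever observes a half-done state.
      atomically(() => {
        if (buyer.gold < price) throw new Error("insufficient funds");
        const i = seller.items.indexOf(item);
        if (i < 0) throw new Error("no such item");
        buyer.gold -= price;
        seller.gold += price;
        seller.items.splice(i, 1);
        buyer.items.push(item);
      });
    }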


And entity updates, I presume.


I guess I sort of see where it's coming from - the context is that hundreds of devs have to write code that has to preemptively cooperate with other code. So I think the lenient evaluation might be key for this.

Suppose you wander into a new environment, it contains some object x, and it has some capability f(x). Suppose your avatar or whatever has some code that already integrates with this f(x). You could just call f(x) in advance, and once you enter that environment, it lazily executes f(x). Or maybe I'm way off base here.


> Also it's not obvious from the slides whether there is already a compiler / execution env at the moment, or how far they are from it.

It is expected to release in January as part of Fortnite Creative (delayed from this month).


I mean logic languages have a history of... trying to be distributed but it's never really worked out.


My guess would be that it's the usual case of trying to solve a non-technical issue by throwing new technology at it.

I honestly cannot wrap my head around how a programming language of all things should solve the "millions of programmers working on a thing"-issue.


I guess it's because the metaverse funds it? So it has to tie in to the metaverse at some angle.


I don't know enough about Fortnite, but at some point if the metaverse wants to escape the "it's just a video game" trope, it's going to have to be... More than a video game, I guess ?

Can Epic do that?

Facebook at least "invented" a form of "passing time" (I'm being generous in not calling it wasting time) that doubled as an advertising and marketing platform. And even that only took off with the advent of smartphones, which enabled "using Facebook / twitter on the go" (and, more accurately, "on the loo").

When there's a VR helmet attached to every toilet, though... I'll sell hemorrhoid creams :)


Some of this reminds me of a combination of Icon and Mozart. (And I happened to just come across my old Icon book a few days ago.)

Maybe the backtracking concepts are "too built in?" - e.g., (1|2) is not a first-class/reified "amb" object, right? - so if I want to introduce a different search than depth-first backtracking (breadth, dependency-directed), etc., I couldn't directly.

Seems like there could be some confusion between multiple, backtracked values and actual sequences -- this often leads to sub-optimal/confusing Prolog code as well, deciding when you have an explicit control structure vs. backtrack, when to "harden" results using setOf/bagOf/findAll, etc.


Can anyone speak to this for Icon?

For context, Prolog had a culture of "the built-in search is less than wonderful... but good for crafting a problem-appropriate search". II(fuzzily)RC, Mozart/Oz decoupled nondeterministic code (explicit amb, spelled "dis") specification from search, and allowed managing search over nested spaces of unification constraints and threads. With (very fuzzy) less use of backtrack-driven implicit sequences than in Prolog? I so don't remember Icon.

Curiously, https://www2.cs.arizona.edu/icon/books.htm includes a "newish" (2010) book, "Icon Programming for Humanists".


In Icon a function ("procedure") can: fail, return a value, or "suspend" (yield) a value (and then again and again). The difference between returning and suspending being that a generator that returns cannot be resumed again. That difference was needed (IIRC) because the alternative would be to end the generator with failure, which could be confused with failure in a boolean sense. jq gets this better by saying that a function can produce zero, one, or more values (just like Verse), but unlike Verse, jq does have booleans.

I prefer the jq approach to the Icon approach, which I think means I am suspicious of this aspect of Verse :)

Also, boolean values are inherently useful, and while checking that an expression is empty or not is a boolean predicate, it should be a predicate that produces a boolean value. Otherwise if we have no boolean values then we shouldn't have integer values either and just go back to the Lambda Calculus!


Boolean values are inherently a sign of an insufficient data model. Booleans carry no inherent meaning. They don't tell you where they came from or provide any context about what operations or data they are guarding.

If your language lets you declare your own algebraic data types, there's no need to lean on a built-in Boolean type. You can create types that actually carry all the information you need so that pattern matching provides provenance and data only in the branches that need them.

Robert Harper discusses the idea in some detail here: https://existentialtype.wordpress.com/2011/03/15/boolean-bli...
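
In TypeScript clothing, the idea is a discriminated union that keeps its provenance (sketch):

    // Instead of `isValid: boolean`, a type that says what the bit means
    // and carries data only in the branch that needs it.
    type Validation =
      | { kind: "ok" }
      | { kind: "tooLong"; limit: number; actual: number };

    function validate(name: string): Validation {
      return name.length <= 20
        ? { kind: "ok" }
        : { kind: "tooLong", limit: 20, actual: name.length };
    }

    const v = validate("a".repeat(30));
    if (v.kind === "tooLong") {
      console.log(`${v.actual} chars, limit ${v.limit}`); // data with provenance
    }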


Pattern matching is just syntactic sugar over using booleans and extracting values from data types. This is an exaggerated problem with a cherry-picked solution. See what happens when the condition is whether a number > 100. Are you expected to convert the integer into a Church-encoded integer and then write 100 S types? Are you expected to adopt a language with refinement or dependent types? Adding a ton of abstractions to fix a problem doesn't mean you now don't have an even bigger problem.


Why do you care if a number is greater than 100? That has to actually mean something to you. What does it actually mean? Is it a validation error? Is it a warning trigger? Why not have a data type that actually expresses that meaning?

Wait, are you hung up on implementation nonsense? Sure, there's a low-level operation that returns a 0 or 1 that you convert into your actually-meaningful data type. But that doesn't need to have a surface language boolean type that's privileged by the language design. And that's what's important here - language design. People write programs in a language, not compiled code. The model the language provides is of critical importance. It's how the users of the language think and how they communicate with each other.

And this isn't abstraction. Boolean is the abstraction. It throws away all but a single bit of information. This is creating data types with semantics that carry far more than that single bit. It's making your code less abstract.


>Why do you care if a number is greater than 100?

You could make a battle royal game well where people start taking damage when they are 100 units away from the center of the map.

>Why not have a data type that actually expresses that meaning?

It would add unnecessary complexity.

>Wait, are you hung up on implementation nonsense?

The base of what is being abstracted over is not nonsense.

>But that doesn't need to have a surface language boolean type that's privileged by the language design.

If exposing booleans lets people write simpler code then it makes sense to expose it instead of forcing complexity onto everything.

>Boolean is the abstraction.

No, booleans are a concept that the processor understands. Zero is false and nonzero is true. The processor has instructions like logical AND that can operate on these booleans. Branches can happen based off a bit in the status register. A bit has two states, which maps to true and false. Booleans are a fundamental concept that is being built upon. The processor has a limited number of data types that it understands. Creating new data types is creating abstractions because those data types do not actually exist. You are simply hiding an actual data type inside the new one you are creating.


> You could make a battle royal game well where people start taking damage when they are 100 units away from the center of the map.

So... You have some meaning there, don't you? Something beyond the values true or false? Does true mean "is safe" or "is taking damage"? Wouldn't something that actually says what you mean be a lot clearer? More direct? Fewer ideas between the expression and the meaning? Less abstract?

> If exposing booleans lets people write simpler code then it makes sense to expose it instead of forcing complexity onto everything.

Why do you think it's more complex to eliminate a semantic step?

Your last paragraph is... very weird. Do you think languages are required to privilege Boolean over any other 2-type, such that only Booleans are allowed to map to those values in machine code? Not every language is as poorly designed as C. You're not wrapping Booleans, you're creating new types which the compiler gives the exact same representation as Booleans get.

And for what it's worth, processors don't understand zero and one. They're mechanistic circuits that operate on combinations of high and low voltages. Zero and one are interpretations added on top of what's going on. If it's easier to think of high voltage as one and low voltage as zero, maybe it's even easier to think of them as "safe" and "taking damage".


>Does true mean "is safe" or "is taking damage"?

You could name a function that returns a boolean isInSafeZone. From the name it would be clear that true means safe and false means that it isn't.

>Wouldn't something that actually says what you mean be a lot clearer

No, compare "if (player.isInSafeZone())" to "if (player.getSafeZoneState() == SafeZoneState.SAFE)". That extra verbosity isn't giving you value.

>Fewer ideas between the expression and the meaning?

I don't know what you mean by this. Creating extra types for every possible condition introduces more ideas. Why have 1000 boolean types when you can have 1 that interoperates with itself?

>Why do you think it's more complex to eliminate a semantic step?

There are more steps in creating a new type, maintaining it, and having to convert it into what you want, when you could just use booleans from the start.

>Do you think languages are required to privilege Boolean over any other 2-type

No, even C has typedef. I never implied there was wrapping going on; I meant that you are introducing layers of indirection. It's the old adage: every problem in programming can be solved by another layer of indirection, except the problem of too many layers of indirection. Indirection is not always the answer, even when there is no performance impact.

>processors don't understand zero and one

By the interface that is exposed to programmers they do. Read the manual. For the purposes of making a programming language there is no benefit in going down to the level of voltages. It is an implementation detail that is the processor creator's job to worry about.


> Boolean values are inherently a sign of an insufficient data model. Booleans carry no inherent meaning.

Would you say the same about Ints or Strings?


Practical language-agnostic example: a database for a business where some VARCHAR(34) columns are IBAN codes and others are messages to be displayed on a 2 by 17 characters LCD screen.

They are two completely different data types that happen to have identical approximate representations but should be distinguished very strictly in a sufficiently expressive model.

For instance, it should be possible to turn an IBAN into a device screen message but not the opposite, and you can't enforce that if they are both just "strings"; the two types also have different value constraints, while a generic VARCHAR has none.
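
In a language with cheap type wrappers this takes a couple of lines, e.g. branded types in TypeScript (a sketch; the IBAN check is deliberately rough):

    type Iban = string & { readonly __brand: "Iban" };
    type LcdMessage = string & { readonly __brand: "LcdMessage" };

    function iban(s: string): Iban {
      if (!/^[A-Z]{2}[0-9A-Z]{2,32}$/.test(s)) throw new Error("bad IBAN");
      return s as Iban;
    }

    // Allowed: an IBAN can become a screen message...
    function toScreen(s: string): LcdMessage {
      return s.slice(0, 34) as LcdMessage; // 2 x 17 characters
    }

    const acct = iban("DE89370400440532013000");
    const msg = toScreen(acct);
    // ...but not the opposite; the next line fails to type-check:
    // const backwards: Iban = msg;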


> They are two completely different data types that happen to have identical approximate representations but should be distinguished very strictly in a sufficiently expressive model.

About bools: that's my problem with this. If-statements now operate on anything, instead of just bools. And I don't think that makes sense, unless cheese.

About strings: It's great you can statically differentiate between their types. But if strings don't exist, it makes it a little hard to write WHERE clauses.


They're saying you should be able to define your own

The domain of int/str is infinite so there's less sense in redefining a number system for every property you want to model

For small finite domains it makes sense to define your own in terms of the problem domain

In this sense all two-valued types are bools


I was wondering if they are planning to use some datalog-style compilation strategy, where all choices are dumped into b-trees and nesting is replaced by indexed tables which are joined.

But I think mixing this with unification vars is sort of an open research problem?

Plus Haskell had a `data-parallel` mode which used a similar compilation strategy. It was dropped for being very complex and having virtually no users. Plus using too much broadcasting and compiling higher order functions into vectorized code had a serious performance tax.


Text from the "10,000 Foot View" slide:

--

Verse is a functional logic language (like Curry or Mercury).

Verse is a declarative language: a variable names a single value, not a cell whose value changes over time.

Verse is lenient but not strict:

Like strict: everything gets evaluated in the end

Like lazy: functions can be called before the argument has a value

Verse has an unusual static type system: types are first-class values.

Verse has an effect system, rather than using monads.


I'll add text from the "Take-aways" slide:

--

Verse is extremely ambitious

Kick functional logic programming out of the lab and into the mainstream

Stretches from end users to professional developers

Transactional memory at scale

Very strong stability guarantees

A radical new approach to types [me: predicate functions]

Verse is open

Open spec, open-source compiler, published papers (I hope!)

Before long: a conversation to which you can contribute


> Like strict:, everything gets evaluated in the end

> Like lazy: functions can be called before the argument has a value

FYI, jq is just like that. In jq in `def f(g): ...;` `g` is a function value that will be applied to some input of `f`'s choice, and `g` is not evaluated unless `f` invokes it. The actual argument to `f` will be an expression which is wrapped in a closure named `g` while `f` is executing.

Lazy evaluation really is just hidden closures.
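
A sketch of that in TypeScript, where the "lazy" argument is an explicit zero-argument closure:

    // The callee decides if and when to force the thunk.
    function ifThen<T>(cond: boolean, then_: () => T, else_: () => T): T {
      return cond ? then_() : else_(); // only one branch is ever evaluated
    }

    ifThen(true, () => 1, () => { throw new Error("never forced"); }); // 1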

Lazy evaluation raises questions like:

  How obvious shall the syntax make it that
  you will be passing a closure instead of a
  value?

  Closures of dynamic or indefinite extent?
  Can you choose which?  How intrusive in the
  syntax is that choice?

  Does the use of lazy vs. strict infect
  callees?
  I.e., if I define a function `f` of one
  argument of type `T`, must it be able to
  take an argument that is a closure that
  produces values of type `T`, or must that
  be declared separately?
  If the latter, does that mean I end up with
  two `f`s, one which takes a value of type
  `T` and one which takes a closure that
  produces (a) value(s) of type `T`?
  If so, does that happen automatically or
  must I request it?
  If the latter, what is the default, and
  what happens if some library I want to use
  one option doesn't provide it?
Finding a happy medium with some flavor of lazy evaluation but also which doesn't make it hard to understand the performance considerations of it is just very hard.


> Types are firstclass values

That sounds interesting to me. Part of why I don’t like GADTs in OCaml is that it feels like you’re trying to fit a programming language within the syntax of the type system.


I'm not sure about the "learn it as a first language" bit.

    x:=(1|2); y:=(7|8); (x,y)
This stuff just doesn't seem intuitive to me. It's not verbose enough to be obvious to someone who doesn't know what's going on.

Looks interesting though; that's a really bright group of people. Be curious to see where their project ends up.


No programming language is obvious to someone who doesn't know any programming. This particular example is even less clear because it's purely abstract; it literally doesn't do anything.

If it were

    suit := (clubs | diamonds | hearts | spades);
    value := (2..10 | J | Q | K | A);
    deck := card(suit, value)
You would recognize it instantly.


Your objection does not make sense. The code was simply bad, and you replaced it with better, more verbose code.


> I'm not sure about the "learn it as a first language" bit.

I think this is very subjective. Our minds have been "poisoned" - or rather trained over years - into a very specific way of thinking about expressions like in the example.

A first programming language, however, assumes that the learner hasn't been preconditioned into any particular paradigm. So keeping just two rules in mind, that 1) evaluation is strictly left to right and 2) an expression evaluates to a sequence of zero or more values, immediately makes sense of the example.

Unfortunately it's incredibly hard to "reset" your mind into a state that doesn't know about variable assignment and doesn't think of variables as singular slots, as is the case in Haskell and most other programming languages.

That's why "intuition" is a very subjective thing - your intuition changes with knowledge and experience and both are hard to suppress.


+1 to this

I remember struggling to understand the x = 7 syntax for storing a value in x. My young self was like “so if x is equal to 7, how come we can write x is equal to 8 later? That doesn’t make sense”


A cool thing about this in Erlang (which I think is the same in Prolog) is that the = operator works almost exactly how you would have expected: it binds the value but then it asserts the binding’s equality. So once you’ve established x = 7, x = 8 is actually an error! Because you’re right, it doesn’t make sense that at one point x = 7 and later the same x = 8!


That's what I did in my own language[1], no shadowing allowed.

[1]: https://github.com/mimoo/noname


I get that to some degree, but that code doesn't look like something most people are exposed to, whereas 'x = 10' or something that says 'foreach' seems like it'd make sense with a minimum of explanation to a lot of people.

    x:=(1|2); y:=(7|8); (x,y)
Doesn't look like any math I'm familiar with.


You can think of the choice ( | ) as a generator function. X generates 1 then 2. Y generates 7 then 8.

    for x in [1, 2]:
        for y in [7, 8]:
            yield (x, y)


You could also write the example as a list comprehension, which essentially mimics set-builder notation in math, i.e. "The set of all pairs such that ..."

  [(x, y) for x in [1, 2] for y in [7, 8]]  // Python (list comprehension)
 
  {(x, y) | x \in {1, 2}, y \in {7, 8}}     // math (set-builder notation)


See, the Python code has some hints as to what's going on. If you know that x is a variable, 'for x in' spells it out to some degree. Maybe you don't nail the exact functionality just from looking, but it feels more accessible.


Cartesian product.


I'll be the first to hate on C++, but the only reason C++ might be considered a "don't learn it as a first language" language is that it's huge. It's got 40 years' worth of baggage, and its "standard" libraries are an absolute mess. Javascript has a super simple syntax, and its standard libraries, though sometimes a bit idiosyncratic, are comparatively clean and pragmatic.

If you just removed like 90% of C++'s useless standard library and restricted the syntax to purely what's in Stroustrup's "A Tour of C++" book, then I'm pretty sure that would be a pretty acceptable "learn it as a first language" language. Of course, in that imaginary world C++ would also not have been deprecated by Rust.


> 90% of C++'s useless standard library,

The Python library is even larger and doesn't seem to be a barrier. If you know `std::vector`, `<<`, and `std::sort` it seems you're set to contribute to most code bases.

> C++ would also not have been deprecated by Rust.

That's an overstatement, to say the least.


In Python the standard library is fairly easily traversed and understood and if there's a better way of doing something it's usually linked I think. In C++ we have.. this..

https://en.cppreference.com/w/cpp/algorithm/transform_reduce


Python's standard library includes many modules for specialised applications and yet it's surprisingly lacking in fundamental data structures and algorithms. Personally I think that's the wrong way round. Simple things like building dicts from different data structures or working with optional values that can be something or None just aren't there and you always seem to end up with some kind of utils.py full of hand-written functions to do these things that everyone writes again every time when they would have been either built into the syntax or a one-liner using the standard library in some other languages.

I agree that the C++ example is (one reason) why modern C++ has jumped the shark.


That example is not how C++ is used. It would not pass code review.


Why is code that wouldn't pass code review part of the language documentation?


Why are there falsehoods in Wikipedia?


Because it's written by random people who may or may not know the topic well. How does that apply?


I know supposedly "harder to learn" languages (i.e., functional languages), but my god do that example and C++ look impenetrable.


That is not idiomatic C++. The author is showing off esoterica.


It's not? This is literally the function that I found I should use when I wanted to fold over a list I had. What would you recommend I use?


The library function is fine, although the long name is unfortunate. The apparatus around applying it, in the example, is aggressively weird.


I agree that the example is crazy. But the function is not fine, it's a testament to how unhinged the C++ standards committee is. Its existence is proof that the problems of C++ are not just with its history, but that it is an ongoing snowball of incompetence and bad decisions.


It is hard to know what you are talking about. It applies linear operators to an input sequence, defaulting to + and x, or whatever you specify. It is like inner_product, but sequence order is not specified.

There is a "range" version in C++20 that in use looks more like in other languages. (It is technically C++23, but appears in libraries shipped with C++20.)


It seems like a small thing, but everything about it is just slightly wrong. That it's called `transform_reduce` and not just `reduce`. The fact that it was in C++17 alongside a function called `reduce` that's basically useless. Why was it not in C++14? Why was it not in C++11? Why was it not in C++03?

That in 2017 the C++ standards committee was still introducing misdesigned cruft like std::reduce to the language says everything you need to know about C++; it's a clusterfuck.


Thanks for the heads up. From the outside looking in, my perception of the difficulty of learning C++ is knowing what is idiomatic or not, with the understanding that that differs depending on who you work for and with.


It is the same as in other languages: you do the simplest thing that gets the job done. Loops, particularly C-style "for" loops, often do not count as simple, despite their familiarity: std:: algorithms leave less scope for error.


Python as a language is smaller even if the library is larger.

You can do many many things in Python without needing to know much other than working with dictionaries.

In C++, doing a sort or a string search is an ordeal. So much of common programming carries so much cognitive overhead.

C++ is not an easy language. Not by a long shot.


I absolutely agree, but this comment referred to the library size as a major burden.


Ah, fair. Though I do think the issue they meant is perhaps that C++'s STL is not fleshed out enough to be as ergonomic and useful as Python's.

Python’s standard library is not without its faults but I’m always so frustrated that the C++ STL stops just short of providing many things, and when it does, it requires writing out so much verbiage.


I don't think it is an overstatement to say that that imaginary world C++ would have been deprecated by Rust.

C++ complexity is also its strength. Highly backward compatible, even with C, and multiparadigm. It has classes, but you don't have to use them, it has exceptions, but you don't have to use them, it has templates, lambdas, smart pointers,... and again, you don't have to use them, but they are here if you need them. Even the "deprecated" features of C++ (like most things related to raw pointers) are heavily used today, even in new projects, because they are useful.

Strip 90% of "deprecated" C++ and what you get is essentially Rust, but worse because it still has its C baggage without the advantage of being mostly compatible with C.


I read the exact opposite: he's implying that real C++ has been deprecated by Rust, but that this wouldn't be the case for imaginary-world C++.

(Rust has not replaced C++ in the real world. Not even close.)


How much of that can be attributed to it not having as much time to attract cruft? See Haskell, a very minimal and carefully designed language with a heap of complex and contrary extensions.


At my uni they still teach C++ as a first language; that's why the "don't learn it as a first language" bit got me confused. The core language is very useful for learning programming; it is actually very clean.


My lecturer in second year, > 20 years ago mind, said we were learning C, because C++ is C with an ugly object-oriented graft on top.


Your lecturer did you a disservice; clearly he only knew C, and didn't want to learn any more.

The more your C++ code resembles C, the worse it is.


Maybe he only knew C, as you say. I did C++ professionally for a while. Not properly, just updating models in derivatives trading systems. I remember reading three Scott Meyers books, the Effective C++ series; at the time, they were a tour de force. Loved that guy. I've lost track of everything since C++11, or whenever they introduced move semantics. One day I might go back to it.


Nowadays it's "rule of zero": compiler-generated constructors, destructor, assignments, compare. The members are of types smart enough on their own.

Lambdas are generic now, and variadics work most places.

Velocity has shot up. Fun, too.


I mean, C++ was my first language and I turned out fine (I think).


It was my first proper language too, if you discount what I did on my graphical calculator. Now I'm a Rubyist, heart and soul, make of that what you will.


Has it got currying? Partially applied functions passed around in folds and traverses are horribly difficult for beginners to read. Example: an Advent of Code solution posted in r/haskell for Day 10 this year; tell me how long it takes you to understand the function cycleStrengths.

    signalStrength cycle x = cycle * x

    cycleGaps = [19, 40, 40, 40, 40, 40]

    cycleStrengths = foldr a (const []) cycleGaps . (\x -> (1, x))
      where
        a n r (i, xs) = signalStrength m y : r (m, ys)
          where
            m = i + n
            ys@(y : _) = drop n xs
From: https://www.reddit.com/r/haskell/comments/zhjg8m/advent_of_c...

Personally I love this headscratching stuff, but I would not ever dream of subjecting it upon a beginner programmer.


That kind of code is sometimes easier to write than read… you build it up incrementally. On the other hand it’s easy to refactor this to make it readable. Is Haskell supposed to be an easy language for beginners? Is Verse? Not every language should be.


> [Paraphrasing] Is Verse supposed to be an easy language for beginners?

Yes! Well that was the point of my comment. Maybe you have a different interpretation of 'Learnable as a first language' from the article, which is fine, not interested in arguing about that.


I guess that part of my comment could have benefited from reading the article, which I have done now. Oops.


I might be totally off, as I don't know any Verse. But the declarations look like "x can be 1 or 2, y can be 7 or 8, and let's declare a superposed tuple of these which could be any combination of the values". I don't see the point, but that's what I understand by reading the raw syntax.


That's the usual way to read it, yes. The point is that you can then ask it e.g. to "find a tuple where x > y", and later expressions will be evaluated only for tuples that meet this condition.

In logic programming it's known as the "generate and test" pattern: you declare the tuple, the program logically generates all its possible values, and then each value is tested for compliance. The runtime will usually try to do this efficiently.
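For a taste of the same shape in a conventional language, here's a Haskell sketch of the analogy (my illustration, not Verse semantics): the list comprehension's generators produce every candidate tuple, and the guard discards the ones that fail the test.

    -- Generate-and-test via the list monad: generate all (x, y) candidates,
    -- then keep only those passing the test. With x from (1|2) and y from
    -- (7|8) the guard x > y never holds, so this yields [].
    pairs :: [(Int, Int)]
    pairs = [ (x, y) | x <- [1, 2], y <- [7, 8], x > y ]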

P.S. See the definition of the "amb" operator in typical logic programming languages:

https://rosettacode.org/wiki/Amb


It's the cartesian product. Think of it as a table with x labeling the top (x-axis) and y labeling the left side (y-axis). The cells of the table are the result of the product.

     x   1      2
  y |--------------
  7 |  (1,7)  (2,7)
  8 |  (1,8)  (2,8)
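To pick one language for illustration, the same table falls out of a Haskell list comprehension; with y in the outer generator the results come out row by row, matching the table above.

    -- Cartesian product as a list comprehension (an analogy, not Verse):
    -- the outer generator picks the row (y), the inner one walks across it.
    cells :: [(Int, Int)]
    cells = [ (x, y) | y <- [7, 8], x <- [1, 2] ]
    -- => [(1,7), (2,7), (1,8), (2,8)]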


Except, as the slides indicate, sequences of values are not sets. They are ordered. You can also, as they mention, refer to "y" in the definition of "x". This is like set union, but it is, again, sensitive to ordering of elements.


I'm not sure about the "learn it as a first language" bit.

My university taught Haskell as its intro language to math and science students, and the people who had the most problems were those who already knew a bit of programming. People for whom Haskell genuinely was their first introduction to a programming language on the whole had no problem getting it. I imagine this might be the same.

The main downside with the approach was that when these people who had just learnt Haskell moved on to the next programming course, that was taught using Java, they had a really hard time understanding what was going on.


That's a hard row to hoe if they're going to just do new syntax for no reason.

It's just more explaining, which is boring and drives people away.


C++ versions for comparison, much less intuitive:

https://stackoverflow.com/questions/29451291/cartesian-produ...


Contrary to what others are saying... From my time spent in MUDs/MOOs 30ish years ago and the languages we were playing with then, I can see plenty here that actually does relate to the problem of programming in a large-scale shared-state world, aka "multiverse" (hate the word).

Transactionality. Check. ACID-type semantics are absolutely key to handling the concurrency issues created by a world with thousands of authors/programmers and concurrent actions. This is not the place for locks.

Declarative. Essential. This is going to be the only way to manage the complexity of interactions on a large scale. Though perhaps I don't jibe with the particular form described here and would instead encourage a more relational/datalog type approach, but that's my bias.

And management of mutable state / side-effects, through functional type approaches also seems key. Not just for expressing problems elegantly, but also for security / visibility management.

Some of the interesting things in here (approach to truth values etc) have a vibe similar to the approaches behind evaluation in a Datalog but also look similar in spirit to some of what's behind my employer's (RelationalAI) knowledge management programming language, Rel: https://docs.relational.ai/rel/primer/overview

Finally, people saying it looks difficult to learn, I think that for many people working in this kind of environment... it could be their first programming language. So they don't come with the same baggage & expectations about what programming languages are. Back in the 80s and 90s there were all sorts of "game builder" (esp for interactive fiction) type languages that often had what we'd now see as "odd" semantics. But this was part of the advantage. Heterodox approaches often bloom in domain specific locales.

Going to spend some more time digging into this after dinner. Neat stuff.


Hi, fellow MUD/MMO person here with an additional background in blockchain.

Can you walk us through where you think a language like Verse really shines with transactionality versus the current server authoritative models we see implemented in [language here] using [framework/pattern] here (like C++ and ECS)?

I've done my fair share of FP (got into Haskell years ago) so I see value here, but I'm not sure I'm grokking your level of excitement and would like to understand better.


> Finally, people saying it looks difficult to learn…

“To answer ‘what does this program do, or what does it mean?’ just apply the rewrite rules.”

Whenever a senior engineer* uses the word "just" you're about to have a bad time.

* Or an expert in anything, really…


I mean it's either genius, madness, or both.

With most programming languages I can at least understand the general gist, the core motivation, whatever.

I have a lot of respect for Epic Games so I read through most of it, but I have to say I'm still none the wiser.

If forced to summarize I'd say.. it's some kind of Haskell-ish language where the types themselves are defined by ... runtime functions? With some attempt at also being a prolog-ish logic language?

But why?


My take, in short: imperative programming is too fragile for the use case of the metaverse, and functional programming is too mathematical.

In a context where you want to have software built by different independent developer teams working on the same persistent world objects, you can't expect them to collaborate on programming style and coordinated libraries, as is done in industry software. Any unexpected side effect or variable aliasing in another library may ruin your runtime execution.

Pure functional programming avoids this by forbidding side effects, but forces a programming style where all state updates need to be declared and carefully handled by the developer. Functional reactive programming (spreadsheet-like updates) is used a lot on the web to ease that problem (all popular frameworks have a version of it), but it is best suited to processing client/server updates, not full world simulations.

Verse's approach seems to bring new expressive capacities suited for dataflow programming, allowing data-transform logic to be built with functional expressions and separating these from the techniques needed to link one component to another.


My guess would be to store game logic as data, and process it in a distributed way. An entity could have multiple rows for its attributes and "methods", and the runtime just processes it.


Which would be hella cool, especially if you abstract up a level and store and execute platform data (an MMO game vs. VR meeting place) in a distributed way.


Yeah. Super interesting concepts I hadn't come across before, but it's hard to see the motivation for the design choices, in light of the stated goal (which is itself a bit 'out there'). OTOH, it's in general a hard problem to predict the emergent characteristics of the ecosystem of a programming language just from the underlying design choices.


I think mathematicians are going to love Verse.

I'm concerned about hoping "millions" of Metaverse devs will use Verse when, right now, the biggest problems with Metaverse are more about frameworks/netcode than the underlying language.

An open framework for handling interpolation, client & server space simulation (w/ physics), forecasting/prediction for events, and handling at least 30 ticks/s for a connection target with <150ms latency, <10% packet loss, while being able to handle thousands of connections + tens of thousands of entities PER instance (and also seamlessly mirror data across instances to enable large maps without zoning): that's a huge need for large-scale MMORPG-style Metaverse design.

I'm not at all certain we need a new language to do that.

I know it's hard to compare the usage percentage of various programming paradigms because many languages are multi-paradigm, but FP represents a much smaller piece of the pie (especially in games/networking) and, honestly, we have several generations of existing engineers who don't do FP.

Still, it's nice to see that Sweeney hasn't given up on making FP more useful to the mainstream! =)


Sweeney commented on twitter

"The aim is a transactional programming model with no visible networking or multithreading: you write normal code, and the system distributes the simulation across cores, servers, and servers by running updates speculatively, then committing or aborting them"

Seems vaguely similar to how Unreal networking already works, but I guess more automatic.

Some parts of the game will perhaps have to run outside verse so they can be interpolated/smoothed, unless they have some magic to handle that also.


There comes a point in any massively scaling networking project, and here I speak with some experience, when you realize that your goal, while admirable, has no bearing on the lessons learned over the last few decades of MMO development.

I fear this is a handwave.

What I mean by that is that its addressing the wrong problem for the Metaverse. Being able to implement an ECS model, for instance, where different platforms have common system requirements but because they're both built on Verse they can easily glom/reduce/map components and functions so entities in one ruleset can interact in the other... that's neat, but not a technology problem.

It's a combinatoric problem. And a game design problem.

By the time Verse is built up enough and has enough market penetration to try to take on this sort of role as a bedrock foundation layer for the Metaverse I think we're going to see two major shifts that make it obsolete:

1. The rise of AI-aided design and programming (think ChatGPT on steroids) that makes it pointless to worry about having One Great Solution when the AIs can just interop/translate and all the platforms (even competing corporate interests) can be "Metaversy" with their entities/players.

2. Either the combinatoric problem gets solved or it doesn't. Game designers have strong opinions on NFTs, for instance. The majority recognize them as incapable of solving the item portability problem (or as Raph Koster says, is it even desired?). Either novel ways emerge to do so and it's solvable, or they don't. I suspect either way the heavy lifting is not a programming technology problem, but a contractual/API one.


Also speaking as someone with experience in MMO development: This is fucking wild to describe as a handwave, you think Epic / Tim Sweeney have the wrong end of the stick regarding distributed execution because... the AIs will solve it for us in the next few years?

> By the time Verse is built up enough and has enough market penetration...

Verse will already have a dominant market position the minute you can do something in Fortnite with it.

I'm much more interested to know what the top LSL programmers or Minecraft modders think of it, than the team working on Guild Wars 2 or whatever. That's who it's relevant for.


You know, your comment made me rethink a few things.

I don't know if they have the wrong end of the stick. I'm afraid they might, because based on their presentations so far the Metaverse bit seems more tacked on than intrinsic. However, these are credentialed people well known in the field - it's hard to tell how much is exuberance re: Metaverse usage.

Regarding my comment on AI, I think it's relevant. It's not that AI will "solve all our problems", but that AI-aided design and code implementation will make learning a language like this obsolete. Transpilation will be seamless and backgroundy. So now there's a cost/benefit factor to learning a new language that was used to write a distributed execution layer for the Metaverse in a transaction-first manner, versus using tools that just "do it for us" and interact with the "product" created with Verse.

As far as Fortnite having a dominant market position once they do stuff with it, I'm not sure of that. I mean, StateScript being awesome doesn't make a dominant market position for it because of its use in Overwatch. I recognize the fundamental difference because the latter is closed and proprietary, sure, but I'm just not sure about the level of separation between Language and Product here when they're talking about the Metaverse. I look forward to learning more.


> Fortnite having a dominant market position once they do stuff with it, I'm not sure of that. I mean, StateScript being awesome doesn't make a dominant market position for it because of its use in Overwatch. I recognize the fundamental difference because the latter is closed and proprietary, sure...

No, the fundamental difference is between Fortnite and Overwatch, not between the languages. Fortnite has an order of magnitude more users and, more critically, is a freeform social gathering place in a way Overwatch is not.

Like honestly, I'm not sure you know what Fortnite is today, if you're comparing it with Overwatch. It's a competitor for VRChat and Minecraft as much as it is a Battle Royale game. This is going to make it a competitor for Roblox and Second Life too.


Heh, that's pretty fair. When I saw my son running around as Naruto I also was no longer sure what Fortnite was =)


Very intriguing! Is there an explanation anywhere of how this is different from Prolog? It seems like almost the same thing. That is not a criticism BTW, I love Prolog. From what I can tell the main difference is it has functions with return values. It also looks like it has different semantics for the choice points, letting you nest alternatives in arguments and other expressions. I do like how choice points are seen as a more natural building block in the language vs. the 'magical' bagof in Prolog. For games, having real indexable arrays will be nice as well.

   % Verse:
   x:=(1|7|2); x+1
   % Prolog:
   foo(X, Y) :- (X = 1 ; X = 7 ; X = 2), Y is X + 1.
I think it might be nice to implement this language using Prolog. For example I think you could do the type definitions like this (let's ignore how :/2 is usually for module qualification):

  X:int :- freeze(X, integer(X)).
Kind of off-topic but I've always been curious why variable attributes aren't generally used for type-checking like this in Prolog.

Anyway, very cool. I'll be keeping an eye on it. Is Epic Games hiring logic programmers? :)


Major differences from Prolog seem to be as follows: Prolog always evaluates through unification (at least in theory). Verse is evaluated through reduction, including the unifications! This means (a) there is a sublanguage in Verse that doesn't do unification at all; it should have similar performance to a functional programming language like Haskell. (b) Verse is deterministic; the same Prolog program can, at least in theory, produce two different answers to the same input. Verse has a defined denotational semantics (Prolog has most of a defined operational semantics, but that's a different kind of thing). (c) Verse can be type checked at compile time! It seems typechecking Verse is undecidable, so presumably some type checks will be moved to run time, but lots of type checking can be done at compile time.


> Prolog always evaluates through unification (at least in theory).

What do you mean? The actual "computational" part of Prolog lies more in resolution than in unification. Unification does not "evaluate" in any sense of the word I'm familiar with.

> the same prolog programs can at least theoretically produce two different answers to the same input.

What do you mean? Do you mean through side effects, or are you suggesting that Prolog's evaluation order or something else is not fully specified and deterministic? You would be wrong about the latter.

EDIT:

> Verse has a defined denotational semantics

What do you mean? Verse (or rather the underlying calculus VC) has a rewrite semantics. Rewriting is pretty operational and not at all denotational.


This definitely isn't Prolog, but in some ways it seems to be closer to Prolog than some other things.

One thing you can do in Prolog is build partial data structures: `X = tree(Left, Right)` builds a tree node with two "holes" `Left` and `Right` that you can fill in later -- or, crucially, might choose not to fill in. You can pass this "partial" data structure around, and other parts of the program may or may not instantiate it further. This allows some nice programming tricks. It allows proper tail calls in cases where functional programming doesn't allow tail calls, or only with heroic help from the compiler. It allows you to decompose your program in different ways from languages where you must always build data structures "inside out".

In contrast, Mercury is a syntactically Prolog-like language that doesn't allow this: (AFAIK) when you pass a term to a predicate, it must either be ground, i.e., without any leftover "holes", or a variable, i.e., completely uninstantiated. This throws away much of the power and convenience of Prolog. And any language that takes a pure "Prolog's logic variables are just sequence generators" approach must also fall into this category.

Verse seems to be more lenient in the "it's all just sequences" department. You can pass uninstantiated stuff into functions and have it further instantiated in there. It's not clear to me to what extent this works, there are no examples with data structures in the slides and I haven't gone through the paper yet. I can well imagine this being closer to Prolog than to Mercury. But the "Everything is eventually evaluated" is definitely not fully Prolog-like; Prolog doesn't care about everything eventually being ground. There is no need for that.

On a related note, even though it's popular to say that Prolog can "run code backwards", that is in fact not the case at all. Prolog always runs your code forwards. If your code is designed accordingly, you can often treat an argument `X` as an input and an argument `Y` as an output, and also have a use case of the same code where `Y` is an input and `X` is an output. But the code itself always runs in a well-defined top-to-bottom, left-to-right way. This is notably different in Mercury, where the compiler will explicitly compile different versions of your code for different use cases, reordering things so that sometimes you are actually running bottom-to-top when compared with the source code order.

Evaluation order in Verse is... all over the place. I suspect this will be problematic in practice. If you understand how Prolog evaluates your code, you can work with it to write performant code. Verse seems to be too flexible in this regard, so that it will be difficult or impossible to understand what is actually going on in what order. If you treat all your values as generators that can start enumerating stuff at any point, it will be very easy to have cases of combinatorial explosion by choosing wrong orderings. I see that there are some notes on this in the paper, but not much more than "some things are obviously not what we want, but we don't know what exactly we want". So let's see what happens, but for now this doesn't seem to want to be close to Prolog.

Final syntactic notes: Mercury has the Prolog-like predicate syntax that as noted can run "backwards". It also has a functional syntax where (AFAIK) it's not possible to run "backwards". This seems to be the right choice to me. You can mix and match predicates and functions to build what you want and retain clarity. Using a function syntax for things that can run "backwards" will be cute but confusing. Relations should be written as relations IMHO.

As for micro-syntax, others have complained about `fst` and `snd`, and I also tend to think that we can afford a few more bytes. But my main complaint is with `false?`. If in the slides introducing your syntax you feel compelled to call something "quirky", that's a clear indication that it should change. It's such a weird name. Why the question mark? Why does the name evoke booleans if the language has no booleans and discourages boolean thinking? If I'm supposed to think in terms of sequences, a better way for a sequence that contains nothing would be `none`. If I'm supposed to think in terms of logic variables, a better name for a logic variable that is bound to no value is... also `none`. The name `false` is just such a surprisingly bad fit.

This is something to watch, it's an interesting point in the design space. It might end up as something that (finally) is better than Prolog. It won't end up as being "almost the same thing" though, I don't think.


I think false is sugar for the empty tuple / array. `?` is an operator that turns tuples / arrays back into choices. Thus `false?` can be seen as an expression that returns no values, which is semantically falsy in the language.
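If that reading is right, a crude Haskell model of it (my analogy, not the paper's actual semantics) is to let every expression denote a list of values, with failure as the empty list:

    -- Crude model: an expression denotes a list of values; failure is [].
    type Values a = [a]

    failV :: Values a                            -- plays the role of false?
    failV = []

    choice :: Values a -> Values a -> Values a   -- Verse's (x | y)
    choice = (++)

    -- A conditional succeeds iff the scrutinee yields at least one value.
    ifV :: Values c -> Values a -> Values a -> Values a
    ifV c t e = if null c then e else t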


Ah, that makes sense. Thanks! It should still not be the default name for this construct.


Thank you for the detailed explanation, much appreciated.


I was thinking the same! I love Prolog, but maybe Verse will turn out to be an actually better, cleaned-up Prolog? Implementing Verse in Prolog would be so much fun and very natural.


For those wondering if there's actually something relevant to Epic's approach to the metaverse: there's not, beyond slide 3. Everything after slide 3 is a description of the semantics of a novel functional logic programming language, the sort you'd expect to get if you paid the world's premier Haskell core contributor a large sum to design a new language. Of course its intended audience is Haskell academics, so perhaps it would be weird to expect anything else.

Slides 61 and 63 hint at things that might actually be of consequence to real programmers making real software for a potentially real metaverse, such as the effect system for I/O, transactional memory and code organisation through classes and inheritance, but they give no detail.

If you're interested in how elegantly functions like head, tail, cons, snoc, append and map might be implemented, this presentation is for you. If you're more interested in how you might unpack a message from a bytestring, model a state machine or coordinate a group of actors in a distributed system, then I guess you'll have to wait for the next presentation.


> its intended audience is Haskell academics

It was presented at Haskell eXchange so the audience is more like Haskell users in industry and those who would like to be.


Simon my boy.

The "View from 100,000 feet" slide does highlight the kind of experience SPJ would have in identifying where Haskell lacks a bit in pragmatism as a basis for a new, but not super different language. Hopefully the tooling is much better though.


I was expecting an entire section devoted to how this language would deliver on its promise on powering the metaverse. This was a nice post about a new language, but I was ready for a bitcoin paper.


This language is cyuuuute! The multi-valuedness making "if (x | y) then…" work is so neat, and having `x : int` be a thing of exactly the same nature as `x = 3` is cool to see; even in a fully dependently-typed language, those things are not of the same kind. This looks super mind-expanding to try out!

(My first reaction is some slight aversion to what the "flexible" and "rigid" stuff is doing in the conditional scrutinee body, but maybe that'll feel natural after playing with the formal specification a bit.)


Verse reminds me of Mozart/Oz[1][2].

I really liked Oz, and thought it had a lot of potential. But its documentation was a big adoption barrier (scattered mess plus expensive textbook), and Oz failed to escape being a turn-of-the-century European research and intro-CS language. The intro-CS role perhaps lends plausibility to Verse's "a first language" objective, despite the off-mainstream computation model. Explicit `amb` though.

[1] https://en.wikipedia.org/wiki/Oz_(programming_language) [2] http://mozart2.org/mozart-v1/doc-1.4.0/tutorial/index.html


I am happy to see I am not the only one who remembers Oz, and I wonder if there is any connection between those projects.


As a programming language - I mean it seems cool but it doesn't seem like much of an innovation when things like Granule, Idris, Unison, Erlang, and more already exist.

I want to take a moment to look at the Metaverse angle. If there is going to be a unique programming language for the metaverse, I think it's not going to be textual but instead is going to primarily be a 3D language. This poses a big challenge for a VR language because text is great[1]. The point of making it 3D would be that you are somehow capable of transmitting more information via the higher-bandwidth channel of the human visual cortex than you are through text.

If you aren't making a 3D programming language, well then it might be a cool programming language but I don't know what it has to do with the metaverse.

[1] https://graydon2.dreamwidth.org/193447.html


We developed 2D text in a 3D world. There isn't really a precedent for 3D text. We will greatly enhance writing and reading with eye-tracking on VR/AR headsets, but I don't think we will march into a 3D world of text. We might do 2.5D though (or something more of a dynamic nature) – https://www.reddit.com/r/badUIbattles/comments/wsk2jz/debugg...


Jaron Lanier developed a language in VR in the 80s much like you describe (apparently). Unfortunately, there is little evidence of it beyond some of his later testimony.


To me, this doesn't seem like it has much to do with Unison at all beyond being a new FPL, could you elaborate a bit?


That's exactly what I meant, that Verse is just a functional programming language, and that the ideas in a language like Unison seem to be just as interesting or promising.


No offense, but: does this actually relate to the metaverse, or was that part added in so Epic Games would fund it? I don't see any first-class features that would be specific to that use case.


> does this actually relate to the metaverse, or was that part added in so Epic Games would fund it?

Verse and the metaverse were ideas at Epic Games long before SPJ was an employee at Epic Games (at least 10 years before).


This question was addressed on slide 4.


It doesn't. This presentation contains absolutely nothing of value for a real programmer trying to actually make stuff.

Hopefully we'll get more useful info soon, once it can be used in Fortnite.


> This presentation contains absolutely nothing of value for a real programmer trying to actually make stuff.

Speak for yourself! I got a lot from it that feels immediately practical for my current projects as well as others I want to take on, and it makes logic programming concepts and benefits much more accessible to me than any amount of Prolog literature has so far (granted I came in wanting to be compelled).


Do you have some examples of things in here that you think will help you solve current problems?

I'm approaching it from the angle that I don't see how functional programming is at all useful for the kind of gameplay programming that would be done in a "metaverse" or is done in Unreal Engine today. So "it can do functional programming stuff" by itself doesn't cause any excitement. Rather the opposite. There are zero code examples of using it for anything besides abstract math. The out-of-order execution seems to serve no purpose besides letting people create undecipherable monstrosities with it.

I'd maybe sum up this presentation as "we took functional programming and tried to make it less unusable by letting you actually mutate state" but I'm still lacking the original motivation of why I'd want to use that in the first place.


Well I don’t build games, so huge lump of salt, but I could benefit from and imagine translating into my work the language’s approach to:

- null safety

- deferred evaluation of expressions with unresolved dependencies

- types as first class values

- the real thing functional programming is about: making it possible to know what value is bound to a thing at any point in a program

The first is valuable for any program, the last is valuable for any program with multiple inputs (so, games, but also basically any human facing program because IO, or really just any program with users and real world interfaces), and types as values is a world of oysters if you like what types afford for development.

Edit: lol I forgot to expand on deferred evaluation, it’s a really good model for distributed users but could also be a good model for highly dynamic applications like one I maintain where there’s a hard constraint on synchronous execution but currently a very lazy model of what needs to be executed.


The approach of using a function to describe a type, where success is the identity, is a very simple way to implement dependent types.


But how does it help me implement a gameplay mechanic for the "metaverse"?


Logic programming and games actually go well together... Inform (probably the most important text adventure engine) is notably a logic programming language. And many game engines have moved to an architecture in which game assets are represented in a database, and changes to game state are represented as queries on the database (which is canonically what logic programming does).

For instance this might be Super Mario... mario moves forward, or the gumba moves forward, or mario hits the gumba and mario enters the death animation and the gumba stops, or mario hits the gumba from above and the gumba enters the death animation, or mario hits a coin box and his coins increase and mario stops and the box enters the coin animation.
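For a flavour of that rules-as-alternatives reading, here's a toy Haskell sketch (the world model and field names are entirely made up): each rule proposes zero or more successor states, and concatenating them gives the engine its choice points.

    -- Toy sketch: each rule proposes zero or more successor worlds;
    -- the list of all of them is the engine's set of choice points.
    data World = World { marioX :: Int, gumbaX :: Int } deriving Show

    step :: World -> [World]
    step w =
         [ w { marioX = marioX w + 1 } ]              -- mario moves forward
      ++ [ w { gumbaX = gumbaX w - 1 } ]              -- the gumba moves forward
      ++ [ w { marioX = -1 } | marioX w == gumbaX w ] -- collision: mario "dies"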


I, for one, find it very cool that two out of the five people named as authors are from my old university, Chalmers University of Technology in Gothenburg, Sweden. Chalmers has been very strong on functional programming for quite a while, being (or having been) home to many influential people. Apart from those named in this paper, I can name John Hughes (Haskell/QuickCheck), Thierry Coquand (Coq) and Ulf Norell (Agda) just off the top of my head.


These look like they might be the slides from SPJ's talk at Haskell eXchange this year.

I'm hoping they'll upload a video soon, which'll probably be a lot more fun than slides. They've only got the recording up for one of this year's talks so far.

https://skillsmatter.com/conferences/13688-haskell-exchange-...


Yeah, Simon Peyton-Jones is quite good at talking; these slides are interesting, but I think the full video will turn out to be much easier to digest.



This looks a lot like a functional version of Icon https://en.wikipedia.org/wiki/Icon_(programming_language)


This looks like a language designed for language designers and not for language users.


What an amazing summary. Puts to words what I was thinking but couldn't quite describe...


Wouldn't a language designed for current language users not bring much new to the table, and mostly focus on familiarity and interop with existing languages?


When you want to make a mainstream language you will have better luck by focusing on bringing value to the users of the language over focusing on creating something novel.


But isn't the point here that the largest value is in a novel space requiring novel solutions?


This is sadly the problem with most new languages.


The metaverse angle doesn’t make much sense to me but whatever.

The big idea I see is that prolog-style backtracking logic becomes a first class notion in the language – every expression denotes a sequence of values reached through backtracking-like behaviour – which allows mixing logic-programming with a more familiar kind of functional programming. Though perhaps there are other ideas too.

I think it will be interesting to see where this goes. Perhaps this will be another Fortress, but SPJ is well regarded, has written a bunch of interesting papers, and has a bunch of experience managing a programming language project from Haskell which will hopefully carry over.


See slides 2 and 4. He has an idea of what a metaverse would look like and the technical challenges that will bring up. The language is meant to help solve them.


I'd think the real interesting stuff would have been the transactional parts and the effects system. Unfortunately, these slides don't describe them; they just mention that they exist.


The paper also stops before going into these. These were the bits that actually interested me, but alas, guess we'll have to wait for that.


Thanks. I re-read them but the metaverse angle still doesn’t make much sense to me. I don’t think that’s super relevant to the PLT stuff though.


I think it makes more sense if you don't think of it as an angle but a motivation. We've all been attacked with stuff like "shopping lists but on the blockchain!" for so long. This isn't like that. It's that they're foreseeing a technical problem of a shitload of concurrency and extra pain of API changes (metaverse) and this is meant to help.


I didn’t find the metaverse stuff to be a compelling motivation for the programming language features described. It felt to me more like ‘I had some ideas about programming languages after 20+ years of Haskell and research. Here’s my plan to unify logic programming with more normal functional programming (also we’re getting rid of monads). By the way my boss cares about the metaverse.’ But maybe that is too cynical.

The metaverse things used to motivate this language don’t seem new to me either. Aren’t they basically ‘works well with concurrency’ and ‘somehow the language that finally makes it easy for different programs written by different people to talk to each other’ which don’t seem to be new goals in programming language design as far as I’m aware. But maybe I’m being uncharitable in my interpretation.

I’ve not looked at the associated paper or tried to understand what the metaverse is so maybe things are better discussed there. Or maybe they are glossed over because the presentation was for PLT people rather than metaverse people and I think many of the former would be turned off by a talk about the metaverse.

This doesn’t strike me as a very serious attempt to predict the future and design a programming language based on that prediction (though few such attempts exist for the design of most things in tech, and they needn’t correspond to success). It seems more like taking a punt and exploring a new point in the programming language design space that may be relevant.

I think it is interesting to look at what happened with Fortress for comparison:

- Well known PLT person (Guy Steele of scheme, CL, Java fame) in a reasonably funded industrial research lab

- they started with ideas about where computers were going around 20 years ago. So high core counts, big memory vs cpu frequency divergence, NUMA/HPC architectures. And an increasingly difficult amount of legacy Fortran code

- developed an idea for a language that would be easier to write parallel scientific programs in

- eg functional, map-reduce paradigm

- added in some known features like typeclasses (monads for map reduce)

- some other random features/ideas

- wrote a bunch of papers

- not that much actually came of it, I think

Maybe they weren’t good at executing on making a language people wanted, or maybe that was never the goal. Or maybe they were too early – a lot of the ideas became more important with gpu compute but most people still write those programs at a much lower level.


> It felt to me more like ‘I had some ideas about programming languages after 20+ years of Haskell and research. Here’s my plan to unify logic programming with more normal functional programming (also we’re getting rid of monads). By the way my boss cares about the metaverse.’ But maybe that is too cynical.

For what it's worth, Tim Sweeney (the "boss" in this story) has been thinking about Verse or something like it since at least 2006: https://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced...

It seems much less like SPJ getting funding from a big-bucks metaverse boss, and much more like Tim Sweeney getting together a dream team to make his dream language in one form or another.


I underestimated your disagreement with the metaverse motivation, taking it as "I don't get it" versus "I don't think that's true/right/good/worthwhile/honest/whatever".

I wouldn't take your skepticism as uncharitable as much as realistic. If a company is trying to apply some abstract theory to solve a problem, they might hire a theoretician to work on it. They'll say "We are advancing X theory to solve Y problem for our customers". Well, the "we" in there is one dude and his motivation certainly isn't Y. He'd be working on X regardless. Y is just the reason he's getting paid. That's not exactly what's going on here, just sayin distorted motivations aren't that rare or even really that nefarious.

On the not-newness, I think that's true, too. You can justify it by saying that a metaverse will have the same problems in different proportions. This would justify approaches with tradeoffs that wouldn't make sense in other situations. It looks like it's optimizing for a system with high concurrency, weak coordination and on one platform. So like Erlang at Ericsson but more programmers, or like the internet but less independent.

Thank you for sharing the Fortress example!


I followed Fortress eagerly at the time. I'd say if you want to see what eventually came of it, look at Julia. It's a pretty clear descendant.


I think one must squint a lot to see Julia as a Fortress descendant. They seem much more like siblings trying to solve similar problems. Like Fortress, Julia does have a reasonable amount of CL/Dylan ancestry with multimethods, etc, but didn’t try any of the Fortress-style efforts around typeclasses and writing varied code that magically parallelises – you can still get good perf doing typical vectorised stuff like numpy/APL/R, and you get decent perf in loops etc due to the jit and aggressive monomorphisation, and there is fancy stuff for clusters or gpu arrays, but it doesn’t try to achieve the things Fortress wanted, like making it easy to write code that can be more magically parallelised.


I think in the constellation of languages there's still more intersection of "feature dots" between the two than there is disjunction. Multimethods & numerics emphasis alone makes them uniquely similar, as the latter especially is not a feature that has been mined well in other languages.

Some of the influence is quite conscious, too, from my reading. Though as you mention is obviously a lot of common influence from Dylan and Common Lisp.

And here's a talk by Guy Steele about the Fortress experience, at JuliaCon:

https://www.youtube.com/watch?v=EZD3Scuv02g


This doesn’t strike me as a very serious attempt to predict the future and design a programming language based on that prediction

I don't think it is trying to predict, it is purpose built to be the Unreal engine scripting language. I hate the word metaverse, but it will be a highly relevant, important language to online multiplayer worlds as soon as it is released (as soon as a month from now).


The implication of the argument that the above comment was responding to was that the design of the language was driven by predictions for what the metaverse would be like and what that would imply about the needs of a programming language. I agree that the OP doesn’t seem to really try to predict that. I’m sceptical that a language like this will catch on in Unreal but maybe if that is the motivating use-case it will drive some interesting and practical language features. Will have to wait and see.


After reading the paper and watching the lecture, I think I know what it means that Types are first class values.

In Verse `=` is unification, not assignment or comparison. Meaning it's a constraint on the lhs and rhs. Unification is also an expression, meaning it can be normalized to a value; e.g. `x=3` normalizes to `3`.

Expressions can be sequenced with `;`, but note this is nothing like imperative programming, due to unification. These sequences normalize to the last expression in the sequence, but the unifications in all subexpressions apply to the whole sequence. Thus `=` appearing in subexpressions in any order does not change the resulting value (since the compiler uses normalization).

Now functions can be seen as lambdas (anonymous functions) over these sequences of expressions, where the arguments are also constraints! These functions themselves are values. Thus functions are first class.

Another key aspect of unification is that functions can run backwards. Thus `swap<1,2>` will return `<2,1>`. But so will `swap(p) = <1,2>` constrain `p` to `<2,1>`. Thus the meaning of a function is no longer just a procedure, but more of a specification or constraint.

And this is exactly what types are in Verse. Types are functions, thus first class. When we say `i : Int` it is akin to saying `Int(i)`, constraining the variable `i` with the constraint `Int`.

Thus if you have a function that succeeds when given an even number, that function can be seen as a type of even numbers.
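A rough Haskell rendering of that idea (my sketch; Maybe stands in for Verse's failure, which isn't quite the same thing):

    -- A "type" as a function: return the value on success, fail otherwise.
    int :: Int -> Maybe Int
    int = Just                       -- the identity on ints, as in the slides

    evenInt :: Int -> Maybe Int      -- "the type of even numbers"
    evenInt n
      | even n    = Just n
      | otherwise = Nothing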


It is nothing new. See dependent types, used by most formal proof assistants (Coq/Lean/…) and languages like Agda and Idris.

I highly recommend the book "Type-Driven Development with Idris" if you want a great introduction to the power of DT.


Oh, this is new, because our notion of expressions, and thus of functions, is different.

Sure you can have terms in types in dependent types; thus functions in types is nothing new. But again, we have a much different notion of function here.

And if anything I would say types in Verse are much closer to refinement types because of its ability to apply constraints. But they are still not the same thing.


Yes it is different but it doesn’t seem to give you anything that is better than what DT and refinement types give you. So why not simply use DT or refinement types? I know that in research you have to come up with something new to write papers about. Even if it isn’t actually better in practice. However this seems to be aimed at being a practical non-ivory tower language?


I think your first question will be answered when SPJ releases details on the type system, and the second when he addresses the transactional distributed stuff.

As of now all we have is a core language. But it's not hard to read between the lines to anticipate how powerful it can be, so that features that the industry needs can be built upon it.


I dunno about slide 39 - where calling f(x) may or may not give x a value. From a code readability standpoint, a reader of the codebase needs to be wary of every function.

I see the issue is that if you explicitly marked it as inout, it would limit the code from a logic-unification point of view. I guess this is already explored in Prolog, which I have little experience with - but it feels like it would inherently limit the ability to scale a codebase to huge sizes. Maybe a combination of convention and naming would solve the problem.


It reads very like passing uninitialised variables to a function which may initialise them for you. That's one of the worst parts of imperative programming.

The distinction between bringing variables into scope and assigning them values is a strange one for a functional language, but in fairness the talk is titled beyond functional programming.


It's cool to finally get some more details on this language. They showed some slides of it being used in Fortnite last year:

https://mobile.twitter.com/saji8k/status/1339709691564179464...

I believe it has roots in (but is quite different from) SkookumScript

https://skookumscript.com/about/look/


The code shown in those images is not in the programming language that is being introduced/created now by SPJ. It is clearly a separate (and not functional) scripting project.


https://nitter.lacontrevoie.fr/pic/orig/media%2FEpeZMhDUwAAY...

The screenshot says "BoxFight.verse". Either Unreal has two separate new programming languages both called Verse, or you're plain wrong.


I don't know why I bother, but:

The screenshot you linked to shows boolean variables being defined ("bool") while the new SPJ language being discussed here very conspicuously lacks a boolean type. That fact alone is enough to convince me that there is an "old", less cool, less ambitious Verse language, with different syntax etc. which is about to be replaced by (or as Tim says in tweet, get "converged with") the new SPJ Verse.

They apparently like the name/branding so much that they want to keep using it, even though the fundamental principles of the language are going to be different. That does seem potentially confusing but it's their business. After all, the old verse never got released to the public.

So it seems totally clear to me that a previous, less complex and ambitious, scripting language called "verse", which has been around for a while and never really gone anywhere, is going to be replaced by the new SPJ functional language "verse".


https://twitter.com/TimSweeneyEpic/status/160218212070891520...

> We're more than a year away from an open source implementation that's converged with what's in Fortnite. In 2023 we'll be giving lots of talks and likely releasing some of the research pieces.


I believe it is the same. Tim's Twitter has a lot of references to it:

https://mobile.twitter.com/TimSweeneyEpic/status/15955006213...

https://mobile.twitter.com/TimSweeneyEpic/status/16021821207...

You can find more information here: https://www.reddit.com/r/uefn/

I believe they've been using Fortnite as a testing ground for a while now, to hammer out the design.


Every time someone gives a programming language a name that needs disambiguation terms for a decent google search, a puppy dies.


Python


Java, Go, Rust, Perl, Scheme, Swift, Ruby, Lisp, Basic...

But just because they all did it (and most before web search existed) doesn't mean future languages should.


This essay opens with comments about utility for the metaverse but then switches to a description of a grammar without strongly connecting back to why it is so valuable for programmers to learn a new language to better build the metaverse. It may be helpful but is it that much more helpful?

I understand this is a pet project for these folks, but I think that a metaverse grammar should not be functional but procedural - it should be extremely simple and accessible to novices. We want these tools to be accessible to everybody I think - unless we as programmers want to be responsible for building all interactions for all users.

But I mostly think a grammar today isn’t just about the literal parser - instead it is more about the parser and surrounding tooling. Rust for example is surrounded by tooling that helps - cargo/crates are a nice helpful way to make rust developers effective.

If I was going to devote significant energy to this topic I'd solve other problems. Any code that users write needs to be late binding - allowing software agents to be pushed to the cloud and participate in already running simulations or models. Code should allow users to define granular security around libraries and components to prevent them from doing bad things. Code should be highly portable, able to run across many devices at speed - not something new that has low cross-platform support.

I think a better way of thinking about this isn’t the grammar itself but the kind of “computational sandbox” one is offering - the grammar container or app runner.

WASM is a good example of thinking in a better way - portable, secure, performant. It’s more like a metaverse tool than the above grammar.

What’s especially curious to me is that Tim’s project Unreal is almost the anti-metaverse. It is utterly fixated on and built around an extremely heavy high-fidelity renderer - that is not portable - that does not run across many devices. Unreal uses an old-school compilation philosophy that requires behaviors to be precompiled - so if you want to add a single feature you have to tear down all the instances, recompile them, recompile the server too, distribute a new build to all participants and restart the sim. It’s a tool designed for a different era and a different ecosystem… in a sense AAA games are the opposite of a participatory, constantly evolving, online shared consensual world. So maybe he should fix that first.


> It is utterly fixated on and built around an extremely heavy high fidelity renderer

This is doing Unreal Engine a significant disservice. Unreal is significantly more than a high fidelity renderer; Unreal's replication system is excellent, as is the gameplay ability system. (To my knowledge, neither of those systems exists in any of the other major engines that are readily available.) There's a laundry list of things it does pretty well; distilling it down to just a renderer is very dismissive.

> that is not portable - that does not run across many devices.

Renderers by definition aren't going to be portable; they're pretty intrinsically tied to the hardware (and OS) they're running on. Also it's silly to say it doesn't run on many devices - it runs on every games console from the last decade, an enormous number of mobile devices on both Android and iOS, and on all major desktop platforms natively. What more do you want?

> Unreal uses an old school compilation philosophy... So maybe he should fix that first.

Unreal definitely has some dated design decisions that are showing their age, but that's to be expected for a codebase that's 25 years old. And if you've been paying attention to what epic are doing over the past few years you would see they are working on that.

(Disclaimer: I worked for epic until recently on exactly the things you're talking about in this comment)


Off topic, but wow, I cannot believe someone used Comic Sans.


It's just because SPJ likes it. He uses it for presentations a lot.


> This is a very funny question, "Why use Comic Sans?" So, all my talks use Comic Sans and I frequently see remarks like 'Simon Peyton-Jones, great talk about Haskell but why did he use Comic Sans?' but nobody's ever been able to tell me what is wrong with it. It's a nice legible font, I like it. So until somebody explains to me ... Ah, I understand that it's meant to be a bit naff, but I don't care about naff stuff, I care about being able to read it. So if you have got a sort of ... some rational reasons why I should not then I'll listen to them. But just being unfashionable? I don't care.

source: https://graphicdesign.stackexchange.com/questions/38226/what...


Pure and brilliant counter-signaling. His paper is worth reading and he knows it; the font is not there to entice readers. The paper does the enticing. The Comic Sans signals "This is either the ramblings of a child, or the utterance of genius, and I'm all out of childhood..."


He is asking for reasoning when he doesn't provide any, so his use of it is just idiosyncratic and stubborn. It is a noisy typeface that is hard to read in bulk and takes up a lot of space. It was invented for what was basically a computer game for kids, so why does he insist on using it for presenting technical material? Using fonts beyond their originally designed purpose for no reason is asinine, and actually is fashion, despite him thinking otherwise.


What's the big deal? Using a font outside of its narrowly designed purpose is certainly not it. Some people venture outside the box.


Venturing outside the box is fine. But it's an idiosyncratic choice without much, if any, upside and is ultimately a distraction and hides behind some perceived rational decision to use it. The linked Stack Exchange covers this, and one should consider the appropriateness of the tool being used. I think he'd agree on the same for programming languages.


I like it. It's easy to read.


He has used that PowerPoint style for years if not decades.


He used it in the "Tackling the Awkward Squad" talk from 2001[0], so it's been a long time.

[0] https://www.microsoft.com/en-us/research/publication/tacklin...


Fun fact: Comic Sans is supposed to be easier to read for dyslexic people.


IIRC fonts specifically made for dyslexic people (e.g. OpenDyslexic) tend to be even better in that regard; if the goal is accessibility, one of those would be a better choice.


Sad fact: dyslexic fonts are pretty useless for dyslexic people.

[0]: https://www.edutopia.org/article/do-dyslexia-fonts-actually-...

a good phrase on this link: "... dyslexia is a language-based processing difference, not a vision problem, despite the popular and enduring misconceptions."


Semi-hard-to-read fonts like Comic Sans have been found in some studies to improve retention.

It’s one of the reasons I like him: zero ego but still a genius. It’s the opposite of McKinsey (zero substance but looks great).


I remember European Space Agency also using Comic Sans in one of their live broadcast presentations.


Why not? Comic sans is the most inclusive font.


I must be getting old.

Neither the prospect of spending time in the "metaverse" for "social interactions" nor a language design that requires you to keep in your head not just simple bindings to names but sequences of values is very appealing to me.

What are younger folks thinking how they will be interacting with the metaverse? Less clunky VR glasses you can wear for more than an hour?


The metaverse is already a failure. So no it is not because you are getting old. Most people don’t want to strap a bulky helmet on their head to work or play. Maybe one day, if you just have to wear comfortable light weight glasses, this might change. But until then it will be a niche thing.


I'm sure they know a lot more about language design than I do, but the use of what is essentially SQL's NULL for boolean false doesn't feel like a good idea at first blush.


I didn't understand "false?" as a NULL value but more of a "Nothing"/"None" in the context of a Maybe monad. Since the language doesn't even have a Boolean equivalent, it makes sense to introduce a value for a failed condition or zero values.

The problem with SQL's NULL is its properties (comparison and arithmetic in particular), which don't seem to apply here (e.g. "42 + false? === false?", just as with monadic function composition and Maybe monads).
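In Haskell terms (my analogy), that propagation is just mapping over a Maybe:

    -- Lifting (+ 42) over a missing value propagates the absence,
    -- much like "42 + false? === false?".
    example1, example2 :: Maybe Int
    example1 = fmap (+ 42) Nothing   -- => Nothing
    example2 = fmap (+ 42) (Just 1)  -- => Just 43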


I don't quite follow the second paragraph. You say

> The problem with SQL's NULL are its properties (comparison and arithmetic in particular) [...]

Which I follow, then you say that those unfortunate properties don't apply and give a counterexample (I think it is intended as a counterexample) of

> 42 + false? === false?

But that seems to be the same as SQL's NULL:

    benji=# \pset null NULL
    Null display is "NULL".
    benji=# select 1 + NULL;
    ?column?
    ----------
        NULL
    (1 row)
Which leaves me confused.


The problem with SQL is that this behaviour is inconsistent when it comes to NULL values.

Consider a column with values (10, 20, NULL, 30).

In SQL the AVG() function would return 20, i.e. ignoring the NULL value entirely. COUNT(*) on a table, however, would happily include NULL values, whereas COUNT(col_name) wouldn't. The behaviour is all over the place, especially when different dialects and settings are considered.
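For contrast, a sketch of the AVG behaviour with an explicit optional type (Haskell, my illustration): the decision to drop missing values has to be written at the call site, so it can't silently differ from one aggregate to the next.

    import Data.Maybe (catMaybes)

    -- AVG-style: drop the missing values, then average what remains.
    -- avgIgnoringNulls [Just 10, Just 20, Nothing, Just 30] => 20.0
    avgIgnoringNulls :: [Maybe Double] -> Double
    avgIgnoringNulls col = sum vs / fromIntegral (length vs)
      where vs = catMaybes col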


Thanks for taking the time to respond. I appreciate it.


I like it. The structure of the code you write looks much more similar to what you would write on a piece of paper than Haskell's (for instance, getting rid of `in`, and the pulling into scope at the top). Not to mention the primitive version of the language also resembles how you would write math much better than lambda calculus does (which I acknowledge probably was never a goal of lambda calculus).

This doesn't present any examples of something that would be useful in the real world. But come on, this was presented at a Haskell conference, it looks like.

For anybody with an interest in the "mathish" flavor of functional languages, this certainly has a lot of promise as a foundation to build on.


I suppose this is a relatively minor thing, but being able to have conditionals like "if (0 < x < 20)" has been on my "things I wish just worked" list for a while.


I believe you can do this in Python
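
For instance, Python chains comparison operators natively, including the mixed <=/< case:

    x, index, length = 10, 3, 5

    # a < b < c is evaluated as (a < b) and (b < c), with b computed once
    print(0 < x < 20)           # True
    print(0 <= index < length)  # True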


Yup, you can. Also in Clojure, like this:

    (< 1 2 3 4 5 6) => true


How do you do the (IMO common) 0 <= index < length?


This you can't do, unfortunately. Would have to do something stupid like `(< -1 index length)`.


> Like the metaverse vision, Verse itself is open

Aren't these companies pursuing a "metaverse" product because they can lock it down, control it, and make money on it? Otherwise, rather than FB and Epic etc. each making their own, you'd have something developers could work with and use right now.

I wonder if maybe Simon Peyton-Jones has to play along with the whole metaverse fad just to do something he wants to do.


The strategy is more about getting a first-mover advantage. Companies believe the metaverse is inevitable and that it will become a big part of the everyday person's life. These companies want to be key players in this space.

Look at the open ecosystem of the web and think about how much money ICANN and Verisign make from their DNS. Anyone can make their own independent DNS (see projects like namecoin), but ICANN has a big first-mover advantage that makes it hard to compete with them.


Oh I get it, but they're aiming for a first-mover advantage with the goal of being in control of something, not of implementing a lovely open ecosystem.


Why not both? Google may dominate search for websites, but anyone is free to create their own search engine for people to use.


The first mover advantage also means that, as a matter of everyday experience for most people, the internet is not an open and free place, it is a highly structured and controlled place. Most people can only do stuff on the internet if a company has made a web hosted app that allows them to do that thing.

That goes all the way down to DNS and connecting a server to the internet in the first place.

ICANN and IANA allow you to have a name and IP address, but you pay a middleman extra (in many cases a whole lot extra) for those because ICANN and IANA don't want to deal with you directly, so you do it the way they want you to.

Amazon/Microsoft/Google allow you to have server time and hard drive space, you buy from them because servers are super expensive...because they bought up all the servers and continue to do so.

Also you can't run a server at home anyway because the company that bought the rights to lay network cable in your town/county/city doesn't allow you to.

You might be able to run a server anyway if you buy or rent an office in one of the locations they serve business internet to, but you'll run it the way they allow, for the uses they allow, at the speeds they allow, and you'll pay a lot for it.

If you want to self publish, even with a basic site you probably use Wordpress or something similar because the existence of design platforms has raised the public's expectations for what a site should look like and be able to do. So you design your site the way the design companies allow you to.

If all that's too much and you're just looking to write stuff for people to read on the internet, you most likely use Medium or Substack or something similar. And to do so you agree to their terms of service and end up discovering what gets attention and what gets ignored on that platform. So instead of writing what you want per se, you write what they allow you to.

If you want to share video content, you do it the way YouTube wants people to, right down to reminding people to like and subscribe and hit the bell, in every damn video.

If you want to share your love of crafts, you use Etsy. If you want to sell stuff generally, Amazon again. In both cases, you follow their encyclopedic sets of rules about what and how to sell as well as their convoluted, ever changing, unpublished rules around what shows up where on the listings and searches.

If you want to connect with people, you connect how and to whom Facebook or Tiktok allow you to.

If all this pisses you off and you want to rage at random people, you rage about the things Twitter allows you to.

In theory anyone can do any of those things without the platforms, of course, but in practice they don't and always for a slew of reasons that basically amount to someone got here first and built a moat.

In the most basic cases, anyone trying to do something on their own will at least need to advertise...on the platforms, the way Google, Facebook, and TikTok allow you to. Good luck.


Functional programming as an introductory language? Maybe it's just me, but functional programming is a steep hill to climb, it has always felt more akin to mathematics than, say, scripting.

To this day I have still never really "got it". From what I can see, the people who love it really do love it, maybe one day I'll give learning it another shot.


I don't remember which paper, but there was once an FP vs. OO study of learning that showed there was not much difference in speed of learning, though that was with total beginners. Since most languages are OO/imperative, the FP style is simply less common, hence a steeper learning curve than for the next OO-ish/imperative language.


The choice stuff might make vector and GPU programming more natural, assuming it can be made efficient enough. The syntax it provides is certainly convenient for that. Might even be an advance over array programming in some ways, by being more flexible and opening up more forms of expression. Has this been considered?


> Might even be an advance over array programming in some ways, by being more flexible and opening up more forms of expression

In what respect is it an advance? It seems to me strictly less expressive. (Which is not meant as a denigration; this design seems well suited to the problems it is trying to solve.)


Really exciting stuff. Couldn't give a damn about the metaverse, but these are some cool language features. I'm really curious about how Verse will be used in the wild.

Also, I appreciate how a bunch of new languages are getting Effect Systems (Unison, saw that one here too)


It's always amazing to see Tim Sweeney's name attached to things. The first game I played of his was ZZT, around 1992. It was a simple but super cool game that featured a built-in level editor!! The idea of creating levels for a game was still very novel and very exciting for 8-year-old me. I would draw out levels at school and try to create them at home (with mixed success).

It's 2022 and he's still very very relevant in the gaming world. He still runs Epic, he's still publishing papers on gaming. I don't think any of the other folks from that early era of PC gaming are still relevant in the field (Carmack moved on to VR and now AGI).


It has always been interesting to me that Carmack gets all the attention because he likes talking about himself so much, while Sweeney rather quietly toils away and is a multi-billionaire. Just reading more about him now, I'm happy to see that he is also a major conservationist.


If the goal is to use a language for a metaverse, with lots of actors, why not use Erlang, which is great for dealing with multiple actors, plus it already exists? I don't use it because I don't need it for my usual programming, but once you get past some of its quirks (coming from a functional world there are even fewer), it's a great language.

It seems to me that the objective is not finding a good language for a metaverse, but using the idea of a metaverse to justify this new language.

More realistically, an open metaverse (if it will see existence, I'm not yet convinced of its utility) will support multiple languages.

Even more realistically, the internet is a metaverse already.


A bit of a shame for this person: https://github.com/verse-lang/verse

(I'm pretty sure that's not what is discussed in these slides...)


I have also built a functional language, Winter (https://github.com/glaretechnologies/winter) that is used in our metaverse (https://substrata.info/).

A functional language has some advantages - because Winter programs can be bounded in space and time, we can efficiently execute untrusted scripts from users safely. The scope of Winter programs is reasonably narrow - mostly pretty simple scripts setting a transformation as a function of time. But for that it works well.


Your links didn't explain how you are bounding time and space. Giving programs a fixed time and memory budget doesn't require a language to be functional.


I know people will throw stones, but dynamic-oop-folks likely remember https://en.wikipedia.org/wiki/Croquet_OS


And before that, E. https://www.crockford.com/ec/


btw Croquet is still alive at https://www.croquet.io

The bit-identical virtual machine is a pretty nice idea. I hope to see some demos for WebXR soon.


This looks like Erlang for game design.

I've done some game programming, and a large number of their decisions make sense in that environment. In particular, expression evaluation and a lack of booleans.


I have no experience with game programming. Why would you not want booleans when developing games?


Not a game dev, but I suspect that it might be to avoid branching (which slows down processing)


Branching is not a bottleneck in games.


I might as well start with: I was disappointed to get near the end and see that ShipVerse would not be available until sometime in 2023.

Other than that, I like the galactic-scale transactional memory and the attempt to make Haskell-style programming simpler for the masses while still supporting professional programmers.

I worked hard to understand the slides (BTW, I am an enthusiastic but not skilled Haskell programmer) and in 6 months or so when they drop ShipVerse I wonder how much I will remember.


I looked at the slides and the start of the paper. I'm mainly confused about two points:

- What's the advantage of introducing a new lambda calculus extension? What will we understand about Verse by virtue of the existence of VC that we don't understand about e.g. minikanren?

- How does the rewriting-based execution compare with the search procedures of existing logic programming systems, and how does that impact how one would use Verse? For example, it sounds like they want Verse to be both purely declarative and performant enough to support the metaverse. But:

  - The way the slides expand out the choice of two variables makes it look like it's working in DFS order. The paper makes a big point of rewriting expanding out values "spatially" rather than in time. And I guess it's relatively clear how the rewriting system would produce DFS-like behavior ... but that's a problem, isn't it?

  - E.g. minikanren as one of its key design points chose its interleaved search (see the generator sketch after this comment) so that search could give comprehensive results on problems where DFS would (naively) never return, because it would go down some infinite rabbit hole. And minikanren, wanting to be more declarative, doesn't rely on "extra-relational" operators like "cut" which are important for prolog. This was supposed to mean that users don't need to worry about imperatively controlling the search, but this is only half-true, in that users must pick the order in which variables are introduced extremely carefully.
  
  So if one says "x, y, z are all nats in (0, 1, ...) and x * x + y * y == z * z", it seems that, as with DFS, the rewrite system would include a path which tries to expand out and then check all of the (0, 0, z) possibilities to the left of any of all of the (0, 1, z) possibilities, etc. So even if under lenient evaluation there's _some_ tree that finds (3, 4, 5), and this may be reached in chronologically finite time, there's an infinite pile of work to be evaluated _to its left_. So can one never take the "first" element from this sequence, since it seems this means "left-most"? Or does this whole thing need to implicitly be restricted to searches over finite domains?

  It seems like either one must loosen assurances of ordering within sequences, or one must include "extra-logical" means to control the rewrite process.
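
A minimal Python sketch of that interleaving idea: alternating between two answer streams means an unproductive infinite branch cannot starve the other, which naive DFS ordering cannot guarantee.

    from itertools import count, islice

    def interleave(a, b):
        # Alternate between two (possibly infinite) generators so that
        # neither stream can starve the other -- a fair search order.
        while True:
            try:
                yield next(a)
            except StopIteration:
                yield from b
                return
            a, b = b, a

    evens = (2 * n for n in count())     # infinite stream
    odds = (2 * n + 1 for n in count())  # infinite stream
    print(list(islice(interleave(evens, odds), 6)))  # [0, 1, 2, 3, 4, 5]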


This feels doomed to me.


Reading half of it, I'm struggling to see why they chose the spelling 'false?'. How do you read it? It sounds like a question; why not just 'false'?


AFAICT: Just "false" would be a regular boolean value. The question mark turns it into a logic value so it can be used in conditionals.


But there are no bools in Verse, so it seems the keyword isn't taken, and it could have just been syntactic sugar for 0, from what I can tell?


There are no bools in Verse conditionals. That doesn't mean that there are no bools in the language at all.


I was looking at something and wasn't sure if I understood it. There's this part of Verse:

    x:int; y:int; if (x=0) then y=1 else y=2; x=7; y

Ok, so here, y=2. And then it goes on to say that the equal operator within a "conditional scrutinee" is "rigid" and can only be read, not unified. Does that mean if we take out the "x=7;" line, and then evaluate "y", that "y=(2|7)"? Or how exactly does that evaluate?


I would expect "y" to remain unbound, as in "not enough information".

When evaluated, the "if" expression is added to the "knowledge store", so that it will be evaluated when "x" is bound elsewhere, giving "y" the corresponding value (just "1" or just "2", never "(1|2)") only after that.

P.S. See the definition of the "amb" operator in typical logic programming languages:

https://rosettacode.org/wiki/Amb


>Kick functional logic programming out the lab and into the mainstream

Yet this presentation did none of that. There were no motivating examples of practical usage. The language does not map cleanly to WebAssembly; it instead reduces to a Prolog-like graph-based execution model. To go mainstream, you need to be solving more problems that existing languages have than you are creating by making someone use your new language.


It is already being used in Fortnite, so it works, scales, and does what it needs to do.


If you throw enough compute at any language, it will work for scripting games. Is there any evidence that this will actually scale better than a language that maps better to the host processor?


The metaverse is a collection of 3D environments, just like web2 is a collection of documents.

A web server serves documents, while a metaverse server is basically a multiplayer videogame host.

Your browser opens a metaverse site by downloading a game engine and running it through JS and fetches the resources from the server to load the 3D environments.

All this to say: what has this functional programming language actually to do with the metaverse?


ugh, so many downvotes and not a single person to explain why this is "a language for the metaverse" as opposed to being just "a new language"


They are designing and creating Verse to be the scripting language for the Unreal engine and the game Fortnite. So it is purpose-built to drive 3D multiplayer worlds. It will be fun to see how it works out. I think it is cool to see a functional language around a game engine.


> So it is purpose-built to drive 3D multiplayer worlds

How? I can't see anything in here that makes these worlds any easier to drive.


The short answer is innate parallelism, to a far higher degree than in most languages.

The longer answer is that pure functions applied to immutable objects (or objects held in a transaction) can be applied in parallel.

The language is designed in a way to give you that, in a way where it is by default rather than in special cases.

The second part is that, being at a midpoint between lazy and strict, things will be evaluated as there is processor time to do so. Things which can be delayed will be, but if something can be computed now for later use, it will be.

The combo of this means you can handle more interactions with entities per second, using the cores available.
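
A sketch of that combination in Python (the entity layout here is made up): because step is pure and its inputs are immutable, every entity can be advanced independently, so the runtime is free to spread the work across cores.

    from concurrent.futures import ProcessPoolExecutor

    def step(entity):
        # Pure: reads an immutable tuple and returns a new one, so no
        # task can observe another's writes.
        x, y, vx, vy = entity
        return (x + vx, y + vy, vx, vy)

    entities = [(0, 0, 1, 2), (5, 5, -1, 0), (3, 1, 0, 4)]

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            print(list(pool.map(step, entities)))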


> pure functions applied to immutable objects (or objects held in a transaction) can be applied in parallel

I have been writing code like this every day using Scala for years.

With frameworks like ZIO (zio.dev) making this trivial to do.

None of which is faster than if I had written that code imperatively in, say, Java. It is definitely easier and safer to write. But performance is about far more than just optimised threading constructs on top of immutable data structures.


Most mainstream programming languages are designed as if we are all targeting 70s von Neumann machines. When applying these languages to e.g. highly distributed or concurrent architectures, they are no longer a good fit. It's great to see at least an attempt at innovation rather than just a Java clone, which is all Google and Microsoft have ever offered so far.


I’ve never seen a program that would magically be better on a distributed architecture if it was written in a different language. Programs run on a single core because they have a single core’s worth of work to do. Autoparallelization, like autovectorization, doesn’t work.


> Autoparallelization, like autovectorization, doesn’t work.

I think concurrency is more of an issue for games and was the example I gave (Haskell's transactional memory is a great example of what pure-functional buys you). Nevertheless, autoparallelism can work if you are again prepared to accept more constrained declarative languages. Apache Spark is a great example: it offers a constrained functional language of maps and folds that is autoparallelized across a cluster of machines. The research language NESL (nested data parallelism) is even more impressive.
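
The constraint that makes this work is structural: a map touches each element independently, and a fold over an associative operator can be regrouped across workers. A toy Python sketch of the shape Spark exploits:

    from functools import reduce

    data = list(range(1, 1001))

    # map: elementwise and independent, so chunks can go to different workers
    squared = [x * x for x in data]

    # fold: '+' is associative, so per-chunk partial sums can be combined
    # in any grouping without changing the result
    chunks = [squared[i:i + 250] for i in range(0, 1000, 250)]
    partials = [reduce(lambda a, b: a + b, c, 0) for c in chunks]
    print(sum(partials) == sum(squared))  # True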


I don’t believe the metaverse will be a browser app; even Decentraland now pushes to a download because it has to do so many real-time operations.

That said, I’ve worked with a few of the Haskell OG team (not directly SPJ though) and if you bring in a functional language designer to work on a project you can bet it’ll start with a new language (I’ve worked on two domain specific languages due to this).

Some of the constraints right now with this sort of program are CPU/GPU bound, so if they're building a new language I suspect it's going to focus on concurrency across threads/GPUs to support the high rendering requirements etc. and make it easier to work with networking and C/C++ FFI GPU drivers. This is speculation on my part though :)


I must admit that I am a bit disappointed. Verse doesn’t seem to bring anything new to the table that will make developers more productive.

Perhaps it would be better for the team to work on a runtime environment (think JVM or similar) optimized for the metaverse, with compilers for a number of languages targeting it.


I feel a bit sorry for https://github.com/verse-lang/verse, whose just-for-fun toy language has had its name stomped on by Epic and SPJ.


Switching to a new programming language is very time-expensive, so I would really like to know the advantages of this language over C#, Java, Python etc. What is the motivation? Why not use something that exists, maybe Lisp?


Tracking of effects would be a big one; compare the number of sandbox escapes in Java to JavaScript to WebAssembly. It's much easier to secure a language without effects and then slowly add the effects you can sandbox or control than to start with a large effectful language and try to slice bits of it off.


Programming can be a hobby, in which case time-expensive doesn't mean much. A language doesn't necessarily have to be more productive than other languages from the very start.


> a variable names a single value, not a cell whose value changes over time.

So, not a variable? :)


Well, it's a variable in the usual math sense, just not in the traditional imperative sense.


I still don’t get why people would want to type ‘:=‘ instead of just ‘=‘, but okay.


Confusion between '=' and '==' is a typical pain point that beginners have when learning to program. Plus it is super at odds with the notation we all learn in math.

':=' is didactically better because it shows that assignment has a direction. It also leaves '=' for actual equality.


Turns out, a few slides later, that ':=' doesn't have direction; it's merely a contraction of declaration, ':', and equality, '='. So 'x := 3' means the same as 'x:int; x=3'. Perhaps more surprisingly, so does 'x=3; x:int'.


Big fan of <- for assignment for this reason.


F#!


I guess it's not strictly necessary to require a ':', but it tells people that a new variable is getting defined, instead of an existing one getting narrowed.


Turbo Pascal FTW!


Functional logic programming was all the rage when I started my career 30 years or so ago, but besides some obscure use cases it did not have a lot of success. I wonder if this will be the killer app.


I'm sure there were many discussions that led to some of the language's design decisions and to how they'd be impactful for 'coding in the metaverse'; I would love to see slides on that too.


That's a wise move from Epic.

If we somehow happen to have a few competing metaverses, the one with the biggest developer mind share will likely come out ahead.

Just reading the PDF energized me about the possibilities.


I am a fan of Simon PJ and an avid Haskeller, but very underwhelmed with this presentation. This is a simple rehash of logic programming with some stuff about the metaverse thrown in. I expect more. This is the haskell equivalent of crypto-bro hype.

I am equally surprised by the sense of novelty some people seem to feel when encountering this language. As far as I can tell, nothing of real novelty was added to the corpus of PL knowledge. It's really a straightforward logic language. These are notoriously hard to program in even for one person, especially when things get interesting (i.e. cuts). I can only imagine how awful such a thing will be with multiple people doing it.


Naive question: Why does the metaverse have to have an economy?


Also... sad to see the 'fst' abbreviation persisting into a new language designed from scratch

The sequence and narrowing stuff looks super interesting though!


Verse is really a disciplined CUE on steroids. https://cuelang.org/


One thing I really like about Verse is that it makes Prolog/Datalog cute.


I'm confused by the Nested choices and funky order slide.

How does x:=(y|2); y:=(7|8); (x|y) give only (7,7), (8,8), (2,7), (2,8)?

What about (7,8) or (8,7)?


These are not possible, as x can be 7 only if y is 7, and likewise for 8. Semantics isn't based on sets, but on non-deterministic assignments to variables: y is either 7 or 8 and x is either y or 2.


(7, 8) and (8, 7) are not possible because:

If x = 2 then clearly (x, y) is not (7, 8) or (8, 7) because 2 is not 7 or 8.

If x = y then (x, y) is (7, 7) or (8, 8).


It can't be either of those, because any given outcome forces a single value of y (or of any variable in general).


Edit: Ah... I see now.

By that logic, how is either (7,7) or (8,8) allowed?


"x:=(y|2); y:=(7|8); (x|y)"

translates (roughly) to:

(x|y) is a cartesian product--the set of all ordered pairs (X,Y) such that X is in x and Y is in y, in order.

x has 2 elements: X=y and X=2.

y has 2 elements: Y=7 and Y=8.

So the set of all ordered pairs is (X=Y, Y=7), (X=Y, Y=8), (X=2, Y=7), (X=2,Y=8).

Since X=Y in the first two pairs, replace it with the value of Y from that pair: (X=7,Y=7), (X=8,Y=8).
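
A rough Python model of that reading, using nested comprehension order for the nondeterminism (the ordering here follows the comprehension, not necessarily Verse):

    # Each "world" fixes y first, then x, so the x = y branch is
    # correlated with whichever y was chosen: (7,8) and (8,7) never arise.
    worlds = [(x, y) for y in (7, 8) for x in (y, 2)]
    print(worlds)  # [(7, 7), (2, 7), (8, 8), (2, 8)]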


It's an exclusive choice. So if x is y and y is 7, then it can't also be 8.


y can't be both 7 and 8 at the same time. It's either one or the other.


I was confused by this too.


Is there any git repo where I can read the discussions and see some toy implementations of the language?


Is the choice operator somehow analogous to OR? If not, why not just choose any other syntax than `|`?


It is an or... i.e. x := (3 | 4) means x is either 3 or 4. The ; seems to basically be an and. The evaluation finds an assignment of values such that the whole expression succeeds. For instance, x := (3|4); (x % 2 = 1); x evaluates to 3 (4 fails the oddness check). Type checking and evaluation seem to be the same thing... so x:(int|string); x = ("hello" | 4.3); x evaluates to "hello", since 4.3 is neither an int nor a string.
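
A small Python sketch of that reading, treating choice as a list of candidates and the condition as a filter:

    # x := (3|4); (x % 2 = 1); x -- the condition prunes failing branches
    results = [x for x in (3, 4) if x % 2 == 1]
    print(results)  # [3]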


(meta) Interesting to go with what looks like Comic Sans in this day and age. Deliberate trolling?


Using it as slide text isn't so bad. Placing parts of code, like as[i], in it is pretty silly though.

My personal preference for a typeface that gives off the feeling of handwriting is an italic one. Palatino's italic bold is quite nice for that.


> Deliberate trolling?

Pretty much. Simon Peyton Jones has used it for years, and if I recall correctly, he said that he just likes it, for whatever reason.


So... the metaverse shall be written in Verse? Would Shakespeare approve, or die of ambiguity?


Seems like a bonkers meme language.


Does anyone know when this language will be ready to play with?


Epic already uses it, so I guess not too long: https://mobile.twitter.com/saji8k/status/1339709691564179464...


No, the language shown in the images in that tweet is definitely not this one.

It is not a functional language and is obviously quite cheesy in comparison (see someone's comment in this thread about it being derived from something called skoomumscript).

SPJ's language is not ready yet.


Based Comic Sans chad makes hackers seethe. More at 11.


Seems like a very small win for a lot of work.


Tim's great. Why is his face on the slide though?

:)


I guess someone will refrain from using Verse


Is the paper just describing Clojure-like software transactional memory?


Tim doesn't like bread; that is all I need to know


they lost me at metaverse.


comic sans lmao


I don't want to program in any new language. I only want to describe the program to GPT and have it write it for me.



