Pony Programming Language (github.com/ponylang)
196 points by curling_grad on Dec 13, 2022 | 82 comments



I want to thank whoever wrote the "Why Not To Use Pony" bullet points. They're quite honest and helpful. Most of these sorts of pages are 110% "rah rah, our language is the best for every possible situation," which makes it hard to figure out how to evaluate them. For me personally, this has seemed worst with database products. No matter what the database's actual pros and cons are, their project pages will swear that they are the best at every possible scenario that might involve data.


> they're quite honest

I've come to appreciate the "when X is not the right choice" sections that some projects list. Even if it is the right choice, such a section often provides additional information because it deliberately takes a different perspective.

Related to "being honest" are the many bullet points some projects lists as features only to have you discover very much later that it's more like a roadmap without saying so. Just be honest when you describe your project because exaggerations will only lead to frustration among potential users.


I think it's a bit of a cop-out: three variations on "we're not big yet!"

Examples of things that might be true (I do not know how true this is! I'm inferring from what I've seen, and I am not judging Pony on any level):

- "prototyping UI work in Pony will not work, as there are no great known patterns for writing small UIs that play well with the consequences of our design decisions prioritizing correctness"

- "our actor based model works well for the vast majority of highly concurrent programs, but can make realtime systems hard to get right" (I have had this experience in Rust trying tokio vs just opting for manual threading)

- "restrictive syntax means that solution spaces involing DSLs will leave you wanting."

I think everything is good and great, but it's valuable to lay out the priorities and what is less of a priority. Of course tooling kinda sucks for newer languages. Of course! Tell me about things that will likely always be a bit miserable.


The problem is that your proposed "bad at" list could be infinite. Pony is probably a bad choice for embedded, scientific, mobile, graphics, browser, etc.

What they've done instead is explain why you might want to use Pony and then explain why you may not, even if the "why" sounded like a perfect fit. You came up with a list of bad fits just fine based on what they told you it was good at, without them having to spell out every no-go. Their list of reasons why not is intended to dissuade people who would otherwise be their target audience.


I don’t get your complaint, it seems you’ve provided very good examples of them doing exactly what you say you want them to do.

As an example from someone who finds the language compelling in some ways, outright ruling out DSLs is a major factor that would rule it out of consideration for me. Better that they saved me the trouble of wondering!


Sorry, I wasn't clear. What I listed was not in fact what they wrote, but what _I_ would write given this exercise.

What they wrote was (summarizing) that there is not a lot of tooling available, the API isn't stable yet, and there aren't many libraries available.


My apologies, I should've checked, but maybe you could offer some of these caveats as a contribution. Your language was convincing enough that I thought it was theirs.


Where is that list?



Fun fact: the person who created Pony, Sylvan Clebsch, has been working on a Microsoft Research project called Verona. From its README [0]:

> Project Verona is a research programming language to explore the concept of concurrent ownership. We are providing a new concurrency model that seamlessly integrates ownership.

[0] https://github.com/microsoft/verona/tree/master


Quite a bit of PL research is now going in very similar directions; trying to extend Rust's ownership model in a way that can express more patterns of code, while still being reasonably intuitive and ergonomic. (The PL community is already familiar with separation logic, which is quite general and powerful. The assumption is that a middle-of-the-road approach could be very helpful.)


Without bothering to look at it first, that sounds like interesting things will come from this. Let's hope they don't share the fate of Singularity/Midori.


A previous conversation: https://news.ycombinator.com/item?id=17195580

On a sort-of-related topic: I've been checking out the Inko programming language, which has goals/ideals similar to Pony's:

https://inko-lang.org/

Inko lists Erlang and Pony as inspiration for its concurrency model:

"Inko uses lightweight processes for concurrency, and its concurrency model is inspired by Erlang and Pony. Processes are isolated from each other and communicate by sending messages. Processes and messages are defined as classes and methods, and the compiler type-checks these to ensure correctness."


Building a typed, OO language inspired by Erlang on top of Rust seems like a bit of a platypus if that makes sense. I'm really wondering who is using this other than former Pony users.


Looks cool to me. I really wish they had standard library documentation and better editor support, but it's obviously waaay too young for this to be critical. I can see this maturing into a really nice web-backend language though. Typed Elixir with Rust error handling and algebraic types? Sign me up.


I always wonder when I come across a text like this what a lightweight process is. Is it different from just a thread? Or is it a type of lightweight/virtual thread?


Yes, it is a lightweight or "green" thread: coroutines that are scheduled by the language's runtime across a pool of multiple OS threads, implemented using an event loop that resumes a coroutine whenever the I/O it was blocked on becomes available. It's sort of like if, in Node, all the async/await keywords were implemented for you automatically on every I/O op.

Some languages (Erlang, Go, Haskell) have green threads with preemptive scheduling even in CPU-bound loops (though Go only preempts at function boundaries IIRC), but Pony does not; actors have to cooperate.


> though Go only preempts at function boundaries IIRC

I think this was improved in more recent Go versions, and now it is truly preemptive.


There are multiple definitions of LWPs. I think the way it is being used here is a user-scheduled thread. In the old Unix days it meant "kernel thread".


I looked into pony a couple of years ago. The type system was gorgeous, but the library ecosystem was very spartan.

I hope they get there eventually; I liked the language itself a lot. The learning curve is very steep at first, but I felt that I got over the hump after a couple of days playing with it after work.


To me, the interesting thing about Pony is its system of reference capabilities:

https://blog.beardhatcode.be/2018/10/pony-capabilities.html

They feel a lot like Rust's system of ownership and borrowing, but oriented more towards safe communication in an actor system rather than safe memory management.


It seems to me this shares a lot of similarities with Elixir. The biggest thing that stood out to me is that Pony appears to be object-oriented in approach, whereas Elixir is functional in approach.

Aside from Elixir having a bigger community, the core goals and purpose seem very similar to me.


I used Pony for a small hobby project around 2 years ago. It looked promising and I had a lot of fun using Pony. My biggest gripe was the lack of high-quality materials to actually learn the language, especially since some concepts were new to me. The community on Zulip, including main contributors like Sean Allen, was very welcoming and helpful.


Minor nit: it takes far too many clicks to see what the language looks like.


Came here to say this too; I couldn't find any code samples after 5 clicks and gave up.


I went much deeper than 5 clicks, into tutorial, about, getting started - no code.

Only thing remotely like the syntax is the cheat sheet.


It didn't take me too many clicks; from ponylang.io I clicked "try it in your browser" and it dumped me into a hello world.

Alternatively there is an examples folder on the linked GH page.
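
If it saves anyone a click, the hello world in question is only a few lines; roughly this, using the standard Main/Env entry point:

    // A complete Pony program: the runtime starts by creating the Main actor.
    actor Main
      new create(env: Env) =>
        // env.out is an output-stream actor; print sends it a message.
        env.out.print("Hello, world!")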



>Actors themselves, however, are sequential. That is, each actor will only execute one behaviour at a time. This means all the code in an actor can be written without caring about concurrency: no need for locks or semaphores or anything like that.

I mean, sure, it's nice not to have to build this behavior in Java, but it can be built in Java (and I and many others have), and then there's still Java's massive code base available. I have yet to do it in Rust, but Rust probably has great support for just this.

But let's not pretend that this solves all problems related to "concurrency", such as race conditions. In fact it specifically only solves problems where the individual state of an actor is completely independent of any other actor. Sure, that's a lot of places if done right, but in those places, a queue-driven FSM in Java does the job.


It does solve all the problems related to concurrency, because there is no blocking IO. Blocking is forbidden. Pony guarantees freedom from race conditions and deadlocks.


I was reading the docs on actors:

https://tutorial.ponylang.io/types/actors.html

And I wonder: what makes Pony actors different from Go's goroutines?


Pony actors:

1. Have a typed, nominal interface (called behaviors)

2. Have state

Those are probably the main differences, just with regards to actors.
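
To make that concrete, here's a minimal sketch of what those two points look like in Pony (the Counter name is just illustrative): behaviours declared with "be" form the typed, nominal interface, and fields hold per-actor state that only that actor's sequentially-executed behaviours touch.

    actor Counter
      // Per-actor state: only this actor's behaviours touch it, and they
      // run one at a time, so no locks are needed.
      var _count: U64 = 0

      // Behaviours are the typed interface; calling one enqueues an
      // asynchronous message for this actor.
      be increment() =>
        _count = _count + 1

      be report(out: OutStream) =>
        out.print(_count.string())

    actor Main
      new create(env: Env) =>
        let c = Counter
        c.increment()
        c.increment()
        c.report(env.out)

A goroutine, by contrast, is anonymous: you hand it a closure and whatever channels it should communicate over.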


In general, your latter question is better asked as "what makes actors different from threads?", because in practice a goroutine is just a thread.

(What makes Go different from C++ in the 1990s is more community practice around what concurrency patterns are most commonly used, rather than technical differences in the language. Technically Go is just a mutable threaded language; for this purpose details about how the threading is accomplished are not that important.)

In general the big difference with actors is that they are isolated from each other in some manner that means they can't just reach in to some other actor's memory space and manipulate the memory concurrently.

Languages are starting to blur the lines here. Rust's lifetime annotations are not on their own sufficient to call Rust an "actor-based language", but the fact that they also prevent threads/async processes from just reaching in and manipulating another thread's/async process's values unexpectedly means you've certainly taken a big step in that direction. And while Go provides no language-level technical support for actors, I still have a lot of things that are de facto actors in Go, which use scoping and export rules to confine the ability of other goroutines to interfere with a particular "actor"'s values, combined with an API that enforces message-based communication even though it works through what look like normal methods. This is not enforced at all technically by the language, but it works better with community code than it does in older thread-based languages, because the Go community is more recent and you can generally rely on libraries using somewhat more modern concurrency primitives, like providing services based on actors or actor-like objects.

There are some things in programming where if you mostly follow some practice, you get most of the benefit. There are other things where the benefit only kicks in if you do it almost entirely correctly; an example of the latter is Haskell's rigid insistence on functional programming and immutability. I'm not convinced that it's worth doing 90% of that, but when you fully commit and build the language around it, there is an interesting spike in capability at the end.

After programming in this space for well over a decade now, I kinda feel like actors are in the former case. I'm not sure you truly need a full language-level dedication to them to get most of their benefits. There is a last level of confidence and surety you can get in a language like Erlang, but I still think in practice that's just a last incremental benefit rather than a sudden burst in utility as you get to 100% purity. I think it's fine to program an actor here and an actor there in other languages and you get the benefits right there on the spot. Even what benefit there is, things like Rust gain in a different way. Still, I've been watching Pony with some interest.


I have to disagree pretty strongly about the lack of a "sudden burst in utility" between the non-pure actor model and the pure actor model. Erlang's (and Elixir's) VM, the BEAM, benefits greatly from knowing, with absolute certainty, that it is impossible to express shared state between two actors.

First off, its garbage collection system: in Erlang, most actors don't get garbage collected at all until the entire actor is terminated and its memory freed. Long-running processes get handled with a little more grace than that, but it's very performant compared to Java's constant scanning.

Likewise, certain language design decisions allow for optimizing the scheduling system: most obviously, Erlang's complete lack of loops. Want to repeat a computation? Recurse. That choice allows the scheduler to be far more intelligent, because each new function call provides a great opportunity for the process's thread to be co-opted.

Another example is Erlang's seamless distribution: since no threads can share memory anyway, there's not a fundamental difference between them running on two different CPU cores or two different machines, save for some added latency.

And, of course, the fault tolerance story. The guaranteed lack of shared state between two actors means that the VM can be certain the blast radius of a failing actor is limited to its context. Wipe its memory, move on. This failure handling model turns handling the error conditions of "invalid input", "missing resources", and "Frank from IT has finally snapped and taken a sledgehammer to one of our servers" into qualitatively the same thing: one or more actors have failed, their supervisor needs to respond appropriately.

All these benefits would be diminished or at least qualitatively different (for the worse) in a language that wanted to walk the middle ground.


Minor nit: In a language like Erlang, function calls (including tail recursion) are instrumented to implement things like preemption, GC, and statistics updates. But loops (backward branches) can just as easily be instrumented, and various imperative language runtimes do so.

Erlang is somewhat unusual among contemporaries in how it multiplexes actors onto system threads. This has historically been quite difficult to get 100% right because even if you manage to make all system calls non-blocking (e.g. TCP socket I/O), "non-blocking" disk I/O can still stall computation for a relative eon. I'm pretty excited about io_uring because it provides a general solution to this problem.


As far as I know, the biggest difference between Erlang's GC and other GC algorithms is that the former only has to work on a small local heap.

This also works to a degree in Java with thread-local allocation buffers. Objects get allocated on a thread-local basis and only move to a shared heap generation when they have lived long enough, or are being shared with another thread.


> As far as I know, the biggest difference between Erlang's GC and other GC algorithms is that the former only has to work on a small local heap.

This is also the biggest potential upside of "pluggable", optional GC in a language like Rust. In most programs, the size of a group of references that might form an ownership-relevant cycle is naturally quite small; for the most part memory can be managed with simple RAII, and even refcounting (where multiple "owners" might be extending the lifecycle of a single object) really is quite rare.

Leaving open the possibility of GC allows for expressing more programs, without reducing performance.


> for the most part memory can be managed with simple RAII, and even refcounting (where multiple "owners" might be extending the lifecycle of a single object) really is quite rare

This is absolutely not my experience. I believe there is a bias here between developers using mainly low- or high-level languages, but these nested lifetimes are common in the former because these developers have learned to form such lifetimes, and are biased towards those.

In plenty of areas lifetimes are very very dynamic (any language interpreter/symbolic processing, but even web backends).


Keep in mind that in most GC languages every reference between data objects is inherently managed by GC. That's where much of the overhead comes from, needing to repeatedly trace the full set of program references. Whereas it's absolutely not the case that most references in a real-world program will even be relevant to object lifecycle, let alone be involved in a case where such lifecycle is "dynamically" being extended by multiple objects.


> That's where much of the overhead comes from, needing to repeatedly trace the full set of program references

No modern GC naively goes over the full heap; generational GCs basically group objects together and cache their inter-generational references.

Also, I don't really get what you mean. What reference could there be - regardless of language - that is not related to the life cycle of the targeted data? Such a reference could only be invalid in that case, so it's the inverse of the life cycle, which is absolutely related. Sure, a pointer can be encoded in many smart ways, but in the end it either points to a semantically correct data location or not.


Generational GCs simply have multiple "generations" assigned by age, which get traced at different frequencies - the "younger" generation more frequently than the "older" ones. Any caching involved is strictly temporary and does not obviate the need to "go over" the full set at some point. The alternative is a lot of memory overhead.

> What reference could be - regardless of language - that is not related to the life cycle of the targeted data?

If a program reference can be statically proven to never outlive the object it targets, it's per se irrelevant to that object's lifecycle. That's what the borrowck pass in Rust is all about.


Indeed, so generational GCs don’t in fact scan the whole heap all the time, which was my point.

That's just an optimization. You explicitly made the lifecycle requirement part of the semantics.


Re GC and scheduling: You're not going to like this, but like I said, I worked in this space for many years, and my assessment is that in 2022, these are not relevant. It might be relevant if Erlang was a much faster language, but it's not. It doesn't matter to me that Erlang's GC is hypothetically faster because of its actor model when Go is just straight-up so much faster across the board that it doesn't matter.

Whether a fully compiled language could recover this I don't know, but I doubt there's any room in 2022 to spank a modern GC by more than a few percent as a result of this. By the time you're getting to the scale where this is a problem, everything's a problem anyhow.

Likewise for the "seamless distribution". I don't need it, because the direction everyone has gone is to use other messaging busses like Kafka or the literally dozens of similar products that exist now. I don't need actors to achieve this. The historical accidents of how Erlang achieved this goal are not necessary. This is a classic Erlang mistake, to think that only that one precise space in the design space can achieve these goals, and not noticing just how many other alternatives in this space have straight-up surpassed Erlang in the meantime.

Fault tolerance is another thing that I've been operating on just fine with an 80/20 solution. I actually use my own version of supervisor trees, and while I lack language guarantees of their safety, the truth is they work just fine even so. Language guarantees aren't everything.

It's not that you're wrong in the sense that those benefits don't exist; it's just that, in 2022, they're not very interesting. It's not 2005, when Erlang was the only practical solution to these problems. It's 2022 and there is an abundance of other solutions, and while they may all have corresponding disadvantages of their own, the probability in 2022 that Erlang is the best choice for any given task really isn't all that good any more. I happily trade away the marginal improvements you mention for 80/20 solutions in Go, while claiming Go's very significantly better performance across the board, for instance. Rust, Node (as much as I otherwise despise it), and many others provide alternatives in the space.

What Erlang gives up to force things into "everything is an actor" aren't things I want to give up anymore. A worthy experiment for the time but I view it in much the same way I view Java's "everything has to be in a class, even pure functions and stuff that ought to be standalone code". Nice try, but there's a reason numerous languages have been introduced since Java/Erlang and don't copy this particular purity. You just don't get the promised win. It's great that we tried it. I wouldn't know we don't get the promised win if nobody had. But now we have, and there isn't some amazing burst of benefit at that last step of purity.

(I keep banging on pure functional programming because it's honestly the only one that comes to mind. In general, you should be suspicious of anyone who claims that particular shape of purity vs. benefits, in practice few things work that way and you should only trust practices that yield marginal benefits as you marginally use more of them, and expect that somewhere before 100% purity the benefits will start toning back down again. Maybe software transactional memory is the other exception that comes to mind; it only works at all with near-total dedication.)


> It doesn't matter to me that Erlang's GC is hypothetically faster because of its actor model when Go is just straight-up so much faster across the board that it doesn't matter.

> Whether a fully compiled language could recover this I don't know, but I doubt there's any room in 2022 to spank a modern GC by more than a few percent as a result of this. By the time you're getting to the scale where this is a problem, everything's a problem anyhow.

It's still relevant if you need consistency. Any GC language eventually has to defragment the heap, and any pervasive-shared-mutability language pretty much has to stop the world to do so. For all that Go is a high throughput language (something that's overstated IME), it's not suitable for realtime, and Erlang is.


The benefit would be if there was a language where everything is an actor. It would greatly simplify your programming model.

I'm thinking of the early versions of Smalltalk which tilted towards this direction. Why did they abandon 'active objects'? I guess the hardware just wasn't there yet.


There are languages where everything is an actor, like Erlang. I said what I said because I worked in them professionally for many years. While there is incremental benefit to being 100% actors versus 90% actors, I don't think it's that impressive of an incremental benefit, and is easily overwhelmed by many other factors. That's why I work mostly in Go and borrow actor structures when helpful. That turns out to be every non-trivial program I've written so far... but there's no particular benefit and often non-trivial costs in trying to write the whole program as actors, rather than using actors as useful servers. The simplification of being able to look at something and just know it's an actor I don't find very helpful. It's not hard to document these things, or see them in the structure of the code.

By contrast, 90% pure functional programming is a pretty terrible paradigm. You really need to push that number up to get the benefits. As you approach 100% I think the benefit starts to take off. But, speaking very sloppily since "percent pure functional programming" is kind of difficult to precisely define, 90% is like the worst case scenario. 20%, which is to say mixing it in to other paradigms when useful without commitment and without paying much price and just harvesting the low hanging fruit can be very nice, and 99.9% can be very powerful, but 90% is the worst case, where you're paying most of the price but getting few of the benefits of true commitment. This is what I mean by seeing a lot of benefit in going the full way. I don't see this benefit to actors. They work just fine embedded into large programs and deliver the vast bulk of their benefit without having to structure the entire program around them.

Definitely still worth learning; I consider them a core technique in concurrency programming. When you can easily afford the message marshaling and concurrency expenses, they are one of the easiest ways to get concurrency without complexity. And most of the time, you can afford those costs.


Rust effectively accomplishes this, because mutating a value that's accessible by a different thread requires both interior mutability and a separate Sync trait, meaning that accesses will be properly synchronized. Some types implement this (atomic types, Arc, Mutex, Rwlock) but the default is not to, so the default is that data may only be mutated by a single thread at any given time.


I think there is more to actors than just single-thread access. Namely, only the code of a given actor(-class) should be able to read and write the internal state of "instances" of that class. Why? Because then only the code-unit/module that defined the state variables can depend on their definition. A bit like how an object's internals can only be accessed by methods of its class.


I am interested in Pony due to the compiler maintaining safety while multithreading.

I am working on a multithreaded language that has its own interpreter and compiler that codegens targeting it.

I use message passing, but with shared-memory mailboxes using a lock-free algorithm.

I am trying to solve a multithreading problem which is the sharing of objects between threads. I want shared memory semantics between separate interpreters but avoiding copying.

For an interpreter to be aware of an object it needs to have a pointer to it. So I need to update some bookkeeping for the object in order to refer to it.


(This is all based on my recollection) Pony does this through reference capabilities, ownership, and garbage collection.

Basically in Pony you can describe memory as:

1. Single Ownership, Immutable

2. Single Ownership, Mutable

3. Shared Ownership, Immutable

4. Shared Ownership, Mutable

If you're moving (1) or (2) that's basically free.

If you're sharing (3) you just need an atomic reference count.

If you're sharing (4) you need an atomic reference count and some sort of synchronizing primitive, such as a mutex (or maybe the runtime can understand how to schedule things such that they don't race, don't know).

In Pony these reference capabilities all have names that I don't remember and there are also ways to "recover" a capability, basically translating one into another, which was one of the more complex features for me to get my head around.

That's basically how I recall it working, although the reference counting implementation is very interesting and they wrote a paper about it (Orca).


This isn't quite right. The key thing is that Pony uses the actor model, where an "actor" is an object, a green thread, and an MPSC queue, all bundled together into a single conceptual unit. These MPSC queues are the only synchronization primitive; there aren't mutexes (which means that Pony programs can't internally deadlock, though they can livelock). For this reason, for any given reference (pointer) to a piece of data, you can have any two of mutation, aliasing, and concurrency (i.e., sending the reference to another actor's queue, which doesn't count as mutating the actor). But you can't have all three, because that would allow data races.

Consequently, three "reference capabilities" fall out of this design:

- "iso": allows mutation and concurrency, but not aliasing.

- "val": allows aliasing and concurrency, but not mutation.

- "ref": allows mutation and aliasing, but not concurrency.

The other three are more for generic kinds of programming or to facilitate more complicated tricks:

- "box": only allows aliasing, without mutation or concurrency. Subtype of both val and ref.

- "trn": allows mutation and aliasing, but the aliases are box and so don't themselves allow mutation. Also, you can subsequently change it to either ref or val, to get either mutable aliasing or concurrency (but not both).

- "tag": allows aliasing and concurrency, but not mutation, and (unlike any of the others) also doesn't allow reading the data. The only things you can do are pointer comparisons, and sending something to the referent's queue if the referent is an actor (again, this doesn't count as mutating the actor). Subtype of all the other ones; they all allow tag aliases even if they don't otherwise allow aliasing.


Most of these would seem to be allowed under Rust semantics, except that Rust does not conflate references/pointers and MPSC channel ends; there's no notion of "identity" for the sending half of a MPSC channel like there seems to be for a Pony ref. But if you have one of these you can clone it and send it wherever.


The main thing you can't do on the Rust side is mutable aliasing. Most languages allow this, and Pony also allows it with ref and trn, so long as the aliasing is all within a single actor, precluding data races (though not other kinds of aliasing bugs). Rust does not allow this, in part because making it work in a memory-safe way in the presence of algebraic data types requires a runtime garbage collector, which Pony has and Rust doesn't. Some use cases for mutable aliasing are addressed by Rust's interior-mutability types like Cell and RefCell, but these have limitations compared to full unrestricted mutable aliasing.
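
A tiny sketch of that mutable-aliasing point (made-up names again): two ref aliases to the same object are fine in Pony as long as they both live inside one actor, because that actor's behaviours run one at a time.

    class Node
      var value: U64

      new create() =>
        value = 0

    actor Holder
      be poke(out: OutStream) =>
        let a: Node ref = Node
        // A second mutable alias to the same object: legal, because only
        // this actor's sequential code can ever reach it.
        let b: Node ref = a
        b.value = 42
        // The write is visible through the other alias too.
        out.print(a.value.string())

    actor Main
      new create(env: Env) =>
        let h = Holder
        h.poke(env.out)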

Other than that, most of what Pony does could, in principle, be written as a library in Rust, though the lack of language-level tracing garbage collection would mean that the ergonomics would be a lot worse and you wouldn't necessarily have the same guarantees around things like deadlock prevention.


I've been working on a multithreaded interpreter language which uses a lockfree algorithm to communicate.

I also have a buggy incomplete implementation of runtime reference passing.

The idea is that you request access to some data and then receive a reference that you can read or write to; then you pass the reference to the next requester.

In theory everyone can read at the same time and writers can run sequentially. And alternate between reading and writing. This should prevent data races.

I want to avoid contention and blocking and synchronization cost and complexity. I think the compiler can generate thread safe code with the right design but it is not trivially easy.


Ah, ok, I had assumed there was a shared mutable version but couldn't remember. OK.


Correction: "subtype" should instead read "supertype".


Thanks for that summary.

I'm working on a system where references can be passed around at runtime. Can references in Pony be passed around at runtime between threads?

I'm working on ideas such as:

- all threads reading or writing at the same time. This is how I think GPUs work.

- I am working out how to parallelise the bank transfer problem. You have bank accounts of money and want to transfer money between them but never deduct below 0. So you need consistency on an account's balance. I was looking at sharding the data. I want to get more than 1 million transfers a second.

This article is good. https://travisdowns.github.io/blog/2020/07/06/concurrency-co...


Worth noting that Rust also supports "Shared Ownership, Immutable" data - that's what Arc<T> gives you when T has no interior mutability, and thus implements Sync by default. (By contrast, a T with interior mutability needs explicit atomicity or synchronization, or else cannot have its access shared by multiple threads.)


This is powerful.

I'm not sure how to implement shared memory semantics in an interpreter since each interpreter has its own state.


I wish there were some code samples, just to get an idea of what it looks like. I browsed the website but could not find any, except for the playground with only one "hello world".


There are a few, but you are right that they could be more accessible: https://github.com/ponylang/ponyc/tree/main/examples

(I had the same problem)


is it just me, or is it super difficult to find code samples - even in the tutorial?


No, it's not just you. Huge mistake, in my opinion. If I see a page for a new programming language, I want to be one click away from code samples.



Thank you! I realized that the left-side menu is actually a TOC, which for some reason I didn't register at first.


The fourth link on the home page is "Example Pony Applications"


Indeed, like vapourware almost.


If you're curious about how Pony works, I did a YouTube stream (1) a few years back with Sean Allen from Wallaroo Labs. He patiently walked me through many of Pony's features.

1: https://www.youtube.com/watch?v=s4W4Jb-AAVI


Related:

Ask HN: Why didn't Pony take off? - https://news.ycombinator.com/item?id=31606084 - June 2022 (3 comments)

We moved from Pony to Rust - https://news.ycombinator.com/item?id=28777306 - Oct 2021 (175 comments)

Pony – High-Performance Safe Actor Programming - https://news.ycombinator.com/item?id=25957307 - Jan 2021 (152 comments)

Pony, Actors, Causality, Types, and Garbage Collection - https://news.ycombinator.com/item?id=24398469 - Sept 2020 (29 comments)

Pony: Lock-less, data-race-free concurrency - https://news.ycombinator.com/item?id=24201754 - Aug 2020 (1 comment)

Pony 0.33.1 - https://news.ycombinator.com/item?id=21784698 - Dec 2019 (2 comments)

Pony 0.29 - https://news.ycombinator.com/item?id=20370448 - July 2019 (19 comments)

Pony 0.27.0 has been released - https://news.ycombinator.com/item?id=19285762 - March 2019 (1 comment)

Fearless Concurrency: Clojure, Rust, Pony, Erlang and Dart - https://news.ycombinator.com/item?id=19241427 - Feb 2019 (143 comments)

Pony 0.25.0 released - https://news.ycombinator.com/item?id=18212633 - Oct 2018 (38 comments)

Show HN: Pony Programming Workshop - https://news.ycombinator.com/item?id=17619483 - July 2018 (5 comments)

Introduction to the Pony programming language - https://news.ycombinator.com/item?id=17195580 - May 2018 (72 comments)

The Snake and the Horse: How Wallaroo's Python API Works with Pony - https://news.ycombinator.com/item?id=16768706 - April 2018 (8 comments)

Some high level information about the Pony programming language - https://news.ycombinator.com/item?id=16619264 - March 2018 (10 comments)

Why we wrote our Kafka Client in Pony - https://news.ycombinator.com/item?id=16264845 - Jan 2018 (95 comments)

Dynamic Tracing a Pony and Python Program with DTrace - https://news.ycombinator.com/item?id=15953050 - Dec 2017 (2 comments)

Why we used Pony to write Wallaroo - https://news.ycombinator.com/item?id=15558051 - Oct 2017 (84 comments)

Pony Performance Cheatsheet - https://news.ycombinator.com/item?id=14999899 - Aug 2017 (35 comments)

Pony: Combining safe memory sharing with Erlang-like actors - https://news.ycombinator.com/item?id=14676505 - July 2017 (62 comments)

An Early History of Pony - https://news.ycombinator.com/item?id=14280565 - May 2017 (8 comments)

Pony language 0.11.0 released - https://news.ycombinator.com/item?id=13846063 - March 2017 (52 comments)

On the State of Pony - https://news.ycombinator.com/item?id=12331458 - Aug 2016 (40 comments)

Using Pony for Fintech [video] - https://news.ycombinator.com/item?id=11849579 - June 2016 (6 comments)

Using the Actor-Model Language Pony for FinTech - https://news.ycombinator.com/item?id=11297836 - March 2016 (1 comment)

Pony Patterns: Waiting - https://news.ycombinator.com/item?id=10927475 - Jan 2016 (3 comments)

Pony is an open-source, actor-model, high performance programming language - https://news.ycombinator.com/item?id=10902906 - Jan 2016 (57 comments)

Inside the Pony TCP Stack - https://news.ycombinator.com/item?id=10762196 - Dec 2015 (1 comment)

Pony – High Performance Actor Programming - https://news.ycombinator.com/item?id=9482483 - May 2015 (124 comments)


Isn't that language dead? Some startup used a new language because apparently nothing else in the world could fit their use case (which was simple), only to move to Rust after the shocking discovery that it was a terrible idea from the start.

https://web.archive.org/web/20171028135810/https://blog.wall...

Then:

https://www.wallaroo.ai/blog/wallaroo-move-to-rust


Development has definitely slowed and the creator has moved on to other things.

I've played around with it and my main takeaway is that it's easier to create an actor system in an existing language than to create an entire ecosystem around Pony.


Can you please make your substantive points without breaking the site guidelines? They include:

"Don't be snarky."

"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html


I'm not sure, but the last commit on the github repo was 12 hours ago.


They used Pony as a good fit for their first project. They later made a second project with better market fit, and for that project Rust was a better fit.


Last time I checked, it was a language backed by a research group.


I wonder if there's somewhere that people can propose a name for things and people can tell them why they might want to use something else - 'Pony' is fairly common slang for 'crap' in London / Southern England.


I've lived in London and the south pretty much all my life, and I'm not sure I've ever heard it used to mean that. It's rhyming slang, right? Not sure how many cabbies are in the market for memory-safe systems programming languages really.


Also in/from the south (though outside London), and was not familiar with that usage at all. A search does turn up some confirmation, though (e.g. https://www.phrases.org.uk/meanings/287275.html).


The language was developed by people in London.

> The big reveal on why it’s called “Pony”? Back in the flight sim days, when I would make my friends groan by telling them yet again about all the things I was going to do when I wrote a programming language, one of the people I would tell was Nathan Mehl. And one time, when I gave him yet another laundry list, he said: “yeah, and I want a pony”.

But perhaps they didn't want to admit "pony" is also a reference to the slang term...

https://www.ponylang.io/blog/2017/05/an-early-history-of-pon...


I'm not from London myself but I've never heard a Londoner [or southerner] use Pony in that sense. I've often heard it used for an amount of money. £20 I think.

When I read your opening 'I wonder if there's somewhere that people can propose a name for things and people can tell them why they might want to use something else...' I thought you were going to go on to make a point about not using everyday words [or single letters] as the name for your new programming language, as they'll be unsearchable. And everyone will end up tagging '-lang' onto the end of the name, to discriminate.

EDIT: I was a fiver out. A 'pony' is apparently £25:

https://www.cockneyrhymingslang.co.uk/subjects/money


A pony is a small horse dude


No, a small horse dude is a gnometaur. You're thinking about a sporty tiger mascot for a breakfast cereal.



