I'm not sure a lot is gained from this comparison (caveat: I've used neither language in anger).
PG states that Arc is intended as a 100-year language, so getting your core axioms and constructs right is pretty key. This kind of approach has a higher activation energy, but momentum can continually build.
The philosophy for Ruby is getting things done now, although arguably it gained a lot from its time in the Japanese niche. It's a fantastically productive language and favoured by a lot of productive "let's get it done" people (not to say Arc isn't...). This philosophy accelerates quickly, but tends to eventually hit conceptual limits. In my view this happened with Java - but that doesn't mean it's a bad thing.
I'm struggling with some of the same problems (in microcosm, maybe nanocosm in comparison). For example, if you build an exception mechanism, do you make it a maximally flexible signal system, or do you constrain it to certain norms?
The latter favours rapid growth because all your libraries have consistent exception mechanisms: I can build a library and integrate it easily with another library. The same applies to a whole bunch of other constructs - network models, concurrency, and so on. So what you have done is constrain X to allow growth in Y... (A lot of what a language is about is what it doesn't let you do, rather than what it allows.)
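To make the "constrain it to certain norms" option concrete, here's a rough Ruby sketch (Ruby only because it's the other language in the comparison; the library names are invented): two libraries that both follow the shared convention of raising subclasses of StandardError, so one rescue clause covers either.

    # Sketch of the constrained approach: both (invented) libraries
    # raise subclasses of StandardError, the shared norm.
    module HttpLib
      class Error < StandardError; end
      def self.fetch(url)
        raise Error, "connection refused to #{url}"
      end
    end

    module JsonLib
      class Error < StandardError; end
      def self.parse(text)
        raise Error, "unexpected token" unless text.start_with?("{")
        { "parsed" => true }
      end
    end

    begin
      JsonLib.parse(HttpLib.fetch("http://example.com"))
    rescue StandardError => e
      # One handler covers both libraries because they share the norm.
      warn "request failed: #{e.message}"
    end

The caller never needs to know which library failed - that's the interoperability win you buy with the constraint.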
The former may result in some nasty mutations in the short term - for example, libraries that aren't interoperable because they use different exception mechanisms. However, if you take a long-term evolutionary view, this approach may yield a much more innovative and stable mechanism in the end.
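And the flexible end of the spectrum, again an invented Ruby sketch: each library picks its own signalling convention (one uses throw/catch, the other returns status pairs), so the caller has to speak both dialects and the two failure paths don't compose.

    # Sketch of the flexible approach: each (invented) library signals
    # failure its own way, so nothing is interoperable.
    module LibA
      # Signals a miss with throw/catch rather than an exception.
      def self.lookup(key)
        throw :lib_a_miss, key unless key == :known
        "value"
      end
    end

    module LibB
      # Signals failure by returning an [ok, value_or_message] pair.
      def self.lookup(key)
        key == :known ? [true, "value"] : [false, "miss: #{key}"]
      end
    end

    # The caller needs a different idiom per library.
    missed = catch(:lib_a_miss) { LibA.lookup(:unknown); nil }
    warn "LibA signalled a miss for #{missed.inspect}" if missed
    ok, result = LibB.lookup(:unknown)
    warn "LibB signalled: #{result}" unless ok

Each convention might be the right one for its library, and maybe the best one wins out eventually - but in the meantime every caller pays the integration cost.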
So really, they are just two different philosophies with different start and end points - that's how I read PG's quotes too.
But one of the core constructs of a language is "how libraries work".
In practice Arc could have plenty of libraries if it had a process for accepting contributions from the community. Right now Arc is technically open source, but it doesn't work like most open source projects, in the sense that there is no way for outsiders to contribute. That's okay, but encouraging a community of programmers to use it and then not letting the excited ones help in any way is kind of a tease.
Does a hundred year programming language make sense at all? Is it possible to have the faintest clue what people will want to be programming then? Feel free to point me to prior responses :)
I was hoping from the title that this would be an article about language features; instead it's more like "what Kubrick should have learned from Spielberg (in terms of box office success)".
I don't write Ruby personally; are there actual features (as opposed to ways of generating ephemeral hype) that could make Arc better?