Regarding the @ positional-argument syntax: I mean, my use of Ruby dropped considerably years ago, but from a Ruby-philosophy standpoint it looks fantastic to me. Having to constantly make up (usually single-character anyway) names for those parameters in map/filter chains was one of the ugliest parts of the language, especially when there was no actually-good name for things like temporary tuples, and the Smalltalk-derived (I think) syntax for block parameters is so visually heavyweight: { |…| … } { |…| … } { |…| … } where all the stuff in the || is close to useless, as opposed to just { … } { … } { … }. Method references passed with &, while separately useful, are not a good substitute; they're often less readable (to me) because the form of the expression is obscured, and they break down much faster, for instance as soon as you need any single other argument, rather than just when the block gets complex enough that the parameter names become important for readability.
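A quick sketch of the trade-off described above (the values and methods here are just illustrative): Symbol#to_proc covers the zero-argument case nicely, but as soon as the step needs any other argument you're back to declaring a block parameter.

```ruby
words = ["foo", "bar", "baz"]

# A method reference works while the method takes no arguments...
shouted = words.map(&:upcase)

# ...but &:ljust can't carry the extra arguments, so a named block
# parameter reappears even though "w" adds no information:
padded = words.map { |w| w.ljust(5, ".") }

p shouted # ["FOO", "BAR", "BAZ"]
p padded  # ["foo..", "bar..", "baz.."]
```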
I can appreciate that it moves farther away from “requiring things to be explicit means you get fewer accidental barriers to moving code around or increasing/decreasing complexity”, though, as some of my recent work has been in Java and I've had an eye for that aspect of it. But the next time I do anything in Ruby I'll be remembering that this is there.
I like Kotlin's solution for this: if there is only one argument in the lambda/block and the type is clear from the context, you don't have to declare it and can use the 'it' keyword instead.
Wow, that's splendid! In a sense, it's a very nice dual to "this", which is also a keyword everywhere and not something like $0 or something. I wish more languages had this. Or, it.
I'm not positive, but Groovy may in turn have been inspired by Perl's $_, as in 'map { $_ + 1 } (1..10);'.
If I recall correctly, the canonical name for that parameter is 'it' as well (though I've never met a Perl programmer that calls it that . . . er, calls it 'it' . . . er, you get the idea).
Sounds a lot like Scala's underscore (or at least one of its usages; underscore can mean a few things depending on context). Haven't used Kotlin myself, but I'd agree that it sounds like a good thing.
Allowing @1 and @2 instead of |a, b| is a horrible change to Ruby. In the Ruby tracker, Matz himself states this is a compromise. So now Ruby is getting half-baked compromises that look out of place, all to appease some people making requests.
If the requested functionality is so pertinent, then a proper solution should be made, in line with Ruby's style. Not cryptic sigil soup with @ and .:, etc.
If a feature exists, it will get abused down the line. I do not want to read Ruby codebases that have @1, @2, @3, etc instead of named variables. We all know that temporary code is permanent. Just making this syntax possible is terrible.
I have to say I'm disappointed. I have enjoyed using Ruby for the last ~5 years but I disagree strongly with this syntax change.
On a team, I would prefer the former over the latter.
As a single contributor that's trying to hack together an idea, the second option is not so bad; however, it will probably need to be changed far off in the future.
I think named block parameters are cool in the sense that they enable some expressiveness but bad in the sense that they lose some readability. I won't know how I truly feel until I have to maintain/read a large codebase where they are used.
How do you feel about the Groovy/Kotlin style that would translate (roughly speaking):
[1,2,3].map{|n| n * 2}
to:
[1,2,3].map{it * 2}
?
It's not generalizable to multi-argument lambdas, but multi-argument lambdas are the ones that are more important to name. (e.g. with reduce, {|memo, it| memo += it})
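Roughly, the distinction being drawn (example values mine): the single map argument is obvious from context, while reduce's two parameters play different roles and genuinely benefit from names.

```ruby
# One obvious argument: a name adds little.
doubled = [1, 2, 3].map { |n| n * 2 }

# Two arguments with different roles: the names carry real information.
total = [1, 2, 3].reduce(0) { |memo, n| memo + n }

p doubled # [2, 4, 6]
p total   # 6
```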
I don't feel strongly one way or the other to be honest. It looks nice but it's not a syntax that I have been wanting or needing. If it enables people to express themselves better through code, then I'm all for it.
I think as far as Ruby goes, I don't like the idea of introducing a new abstraction for something that already has a solution; however, I wouldn't villify someone for using it unless they wrote something that took a large number of arguments.
Underscore is very commonly used to signify "I don't care about this argument" in Ruby, so at least on its own it'd be confusing. And _1 is a valid local variable name, so it would risk breaking existing code, though I've personally never seen it used, so maybe that'd be an acceptable risk.
I've never really seen a code base that is hard to work with due to "language feature abuse" (excepting inheritance). "Don't use this language feature" or your code will be hard to read is sort of the bogeyman of the programming world.
Code is hard to work with because of too much coupling, or bad architecture, or no tests, or bad abstractions.
> If a feature exists, it will get abused down the line. I do not want to read Ruby codebases that have @1, @2, @3, etc instead of named variables. We all know that temporary code is permanent. Just making this syntax possible is terrible.
@1 and friends, as I see it, are fine in permanent code, in the right role. Like braces instead of do...end, they are good for blocks that amount to single-line lambdas, where declaring and using the variable on the same line is excessive noise that impairs rather than aids visibility. Can it be used in contexts where it's bad? Sure, but the existence of that potential is better than forcing everyone to use a worse solution in a common case.
Ruby has always been a compromise where purity has almost always been sacrificed for convenience. It's not always obvious until you start poking around the murkier parts of the Ruby grammar.
Keep in mind that Ruby already has a ton of Perl/awk-style special global variables (and awk-inspired command-line switches); it's the bastard child of Perl and Smalltalk.
In this case I think that while it's slightly unfortunate to end up with "@", it will overall be clear. So much Ruby code with chained blocks already uses meaningless names on the basis that "everyone knows" what gets passed to map or collect etc. anyway, and especially in a chain where the same values often get passed through multiple steps, a positional argument will make no difference, as long as people strictly use them with common methods with known parameters.
These are just De Bruijn indices! https://en.wikipedia.org/wiki/De_Bruijn_index Far more elegant than literal lambda variables, since they save you the trouble of dealing with variable renaming and/or defining a valid syntax for local identifiers.
Variable names are a good thing. When I look at someone else's code (or my own, 6 months later) I do not want to see @1, @2, @3, and so forth. I don't care if it's some existing mathematical notation concept or not, we're not discussing lambda calculus here.
Naming things properly is a skill we all need to have as programmers. There are (multiple) reasons we don't call things k, d, j, z, o all over the place anymore.
And for the record, I actually prefer λx. λy. x instead of λ λ 2. We do not need to be as terse as possible when writing programs or mathematical statements. This is especially egregious for Ruby, a language that has been valued for being quote, "elegant".
```
a =
  some_list_of_tuples()
  |> Enum.map(&elem(&1, 1))
  |> # more pipeline here
```
The alternative, without De Bruijn indices, is:
```
a =
  some_list_of_tuples()
  |> Enum.map(fn x -> elem(x, 1) end)
  |> # more pipeline here
```
Are you going to tell me that the second example is more readable? Personally, I find that the tiny little elem/2 expression gets lost in the line noise of the closure syntax. Reducing that syntax using an "anonymous closure" like in the first example makes it clearer to me what's going on.
And, as well, in such closures, there really is no "name" for the thing that I'm processing. It's an intermediate in a destructuring expression that I'm only holding onto in order to further rip it apart. (The output has a name, say, `foo`; but if the input is just, say, `{:ok, foo}`... then what is the name of said tuple? `result`? `maybe_foo`?) Whatever name you make up there, it would only distract a future reader by making them think that it might be something important to the domain.
Oh, and also, at least in Elixir, De Bruijn anonymous closures "flow well" with function handles:
```
&List.first/1    # function handle
&List.first(&1)  # equivalent anonymous closure
```
---
But to put a finer point on it: in practice, 99% of the use I get out of (De Bruijn) anonymous closures is just using them as "function handles but with the parameters reordered", i.e. as a way of wrapping a closure in a combinator, without having to remember the names of combinators.
Instead, with De Bruijn indices, such "combinator" effects are self-evident descriptions. Elixir code:
```
&(x.(&2, &1))
```
What is that? An expression that returns a version of the closure x (of arity 2) with the two input parameters swapped. In other words, that's the C (cardinal) combinator, or the Haskell function `flip`. Except you don't need a special name for it; you just describe the effect you want then and there. Anyone who reads it can see what it does. Is the non-De-Bruijn equivalent any clearer?
```
fn a, b -> x.(b, a) end
```
&1 and &2 are metasyntactic variables. a and b here are also metasyntactic variables. There might be a better name for what they are, but frequently there isn't, if this code is e.g. sitting in a library that does something generic with a data structure or algorithm, rather than something specific with business logic.
Personally, if I'm going to be using metasyntactic variables anyway, I prefer to use ones that look different in a way that highlights them as metasyntactic variables. Just like I prefer languages that require some sigil on class-instance variables in a method to differentiate them from lexical variables. It allows both regular syntax highlighting, and the "syntax highlighter in my brain", to work more efficiently.
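For what it's worth, a rough Ruby analogue of the flip example above (a sketch only; Ruby has no capture syntax, so even the "flipped" version needs explicit parameters):

```ruby
x = ->(a, b) { a - b }

# The "flip" effect described inline, without naming the combinator:
flipped = ->(a, b) { x.(b, a) }

p x.(10, 3)       # 7
p flipped.(10, 3) # -7
```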
Yes, this is actually more readable to me (a non-Elixir user). This syntax is explicit about what is happening.
> fn a, b -> x.(b, a) end
Yes, this is more clear!
I'm not going to discuss whether these are good or bad in the context of Elixir, but I do think the former examples are much more clear than the ones with the indices. Since Elixir uses arity notation it makes more sense, at least.
You know, naming things is hard, and it's not reasonable to argue that your code should always be in the optimal state. There are "on the way" states, and it seems like this feature can be used well by saving you from choosing a wrong name too early in the development stages.
Sometimes you haven't decided what a name should be yet. I do not want to see @1 @2 @3 in code 6 months later, either. That's the kind of thing that I'd search for in my codebase before I closed the project, and assign a name before it's too late for me to remember what that bit of code was for. (And look, how handy it's a completely original arrangement of characters that you can even grep for without straining!)
But sometimes you don't have a name ready and it's actually better to defer naming the thing until you know what it is. This feature enables that. You can save yourself from picking a wrong name, come back later when the code around that @# has solidified and it's clearer what to call it. I have never seen this feature before this article, but seeing the ways that people are arguing against it has given me a better understanding of the ways that it can be used well.
I've just been catching up on my tech lore and I'm watching Sandi Metz talks from RailsConfs gone by. There's one talk in particular "all the little things" where the subject is the gilded rose exercise, and how she would refactor it. In the course of the exercise the code complexity goes up (almost doubles) before the gains are realized, and suddenly there's this huge bit of code that can be safely deleted, now the complexity has gone way down and the code is good.
My only point in bringing this up is that there are intermediary states that, if observed in a vacuum, we'd all agree are badly formed and simply not exemplary. This strikes me as an example of one of those features. You can come back and make it better. This is just how it will look "on the way" to something better. (I know, "you won't come back" but that's not an argument that I'm ready to accept.)
> sometimes you don't have a name ready and it's actually better to defer naming the thing until you know what it is
You can do that without this feature; you can use throwaway variable names like "x" and "y".
Taking an example from the post:
> (1..9).each_slice(3).map { |x, y, z| x + y + z }
vs
> (1..9).each_slice(3).map { @1 + @2 + @3 }
The shorthand saves you 7 characters in the rare case when you truly have no idea what to call your block parameters (even a single-character name can be descriptive, eg if x and y are coordinates). That seems minimally useful.
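For reference, the named-parameter version runs with ordinary block destructuring; the shorthand form merely drops the |x, y, z| declaration:

```ruby
# each_slice(3) yields [1, 2, 3], [4, 5, 6], [7, 8, 9]; the three
# block parameters destructure each slice.
sums = (1..9).each_slice(3).map { |x, y, z| x + y + z }
p sums # [6, 15, 24]
```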
Meanwhile, you've made your block parameters look like instance variables, and everyone else (including all tools that work with Ruby code) has to learn why they aren't.
> it's not reasonable to argue that your code should always be in the optimal state.
This is a strawman and not what I believe. I never said your codebase should always be optimal in every way. I believe @1 instead of even the most basic of variable names is actively an anti-pattern.
> That's the kind of thing that I'd search for in my codebase before I closed the project, and assign a name before it's too late for me to remember what that bit of code was for.
Maybe you might, but plenty of other people will not. I have also lied to myself and added "TODO" comments in some small projects I've worked on. Months and years later I sometimes see "TODO" comments because I ended up working on something else or forgot. It's simply way too easy to not go back and refactor or fix things.
There is nothing more permanent than temporary code.
Sometimes you have to go back and rename things multiple times. This is vastly better than what you know full well will happen: People will immediately gravitate towards the easiest option of @1, and then never change it because they've moved onto other parts of the code or different projects, etc.
We should not be adding ugly, terse syntax to programming languages because it's "just more convenient" to not name things. That is not a tenable argument for a permanent change to a mature language. Being lazy is not an argument.
Of course I can. That doesn't mean I actually will go back and fix them.
TODO statements are little lies that programmers tell themselves. "I'll go back and fix this/refactor it, yeah, yeah...". And most of the time, that doesn't happen.
> Maybe you might, but plenty of other people will not. I have also lied to myself and added "TODO" comments in some small projects I've worked on.
So it was cheaper to leave the TODOs in the perfectly OK code; sounds like you dodged a bullet there.
This strikes me as a slippery slope. Start with "@1 is not an appropriate way to represent this variable" then move onto "I actually don't like the name you've chosen" and finally arrive at "this code isn't important enough for me to take care of those TODOs I placed before I'm old and gray, remind me why are we even worried about how this variable is called, now renaming it for the _third time_?"
I'll usually take a collection of "xs" and iterate over each "x" without any pretense that I'm coming back to make it better. We can agree to disagree, but I'm arguing that it's not any better to do that, and it's actually harder to grep for which makes it actively worse. This is a better option.
> I, however, can still read that code perfectly fine and well because it has variable names. Not @1 and @2 for hashes, etc.
If I do {@1 + @2} exactly once is that really harder to read than {|x, y| x + y}?
If I'm doing it everywhere all over my code, sure that's a problem and it will make unreadable garbage. But then again it was already possible to make unreadable garbage code.
In real code with actual logic (not simplified examples like here) the inner variables are typically not named |x, y|. |key, value| (or |k, v|) is one common shorthand, but even those -immediately- denote more semantic meaning than |@1, @2|.
Usually something is being mapped over, or being selected/filtered etc, with a context, so it's useful to have something like |course|, |report|, or |course_name, course_values|, especially for people-who-are-not-me that may be reading the code.
The issue here is that @1 and @2 make it much, much easier to write garbage code. At least with named vars you explicitly have to acknowledge that you're choosing a bad name. @1 makes it so you don't even have to think about it at all. This is a flagrant issue, and doesn't belong in Ruby at all.
{} is for one-liner blocks, stylistically speaking. I occasionally use it for a longer block without refactoring it into a method (and line-breaks are permitted for sure), but typically if the one-liner is above a single line's worth of complexity, it's already on its way to becoming a method.
"In real code with actual logic" this would be a "do-block" or you're violating another stylistic rule, and none of the examples I've seen use numbered parameters with a do-block. Is it allowed? I feel like this is important information that I need to know before I can make a fair judgment about whether this is a "flagrant issue" or not.
Do-blocks with numbered parameters, if it is an allowed way to write a block, would definitely be a terrible idea.
But I don't think it is, I can't find a source that says so, since all of the results for "numbered parameters" seem to be rants against the inclusion of this feature. Have you ever read bikeshed.com? Because that's exactly what we're doing right now.
Even for one liners I think it's a bad idea. It's not that tough to do .map{ |course| course.some_method + some_str } instead of .map { @1.some_method + some_str }.
The latter is extremely ugly and does not belong in Ruby whatsoever. I'm not budging on this.
You can keep your opinion; I don't need to spend any more time trying to convince you. But to be pedantic one last time, you omitted the part of that snippet that would have made it clear the named parameter is not needed:
courses.map{ |course| course.some_method + some_str }
is unabashedly and unnecessarily verbose; I know it's a course because it comes from courses. No need to repeat yourself twice.
courses.map{@1.some_method + some_str}
by comparison is perfectly clear, once you know the syntax.
Edit:
I just installed ruby-2.7-head and, thanks to a typo, at first thought numbered parameters were not permitted in a do-block. They are. I think it's bad to use this in a do-block. But like I said, there are already plenty of ways to turn your code into unreadable garbage; it's up to the author to make a good decision. Some programmers will choose poorly no matter how hard you try to prevent it.
In my opinion, this syntax is for one-liner blocks only. You should avoid using tools in incorrect ways, and code that has numbered parameters strewn everywhere without any attention to naming, is guaranteed to have more than just one bad code smell in it.
But I disagree that it's the responsibility of the language to keep potentially dangerous tools out of the hands of developers.
Those are all scary indeed. Ways you can cut your hand off. Has a nice ring to it, sounds like something to worry about more than "ways you can write unreadable garbage code that not even the author can still parse for themselves in their head."
Nobody is arguing that there aren't ways to misuse this new feature. There obviously are! But I can tell you we have plenty of time to call the ways out and warn everyone not to do those things, before ruby-2.7 will be affecting any legacy codebases.
If you feel strongly about this, the time to argue about it is definitely now, not after the first 2.7 release is already cut.
It makes the list of un-googlable idiomatic syntaxes that you must learn and keep in a list, in order to guarantee you won't ever find a bit of Ruby code that you can't understand, marginally longer. There are several such features planned in the ruby-2.7-head now. Is this really the worst of them?
In that example, fair enough. However often one unpacks a tuple or operates on a hash, where it's not immediately obvious what the inner values are. So we can bikeshed if you want, but I still think it's a bad idea (and looks very ugly).
"If I do {@1 + @2} exactly once is that really harder to read than {|x, y| x + y}?"
It's certainly no easier. "{|x, y| x + y}" immediately and clearly conveys "this is a block that takes exactly two arguments and returns their sum". Adding syntax for "@1 + @2" adds an entirely new construct for every Ruby programmer to learn and recognize, in addition to the full block syntax form, and their brains now have to switch between matching either pattern to quickly parse what the code is doing.
All for zero benefits.
String together enough of these arbitrarily different syntactic variants for the exact same thing, and you end up with C++.
You also lose information, because @1 and @2 magically appear. With || notation you have to explicitly spell out, beforehand, exactly how many variables will be in the block.
tuple.collect{ |x,y,z| [x * 3, y * 3, z * 3] }
is better than
tuple.collect{ [@1 * 3, @2 * 3, @3 * 3] }
because with the latter, I need to know in my head that there will be three parameters.
The more I read about this syntax the worse I feel about it.
The last one will generate a runtime error (unless "x" is already defined in the closed-over scope the block exists inside of, in which case it might be valid and correct).
But both are valid Ruby expressions.
Neither is less inscrutable than:
tuple.collect{ [@1 * 3, @2 * 3, @3 * 3] }
This expression will return nil for @3 if only two arguments are passed in, again it's a perfectly valid Ruby expression, and you'll get a runtime error from inside of the block in this case, just like the pre-existing standard block syntaxes.
It's up to the author of the code to create readable constructions that aren't misleading or confusing. There's no reason why this tool is innately bad, more tools can only make it more likely that the author will find an expressive and clear way to say exactly what they're doing without writing something confusing or redundant.
From a language purist perspective, these are all very good reasons not to use Ruby at all. I don't think that adding numbered positional params makes it worse in any objective sense.
Except you don't. [1,2,3].collect{|x,y,z| ... } is perfectly valid ruby if "..." is an expression that does not blow up when y and z are nil.
Likewise
[[1,2,3]].collect{|x,y| [x,y]}
is perfectly valid even though 3 will not get assigned to anything.
So you're being forced to spell out which arguments you want to be able to reference, which is probably related to what it will get passed, but not required to be.
So if you don't know how many parameters there will be, you're in trouble either way.
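A quick check of the behavior described above (plain Ruby, no numbered parameters involved): block parameters are silently nil-padded or dropped when they don't match what's yielded.

```ruby
# Extra values are dropped: 3 is never bound to anything.
p [[1, 2, 3]].collect { |x, y| [x, y] }
# [[1, 2]]

# Missing values become nil: y and z are nil for each element.
p [1, 2, 3].collect { |x, y, z| [x, y, z] }
# [[1, nil, nil], [2, nil, nil], [3, nil, nil]]
```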
"I'll usually take a collection of "xs" and iterate over each "x" without any pretense that I'm coming back to make it better."
Those can be perfectly fine names!
xs indicates you are writing a generic method or function applicable to just about any kind of collection, and x is an item in xs. So for functions like map and reduce or anything else that deals with arbitrary collections, these names are very clear in expressing the intent and far better than @1, etc.
This is similar to using "i" as the index variable in a for loop, an almost universally recognized convention.
I don't program in Ruby and never really have, but there is one thing I know dealing with plenty of legacy code: Temporary code is almost always permanent.
I'm sorry, but there's a great deal of literature in support of this idea that naming things /well/ is one of the harder problems of programming.
If you're arguing that "fst" and "snd" are better names than "@1" and "@2" then I think we have a disagreement. None of those are good names. (All of those might be perfectly "sufficient" names, though. For a local variable, I would argue that spending any time at all on naming is a mistake.
Spend that time writing a descriptive method name instead.)
This is foundations of computer science stuff. If you choose a solidly descriptive but wrong name before the design is fully solidified, you may be stuck with it because it will work its way through dependencies. Even if you can rename it later during refactoring, the name will shape the formulation of those refactors.
I think all of this is overblown when it comes to talking about local variables in a block scope. This is absolutely a case study in the "bikeshed problem" at this point in the thread.
There are other features planned in Ruby 2.7 (like the .: operator) that actually make the operation of some code less discoverable and will hurt to see in your codebase for the first time, but we're not talking about them at all, because they're simply not as easy for us to hate on.
A regularly repeated meme is not "a great deal of literature."
To say naming things is hard is like saying that getting a golf ball into a hole is hard: it is not. Sure, it is if you add an artificial restriction like having to whack it with a stick from 400 yards away. Naming your children might be hard if you're trying to solve world poverty via the child-naming process. And naming things might be hard for programmers if they are not allowed to rename things or document the meaning and history of their names. But that really doesn't have much to do with language design.
>If you're arguing that "fst" and "snd" are better names than "@1" and "@2" then I think we have a disagreement.
I'm not arguing that. I'm arguing that they are not worse names. And as you probably recognized, if names like (@1, @2) are a good enough solution, then certainly (a, b) or (fst, snd) face an equally high bar for rejection.
One man's artificial restriction is another man's invariant. "We can't rename this thing without renaming it everywhere" is a thing, but honestly not in this case, because we're talking about a feature that is used to decide the name of local block variables.
The only place that local block parameter variable names will get used, except directly inside of the local block, is in blocks that are nested even deeper. We're not designing an interface inside of local blocks.
If you've read the thread and Matz's opinion on the matter, the issue is that it's a requested feature and someone has taken the time to write it into the language. You need a good reason now if you want to tell that person they can't have what they said they need. It was designed carefully so that it would not break existing valid uses of the language.
Maybe the block was meant to take 99 parameters, should the author be forced to come up with 99 one-letter variable names and type them all correctly, twice? The point is to provide another option, you won't be forced to use it. (I'm sure that Rubocop will even provide an option to enforce that your style guide specifies your devs must not use it.)
At any rate it sounds like this feature will land in Ruby 2.7 whether we like it or not.
> @1, @2 are names, and they're no better than fst, snd.
They are better than fst and snd in a new iteration of Ruby, because giving a special meaning to existing legal identifiers in certain contexts risks breaking existing code; @1 and @2 (despite the similarity to instance variable notation) are not legal identifiers in existing Ruby so their special role has no effect on existing code.
>giving a special meaning to existing legal identifiers in certain contexts risks breaking existing code
That's just what scoped variables are. I've never seen it be a hazard other than shadowing names, and you'll have the same problem with @1, @2, etc. A closure returning a closure for instance.
Unless you're just completely misunderstanding me, thinking I'd be a proponent of fst, snd as direct substitutions for @1, @2? That would be absurd. I'm comparing |fst,snd| fst+snd to @1+@2. Code isn't going to break with either.
Syntax is hard. In general I think that adding simple numbered argument access for blocks makes sense, and something like Swift's $ syntax would look pretty natural, i.e.:
```
books.map { $0.title }
```
In the Ruby-lang discussion on this they go through a dozen different possible syntaxes, and they all have issues -- for example, $ is problematic because Ruby currently uses it for global variables.
Even so, IMO using the `@` symbol for this will make the language harder to learn. It's not a very nice thing to have syntax that is almost ambiguous to the human.
```
books.map { @prefix + @1.author }
```
I teach Ruby to a lot of engineers (of all different experience levels), and one of the nice things is that it's so easy to pick up and be productive quickly. But there are bits where it takes some practice, and I'm not looking forward to telling people, "The at sign means it's an instance variable. Well, except if you're in a block and it's @1 or @2 or something, in which case it's a positional argument." Oof.
Given how little of Ruby syntax most people know before they're able to read and understand Ruby, I don't see that as much of a concern. "@" suggests it's probably local of sorts, the number makes it clear it's not an instance variable, and the absence of |somevar| in a chained block will strongly suggest it's an argument, without having to know the precise details.
There are many additions to Ruby I don't care much for, but this is one of the most intuitive I've seen.
Ruby 1.9 hash literal syntax was introduced as an evolutionary step toward keyword arguments. If the former didn't exist, the latter would be a special syntax similar to but incompatible with the existing idiom to use a hash in terminal argument position as keyword arguments. That is, without the ability to treat foo(:a => 1, :b => 2) as equivalent to foo(a: 1, b: 2), you end up with two versions of every API -- "hash kwargs" and "real kwargs". And you'd similarly lose the ability for existing APIs to migrate to explicit keyword arguments from hash kwargs.
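A minimal sketch of that equivalence (foo here is a made-up method): with no declared keyword parameters, both call styles arrive as the same trailing hash.

```ruby
# A method written against "hash kwargs":
def foo(opts = {})
  opts
end

# The old hash-rocket style and the 1.9 shorthand are the same call:
p foo(:a => 1, :b => 2) == foo(a: 1, b: 2) # true
```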
Would I prefer it if Ruby had started out with Crystal's named argument syntax? Absolutely. But Crystal had the luxury of starting from scratch without any need for backwards compatibility.
It is, but it was also a compromise meant to maintain compatibility with existing code. And I mentioned it specifically because the OP identified keyword arguments as a good new feature but the new hash syntax as bad.
The more features the language has, the harder it is to read.
And using @ for positional arguments when it’s already used for instance variables is a questionable choice. But on the other hand, the syntax for these blocks has always been clunky.
But Ruby is all about options. You can do things the way you want, when you want. That’s the happiness you achieve. But it does become more difficult to read for newcomers and devs not up on the latest updates, which is a major sacrifice.
Some of these new features are blessings though. The safe navigation operator saves lots of checks and I wish I had it in JavaScript and other languages (rather than checking for a property first).
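The operator in question, for anyone who hasn't run into it: `&.` short-circuits to nil instead of raising NoMethodError when the receiver is nil.

```ruby
user = nil

p user&.upcase   # nil, with no NoMethodError raised
p "ruby"&.upcase # "RUBY"
```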
Having lots of options to write things optimizes the happiness of the developer writing the code, but reduces the happiness of the developer reading the code (which might be the same developer, a few months later). That's one of the things Go got right IMHO (although some people hate it, because they feel constrained if they're not able to write code in some "clever" way).
Err, is that why things like Rails are convention over configuration? I personally would rather have consistency throughout ruby codebases than various random solutions people have come up with.
@1 and @2 are more difficult to read, full stop. They're more difficult for newcomers because they're more random "magic" that one has to learn, and they're difficult for ruby devs when they read other people's code and have to discern what @1, @2, @3 might possibly be. At least variable names give information, if done properly.
Matz even stated this was a compromise. I think it's utterly unsatisfactory for Ruby to just accept this as it is currently. There has to be a better solution than @1 and @2.
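For what it's worth, `@1`/`@2` was the preview spelling; the numbered parameters Ruby 2.7 ultimately shipped are written `_1`/`_2`, which makes the readability trade-off runnable today (example data invented):

```ruby
langs = { "ruby" => 1995, "go" => 2009 }

# Named block parameters document what each slot is:
named = langs.map { |name, year| "#{name} (#{year})" }

# Numbered parameters (shipped as _1/_2; the preview used @1/@2) are
# terser, but the reader must reconstruct what each number stands for:
numbered = langs.map { "#{_1} (#{_2})" }

named == numbered   # => true
```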
Ruby != Rails. I think it makes sense that the language remains flexible while the framework is opinionated. I probably won't use @1 et al in my projects, that's where enforcing consistency via rubocop comes into play.
Sure, ruby isn't rails. But a lot of tools are also opinionated (like Rubocop, even.. or Brakeman, etc). One can always change values but there -are- default configurations that people generally tend to go with.
Having consistency in the language is also better overall for the community. If you read through a new codebase, having consistent patterns makes that process significantly better.
@1 and @2 are only going to make things less readable. People will choose the easier option and never go back to refactor it once completed. It's a horrible change for Ruby.
> The more features the language has, the harder it is to read.
So you're saying Brainfuck is a really easy language to read?
Surely there's a balance. The key is you want your language to convey the intentions of the programmer. Features can help do that, but they can also cause convolution, hiding the intentions.
Brainfuck is about as simple as they come. If you were to evolve Brainfuck into something like Ruby, you have to add a lot of features.
The poster literally said "[t]he more features the language has, the harder it is to read". From that it follows that something like Ruby is harder to read than Brainfuck.
I disagree with that absolute, and I was trying to say that I think it's more nuanced. Sometimes a feature can be such a natural addition to the language that it explains itself, so to speak, and makes the code easier to read even to those who are new to the feature.
No, you’re making an incorrect extrapolation. Obviously there’s a minimum number of features needed for readability. Example: dictionaries use a core vocabulary of a few thousand words. Yes, they could use fewer or more, but it’s an optimization. Obviously a language that doesn’t allow sufficient expression isn’t easy to use either. But still, the more rules, the more you have to learn, and that means harder to read in that respect.
Consider a language which has float types, but which also allows you to declare vectors floats. However in v1 of the language, you can only initialize them and operate on their elements. So if you want to add two vectors, you'll have to write a function vector_add() which does the element-wise addition.
Then v2 comes along and adds addition, subtraction and scalar multiplication of vectors as a new feature, using the normal operators. Suddenly your code can now read like the math it represents.
You've added a language feature, but it makes it easier to read since there is less useless clutter and the code matches the original math more closely. And it's actually less to learn since you don't need to know if the vector library you had used vector_add or vectorAdd etc, you re-use the same operators as for scalar types, just as regular math does.
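The hypothetical vector feature above can be imitated in Ruby via operator methods, which is roughly the readability win being described (the `Vec` class is invented for this sketch):

```ruby
# A tiny vector type; defining #+ and #* lets client code read like
# the math it represents, instead of calling vector_add(a, b).
class Vec
  attr_reader :elems

  def initialize(*elems)
    @elems = elems
  end

  # Element-wise addition.
  def +(other)
    Vec.new(*elems.zip(other.elems).map { |a, b| a + b })
  end

  # Scalar multiplication.
  def *(scalar)
    Vec.new(*elems.map { |e| e * scalar })
  end

  def ==(other)
    other.is_a?(Vec) && elems == other.elems
  end
end

a = Vec.new(1.0, 2.0)
b = Vec.new(3.0, 4.0)

a + b         # a Vec with elems [4.0, 6.0]
(a + b) * 2   # a Vec with elems [8.0, 12.0]
```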
The point I was trying to make with the Brainfuck comparison is that Brainfuck is about as simple as they come, and as such actually very easy to read (replace the actual symbols with whatever you like).
It's just very difficult to comprehend what the code actually does. Writing complex programs in Brainfuck means you need to piece together constructs, and due to the simplicity of Brainfuck, these constructs can be large. As such it's actually very difficult for non-expert Brainfuck coders to recognize say a for-loop.
In higher-level languages that have explicit for-loops, you still need to know what they are and what they do, but I argue they're easier to remember, read and reason about. As such the extra feature (a for-loop language construct) can make the language easier to read.
If the Brainfuck to Ruby example is too extreme for you, consider classical Pascal with Object Pascal (Delphi). Object Pascal has a lot more features today than the classical Pascal language. I think that certain code is more readable expressed in Object Pascal than in classic Pascal, due to these features.
I don't necessarily agree but it's a fair argument and well written. It's probably impossible to satisfy everyone; if Matz drops these features the people who want them (and I'm sure they exist) will yell that the language is stagnating compared to Python 3 and Node. If he adds them, we are adding bloat. There is constant pressure to add more stuff to "evolve" the language; hopefully the community steers through this pressure in the right direction.
As a rubyist of 10 years, a few thoughts. I wasn't aware of these upcoming features, but to some extent they make a lot of sense. Except... yeah, those symbol choices.
I agree that @1, @2 etc look totally wrong, and maybe that subjective view will change - but scope-wise, having @-variables that have nothing to do with the current instance would utterly ruin scannability. On the other side, I suspect Rubocop will very quickly react to outlaw it, so no big problem as far as my code goes.
I very rarely use &method(:foo). I find it ugly, and &:foo is more useful in 98% of circumstances. Maybe I would use it if it were shorter - but .:foo seems wrong. I feel like it would, again, be harder to scan.
That said, the more I let it sit in my brain, the less passionately I object.
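For readers less steeped in Ruby, the two shorthands being compared look like this (`shout` is a throwaway helper invented for the example):

```ruby
# &:symbol -- calls the named method *on* each element:
["a", "b"].map(&:upcase)           # => ["A", "B"]

# &method(:name) -- passes each element *to* a standalone method:
def shout(str)
  "#{str.upcase}!"
end
["a", "b"].map(&method(:shout))    # => ["A!", "B!"]
```

As noted elsewhere in the thread, `&:symbol` breaks down as soon as the call needs any additional argument, which is where a full block comes back in.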
> Still, I was disappointed when the safe navigation operator was introduced, as it basically acknowledged that it’s fine to return nil
I think this is indicative of the crux of the problem. Optimizing for programmer happiness implies you know what programmers need. Some programmers have no problem working with nils - we've long since accepted them, even embraced them.
The safe navigator pattern almost gave us a way of implementing a null object pattern with literally no work, while screaming out to the world "this method accepts nil, so watch out". I would much prefer code that accepts nil as a fact of life than code that denies its very existence.
I feel that "hacky code" will be well served by these changes, and I think that "production code" will be better served by tight Rubocop rules. So, everyone's happy!
> I very rarely use &method(:foo). I find it ugly, and &:foo is more useful in 98% of circumstances. Maybe I would use it if it were shorter - but .:foo seems wrong. I feel like it would, again, be harder to scan.
The writing has been on the wall ever since the near-death of the hash rocket syntax; you can already see the same line of thinking at work. Superficially, the introduction of the symbolic hash syntax that aligns with YAML/JS and saves a few characters was locally good for readability. Fundamentally, though, it could not deprecate the hash rocket: if one of your keys is not a literal symbol, you're toast and have to fall back to rockets, which means you end up with mixed patterns throughout your code. You thus have to 1/ keep two tokenisation branches inside your head and 2/ make a decision about which syntax to use every single time you write a hash. These look like small things, but long term, sand in a gearbox grinds it to a stop.
At some point you end up with all those little syntactic sugary things which make sense locally but which overall makes for increasingly heavy cognitive overload and you end up in a tar pit through normalization of deviance.
When Ruby began to take off, people said it was like Perl's heir, only readable. Today as both a Rubyist and a former Perlist, I do long for Perl (and violently enforce readability rules on Ruby projects I own)
I'll come to the defense of `&.` - for better or worse, Ruby is mostly used for building database-backed web applications where nils are simply unavoidable without some extreme over-engineering or awkward idioms. `user&.address&.street` is at least safer than `try` - `user.try(:adress).try(:street)` will not raise an error even though `adress` (note the typo) is not a method. You could try to apply the null object pattern, but then `user.address.present?` is quite a rabbit hole - and frankly nil is the right return value for a user with no address on file.
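The difference is easy to demonstrate. `try` below is a minimal stand-in for ActiveSupport's version (core Ruby doesn't ship one); note how it swallows the misspelled name while `&.` surfaces it:

```ruby
# Minimal sketch of an ActiveSupport-style try: returns nil unless the
# receiver actually responds to the method, so typos fail silently.
class Object
  def try(name, *args)
    respond_to?(name) ? public_send(name, *args) : nil
  end
end

Address = Struct.new(:street)
User = Struct.new(:address)
user = User.new(Address.new("Main St"))

user.try(:adress)   # => nil -- the typo is silently swallowed

begin
  user&.adress      # receiver is non-nil, so &. still dispatches...
rescue NoMethodError
  :typo_caught      # ...and the misspelling raises, as it should
end
```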
Yeah, the safe navigation operator is great, and I really miss it especially when I write JavaScript and have to do something like `foo && foo.bar && foo.bar.baz` instead (...which isn't even the same since it returns `false` if something is null).
> which isn't even the same since it returns `false` if something is null
That's not the case AFAIK - checking in both Node and Firefox, `true && null` evaluates to `null`, `true && undefined` evaluates to `undefined`, and `true && null && true` again evaluates to `null`. In what scenario does `&&` coerce the returned value to a boolean?
The point isn't that it's an unsolvable problem, the point is that you have to resort to clunky syntax or a 3rd party library to do this. And for something as basic as this, there's probably a number of different libraries that solve the same problem a slightly different way.
I also use and prefer safe navigation over the standard alternative. However, Maybe/Option monads are also nice. Perhaps that is what the author has in mind.
Only for fields where a not null constraint makes sense of course. But NULL represents the absence of a value - which is very often a valid choice in a database.
> Only for fields where a not null constraint makes sense of course
A NOT NULL constraint always makes sense, assuming your tables are designed correctly, such that each table contains all and only those columns needed to represent a distinct category of fact that needs to be stored, which necessarily means that all columns must occur together or not at all.
When tables are designed instead to contain multiple different shapes of facts with some shared elements, then you get the need for nullable columns, but that's problematic in other ways, too.
Just because you can design a database that way doesn't mean you want to, or that you even have control over the db design. Anyway, data is still present or absent. Even if you have an address table where all the columns are not null you might not have an entry in that table for a user so what would you like user.address to return?
> Even if you have an address table where all the columns are not null you might not have an entry in that table for a user so what would you like user.address to return?
IMO, the best general solution is: if a relationship has a fixed cardinality of exactly one in the direction of interest, trying to traverse it should return the exactly one item it refers to.
If a relationship has any other cardinality (fixed at some number > 1, variable between 0 and 1, variable between 0 and +∞, variable between 7 and 13, ...), trying to traverse it should return a container (set, bag, list, generator, ...) of values, whose cardinality will be the actual cardinality of the relationship.
(That's really just as true if the underlying DB uses nulls or magic values or whatever else to indicate missing data; and, yes, returning application-language nulls from low-level database routines may be a convenience for those implementing the higher-level abstraction layer on top of that low-level interface, but once you get into the model layer there's not a great reason for presenting NULLs -- except in the special case of languages where the NULL is the empty list or another appropriate empty container.)
Yeah. I think the 0..1 case is cleaner as a nil/not nil vs a collection. The issue just doesn't go away. users.addresses.first&.street or whatever functional voodoo that buries if/else - users.addresses.map(&:street).first. You just end up switching empty/not empty list rather than nil/not nil. And the interface makes it look like a 0..n relationship which is super confusing for anyone not clued in to the pattern.
You've missed my point about `&.` (probably because I didn't provide much of an explanation) - it's fine in certain cases (your example is a good one), but it's also something that can be easily abused. Very often I've seen people use `&.` even for methods that can never return `nil`, which is confusing for people who read their code. I've also seen plenty of methods that return nil for no good reason when they could have returned an empty collection or an empty string - but it's easy to deal with nil now, so why bother to think carefully... Every feature that promotes sloppy nil-returning APIs is a potential liability and should be used carefully.
The `&something` syntax for passing on a block is very old; it allows passing a block that has been reified as a Proc, and it mirrors the syntax used to receive a block (when you don't want to, or can't, use `yield` - for example because you need to pass the block on to another method).
e.g.

    def apply_to_all(&callback)
      @all.each(&callback)
    end

or

    def initialize(a, b, c, &block)
      @a = a
      super(b, c, &block)
    end
The only thing that &:symbol adds is that it defines to_proc on Symbols to create a Proc object that sends the symbol to its first parameter. You could define it yourself if Ruby didn't already; in fact it was first defined by Ruby users before it became part of Ruby core.
Definition:

    class Symbol
      def to_proc
        Proc.new { |receiver| receiver.send(self) }
      end
    end
This definition allows `["1", "2", 3].map(&:to_i)` to work in Ruby 1.8.6, which otherwise doesn't support it.
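The classic definition above ignores any extra values yielded to the block. A slightly fuller user-land sketch forwards them as arguments of the sent message, which is effectively how modern Ruby's built-in Symbol#to_proc behaves:

```ruby
# User-land Symbol#to_proc that also forwards extra yielded values
# as arguments of the sent message. Inside the block, self is the
# Symbol, captured from the enclosing method's scope.
class Symbol
  def to_proc
    Proc.new { |receiver, *args| receiver.send(self, *args) }
  end
end

["1", "2", 3].map(&:to_i)   # => [1, 2, 3]  (no extra args involved)
[1, 2, 3].inject(&:+)       # => 6  (inject yields memo *and* element,
                            #        so the forwarding actually matters)
```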
I don't actually mind the syntax that much, `&` on a symbol just gives you a proc for calling `send` on the first argument. What bothers me is the limitations. If you're going to have special case syntax to avoid having a useless first argument why not go all the way?
Something like this would have made a lot more sense imo:
The cognitive burden argument is completely subjective. If you’re used to point-free programming, the first short form probably requires less cognitive burden. To me, it makes way more sense to think about “map the users list over the function that returns name” rather than “map the users list over a brand new anonymous function that takes a user and calls the name method on it.”
Another way to look at it is that cross-pollination is a good measure of the merit of a language feature. If one language invents it and a dozen other popular languages adopt it, it's probably a good idea, and its proliferation is good for everyone.
> In my opinion half-baked micro-optimizations add nothing, but complexity in the long run, and should be avoided. I’m really worried that Ruby has been gradually losing its way and has strayed from its creed.
It seems to me that "Optimizing for programmer happiness" is bound to end up with these half-baked micro-optimizations. The very first example of what the author considers the essence of Ruby is a perfect showcase of that, in my opinion.
Without a criterion that establishes whether you've now optimized more or less for programmer happiness, your users will just create blub fortresses that they stay in. Since the entire tagline is based on subjectivity and the users don't have any meaningful foundational principles to appeal to, all they have is their opinions on that feature they don't see why they need.
I like them too, but only because I've been programming Ruby for a while now, and enjoy picking up new tidbits as I go. If I were new to Ruby (or programming in general), I think this would reasonably constitute feature bloat that would make it a less approachable language.
Also, bear in mind that "this person" is not just any Joe Blow but Bozhidar Batsov, maintainer of Rubocop and (as an author of the Ruby Style Guide) something of an authority in the Ruby community.
But do any of these features progress the language or provide any performance benefits? Most read as syntactic-sugar (I understand Ruby is about developer happiness) and unnecessary.
They progress the language by adding syntactic sugar. Many Ruby programmers aren't waiting around for Ruby to suddenly become fast. If that happened it'd be great, but most code I write isn't performance critical, so it's okay if they just give me a feature where i can write `user&.name` instead of `user && user.name`
I almost invariably use x for @1 in existing code; for manipulating hashes, I almost always use |k, v|. If I'm working with pairs, |x, y|.
What's the big difference between x and @1? y and @2? It doesn't really feel that significant to me.
I'm a bit meh on the feature. I'm suspicious of lambdas / blocks that take more than three parameters, and for three parameters I'm good with |x, y, z| or |a, b, c|. I do think that giving them meaningful names usually obscures intent more than it clarifies; |k, v| for maps and |i| for loops self-document well enough.
"for manipulating hashes, I almost always use |k, v|. If I'm working with pairs, |x, y|"
Those are massive improvements over "@1 @2".
You make the case yourself. Using "k, v" as a convention immediately signifies these are a key and its value. Using x, y indicates this is a pair (and that x and y are of the same type, I assume?).
Completely agreed. I find it hard to read anything but the simplest of @ positional examples, and code has to be written to be read.
I have been writing Go a lot more these days and greatly appreciate the simplicity of language. Coming from a Perl background, and the TMTOWTDI (There's More Than One Way To Do It) philosophy, I've seen how that can make for clever but hard to read programs and I'd have preferred Ruby to err in the other direction.
I was happy with Ruby and Rails until when i have to write my own abstraction. (something like has_many, belongs_to in ActiveRecord).
Ruby's class/metaclass machinery is really cumbersome to me. It's not about the magic; it's the complicated data flow through all the class abstractions that pushes me away from Ruby.
I prefer functional abstraction to class abstraction whenever i can.
That and difficulty introducing new developers to Ruby projects. When so much metaprogramming and abstraction is used, and everyone creating their own DSLs, developers often have to learn new languages every time they are brought on board to a Ruby project. It basically defeats the point of "convention over configuration" in Rails. Rails is fine on its own, but what Ruby allows people to do creates these "elegant" messes that end up being more difficult to learn, requiring people to become "experts". (which I suppose creates job security)
Don't even get me started on how lots of Ruby developers don't believe in documentation because the expressiveness of the language makes their code "self-documenting". (i.e. their shit doesn't stink)
I actually love Ruby, but I've learned to avoid the advanced features unless they make perfect sense for the task at hand. The Ruby culture, however, is what caused me to appreciate JavaScript; the fact that it contains a simpler set of tools makes the language more readable IMO, despite it being less English-like, and I find today's JS projects much easier to understand than almost any Ruby or Rails project of even modest scale.
you were either burned really hard by metaprogramming gone wrong or you've actually never seen it done properly.
there is a school of thought that says in some cases you should not shoehorn the problem you are trying to solve into <insert your favorite programming language here>. In cases like that, having a DSL (which does stand for Domain Specific Language) is worth its weight in gold. Focusing on expressing the problem in its own domain language is beautiful. Ruby just happens to be awesome at building DSLs.
to your point about "elegant" messes: you can create a mess with whatever tool/language you want. the language is not there as guardrails. I agree with you that less is more and in most cases you don't need some of the uberpowerful features, but when you do, the way you solve the problem is absolutely beautiful.
IMHO, Ruby code is the closest to code poetry that you can get with a vanilla/for the masses programming language.
I've seen it done properly, it's just that the majority of the Ruby metaprogramming I've seen is completely unnecessary. I'm sure it's God Mode to a lot of people, and the powers of God are difficult to resist.
DSLs can be great. The problem I have with DSLs in the Ruby world is that it's too normal to just whip up a DSL for things that don't require it. A good example of a DSL is RSpec, which has a very specific goal in mind and is well documented. A bad example of a DSL is one that Bob the developer created for a Rails app at Acme Corp., which is sparsely documented because Bob is a busy guy and there's a lot of tasks in the backlog more important to stakeholders than documentation. Said Rails app at the end of the day serves webpages, and somehow the business logic "required" its own language built by one or two people who couldn't dedicate enough time and thought to the design of the DSL; it was built pragmatically, and down the road when Bob quits nobody is going to want to touch it because the methods defined in the DSL have their tentacles in everything. In the best case, Bob actually wrote tests just for the DSL as well as the business application itself.
You are right in that a mess can be made using any tool. Where I disagree is whether a language can act as guardrails. Guardrails might not even be the right term. What I do believe is that, a great deal of the time, the most powerful tool isn't necessarily better than the simpler one. Someone learning to fly probably shouldn't be given a fifth-generation fighter jet, as a much smaller aircraft would allow them to fly sufficiently without the complexities I'm sure there are in operating a military fighter jet, despite the fact that the fighter jet can do "more". Even an experienced aviator probably doesn't need to be flying a fighter jet for their purposes.
Ruby, IMO, is a lot like a fighter jet. Yes, it can fly from A to B, but it can also evade radar, fire a machine gun, and do maneuvers other planes can't. Other languages might not have those capabilities, but that spares less disciplined pilots the temptation of doing unnecessary things that can result in mistakes. Sometimes you need a fighter jet, but most of the time we're not at war.
Everyone's experience is different, and I'm not saying your point of view is invalid, but I just feel different personally. I really wish people wouldn't use Ruby like they're a fighter jet pilot all the time. As someone who attended a coding bootcamp, I would like for bootcamps to emphasize hesitance around writing DSLs and metaprogramming rather than pushing those concepts on to junior developers as if they are the duct tape that applies to all problems.
In “optimizing for programming happiness”, there's much more left to be defined than ”happiness”. I think the biggest ambiguity here is ”programming”.
”Programming” can mean a lot of things, and I think Ruby still tailors too much to the ”be able to scrap things together somewhat elegantly” meaning.
But once the project takes off, you want performance and safety. Performance is constantly improved, but it seems safety is completely overlooked. TypeScript & co. proved you can make a type system that doesn't compromise on flexibility, so I think it should be at the core of discussions about improving the Ruby language.
I would like to have a language that has a builtin dial for strictness. The lowest setting (for use on a REPL or when prototyping something) would use dynamic typing with implicit coercions like e.g. JavaScript. But then, as the safety/performance/stability of your code becomes more critical, you can turn the dial higher and higher to get static type checking, then borrow checking, etc.
Sort of like how the NPM guys rewrite some critical-path components in Rust, but without having to hop to a different language. Sort of like how a lot of JS devs move to TypeScript halfway through the project, but then again, TypeScript can only go so far because it's shoehorned onto the existing JS, rather than designed in from the get go.
ah, that topic... Well, for what it's worth, I mostly agree with the author, except I never felt Ruby made me "happy" more than any other programming language with at least some semi-decent features. Its problems offset its features and, at the end of the day, it has enough annoying parts to make me angry at times.
The new features, of course, are pointless. They solve no problems, but bring with them the typical downsides of adding new language features. They reinvent the wheel, make it triangle-shaped, and tell you it's only meant for crash tests anyway.
> Well, for what its worth, I mostly agree with the author, except I never felt Ruby has made me "happy" more than any other programming language with at least some semi-decent features.
In surveys, Ruby hasn't (to my knowledge) scored highest in developer happiness for a very long time (if ever). Languages that don't tie themselves to a subjective tagline like that do, though. Rust, for example, has a pretty clear direction and mission and has some of the most excited and happy users in every survey I've seen.
Trying to talk about developer happiness as some kind of defining characteristic has always been a bad idea and I think, ironically, you end up with unhappy users by using it as an ideal to make change in your language. I think it only gets worse when the paradigm the language is built on and how it expresses that paradigm is also susceptible to more bugs than the alternatives.
The only real way to optimize for developer happiness is to optimize aggressively for simplicity. The entire foundation of Ruby is a barrier to that and it doesn't have the good tooling, interactivity or introspection of something like Smalltalk to handle that.
I don't understand his problem with hash literals. Hash rockets are just ugly and verbose and needed to go (though they're still needed for some object types to be declared as keys).
The positional arguments I'm on-the-fence about. I enjoy them in Elixir, but at the same time I wish it had a little more flexibility to name arguments. And I like the clarity that named arguments provide.
The `then` alias is also a major improvement; `yield_self` is a confusing and ugly combination of protected keywords.
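Both names refer to the same method, which yields the receiver to the block and returns the block's result, so it can turn nested calls into a left-to-right pipeline:

```ruby
# yield_self (Ruby 2.5) and its alias then (Ruby 2.6) are the same
# method: yield the receiver to the block, return the block's value.
5.yield_self { |n| n * 2 }   # => 10
5.then { |n| n * 2 }         # => 10

# Handy for pipelining instead of nesting:
"42".then { |s| Integer(s) }.then { |n| n + 1 }   # => 43
```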
Ugh, you make it sound like playing code golf and counting every character automatically leads to the most readable code.
Replacing a two-character operator with a one-character operator, or whatever, is not a significant win for developer productivity. More powerful abstraction tools are what allow non-linear productivity gains. Slightly different literal syntax is more of a bike-shedding topic than anything that's going to help programmers produce better software faster.
I contend it pretty much is an issue of bikeshedding. Still, it's less code, and I don't really see how `{ key: 'value' }` is not preferred over `{ :key => 'value' }`. Ok I'm done on this issue...
As for hash literals - I didn't like the addition of the new syntax simply because it's limited to one type of key. We had a perfectly good universal syntax and introduced a second one just to save on typing in one common use case. Not to mention that in the 1.9 syntax you don't use keywords directly but labels, which are a different syntactic construct (:keyword vs label:). I think I understand your perspective, but it seems you value typing less and subjective aesthetics, and I value mostly simplicity (minimalistic syntax and uniformity being some of its key aspects).
Sorry, reread your comment a couple times, and I still don't understand what you mean by the 1.9 syntax being a label. The keys in `{ key: 'value' }` and `{ :key => 'value' }` are both Symbols.
The author starts with `named block parameters`, but Clojure has had this for a long time: there you can write `%1` or `%2`, much like `@1` and `@2` in Ruby, and I always wished Ruby had something similar. Elixir has it too. So I'd guess the feature was inspired by Clojure. The author is also a Clojure developer, so I find it odd that he hates this Ruby syntax.
However, at the same time, I totally agree that Ruby has so many more important things to work on and improve than to worry about these. As a Ruby dev, I don't want new syntax just to catch up with JavaScript, where new syntax comes out every week.
Ruby should focus more on its type system and on performance. Even some type hints like PHP's would be very helpful. Lots of Ruby gems kind of introduce types already. An example from Mongoid:
    class Person
      include Mongoid::Document
      field :first_name, type: String
      field :middle_name, type: String
      field :last_name, type: String
    end
I would love to have types like these that we could use directly.
Another weak point of Ruby is its module system :(. A module system like Python's, where we can load things and alias them to avoid conflicts, would be great.
> In Clojure you can do `%1` or `%2` similar to `@1` `@2` in Ruby
I very much like the feature itself, especially for one-liners you come up with at a pry prompt. What I very much dislike, though, is the use of @, which is a glorious hack if there ever was one, since @ currently resolves to 'instance variable' in many a brain (and @@ to class variable, but those should die, since class variables can be implemented as instance variables of an instance of Class, which is both much more elegant and less prone to corner cases - but I digress). In fact I'd be very happy with %1 as the syntax, since it would match the "%1" placeholders in strings right away. "%1" in strings, \1 in regexes for backrefs, $1 for regex groups, {%1} in blocks - it somehow works as a meta-pattern, whereas reusing @1 alongside @myvar is such a mismatch that it brings terrible cognitive dissonance (just see how in [1, 2, 3].each { @1 + @i } the two @s have different scopes!).
I can see the argument coming (if it hasn't already) that "OMG, this is going to special-case stuff in the parser so that the % method can be kept separate from this new syntax! Plus, this may break existing code!" To which I reply: %1 won't ever appear in the same place in the AST as a modulo operator with a right-hand integer argument, and it's your job as a language implementor to make the dev's job easy. The objection is crackpot anyway, since the @ syntax will already be special-cased in the parser. Simple is hard; bailing on the simple solution as a language implementor, even if it's internally tough, is a cop-out.
I was sad to see => aka "hashrocket" syntax go, and it still slips in sometimes... but it is nice being able to use the same hash syntax in Ruby and JS.
What I'm intrigued by is the author's suggestion that there's more to the change than a relatively minor syntax change. Is there more to the story? I always assumed that both syntaxes are functionally identical.
One can only ever use Symbols as keys in the 1.9 hash literal syntax, whereas complex objects responding to `#hash` can be used as keys in hash rocket literals.
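Concretely (the values here are invented): only literal-Symbol keys fit the 1.9 syntax; anything else forces the rocket form.

```ruby
# 1.9 syntax: keys can only be literal Symbols.
symbol_keys = { name: "Ada", year: 1815 }

# Strings, numbers, or arbitrary objects as keys require hash rockets:
mixed_keys = {
  "name" => "Ada",
  1815   => :year_born,
  [1, 2] => "any object implementing #hash and #eql? can be a key",
}

symbol_keys[:name]   # => "Ada"
mixed_keys["name"]   # => "Ada"
mixed_keys[1815]     # => :year_born
```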
> What I'm intrigued by is the author's suggestion that there's more to the change than a relatively minor syntax change.
I don't remember him saying that about hash literals. All I caught was that he doesn't like how it's limited to expressing only symbol-keyed hashes. Maybe I missed something, though.
That was exactly what I didn't pick up on, and you took for granted. I only rarely use non-symbol keys, so it hadn't dawned on me that the new syntax was, in fact, knee-capped.
Personally, even though `.then` resembles A+ promise-like syntax -- I find that it feels more idiomatic than `yield_self`. I remember Matz talking about this at RubyKaigi, and I wondered how it would work in practice; however, I enjoy that syntax much more than its predecessor.
Addressing some of the author's comments, I would agree with:
- Endless ranges
- The "@" syntax: `names.each { puts @1 }`
I would love to see more dialogue on how these changes _could_ positively affect the language. Also, if these are adopted widely, there will inevitably be a Rubocop rule telling you it prefers this syntax over the "legacy" one. I love Rubocop, but sometimes I feel there's no end to the journey of writing 'idiomatic' Ruby, and the continued language sprawl definitely doesn't help on this front.
I've been primarily a ruby programmer for the last 10 years. I remember when I started to learn Ruby the "there's more than one way to do it" philosophy really bothered me, and I thought it would cause lots of problems. But I just haven't found that to be the case. Even though a lot of these language additions don't add functionality they do make day to day programming more pleasant. Maybe I just kind of think the same way as Matz.
I remember I thought &. was going to lead to loads of PHP style abuse, but I haven't seen that. My initial reaction to the new implicit block parameters was also "yuk" for all the reasons mentioned in the article. But I think they will generally be used responsibly for very small blocks, where people would typically just name their vars x, y, z anyway.
"There's more than one way to do it" did make learning a little harder, but you spend much more time as an expert than as a beginner, so I don't think languages should be optimized for beginners.
As for refinements, I have never used them, never seen them used, and I don't even really remember what they are. I think they were to save us from the evils of monkey patching. But in general the community has learned to monkey patch fairly responsibly, and I've rarely had a problem with it, except with a few fairly old gems.
As for new features that I think would really bring ruby to the next level - better functional programming support, better concurrency support, and optional typing would be fantastic (I know, hello Elixir. But I hear typespecs in elixir are meh).
Functions are only "first class-ish" in ruby - the block/proc stuff was fairly revolutionary to people coming from Java and C, but it is clunky compared to most any other language with first class functions. Javascript should never do something better than ruby.
Good concurrency primitives are also key for language adoption in 2019 - even though simple process based concurrency is simple and safe for probably 80% to 90% of the apps out there - when people choose languages they want the one that will handle 10,000 requests per second using 2 GB of RAM.
> As for refinements, I have never used them, never seen them used, and I don't even really remember what they are. I think they were to save us from the evils of monkey patching.
They're pretty cool. IMHO they could be considered a best practice whenever you're reopening a class, whether it's for the purpose of monkey patching an existing method or adding your own.
Rather than overriding, say, `String#upcase` on a global basis you could override it in some reduced scope, like within some specific class. This is arguably not ideal, but I would say that it is more explicit and less dangerous than a global monkey patch.
Or, if you have a use case for adding a bunch of new methods to `String`, you can go ahead and add that `String#decorate_with_random_emoji` method you've always dreamed of - but only within particular scopes, such as a particular Rails presenter or something. I think this is often better than polluting the `String` class globally, or having some utility class full of methods like `MyAwesomeStringUtils.decorate_with_random_emoji(some_string)`.
In most cases I would say that these refinements belong in a module that calling code can explicitly opt into -- `using MyStringRefinements` or whatever.
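A minimal sketch of the opt-in scoping described above (the module and method names are invented for the example):

```ruby
# The refined method only exists in scopes that activate the
# refinement with `using`; everywhere else, String is untouched.
module StringShouting
  refine String do
    def shout
      upcase + "!"
    end
  end
end

using StringShouting

"hello".shout  # visible here because this scope opted in
```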
I learned some cool new features coming in Ruby 2.7 and I am very much looking forward to them now.
The rest of this post is drivel and complaints, and it ends by asking questions that were already answered in its opening bullets. How does this optimize Ruby for happiness? Flexibility and choice.
Backwards compatibility is maintained. You can write the same old ruby you wrote before if you want to avoid change. I'm very happy to have some new syntax tools in the toolbox.
I like JavaScript's (old?) solution to the indexed argument problem: the implicit `arguments` parameter. I haven't bothered looking into the rationale for why that wasn't carried over into arrow functions, but I miss it when debugging programs written using ES6.
I'd be interested to know if anything similar was considered during the discussions around adding indexed arguments to Ruby.
Besides debugging, I don't know that I've ever used the arguments parameter to do anything useful. I've leaned on underscore/lodash/ramda when doing anything clever involving partial application, shuffling arguments, etc. As I mentioned, though, I like/d `arguments` (I don't write much JS these days) because it gives you lots of debugging information with a simple call to `console.log(arguments);` -- made even simpler courtesy of a Vim abbreviation (`autocmd FileType javascript iabbrev adebugger console.log(arguments)`).
As for refinements, the use case I imagine is when you need to be defining a method that dispatches on the type of objects that are not “yours”, especially where that dispatch plausibly needs to be extended with more types by other code. (If it doesn't need to dispatch on type, you can just define a function in a module and be done; if the dispatch never needs to be extended, you can kinda do the same thing, but it's much uglier.) Serialization is the most common case where I've run into this, where objects of different types need to be serialized differently; indeed the https://bugs.ruby-lang.org/projects/ruby-trunk/wiki/Refineme... gives “ToJSON” as an example refinement.
A different way of handling this exists in CLOS: methods are defined on generic functions, which are namespaced in packages just like class names are, but importantly, the class package and the generic package have no pervasive special relationship: a class doesn't “own” methods. So I can defgeneric extravagant-imperial-licentiousness:freeze and then define methods for it on all sorts of types, and it won't clash with someone else's jawbreakingly-simplistic-obdurate-numbness:freeze method. A consumer of either or both methods can choose to import one of the symbols and/or refer to either or both of them by their full names.
In Ruby, I've observed two “traditional” ways to do this: (1) add an ugly decoration to the method name which can never be elided (which is what I did in some internal projects, and then I added a prettier module function for external consumption that just trampolined to that method); or, seemingly more commonly and much worse, (2) ignore collisions, define whatever name you like on whichever classes you like, assume that you have dominion over your favorite short name and/or that the semantics are “obvious” and universal and that no one will ever load a library that does it some other way on top of yours and thereby break all the code depending on your version, and thereby, when that sometimes happens, land the hapless integrator in the dreaded “monkey patch hell”.
Refinements supposedly provide a better way to evade (2). I didn't get to play with them enough to be sure that they do, and in particular ISTR this might not actually let you have a fresh method name which can also be extended elsewhere without requiring all consumers of the method name to be aware of all the extension refinements, which if so seems like a disaster for serious use. I feel like there's a better mechanism for this hiding somewhere in Ruby design-space, but I don't know what it is.
This is a good reminder as to why Ruby is often called "the bad parts of Perl." I really appreciate other programming language ecosystems (almost any modern ones besides Ruby) that have abandoned things like this as bad practice, while Ruby continues to double down on it.
That's funny, I usually hear Ruby referred to as "the good parts of Perl" (and Google searches for those phrases suggest the same[1]) -- there's no sigil conjugation, objects are first class, and there are fewer magic variables/idiomatic usages of magic variables. I agree with the author that 2.7's new block syntax is a step in the wrong direction (and that safe navigation/method reference are a little uglier than they need to be), but I think Ruby has otherwise done an exceptionally good job of taking the best parts of Perl and combining them with the best parts of Smalltalk.
[1]: I only get 1 Google result for "the bad parts of perl" and about 1000 for "the good parts of perl".
I definitely see Ruby as Perl's spiritual successor. I would say it's one of the few languages other than Perl that can be considered "write optimized". Ruby makes it very very easy to just write code that just works. Just, a lot of code later you have huge monstrosity of rot that's almost impossible to maintain; I think that happens with all large codebases, Ruby just gets there faster.
> I definitely see Ruby as Perl's spiritual successor.
Matz has said this himself:
* "I chose the name of the language from the jewel name, influenced by Perl. Although I named Ruby rather by coincidence, I later realized that ruby comes right after pearl in several situations, including birthstones (pearl for June, ruby for July) and font sizes (Perl is 5-point, Ruby is 5.5-point). I thought that it only made sense to use Ruby as a name for a scripting language that was newer (and, hopefully, better) than Perl." (https://lingualeo.com/tr/jungle/231625)
* "Ruby inherited the Perl philosophy of having more than one way to do the same thing. I inherited that philosophy from Larry Wall, who is my hero actually." (https://www.artima.com/intv/ruby3.html)
You have a huge monstrosity if you or someone on your team is sloppy. I guess you’ve never seen a Drupal or Joomla codebase before. Or JavaScript projects that pull a myriad of junk dependencies. We have a Perl codebase which is actually quite clean. Mojolicious also helps a lot to keep it nice and tidy.
The author of this piece seems to simply disagree that this is a feature, and that's a widely held, not unreasonable position. But the described changes are just a further expression of that long held design philosophy.
Hm. Personally, I think there's a big difference between "there's more than one way to do it" and "there are some extra, esoterically unreadable ways to do it".
I definitely don't know enough to be questioning Matz's decisions, but the burden of backwards compatibility means there is a material cost to adding features that end up not getting widely adopted, and if that cost is not accepted by folks like Bozhidar (who must now update Rubocop to recognize this syntax), it gets passed on to the Ruby community in general, who must instead deal with a decaying and fragmented language ecosystem.
I'd say the strongest possible interpretation of what stevebmark said would include older additions to the language such as from this section of the article:
---------
Here’s a list of changes and additions that I certainly never needed (or wanted):
- Ruby 1.9 hash literals
- Refinements
- %i literals
- Rational/Complex literals (2/3r, 2+1i)
- Endless ranges (1..)
- Safe navigation operator (&.)
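For the curious, most of the quoted additions look like this in practice (a quick sketch; the variable names are mine):

```ruby
words   = %i[red green blue]   # %i literal: array of symbols
third   = 2/3r                 # Rational literal, Rational(2, 3)
complex = 2+1i                 # Complex literal, Complex(2, 1)
range   = (1..)                # endless range (Ruby 2.6+)

user = nil
safe = user&.to_s              # safe navigation: nil receiver short-
                               # circuits to nil instead of raising
```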
I think what drstewart is getting at is the vagueness of stevebmark's claim:
> I really appreciate other programming language ecosystems... that have abandoned things like this
He doesn't articulate what "things like this" are, and instead leans on bbatsov's critique, though it's not clear how. It seems you're suggesting he means "the particular syntax additions bbatsov doesn't like," but that doesn't make sense, either—you're telling me he means that other language ecosystems have abandoned Ruby 1.9 hash literals, refinements, %i literals, etc.?
Well, I'm sorry if it seems to you like this, but there were plenty of features I didn't like in the past as well. :D I just felt that recently things took a turn for the worse. Language design doesn't get easier with time - quite the contrary, because your options constantly shrink.
If that's your only take away I guess I've failed to communicate my points.