I write Ruby professionally, and while I agree that this can lead to some very sloppy code, I don't find that it causes a lot of problems in practice.
One place where we do use nil conversions is logging and error handling — you can cast anything, including nil, to a string, and that is useful in debugging.
Probably more often than nil conversions, we use the safe navigation operator &. as a way of working with "might be nil" values without adding too much verbosity.
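For example (user here is just an illustrative object that may be nil):

  city = user&.address&.city   # => nil if user or its address is nil
  city = user.address.city     # NoMethodError if user is nil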
I do miss the way languages like Swift think about Optional values, where you have to be much more explicit about whether or not you are dealing with `nil`. It's easier to reason about and leads to clearer codepaths. And in general, I miss stronger type systems in Ruby.
But in a professional environment with relatively high developer expertise in the language and relatively clear style guides, we're still very productive in Ruby and don't suffer hugely from the typing issues. I guess eventually you learn to work around the warts in whatever language you use.
I used to write Ruby professionally, and found the lack of any type system and duck typing to cause pretty severe problems. Not knowing if the variable you were dealing with was a String or a URI was great until it very suddenly became not so great.
But this kind of API, where you have 'safe' operators that don't throw, is a pretty good convenience, since forgetting that a variable might be nil is pretty common (as is passing nil into a method that never previously accepted nil and was never tested with it).
There's a better way to do it: what Rust does with its Some()/None enum, forcing you to unwrap the value and handle the None case explicitly. That's much better than 'unsafe' operators that just blow up and throw/panic if you don't think about the edge condition. (So I wouldn't consider the Python alternative any better: you're just trading one possibly bad behavior at an edge condition for another, forcing programmers and code reviewers to constantly keep it in mind, and blaming them when it goes wrong, with one group or the other feeling superior about their choice of when to blame the programmer for forgetting to handle the behavior.) I think this is also roughly equivalent to languages like C# 8.0+ that have nullable/non-nullable reference types and static checking to make sure you're using them right.
I'm not surprised you found Ruby uncomfortable when you describe Ruby as not having a type system. It might just not be the right language for you. That's fine.
> Not knowing if the variable you were dealing with was a String or a URI was great until it very suddenly became not so great.
Nothing stops you from declaring that you expect a given type in Ruby; it's just code, not mandatory. You can certainly write code to impose mandatory types in Ruby (for a while, writing such a library seemed like a rite of passage for Ruby developers), but the reality is that they buy far less than people think, something most only realize after writing one and/or spending some time using them.
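A minimal sketch of what I mean, with a made-up fetch method:

  require "uri"

  def fetch(url)
    raise TypeError, "expected a URI, got #{url.class}" unless url.is_a?(URI)
    # ... actual work ...
  end

  fetch(URI("https://example.com"))   # fine
  fetch("https://example.com")        # TypeError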
> There's a better way to do it: what Rust does with its Some()/None enum, forcing you to unwrap the value and handle the None case explicitly
Nothing stops you from doing that in Ruby, and in fact there are several monad implementations for Ruby that provide building blocks for those kinds of patterns if/when you want to use them.
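For example, with the dry-monads gem (one such implementation; this assumes its 1.x Maybe API):

  require "dry/monads"
  include Dry::Monads[:maybe]

  # Maybe() wraps nil as None and anything else as Some
  user_input = { name: nil }
  greeting = Maybe(user_input[:name]).fmap(&:upcase).value_or("anonymous")
  # greeting == "anonymous"; with name: "alice" it would be "ALICE"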
> I'm not surprised you found Ruby uncomfortable when you describe Ruby as not having a type system. It might just not be the right language for you. That's fine.
Thought we were supposed to "respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize" around here and not nitpick misstatements.
I did not suggest you believe that Ruby has no type system at all. But your statement still implies a view of how Ruby treats typing from which it follows fairly naturally that you will find Ruby uncomfortable. There's no value judgment in that, nor nitpicking or criticism; just an observation that Ruby might just not be right for you.
I too wrote Ruby full-time and professionally. For over a decade.
The sheer amount of code I wrote just to protect against nil errors is staggering. Every third function or method had guard clauses and/or tests, just to ensure that the data was in the expected shape (that it quacked and walked like a duck).
I spent, I think, hundreds, maybe thousands, of hours debugging actual "nil errors" and weird conversion errors that happened in production. Sometimes legacy database records, sometimes weird race conditions causing corrupt data, sometimes (most often) just stupid mistakes by me or another dev.
All of these -all!- would have been caught by a type system and type checker like Rust's. All.
Which then leaves just the huge category of stupid bugs that I'll make in the business logic.
But now, working with Rust, the LSP tells me within milliseconds that I overlooked something that in Ruby would've been a data error. Often my Ruby test suite would find these after minutes of running. Sometimes we would find them after deploying to prod and seeing havoc. Sometimes they'd take weeks before popping up.
I'm convinced Rust makes me develop much faster because of this.
If you need to test against nil in "every third function or method", then that reflects broken abstractions and/or very dated Ruby. For "maybe nil" situations, modern Ruby code will lean heavily on safe navigation, dig, or wrapping values, and sometimes monads or similar.
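For example:

  config = { server: { host: "localhost" } }
  config.dig(:server, :port)    # => nil
  config.dig(:missing, :port)   # => nil, no NoMethodError
  config[:missing][:port]       # NoMethodError, since config[:missing] is nil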
If that's not enough for you, Ruby has optional type checkers available.
The position of “I regularly don’t know if I have a String or a URI” is indeed a commonly raised issue. It breaks down into two underlying problems:
1. It implies a lack of conventions around interfaces and variable names.
2. The surrounding statement also implies that it was not easy to figure out quickly (the code was hard to exercise synthetically).
Highly successful workflows in loosely typed environments like this depend upon a strict culture to ensure both of these aspects are under constant improvement.
This is difficult to scale, and while improving these aspects helps workflows in other languages too, their impact in these environments is much higher. It's part of the reason why you would hear _so much_, density-wise, about code patterns, code health, testing and so on in the ecosystem in its peak era; all of those things were really just mechanisms for trying to achieve the two factors above.
In the grand scheme of things, it would be difficult to come up with something worse than having both null and undefined in JS, while also being able to define variables as undefined.
> One place where we do use nil conversions is logging and error handling — you can cast anything, including nil, to a string, and that is useful in debugging.
Lots of languages can format anything to a string though. Even Java will let you write
"foo " + null
And logging / error handling / debugging seems like the context where you would least want nil to become an empty string (and thus be confusable with an actual empty string, or not appear in the output).
I agree with the grandparent's general point, but I would say that it is string formatting, more precisely, where these nil conversions are useful.
For example, you can write something like:
bananas = 4
puts "you have #{bananas} banana#{'s' if bananas != 1}"
And this works because: `if` is an expression, `if` evaluates to `nil` if the condition is not met and there's no `else`, and `nil.to_s` is an empty string.
> And logging / error handling / debugging seems like the context where you would least want nil to become an empty string
Yep, 100% agree. And I think Ruby's designers agree too, since the language has the shortest method possible, `p`, for print-debugging. You can do `p something` and that'll print a useful string for debugging (`""` for an empty string, `nil` for nil, `[]` for an empty array, etc).
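Compare:

  puts nil     # prints an empty line
  puts ""      # prints an empty line, indistinguishable from the above
  p nil        # prints: nil
  p ""         # prints: ""
  p [1, nil]   # prints: [1, nil]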
There are many stronger type systems for Ruby. And while that is partially a joke, there are various Optional type implementations, though of course you do then need the discipline of wrapping values as needed.
I don't tend to use it myself because between safe navigation and things like Array() etc. it's rarely needed.
When I first wrote Ruby and was new to programming, and I got the typical “undefined method X for NilClass”, I did the logical thing: I defined X on NilClass. I still have nightmares from that self-inflicted madness.
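For anyone who hasn't seen that particular horror, it looks roughly like this (the method name is made up; don't do this):

  class NilClass
    def name         # whatever method the error complained about
      ""
    end
  end

  nil.name           # => "" (the NoMethodError is gone, and so is your error signal)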
I agree if it is known that the value can only be Hash or nil. But to_h often makes sense when you need a Hash, and there are other possible values too.
So it's often a) not available, b) semantically not an alternative.
In particular it very specifically does not return the default only when the object is nil, and I have plenty of code where people thinking they're the same would cause serious breakage.
This is because in Common Lisp, nil is an instance of the null class, the one and only instance. You can specialize method arguments to the null class, so that they capture nil arguments, which is very useful and good. (Mind you, I can't think of anything built into the language which exploits this, let alone for converting nil to 0.0.)
Some things in Ruby are patterned directly after Lisp. You notice it here and there, like the #<...> notation for unreadable objects, or calling small integers fixnums, ...
In the world of static-reference OOP like Java and C++, there is the Null Object Pattern.
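The same idea translates directly to Ruby; a minimal sketch (NullLogger and configured_logger are illustrative stand-ins):

  class NullLogger
    def info(*)  end    # accept anything, do nothing
    def error(*) end
  end

  logger = configured_logger || NullLogger.new
  logger.info("no nil checks needed downstream")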
There is nothing “dangerous” about this if you know the behavior.
In Ruby nil is a representation of emptiness, so when using the to_* conversion methods you will get an empty instance of the type. If you don't want this behavior, use a constructor instead.
FYI, for a long time there was no nil.to_h, and there was some consternation about adding it. But finally it was embraced, and this emptiness notion solidified into the language for good.
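Concretely:

  nil.to_s   # => ""
  nil.to_a   # => []
  nil.to_h   # => {}   (added in Ruby 2.0)
  nil.to_i   # => 0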
> There is nothing “dangerous” about this if you know the behavior.
There’s nothing dangerous about any functionality in any language if you know the behavior. Not that I’m saying this is a particularly dangerous functionality of Ruby (coming mostly from other dynamic/scripting languages, it seems refreshingly explicit if nothing else), but I don’t think expertise in usage of a footgun is what should determine whether it’s a footgun.
Integer(nil) will raise a TypeError, and Integer("4x") an ArgumentError, but Integer("4") returns 4. So when you want that behaviour, just use Integer(), or pass exception: false to Integer() if you want it to return nil instead of raising an exception.
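That is:

  Integer("4")                      # => 4
  Integer("4x")                     # ArgumentError
  Integer(nil)                      # TypeError
  Integer("4x", exception: false)   # => nil (Ruby 2.6+)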
These are pretty closely aligned with “zero values” in Go. I think “zero value” is clearer terminology than “carrying the concept of nullity”. It’s a lot more natural to ask “what’s the zero value for this type” than “how is the concept of nullity carried to this class?”
I know companies that write banking software in this language. I wouldn't. But I guess if you write enough unit tests on the software and remember to avoid gotchas like this you'll probably be fine, and I think that's how they feel. To each their own, I guess.
Having worked in a handful of fintechs that use Ruby extensively, this generally isn't a problem in practice. In my experience there's a Money type that gets passed around rather than raw numbers, and that Money type will enforce a bunch of constraints, among them non-nilness.
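A stripped-down sketch of the idea (names and constraints are illustrative, not any particular company's implementation):

  class Money
    attr_reader :cents, :currency

    def initialize(cents, currency)
      raise TypeError, "cents must be an Integer" unless cents.is_a?(Integer)
      raise ArgumentError, "currency is required" if currency.nil?
      @cents = cents
      @currency = currency
    end

    def +(other)
      raise ArgumentError, "currency mismatch" unless currency == other.currency
      Money.new(cents + other.cents, currency)
    end
  end

  Money.new(100, "USD") + Money.new(250, "USD")   # fine
  Money.new(nil, "USD")                           # TypeError at the boundary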
I really don't consider this to be a gotcha. If you explicitly cast to type X, it isn't surprising to get back an object of type X, right? These aren't implicit type conversions à la the JavaScript + operator.
It is if the object can't be meaningfully cast to the type. I'd expect to get an exception. That's why they exist.
For example, consider explicitly casting the string "abc" to an int. Python throws a ValueError. Ruby silently ignores the problem and gives 0. I consider the Ruby behavior to be a gotcha.
If you want to "explicitly cast" to an Integer only when the string strictly represents an int, in Ruby you shouldn't be calling to_i. The Ruby behaviour is only a gotcha if you don't know the proper way of doing this in Ruby.
If you want an Integer or an exception, you call Integer(). If you want an Integer or nil, you call Integer() with exception: false. If you call to_i, you're explicitly asking for a best-effort conversion, and shouldn't be surprised that that's what you get.
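For example:

  "abc".to_i       # => 0   (best effort)
  "42abc".to_i     # => 42  (parses the leading digits)
  Integer("abc")   # ArgumentError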
I specifically avoid mentioning Crystal because there is a group of people on HN who don't like Crystal being mentioned in a Ruby thread. But yes, Crystal would be close, if not "it".
Which part/domain of "banking software" it is probably matters as well; e.g. for a web frontend or other gateway to more resilient backends it might make enough sense.
As others have mentioned, the safe navigation operator (&.) obviates the need for converting nil to "default" cast values. Would be nice if Integer(nil) worked like Array(nil) though.
I go back and forth on this. We "need" what Array() currently does. Arguably we need it, and typically use it, far more than Integer(). I'd be inclined to argue both represent the most commonly desired (in Ruby, anyway) treatment of their input. Silently converting nil to an integer is very often very bad, but being able to pass either a single value or an array is often very good, and in those same cases guarding against nil is also very often desirable.
At the same time it might be nice if they had names that were differentiated to indicate the behavioural difference, but I don't know what a better, similarly short option would be.
We also do have Integer.try_convert and Array.try_convert, which both consistently refuse to convert nil (returning nil on failure instead of raising an exception), but I very rarely see them used.
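For reference:

  Array(nil)        # => []
  Array(1)          # => [1]
  Array([1, 2])     # => [1, 2]
  Array({ a: 1 })   # => [[:a, 1]]  (hashes go through to_a, a known surprise)
  Integer.try_convert(nil)   # => nil, never an exception
  Array.try_convert(nil)     # => nil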