I don't think the ability to write expressions similar to natural language, like "2.days.from_now", should be called "expressiveness". Real expressiveness is about what SICP calls "means of abstraction" and "means of combination", not necessarily about having very nice syntax. In general I think these considerations would benefit from distinguishing a bit more between syntax and semantics. As another example, I have been doing Ruby professionally for almost 10 years now, and despite this I have to strongly disagree with the conclusion of this quotation:
> Python is a beautiful, clean language. But the same restrictions that make it nice and clean mean that it’s hard to write beautiful, clean libraries. Ruby, on the other hand, is a complicated, ugly language. But that complexity allows you to write really clean, nice, easy-to-use libraries.
The Ruby metaprogramming magic makes for really nice syntax, which is what made me choose it over Python all those years ago, but it complicates understanding of what's going on. As I matured as an engineer, maintained a codebase over several years, and several times spent long hours debugging issues caused by metaprogramming pitfalls, I shifted more and more towards the Python approach of building APIs in the simplest possible way: sticking as much as possible to simple function calls, with no really fancy metaprogramming just to get cleaner syntax. Compare, for example, Rails's has_many :foo with Django's models.ForeignKey(Foo), and the internal implementations of each. The same goes for using "BDD" testing frameworks like RSpec instead of traditional asserts: what is the nice syntax worth if tricky bugs hit you in tests, exactly where you most want to avoid them? Libraries like RSpec might be "nice", but they are certainly not "easy-to-use", and I don't think I would call them quite "clean", if I look at their source code.
In other words, while clean syntax is alluring, in the long run clean and simple semantics is what I think is really important.
Rubyists try too hard to emulate natural language syntax, but in the process forget that natural languages are not particularly clean - in fact, more often than not they are highly dependent on contextual information, not just semantically, but even syntactically.
Ultimately, natural languages and programming languages serve very different purposes: communication in natural languages is highly dependent on context for resolving ambiguity (e.g., if we took this discussion to a non-programming forum, everybody else would be like "WTF is going on?"), while programming languages are all about making it possible to be pedantically precise without reducing convenience.
> In other words, while clean syntax is alluring, in the long run clean and simple semantics is what I think is really important.
I agree. And I think that Rust has one of the simplest semantics around in programming languages: the method resolution rules are quite simple, because there is no method overloading and all extension methods have to be explicitly imported to be used. These two restrictions are important for maintainability: the former means that the compiler "never guesses"—multiple applicable methods are a compile time error—and the latter means that you never have to guess about where a method comes from, because it was always defined either alongside the type or defined as part of a trait that was explicitly imported at the call site.
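Here's a minimal sketch of the mechanism, with a made-up `Days` trait standing in for the real active_support::Period (and a u64 suffix just to pin the integer type for the example):

use std::time::Duration;

// Hypothetical extension trait; the real one lives in rust-activesupport.
trait Days {
    fn days(self) -> Duration;
}

impl Days for u64 {
    fn days(self) -> Duration {
        Duration::from_secs(self * 24 * 60 * 60)
    }
}

fn main() {
    // This compiles only because `Days` is in scope here; code in another
    // module would have to `use` the trait before calling `.days()`.
    println!("{:?}", 2u64.days());
}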
A former developer at our company was someone who was really excited about using Ruby metaprogramming features to the fullest extent possible to enable concise code. He wrote a bunch of subsystems based around a few custom frameworks.
Whenever a project takes me into modifying that codebase I get that sinking feeling you get when you realize you left your wallet at a restaurant or that your car has been towed.
Django's models are entirely fancy metaprogramming behind the scenes. When it works, it works, but the code that implements it is not easy to understand by any means.
Django models are definitely more self-documenting than Rails models, which seems like a result of the Python legibility culture.
Yes, but Django model metaclass magic is context-independent; the logic resides entirely in the model and does not affect the rest of the environment, which is something Ruby's includes can do (and they give you no warning when method overrides break stuff).
Exactly. In Rust you can't (easily) pass a named function to map because of all of the pointery stuff going on. Rust clearly has its uses, but if you can't write higher-order functions then expressive it is not.
You can write higher-order functions in Rust - in fact, much of the standard library relies on them. What you cannot do is use named and anonymous functions interchangeably.
It is a bit weird, and it's one of the parts of the language still being worked on [1]. What it comes down to is that while they're both bits of machine code being executed, their environments are different. I'm a bit out of date as there's been movement in this area since I've written much "hard core" Rust code, but the basic problem is that what the grandparent poster calls an "anonymous function" is a closure, with a captured environment. Thus said closure has different interactions with lifetimes, memory management, and program safety.
Since safety and soundness are among the primary goals of Rust, problems like this do crop up, and they are what I see as the hardest part of designing a usable language.
To be frank, I am not sure of the specifics. At first I thought it was because anonymous functions have a notion of the lifetime of the variables they close over, which named functions do not. But since named functions do not close over anything in the first place, this suggests that one should be able to use a named function wherever an anonymous function is expected.
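For what it's worth, in today's Rust both forms are accepted by map via the Fn traits; the remaining difference is just that only a closure can capture its environment. A minimal sketch:

fn double(x: i32) -> i32 { x * 2 }

fn main() {
    // A named function can be passed wherever a compatible closure fits...
    let a: Vec<i32> = (1..4).map(double).collect();
    // ...but only a closure can capture variables from its environment.
    let factor = 3;
    let b: Vec<i32> = (1..4).map(|x| x * factor).collect();
    assert_eq!(a, vec![2, 4, 6]);
    assert_eq!(b, vec![3, 6, 9]);
}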
I don't think it's the similarity to natural language, but rather expressing things directly.
I'd agree that unconstrained metaprogramming can make code hard to read, and Ruby especially tends to go overboard. One of the goals of Objective-Smalltalk is to create a language with cleaner extension points, making semantically consistent extension/adaptation easier. We'll see how it works out :-)
> Real expressiveness is about what SICP calls "means of abstraction" and "means of combination", not necessarily about having very nice syntax.
I was thinking specifically of the expression problem:
> The Expression Problem is a new name for an old problem. The goal is to define a datatype by cases, where one can add new cases to the datatype and new functions over the datatype, without recompiling existing code, and while retaining static type safety (e.g., no casts).
Now, I'm not saying Rust solves the expression problem, nor that what I'm calling "expressiveness" here maps to it, even. Just that we can easily argue over what "expressive" means.
> Libraries like RSpec might be "nice", but they are certainly not "easy-to-use", and I don't think I would call them quite "clean", if I look at their source code.
The whole point of the quote is that the source code is ugly. I think we're in violent agreement here.
This paraphrased quote was quite ambiguous to me, but the conclusion of having preferred Ruby in the end, without further commentary, makes it seem like the nice syntax is worth the complications. I would say those "intuitions" of initially liking or disliking a language because of its syntax are completely misleading and should not be trusted.
Also, it's not only about ugly internals. I wrote several RSpec tests that looked like a perfectly reasonable "natural-languagish" expression of what I wanted, but did not work at all, because the tricky mechanics underneath happened not to support that particular combination of incantations. In the end you have to understand all the ugly internals to know which of the "nice" calls will work and which won't, and I constantly see people shoot themselves in the foot for lack of this understanding, e.g. not wrapping some bit in a lambda in Ruby and having something evaluated at the wrong time.
> This paraphrased quote was quite ambiguous to me, but the conclusion of having preferred Ruby in the end, without further commentary, makes it seem like the nice syntax is worth the complications
It doesn't actually complicate the language to allow stuff like this: it just naturally fell out of having traits and methods, which we needed for other reasons.
You can write 2.days().from_now() in almost any OO language which allows extending primitive types, so it isn't really very impressive. There are lots of examples of languages with very powerful means of abstraction/combination whose syntax people don't commonly consider pretty or natural: Lisp, Erlang, Prolog. There are also programming languages which imitate natural languages to some degree and aren't very expressive, like COBOL.
I think his declaring something as impressive (or not) in an objective way is fraught with peril, since impression is always a subjective feeling. If it impressed you, it is impressive.
That being said, 2.days.ago is not a particularly differentiating bit of magic because, as has been stated elsewhere, every object-oriented language that allows extending primitive types will allow this.
The main question here, though, is whether 2.days.ago is a good design and whether it really demonstrates expressiveness, which, as has also been stated above, relates more to the concept of "small pieces, loosely joined".
2.days.ago breaks encapsulation and abstraction and inverts the flow of control (parameter calls function).
Sometimes I think I must be a humongous idiot because adding a "days" method to Fixnum smells like the stupidest, most counterproductive, cutesy bullshit to me, but some pretty smart folks think it's great (that's a compliment btw).
Stuff like this is the opposite of expressive to me, because cute expressions that read like English scream "please take a moment to analyze this and make sure it does what it sounds like." I am instantly suspicious of code that is so expressive that you can just intuit its purpose without thinking it through. Code is too subtle to treat that way.
I think I agree with the thrust of what you are saying here. Just as English has rules for what you do with nouns and verbs etc., code has rules too, and they are different from English's.
2.days().from_now()
says that we are performing the action 'days' on the object '2'. What could that possibly do? Well there're a bunch of options and none of them are obvious. It's weird and counter-intuitive. And then we go and perform the action 'from_now' on whatever the result of that was.
I don't mind 'fluent' style interfaces as long as they don't mess up the normal meanings of what objects and methods are, but sadly they usually do. Another classic sign of a bad interface is that with many of these fluent interfaces it's not obvious whether a particular thing should be a chained method call or argument, and if it's an argument, whether it should be got by calling a method, using a field or what.
Code should read like clear code. English should read like clear English. These are not the same things, and efforts to make them the same usually make a mess of both.
> Code should read like clear code. English should read like clear English. These are not the same things, and efforts to make them the same usually make a mess of both.
This stinks of principle over pragmatism.
Using the sugar offered by expressions like this requires a bit of knowledge of the development environment. I accept that it's a bit of a different syntax from some other languages, but that doesn't make it unclear at all.
Speaking of Ruby (where ActiveSupport is from), we all know that it's a language that focuses on providing flexible syntax and "multiple ways of doing things", in contrast to some other languages which restrict flexibility to provide more consistently structured code. I don't think it's a bad thing at all to have these options available, and I often find my intentions are much clearer when writing, say:

if post.created_at < 5.minutes.ago
That reads like "if post was created less than five minutes ago", i.e. if the post is more recent than five minutes. However, if "5.minutes.ago" gives you a timestamp, as is reasonable, then that actually means "if post was created more than five minutes ago", i.e. the very opposite of what it would mean in English. Whoops!
A more sensible syntax might be:
if post.creation_date < now - 5*minutes
"ago", "from_now", "since_now" etc. are especially despicable in a language where you can just use + and -.
Wrapping that condition up in a well-named model method seems superior to everything I've seen here. There's a concept behind the if statement that should go in the model. I realize this is sort of off-topic, but it kind of negates the whole discussion, since you can wrap up the cute/ugly bit and test it alone. :)
Yes, it's wrong. No amount of unit testing or isolation is going to change the fact that whenever you read code like that, your brain will come up with two conflicting meanings, because these cutesy methods make it look too much like English, and English has different rules.
The English parse tree is ((less than (five minutes)) ago), whereas the code parse tree is (less than ((five minutes) ago)).
A capital idea, my friend! But I'll do you one better: why not just wrap up the entire program inside a function called "main"? Clearly that would make all syntax discussions moot, now and forever.
I know this is a little nitpicky, but your Python example has way more brackets than are strictly necessary.
if post.created_at < datetime.now() - timedelta(minutes=5):
I agree with your argument in principle, though. Just wanted to point out that if you're trying to make a point about readability / ease of use with code examples, you should try as hard as possible to optimize each example, so you don't accidentally create a straw man.
Also, one criticism of the Ruby version is the invisibility of the types involved. What type is returned by 5.minutes vs 5.minutes.ago? How do I know I can compare the latter to a date attribute from ActiveRecord? The Python version is more verbose, but I think it's clearer that I'm using two datetimes and a timedelta.
In both cases you have to either remember a lot of details, play with it in the REPL, or look at the docs as you are writing it. Once it is written, the Ruby code is far easier to read.
In Rust, you have to import a trait (in this case active_support::Period) into the file that uses 2.days(). It's not a global change.
If you see 2.days(), you know where to look for the definition, and the type system means that you can easily learn what 2.days() returns. In this case, 2.days() returns a `TimeChange` object (https://github.com/wycats/rust-activesupport/blob/master/dsl...).
Yes, I took the fact that this was not a global modification of a widely used data type as a given, since to do so would be crazy in any language.
And sure, you can always look up in the type system or (in a less well typed language) the documentation to see what it does. The point of writing clear code is that doing so should be a last resort and not something you force on developers in order to understand what's going on.
Readability is what we're talking about here, and TimeChange.days(2) or new TimeChange(2, Days) are both more readable options than 2.days() because a programmer knows what is happening without querying information that isn't written in front of them.
> And sure, you can always look up in the type system or (in a less well typed language) the documentation to see what it does. The point of writing clear code is that doing so should be a last resort and not something you force on developers in order to understand what's going on.
Well, all you have to do in Rust is to look at the imports, since you have to import the trait defining the method at the call site. Because Rust modules never span multiple files, and imports must be at the top of the block that they're used in, that import will always be before the call. There are also no global imports in Rust (at least, without using a deprecated feature flag), so the trait in question is always named explicitly.
Numeric#days is an ActiveSupport method, not core Ruby. But yes, it does patch the behaviour into a core class.
This is a bit of a legacy of Ruby programming, and it's definitely bad to pollute the global namespace. However, I expect we'll see eventual use of Refinements (http://www.rubyinside.com/ruby-refinements-an-overview-of-a-...) now that they are available; this will allow classes to override core methods without affecting other classes out of scope.
Refinements do a little less than previously advertised three years ago, and as far as language features go I don't think they're worth the performance implications.
This must be a question of aesthetics because the 2.days().from_now() seems perfectly obvious to me. Maybe someone can help me understand why it seems confusing?
I don't think that's a problem: avoiding features or fiddling with syntax in order to make the code more accessible to people who are not familiar with the language or its libraries isn't going to gain much in the long run. Languages and libraries should be geared towards long-term developer happiness, rather than a shallow learning curve.
Anyway, I know as close as possible to nothing about Rust. It seems blindingly obvious to me that `2.days()` returns some kind of duration, and `2.days().from_now()` returns some kind of time. So I've looked at the code, and that appears to be correct - `2.days()` returns a `TimeChange`, and `2.days().from_now()` returns a Time. I can't think of any other reasonable interpretation.
Perl's autobox is lexical, so it is also not a global change. For example:
use 5.016;
use warnings;
{
    use autobox;
    use autobox::DateTime::Duration;
    say 2->days->ago; # 2013-12-27T14:03:47 (stringified output of DateTime object)
}
say 2->days->ago; # Can't call method "days".... (runtime error)
You're saying `code` but what you really mean is "Java" or your favorite OOP paradigm. If someone comes up with different rules for what `2.days()` means then they are valid rules (and we can discuss whether they provide benefits to the other ones).
Not every nonsense 'rule' that someone comes up with is 'valid'. Rules need to be consistent.
And yes, I was talking OOP since we were talking about ruby. Functional or logic programs also have their own rules that tell you how to understand them. If you want to create good programs you shouldn't mess with those basic rules in order to make the code look more like something it actually isn't.
> 2.days().from_now()
>
> says that we are performing the action 'days' on the object '2'. What could that possibly do?
It does what the language definition says it does: it tells the compiler to search through traits that were explicitly imported to find a method called "days" and calls that. (Since "int" is a built-in type, it has no methods except for those provided via traits that were imported, so traits are the only potential source of these methods.) If there is more than one matching method, Rust never guesses: it instead signals an error.
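A quick sketch of the "never guesses" part, with two hypothetical traits that both define a `days` method:

trait MetricDays {
    fn days(self) -> u64;
}
trait SiderealDays {
    fn days(self) -> u64;
}

impl MetricDays for u64 {
    fn days(self) -> u64 { self * 86_400 }
}
impl SiderealDays for u64 {
    fn days(self) -> u64 { self * 86_164 }
}

fn main() {
    // 2u64.days(); // error: multiple applicable items in scope
    // With both traits imported, you must disambiguate explicitly:
    let secs = MetricDays::days(2u64);
    assert_eq!(secs, 172_800);
}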
> I don't mind 'fluent' style interfaces as long as they don't mess up the normal meanings of what objects and methods are, but sadly they usually do.
In this case they don't. It falls out of what traits and methods already provide in Rust.
One possible answer: static typing + autocompletion.
Autocompletion clearly shows what you can and can't chain, but admittedly it helps a little less when the DSL asks for a parameter of some type and you don't know how to start building one. Perhaps one way to evaluate the learning curve of an API is to count how many non-standard types appear in the signatures of its methods, excluding the `this` parameter.
You're conflating different things here. `Date.today - DateTime.days(2)` would be just as clear as `2.days.ago` without muddling the semantics of integers.
If you can get exactly the same benefit, then why pay the cost of overriding behavior of the integer type and dealing with hidden black magic that makes your program harder to reason about?
I'm not sure which example you're talking about, but there's no additional cost in the Rust example. And it's not hidden; it's in the trait definition, which is scoped.
How about this: 2.days.from_last_update? What does it do? What is going on with "from_last_update"? Is it actually reaching out to a database to find the last updated value? Which database? How do you configure it?
All these questions and more are why you should question "english as a DSL" methods. Granted, there's a ton of great libraries out there which do this well. Take Unfiltered's handling of parsing out requests for pattern matching:
case POST(Path(Seg("api" :: "calls" :: name :: Nil))) =>
this is "englishy" without having you scratch your head. I've also seen
case POST(Path("/api/auth")) & Authenticate(user) =>
which then causes a DB access to happen in a configured implicit value. Now try to reason out why something is not hitting a URL or being denied when you first get into a project. Unless there's someone to explain it to you, it just happens like "magic."
Your example is a sort of strawman argument, probably constructed without you being aware of it. "2.days.from_now" is indeed not free of side effects; however, "now" is a function that depends on the computer's internal clock, and it is generally understood what it does, the clock being a sort of global state that mutates and is generally available. "2.days.from_last_update", on the other hand, implies usage of local mutable state, passed implicitly and highly dependent on the context you're in.
Also, this example is simply wrong ...
case POST(Path("/api/auth")) & Authenticate(user) =>
It's wrong, because this is a route matcher, so if the user is not authenticated, the server would throw an HTTP 404 Not Found status. HTTP 404 Not Found is not meant to be thrown in case the user is not authenticated. There are more relevant, more correct status codes for that, like 401 Unauthorized or 403 Forbidden, or heck, in case of classic websites, a 302 redirect to a login page is in order. So you're basically giving broken examples and calling them "magic", but the examples are obviously broken, the first one in terms of API design, the second one in terms of functionality.
I really do understand your hint. But the examples you're giving are bad because they rely on totally non-obvious mutable state that's passed implicitly. This is indeed "magic", however it has nothing to do with syntactic sugar.
Btw, in Scala 2.10, thanks to string interpolation, you can also construct route-matchers such as this one [1]:
case POST(p"/api/calls/$name/") =>
Doesn't work by default, you really have to build your own stuff for this to work (see the linked example). Or if you want to throw in regular expressions, you can either do this:
case POST(p"/api/calls/${Integer(id)}/") =>
Or you can have like a mini-DSL with regular expressions built-in:
case POST(p"/api/calls/$id<\d+>/?") => // note the optional ending slash
Yeah, you can do this. And it's certainly much nicer to me, more flexible and you can also make it faster than pattern-matching on Lists, as in your Unfiltered example. Is this magic? I don't see a local, mutable and implicitly-passed context, so for me it doesn't qualify.
Also, whenever I see "2.days" or "4.kilobytes" in Scala code, it's certainly nicer than seeing "172800 /* 2 days in secs */" or "4096 /* 4 kb */". Given that these are statically type-checked extension functions that do not modify the original Int type and that do not rely on some mutable local state passed implicitly, is this magic? My IDE doesn't think so.
I think it basically comes down to trust. Can you trust that the person who wrote the library isn't a dingbat, and that they've thought out the edge cases and composability of the interface they've designed, and that they've taken care that code actually does what it looks like it does, and confusing cases are either inexpressible or at least look confusing so you know you have to look closer. In short, can you trust the author to design interfaces well.
Personally I don't think it's inherently good or inherently bad; it's just one of those things that makes good code better (concise & expressive) and bad code worse (inscrutable & brittle). Glass-half-full and glass-half-empty are both valid opinions on the matter, and I strongly suspect which way you lean depends heavily on the quality of the other people's architecture you have to deal with on a daily basis.
"Let's just throw away the single responsibility principle and give Integer a method that multiplies it by the number of seconds in a day, and another one that adds it to the current day to get a new Time or Date object. Obviously that's something that an integer should be concerned with."
That's kind of the philosophy of Ruby though, and ActiveSupport by extension.
Where it makes code clearer and more concise, or where it simplifies an often-used idiom, and where it outweighs the benefit of sticking to principles—as I believe it does here—then breaking those principles is acceptable, in languages where coding conventions support this.
I don't think `2.days.ago` makes code any clearer or appreciably more concise than, for instance, `Date.today - Date.days(2)` (or even, to meet you in the middle, `Date.today - 2.days`).
We must not mistake principles and guidelines for absolute rules. I'm not judging this particular case, but I see the SOLID guidelines treated as absolutes far too often.
> Now, I should say that I almost never use 2.days, but the point is that it’s a benchmark of expressiveness: if you can write it, you can do all sorts of other flexible things.
I tend to use MiniTest, not RSpec. But I'm very glad RSpec can be built.
I use https://github.com/sconover/wrong with MiniTest which might qualify as another example of Ruby "tricks" to achieve a different interpretation of expressiveness.
The folks who "think this is great" are the same folks who self-submit their blog entries to news.ycombinator.com: self-promoting quacks. Bournegol has been around for a while:
I found this a lot less problematic in scala with IDE support. You can write 2.days, with appropriate imports, but the IDE will highlight the implicit conversion on the 2 (a green underline seems to be the standard visual representation), and you can click through to the definition.
Extension methods in C# do the same thing and have been around, what 5ish years now? Since .Net 3.5.
Here's an example implementation:
public static class DateExtension
{
    public static TimeSpan Days(this int daysCount)
    {
        return new TimeSpan(daysCount, 0, 0, 0); // days, hours, mins, secs for those wondering
    }

    public static DateTime FromNow(this TimeSpan addTime)
    {
        return DateTime.Now.Add(addTime);
    }
}
usage is the same as Rust:
2.Days().FromNow();
Basically if this impresses you, you really should give modern C# a whirl as there's a lot more you can do than that.
Much of how it does stuff now is how my perfect language would do it. There are bits of Ruby I love, and Rust is looking more exciting to me than Go, but the C# team really have done some amazing work.
I am certainly not implying that Rust is the only language that supports this kind of thing. Haskell is where I first saw some of this. I'm just happy that Rust is doing it well.
C# is a great language, but I don't use Microsoft technology, and it's effectively Windows only.
That is just FUD in my opinion. I haven't had any issues with getting my applications working on multiple platforms. Have you had performance issues with C# on iOS?
Be practical. I've had to support WordPress on IIS before; it's always a lot of hassle not using the tech in its native environment. Same with MySQL + .Net. EF, for example, played really poorly with MySQL.
I imagine using Mono is not for the faint-hearted. For example, in the new MS programming language thread [1] today, someone mentioned they ended up giving up on the .Net GUI controls and using GTK# instead because they were so unreliable. Another mentioned that the transition from one SQL driver to another had left a lot of projects hanging in the wind, with SQL breaking on Mono but working on Windows.
I don't think that's a fair comparison. Rust doesn't come with a set of functioning, well-tested and cross-platform GUI libraries and database drivers either.
Running C# the language with the BCL on mono is pretty painless, and offers a ton of functionality.
One problem with extension methods is that you cannot statically express the idea that multiple classes implement the same extension methods. In particular, you cannot make a generic whose type parameter is guaranteed to implement a given set of extension methods. This, in my opinion, reduces the usefulness of extension methods from "potential game changer" to "mere syntactic sugar".
You mean type-classes? Scala and Haskell have them. It's certainly nice to be able to do statically type-checked stuff like:
list.sorted
list.max
list.sum
The problems with languages like C# (or Java) go beyond the lack of type-classes or similar. For example, given a type such as List<T>, you're restricted to defining methods only "for all T", as these languages don't allow you to define methods only "for some T". Examples include methods such as "sorted" or "max", relying on type T to implement some sort of Ordering interface, or "sum", relying on type T to be some sort of Number.
So even if you had a way to specify that types X, Y and Z have implementations for this and that set of extension methods, it would still be of somewhat limited use, given that one wouldn't also be able to use this in code making heavy use of generics. This is much like how interfaces themselves aren't used much in generic code, because generic code is usually some dumb Container<T> where almost nothing about T is known.
The best part about type classes is that you don't have to control/modify the type's implementation, in order to make it implement a certain type-class. Type-classes are like open interfaces. If the builders of C# never bothered to provide you with some common interface you want for built-in types, that's OK, because you can make one up.
Scala also has implicit conversions and "view bounds". Implicit conversions go beyond extension methods and View Bounds allow one to specify that some type X can be viewed as some type Y (either Y inherits from X, or there exists an implicit conversion from X to Y). Something like:
But these are mostly useful for builtins and are deprecated for general-purpose usage in favor of context bounds (type-classes). It's still nice that builtins such as Array[T] are viewable as Seq[T] and Strings are viewable (in true FP tradition) as sequences of chars, allowing one to do this:
"some string".map(_.toUpper) // => "SOME STRING"
What's interesting about the above example is that mapping over a string produces a string, highlighting another Scala feature that goes beyond type-classes:
"some string".map(_.toString)
// => Seq(s, o, m, e, " ", s, t, r, i, n, g)
Correction: Haskell has them, Rust has a limited form of them, and Scala encodes them using a horrible hack.
> The problems with languages like C# (or Java) go beyond the lack of type-classes or similar. For example, given a type such as List<T>, you're restricted to define methods only "for all T", as these languages don't allow you to define methods only "for some T". Examples include methods such as "sorted", relying on type T to be part of the Ordering type-class, or "max", relying on type T to be part of the Number type-class.
Both C# and Java have bounded generics. The problem is that bounded quantification is fundamentally tied to class/interface hierarchies.
> The best part about type classes is that you don't have to control/modify the type's implementation, in order to make it implement a certain type-class. Type-classes are like open interfaces.
I know.
> Scala also has implicit conversions and "view bounds".
To be honest, I am not fond of those. (And, in general, I am not fond of the way Scala does things.) If I want to view an X as a Y, a regular function "f : X -> Y" can already do that. The notion of subtyping roughly corresponds to the idea that certain functions of the form "upcast : Subtype -> Supertype" are worth implicitly applying. I do not agree with this idea.
> Haskell has them, and Scala encodes them using a horrible hack
You can view it like that, my perspective is that in Scala the underlying mechanism (passing the dictionary around by means of implicit parameters) is more explicit. It's much like how prototypes in JavaScript work versus class-based OOP: with prototypes you can have class-based design, but you can also go beyond that, and this is similar to how Scala deals with type-classes. This does have a cost in terms of learning curve, but then you've got extra flexibility. CanBuildFrom is not a type-class.
On implicit conversions and view bounds, I'm not fond of them either. I actually hope that they'll go away completely. I think they made it into the language (and survive) only because Scala runs on the JVM and they had to make the builtins (like Arrays, Strings) behave nicely without much overhead. In Haskell, Strings really are lists of chars. In Scala you can't do it, unless they introduced their own String class, which would have been awful, or if they introduced some sort of implicit conversion mechanism, which they did. And this could have been some magic that Scala's compiler applies for builtins, but personally I dislike compiler magic (that you can't hook into) more than I dislike hacks made for interoperability (if only they got rid of that implicit conversion from Any to String in Predef, which is so freaking annoying).
> You can view it like that, my perspective is that in Scala the underlying mechanism (passing the dictionary around by means of implicit parameters) is more explicit.
What you lose is compiler-enforced coherence. When you use type classes, certain functions come with the expectation that you will consistently pass the same dictionary around between calls. With actual type classes, this is a given: there can be at most one instance of a type class for a type (or tuple of types). With Scala's implicits, you can trick the compiler into passing different dictionaries between two calls that expect the same dictionary.
I am aware that this is a tradeoff: Haskell's type classes are antimodular (because a global set of instances must be kept in order to guarantee instance uniqueness per type) and Scala's implicits can be used in potentially incoherent ways.
(The only way to escape this tradeoff would be to use dependent types to explicitly establish a connection between values and the dictionaries that must be passed around.)
Personally, I find antimodularity annoying, but incorrectness freaking scary. So I prefer Haskell's type classes.
> In Haskell, Strings really are lists of chars. In Scala you can't do it, unless they introduced their own String class, which would have been awful, or if they introduced some sort of implicit conversion mechanism, which they did.
I do not find linked lists of Chars to be all that useful. In fact, most of the time they just get in the way. Fortunately, all I have to do is turn on the -XOverloadedStrings GHC extension.
I only scratched the surface of Haskell, so I don't really know what's annoying and what isn't :-)
I think Arrays in Scala are more important, because Scala introduced its own collections library and you end up manipulating Arrays a lot, since they are so pervasive and it's rather nice for Arrays to be viewed as Seqs.
On the other hand I would have been happy with just extension methods and given that simple extension methods made their way in Scala 2.10, I hope they'll break backwards compatibility at some point and pull implicit conversions out of the language. That isn't possible without a source-code migration tool, à la Go, but I saw that people on Scala's mailing list dream of one, so there is hope :-)
> I think Arrays in Scala are more important, because Scala introduced its own collections library and you end up manipulating Arrays a lot, since they are so pervasive and it's rather nice for Arrays to be viewed as Seqs.
Java's collection library is not well suited for a language that claims to support functional programming, so this was a no-brainer.
> On the other hand I would have been happy with just extension methods and given that simple extension methods made their way in Scala 2.10
This is precisely what annoys me so much about Scala: so many features that overlap with each other in terms of the functionality they provide!
> I hope they'll break backwards compatibility at some point and pull implicit conversions out of the language
What Scala (or, most likely, a successor to Scala) really needs to do is: 1. define clearly what means of abstraction it wants to provide, 2. make sure these means of abstraction do not overlap with each other. For example, subtype polymorphism (inheritance) is a strict subset [1] of ad-hoc polymorphism (implicits), so a decision has to be made whether subtype polymorphism is good enough, in which case implicits must go away, or full ad-hoc polymorphism is necessary, in which case inheritance must go away.
[1] When the Liskov substitution principle is respected, at least.
I understand that Scala was designed to be compatible with Java, but ugly stuff that is there only for compatibility reasons can always be put into the standard library rather than the core language. In the former case, only those who need it pay the price. In the latter case, everybody pays the price.
Scala's features don't really overlap. Defining extension methods (with the new implicit class construct) is just syntactic sugar over implicit conversions.
Also, I find implicit parameters in general to be awesome. Because of implicit parameters, Scala achieves the best marriage available between OOP and FP. Scala doesn't just pretend to be FP. Scala is FP. Think of languages like OCaml or F#, which have two type systems in the same language.
Scala also doesn't side-step covariance/contravariance problems when dealing with generics, like other OOP languages do, providing the tools to deal with both. And we could agree that covariance/contravariance is a problem created by OOP subtyping, but if you've got OOP in the language, then you can't really pretend that all generic types are covariant or invariant and it's just awful to have two type systems in the same language.
> but ugly stuff that is there only for compatibility reasons can always be put into the standard library rather than the core language
That's what they are trying to do with SIP-18. Scala 2.10 emits warnings if you use implicit conversions without importing language.implicitConversions and they plan to transform those warnings into errors in future versions. Scala 2.11 went through a compiler refactoring phase, badly needed as the compiler's internals are messy, the plan being to make it more modular. Things are progressing in the right direction ... if only they got rid of the implicit conversions defined in Predef (and thus imported implicitly everywhere).
> Scala's features don't really overlap. Defining extension methods (with the new implicit class construct) is just syntactic sugar over implicit conversions.
Point taken, for this particular case. But subtyping and implicits still cover a large part of each other's functionality, and the former is a subset of the latter when you take into account the Liskov substitution principle.
> Also, I find implicit parameters in general to be awesome.
It is certainly better than having no way to do general ad-hoc polymorphism, but...
> Because of implicit parameters, Scala achieves the best marriage available between OOP and FP. Scala doesn't just pretend to be FP. Scala is FP. Think of languages like OCaml or F#, which have two type systems in the same language.
... I beg to differ here. Scala is biased towards OOP: you can do Java-style programming with no inconvenience and awkwardness, but Haskell- or even ML-style programming requires imperfect encodings in Scala. How do I do the equivalent of a ML signature ascription? (That is, assigning a signature to a module that possibly hides some members and/or makes some types externally opaque. This requires structural subtyping for modules.) Why is type inference unavailable precisely when you would most need it?
On the other hand, OCaml is biased towards FP. I still dislike it for other reasons, though: functions being eqtypes makes no frigging sense; the syntax is heavily biased towards imperative programming, so I end up having to parenthesize a lot; applicative functors are leaky abstractions in an impure language. And, unlike Haskell, which may be huge but is ultimately built on top of a small core, OCaml strikes me as intrinsically huge (just huge, though, not full of warts like Scala). I dislike F# even more, because it ditches the ML module system. When I want to use an ML, I use Standard ML.
> Scala also doesn't side-step covariance/contravariance problems when dealing with generics, like other OOP languages do, providing the tools to deal with both. And we could agree that covariance/contravariance is a problem created by OOP subtyping, but if you've got OOP in the language, then you can't really pretend that all generic types are covariant or invariant and it's just awful to have two type systems in the same language.
I do appreciate that it provides tools for managing the mess, but I still prefer languages that do not create a mess.
On second thought: does it actually manage the mess, or does it just make it worse? When a language's object model makes C++'s look simple by comparison, that is a sign that there is something wrong going on.
> That's what they are trying to do with SIP-18. Scala 2.10 emits warnings if you use implicit conversions without importing language.implicitConversions and they plan to transform those warnings into errors in future versions.
The whole subtyping via inheritance is far more ugly than that, and that is not going away anytime soon.
> When a language's object model makes C++'s look simple by comparison, that is a sign that there is something wrong going on.
Yeah, and when you want to make it excessively clear to everyone involved that you have not the slightest clue what you are talking about, then bringing up C++ is absolutely the best way to go.
> Correction: Haskell has them, Rust has a limited form of them, and Scala encodes them using a horrible hack.
Fun fact: The way Haskell encodes them is identical to the way Scala does.
> What you lose is compiler-enforced coherence. When you use type classes, certain functions come with the expectation that you will consistently pass the same dictionary around between calls.
Could you please show an example where this goes wrong?
> With actual type classes, this is a given: there can be at most one instance of a type class for a type (or tuple of types). With Scala's implicits, you can trick the compiler into passing different dictionaries between two calls that expect the same dictionary.
Limiting type classes to one instance per type ... that's what I would call completely pointless. With that kind of restriction, why have typeclasses in the first place? There is probably not much (anything?) which could be enabled by such a crippled feature compared to dynamic dispatch/OO/subtyping.
> I am aware that this is a tradeoff: Haskell's type classes are antimodular (because a global set of instances must be kept in order to guarantee instance uniqueness per type) and Scala's implicits can be used in potentially incoherent ways.
> Fun fact: The way Haskell encodes them is identical to the way Scala does.
No. The way Haskell does type classes ensures coherence: you can never pass the "wrong" dictionary. There is at most one dictionary per type, globally, so it is always the right one.
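Rust enforces the same restriction at the impl level: a second instance for the same type is a compile-time error. A minimal sketch, with a made-up Monoid trait:

trait Monoid {
    fn empty() -> Self;
    fn combine(self, other: Self) -> Self;
}

impl Monoid for i32 {
    fn empty() -> Self { 0 }
    fn combine(self, other: Self) -> Self { self + other }
}

// A second instance for the same type is rejected outright, so every
// call site is guaranteed to see the same dictionary:
// impl Monoid for i32 { ... } // error[E0119]: conflicting implementations

fn main() {
    let total = [1, 2, 3].into_iter().fold(i32::empty(), i32::combine);
    assert_eq!(total, 6);
}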
> Limiting type classes to one instance per type ... that's what I would call completely pointless.
Ensuring instance coherence is not pointless. It helps in the correctness department.
> With that kind of restriction, why have typeclasses in the first place? There is probably not much (anything?) which could be enabled by such a crippled feature compared to dynamic dispatch/OO/subtyping.
>> Fun fact: The way Haskell encodes them is identical to the way Scala does.
> No. The way Haskell does type classes ensures coherence [...]
No one is disputing that Haskell layers additional restrictions on top of typeclasses, but as mentioned the encoding is the same in Haskell and Scala (look it up if you don't believe me). That's why it's kind of funny that you think Scala's encoding is terrible.
> Wrong.
I guess that's why all those GHC extensions to make Haskell's typeclasses less half-assed exist in the first place? :-)
> This abomination would not have happened in Haskell.
What abomination? This code does exactly what you told it to do. It tells us more about your inability to write code which doesn't look like a terrible translation of Haskell.
If you wanted the ordering to be a property of your type, then make it a property. But don't whine, if you write code which lets you switch typeclass instances, that it in fact lets you switch typeclass instances. :-)
> No one is disputing that Haskell layers additional restrictions on top of typeclasses
The definition of "type class" requires that there can be at most one instance per type. I know perfectly well that, operationally, Haskell just passes a dictionary around just like in Scala. But, denotationally, instance uniqueness causes a difference in semantics, namely, guaranteed coherence. And, to be frank, one of the reasons why I have adopted functional programming is that, most of the time, I do not want to be slowed down by operational concerns. Haskell allows me to write code I can reason about in an equational fashion. Scala does not.
> I guess that's why all those GHC extensions to make Haskell's typeclasses less half-assed exist in the first place? :-)
I was talking about Haskell, not GHC. Personally, when it comes to GHC's type system extensions, I am rather conservative: MultiParamTypeClasses, FunctionalDependencies, GADTs, TypeFamilies (only for associated families, never for "free-floating" families), RankNTypes. (That is, I may use other extensions, like OverloadedStrings, but they are not extensions of the type system itself.)
As I said above in this thread (though not in reply to you), I find antimodularity (due to instance uniqueness) annoying, but incorrectness freaking scary. Of course I would rather have both modularity and guaranteed coherence, but if one must absolutely go away, then it is modularity.
> What abomination? (...) If you wanted the ordering to be a property of your type, then make it a property.
I want the language to help me write correct code, and implicits do not help.
> Haskell allows me to write code I can reason about in an equational fashion. Scala does not.
That's just nonsense. In Scala typeclass instances are just part of the equation. That's just not a big deal at all.
> Personally, when it comes to GHC's type system extensions, I am rather conservative: MultiParamTypeClasses, FunctionalDependencies, GADTs, TypeFamilies (only for associated families, never for "free-floating" families), RankNTypes.
LOL. I guess that's a joke, right?
> I find antimodularity (due to instance uniqueness) annoying, but incorrectness freaking scary.
There is no incorrectness. The code you have written does exactly what you have specified. Don't blame the tools for writing bad code. :-)
Haskell providing you the tools to easily corrupt your runtime, now that's what I call "freaking scary incorrectness".
> I want the language to help me write correct code, and implicits do not help.
Then why do you fight the language so much in your code example?
I can write terrible code in Haskell, too. Does that prove anything? No.
> That's just nonsense. In Scala typeclass instances are just part of the equation. That's just not a big deal at all.
The important part of the equation that Scala misses is that, when a value V has a type T that is an instance of a type class C, then V has to respect the laws associated to C. Granted, this is not enforced statically (because it requires dependent types), but instance uniqueness allows you to confine all the manual testing/enforcement to the point where the instance is defined.
> LOL. I guess that's a joke, right?
No OverlappingInstances, no UndecidableInstances, no ImpredicativeTypes, no DataKinds, no PolyKinds... sounds fairly conservative to me.
> There is no incorrectness. The code you have written does exactly what you have specified.
Dereferencing a dangling pointer is undefined behavior, and in particular, it may cause a segfault. Just as specified.
> Then why do you fight the language so much in your code example? I can write terrible code in Haskell, too. Does that prove anything? No.
This kind of argument I expect from dynamic language folks, not from Scala proponents.
> when a value V has a type T that is an instance of a type class C, then V has to respect the laws associated to C
And that's exactly what happens in Scala if you cared to look at actual code.
The only difference is that one can associate different values with different typeclass instances.
If you decide not to associate values with specific typeclass instances and instead write code which accepts arbitrary instances, despite the fact that the values invisibly depend on a specific one ... sorry, that's just plain dumb.
That's like writing an API which accepts numbers and an arbitrary operation on those numbers, but only returns reasonable results for addition; and then complaining that your code is only fine as long as the "arbitrary operation" is addition.
No compiler on this planet can fix a lack of brain cells.
> Dereferencing a dangling pointer is undefined behavior, and in particular, it may cause a segfault. Just as specified.
If you can't see the difference yourself, I can't help you.
> This kind of argument I expect from dynamic language folks, not from Scala proponents.
Yeah, sorry. Haskell users know how to deal with different programming languages, but the latest influx of HN's "I-read-something-about-Haskell-on-the-internet-a-few-minutes-ago-let's-tell-everyone-how-dumb-they-are-to-show-my-new-intellectual-superiority-as-a-Haskell-expert"-kids has muddied the water a bit and caught me a bit off-guard, because I usually deal with thinking people only.
> And that's exactly what happens in Scala if you cared to look at actual code.
My example notwithstanding? Heh.
> If you decide not to associate values with specific typeclass instances and instead write code which accepts arbitrary instances, despite the fact that the values invisibly depend on a specific one ... sorry, that's just plain dumb.
I am not disagreeing that it is dumb. I am complaining that the language allows me to do something that is dumb in the first place. This attitude towards correctness (the language washing its hands of having to deal with it) is what I dislike.
> If you can't see the difference yourself, I can't help you.
There is no difference. In both cases, an undesirable behavior is possible because of a hole in the language's design.
> Yeah, sorry. Haskell users know how to deal with different programming languages
Yes, exactly. The way I deal with Scala is to not use it.
>> And that's exactly what happens in Scala if you cared to look at actual code.
> My example notwithstanding? Heh.
Yes. Just go and read some code instead of using your imagination. Compare for instance TreeSet.apply and List#sorted.
> I am complaining that the language allows me to do something that is dumb in the first place. This attitude towards correctness (the language washing its hands of having to deal with it) is what I dislike.
Then you should dislike Haskell, too, because that's exactly what Haskell allows you to do:
Prelude> let getLargestElement (x:xs) = x
Prelude> getLargestElement [3,2,1]
3
Prelude> getLargestElement [1,2,3]
1 -- OMG!!!
Haskell is sooo incoherent! I wrote a function that obviously relies on the input being sorted in a certain way, but Haskell lets me pass arbitrary lists to it! How dumb is that?! Haskell has a really terrible attitude towards correctness!
>> Yeah, sorry. Haskell users know how to deal with different programming languages
> Yes, exactly.
Sorry, but you've been sorted into the I-read-something-about-Haskell-on-the-internet-a-few-minutes-ago-let's-tell-everyone-how-dumb-they-are-to-show-my-new-intellectual-superiority-as-a-Haskell-expert category already, so this doesn't really apply to you.
> The way I deal with Scala is to not use it.
That's perfectly fine, but please stop commenting on things you don't understand. You are only embarrassing yourself.
> Both C# and Java have bounded generics. The problem is that bounded quantification is fundamentally tied to class/interface hierarchies.
Bounded generics do not solve the problem I tried to express. Here's a Scala sample:
case class Container[T](elems: T*) {
  def max(implicit ev: T <:< Comparable[T]) =
    elems.reduceLeft((a, b) => {
      if (a.compareTo(b) > 0) a else b
    })
}
Comparable is not a type-class; it's the standard java.lang.Comparable interface. Yet the defined method, "max", only works if the container's generic type T implements Comparable[T]. As in ...
In both C# and Java, methods themselves can be generic:
class ListExtras {
    static <T extends Comparable<T>> T max(List<T> ts) {
        T res = ts.get(0);
        for (T t : ts)
            if (res.compareTo(t) < 0)
                res = t;
        return res;
    }
    // ...
}
C#/Java-style bounded quantification is enough for this particular use case. My complaint is that bounded quantification in these languages is tied to inheritance as shown by <T extends Comparable<T>>. In Haskell and Rust, bounded quantification is tied to type classes, which, as you said, are open - a type class instance for a given type may be made anywhere in the program, not just where the type is defined. So bounded quantification in Haskell and Rust (and its emulation in Scala via implicits) is strictly more powerful than bounded quantification in C# and Java.
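For comparison, here is roughly the same max in Rust, where the bound names a trait rather than an interface the type had to opt into at definition time:

// `Ord` plays the role of the type class here; unlike an interface,
// a trait impl can live apart from the type's own definition
// (subject to Rust's orphan rule).
fn max<T: Ord>(ts: &[T]) -> Option<&T> {
    let mut res = ts.first()?;
    for t in ts {
        if *res < *t {
            res = t;
        }
    }
    Some(res)
}

fn main() {
    assert_eq!(max(&[3, 1, 2]), Some(&3));
}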
As an aside, C# is a little less retarded than Java, so even primitive types like int are objects, and, in particular, int implements IComparable<int>.
One thing I wanted to do awhile back was create an interface that could only be implemented by a class that had already implemented certain other interfaces.
In retrospect, sort of a horrible idea, but I wonder, is there a type system out there expressive enough to allow that behavior?
After a few moments' thought I realized why it wouldn't be possible in languages that use a fixed v-table, but it should be possible in languages that use more flexible calling techniques!
Or am I thinking about this too much and this is basically what type classes would buy me?
trait InterfaceA
trait InterfaceB { self: InterfaceA => }
class A extends InterfaceA // compiles
class B extends InterfaceB // does not compile
class B extends InterfaceA with InterfaceB // compiles
"self" isn't a keyword, you can call it whatever you like.
I forget what convoluted use case I wanted it for. I ended up solving it using composition instead (which is the right solution 99% of the time anyway. :) )
Still though, I love using interfaces as contracts, so I get frequently annoyed at how poorly my tools (limited to C++2003 right now) support doing such!
> One thing I wanted to do awhile back was create an interface that could only be implemented by a class that had already implemented certain other interfaces.
I think that's easily possible in Scala. I guess one wouldn't even need typeclasses for that.
The example presented in this post is neither a great example of what Rust can do nor an elegant solution to the particular problem of dealing with dates and times.
For a much further-reaching solution to this problem look at C++ user-defined literals, combined with what Bjarne Stroustrup calls "type rich programming".
now() + 3days + 21s + 102ms
This is not to say that C++ is the only language that can do this (although it does do it very well).
It seems to me that many distinctions we still suffer from are quite arbitrary. Yes, having numbers be objects (boxed or tagged) is one way of achieving this sort of uniform handling, but you can also achieve it statically.
Just like you don't have to switch to a "scripting" language for expressive file manipulation:
I guess we'd call these "type extensions"? F# supports this just fine, and C# can do a bit, too.
Overall, I haven't found I need type extensions that much if I have flexible function handling. It's the same level of expressiveness to write "fromNow 2 days" (fromNow having the signature int -> (int -> TimeSpan) -> DateTime), or even "2 |> days |> fromNow" (fromNow being TimeSpan -> DateTime).
Actually, this example is pretty terrible: even C#'s plain form would be "DateTime.Now.AddDays(2)", which seems fine to me - add a top-level binding for now and it's at the same level of conciseness. Also, adding type extensions to integers for time seems quite ugly. Most third-party libraries I've seen don't use type extensions gracefully; they end up slapping members on everything and it's just silly. (And I'm ignoring the fact that using .Now in .NET is rarely correct, since you almost always want UTC rather than whatever random timezone the server happens to be configured for.)
"2.days.from_now" is different from DateTime.Now.AddDays(2), because it can be broken in two parts:
- 2.days is a duration, or a time delta, or time span, or whatever you want to call it
- from_now is a method called on this duration value
This construct "2.days" is useful on its own. It's actually more useful than "DateTime.Now", as durations are often part of public interfaces, whereas usage of concrete timestamps is very often hidden. The C# equivalent would be "new TimeSpan(2, 0, 0, 0)". That's much less readable than just "2.days", and often, instead of thinking about the problem, developers fall back to specifying time spans as ints in seconds or similar.
On time zones: "2.days.from_now" is a timestamp. If the underlying library is not totally broken, like "java.util.Date" is, then it does have an attached timezone. And if it yields a Unix timestamp, it isn't really a Unix timestamp unless it was calculated from 1970-01-01 00:00:00 UTC.
I just don't get it. A literal like "0.2f" to denote a Float (and not a Double) is acceptable. A literal like "300.5m" to denote a decimal (and not a binary floating-point number) is acceptable.
But somehow other kinds of literals for other units of measurement are not, because Dear God, magic, magic!!!
I agree that the constructor isn't very easy to read for TimeSpans. But there's a shorthand, you can write "TimeSpan.FromDays(2)" to create a span of 2 days.
> It seems to me that many distinctions we still suffer from are quite arbitrary.
Agreed. And it's nice to see newer languages bringing these kinds of theoretical improvements into practice, where we as developers can actually use them, rather than leaving them in papers. I don't want to come off as anti-theory - I love theory. It's just nice to see it finally trickling down.
Yes, indeed. My last example is from Objective-Smalltalk [1], which straddles academia [2] and practicality [3]. The basic idea is that you really need both the practical application to show the problems and then the time to think about solutions.
One thing worth noting is that the implementation isn't globally extending `uint` with `.days()`. In order to write `2.days()`, you need to first import the trait that provides the implementation.
From that perspective, it's not much different from importing a function like `days_from_now` that you call as `days_from_now(2)`.
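A minimal sketch of that import requirement (`Period` and its `days` method are stand-in names here, not the article's actual code):

use std::time::Duration;

mod period {
    use std::time::Duration;

    pub trait Period {
        fn days(&self) -> Duration;
    }

    impl Period for u64 {
        fn days(&self) -> Duration {
            Duration::from_secs(self * 86_400)
        }
    }
}

fn main() {
    // Without this import, `2u64.days()` fails to compile with
    // "no method named `days` found for type `u64`".
    use period::Period;

    let span: Duration = 2u64.days();
    println!("{:?}", span);
}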
The basic idea is that objects should know how to represent themselves. In Smalltalk every object implements the writeOn: method, which takes a stream and prints self on that stream.
By extension, if an object knows how to convert itself into another object, it implements a method for doing so. In Smalltalk there is a convention of putting all such methods inside a "converting" protocol. For a great example of how this is useful, look at Collection and its subclasses in Smalltalk - all collections have many "asOtherKindOfCollection" methods, which are either inherited or implemented, depending on the collection. It's great to be able to send "asSet" to any collection that comes your way, and the way Smalltalk does it is natural in purely object-oriented languages.
That, and the fact that creating IntsToRange or StringToDayOfWeek "static" classes is actually even worse, leads to the design where Integers have `to:` and `days` methods.
Actually, there is one more probable cause: in Smalltalk and some other languages, extending already-defined types is easy - really easy, easier even than creating a new, empty class. Methods are grouped into "protocols", which are grouped into packages, and this grouping is largely (though not entirely) orthogonal to the classes. Combine that with a powerful IDE, version-control support (Monticello, Metacello), etc., and you have no reason whatsoever not to extend even the most basic classes.
The question is what "2.days" actually returns. IIRC, in ActiveSupport it returns an integer number of seconds. Is that a sensible convention? If you had another class representing a span of time (with or without any absolute calendar reference) that an Integer could convert into, that might be a cleaner design. But even in that case you wouldn't necessarily have a method on Integer to convert it to a TimeSpan; you'd initialize a TimeSpan with an integer argument.
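A sketch of that design in Rust, with a hypothetical TimeSpan type: the span type owns the construction, so Integer never grows a `days` method:

use std::time::Duration;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct TimeSpan(Duration);

impl TimeSpan {
    // The integer is an argument, not the receiver.
    fn days(n: u64) -> TimeSpan {
        TimeSpan(Duration::from_secs(n * 86_400))
    }
}

fn main() {
    let span = TimeSpan::days(2);
    println!("{:?}", span);
}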
In general I find it cleaner when less primitive classes can represent themselves as more or equally primitive classes, but not vice-versa. to_integer, to_float, to_list, to_boolean, etc. But on the other extreme, if your application entails establishing database connections, implementing String#to_database_connection would be ridiculous.
For me, integers are much more primitive than date and time objects. `2.days` would be equivalent to implementing `2.meters` in a CAD application, but I suppose you might be OK with that.
But even granting that an integer can transform itself into a representation of a time span, i.e. `2.days` is okay, `2.days.ago` is right out. That's not just an Integer representing itself as a span of time. Now all of a sudden Integer has to have knowledge and, even worse, opinions about the current date and time (which is one of the most hazardous boundary cases in all of software).
> integers are much more primitive than date and time objects
In terms of concepts, yes, but in terms of language syntax and semantics they are the same (in Smalltalk).
> `2.days` is okay, `2.days.ago` is right out
Yeah. But not because it "looks bad", and not because Integer would need to know anything more than it does already (it wouldn't), but because `2 days ago` is actually less flexible and harder to maintain.
Consider:
Date today - 2 days. "27 December 2013"
DateAndTime now - 2 days. "2013-12-27T04:43:42.679+01:00"
We can get either a Date or a DateAndTime without changing anything. With proper planning, the common superclass of Date and DateAndTime can have a generic - method which will work for all the cases. With `2.days.ago` we're tied to one result type; we have to code around it, either giving it a class as a default argument or remembering the "kind" of the Duration.
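In Rust terms, the same flexibility falls out of overloading subtraction once per timestamp type; a sketch with hypothetical Date and DateTime types, where the duration value stays ignorant of which result type it will produce:

use std::ops::Sub;

#[derive(Debug, Clone, Copy)]
struct Days(u32);

#[derive(Debug, Clone, Copy)]
struct Date { ordinal: u32 }     // whole days since some epoch

#[derive(Debug, Clone, Copy)]
struct DateTime { seconds: u64 } // seconds since some epoch

impl Sub<Days> for Date {
    type Output = Date;
    fn sub(self, d: Days) -> Date {
        Date { ordinal: self.ordinal - d.0 }
    }
}

impl Sub<Days> for DateTime {
    type Output = DateTime;
    fn sub(self, d: Days) -> DateTime {
        DateTime { seconds: self.seconds - u64::from(d.0) * 86_400 }
    }
}

fn main() {
    let today = Date { ordinal: 16_000 };
    let now = DateTime { seconds: 1_388_102_622 };
    println!("{:?}", today - Days(2)); // a Date
    println!("{:?}", now - Days(2));   // a DateTime
}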
Anyway, if it's ok to have a function which takes an integer, it's equally ok to have an Integer have a method. Especially when you just can't have a function, like in Smalltalk, and the only alternative is writing yet another class with one method in it.
> Anyway, if it's ok to have a function which takes an integer, it's equally ok to have an Integer have a method.
Seriously? Any function that takes an integer argument? Like, "exit". You want to call 1.exit to terminate a program with a return code of 1? Is there any difference between a function that takes an argument and a method on an object? Shall we implement all functions that take n arguments as multimethods? Are you actually going to implement String#to_database_connection? That was meant as a reductio ad absurdum, I didn't expect anyone to literally follow me there.
If you want some particular Integer to get decorated with additional methods because you're using it in some particular domain context but you're not globally changing the interface of all Integers that's a different story.
> Especially when you just can't have a function, like in Smalltalk, and the only alternative is writing yet another class with one method in it.
There are perfectly sane object-oriented ways to solve this problem, like instantiating some kind of TimeSpan object, or having a singleton for process status in the case of the "exit" function.
which is especially interesting: how come a Symbol knows how to perform an action it can represent on some given object? I dunno. It just does.
I can't tell you exactly when adding a method to some class begins to be a bad idea and when it's still ok to do. I don't know any hard rule for this. I agree that `1.exit` is a bad idea and I provided a rationale for why I think so. But I'm still convinced that there are many (many more than you seem to think) cases where it's ok, it's acceptable and convenient to extend String, or Object, or any class you don't "own".
I'm actually looking at a system which is built with very many methods like the examples above (although it doesn't have 1.exit), and I see that it works. One of the most basic classes, Object, has this many external protocols (i.e., protocols added by extension methods) in it:
And the system still works. And it's easy to navigate. And to understand. And to use, even. So, once again - I'm not arguing that 1.exit is ok, just that there are many "saner" extension methods which are ok. To get the above list I had to write some code:
"Collect the names of the extension ('*'-prefixed) protocols on Object and print them, one per line, to the Transcript."
| allProtocolNames externalProtocols |
allProtocolNames := (Object allMethods collect: [ :x | x category asString ]) asSet asSortedCollection.
externalProtocols := (allProtocolNames select: [ :x | x beginsWith: '*' ])
    inject: Character cr asString
    into: [ :acc :x | acc , x , Character cr asString ].
Transcript show: externalProtocols.
I may not be a very experienced Smalltalker, but I believe this is more or less idiomatic Smalltalk code. Even in this short snippet there are three asSomething methods - and the code still works, and I still think it's readable. And cute.
Whether it's Smalltalk specifically that enables this technique, or what other rules extension methods need to follow to be sane, I don't know. Which is why I'm hesitant to implement such methods myself. But I accept their existence and the fact that they have their place in a sound, sane design, at least if done right (for any acceptable value of "right" we can agree on).
But well, my day-to-day job is writing Python, where "monkey patching" is considered a sin and the community consensus is pretty much the same as your opinion: don't extend classes, write functions instead. So I kind of understand this position. Although I still think extending base classes is not an inherently bad idea - it's a matter of language design, project design and tooling. But I can't say for sure. I'm just telling you what I saw when I went and learned Smalltalk, that's all.
This is surprising enough I'm not sure how to respond anymore.
> which is especially interesting: how come a Symbol knows how to perform an action it can represent on some given object? I dunno. It just does.
Presumably it just sends itself to the object you pass in? This is not something I'm actually bothered by.
I guess it depends on how you extend classes you don't own. If the adapters themselves are modular and are only loaded with the class that they're trying to adapt to, that might be OK. In other words, if I have a DatabaseConnection class that extends other classes with asDatabaseConnection methods, and that extension only happens when I load the DatabaseConnection class, that still satisfies the intention of SRP even though I'm technically implementing methods on other classes.
That's how they do it in Ruby, but in a language with a different grammar, you could parse '2' as a different datatype depending on how it's being used. In any case, the Rust example is fairly clean, since it only injects the methods for the code that loads the Period trait.
The great thing is that you don't have the weird semantic problems that the Rust or Ruby examples have. There are just two functions here, days and from-now.
That's essentially how traits work in Rust. You need to import the trait into your scope, and 2.days().from_now() is essentially sugar for days(2).from_now() where days() returns a TimeChange object.
Don't get me wrong, Rust looks fascinating. I realize the syntax is basically sugar, but from what I understand, you are still adding a method to a type. Here's a snippet from the code:
fn days(&self) -> TimeChange {
    TimeChange::new().days(*self as f32)
}
So while you are right, it is basically sugar for days(2).from_now(), but you couldn't call the function that way. It must be attached to an instance of a uint.
Also, in order to get the English-looking version you have to implement those functions specially using traits. Whereas with the Clojure version they are just normal functions, but the threading macro gives you that English readability you are looking for.
> So while you are right, it is basically sugar for days(2).from_now(), but you couldn't call the function that way. It must be attached to an instance of a uint.
> Whereas with the Clojure version they are just normal functions, but the threading macro gives you that English readability you are looking for.
You could do that with a macro in Rust as well. We don't make ordinary functions acquire that sugar for namespacing reasons: you can have multiple methods called e.g. "get" in scope as long as they're attached to different types, but that isn't true for functions.
Yep! Those differences are totally correct. There are tradeoffs in each approach. One of the benefits of the Rust approach is that you can have a different implementation for each type (polymorphism), which is a feature I needed last week in some Rust code I wrote.
FWIW: I believe that the Rust folks want to allow support for calling the functions directly.
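For what it's worth, that did happen: current Rust allows calling a trait method in function position via fully qualified syntax. A sketch, reusing the same hypothetical Period trait as above:

use std::time::Duration;

trait Period {
    fn days(&self) -> Duration;
}

impl Period for u64 {
    fn days(&self) -> Duration {
        Duration::from_secs(self * 86_400)
    }
}

fn main() {
    let a = 2u64.days();         // method-call sugar
    let b = Period::days(&2u64); // the same call, written as a function
    assert_eq!(a, b);
}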
For the most part, I’d found the complaints about dynamic typing from the static typing camp to be very FUD-y. I very rarely get TypeErrors in my Ruby code. Refactoring generally isn’t very difficult.
But slowly, this has started to change. I think that it’s that I tend to do more infrastructure and plumbing work now. I’ve run into more and more situations where types would be helpful, or picking the mutable state path has caused headaches.
Explaining the attraction of dynamic debugging to static type die-hards is a lot like explaining the attractions of static typing to dynamic programming die-hards. You can only get a sense of what the other side is talking about through lots of hands-on experience. Where it's really valuable is often an edge-case, but often it's a really expensive edge-case! (Goes both ways.)
This from an old Smalltalker. If you do enough Smalltalk, especially in a large established code base, you eventually see situations where you wish there were type specifications. However, you still wish that more statically compiled language people were more open minded and less knee jerk when you talk about the magic you can do. (Which also means that the open minded ones are particularly valuable.)
My personal choice for "statically typed Ruby" is actually Scala or Kotlin. Scala has its drawbacks, especially compile times (which are getting better), but both have similar expressiveness when it comes to mind-blowing library usage (although arguably sometimes at the expense of language complexity, when it comes to being able to write such elegant libraries). It's similar to the OP's argument regarding Python vs. Ruby: Scala as a language might be a very subjective and divisive topic, but used with the right libraries it can be very expressive, just with static typing added.
I dislike all this talk of 'expressiveness' and of how far Rust can abstract away from what it's trying to achieve, which is to be a 'systems programming language' on par with C (C++ is not the same as C). Show the resulting assembly for both a C version and a Rust version, then map both versions back to that assembly.
Rust is meant to replace C++, not C. The original motivation was that the Mozilla devs were sick and tired of all the subtle ways bugs can sneak into a large, complex C++ code base.
The ultimate aim is to write the next-gen browser engine, not an operating system kernel or some embedded code for a system with 4 kB of RAM.
If I read it right and the article cites the ability to add a method to the primitive number type as "expressive", then that's a pretty narrow view of expressivity. More examples would help.
Static v. dynamic typing aside, one thing I love about Ruby is the syntactic ambiguity of a local variable versus a method call, in which parentheses are not needed.