The DVD would have been manufactured in a factory somewhere in China, by a bunch of people who do not know you. And yet the DVD fits perfectly into a drive that you bought, probably manufactured in the United States, by people who know neither you nor the DVD manufacturer.
The software to read the DVD is written by yet another set of people, in yet another country, none of whom have talked to any of the parties mentioned above. And yet the DVD is read perfectly, with error correction to adjust for any minor scratches.
The DVD carries video and audio data created by people (probably in Hollywood) who also have never met any of the parties above. And yet it reproduces the video and audio exactly as the original directors and composers intended.
And all of this costs less than 50 cents apiece, with a drive that can be bought for less than $30.
That is the power of interfaces.
First of all, there is a comment from Erik Meijer who said something like "an interface without laws is useless" (he was referring to "interface" as a concept in OOP languages, and by "laws" he meant algebraic laws). Basically, what he wanted was the algebra, not just the interface.
And if you look at it from the Curry-Howard correspondence perspective, where programs are proofs and types are theorems, then since a type is an interface, you could say that theorems are the interfaces of mathematics. So there already is a very precise notion of what an interface is.
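A minimal sketch in TypeScript of what "an interface without laws" misses (the `Monoid` name and law checker here are illustrative, not from any particular library): the type checker can enforce the *shape* of the interface, but the algebraic laws live outside the type system and have to be checked separately.

```typescript
// A "lawful interface": the compiler enforces the shape (empty, combine),
// but the algebraic laws must be verified some other way -- here, by
// spot-checking sample values at runtime.
interface Monoid<A> {
  empty: A;
  combine(x: A, y: A): A;
}

const sumMonoid: Monoid<number> = {
  empty: 0,
  combine: (x, y) => x + y,
};

// The laws the interface itself cannot express:
function obeysMonoidLaws<A>(m: Monoid<A>, samples: A[]): boolean {
  for (const x of samples) {
    // Identity laws: empty is a left and right unit for combine.
    if (m.combine(m.empty, x) !== x || m.combine(x, m.empty) !== x) return false;
    for (const y of samples)
      for (const z of samples)
        // Associativity: grouping must not matter.
        if (m.combine(m.combine(x, y), z) !== m.combine(x, m.combine(y, z)))
          return false;
  }
  return true;
}
```

Any type could implement the `Monoid` shape with, say, subtraction, and the compiler would happily accept it; only a law check like the above (or a proof) distinguishes the real algebra from an impostor.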
On the other hand, I also kinda like the notion of DSL as an interface, as described in the recent article here on HN: http://degoes.net/articles/modern-fp/
What seems to be the main contention here is this: should the interface just use names (akin to philosophical nominalism) and leave them open to interpretation, or should it somehow encode the properties of the things it describes (akin to philosophical realism)?
You can also get a very slight taste of this in C#, Java, etc., where larger interfaces seem clunky and get less reuse than smaller interfaces. In C#, if an interface has some nice properties, the extension methods on it typically allow for a combinatorial explosion of generic utility functions. So this might be one way to judge the "algebraicness" of interfaces. You see this a lot in the LINQ collection libraries, which seem to have put some thought into laws.
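A rough analogue of the extension-method pattern, sketched in TypeScript against the built-in `Iterable<T>` interface (helper names are illustrative): a small interface with good properties supports an open-ended family of generic utilities, much as C# extension methods do over `IEnumerable<T>`.

```typescript
// Each helper assumes only the tiny Iterable<T> contract, so every one of
// them works on arrays, Sets, Maps, generators -- anything iterable.
function* map<A, B>(xs: Iterable<A>, f: (a: A) => B): Iterable<B> {
  for (const x of xs) yield f(x);
}

function* filter<A>(xs: Iterable<A>, p: (a: A) => boolean): Iterable<A> {
  for (const x of xs) if (p(x)) yield x;
}

function reduce<A, B>(xs: Iterable<A>, f: (acc: B, a: A) => B, init: B): B {
  let acc = init;
  for (const x of xs) acc = f(acc, x);
  return acc;
}

// Square the inputs, keep the even squares, sum them: 4 + 16 = 20.
const total = reduce(
  filter(map([1, 2, 3, 4], n => n * n), n => n % 2 === 0),
  (a, b) => a + b,
  0
);
```

The "combinatorial explosion" is exactly this: each new helper composes with every existing one, so utility grows multiplicatively while the interface stays tiny.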
Unfortunately in these languages you can only go so far due to the lack of higher kinded polymorphism (T<A> types vs just Type<A> types).
On one hand you've got the IEnumerable<T> interface, which supports the fluent syntax and is extremely easy to riff on with extension methods. (The "fluent syntax" itself is just a bunch of extension methods.)
On the other hand you've got the mechanisms you need to support the query syntax, which behave more like duck typing. If you implement a couple of methods with certain names and signatures, not all of which are formalized by an interface, then the compiler will let you use query syntax with objects of that type. It also turns out that the stuff you need to implement is roughly the bind and return operations of a monad... but not exactly. In particular, the method they use in place of bind is a bit more complicated, and not in a way that adds any real expressive power. It just makes the signature a bit more irritating to support.
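The "bind, but slightly more complicated" shape can be sketched in TypeScript over arrays (the function names are illustrative): C#'s query syntax looks for a `SelectMany` overload that takes an extra result selector, which is definable from plain bind and so adds nothing expressively.

```typescript
// Plain monadic bind for arrays (the simple SelectMany):
function bind<A, B>(xs: A[], f: (a: A) => B[]): B[] {
  return xs.flatMap(f);
}

// The shape query syntax actually requires: bind plus a result selector.
// It buys no expressive power -- note it's written in terms of bind --
// it just makes the signature more irritating to implement.
function selectMany<A, B, C>(
  xs: A[],
  f: (a: A) => B[],
  project: (a: A, b: B) => C
): C[] {
  return bind(xs, a => f(a).map(b => project(a, b)));
}

// Roughly what `from x in xs from y in f(x) select x + y` desugars to:
const pairSums = selectMany([1, 2], _x => [10, 20], (x, y) => x + y);
```

The extra selector exists so the compiler can flatten nested `from` clauses without building intermediate tuples, which is an efficiency concern, not a semantic one.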
In other news, re: lack of higher-kinded polymorphism - that's something the maintainers of F# have forcefully resisted adding to the language. The argument, which I've yet to take the time to look into deeply, is that typeclasses interact poorly with formal interfaces, so a language shouldn't have both. I'm a bit skeptical of that one, though, since I'd assume there would be wailing coming from the Scala community if it were really that bad.
Usually the recommended approach for making F# code usable from C# is to add an object-oriented wrapper layer to the F# library. That ends up being much, much easier in practice than trying to deal with F# code on its own terms from C#.
Do you have any sample code that serves as an example? It would be immensely helpful. Thanks.
Which reminds me, I should finish reading Tomer Ullman's PhD thesis to see if he managed to integrate existential types into his theory of causal-role concepts and theory formation.
That said, even without this, interfaces force you to think in terms of minimal assumptions, which is a very important property.
I'll chew on your statements about the success of Python. Though my first love was LISP, I'm now far more comfortable leaning on static typing and composition.
The best book on software design I've ever read was written by two economists.
Design Rules: The Power of Modularity
This book didn't change how I program so much as changed how I think. Like the difference between making and criticizing art. Whereas SICP gave me new mental models, Design Rules gave me new philosophies. More like Design of Everyday Things did.
Reminded me of this https://www.youtube.com/watch?v=5tg1ONG18H8
An implementation is the system minus the interface
These definitions seem at odds with each other. The first sentence would imply that the interface is part of the implementation, but the second says that the implementation and the interface are exclusive. What do you feel you gain by defining things in this way, as opposed to saying something like "the interface is the portion of the implementation with which the system interacts"?
Dependency injection probably competes for the top spot (though interfaces work really well there too)
Anyways, looking forward to reading it
While I agree that dynamically typed languages -- with their loosely defined API requirements -- are more difficult to scale, it's not difficult to add type checking and/or provide a well-structured public API where necessary.
I think statically typed languages go too far in the other direction. Their type and API definitions are too strict, leading to a ton of unnecessary effort (i.e. boilerplate), increased surface area for potential bugs, and overly restrictive limits that require 'creative' workarounds to write code effectively.
I think there's a 'happy medium' to be found where type checks are required for certain inputs and a clearly defined API can be established without the need for private/internal/public syntax artifacts.
I've been playing with this a bit in JS lately: using model definitions to specify structure and enforce validation, and defining facades with the ES6 module import/export syntax to establish public APIs. Finer-grained control (i.e. private vs. internal) can be achieved using closures that expose internal interactions while hiding the private implementation details.
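A small sketch of that closure-based approach, with invented names (`makeAccount`, `deposit` are illustrative, not from any library): the returned object is the public facade, validation happens at that boundary, and the private state is reachable only through the closure.

```typescript
// The returned object is the public API; `balance` is a private
// implementation detail that no caller can touch directly.
function makeAccount(initial: number) {
  let balance = initial; // private: captured by closure, not a property

  // "Model definition" style validation at the public boundary:
  function deposit(amount: number): number {
    if (typeof amount !== "number" || amount <= 0)
      throw new Error("deposit: amount must be a positive number");
    balance += amount;
    return balance;
  }

  return { deposit, getBalance: () => balance }; // the public facade
}
```

Since `balance` is a closed-over variable rather than a property, even `Object.keys` or a cast to `any` can't reach it, which is the hiding that `private` syntax provides in other languages.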
Given this description of statically typed languages, I'm guessing you haven't tried OCaml or Haskell.
For example, I've recently been struggling with a Haskell library that makes heavy use of `TypeRep` values, which is fine when everything's done in a single process invocation, but very awkward when attempting to serialise/deserialise (e.g. to pass data between processes, or to suspend a computation and resume it later)
You're right, I haven't tried OCaml or Haskell so I can't make a qualitative judgement on how easy/hard type coercion is in either.
I'm not sure what you'd like to see discussed from the article. To me, it read like an overview of something I already knew.
It's one of those things where if you know it, you don't need to hear about it. If you don't know it, hearing about it won't make sense anyway.
I can see this being part of a course for beginners but frankly - people will figure out what interfaces are for and what leads to problems by writing enough code and/or seeing how good libraries/frameworks tackle the problems they had difficulty with and learn from that.
My 2 cents.
But abstraction layers have costs. (Explicitly declared) interfaces are a cost (time to introduce into the system, complexity, lines of code, impedes refactoring).
Therefore, don't use interfaces (i.e. introduce abstraction layers) until needed (cost is justified).
By the way, if you manage to come up with a layer of abstraction that doesn't have as much explicit cost, it's a cheaper layer of abstraction. So dynamic languages' "duck typing" feature allows interfaces to emerge gradually, without explicit code to introduce them. Arguably (and IMO), that's a better approach.
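TypeScript's structural typing gives a typed version of the same idea (the names below are invented for illustration): no interface is declared up front, the parameter type is just the shape the function happens to need, so the "interface" emerges from usage rather than being introduced explicitly.

```typescript
// No named interface anywhere: the anonymous shape below is the entire
// contract, and it exists only because this one function needs it.
function describe(thing: { name: string; area(): number }): string {
  return `${thing.name}: ${thing.area()}`;
}

// Neither value mentions the function's parameter type, yet both fit:
const square = { name: "square", side: 3, area() { return this.side ** 2; } };

class Circle {
  name = "circle";
  constructor(private r: number) {}
  area() { return Math.PI * this.r ** 2; }
}
```

If a second function later needs the same shape, that is the moment to name it as an interface, so the explicit cost is paid only once the abstraction has proven itself.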
I disagree with the article's definition of a leaky interface:
"A Leaky interface exists when the interface is prone to being ignored during any communication between the system and the environment."
A leaky interface is an interface that has non-obvious, non-declared behaviors, and the code has to rely on those behaviors to deliver value/functionality.
It can be something as simple as an inconsistent implementation that leaks an underlying implementation detail. It doesn't mean that the interface is prone to be ignored. In fact, it can be used quite a lot--just in slightly different ways.
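A hypothetical illustration of that kind of leak (all names invented): two implementations of the same interface, one of which leaks an underlying detail, case-folding of keys, that the interface says nothing about.

```typescript
interface KeyValueStore {
  set(key: string, value: string): void;
  get(key: string): string | undefined;
}

class MapStore implements KeyValueStore {
  private data = new Map<string, string>();
  set(key: string, value: string) { this.data.set(key, value); }
  get(key: string) { return this.data.get(key); }
}

// Leaky: lower-cases keys (say, a detail of a case-insensitive backing
// database). The interface is silent about this, but callers written
// against THIS implementation will come to depend on case-insensitivity.
class CaseFoldingStore implements KeyValueStore {
  private data = new Map<string, string>();
  set(key: string, value: string) { this.data.set(key.toLowerCase(), value); }
  get(key: string) { return this.data.get(key.toLowerCase()); }
}
```

Note the interface is not being ignored here; it is used constantly. The leak is that swapping `CaseFoldingStore` for `MapStore` silently breaks any caller that relied on the undeclared case-folding behavior.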
[p.s. edit: structural indirection, to be precise.]
Thinking in terms of interfaces (in the general sense) helps towards this ideal. If an interface is cluttered with dozens of functions, it's a sign that it could be refactored into more cohesive units.
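A TypeScript sketch of that refactoring (the interface names are illustrative): a cluttered interface split into cohesive units, after which consumers can state only the minimal assumptions they actually need.

```typescript
// Before: one interface with unrelated responsibilities bundled together.
interface FileThing {
  read(): string;
  write(s: string): void;
  compress(): void;
  emailTo(addr: string): void;
}

// After: small, cohesive interfaces that implementations mix as needed.
interface Readable { read(): string; }
interface Writable { write(s: string): void; }

// Consumers now depend only on what they use -- a `copy` needs exactly
// one Readable and one Writable, nothing about compression or email.
function copy(src: Readable, dst: Writable): void {
  dst.write(src.read());
}
```

This is the same "minimal assumptions" point made elsewhere in the thread: `copy` can now be tested with a two-line stub instead of a fake implementation of four unrelated methods.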
Isn't coupling just the complexity of the dependency graph? I would say depending on an interface is coupling as well. In my experience, complex static type systems encourage highly abstracted interfaces which are sometimes even more cancerous dependencies (that doesn't apply to really basic and practical things like Iterables, which come with the language).
For this reason, IMHO an even better ideal than coding against interfaces is relying on concrete datatypes that are just a given, like structures and arrays of integers.
Depending on an interface can be a kind of coupling, sure, but I think it depends. I think abstract interfaces, or at least statically typed data types, can be great for understandable systems...
In Haskell, there's work being done on allowing some kind of module signature, so that you can have several implementations of the same API be compatible with the same source code. I'm interested to see how that will play out.
You could see a Java-style interface as a specific global name given to a set of method signatures. Using that name incurs a dependency on the library that defines it. Sometimes it makes sense to redefine the interface in your own library, and then you can use adapters to make other implementations compatible, but this is all boilerplate you wouldn't need if Java were more flexible with interfaces...
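The adapter boilerplate looks something like this (sketched in TypeScript with invented names; in Java the shape is the same but mandatory, since Java interfaces are nominal):

```typescript
// Suppose a library defines its own interface:
interface LibLogger { log(msg: string): void; }

// Your codebase has an equivalent but incompatible type:
class MyLogger {
  lines: string[] = [];
  write(line: string) { this.lines.push(line); }
}

// The adapter: pure boilerplate bridging the two method names.
class MyLoggerAdapter implements LibLogger {
  constructor(private inner: MyLogger) {}
  log(msg: string) { this.inner.write(msg); }
}
```

TypeScript's structural typing already removes part of this tax, since any object that happens to have a matching `log(msg: string)` satisfies `LibLogger` without declaring anything; that is roughly the flexibility the comment wishes Java's nominal interfaces had.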
Where do I find information about the Haskell module thing?
It's called Backpack.