Great write-up! I've made the same mistake of having too many subclassed entities, and realising my queries were getting slower because my indexes were not being used.
I think Apple should mention this caveat in their official documentation. I'm sure many folks have been bitten by this issue.
A pretty common idea in Haskell-land is that "the next Haskell would be strict". Laziness was once a dreamy ideal execution strategy, but thanks to Haskell the practical tradeoff is better understood.
At the same time (as the slides note) the strict/lazy divide is hardly decided! For every person who would rather Haskell be strict, there is another willing to defend laziness to the death as the key reason Haskell is so composable. [0]
So Lennart Augustsson—the author of the first standards-compliant Haskell compiler, I believe—wrote up Mu and made it strict. There is probably a justification in SC's case for why it was an experiment worth trying.
[0] At the very least it's indisputable that laziness forced Haskell's hand with purity and that ended up being an enormous win in everyone's opinion.
This is interesting, because the well-known paper "Why Functional Programming Matters" identifies two key aspects of FP: higher-order functions and lazy evaluation. I wonder if John Hughes has revised his opinion on this, or if the FP community thinks the paper is no longer an accurate insight into FP...
In particular, I'm thinking of the last paragraph of Hughes' conclusions:
"It is also relevant to the present controversy over lazy evaluation. Some believe that functional languages should be lazy, others believe they should not. Some compromise and provide only lazy lists, with a special syntax for constructing them (as, for example, in SCHEME [AS86]). This paper provides further evidence that lazy evaluation is too important to be relegated to second-class citizenship. It is perhaps the most powerful glue functional programmers
possess. One should not obstruct access to such a vital tool."
Maybe it turned out that practical evidence has shown that lazy evaluation wasn't as important for modularity as Hughes thought, or at least that its drawbacks have been found unacceptable in practice?
I think that there is no cut-and-dried answer. Laziness appears to dramatically improve modularity, but it's unclear whether all of the tradeoffs are worth it. It's still difficult to analyze the downsides since (a) more research is needed and (b) a lot of it can be shrugged off as "weirdness", but it's clear that there are reasons to prefer strictness as well as to prefer laziness.
I've grown to be of the opinion that neither is best and that languages ought to be developed which allow free and clear choice between evaluation strategies throughout. Lazy defaults at the right times and clear strictness types might be a way forward, but it's hardly anything I have expertise in.
I tend to feel this is one area where perhaps Scheme had the right idea (if you ignore set!). Now, I don't know a whole lot about Haskell, but one of (IMO) the elegant parts of Scheme is that you can choose when and where to use streams instead of lists.
The main benefit of streams vs. lists is usually composability, but also that delayed computation can save you from computing something you'll never need. In this way, I think that being able to choose when and where you apply laziness is a huge gain. Now, of course this is where people will complain that streams are either a bad abstraction, or that doing stream-cons vs cons is annoying, but I like to think "what if it was the other way around?". Namely, what if you had streams by default (a la laziness in Clojure), but could switch to lists when it was really necessary or when it made reasoning about your code easier?
Is there a language that does this smoothly? From what I understand, to work around laziness, you typically have to bend over backwards in Haskell, but if there was a good way to just transform lazy data structures into strict structures similar to the relationship between [stream]-map or [stream]-cons/car/cdr and the list equivalents, I think it'd be pretty exciting. Though to be honest, I'm not sure how this would interact with Monads / Type Classes / etc, especially if you have Type Classes relying on both lazy and non-lazy structures.
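Not a direct answer to the ergonomics question, but for a flavour of what flipping from lazy to strict can look like in Haskell, here's a minimal sketch using the deepseq package (force is a real library function; the rest of the names are made up for illustration):

    import Control.DeepSeq (force)

    -- Ordinary lists play the "stream" role by default.
    ones :: [Integer]
    ones = 1 : ones                  -- lazily produced, conceptually infinite

    -- force flips a finite prefix over to the fully evaluated side on demand.
    takeStrict :: Int -> [Integer]
    takeStrict n = force (take n ones)

    main :: IO ()
    main = print (takeStrict 5)      -- prints [1,1,1,1,1]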
The typical idea is that you almost always want "spine lazy" data structures. So, streams are almost always more valuable than strict lists.
In my mind, I tend to think of this as being true up to the point where the size of the structure is statically known. Thus for fixed-sized vectors (or arrays), spine strictness is very important—it gives you better control over memory usage in the very least.
Unfortunately, no language I know of has a good concept of "strictness polymorphism" so in Haskell you end up with duplicate implementations of Strict and Lazy data structures. This ends up being not so much duplicated code, but a whole hell of a lot of duplicated API.
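To make the spine-lazy/spine-strict distinction concrete, here's a hand-rolled sketch (the names are invented, nothing standard):

    -- Spine-lazy: the tail is a thunk, so the structure can be infinite and
    -- is only built as far as it is demanded.
    data LazyList a = LNil | LCons a (LazyList a)

    -- Spine-strict: the bang forces the tail whenever a cell is forced, so
    -- the whole spine exists at once; better memory control, but no infinite
    -- values.
    data StrictList a = SNil | SCons a !(StrictList a)

And you can see the duplicated-API problem coming already: anything written against one of these has to be rewritten for the other.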
And I think the stream-cons/cons distinction is trivial. It's a lack of proper polymorphism that you're dealing with there and that can be easily implemented in any language with good polymorphism. In Haskell we have a very nice (very nice if you tolerate lenses, anyway, which you should) interface called Cons which is an elaboration of
    class Cons s where
        type Elem s
        _Cons :: Prism s (Elem s, s)
which works like this
    instance Cons [a] where
        type Elem [a] = a
        _Cons = ... -- more complex than worth explaining

    instance Cons (Stream a) where
        type Elem (Stream a) = a
        _Cons = ...
and then gives you
    cons :: Cons s => Elem s -> s -> s
    uncons :: Cons s => s -> Maybe (Elem s, s)
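A small usage sketch on top of that interface (the helper names are just illustrative):

    -- The same helpers now work uniformly over [a] and Stream a.
    headMay :: Cons s => s -> Maybe (Elem s)
    headMay = fmap fst . uncons

    push2 :: Cons s => Elem s -> Elem s -> s -> s
    push2 x y = cons x . cons y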
It's so weird to me that in a world of more asynchronicity than ever we want to bring Haskell to strict-land.
On the opposite end, there is so much boilerplate optimisation out there to get around the strictness of other programming languages that would be solved with a non-strict mode.
Strictness can always embed laziness---this is sometimes an argument for the natural superiority of strictness---so long as you have lightweight lambdas. Thus, in OCaml you'll see a lot of
    let thunk () = long_computation
effectively. Is that syntactic noise enough to disable the advantages of laziness? Actually, maybe!
But note that achieving the strictness type requires the use of `seq` which is perhaps, arguably, a more arbitrary language feature than function abstraction and unit are! In particular, it's been a big debate as to what the proper semantics for seq are—the dust is technically unsettled, despite the long history of seq in Haskell.
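For reference, seq is the primitive that lets a lazy language demand evaluation, and the ($!) operator mentioned elsewhere in the thread is essentially a thin wrapper over it:

    -- Evaluate the argument to weak head normal form before the call.
    strictApply :: (a -> b) -> a -> b
    strictApply f x = x `seq` f x    -- morally what ($!) does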
And w.r.t. monads, for a lot of people it is! Most use of monads is done implicitly, right? ;) The only reason Haskellers tolerate the extra syntax is because it's statically required (I claim).
There's an advantage here, of course, in statically ensuring that people do something more explicitly than they would like to. Perhaps the same advantage applies to thunking.
Honestly, I'd rather not comment. I don't know that I or nearly anyone has enough information to make strong, confident opinions about the "right way" to do lazy/strict. I'm hoping that the research into total languages will provide answers!
I don't know, but in my limited experience, lazy evaluation makes memory use worse (usually not much), but more importantly makes performance (time and memory) harder to reason about, because you don't easily know when something will actually evaluate.
Besides that, there's also not much practical gain from it, IMO. One commonly cited benefit is a function that doesn't use all of its arguments, therefore saving computation time when they're not evaluated. But realistically, an unused parameter should probably be removed.
Generally the argument is never around saved computation but instead around composability. Lazy languages ensure that everything behaves like a value, and in that domain operations compose much more effortlessly. You can't reason about operational behavior as easily, so you don't, and the language copes with making that work more or less correctly.
Which is definitely suboptimal in some cases!
I think honestly the goal should be reasoning about evaluation order statically instead of trying to find some clever argument such that laziness or strictness is clearly "correct".
Obviously you don't want both branches evaluated in any given invocation, and obviously you cannot remove the unused parameter. Note that the purpose is not to "save computation", since for example branch1 may be undefined if cond is false!
When using a language with support for lazy evaluation, you encounter this kind of function all the time.
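A hypothetical example of the kind of function meant here (cond/branch names follow the comment above):

    -- Under lazy evaluation only the selected branch is ever evaluated, even
    -- though both are passed as ordinary arguments, and neither parameter
    -- can be removed.
    if' :: Bool -> a -> a -> a
    if' cond branch1 branch2 = if cond then branch1 else branch2

For example, if' (n /= 0) (100 `div` n) 0 is safe when n == 0: the division is the "unused argument" in that call, not dead code you could delete.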
> One commonly cited benefit is a function that doesn't use all of its arguments, therefore saving computation time when they're not evaluated. But realistically, an unused parameter should probably be removed.
How do you do that when the values of the other arguments determine whether or not that argument will be used?
Some level of laziness is necessary for Haskell to work. For example, you can do:
    a = 1:a
    main = print $ head a
With lazy evaluation this prints 1, but with strict evaluation the program never terminates: it attempts to fully evaluate a = 1:a, which is an infinite list.
> But realistically, an unused parameter should probably be removed.
You also have cases where a parameter is only used some of the time.
It may be that their applications have high memory usage, which lazy evaluation would make slightly worse. Alternatively, they may have found that the applications that they tend to run benefit from a slightly stricter evaluation strategy, and have thus changed the compiler to better reflect that...
Not sure, but having encountered laziness in Clojure and Haskell, it can be non-intuitive and it can be a bitch to debug. It allows for some conceptual beauty, though, and there are certainly some use cases in which laziness is the right behavior. The question is what should be the default; both ought to be allowed. In Haskell, they are, but you start using bangs a lot (e.g. Point !Int !Int and the ($!) operator instead of ($)), and there are also shallow vs. deep considerations, because forcing a thunk only evaluates it one constructor level deep, to "weak head normal form".
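Spelled out, the knobs mentioned above look roughly like this (a sketch, not anyone's real code):

    -- Strict fields: both Ints are forced when a Point is constructed.
    data Point = Point !Int !Int

    -- ($) passes the argument as a thunk; ($!) forces it to weak head normal
    -- form first. For nested structures you still need deepseq to go deeper.
    applyLazy, applyStrict :: (Int -> Int) -> Int -> Int
    applyLazy   f x = f $  x
    applyStrict f x = f $! x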
That said, I much prefer Haskell's laziness or Clojure's laziness in seqs over the broken laziness in other languages. There's a lot to like about R's libraries but... fuck this:
    > Map(function(x) (function(y) x + y), 1:5)[[3]](0)
    [1] 5
Python can be tricked into the same evil if you build closures in a loop. Haskell doesn't have that, thankfully.
Laziness in data structures has the biggest benefit in the spine. Leaf laziness is just more surface area to hide unexpected thunks. If you really want that, do something like
    data Box a = Box a
    type Lazier a = Tree (Box a)
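For concreteness, a hypothetical leaf-strict Tree that the alias above could be aimed at:

    -- Leaf-strict, spine-lazy: forcing a Node forces the value stored at it,
    -- but not the subtrees.
    data Tree a = Leaf | Node (Tree a) !a (Tree a)

Forcing a Box only exposes its constructor; whatever is inside stays a thunk, so Tree (Box a) has lazy leaves again.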
Generally, I find that a little habit around leaf strictness ends up eliminating laziness concerns entirely until you get to explicit concurrent programming and need to think carefully about what thread is forcing what execution.
For clojure, there's kern (https://github.com/blancas/kern) for parsec style parser combinators. It's quite easy to use. Ironically, I actually understood parser combinators better by reading its wiki (By understood, I mean understood how to _use_ parser combinators). Parsec was initially difficult to grasp because the type system complicated things for me then.
I only got Parsec completely after that eureka moment when I finally understood Monads.
Why all this hate for Clojure? I didn't downvote you, but I can see why people would; it seems olenhad was mentioning a library similar to Parsec in a language that is very related to Common Lisp. There is definitely no need to get carried away.
I am honestly curious what you have against Clojure. I dabbled in CL and Clojure, not as much as I would have liked to, but for me they felt similar. I would be curious to see a list of things that can easily be achieved with CL but not with Clojure.
Common Lisp is a multi-paradigm language (imperative, functional, object-oriented) in the Lisp 1 / Maclisp tradition. It has a language spec and multiple different, but very compatible, implementations. Among its goals are portability across different machines/systems, efficiency, power and stability. Thus complex Lisp software can be ported or written in Common Lisp and usually runs on top of several implementations (examples: Maxima, Axiom, ACL2, ACT-R, Common Lisp Music, ...).
"Clojure is predominantly a functional programming language, and features a rich set of immutable, persistent data structures. When mutable state is needed, Clojure offers a software transactional memory system and reactive Agent system that ensure clean, correct, multithreaded designs."
Things can be both incompatible and related. For any given thing, the set of things related to that thing will typically be bigger than and encompassing of the set of things compatible with that thing.
I guess I don't really get your point. Is Clojure not a pure enough Lisp for your taste, or what? I've used Clojure and CL a similar (small) amount, and they seem incredibly more similar to each other than either of them to Fortran.
For me languages which are fully incompatible and allow no code sharing are not VERY related. Languages VERY related to CL are Emacs Lisp, ISLisp and some other Lisps in the Lisp 1 / Maclisp tradition.
More food for the downvoters: Clojure has been carefully designed such that ZERO code sharing with any other Lisp dialect is possible. That's its definition of 'very related'.
As a Pakistani I find this report alarming. However I also believe that the situation, especially the army's view of the Taliban, in particular groups under the TTP, has drastically changed. The TTP's brazen attacks on the Pakistani military in recent months have prompted the army to respond with force. In particular, the army is actually waiting for the govt's green light to clean up North Waziristan. It recently launched air strikes [1] on Taliban positions, and until a couple of weeks ago a full-scale military operation similar to the successful one conducted in Swat in 2009 seemed imminent.
Ironically, though, the Sharif administration is not entirely in favour of an operation. Some of the complexities include the fact that Khyber Pakhtunkhwa is governed by the PTI, which disagrees with a military operation entirely.
Currently there is a shaky ceasefire between the govt and the TTP. However, a new splinter group called Aharar-ul-Hind forked off from the TTP and carried out a daring attack on a court in Islamabad a few days back [2].
Interestingly enough, the new TTP chief Maulana Fazlullah is supposedly based in Afghanistan, and there is a worrying, ironic prospect of TTP raids from Afghanistan into Pakistan after NATO withdrawal.
> However I also believe that the situation, especially the army's view of the Taliban, in particular groups under the TTP, has drastically changed.
That's possible, but it's pretty clear that the ISI is a state within the state. As long as it isn't broken up and brought under civilian control, it is impossible to ensure it does not continue supporting the Taliban factions it deems acceptable. The testimony of the unnamed former intelligence official in the article speaks volumes about it: "In 2007, a former senior intelligence official who worked on tracking members of Al Qaeda after Sept. 11 told me that while one part of the ISI was engaged in hunting down militants, another part continued to work with them." As long as they're willing to fund LeT actions like the Mumbai bombings, it's unlikely they will care that Pakistanis also become victims of terrorism.
I'm slightly surprised to see regex being used to parse and retrieve arguments from a function's source code. I would've normally assumed this to be a bad idea.
I asked the original author about this at I/O last year, shortly after I was introduced to Angular. I was baffled that they were just using toString and regexes. On the one hand, there isn't any (obvious) cleaner way to do it. On the other, it feels so duct-taped I couldn't believe it was the foundation of a Google-sponsored project.
But it's not a foundation. A production Angular app really shouldn't rely on this (if for no other reason than that it breaks under minification). This flavor of parameter parsing really only exists for demos and throwaway code.