I am not a lisper, but that looks pretty cool. The non-lisp ways of writing that would either require adding parens like GP did, or writing things like "result /= 2", which means repeating "result" or whatever variable many times.
Still being an absolute beginner at lispy syntax, I'll have to ask you the same question though.
While I see the awesomeness in threading macros, I'm usually left pondering a while on where the result actually goes in the next s-expression. How long did it take you to get used to that? Would you prefer something like `as->` for readability?
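For anyone else wondering where the result lands: a small sketch in Hy (same macros exist in Clojure). `->` inserts the value as the first argument of each following form, `->>` as the last, and `as->` lets you name the slot explicitly so there's no guessing:

```hy
;; -> threads the value in as the FIRST argument of each form:
(-> 92 (/ 2) (- 40))   ; expands to (- (/ 92 2) 40), i.e. 46 - 40

;; ->> threads it in as the LAST argument instead:
(->> 2 (/ 92))         ; expands to (/ 92 2)

;; as-> binds the value to a name ("it" here) so you can place it anywhere:
(as-> 92 it
      (/ it 2)
      (- it 40))       ; same as the -> example, but the slot is explicit
```

The `it` name in `as->` is arbitrary; the macro just substitutes the running result wherever that symbol appears.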
I believe the use of "~>" rather than "->" in Racket was set in #lang rackjure which adds in some of the Clojure syntax to Racket. Also amusing is their explanation:
> For example the threading macros are ~> and ~>> (using ~ instead of -) because Racket already uses -> for contracts. Plus as Danny Yoo pointed out to me, ~ is more "thready".
Also, I've seen lisps that obviate the need for the outermost parentheses (I think by using significant whitespace). In that case, the above example would have as many parens as the python example (for those who are keeping count).
Because the operators all take a list of values it's sometimes convenient to format them like any other long list, with each nested sub-expression on its own line to form a tree:
    (setv result (- (/ (+ 1 3 88)
                       2)
                    (* 8
                       (+ 3 2))))
Another possibility is to use a (let ...) to give nested values temporary names, like:
    (setv result (let [first (/ (+ 1 3 88) 2)
                       second (* 8 (+ 3 2))]
                   (- first second)))
It helps to replace "first" and "second" with meaningful names like "velocity" or "force" that describe what the values actually are.
There are a few links in the post that link to our signup page, but it's tough to see with the dark background. Maybe it's time to change the background color of our blog...
My understanding of Hadoop and HDFS is that both input and output go to/from disk. Your MR jobs are spun up and pointed at HDFS URLs to read their input from, and they write back to HDFS when done. This means that when there is an error, the intermediate computations don't necessarily have to be redone. However, there is a trade-off.
Additionally, I believe that HDFS keeps 3 copies of the data around on 3 different nodes for redundancy. So there is the overhead of that network traffic.
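That replication factor is configurable per cluster (3 is the default), so you can trade redundancy against network and storage overhead. A minimal `hdfs-site.xml` fragment, for illustration:

```xml
<property>
  <name>dfs.replication</name>
  <value>3</value>
  <!-- Each HDFS block is copied to this many datanodes. -->
</property>
```

Lowering it to 1 eliminates the replication traffic but means a single node failure loses data.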
My monitor brightness is indeed set very low (2/100), but increasing this to 100/100 doesn't change anything (nor does fiddling with the contrast). Don't think it's _purely_ a hardware problem anyway, since a regular terminal doesn't show it; only a terminal in which I'm running over SSH. Thanks for the suggestion, though!