Seeing his name on my screen is always a delight, because I know I'm going to have a nice moment reading a bit more of his writing.
I often wonder where we'd be today if Perl 6 had got "finished" 10 years before it did...
Most interview-style programming problems are catamorphisms: “boiling down” an initial structure to a “smaller” result. (For example, summing the numbers in a string.)
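That "summing the numbers in a string" shape is just a fold; a minimal Python sketch (my own illustration, not from the parent comment):

```python
from functools import reduce

def sum_numbers(s):
    # "Boil down" the tokens of the string into one number.
    return reduce(lambda acc, tok: acc + int(tok), s.split(), 0)

print(sum_numbers("1 2 3 4"))  # 10
```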
No, the catamorphism operates on one part of the structure but that part of the structure is not necessarily (head, tail) like it is for reduce - rather it's the "base" shape of the recursive data structure.
E.g. it's easy to write a catamorphism on a tree that finds the maximum number of children any one node has. But if you flatten the tree into an array then it's obviously impossible to recover that information.
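To make that concrete, here's a hedged sketch in Python, assuming a tree is represented simply as a list of child subtrees:

```python
def max_children(tree):
    # The fold consumes the tree's "base shape": a node plus the
    # already-computed results for each of its children.
    return max([len(tree)] + [max_children(child) for child in tree])

# A root with 3 children; the middle child itself has 4 children.
t = [[], [[], [], [], []], []]
print(max_children(t))  # 4
```

Flatten t into a plain array and that 4 is unrecoverable, which is the parent's point.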
They do get a bit much at times though, since you can compose them easily. Zygohistoprepromorphisms are one example of this.
import Control.Monad (guard)
import Data.Maybe (mapMaybe)

strong :: (Integral i) => [i] -> [i]
strong = mapMaybe fst3 . scanr g (Nothing, 0, 0)
  where fst3 (x, _, _) = x
        g n (_, p, pp) = (p <$ guard (2 * p > pp + n), n, p)
Of course it’s the polar opposite of self-documenting, legible, understandable, easy-to-maintain code.
I'm glad we never needed to learn these forms in school:
horizontalDisplacement = -(linearCoefficient ± sqrt(linearCoefficient^2 - 4 x quadraticCoefficient x constantCoefficient)) / (2 x quadraticCoefficient)
triangle_hypotenuse_length^2 = triangle_vertical_length^2 + triangle_horizontal_length^2
I'm not. You vastly underestimate how much easier that is to read, especially for students in school.
My first thought upon reading those was... my god, if that’s how I’d first learned it in 7th grade, it would have made so much more immediate, intuitive sense to me and the rest of the students!
We’re taught so many formulas where we don’t have the slightest intuitive idea of what the variables mean or the effect they have (unless we’re naturally really intuitively good at math, which only a very small proportion of students are).
But if they said it right in their name... that seriously would have been a game-changer for a lot of students, I think. A gigantic help for understanding how to connect formulas to practical word problems.
Because the syntax favours terseness for readability's sake (so you see structure and patterns rather than reading a description of each variable).
Because Python specifically favours verbosity and has many conventions around using long names, underscores instead of camelCase, and the like.
primes = 2 : filter isPrime [3..]

isPrime n =
  let divisors = takeWhile (\d -> d * d <= n) primes
  in  not $ any (\d -> n `mod` d == 0) divisors

let primeTriples = zip3 primes (drop 1 primes) (drop 2 primes)
 in [b | (a, b, c) <- primeTriples, b*2 > a + c]
(In practice, most—although not all, admittedly—of the Haskell codebases I've worked with tended to be a lot less dense than the occasional combinator-heavy sample code you see in blog posts and comments.)
scanr is a reduce function that produces a list of the reduction steps rather than just the final output. So I'm processing the list of primes and punting out a (Maybe i, i, i) at each stage. (I could have included the type signature for "g", but I was deliberately going for a Perl-like style.)
fst3 just takes the first element of a triple i.e. Maybe i. mapMaybe maps and throws away the "Nothing"s. Think of those as nulls.
So, returning to g, it returns a (Maybe i, i, i). The Maybe i is the prime if it's strong, Nothing if it isn't. The other two are the previous two primes. So what g does is a) compute the first element and b) copy the new prime (n) into the first location and the previous prime (p) into the second location.
"<$ guard" is a bit of a Stupid Dwarf Trick in all honesty, but the effect is: the expression on the right is a boolean expression. If it returns true, the expression on the left is returned. If it returns false, "Nothing" is returned.
If you are interested, I highly recommend working through this: https://www.cis.upenn.edu/~cis194/spring13/lectures.html
(and that means doing the exercises). Haskell's a funny language in that there's a fairly easy way of doing things that reads how you would expect, and a whole bunch of "oh, but this is more elegant" things you can add on top. Thankfully, some of the most cutting-edge features like DerivingVia actually produce _more_ readable code, not less.
import Control.Lens (view, _2)

strong :: (Integral i) => [i] -> [i]
strong l@(_:b:c) = view _2 <$> filter p (zip3 l (b:c) c)
  where p (x, y, z) = 2 * y > x + z
Given that a strong prime satisfies p[n] > (p[n-1] + p[n+1])/2, and that an infinite number of prime pairs with prime gap < 400 exist (from work toward the twin prime conjecture), and that the average prime gap is ln(p[n]), does it then not also follow that there is an infinite number of strong primes?
A strong prime is precisely one for which the prime gap preceding it is strictly larger than the one following it. There are arbitrarily large prime gaps (for example, none of the n−1 numbers n!+2, …, n!+n is prime, since n!+m is divisible by m whenever 2 ≤ m ≤ n). But it has recently been proven that there are infinitely many prime gaps of length at most 246. So given any strong prime we can find a prime gap after it which is longer than 246, and then a prime gap after that whose length is at most 246. Somewhere in between those two gaps there must be a prime at which the gap size decreases: a strong prime larger than our original one. Hence there are infinitely many strong primes.
Assume that there are only a finite number of strong primes.
Then after some point for all triplets of consecutive primes p1, p2, p3, we must have p2-p1 <= p3-p2. In other words, the gap between consecutive primes must be non-decreasing after some point.
But since there are arbitrarily large runs of non-primes (e.g., [n!+2, n!+n] is a run of n-1 consecutive non-primes), non-decreasing gaps would contradict the theorem that there are infinitely many gaps < 400.
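The "somewhere the gap must decrease" step can be checked numerically; a small sketch of my own, with a hard-coded prime list:

```python
def next_strong_after(ps, i):
    # Scan consecutive gaps; the first prime whose preceding gap
    # strictly exceeds its following gap is, by definition, strong.
    for n in range(i + 1, len(ps) - 1):
        if ps[n] - ps[n - 1] > ps[n + 1] - ps[n]:
            return ps[n]
    return None

ps = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
print(next_strong_after(ps, 0))  # 11
```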
* referring to Perl 6 as Perl, or
* mistaking Perl 6 for Perl 5.
Although from the same family of programming languages, Perl (or Perl 5) and
Perl 6 (or Raku) are two completely different languages.
The article is about Perl 6 (or Raku), not Perl 5.
- revisionist history. For most intents and purposes, Perl 5 and Perl 6
are two different languages, and pretending otherwise helps nobody. In fact,
it just creates misunderstandings and misplaced expectations. But why didn't
they change the language's name a lot sooner? I honestly don't know. If many
of the people involved in the project had acted sooner, things
such as clarifying that Perl 5 and Perl 6 are actively developed, independent and
different languages, even when the names suggest otherwise,
would be things of the past. Nonetheless, even now that the alias Raku exists,
the name Perl 6 is the most used and probably will be for some time to come. Will Raku catch on?
- scope creep. As you state, Wall's goal was "to remove historical warts,
clean up the language design, etc.", which he deemed "the community's
rewrite of Perl and of the community." But as we all know, things changed
along the way (e.g., untimely delivery) and Perl 6 turned out to be a totally
different language from Perl 5, and from what many people envisioned as the
replacement for Perl 5.
It's not like Larry/Perl don't have previous form here...
Pathologically Eclectic Rubbish Lister...
The way I see it, Perl initially (up to version 4) was mostly about composing various Unix tools and sub-languages (sed, awk, (k|c)*sh) -- and believe me, back in those days that was really needed, as every Unix version out there did it another way.
With Perl 5, some "programming in the large" concepts were added, although it was decided that no definite OO style was needed due to the malleability of the language (if you haven't had a look at Perl5 since CGI days, take a look at the Moose ecosystem these days).
And Perl6 branched out into more "esoteric" topics, which included functional programming. I really need to take a better look at it and it's great that it's now stable enough that writing interesting articles about it doesn't mean that you can't run the code a week later (or that it's horribly slow).
And Damian Conway is always worth a look. Can't wait for 'Perl6 Best Practices'…
I worked with Perl 5 intensively for three years and know the language inside and out. These Perl 6 snippets are mostly foreign to me. It's like how a non-Perler looks at Perl 5: mostly a mess of unexpected symbols with some recognizable keywords in between.
That's a fantasy, pure and simple. Taking the statement on its face, let us ignore the undefined boundaries of "good" or "readable" and focus on the practical issue. Have you seen Brainfuck or Whitespace? Many languages are less readable than others, be it incidentally or by design. People are imperfect beings and for whatever "good" means, a "good" or "bad" programmer will write code that is against their assessed skill at times.
I like it, though. Perl 6 is a fun language in which to program, and it's therefore one I like trying to use, even if said use is less than practical.
Here is my response with the same problem solved in Python.
The `take` function is borrowed from Haskell. It feels a lot more natural to say/read `take(10, primes())` than `islice(primes(), 10)`.
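The wrapper is a one-liner; here's what I take the comment to mean (my own sketch):

```python
from itertools import islice

def take(n, iterable):
    # Haskell-style take: the first n items, realised as a list.
    return list(islice(iterable, n))

print(take(5, (x * x for x in range(100))))  # [0, 1, 4, 9, 16]
```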
I think I prefer Python's behaviour here, of just raising an IndexError, and not permitting None to be coerced into a number.
n = 1
while len(strongs) < AskedNumbers or len(weaks) < AskedNumbers:
    if p(n) * 2 > (p(n - 1) + p(n + 1)):
        if len(strongs) < AskedNumbers:
            strongs.append(p(n))
    elif p(n) * 2 < (p(n - 1) + p(n + 1)):
        if len(weaks) < AskedNumbers:
            weaks.append(p(n))
    n += 1
What I would worry about is performance. But we are talking about top ten numbers, so whatever. (If it does actually matter, just precompute the lists.) But for larger sets I would check how is-prime is actually implemented :)
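For what it's worth, the "precompute the lists" version is tiny; a hedged sketch reusing the snippet's names (AskedNumbers, strongs, weaks), with a naive trial-division primality test:

```python
def is_prime(n):
    # Naive trial division; fine at this scale, as noted above.
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Precompute one contiguous run of primes, then classify the triples.
prime_list = [n for n in range(2, 200) if is_prime(n)]

AskedNumbers = 10
strongs, weaks = [], []
for a, b, c in zip(prime_list, prime_list[1:], prime_list[2:]):
    if 2 * b > a + c and len(strongs) < AskedNumbers:
        strongs.append(b)   # gap before b is larger: strong
    elif 2 * b < a + c and len(weaks) < AskedNumbers:
        weaks.append(b)     # gap after b is larger: weak
    # 2*b == a + c (e.g. b = 5 or 53): balanced, in neither list

print(strongs)  # [11, 17, 29, 37, 41, 59, 67, 71, 79, 97]
print(weaks)    # [3, 7, 13, 19, 23, 31, 43, 47, 61, 73]
```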
Umm... backslash is a sigil. Why do we need to add this?
> my \x = 1
> my \y = 2
> say "Look Ma, no punctuation!" if x + y == 3
Look Ma, no punctuation!
> my \x = 1
> x = 2
Cannot modify an immutable Int (1)
in block <unit> at <unknown file> line 1
For those interested in learning more about containers in Raku, lizmat wrote a nice article about them. There's also a documentation page that gives a detailed overview of them, where you can find more about sigilless variables.
Infinite things don't exist in computing, and infinite things in math are not the same as lazily-generated things. Conflating terminology makes the truth harder to understand. People prove statements about infinite objects and then misapply them to lazily-generated objects.
Explaining laziness without infinite lists makes laziness harder to understand.
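For concreteness, a lazily-generated "infinite" list in Python (my own sketch): the generator is a finite program describing an unbounded stream, and only the demanded prefix ever exists.

```python
from itertools import count, islice

def primes():
    # A lazily-generated, unbounded (not "infinite") stream of primes.
    for n in count(2):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n

# Forcing a prefix: only these ten primes are ever computed.
print(list(islice(primes(), 10)))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```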