Infinite work is less work (perl.org)
161 points by lizmat 11 days ago | hide | past | web | favorite | 83 comments





I'm not sure I've wielded Perl in anger in a decade, but I still drop what I'm doing whenever Damian writes a new post about it... His writing style, and the sense of humour he intersperses through his technical mastery of Perl, is still wonderful to read, even though I reach for Python by default these days...

I discovered Damian Conway through Perligata while researching code translation into other lexical bases. I translated its documentation into Esperanto[1] for the sake of improving my vocabulary, and had some questions he kindly answered by email.

Ever since, seeing his name on my screen has been a delight, because I know I'm going to have a nice moment reading a bit more of his writing.

[1] https://beta.wikiversity.org/wiki/Lingua::Romana::Perligata


I had the pleasure of seeing him and Larry present _years_ ago on Perl 6's evolution. Their styles were polar opposites, but the camaraderie was fun to watch.

I often wonder where we'd be today if Perl 6 had got "finished" 10 years before it did...


Probably with more than a handful of Perl job opportunities nationwide.

Yes, The Damian's brilliance is one of the things I miss most since I stopped using Perl as my goto language.

There are a few other names I recall with similar influence on me from way back in the "Perl 4 is the only reliable way to make websites dynamic" days: Randal Schwartz, Toms Christiansen and Phoenix, Tad McLelland(sp?), Simon Cozens, brian d foy - without hints/examples/advice in comp.lang.perl.misc from them (and others) I 100% wouldn't have ended up where I am today (whether that's a good thing or a bad thing is, perhaps, still up for argument...)

Same for his videos/talks.

I believe the fancy math word for this “building up” of a value from a “smaller” initial condition is “anamorphism.”

Most interview-style programming problems are catamorphisms, or “boiling down” an initial condition to a “smaller” result. (For example, summing the numbers in a string.)

In, say, JavaScript, knowing that you’re implementing a catamorphism usually (or always?) means that you can convert your input to an array and use the aptly-named reduce method to write a reducer that works on one piece at a time to build your result. (Sometimes your reducer needs a sophisticated memo object to keep track of details, like whether you’re in the middle of multiple digits.)
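For instance, that string-summing catamorphism can be sketched with `functools.reduce` in Python, using a (total, current_digits) pair as the memo (the function names here are my own, not from the thread):

```python
from functools import reduce

def sum_numbers(s):
    # Catamorphism: boil the string down to one number, one character
    # at a time. The accumulator (total, digits) is the "memo object"
    # that tracks whether we're in the middle of multiple digits.
    def step(acc, ch):
        total, digits = acc
        if ch.isdigit():
            return (total, digits + ch)
        return (total + int(digits or "0"), "")
    # A trailing space flushes any number still being accumulated.
    total, _ = reduce(step, s + " ", (0, ""))
    return total

print(sum_numbers("ab12cd3"))  # 15
```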

If you know you’ll be dealing with an anamorphism in JavaScript, your two options are a while loop (for an imperative flavor) or a generator (for a functional flavor).
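The same anamorphism pattern can be sketched as a generator in Python; `unfold` and the example are hypothetical names of my own, not from the thread:

```python
from itertools import islice

def unfold(step, seed):
    # Anamorphism: build up a (possibly infinite) stream from a seed.
    # step(state) must return a (value, next_state) pair.
    while True:
        value, seed = step(seed)
        yield value

# Example: the squares, "built up" from the smaller initial condition 0.
squares = unfold(lambda n: (n * n, n + 1), 0)
print(list(islice(squares, 5)))  # [0, 1, 4, 9, 16]
```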


Catamorphisms are more general than reducing an array; e.g., evaluating down a tree structure (or any other recursive data structure) is also a catamorphism.

True, but isn’t it also true that a catamorphism on any other data structure still operates one piece at a time? If so, I think this would imply that you _could_ convert your tree or whatever to an array and still get your result with Array#reduce. (Not saying you should, just speculating that it might be always possible.)

> True, but isn’t it also true that a catamorphism on any other data structure still operates one piece at a time? If so, I think this would imply that you _could_ convert your tree or whatever to an array and still get your result with Array#reduce.

No, the catamorphism operates on one part of the structure but that part of the structure is not necessarily (head, tail) like it is for reduce - rather it's the "base" shape of the recursive data structure.

E.g. it's easy to write a catamorphism on a tree that finds the maximum number of children any one node has. But if you flatten the tree into an array then it's obviously impossible to recover that information.
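A minimal Python sketch of that point, with a node encoded as a (value, children) pair (my own encoding, not from the thread):

```python
def max_children(node):
    # Catamorphism over a tree: each recursive position is folded
    # before the results are combined -- the "base shape" here is a
    # node plus a list of already-reduced child results, not (head, tail).
    value, children = node
    return max([len(children)] + [max_children(c) for c in children])

tree = ("a", [("b", []), ("c", [("d", []), ("e", []), ("f", [])])])
print(max_children(tree))  # 3 -- node "c" has three children

# Flattening to an array loses exactly this structural information:
flat = ["a", "b", "c", "d", "e", "f"]  # no way to recover child counts
```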


Indeed. Reduce is a catamorphism, but not all catamorphisms are reduce. On the other hand, it's generally underappreciated how useful unfold (ana) and refold (hylo) are, even if we're just talking about lists.

Recursion schemes are the FP way of doing the Visitor Pattern. You can get annotations with (coelgot, zygo) which are super useful for typecheckers, allow the fold to accumulate effects (gana, gcata), or even look into the future of the recursion (futu).

They do get a bit much at times though, since you can compose them easily. Zygohistoprepromorphisms are one example of this.


    strong :: (Integral i) => [i] -> [i]
    strong = mapMaybe fst3 . scanr g (Nothing, 0, 0)
        where fst3 (x, _, _) = x
              g n (_, p, pp) = (p <$ guard (2 * p > pp + n), n, p)
Haskell makes a better Perl (especially if, like here, you don't bother with readability).

What is it with functional languages and people naming their variables with single letters? If I wrote Python code like that, I'd be laughed out of the room.

Everyone I know who programs with single letters comes from a graduate-level math background. There seems to be a kind of belief that elegance and compactness are desirable.

Of course it’s the polar opposite of self-documenting, legible, understandable, easy-to-maintain code.


Academic notation is much less concerned with self-documentation and much more concerned with brevity and correctness.

I'm glad we never needed to learn these forms in school:

horizontalDisplacement = -(linearCoefficient +- sqrt(linearCoefficient^2 - 4 x quadraticCoefficient x constantCoefficient)) / (2 x quadraticCoefficient)

triangle_hypotenuse_length^2 = triangle_vertical_length^2 + triangle_horizontal_length^2


>I'm glad we never needed to learn these forms in school:

I'm not. You vastly underestimate how much easier that is to read, especially for students in school.


I agree.

My first thought upon reading those was... my god, if that’s how I’d first learned it in 7th grade, it would have made so much more immediate, intuitive sense to me and the rest of the students!

We’re taught so many formulas where we don’t have the slightest intuitive idea of what the variables mean or the effect they have (unless we’re naturally really intuitively good at math, which only a very small proportion of students are).

But if they said it right in their name... that seriously would have been a game-changer for a lot of students, I think. A gigantic help for understanding how to connect formulas to practical word problems.


Because many of them, especially Haskell, have their roots in Mathematics which are famous for one letter variable names.

Because the syntax favours terseness for readability's sake (to see structure and patterns rather than reading a description of the variable).

Because Python specifically favours verbosity and has many conventions around using long names, underscores instead of camelCase, and the like.


Fine, but that doesn't address the point GP is making. This is a subject near and dear to my heart, since I am constantly frustrated by the opaque nature of functional programming tutorials on the internet. Functional composition is very difficult for beginners to pick up, and using single-letter variable names for everything significantly compounds that problem.

There are many concepts in FP that can elevate the craft of software design, and it's a tragedy that a lack of approachable instructional materials convinces most programmers that either they are not smart enough to learn it, or that it's ivory-tower nonsense. Ease of readability is a virtue in any programming environment, and I personally feel that the FP community as a whole should make lowering the barriers to entry a top priority.

I'm grateful to all of the people working hard to bring functional programming to the masses, but I have a sneaking suspicion that many functional programming practitioners like that it looks complex and difficult to outsiders. Just my two cents.

Just because I can't resist writing little math things in Haskell:

    primes = 2 : filter isPrime [3..]

    isPrime n =
      let divisors = takeWhile (\d -> d * d <= n) primes
      in not $ any (\d -> n `mod` d == 0) divisors

    strongPrimes =
      let primeTriples = zip3 primes (drop 1 primes) (drop 2 primes)
      in [b | (a, b, c) <- primeTriples, b*2 > a + c]

Proof that you can write something elegant in Haskell and use comprehensible variable names.

I like the zip3, I should have thought of that. I think I prefer the mapMaybe, but then I avoid list comprehensions.

Wow, Haskell is just as opaque as ever to me. I wonder if I'll ever be able to comprehend this language.

For what it's worth, I've worked as a professional Haskell programmer at multiple companies and have been writing Haskell for more than a decade, and I still have trouble reading code written in that style. I usually end up decomposing it at a REPL so I can look at the types of intermediate parts of the program.

(In practice, most—although not all, admittedly—of the Haskell codebases I've worked with tended to be a lot less dense than the occasional combinator-heavy sample code you see in blog posts and comments.)


In fairness to Haskell, I didn't exactly go for readability in the above example, and indeed someone's already added a more readable example. I'll break it down for the sake of completeness:

scanr is a reduce function that produces a list of the reduction steps rather than just the final output. So I'm processing the list of primes and punting out (Maybe i, i, i) at each stage. (I could have included the type signature for "g" but I was deliberately going for a Perl-like style.)

fst3 just takes the first element of a triple i.e. Maybe i. mapMaybe maps and throws away the "Nothing"s. Think of those as nulls.

So, returning to g, it returns a (Maybe i, i, i). The Maybe i is the prime if it's strong, Nothing if it isn't. The other two are the previous two primes. So what g does is a) compute the first element and b) copy the new prime (n) into the first location and the previous prime (p) into the second location.

"<$ guard" is a bit of a Stupid Dwarf Trick in all honesty, but the effect is: the expression on the right is a boolean expression. If it returns true, the expression on the left is returned. If it returns false, "Nothing" is returned.

If you are interested, I highly recommend working through this: https://www.cis.upenn.edu/~cis194/spring13/lectures.html (and that means doing the exercises). Haskell's a funny language in that there's a fairly easy way of doing things that would read how you would expect, and a whole bunch of "oh, but this is more elegant" things you can add on top. Thankfully, some of the most cutting-edge features like DerivingVia are actually producing _more_ readable code, not less.


Have you tried working through a more structured learning process like Haskellbook? Once you get the basics, the types begin to add lots of clarity to what is going on in a function and make it easier and faster to comprehend complicated behavior.

Just go slowly, you can do it, just not as naturally as Ruby for example.

You can also swap out that fst3 for a lens! (^. _1) works for any size tuple.

TIL. But I was trying to stick to base...

Base is great but the most powerful libraries aren't included :)

Golf:

  strong :: (Integral i) => [i] -> [i]
  strong λ@(_:b:c) = view _2 <$> filter p (zip3 λ (b:c) c)
    where p (x, y, z) = 2 * y > x + z
Also, it hangs with scanr (scanl works) and includes 2 which isn't strong.

You could also express it as a query over a comonadic stream.

> There are an infinite number of primes and of weak primes (and possibly of strong primes too, though that's still only conjectured)

Given that a strong prime satisfies p[n] > (p[n-1] + p[n+1])/2, and that an infinite number of prime pairs with prime gap < 400 exist (proven results in the direction of the twin prime conjecture), and that the average prime gap is ln(p[n]), does it then not also follow that there is an infinite number of strong primes?


Yes, nice! Here's a more rigorous way of proving it:

A strong prime is precisely one for which the prime gap preceding it is strictly larger than the one following it. There are arbitrarily large prime gaps (for example none of the n numbers following n! are prime, since n!+m divides by m whenever m≤n). But it has recently been proven that there are infinitely many prime gaps with length less than or equal to 246. So given any strong prime we can find a prime gap after it which is longer than 246, and then a prime gap after that whose length is less than or equal to 246. Somewhere in between those two prime gaps there must be a prime at which the gap size decreases; a strong prime larger than our original strong prime. Hence there are infinitely many strong primes.


n!+1 can be prime, since being divisible by 1 doesn't rule out primality. None of the n-1 numbers after n!+1 are prime.

Good point! But n-1 still gets arbitrarily large so the proof still works.

I don't think that we need to know the average prime gap for this.

Assume that there are only a finite number of strong primes.

Then after some point for all triplets of consecutive primes p1, p2, p3, we must have p2-p1 <= p3-p2. In other words, the gap between consecutive primes must be non-decreasing after some point.

But since there are arbitrarily large runs of non-primes (e.g., [n!+2, n!+n] is a run of n-1 consecutive non-primes), non-decreasing gaps contradicts the theorem that there are an infinite number of gaps < 400.
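As a quick empirical sanity check of the arguments above (not a proof), the first few strong primes can be listed with a short Python sketch; the helper names are my own:

```python
from itertools import islice

def primes():
    # Infinite prime generator by trial division (fine for small checks).
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def strong_primes():
    # p[n] is strong when it exceeds the mean of its prime neighbours,
    # i.e. 2 * p[n] > p[n-1] + p[n+1].
    gen = primes()
    a, b, c = next(gen), next(gen), next(gen)
    while True:
        if 2 * b > a + c:
            yield b
        a, b, c = b, c, next(gen)

print(list(islice(strong_primes(), 6)))  # [11, 17, 29, 37, 41, 59]
```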


I've gone through most of the comments and it seems that some commenters are either

* referring to Perl 6 as Perl, or

* mistaking Perl 6 for Perl 5.

Although from the same family of programming languages, Perl (or Perl 5) and Perl 6 (or Raku[1]) are two completely different languages.

The article is about Perl 6 (or Raku), not Perl 5.

[1] https://docs.perl6.org/language/faq#What's_the_difference_be...?


I think this "different language" concept is in many ways either revisionist history or scope creep. Originally, Larry said he wanted to remove historical warts, clean up the language design, etc. I think the apocalyptic and exegetical snowball rolled faster than expected, and after the mushroom cloud dissipated and the dust cleared it was easier to say, "Oh, it's a completely different language."

As Grinnz notes, I also think it's both:

- revisionist history. For most intents and purposes, Perl 5 and Perl 6 are two different languages, and pretending otherwise helps nobody. In fact, it just creates misunderstandings and misplaced expectations. But why didn't they change the language's name a lot sooner? I honestly don't know. If the people involved in the project had acted sooner, the need to keep clarifying that Perl 5 and Perl 6 are actively developed, independent and different languages, even when the names suggest otherwise, would be a thing of the past. Nonetheless, even now that there's the alias Raku, the name Perl 6 is still the most used and probably will be for some time to come. Will the new name catch on?

- scope creep. As you state, Wall's goal was "to remove historical warts, clean up the language design, etc.", which he deemed "the community's rewrite of Perl and of the community". But as we all know, things changed along the way (e.g., untimely delivery) and Perl 6 turned out to be a totally different language from Perl 5, and from what many people envisioned as the replacement for Perl 5.


If they wanted it to be considered a separate language, they should have changed the name. Perl 5 is not a different language than Perl 4 or Perl 3. Changing the meaning of the version number at that point is just hostile to being understood.

Oh it's absolutely scope creep. But the end result is that it's a completely separate language, and should be treated as such.

It is both, but it is the only correct reflection of the current reality. Trying to pretend it's the same language will only end in (further) confusion and disappointment.

> ... revisionist history ...

It's not like Larry/Perl don't have previous form here...

Pathologically Eclectic Rubbish Lister...


Lazy, functional, and pretty: I had no idea Perl could be this nice! (Actually, I know next to zero Perl, but that's not the point…)

During the development of Perl 6, Haskell and similar languages weren't ignored. If I recall correctly, the first actual interpreter was written in Haskell (by Audrey Tang).

The way I see it, Perl initially (up to version 4) was mostly about composing various Unix tools and sub-languages (sed, awk, (k|c)*sh) -- and believe me, back in those days that was really needed, as every Unix version out there did it another way.

With Perl 5, some "programming in the large" concepts were added, although it was decided that no definite OO style was needed due to the malleability of the language (if you haven't had a look at Perl5 since CGI days, take a look at the Moose ecosystem these days).

And Perl6 branched out into more "esoteric" topics, which included functional programming. I really need to take a better look at it and it's great that it's now stable enough that writing interesting articles about it doesn't mean that you can't run the code a week later (or that it's horribly slow).

And Damian Conway is always worth a look. Can't wait for 'Perl6 Best Practices'…


> If I recall correctly, the first actual interpreter was written in Haskell (by Audrey Tang).

Yup!

https://en.wikipedia.org/wiki/Pugs_(programming)


Perl gets a bad rep, but a decent programmer should be able to write readable Perl.

I found Modern Perl [0] to be an absolutely brilliant exposition of readable Perl 5 and how great Perl can be. I'll be forever grateful to chromatic for writing and revising the book!

[0] http://modernperlbooks.com/


> readable Perl

I worked with Perl 5 intensively for three years and know the language in and out. These Perl 6 snippets are mostly foreign to me. It's like a non-Perler looking at Perl 5: mostly a mess of unexpected symbols with some recognizable keywords in between.


It's a bit like if a C programmer looks at a Schwartzian transform.

A good programmer will write readable code in any language, and a bad programmer will write unreadable code in any language. But most programmers are somewhere in the middle, so whether mediocre programmers end up writing readable code is what's actually important.

> A good programmer will write readable code in any language,

That's a fantasy, pure and simple. Taking the statement at face value, let us ignore the undefined boundaries of "good" or "readable" and focus on the practical issue: have you seen Brainfuck or Whitespace? Some languages are less readable than others, be it incidentally or by design. People are imperfect beings, and whatever "good" means, a "good" or a "bad" programmer will at times write code that belies their assessed skill.


Also, Raku is a different language with a very different design, so Perl's bad rep ideally wouldn't apply.

Perl 6 seems ultra-versatile, near Lisp levels, and can thus be molded into a concise, lazy, functional style.

That people still use this wacky ass language is a kind of charming miracle.

Not quite so miraculous as Java maintaining its lead position in the software industry. At least Perl has always been able to handle a regex without adding backslashes to everything.

Care to elaborate what is 'wacky ass' in Perl 6?

Perl 6 is pretty "wacky ass" throughout. The crazy blending of object orientation and functional programming, the twigils, the built-in support for PEGs, the scoping rules, the operators, lots of things.

I like it, though. Perl 6 is a fun language in which to program, and it's therefore one I like trying to use, even if said use is less than practical.


The article is about Perl6, not Perl5. The history may be old but the language is a complete rewrite from scratch.

This is a good point. Whilst it might be just about defensible to describe Perl 5 as a wacky ass language that people still use, Perl 6 is different - it's a wacky ass language that nobody uses yet.

I think infinity can be handled a lot more elegantly in Python.

Here is my response with the same problem solved in Python.

https://anandology.com/blog/python-is-infintely-beautiful/


You can replace your `integers` function with `itertools.count` (same signature but defaults to starting at 0), and `take` with `itertools.islice`.

Didn't realize I could use `itertools.count`. In fact, it accepts a `start` parameter.

The `take` function is borrowed from Haskell. It feels a lot more natural to say/read `take(10, primes())` than `islice(primes(), 10)`.
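For reference, a minimal sketch of both spellings (the `primes` generator here is my own stand-in for the blog post's):

```python
from itertools import count, islice

def primes():
    # Infinite lazy prime generator; count(2) plays the role of an
    # integers(start=2) helper.
    for n in count(2):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n

def take(n, iterable):
    # Haskell-style take: a thin wrapper over islice.
    return list(islice(iterable, n))

print(take(10, primes()))          # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(list(islice(primes(), 10)))  # same result, islice spelling
```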


perl6 -e 'my $integers := gather for 0..Inf { take $_ }; say $integers[2]'

There's no memoization in that approach however.

> However, as we stopped generating primes at 100, the &strong subroutine would have received an undefined value when it requested that n+1th prime. When used as a number, that undefined value would convert to zero, so the average of 97's two prime neighbours would be computed as (89+0)/2

I think I prefer Python's behaviour here, of just raising an IndexError, and not permitting None to be coerced to a number.
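For instance, CPython refuses the coercion outright (a small illustration of my own, mirroring the article's (89+0)/2 case):

```python
# In Python, None in numeric context raises rather than becoming 0:
try:
    average = (89 + None) / 2
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'int' and 'NoneType'
```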


Well Damian's program actually reports "Use of uninitialized value of type Any in numeric context" which the programmer can choose to be a warning or an exception.

Yeah, a lot of languages don't fall for the "undefined should silently give valid results" thing.

I am not familiar with Perl 6, nor with list comprehensions, but I am wondering if this is really more readable than the 'traditional' way of looping through the prime numbers and appending to the list, like this:

  # p(n) is assumed to return the nth prime (1-indexed)
  strongs = []
  weaks = []

  asked_numbers = 10
  n = 2  # p(1) = 2 has no left neighbour, so start at n = 2
  while len(strongs) < asked_numbers:
      mean = (p(n - 1) + p(n + 1)) / 2
      if p(n) > mean:
          strongs.append(p(n))
      elif p(n) < mean and len(weaks) < asked_numbers:
          weaks.append(p(n))
      n += 1
  print(strongs)
  print(weaks)

That does lots of redundant prime checks, and presumably strong primes are rarer than weak primes, which is _implied_ in the article, but reading your code nobody would know that (I'm not even sure I know it, really).

I am not familiar with Perl at all, though I am with list comprehensions, and FWIW I find the code very readable. (Though the lambda syntax was not obvious at first.)

What I would worry about is performance. But we are talking about top ten numbers, so whatever. (If it does actually matter, just precompute the lists.) But for larger sets I would check how is-prime is actually implemented :)

[EDIT: Grammar.]


I was a Perl 5 developer for over a decade and didn't recognize this as valid Perl.

This is not valid Perl; it's Perl 6.

I understood it was about Perl 6, but I thought the article was starting off with some pseudo-code or some other language.

So variables in Perl 6 can be sigil-less...by prefixing a backslash like my \p = [];

Umm... backslash is a sigil. Why do we need to add this?


You only need the sigil during definition. So, for example (in the Perl 6 REPL):

    > my \x = 1
    1
    > my \y = 2
    2
    > say "Look Ma, no punctuation!" if x + y == 3
    Look Ma, no punctuation!
There are some implications of this; namely: x and y are now immutable. Back in the REPL:

    > my \x = 1
    1
    > x = 2
    Cannot modify an immutable Int (1)
      in block <unit> at <unknown file> line 1
This is because in Perl 6 the sigil implies a container for the value represented by that variable; since you don't have a sigil, you don't have a container. That is: the names "x" and "y" are now literal synonyms for "1" and "2" (respectively). "x = 2" fails here for the same exact reason (and with the same exact error message) you'd get if you tried to do "1 = 2".

Sweet explanation ;-)!

For those interested on learning more about containers in Raku, lizmat wrote a nice article about them [1]. There's also this documentation page [2] that gives a detailed overview of them, where you can find more about sigilless variables.

[1] https://opensource.com/article/18/8/containers-perl-6

[2] https://docs.perl6.org/language/containers


See the examples. The backslash appears only in the function signature, but not in the function itself. You have to declare that you are using sigil-less variables somehow!

Using "infinite" when you mean "lazily generated" is a large source of confusion for learners of math, computer science, and programming.

Infinite things don't exist in computing, and infinite things in math are not the same as lazily-generated things. Conflating terminology makes the truth harder to understand. People prove statements about infinite objects and then misapply them to lazily-generated objects.


The infinity symbol is part of the code. This is about "the first N elements of an infinite sequence".

Explaining laziness without infinite lists makes laziness harder to understand.



